public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-12 22:29 Mike Pagano
From: Mike Pagano @ 2024-01-12 22:29 UTC (permalink / raw)
  To: gentoo-commits

commit:     20a1a171771baa15abd085ae7edf2861aa258bd6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 12 22:28:44 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 12 22:28:44 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=20a1a171

Add BMQ(BitMap Queue) Scheduler, use=experimental

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                 |     4 +
 5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch | 11240 ++++++++++++++++++++++++++
 2 files changed, 11244 insertions(+)

diff --git a/0000_README b/0000_README
index c4cff3f2..e23280d4 100644
--- a/0000_README
+++ b/0000_README
@@ -86,3 +86,7 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch
+From:   https://gitlab.com/alfredchen/projectc
+Desc:   BMQ (BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch
new file mode 100644
index 00000000..d0ff29ff
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch
@@ -0,0 +1,11240 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 65731b060e3f..56f8c9453e24 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5714,6 +5714,12 @@
+ 	sa1100ir	[NET]
+ 			See drivers/net/irda/sa1100_ir.c.
+ 
++	sched_timeslice=
++			[KNL] Time slice in ms for Project C BMQ/PDS scheduler.
++			Format: integer 2, 4
++			Default: 4
++			See Documentation/scheduler/sched-BMQ.txt
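++			Example: sched_timeslice=2 selects a 2 ms slice.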
++
+ 	sched_verbose	[KNL] Enables verbose scheduler debug messages.
+ 
+ 	schedstats=	[KNL,X86] Enable or disable scheduled statistics.
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 6584a1f9bfe3..e332d9eff0d4 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1646,3 +1646,13 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield a call
++to sched_yield() will perform.
++
++  0 - No yield.
++  1 - Deboost and requeue task. (default)
++  2 - Set run queue skip task.
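++
++For example, ``sysctl kernel.yield_type=0`` turns sched_yield() calls
++into no-ops.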
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
++evolution of the earlier Priority and Deadline based Skiplist multiple
++queue scheduler (PDS), and is inspired by the Zircon scheduler. Its goal
++is to keep the scheduler code simple while staying efficient and scalable
++for interactive workloads such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue and is responsible for scheduling the tasks placed into that run
++queue.
++
++The run queue is a set of priority queues. Structurally, these queues are
++FIFO queues for non-rt tasks and a priority queue for rt tasks; see
++BitMap Queue below for details. BMQ is optimized for non-rt tasks, since
++most applications are non-rt tasks. Whether a queue is FIFO or priority
++ordered, each queue is an ordered list of runnable tasks awaiting
++execution, and the data structures are the same. When it is time for a
++new task to run, the scheduler simply looks for the lowest numbered queue
++that contains a task and runs the first task from the head of that queue.
++The per-CPU idle task is also kept in the run queue, so the scheduler can
++always find a task to run.
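++
++As an illustrative sketch (simplified from sched_rq_first_task() in
++kernel/sched/alt_core.c, which also maps priorities to queue indexes),
++picking the next task is a bitmap scan plus taking the head of the
++matching list:
++
++	idx  = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	next = list_first_entry(&rq->queue.heads[idx],
++				struct task_struct, sq_node);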
++
++Each task is assigned the same timeslice (default 4 ms) when it is picked
++to start running. A task is reinserted at the end of the appropriate
++priority queue when it uses up its whole timeslice. When the scheduler
++selects a new task from the priority queue, it sets the CPU's preemption
++timer for the remainder of the previous timeslice. When that timer fires,
++the scheduler stops execution of that task, selects another task and
++starts over again.
++
++If a task blocks waiting for a shared resource, it is taken out of its
++priority queue and placed in a wait queue for that shared resource. When
++it is unblocked, it is reinserted into the appropriate priority queue of
++an eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task
++policies, like the mainline CFS scheduler, but it is heavily optimized
++for non-rt tasks, that is, tasks with the NORMAL/BATCH/IDLE policies.
++Below are the implementation details for each policy.
++
++DEADLINE
++	It is squashed into a priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in the BMQ run queue
++design. The complexity of the insert operation is O(n). BMQ is not
++designed for systems that mainly run rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for
++CPU with NORMAL policy tasks, but they just don't get boosted. To control
++the priority of NORMAL/BATCH/IDLE tasks, simply use nice levels.
++
++ISO
++	The ISO policy is not supported in BMQ. Please use a nice level -20
++NORMAL policy task instead.
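++
++For example (illustrative), a sufficiently privileged task can place
++itself at nice level -20 via the standard setpriority(2) interface:
++
++	setpriority(PRIO_PROCESS, 0, -20);	/* needs CAP_SYS_NICE */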
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0 to 99. For non-rt tasks, two different
++factors are used to determine the effective priority of a task; the
++effective priority determines which queue the task will be placed in.
++
++The first factor is simply the task's static priority, which is derived
++from the task's nice level: [-20, 19] from userland's point of view and
++[0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] that is used to offset the base
++priority; it is modified in the following cases:
++
++*When a thread has used up its entire timeslice, its boost is always
++deboosted by increasing it by one.
++*When a thread gives up cpu control (voluntarily or involuntarily) to
++reschedule, and its switch-in time (the time since it last switched in
++and ran) is below the threshold derived from its priority boost, its
++boost is boosted by decreasing it by one, but it is capped at 0 (it
++won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
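++
++Putting the two factors together, the queue a non-rt task lands in is
++roughly (illustrative, ignoring clamping and queue index mapping):
++
++	effective priority = static priority + priority boost
++
++so a nice 0 task (internal static priority 20) that has been deboosted
++twice competes at the same level as an unboosted nice +2 task.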
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index dd31e3b6bf77..12d1248cb4df 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -480,7 +480,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 292c31697248..f5b026795dc6 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -769,8 +769,14 @@ struct task_struct {
+ 	unsigned int			ptrace;
+ 
+ #ifdef CONFIG_SMP
+-	int				on_cpu;
+ 	struct __call_single_node	wake_entry;
++#endif
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
++	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -784,6 +790,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -792,6 +799,20 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	int				sq_idx;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -802,6 +823,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -1561,6 +1583,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ static inline struct pid *task_pid(struct task_struct *task)
+ {
+ 	return task->thread_pid;
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index df3aca89d4f5..1df1f7635188 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
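++/* Pack prio into the top byte so that a single u64 comparison orders
++ * waiters first by prio, then by deadline (see rt_waiter_node_less()). */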
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..a9a1dfa99140 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,32 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(12)
++
++#define MIN_NORMAL_PRIO		(MAX_RT_PRIO)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH)
++#define DEFAULT_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH / 2)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - NICE_WIDTH / 2)
++#endif
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index b2b9e6eb9683..09bd4d8758b2 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index de545ba85218..941bb18ff72c 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -238,7 +238,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index 9ffb103fc927..8f0b7eeff77e 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -629,6 +629,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -794,6 +795,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -840,6 +842,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -893,6 +924,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -990,6 +1022,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -1012,6 +1045,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config SCHED_MM_CID
+@@ -1260,6 +1294,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index 5727d42149c3..e2e2622d50d5 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -75,9 +75,15 @@ struct task_struct init_task
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -89,6 +95,17 @@ struct task_struct init_task
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++	.sq_idx		= 15,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -96,6 +113,7 @@ struct task_struct init_task
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index c2f1fd95a821..41654679b1b2 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 615daaf87f1f..16fb54ec732c 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -848,7 +848,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1247,7 +1247,7 @@ static void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3206,12 +3206,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 				goto out_unlock;
+ 		}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		if (dl_task(task)) {
+ 			cs->nr_migrate_dl_tasks++;
+ 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ 		}
++#endif
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (!cs->nr_migrate_dl_tasks)
+ 		goto out_success;
+ 
+@@ -3232,6 +3235,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 	}
+ 
+ out_success:
++#endif
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -3255,12 +3259,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ 	if (!cs->attach_in_progress)
+ 		wake_up(&cpuset_attach_wq);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (cs->nr_migrate_dl_tasks) {
+ 		int cpu = cpumask_any(cs->effective_cpus);
+ 
+ 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ 		reset_migrate_dl_data(cs);
+ 	}
++#endif
+ 
+ 	mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index 6f0c358e73d8..8111481ce8b1 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -150,7 +150,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index aedc0832c9f4..ff8bf6cddc34 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -174,7 +174,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -195,7 +195,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 4a10e8c16fd2..cfbbdd64b851 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -362,7 +362,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+ 
+ 	waiter->tree.prio = __waiter_prio(task);
+-	waiter->tree.deadline = task->dl.deadline;
++	waiter->tree.deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+@@ -383,16 +383,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+  * Only use with rt_waiter_node_{less,equal}()
+  */
+ #define task_to_waiter_node(p)	\
+-	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p)	\
+ 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+ 
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 					       struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -401,16 +405,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 						 struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -419,8 +429,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..b0c4ade2788c
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,8715 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.7-r0"
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/* Default time slice is 4 ms; it can be set via the kernel parameter "sched_timeslice" */
++u64 sched_timeslice_ns __read_mostly = (4 << 20);
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++struct affinity_context {
++	const struct cpumask *new_mask;
++	struct cpumask *user_mask;
++	unsigned int flags;
++};
++
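++/*
++ * Parse the "sched_timeslice=" early parameter.  Only 2 (ms) is accepted
++ * as an alternative; any other value falls back to the 4 ms default.  The
++ * ms value is converted with a shift, so 1 ms is approximated as 2^20 ns.
++ */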
++static int __init sched_timeslice(char *str)
++{
++	int timeslice_ms;
++
++	get_option(&str, &timeslice_ms);
++	if (2 != timeslice_ms)
++		timeslice_ms = 4;
++	sched_timeslice_ns = timeslice_ms << 20;
++	sched_timeslice_imp(timeslice_ms);
++
++	return 0;
++}
++early_param("sched_timeslice", sched_timeslice);
++
++/* Reschedule if less than this many ns (~100 μs) are left */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - Choose what sort of yield sched_yield will perform.
++ * 0: No yield.
++ * 1: Deboost and requeue task. (default)
++ * 2: Set rq skip task.
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
++#endif
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS] ____cacheline_aligned_in_smp;
++static cpumask_t *const sched_idle_mask = &sched_preempt_mask[0];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init the idle task and put it into the queue structure of the rq.
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	idle->sq_idx = IDLE_TASK_SCHED_PRIO;
++	INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
++	list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
++}
++
++static inline void
++clear_recorded_preempt_mask(int pr, int low, int high, int cpu)
++{
++	if (low < pr && pr <= high)
++		cpumask_clear_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
++}
++
++static inline void
++set_recorded_preempt_mask(int pr, int low, int high, int cpu)
++{
++	if (low < pr && pr <= high)
++		cpumask_set_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
++}
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	unsigned long prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	unsigned long last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++#ifdef CONFIG_SCHED_SMT
++			if (static_branch_likely(&sched_smt_present))
++				cpumask_andnot(&sched_sg_idle_mask,
++					       &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++			cpumask_clear_cpu(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		clear_recorded_preempt_mask(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++#ifdef CONFIG_SCHED_SMT
++		if (static_branch_likely(&sched_smt_present) &&
++		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
++			cpumask_or(&sched_sg_idle_mask,
++				   &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++		cpumask_set_cpu(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	set_recorded_preempt_mask(pr, last_prio, prio, cpu);
++}
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_prio2idx(rq->prio, rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *
++sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	unsigned long idx = p->sq_idx;
++	struct list_head *head = &rq->queue.heads[idx];
++
++	if (list_is_last(&p->sq_node, head)) {
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
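++/*
++ * Return the first runnable task, passing over rq->skip (set when
++ * sched_yield() runs with sched_yield_type == 2, see above).
++ */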
++static inline struct task_struct *rq_runnable_task(struct rq *rq)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (unlikely(next == rq->skip))
++		next = sched_rq_next_task(next, rq);
++
++	return next;
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
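++ *
++ * Returns the rq @p is on; *plock is set to the rq lock taken while @p
++ * is running or queued, or to NULL (no lock taken) when @p is fully
++ * blocked.  Spins while @p is migrating between run queues.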
++ */
++static inline struct rq
++*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void
++__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq
++*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
++			  unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq &&
++				   rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
++			      unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void
++rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void
++rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++		    rq_lock_irqsave(_T->lock, &_T->rf),
++		    rq_unlock_irqrestore(_T->lock, &_T->rf),
++		    struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight miss-attribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++	delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	sched_update_rq_clock(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
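++/*
++ * Record, one bit per time block, whether this rq had runnable tasks:
++ * the most significant bit covers the current block and older blocks
++ * shift toward the least significant bit.  rq_load_util() below folds
++ * the most recent bits of this history into a utilization value.
++ */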
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp),
++			RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
++						  cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)				\
++	sched_info_dequeue(rq, p);						\
++										\
++	list_del(&p->sq_node);							\
++	if (list_empty(&rq->queue.heads[p->sq_idx])) { 				\
++		clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);	\
++		func;								\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags)				\
++	sched_info_enqueue(rq, p);					\
++									\
++	p->sq_idx = task_sched_prio_idx(p, rq);				\
++	list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]);	\
++	set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags);
++	update_sched_preempt_mask(rq);
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++
++	list_del(&p->sq_node);
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
++	if (idx != p->sq_idx) {
++		if (list_empty(&rq->queue.heads[p->sq_idx]))
++			clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		p->sq_idx = idx;
++		set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		update_sched_preempt_mask(rq);
++	}
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	do {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++void nohz_balance_enter_idle(int cpu) {}
++
++void select_nohz_load_balancer(int stop_tick) {}
++
++void set_cpu_sd_state_idle(void) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be uptodate wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
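++/*
++ * Return 1 if @state intersects p->__state, -1 if it only intersects
++ * p->saved_state (the task is blocked on an rtlock or frozen), and 0
++ * otherwise.
++ */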
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++	if (READ_ONCE(p->__state) & state)
++		return 1;
++
++	if (READ_ONCE(p->saved_state) & state)
++		return -1;
++
++	return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++	/*
++	 * Serialize against current_save_and_set_rtlock_wait_state(),
++	 * current_restore_rtlock_saved_state(), and __refrigerator().
++	 */
++	guard(raw_spinlock_irq)(&p->pi_lock);
++
++	return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	int running, queued, match;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p)) {
++			if (!task_state_match(p, match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		queued = p->on_rq;
++		ncsw = 0;
++		if ((match = __task_state_match(p, match_state))) {
++			/*
++			 * When matching on p->saved_state, consider this task
++			 * still queued so it will wait.
++			 */
++			if (match < 0)
++				queued = 1;
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		}
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(queued)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
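++
++/*
++ * Illustrative sketch of the double-sample pattern described above: two
++ * calls a short while apart returning the same non-zero cookie mean @p
++ * was never scheduled in between.
++ *
++ *	unsigned long a, b;
++ *
++ *	a = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
++ *	msleep(10);
++ *	b = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
++ *	if (a && a == b)
++ *		pr_info("%s stayed unscheduled\n", p->comm);
++ */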
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/*
++	 * Alt schedule FW doesn't support sched_feat yet:
++	 *	if (!sched_feat(HRTICK))
++	 *		return 0;
++	 */
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
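++
++/*
++ * Illustrative usage (hypothetical delay value): with rq->lock held and
++ * IRQs disabled, arm a preemption point 2ms out:
++ *
++ *	hrtick_start(rq, 2 * NSEC_PER_MSEC);
++ */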
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) :
++		static_prio + MAX_PRIORITY_ADJ;
++}
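++
++/*
++ * Worked example (illustrative): a SCHED_FIFO task with rt_priority 50
++ * maps to prio MAX_RT_PRIO - 1 - 50 = 49, while a SCHED_NORMAL task at
++ * nice 0 (static_prio == NICE_TO_PRIO(0) == 120) maps to
++ * 120 + MAX_PRIORITY_ADJ, numerically above MAX_RT_PRIO and therefore
++ * lower priority than any RT task.
++ */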
++
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If the task is RT or was boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++	p->on_rq = 0;
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++#define SCA_CHECK		0x01
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu) {
++		rseq_migrate(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	guard(preempt)();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (p->migration_disabled == 0)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	if (WARN_ON_ONCE(!p->migration_disabled))
++		return;
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	guard(preempt)();
++	/*
++	 * Assumption: current should be running on an allowed CPU
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
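++
++/*
++ * Illustrative usage: calls nest, and the task stays preemptible inside
++ * the section; it just cannot change CPUs:
++ *
++ *	migrate_disable();
++ *	...safely use per-CPU state, e.g. via this_cpu_ptr()...
++ *	migrate_enable();
++ */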
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non-kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int
++				   new_cpu)
++{
++	int src_cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	src_cpu = cpu_of(rq);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++	sched_mm_cid_migrate_to(rq, p, src_cpu);
++
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++	wakeup_preempt(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int
++				 dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	}
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++static cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() above for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first, as the two tasks'
++	 * user_cpus_ptr values may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/**
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	guard(preempt)();
++	int cpu = task_cpu(p);
++
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    keep the load balancer from placing new tasks on the to-be-removed
++ *    CPU. Existing tasks will remain running there until they are
++ *    taken off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select the CPU on the other node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave the kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_idle_mask);
++
++	for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++		if (prio < cpu_rq(cpu)->prio)
++			cpumask_set_cpu(cpu, mask);
++	}
++}
++
++static inline int
++preempt_mask_check(struct task_struct *p, cpumask_t *allow_mask, cpumask_t *preempt_mask)
++{
++	int task_prio = task_sched_prio(p);
++	cpumask_t *mask = sched_preempt_mask + SCHED_QUEUE_BITS - 1 - task_prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != task_prio) {
++		sched_preempt_mask_flush(mask, task_prio);
++		atomic_set(&sched_prio_record, task_prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
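++/*
++ * Pick a CPU for @p, best first: an all-idle SMT core group (when
++ * CONFIG_SCHED_SMT), then any idle allowed CPU, then a CPU running
++ * lower-priority work that @p can preempt, and finally the nearest
++ * allowed CPU to the task's last CPU.
++ */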
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (
++#ifdef CONFIG_SCHED_SMT
++	    cpumask_and(&mask, &allow_mask, &sched_sg_idle_mask) ||
++#endif
++	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
++	    preempt_mask_check(p, &allow_mask, &mask))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task; it's something
++		 * userspace knows about and won't get confused by.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (p->migration_disabled) {
++			if (likely(p->cpus_ptr != &p->cpus_mask))
++				__do_set_cpus_ptr(p, &p->cpus_mask);
++			p->migration_disabled = 0;
++			p->migration_flags |= MDF_FORCE_ENABLED;
++			/* When p is migrate_disabled, rq->lock should be held */
++			rq->nr_pinned--;
++		}
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind():
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
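++
++/*
++ * Illustrative usage: restrict @p to CPU 2. Returns -EINVAL when the
++ * new mask contains no usable CPU:
++ *
++ *	int ret = set_cpus_allowed_ptr(p, cpumask_of(2));
++ */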
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/*
++		 * Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	wakeup_preempt(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			wakeup_preempt(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least one task, such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is OK to clear ttwu_pending even when another task is pending;
++	 * we will receive the IPI once local IRQs are re-enabled and enqueue
++	 * it then. Since nr_running > 0 by now, idle_cpu() will always
++	 * return the correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++		return false;
++	}
++
++	return true;
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rqs wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee CPU is idle, or the task is descheduling and the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	guard(rcu)();
++	if (is_idle_task(rcu_dereference(rq->curr))) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		if (is_idle_task(rq->curr))
++			resched_curr(rq);
++	}
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ *  No other bits are set. This allows us to distinguish all wakeup scenarios.
++ *
++ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits are set.
++ *  This allows us to prevent early wakeup of tasks before they can be run
++ *  on asymmetric ISA architectures (e.g. ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	int match;
++
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	*success = !!(match = __task_state_match(p, state));
++
++	/*
++	 * Saved state preserves the task state across blocking on
++	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
++	 * set p::saved_state to TASK_RUNNING, but do not wake the task
++	 * because it waits for a lock wakeup or __thaw_task(). Also
++	 * indicate success because from the regular waker's point of
++	 * view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success
++	 */
++	if (match < 0)
++		p->saved_state = TASK_RUNNING;
++
++	return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++	guard(preempt)();
++	int cpu, success = 0;
++
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++		smp_mb__after_spinlock();
++		if (!ttwu_state_match(p, state, &success))
++			break;
++
++		trace_sched_waking(p);
++
++		/*
++		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++		 * in smp_cond_load_acquire() below.
++		 *
++		 * sched_ttwu_pending()			try_to_wake_up()
++		 *   STORE p->on_rq = 1			  LOAD p->state
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   UNLOCK rq->lock
++		 *
++		 * [task p]
++		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * A similar smp_rmb() lives in __task_needs_rq_lock().
++		 */
++		smp_rmb();
++		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++			break;
++
++#ifdef CONFIG_SMP
++		/*
++		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++		 * possible to, falsely, observe p->on_cpu == 0.
++		 *
++		 * One must be running (->on_cpu == 1) in order to remove oneself
++		 * from the runqueue.
++		 *
++		 * __schedule() (switch to task 'p')	try_to_wake_up()
++		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (put 'p' to sleep)
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++		 * schedule()'s deactivate_task() has 'happened' and p will no longer
++		 * care about its own p->state. See the comment in __schedule().
++		 */
++		smp_acquire__after_ctrl_dep();
++
++		/*
++		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++		 * == 0), which means we need to do an enqueue, change p->state to
++		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++		 * enqueue, such as ttwu_queue_wakelist().
++		 */
++		WRITE_ONCE(p->__state, TASK_WAKING);
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, consider queueing p on the remote CPU's wake_list
++		 * which potentially sends an IPI instead of spinning on p->on_cpu to
++		 * let the waker make forward progress. This is safe because IRQs are
++		 * disabled and the IPI will deliver after on_cpu is cleared.
++		 *
++		 * Ensure we load task_cpu(p) after p->on_cpu:
++		 *
++		 * set_task_cpu(p, cpu);
++		 *   STORE p->cpu = @cpu
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock
++		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++		 *   STORE p->on_cpu = 1                LOAD p->cpu
++		 *
++		 * to ensure we observe the correct CPU on which the task is currently
++		 * scheduling.
++		 */
++		if (smp_load_acquire(&p->on_cpu) &&
++		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++			break;
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, wait until it's done referencing the task.
++		 *
++		 * Pairs with the smp_store_release() in finish_task().
++		 *
++		 * This ensures that tasks getting woken will be fully ordered against
++		 * their previous state and preserve Program Order.
++		 */
++		smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		sched_task_ttwu(p);
++
++		if ((wake_flags & WF_CURRENT_CPU) &&
++		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++			cpu = smp_processor_id();
++		else
++			cpu = select_task_rq(p);
++
++		if (cpu != task_cpu(p)) {
++			if (p->in_iowait) {
++				delayacct_blkio_end(p);
++				atomic_dec(&task_rq(p)->nr_iowait);
++			}
++
++			wake_flags |= WF_MIGRATED;
++			set_task_cpu(p, cpu);
++		}
++#else
++		sched_task_ttwu(p);
++
++		cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++		ttwu_queue(p, cpu, wake_flags);
++	}
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++
++	return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required.  Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
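++
++/*
++ * Illustrative sketch (hypothetical helper): sample a field while @p is
++ * pinned in its current state:
++ *
++ *	static int read_nvcsw(struct task_struct *t, void *arg)
++ *	{
++ *		*(unsigned long *)arg = t->nvcsw;
++ *		return 0;
++ *	}
++ *
++ *	unsigned long n;
++ *	task_call_func(p, read_nvcsw, &n);
++ */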
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++	init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, so that the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sched_timeslice_ns;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
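++
++/*
++ * Worked example of the split above, assuming the default 4ms timeslice:
++ * a parent holding 3ms of remaining slice keeps 1.5ms and the child
++ * starts with 1.5ms. Had the parent held less than 2 * RESCHED_NS, the
++ * halved child slice would fall below RESCHED_NS, so the child is topped
++ * up to a full sched_timeslice_ns and the parent is marked for resched.
++ */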
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++	{}
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	wakeup_preempt(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
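++
++/*
++ * A minimal consumer, sketched (KVM is the in-tree user; the names below
++ * are hypothetical). preempt_notifier_init() lives in <linux/preempt.h>:
++ *
++ *	static void my_sched_in(struct preempt_notifier *pn, int cpu) { }
++ *	static void my_sched_out(struct preempt_notifier *pn,
++ *				 struct task_struct *next) { }
++ *
++ *	static struct preempt_ops my_preempt_ops = {
++ *		.sched_in	= my_sched_in,
++ *		.sched_out	= my_sched_out,
++ *	};
++ *
++ *	preempt_notifier_inc();
++ *	preempt_notifier_init(&pn, &my_preempt_ops);
++ *	preempt_notifier_register(&pn);
++ */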
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running; we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
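++
++/*
++ * The release/acquire pair above, in miniature:
++ *
++ *	CPU0 (__schedule)			CPU1 (try_to_wake_up)
++ *	  ...final stores to prev...
++ *	  smp_store_release(&prev->on_cpu, 0);
++ *						smp_cond_load_acquire(
++ *							&p->on_cpu, !VAL);
++ *						...p may now be migrated...
++ *
++ * Everything CPU0 wrote before the release is visible to CPU1 after the
++ * acquire, so the waker never observes a half-switched-out task.
++ */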
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
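++
++/*
++ * The queueing side, sketched under the same rules (a helper of this shape
++ * lives in the scheduler headers; treat this as illustrative):
++ *
++ *	static inline void
++ *	queue_balance_callback(struct rq *rq, struct balance_callback *head,
++ *			       void (*func)(struct rq *rq))
++ *	{
++ *		lockdep_assert_held(&rq->lock);
++ *
++ *		if (unlikely(head->next ||
++ *			     rq->balance_callback == &balance_push_callback))
++ *			return;
++ *
++ *		head->func = func;
++ *		head->next = rq->balance_callback;
++ *		rq->balance_callback = head;
++ *	}
++ */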
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 *
++	 * switch_mm_cid() needs to be updated if the barriers provided
++	 * by context_switch() are modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	/* switch_mm_cid() requires the memory barriers above. */
++	switch_mm_cid(rq, prev, next);
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait pending, even though that CPU might
++ * not end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, due to under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
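++
++/*
++ * E.g. if rq->clock_task is 1.2ms past p->last_ran, 1,200,000ns is added
++ * to p->sched_time and charged to the cgroup, and p->time_slice shrinks by
++ * the same amount; once it drops below RESCHED_NS, the next
++ * scheduler_task_tick() marks the task for rescheduling.
++ */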
++
++/*
++ * Return accounted runtime for the task.
++ * Return separately the current task's pending runtime that has not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void scheduler_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr = rq->curr;
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	task_tick_mm_cid(rq, rq->curr);
++
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++
++	if (curr->flags & PF_WQ_WORKER)
++		wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_SCHED_SMT
++static inline int sg_balance_cpu_stop(void *data)
++{
++	struct rq *rq = this_rq();
++	struct task_struct *p = data;
++	cpumask_t tmp;
++	unsigned long flags;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	rq->active_balance = 0;
++	/* _something_ may have changed the task, double check again */
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
++	    !is_migration_disabled(p)) {
++		int cpu = cpu_of(rq);
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock(&p->pi_lock);
++
++	local_irq_restore(flags);
++
++	return 0;
++}
++
++/* sg_balance_trigger - trigger sibling group balance for @cpu */
++static inline int sg_balance_trigger(const int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	struct task_struct *curr;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++	curr = rq->curr;
++	res = (!is_idle_task(curr)) && (1 == rq->nr_running) &&
++	      cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&
++	      !is_migration_disabled(curr) && (!rq->active_balance);
++
++	if (res)
++		rq->active_balance = 1;
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res)
++		stop_one_cpu_nowait(cpu, sg_balance_cpu_stop, curr,
++				    &rq->active_balance_work);
++	return res;
++}
++
++/*
++ * sg_balance - sibling group balance check for run queue @rq
++ */
++static inline void sg_balance(struct rq *rq, int cpu)
++{
++	cpumask_t chk;
++
++	/* exit when cpu is offline */
++	if (unlikely(!rq->online))
++		return;
++
++	/*
++	 * Only a cpu in a sibling idle group will do the checking and then
++	 * find potential cpus whose current running task can be migrated over
++	 */
++	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
++	    cpumask_andnot(&chk, cpu_online_mask, sched_idle_mask) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++		int i;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			if (!cpumask_intersects(cpu_smt_mask(i), sched_idle_mask) &&
++			    sg_balance_trigger(i))
++				return;
++		}
++	}
++}
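++
++/*
++ * The whole SMT flow, in miniature: sg_balance() runs at the tail of
++ * __schedule() on a CPU whose siblings are all idle; sg_balance_trigger()
++ * marks a busy CPU's rq with active_balance and kicks its stopper thread,
++ * and sg_balance_cpu_stop() finally migrates that CPU's sole running task
++ * onto the idle sibling group picked via __best_mask_cpu().
++ */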
++#endif /* CONFIG_SCHED_SMT */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (tick_nohz_tick_stopped_cpu(cpu)) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		struct task_struct *curr = rq->curr;
++
++		if (cpu_online(cpu)) {
++			update_rq_clock(rq);
++
++			if (!is_idle_task(curr)) {
++				/*
++				 * Make sure the next tick runs within a
++				 * reasonable amount of time.
++				 */
++				u64 delta = rq_clock_task(rq) - curr->last_ran;
++				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++			}
++			scheduler_task_tick(rq);
++
++			calc_load_nohz_remote(rq);
++		}
++	}
++
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround to check that rq->curr is still on the rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	struct cpumask *topo_mask, *end_mask;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++		for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
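++
++/*
++ * Note the locking shape above: the local rq->lock is already held by
++ * __schedule(), so each candidate source runqueue is only trylocked; a
++ * failed trylock skips that CPU rather than risking an ABBA deadlock
++ * between two CPUs pulling from each other at the same time.
++ */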
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sched_timeslice_ns;
++
++	sched_task_renew(p, rq);
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq, task_sched_prio_idx(p, rq));
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next;
++
++	if (unlikely(rq->skip)) {
++		next = rq_runnable_task(rq);
++		if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++			if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++				rq->skip = NULL;
++				schedstat_inc(rq->sched_goidle);
++				return next;
++#ifdef	CONFIG_SMP
++			}
++			next = rq_runnable_task(rq);
++#endif
++		}
++		rq->skip = NULL;
++#ifdef CONFIG_HIGH_RES_TIMERS
++		hrtick_start(rq, next->time_slice);
++#endif
++		return next;
++	}
++
++	next = sched_rq_first_task(rq);
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE			0x0
++#define SM_PREEMPT		0x1
++#define SM_RTLOCK_WAIT		0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT	(~0U)
++#else
++# define SM_MASK_PREEMPT	SM_PREEMPT
++#endif
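++
++/*
++ * E.g. on !PREEMPT_RT the test in __schedule(),
++ *
++ *	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state)
++ *
++ * reduces to a plain zero check of sched_mode, since SM_MASK_PREEMPT is
++ * all-ones there; only RT kernels pay to tell SM_PREEMPT apart from
++ * SM_RTLOCK_WAIT.
++ */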
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler scheduler_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outmost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, !!sched_mode);
++
++	/* Bypass the sched_feat(HRTICK) check, which the Alt schedule framework doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(!!sched_mode);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++		if (signal_pending_state(prev_state, prev)) {
++			WRITE_ONCE(prev->__state, TASK_RUNNING);
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev_state & TASK_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++#ifdef CONFIG_SCHED_BMQ
++		rq->last_ts_switch = rq->clock;
++#endif
++		next->last_ran = rq->clock_task;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
++		 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 */
++		++*switch_count;
++
++		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	sg_balance(rq, cpu);
++#endif
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++	unsigned int task_flags;
++
++	/*
++	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
++	 * will use a blocking primitive -- which would lead to recursion.
++	 */
++	lock_map_acquire_try(&sched_map);
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++	else if (task_flags & PF_IO_WORKER)
++		io_wq_worker_sleeping(tsk);
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++
++	lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else
++			io_wq_worker_running(tsk);
++	}
++}
++
++static __always_inline void __schedule_loop(unsigned int sched_mode)
++{
++	do {
++		preempt_disable();
++		__schedule(sched_mode);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++	lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++	if (!task_is_running(tsk))
++		sched_submit_work(tsk);
++	__schedule_loop(SM_NONE);
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_NONE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	__schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note, that this is called and return with irqs disabled. This will
++ * protect us against recursive calling from irq.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		int idx;
++
++		update_rq_clock(rq);
++		idx = task_sched_prio_idx(p, rq);
++		if (idx != p->sq_idx) {
++			requeue_task(p, rq, idx);
++			wakeup_preempt(rq);
++		}
++	}
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++	sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++	lockdep_assert(current->sched_rt_mutex);
++	__schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++	sched_update_worker(current);
++	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
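++
++/*
++ * Worked example: a nice-0 SCHED_NORMAL task (prio 120) whose lock is
++ * wanted by a SCHED_FIFO waiter with rt_priority 50 (kernel prio
++ * 99 - 50 = 49) gets min(120, 49) = 49, i.e. it temporarily runs as
++ * an RT task until it releases the rt_mutex.
++ */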
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that a lot of trickery is needed to make this pointer cache work
++	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling until the task
++	 * switches back to SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * is_nice_reduction - check if nice value is an actual reduction
++ *
++ * Similar to can_nice() but does not perform a capability check.
++ *
++ * @p: task
++ * @nice: nice value
++ */
++static bool is_nice_reduction(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
++}
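++
++/*
++ * Worked example: nice_to_rlimit() maps nice [19, -20] onto [1, 40],
++ * i.e. rlimit style value = 20 - nice. With RLIMIT_NICE set to 25, a
++ * task may lower its nice value down to -5 (20 - (-5) = 25) without
++ * CAP_SYS_NICE.
++ */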
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
++}
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
++
++#endif
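++
++/*
++ * User-space sketch (illustrative): the libc nice() wrapper ends up
++ * here. Since -1 is a valid return value, errno must be cleared first:
++ *
++ *	errno = 0;
++ *	int prio = nice(5);	lower our priority by 5 nice levels
++ *	if (prio == -1 && errno)
++ *		perror("nice");
++ */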
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy                return value    kernel prio     user prio/nice
++ *
++ * (BMQ)normal, batch, idle    [0 ... 53]      [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle    [0 ... 39]      100             0/[-20 ... 19]
++ * fifo, rr                    [-1 ... -100]   [99 ... 0]      [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
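++
++/*
++ * Worked example from the table above: a SCHED_FIFO task with
++ * rt_priority 99 has kernel prio 99 - 99 = 0 and reports
++ * 0 - MAX_RT_PRIO = -100 here, while a BMQ SCHED_NORMAL task reports
++ * its queue index in [0 ... 53].
++ */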
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++static struct task_struct *find_get_task(pid_t pid)
++{
++	struct task_struct *p;
++	guard(rcu)();
++
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++
++	return p;
++}
++
++DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
++	     find_get_task(pid), pid_t pid)
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * Allow the normal nice value to be set, but it will not have any
++	 * effect on scheduling until the task switches back to
++	 * SCHED_NORMAL/SCHED_BATCH.
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	guard(rcu)();
++
++	pcred = __task_cred(p);
++	return (uid_eq(cred->euid, pcred->euid) ||
++	        uid_eq(cred->euid, pcred->uid));
++}
++
++/*
++ * Allow unprivileged RT tasks to decrease priority.
++ * Only issue a capable test if needed and only once to avoid an audit
++ * event on permitted non-privileged operations:
++ */
++static int user_check_sched_setscheduler(struct task_struct *p,
++					 const struct sched_attr *attr,
++					 int policy, int reset_on_fork)
++{
++	if (rt_policy(policy)) {
++		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
++
++		/* Can't set/change the rt policy: */
++		if (policy != p->policy && !rlim_rtprio)
++			goto req_priv;
++
++		/* Can't increase priority: */
++		if (attr->sched_priority > p->rt_priority &&
++		    attr->sched_priority > rlim_rtprio)
++			goto req_priv;
++	}
++
++	/* Can't change other user's priorities: */
++	if (!check_same_owner(p))
++		goto req_priv;
++
++	/* Normal users shall not reset the sched_reset_on_fork flag: */
++	if (p->sched_reset_on_fork && !reset_on_fork)
++		goto req_priv;
++
++	return 0;
++
++req_priv:
++	if (!capable(CAP_SYS_NICE))
++		return -EPERM;
++
++	return 0;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into
++	 * SCHED_FIFO at the highest priority (rt_priority 99, kernel prio 0)
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here:
++	 * For a task p which is not running, reading rq->stop is
++	 * racy but acceptable as ->stop doesn't change much.
++	 * An enhancement could be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop thread is a very bad idea
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboost
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi)
++		rt_mutex_adjust_pi(p);
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may be already dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * know enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
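++
++/*
++ * Typical in-kernel usage (an illustrative sketch; irq_worker_fn is
++ * hypothetical): a kthread servicing latency-sensitive work promotes
++ * itself once at startup:
++ *
++ *	static int irq_worker_fn(void *arg)
++ *	{
++ *		sched_set_fifo(current);
++ *		while (!kthread_should_stop())
++ *			;	do the time-critical work
++ *		return 0;
++ *	}
++ */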
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	return sched_setscheduler(p, policy, &lparam);
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
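++
++/*
++ * ABI example: an old binary passes size == SCHED_ATTR_SIZE_VER0 and
++ * copy_struct_from_user() zero-fills the newer tail fields; a newer
++ * binary whose larger sched_attr carries non-zero bits this kernel does
++ * not understand gets -E2BIG, with the kernel's sizeof(struct sched_attr)
++ * written back to uattr->size so it can retry.
++ */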
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
++
++static void get_params(struct task_struct *p, struct sched_attr *attr)
++{
++	if (task_has_rt_policy(p))
++		attr->sched_priority = p->rt_priority;
++	else
++		attr->sched_nice = task_nice(p);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
++		get_params(p, &attr);
++
++	return sched_setattr(p, &attr);
++}
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		return -EINVAL;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (!retval)
++		retval = p->policy;
++
++	return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		int retval;
++
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -EINVAL;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		if (task_has_rt_policy(p))
++			lp.sched_priority = p->rt_priority;
++	}
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * which part the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -ESRCH;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		kattr.sched_policy = p->policy;
++		if (p->sched_reset_on_fork)
++			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		get_params(p, &kattr);
++		kattr.sched_flags &= SCHED_FLAG_ALL;
++
++#ifdef CONFIG_UCLAMP_TASK
++		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++	}
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++}
++
++#ifdef CONFIG_SMP
++int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
++{
++	return 0;
++}
++#endif
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
++{
++	int retval;
++	cpumask_var_t cpus_allowed, new_mask;
++
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
++		return -ENOMEM;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
++
++	ctx->new_mask = new_mask;
++	ctx->flags |= SCA_CHECK;
++
++	retval = __set_cpus_allowed_ptr(p, ctx);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	if (!cpumask_subset(new_mask, cpus_allowed)) {
++		/*
++		 * We must have raced with a concurrent cpuset
++		 * update. Just reset the cpus_allowed to the
++		 * cpuset's cpus_allowed
++		 */
++		cpumask_copy(new_mask, cpus_allowed);
++
++		/*
++		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
++		 * will restore the previous user_cpus_ptr value.
++		 *
++		 * In the unlikely event a previous user_cpus_ptr exists,
++		 * we need to further restrict the mask to what is allowed
++		 * by that old user_cpus_ptr.
++		 */
++		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
++			bool empty = !cpumask_and(new_mask, new_mask,
++						  ctx->user_mask);
++
++			if (WARN_ON_ONCE(empty))
++				cpumask_copy(new_mask, cpus_allowed);
++		}
++		__set_cpus_allowed_ptr(p, ctx);
++		retval = -EINVAL;
++	}
++
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++	return retval;
++}
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	struct affinity_context ac;
++	struct cpumask *user_mask;
++	int retval;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (p->flags & PF_NO_SETAFFINITY)
++		return -EINVAL;
++
++	if (!check_same_owner(p)) {
++		guard(rcu)();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
++			return -EPERM;
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		return retval;
++
++	/*
++	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
++	 * alloc_user_cpus_ptr() returns NULL.
++	 */
++	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
++	if (user_mask) {
++		cpumask_copy(user_mask, in_mask);
++	} else if (IS_ENABLED(CONFIG_SMP)) {
++		return -ENOMEM;
++	}
++
++	ac = (struct affinity_context){
++		.new_mask  = in_mask,
++		.user_mask = user_mask,
++		.flags     = SCA_USER,
++	};
++
++	retval = __sched_setaffinity(p, &ac);
++	kfree(ac.user_mask);
++
++	return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	int retval;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	guard(raw_spinlock_irqsave)(&p->pi_lock);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min(len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
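++
++/*
++ * User-space sketch (illustrative): note the raw syscall returns the
++ * number of mask bytes copied, while the glibc wrapper hides that and
++ * returns 0 on success:
++ *
++ *	cpu_set_t set;
++ *
++ *	CPU_ZERO(&set);
++ *	if (sched_getaffinity(0, sizeof(set), &set) == 0 &&
++ *	    CPU_ISSET(3, &set))
++ *		;	the current task may run on CPU 3
++ */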
++
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	if (1 == sched_yield_type) {
++		if (!rt_task(current))
++			do_sched_yield_type_1(current, rq);
++	} else if (2 == sched_yield_type) {
++		if (rq->nr_running > 1)
++			rq->skip = current;
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
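++
++/*
++ * The behaviour above is runtime-tunable via the yield_type sysctl
++ * documented in Documentation/admin-guide/sysctl/kernel.rst, e.g. to
++ * turn sched_yield() into a no-op on a throughput-oriented box:
++ *
++ *	# sysctl kernel.yield_type=0
++ */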
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
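++
++/*
++ * Typical usage (illustrative; process_item() is hypothetical): long
++ * kernel-side loops yield periodically so PREEMPT_NONE/VOLUNTARY
++ * kernels stay responsive:
++ *
++ *	for (i = 0; i < nr_items; i++) {
++ *		process_item(i);
++ *		cond_resched();
++ *	}
++ */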
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	klp_sched_try_switch();
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
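++
++/*
++ * Usage pattern (illustrative; the queue helpers are hypothetical):
++ * callers go through the cond_resched_lock() macro and must revalidate
++ * any state derived under the lock when it returns 1, because the lock
++ * was dropped and re-acquired:
++ *
++ *	spin_lock(&q->lock);
++ *	while ((item = first_item(q))) {
++ *		handle(item);
++ *		if (cond_resched_lock(&q->lock))
++ *			continue;	re-read the queue head
++ *	}
++ *	spin_unlock(&q->lock);
++ */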
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
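++
++/*
++ * The active model can be inspected and switched at runtime, assuming
++ * debugfs is mounted and CONFIG_SCHED_DEBUG is set:
++ *
++ *	# cat /sys/kernel/debug/sched/preempt
++ *	none voluntary (full)
++ *	# echo voluntary > /sys/kernel/debug/sched/preempt
++ */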
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	if (!klp_override)
++		preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		if (!klp_override)
++			preempt_dynamic_disable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++	mutex_lock(&sched_dynamic_mutex);
++	__sched_dynamic_update(mode);
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++	__klp_sched_try_switch();
++	return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = true;
++	static_call_update(cond_resched, klp_cond_resched);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = false;
++	__sched_dynamic_update(preempt_dynamic_mode);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
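++
++/*
++ * Boot-time equivalent of the runtime switch: appending "preempt=none",
++ * "preempt=voluntary" or "preempt=full" to the kernel command line
++ * selects the mode before init runs; unknown values are warned about
++ * and ignored by setup_preempt_mode() above.
++ */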
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
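++
++/*
++ * The prepare/finish pair exists for callers that sleep through their
++ * own primitive but still want the wait accounted as iowait; e.g.
++ * mutex_lock_io() in kernel/locking/mutex.c is essentially:
++ *
++ *	int token = io_schedule_prepare();
++ *	mutex_lock(lock);
++ *	io_schedule_finish(token);
++ */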
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
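++
++/*
++ * User-space sketch (illustrative): portable code queries the range
++ * instead of hard-coding 1..99:
++ *
++ *	int max = sched_get_priority_max(SCHED_FIFO);	99 here
++ *	int min = sched_get_priority_min(SCHED_FIFO);	1 here
++ *	struct sched_param sp = { .sched_priority = (min + max) / 2 };
++ *	sched_setscheduler(0, SCHED_FIFO, &sp);
++ */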
++
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -EINVAL;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	*t = ns_to_timespec64(sched_timeslice_ns);
++	return 0;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++		free, task_pid_nr(p), task_tgid_nr(p),
++		ppid, read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	if (cpu == smp_processor_id() && in_hardirq()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
++ * effective when the hotplug motion is down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	preempt_disable();
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	preempt_enable();
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task,
++	 * which is a per-CPU kthread and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online) {
++		update_rq_clock(rq);
++		rq->online = false;
++	}
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in the
++		 * suspend/resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Do sync before parking smpboot threads to take care of the RCU
++	 * boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(&sched_sg_idle_mask);
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpumask_of(cpu));
++		tmp++;
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++		/*per_cpu(sd_llc_id, cpu) = cpu;*/
++	}
++}
++
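++/*
++ * Record one topology level in the per-CPU mask array: if @mask adds
++ * CPUs beyond the level recorded before it, store @mask in the current
++ * slot and advance the cursor; unless @last, the cursor slot is then
++ * seeded with the complement of @mask, so the following level is only
++ * recorded when it covers additional CPUs.
++ */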
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		/* take chance to reset time slice for idle tasks */
++		cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
++
++		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sched_timeslice_ns;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++/*
++ * Default task group.
++ * Every task in the system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++		rq->skip = NULL;
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++		rq->active_balance = 0;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{ }	/* Terminate */
++};
++
++static struct cftype cpu_files[] = {
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ *      X = Y = 0
++ *
++ *      w[X]=1          w[Y]=1
++ *      MB              MB
++ *      r[Y]=y          r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0                                      CPU1
++ *
++ * Context switch CS-1                       Remote-clear
++ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
++ *                                             (implied barrier after cmpxchg)
++ *   - switch_mm_cid()
++ *     - memory barrier (see switch_mm_cid()
++ *       comment explaining how this barrier
++ *       is combined with other scheduler
++ *       barriers)
++ *     - mm_cid_get (next)
++ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
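++
++/*
++ * Condensed sketch of the pairing above (illustration only, not
++ * compiled code):
++ *
++ *   scheduler, CPU0 (TSA)              remote clear, CPU1 (TMA)
++ *   ---------------------              ------------------------
++ *   rq->curr = Y                       cmpxchg(*pcpu_cid, cid,
++ *   smp_mb(), via switch_mm_cid()          cid | LAZY), implies MB
++ *   x = READ_ONCE(*pcpu_cid)           y = rcu_dereference(rq->curr)
++ *
++ * At least one of the two loads must observe the other side's store,
++ * so a cid that is still in active use is never moved to UNSET
++ * (property (1)).
++ */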
++
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++	t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++					  struct task_struct *t,
++					  struct mm_cid *src_pcpu_cid)
++{
++	struct mm_struct *mm = t->mm;
++	struct task_struct *src_task;
++	int src_cid, last_mm_cid;
++
++	if (!mm)
++		return -1;
++
++	last_mm_cid = t->last_mm_cid;
++	/*
++	 * If the migrated task has no last cid, or if the current
++	 * task on src rq uses the cid, it means the source cid does not need
++	 * to be moved to the destination cpu.
++	 */
++	if (last_mm_cid == -1)
++		return -1;
++	src_cid = READ_ONCE(src_pcpu_cid->cid);
++	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++		return -1;
++
++	/*
++	 * If we observe an active task using the mm on this rq, it means we
++	 * are not the last task to be migrated from this cpu for this mm, so
++	 * there is no need to move src_cid to the destination cpu.
++	 */
++	rcu_read_lock();
++	src_task = rcu_dereference(src_rq->curr);
++	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++		rcu_read_unlock();
++		t->last_mm_cid = -1;
++		return -1;
++	}
++	rcu_read_unlock();
++
++	return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++					      struct task_struct *t,
++					      struct mm_cid *src_pcpu_cid,
++					      int src_cid)
++{
++	struct task_struct *src_task;
++	struct mm_struct *mm = t->mm;
++	int lazy_cid;
++
++	if (src_cid == -1)
++		return -1;
++
++	/*
++	 * Attempt to clear the source cpu cid to move it to the destination
++	 * cpu.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(src_cid);
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++		return -1;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, this task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		src_task = rcu_dereference(src_rq->curr);
++		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++			/*
++			 * We observed an active task for this mm, there is therefore
++			 * no point in moving this cid to the destination cpu.
++			 */
++			t->last_mm_cid = -1;
++			return -1;
++		}
++	}
++
++	/*
++	 * The src_cid is unused, so it can be unset.
++	 */
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++		return -1;
++	return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
++{
++	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++	struct mm_struct *mm = t->mm;
++	int src_cid, dst_cid;
++	struct rq *src_rq;
++
++	lockdep_assert_rq_held(dst_rq);
++
++	if (!mm)
++		return;
++	if (src_cpu == -1) {
++		t->last_mm_cid = -1;
++		return;
++	}
++	/*
++	 * Move the src cid if the dst cid is unset. This keeps id
++	 * allocation closest to 0 in cases where few threads migrate around
++	 * many cpus.
++	 *
++	 * If destination cid is already set, we may have to just clear
++	 * the src cid to ensure compactness in frequent migrations
++	 * scenarios.
++	 *
++	 * It is not useful to clear the src cid when the number of threads is
++	 * greater than or equal to the number of allowed cpus, because user-space
++	 * can expect that the number of allowed cids can reach the number of
++	 * allowed cpus.
++	 */
++	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++	if (!mm_cid_is_unset(dst_cid) &&
++	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++		return;
++	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++	src_rq = cpu_rq(src_cpu);
++	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++	if (src_cid == -1)
++		return;
++	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++							    src_cid);
++	if (src_cid == -1)
++		return;
++	if (!mm_cid_is_unset(dst_cid)) {
++		__mm_cid_put(mm, src_cid);
++		return;
++	}
++	/* Move src_cid to dst cpu. */
++	mm_cid_snapshot_time(dst_rq, mm);
++	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++				      int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *t;
++	int cid, lazy_cid;
++
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid))
++		return;
++
++	/*
++	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
++	 * there happens to be other tasks left on the source cpu using this
++	 * mm, the next task using this mm will reallocate its cid on context
++	 * switch.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(cid);
++	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++		return;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, that task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		t = rcu_dereference(rq->curr);
++		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++			return;
++	}
++
++	/*
++	 * The cid is unused, so it can be unset.
++	 * Disable interrupts to keep the window of cid ownership without rq
++	 * lock small.
++	 */
++	scoped_guard (irqsave) {
++		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++			__mm_cid_put(mm, cid);
++	}
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct mm_cid *pcpu_cid;
++	struct task_struct *curr;
++	u64 rq_clock;
++
++	/*
++	 * rq->clock load is racy on 32-bit but one spurious clear once in a
++	 * while is irrelevant.
++	 */
++	rq_clock = READ_ONCE(rq->clock);
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++	/*
++	 * In order to take care of infrequently scheduled tasks, bump the time
++	 * snapshot associated with this cid if an active task using the mm is
++	 * observed on this rq.
++	 */
++	scoped_guard (rcu) {
++		curr = rcu_dereference(rq->curr);
++		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++			WRITE_ONCE(pcpu_cid->time, rq_clock);
++			return;
++		}
++	}
++
++	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++					     int weight)
++{
++	struct mm_cid *pcpu_cid;
++	int cid;
++
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid) || cid < weight)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
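++/*
++ * Deferred task work queued from task_tick_mm_cid(): runs at most once
++ * per MM_CID_SCAN_DELAY per mm, clearing cids that have not been used
++ * recently as well as cids at or above the current cidmask weight, so
++ * the cid space stays compact.
++ */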
++static void task_mm_cid_work(struct callback_head *work)
++{
++	unsigned long now = jiffies, old_scan, next_scan;
++	struct task_struct *t = current;
++	struct cpumask *cidmask;
++	struct mm_struct *mm;
++	int weight, cpu;
++
++	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++	work->next = work;	/* Prevent double-add */
++	if (t->flags & PF_EXITING)
++		return;
++	mm = t->mm;
++	if (!mm)
++		return;
++	old_scan = READ_ONCE(mm->mm_cid_next_scan);
++	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	if (!old_scan) {
++		unsigned long res;
++
++		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++		if (res != old_scan)
++			old_scan = res;
++		else
++			old_scan = next_scan;
++	}
++	if (time_before(now, old_scan))
++		return;
++	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++		return;
++	cidmask = mm_cidmask(mm);
++	/* Clear cids that were not recently used. */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_old(mm, cpu);
++	weight = cpumask_weight(cidmask);
++	/*
++	 * Clear cids that are greater than or equal to the cidmask weight to
++	 * recompact it.
++	 */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	int mm_users = 0;
++
++	if (mm) {
++		mm_users = atomic_read(&mm->mm_users);
++		if (mm_users == 1)
++			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	}
++	t->cid_work.next = &t->cid_work;	/* Protect against double add */
++	init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++	struct callback_head *work = &curr->cid_work;
++	unsigned long now = jiffies;
++
++	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++	    work->next != work)
++		return;
++	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++		return;
++	task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	scoped_guard (rq_lock_irqsave, rq) {
++		preempt_enable_no_resched();	/* holding spinlock */
++		WRITE_ONCE(t->mm_cid_active, 1);
++		/*
++		 * Store t->mm_cid_active before loading per-mm/cpu cid.
++		 * Matches barrier in sched_mm_cid_remote_clear_old().
++		 */
++		smp_mb();
++		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++	}
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1212a031700e
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,31 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..1a4dab2b2bb5
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,921 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS		(64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
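++
++/*
++ * For illustration: with SCHED_FIXEDPOINT_SHIFT == 10, the nice-0
++ * weight 1024 becomes scale_load(1024) == 1024 << 10 == 1048576 on
++ * 64-bit, and scale_load_down() shifts it back (clamping non-zero
++ * results to at least 2); on 32-bit both macros are the identity.
++ */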
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so the weight of an entity should not be too large,
++ * and neither should the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
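++/*
++ * The core BMQ data structure: one FIFO list per priority level plus a
++ * bitmap of the non-empty levels, so the highest-priority runnable
++ * level can be located with a single find-first-bit scan instead of a
++ * walk over all levels.
++ */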
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t			lock;
++
++	struct task_struct __rcu	*curr;
++	struct task_struct		*idle;
++	struct task_struct		*stop;
++	struct task_struct		*skip;
++	struct mm_struct		*prev_mm;
++
++	struct sched_queue	queue;
++#ifdef CONFIG_SCHED_PDS
++	u64			time_edge;
++#endif
++	unsigned long		prio;
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++	int active_balance;
++	struct cpu_stop_work	active_balance_work;
++#endif
++	struct balance_callback	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	/* Ensure that all clocks are in the same cache line */
++	u64			clock ____cacheline_aligned;
++	u64			clock_task;
++#ifdef CONFIG_SCHED_BMQ
++	u64			last_ts_switch;
++#endif
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++	ITSELF_LEVEL_SPACE_HOLDER,
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
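++/*
++ * Scan the given topology mask array from the narrowest level outward
++ * and return the first CPU of @mask found at the closest level, i.e.
++ * the cache-nearest candidate; best_mask_cpu() starts the scan from
++ * @cpu's own per-CPU topology masks.
++ */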
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++extern void flush_smp_call_function_queue(void);
++
++#else  /* !CONFIG_SMP */
++static inline void flush_smp_call_function_queue(void) { }
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking: as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking: as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++/*
++ * Below are scheduler APIs used by other kernel code.
++ * They use a dummy struct rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with the
++ * mainline scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime is subtracted from its own
++ * runtime and never moves forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching is
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++				  struct task_struct *p)
++{
++	return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
++#define MM_CID_SCAN_DELAY	100			/* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++	if (cid < 0)
++		return;
++	cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (!mm_cid_is_lazy_put(cid) ||
++	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, res;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	for (;;) {
++		if (mm_cid_is_unset(cid))
++			return MM_CID_UNSET;
++		/*
++		 * Attempt transition from valid or lazy-put to unset.
++		 */
++		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++		if (res == cid)
++			break;
++		cid = res;
++	}
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = mm_cid_pcpu_unset(mm);
++	if (cid == MM_CID_UNSET)
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++	struct cpumask *cpumask;
++	int cid;
++
++	cpumask = mm_cidmask(mm);
++	/*
++	 * Retry finding the first zero bit if the mask is temporarily
++	 * filled. This only happens during a concurrent remote clear
++	 * that owns a cid without holding a rq lock.
++	 */
++	for (;;) {
++		cid = cpumask_first_zero(cpumask);
++		if (cid < nr_cpu_ids)
++			break;
++		cpu_relax();
++	}
++	if (cpumask_test_and_set_cpu(cid, cpumask))
++		return -1;
++	return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, which allows estimating how recently it
++ * was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++	lockdep_assert_rq_held(rq);
++	WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	int cid;
++
++	/*
++	 * All allocations (even those using the cid_lock) are lock-free. If
++	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++	 * guarantee forward progress.
++	 */
++	if (!READ_ONCE(use_cid_lock)) {
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto end;
++		raw_spin_lock(&cid_lock);
++	} else {
++		raw_spin_lock(&cid_lock);
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto unlock;
++	}
++
++	/*
++	 * The cid was concurrently allocated. Retry while forcing
++	 * subsequent allocations to use the cid_lock to ensure forward
++	 * progress.
++	 */
++	WRITE_ONCE(use_cid_lock, 1);
++	/*
++	 * Set use_cid_lock before allocation. Only program order matters
++	 * here, because this is only required for forward progress.
++	 */
++	barrier();
++	/*
++	 * Retry until it succeeds. It is guaranteed to eventually succeed
++	 * once all incoming allocations observe the use_cid_lock flag set.
++	 */
++	do {
++		cid = __mm_cid_try_get(mm);
++		cpu_relax();
++	} while (cid < 0);
++	/*
++	 * Allocate before clearing use_cid_lock. Only program order
++	 * matters here, because this is for forward progress.
++	 */
++	barrier();
++	WRITE_ONCE(use_cid_lock, 0);
++unlock:
++	raw_spin_unlock(&cid_lock);
++end:
++	mm_cid_snapshot_time(rq, mm);
++	return cid;
++}
++
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	struct cpumask *cpumask;
++	int cid;
++
++	lockdep_assert_rq_held(rq);
++	cpumask = mm_cidmask(mm);
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (mm_cid_is_valid(cid)) {
++		mm_cid_snapshot_time(rq, mm);
++		return cid;
++	}
++	if (mm_cid_is_lazy_put(cid)) {
++		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++	}
++	cid = __mm_cid_get(rq, mm);
++	__this_cpu_write(pcpu_cid->cid, cid);
++	return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++				 struct task_struct *prev,
++				 struct task_struct *next)
++{
++	/*
++	 * Provide a memory barrier between rq->curr store and load of
++	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++	 *
++	 * Should be adapted if context_switch() is modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		/*
++		 * user -> kernel transition does not guarantee a barrier, but
++		 * we can use the fact that it performs an atomic operation in
++		 * mmgrab().
++		 */
++		if (prev->mm)                           // from user
++			smp_mb__after_mmgrab();
++		/*
++		 * kernel -> kernel transition does not change rq->curr->mm
++		 * state. It stays NULL.
++		 */
++	} else {                                        // to user
++		/*
++		 * kernel -> user transition does not provide a barrier
++		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++		 * Provide it here.
++		 */
++		if (!prev->mm)                          // from kernel
++			smp_mb();
++		/*
++		 * user -> user transition guarantees a memory barrier through
++		 * switch_mm() when current->mm changes. If current->mm is
++		 * unchanged, no barrier is needed.
++		 */
++	}
++	if (prev->mm_cid_active) {
++		mm_cid_snapshot_time(rq, prev->mm);
++		mm_cid_put_lazy(prev);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#endif /* ALT_SCHED_H */
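The mm_cid allocator above layers a lock-free fast path over a cid_lock fallback: when __mm_cid_try_get() fails, the caller raises use_cid_lock so that every later allocation serializes behind the lock until the contended one completes. A minimal userspace sketch of that forward-progress scheme follows; the bitmap allocator and all names are illustrative stand-ins for mm_cidmask() and friends, not kernel API.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_IDS 4

static atomic_uint idmask;	/* stands in for mm_cidmask() */
static atomic_int use_lock;	/* stands in for use_cid_lock */
static pthread_mutex_t id_lock = PTHREAD_MUTEX_INITIALIZER;

/* Lock-free attempt: claim the first clear bit, or fail with -1. */
static int try_get_id(void)
{
	for (int id = 0; id < NR_IDS; id++) {
		if (atomic_load(&idmask) & (1u << id))
			continue;
		/* Another thread may claim the bit first; check the old value. */
		if (!(atomic_fetch_or(&idmask, 1u << id) & (1u << id)))
			return id;
	}
	return -1;
}

/*
 * Mirror of the __mm_cid_get() flow: try without the lock, and on
 * failure flag use_lock so concurrent allocators serialize until the
 * contention clears. The kernel makes do with barrier(); this sketch
 * leans on seq_cst atomics instead.
 */
static int get_id(void)
{
	int id;

	if (!atomic_load(&use_lock)) {
		id = try_get_id();
		if (id >= 0)
			return id;
		pthread_mutex_lock(&id_lock);
	} else {
		pthread_mutex_lock(&id_lock);
		id = try_get_id();
		if (id >= 0)
			goto unlock;
	}

	atomic_store(&use_lock, 1);	/* push later allocators onto the lock */
	do {
		id = try_get_id();
	} while (id < 0);
	atomic_store(&use_lock, 0);
unlock:
	pthread_mutex_unlock(&id_lock);
	return id;
}

int main(void)
{
	printf("got id %d, then id %d\n", get_id(), get_id());
	return 0;
}

The lock exists only to bound starvation: the common path stays lock-free, and contenders fall back to strict serialization instead of spinning against each other indefinitely.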
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..d8f6381c27a9
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,101 @@
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ-only routines
++ */
++#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
++#define boost_threshold(p)	(sched_timeslice_ns >> ((14 - (p)->boost_prio) / 2))
++
++static inline void boost_task(struct task_struct *p)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	if (p->boost_prio > limit)
++		p->boost_prio--;
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MAX_RT_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO)? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MAX_RT_PRIO) / 2;
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	return task_sched_prio(p);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq) {}
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	if (this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
++		boost_task(p);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	u64 switch_ns = rq_switch_time(rq);
++
++	if (switch_ns < boost_threshold(p))
++		boost_task(p);
++	else if (switch_ns > sched_timeslice_ns)
++		deboost_task(p);
++}
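bmq.h above reduces task placement to a single queue index: RT priorities compress four to one, and SCHED_NORMAL tasks land at a base offset plus the average of their priority and the boost_prio earned by sleeping (or lost by burning whole time slices). The standalone model below reproduces that arithmetic; MAX_RT_PRIO = 100 matches mainline, while MIN_SCHED_NORMAL_PRIO = 32 is an assumption here, since the real constant is defined in alt_sched.h outside this excerpt.

#include <stdio.h>

#define MAX_RT_PRIO		100	/* mainline: RT prios are 0..99 */
#define MIN_SCHED_NORMAL_PRIO	32	/* assumed; see alt_sched.h */

/* BMQ: compress RT prios 4:1, blend boost_prio into normal prios. */
static int task_sched_prio(int prio, int boost_prio)
{
	return (prio < MAX_RT_PRIO) ? (prio >> 2) :
		MIN_SCHED_NORMAL_PRIO + (prio + boost_prio - MAX_RT_PRIO) / 2;
}

int main(void)
{
	/* nice 0 maps to prio 120; a negative boost_prio means "boosted". */
	printf("nice 0, boost -2 -> queue %d\n", task_sched_prio(120, -2));
	printf("nice 0, boost +5 -> queue %d\n", task_sched_prio(120, 5));
	printf("RT prio 40       -> queue %d\n", task_sched_prio(40, 0));
	return 0;
}

Each boost point moves a task only half a queue, so interactive wakeups drift toward lower queues without letting a sleeper leapfrog a whole nice level per wakeup.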
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index d9dc9ab3773f..71a25540d65e 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -42,13 +42,19 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
+-#include "deadline.c"
+ 
++#ifndef CONFIG_SCHED_ALT
++#include "deadline.c"
++#endif
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 80a3df49ab47..bc17d5a6fc41 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -84,7 +84,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 5888176354e2..6ab2534714f6 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -155,12 +155,18 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
+ 
+ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ {
+-	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
+ 	struct rq *rq = cpu_rq(sg_cpu->cpu);
+ 
++#ifndef CONFIG_SCHED_ALT
++	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
++
+ 	sg_cpu->bw_dl = cpu_bw_dl(rq);
+ 	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
+ 					  FREQUENCY_UTIL, NULL);
++#else
++	sg_cpu->bw_dl = 0;
++	sg_cpu->util = rq_load_util(rq, arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -306,8 +312,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -636,6 +644,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index af7952f12e6c..6461cbbb734d 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -630,7 +630,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
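The cputime hunks above replace direct reads of p->se.sum_exec_runtime with tsk_seruntime(), since a CONFIG_SCHED_ALT kernel has no CFS sched_entity to read. The helper itself is defined in a sched.h hunk outside this excerpt; judging from the workqueue hunks later in this patch, which read task->sched_time in the alternative case, a plausible shape is the toy model below. The field names here are assumptions, not the patch's exact definition.

#include <stdio.h>

/* Toy task_struct carrying both runtime fields, to show the accessor split. */
struct task_struct {
	unsigned long long se_sum_exec_runtime;	/* CFS/EEVDF accounting */
	unsigned long long sched_time;		/* Project C accounting */
};

/* Compile with -DCONFIG_SCHED_ALT to take the alternative branch. */
#ifdef CONFIG_SCHED_ALT
#define tsk_seruntime(t)	((t)->sched_time)
#else
#define tsk_seruntime(t)	((t)->se_sum_exec_runtime)
#endif

int main(void)
{
	struct task_struct t = { .se_sum_exec_runtime = 111, .sched_time = 222 };

	printf("runtime = %llu ns\n", tsk_seruntime(&t));
	return 0;
}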
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 4580a450700e..21c15b4488fe 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /sys/kernel/debug/sched/debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ #ifdef CONFIG_SMP
+@@ -332,6 +335,7 @@ static const struct file_operations sched_debug_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -341,12 +345,16 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++	debugfs_create_bool("verbose", 0644, debugfs_sched, &sched_debug_verbose);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+ 
+ 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+@@ -373,11 +381,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1106,6 +1116,7 @@ void proc_sched_set_task(struct task_struct *p)
+ 	memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 565f8374ddbb..67d51e05a8ac 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -380,6 +380,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -501,3 +502,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..b20226ed47cc
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,142 @@
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* Default time slice is 4ms (~2^22 ns); two time-slice slots give shift 23. */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms)
++{
++	if (2 == timeslice_ms)
++		sched_timeslice_shift = 22;
++}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	s64 delta = p->deadline - rq->time_edge + SCHED_EDGE_DELTA;
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(delta > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", delta))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return max(0LL, delta);
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	u64 idx;
++
++	if (p->prio < MIN_NORMAL_PRIO)
++		return p->prio >> 2;
++
++	idx = max(p->deadline + SCHED_EDGE_DELTA, rq->time_edge);
++	/*printk(KERN_INFO "sched: task_sched_prio_idx edge:%llu, deadline=%llu idx=%llu\n", rq->time_edge, p->deadline, idx);*/
++	return MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(idx);
++}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		struct task_struct *p;
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		list_for_each_entry(p, &head, sq_node)
++			p->sq_idx = idx;
++
++		list_splice(&head, rq->queue.heads + idx);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = (rq->prio < MIN_SCHED_NORMAL_PRIO + delta) ?
++		MIN_SCHED_NORMAL_PRIO : rq->prio - delta;
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + NICE_WIDTH / 2 - 1;
++
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_task_renew(p, rq);
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq);
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	time_slice_expired(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
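pds.h keeps the 32 normal-priority queues in a ring rotated against rq->time_edge, which sched_update_rq_clock() advances once per time slice; sched_prio2idx() and sched_idx2prio() translate between stable priorities and ring slots. A standalone sketch of the rotation follows; it drops the IDLE_TASK_SCHED_PRIO special case and again assumes MIN_SCHED_NORMAL_PRIO = 32.

#include <stdio.h>

#define MIN_SCHED_NORMAL_PRIO	32	/* assumed; see alt_sched.h */
#define SCHED_NORMAL_PRIO_NUM	32	/* must stay a power of 2 */
#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))

static int prio2idx(int sched_prio, unsigned long long time_edge)
{
	return (sched_prio < MIN_SCHED_NORMAL_PRIO) ? sched_prio :
		MIN_SCHED_NORMAL_PRIO +
		SCHED_NORMAL_PRIO_MOD(sched_prio + time_edge);
}

static int idx2prio(int sched_idx, unsigned long long time_edge)
{
	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ? sched_idx :
		MIN_SCHED_NORMAL_PRIO +
		SCHED_NORMAL_PRIO_MOD(sched_idx - time_edge);
}

int main(void)
{
	unsigned long long edge = 5;	/* pretend 5 slices have elapsed */
	int prio = MIN_SCHED_NORMAL_PRIO + 3;
	int idx = prio2idx(prio, edge);

	/* The rotation inverts cleanly: prio -> idx -> prio is the identity. */
	printf("prio %d -> idx %d -> prio %d\n", prio, idx, idx2prio(idx, edge));
	return 0;
}

The inversion works because the base offset is a multiple of the power-of-2 ring size, so adding and subtracting time_edge under the mask cancel exactly; that is also why the comment above insists the queue count stay a power of 2.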
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index 63b6cf898220..9ca10ece4d3a 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * thermal:
+  *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 3a0e0dc28721..e8a7d84aa5a5 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 thermal_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 2e5a95486a42..0c86131a2a64 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3509,4 +3513,9 @@ static inline void init_sched_mm_cid(struct task_struct *t) { }
+ extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+ extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 857f837f52cb..5486c63e4790 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -125,8 +125,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 38f3698f5e5b..b9d597394316 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delt
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 10d1391e7416..120933a5b206 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1445,8 +1446,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1680,6 +1683,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1716,6 +1720,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2793,3 +2798,20 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 157f7ce2942d..63083a9a2935 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ 
+ /* Constants used for minimum and maximum */
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
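The sysctl hunk above exposes sched_yield_type as kernel.yield_type, clamped to 0..2 by proc_dointvec_minmax. On a running CONFIG_SCHED_ALT kernel it is an ordinary procfs file, so a minimal reader needs no special API; the value meanings follow the sysctl documentation added earlier in this patch.

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/yield_type", "r");
	int type;

	if (!f) {
		perror("yield_type (needs a CONFIG_SCHED_ALT kernel)");
		return 1;
	}
	if (fscanf(f, "%d", &type) == 1)
		printf("yield_type = %d (0 no yield, 1 deboost+requeue, 2 skip)\n",
		       type);
	fclose(f);
	return 0;
}

Writing works the same way: echoing 0, 1, or 2 into the file takes effect immediately, and out-of-range values are rejected by the minmax handler.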
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 760793998cdd..3198ed8ab40a 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2091,8 +2091,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+-	if (rt_task(current))
++	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index e9c6f9d0e42c..43ee0a94abdd 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -867,6 +867,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -874,6 +875,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -901,8 +903,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -916,7 +920,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index 529590499b1f..d04bb99b4f0e 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 2989b57e154a..7313d9f5585f 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1114,6 +1114,7 @@ static bool kick_pool(struct worker_pool *pool)
+ 
+ 	p = worker->task;
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Idle @worker is about to execute @work and waking up provides an
+@@ -1139,6 +1140,8 @@ static bool kick_pool(struct worker_pool *pool)
+ 		get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++;
+ 	}
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ 	wake_up_process(p);
+ 	return true;
+ }
+@@ -1263,7 +1266,11 @@ void wq_worker_running(struct task_struct *task)
+ 	 * CPU intensive auto-detection cares about how long a work item hogged
+ 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
+ 	 */
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 
+ 	WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1348,7 +1355,11 @@ void wq_worker_tick(struct task_struct *task)
+ 	 * We probably want to make this prettier in the future.
+ 	 */
+ 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++	    worker->task->sched_time - worker->current_at <
++#else
+ 	    worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ 		return;
+ 
+@@ -2559,7 +2570,11 @@ __acquires(&pool->lock)
+ 	worker->current_work = work;
+ 	worker->current_func = work->func;
+ 	worker->current_pwq = pwq;
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 	work_data = *work_data_bits(work);
+ 	worker->current_color = get_work_color(work_data);
+ 


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-20 11:44 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-20 11:44 UTC (permalink / raw
  To: gentoo-commits

commit:     6ccd8f98cbb9ef0b68716a894f3434a6833a530b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 20 11:44:10 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jan 20 11:44:10 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6ccd8f98

Linux patch 6.7.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1000_linux-6.7.1.patch | 1688 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1692 insertions(+)

diff --git a/0000_README b/0000_README
index e23280d4..9e6fa4b3 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-6.7.1.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.1
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1000_linux-6.7.1.patch b/1000_linux-6.7.1.patch
new file mode 100644
index 00000000..3bbdfffe
--- /dev/null
+++ b/1000_linux-6.7.1.patch
@@ -0,0 +1,1688 @@
+diff --git a/Documentation/admin-guide/features.rst b/Documentation/admin-guide/features.rst
+index 8c167082a84f9..7651eca38227d 100644
+--- a/Documentation/admin-guide/features.rst
++++ b/Documentation/admin-guide/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features
++.. kernel-feat:: features
+diff --git a/Documentation/arch/arc/features.rst b/Documentation/arch/arc/features.rst
+index b793583d688a4..49ff446ff744c 100644
+--- a/Documentation/arch/arc/features.rst
++++ b/Documentation/arch/arc/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features arc
++.. kernel-feat:: features arc
+diff --git a/Documentation/arch/arm/features.rst b/Documentation/arch/arm/features.rst
+index 7414ec03dd157..0e76aaf68ecab 100644
+--- a/Documentation/arch/arm/features.rst
++++ b/Documentation/arch/arm/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features arm
++.. kernel-feat:: features arm
+diff --git a/Documentation/arch/arm64/features.rst b/Documentation/arch/arm64/features.rst
+index dfa4cb3cd3efa..03321f4309d0b 100644
+--- a/Documentation/arch/arm64/features.rst
++++ b/Documentation/arch/arm64/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features arm64
++.. kernel-feat:: features arm64
+diff --git a/Documentation/arch/loongarch/features.rst b/Documentation/arch/loongarch/features.rst
+index ebacade3ea454..009f44c7951f8 100644
+--- a/Documentation/arch/loongarch/features.rst
++++ b/Documentation/arch/loongarch/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features loongarch
++.. kernel-feat:: features loongarch
+diff --git a/Documentation/arch/m68k/features.rst b/Documentation/arch/m68k/features.rst
+index 5107a21194724..de7f0ccf7fc8e 100644
+--- a/Documentation/arch/m68k/features.rst
++++ b/Documentation/arch/m68k/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features m68k
++.. kernel-feat:: features m68k
+diff --git a/Documentation/arch/mips/features.rst b/Documentation/arch/mips/features.rst
+index 1973d729b29a9..6e0ffe3e73540 100644
+--- a/Documentation/arch/mips/features.rst
++++ b/Documentation/arch/mips/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features mips
++.. kernel-feat:: features mips
+diff --git a/Documentation/arch/nios2/features.rst b/Documentation/arch/nios2/features.rst
+index 8449e63f69b2b..89913810ccb5a 100644
+--- a/Documentation/arch/nios2/features.rst
++++ b/Documentation/arch/nios2/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features nios2
++.. kernel-feat:: features nios2
+diff --git a/Documentation/arch/openrisc/features.rst b/Documentation/arch/openrisc/features.rst
+index 3f7c40d219f2c..bae2e25adfd64 100644
+--- a/Documentation/arch/openrisc/features.rst
++++ b/Documentation/arch/openrisc/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features openrisc
++.. kernel-feat:: features openrisc
+diff --git a/Documentation/arch/parisc/features.rst b/Documentation/arch/parisc/features.rst
+index 501d7c4500379..b3aa4d243b936 100644
+--- a/Documentation/arch/parisc/features.rst
++++ b/Documentation/arch/parisc/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features parisc
++.. kernel-feat:: features parisc
+diff --git a/Documentation/arch/powerpc/features.rst b/Documentation/arch/powerpc/features.rst
+index aeae73df86b0c..ee4b95e04202d 100644
+--- a/Documentation/arch/powerpc/features.rst
++++ b/Documentation/arch/powerpc/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features powerpc
++.. kernel-feat:: features powerpc
+diff --git a/Documentation/arch/riscv/features.rst b/Documentation/arch/riscv/features.rst
+index c70ef6ac2368c..36e90144adabd 100644
+--- a/Documentation/arch/riscv/features.rst
++++ b/Documentation/arch/riscv/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features riscv
++.. kernel-feat:: features riscv
+diff --git a/Documentation/arch/s390/features.rst b/Documentation/arch/s390/features.rst
+index 57c296a9d8f30..2883dc9506817 100644
+--- a/Documentation/arch/s390/features.rst
++++ b/Documentation/arch/s390/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features s390
++.. kernel-feat:: features s390
+diff --git a/Documentation/arch/sh/features.rst b/Documentation/arch/sh/features.rst
+index f722af3b6c993..fae48fe81e9bd 100644
+--- a/Documentation/arch/sh/features.rst
++++ b/Documentation/arch/sh/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features sh
++.. kernel-feat:: features sh
+diff --git a/Documentation/arch/sparc/features.rst b/Documentation/arch/sparc/features.rst
+index c0c92468b0fe9..96835b6d598a1 100644
+--- a/Documentation/arch/sparc/features.rst
++++ b/Documentation/arch/sparc/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features sparc
++.. kernel-feat:: features sparc
+diff --git a/Documentation/arch/x86/features.rst b/Documentation/arch/x86/features.rst
+index b663f15053ce8..a33616346a388 100644
+--- a/Documentation/arch/x86/features.rst
++++ b/Documentation/arch/x86/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features x86
++.. kernel-feat:: features x86
+diff --git a/Documentation/arch/xtensa/features.rst b/Documentation/arch/xtensa/features.rst
+index 6b92c7bfa19da..28dcce1759be4 100644
+--- a/Documentation/arch/xtensa/features.rst
++++ b/Documentation/arch/xtensa/features.rst
+@@ -1,3 +1,3 @@
+ .. SPDX-License-Identifier: GPL-2.0
+ 
+-.. kernel-feat:: $srctree/Documentation/features xtensa
++.. kernel-feat:: features xtensa
+diff --git a/Documentation/sphinx/kernel_feat.py b/Documentation/sphinx/kernel_feat.py
+index b5fa2f0542a5d..b9df61eb45013 100644
+--- a/Documentation/sphinx/kernel_feat.py
++++ b/Documentation/sphinx/kernel_feat.py
+@@ -37,8 +37,6 @@ import re
+ import subprocess
+ import sys
+ 
+-from os import path
+-
+ from docutils import nodes, statemachine
+ from docutils.statemachine import ViewList
+ from docutils.parsers.rst import directives, Directive
+@@ -76,33 +74,26 @@ class KernelFeat(Directive):
+         self.state.document.settings.env.app.warn(message, prefix="")
+ 
+     def run(self):
+-
+         doc = self.state.document
+         if not doc.settings.file_insertion_enabled:
+             raise self.warning("docutils: file insertion disabled")
+ 
+         env = doc.settings.env
+-        cwd = path.dirname(doc.current_source)
+-        cmd = "get_feat.pl rest --enable-fname --dir "
+-        cmd += self.arguments[0]
+-
+-        if len(self.arguments) > 1:
+-            cmd += " --arch " + self.arguments[1]
+ 
+-        srctree = path.abspath(os.environ["srctree"])
++        srctree = os.path.abspath(os.environ["srctree"])
+ 
+-        fname = cmd
++        args = [
++            os.path.join(srctree, 'scripts/get_feat.pl'),
++            'rest',
++            '--enable-fname',
++            '--dir',
++            os.path.join(srctree, 'Documentation', self.arguments[0]),
++        ]
+ 
+-        # extend PATH with $(srctree)/scripts
+-        path_env = os.pathsep.join([
+-            srctree + os.sep + "scripts",
+-            os.environ["PATH"]
+-        ])
+-        shell_env = os.environ.copy()
+-        shell_env["PATH"]    = path_env
+-        shell_env["srctree"] = srctree
++        if len(self.arguments) > 1:
++            args.extend(['--arch', self.arguments[1]])
+ 
+-        lines = self.runCmd(cmd, shell=True, cwd=cwd, env=shell_env)
++        lines = subprocess.check_output(args, cwd=os.path.dirname(doc.current_source)).decode('utf-8')
+ 
+         line_regex = re.compile(r"^\.\. FILE (\S+)$")
+ 
+@@ -121,30 +112,6 @@ class KernelFeat(Directive):
+         nodeList = self.nestedParse(out_lines, fname)
+         return nodeList
+ 
+-    def runCmd(self, cmd, **kwargs):
+-        u"""Run command ``cmd`` and return its stdout as unicode."""
+-
+-        try:
+-            proc = subprocess.Popen(
+-                cmd
+-                , stdout = subprocess.PIPE
+-                , stderr = subprocess.PIPE
+-                , **kwargs
+-            )
+-            out, err = proc.communicate()
+-
+-            out, err = codecs.decode(out, 'utf-8'), codecs.decode(err, 'utf-8')
+-
+-            if proc.returncode != 0:
+-                raise self.severe(
+-                    u"command '%s' failed with return code %d"
+-                    % (cmd, proc.returncode)
+-                )
+-        except OSError as exc:
+-            raise self.severe(u"problems with '%s' directive: %s."
+-                              % (self.name, ErrorString(exc)))
+-        return out
+-
+     def nestedParse(self, lines, fname):
+         content = ViewList()
+         node    = nodes.section()
+diff --git a/Documentation/translations/zh_CN/arch/loongarch/features.rst b/Documentation/translations/zh_CN/arch/loongarch/features.rst
+index 82bfac180bdc0..cec38dda8298c 100644
+--- a/Documentation/translations/zh_CN/arch/loongarch/features.rst
++++ b/Documentation/translations/zh_CN/arch/loongarch/features.rst
+@@ -5,4 +5,4 @@
+ :Original: Documentation/arch/loongarch/features.rst
+ :Translator: Huacai Chen <chenhuacai@loongson.cn>
+ 
+-.. kernel-feat:: $srctree/Documentation/features loongarch
++.. kernel-feat:: features loongarch
+diff --git a/Documentation/translations/zh_CN/arch/mips/features.rst b/Documentation/translations/zh_CN/arch/mips/features.rst
+index da1b956e4a40f..0d6df97db069b 100644
+--- a/Documentation/translations/zh_CN/arch/mips/features.rst
++++ b/Documentation/translations/zh_CN/arch/mips/features.rst
+@@ -10,4 +10,4 @@
+ 
+ .. _cn_features:
+ 
+-.. kernel-feat:: $srctree/Documentation/features mips
++.. kernel-feat:: features mips
+diff --git a/Documentation/translations/zh_TW/arch/loongarch/features.rst b/Documentation/translations/zh_TW/arch/loongarch/features.rst
+index b64e430f55aef..c2175fd32b54b 100644
+--- a/Documentation/translations/zh_TW/arch/loongarch/features.rst
++++ b/Documentation/translations/zh_TW/arch/loongarch/features.rst
+@@ -5,5 +5,5 @@
+ :Original: Documentation/arch/loongarch/features.rst
+ :Translator: Huacai Chen <chenhuacai@loongson.cn>
+ 
+-.. kernel-feat:: $srctree/Documentation/features loongarch
++.. kernel-feat:: features loongarch
+ 
+diff --git a/Documentation/translations/zh_TW/arch/mips/features.rst b/Documentation/translations/zh_TW/arch/mips/features.rst
+index f694104200354..3d3906c4d08e2 100644
+--- a/Documentation/translations/zh_TW/arch/mips/features.rst
++++ b/Documentation/translations/zh_TW/arch/mips/features.rst
+@@ -10,5 +10,5 @@
+ 
+ .. _tw_features:
+ 
+-.. kernel-feat:: $srctree/Documentation/features mips
++.. kernel-feat:: features mips
+ 
+diff --git a/Makefile b/Makefile
+index c6f549f6a4aeb..186da2386a067 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 9bd9f79cd4099..c3536c236be99 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -510,6 +510,13 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "GMxXGxx"),
+ 		},
+ 	},
++	{
++		/* TongFang GMxXGxx sold as Eluktronics Inc. RP-15 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Eluktronics Inc."),
++			DMI_MATCH(DMI_BOARD_NAME, "RP-15"),
++		},
++	},
+ 	{
+ 		/* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */
+ 		.matches = {
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 92128aae2d060..71a40a4c546f5 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -5030,7 +5030,7 @@ static __poll_t binder_poll(struct file *filp,
+ 
+ 	thread = binder_get_thread(proc);
+ 	if (!thread)
+-		return POLLERR;
++		return EPOLLERR;
+ 
+ 	binder_inner_proc_lock(thread->proc);
+ 	thread->looper |= BINDER_LOOPER_STATE_POLL;
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 138f6d43d13b2..e5fa2042585a4 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -557,7 +557,7 @@ err_alloc_buf_struct_failed:
+  * is the sum of the three given sizes (each rounded up to
+  * pointer-sized boundary)
+  *
+- * Return:	The allocated buffer or %NULL if error
++ * Return:	The allocated buffer or %ERR_PTR(-errno) if error
+  */
+ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
+ 					   size_t data_size,
+@@ -706,7 +706,7 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
+ 	/*
+ 	 * We could eliminate the call to binder_alloc_clear_buf()
+ 	 * from binder_alloc_deferred_release() by moving this to
+-	 * binder_alloc_free_buf_locked(). However, that could
++	 * binder_free_buf_locked(). However, that could
+ 	 * increase contention for the alloc mutex if clear_on_free
+ 	 * is used frequently for large buffers. The mutex is not
+ 	 * needed for correctness here.
+@@ -1005,7 +1005,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 		goto err_mmget;
+ 	if (!mmap_read_trylock(mm))
+ 		goto err_mmap_read_lock_failed;
+-	vma = binder_alloc_get_vma(alloc);
++	vma = vma_lookup(mm, page_addr);
++	if (vma && vma != binder_alloc_get_vma(alloc))
++		goto err_invalid_vma;
+ 
+ 	list_lru_isolate(lru, item);
+ 	spin_unlock(lock);
+@@ -1031,6 +1033,8 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 	mutex_unlock(&alloc->mutex);
+ 	return LRU_REMOVED_RETRY;
+ 
++err_invalid_vma:
++	mmap_read_unlock(mm);
+ err_mmap_read_lock_failed:
+ 	mmput_async(mm);
+ err_mmget:
+diff --git a/drivers/bus/moxtet.c b/drivers/bus/moxtet.c
+index 5eb0fe73ddc45..e384fbc6c1d93 100644
+--- a/drivers/bus/moxtet.c
++++ b/drivers/bus/moxtet.c
+@@ -755,7 +755,7 @@ static int moxtet_irq_setup(struct moxtet *moxtet)
+ 	moxtet->irq.masked = ~0;
+ 
+ 	ret = request_threaded_irq(moxtet->dev_irq, NULL, moxtet_irq_thread_fn,
+-				   IRQF_ONESHOT, "moxtet", moxtet);
++				   IRQF_SHARED | IRQF_ONESHOT, "moxtet", moxtet);
+ 	if (ret < 0)
+ 		goto err_free;
+ 
+@@ -830,6 +830,12 @@ static void moxtet_remove(struct spi_device *spi)
+ 	mutex_destroy(&moxtet->lock);
+ }
+ 
++static const struct spi_device_id moxtet_spi_ids[] = {
++	{ "moxtet" },
++	{ },
++};
++MODULE_DEVICE_TABLE(spi, moxtet_spi_ids);
++
+ static const struct of_device_id moxtet_dt_ids[] = {
+ 	{ .compatible = "cznic,moxtet" },
+ 	{},
+@@ -841,6 +847,7 @@ static struct spi_driver moxtet_spi_driver = {
+ 		.name		= "moxtet",
+ 		.of_match_table = moxtet_dt_ids,
+ 	},
++	.id_table	= moxtet_spi_ids,
+ 	.probe		= moxtet_probe,
+ 	.remove		= moxtet_remove,
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index 2d1f5efa9091a..b5b29451d2db8 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -1698,7 +1698,7 @@ static enum bp_result bios_parser_enable_disp_power_gating(
+ static enum bp_result bios_parser_enable_lvtma_control(
+ 	struct dc_bios *dcb,
+ 	uint8_t uc_pwr_on,
+-	uint8_t panel_instance,
++	uint8_t pwrseq_instance,
+ 	uint8_t bypass_panel_control_wait)
+ {
+ 	struct bios_parser *bp = BP_FROM_DCB(dcb);
+@@ -1706,7 +1706,7 @@ static enum bp_result bios_parser_enable_lvtma_control(
+ 	if (!bp->cmd_tbl.enable_lvtma_control)
+ 		return BP_RESULT_FAILURE;
+ 
+-	return bp->cmd_tbl.enable_lvtma_control(bp, uc_pwr_on, panel_instance, bypass_panel_control_wait);
++	return bp->cmd_tbl.enable_lvtma_control(bp, uc_pwr_on, pwrseq_instance, bypass_panel_control_wait);
+ }
+ 
+ static bool bios_parser_is_accelerated_mode(
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+index 90a02d7bd3da3..ab0adabf9dd4c 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+@@ -976,7 +976,7 @@ static unsigned int get_smu_clock_info_v3_1(struct bios_parser *bp, uint8_t id)
+ static enum bp_result enable_lvtma_control(
+ 	struct bios_parser *bp,
+ 	uint8_t uc_pwr_on,
+-	uint8_t panel_instance,
++	uint8_t pwrseq_instance,
+ 	uint8_t bypass_panel_control_wait);
+ 
+ static void init_enable_lvtma_control(struct bios_parser *bp)
+@@ -989,7 +989,7 @@ static void init_enable_lvtma_control(struct bios_parser *bp)
+ static void enable_lvtma_control_dmcub(
+ 	struct dc_dmub_srv *dmcub,
+ 	uint8_t uc_pwr_on,
+-	uint8_t panel_instance,
++	uint8_t pwrseq_instance,
+ 	uint8_t bypass_panel_control_wait)
+ {
+ 
+@@ -1002,8 +1002,8 @@ static void enable_lvtma_control_dmcub(
+ 			DMUB_CMD__VBIOS_LVTMA_CONTROL;
+ 	cmd.lvtma_control.data.uc_pwr_action =
+ 			uc_pwr_on;
+-	cmd.lvtma_control.data.panel_inst =
+-			panel_instance;
++	cmd.lvtma_control.data.pwrseq_inst =
++			pwrseq_instance;
+ 	cmd.lvtma_control.data.bypass_panel_control_wait =
+ 			bypass_panel_control_wait;
+ 	dm_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+@@ -1012,7 +1012,7 @@ static void enable_lvtma_control_dmcub(
+ static enum bp_result enable_lvtma_control(
+ 	struct bios_parser *bp,
+ 	uint8_t uc_pwr_on,
+-	uint8_t panel_instance,
++	uint8_t pwrseq_instance,
+ 	uint8_t bypass_panel_control_wait)
+ {
+ 	enum bp_result result = BP_RESULT_FAILURE;
+@@ -1021,7 +1021,7 @@ static enum bp_result enable_lvtma_control(
+ 	    bp->base.ctx->dc->debug.dmub_command_table) {
+ 		enable_lvtma_control_dmcub(bp->base.ctx->dmub_srv,
+ 				uc_pwr_on,
+-				panel_instance,
++				pwrseq_instance,
+ 				bypass_panel_control_wait);
+ 		return BP_RESULT_OK;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.h b/drivers/gpu/drm/amd/display/dc/bios/command_table2.h
+index b6d09bf6cf72b..41c8c014397f2 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.h
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.h
+@@ -96,7 +96,7 @@ struct cmd_tbl {
+ 			struct bios_parser *bp, uint8_t id);
+ 	enum bp_result (*enable_lvtma_control)(struct bios_parser *bp,
+ 			uint8_t uc_pwr_on,
+-			uint8_t panel_instance,
++			uint8_t pwrseq_instance,
+ 			uint8_t bypass_panel_control_wait);
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_bios_types.h b/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
+index be9aa1a71847d..26940d94d8fb4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
+@@ -140,7 +140,7 @@ struct dc_vbios_funcs {
+ 	enum bp_result (*enable_lvtma_control)(
+ 		struct dc_bios *bios,
+ 		uint8_t uc_pwr_on,
+-		uint8_t panel_instance,
++		uint8_t pwrseq_instance,
+ 		uint8_t bypass_panel_control_wait);
+ 
+ 	enum bp_result (*get_soc_bb_info)(
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
+index d3e6544022b78..930fd929e93a4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
+@@ -145,7 +145,11 @@ static bool dmub_abm_save_restore_ex(
+ 	return ret;
+ }
+ 
+-static bool dmub_abm_set_pipe_ex(struct abm *abm, uint32_t otg_inst, uint32_t option, uint32_t panel_inst)
++static bool dmub_abm_set_pipe_ex(struct abm *abm,
++		uint32_t otg_inst,
++		uint32_t option,
++		uint32_t panel_inst,
++		uint32_t pwrseq_inst)
+ {
+ 	bool ret = false;
+ 	unsigned int feature_support;
+@@ -153,7 +157,7 @@ static bool dmub_abm_set_pipe_ex(struct abm *abm, uint32_t otg_inst, uint32_t op
+ 	feature_support = abm_feature_support(abm, panel_inst);
+ 
+ 	if (feature_support == ABM_LCD_SUPPORT)
+-		ret = dmub_abm_set_pipe(abm, otg_inst, option, panel_inst);
++		ret = dmub_abm_set_pipe(abm, otg_inst, option, panel_inst, pwrseq_inst);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
+index 592a8f7a1c6d0..42c802afc4681 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
+@@ -254,7 +254,11 @@ bool dmub_abm_save_restore(
+ 	return true;
+ }
+ 
+-bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t option, uint32_t panel_inst)
++bool dmub_abm_set_pipe(struct abm *abm,
++		uint32_t otg_inst,
++		uint32_t option,
++		uint32_t panel_inst,
++		uint32_t pwrseq_inst)
+ {
+ 	union dmub_rb_cmd cmd;
+ 	struct dc_context *dc = abm->ctx;
+@@ -264,6 +268,7 @@ bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t option, uint
+ 	cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
+ 	cmd.abm_set_pipe.header.sub_type = DMUB_CMD__ABM_SET_PIPE;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = otg_inst;
++	cmd.abm_set_pipe.abm_set_pipe_data.pwrseq_inst = pwrseq_inst;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.set_pipe_option = option;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = panel_inst;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.h b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.h
+index 853564d7f4714..07ea6c8d414f3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.h
+@@ -44,7 +44,7 @@ bool dmub_abm_save_restore(
+ 		struct dc_context *dc,
+ 		unsigned int panel_inst,
+ 		struct abm_save_restore *pData);
+-bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t option, uint32_t panel_inst);
++bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t option, uint32_t panel_inst, uint32_t pwrseq_inst);
+ bool dmub_abm_set_backlight_level(struct abm *abm,
+ 		unsigned int backlight_pwm_u16_16,
+ 		unsigned int frame_ramp,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
+index 217acd4e292a3..d849b1eaa4a5c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
+@@ -50,7 +50,7 @@ static bool dcn31_query_backlight_info(struct panel_cntl *panel_cntl, union dmub
+ 	cmd->panel_cntl.header.type = DMUB_CMD__PANEL_CNTL;
+ 	cmd->panel_cntl.header.sub_type = DMUB_CMD__PANEL_CNTL_QUERY_BACKLIGHT_INFO;
+ 	cmd->panel_cntl.header.payload_bytes = sizeof(cmd->panel_cntl.data);
+-	cmd->panel_cntl.data.inst = dcn31_panel_cntl->base.inst;
++	cmd->panel_cntl.data.pwrseq_inst = dcn31_panel_cntl->base.pwrseq_inst;
+ 
+ 	return dm_execute_dmub_cmd(dc_dmub_srv->ctx, cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY);
+ }
+@@ -78,7 +78,7 @@ static uint32_t dcn31_panel_cntl_hw_init(struct panel_cntl *panel_cntl)
+ 	cmd.panel_cntl.header.type = DMUB_CMD__PANEL_CNTL;
+ 	cmd.panel_cntl.header.sub_type = DMUB_CMD__PANEL_CNTL_HW_INIT;
+ 	cmd.panel_cntl.header.payload_bytes = sizeof(cmd.panel_cntl.data);
+-	cmd.panel_cntl.data.inst = dcn31_panel_cntl->base.inst;
++	cmd.panel_cntl.data.pwrseq_inst = dcn31_panel_cntl->base.pwrseq_inst;
+ 	cmd.panel_cntl.data.bl_pwm_cntl = panel_cntl->stored_backlight_registers.BL_PWM_CNTL;
+ 	cmd.panel_cntl.data.bl_pwm_period_cntl = panel_cntl->stored_backlight_registers.BL_PWM_PERIOD_CNTL;
+ 	cmd.panel_cntl.data.bl_pwm_ref_div1 =
+@@ -157,4 +157,5 @@ void dcn31_panel_cntl_construct(
+ 	dcn31_panel_cntl->base.funcs = &dcn31_link_panel_cntl_funcs;
+ 	dcn31_panel_cntl->base.ctx = init_data->ctx;
+ 	dcn31_panel_cntl->base.inst = init_data->inst;
++	dcn31_panel_cntl->base.pwrseq_inst = init_data->pwrseq_inst;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 960a55e06375b..9b8299d97e400 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -790,7 +790,7 @@ void dce110_edp_power_control(
+ 	struct dc_context *ctx = link->ctx;
+ 	struct bp_transmitter_control cntl = { 0 };
+ 	enum bp_result bp_result;
+-	uint8_t panel_instance;
++	uint8_t pwrseq_instance;
+ 
+ 
+ 	if (dal_graphics_object_id_get_connector_id(link->link_enc->connector)
+@@ -873,7 +873,7 @@ void dce110_edp_power_control(
+ 		cntl.coherent = false;
+ 		cntl.lanes_number = LANE_COUNT_FOUR;
+ 		cntl.hpd_sel = link->link_enc->hpd_source;
+-		panel_instance = link->panel_cntl->inst;
++		pwrseq_instance = link->panel_cntl->pwrseq_inst;
+ 
+ 		if (ctx->dc->ctx->dmub_srv &&
+ 				ctx->dc->debug.dmub_command_table) {
+@@ -881,11 +881,11 @@ void dce110_edp_power_control(
+ 			if (cntl.action == TRANSMITTER_CONTROL_POWER_ON) {
+ 				bp_result = ctx->dc_bios->funcs->enable_lvtma_control(ctx->dc_bios,
+ 						LVTMA_CONTROL_POWER_ON,
+-						panel_instance, link->link_powered_externally);
++						pwrseq_instance, link->link_powered_externally);
+ 			} else {
+ 				bp_result = ctx->dc_bios->funcs->enable_lvtma_control(ctx->dc_bios,
+ 						LVTMA_CONTROL_POWER_OFF,
+-						panel_instance, link->link_powered_externally);
++						pwrseq_instance, link->link_powered_externally);
+ 			}
+ 		}
+ 
+@@ -956,7 +956,7 @@ void dce110_edp_backlight_control(
+ {
+ 	struct dc_context *ctx = link->ctx;
+ 	struct bp_transmitter_control cntl = { 0 };
+-	uint8_t panel_instance;
++	uint8_t pwrseq_instance;
+ 	unsigned int pre_T11_delay = OLED_PRE_T11_DELAY;
+ 	unsigned int post_T7_delay = OLED_POST_T7_DELAY;
+ 
+@@ -1009,7 +1009,7 @@ void dce110_edp_backlight_control(
+ 	 */
+ 	/* dc_service_sleep_in_milliseconds(50); */
+ 		/*edp 1.2*/
+-	panel_instance = link->panel_cntl->inst;
++	pwrseq_instance = link->panel_cntl->pwrseq_inst;
+ 
+ 	if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_ON) {
+ 		if (!link->dc->config.edp_no_power_sequencing)
+@@ -1034,11 +1034,11 @@ void dce110_edp_backlight_control(
+ 		if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_ON)
+ 			ctx->dc_bios->funcs->enable_lvtma_control(ctx->dc_bios,
+ 					LVTMA_CONTROL_LCD_BLON,
+-					panel_instance, link->link_powered_externally);
++					pwrseq_instance, link->link_powered_externally);
+ 		else
+ 			ctx->dc_bios->funcs->enable_lvtma_control(ctx->dc_bios,
+ 					LVTMA_CONTROL_LCD_BLOFF,
+-					panel_instance, link->link_powered_externally);
++					pwrseq_instance, link->link_powered_externally);
+ 	}
+ 
+ 	link_transmitter_control(ctx->dc_bios, &cntl);
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+index 467812cf33686..08783ad097d21 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+@@ -137,7 +137,8 @@ void dcn21_PLAT_58856_wa(struct dc_state *context, struct pipe_ctx *pipe_ctx)
+ 	pipe_ctx->stream->dpms_off = true;
+ }
+ 
+-static bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t option, uint32_t panel_inst)
++static bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst,
++		uint32_t option, uint32_t panel_inst, uint32_t pwrseq_inst)
+ {
+ 	union dmub_rb_cmd cmd;
+ 	struct dc_context *dc = abm->ctx;
+@@ -147,6 +148,7 @@ static bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t optio
+ 	cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
+ 	cmd.abm_set_pipe.header.sub_type = DMUB_CMD__ABM_SET_PIPE;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = otg_inst;
++	cmd.abm_set_pipe.abm_set_pipe_data.pwrseq_inst = pwrseq_inst;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.set_pipe_option = option;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = panel_inst;
+ 	cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
+@@ -179,7 +181,6 @@ void dcn21_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx)
+ 	struct abm *abm = pipe_ctx->stream_res.abm;
+ 	uint32_t otg_inst = pipe_ctx->stream_res.tg->inst;
+ 	struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl;
+-
+ 	struct dmcu *dmcu = pipe_ctx->stream->ctx->dc->res_pool->dmcu;
+ 
+ 	if (dmcu) {
+@@ -190,9 +191,13 @@ void dcn21_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx)
+ 	if (abm && panel_cntl) {
+ 		if (abm->funcs && abm->funcs->set_pipe_ex) {
+ 			abm->funcs->set_pipe_ex(abm, otg_inst, SET_ABM_PIPE_IMMEDIATELY_DISABLE,
+-			panel_cntl->inst);
++					panel_cntl->inst, panel_cntl->pwrseq_inst);
+ 		} else {
+-			dmub_abm_set_pipe(abm, otg_inst, SET_ABM_PIPE_IMMEDIATELY_DISABLE, panel_cntl->inst);
++				dmub_abm_set_pipe(abm,
++						otg_inst,
++						SET_ABM_PIPE_IMMEDIATELY_DISABLE,
++						panel_cntl->inst,
++						panel_cntl->pwrseq_inst);
+ 		}
+ 		panel_cntl->funcs->store_backlight_level(panel_cntl);
+ 	}
+@@ -212,9 +217,16 @@ void dcn21_set_pipe(struct pipe_ctx *pipe_ctx)
+ 
+ 	if (abm && panel_cntl) {
+ 		if (abm->funcs && abm->funcs->set_pipe_ex) {
+-			abm->funcs->set_pipe_ex(abm, otg_inst, SET_ABM_PIPE_NORMAL, panel_cntl->inst);
++			abm->funcs->set_pipe_ex(abm,
++					otg_inst,
++					SET_ABM_PIPE_NORMAL,
++					panel_cntl->inst,
++					panel_cntl->pwrseq_inst);
+ 		} else {
+-			dmub_abm_set_pipe(abm, otg_inst, SET_ABM_PIPE_NORMAL, panel_cntl->inst);
++				dmub_abm_set_pipe(abm, otg_inst,
++						SET_ABM_PIPE_NORMAL,
++						panel_cntl->inst,
++						panel_cntl->pwrseq_inst);
+ 		}
+ 	}
+ }
+@@ -237,9 +249,17 @@ bool dcn21_set_backlight_level(struct pipe_ctx *pipe_ctx,
+ 
+ 		if (abm && panel_cntl) {
+ 			if (abm->funcs && abm->funcs->set_pipe_ex) {
+-				abm->funcs->set_pipe_ex(abm, otg_inst, SET_ABM_PIPE_NORMAL, panel_cntl->inst);
++				abm->funcs->set_pipe_ex(abm,
++						otg_inst,
++						SET_ABM_PIPE_NORMAL,
++						panel_cntl->inst,
++						panel_cntl->pwrseq_inst);
+ 			} else {
+-				dmub_abm_set_pipe(abm, otg_inst, SET_ABM_PIPE_NORMAL, panel_cntl->inst);
++					dmub_abm_set_pipe(abm,
++							otg_inst,
++							SET_ABM_PIPE_NORMAL,
++							panel_cntl->inst,
++							panel_cntl->pwrseq_inst);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/abm.h b/drivers/gpu/drm/amd/display/dc/inc/hw/abm.h
+index 33db15d69f233..9f521cf0fc5a2 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/abm.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/abm.h
+@@ -64,7 +64,8 @@ struct abm_funcs {
+ 	bool (*set_pipe_ex)(struct abm *abm,
+ 			unsigned int otg_inst,
+ 			unsigned int option,
+-			unsigned int panel_inst);
++			unsigned int panel_inst,
++			unsigned int pwrseq_inst);
+ };
+ 
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h b/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h
+index 24af9d80b9373..248adc1705e35 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h
+@@ -56,12 +56,14 @@ struct panel_cntl_funcs {
+ struct panel_cntl_init_data {
+ 	struct dc_context *ctx;
+ 	uint32_t inst;
++	uint32_t pwrseq_inst;
+ };
+ 
+ struct panel_cntl {
+ 	const struct panel_cntl_funcs *funcs;
+ 	struct dc_context *ctx;
+ 	uint32_t inst;
++	uint32_t pwrseq_inst;
+ 	/* registers setting needs to be saved and restored at InitBacklight */
+ 	struct panel_cntl_backlight_registers stored_backlight_registers;
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_factory.c b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+index 7abfc67d10a62..ff7801aa552a4 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_factory.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+@@ -368,6 +368,30 @@ static enum transmitter translate_encoder_to_transmitter(
+ 	}
+ }
+ 
++static uint8_t translate_dig_inst_to_pwrseq_inst(struct dc_link *link)
++{
++	uint8_t pwrseq_inst = 0xF;
++	struct dc_context *dc_ctx = link->dc->ctx;
++
++	DC_LOGGER_INIT(dc_ctx->logger);
++
++	switch (link->eng_id) {
++	case ENGINE_ID_DIGA:
++		pwrseq_inst = 0;
++		break;
++	case ENGINE_ID_DIGB:
++		pwrseq_inst = 1;
++		break;
++	default:
++		DC_LOG_WARNING("Unsupported pwrseq engine id: %d!\n", link->eng_id);
++		ASSERT(false);
++		break;
++	}
++
++	return pwrseq_inst;
++}
++
++
+ static void link_destruct(struct dc_link *link)
+ {
+ 	int i;
+@@ -595,24 +619,6 @@ static bool construct_phy(struct dc_link *link,
+ 	link->ddc_hw_inst =
+ 		dal_ddc_get_line(get_ddc_pin(link->ddc));
+ 
+-
+-	if (link->dc->res_pool->funcs->panel_cntl_create &&
+-		(link->link_id.id == CONNECTOR_ID_EDP ||
+-			link->link_id.id == CONNECTOR_ID_LVDS)) {
+-		panel_cntl_init_data.ctx = dc_ctx;
+-		panel_cntl_init_data.inst =
+-			panel_cntl_init_data.ctx->dc_edp_id_count;
+-		link->panel_cntl =
+-			link->dc->res_pool->funcs->panel_cntl_create(
+-								&panel_cntl_init_data);
+-		panel_cntl_init_data.ctx->dc_edp_id_count++;
+-
+-		if (link->panel_cntl == NULL) {
+-			DC_ERROR("Failed to create link panel_cntl!\n");
+-			goto panel_cntl_create_fail;
+-		}
+-	}
+-
+ 	enc_init_data.ctx = dc_ctx;
+ 	bp_funcs->get_src_obj(dc_ctx->dc_bios, link->link_id, 0,
+ 			      &enc_init_data.encoder);
+@@ -643,6 +649,23 @@ static bool construct_phy(struct dc_link *link,
+ 	link->dc->res_pool->dig_link_enc_count++;
+ 
+ 	link->link_enc_hw_inst = link->link_enc->transmitter;
++
++	if (link->dc->res_pool->funcs->panel_cntl_create &&
++		(link->link_id.id == CONNECTOR_ID_EDP ||
++			link->link_id.id == CONNECTOR_ID_LVDS)) {
++		panel_cntl_init_data.ctx = dc_ctx;
++		panel_cntl_init_data.inst = panel_cntl_init_data.ctx->dc_edp_id_count;
++		panel_cntl_init_data.pwrseq_inst = translate_dig_inst_to_pwrseq_inst(link);
++		link->panel_cntl =
++			link->dc->res_pool->funcs->panel_cntl_create(
++								&panel_cntl_init_data);
++		panel_cntl_init_data.ctx->dc_edp_id_count++;
++
++		if (link->panel_cntl == NULL) {
++			DC_ERROR("Failed to create link panel_cntl!\n");
++			goto panel_cntl_create_fail;
++		}
++	}
+ 	for (i = 0; i < 4; i++) {
+ 		if (bp_funcs->get_device_tag(dc_ctx->dc_bios,
+ 					     link->link_id, i,
+diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+index ed4379c047151..3cea96a36432b 100644
+--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
++++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+@@ -3357,6 +3357,16 @@ struct dmub_cmd_abm_set_pipe_data {
+ 	 * TODO: Remove.
+ 	 */
+ 	uint8_t ramping_boundary;
++
++	/**
++	 * PwrSeq HW Instance.
++	 */
++	uint8_t pwrseq_inst;
++
++	/**
++	 * Explicit padding to 4 byte boundary.
++	 */
++	uint8_t pad[3];
+ };
+ 
+ /**
+@@ -3737,7 +3747,7 @@ enum dmub_cmd_panel_cntl_type {
+  * struct dmub_cmd_panel_cntl_data - Panel control data.
+  */
+ struct dmub_cmd_panel_cntl_data {
+-	uint32_t inst; /**< panel instance */
++	uint32_t pwrseq_inst; /**< pwrseq instance */
+ 	uint32_t current_backlight; /* in/out */
+ 	uint32_t bl_pwm_cntl; /* in/out */
+ 	uint32_t bl_pwm_period_cntl; /* in/out */
+@@ -3796,7 +3806,7 @@ struct dmub_cmd_lvtma_control_data {
+ 	uint8_t uc_pwr_action; /**< LVTMA_ACTION */
+ 	uint8_t bypass_panel_control_wait;
+ 	uint8_t reserved_0[2]; /**< For future use */
+-	uint8_t panel_inst; /**< LVTMA control instance */
++	uint8_t pwrseq_inst; /**< LVTMA control instance */
+ 	uint8_t reserved_1[3]; /**< For future use */
+ };
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
+index 20e2e4cb76146..da17b6c49b0f1 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.h
++++ b/drivers/hwtracing/coresight/coresight-etm4x.h
+@@ -1036,7 +1036,7 @@ struct etmv4_drvdata {
+ 	u8				ctxid_size;
+ 	u8				vmid_size;
+ 	u8				ccsize;
+-	u8				ccitmin;
++	u16				ccitmin;
+ 	u8				s_ex_level;
+ 	u8				ns_ex_level;
+ 	u8				q_support;
+diff --git a/drivers/leds/trigger/ledtrig-tty.c b/drivers/leds/trigger/ledtrig-tty.c
+index 8ae0d2d284aff..3e69a7bde9284 100644
+--- a/drivers/leds/trigger/ledtrig-tty.c
++++ b/drivers/leds/trigger/ledtrig-tty.c
+@@ -168,6 +168,10 @@ static void ledtrig_tty_deactivate(struct led_classdev *led_cdev)
+ 
+ 	cancel_delayed_work_sync(&trigger_data->dwork);
+ 
++	kfree(trigger_data->ttyname);
++	tty_kref_put(trigger_data->tty);
++	trigger_data->tty = NULL;
++
+ 	kfree(trigger_data);
+ }
+ 
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 26e1e8a5e9419..b02b1a3010f71 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -36,7 +36,6 @@
+  */
+ 
+ #include <linux/blkdev.h>
+-#include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/raid/pq.h>
+ #include <linux/async_tx.h>
+@@ -6820,18 +6819,7 @@ static void raid5d(struct md_thread *thread)
+ 			spin_unlock_irq(&conf->device_lock);
+ 			md_check_recovery(mddev);
+ 			spin_lock_irq(&conf->device_lock);
+-
+-			/*
+-			 * Waiting on MD_SB_CHANGE_PENDING below may deadlock
+-			 * seeing md_check_recovery() is needed to clear
+-			 * the flag when using mdmon.
+-			 */
+-			continue;
+ 		}
+-
+-		wait_event_lock_irq(mddev->sb_wait,
+-			!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
+-			conf->device_lock);
+ 	}
+ 	pr_debug("%d stripes handled\n", handled);
+ 
+diff --git a/drivers/parport/parport_serial.c b/drivers/parport/parport_serial.c
+index 9f5d784cd95d5..3644997a83425 100644
+--- a/drivers/parport/parport_serial.c
++++ b/drivers/parport/parport_serial.c
+@@ -65,6 +65,10 @@ enum parport_pc_pci_cards {
+ 	sunix_5069a,
+ 	sunix_5079a,
+ 	sunix_5099a,
++	brainboxes_uc257,
++	brainboxes_is300,
++	brainboxes_uc414,
++	brainboxes_px263,
+ };
+ 
+ /* each element directly indexed from enum list, above */
+@@ -158,6 +162,10 @@ static struct parport_pc_pci cards[] = {
+ 	/* sunix_5069a */		{ 1, { { 1, 2 }, } },
+ 	/* sunix_5079a */		{ 1, { { 1, 2 }, } },
+ 	/* sunix_5099a */		{ 1, { { 1, 2 }, } },
++	/* brainboxes_uc257 */	{ 1, { { 3, -1 }, } },
++	/* brainboxes_is300 */	{ 1, { { 3, -1 }, } },
++	/* brainboxes_uc414 */  { 1, { { 3, -1 }, } },
++	/* brainboxes_px263 */	{ 1, { { 3, -1 }, } },
+ };
+ 
+ static struct pci_device_id parport_serial_pci_tbl[] = {
+@@ -277,6 +285,38 @@ static struct pci_device_id parport_serial_pci_tbl[] = {
+ 	{ PCI_VENDOR_ID_SUNIX, PCI_DEVICE_ID_SUNIX_1999, PCI_VENDOR_ID_SUNIX,
+ 	  0x0104, 0, 0, sunix_5099a },
+ 
++	/* Brainboxes UC-203 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0bc1,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0bc2,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++
++	/* Brainboxes UC-257 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0861,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0862,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0863,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++
++	/* Brainboxes UC-414 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0e61,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc414 },
++
++	/* Brainboxes UC-475 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0981,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0982,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc257 },
++
++	/* Brainboxes IS-300/IS-500 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x0da0,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_is300 },
++
++	/* Brainboxes PX-263/PX-295 */
++	{ PCI_VENDOR_ID_INTASHIELD, 0x402c,
++	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_px263 },
++
+ 	{ 0, } /* terminate list */
+ };
+ MODULE_DEVICE_TABLE(pci,parport_serial_pci_tbl);
+@@ -542,6 +582,30 @@ static struct pciserial_board pci_parport_serial_boards[] = {
+ 		.base_baud      = 921600,
+ 		.uart_offset	= 0x8,
+ 	},
++	[brainboxes_uc257] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 2,
++		.base_baud	= 115200,
++		.uart_offset	= 8,
++	},
++	[brainboxes_is300] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 1,
++		.base_baud	= 115200,
++		.uart_offset	= 8,
++	},
++	[brainboxes_uc414] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 4,
++		.base_baud	= 115200,
++		.uart_offset	= 8,
++	},
++	[brainboxes_px263] = {
++		.flags		= FL_BASE2,
++		.num_ports	= 4,
++		.base_baud	= 921600,
++		.uart_offset	= 8,
++	},
+ };
+ 
+ struct parport_serial_private {
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index ea476252280ab..d55a3ffae4b8b 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4699,17 +4699,21 @@ static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags)
+  * But the implementation could block peer-to-peer transactions between them
+  * and provide ACS-like functionality.
+  */
+-static int  pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags)
++static int pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags)
+ {
+ 	if (!pci_is_pcie(dev) ||
+ 	    ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) &&
+ 	     (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM)))
+ 		return -ENOTTY;
+ 
++	/*
++	 * Future Zhaoxin Root Ports and Switch Downstream Ports will
++	 * implement ACS capability in accordance with the PCIe Spec.
++	 */
+ 	switch (dev->device) {
+ 	case 0x0710 ... 0x071e:
+ 	case 0x0721:
+-	case 0x0723 ... 0x0732:
++	case 0x0723 ... 0x0752:
+ 		return pci_acs_ctrl_enabled(acs_flags,
+ 			PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ 	}
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 62082d64ece00..2d572f6c8ec83 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -466,13 +466,13 @@ static int uio_open(struct inode *inode, struct file *filep)
+ 
+ 	mutex_lock(&minor_lock);
+ 	idev = idr_find(&uio_idr, iminor(inode));
+-	mutex_unlock(&minor_lock);
+ 	if (!idev) {
+ 		ret = -ENODEV;
++		mutex_unlock(&minor_lock);
+ 		goto out;
+ 	}
+-
+ 	get_device(&idev->dev);
++	mutex_unlock(&minor_lock);
+ 
+ 	if (!try_module_get(idev->owner)) {
+ 		ret = -ENODEV;
+@@ -1064,9 +1064,8 @@ void uio_unregister_device(struct uio_info *info)
+ 	wake_up_interruptible(&idev->wait);
+ 	kill_fasync(&idev->async_queue, SIGIO, POLL_HUP);
+ 
+-	device_unregister(&idev->dev);
+-
+ 	uio_free_minor(minor);
++	device_unregister(&idev->dev);
+ 
+ 	return;
+ }
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 47e88b4d4e7d0..a8fc2cac68799 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -754,6 +754,12 @@ retry:
+ 		memcpy(pval, value, size);
+ 		last->e_value_size = cpu_to_le16(size);
+ 		new_hsize += newsize;
++		/*
++		 * Explicitly add the null terminator.  The unused xattr space
++		 * is supposed to always be zeroed, which would make this
++		 * unnecessary, but don't depend on that.
++		 */
++		*(u32 *)((u8 *)last + newsize) = 0;
+ 	}
+ 
+ 	error = write_all_xattrs(inode, new_hsize, base_addr, ipage);
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 652ab429bf2e9..ca2c528c9de3a 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -2971,7 +2971,7 @@ int smb2_open(struct ksmbd_work *work)
+ 					    &may_flags);
+ 
+ 	if (!test_tree_conn_flag(tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
+-		if (open_flags & O_CREAT) {
++		if (open_flags & (O_CREAT | O_TRUNC)) {
+ 			ksmbd_debug(SMB,
+ 				    "User does not have write permission\n");
+ 			rc = -EACCES;
+@@ -5943,12 +5943,6 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 	}
+ 	case FILE_RENAME_INFORMATION:
+ 	{
+-		if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
+-			ksmbd_debug(SMB,
+-				    "User does not have write permission\n");
+-			return -EACCES;
+-		}
+-
+ 		if (buf_len < sizeof(struct smb2_file_rename_info))
+ 			return -EINVAL;
+ 
+@@ -5968,12 +5962,6 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 	}
+ 	case FILE_DISPOSITION_INFORMATION:
+ 	{
+-		if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
+-			ksmbd_debug(SMB,
+-				    "User does not have write permission\n");
+-			return -EACCES;
+-		}
+-
+ 		if (buf_len < sizeof(struct smb2_file_disposition_info))
+ 			return -EINVAL;
+ 
+@@ -6035,7 +6023,7 @@ int smb2_set_info(struct ksmbd_work *work)
+ {
+ 	struct smb2_set_info_req *req;
+ 	struct smb2_set_info_rsp *rsp;
+-	struct ksmbd_file *fp;
++	struct ksmbd_file *fp = NULL;
+ 	int rc = 0;
+ 	unsigned int id = KSMBD_NO_FID, pid = KSMBD_NO_FID;
+ 
+@@ -6055,6 +6043,13 @@ int smb2_set_info(struct ksmbd_work *work)
+ 		rsp = smb2_get_msg(work->response_buf);
+ 	}
+ 
++	if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) {
++		ksmbd_debug(SMB, "User does not have write permission\n");
++		pr_err("User does not have write permission\n");
++		rc = -EACCES;
++		goto err_out;
++	}
++
+ 	if (!has_file_id(id)) {
+ 		id = req->VolatileFileId;
+ 		pid = req->PersistentFileId;
+diff --git a/fs/smb/server/smbacl.c b/fs/smb/server/smbacl.c
+index 1164365533f08..1c9775f1efa56 100644
+--- a/fs/smb/server/smbacl.c
++++ b/fs/smb/server/smbacl.c
+@@ -401,10 +401,6 @@ static void parse_dacl(struct mnt_idmap *idmap,
+ 	if (num_aces > ULONG_MAX / sizeof(struct smb_ace *))
+ 		return;
+ 
+-	ppace = kmalloc_array(num_aces, sizeof(struct smb_ace *), GFP_KERNEL);
+-	if (!ppace)
+-		return;
+-
+ 	ret = init_acl_state(&acl_state, num_aces);
+ 	if (ret)
+ 		return;
+@@ -414,6 +410,13 @@ static void parse_dacl(struct mnt_idmap *idmap,
+ 		return;
+ 	}
+ 
++	ppace = kmalloc_array(num_aces, sizeof(struct smb_ace *), GFP_KERNEL);
++	if (!ppace) {
++		free_acl_state(&default_acl_state);
++		free_acl_state(&acl_state);
++		return;
++	}
++
+ 	/*
+ 	 * reset rwx permissions for user/group/other.
+ 	 * Also, if num_aces is 0 i.e. DACL has no ACEs,
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 7a5fc89a86528..c9c2ad5e2681e 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -101,9 +101,11 @@ static int set_memmap_mode(const char *val, const struct kernel_param *kp)
+ 
+ static int get_memmap_mode(char *buffer, const struct kernel_param *kp)
+ {
+-	if (*((int *)kp->arg) == MEMMAP_ON_MEMORY_FORCE)
+-		return sprintf(buffer,  "force\n");
+-	return param_get_bool(buffer, kp);
++	int mode = *((int *)kp->arg);
++
++	if (mode == MEMMAP_ON_MEMORY_FORCE)
++		return sprintf(buffer, "force\n");
++	return sprintf(buffer, "%c\n", mode ? 'Y' : 'N');
+ }
+ 
+ static const struct kernel_param_ops memmap_mode_ops = {
+diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh
+index 564c5632e1a24..bfe5a4082d8ea 100755
+--- a/scripts/decode_stacktrace.sh
++++ b/scripts/decode_stacktrace.sh
+@@ -16,6 +16,21 @@ elif type c++filt >/dev/null 2>&1 ; then
+ 	cppfilt_opts=-i
+ fi
+ 
++UTIL_SUFFIX=
++if [[ -z ${LLVM:-} ]]; then
++	UTIL_PREFIX=${CROSS_COMPILE:-}
++else
++	UTIL_PREFIX=llvm-
++	if [[ ${LLVM} == */ ]]; then
++		UTIL_PREFIX=${LLVM}${UTIL_PREFIX}
++	elif [[ ${LLVM} == -* ]]; then
++		UTIL_SUFFIX=${LLVM}
++	fi
++fi
++
++READELF=${UTIL_PREFIX}readelf${UTIL_SUFFIX}
++ADDR2LINE=${UTIL_PREFIX}addr2line${UTIL_SUFFIX}
++
+ if [[ $1 == "-r" ]] ; then
+ 	vmlinux=""
+ 	basepath="auto"
+@@ -75,7 +90,7 @@ find_module() {
+ 
+ 	if [[ "$modpath" != "" ]] ; then
+ 		for fn in $(find "$modpath" -name "${module//_/[-_]}.ko*") ; do
+-			if readelf -WS "$fn" | grep -qwF .debug_line ; then
++			if ${READELF} -WS "$fn" | grep -qwF .debug_line ; then
+ 				echo $fn
+ 				return
+ 			fi
+@@ -169,7 +184,7 @@ parse_symbol() {
+ 	if [[ $aarray_support == true && "${cache[$module,$address]+isset}" == "isset" ]]; then
+ 		local code=${cache[$module,$address]}
+ 	else
+-		local code=$(${CROSS_COMPILE}addr2line -i -e "$objfile" "$address" 2>/dev/null)
++		local code=$(${ADDR2LINE} -i -e "$objfile" "$address" 2>/dev/null)
+ 		if [[ $aarray_support == true ]]; then
+ 			cache[$module,$address]=$code
+ 		fi
+diff --git a/sound/pci/hda/cs35l41_hda.c b/sound/pci/hda/cs35l41_hda.c
+index 92ca2b3b6c924..d3fa6e136744d 100644
+--- a/sound/pci/hda/cs35l41_hda.c
++++ b/sound/pci/hda/cs35l41_hda.c
+@@ -12,6 +12,7 @@
+ #include <sound/hda_codec.h>
+ #include <sound/soc.h>
+ #include <linux/pm_runtime.h>
++#include <linux/spi/spi.h>
+ #include "hda_local.h"
+ #include "hda_auto_parser.h"
+ #include "hda_jack.h"
+@@ -996,6 +997,11 @@ static int cs35l41_smart_amp(struct cs35l41_hda *cs35l41)
+ 	__be32 halo_sts;
+ 	int ret;
+ 
++	if (cs35l41->bypass_fw) {
++		dev_warn(cs35l41->dev, "Bypassing Firmware.\n");
++		return 0;
++	}
++
+ 	ret = cs35l41_init_dsp(cs35l41);
+ 	if (ret) {
+ 		dev_warn(cs35l41->dev, "Cannot Initialize Firmware. Error: %d\n", ret);
+@@ -1588,6 +1594,7 @@ static int cs35l41_hda_read_acpi(struct cs35l41_hda *cs35l41, const char *hid, i
+ 	u32 values[HDA_MAX_COMPONENTS];
+ 	struct acpi_device *adev;
+ 	struct device *physdev;
++	struct spi_device *spi;
+ 	const char *sub;
+ 	char *property;
+ 	size_t nval;
+@@ -1610,7 +1617,7 @@ static int cs35l41_hda_read_acpi(struct cs35l41_hda *cs35l41, const char *hid, i
+ 	ret = cs35l41_add_dsd_properties(cs35l41, physdev, id, hid);
+ 	if (!ret) {
+ 		dev_info(cs35l41->dev, "Using extra _DSD properties, bypassing _DSD in ACPI\n");
+-		goto put_physdev;
++		goto out;
+ 	}
+ 
+ 	property = "cirrus,dev-index";
+@@ -1701,8 +1708,20 @@ static int cs35l41_hda_read_acpi(struct cs35l41_hda *cs35l41, const char *hid, i
+ 		hw_cfg->bst_type = CS35L41_EXT_BOOST;
+ 
+ 	hw_cfg->valid = true;
++out:
+ 	put_device(physdev);
+ 
++	cs35l41->bypass_fw = false;
++	if (cs35l41->control_bus == SPI) {
++		spi = to_spi_device(cs35l41->dev);
++		if (spi->max_speed_hz < CS35L41_MAX_ACCEPTABLE_SPI_SPEED_HZ) {
++			dev_warn(cs35l41->dev,
++				 "SPI speed is too slow to support firmware download: %d Hz.\n",
++				 spi->max_speed_hz);
++			cs35l41->bypass_fw = true;
++		}
++	}
++
+ 	return 0;
+ 
+ err:
+@@ -1711,14 +1730,13 @@ err:
+ 	hw_cfg->gpio1.valid = false;
+ 	hw_cfg->gpio2.valid = false;
+ 	acpi_dev_put(cs35l41->dacpi);
+-put_physdev:
+ 	put_device(physdev);
+ 
+ 	return ret;
+ }
+ 
+ int cs35l41_hda_probe(struct device *dev, const char *device_name, int id, int irq,
+-		      struct regmap *regmap)
++		      struct regmap *regmap, enum control_bus control_bus)
+ {
+ 	unsigned int regid, reg_revid;
+ 	struct cs35l41_hda *cs35l41;
+@@ -1737,6 +1755,7 @@ int cs35l41_hda_probe(struct device *dev, const char *device_name, int id, int i
+ 	cs35l41->dev = dev;
+ 	cs35l41->irq = irq;
+ 	cs35l41->regmap = regmap;
++	cs35l41->control_bus = control_bus;
+ 	dev_set_drvdata(dev, cs35l41);
+ 
+ 	ret = cs35l41_hda_read_acpi(cs35l41, device_name, id);
+diff --git a/sound/pci/hda/cs35l41_hda.h b/sound/pci/hda/cs35l41_hda.h
+index 3d925d677213d..43d55292b327a 100644
+--- a/sound/pci/hda/cs35l41_hda.h
++++ b/sound/pci/hda/cs35l41_hda.h
+@@ -20,6 +20,8 @@
+ #include <linux/firmware/cirrus/cs_dsp.h>
+ #include <linux/firmware/cirrus/wmfw.h>
+ 
++#define CS35L41_MAX_ACCEPTABLE_SPI_SPEED_HZ	1000000
++
+ struct cs35l41_amp_cal_data {
+ 	u32 calTarget[2];
+ 	u32 calTime[2];
+@@ -46,6 +48,11 @@ enum cs35l41_hda_gpio_function {
+ 	CS35l41_SYNC,
+ };
+ 
++enum control_bus {
++	I2C,
++	SPI
++};
++
+ struct cs35l41_hda {
+ 	struct device *dev;
+ 	struct regmap *regmap;
+@@ -74,6 +81,9 @@ struct cs35l41_hda {
+ 	struct cs_dsp cs_dsp;
+ 	struct acpi_device *dacpi;
+ 	bool mute_override;
++	enum control_bus control_bus;
++	bool bypass_fw;
++
+ };
+ 
+ enum halo_state {
+@@ -85,7 +95,7 @@ enum halo_state {
+ extern const struct dev_pm_ops cs35l41_hda_pm_ops;
+ 
+ int cs35l41_hda_probe(struct device *dev, const char *device_name, int id, int irq,
+-		      struct regmap *regmap);
++		      struct regmap *regmap, enum control_bus control_bus);
+ void cs35l41_hda_remove(struct device *dev);
+ int cs35l41_get_speaker_id(struct device *dev, int amp_index, int num_amps, int fixed_gpio_id);
+ 
+diff --git a/sound/pci/hda/cs35l41_hda_i2c.c b/sound/pci/hda/cs35l41_hda_i2c.c
+index b44536fbba17d..603e9bff3a71d 100644
+--- a/sound/pci/hda/cs35l41_hda_i2c.c
++++ b/sound/pci/hda/cs35l41_hda_i2c.c
+@@ -30,7 +30,7 @@ static int cs35l41_hda_i2c_probe(struct i2c_client *clt)
+ 		return -ENODEV;
+ 
+ 	return cs35l41_hda_probe(&clt->dev, device_name, clt->addr, clt->irq,
+-				 devm_regmap_init_i2c(clt, &cs35l41_regmap_i2c));
++				 devm_regmap_init_i2c(clt, &cs35l41_regmap_i2c), I2C);
+ }
+ 
+ static void cs35l41_hda_i2c_remove(struct i2c_client *clt)
+diff --git a/sound/pci/hda/cs35l41_hda_property.c b/sound/pci/hda/cs35l41_hda_property.c
+index c1afb721b4c67..35277ce890a46 100644
+--- a/sound/pci/hda/cs35l41_hda_property.c
++++ b/sound/pci/hda/cs35l41_hda_property.c
+@@ -16,10 +16,6 @@
+ 
+ struct cs35l41_config {
+ 	const char *ssid;
+-	enum {
+-		SPI,
+-		I2C
+-	} bus;
+ 	int num_amps;
+ 	enum {
+ 		INTERNAL,
+@@ -35,42 +31,72 @@ struct cs35l41_config {
+ };
+ 
+ static const struct cs35l41_config cs35l41_config_table[] = {
++	{ "10280B27", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10280B28", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10280BEB", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 0, 0, 0 },
++	{ "10280C4D", 4, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, CS35L41_LEFT, CS35L41_RIGHT }, 0, 1, -1, 1000, 4500, 24 },
+ /*
+  * Device 103C89C6 does have _DSD, however it is setup to use the wrong boost type.
+  * We can override the _DSD to correct the boost type here.
+  * Since this laptop has valid ACPI, we do not need to handle cs-gpios, since that already exists
+  * in the ACPI. The Reset GPIO is also valid, so we can use the Reset defined in _DSD.
+  */
+-	{ "103C89C6", SPI, 2, INTERNAL, { CS35L41_RIGHT, CS35L41_LEFT, 0, 0 }, -1, -1, -1, 1000, 4500, 24 },
+-	{ "104312AF", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "10431433", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431463", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431473", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 1000, 4500, 24 },
+-	{ "10431483", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 1000, 4500, 24 },
+-	{ "10431493", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "104314D3", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "104314E3", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431503", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431533", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431573", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "10431663", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 1000, 4500, 24 },
+-	{ "104316D3", SPI, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+-	{ "104316F3", SPI, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+-	{ "104317F3", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431863", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "104318D3", I2C, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
+-	{ "10431C9F", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "10431CAF", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "10431CCF", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "10431CDF", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "10431CEF", SPI, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+-	{ "10431D1F", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431DA2", SPI, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+-	{ "10431E02", SPI, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+-	{ "10431EE2", I2C, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 },
+-	{ "10431F12", I2C, 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+-	{ "10431F1F", SPI, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 0, 0, 0 },
+-	{ "10431F62", SPI, 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
++	{ "103C89C6", 2, INTERNAL, { CS35L41_RIGHT, CS35L41_LEFT, 0, 0 }, -1, -1, -1, 1000, 4500, 24 },
++	{ "103C8A28", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A29", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A2A", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A2B", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A2C", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A2D", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A2E", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A30", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8A31", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BB3", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BB4", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BDF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BE0", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BE1", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BE2", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BE9", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BDD", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BDE", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BE3", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BE5", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8BE6", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "103C8B3A", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4100, 24 },
++	{ "104312AF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10431433", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431463", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431473", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 1000, 4500, 24 },
++	{ "10431483", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 1000, 4500, 24 },
++	{ "10431493", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "104314D3", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "104314E3", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431503", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431533", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431573", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10431663", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 1000, 4500, 24 },
++	{ "104316D3", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
++	{ "104316F3", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
++	{ "104317F3", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431863", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "104318D3", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
++	{ "10431C9F", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10431CAF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10431CCF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10431CDF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10431CEF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
++	{ "10431D1F", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431DA2", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
++	{ "10431E02", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
++	{ "10431EE2", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 },
++	{ "10431F12", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
++	{ "10431F1F", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 0, 0, 0 },
++	{ "10431F62", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
++	{ "17AA38B4", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
++	{ "17AA38B5", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
++	{ "17AA38B6", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
++	{ "17AA38B7", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
+ 	{}
+ };
+ 
+@@ -208,7 +234,7 @@ static int generic_dsd_config(struct cs35l41_hda *cs35l41, struct device *physde
+ 			 "_DSD already exists.\n");
+ 	}
+ 
+-	if (cfg->bus == SPI) {
++	if (cs35l41->control_bus == SPI) {
+ 		cs35l41->index = id;
+ 
+ 		/*
+@@ -345,7 +371,33 @@ struct cs35l41_prop_model {
+ static const struct cs35l41_prop_model cs35l41_prop_model_table[] = {
+ 	{ "CLSA0100", NULL, lenovo_legion_no_acpi },
+ 	{ "CLSA0101", NULL, lenovo_legion_no_acpi },
++	{ "CSC3551", "10280B27", generic_dsd_config },
++	{ "CSC3551", "10280B28", generic_dsd_config },
++	{ "CSC3551", "10280BEB", generic_dsd_config },
++	{ "CSC3551", "10280C4D", generic_dsd_config },
+ 	{ "CSC3551", "103C89C6", generic_dsd_config },
++	{ "CSC3551", "103C8A28", generic_dsd_config },
++	{ "CSC3551", "103C8A29", generic_dsd_config },
++	{ "CSC3551", "103C8A2A", generic_dsd_config },
++	{ "CSC3551", "103C8A2B", generic_dsd_config },
++	{ "CSC3551", "103C8A2C", generic_dsd_config },
++	{ "CSC3551", "103C8A2D", generic_dsd_config },
++	{ "CSC3551", "103C8A2E", generic_dsd_config },
++	{ "CSC3551", "103C8A30", generic_dsd_config },
++	{ "CSC3551", "103C8A31", generic_dsd_config },
++	{ "CSC3551", "103C8BB3", generic_dsd_config },
++	{ "CSC3551", "103C8BB4", generic_dsd_config },
++	{ "CSC3551", "103C8BDF", generic_dsd_config },
++	{ "CSC3551", "103C8BE0", generic_dsd_config },
++	{ "CSC3551", "103C8BE1", generic_dsd_config },
++	{ "CSC3551", "103C8BE2", generic_dsd_config },
++	{ "CSC3551", "103C8BE9", generic_dsd_config },
++	{ "CSC3551", "103C8BDD", generic_dsd_config },
++	{ "CSC3551", "103C8BDE", generic_dsd_config },
++	{ "CSC3551", "103C8BE3", generic_dsd_config },
++	{ "CSC3551", "103C8BE5", generic_dsd_config },
++	{ "CSC3551", "103C8BE6", generic_dsd_config },
++	{ "CSC3551", "103C8B3A", generic_dsd_config },
+ 	{ "CSC3551", "104312AF", generic_dsd_config },
+ 	{ "CSC3551", "10431433", generic_dsd_config },
+ 	{ "CSC3551", "10431463", generic_dsd_config },
+@@ -375,6 +427,10 @@ static const struct cs35l41_prop_model cs35l41_prop_model_table[] = {
+ 	{ "CSC3551", "10431F12", generic_dsd_config },
+ 	{ "CSC3551", "10431F1F", generic_dsd_config },
+ 	{ "CSC3551", "10431F62", generic_dsd_config },
++	{ "CSC3551", "17AA38B4", generic_dsd_config },
++	{ "CSC3551", "17AA38B5", generic_dsd_config },
++	{ "CSC3551", "17AA38B6", generic_dsd_config },
++	{ "CSC3551", "17AA38B7", generic_dsd_config },
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/cs35l41_hda_spi.c b/sound/pci/hda/cs35l41_hda_spi.c
+index eb287aa5f7825..b76c0dfd5fefc 100644
+--- a/sound/pci/hda/cs35l41_hda_spi.c
++++ b/sound/pci/hda/cs35l41_hda_spi.c
+@@ -26,7 +26,7 @@ static int cs35l41_hda_spi_probe(struct spi_device *spi)
+ 		return -ENODEV;
+ 
+ 	return cs35l41_hda_probe(&spi->dev, device_name, spi_get_chipselect(spi, 0), spi->irq,
+-				 devm_regmap_init_spi(spi, &cs35l41_regmap_spi));
++				 devm_regmap_init_spi(spi, &cs35l41_regmap_spi), SPI);
+ }
+ 
+ static void cs35l41_hda_spi_remove(struct spi_device *spi)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 70b17b08d4ffa..04a3dffcb4127 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6956,6 +6956,11 @@ static void cs35l41_fixup_i2c_two(struct hda_codec *cdc, const struct hda_fixup
+ 	cs35l41_generic_fixup(cdc, action, "i2c", "CSC3551", 2);
+ }
+ 
++static void cs35l41_fixup_i2c_four(struct hda_codec *cdc, const struct hda_fixup *fix, int action)
++{
++	cs35l41_generic_fixup(cdc, action, "i2c", "CSC3551", 4);
++}
++
+ static void cs35l41_fixup_spi_two(struct hda_codec *codec, const struct hda_fixup *fix, int action)
+ {
+ 	cs35l41_generic_fixup(codec, action, "spi", "CSC3551", 2);
+@@ -7441,6 +7446,7 @@ enum {
+ 	ALC287_FIXUP_LEGION_16ACHG6,
+ 	ALC287_FIXUP_CS35L41_I2C_2,
+ 	ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED,
++	ALC287_FIXUP_CS35L41_I2C_4,
+ 	ALC245_FIXUP_CS35L41_SPI_2,
+ 	ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED,
+ 	ALC245_FIXUP_CS35L41_SPI_4,
+@@ -9427,6 +9433,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC285_FIXUP_HP_MUTE_LED,
+ 	},
++	[ALC287_FIXUP_CS35L41_I2C_4] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cs35l41_fixup_i2c_four,
++	},
+ 	[ALC245_FIXUP_CS35L41_SPI_2] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_spi_two,
+@@ -9703,6 +9713,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x0b1a, "Dell Precision 5570", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0b27, "Dell", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1028, 0x0b28, "Dell", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1028, 0x0b37, "Dell Inspiron 16 Plus 7620 2-in-1", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
+ 	SND_PCI_QUIRK(0x1028, 0x0b71, "Dell Inspiron 16 Plus 7620", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
+ 	SND_PCI_QUIRK(0x1028, 0x0beb, "Dell XPS 15 9530 (2023)", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+@@ -9713,6 +9725,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0c1c, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1028, 0x0c1d, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
++	SND_PCI_QUIRK(0x1028, 0x0c4d, "Dell", ALC287_FIXUP_CS35L41_I2C_4),
+ 	SND_PCI_QUIRK(0x1028, 0x0cbd, "Dell Oasis 13 CS MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1028, 0x0cbe, "Dell Oasis 13 2-IN-1 MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1028, 0x0cbf, "Dell Oasis 13 Low Weight MTU-L", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+@@ -9816,6 +9829,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8735, "HP ProBook 435 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x876e, "HP ENVY x360 Convertible 13-ay0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ 	SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8780, "HP ZBook Fury 17 G7 Mobile Workstation",
+@@ -10229,6 +10243,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3886, "Y780 VECO DUAL", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38a7, "Y780P AMD YG dual", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38a8, "Y780P AMD VECO dual", ALC287_FIXUP_TAS2781_I2C),
++	SND_PCI_QUIRK(0x17aa, 0x38b4, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x17aa, 0x38b5, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x17aa, 0x38b6, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x17aa, 0x38b7, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38ba, "Yoga S780-14.5 Air AMD quad YC", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38bb, "Yoga S780-14.5 Air AMD quad AAC", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38be, "Yoga S980-14.5 proX YC Dual", ALC287_FIXUP_TAS2781_I2C),



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-21 15:14 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-21 15:14 UTC (permalink / raw)
  To: gentoo-commits

commit:     b1ce9fced726466e93c67fb0eee103164931ecd3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 21 15:14:20 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jan 21 15:14:20 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b1ce9fce

media: solo6x10: replace max(a, min(b, c)) by clamp(b, a, c)

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 ++
 2700_solo6x10-mem-resource-reduction-fix.patch | 61 ++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/0000_README b/0000_README
index 9e6fa4b3..99feeab4 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2700_solo6x10-mem-resource-reduction-fix.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc:   media: solo6x10: replace max(a, min(b, c)) by clamp(b, a, c)
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_solo6x10-mem-resource-reduction-fix.patch b/2700_solo6x10-mem-resource-reduction-fix.patch
new file mode 100644
index 00000000..bf406a92
--- /dev/null
+++ b/2700_solo6x10-mem-resource-reduction-fix.patch
@@ -0,0 +1,61 @@
+From 31e97d7c9ae3de072d7b424b2cf706a03ec10720 Mon Sep 17 00:00:00 2001
+From: Aurelien Jarno <aurelien@aurel32.net>
+Date: Sat, 13 Jan 2024 19:33:31 +0100
+Subject: media: solo6x10: replace max(a, min(b, c)) by clamp(b, a, c)
+
+This patch replaces max(a, min(b, c)) by clamp(b, a, c) in the solo6x10
+driver.  This improves the readability and more importantly, for the
+solo6x10-p2m.c file, this reduces on my system (x86-64, gcc 13):
+
+ - the preprocessed size from 121 MiB to 4.5 MiB;
+
+ - the build CPU time from 46.8 s to 1.6 s;
+
+ - the build memory from 2786 MiB to 98MiB.
+
+In fine, this allows this relatively simple C file to be built on a
+32-bit system.
+
+Reported-by: Jiri Slaby <jirislaby@gmail.com>
+Closes: https://lore.kernel.org/lkml/18c6df0d-45ed-450c-9eda-95160a2bbb8e@gmail.com/
+Cc:  <stable@vger.kernel.org> # v6.7+
+Suggested-by: David Laight <David.Laight@ACULAB.COM>
+Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
+Reviewed-by: David Laight <David.Laight@ACULAB.COM>
+Reviewed-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+---
+ drivers/media/pci/solo6x10/solo6x10-offsets.h | 10 +++++-----
+ 1 file changed, 5 insertions(+), 5 deletions(-)
+
+(limited to 'drivers/media/pci/solo6x10/solo6x10-offsets.h')
+
+diff --git a/drivers/media/pci/solo6x10/solo6x10-offsets.h b/drivers/media/pci/solo6x10/solo6x10-offsets.h
+index f414ee1316f29c..fdbb817e63601c 100644
+--- a/drivers/media/pci/solo6x10/solo6x10-offsets.h
++++ b/drivers/media/pci/solo6x10/solo6x10-offsets.h
+@@ -57,16 +57,16 @@
+ #define SOLO_MP4E_EXT_ADDR(__solo) \
+ 	(SOLO_EREF_EXT_ADDR(__solo) + SOLO_EREF_EXT_AREA(__solo))
+ #define SOLO_MP4E_EXT_SIZE(__solo) \
+-	max((__solo->nr_chans * 0x00080000),				\
+-	    min(((__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo)) -	\
+-		 __SOLO_JPEG_MIN_SIZE(__solo)), 0x00ff0000))
++	clamp(__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo) -	\
++	      __SOLO_JPEG_MIN_SIZE(__solo),			\
++	      __solo->nr_chans * 0x00080000, 0x00ff0000)
+ 
+ #define __SOLO_JPEG_MIN_SIZE(__solo)		(__solo->nr_chans * 0x00080000)
+ #define SOLO_JPEG_EXT_ADDR(__solo) \
+ 		(SOLO_MP4E_EXT_ADDR(__solo) + SOLO_MP4E_EXT_SIZE(__solo))
+ #define SOLO_JPEG_EXT_SIZE(__solo) \
+-	max(__SOLO_JPEG_MIN_SIZE(__solo),				\
+-	    min((__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo)), 0x00ff0000))
++	clamp(__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo),	\
++	      __SOLO_JPEG_MIN_SIZE(__solo), 0x00ff0000)
+ 
+ #define SOLO_SDRAM_END(__solo) \
+ 	(SOLO_JPEG_EXT_ADDR(__solo) + SOLO_JPEG_EXT_SIZE(__solo))
+-- 
+cgit 1.2.3-korg
+
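
For readers following the commit message above, a minimal userspace sketch of
why the two forms agree and why the nested one was so expensive. The plain
macros below are simplified stand-ins for the kernel's type-checked
min()/max()/clamp() helpers (an assumption of this sketch, not the kernel
definitions); because the kernel versions expand their arguments several
times for type checking, nesting max(min(...)) grows the preprocessed source
combinatorially, which is the blow-up the commit measures. For lo <= hi,
clamp(val, lo, hi) equals max(lo, min(val, hi)):

  #include <assert.h>

  /* simplified stand-ins; the real kernel macros add type-checking layers */
  #define min(a, b)          ((a) < (b) ? (a) : (b))
  #define max(a, b)          ((a) > (b) ? (a) : (b))
  #define clamp(val, lo, hi) (min(max((val), (lo)), (hi)))

  int main(void)
  {
          /* exhaustively check a small range against the old nested form */
          for (int v = -16; v <= 16; v++)
                  assert(clamp(v, -3, 4) == max(-3, min(v, 4)));
          return 0;
  }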



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-25 13:26 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-25 13:26 UTC (permalink / raw)
  To: gentoo-commits

commit:     acc4fdace95188da98d42a6d022f435b5dd8e1c0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 25 13:26:15 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 25 13:26:15 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=acc4fdac

Add gcc14 patches

btrfs: fix kvcalloc() arguments order
drm: i915: Adapt to -Walloc-size
objtool: Fix calloc call for new -Walloc-size

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        | 12 +++++++
 2930_gcc14-btrfs-fix-kvcalloc-args-order.patch     | 37 +++++++++++++++++++
 2931_gcc14-drm-i915-Adapt-to-Walloc-size.patch     | 37 +++++++++++++++++++
 ...jtool-Fix-calloc-call-for-new-Walloc-size.patch | 41 ++++++++++++++++++++++
 4 files changed, 127 insertions(+)

diff --git a/0000_README b/0000_README
index 99feeab4..9dc64b1b 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,18 @@ Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL
 
+Patch:  2930_gcc14-btrfs-fix-kvcalloc-args-order.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc:   btrfs: fix kvcalloc() arguments order
+
+Patch:  2931_gcc14-drm-i915-Adapt-to-Walloc-size.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc:   drm: i915: Adapt to -Walloc-size
+
+Patch:  2932_gcc14-objtool-Fix-calloc-call-for-new-Walloc-size.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc:   objtool: Fix calloc call for new -Walloc-size
+
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/2930_gcc14-btrfs-fix-kvcalloc-args-order.patch b/2930_gcc14-btrfs-fix-kvcalloc-args-order.patch
new file mode 100644
index 00000000..0ed049e2
--- /dev/null
+++ b/2930_gcc14-btrfs-fix-kvcalloc-args-order.patch
@@ -0,0 +1,37 @@
+Subject: [gcc-14 PATCH] btrfs: fix kvcalloc() arguments order
+
+When compiling with gcc version 14.0.0 20231220 (experimental)
+and W=1, I've noticed the following warning:
+
+fs/btrfs/send.c: In function 'btrfs_ioctl_send':
+fs/btrfs/send.c:8208:44: warning: 'kvcalloc' sizes specified with 'sizeof'
+in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
+ 8208 |         sctx->clone_roots = kvcalloc(sizeof(*sctx->clone_roots),
+      |                                            ^
+
+Since the 'n' and 'size' arguments of 'kvcalloc()' are multiplied to
+calculate the final size, their actual order doesn't affect the
+result, so this is not a bug. But it's still worth fixing.
+
+Link: https://lore.kernel.org/linux-btrfs/20231221084748.10094-1-dmantipov@yandex.ru/T/#u
+---
+ fs/btrfs/send.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 4e36550618e5..2d7519a6ce72 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -8205,8 +8205,8 @@ long btrfs_ioctl_send(struct inode *inode, struct btrfs_ioctl_send_args *arg)
+ 		goto out;
+ 	}
+ 
+-	sctx->clone_roots = kvcalloc(sizeof(*sctx->clone_roots),
+-				     arg->clone_sources_count + 1,
++	sctx->clone_roots = kvcalloc(arg->clone_sources_count + 1,
++				     sizeof(*sctx->clone_roots),
+ 				     GFP_KERNEL);
+ 	if (!sctx->clone_roots) {
+ 		ret = -ENOMEM;
+-- 
+2.43.0
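
The warning keys on where sizeof appears: gcc-14 treats the first argument of
a calloc-style allocator as the element count, so sizeof there looks like a
transposed call even though the product is identical. A small userspace
illustration using plain calloc(), which shares kvcalloc()'s (n, size)
argument order; the struct here is a made-up stand-in:

#include <stdlib.h>

struct clone_root_stub {	/* hypothetical stand-in struct */
	void *root;
	long ino;
};

int main(void)
{
	size_t count = 8;

	/* Allocates the same number of bytes either way, but gcc-14 with
	 * -Wcalloc-transposed-args flags the sizeof in the first slot. */
	struct clone_root_stub *bad = calloc(sizeof(*bad), count + 1);

	/* Preferred: calloc(nmemb, size), matching the prototype. */
	struct clone_root_stub *good = calloc(count + 1, sizeof(*good));

	free(bad);
	free(good);
	return 0;
}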

diff --git a/2931_gcc14-drm-i915-Adapt-to-Walloc-size.patch b/2931_gcc14-drm-i915-Adapt-to-Walloc-size.patch
new file mode 100644
index 00000000..b6f6ab38
--- /dev/null
+++ b/2931_gcc14-drm-i915-Adapt-to-Walloc-size.patch
@@ -0,0 +1,37 @@
+Subject: [gcc-14 PATCH] drm: i915: Adapt to -Walloc-size
+
+GCC 14 introduces a new -Walloc-size warning, included in -Wextra, which
+errors out like:
+```
+drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c: In function ‘eb_copy_relocations’:
+drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:1681:24: error: allocation of insufficient size ‘1’ for type ‘struct drm_i915_gem_relocation_entry’ with size ‘32’ [-Werror=alloc-size]
+ 1681 |                 relocs = kvmalloc_array(size, 1, GFP_KERNEL);
+      |                        ^
+
+```
+
+So, just swap the number-of-members and size arguments to match the
+prototype, as we're initialising 1 element of size `size`. GCC then
+sees we're not doing anything wrong.
+
+Link: https://lore.kernel.org/intel-gfx/20231107215538.1891359-1-sam@gentoo.org/
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+ drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index 683fd8d3151c..45b9d9e34b8b 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -1678,7 +1678,7 @@ static int eb_copy_relocations(const struct i915_execbuffer *eb)
+ 		urelocs = u64_to_user_ptr(eb->exec[i].relocs_ptr);
+ 		size = nreloc * sizeof(*relocs);
+ 
+-		relocs = kvmalloc_array(size, 1, GFP_KERNEL);
++		relocs = kvmalloc_array(1, size, GFP_KERNEL);
+ 		if (!relocs) {
+ 			err = -ENOMEM;
+ 			goto err;
+-- 
+2.42.1

diff --git a/2932_gcc14-objtool-Fix-calloc-call-for-new-Walloc-size.patch b/2932_gcc14-objtool-Fix-calloc-call-for-new-Walloc-size.patch
new file mode 100644
index 00000000..dca9494a
--- /dev/null
+++ b/2932_gcc14-objtool-Fix-calloc-call-for-new-Walloc-size.patch
@@ -0,0 +1,41 @@
+Subject: [gcc-14 PATCH] objtool: Fix calloc call for new -Walloc-size
+
+GCC 14 introduces a new -Walloc-size warning, included in -Wextra, which
+errors out like:
+```
+check.c: In function ‘cfi_alloc’:
+check.c:294:33: error: allocation of insufficient size ‘1’ for type ‘struct cfi_state’ with size ‘320’ [-Werror=alloc-size]
+  294 |         struct cfi_state *cfi = calloc(sizeof(struct cfi_state), 1);
+      |                                 ^~~~~~
+```
+
+The calloc prototype is:
+```
+void *calloc(size_t nmemb, size_t size);
+```
+
+So, just swap the number-of-members and size arguments to match the
+prototype, as we're initialising 1 struct of size `sizeof(struct ...)`.
+GCC then sees we're not doing anything wrong.
+
+Link: https://lore.kernel.org/all/20231107205504.1470006-1-sam@gentoo.org/
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+ tools/objtool/check.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index e94756e09ca9..548ec3cd7c00 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -291,7 +291,7 @@ static void init_insn_state(struct objtool_file *file, struct insn_state *state,
+ 
+ static struct cfi_state *cfi_alloc(void)
+ {
+-	struct cfi_state *cfi = calloc(sizeof(struct cfi_state), 1);
++	struct cfi_state *cfi = calloc(1, sizeof(struct cfi_state));
+ 	if (!cfi) {
+ 		WARN("calloc failed");
+ 		exit(1);
+-- 
+2.42.1
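
Both -Walloc-size fixes above (i915 and objtool) follow the same rule:
allocate a single object as 1 element of sizeof(object) bytes, so the element
size GCC tracks matches the pointed-to type. A minimal sketch with plain
calloc() and a made-up struct:

#include <stdlib.h>

struct cfi_stub {		/* made-up stand-in for a large struct */
	long regs[40];
};

static struct cfi_stub *cfi_alloc_stub(void)
{
	/* calloc(sizeof(struct cfi_stub), 1) allocates the same number of
	 * bytes, but gcc-14 -Walloc-size reads it as N elements of size 1
	 * and warns that size 1 cannot hold the pointed-to type. */
	return calloc(1, sizeof(struct cfi_stub));
}

int main(void)
{
	struct cfi_stub *cfi = cfi_alloc_stub();

	if (!cfi)
		return 1;
	free(cfi);
	return 0;
}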


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-26  0:17 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-26  0:17 UTC (permalink / raw)
  To: gentoo-commits

commit:     7c0a432b1f50762a4f512c3f4b799524dd1e2bd8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 26 00:17:11 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 26 00:17:11 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7c0a432b

Linux patch 6.7.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1001_linux-6.7.2.patch | 23155 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 23159 insertions(+)

diff --git a/0000_README b/0000_README
index 9dc64b1b..1eabc2b1 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-6.7.1.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.1
 
+Patch:  1001_linux-6.7.2.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.2
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1001_linux-6.7.2.patch b/1001_linux-6.7.2.patch
new file mode 100644
index 00000000..551da90c
--- /dev/null
+++ b/1001_linux-6.7.2.patch
@@ -0,0 +1,23155 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 65731b060e3fe..b72e2049c4876 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5302,6 +5302,12 @@
+ 			Dump ftrace buffer after reporting RCU CPU
+ 			stall warning.
+ 
++	rcupdate.rcu_cpu_stall_notifiers= [KNL]
++			Provide RCU CPU stall notifiers, but see the
++			warnings in the RCU_CPU_STALL_NOTIFIER Kconfig
++			option's help text.  TL;DR:  You almost certainly
++			do not want rcupdate.rcu_cpu_stall_notifiers.
++
+ 	rcupdate.rcu_cpu_stall_suppress= [KNL]
+ 			Suppress RCU CPU stall warning messages.
+ 
+diff --git a/Documentation/devicetree/bindings/arm/qcom.yaml b/Documentation/devicetree/bindings/arm/qcom.yaml
+index 7f80f48a09544..8a6466d1fc4ee 100644
+--- a/Documentation/devicetree/bindings/arm/qcom.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom.yaml
+@@ -138,7 +138,7 @@ description: |
+   There are many devices in the list below that run the standard ChromeOS
+   bootloader setup and use the open source depthcharge bootloader to boot the
+   OS. These devices do not use the scheme described above. For details, see:
+-  https://docs.kernel.org/arm/google/chromebook-boot-flow.html
++  https://docs.kernel.org/arch/arm/google/chromebook-boot-flow.html
+ 
+ properties:
+   $nodename:
+diff --git a/Documentation/devicetree/bindings/gpio/xlnx,gpio-xilinx.yaml b/Documentation/devicetree/bindings/gpio/xlnx,gpio-xilinx.yaml
+index c1060e5fcef3a..d3d8a2e143ed2 100644
+--- a/Documentation/devicetree/bindings/gpio/xlnx,gpio-xilinx.yaml
++++ b/Documentation/devicetree/bindings/gpio/xlnx,gpio-xilinx.yaml
+@@ -126,7 +126,7 @@ examples:
+   - |
+     #include <dt-bindings/interrupt-controller/arm-gic.h>
+ 
+-        gpio@e000a000 {
++        gpio@a0020000 {
+             compatible = "xlnx,xps-gpio-1.00.a";
+             reg = <0xa0020000 0x10000>;
+             #gpio-cells = <2>;
+diff --git a/Documentation/devicetree/bindings/media/mediatek,mdp3-rdma.yaml b/Documentation/devicetree/bindings/media/mediatek,mdp3-rdma.yaml
+index 7032c7e150390..3e128733ef535 100644
+--- a/Documentation/devicetree/bindings/media/mediatek,mdp3-rdma.yaml
++++ b/Documentation/devicetree/bindings/media/mediatek,mdp3-rdma.yaml
+@@ -61,6 +61,9 @@ properties:
+       - description: used for 1st data pipe from RDMA
+       - description: used for 2nd data pipe from RDMA
+ 
++  '#dma-cells':
++    const: 1
++
+ required:
+   - compatible
+   - reg
+@@ -70,6 +73,7 @@ required:
+   - clocks
+   - iommus
+   - mboxes
++  - '#dma-cells'
+ 
+ additionalProperties: false
+ 
+@@ -80,16 +84,17 @@ examples:
+     #include <dt-bindings/power/mt8183-power.h>
+     #include <dt-bindings/memory/mt8183-larb-port.h>
+ 
+-    mdp3_rdma0: mdp3-rdma0@14001000 {
+-      compatible = "mediatek,mt8183-mdp3-rdma";
+-      reg = <0x14001000 0x1000>;
+-      mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x1000 0x1000>;
+-      mediatek,gce-events = <CMDQ_EVENT_MDP_RDMA0_SOF>,
+-                            <CMDQ_EVENT_MDP_RDMA0_EOF>;
+-      power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
+-      clocks = <&mmsys CLK_MM_MDP_RDMA0>,
+-               <&mmsys CLK_MM_MDP_RSZ1>;
+-      iommus = <&iommu>;
+-      mboxes = <&gce 20 CMDQ_THR_PRIO_LOWEST>,
+-               <&gce 21 CMDQ_THR_PRIO_LOWEST>;
++    dma-controller@14001000 {
++        compatible = "mediatek,mt8183-mdp3-rdma";
++        reg = <0x14001000 0x1000>;
++        mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x1000 0x1000>;
++        mediatek,gce-events = <CMDQ_EVENT_MDP_RDMA0_SOF>,
++                              <CMDQ_EVENT_MDP_RDMA0_EOF>;
++        power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
++        clocks = <&mmsys CLK_MM_MDP_RDMA0>,
++                 <&mmsys CLK_MM_MDP_RSZ1>;
++        iommus = <&iommu>;
++        mboxes = <&gce 20 CMDQ_THR_PRIO_LOWEST>,
++                 <&gce 21 CMDQ_THR_PRIO_LOWEST>;
++        #dma-cells = <1>;
+     };
+diff --git a/Documentation/devicetree/bindings/media/mediatek,mdp3-wrot.yaml b/Documentation/devicetree/bindings/media/mediatek,mdp3-wrot.yaml
+index 0baa77198fa21..64ea98aa05928 100644
+--- a/Documentation/devicetree/bindings/media/mediatek,mdp3-wrot.yaml
++++ b/Documentation/devicetree/bindings/media/mediatek,mdp3-wrot.yaml
+@@ -50,6 +50,9 @@ properties:
+   iommus:
+     maxItems: 1
+ 
++  '#dma-cells':
++    const: 1
++
+ required:
+   - compatible
+   - reg
+@@ -58,6 +61,7 @@ required:
+   - power-domains
+   - clocks
+   - iommus
++  - '#dma-cells'
+ 
+ additionalProperties: false
+ 
+@@ -68,13 +72,14 @@ examples:
+     #include <dt-bindings/power/mt8183-power.h>
+     #include <dt-bindings/memory/mt8183-larb-port.h>
+ 
+-    mdp3_wrot0: mdp3-wrot0@14005000 {
+-      compatible = "mediatek,mt8183-mdp3-wrot";
+-      reg = <0x14005000 0x1000>;
+-      mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x5000 0x1000>;
+-      mediatek,gce-events = <CMDQ_EVENT_MDP_WROT0_SOF>,
+-                            <CMDQ_EVENT_MDP_WROT0_EOF>;
+-      power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
+-      clocks = <&mmsys CLK_MM_MDP_WROT0>;
+-      iommus = <&iommu>;
++    dma-controller@14005000 {
++        compatible = "mediatek,mt8183-mdp3-wrot";
++        reg = <0x14005000 0x1000>;
++        mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x5000 0x1000>;
++        mediatek,gce-events = <CMDQ_EVENT_MDP_WROT0_SOF>,
++                              <CMDQ_EVENT_MDP_WROT0_EOF>;
++        power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
++        clocks = <&mmsys CLK_MM_MDP_WROT0>;
++        iommus = <&iommu>;
++        #dma-cells = <1>;
+     };
+diff --git a/Documentation/devicetree/bindings/media/rockchip-isp1.yaml b/Documentation/devicetree/bindings/media/rockchip-isp1.yaml
+index e466dff8286d2..afcaa427d48b0 100644
+--- a/Documentation/devicetree/bindings/media/rockchip-isp1.yaml
++++ b/Documentation/devicetree/bindings/media/rockchip-isp1.yaml
+@@ -90,15 +90,16 @@ properties:
+         description: connection point for input on the parallel interface
+ 
+         properties:
+-          bus-type:
+-            enum: [5, 6]
+-
+           endpoint:
+             $ref: video-interfaces.yaml#
+             unevaluatedProperties: false
+ 
+-        required:
+-          - bus-type
++            properties:
++              bus-type:
++                enum: [5, 6]
++
++            required:
++              - bus-type
+ 
+     anyOf:
+       - required:
+diff --git a/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb43dp-phy.yaml b/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb43dp-phy.yaml
+index 9af203dc8793f..fa7408eb74895 100644
+--- a/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb43dp-phy.yaml
++++ b/Documentation/devicetree/bindings/phy/qcom,sc8280xp-qmp-usb43dp-phy.yaml
+@@ -62,12 +62,12 @@ properties:
+   "#clock-cells":
+     const: 1
+     description:
+-      See include/dt-bindings/dt-bindings/phy/phy-qcom-qmp.h
++      See include/dt-bindings/phy/phy-qcom-qmp.h
+ 
+   "#phy-cells":
+     const: 1
+     description:
+-      See include/dt-bindings/dt-bindings/phy/phy-qcom-qmp.h
++      See include/dt-bindings/phy/phy-qcom-qmp.h
+ 
+   orientation-switch:
+     description:
+diff --git a/Documentation/devicetree/bindings/timer/thead,c900-aclint-mtimer.yaml b/Documentation/devicetree/bindings/timer/thead,c900-aclint-mtimer.yaml
+index fbd235650e52c..2e92bcdeb423a 100644
+--- a/Documentation/devicetree/bindings/timer/thead,c900-aclint-mtimer.yaml
++++ b/Documentation/devicetree/bindings/timer/thead,c900-aclint-mtimer.yaml
+@@ -17,7 +17,12 @@ properties:
+       - const: thead,c900-aclint-mtimer
+ 
+   reg:
+-    maxItems: 1
++    items:
++      - description: MTIMECMP Registers
++
++  reg-names:
++    items:
++      - const: mtimecmp
+ 
+   interrupts-extended:
+     minItems: 1
+@@ -28,6 +33,7 @@ additionalProperties: false
+ required:
+   - compatible
+   - reg
++  - reg-names
+   - interrupts-extended
+ 
+ examples:
+@@ -39,5 +45,6 @@ examples:
+                             <&cpu3intc 7>,
+                             <&cpu4intc 7>;
+       reg = <0xac000000 0x00010000>;
++      reg-names = "mtimecmp";
+     };
+ ...
+diff --git a/Documentation/driver-api/pci/p2pdma.rst b/Documentation/driver-api/pci/p2pdma.rst
+index 44deb52beeb47..d0b241628cf13 100644
+--- a/Documentation/driver-api/pci/p2pdma.rst
++++ b/Documentation/driver-api/pci/p2pdma.rst
+@@ -83,19 +83,9 @@ this to include other types of resources like doorbells.
+ Client Drivers
+ --------------
+ 
+-A client driver typically only has to conditionally change its DMA map
+-routine to use the mapping function :c:func:`pci_p2pdma_map_sg()` instead
+-of the usual :c:func:`dma_map_sg()` function. Memory mapped in this
+-way does not need to be unmapped.
+-
+-The client may also, optionally, make use of
+-:c:func:`is_pci_p2pdma_page()` to determine when to use the P2P mapping
+-functions and when to use the regular mapping functions. In some
+-situations, it may be more appropriate to use a flag to indicate a
+-given request is P2P memory and map appropriately. It is important to
+-ensure that struct pages that back P2P memory stay out of code that
+-does not have support for them as other code may treat the pages as
+-regular memory which may not be appropriate.
++A client driver only has to use the mapping API :c:func:`dma_map_sg()`
++and :c:func:`dma_unmap_sg()` functions as usual, and the implementation
++will do the right thing for the P2P capable memory.
+ 
+ 
+ Orchestrator Drivers
+diff --git a/Makefile b/Makefile
+index 186da2386a067..0564d3d093954 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
+index 59fd86b9fb471..099a16c34e1f1 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
+@@ -738,7 +738,7 @@
+ 
+ 				xoadc: xoadc@197 {
+ 					compatible = "qcom,pm8921-adc";
+-					reg = <197>;
++					reg = <0x197>;
+ 					interrupts-extended = <&pmicintc 78 IRQ_TYPE_EDGE_RISING>;
+ 					#address-cells = <2>;
+ 					#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8226.dtsi b/arch/arm/boot/dts/qcom/qcom-msm8226.dtsi
+index 97a377b5a0eca..5cd03ea7b084f 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8226.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-msm8226.dtsi
+@@ -442,8 +442,8 @@
+ 				 <&gcc GPLL0_VOTE>,
+ 				 <&gcc GPLL1_VOTE>,
+ 				 <&rpmcc RPM_SMD_GFX3D_CLK_SRC>,
+-				 <0>,
+-				 <0>;
++				 <&mdss_dsi0_phy 1>,
++				 <&mdss_dsi0_phy 0>;
+ 			clock-names = "xo",
+ 				      "mmss_gpll0_vote",
+ 				      "gpll0_vote",
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
+index 2aa5089a8513d..a88f186fcf03c 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
+@@ -436,9 +436,9 @@
+ 			status = "disabled";
+ 		};
+ 
+-		pcie_phy: phy@1c07000 {
++		pcie_phy: phy@1c06000 {
+ 			compatible = "qcom,sdx55-qmp-pcie-phy";
+-			reg = <0x01c07000 0x2000>;
++			reg = <0x01c06000 0x2000>;
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 			ranges;
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx65.dtsi b/arch/arm/boot/dts/qcom/qcom-sdx65.dtsi
+index e559adaaeee7a..27b7f50a18327 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx65.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-sdx65.dtsi
+@@ -338,7 +338,7 @@
+ 			power-domains = <&gcc PCIE_GDSC>;
+ 
+ 			phys = <&pcie_phy>;
+-			phy-names = "pcie-phy";
++			phy-names = "pciephy";
+ 
+ 			max-link-speed = <3>;
+ 			num-lanes = <2>;
+@@ -530,7 +530,7 @@
+ 			reg = <0x0c264000 0x1000>;
+ 		};
+ 
+-		spmi_bus: qcom,spmi@c440000 {
++		spmi_bus: spmi@c440000 {
+ 			compatible = "qcom,spmi-pmic-arb";
+ 			reg = <0xc440000 0xd00>,
+ 				<0xc600000 0x2000000>,
+diff --git a/arch/arm/boot/dts/st/stm32mp157a-dk1-scmi.dts b/arch/arm/boot/dts/st/stm32mp157a-dk1-scmi.dts
+index afcd6285890cc..c27963898b5e6 100644
+--- a/arch/arm/boot/dts/st/stm32mp157a-dk1-scmi.dts
++++ b/arch/arm/boot/dts/st/stm32mp157a-dk1-scmi.dts
+@@ -11,7 +11,7 @@
+ 
+ / {
+ 	model = "STMicroelectronics STM32MP157A-DK1 SCMI Discovery Board";
+-	compatible = "st,stm32mp157a-dk1-scmi", "st,stm32mp157a-dk1", "st,stm32mp157";
++	compatible = "st,stm32mp157a-dk1-scmi", "st,stm32mp157";
+ 
+ 	reserved-memory {
+ 		optee@de000000 {
+diff --git a/arch/arm/boot/dts/st/stm32mp157c-dk2-scmi.dts b/arch/arm/boot/dts/st/stm32mp157c-dk2-scmi.dts
+index 39358d9020003..6226189431340 100644
+--- a/arch/arm/boot/dts/st/stm32mp157c-dk2-scmi.dts
++++ b/arch/arm/boot/dts/st/stm32mp157c-dk2-scmi.dts
+@@ -11,7 +11,7 @@
+ 
+ / {
+ 	model = "STMicroelectronics STM32MP157C-DK2 SCMI Discovery Board";
+-	compatible = "st,stm32mp157c-dk2-scmi", "st,stm32mp157c-dk2", "st,stm32mp157";
++	compatible = "st,stm32mp157c-dk2-scmi", "st,stm32mp157";
+ 
+ 	reserved-memory {
+ 		optee@de000000 {
+diff --git a/arch/arm/boot/dts/st/stm32mp157c-ed1-scmi.dts b/arch/arm/boot/dts/st/stm32mp157c-ed1-scmi.dts
+index 07ea765a4553a..c7c4d7e89d612 100644
+--- a/arch/arm/boot/dts/st/stm32mp157c-ed1-scmi.dts
++++ b/arch/arm/boot/dts/st/stm32mp157c-ed1-scmi.dts
+@@ -11,7 +11,7 @@
+ 
+ / {
+ 	model = "STMicroelectronics STM32MP157C-ED1 SCMI eval daughter";
+-	compatible = "st,stm32mp157c-ed1-scmi", "st,stm32mp157c-ed1", "st,stm32mp157";
++	compatible = "st,stm32mp157c-ed1-scmi", "st,stm32mp157";
+ 
+ 	reserved-memory {
+ 		optee@fe000000 {
+diff --git a/arch/arm/boot/dts/st/stm32mp157c-ev1-scmi.dts b/arch/arm/boot/dts/st/stm32mp157c-ev1-scmi.dts
+index 813086ec24895..2ab77e64f1bbb 100644
+--- a/arch/arm/boot/dts/st/stm32mp157c-ev1-scmi.dts
++++ b/arch/arm/boot/dts/st/stm32mp157c-ev1-scmi.dts
+@@ -11,8 +11,7 @@
+ 
+ / {
+ 	model = "STMicroelectronics STM32MP157C-EV1 SCMI eval daughter on eval mother";
+-	compatible = "st,stm32mp157c-ev1-scmi", "st,stm32mp157c-ev1", "st,stm32mp157c-ed1",
+-		     "st,stm32mp157";
++	compatible = "st,stm32mp157c-ev1-scmi", "st,stm32mp157c-ed1", "st,stm32mp157";
+ 
+ 	reserved-memory {
+ 		optee@fe000000 {
+diff --git a/arch/arm/mach-davinci/Kconfig b/arch/arm/mach-davinci/Kconfig
+index 4316e1370627c..2a8a9fe46586d 100644
+--- a/arch/arm/mach-davinci/Kconfig
++++ b/arch/arm/mach-davinci/Kconfig
+@@ -4,12 +4,14 @@ menuconfig ARCH_DAVINCI
+ 	bool "TI DaVinci"
+ 	depends on ARCH_MULTI_V5
+ 	depends on CPU_LITTLE_ENDIAN
++	select CPU_ARM926T
+ 	select DAVINCI_TIMER
+ 	select ZONE_DMA
+ 	select PM_GENERIC_DOMAINS if PM
+ 	select PM_GENERIC_DOMAINS_OF if PM && OF
+ 	select REGMAP_MMIO
+ 	select RESET_CONTROLLER
++	select PINCTRL
+ 	select PINCTRL_SINGLE
+ 
+ if ARCH_DAVINCI
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 738024baaa578..54faf83cb436e 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -1408,7 +1408,7 @@
+ 			assigned-clocks = <&clk IMX8MM_CLK_GPU3D_CORE>,
+ 					  <&clk IMX8MM_GPU_PLL_OUT>;
+ 			assigned-clock-parents = <&clk IMX8MM_GPU_PLL_OUT>;
+-			assigned-clock-rates = <0>, <1000000000>;
++			assigned-clock-rates = <0>, <800000000>;
+ 			power-domains = <&pgc_gpu>;
+ 		};
+ 
+@@ -1423,7 +1423,7 @@
+ 			assigned-clocks = <&clk IMX8MM_CLK_GPU2D_CORE>,
+ 					  <&clk IMX8MM_GPU_PLL_OUT>;
+ 			assigned-clock-parents = <&clk IMX8MM_GPU_PLL_OUT>;
+-			assigned-clock-rates = <0>, <1000000000>;
++			assigned-clock-rates = <0>, <800000000>;
+ 			power-domains = <&pgc_gpu>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/hisilicon/hikey970-pmic.dtsi b/arch/arm64/boot/dts/hisilicon/hikey970-pmic.dtsi
+index 970047f2dabd5..c06e011a6c3ff 100644
+--- a/arch/arm64/boot/dts/hisilicon/hikey970-pmic.dtsi
++++ b/arch/arm64/boot/dts/hisilicon/hikey970-pmic.dtsi
+@@ -25,9 +25,6 @@
+ 			gpios = <&gpio28 0 0>;
+ 
+ 			regulators {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+ 				ldo3: ldo3 { /* HDMI */
+ 					regulator-name = "ldo3";
+ 					regulator-min-microvolt = <1500000>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index 9eab2bb221348..805ef2d79b401 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -130,7 +130,7 @@
+ 		compatible = "microchip,mcp7940x";
+ 		reg = <0x6f>;
+ 		interrupt-parent = <&gpiosb>;
+-		interrupts = <5 0>; /* GPIO2_5 */
++		interrupts = <5 IRQ_TYPE_EDGE_FALLING>; /* GPIO2_5 */
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 976dc968b3ca1..df6e9990cd5fa 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1660,7 +1660,7 @@
+ 			mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0 0x1000>;
+ 		};
+ 
+-		mdp3-rdma0@14001000 {
++		dma-controller0@14001000 {
+ 			compatible = "mediatek,mt8183-mdp3-rdma";
+ 			reg = <0 0x14001000 0 0x1000>;
+ 			mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x1000 0x1000>;
+@@ -1672,6 +1672,7 @@
+ 			iommus = <&iommu M4U_PORT_MDP_RDMA0>;
+ 			mboxes = <&gce 20 CMDQ_THR_PRIO_LOWEST 0>,
+ 				 <&gce 21 CMDQ_THR_PRIO_LOWEST 0>;
++			#dma-cells = <1>;
+ 		};
+ 
+ 		mdp3-rsz0@14003000 {
+@@ -1692,7 +1693,7 @@
+ 			clocks = <&mmsys CLK_MM_MDP_RSZ1>;
+ 		};
+ 
+-		mdp3-wrot0@14005000 {
++		dma-controller@14005000 {
+ 			compatible = "mediatek,mt8183-mdp3-wrot";
+ 			reg = <0 0x14005000 0 0x1000>;
+ 			mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x5000 0x1000>;
+@@ -1701,6 +1702,7 @@
+ 			power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
+ 			clocks = <&mmsys CLK_MM_MDP_WROT0>;
+ 			iommus = <&iommu M4U_PORT_MDP_WROT0>;
++			#dma-cells = <1>;
+ 		};
+ 
+ 		mdp3-wdma@14006000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186.dtsi b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+index df0c04f2ba1da..2fec6fd1c1a71 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+@@ -22,7 +22,7 @@
+ 
+ 	aliases {
+ 		ovl0 = &ovl0;
+-		ovl_2l0 = &ovl_2l0;
++		ovl-2l0 = &ovl_2l0;
+ 		rdma0 = &rdma0;
+ 		rdma1 = &rdma1;
+ 	};
+@@ -1160,14 +1160,14 @@
+ 			status = "disabled";
+ 		};
+ 
+-		adsp_mailbox0: mailbox@10686000 {
++		adsp_mailbox0: mailbox@10686100 {
+ 			compatible = "mediatek,mt8186-adsp-mbox";
+ 			#mbox-cells = <0>;
+ 			reg = <0 0x10686100 0 0x1000>;
+ 			interrupts = <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH 0>;
+ 		};
+ 
+-		adsp_mailbox1: mailbox@10687000 {
++		adsp_mailbox1: mailbox@10687100 {
+ 			compatible = "mediatek,mt8186-adsp-mbox";
+ 			#mbox-cells = <0>;
+ 			reg = <0 0x10687100 0 0x1000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index e0ac2e9f5b720..6708c4d21abf9 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -2873,7 +2873,7 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 		};
+ 
+-		vdo1_rdma0: rdma@1c104000 {
++		vdo1_rdma0: dma-controller@1c104000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c104000 0 0x1000>;
+ 			interrupts = <GIC_SPI 495 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2881,9 +2881,10 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vdo M4U_PORT_L2_MDP_RDMA0>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x4000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+-		vdo1_rdma1: rdma@1c105000 {
++		vdo1_rdma1: dma-controller@1c105000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c105000 0 0x1000>;
+ 			interrupts = <GIC_SPI 496 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2891,9 +2892,10 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vpp M4U_PORT_L3_MDP_RDMA1>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x5000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+-		vdo1_rdma2: rdma@1c106000 {
++		vdo1_rdma2: dma-controller@1c106000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c106000 0 0x1000>;
+ 			interrupts = <GIC_SPI 497 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2901,9 +2903,10 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vdo M4U_PORT_L2_MDP_RDMA2>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x6000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+-		vdo1_rdma3: rdma@1c107000 {
++		vdo1_rdma3: dma-controller@1c107000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c107000 0 0x1000>;
+ 			interrupts = <GIC_SPI 498 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2911,9 +2914,10 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vpp M4U_PORT_L3_MDP_RDMA3>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x7000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+-		vdo1_rdma4: rdma@1c108000 {
++		vdo1_rdma4: dma-controller@1c108000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c108000 0 0x1000>;
+ 			interrupts = <GIC_SPI 499 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2921,9 +2925,10 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vdo M4U_PORT_L2_MDP_RDMA4>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x8000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+-		vdo1_rdma5: rdma@1c109000 {
++		vdo1_rdma5: dma-controller@1c109000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c109000 0 0x1000>;
+ 			interrupts = <GIC_SPI 500 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2931,9 +2936,10 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vpp M4U_PORT_L3_MDP_RDMA5>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x9000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+-		vdo1_rdma6: rdma@1c10a000 {
++		vdo1_rdma6: dma-controller@1c10a000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c10a000 0 0x1000>;
+ 			interrupts = <GIC_SPI 501 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2941,9 +2947,10 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vdo M4U_PORT_L2_MDP_RDMA6>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0xa000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+-		vdo1_rdma7: rdma@1c10b000 {
++		vdo1_rdma7: dma-controller@1c10b000 {
+ 			compatible = "mediatek,mt8195-vdo1-rdma";
+ 			reg = <0 0x1c10b000 0 0x1000>;
+ 			interrupts = <GIC_SPI 502 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -2951,6 +2958,7 @@
+ 			power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ 			iommus = <&iommu_vpp M4U_PORT_L3_MDP_RDMA7>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0xb000 0x1000>;
++			#dma-cells = <1>;
+ 		};
+ 
+ 		merge1: vpp-merge@1c10c000 {
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index e59b9df96c7e6..0b1330b521df4 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -557,7 +557,7 @@
+ 					  <&gcc GCC_USB0_MOCK_UTMI_CLK>;
+ 			assigned-clock-rates = <133330000>,
+ 					       <133330000>,
+-					       <20000000>;
++					       <24000000>;
+ 
+ 			resets = <&gcc GCC_USB0_BCR>;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts b/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts
+index 94885b9c21c89..fd38a6278f2fa 100644
+--- a/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts
++++ b/arch/arm64/boot/dts/qcom/qrb2210-rb1.dts
+@@ -415,6 +415,10 @@
+ 	status = "okay";
+ };
+ 
++&usb_dwc3 {
++	dr_mode = "host";
++};
++
+ &usb_hsphy {
+ 	vdd-supply = <&pm2250_l12>;
+ 	vdda-pll-supply = <&pm2250_l13>;
+diff --git a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+index a7278a9472ed9..9738c0dacd58c 100644
+--- a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
++++ b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+@@ -518,7 +518,6 @@
+ 
+ &usb_dwc3 {
+ 	maximum-speed = "super-speed";
+-	dr_mode = "peripheral";
+ };
+ 
+ &usb_hsphy {
+diff --git a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+index c8cd40a462a3f..4501c00d124b4 100644
+--- a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
++++ b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+@@ -64,8 +64,8 @@
+ 			function = LED_FUNCTION_INDICATOR;
+ 			color = <LED_COLOR_ID_GREEN>;
+ 			gpios = <&pm8150_gpios 10 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "panic-indicator";
+ 			default-state = "off";
++			panic-indicator;
+ 		};
+ 
+ 		led-wlan {
+@@ -1425,7 +1425,7 @@
+ 
+ 		altmodes {
+ 			displayport {
+-				svid = <0xff01>;
++				svid = /bits/ 16 <0xff01>;
+ 				vdo = <0x00001c46>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index b6a93b11cbbd4..1274dcda22563 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -1610,8 +1610,8 @@
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+ 			interrupts-extended = <&intc GIC_SPI 287 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 14 IRQ_TYPE_EDGE_RISING>,
+-					      <&pdc 15 IRQ_TYPE_EDGE_RISING>,
++					      <&pdc 14 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 15 IRQ_TYPE_EDGE_BOTH>,
+ 					      <&pdc 12 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "pwr_event",
+ 					  "dp_hs_phy_irq",
+@@ -1697,8 +1697,8 @@
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+ 			interrupts-extended = <&intc GIC_SPI 352 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 8 IRQ_TYPE_EDGE_RISING>,
+-					      <&pdc 7 IRQ_TYPE_EDGE_RISING>,
++					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 7 IRQ_TYPE_EDGE_BOTH>,
+ 					      <&pdc 13 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "pwr_event",
+ 					  "dp_hs_phy_irq",
+@@ -1760,8 +1760,8 @@
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+ 			interrupts-extended = <&intc GIC_SPI 444 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 10 IRQ_TYPE_EDGE_RISING>,
+-					      <&pdc 9 IRQ_TYPE_EDGE_RISING>;
++					      <&pdc 10 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "pwr_event",
+ 					  "dp_hs_phy_irq",
+ 					  "dm_hs_phy_irq";
+@@ -2181,7 +2181,7 @@
+ 			compatible = "qcom,apss-wdt-sa8775p", "qcom,kpss-wdt";
+ 			reg = <0x0 0x17c10000 0x0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+ 		memtimer: timer@17c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-acer-aspire1.dts b/arch/arm64/boot/dts/qcom/sc7180-acer-aspire1.dts
+index dbb48934d4995..3342cb0480385 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-acer-aspire1.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-acer-aspire1.dts
+@@ -209,9 +209,22 @@
+ 		AVDD-supply = <&vreg_l15a_1p8>;
+ 		MICVDD-supply = <&reg_codec_3p3>;
+ 		VBAT-supply = <&reg_codec_3p3>;
++		DBVDD-supply = <&vreg_l15a_1p8>;
++		LDO1-IN-supply = <&vreg_l15a_1p8>;
++
++		/*
++		 * NOTE: The board has a path from this codec to the
++		 * DMIC microphones in the lid, however some of the option
++		 * resistors are absent and the microphones are connected
++		 * to the SoC instead.
++		 *
++		 * If the resistors were to be changed by the user to
++		 * connect the codec, the following could be used:
++		 *
++		 * realtek,dmic1-data-pin = <1>;
++		 * realtek,dmic1-clk-pin = <1>;
++		 */
+ 
+-		realtek,dmic1-data-pin = <1>;
+-		realtek,dmic1-clk-pin = <1>;
+ 		realtek,jd-src = <1>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 11f353d416b4d..c0365832c3152 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3576,7 +3576,7 @@
+ 			compatible = "qcom,apss-wdt-sc7180", "qcom,kpss-wdt";
+ 			reg = <0 0x17c10000 0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+ 		timer@17c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 66f1eb83cca7e..f7f616906f6f3 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -974,6 +974,7 @@
+ 
+ 			bus-width = <8>;
+ 			supports-cqe;
++			dma-coherent;
+ 
+ 			qcom,dll-config = <0x0007642c>;
+ 			qcom,ddr-config = <0x80040868>;
+@@ -2598,7 +2599,8 @@
+ 				    "cx_mem",
+ 				    "cx_dbgc";
+ 			interrupts = <GIC_SPI 300 IRQ_TYPE_LEVEL_HIGH>;
+-			iommus = <&adreno_smmu 0 0x401>;
++			iommus = <&adreno_smmu 0 0x400>,
++				 <&adreno_smmu 1 0x400>;
+ 			operating-points-v2 = <&gpu_opp_table>;
+ 			qcom,gmu = <&gmu>;
+ 			interconnects = <&gem_noc MASTER_GFX3D 0 &mc_virt SLAVE_EBI1 0>;
+@@ -2772,6 +2774,7 @@
+ 					"gpu_cc_hub_aon_clk";
+ 
+ 			power-domains = <&gpucc GPU_CC_CX_GDSC>;
++			dma-coherent;
+ 		};
+ 
+ 		remoteproc_mpss: remoteproc@4080000 {
+@@ -3329,6 +3332,7 @@
+ 			operating-points-v2 = <&sdhc2_opp_table>;
+ 
+ 			bus-width = <4>;
++			dma-coherent;
+ 
+ 			qcom,dll-config = <0x0007642c>;
+ 
+@@ -3426,8 +3430,8 @@
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+ 			interrupts-extended = <&intc GIC_SPI 240 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 12 IRQ_TYPE_EDGE_RISING>,
+-					      <&pdc 13 IRQ_TYPE_EDGE_RISING>;
++					      <&pdc 12 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 13 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq",
+ 					  "dp_hs_phy_irq",
+ 					  "dm_hs_phy_irq";
+@@ -5222,7 +5226,7 @@
+ 			compatible = "qcom,apss-wdt-sc7280", "qcom,kpss-wdt";
+ 			reg = <0 0x17c10000 0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 			status = "reserved"; /* Owned by Gunyah hyp */
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x-primus.dts b/arch/arm64/boot/dts/qcom/sc8180x-primus.dts
+index fd2fab4895b39..a40ef23a2a4f5 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x-primus.dts
++++ b/arch/arm64/boot/dts/qcom/sc8180x-primus.dts
+@@ -43,7 +43,7 @@
+ 		pinctrl-0 = <&hall_int_active_state>;
+ 
+ 		lid-switch {
+-			gpios = <&tlmm 121 GPIO_ACTIVE_HIGH>;
++			gpios = <&tlmm 121 GPIO_ACTIVE_LOW>;
+ 			linux,input-type = <EV_SW>;
+ 			linux,code = <SW_LID>;
+ 			wakeup-source;
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index a34f438ef2d9a..b5eb842878703 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -1751,6 +1751,7 @@
+ 
+ 			phys = <&pcie0_phy>;
+ 			phy-names = "pciephy";
++			dma-coherent;
+ 
+ 			status = "disabled";
+ 		};
+@@ -1761,7 +1762,7 @@
+ 			clocks = <&gcc GCC_PCIE_PHY_AUX_CLK>,
+ 				 <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
+ 				 <&gcc GCC_PCIE_0_CLKREF_CLK>,
+-				 <&gcc GCC_PCIE1_PHY_REFGEN_CLK>,
++				 <&gcc GCC_PCIE0_PHY_REFGEN_CLK>,
+ 				 <&gcc GCC_PCIE_0_PIPE_CLK>;
+ 			clock-names = "aux",
+ 				      "cfg_ahb",
+@@ -1848,6 +1849,7 @@
+ 
+ 			phys = <&pcie3_phy>;
+ 			phy-names = "pciephy";
++			dma-coherent;
+ 
+ 			status = "disabled";
+ 		};
+@@ -1858,7 +1860,7 @@
+ 			clocks = <&gcc GCC_PCIE_PHY_AUX_CLK>,
+ 				 <&gcc GCC_PCIE_3_CFG_AHB_CLK>,
+ 				 <&gcc GCC_PCIE_3_CLKREF_CLK>,
+-				 <&gcc GCC_PCIE2_PHY_REFGEN_CLK>,
++				 <&gcc GCC_PCIE3_PHY_REFGEN_CLK>,
+ 				 <&gcc GCC_PCIE_3_PIPE_CLK>;
+ 			clock-names = "aux",
+ 				      "cfg_ahb",
+@@ -1946,6 +1948,7 @@
+ 
+ 			phys = <&pcie1_phy>;
+ 			phy-names = "pciephy";
++			dma-coherent;
+ 
+ 			status = "disabled";
+ 		};
+@@ -2044,6 +2047,7 @@
+ 
+ 			phys = <&pcie2_phy>;
+ 			phy-names = "pciephy";
++			dma-coherent;
+ 
+ 			status = "disabled";
+ 		};
+@@ -2062,7 +2066,7 @@
+ 				      "refgen",
+ 				      "pipe";
+ 			#clock-cells = <0>;
+-			clock-output-names = "pcie_3_pipe_clk";
++			clock-output-names = "pcie_2_pipe_clk";
+ 
+ 			#phy-cells = <0>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+index 38edaf51aa345..f2055899ae7ae 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+@@ -82,6 +82,9 @@
+ 	leds {
+ 		compatible = "gpio-leds";
+ 
++		pinctrl-names = "default";
++		pinctrl-0 = <&cam_indicator_en>;
++
+ 		led-camera-indicator {
+ 			label = "white:camera-indicator";
+ 			function = LED_FUNCTION_INDICATOR;
+@@ -601,6 +604,7 @@
+ };
+ 
+ &mdss0_dp3_phy {
++	compatible = "qcom,sc8280xp-edp-phy";
+ 	vdda-phy-supply = <&vreg_l6b>;
+ 	vdda-pll-supply = <&vreg_l3b>;
+ 
+@@ -1277,6 +1281,13 @@
+ 		};
+ 	};
+ 
++	cam_indicator_en: cam-indicator-en-state {
++		pins = "gpio28";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
+ 	edp_reg_en: edp-reg-en-state {
+ 		pins = "gpio25";
+ 		function = "gpio";
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index cad59af7ccef1..b8081513176ac 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -4225,7 +4225,7 @@
+ 			compatible = "qcom,apss-wdt-sc8280xp", "qcom,kpss-wdt";
+ 			reg = <0 0x17c10000 0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+ 		timer@17c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index c7eba6c491be2..7e7bf3fb3be63 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -67,8 +67,8 @@
+ 			function = LED_FUNCTION_INDICATOR;
+ 			color = <LED_COLOR_ID_GREEN>;
+ 			gpios = <&pm8998_gpios 13 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "panic-indicator";
+ 			default-state = "off";
++			panic-indicator;
+ 		};
+ 
+ 		led-1 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index bf5e6eb9d3138..9648505644ff1 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -5088,7 +5088,7 @@
+ 			compatible = "qcom,apss-wdt-sdm845", "qcom,kpss-wdt";
+ 			reg = <0 0x17980000 0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+ 		apss_shared: mailbox@17990000 {
+diff --git a/arch/arm64/boot/dts/qcom/sm6125.dtsi b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+index eb07eca3a48df..1dd3a4056e26f 100644
+--- a/arch/arm64/boot/dts/qcom/sm6125.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+@@ -1185,6 +1185,10 @@
+ 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <66666667>;
+ 
++			interrupts = <GIC_SPI 260 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 422 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "hs_phy_irq", "ss_phy_irq";
++
+ 			power-domains = <&gcc USB30_PRIM_GDSC>;
+ 			qcom,select-utmi-as-pipe-clk;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 8fd6f4d034900..6464e144c228c 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -2524,7 +2524,7 @@
+ 			compatible = "qcom,apss-wdt-sm6350", "qcom,kpss-wdt";
+ 			reg = <0 0x17c10000 0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+ 		timer@17c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sm6375.dtsi b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+index e7ff55443da70..e56f7ea4ebc6a 100644
+--- a/arch/arm64/boot/dts/qcom/sm6375.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+@@ -311,6 +311,25 @@
+ 		};
+ 	};
+ 
++	mpm: interrupt-controller {
++		compatible = "qcom,mpm";
++		qcom,rpm-msg-ram = <&apss_mpm>;
++		interrupts = <GIC_SPI 197 IRQ_TYPE_EDGE_RISING>;
++		mboxes = <&ipcc IPCC_CLIENT_AOP IPCC_MPROC_SIGNAL_SMP2P>;
++		interrupt-controller;
++		#interrupt-cells = <2>;
++		#power-domain-cells = <0>;
++		interrupt-parent = <&intc>;
++		qcom,mpm-pin-count = <96>;
++		qcom,mpm-pin-map = <5 296>,  /* Soundwire wake_irq */
++				   <12 422>, /* DWC3 ss_phy_irq */
++				   <86 183>, /* MPM wake, SPMI */
++				   <89 314>, /* TSENS0 0C */
++				   <90 315>, /* TSENS1 0C */
++				   <93 164>, /* DWC3 dm_hs_phy_irq */
++				   <94 165>; /* DWC3 dp_hs_phy_irq */
++	};
++
+ 	memory@80000000 {
+ 		device_type = "memory";
+ 		/* We expect the bootloader to fill in the size */
+@@ -486,6 +505,7 @@
+ 
+ 		CLUSTER_PD: power-domain-cpu-cluster0 {
+ 			#power-domain-cells = <0>;
++			power-domains = <&mpm>;
+ 			domain-idle-states = <&CLUSTER_SLEEP_0>;
+ 		};
+ 	};
+@@ -808,7 +828,7 @@
+ 			reg = <0 0x00500000 0 0x800000>;
+ 			interrupts = <GIC_SPI 227 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-ranges = <&tlmm 0 0 157>;
+-			/* TODO: Hook up MPM as wakeup-parent when it's there */
++			wakeup-parent = <&mpm>;
+ 			interrupt-controller;
+ 			gpio-controller;
+ 			#interrupt-cells = <2>;
+@@ -930,7 +950,7 @@
+ 			      <0 0x01c0a000 0 0x26000>;
+ 			reg-names = "core", "chnls", "obsrvr", "intr", "cnfg";
+ 			interrupt-names = "periph_irq";
+-			interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&mpm 86 IRQ_TYPE_LEVEL_HIGH>;
+ 			qcom,ee = <0>;
+ 			qcom,channel = <0>;
+ 			#address-cells = <2>;
+@@ -962,8 +982,15 @@
+ 		};
+ 
+ 		rpm_msg_ram: sram@45f0000 {
+-			compatible = "qcom,rpm-msg-ram";
++			compatible = "qcom,rpm-msg-ram", "mmio-sram";
+ 			reg = <0 0x045f0000 0 0x7000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
++			ranges = <0 0x0 0x045f0000 0x7000>;
++
++			apss_mpm: sram@1b8 {
++				reg = <0x1b8 0x48>;
++			};
+ 		};
+ 
+ 		sram@4690000 {
+@@ -1360,10 +1387,10 @@
+ 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <133333333>;
+ 
+-			interrupts = <GIC_SPI 302 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 302 IRQ_TYPE_LEVEL_HIGH>,
++					      <&mpm 12 IRQ_TYPE_LEVEL_HIGH>,
++					      <&mpm 93 IRQ_TYPE_EDGE_BOTH>,
++					      <&mpm 94 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq",
+ 					  "ss_phy_irq",
+ 					  "dm_hs_phy_irq",
+diff --git a/arch/arm64/boot/dts/qcom/sm8150-hdk.dts b/arch/arm64/boot/dts/qcom/sm8150-hdk.dts
+index bb161b536da46..f4c6e1309a7e9 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8150-hdk.dts
+@@ -127,8 +127,6 @@
+ 		vdda_sp_sensor:
+ 		vdda_ufs_2ln_core_1:
+ 		vdda_ufs_2ln_core_2:
+-		vdda_usb_ss_dp_core_1:
+-		vdda_usb_ss_dp_core_2:
+ 		vdda_qlink_lv:
+ 		vdda_qlink_lv_ck:
+ 		vreg_l5a_0p875: ldo5 {
+@@ -210,6 +208,12 @@
+ 			regulator-max-microvolt = <3008000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
++
++		vreg_l18a_0p8: ldo18 {
++			regulator-min-microvolt = <880000>;
++			regulator-max-microvolt = <880000>;
++			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++		};
+ 	};
+ 
+ 	regulators-1 {
+@@ -445,13 +449,13 @@
+ &usb_1_qmpphy {
+ 	status = "okay";
+ 	vdda-phy-supply = <&vreg_l3c_1p2>;
+-	vdda-pll-supply = <&vdda_usb_ss_dp_core_1>;
++	vdda-pll-supply = <&vreg_l18a_0p8>;
+ };
+ 
+ &usb_2_qmpphy {
+ 	status = "okay";
+ 	vdda-phy-supply = <&vreg_l3c_1p2>;
+-	vdda-pll-supply = <&vdda_usb_ss_dp_core_1>;
++	vdda-pll-supply = <&vreg_l5a_0p875>;
+ };
+ 
+ &usb_1 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 97623af13464c..0e1aa86758795 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -3932,6 +3932,7 @@
+ 				      "dp_phy_pll_link_clk",
+ 				      "dp_phy_pll_vco_div_clk";
+ 			power-domains = <&rpmhpd SM8150_MMCX>;
++			required-opps = <&rpmhpd_opp_low_svs>;
+ 			#clock-cells = <1>;
+ 			#reset-cells = <1>;
+ 			#power-domain-cells = <1>;
+@@ -4170,7 +4171,7 @@
+ 			compatible = "qcom,apss-wdt-sm8150", "qcom,kpss-wdt";
+ 			reg = <0 0x17c10000 0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+ 		timer@17c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index be970472f6c4e..72db75ca77310 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -6018,7 +6018,7 @@
+ 			compatible = "qcom,apss-wdt-sm8250", "qcom,kpss-wdt";
+ 			reg = <0 0x17c10000 0 0x1000>;
+ 			clocks = <&sleep_clk>;
+-			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 0 IRQ_TYPE_EDGE_RISING>;
+ 		};
+ 
+ 		timer@17c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index b46236235b7f4..1d597af15bb3e 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -919,9 +919,9 @@
+ 			};
+ 		};
+ 
+-		gpi_dma0: dma-controller@9800000 {
++		gpi_dma0: dma-controller@900000 {
+ 			compatible = "qcom,sm8350-gpi-dma", "qcom,sm6350-gpi-dma";
+-			reg = <0 0x09800000 0 0x60000>;
++			reg = <0 0x00900000 0 0x60000>;
+ 			interrupts = <GIC_SPI 244 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 1783fa78bdbcb..dc904ccb3d6c2 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -2309,7 +2309,7 @@
+ 				     <GIC_SPI 520 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "core", "wakeup";
+ 
+-			clocks = <&vamacro>;
++			clocks = <&txmacro>;
+ 			clock-names = "iface";
+ 			label = "TX";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 7b9ddde0b2c9a..5cf813a579d58 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -285,9 +285,9 @@
+ 				compatible = "arm,idle-state";
+ 				idle-state-name = "silver-rail-power-collapse";
+ 				arm,psci-suspend-param = <0x40000004>;
+-				entry-latency-us = <800>;
++				entry-latency-us = <550>;
+ 				exit-latency-us = <750>;
+-				min-residency-us = <4090>;
++				min-residency-us = <6700>;
+ 				local-timer-stop;
+ 			};
+ 
+@@ -296,8 +296,18 @@
+ 				idle-state-name = "gold-rail-power-collapse";
+ 				arm,psci-suspend-param = <0x40000004>;
+ 				entry-latency-us = <600>;
+-				exit-latency-us = <1550>;
+-				min-residency-us = <4791>;
++				exit-latency-us = <1300>;
++				min-residency-us = <8136>;
++				local-timer-stop;
++			};
++
++			PRIME_CPU_SLEEP_0: cpu-sleep-2-0 {
++				compatible = "arm,idle-state";
++				idle-state-name = "goldplus-rail-power-collapse";
++				arm,psci-suspend-param = <0x40000004>;
++				entry-latency-us = <500>;
++				exit-latency-us = <1350>;
++				min-residency-us = <7480>;
+ 				local-timer-stop;
+ 			};
+ 		};
+@@ -306,17 +316,17 @@
+ 			CLUSTER_SLEEP_0: cluster-sleep-0 {
+ 				compatible = "domain-idle-state";
+ 				arm,psci-suspend-param = <0x41000044>;
+-				entry-latency-us = <1050>;
+-				exit-latency-us = <2500>;
+-				min-residency-us = <5309>;
++				entry-latency-us = <750>;
++				exit-latency-us = <2350>;
++				min-residency-us = <9144>;
+ 			};
+ 
+ 			CLUSTER_SLEEP_1: cluster-sleep-1 {
+ 				compatible = "domain-idle-state";
+ 				arm,psci-suspend-param = <0x4100c344>;
+-				entry-latency-us = <2700>;
+-				exit-latency-us = <3500>;
+-				min-residency-us = <13959>;
++				entry-latency-us = <2800>;
++				exit-latency-us = <4400>;
++				min-residency-us = <10150>;
+ 			};
+ 		};
+ 	};
+@@ -400,7 +410,7 @@
+ 		CPU_PD7: power-domain-cpu7 {
+ 			#power-domain-cells = <0>;
+ 			power-domains = <&CLUSTER_PD>;
+-			domain-idle-states = <&BIG_CPU_SLEEP_0>;
++			domain-idle-states = <&PRIME_CPU_SLEEP_0>;
+ 		};
+ 
+ 		CLUSTER_PD: power-domain-cluster {
+@@ -2194,7 +2204,7 @@
+ 			interrupts = <GIC_SPI 496 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 520 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "core", "wakeup";
+-			clocks = <&lpass_vamacro>;
++			clocks = <&lpass_txmacro>;
+ 			clock-names = "iface";
+ 			label = "TX";
+ 
+@@ -2923,8 +2933,8 @@
+ 
+ 			interrupts-extended = <&intc GIC_SPI 130 IRQ_TYPE_LEVEL_HIGH>,
+ 					      <&pdc 17 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 15 IRQ_TYPE_EDGE_RISING>,
+-					      <&pdc 14 IRQ_TYPE_EDGE_RISING>;
++					      <&pdc 15 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 14 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq",
+ 					  "ss_phy_irq",
+ 					  "dm_hs_phy_irq",
+diff --git a/arch/arm64/boot/dts/renesas/r8a779g0-white-hawk-cpu.dtsi b/arch/arm64/boot/dts/renesas/r8a779g0-white-hawk-cpu.dtsi
+index bb4a5270f71b6..913f70fe6c5cd 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779g0-white-hawk-cpu.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779g0-white-hawk-cpu.dtsi
+@@ -187,6 +187,9 @@
+ };
+ 
+ &hscif0 {
++	pinctrl-0 = <&hscif0_pins>;
++	pinctrl-names = "default";
++
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts b/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
+index 1c6d83b47cd21..6ecdf5d283390 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
+@@ -455,7 +455,7 @@
+ &pinctrl {
+ 	leds {
+ 		sys_led_pin: sys-status-led-pin {
+-			rockchip,pins = <0 RK_PC7 RK_FUNC_GPIO &pcfg_pull_none>;
++			rockchip,pins = <0 RK_PC5 RK_FUNC_GPIO &pcfg_pull_none>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+index 4ae7fdc5221b2..ccd708b09acd6 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+@@ -462,7 +462,7 @@
+ 			     <193>, <194>, <195>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		ti,ngpio = <87>;
++		ti,ngpio = <92>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 77 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 77 0>;
+@@ -480,7 +480,7 @@
+ 			     <183>, <184>, <185>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		ti,ngpio = <88>;
++		ti,ngpio = <52>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 78 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 78 0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi b/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi
+index ba1c14a54acf4..b849648d51f91 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi
+@@ -14,6 +14,16 @@
+ 
+ / {
+ 	aliases {
++		serial0 = &wkup_uart0;
++		serial1 = &mcu_uart0;
++		serial2 = &main_uart0;
++		serial3 = &main_uart1;
++		i2c0 = &wkup_i2c0;
++		i2c1 = &mcu_i2c0;
++		i2c2 = &main_i2c0;
++		i2c3 = &main_i2c1;
++		i2c4 = &main_i2c2;
++		i2c5 = &main_i2c3;
+ 		spi0 = &mcu_spi0;
+ 		mmc0 = &sdhci1;
+ 		mmc1 = &sdhci0;
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index 5ebb87f467de5..29048d6577cf6 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -1034,7 +1034,7 @@
+ 		assigned-clocks = <&k3_clks 67 2>;
+ 		assigned-clock-parents = <&k3_clks 67 5>;
+ 
+-		interrupts = <GIC_SPI 166 IRQ_TYPE_EDGE_RISING>;
++		interrupts = <GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>;
+ 
+ 		dma-coherent;
+ 
+diff --git a/arch/arm64/boot/dts/xilinx/Makefile b/arch/arm64/boot/dts/xilinx/Makefile
+index 5e40c0b4fa0a9..1068b0fa8e984 100644
+--- a/arch/arm64/boot/dts/xilinx/Makefile
++++ b/arch/arm64/boot/dts/xilinx/Makefile
+@@ -22,11 +22,10 @@ dtb-$(CONFIG_ARCH_ZYNQMP) += zynqmp-sm-k26-revA.dtb
+ dtb-$(CONFIG_ARCH_ZYNQMP) += zynqmp-smk-k26-revA.dtb
+ 
+ zynqmp-sm-k26-revA-sck-kv-g-revA-dtbs := zynqmp-sm-k26-revA.dtb zynqmp-sck-kv-g-revA.dtbo
++dtb-$(CONFIG_ARCH_ZYNQMP) += zynqmp-sm-k26-revA-sck-kv-g-revA.dtb
+ zynqmp-sm-k26-revA-sck-kv-g-revB-dtbs := zynqmp-sm-k26-revA.dtb zynqmp-sck-kv-g-revB.dtbo
++dtb-$(CONFIG_ARCH_ZYNQMP) += zynqmp-sm-k26-revA-sck-kv-g-revB.dtb
+ zynqmp-smk-k26-revA-sck-kv-g-revA-dtbs := zynqmp-smk-k26-revA.dtb zynqmp-sck-kv-g-revA.dtbo
++dtb-$(CONFIG_ARCH_ZYNQMP) += zynqmp-smk-k26-revA-sck-kv-g-revA.dtb
+ zynqmp-smk-k26-revA-sck-kv-g-revB-dtbs := zynqmp-smk-k26-revA.dtb zynqmp-sck-kv-g-revB.dtbo
+-
+-zynqmp-sm-k26-revA-sck-kr-g-revA-dtbs := zynqmp-sm-k26-revA.dtb zynqmp-sck-kr-g-revA.dtbo
+-zynqmp-sm-k26-revA-sck-kr-g-revB-dtbs := zynqmp-sm-k26-revA.dtb zynqmp-sck-kr-g-revB.dtbo
+-zynqmp-smk-k26-revA-sck-kr-g-revA-dtbs := zynqmp-smk-k26-revA.dtb zynqmp-sck-kr-g-revA.dtbo
+-zynqmp-smk-k26-revA-sck-kr-g-revB-dtbs := zynqmp-smk-k26-revA.dtb zynqmp-sck-kr-g-revB.dtbo
++dtb-$(CONFIG_ARCH_ZYNQMP) += zynqmp-smk-k26-revA-sck-kv-g-revB.dtb
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 20d7ef82de90a..b3f64144b5cd9 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -1107,12 +1107,13 @@ static int za_set(struct task_struct *target,
+ 		}
+ 	}
+ 
+-	/* Allocate/reinit ZA storage */
+-	sme_alloc(target, true);
+-	if (!target->thread.sme_state) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
++	/*
++	 * Only flush the storage if PSTATE.ZA was not already set,
++	 * otherwise preserve any existing data.
++	 */
++	sme_alloc(target, !thread_za_enabled(&target->thread));
++	if (!target->thread.sme_state)
++		return -ENOMEM;
+ 
+ 	/* If there is no data then disable ZA */
+ 	if (!count) {
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index 2dad2d095160d..e2764d0ffa9f3 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -590,7 +590,11 @@ static struct vgic_irq *vgic_its_check_cache(struct kvm *kvm, phys_addr_t db,
+ 	unsigned long flags;
+ 
+ 	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);
++
+ 	irq = __vgic_its_check_cache(dist, db, devid, eventid);
++	if (irq)
++		vgic_get_irq_kref(irq);
++
+ 	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
+ 
+ 	return irq;
+@@ -769,6 +773,7 @@ int vgic_its_inject_cached_translation(struct kvm *kvm, struct kvm_msi *msi)
+ 	raw_spin_lock_irqsave(&irq->irq_lock, flags);
+ 	irq->pending_latch = true;
+ 	vgic_queue_irq_unlock(kvm, irq, flags);
++	vgic_put_irq(kvm, irq);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+index a764b0ab8bf91..2533f264b954e 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+@@ -365,19 +365,26 @@ static int vgic_v3_uaccess_write_pending(struct kvm_vcpu *vcpu,
+ 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+ 
+ 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+-		if (test_bit(i, &val)) {
+-			/*
+-			 * pending_latch is set irrespective of irq type
+-			 * (level or edge) to avoid dependency that VM should
+-			 * restore irq config before pending info.
+-			 */
+-			irq->pending_latch = true;
+-			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+-		} else {
++
++		/*
++		 * pending_latch is set irrespective of irq type
++		 * (level or edge) to avoid dependency that VM should
++		 * restore irq config before pending info.
++		 */
++		irq->pending_latch = test_bit(i, &val);
++
++		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
++			irq_set_irqchip_state(irq->host_irq,
++					      IRQCHIP_STATE_PENDING,
++					      irq->pending_latch);
+ 			irq->pending_latch = false;
+-			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+ 		}
+ 
++		if (irq->pending_latch)
++			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
++		else
++			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
++
+ 		vgic_put_irq(vcpu->kvm, irq);
+ 	}
+ 
+diff --git a/arch/csky/include/asm/jump_label.h b/arch/csky/include/asm/jump_label.h
+index d488ba6084bc6..98a3f4b168bd2 100644
+--- a/arch/csky/include/asm/jump_label.h
++++ b/arch/csky/include/asm/jump_label.h
+@@ -43,5 +43,10 @@ label:
+ 	return true;
+ }
+ 
++enum jump_label_type;
++void arch_jump_label_transform_static(struct jump_entry *entry,
++				      enum jump_label_type type);
++#define arch_jump_label_transform_static arch_jump_label_transform_static
++
+ #endif  /* __ASSEMBLY__ */
+ #endif	/* __ASM_CSKY_JUMP_LABEL_H */
+diff --git a/arch/loongarch/include/asm/elf.h b/arch/loongarch/include/asm/elf.h
+index 9b16a3b8e7060..f16bd42456e4c 100644
+--- a/arch/loongarch/include/asm/elf.h
++++ b/arch/loongarch/include/asm/elf.h
+@@ -241,8 +241,6 @@ void loongarch_dump_regs64(u64 *uregs, const struct pt_regs *regs);
+ do {									\
+ 	current->thread.vdso = &vdso_info;				\
+ 									\
+-	loongarch_set_personality_fcsr(state);				\
+-									\
+ 	if (personality(current->personality) != PER_LINUX)		\
+ 		set_personality(PER_LINUX);				\
+ } while (0)
+@@ -259,7 +257,6 @@ do {									\
+ 	clear_thread_flag(TIF_32BIT_ADDR);				\
+ 									\
+ 	current->thread.vdso = &vdso_info;				\
+-	loongarch_set_personality_fcsr(state);				\
+ 									\
+ 	p = personality(current->personality);				\
+ 	if (p != PER_LINUX32 && p != PER_LINUX)				\
+@@ -340,6 +337,4 @@ extern int arch_elf_pt_proc(void *ehdr, void *phdr, struct file *elf,
+ extern int arch_check_elf(void *ehdr, bool has_interpreter, void *interp_ehdr,
+ 			  struct arch_elf_state *state);
+ 
+-extern void loongarch_set_personality_fcsr(struct arch_elf_state *state);
+-
+ #endif /* _ASM_ELF_H */
+diff --git a/arch/loongarch/kernel/elf.c b/arch/loongarch/kernel/elf.c
+index 183e94fc9c69c..0fa81ced28dcd 100644
+--- a/arch/loongarch/kernel/elf.c
++++ b/arch/loongarch/kernel/elf.c
+@@ -23,8 +23,3 @@ int arch_check_elf(void *_ehdr, bool has_interpreter, void *_interp_ehdr,
+ {
+ 	return 0;
+ }
+-
+-void loongarch_set_personality_fcsr(struct arch_elf_state *state)
+-{
+-	current->thread.fpu.fcsr = boot_cpu_data.fpu_csr0;
+-}
+diff --git a/arch/loongarch/kernel/process.c b/arch/loongarch/kernel/process.c
+index 767d94cce0de0..f2ff8b5d591e4 100644
+--- a/arch/loongarch/kernel/process.c
++++ b/arch/loongarch/kernel/process.c
+@@ -85,6 +85,7 @@ void start_thread(struct pt_regs *regs, unsigned long pc, unsigned long sp)
+ 	regs->csr_euen = euen;
+ 	lose_fpu(0);
+ 	lose_lbt(0);
++	current->thread.fpu.fcsr = boot_cpu_data.fpu_csr0;
+ 
+ 	clear_thread_flag(TIF_LSX_CTX_LIVE);
+ 	clear_thread_flag(TIF_LASX_CTX_LIVE);
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index 4fcd6cd6da234..5c634de4ee367 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -465,7 +465,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ 	const u8 dst = regmap[insn->dst_reg];
+ 	const s16 off = insn->off;
+ 	const s32 imm = insn->imm;
+-	const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
+ 	const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || BPF_CLASS(insn->code) == BPF_JMP32;
+ 
+ 	switch (code) {
+@@ -923,8 +922,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ 
+ 	/* dst = imm64 */
+ 	case BPF_LD | BPF_IMM | BPF_DW:
++	{
++		const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
++
+ 		move_imm(ctx, dst, imm64, is32);
+ 		return 1;
++	}
+ 
+ 	/* dst = *(size *)(src + off) */
+ 	case BPF_LDX | BPF_MEM | BPF_B:
+diff --git a/arch/mips/alchemy/devboards/db1200.c b/arch/mips/alchemy/devboards/db1200.c
+index f521874ebb07b..67f067706af27 100644
+--- a/arch/mips/alchemy/devboards/db1200.c
++++ b/arch/mips/alchemy/devboards/db1200.c
+@@ -847,7 +847,7 @@ int __init db1200_dev_setup(void)
+ 	i2c_register_board_info(0, db1200_i2c_devs,
+ 				ARRAY_SIZE(db1200_i2c_devs));
+ 	spi_register_board_info(db1200_spi_devs,
+-				ARRAY_SIZE(db1200_i2c_devs));
++				ARRAY_SIZE(db1200_spi_devs));
+ 
+ 	/* SWITCHES:	S6.8 I2C/SPI selector  (OFF=I2C	 ON=SPI)
+ 	 *		S6.7 AC97/I2S selector (OFF=AC97 ON=I2S)
+diff --git a/arch/mips/alchemy/devboards/db1550.c b/arch/mips/alchemy/devboards/db1550.c
+index fd91d9c9a2525..6c6837181f555 100644
+--- a/arch/mips/alchemy/devboards/db1550.c
++++ b/arch/mips/alchemy/devboards/db1550.c
+@@ -589,7 +589,7 @@ int __init db1550_dev_setup(void)
+ 	i2c_register_board_info(0, db1550_i2c_devs,
+ 				ARRAY_SIZE(db1550_i2c_devs));
+ 	spi_register_board_info(db1550_spi_devs,
+-				ARRAY_SIZE(db1550_i2c_devs));
++				ARRAY_SIZE(db1550_spi_devs));
+ 
+ 	c = clk_get(NULL, "psc0_intclk");
+ 	if (!IS_ERR(c)) {
+diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
+index 27415a288adf5..dc397f630c660 100644
+--- a/arch/mips/include/asm/dmi.h
++++ b/arch/mips/include/asm/dmi.h
+@@ -5,7 +5,7 @@
+ #include <linux/io.h>
+ #include <linux/memblock.h>
+ 
+-#define dmi_early_remap(x, l)		ioremap_cache(x, l)
++#define dmi_early_remap(x, l)		ioremap(x, l)
+ #define dmi_early_unmap(x, l)		iounmap(x)
+ #define dmi_remap(x, l)			ioremap_cache(x, l)
+ #define dmi_unmap(x)			iounmap(x)
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 2d2ca024bd47a..0461ab49e8f1a 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -321,11 +321,11 @@ static void __init bootmem_init(void)
+ 		panic("Incorrect memory mapping !!!");
+ 
+ 	if (max_pfn > PFN_DOWN(HIGHMEM_START)) {
++		max_low_pfn = PFN_DOWN(HIGHMEM_START);
+ #ifdef CONFIG_HIGHMEM
+-		highstart_pfn = PFN_DOWN(HIGHMEM_START);
++		highstart_pfn = max_low_pfn;
+ 		highend_pfn = max_pfn;
+ #else
+-		max_low_pfn = PFN_DOWN(HIGHMEM_START);
+ 		max_pfn = max_low_pfn;
+ #endif
+ 	}
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 1f11a62809f20..dbb6c44d59382 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -858,6 +858,7 @@ config THREAD_SHIFT
+ 	int "Thread shift" if EXPERT
+ 	range 13 15
+ 	default "15" if PPC_256K_PAGES
++	default "15" if PPC_PSERIES || PPC_POWERNV
+ 	default "14" if PPC64
+ 	default "13"
+ 	help
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index eddc031c4b95f..87d65bdd3ecae 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -544,6 +544,21 @@ static int __init rtas_token_to_function_xarray_init(void)
+ }
+ arch_initcall(rtas_token_to_function_xarray_init);
+ 
++/*
++ * For use by sys_rtas(), where the token value is provided by user
++ * space and we don't want to warn on failed lookups.
++ */
++static const struct rtas_function *rtas_token_to_function_untrusted(s32 token)
++{
++	return xa_load(&rtas_token_to_function_xarray, token);
++}
++
++/*
++ * Reverse lookup for deriving the function descriptor from a
++ * known-good token value in contexts where the former is not already
++ * available. @token must be valid, e.g. derived from the result of a
++ * prior lookup against the function table.
++ */
+ static const struct rtas_function *rtas_token_to_function(s32 token)
+ {
+ 	const struct rtas_function *func;
+@@ -551,7 +566,7 @@ static const struct rtas_function *rtas_token_to_function(s32 token)
+ 	if (WARN_ONCE(token < 0, "invalid token %d", token))
+ 		return NULL;
+ 
+-	func = xa_load(&rtas_token_to_function_xarray, token);
++	func = rtas_token_to_function_untrusted(token);
+ 
+ 	if (WARN_ONCE(!func, "unexpected failed lookup for token %d", token))
+ 		return NULL;
+@@ -1726,7 +1741,7 @@ static bool block_rtas_call(int token, int nargs,
+ 	 * If this token doesn't correspond to a function the kernel
+ 	 * understands, you're not allowed to call it.
+ 	 */
+-	func = rtas_token_to_function(token);
++	func = rtas_token_to_function_untrusted(token);
+ 	if (!func)
+ 		goto err;
+ 	/*
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 1ed6ec140701d..002a7573a5d44 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4736,13 +4736,19 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ 
+ 	if (!nested) {
+ 		kvmppc_core_prepare_to_enter(vcpu);
+-		if (__kvmppc_get_msr_hv(vcpu) & MSR_EE) {
+-			if (xive_interrupt_pending(vcpu))
++		if (test_bit(BOOK3S_IRQPRIO_EXTERNAL,
++			     &vcpu->arch.pending_exceptions) ||
++		    xive_interrupt_pending(vcpu)) {
++			/*
++			 * For nested HV, don't synthesize but always pass MER,
++			 * the L0 will be able to optimise that more
++			 * effectively than manipulating registers directly.
++			 */
++			if (!kvmhv_on_pseries() && (__kvmppc_get_msr_hv(vcpu) & MSR_EE))
+ 				kvmppc_inject_interrupt_hv(vcpu,
+-						BOOK3S_INTERRUPT_EXTERNAL, 0);
+-		} else if (test_bit(BOOK3S_IRQPRIO_EXTERNAL,
+-			     &vcpu->arch.pending_exceptions)) {
+-			lpcr |= LPCR_MER;
++							   BOOK3S_INTERRUPT_EXTERNAL, 0);
++			else
++				lpcr |= LPCR_MER;
+ 		}
+ 	} else if (vcpu->arch.pending_exceptions ||
+ 		   vcpu->arch.doorbell_request ||
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 51ad0397c17ab..6eac63e79a899 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -45,7 +45,7 @@ obj-$(CONFIG_FUNCTION_ERROR_INJECTION)	+= error-inject.o
+ # so it is only needed for modules, and only for older linkers which
+ # do not support --save-restore-funcs
+ ifndef CONFIG_LD_IS_BFD
+-extra-$(CONFIG_PPC64)	+= crtsavres.o
++always-$(CONFIG_PPC64)	+= crtsavres.o
+ endif
+ 
+ obj-$(CONFIG_PPC_BOOK3S_64) += copyuser_power7.o copypage_power7.o \
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index 39dbe6b348df2..27f18119fda17 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -534,6 +534,9 @@ static ssize_t affinity_domain_via_partition_show(struct device *dev, struct dev
+ 	if (!ret)
+ 		goto parse_result;
+ 
++	if (ret && (ret != H_PARAMETER))
++		goto out;
++
+ 	/*
+ 	 * ret value as 'H_PARAMETER' implies that the current buffer size
+ 	 * can't accommodate all the information, and a partial buffer
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index 5d12ca386c1fc..8664a7d297ad8 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -299,6 +299,8 @@ static int update_events_in_group(struct device_node *node, struct imc_pmu *pmu)
+ 	attr_group->attrs = attrs;
+ 	do {
+ 		ev_val_str = kasprintf(GFP_KERNEL, "event=0x%x", pmu->events[i].value);
++		if (!ev_val_str)
++			continue;
+ 		dev_str = device_str_attr_create(pmu->events[i].name, ev_val_str);
+ 		if (!dev_str)
+ 			continue;
+@@ -306,6 +308,8 @@ static int update_events_in_group(struct device_node *node, struct imc_pmu *pmu)
+ 		attrs[j++] = dev_str;
+ 		if (pmu->events[i].scale) {
+ 			ev_scale_str = kasprintf(GFP_KERNEL, "%s.scale", pmu->events[i].name);
++			if (!ev_scale_str)
++				continue;
+ 			dev_str = device_str_attr_create(ev_scale_str, pmu->events[i].scale);
+ 			if (!dev_str)
+ 				continue;
+@@ -315,6 +319,8 @@ static int update_events_in_group(struct device_node *node, struct imc_pmu *pmu)
+ 
+ 		if (pmu->events[i].unit) {
+ 			ev_unit_str = kasprintf(GFP_KERNEL, "%s.unit", pmu->events[i].name);
++			if (!ev_unit_str)
++				continue;
+ 			dev_str = device_str_attr_create(ev_unit_str, pmu->events[i].unit);
+ 			if (!dev_str)
+ 				continue;
+diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms/44x/Kconfig
+index 1624ebf95497b..35a1f4b9f8272 100644
+--- a/arch/powerpc/platforms/44x/Kconfig
++++ b/arch/powerpc/platforms/44x/Kconfig
+@@ -173,6 +173,7 @@ config ISS4xx
+ config CURRITUCK
+ 	bool "IBM Currituck (476fpe) Support"
+ 	depends on PPC_47x
++	select I2C
+ 	select SWIOTLB
+ 	select 476FPE
+ 	select FORCE_PCI
+diff --git a/arch/powerpc/platforms/powernv/opal-irqchip.c b/arch/powerpc/platforms/powernv/opal-irqchip.c
+index f9a7001dacb7a..56a1f7ce78d2c 100644
+--- a/arch/powerpc/platforms/powernv/opal-irqchip.c
++++ b/arch/powerpc/platforms/powernv/opal-irqchip.c
+@@ -275,6 +275,8 @@ int __init opal_event_init(void)
+ 		else
+ 			name = kasprintf(GFP_KERNEL, "opal");
+ 
++		if (!name)
++			continue;
+ 		/* Install interrupt handler */
+ 		rc = request_irq(r->start, opal_interrupt, r->flags & IRQD_TRIGGER_MASK,
+ 				 name, NULL);
+diff --git a/arch/powerpc/platforms/powernv/opal-powercap.c b/arch/powerpc/platforms/powernv/opal-powercap.c
+index 7bfe4cbeb35a9..ea917266aa172 100644
+--- a/arch/powerpc/platforms/powernv/opal-powercap.c
++++ b/arch/powerpc/platforms/powernv/opal-powercap.c
+@@ -196,6 +196,12 @@ void __init opal_powercap_init(void)
+ 
+ 		j = 0;
+ 		pcaps[i].pg.name = kasprintf(GFP_KERNEL, "%pOFn", node);
++		if (!pcaps[i].pg.name) {
++			kfree(pcaps[i].pattrs);
++			kfree(pcaps[i].pg.attrs);
++			goto out_pcaps_pattrs;
++		}
++
+ 		if (has_min) {
+ 			powercap_add_attr(min, "powercap-min",
+ 					  &pcaps[i].pattrs[j]);
+diff --git a/arch/powerpc/platforms/powernv/opal-xscom.c b/arch/powerpc/platforms/powernv/opal-xscom.c
+index 262cd6fac9071..748c2b97fa537 100644
+--- a/arch/powerpc/platforms/powernv/opal-xscom.c
++++ b/arch/powerpc/platforms/powernv/opal-xscom.c
+@@ -165,6 +165,11 @@ static int scom_debug_init_one(struct dentry *root, struct device_node *dn,
+ 	ent->chip = chip;
+ 	snprintf(ent->name, 16, "%08x", chip);
+ 	ent->path.data = (void *)kasprintf(GFP_KERNEL, "%pOF", dn);
++	if (!ent->path.data) {
++		kfree(ent);
++		return -ENOMEM;
++	}
++
+ 	ent->path.size = strlen((char *)ent->path.data);
+ 
+ 	dir = debugfs_create_dir(ent->name, root);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index a43bfb01720ae..6f2eebae7bee9 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -436,14 +436,15 @@ static int dlpar_memory_remove_by_index(u32 drc_index)
+ 		}
+ 	}
+ 
+-	if (!lmb_found)
++	if (!lmb_found) {
++		pr_debug("Failed to look up LMB for drc index %x\n", drc_index);
+ 		rc = -EINVAL;
+-
+-	if (rc)
++	} else if (rc) {
+ 		pr_debug("Failed to hot-remove memory at %llx\n",
+ 			 lmb->base_addr);
+-	else
++	} else {
+ 		pr_debug("Memory at %llx was hot-removed\n", lmb->base_addr);
++	}
+ 
+ 	return rc;
+ }
+diff --git a/arch/riscv/include/asm/sections.h b/arch/riscv/include/asm/sections.h
+index 32336e8a17cb0..a393d5035c543 100644
+--- a/arch/riscv/include/asm/sections.h
++++ b/arch/riscv/include/asm/sections.h
+@@ -13,6 +13,7 @@ extern char _start_kernel[];
+ extern char __init_data_begin[], __init_data_end[];
+ extern char __init_text_begin[], __init_text_end[];
+ extern char __alt_start[], __alt_end[];
++extern char __exittext_begin[], __exittext_end[];
+ 
+ static inline bool is_va_kernel_text(uintptr_t va)
+ {
+diff --git a/arch/riscv/include/asm/xip_fixup.h b/arch/riscv/include/asm/xip_fixup.h
+index d4ffc3c37649f..b65bf6306f69c 100644
+--- a/arch/riscv/include/asm/xip_fixup.h
++++ b/arch/riscv/include/asm/xip_fixup.h
+@@ -13,7 +13,7 @@
+         add \reg, \reg, t0
+ .endm
+ .macro XIP_FIXUP_FLASH_OFFSET reg
+-	la t1, __data_loc
++	la t0, __data_loc
+ 	REG_L t1, _xip_phys_offset
+ 	sub \reg, \reg, t1
+ 	add \reg, \reg, t0
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index aac019ed63b1b..862834bb1d643 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -894,7 +894,8 @@ void *module_alloc(unsigned long size)
+ {
+ 	return __vmalloc_node_range(size, 1, MODULES_VADDR,
+ 				    MODULES_END, GFP_KERNEL,
+-				    PAGE_KERNEL, 0, NUMA_NO_NODE,
++				    PAGE_KERNEL, VM_FLUSH_RESET_PERMS,
++				    NUMA_NO_NODE,
+ 				    __builtin_return_address(0));
+ }
+ #endif
+diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
+index 13ee7bf589a15..37e87fdcf6a00 100644
+--- a/arch/riscv/kernel/patch.c
++++ b/arch/riscv/kernel/patch.c
+@@ -14,6 +14,7 @@
+ #include <asm/fixmap.h>
+ #include <asm/ftrace.h>
+ #include <asm/patch.h>
++#include <asm/sections.h>
+ 
+ struct patch_insn {
+ 	void *addr;
+@@ -25,6 +26,14 @@ struct patch_insn {
+ int riscv_patch_in_stop_machine = false;
+ 
+ #ifdef CONFIG_MMU
++
++static inline bool is_kernel_exittext(uintptr_t addr)
++{
++	return system_state < SYSTEM_RUNNING &&
++		addr >= (uintptr_t)__exittext_begin &&
++		addr < (uintptr_t)__exittext_end;
++}
++
+ /*
+  * The fix_to_virt(, idx) needs a const value (not a dynamic variable of
+  * reg-a0) or BUILD_BUG_ON failed with "idx >= __end_of_fixed_addresses".
+@@ -35,7 +44,7 @@ static __always_inline void *patch_map(void *addr, const unsigned int fixmap)
+ 	uintptr_t uintaddr = (uintptr_t) addr;
+ 	struct page *page;
+ 
+-	if (core_kernel_text(uintaddr))
++	if (core_kernel_text(uintaddr) || is_kernel_exittext(uintaddr))
+ 		page = phys_to_page(__pa_symbol(addr));
+ 	else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
+ 		page = vmalloc_to_page(addr);
+diff --git a/arch/riscv/kernel/vmlinux-xip.lds.S b/arch/riscv/kernel/vmlinux-xip.lds.S
+index 50767647fbc64..8c3daa1b05313 100644
+--- a/arch/riscv/kernel/vmlinux-xip.lds.S
++++ b/arch/riscv/kernel/vmlinux-xip.lds.S
+@@ -29,10 +29,12 @@ SECTIONS
+ 	HEAD_TEXT_SECTION
+ 	INIT_TEXT_SECTION(PAGE_SIZE)
+ 	/* we have to discard exit text and such at runtime, not link time */
++	__exittext_begin = .;
+ 	.exit.text :
+ 	{
+ 		EXIT_TEXT
+ 	}
++	__exittext_end = .;
+ 
+ 	.text : {
+ 		_text = .;
+diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
+index 492dd4b8f3d69..002ca58dd998c 100644
+--- a/arch/riscv/kernel/vmlinux.lds.S
++++ b/arch/riscv/kernel/vmlinux.lds.S
+@@ -69,10 +69,12 @@ SECTIONS
+ 		__soc_builtin_dtb_table_end = .;
+ 	}
+ 	/* we have to discard exit text and such at runtime, not link time */
++	__exittext_begin = .;
+ 	.exit.text :
+ 	{
+ 		EXIT_TEXT
+ 	}
++	__exittext_end = .;
+ 
+ 	__init_text_end = .;
+ 	. = ALIGN(SECTION_ALIGN);
+diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
+index fc5fc4f785c48..01398fee5cf82 100644
+--- a/arch/riscv/mm/pageattr.c
++++ b/arch/riscv/mm/pageattr.c
+@@ -305,8 +305,13 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
+ 				goto unlock;
+ 		}
+ 	} else if (is_kernel_mapping(start) || is_linear_mapping(start)) {
+-		lm_start = (unsigned long)lm_alias(start);
+-		lm_end = (unsigned long)lm_alias(end);
++		if (is_kernel_mapping(start)) {
++			lm_start = (unsigned long)lm_alias(start);
++			lm_end = (unsigned long)lm_alias(end);
++		} else {
++			lm_start = start;
++			lm_end = end;
++		}
+ 
+ 		ret = split_linear_mapping(lm_start, lm_end);
+ 		if (ret)
+@@ -378,7 +383,7 @@ int set_direct_map_invalid_noflush(struct page *page)
+ int set_direct_map_default_noflush(struct page *page)
+ {
+ 	return __set_memory((unsigned long)page_address(page), 1,
+-			    PAGE_KERNEL, __pgprot(0));
++			    PAGE_KERNEL, __pgprot(_PAGE_EXEC));
+ }
+ 
+ #ifdef CONFIG_DEBUG_PAGEALLOC
+diff --git a/arch/s390/include/asm/pci_io.h b/arch/s390/include/asm/pci_io.h
+index 287bb88f76986..2686bee800e3d 100644
+--- a/arch/s390/include/asm/pci_io.h
++++ b/arch/s390/include/asm/pci_io.h
+@@ -11,6 +11,8 @@
+ /* I/O size constraints */
+ #define ZPCI_MAX_READ_SIZE	8
+ #define ZPCI_MAX_WRITE_SIZE	128
++#define ZPCI_BOUNDARY_SIZE	(1 << 12)
++#define ZPCI_BOUNDARY_MASK	(ZPCI_BOUNDARY_SIZE - 1)
+ 
+ /* I/O Map */
+ #define ZPCI_IOMAP_SHIFT		48
+@@ -125,16 +127,18 @@ out:
+ int zpci_write_block(volatile void __iomem *dst, const void *src,
+ 		     unsigned long len);
+ 
+-static inline u8 zpci_get_max_write_size(u64 src, u64 dst, int len, int max)
++static inline int zpci_get_max_io_size(u64 src, u64 dst, int len, int max)
+ {
+-	int count = len > max ? max : len, size = 1;
++	int offset = dst & ZPCI_BOUNDARY_MASK;
++	int size;
+ 
+-	while (!(src & 0x1) && !(dst & 0x1) && ((size << 1) <= count)) {
+-		dst = dst >> 1;
+-		src = src >> 1;
+-		size = size << 1;
+-	}
+-	return size;
++	size = min3(len, ZPCI_BOUNDARY_SIZE - offset, max);
++	if (IS_ALIGNED(src, 8) && IS_ALIGNED(dst, 8) && IS_ALIGNED(size, 8))
++		return size;
++
++	if (size >= 8)
++		return 8;
++	return rounddown_pow_of_two(size);
+ }
+ 
+ static inline int zpci_memcpy_fromio(void *dst,
+@@ -144,9 +148,9 @@ static inline int zpci_memcpy_fromio(void *dst,
+ 	int size, rc = 0;
+ 
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) src,
+-					       (u64) dst, n,
+-					       ZPCI_MAX_READ_SIZE);
++		size = zpci_get_max_io_size((u64 __force) src,
++					    (u64) dst, n,
++					    ZPCI_MAX_READ_SIZE);
+ 		rc = zpci_read_single(dst, src, size);
+ 		if (rc)
+ 			break;
+@@ -166,9 +170,9 @@ static inline int zpci_memcpy_toio(volatile void __iomem *dst,
+ 		return -EINVAL;
+ 
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) dst,
+-					       (u64) src, n,
+-					       ZPCI_MAX_WRITE_SIZE);
++		size = zpci_get_max_io_size((u64 __force) dst,
++					    (u64) src, n,
++					    ZPCI_MAX_WRITE_SIZE);
+ 		if (size > 8) /* main path */
+ 			rc = zpci_write_block(dst, src, size);
+ 		else
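
The pci_io.h hunk above replaces the old bit-shifting loop in zpci_get_max_write_size() with an explicit computation: clamp each chunk to the remaining length, the distance to the next 4K boundary, and the per-op maximum, then fall back to a single 8-byte op or the largest power of two when alignment rules out a block operation. Below is a minimal userspace rendition of that logic; min3(), IS_ALIGNED() and rounddown_pow_of_two() are reimplemented here purely for illustration, and the constants match the patch.

#include <stdint.h>
#include <stdio.h>

#define ZPCI_BOUNDARY_SIZE	(1 << 12)
#define ZPCI_BOUNDARY_MASK	(ZPCI_BOUNDARY_SIZE - 1)
#define MIN(a, b)		((a) < (b) ? (a) : (b))
#define MIN3(a, b, c)		MIN(MIN(a, b), c)
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

static int rounddown_pow_of_two(int n)
{
	int p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

static int get_max_io_size(uint64_t src, uint64_t dst, int len, int max)
{
	int offset = dst & ZPCI_BOUNDARY_MASK;
	int size = MIN3(len, ZPCI_BOUNDARY_SIZE - offset, max);

	/* Fully 8-byte aligned chunks can go out as one block op. */
	if (IS_ALIGNED(src, 8) && IS_ALIGNED(dst, 8) && IS_ALIGNED(size, 8))
		return size;
	/* Otherwise at most one 8-byte single op ... */
	if (size >= 8)
		return 8;
	/* ... or the largest power of two that still fits. */
	return rounddown_pow_of_two(size);
}

int main(void)
{
	uint64_t src = 0x1000, dst = 0xff8;	/* 8 bytes below a 4K boundary */
	int n = 64;

	while (n > 0) {
		int size = get_max_io_size(src, dst, n, 128);

		printf("copy %2d bytes at dst=0x%llx\n", size,
		       (unsigned long long)dst);
		src += size;
		dst += size;
		n -= size;
	}
	return 0;
}

Run against the sample addresses, the first chunk is cut to 8 bytes so it stops exactly at the 4K boundary, and the remainder goes out as one aligned 56-byte block, which is the whole point of the ZPCI_BOUNDARY_MASK clamp.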
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index bf06b7283f0ca..c7fbeedeb0a48 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -779,7 +779,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 				 int i, bool extra_pass, u32 stack_depth)
+ {
+ 	struct bpf_insn *insn = &fp->insnsi[i];
+-	s16 branch_oc_off = insn->off;
++	s32 branch_oc_off = insn->off;
+ 	u32 dst_reg = insn->dst_reg;
+ 	u32 src_reg = insn->src_reg;
+ 	int last, insn_count = 1;
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index 5880893329310..a90499c087f0c 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -97,9 +97,9 @@ static inline int __memcpy_toio_inuser(void __iomem *dst,
+ 		return -EINVAL;
+ 
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) dst,
+-					       (u64 __force) src, n,
+-					       ZPCI_MAX_WRITE_SIZE);
++		size = zpci_get_max_io_size((u64 __force) dst,
++					    (u64 __force) src, n,
++					    ZPCI_MAX_WRITE_SIZE);
+ 		if (size > 8) /* main path */
+ 			rc = __pcistb_mio_inuser(dst, src, size, &status);
+ 		else
+@@ -242,9 +242,9 @@ static inline int __memcpy_fromio_inuser(void __user *dst,
+ 	u8 status;
+ 
+ 	while (n > 0) {
+-		size = zpci_get_max_write_size((u64 __force) src,
+-					       (u64 __force) dst, n,
+-					       ZPCI_MAX_READ_SIZE);
++		size = zpci_get_max_io_size((u64 __force) src,
++					    (u64 __force) dst, n,
++					    ZPCI_MAX_READ_SIZE);
+ 		rc = __pcilg_mio_inuser(dst, src, size, &status);
+ 		if (rc)
+ 			break;
+diff --git a/arch/um/drivers/virt-pci.c b/arch/um/drivers/virt-pci.c
+index ffe2ee8a02465..97a37c0629972 100644
+--- a/arch/um/drivers/virt-pci.c
++++ b/arch/um/drivers/virt-pci.c
+@@ -971,7 +971,7 @@ static long um_pci_map_platform(unsigned long offset, size_t size,
+ 	*ops = &um_pci_device_bar_ops;
+ 	*priv = &um_pci_platform_device->resptr[0];
+ 
+-	return 0;
++	return offset;
+ }
+ 
+ static const struct logic_iomem_region_ops um_pci_platform_ops = {
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 8250f0f59c2bb..49bc27ab26ad0 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -5596,7 +5596,7 @@ static int discover_upi_topology(struct intel_uncore_type *type, int ubox_did, i
+ 	struct pci_dev *ubox = NULL;
+ 	struct pci_dev *dev = NULL;
+ 	u32 nid, gid;
+-	int i, idx, ret = -EPERM;
++	int i, idx, lgc_pkg, ret = -EPERM;
+ 	struct intel_uncore_topology *upi;
+ 	unsigned int devfn;
+ 
+@@ -5614,8 +5614,13 @@ static int discover_upi_topology(struct intel_uncore_type *type, int ubox_did, i
+ 		for (i = 0; i < 8; i++) {
+ 			if (nid != GIDNIDMAP(gid, i))
+ 				continue;
++			lgc_pkg = topology_phys_to_logical_pkg(i);
++			if (lgc_pkg < 0) {
++				ret = -EPERM;
++				goto err;
++			}
+ 			for (idx = 0; idx < type->num_boxes; idx++) {
+-				upi = &type->topology[nid][idx];
++				upi = &type->topology[lgc_pkg][idx];
+ 				devfn = PCI_DEVFN(dev_link0 + idx, ICX_UPI_REGS_ADDR_FUNCTION);
+ 				dev = pci_get_domain_bus_and_slot(pci_domain_nr(ubox->bus),
+ 								  ubox->bus->number,
+@@ -5626,6 +5631,7 @@ static int discover_upi_topology(struct intel_uncore_type *type, int ubox_did, i
+ 						goto err;
+ 				}
+ 			}
++			break;
+ 		}
+ 	}
+ err:
+diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
+index 6c98f4bb4228b..058bc636356a1 100644
+--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
++++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
+@@ -22,7 +22,7 @@ KVM_X86_PMU_OP(get_msr)
+ KVM_X86_PMU_OP(set_msr)
+ KVM_X86_PMU_OP(refresh)
+ KVM_X86_PMU_OP(init)
+-KVM_X86_PMU_OP(reset)
++KVM_X86_PMU_OP_OPTIONAL(reset)
+ KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
+ KVM_X86_PMU_OP_OPTIONAL(cleanup)
+ 
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index 778df05f85391..bae83810505bf 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -115,8 +115,15 @@ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned lo
+ 		}
+ 
+ 		__monitor((void *)&current_thread_info()->flags, 0, 0);
+-		if (!need_resched())
+-			__mwait(eax, ecx);
++
++		if (!need_resched()) {
++			if (ecx & 1) {
++				__mwait(eax, ecx);
++			} else {
++				__sti_mwait(eax, ecx);
++				raw_local_irq_disable();
++			}
++		}
+ 	}
+ 	current_clr_polling();
+ }
+diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
+index 4d8d4bcf915dd..72f0695c3dc1d 100644
+--- a/arch/x86/kernel/cpu/mce/inject.c
++++ b/arch/x86/kernel/cpu/mce/inject.c
+@@ -746,6 +746,7 @@ static void check_hw_inj_possible(void)
+ 
+ 		wrmsrl_safe(mca_msr_reg(bank, MCA_STATUS), status);
+ 		rdmsrl_safe(mca_msr_reg(bank, MCA_STATUS), &status);
++		wrmsrl_safe(mca_msr_reg(bank, MCA_STATUS), 0);
+ 
+ 		if (!status) {
+ 			hw_injection_possible = false;
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 070426b9895fe..334972c097d96 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -370,14 +370,14 @@ static __init struct microcode_intel *get_microcode_blob(struct ucode_cpu_info *
+ {
+ 	struct cpio_data cp;
+ 
++	intel_collect_cpu_info(&uci->cpu_sig);
++
+ 	if (!load_builtin_intel_microcode(&cp))
+ 		cp = find_microcode_in_initrd(ucode_path);
+ 
+ 	if (!(cp.data && cp.size))
+ 		return NULL;
+ 
+-	intel_collect_cpu_info(&uci->cpu_sig);
+-
+ 	return scan_microcode(cp.data, cp.size, uci, save);
+ }
+ 
+@@ -410,13 +410,13 @@ void __init load_ucode_intel_bsp(struct early_load_data *ed)
+ {
+ 	struct ucode_cpu_info uci;
+ 
+-	ed->old_rev = intel_get_microcode_revision();
+-
+ 	uci.mc = get_microcode_blob(&uci, false);
+-	if (uci.mc && apply_microcode_early(&uci) == UCODE_UPDATED)
+-		ucode_patch_va = UCODE_BSP_LOADED;
++	ed->old_rev = uci.cpu_sig.rev;
+ 
+-	ed->new_rev = uci.cpu_sig.rev;
++	if (uci.mc && apply_microcode_early(&uci) == UCODE_UPDATED) {
++		ucode_patch_va = UCODE_BSP_LOADED;
++		ed->new_rev = uci.cpu_sig.rev;
++	}
+ }
+ 
+ void load_ucode_intel_ap(void)
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index fb8f52149be9a..f2fff625576d5 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -24,8 +24,8 @@
+ 
+ static int kvmclock __initdata = 1;
+ static int kvmclock_vsyscall __initdata = 1;
+-static int msr_kvm_system_time __ro_after_init = MSR_KVM_SYSTEM_TIME;
+-static int msr_kvm_wall_clock __ro_after_init = MSR_KVM_WALL_CLOCK;
++static int msr_kvm_system_time __ro_after_init;
++static int msr_kvm_wall_clock __ro_after_init;
+ static u64 kvm_sched_clock_offset __ro_after_init;
+ 
+ static int __init parse_no_kvmclock(char *arg)
+@@ -195,7 +195,8 @@ static void kvm_setup_secondary_clock(void)
+ 
+ void kvmclock_disable(void)
+ {
+-	native_write_msr(msr_kvm_system_time, 0, 0);
++	if (msr_kvm_system_time)
++		native_write_msr(msr_kvm_system_time, 0, 0);
+ }
+ 
+ static void __init kvmclock_init_mem(void)
+@@ -294,7 +295,10 @@ void __init kvmclock_init(void)
+ 	if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE2)) {
+ 		msr_kvm_system_time = MSR_KVM_SYSTEM_TIME_NEW;
+ 		msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK_NEW;
+-	} else if (!kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE)) {
++	} else if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE)) {
++		msr_kvm_system_time = MSR_KVM_SYSTEM_TIME;
++		msr_kvm_wall_clock = MSR_KVM_WALL_CLOCK;
++	} else {
+ 		return;
+ 	}
+ 
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index 9ae07db6f0f64..dc8e8e907cfbf 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -250,6 +250,24 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
+ 	return true;
+ }
+ 
++static void pmc_release_perf_event(struct kvm_pmc *pmc)
++{
++	if (pmc->perf_event) {
++		perf_event_release_kernel(pmc->perf_event);
++		pmc->perf_event = NULL;
++		pmc->current_config = 0;
++		pmc_to_pmu(pmc)->event_count--;
++	}
++}
++
++static void pmc_stop_counter(struct kvm_pmc *pmc)
++{
++	if (pmc->perf_event) {
++		pmc->counter = pmc_read_counter(pmc);
++		pmc_release_perf_event(pmc);
++	}
++}
++
+ static int filter_cmp(const void *pa, const void *pb, u64 mask)
+ {
+ 	u64 a = *(u64 *)pa & mask;
+@@ -639,24 +657,53 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	return 0;
+ }
+ 
+-/* refresh PMU settings. This function generally is called when underlying
+- * settings are changed (such as changes of PMU CPUID by guest VMs), which
+- * should rarely happen.
++void kvm_pmu_reset(struct kvm_vcpu *vcpu)
++{
++	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
++	struct kvm_pmc *pmc;
++	int i;
++
++	pmu->need_cleanup = false;
++
++	bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
++
++	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
++		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
++		if (!pmc)
++			continue;
++
++		pmc_stop_counter(pmc);
++		pmc->counter = 0;
++
++		if (pmc_is_gp(pmc))
++			pmc->eventsel = 0;
++	}
++
++	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
++
++	static_call_cond(kvm_x86_pmu_reset)(vcpu);
++}
++
++
++/*
++ * Refresh the PMU configuration for the vCPU, e.g. if userspace changes CPUID
++ * and/or PERF_CAPABILITIES.
+  */
+ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
+ {
+ 	if (KVM_BUG_ON(kvm_vcpu_has_run(vcpu), vcpu->kvm))
+ 		return;
+ 
++	/*
++	 * Stop/release all existing counters/events before realizing the new
++	 * vPMU model.
++	 */
++	kvm_pmu_reset(vcpu);
++
+ 	bitmap_zero(vcpu_to_pmu(vcpu)->all_valid_pmc_idx, X86_PMC_IDX_MAX);
+ 	static_call(kvm_x86_pmu_refresh)(vcpu);
+ }
+ 
+-void kvm_pmu_reset(struct kvm_vcpu *vcpu)
+-{
+-	static_call(kvm_x86_pmu_reset)(vcpu);
+-}
+-
+ void kvm_pmu_init(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
+index 1d64113de4883..a46aa9b25150f 100644
+--- a/arch/x86/kvm/pmu.h
++++ b/arch/x86/kvm/pmu.h
+@@ -80,24 +80,6 @@ static inline void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
+ 	pmc->counter &= pmc_bitmask(pmc);
+ }
+ 
+-static inline void pmc_release_perf_event(struct kvm_pmc *pmc)
+-{
+-	if (pmc->perf_event) {
+-		perf_event_release_kernel(pmc->perf_event);
+-		pmc->perf_event = NULL;
+-		pmc->current_config = 0;
+-		pmc_to_pmu(pmc)->event_count--;
+-	}
+-}
+-
+-static inline void pmc_stop_counter(struct kvm_pmc *pmc)
+-{
+-	if (pmc->perf_event) {
+-		pmc->counter = pmc_read_counter(pmc);
+-		pmc_release_perf_event(pmc);
+-	}
+-}
+-
+ static inline bool pmc_is_gp(struct kvm_pmc *pmc)
+ {
+ 	return pmc->type == KVM_PMC_GP;
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 3fea8c47679e6..60891b9ce25f6 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -247,18 +247,6 @@ static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa, u32 size)
+ 	    kvm_vcpu_is_legal_gpa(vcpu, addr + size - 1);
+ }
+ 
+-static bool nested_svm_check_tlb_ctl(struct kvm_vcpu *vcpu, u8 tlb_ctl)
+-{
+-	/* Nested FLUSHBYASID is not supported yet.  */
+-	switch(tlb_ctl) {
+-		case TLB_CONTROL_DO_NOTHING:
+-		case TLB_CONTROL_FLUSH_ALL_ASID:
+-			return true;
+-		default:
+-			return false;
+-	}
+-}
+-
+ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
+ 					 struct vmcb_ctrl_area_cached *control)
+ {
+@@ -278,9 +266,6 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
+ 					   IOPM_SIZE)))
+ 		return false;
+ 
+-	if (CC(!nested_svm_check_tlb_ctl(vcpu, control->tlb_ctl)))
+-		return false;
+-
+ 	if (CC((control->int_ctl & V_NMI_ENABLE_MASK) &&
+ 	       !vmcb12_is_intercept(control, INTERCEPT_NMI))) {
+ 		return false;
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 373ff6a6687b3..3fd47de14b38a 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -233,21 +233,6 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
+-static void amd_pmu_reset(struct kvm_vcpu *vcpu)
+-{
+-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+-	int i;
+-
+-	for (i = 0; i < KVM_AMD_PMC_MAX_GENERIC; i++) {
+-		struct kvm_pmc *pmc = &pmu->gp_counters[i];
+-
+-		pmc_stop_counter(pmc);
+-		pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
+-	}
+-
+-	pmu->global_ctrl = pmu->global_status = 0;
+-}
+-
+ struct kvm_pmu_ops amd_pmu_ops __initdata = {
+ 	.hw_event_available = amd_hw_event_available,
+ 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
+@@ -259,7 +244,6 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
+ 	.set_msr = amd_pmu_set_msr,
+ 	.refresh = amd_pmu_refresh,
+ 	.init = amd_pmu_init,
+-	.reset = amd_pmu_reset,
+ 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
+ 	.MAX_NR_GP_COUNTERS = KVM_AMD_PMC_MAX_GENERIC,
+ 	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 820d3e1f6b4f8..90c1f7f07e53b 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -632,26 +632,6 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
+ 
+ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+-	struct kvm_pmc *pmc = NULL;
+-	int i;
+-
+-	for (i = 0; i < KVM_INTEL_PMC_MAX_GENERIC; i++) {
+-		pmc = &pmu->gp_counters[i];
+-
+-		pmc_stop_counter(pmc);
+-		pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
+-	}
+-
+-	for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
+-		pmc = &pmu->fixed_counters[i];
+-
+-		pmc_stop_counter(pmc);
+-		pmc->counter = pmc->prev_counter = 0;
+-	}
+-
+-	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
+-
+ 	intel_pmu_release_guest_lbr_event(vcpu);
+ }
+ 
+diff --git a/arch/x86/lib/misc.c b/arch/x86/lib/misc.c
+index 92cd8ecc3a2c8..40b81c338ae5b 100644
+--- a/arch/x86/lib/misc.c
++++ b/arch/x86/lib/misc.c
+@@ -8,7 +8,7 @@
+  */
+ int num_digits(int val)
+ {
+-	int m = 10;
++	long long m = 10;
+ 	int d = 1;
+ 
+ 	if (val < 0) {
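
The misc.c one-liner fixes an integer overflow: with int m, the loop that multiplies m by 10 wraps once val reaches 1,000,000,000, because the next comparison value (10^10) does not fit in 32 bits. A standalone sketch of the function as patched, assuming the rest of the body matches the upstream helper:

#include <stdio.h>

/* Digits needed to print val, counting a leading '-' sign. */
static int num_digits(int val)
{
	long long m = 10;	/* with int, 1000000000 * 10 would overflow */
	int d = 1;

	if (val < 0) {
		d++;
		val = -val;
	}

	while (val >= m) {
		m *= 10;
		d++;
	}
	return d;
}

int main(void)
{
	printf("%d\n", num_digits(999999999));	/* 9 */
	printf("%d\n", num_digits(2147483647));	/* 10; misbehaves with int m */
	printf("%d\n", num_digits(-42));	/* 3 */
	return 0;
}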
+diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c
+index 4b3efaa82ab7c..e9497ee0f8547 100644
+--- a/arch/x86/pci/mmconfig-shared.c
++++ b/arch/x86/pci/mmconfig-shared.c
+@@ -525,6 +525,8 @@ static bool __ref is_mmconf_reserved(check_reserved_t is_reserved,
+ static bool __ref
+ pci_mmcfg_check_reserved(struct device *dev, struct pci_mmcfg_region *cfg, int early)
+ {
++	struct resource *conflict;
++
+ 	if (!early && !acpi_disabled) {
+ 		if (is_mmconf_reserved(is_acpi_reserved, cfg, dev,
+ 				       "ACPI motherboard resource"))
+@@ -542,8 +544,17 @@ pci_mmcfg_check_reserved(struct device *dev, struct pci_mmcfg_region *cfg, int e
+ 			       &cfg->res);
+ 
+ 		if (is_mmconf_reserved(is_efi_mmio, cfg, dev,
+-				       "EfiMemoryMappedIO"))
++				       "EfiMemoryMappedIO")) {
++			conflict = insert_resource_conflict(&iomem_resource,
++							    &cfg->res);
++			if (conflict)
++				pr_warn("MMCONFIG %pR conflicts with %s %pR\n",
++					&cfg->res, conflict->name, conflict);
++			else
++				pr_info("MMCONFIG %pR reserved to work around lack of ACPI motherboard _CRS\n",
++					&cfg->res);
+ 			return true;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/block/bio.c b/block/bio.c
+index 816d412c06e9b..5eba53ca953b4 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1145,13 +1145,22 @@ EXPORT_SYMBOL(bio_add_folio);
+ 
+ void __bio_release_pages(struct bio *bio, bool mark_dirty)
+ {
+-	struct bvec_iter_all iter_all;
+-	struct bio_vec *bvec;
++	struct folio_iter fi;
++
++	bio_for_each_folio_all(fi, bio) {
++		struct page *page;
++		size_t done = 0;
+ 
+-	bio_for_each_segment_all(bvec, bio, iter_all) {
+-		if (mark_dirty && !PageCompound(bvec->bv_page))
+-			set_page_dirty_lock(bvec->bv_page);
+-		bio_release_page(bio, bvec->bv_page);
++		if (mark_dirty) {
++			folio_lock(fi.folio);
++			folio_mark_dirty(fi.folio);
++			folio_unlock(fi.folio);
++		}
++		page = folio_page(fi.folio, fi.offset / PAGE_SIZE);
++		do {
++			bio_release_page(bio, page++);
++			done += PAGE_SIZE;
++		} while (done < fi.length);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(__bio_release_pages);
+@@ -1439,18 +1448,12 @@ EXPORT_SYMBOL(bio_free_pages);
+  * bio_set_pages_dirty() and bio_check_pages_dirty() are support functions
+  * for performing direct-IO in BIOs.
+  *
+- * The problem is that we cannot run set_page_dirty() from interrupt context
++ * The problem is that we cannot run folio_mark_dirty() from interrupt context
+  * because the required locks are not interrupt-safe.  So what we can do is to
+  * mark the pages dirty _before_ performing IO.  And in interrupt context,
+  * check that the pages are still dirty.   If so, fine.  If not, redirty them
+  * in process context.
+  *
+- * We special-case compound pages here: normally this means reads into hugetlb
+- * pages.  The logic in here doesn't really work right for compound pages
+- * because the VM does not uniformly chase down the head page in all cases.
+- * But dirtiness of compound pages is pretty meaningless anyway: the VM doesn't
+- * handle them at all.  So we skip compound pages here at an early stage.
+- *
+  * Note that this code is very hard to test under normal circumstances because
+  * direct-io pins the pages with get_user_pages().  This makes
+  * is_page_cache_freeable return false, and the VM will not clean the pages.
+@@ -1466,12 +1469,12 @@ EXPORT_SYMBOL(bio_free_pages);
+  */
+ void bio_set_pages_dirty(struct bio *bio)
+ {
+-	struct bio_vec *bvec;
+-	struct bvec_iter_all iter_all;
++	struct folio_iter fi;
+ 
+-	bio_for_each_segment_all(bvec, bio, iter_all) {
+-		if (!PageCompound(bvec->bv_page))
+-			set_page_dirty_lock(bvec->bv_page);
++	bio_for_each_folio_all(fi, bio) {
++		folio_lock(fi.folio);
++		folio_mark_dirty(fi.folio);
++		folio_unlock(fi.folio);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(bio_set_pages_dirty);
+@@ -1515,12 +1518,11 @@ static void bio_dirty_fn(struct work_struct *work)
+ 
+ void bio_check_pages_dirty(struct bio *bio)
+ {
+-	struct bio_vec *bvec;
++	struct folio_iter fi;
+ 	unsigned long flags;
+-	struct bvec_iter_all iter_all;
+ 
+-	bio_for_each_segment_all(bvec, bio, iter_all) {
+-		if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
++	bio_for_each_folio_all(fi, bio) {
++		if (!folio_test_dirty(fi.folio))
+ 			goto defer;
+ 	}
+ 
+diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h
+index fd482439afbc9..b927a4a0ad030 100644
+--- a/block/blk-cgroup.h
++++ b/block/blk-cgroup.h
+@@ -252,7 +252,8 @@ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg,
+ 	if (blkcg == &blkcg_root)
+ 		return q->root_blkg;
+ 
+-	blkg = rcu_dereference(blkcg->blkg_hint);
++	blkg = rcu_dereference_check(blkcg->blkg_hint,
++			lockdep_is_held(&q->queue_lock));
+ 	if (blkg && blkg->q == q)
+ 		return blkg;
+ 
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index ac18f802c027e..7e743ac58c31d 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2951,12 +2951,6 @@ void blk_mq_submit_bio(struct bio *bio)
+ 	blk_status_t ret;
+ 
+ 	bio = blk_queue_bounce(bio, q);
+-	if (bio_may_exceed_limits(bio, &q->limits)) {
+-		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
+-		if (!bio)
+-			return;
+-	}
+-
+ 	bio_set_ioprio(bio);
+ 
+ 	if (plug) {
+@@ -2965,6 +2959,11 @@ void blk_mq_submit_bio(struct bio *bio)
+ 			rq = NULL;
+ 	}
+ 	if (rq) {
++		if (unlikely(bio_may_exceed_limits(bio, &q->limits))) {
++			bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
++			if (!bio)
++				return;
++		}
+ 		if (!bio_integrity_prep(bio))
+ 			return;
+ 		if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
+@@ -2975,6 +2974,11 @@ void blk_mq_submit_bio(struct bio *bio)
+ 	} else {
+ 		if (unlikely(bio_queue_enter(bio)))
+ 			return;
++		if (unlikely(bio_may_exceed_limits(bio, &q->limits))) {
++			bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
++			if (!bio)
++				goto fail;
++		}
+ 		if (!bio_integrity_prep(bio))
+ 			goto fail;
+ 	}
+diff --git a/block/genhd.c b/block/genhd.c
+index c9d06f72c587e..d74fb5b4ae681 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -432,7 +432,9 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
+ 				DISK_MAX_PARTS);
+ 			disk->minors = DISK_MAX_PARTS;
+ 		}
+-		if (disk->first_minor + disk->minors > MINORMASK + 1)
++		if (disk->first_minor > MINORMASK ||
++		    disk->minors > MINORMASK + 1 ||
++		    disk->first_minor + disk->minors > MINORMASK + 1)
+ 			goto out_exit_elevator;
+ 	} else {
+ 		if (WARN_ON(disk->minors))
+@@ -542,6 +544,7 @@ out_put_holder_dir:
+ 	kobject_put(disk->part0->bd_holder_dir);
+ out_del_block_link:
+ 	sysfs_remove_link(block_depr, dev_name(ddev));
++	pm_runtime_set_memalloc_noio(ddev, false);
+ out_device_del:
+ 	device_del(ddev);
+ out_free_ext_minor:
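
The genhd.c change hardens the minor-range check: with only the summed comparison, two large int values can wrap negative and slip past it, so the patch bounds each operand first. A small demonstration of the failure mode, using the kernel's MINORMASK value; the helper names here are hypothetical:

#include <stdio.h>
#include <stdbool.h>

#define MINORMASK	((1U << 20) - 1)	/* as in linux/kdev_t.h */

static bool range_ok_old(int first, int count)
{
	return first + count <= (int)MINORMASK + 1;	/* sum can wrap */
}

static bool range_ok_new(int first, int count)
{
	if (first > (int)MINORMASK)
		return false;
	if (count > (int)MINORMASK + 1)
		return false;
	return first + count <= (int)MINORMASK + 1;	/* bounded, no wrap */
}

int main(void)
{
	int first = 0x7fffffff, count = 0x7fffffff;

	/*
	 * Under -fno-strict-overflow (the kernel's build mode; formally
	 * undefined in plain C) the old sum wraps to -2 and the bogus
	 * range "passes"; the new per-operand bounds catch it.
	 */
	printf("old: %d  new: %d\n",
	       range_ok_old(first, count), range_ok_new(first, count));
	return 0;
}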
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 4160f4e6bd5b4..9c73a763ef883 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -18,7 +18,7 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ {
+ 	struct gendisk *disk = bdev->bd_disk;
+ 	struct blkpg_partition p;
+-	long long start, length;
++	sector_t start, length;
+ 
+ 	if (disk->flags & GENHD_FL_NO_PART)
+ 		return -EINVAL;
+@@ -35,14 +35,17 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ 	if (op == BLKPG_DEL_PARTITION)
+ 		return bdev_del_partition(disk, p.pno);
+ 
++	if (p.start < 0 || p.length <= 0 || p.start + p.length < 0)
++		return -EINVAL;
++	/* Check that the partition is aligned to the block size */
++	if (!IS_ALIGNED(p.start | p.length, bdev_logical_block_size(bdev)))
++		return -EINVAL;
++
+ 	start = p.start >> SECTOR_SHIFT;
+ 	length = p.length >> SECTOR_SHIFT;
+ 
+ 	switch (op) {
+ 	case BLKPG_ADD_PARTITION:
+-		/* check if partition is aligned to blocksize */
+-		if (p.start & (bdev_logical_block_size(bdev) - 1))
+-			return -EINVAL;
+ 		return bdev_add_partition(disk, p.pno, start, length);
+ 	case BLKPG_RESIZE_PARTITION:
+ 		return bdev_resize_partition(disk, p.pno, start, length);
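
The ioctl.c hunk moves validation ahead of the SECTOR_SHIFT conversion and tightens it: reject negative or wrapping byte ranges, and require both start and length to be aligned. Since the logical block size is a power of two, OR-ing the two values lets a single IS_ALIGNED() test cover both. A userspace sketch of the check, with hypothetical sample values:

#include <stdio.h>
#include <stdbool.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Mirrors the patched checks; bsz must be a power of two. */
static bool blkpg_range_valid(long long start, long long length, int bsz)
{
	/*
	 * Reject negative values and a sum that wraps. The kernel builds
	 * with -fno-strict-overflow, which makes the wrap test defined.
	 */
	if (start < 0 || length <= 0 || start + length < 0)
		return false;
	/* (start | length) is aligned iff both values are aligned. */
	return IS_ALIGNED(start | length, bsz);
}

int main(void)
{
	printf("%d\n", blkpg_range_valid(4096, 8192, 4096));	/* 1 */
	printf("%d\n", blkpg_range_valid(4096, 512, 4096));	/* 0: length */
	printf("%d\n", blkpg_range_valid(-512, 4096, 4096));	/* 0: negative */
	return 0;
}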
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index ea6fb8e89d065..68cc9290cabe9 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -1116,9 +1116,13 @@ EXPORT_SYMBOL_GPL(af_alg_sendmsg);
+ void af_alg_free_resources(struct af_alg_async_req *areq)
+ {
+ 	struct sock *sk = areq->sk;
++	struct af_alg_ctx *ctx;
+ 
+ 	af_alg_free_areq_sgls(areq);
+ 	sock_kfree_s(sk, areq, areq->areqlen);
++
++	ctx = alg_sk(sk)->private;
++	ctx->inflight = false;
+ }
+ EXPORT_SYMBOL_GPL(af_alg_free_resources);
+ 
+@@ -1188,11 +1192,19 @@ EXPORT_SYMBOL_GPL(af_alg_poll);
+ struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk,
+ 					   unsigned int areqlen)
+ {
+-	struct af_alg_async_req *areq = sock_kmalloc(sk, areqlen, GFP_KERNEL);
++	struct af_alg_ctx *ctx = alg_sk(sk)->private;
++	struct af_alg_async_req *areq;
++
++	/* Only one AIO request can be in flight. */
++	if (ctx->inflight)
++		return ERR_PTR(-EBUSY);
+ 
++	areq = sock_kmalloc(sk, areqlen, GFP_KERNEL);
+ 	if (unlikely(!areq))
+ 		return ERR_PTR(-ENOMEM);
+ 
++	ctx->inflight = true;
++
+ 	areq->areqlen = areqlen;
+ 	areq->sk = sk;
+ 	areq->first_rsgl.sgl.sgt.sgl = areq->first_rsgl.sgl.sgl;
+diff --git a/crypto/rsa.c b/crypto/rsa.c
+index c79613cdce6e4..b9cd11fb7d367 100644
+--- a/crypto/rsa.c
++++ b/crypto/rsa.c
+@@ -220,6 +220,8 @@ static int rsa_check_exponent_fips(MPI e)
+ 	}
+ 
+ 	e_max = mpi_alloc(0);
++	if (!e_max)
++		return -ENOMEM;
+ 	mpi_set_bit(e_max, 256);
+ 
+ 	if (mpi_cmp(e, e_max) >= 0) {
+diff --git a/crypto/scompress.c b/crypto/scompress.c
+index 442a82c9de7de..b108a30a76001 100644
+--- a/crypto/scompress.c
++++ b/crypto/scompress.c
+@@ -117,6 +117,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
+ 	struct crypto_scomp *scomp = *tfm_ctx;
+ 	void **ctx = acomp_request_ctx(req);
+ 	struct scomp_scratch *scratch;
++	unsigned int dlen;
+ 	int ret;
+ 
+ 	if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
+@@ -128,6 +129,8 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
+ 	if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
+ 		req->dlen = SCOMP_SCRATCH_SIZE;
+ 
++	dlen = req->dlen;
++
+ 	scratch = raw_cpu_ptr(&scomp_scratch);
+ 	spin_lock(&scratch->lock);
+ 
+@@ -145,6 +148,9 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
+ 				ret = -ENOMEM;
+ 				goto out;
+ 			}
++		} else if (req->dlen > dlen) {
++			ret = -ENOSPC;
++			goto out;
+ 		}
+ 		scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen,
+ 					 1);
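
The scompress.c fix records the caller's destination budget before invoking the algorithm, because the crypto op overwrites req->dlen with the produced size; if that exceeds the budget, the result must not be copied out. The guard pattern, sketched with a hypothetical do_decompress() that reports its output length in place:

#include <stdio.h>
#include <string.h>
#include <errno.h>

#define SCRATCH_SIZE	128	/* stands in for SCOMP_SCRATCH_SIZE */

/* Hypothetical decompressor: fills scratch, reports the produced size. */
static int do_decompress(char *scratch, size_t *dlen)
{
	const char out[] = "expanded payload, larger than the caller expected";

	memcpy(scratch, out, sizeof(out));
	*dlen = sizeof(out);	/* overwrites the caller's budget */
	return 0;
}

static int decompress_into(char *dst, size_t budget)
{
	char scratch[SCRATCH_SIZE];
	size_t dlen = budget;	/* record the budget up front, as the patch does */
	int ret;

	ret = do_decompress(scratch, &dlen);
	if (ret)
		return ret;
	if (dlen > budget)
		return -ENOSPC;	/* produced more than dst can hold */
	memcpy(dst, scratch, dlen);
	return 0;
}

int main(void)
{
	char small[16];

	printf("%d\n", decompress_into(small, sizeof(small)));	/* -ENOSPC (-28) */
	return 0;
}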
+diff --git a/drivers/accel/habanalabs/common/habanalabs_ioctl.c b/drivers/accel/habanalabs/common/habanalabs_ioctl.c
+index 8ef36effb95bc..a7cd625d82c01 100644
+--- a/drivers/accel/habanalabs/common/habanalabs_ioctl.c
++++ b/drivers/accel/habanalabs/common/habanalabs_ioctl.c
+@@ -685,7 +685,7 @@ static int sec_attest_info(struct hl_fpriv *hpriv, struct hl_info_args *args)
+ 	if (!sec_attest_info)
+ 		return -ENOMEM;
+ 
+-	info = kmalloc(sizeof(*info), GFP_KERNEL);
++	info = kzalloc(sizeof(*info), GFP_KERNEL);
+ 	if (!info) {
+ 		rc = -ENOMEM;
+ 		goto free_sec_attest_info;
+diff --git a/drivers/acpi/acpi_extlog.c b/drivers/acpi/acpi_extlog.c
+index e120a96e1eaee..71e8d4e7a36cc 100644
+--- a/drivers/acpi/acpi_extlog.c
++++ b/drivers/acpi/acpi_extlog.c
+@@ -145,9 +145,14 @@ static int extlog_print(struct notifier_block *nb, unsigned long val,
+ 	static u32 err_seq;
+ 
+ 	estatus = extlog_elog_entry_check(cpu, bank);
+-	if (estatus == NULL || (mce->kflags & MCE_HANDLED_CEC))
++	if (!estatus)
+ 		return NOTIFY_DONE;
+ 
++	if (mce->kflags & MCE_HANDLED_CEC) {
++		estatus->block_status = 0;
++		return NOTIFY_DONE;
++	}
++
+ 	memcpy(elog_buf, (void *)estatus, ELOG_ENTRY_LEN);
+ 	/* clear record status to enable BIOS to update it again */
+ 	estatus->block_status = 0;
+diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
+index c5598b6d5db8b..794962c5c88e9 100644
+--- a/drivers/acpi/acpi_lpit.c
++++ b/drivers/acpi/acpi_lpit.c
+@@ -105,7 +105,7 @@ static void lpit_update_residency(struct lpit_residency_info *info,
+ 		return;
+ 
+ 	info->frequency = lpit_native->counter_frequency ?
+-				lpit_native->counter_frequency : tsc_khz * 1000;
++				lpit_native->counter_frequency : mul_u32_u32(tsc_khz, 1000U);
+ 	if (!info->frequency)
+ 		info->frequency = 1;
+ 
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 875de44961bf4..d48407472dfb9 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -461,8 +461,9 @@ static int register_device_clock(struct acpi_device *adev,
+ 		if (!clk_name)
+ 			return -ENOMEM;
+ 		clk = clk_register_fractional_divider(NULL, clk_name, parent,
++						      0, prv_base, 1, 15, 16, 15,
+ 						      CLK_FRAC_DIVIDER_POWER_OF_TWO_PS,
+-						      prv_base, 1, 15, 16, 15, 0, NULL);
++						      NULL);
+ 		parent = clk_name;
+ 
+ 		clk_name = kasprintf(GFP_KERNEL, "%s-update", devname);
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index 6cee536c229a6..375010e575d0e 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -1713,12 +1713,12 @@ static void acpi_video_dev_register_backlight(struct acpi_video_device *device)
+ 		return;
+ 	count++;
+ 
+-	acpi_get_parent(device->dev->handle, &acpi_parent);
+-
+-	pdev = acpi_get_pci_dev(acpi_parent);
+-	if (pdev) {
+-		parent = &pdev->dev;
+-		pci_dev_put(pdev);
++	if (ACPI_SUCCESS(acpi_get_parent(device->dev->handle, &acpi_parent))) {
++		pdev = acpi_get_pci_dev(acpi_parent);
++		if (pdev) {
++			parent = &pdev->dev;
++			pci_dev_put(pdev);
++		}
+ 	}
+ 
+ 	memset(&props, 0, sizeof(struct backlight_properties));
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 6979a3f9f90a8..4d042673d57bf 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -852,6 +852,7 @@ static int acpi_get_ref_args(struct fwnode_reference_args *args,
+  * @index: Index of the reference to return
+  * @num_args: Maximum number of arguments after each reference
+  * @args: Location to store the returned reference with optional arguments
++ *	  (may be NULL)
+  *
+  * Find property with @name, verifify that it is a package containing at least
+  * one object reference and if so, store the ACPI device object pointer to the
+@@ -908,6 +909,9 @@ int __acpi_node_get_property_reference(const struct fwnode_handle *fwnode,
+ 		if (!device)
+ 			return -EINVAL;
+ 
++		if (!args)
++			return 0;
++
+ 		args->fwnode = acpi_fwnode_handle(device);
+ 		args->nargs = 0;
+ 		return 0;
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index e5fa2042585a4..a56cbfd9ba44a 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -271,7 +271,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
+ 	}
+ 	if (mm) {
+ 		mmap_write_unlock(mm);
+-		mmput(mm);
++		mmput_async(mm);
+ 	}
+ 	return 0;
+ 
+@@ -304,7 +304,7 @@ err_page_ptr_cleared:
+ err_no_vma:
+ 	if (mm) {
+ 		mmap_write_unlock(mm);
+-		mmput(mm);
++		mmput_async(mm);
+ 	}
+ 	return vma ? -ENOMEM : -ESRCH;
+ }
+@@ -344,8 +344,7 @@ static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
+ 			continue;
+ 		if (!buffer->async_transaction)
+ 			continue;
+-		total_alloc_size += binder_alloc_buffer_size(alloc, buffer)
+-			+ sizeof(struct binder_buffer);
++		total_alloc_size += binder_alloc_buffer_size(alloc, buffer);
+ 		num_buffers++;
+ 	}
+ 
+@@ -407,17 +406,17 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 				alloc->pid, extra_buffers_size);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+-	if (is_async &&
+-	    alloc->free_async_space < size + sizeof(struct binder_buffer)) {
++
++	/* Pad 0-size buffers so they get assigned unique addresses */
++	size = max(size, sizeof(void *));
++
++	if (is_async && alloc->free_async_space < size) {
+ 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
+ 			     "%d: binder_alloc_buf size %zd failed, no async space left\n",
+ 			      alloc->pid, size);
+ 		return ERR_PTR(-ENOSPC);
+ 	}
+ 
+-	/* Pad 0-size buffers so they get assigned unique addresses */
+-	size = max(size, sizeof(void *));
+-
+ 	while (n) {
+ 		buffer = rb_entry(n, struct binder_buffer, rb_node);
+ 		BUG_ON(!buffer->free);
+@@ -519,7 +518,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 	buffer->pid = pid;
+ 	buffer->oneway_spam_suspect = false;
+ 	if (is_async) {
+-		alloc->free_async_space -= size + sizeof(struct binder_buffer);
++		alloc->free_async_space -= size;
+ 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+ 			     "%d: binder_alloc_buf size %zd async free %zd\n",
+ 			      alloc->pid, size, alloc->free_async_space);
+@@ -657,8 +656,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
+ 	BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
+ 
+ 	if (buffer->async_transaction) {
+-		alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);
+-
++		alloc->free_async_space += buffer_size;
+ 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
+ 			     "%d: binder_free_buf size %zd async free %zd\n",
+ 			      alloc->pid, size, alloc->free_async_space);
+diff --git a/drivers/base/class.c b/drivers/base/class.c
+index 7e78aee0fd6c3..7b38fdf8e1d78 100644
+--- a/drivers/base/class.c
++++ b/drivers/base/class.c
+@@ -213,6 +213,7 @@ int class_register(const struct class *cls)
+ 	return 0;
+ 
+ err_out:
++	lockdep_unregister_key(key);
+ 	kfree(cp);
+ 	return error;
+ }
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index 493d533f83755..4d588f4658c85 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -868,11 +868,15 @@ int __register_one_node(int nid)
+ {
+ 	int error;
+ 	int cpu;
++	struct node *node;
+ 
+-	node_devices[nid] = kzalloc(sizeof(struct node), GFP_KERNEL);
+-	if (!node_devices[nid])
++	node = kzalloc(sizeof(struct node), GFP_KERNEL);
++	if (!node)
+ 		return -ENOMEM;
+ 
++	INIT_LIST_HEAD(&node->access_list);
++	node_devices[nid] = node;
++
+ 	error = register_node(node_devices[nid], nid);
+ 
+ 	/* link cpu under this node */
+@@ -881,7 +885,6 @@ int __register_one_node(int nid)
+ 			register_cpu_under_node(cpu, nid);
+ 	}
+ 
+-	INIT_LIST_HEAD(&node_devices[nid]->access_list);
+ 	node_init_caches(nid);
+ 
+ 	return error;
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index 1886995a0b3a3..079bd14bdedc7 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -541,6 +541,9 @@ software_node_get_reference_args(const struct fwnode_handle *fwnode,
+ 	if (nargs > NR_FWNODE_REFERENCE_ARGS)
+ 		return -EINVAL;
+ 
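++	/*
++	 * Some callers pass a NULL args pointer just to check that the
++	 * reference exists; in that case there is nothing to fill in.
++	 */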
++	if (!args)
++		return 0;
++
+ 	args->fwnode = software_node_get(refnode);
+ 	args->nargs = nargs;
+ 
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 9f2d412fc560e..552f56a84a7eb 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -165,39 +165,37 @@ static loff_t get_loop_size(struct loop_device *lo, struct file *file)
+ 	return get_size(lo->lo_offset, lo->lo_sizelimit, file);
+ }
+ 
++/*
++ * We support direct I/O only if lo_offset is aligned with the logical I/O size
++ * of the backing device, and the logical block size of the loop device is no
++ * smaller than that of the backing device.
++ */
++static bool lo_bdev_can_use_dio(struct loop_device *lo,
++		struct block_device *backing_bdev)
++{
++	unsigned short sb_bsize = bdev_logical_block_size(backing_bdev);
++
++	if (queue_logical_block_size(lo->lo_queue) < sb_bsize)
++		return false;
++	if (lo->lo_offset & (sb_bsize - 1))
++		return false;
++	return true;
++}
++
+ static void __loop_update_dio(struct loop_device *lo, bool dio)
+ {
+ 	struct file *file = lo->lo_backing_file;
+-	struct address_space *mapping = file->f_mapping;
+-	struct inode *inode = mapping->host;
+-	unsigned short sb_bsize = 0;
+-	unsigned dio_align = 0;
++	struct inode *inode = file->f_mapping->host;
++	struct block_device *backing_bdev = NULL;
+ 	bool use_dio;
+ 
+-	if (inode->i_sb->s_bdev) {
+-		sb_bsize = bdev_logical_block_size(inode->i_sb->s_bdev);
+-		dio_align = sb_bsize - 1;
+-	}
++	if (S_ISBLK(inode->i_mode))
++		backing_bdev = I_BDEV(inode);
++	else if (inode->i_sb->s_bdev)
++		backing_bdev = inode->i_sb->s_bdev;
+ 
+-	/*
+-	 * We support direct I/O only if lo_offset is aligned with the
+-	 * logical I/O size of backing device, and the logical block
+-	 * size of loop is bigger than the backing device's.
+-	 *
+-	 * TODO: the above condition may be loosed in the future, and
+-	 * direct I/O may be switched runtime at that time because most
+-	 * of requests in sane applications should be PAGE_SIZE aligned
+-	 */
+-	if (dio) {
+-		if (queue_logical_block_size(lo->lo_queue) >= sb_bsize &&
+-		    !(lo->lo_offset & dio_align) &&
+-		    (file->f_mode & FMODE_CAN_ODIRECT))
+-			use_dio = true;
+-		else
+-			use_dio = false;
+-	} else {
+-		use_dio = false;
+-	}
++	use_dio = dio && (file->f_mode & FMODE_CAN_ODIRECT) &&
++		(!backing_bdev || lo_bdev_can_use_dio(lo, backing_bdev));
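++	/*
++	 * Illustrative example: a loop device using 512-byte logical blocks on
++	 * top of a 4096-byte backing bdev fails lo_bdev_can_use_dio(), so
++	 * use_dio evaluates to false and I/O stays buffered.
++	 */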
+ 
+ 	if (lo->use_dio == use_dio)
+ 		return;
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 3021d58ca51c1..13ed446b5e198 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -2186,10 +2186,8 @@ static int null_add_dev(struct nullb_device *dev)
+ 
+ 	blk_queue_logical_block_size(nullb->q, dev->blocksize);
+ 	blk_queue_physical_block_size(nullb->q, dev->blocksize);
+-	if (!dev->max_sectors)
+-		dev->max_sectors = queue_max_hw_sectors(nullb->q);
+-	dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
+-	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
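++	/* No explicit BLK_DEF_MAX_SECTORS clamp needed; the block core applies its own default cap */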
++	if (dev->max_sectors)
++		blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
+ 
+ 	if (dev->virt_boundary)
+ 		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
+@@ -2289,12 +2287,6 @@ static int __init null_init(void)
+ 		g_bs = PAGE_SIZE;
+ 	}
+ 
+-	if (g_max_sectors > BLK_DEF_MAX_SECTORS) {
+-		pr_warn("invalid max sectors\n");
+-		pr_warn("defaults max sectors to %u\n", BLK_DEF_MAX_SECTORS);
+-		g_max_sectors = BLK_DEF_MAX_SECTORS;
+-	}
+-
+ 	if (g_home_node != NUMA_NO_NODE && g_home_node >= nr_online_nodes) {
+ 		pr_err("invalid home_node value\n");
+ 		g_home_node = NUMA_NO_NODE;
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index 935feab815d97..203a000a84e34 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -336,7 +336,7 @@ mtk_stp_split(struct btmtkuart_dev *bdev, const unsigned char *data, int count,
+ 	return data;
+ }
+ 
+-static int btmtkuart_recv(struct hci_dev *hdev, const u8 *data, size_t count)
++static void btmtkuart_recv(struct hci_dev *hdev, const u8 *data, size_t count)
+ {
+ 	struct btmtkuart_dev *bdev = hci_get_drvdata(hdev);
+ 	const unsigned char *p_left = data, *p_h4;
+@@ -375,25 +375,20 @@ static int btmtkuart_recv(struct hci_dev *hdev, const u8 *data, size_t count)
+ 			bt_dev_err(bdev->hdev,
+ 				   "Frame reassembly failed (%d)", err);
+ 			bdev->rx_skb = NULL;
+-			return err;
++			return;
+ 		}
+ 
+ 		sz_left -= sz_h4;
+ 		p_left += sz_h4;
+ 	}
+-
+-	return 0;
+ }
+ 
+ static int btmtkuart_receive_buf(struct serdev_device *serdev, const u8 *data,
+ 				 size_t count)
+ {
+ 	struct btmtkuart_dev *bdev = serdev_device_get_drvdata(serdev);
+-	int err;
+ 
+-	err = btmtkuart_recv(bdev->hdev, data, count);
+-	if (err < 0)
+-		return err;
++	btmtkuart_recv(bdev->hdev, data, count);
+ 
+ 	bdev->hdev->stat.byte_rx += count;
+ 
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index b7e66b7ac5702..951fe3014a3f3 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1276,11 +1276,10 @@ static int btnxpuart_receive_buf(struct serdev_device *serdev, const u8 *data,
+ 	if (IS_ERR(nxpdev->rx_skb)) {
+ 		int err = PTR_ERR(nxpdev->rx_skb);
+ 		/* Safe to ignore out-of-sync bootloader signatures */
+-		if (is_fw_downloading(nxpdev))
+-			return count;
+-		bt_dev_err(nxpdev->hdev, "Frame reassembly failed (%d)", err);
++		if (!is_fw_downloading(nxpdev))
++			bt_dev_err(nxpdev->hdev, "Frame reassembly failed (%d)", err);
+ 		nxpdev->rx_skb = NULL;
+-		return err;
++		return count;
+ 	}
+ 	if (!is_fw_downloading(nxpdev))
+ 		nxpdev->hdev->stat.byte_rx += count;
+diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
+index 600881808982a..582d5c166a75e 100644
+--- a/drivers/bus/mhi/ep/main.c
++++ b/drivers/bus/mhi/ep/main.c
+@@ -71,45 +71,77 @@ err_unlock:
+ static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+ 					struct mhi_ring_element *tre, u32 len, enum mhi_ev_ccs code)
+ {
+-	struct mhi_ring_element event = {};
++	struct mhi_ring_element *event;
++	int ret;
++
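++	/*
++	 * The event now comes from a DMA-capable slab instead of the stack;
++	 * stack buffers must not be handed to the DMA API (e.g. with
++	 * VMAP_STACK).
++	 */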
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	if (!event)
++		return -ENOMEM;
+ 
+-	event.ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(*tre));
+-	event.dword[0] = MHI_TRE_EV_DWORD0(code, len);
+-	event.dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
++	event->ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(*tre));
++	event->dword[0] = MHI_TRE_EV_DWORD0(code, len);
++	event->dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
+ 
+-	return mhi_ep_send_event(mhi_cntrl, ring->er_index, &event, MHI_TRE_DATA_GET_BEI(tre));
++	ret = mhi_ep_send_event(mhi_cntrl, ring->er_index, event, MHI_TRE_DATA_GET_BEI(tre));
++	kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
++
++	return ret;
+ }
+ 
+ int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
+ {
+-	struct mhi_ring_element event = {};
++	struct mhi_ring_element *event;
++	int ret;
++
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	if (!event)
++		return -ENOMEM;
++
++	event->dword[0] = MHI_SC_EV_DWORD0(state);
++	event->dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
+ 
+-	event.dword[0] = MHI_SC_EV_DWORD0(state);
+-	event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
++	ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
++	kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
+ 
+-	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
++	return ret;
+ }
+ 
+ int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env)
+ {
+-	struct mhi_ring_element event = {};
++	struct mhi_ring_element *event;
++	int ret;
++
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	if (!event)
++		return -ENOMEM;
++
++	event->dword[0] = MHI_EE_EV_DWORD0(exec_env);
++	event->dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
+ 
+-	event.dword[0] = MHI_EE_EV_DWORD0(exec_env);
+-	event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
++	ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
++	kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
+ 
+-	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
++	return ret;
+ }
+ 
+ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
+ {
+ 	struct mhi_ep_ring *ring = &mhi_cntrl->mhi_cmd->ring;
+-	struct mhi_ring_element event = {};
++	struct mhi_ring_element *event;
++	int ret;
++
++	event = kmem_cache_zalloc(mhi_cntrl->ev_ring_el_cache, GFP_KERNEL | GFP_DMA);
++	if (!event)
++		return -ENOMEM;
+ 
+-	event.ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(struct mhi_ring_element));
+-	event.dword[0] = MHI_CC_EV_DWORD0(code);
+-	event.dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
++	event->ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(struct mhi_ring_element));
++	event->dword[0] = MHI_CC_EV_DWORD0(code);
++	event->dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
+ 
+-	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
++	ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
++	kmem_cache_free(mhi_cntrl->ev_ring_el_cache, event);
++
++	return ret;
+ }
+ 
+ static int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
+@@ -292,10 +324,9 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
+ 	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ 	size_t tr_len, read_offset, write_offset;
++	struct mhi_ep_buf_info buf_info = {};
+ 	struct mhi_ring_element *el;
+ 	bool tr_done = false;
+-	void *write_addr;
+-	u64 read_addr;
+ 	u32 buf_left;
+ 	int ret;
+ 
+@@ -324,11 +355,13 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
+ 
+ 		read_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
+ 		write_offset = len - buf_left;
+-		read_addr = mhi_chan->tre_loc + read_offset;
+-		write_addr = result->buf_addr + write_offset;
++
++		buf_info.host_addr = mhi_chan->tre_loc + read_offset;
++		buf_info.dev_addr = result->buf_addr + write_offset;
++		buf_info.size = tr_len;
+ 
+ 		dev_dbg(dev, "Reading %zd bytes from channel (%u)\n", tr_len, ring->ch_id);
+-		ret = mhi_cntrl->read_from_host(mhi_cntrl, read_addr, write_addr, tr_len);
++		ret = mhi_cntrl->read_from_host(mhi_cntrl, &buf_info);
+ 		if (ret < 0) {
+ 			dev_err(&mhi_chan->mhi_dev->dev, "Error reading from channel\n");
+ 			return ret;
+@@ -419,7 +452,7 @@ static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_elem
+ 		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ 	} else {
+ 		/* UL channel */
+-		result.buf_addr = kzalloc(len, GFP_KERNEL);
++		result.buf_addr = kmem_cache_zalloc(mhi_cntrl->tre_buf_cache, GFP_KERNEL | GFP_DMA);
+ 		if (!result.buf_addr)
+ 			return -ENOMEM;
+ 
+@@ -427,7 +460,7 @@ static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_elem
+ 			ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
+ 			if (ret < 0) {
+ 				dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel\n");
+-				kfree(result.buf_addr);
++				kmem_cache_free(mhi_cntrl->tre_buf_cache, result.buf_addr);
+ 				return ret;
+ 			}
+ 
+@@ -439,7 +472,7 @@ static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_elem
+ 			/* Read until the ring becomes empty */
+ 		} while (!mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE));
+ 
+-		kfree(result.buf_addr);
++		kmem_cache_free(mhi_cntrl->tre_buf_cache, result.buf_addr);
+ 	}
+ 
+ 	return 0;
+@@ -451,12 +484,11 @@ int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb)
+ 	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+ 	struct mhi_ep_chan *mhi_chan = mhi_dev->dl_chan;
+ 	struct device *dev = &mhi_chan->mhi_dev->dev;
++	struct mhi_ep_buf_info buf_info = {};
+ 	struct mhi_ring_element *el;
+ 	u32 buf_left, read_offset;
+ 	struct mhi_ep_ring *ring;
+ 	enum mhi_ev_ccs code;
+-	void *read_addr;
+-	u64 write_addr;
+ 	size_t tr_len;
+ 	u32 tre_len;
+ 	int ret;
+@@ -485,11 +517,13 @@ int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb)
+ 
+ 		tr_len = min(buf_left, tre_len);
+ 		read_offset = skb->len - buf_left;
+-		read_addr = skb->data + read_offset;
+-		write_addr = MHI_TRE_DATA_GET_PTR(el);
++
++		buf_info.dev_addr = skb->data + read_offset;
++		buf_info.host_addr = MHI_TRE_DATA_GET_PTR(el);
++		buf_info.size = tr_len;
+ 
+ 		dev_dbg(dev, "Writing %zd bytes to channel (%u)\n", tr_len, ring->ch_id);
+-		ret = mhi_cntrl->write_to_host(mhi_cntrl, read_addr, write_addr, tr_len);
++		ret = mhi_cntrl->write_to_host(mhi_cntrl, &buf_info);
+ 		if (ret < 0) {
+ 			dev_err(dev, "Error writing to the channel\n");
+ 			goto err_exit;
+@@ -748,14 +782,14 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
+ 		if (ret) {
+ 			dev_err(dev, "Error updating write offset for ring\n");
+ 			mutex_unlock(&chan->lock);
+-			kfree(itr);
++			kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
+ 			continue;
+ 		}
+ 
+ 		/* Sanity check to make sure there are elements in the ring */
+ 		if (ring->rd_offset == ring->wr_offset) {
+ 			mutex_unlock(&chan->lock);
+-			kfree(itr);
++			kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
+ 			continue;
+ 		}
+ 
+@@ -767,12 +801,12 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
+ 			dev_err(dev, "Error processing ring for channel (%u): %d\n",
+ 				ring->ch_id, ret);
+ 			mutex_unlock(&chan->lock);
+-			kfree(itr);
++			kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
+ 			continue;
+ 		}
+ 
+ 		mutex_unlock(&chan->lock);
+-		kfree(itr);
++		kmem_cache_free(mhi_cntrl->ring_item_cache, itr);
+ 	}
+ }
+ 
+@@ -828,7 +862,7 @@ static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl, unsigned lon
+ 		u32 ch_id = ch_idx + i;
+ 
+ 		ring = &mhi_cntrl->mhi_chan[ch_id].ring;
+-		item = kzalloc(sizeof(*item), GFP_ATOMIC);
++		item = kmem_cache_zalloc(mhi_cntrl->ring_item_cache, GFP_ATOMIC);
+ 		if (!item)
+ 			return;
+ 
+@@ -1375,6 +1409,28 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+ 		goto err_free_ch;
+ 	}
+ 
++	mhi_cntrl->ev_ring_el_cache = kmem_cache_create("mhi_ep_event_ring_el",
++							sizeof(struct mhi_ring_element), 0,
++							SLAB_CACHE_DMA, NULL);
++	if (!mhi_cntrl->ev_ring_el_cache) {
++		ret = -ENOMEM;
++		goto err_free_cmd;
++	}
++
++	mhi_cntrl->tre_buf_cache = kmem_cache_create("mhi_ep_tre_buf", MHI_EP_DEFAULT_MTU, 0,
++						      SLAB_CACHE_DMA, NULL);
++	if (!mhi_cntrl->tre_buf_cache) {
++		ret = -ENOMEM;
++		goto err_destroy_ev_ring_el_cache;
++	}
++
++	mhi_cntrl->ring_item_cache = kmem_cache_create("mhi_ep_ring_item",
++							sizeof(struct mhi_ep_ring_item), 0,
++							0, NULL);
++	if (!mhi_cntrl->ring_item_cache) {
++		ret = -ENOMEM;
++		goto err_destroy_tre_buf_cache;
++	}
++
+ 	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
+ 	INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
+ 	INIT_WORK(&mhi_cntrl->cmd_ring_work, mhi_ep_cmd_ring_worker);
+@@ -1383,7 +1439,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+ 	mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
+ 	if (!mhi_cntrl->wq) {
+ 		ret = -ENOMEM;
+-		goto err_free_cmd;
++		goto err_destroy_ring_item_cache;
+ 	}
+ 
+ 	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
+@@ -1442,6 +1498,12 @@ err_ida_free:
+ 	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+ err_destroy_wq:
+ 	destroy_workqueue(mhi_cntrl->wq);
++err_destroy_ring_item_cache:
++	kmem_cache_destroy(mhi_cntrl->ring_item_cache);
++err_destroy_ev_ring_el_cache:
++	kmem_cache_destroy(mhi_cntrl->ev_ring_el_cache);
++err_destroy_tre_buf_cache:
++	kmem_cache_destroy(mhi_cntrl->tre_buf_cache);
+ err_free_cmd:
+ 	kfree(mhi_cntrl->mhi_cmd);
+ err_free_ch:
+@@ -1463,6 +1525,9 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
+ 
+ 	free_irq(mhi_cntrl->irq, mhi_cntrl);
+ 
++	kmem_cache_destroy(mhi_cntrl->tre_buf_cache);
++	kmem_cache_destroy(mhi_cntrl->ev_ring_el_cache);
++	kmem_cache_destroy(mhi_cntrl->ring_item_cache);
+ 	kfree(mhi_cntrl->mhi_cmd);
+ 	kfree(mhi_cntrl->mhi_chan);
+ 
+diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
+index 115518ec76a43..c673d7200b3e1 100644
+--- a/drivers/bus/mhi/ep/ring.c
++++ b/drivers/bus/mhi/ep/ring.c
+@@ -30,7 +30,8 @@ static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
+ {
+ 	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+-	size_t start, copy_size;
++	struct mhi_ep_buf_info buf_info = {};
++	size_t start;
+ 	int ret;
+ 
+ 	/* Don't proceed in the case of event ring. This happens during mhi_ep_ring_start(). */
+@@ -43,30 +44,34 @@ static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
+ 
+ 	start = ring->wr_offset;
+ 	if (start < end) {
+-		copy_size = (end - start) * sizeof(struct mhi_ring_element);
+-		ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
+-						(start * sizeof(struct mhi_ring_element)),
+-						&ring->ring_cache[start], copy_size);
++		buf_info.size = (end - start) * sizeof(struct mhi_ring_element);
++		buf_info.host_addr = ring->rbase + (start * sizeof(struct mhi_ring_element));
++		buf_info.dev_addr = &ring->ring_cache[start];
++
++		ret = mhi_cntrl->read_from_host(mhi_cntrl, &buf_info);
+ 		if (ret < 0)
+ 			return ret;
+ 	} else {
+-		copy_size = (ring->ring_size - start) * sizeof(struct mhi_ring_element);
+-		ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
+-						(start * sizeof(struct mhi_ring_element)),
+-						&ring->ring_cache[start], copy_size);
++		buf_info.size = (ring->ring_size - start) * sizeof(struct mhi_ring_element);
++		buf_info.host_addr = ring->rbase + (start * sizeof(struct mhi_ring_element));
++		buf_info.dev_addr = &ring->ring_cache[start];
++
++		ret = mhi_cntrl->read_from_host(mhi_cntrl, &buf_info);
+ 		if (ret < 0)
+ 			return ret;
+ 
+ 		if (end) {
+-			ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase,
+-							&ring->ring_cache[0],
+-							end * sizeof(struct mhi_ring_element));
++			buf_info.host_addr = ring->rbase;
++			buf_info.dev_addr = &ring->ring_cache[0];
++			buf_info.size = end * sizeof(struct mhi_ring_element);
++
++			ret = mhi_cntrl->read_from_host(mhi_cntrl, &buf_info);
+ 			if (ret < 0)
+ 				return ret;
+ 		}
+ 	}
+ 
+-	dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, copy_size);
++	dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, buf_info.size);
+ 
+ 	return 0;
+ }
+@@ -102,6 +107,7 @@ int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *e
+ {
+ 	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
++	struct mhi_ep_buf_info buf_info = {};
+ 	size_t old_offset = 0;
+ 	u32 num_free_elem;
+ 	__le64 rp;
+@@ -133,12 +139,11 @@ int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *e
+ 	rp = cpu_to_le64(ring->rd_offset * sizeof(*el) + ring->rbase);
+ 	memcpy_toio((void __iomem *) &ring->ring_ctx->generic.rp, &rp, sizeof(u64));
+ 
+-	ret = mhi_cntrl->write_to_host(mhi_cntrl, el, ring->rbase + (old_offset * sizeof(*el)),
+-				       sizeof(*el));
+-	if (ret < 0)
+-		return ret;
++	buf_info.host_addr = ring->rbase + (old_offset * sizeof(*el));
++	buf_info.dev_addr = el;
++	buf_info.size = sizeof(*el);
+ 
+-	return 0;
++	return mhi_cntrl->write_to_host(mhi_cntrl, &buf_info);
+ }
+ 
+ void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
+diff --git a/drivers/cdx/cdx.c b/drivers/cdx/cdx.c
+index 4461c6c9313f0..7c1c1f82a3263 100644
+--- a/drivers/cdx/cdx.c
++++ b/drivers/cdx/cdx.c
+@@ -57,7 +57,10 @@
+ 
+ #include <linux/init.h>
+ #include <linux/kernel.h>
++#include <linux/of.h>
+ #include <linux/of_device.h>
++#include <linux/of_platform.h>
++#include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/mm.h>
+ #include <linux/idr.h>
+@@ -569,12 +572,12 @@ static ssize_t rescan_store(const struct bus_type *bus,
+ 
+ 	/* Rescan all the devices */
+ 	for_each_compatible_node(np, NULL, compat_node_name) {
+-		if (!np)
+-			return -EINVAL;
+-
+ 		pd = of_find_device_by_node(np);
+-		if (!pd)
+-			return -EINVAL;
++		if (!pd) {
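++			/* for_each_compatible_node() holds a ref on np; drop it on early exit */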
++			of_node_put(np);
++			count = -EINVAL;
++			goto unlock;
++		}
+ 
+ 		cdx = platform_get_drvdata(pd);
+ 		if (cdx && cdx->controller_registered && cdx->ops->scan)
+@@ -583,6 +586,7 @@ static ssize_t rescan_store(const struct bus_type *bus,
+ 		put_device(&pd->dev);
+ 	}
+ 
++unlock:
+ 	mutex_unlock(&cdx_controller_lock);
+ 
+ 	return count;
+diff --git a/drivers/char/hw_random/stm32-rng.c b/drivers/char/hw_random/stm32-rng.c
+index 41e1dbea5d2eb..efd6edcd70661 100644
+--- a/drivers/char/hw_random/stm32-rng.c
++++ b/drivers/char/hw_random/stm32-rng.c
+@@ -325,6 +325,7 @@ static int stm32_rng_init(struct hwrng *rng)
+ 							(!(reg & RNG_CR_CONDRST)),
+ 							10, 50000);
+ 		if (err) {
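++			/* Undo the clk_prepare_enable() from earlier in this init path */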
++			clk_disable_unprepare(priv->clk);
+ 			dev_err((struct device *)priv->rng.priv,
+ 				"%s: timeout %x!\n", __func__, reg);
+ 			return -EINVAL;
+diff --git a/drivers/clk/clk-renesas-pcie.c b/drivers/clk/clk-renesas-pcie.c
+index 380245f635d66..6606aba253c56 100644
+--- a/drivers/clk/clk-renesas-pcie.c
++++ b/drivers/clk/clk-renesas-pcie.c
+@@ -163,7 +163,7 @@ static u8 rs9_calc_dif(const struct rs9_driver_data *rs9, int idx)
+ 	enum rs9_model model = rs9->chip_info->model;
+ 
+ 	if (model == RENESAS_9FGV0241)
+-		return BIT(idx) + 1;
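++		/* On the 9FGV0241 the DIF bits start at bit 1, hence BIT(idx + 1) */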
++		return BIT(idx + 1);
+ 	else if (model == RENESAS_9FGV0441)
+ 		return BIT(idx);
+ 
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index 845b451511d23..6e8dd7387cfdd 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -895,10 +895,8 @@ static int si5341_output_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	r[0] = r_div ? (r_div & 0xff) : 1;
+ 	r[1] = (r_div >> 8) & 0xff;
+ 	r[2] = (r_div >> 16) & 0xff;
+-	err = regmap_bulk_write(output->data->regmap,
++	return regmap_bulk_write(output->data->regmap,
+ 			SI5341_OUT_R_REG(output), r, 3);
+-
+-	return 0;
+ }
+ 
+ static int si5341_output_reparent(struct clk_si5341_output *output, u8 index)
+diff --git a/drivers/clk/clk-sp7021.c b/drivers/clk/clk-sp7021.c
+index 01d3c4c7b0b23..7cb7d501d7a6e 100644
+--- a/drivers/clk/clk-sp7021.c
++++ b/drivers/clk/clk-sp7021.c
+@@ -604,14 +604,14 @@ static int sp7021_clk_probe(struct platform_device *pdev)
+ 	int i;
+ 
+ 	clk_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (!clk_base)
+-		return -ENXIO;
++	if (IS_ERR(clk_base))
++		return PTR_ERR(clk_base);
+ 	pll_base = devm_platform_ioremap_resource(pdev, 1);
+-	if (!pll_base)
+-		return -ENXIO;
++	if (IS_ERR(pll_base))
++		return PTR_ERR(pll_base);
+ 	sys_base = devm_platform_ioremap_resource(pdev, 2);
+-	if (!sys_base)
+-		return -ENXIO;
++	if (IS_ERR(sys_base))
++		return PTR_ERR(sys_base);
+ 
+ 	/* enable default clks */
+ 	for (i = 0; i < ARRAY_SIZE(sp_clken); i++)
+diff --git a/drivers/clk/qcom/dispcc-sm8550.c b/drivers/clk/qcom/dispcc-sm8550.c
+index aefa19f3c2c51..f96d8b81fd9ad 100644
+--- a/drivers/clk/qcom/dispcc-sm8550.c
++++ b/drivers/clk/qcom/dispcc-sm8550.c
+@@ -81,6 +81,10 @@ static const struct alpha_pll_config disp_cc_pll0_config = {
+ 	.config_ctl_val = 0x20485699,
+ 	.config_ctl_hi_val = 0x00182261,
+ 	.config_ctl_hi1_val = 0x82aa299c,
++	.test_ctl_val = 0x00000000,
++	.test_ctl_hi_val = 0x00000003,
++	.test_ctl_hi1_val = 0x00009000,
++	.test_ctl_hi2_val = 0x00000034,
+ 	.user_ctl_val = 0x00000000,
+ 	.user_ctl_hi_val = 0x00000005,
+ };
+@@ -108,6 +112,10 @@ static const struct alpha_pll_config disp_cc_pll1_config = {
+ 	.config_ctl_val = 0x20485699,
+ 	.config_ctl_hi_val = 0x00182261,
+ 	.config_ctl_hi1_val = 0x82aa299c,
++	.test_ctl_val = 0x00000000,
++	.test_ctl_hi_val = 0x00000003,
++	.test_ctl_hi1_val = 0x00009000,
++	.test_ctl_hi2_val = 0x00000034,
+ 	.user_ctl_val = 0x00000000,
+ 	.user_ctl_hi_val = 0x00000005,
+ };
+@@ -1766,8 +1774,8 @@ static int disp_cc_sm8550_probe(struct platform_device *pdev)
+ 		goto err_put_rpm;
+ 	}
+ 
+-	clk_lucid_evo_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
+-	clk_lucid_evo_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
++	clk_lucid_ole_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
++	clk_lucid_ole_pll_configure(&disp_cc_pll1, regmap, &disp_cc_pll1_config);
+ 
+ 	/* Enable clock gating for MDP clocks */
+ 	regmap_update_bits(regmap, DISP_CC_MISC_CMD, 0x10, 0x10);
+diff --git a/drivers/clk/qcom/gcc-sm8550.c b/drivers/clk/qcom/gcc-sm8550.c
+index 586126c4dd907..b883dffe5f7aa 100644
+--- a/drivers/clk/qcom/gcc-sm8550.c
++++ b/drivers/clk/qcom/gcc-sm8550.c
+@@ -401,7 +401,7 @@ static struct clk_rcg2 gcc_gp1_clk_src = {
+ 		.parent_data = gcc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
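++		/* Shared ops park the RCG at a safe parent while the clock is disabled */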
+ 	},
+ };
+ 
+@@ -416,7 +416,7 @@ static struct clk_rcg2 gcc_gp2_clk_src = {
+ 		.parent_data = gcc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -431,7 +431,7 @@ static struct clk_rcg2 gcc_gp3_clk_src = {
+ 		.parent_data = gcc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_1),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -451,7 +451,7 @@ static struct clk_rcg2 gcc_pcie_0_aux_clk_src = {
+ 		.parent_data = gcc_parent_data_2,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -471,7 +471,7 @@ static struct clk_rcg2 gcc_pcie_0_phy_rchng_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -486,7 +486,7 @@ static struct clk_rcg2 gcc_pcie_1_aux_clk_src = {
+ 		.parent_data = gcc_parent_data_2,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -501,7 +501,7 @@ static struct clk_rcg2 gcc_pcie_1_phy_rchng_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -521,7 +521,7 @@ static struct clk_rcg2 gcc_pdm2_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -536,7 +536,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s0_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -551,7 +551,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s1_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -566,7 +566,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s2_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -581,7 +581,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s3_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -596,7 +596,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s4_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -611,7 +611,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s5_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -626,7 +626,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s6_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -641,7 +641,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s7_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -656,7 +656,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s8_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -671,7 +671,7 @@ static struct clk_rcg2 gcc_qupv3_i2c_s9_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -700,7 +700,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+@@ -717,7 +717,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+@@ -750,7 +750,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+@@ -767,7 +767,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+@@ -784,7 +784,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+@@ -801,7 +801,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+@@ -818,7 +818,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = {
+@@ -835,7 +835,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = {
+@@ -852,7 +852,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s0_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+@@ -869,7 +869,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s1_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+@@ -886,7 +886,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s2_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+@@ -903,7 +903,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s3_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+@@ -920,7 +920,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s4_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+@@ -937,7 +937,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s5_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+@@ -975,7 +975,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s6_clk_src_init = {
+ 	.parent_data = gcc_parent_data_8,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_8),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = {
+@@ -992,7 +992,7 @@ static struct clk_init_data gcc_qupv3_wrap2_s7_clk_src_init = {
+ 	.parent_data = gcc_parent_data_0,
+ 	.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 	.flags = CLK_SET_RATE_PARENT,
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = {
+@@ -1025,7 +1025,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.parent_data = gcc_parent_data_9,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_9),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1048,7 +1048,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1071,7 +1071,7 @@ static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1093,7 +1093,7 @@ static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = {
+ 		.parent_data = gcc_parent_data_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1114,7 +1114,7 @@ static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
+ 		.parent_data = gcc_parent_data_4,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_4),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1136,7 +1136,7 @@ static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1159,7 +1159,7 @@ static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1174,7 +1174,7 @@ static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1189,7 +1189,7 @@ static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = {
+ 		.parent_data = gcc_parent_data_2,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_2),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -2998,38 +2998,46 @@ static struct clk_branch gcc_video_axi1_clk = {
+ 
+ static struct gdsc pcie_0_gdsc = {
+ 	.gdscr = 0x6b004,
++	.collapse_ctrl = 0x52020,
++	.collapse_mask = BIT(0),
+ 	.pd = {
+ 		.name = "pcie_0_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc pcie_0_phy_gdsc = {
+ 	.gdscr = 0x6c000,
++	.collapse_ctrl = 0x52020,
++	.collapse_mask = BIT(3),
+ 	.pd = {
+ 		.name = "pcie_0_phy_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc pcie_1_gdsc = {
+ 	.gdscr = 0x8d004,
++	.collapse_ctrl = 0x52020,
++	.collapse_mask = BIT(1),
+ 	.pd = {
+ 		.name = "pcie_1_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc pcie_1_phy_gdsc = {
+ 	.gdscr = 0x8e000,
++	.collapse_ctrl = 0x52020,
++	.collapse_mask = BIT(4),
+ 	.pd = {
+ 		.name = "pcie_1_phy_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc ufs_phy_gdsc = {
+@@ -3038,7 +3046,7 @@ static struct gdsc ufs_phy_gdsc = {
+ 		.name = "ufs_phy_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc ufs_mem_phy_gdsc = {
+@@ -3047,7 +3055,7 @@ static struct gdsc ufs_mem_phy_gdsc = {
+ 		.name = "ufs_mem_phy_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc usb30_prim_gdsc = {
+@@ -3056,7 +3064,7 @@ static struct gdsc usb30_prim_gdsc = {
+ 		.name = "usb30_prim_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc usb3_phy_gdsc = {
+@@ -3065,7 +3073,7 @@ static struct gdsc usb3_phy_gdsc = {
+ 		.name = "usb3_phy_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = POLL_CFG_GDSCR,
++	.flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct clk_regmap *gcc_sm8550_clocks[] = {
+diff --git a/drivers/clk/qcom/gpucc-sm8150.c b/drivers/clk/qcom/gpucc-sm8150.c
+index 8422fd0474932..c89a5b59ddb7c 100644
+--- a/drivers/clk/qcom/gpucc-sm8150.c
++++ b/drivers/clk/qcom/gpucc-sm8150.c
+@@ -37,8 +37,8 @@ static struct alpha_pll_config gpu_cc_pll1_config = {
+ 	.config_ctl_hi_val = 0x00002267,
+ 	.config_ctl_hi1_val = 0x00000024,
+ 	.test_ctl_val = 0x00000000,
+-	.test_ctl_hi_val = 0x00000002,
+-	.test_ctl_hi1_val = 0x00000000,
++	.test_ctl_hi_val = 0x00000000,
++	.test_ctl_hi1_val = 0x00000020,
+ 	.user_ctl_val = 0x00000000,
+ 	.user_ctl_hi_val = 0x00000805,
+ 	.user_ctl_hi1_val = 0x000000d0,
+diff --git a/drivers/clk/qcom/gpucc-sm8550.c b/drivers/clk/qcom/gpucc-sm8550.c
+index 420dcb27b47dd..2fa8673424d78 100644
+--- a/drivers/clk/qcom/gpucc-sm8550.c
++++ b/drivers/clk/qcom/gpucc-sm8550.c
+@@ -35,12 +35,12 @@ enum {
+ };
+ 
+ static const struct pll_vco lucid_ole_vco[] = {
+-	{ 249600000, 2300000000, 0 },
++	{ 249600000, 2000000000, 0 },
+ };
+ 
+ static const struct alpha_pll_config gpu_cc_pll0_config = {
+-	.l = 0x0d,
+-	.alpha = 0x0,
++	.l = 0x1e,
++	.alpha = 0xbaaa,
+ 	.config_ctl_val = 0x20485699,
+ 	.config_ctl_hi_val = 0x00182261,
+ 	.config_ctl_hi1_val = 0x82aa299c,
+diff --git a/drivers/clk/qcom/videocc-sm8150.c b/drivers/clk/qcom/videocc-sm8150.c
+index 1afdbe4a249d6..5579463f7e462 100644
+--- a/drivers/clk/qcom/videocc-sm8150.c
++++ b/drivers/clk/qcom/videocc-sm8150.c
+@@ -33,6 +33,7 @@ static struct alpha_pll_config video_pll0_config = {
+ 	.config_ctl_val = 0x20485699,
+ 	.config_ctl_hi_val = 0x00002267,
+ 	.config_ctl_hi1_val = 0x00000024,
++	.test_ctl_hi1_val = 0x00000020,
+ 	.user_ctl_val = 0x00000000,
+ 	.user_ctl_hi_val = 0x00000805,
+ 	.user_ctl_hi1_val = 0x000000D0,
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 764bd72cf0591..3d2daa4ba2a4b 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -1410,41 +1410,33 @@ fail:
+ 
+ #define rcdev_to_priv(x)	container_of(x, struct rzg2l_cpg_priv, rcdev)
+ 
+-static int rzg2l_cpg_reset(struct reset_controller_dev *rcdev,
+-			   unsigned long id)
+-{
+-	struct rzg2l_cpg_priv *priv = rcdev_to_priv(rcdev);
+-	const struct rzg2l_cpg_info *info = priv->info;
+-	unsigned int reg = info->resets[id].off;
+-	u32 dis = BIT(info->resets[id].bit);
+-	u32 we = dis << 16;
+-
+-	dev_dbg(rcdev->dev, "reset id:%ld offset:0x%x\n", id, CLK_RST_R(reg));
+-
+-	/* Reset module */
+-	writel(we, priv->base + CLK_RST_R(reg));
+-
+-	/* Wait for at least one cycle of the RCLK clock (@ ca. 32 kHz) */
+-	udelay(35);
+-
+-	/* Release module from reset state */
+-	writel(we | dis, priv->base + CLK_RST_R(reg));
+-
+-	return 0;
+-}
+-
+ static int rzg2l_cpg_assert(struct reset_controller_dev *rcdev,
+ 			    unsigned long id)
+ {
+ 	struct rzg2l_cpg_priv *priv = rcdev_to_priv(rcdev);
+ 	const struct rzg2l_cpg_info *info = priv->info;
+ 	unsigned int reg = info->resets[id].off;
+-	u32 value = BIT(info->resets[id].bit) << 16;
++	u32 mask = BIT(info->resets[id].bit);
++	s8 monbit = info->resets[id].monbit;
++	u32 value = mask << 16;
+ 
+ 	dev_dbg(rcdev->dev, "assert id:%ld offset:0x%x\n", id, CLK_RST_R(reg));
+ 
+ 	writel(value, priv->base + CLK_RST_R(reg));
+-	return 0;
++
++	if (info->has_clk_mon_regs) {
++		reg = CLK_MRST_R(reg);
++	} else if (monbit >= 0) {
++		reg = CPG_RST_MON;
++		mask = BIT(monbit);
++	} else {
++		/* Wait for at least one cycle of the RCLK clock (@ ca. 32 kHz) */
++		udelay(35);
++		return 0;
++	}
++
++	return readl_poll_timeout_atomic(priv->base + reg, value,
++					 value & mask, 10, 200);
+ }
+ 
+ static int rzg2l_cpg_deassert(struct reset_controller_dev *rcdev,
+@@ -1453,14 +1445,40 @@ static int rzg2l_cpg_deassert(struct reset_controller_dev *rcdev,
+ 	struct rzg2l_cpg_priv *priv = rcdev_to_priv(rcdev);
+ 	const struct rzg2l_cpg_info *info = priv->info;
+ 	unsigned int reg = info->resets[id].off;
+-	u32 dis = BIT(info->resets[id].bit);
+-	u32 value = (dis << 16) | dis;
++	u32 mask = BIT(info->resets[id].bit);
++	s8 monbit = info->resets[id].monbit;
++	u32 value = (mask << 16) | mask;
+ 
+ 	dev_dbg(rcdev->dev, "deassert id:%ld offset:0x%x\n", id,
+ 		CLK_RST_R(reg));
+ 
+ 	writel(value, priv->base + CLK_RST_R(reg));
+-	return 0;
++
++	if (info->has_clk_mon_regs) {
++		reg = CLK_MRST_R(reg);
++	} else if (monbit >= 0) {
++		reg = CPG_RST_MON;
++		mask = BIT(monbit);
++	} else {
++		/* Wait for at least one cycle of the RCLK clock (@ ca. 32 kHz) */
++		udelay(35);
++		return 0;
++	}
++
++	return readl_poll_timeout_atomic(priv->base + reg, value,
++					 !(value & mask), 10, 200);
++}
++
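++/*
++ * reset() is now assert followed by deassert; each step polls the matching
++ * monitor register when the SoC provides one, instead of relying solely on a
++ * fixed delay.
++ */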
++static int rzg2l_cpg_reset(struct reset_controller_dev *rcdev,
++			   unsigned long id)
++{
++	int ret;
++
++	ret = rzg2l_cpg_assert(rcdev, id);
++	if (ret)
++		return ret;
++
++	return rzg2l_cpg_deassert(rcdev, id);
+ }
+ 
+ static int rzg2l_cpg_status(struct reset_controller_dev *rcdev,
+@@ -1468,18 +1486,21 @@ static int rzg2l_cpg_status(struct reset_controller_dev *rcdev,
+ {
+ 	struct rzg2l_cpg_priv *priv = rcdev_to_priv(rcdev);
+ 	const struct rzg2l_cpg_info *info = priv->info;
+-	unsigned int reg = info->resets[id].off;
+-	u32 bitmask = BIT(info->resets[id].bit);
+ 	s8 monbit = info->resets[id].monbit;
++	unsigned int reg;
++	u32 bitmask;
+ 
+ 	if (info->has_clk_mon_regs) {
+-		return !!(readl(priv->base + CLK_MRST_R(reg)) & bitmask);
++		reg = CLK_MRST_R(info->resets[id].off);
++		bitmask = BIT(info->resets[id].bit);
+ 	} else if (monbit >= 0) {
+-		u32 monbitmask = BIT(monbit);
+-
+-		return !!(readl(priv->base + CPG_RST_MON) & monbitmask);
++		reg = CPG_RST_MON;
++		bitmask = BIT(monbit);
++	} else {
++		return -ENOTSUPP;
+ 	}
+-	return -ENOTSUPP;
++
++	return !!(readl(priv->base + reg) & bitmask);
+ }
+ 
+ static const struct reset_control_ops rzg2l_cpg_reset_ops = {
+diff --git a/drivers/clk/zynqmp/clk-mux-zynqmp.c b/drivers/clk/zynqmp/clk-mux-zynqmp.c
+index 60359333f26db..9b5d3050b7422 100644
+--- a/drivers/clk/zynqmp/clk-mux-zynqmp.c
++++ b/drivers/clk/zynqmp/clk-mux-zynqmp.c
+@@ -89,7 +89,7 @@ static int zynqmp_clk_mux_set_parent(struct clk_hw *hw, u8 index)
+ static const struct clk_ops zynqmp_clk_mux_ops = {
+ 	.get_parent = zynqmp_clk_mux_get_parent,
+ 	.set_parent = zynqmp_clk_mux_set_parent,
+-	.determine_rate = __clk_mux_determine_rate,
++	.determine_rate = __clk_mux_determine_rate_closest,
+ };
+ 
+ static const struct clk_ops zynqmp_clk_mux_ro_ops = {
+diff --git a/drivers/clk/zynqmp/divider.c b/drivers/clk/zynqmp/divider.c
+index 33a3b2a226595..5a00487ae408b 100644
+--- a/drivers/clk/zynqmp/divider.c
++++ b/drivers/clk/zynqmp/divider.c
+@@ -110,52 +110,6 @@ static unsigned long zynqmp_clk_divider_recalc_rate(struct clk_hw *hw,
+ 	return DIV_ROUND_UP_ULL(parent_rate, value);
+ }
+ 
+-static void zynqmp_get_divider2_val(struct clk_hw *hw,
+-				    unsigned long rate,
+-				    struct zynqmp_clk_divider *divider,
+-				    u32 *bestdiv)
+-{
+-	int div1;
+-	int div2;
+-	long error = LONG_MAX;
+-	unsigned long div1_prate;
+-	struct clk_hw *div1_parent_hw;
+-	struct zynqmp_clk_divider *pdivider;
+-	struct clk_hw *div2_parent_hw = clk_hw_get_parent(hw);
+-
+-	if (!div2_parent_hw)
+-		return;
+-
+-	pdivider = to_zynqmp_clk_divider(div2_parent_hw);
+-	if (!pdivider)
+-		return;
+-
+-	div1_parent_hw = clk_hw_get_parent(div2_parent_hw);
+-	if (!div1_parent_hw)
+-		return;
+-
+-	div1_prate = clk_hw_get_rate(div1_parent_hw);
+-	*bestdiv = 1;
+-	for (div1 = 1; div1 <= pdivider->max_div;) {
+-		for (div2 = 1; div2 <= divider->max_div;) {
+-			long new_error = ((div1_prate / div1) / div2) - rate;
+-
+-			if (abs(new_error) < abs(error)) {
+-				*bestdiv = div2;
+-				error = new_error;
+-			}
+-			if (divider->flags & CLK_DIVIDER_POWER_OF_TWO)
+-				div2 = div2 << 1;
+-			else
+-				div2++;
+-		}
+-		if (pdivider->flags & CLK_DIVIDER_POWER_OF_TWO)
+-			div1 = div1 << 1;
+-		else
+-			div1++;
+-	}
+-}
+-
+ /**
+  * zynqmp_clk_divider_round_rate() - Round rate of divider clock
+  * @hw:			handle between common and hardware-specific interfaces
+@@ -174,6 +128,7 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ 	u32 div_type = divider->div_type;
+ 	u32 bestdiv;
+ 	int ret;
++	u8 width;
+ 
+ 	/* if read only, just return current value */
+ 	if (divider->flags & CLK_DIVIDER_READ_ONLY) {
+@@ -193,23 +148,12 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ 		return DIV_ROUND_UP_ULL((u64)*prate, bestdiv);
+ 	}
+ 
+-	bestdiv = zynqmp_divider_get_val(*prate, rate, divider->flags);
+-
+-	/*
+-	 * In case of two divisors, compute best divider values and return
+-	 * divider2 value based on compute value. div1 will  be automatically
+-	 * set to optimum based on required total divider value.
+-	 */
+-	if (div_type == TYPE_DIV2 &&
+-	    (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) {
+-		zynqmp_get_divider2_val(hw, rate, divider, &bestdiv);
+-	}
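++	/* fls(max_div) yields the field width that divider_round_rate() expects */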
++	width = fls(divider->max_div);
+ 
+-	if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && divider->is_frac)
+-		bestdiv = rate % *prate ? 1 : bestdiv;
++	rate = divider_round_rate(hw, rate, prate, NULL, width, divider->flags);
+ 
+-	bestdiv = min_t(u32, bestdiv, divider->max_div);
+-	*prate = rate * bestdiv;
++	if (divider->is_frac && (clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && (rate % *prate))
++		*prate = rate;
+ 
+ 	return rate;
+ }
+diff --git a/drivers/clocksource/timer-ep93xx.c b/drivers/clocksource/timer-ep93xx.c
+index bc0ca6e123349..6981ff3ac8a94 100644
+--- a/drivers/clocksource/timer-ep93xx.c
++++ b/drivers/clocksource/timer-ep93xx.c
+@@ -155,9 +155,8 @@ static int __init ep93xx_timer_of_init(struct device_node *np)
+ 	ep93xx_tcu = tcu;
+ 
+ 	irq = irq_of_parse_and_map(np, 0);
+-	if (irq == 0)
+-		irq = -EINVAL;
+-	if (irq < 0) {
++	if (!irq) {
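++		/* irq_of_parse_and_map() returns 0 on failure, never a negative errno */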
++		ret = -EINVAL;
+ 		pr_err("EP93XX Timer Can't parse IRQ %d", irq);
+ 		goto out_free;
+ 	}
+diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c
+index 5f60f6bd33866..56acf26172621 100644
+--- a/drivers/clocksource/timer-ti-dm.c
++++ b/drivers/clocksource/timer-ti-dm.c
+@@ -183,7 +183,7 @@ static inline u32 dmtimer_read(struct dmtimer *timer, u32 reg)
+  * dmtimer_write - write timer registers in posted and non-posted mode
+  * @timer:      timer pointer over which write operation is to perform
+  * @reg:        lowest byte holds the register offset
+- * @value:      data to write into the register
++ * @val:        data to write into the register
+  *
+  * The posted mode bit is encoded in reg. Note that in posted mode, the write
+  * pending bit must be checked. Otherwise a write on a register which has a
+@@ -949,7 +949,7 @@ static int omap_dm_timer_set_int_enable(struct omap_dm_timer *cookie,
+ 
+ /**
+  * omap_dm_timer_set_int_disable - disable timer interrupts
+- * @timer:	pointer to timer handle
++ * @cookie:	pointer to timer cookie
+  * @mask:	bit mask of interrupts to be disabled
+  *
+  * Disables the specified timer interrupts for a timer.
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index c8a7ccc42c164..4ee23f4ebf4a4 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -334,8 +334,11 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev)
+ 
+ #ifdef CONFIG_COMMON_CLK
+ 	/* dummy clock provider as needed by OPP if clocks property is used */
+-	if (of_property_present(dev->of_node, "#clock-cells"))
+-		devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL);
++	if (of_property_present(dev->of_node, "#clock-cells")) {
++		ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL);
++		if (ret)
++			return dev_err_probe(dev, ret, "%s: registering clock provider failed\n", __func__);
++	}
+ #endif
+ 
+ 	ret = cpufreq_register_driver(&scmi_cpufreq_driver);
+diff --git a/drivers/cpuidle/cpuidle-haltpoll.c b/drivers/cpuidle/cpuidle-haltpoll.c
+index e66df22f96955..d8515d5c0853d 100644
+--- a/drivers/cpuidle/cpuidle-haltpoll.c
++++ b/drivers/cpuidle/cpuidle-haltpoll.c
+@@ -25,13 +25,12 @@ MODULE_PARM_DESC(force, "Load unconditionally");
+ static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;
+ static enum cpuhp_state haltpoll_hp_state;
+ 
+-static int default_enter_idle(struct cpuidle_device *dev,
+-			      struct cpuidle_driver *drv, int index)
++static __cpuidle int default_enter_idle(struct cpuidle_device *dev,
++					struct cpuidle_driver *drv, int index)
+ {
+-	if (current_clr_polling_and_test()) {
+-		local_irq_enable();
++	if (current_clr_polling_and_test())
+ 		return index;
+-	}
++
+ 	arch_cpu_idle();
+ 	return index;
+ }
+diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
+index aa4e1a5006919..cb8e99936abb7 100644
+--- a/drivers/crypto/ccp/ccp-ops.c
++++ b/drivers/crypto/ccp/ccp-ops.c
+@@ -179,8 +179,11 @@ static int ccp_init_dm_workarea(struct ccp_dm_workarea *wa,
+ 
+ 		wa->dma.address = dma_map_single(wa->dev, wa->address, len,
+ 						 dir);
+-		if (dma_mapping_error(wa->dev, wa->dma.address))
++		if (dma_mapping_error(wa->dev, wa->dma.address)) {
++			kfree(wa->address);
++			wa->address = NULL;
+ 			return -ENOMEM;
++		}
+ 
+ 		wa->dma.length = len;
+ 	}
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
+index 56777099ef696..3255b2a070c78 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+@@ -118,8 +118,6 @@
+ #define HPRE_DFX_COMMON2_LEN		0xE
+ #define HPRE_DFX_CORE_LEN		0x43
+ 
+-#define HPRE_DEV_ALG_MAX_LEN	256
+-
+ static const char hpre_name[] = "hisi_hpre";
+ static struct dentry *hpre_debugfs_root;
+ static const struct pci_device_id hpre_dev_ids[] = {
+@@ -135,12 +133,7 @@ struct hpre_hw_error {
+ 	const char *msg;
+ };
+ 
+-struct hpre_dev_alg {
+-	u32 alg_msk;
+-	const char *alg;
+-};
+-
+-static const struct hpre_dev_alg hpre_dev_algs[] = {
++static const struct qm_dev_alg hpre_dev_algs[] = {
+ 	{
+ 		.alg_msk = BIT(0),
+ 		.alg = "rsa\n"
+@@ -233,6 +226,20 @@ static const struct hisi_qm_cap_info hpre_basic_info[] = {
+ 	{HPRE_CORE10_ALG_BITMAP_CAP, 0x3170, 0, GENMASK(31, 0), 0x0, 0x10, 0x10}
+ };
+ 
++enum hpre_pre_store_cap_idx {
++	HPRE_CLUSTER_NUM_CAP_IDX = 0x0,
++	HPRE_CORE_ENABLE_BITMAP_CAP_IDX,
++	HPRE_DRV_ALG_BITMAP_CAP_IDX,
++	HPRE_DEV_ALG_BITMAP_CAP_IDX,
++};
++
++static const u32 hpre_pre_store_caps[] = {
++	HPRE_CLUSTER_NUM_CAP,
++	HPRE_CORE_ENABLE_BITMAP_CAP,
++	HPRE_DRV_ALG_BITMAP_CAP,
++	HPRE_DEV_ALG_BITMAP_CAP,
++};
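++
++/* The entries above must stay in the same order as enum hpre_pre_store_cap_idx */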
++
+ static const struct hpre_hw_error hpre_hw_errors[] = {
+ 	{
+ 		.int_msk = BIT(0),
+@@ -355,42 +362,13 @@ bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg)
+ {
+ 	u32 cap_val;
+ 
+-	cap_val = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_DRV_ALG_BITMAP_CAP, qm->cap_ver);
++	cap_val = qm->cap_tables.dev_cap_table[HPRE_DRV_ALG_BITMAP_CAP_IDX].cap_val;
+ 	if (alg & cap_val)
+ 		return true;
+ 
+ 	return false;
+ }
+ 
+-static int hpre_set_qm_algs(struct hisi_qm *qm)
+-{
+-	struct device *dev = &qm->pdev->dev;
+-	char *algs, *ptr;
+-	u32 alg_msk;
+-	int i;
+-
+-	if (!qm->use_sva)
+-		return 0;
+-
+-	algs = devm_kzalloc(dev, HPRE_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL);
+-	if (!algs)
+-		return -ENOMEM;
+-
+-	alg_msk = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_DEV_ALG_BITMAP_CAP, qm->cap_ver);
+-
+-	for (i = 0; i < ARRAY_SIZE(hpre_dev_algs); i++)
+-		if (alg_msk & hpre_dev_algs[i].alg_msk)
+-			strcat(algs, hpre_dev_algs[i].alg);
+-
+-	ptr = strrchr(algs, '\n');
+-	if (ptr)
+-		*ptr = '\0';
+-
+-	qm->uacce->algs = algs;
+-
+-	return 0;
+-}
+-
+ static int hpre_diff_regs_show(struct seq_file *s, void *unused)
+ {
+ 	struct hisi_qm *qm = s->private;
+@@ -460,16 +438,6 @@ static u32 vfs_num;
+ module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
+ MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
+ 
+-static inline int hpre_cluster_num(struct hisi_qm *qm)
+-{
+-	return hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CLUSTER_NUM_CAP, qm->cap_ver);
+-}
+-
+-static inline int hpre_cluster_core_mask(struct hisi_qm *qm)
+-{
+-	return hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_CORE_ENABLE_BITMAP_CAP, qm->cap_ver);
+-}
+-
+ struct hisi_qp *hpre_create_qp(u8 type)
+ {
+ 	int node = cpu_to_node(smp_processor_id());
+@@ -536,13 +504,15 @@ static int hpre_cfg_by_dsm(struct hisi_qm *qm)
+ 
+ static int hpre_set_cluster(struct hisi_qm *qm)
+ {
+-	u32 cluster_core_mask = hpre_cluster_core_mask(qm);
+-	u8 clusters_num = hpre_cluster_num(qm);
+ 	struct device *dev = &qm->pdev->dev;
+ 	unsigned long offset;
++	u32 cluster_core_mask;
++	u8 clusters_num;
+ 	u32 val = 0;
+ 	int ret, i;
+ 
++	cluster_core_mask = qm->cap_tables.dev_cap_table[HPRE_CORE_ENABLE_BITMAP_CAP_IDX].cap_val;
++	clusters_num = qm->cap_tables.dev_cap_table[HPRE_CLUSTER_NUM_CAP_IDX].cap_val;
+ 	for (i = 0; i < clusters_num; i++) {
+ 		offset = i * HPRE_CLSTR_ADDR_INTRVL;
+ 
+@@ -737,11 +707,12 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
+ 
+ static void hpre_cnt_regs_clear(struct hisi_qm *qm)
+ {
+-	u8 clusters_num = hpre_cluster_num(qm);
+ 	unsigned long offset;
++	u8 clusters_num;
+ 	int i;
+ 
+ 	/* clear clusterX/cluster_ctrl */
++	clusters_num = qm->cap_tables.dev_cap_table[HPRE_CLUSTER_NUM_CAP_IDX].cap_val;
+ 	for (i = 0; i < clusters_num; i++) {
+ 		offset = HPRE_CLSTR_BASE + i * HPRE_CLSTR_ADDR_INTRVL;
+ 		writel(0x0, qm->io_base + offset + HPRE_CLUSTER_INQURY);
+@@ -1028,13 +999,14 @@ static int hpre_pf_comm_regs_debugfs_init(struct hisi_qm *qm)
+ 
+ static int hpre_cluster_debugfs_init(struct hisi_qm *qm)
+ {
+-	u8 clusters_num = hpre_cluster_num(qm);
+ 	struct device *dev = &qm->pdev->dev;
+ 	char buf[HPRE_DBGFS_VAL_MAX_LEN];
+ 	struct debugfs_regset32 *regset;
+ 	struct dentry *tmp_d;
++	u8 clusters_num;
+ 	int i, ret;
+ 
++	clusters_num = qm->cap_tables.dev_cap_table[HPRE_CLUSTER_NUM_CAP_IDX].cap_val;
+ 	for (i = 0; i < clusters_num; i++) {
+ 		ret = snprintf(buf, HPRE_DBGFS_VAL_MAX_LEN, "cluster%d", i);
+ 		if (ret >= HPRE_DBGFS_VAL_MAX_LEN)
+@@ -1139,8 +1111,37 @@ static void hpre_debugfs_exit(struct hisi_qm *qm)
+ 	debugfs_remove_recursive(qm->debug.debug_root);
+ }
+ 
++static int hpre_pre_store_cap_reg(struct hisi_qm *qm)
++{
++	struct hisi_qm_cap_record *hpre_cap;
++	struct device *dev = &qm->pdev->dev;
++	size_t i, size;
++
++	size = ARRAY_SIZE(hpre_pre_store_caps);
++	hpre_cap = devm_kzalloc(dev, sizeof(*hpre_cap) * size, GFP_KERNEL);
++	if (!hpre_cap)
++		return -ENOMEM;
++
++	for (i = 0; i < size; i++) {
++		hpre_cap[i].type = hpre_pre_store_caps[i];
++		hpre_cap[i].cap_val = hisi_qm_get_hw_info(qm, hpre_basic_info,
++				      hpre_pre_store_caps[i], qm->cap_ver);
++	}
++
++	if (hpre_cap[HPRE_CLUSTER_NUM_CAP_IDX].cap_val > HPRE_CLUSTERS_NUM_MAX) {
++		dev_err(dev, "Device cluster num %u is out of range (driver supports at most %d)!\n",
++			hpre_cap[HPRE_CLUSTER_NUM_CAP_IDX].cap_val, HPRE_CLUSTERS_NUM_MAX);
++		return -EINVAL;
++	}
++
++	qm->cap_tables.dev_cap_table = hpre_cap;
++
++	return 0;
++}
++
+ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ {
++	u64 alg_msk;
+ 	int ret;
+ 
+ 	if (pdev->revision == QM_HW_V1) {
+@@ -1171,7 +1172,16 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = hpre_set_qm_algs(qm);
++	/* Fetch and save the value of capability registers */
++	ret = hpre_pre_store_cap_reg(qm);
++	if (ret) {
++		pci_err(pdev, "Failed to pre-store capability registers!\n");
++		hisi_qm_uninit(qm);
++		return ret;
++	}
++
++	alg_msk = qm->cap_tables.dev_cap_table[HPRE_DEV_ALG_BITMAP_CAP_IDX].cap_val;
++	ret = hisi_qm_set_algs(qm, alg_msk, hpre_dev_algs, ARRAY_SIZE(hpre_dev_algs));
+ 	if (ret) {
+ 		pci_err(pdev, "Failed to set hpre algs!\n");
+ 		hisi_qm_uninit(qm);
+@@ -1184,11 +1194,12 @@ static int hpre_show_last_regs_init(struct hisi_qm *qm)
+ {
+ 	int cluster_dfx_regs_num =  ARRAY_SIZE(hpre_cluster_dfx_regs);
+ 	int com_dfx_regs_num = ARRAY_SIZE(hpre_com_dfx_regs);
+-	u8 clusters_num = hpre_cluster_num(qm);
+ 	struct qm_debug *debug = &qm->debug;
+ 	void __iomem *io_base;
++	u8 clusters_num;
+ 	int i, j, idx;
+ 
++	clusters_num = qm->cap_tables.dev_cap_table[HPRE_CLUSTER_NUM_CAP_IDX].cap_val;
+ 	debug->last_words = kcalloc(cluster_dfx_regs_num * clusters_num +
+ 			com_dfx_regs_num, sizeof(unsigned int), GFP_KERNEL);
+ 	if (!debug->last_words)
+@@ -1225,10 +1236,10 @@ static void hpre_show_last_dfx_regs(struct hisi_qm *qm)
+ {
+ 	int cluster_dfx_regs_num =  ARRAY_SIZE(hpre_cluster_dfx_regs);
+ 	int com_dfx_regs_num = ARRAY_SIZE(hpre_com_dfx_regs);
+-	u8 clusters_num = hpre_cluster_num(qm);
+ 	struct qm_debug *debug = &qm->debug;
+ 	struct pci_dev *pdev = qm->pdev;
+ 	void __iomem *io_base;
++	u8 clusters_num;
+ 	int i, j, idx;
+ 	u32 val;
+ 
+@@ -1243,6 +1254,7 @@ static void hpre_show_last_dfx_regs(struct hisi_qm *qm)
+ 			  hpre_com_dfx_regs[i].name, debug->last_words[i], val);
+ 	}
+ 
++	clusters_num = qm->cap_tables.dev_cap_table[HPRE_CLUSTER_NUM_CAP_IDX].cap_val;
+ 	for (i = 0; i < clusters_num; i++) {
+ 		io_base = qm->io_base + hpre_cluster_offsets[i];
+ 		for (j = 0; j <  cluster_dfx_regs_num; j++) {
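
The hpre changes above set the pattern repeated across this series: read each
capability register once at probe time into an indexed table, then have every
later query index that cached table instead of issuing another hardware read.
A minimal userspace sketch of the idea, with illustrative names (cap_record,
pre_store_caps and read_hw_cap are mine, not the driver's):

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins for the hardware capability IDs. */
enum cap_type { CAP_CLUSTER_NUM, CAP_CORE_BITMAP };

/* Index into the cached table; must mirror pre_store_caps[] order. */
enum cap_idx { CLUSTER_NUM_IDX, CORE_BITMAP_IDX };

struct cap_record { enum cap_type type; unsigned int val; };

static const enum cap_type pre_store_caps[] = {
	CAP_CLUSTER_NUM,
	CAP_CORE_BITMAP,
};

/* Pretend this is the expensive MMIO read done once at probe. */
static unsigned int read_hw_cap(enum cap_type t)
{
	return t == CAP_CLUSTER_NUM ? 4 : 0xf;
}

static struct cap_record *pre_store_caps_init(size_t n)
{
	struct cap_record *tbl = calloc(n, sizeof(*tbl));

	if (!tbl)
		return NULL;
	for (size_t i = 0; i < n; i++) {
		tbl[i].type = pre_store_caps[i];
		tbl[i].val = read_hw_cap(pre_store_caps[i]);
	}
	return tbl;
}

int main(void)
{
	size_t n = sizeof(pre_store_caps) / sizeof(pre_store_caps[0]);
	struct cap_record *tbl = pre_store_caps_init(n);

	if (!tbl)
		return 1;
	/* Hot path: index the cache, no hardware access. */
	printf("clusters=%u cores=0x%x\n",
	       tbl[CLUSTER_NUM_IDX].val, tbl[CORE_BITMAP_IDX].val);
	free(tbl);
	return 0;
}

The out-of-range check on the cluster count belongs at fill time for the same
reason: validate once when the table is built, not in every consumer.
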
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index 18599f3634c3c..40da95dbab250 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -229,6 +229,8 @@
+ #define QM_QOS_MAX_CIR_U		6
+ #define QM_AUTOSUSPEND_DELAY		3000
+ 
++#define QM_DEV_ALG_MAX_LEN		256
++
+ #define QM_MK_CQC_DW3_V1(hop_num, pg_sz, buf_sz, cqe_sz) \
+ 	(((hop_num) << QM_CQ_HOP_NUM_SHIFT) | \
+ 	((pg_sz) << QM_CQ_PAGE_SIZE_SHIFT) | \
+@@ -294,6 +296,13 @@ enum qm_basic_type {
+ 	QM_VF_IRQ_NUM_CAP,
+ };
+ 
++enum qm_pre_store_cap_idx {
++	QM_EQ_IRQ_TYPE_CAP_IDX = 0x0,
++	QM_AEQ_IRQ_TYPE_CAP_IDX,
++	QM_ABN_IRQ_TYPE_CAP_IDX,
++	QM_PF2VF_IRQ_TYPE_CAP_IDX,
++};
++
+ static const struct hisi_qm_cap_info qm_cap_info_comm[] = {
+ 	{QM_SUPPORT_DB_ISOLATION, 0x30,   0, BIT(0),  0x0, 0x0, 0x0},
+ 	{QM_SUPPORT_FUNC_QOS,     0x3100, 0, BIT(8),  0x0, 0x0, 0x1},
+@@ -323,6 +332,13 @@ static const struct hisi_qm_cap_info qm_basic_info[] = {
+ 	{QM_VF_IRQ_NUM_CAP,     0x311c,   0,  GENMASK(15, 0), 0x1,       0x2,       0x3},
+ };
+ 
++static const u32 qm_pre_store_caps[] = {
++	QM_EQ_IRQ_TYPE_CAP,
++	QM_AEQ_IRQ_TYPE_CAP,
++	QM_ABN_IRQ_TYPE_CAP,
++	QM_PF2VF_IRQ_TYPE_CAP,
++};
++
+ struct qm_mailbox {
+ 	__le16 w0;
+ 	__le16 queue_num;
+@@ -828,6 +844,40 @@ static void qm_get_xqc_depth(struct hisi_qm *qm, u16 *low_bits,
+ 	*high_bits = (depth >> QM_XQ_DEPTH_SHIFT) & QM_XQ_DEPTH_MASK;
+ }
+ 
++int hisi_qm_set_algs(struct hisi_qm *qm, u64 alg_msk, const struct qm_dev_alg *dev_algs,
++		     u32 dev_algs_size)
++{
++	struct device *dev = &qm->pdev->dev;
++	char *algs, *ptr;
++	int i;
++
++	if (!qm->uacce)
++		return 0;
++
++	if (dev_algs_size >= QM_DEV_ALG_MAX_LEN) {
++		dev_err(dev, "algs size %u is equal to or larger than %d.\n",
++			dev_algs_size, QM_DEV_ALG_MAX_LEN);
++		return -EINVAL;
++	}
++
++	algs = devm_kzalloc(dev, QM_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL);
++	if (!algs)
++		return -ENOMEM;
++
++	for (i = 0; i < dev_algs_size; i++)
++		if (alg_msk & dev_algs[i].alg_msk)
++			strcat(algs, dev_algs[i].alg);
++
++	ptr = strrchr(algs, '\n');
++	if (ptr) {
++		*ptr = '\0';
++		qm->uacce->algs = algs;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(hisi_qm_set_algs);
++
+ static u32 qm_get_irq_num(struct hisi_qm *qm)
+ {
+ 	if (qm->fun_type == QM_HW_PF)
+@@ -4816,7 +4866,7 @@ static void qm_unregister_abnormal_irq(struct hisi_qm *qm)
+ 	if (qm->fun_type == QM_HW_VF)
+ 		return;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_ABN_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_ABN_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_ABN_IRQ_TYPE_MASK))
+ 		return;
+ 
+@@ -4833,7 +4883,7 @@ static int qm_register_abnormal_irq(struct hisi_qm *qm)
+ 	if (qm->fun_type == QM_HW_VF)
+ 		return 0;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_ABN_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_ABN_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_ABN_IRQ_TYPE_MASK))
+ 		return 0;
+ 
+@@ -4850,7 +4900,7 @@ static void qm_unregister_mb_cmd_irq(struct hisi_qm *qm)
+ 	struct pci_dev *pdev = qm->pdev;
+ 	u32 irq_vector, val;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_PF2VF_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_PF2VF_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK))
+ 		return;
+ 
+@@ -4864,7 +4914,7 @@ static int qm_register_mb_cmd_irq(struct hisi_qm *qm)
+ 	u32 irq_vector, val;
+ 	int ret;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_PF2VF_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_PF2VF_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK))
+ 		return 0;
+ 
+@@ -4881,7 +4931,7 @@ static void qm_unregister_aeq_irq(struct hisi_qm *qm)
+ 	struct pci_dev *pdev = qm->pdev;
+ 	u32 irq_vector, val;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_AEQ_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_AEQ_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK))
+ 		return;
+ 
+@@ -4895,7 +4945,7 @@ static int qm_register_aeq_irq(struct hisi_qm *qm)
+ 	u32 irq_vector, val;
+ 	int ret;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_AEQ_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_AEQ_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK))
+ 		return 0;
+ 
+@@ -4913,7 +4963,7 @@ static void qm_unregister_eq_irq(struct hisi_qm *qm)
+ 	struct pci_dev *pdev = qm->pdev;
+ 	u32 irq_vector, val;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_EQ_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_EQ_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK))
+ 		return;
+ 
+@@ -4927,7 +4977,7 @@ static int qm_register_eq_irq(struct hisi_qm *qm)
+ 	u32 irq_vector, val;
+ 	int ret;
+ 
+-	val = hisi_qm_get_hw_info(qm, qm_basic_info, QM_EQ_IRQ_TYPE_CAP, qm->cap_ver);
++	val = qm->cap_tables.qm_cap_table[QM_EQ_IRQ_TYPE_CAP_IDX].cap_val;
+ 	if (!((val >> QM_IRQ_TYPE_SHIFT) & QM_IRQ_TYPE_MASK))
+ 		return 0;
+ 
+@@ -5015,7 +5065,29 @@ static int qm_get_qp_num(struct hisi_qm *qm)
+ 	return 0;
+ }
+ 
+-static void qm_get_hw_caps(struct hisi_qm *qm)
++static int qm_pre_store_irq_type_caps(struct hisi_qm *qm)
++{
++	struct hisi_qm_cap_record *qm_cap;
++	struct pci_dev *pdev = qm->pdev;
++	size_t i, size;
++
++	size = ARRAY_SIZE(qm_pre_store_caps);
++	qm_cap = devm_kzalloc(&pdev->dev, sizeof(*qm_cap) * size, GFP_KERNEL);
++	if (!qm_cap)
++		return -ENOMEM;
++
++	for (i = 0; i < size; i++) {
++		qm_cap[i].type = qm_pre_store_caps[i];
++		qm_cap[i].cap_val = hisi_qm_get_hw_info(qm, qm_basic_info,
++							qm_pre_store_caps[i], qm->cap_ver);
++	}
++
++	qm->cap_tables.qm_cap_table = qm_cap;
++
++	return 0;
++}
++
++static int qm_get_hw_caps(struct hisi_qm *qm)
+ {
+ 	const struct hisi_qm_cap_info *cap_info = qm->fun_type == QM_HW_PF ?
+ 						  qm_cap_info_pf : qm_cap_info_vf;
+@@ -5046,6 +5118,9 @@ static void qm_get_hw_caps(struct hisi_qm *qm)
+ 		if (val)
+ 			set_bit(cap_info[i].type, &qm->caps);
+ 	}
++
++	/* Fetch and save the value of irq type related capability registers */
++	return qm_pre_store_irq_type_caps(qm);
+ }
+ 
+ static int qm_get_pci_res(struct hisi_qm *qm)
+@@ -5067,7 +5142,10 @@ static int qm_get_pci_res(struct hisi_qm *qm)
+ 		goto err_request_mem_regions;
+ 	}
+ 
+-	qm_get_hw_caps(qm);
++	ret = qm_get_hw_caps(qm);
++	if (ret)
++		goto err_ioremap;
++
+ 	if (test_bit(QM_SUPPORT_DB_ISOLATION, &qm->caps)) {
+ 		qm->db_interval = QM_QP_DB_INTERVAL;
+ 		qm->db_phys_base = pci_resource_start(pdev, PCI_BAR_4);
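
hisi_qm_set_algs() centralizes the uacce algorithm-string building that hpre,
sec and zip each used to duplicate: walk a bitmask, concatenate the matching
newline-terminated names, then strip the trailing '\n'. A rough userspace
model of that loop (the mask bits and names below are made up):

#include <stdio.h>
#include <string.h>

struct dev_alg { unsigned long long msk; const char *name; };

static const struct dev_alg algs[] = {
	{ 1ULL << 0, "rsa\n" },
	{ 1ULL << 1, "dh\n" },
	{ 1ULL << 2, "ecdh\n" },
};

/* Concatenate the names selected by mask, then strip the final '\n',
 * mirroring the strcat()/strrchr() logic in hisi_qm_set_algs(). */
static void build_alg_string(unsigned long long mask, char *buf, size_t len)
{
	char *nl;

	buf[0] = '\0';
	for (size_t i = 0; i < sizeof(algs) / sizeof(algs[0]); i++)
		if (mask & algs[i].msk)
			strncat(buf, algs[i].name, len - strlen(buf) - 1);

	nl = strrchr(buf, '\n');
	if (nl)
		*nl = '\0';
}

int main(void)
{
	char buf[64];

	build_alg_string(0x5, buf, sizeof(buf));	/* bits 0 and 2 */
	printf("algs: \"%s\"\n", buf);			/* rsa<NL>ecdh */
	return 0;
}

The helper bounds its buffer at QM_DEV_ALG_MAX_LEN; the sketch leans on
strncat()'s remaining-space argument for the same job.
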
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index 3e57fc04b3770..410c83712e285 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -220,6 +220,13 @@ enum sec_cap_type {
+ 	SEC_CORE4_ALG_BITMAP_HIGH,
+ };
+ 
++enum sec_cap_reg_record_idx {
++	SEC_DRV_ALG_BITMAP_LOW_IDX = 0x0,
++	SEC_DRV_ALG_BITMAP_HIGH_IDX,
++	SEC_DEV_ALG_BITMAP_LOW_IDX,
++	SEC_DEV_ALG_BITMAP_HIGH_IDX,
++};
++
+ void sec_destroy_qps(struct hisi_qp **qps, int qp_num);
+ struct hisi_qp **sec_create_qps(void);
+ int sec_register_to_crypto(struct hisi_qm *qm);
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 6fcabbc87860a..ba7f305d43c10 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -2547,9 +2547,12 @@ err:
+ 
+ int sec_register_to_crypto(struct hisi_qm *qm)
+ {
+-	u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
++	u64 alg_mask;
+ 	int ret = 0;
+ 
++	alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH_IDX,
++				      SEC_DRV_ALG_BITMAP_LOW_IDX);
++
+ 	mutex_lock(&sec_algs_lock);
+ 	if (sec_available_devs) {
+ 		sec_available_devs++;
+@@ -2578,7 +2581,10 @@ unlock:
+ 
+ void sec_unregister_from_crypto(struct hisi_qm *qm)
+ {
+-	u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
++	u64 alg_mask;
++
++	alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH_IDX,
++				      SEC_DRV_ALG_BITMAP_LOW_IDX);
+ 
+ 	mutex_lock(&sec_algs_lock);
+ 	if (--sec_available_devs)
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index 0e56a47eb8626..878d94ab5d6d8 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -120,7 +120,6 @@
+ 					GENMASK_ULL(42, 25))
+ #define SEC_AEAD_BITMAP			(GENMASK_ULL(7, 6) | GENMASK_ULL(18, 17) | \
+ 					GENMASK_ULL(45, 43))
+-#define SEC_DEV_ALG_MAX_LEN		256
+ 
+ struct sec_hw_error {
+ 	u32 int_msk;
+@@ -132,11 +131,6 @@ struct sec_dfx_item {
+ 	u32 offset;
+ };
+ 
+-struct sec_dev_alg {
+-	u64 alg_msk;
+-	const char *algs;
+-};
+-
+ static const char sec_name[] = "hisi_sec2";
+ static struct dentry *sec_debugfs_root;
+ 
+@@ -173,15 +167,22 @@ static const struct hisi_qm_cap_info sec_basic_info[] = {
+ 	{SEC_CORE4_ALG_BITMAP_HIGH, 0x3170, 0, GENMASK(31, 0), 0x3FFF, 0x3FFF, 0x3FFF},
+ };
+ 
+-static const struct sec_dev_alg sec_dev_algs[] = { {
++static const u32 sec_pre_store_caps[] = {
++	SEC_DRV_ALG_BITMAP_LOW,
++	SEC_DRV_ALG_BITMAP_HIGH,
++	SEC_DEV_ALG_BITMAP_LOW,
++	SEC_DEV_ALG_BITMAP_HIGH,
++};
++
++static const struct qm_dev_alg sec_dev_algs[] = { {
+ 		.alg_msk = SEC_CIPHER_BITMAP,
+-		.algs = "cipher\n",
++		.alg = "cipher\n",
+ 	}, {
+ 		.alg_msk = SEC_DIGEST_BITMAP,
+-		.algs = "digest\n",
++		.alg = "digest\n",
+ 	}, {
+ 		.alg_msk = SEC_AEAD_BITMAP,
+-		.algs = "aead\n",
++		.alg = "aead\n",
+ 	},
+ };
+ 
+@@ -394,8 +395,8 @@ u64 sec_get_alg_bitmap(struct hisi_qm *qm, u32 high, u32 low)
+ {
+ 	u32 cap_val_h, cap_val_l;
+ 
+-	cap_val_h = hisi_qm_get_hw_info(qm, sec_basic_info, high, qm->cap_ver);
+-	cap_val_l = hisi_qm_get_hw_info(qm, sec_basic_info, low, qm->cap_ver);
++	cap_val_h = qm->cap_tables.dev_cap_table[high].cap_val;
++	cap_val_l = qm->cap_tables.dev_cap_table[low].cap_val;
+ 
+ 	return ((u64)cap_val_h << SEC_ALG_BITMAP_SHIFT) | (u64)cap_val_l;
+ }
+@@ -1077,37 +1078,31 @@ static int sec_pf_probe_init(struct sec_dev *sec)
+ 	return ret;
+ }
+ 
+-static int sec_set_qm_algs(struct hisi_qm *qm)
++static int sec_pre_store_cap_reg(struct hisi_qm *qm)
+ {
+-	struct device *dev = &qm->pdev->dev;
+-	char *algs, *ptr;
+-	u64 alg_mask;
+-	int i;
+-
+-	if (!qm->use_sva)
+-		return 0;
++	struct hisi_qm_cap_record *sec_cap;
++	struct pci_dev *pdev = qm->pdev;
++	size_t i, size;
+ 
+-	algs = devm_kzalloc(dev, SEC_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL);
+-	if (!algs)
++	size = ARRAY_SIZE(sec_pre_store_caps);
++	sec_cap = devm_kzalloc(&pdev->dev, sizeof(*sec_cap) * size, GFP_KERNEL);
++	if (!sec_cap)
+ 		return -ENOMEM;
+ 
+-	alg_mask = sec_get_alg_bitmap(qm, SEC_DEV_ALG_BITMAP_HIGH, SEC_DEV_ALG_BITMAP_LOW);
+-
+-	for (i = 0; i < ARRAY_SIZE(sec_dev_algs); i++)
+-		if (alg_mask & sec_dev_algs[i].alg_msk)
+-			strcat(algs, sec_dev_algs[i].algs);
+-
+-	ptr = strrchr(algs, '\n');
+-	if (ptr)
+-		*ptr = '\0';
++	for (i = 0; i < size; i++) {
++		sec_cap[i].type = sec_pre_store_caps[i];
++		sec_cap[i].cap_val = hisi_qm_get_hw_info(qm, sec_basic_info,
++				     sec_pre_store_caps[i], qm->cap_ver);
++	}
+ 
+-	qm->uacce->algs = algs;
++	qm->cap_tables.dev_cap_table = sec_cap;
+ 
+ 	return 0;
+ }
+ 
+ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ {
++	u64 alg_msk;
+ 	int ret;
+ 
+ 	qm->pdev = pdev;
+@@ -1142,7 +1137,16 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = sec_set_qm_algs(qm);
++	/* Fetch and save the value of capability registers */
++	ret = sec_pre_store_cap_reg(qm);
++	if (ret) {
++		pci_err(qm->pdev, "Failed to pre-store capability registers!\n");
++		hisi_qm_uninit(qm);
++		return ret;
++	}
++
++	alg_msk = sec_get_alg_bitmap(qm, SEC_DEV_ALG_BITMAP_HIGH_IDX, SEC_DEV_ALG_BITMAP_LOW_IDX);
++	ret = hisi_qm_set_algs(qm, alg_msk, sec_dev_algs, ARRAY_SIZE(sec_dev_algs));
+ 	if (ret) {
+ 		pci_err(qm->pdev, "Failed to set sec algs!\n");
+ 		hisi_qm_uninit(qm);
+diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
+index db4c964cd6495..403b074688417 100644
+--- a/drivers/crypto/hisilicon/zip/zip_main.c
++++ b/drivers/crypto/hisilicon/zip/zip_main.c
+@@ -74,7 +74,6 @@
+ #define HZIP_AXI_SHUTDOWN_ENABLE	BIT(14)
+ #define HZIP_WR_PORT			BIT(11)
+ 
+-#define HZIP_DEV_ALG_MAX_LEN		256
+ #define HZIP_ALG_ZLIB_BIT		GENMASK(1, 0)
+ #define HZIP_ALG_GZIP_BIT		GENMASK(3, 2)
+ #define HZIP_ALG_DEFLATE_BIT		GENMASK(5, 4)
+@@ -107,6 +106,14 @@
+ #define HZIP_CLOCK_GATED_EN		(HZIP_CORE_GATED_EN | \
+ 					 HZIP_CORE_GATED_OOO_EN)
+ 
++/* zip comp high performance */
++#define HZIP_HIGH_PERF_OFFSET		0x301208
++
++enum {
++	HZIP_HIGH_COMP_RATE,
++	HZIP_HIGH_COMP_PERF,
++};
++
+ static const char hisi_zip_name[] = "hisi_zip";
+ static struct dentry *hzip_debugfs_root;
+ 
+@@ -120,23 +127,18 @@ struct zip_dfx_item {
+ 	u32 offset;
+ };
+ 
+-struct zip_dev_alg {
+-	u32 alg_msk;
+-	const char *algs;
+-};
+-
+-static const struct zip_dev_alg zip_dev_algs[] = { {
++static const struct qm_dev_alg zip_dev_algs[] = { {
+ 		.alg_msk = HZIP_ALG_ZLIB_BIT,
+-		.algs = "zlib\n",
++		.alg = "zlib\n",
+ 	}, {
+ 		.alg_msk = HZIP_ALG_GZIP_BIT,
+-		.algs = "gzip\n",
++		.alg = "gzip\n",
+ 	}, {
+ 		.alg_msk = HZIP_ALG_DEFLATE_BIT,
+-		.algs = "deflate\n",
++		.alg = "deflate\n",
+ 	}, {
+ 		.alg_msk = HZIP_ALG_LZ77_BIT,
+-		.algs = "lz77_zstd\n",
++		.alg = "lz77_zstd\n",
+ 	},
+ };
+ 
+@@ -247,6 +249,26 @@ static struct hisi_qm_cap_info zip_basic_cap_info[] = {
+ 	{ZIP_CAP_MAX, 0x317c, 0, GENMASK(0, 0), 0x0, 0x0, 0x0}
+ };
+ 
++enum zip_pre_store_cap_idx {
++	ZIP_CORE_NUM_CAP_IDX = 0x0,
++	ZIP_CLUSTER_COMP_NUM_CAP_IDX,
++	ZIP_CLUSTER_DECOMP_NUM_CAP_IDX,
++	ZIP_DECOMP_ENABLE_BITMAP_IDX,
++	ZIP_COMP_ENABLE_BITMAP_IDX,
++	ZIP_DRV_ALG_BITMAP_IDX,
++	ZIP_DEV_ALG_BITMAP_IDX,
++};
++
++static const u32 zip_pre_store_caps[] = {
++	ZIP_CORE_NUM_CAP,
++	ZIP_CLUSTER_COMP_NUM_CAP,
++	ZIP_CLUSTER_DECOMP_NUM_CAP,
++	ZIP_DECOMP_ENABLE_BITMAP,
++	ZIP_COMP_ENABLE_BITMAP,
++	ZIP_DRV_ALG_BITMAP,
++	ZIP_DEV_ALG_BITMAP,
++};
++
+ enum {
+ 	HZIP_COMP_CORE0,
+ 	HZIP_COMP_CORE1,
+@@ -352,6 +374,37 @@ static int hzip_diff_regs_show(struct seq_file *s, void *unused)
+ 	return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(hzip_diff_regs);
++
++static int perf_mode_set(const char *val, const struct kernel_param *kp)
++{
++	int ret;
++	u32 n;
++
++	if (!val)
++		return -EINVAL;
++
++	ret = kstrtou32(val, 10, &n);
++	if (ret != 0 || (n != HZIP_HIGH_COMP_PERF &&
++			 n != HZIP_HIGH_COMP_RATE))
++		return -EINVAL;
++
++	return param_set_int(val, kp);
++}
++
++static const struct kernel_param_ops zip_com_perf_ops = {
++	.set = perf_mode_set,
++	.get = param_get_int,
++};
++
++/*
++ * perf_mode = 0 means enable high compression rate mode,
++ * perf_mode = 1 means enable high compression performance mode.
++ * These two modes only apply to the compression direction.
++ */
++static u32 perf_mode = HZIP_HIGH_COMP_RATE;
++module_param_cb(perf_mode, &zip_com_perf_ops, &perf_mode, 0444);
++MODULE_PARM_DESC(perf_mode, "ZIP high perf mode 0(default), 1(enable)");
++
+ static const struct kernel_param_ops zip_uacce_mode_ops = {
+ 	.set = uacce_mode_set,
+ 	.get = param_get_int,
+@@ -410,40 +463,33 @@ bool hisi_zip_alg_support(struct hisi_qm *qm, u32 alg)
+ {
+ 	u32 cap_val;
+ 
+-	cap_val = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_DRV_ALG_BITMAP, qm->cap_ver);
++	cap_val = qm->cap_tables.dev_cap_table[ZIP_DRV_ALG_BITMAP_IDX].cap_val;
+ 	if ((alg & cap_val) == alg)
+ 		return true;
+ 
+ 	return false;
+ }
+ 
+-static int hisi_zip_set_qm_algs(struct hisi_qm *qm)
++static int hisi_zip_set_high_perf(struct hisi_qm *qm)
+ {
+-	struct device *dev = &qm->pdev->dev;
+-	char *algs, *ptr;
+-	u32 alg_mask;
+-	int i;
+-
+-	if (!qm->use_sva)
+-		return 0;
+-
+-	algs = devm_kzalloc(dev, HZIP_DEV_ALG_MAX_LEN * sizeof(char), GFP_KERNEL);
+-	if (!algs)
+-		return -ENOMEM;
+-
+-	alg_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_DEV_ALG_BITMAP, qm->cap_ver);
+-
+-	for (i = 0; i < ARRAY_SIZE(zip_dev_algs); i++)
+-		if (alg_mask & zip_dev_algs[i].alg_msk)
+-			strcat(algs, zip_dev_algs[i].algs);
+-
+-	ptr = strrchr(algs, '\n');
+-	if (ptr)
+-		*ptr = '\0';
++	u32 val;
++	int ret;
+ 
+-	qm->uacce->algs = algs;
++	val = readl_relaxed(qm->io_base + HZIP_HIGH_PERF_OFFSET);
++	if (perf_mode == HZIP_HIGH_COMP_PERF)
++		val |= HZIP_HIGH_COMP_PERF;
++	else
++		val &= ~HZIP_HIGH_COMP_PERF;
++
++	/* Set perf mode */
++	writel(val, qm->io_base + HZIP_HIGH_PERF_OFFSET);
++	ret = readl_relaxed_poll_timeout(qm->io_base + HZIP_HIGH_PERF_OFFSET,
++					 val, val == perf_mode, HZIP_DELAY_1_US,
++					 HZIP_POLL_TIMEOUT_US);
++	if (ret)
++		pci_err(qm->pdev, "failed to set perf mode\n");
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void hisi_zip_open_sva_prefetch(struct hisi_qm *qm)
+@@ -542,10 +588,8 @@ static int hisi_zip_set_user_domain_and_cache(struct hisi_qm *qm)
+ 	}
+ 
+ 	/* let's open all compression/decompression cores */
+-	dcomp_bm = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
+-				       ZIP_DECOMP_ENABLE_BITMAP, qm->cap_ver);
+-	comp_bm = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
+-				      ZIP_COMP_ENABLE_BITMAP, qm->cap_ver);
++	dcomp_bm = qm->cap_tables.dev_cap_table[ZIP_DECOMP_ENABLE_BITMAP_IDX].cap_val;
++	comp_bm = qm->cap_tables.dev_cap_table[ZIP_COMP_ENABLE_BITMAP_IDX].cap_val;
+ 	writel(HZIP_DECOMP_CHECK_ENABLE | dcomp_bm | comp_bm, base + HZIP_CLOCK_GATE_CTRL);
+ 
+ 	/* enable sqc,cqc writeback */
+@@ -772,9 +816,8 @@ static int hisi_zip_core_debug_init(struct hisi_qm *qm)
+ 	char buf[HZIP_BUF_SIZE];
+ 	int i;
+ 
+-	zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver);
+-	zip_comp_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CLUSTER_COMP_NUM_CAP,
+-						qm->cap_ver);
++	zip_core_num = qm->cap_tables.dev_cap_table[ZIP_CORE_NUM_CAP_IDX].cap_val;
++	zip_comp_core_num = qm->cap_tables.dev_cap_table[ZIP_CLUSTER_COMP_NUM_CAP_IDX].cap_val;
+ 
+ 	for (i = 0; i < zip_core_num; i++) {
+ 		if (i < zip_comp_core_num)
+@@ -916,7 +959,7 @@ static int hisi_zip_show_last_regs_init(struct hisi_qm *qm)
+ 	u32 zip_core_num;
+ 	int i, j, idx;
+ 
+-	zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver);
++	zip_core_num = qm->cap_tables.dev_cap_table[ZIP_CORE_NUM_CAP_IDX].cap_val;
+ 
+ 	debug->last_words = kcalloc(core_dfx_regs_num * zip_core_num + com_dfx_regs_num,
+ 				    sizeof(unsigned int), GFP_KERNEL);
+@@ -972,9 +1015,9 @@ static void hisi_zip_show_last_dfx_regs(struct hisi_qm *qm)
+ 				 hzip_com_dfx_regs[i].name, debug->last_words[i], val);
+ 	}
+ 
+-	zip_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CORE_NUM_CAP, qm->cap_ver);
+-	zip_comp_core_num = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_CLUSTER_COMP_NUM_CAP,
+-						qm->cap_ver);
++	zip_core_num = qm->cap_tables.dev_cap_table[ZIP_CORE_NUM_CAP_IDX].cap_val;
++	zip_comp_core_num = qm->cap_tables.dev_cap_table[ZIP_CLUSTER_COMP_NUM_CAP_IDX].cap_val;
++
+ 	for (i = 0; i < zip_core_num; i++) {
+ 		if (i < zip_comp_core_num)
+ 			scnprintf(buf, sizeof(buf), "Comp_core-%d", i);
+@@ -1115,6 +1158,10 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = hisi_zip_set_high_perf(qm);
++	if (ret)
++		return ret;
++
+ 	hisi_zip_open_sva_prefetch(qm);
+ 	hisi_qm_dev_err_init(qm);
+ 	hisi_zip_debug_regs_clear(qm);
+@@ -1126,8 +1173,31 @@ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+ 	return ret;
+ }
+ 
++static int zip_pre_store_cap_reg(struct hisi_qm *qm)
++{
++	struct hisi_qm_cap_record *zip_cap;
++	struct pci_dev *pdev = qm->pdev;
++	size_t i, size;
++
++	size = ARRAY_SIZE(zip_pre_store_caps);
++	zip_cap = devm_kzalloc(&pdev->dev, sizeof(*zip_cap) * size, GFP_KERNEL);
++	if (!zip_cap)
++		return -ENOMEM;
++
++	for (i = 0; i < size; i++) {
++		zip_cap[i].type = zip_pre_store_caps[i];
++		zip_cap[i].cap_val = hisi_qm_get_hw_info(qm, zip_basic_cap_info,
++				     zip_pre_store_caps[i], qm->cap_ver);
++	}
++
++	qm->cap_tables.dev_cap_table = zip_cap;
++
++	return 0;
++}
++
+ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ {
++	u64 alg_msk;
+ 	int ret;
+ 
+ 	qm->pdev = pdev;
+@@ -1163,7 +1233,16 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
+ 		return ret;
+ 	}
+ 
+-	ret = hisi_zip_set_qm_algs(qm);
++	/* Fetch and save the value of capability registers */
++	ret = zip_pre_store_cap_reg(qm);
++	if (ret) {
++		pci_err(qm->pdev, "Failed to pre-store capability registers!\n");
++		hisi_qm_uninit(qm);
++		return ret;
++	}
++
++	alg_msk = qm->cap_tables.dev_cap_table[ZIP_DEV_ALG_BITMAP_IDX].cap_val;
++	ret = hisi_qm_set_algs(qm, alg_msk, zip_dev_algs, ARRAY_SIZE(zip_dev_algs));
+ 	if (ret) {
+ 		pci_err(qm->pdev, "Failed to set zip algs!\n");
+ 		hisi_qm_uninit(qm);
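
Besides adopting the shared table helpers, zip gains a perf_mode module
parameter whose kernel_param_ops .set hook parses the string and rejects
anything but the two defined modes before param_set_int() stores it. The same
validate-then-store shape in plain C, with strtoul() standing in for
kstrtou32():

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

enum { HIGH_COMP_RATE, HIGH_COMP_PERF };

/* Userspace analogue of perf_mode_set(): accept only a base-10 value
 * naming one of the two known modes. */
static int perf_mode_parse(const char *val, unsigned int *out)
{
	unsigned long n;
	char *end;

	if (!val || !*val)
		return -EINVAL;

	errno = 0;
	n = strtoul(val, &end, 10);
	if (errno || *end || (n != HIGH_COMP_RATE && n != HIGH_COMP_PERF))
		return -EINVAL;

	*out = (unsigned int)n;
	return 0;
}

int main(void)
{
	unsigned int mode;

	printf("\"1\" -> %d\n", perf_mode_parse("1", &mode)); /* accepted */
	printf("\"2\" -> %d\n", perf_mode_parse("2", &mode)); /* -EINVAL */
	return 0;
}
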
+diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
+index 272c28b5a0883..b83818634ae47 100644
+--- a/drivers/crypto/inside-secure/safexcel_cipher.c
++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
+@@ -742,9 +742,9 @@ static int safexcel_send_req(struct crypto_async_request *base, int ring,
+ 				max(totlen_src, totlen_dst));
+ 			return -EINVAL;
+ 		}
+-		if (sreq->nr_src > 0)
+-			dma_map_sg(priv->dev, src, sreq->nr_src,
+-				   DMA_BIDIRECTIONAL);
++		if (sreq->nr_src > 0 &&
++		    !dma_map_sg(priv->dev, src, sreq->nr_src, DMA_BIDIRECTIONAL))
++			return -EIO;
+ 	} else {
+ 		if (unlikely(totlen_src && (sreq->nr_src <= 0))) {
+ 			dev_err(priv->dev, "Source buffer not large enough (need %d bytes)!",
+@@ -752,8 +752,9 @@ static int safexcel_send_req(struct crypto_async_request *base, int ring,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (sreq->nr_src > 0)
+-			dma_map_sg(priv->dev, src, sreq->nr_src, DMA_TO_DEVICE);
++		if (sreq->nr_src > 0 &&
++		    !dma_map_sg(priv->dev, src, sreq->nr_src, DMA_TO_DEVICE))
++			return -EIO;
+ 
+ 		if (unlikely(totlen_dst && (sreq->nr_dst <= 0))) {
+ 			dev_err(priv->dev, "Dest buffer not large enough (need %d bytes)!",
+@@ -762,9 +763,11 @@ static int safexcel_send_req(struct crypto_async_request *base, int ring,
+ 			goto unmap;
+ 		}
+ 
+-		if (sreq->nr_dst > 0)
+-			dma_map_sg(priv->dev, dst, sreq->nr_dst,
+-				   DMA_FROM_DEVICE);
++		if (sreq->nr_dst > 0 &&
++		    !dma_map_sg(priv->dev, dst, sreq->nr_dst, DMA_FROM_DEVICE)) {
++			ret = -EIO;
++			goto unmap;
++		}
+ 	}
+ 
+ 	memcpy(ctx->base.ctxr->data, ctx->key, ctx->key_len);
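
The safexcel fix stops ignoring dma_map_sg()'s return value: the call returns
the number of entries actually mapped, 0 on failure, so a zero return now
aborts the request, and a failure while mapping the destination unwinds the
already-mapped source. A toy version of that unwind ordering (map_sg() and
unmap_sg() are stand-ins, not the DMA API):

#include <stdio.h>

/* Returns the number of "mapped" entries, 0 on failure. */
static int map_sg(const char *which, int n, int fail)
{
	if (fail)
		return 0;
	printf("mapped %s (%d entries)\n", which, n);
	return n;
}

static void unmap_sg(const char *which)
{
	printf("unmapped %s\n", which);
}

/* Check every mapping; undo only the mappings that succeeded. */
static int send_req(int nr_src, int nr_dst, int fail_dst)
{
	if (nr_src > 0 && !map_sg("src", nr_src, 0))
		return -5;		/* -EIO: nothing to undo yet */

	if (nr_dst > 0 && !map_sg("dst", nr_dst, fail_dst)) {
		unmap_sg("src");	/* the "goto unmap" path */
		return -5;
	}

	unmap_sg("dst");
	unmap_sg("src");
	return 0;
}

int main(void)
{
	printf("ok: %d\n\n", send_req(2, 2, 0));
	printf("dst fails: %d\n", send_req(2, 2, 1));
	return 0;
}
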
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
+index 4ff5729a34969..9d5fdd529a2ee 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
+@@ -92,6 +92,7 @@ enum ras_errors {
+ 
+ struct adf_error_counters {
+ 	atomic_t counter[ADF_RAS_ERRORS];
++	bool sysfs_added;
+ 	bool enabled;
+ };
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.c b/drivers/crypto/intel/qat/qat_common/adf_rl.c
+index 86e3e2152b1b0..de1b214dba1f9 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_rl.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_rl.c
+@@ -812,17 +812,16 @@ static int add_update_sla(struct adf_accel_dev *accel_dev,
+ 	if (!sla_in) {
+ 		dev_warn(&GET_DEV(accel_dev),
+ 			 "SLA input data pointer is missing\n");
+-		ret = -EFAULT;
+-		goto ret_err;
++		return -EFAULT;
+ 	}
+ 
++	mutex_lock(&rl_data->rl_lock);
++
+ 	/* Input validation */
+ 	ret = validate_user_input(accel_dev, sla_in, is_update);
+ 	if (ret)
+ 		goto ret_err;
+ 
+-	mutex_lock(&rl_data->rl_lock);
+-
+ 	if (is_update) {
+ 		ret = validate_sla_id(accel_dev, sla_in->sla_id);
+ 		if (ret)
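
The adf_rl.c hunk is a lock-ordering fix: rl_lock is now taken before
validate_user_input() rather than after it, so the state the validation
inspects cannot change between the check and the SLA update. The general
check-then-act rule, sketched with pthreads:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int slots_used, slots_max = 4;

/* Validate and commit under the same critical section, so another
 * thread cannot consume the slot the validation just approved. */
static int add_sla(void)
{
	int ret = 0;

	pthread_mutex_lock(&lock);
	if (slots_used >= slots_max) {	/* validation */
		ret = -1;
		goto out;
	}
	slots_used++;			/* commit */
out:
	pthread_mutex_unlock(&lock);
	return ret;
}

int main(void)
{
	for (int i = 0; i < 6; i++)
		printf("add_sla() -> %d\n", add_sla());
	return 0;
}
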
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.h b/drivers/crypto/intel/qat/qat_common/adf_rl.h
+index eb5a330f85437..269c6656fb90e 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_rl.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_rl.h
+@@ -79,6 +79,7 @@ struct adf_rl_interface_data {
+ 	struct adf_rl_sla_input_data input;
+ 	enum adf_base_services cap_rem_srv;
+ 	struct rw_semaphore lock;
++	bool sysfs_added;
+ };
+ 
+ struct adf_rl_hw_data {
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_sysfs.c b/drivers/crypto/intel/qat/qat_common/adf_sysfs.c
+index ddffc98119c6b..d450dad32c9e4 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_sysfs.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_sysfs.c
+@@ -215,6 +215,9 @@ static ssize_t rp2srv_show(struct device *dev, struct device_attribute *attr,
+ 	enum adf_cfg_service_type svc;
+ 
+ 	accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
++	if (!accel_dev)
++		return -EINVAL;
++
+ 	hw_data = GET_HW_DATA(accel_dev);
+ 
+ 	if (accel_dev->sysfs.ring_num == UNSET_RING_NUM)
+@@ -242,7 +245,8 @@ static ssize_t rp2srv_store(struct device *dev, struct device_attribute *attr,
+ 			    const char *buf, size_t count)
+ {
+ 	struct adf_accel_dev *accel_dev;
+-	int ring, num_rings, ret;
++	int num_rings, ret;
++	unsigned int ring;
+ 
+ 	accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+ 	if (!accel_dev)
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.c b/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.c
+index cffe2d7229953..e97c67c87b3cf 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.c
+@@ -99,6 +99,8 @@ void adf_sysfs_start_ras(struct adf_accel_dev *accel_dev)
+ 	if (device_add_group(&GET_DEV(accel_dev), &qat_ras_group))
+ 		dev_err(&GET_DEV(accel_dev),
+ 			"Failed to create qat_ras attribute group.\n");
++
++	accel_dev->ras_errors.sysfs_added = true;
+ }
+ 
+ void adf_sysfs_stop_ras(struct adf_accel_dev *accel_dev)
+@@ -106,7 +108,10 @@ void adf_sysfs_stop_ras(struct adf_accel_dev *accel_dev)
+ 	if (!accel_dev->ras_errors.enabled)
+ 		return;
+ 
+-	device_remove_group(&GET_DEV(accel_dev), &qat_ras_group);
++	if (accel_dev->ras_errors.sysfs_added) {
++		device_remove_group(&GET_DEV(accel_dev), &qat_ras_group);
++		accel_dev->ras_errors.sysfs_added = false;
++	}
+ 
+ 	ADF_RAS_ERR_CTR_CLEAR(accel_dev->ras_errors);
+ }
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.c b/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.c
+index abf9c52474eca..bedb514d4e304 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.c
+@@ -441,11 +441,19 @@ int adf_sysfs_rl_add(struct adf_accel_dev *accel_dev)
+ 
+ 	data->cap_rem_srv = ADF_SVC_NONE;
+ 	data->input.srv = ADF_SVC_NONE;
++	data->sysfs_added = true;
+ 
+ 	return ret;
+ }
+ 
+ void adf_sysfs_rl_rm(struct adf_accel_dev *accel_dev)
+ {
++	struct adf_rl_interface_data *data;
++
++	data = &GET_RL_STRUCT(accel_dev);
++	if (!data->sysfs_added)
++		return;
++
+ 	device_remove_group(&GET_DEV(accel_dev), &qat_rl_group);
++	data->sysfs_added = false;
+ }
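
Both qat sysfs hunks add the same guard: a sysfs_added flag recording whether
the attribute group was actually registered, so teardown skips
device_remove_group() after a failed or absent setup instead of making an
unbalanced call. The flag-guarded teardown pattern in miniature:

#include <stdbool.h>
#include <stdio.h>

struct dev_state { bool sysfs_added; };

static int add_group(struct dev_state *s, bool add_fails)
{
	if (add_fails)
		return -1;		/* group never registered */
	printf("group added\n");
	s->sysfs_added = true;
	return 0;
}

/* Teardown undoes only what setup recorded; calling it twice, or
 * after a failed setup, is now harmless. */
static void remove_group(struct dev_state *s)
{
	if (!s->sysfs_added)
		return;
	printf("group removed\n");
	s->sysfs_added = false;
}

int main(void)
{
	struct dev_state s = { 0 };

	add_group(&s, true);	/* setup failed */
	remove_group(&s);	/* no-op, not an unbalanced remove */

	add_group(&s, false);
	remove_group(&s);	/* removes once */
	remove_group(&s);	/* no-op */
	return 0;
}
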
+diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
+index 6846a84295745..78a4930c64809 100644
+--- a/drivers/crypto/sa2ul.c
++++ b/drivers/crypto/sa2ul.c
+@@ -1869,9 +1869,8 @@ static int sa_aead_setkey(struct crypto_aead *authenc,
+ 	crypto_aead_set_flags(ctx->fallback.aead,
+ 			      crypto_aead_get_flags(authenc) &
+ 			      CRYPTO_TFM_REQ_MASK);
+-	crypto_aead_setkey(ctx->fallback.aead, key, keylen);
+ 
+-	return 0;
++	return crypto_aead_setkey(ctx->fallback.aead, key, keylen);
+ }
+ 
+ static int sa_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
+index 02065131c3008..fabe4f381fb60 100644
+--- a/drivers/crypto/sahara.c
++++ b/drivers/crypto/sahara.c
+@@ -43,7 +43,6 @@
+ #define FLAGS_MODE_MASK		0x000f
+ #define FLAGS_ENCRYPT		BIT(0)
+ #define FLAGS_CBC		BIT(1)
+-#define FLAGS_NEW_KEY		BIT(3)
+ 
+ #define SAHARA_HDR_BASE			0x00800000
+ #define SAHARA_HDR_SKHA_ALG_AES	0
+@@ -141,8 +140,6 @@ struct sahara_hw_link {
+ };
+ 
+ struct sahara_ctx {
+-	unsigned long flags;
+-
+ 	/* AES-specific context */
+ 	int keylen;
+ 	u8 key[AES_KEYSIZE_128];
+@@ -151,6 +148,7 @@ struct sahara_ctx {
+ 
+ struct sahara_aes_reqctx {
+ 	unsigned long mode;
++	u8 iv_out[AES_BLOCK_SIZE];
+ 	struct skcipher_request fallback_req;	// keep at the end
+ };
+ 
+@@ -446,27 +444,24 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 	int ret;
+ 	int i, j;
+ 	int idx = 0;
++	u32 len;
+ 
+-	/* Copy new key if necessary */
+-	if (ctx->flags & FLAGS_NEW_KEY) {
+-		memcpy(dev->key_base, ctx->key, ctx->keylen);
+-		ctx->flags &= ~FLAGS_NEW_KEY;
++	memcpy(dev->key_base, ctx->key, ctx->keylen);
+ 
+-		if (dev->flags & FLAGS_CBC) {
+-			dev->hw_desc[idx]->len1 = AES_BLOCK_SIZE;
+-			dev->hw_desc[idx]->p1 = dev->iv_phys_base;
+-		} else {
+-			dev->hw_desc[idx]->len1 = 0;
+-			dev->hw_desc[idx]->p1 = 0;
+-		}
+-		dev->hw_desc[idx]->len2 = ctx->keylen;
+-		dev->hw_desc[idx]->p2 = dev->key_phys_base;
+-		dev->hw_desc[idx]->next = dev->hw_phys_desc[1];
++	if (dev->flags & FLAGS_CBC) {
++		dev->hw_desc[idx]->len1 = AES_BLOCK_SIZE;
++		dev->hw_desc[idx]->p1 = dev->iv_phys_base;
++	} else {
++		dev->hw_desc[idx]->len1 = 0;
++		dev->hw_desc[idx]->p1 = 0;
++	}
++	dev->hw_desc[idx]->len2 = ctx->keylen;
++	dev->hw_desc[idx]->p2 = dev->key_phys_base;
++	dev->hw_desc[idx]->next = dev->hw_phys_desc[1];
++	dev->hw_desc[idx]->hdr = sahara_aes_key_hdr(dev);
+ 
+-		dev->hw_desc[idx]->hdr = sahara_aes_key_hdr(dev);
++	idx++;
+ 
+-		idx++;
+-	}
+ 
+ 	dev->nb_in_sg = sg_nents_for_len(dev->in_sg, dev->total);
+ 	if (dev->nb_in_sg < 0) {
+@@ -488,24 +483,27 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 			 DMA_TO_DEVICE);
+ 	if (!ret) {
+ 		dev_err(dev->device, "couldn't map in sg\n");
+-		goto unmap_in;
++		return -EINVAL;
+ 	}
++
+ 	ret = dma_map_sg(dev->device, dev->out_sg, dev->nb_out_sg,
+ 			 DMA_FROM_DEVICE);
+ 	if (!ret) {
+ 		dev_err(dev->device, "couldn't map out sg\n");
+-		goto unmap_out;
++		goto unmap_in;
+ 	}
+ 
+ 	/* Create input links */
+ 	dev->hw_desc[idx]->p1 = dev->hw_phys_link[0];
+ 	sg = dev->in_sg;
++	len = dev->total;
+ 	for (i = 0; i < dev->nb_in_sg; i++) {
+-		dev->hw_link[i]->len = sg->length;
++		dev->hw_link[i]->len = min(len, sg->length);
+ 		dev->hw_link[i]->p = sg->dma_address;
+ 		if (i == (dev->nb_in_sg - 1)) {
+ 			dev->hw_link[i]->next = 0;
+ 		} else {
++			len -= min(len, sg->length);
+ 			dev->hw_link[i]->next = dev->hw_phys_link[i + 1];
+ 			sg = sg_next(sg);
+ 		}
+@@ -514,12 +512,14 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 	/* Create output links */
+ 	dev->hw_desc[idx]->p2 = dev->hw_phys_link[i];
+ 	sg = dev->out_sg;
++	len = dev->total;
+ 	for (j = i; j < dev->nb_out_sg + i; j++) {
+-		dev->hw_link[j]->len = sg->length;
++		dev->hw_link[j]->len = min(len, sg->length);
+ 		dev->hw_link[j]->p = sg->dma_address;
+ 		if (j == (dev->nb_out_sg + i - 1)) {
+ 			dev->hw_link[j]->next = 0;
+ 		} else {
++			len -= min(len, sg->length);
+ 			dev->hw_link[j]->next = dev->hw_phys_link[j + 1];
+ 			sg = sg_next(sg);
+ 		}
+@@ -538,9 +538,6 @@ static int sahara_hw_descriptor_create(struct sahara_dev *dev)
+ 
+ 	return 0;
+ 
+-unmap_out:
+-	dma_unmap_sg(dev->device, dev->out_sg, dev->nb_out_sg,
+-		DMA_FROM_DEVICE);
+ unmap_in:
+ 	dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
+ 		DMA_TO_DEVICE);
+@@ -548,8 +545,24 @@ unmap_in:
+ 	return -EINVAL;
+ }
+ 
++static void sahara_aes_cbc_update_iv(struct skcipher_request *req)
++{
++	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
++	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
++	unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
++
++	/* Update IV buffer to contain the last ciphertext block */
++	if (rctx->mode & FLAGS_ENCRYPT) {
++		sg_pcopy_to_buffer(req->dst, sg_nents(req->dst), req->iv,
++				   ivsize, req->cryptlen - ivsize);
++	} else {
++		memcpy(req->iv, rctx->iv_out, ivsize);
++	}
++}
++
+ static int sahara_aes_process(struct skcipher_request *req)
+ {
++	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ 	struct sahara_dev *dev = dev_ptr;
+ 	struct sahara_ctx *ctx;
+ 	struct sahara_aes_reqctx *rctx;
+@@ -571,8 +584,17 @@ static int sahara_aes_process(struct skcipher_request *req)
+ 	rctx->mode &= FLAGS_MODE_MASK;
+ 	dev->flags = (dev->flags & ~FLAGS_MODE_MASK) | rctx->mode;
+ 
+-	if ((dev->flags & FLAGS_CBC) && req->iv)
+-		memcpy(dev->iv_base, req->iv, AES_KEYSIZE_128);
++	if ((dev->flags & FLAGS_CBC) && req->iv) {
++		unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
++
++		memcpy(dev->iv_base, req->iv, ivsize);
++
++		if (!(dev->flags & FLAGS_ENCRYPT)) {
++			sg_pcopy_to_buffer(req->src, sg_nents(req->src),
++					   rctx->iv_out, ivsize,
++					   req->cryptlen - ivsize);
++		}
++	}
+ 
+ 	/* assign new context to device */
+ 	dev->ctx = ctx;
+@@ -585,16 +607,20 @@ static int sahara_aes_process(struct skcipher_request *req)
+ 
+ 	timeout = wait_for_completion_timeout(&dev->dma_completion,
+ 				msecs_to_jiffies(SAHARA_TIMEOUT_MS));
+-	if (!timeout) {
+-		dev_err(dev->device, "AES timeout\n");
+-		return -ETIMEDOUT;
+-	}
+ 
+ 	dma_unmap_sg(dev->device, dev->out_sg, dev->nb_out_sg,
+ 		DMA_FROM_DEVICE);
+ 	dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
+ 		DMA_TO_DEVICE);
+ 
++	if (!timeout) {
++		dev_err(dev->device, "AES timeout\n");
++		return -ETIMEDOUT;
++	}
++
++	if ((dev->flags & FLAGS_CBC) && req->iv)
++		sahara_aes_cbc_update_iv(req);
++
+ 	return 0;
+ }
+ 
+@@ -608,7 +634,6 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ 	/* SAHARA only supports 128bit keys */
+ 	if (keylen == AES_KEYSIZE_128) {
+ 		memcpy(ctx->key, key, keylen);
+-		ctx->flags |= FLAGS_NEW_KEY;
+ 		return 0;
+ 	}
+ 
+@@ -624,12 +649,40 @@ static int sahara_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ 	return crypto_skcipher_setkey(ctx->fallback, key, keylen);
+ }
+ 
++static int sahara_aes_fallback(struct skcipher_request *req, unsigned long mode)
++{
++	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
++	struct sahara_ctx *ctx = crypto_skcipher_ctx(
++		crypto_skcipher_reqtfm(req));
++
++	skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
++	skcipher_request_set_callback(&rctx->fallback_req,
++				      req->base.flags,
++				      req->base.complete,
++				      req->base.data);
++	skcipher_request_set_crypt(&rctx->fallback_req, req->src,
++				   req->dst, req->cryptlen, req->iv);
++
++	if (mode & FLAGS_ENCRYPT)
++		return crypto_skcipher_encrypt(&rctx->fallback_req);
++
++	return crypto_skcipher_decrypt(&rctx->fallback_req);
++}
++
+ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
+ {
+ 	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
++	struct sahara_ctx *ctx = crypto_skcipher_ctx(
++		crypto_skcipher_reqtfm(req));
+ 	struct sahara_dev *dev = dev_ptr;
+ 	int err = 0;
+ 
++	if (!req->cryptlen)
++		return 0;
++
++	if (unlikely(ctx->keylen != AES_KEYSIZE_128))
++		return sahara_aes_fallback(req, mode);
++
+ 	dev_dbg(dev->device, "nbytes: %d, enc: %d, cbc: %d\n",
+ 		req->cryptlen, !!(mode & FLAGS_ENCRYPT), !!(mode & FLAGS_CBC));
+ 
+@@ -652,81 +705,21 @@ static int sahara_aes_crypt(struct skcipher_request *req, unsigned long mode)
+ 
+ static int sahara_aes_ecb_encrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_encrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, FLAGS_ENCRYPT);
+ }
+ 
+ static int sahara_aes_ecb_decrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_decrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, 0);
+ }
+ 
+ static int sahara_aes_cbc_encrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_encrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, FLAGS_ENCRYPT | FLAGS_CBC);
+ }
+ 
+ static int sahara_aes_cbc_decrypt(struct skcipher_request *req)
+ {
+-	struct sahara_aes_reqctx *rctx = skcipher_request_ctx(req);
+-	struct sahara_ctx *ctx = crypto_skcipher_ctx(
+-		crypto_skcipher_reqtfm(req));
+-
+-	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
+-		skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+-		skcipher_request_set_callback(&rctx->fallback_req,
+-					      req->base.flags,
+-					      req->base.complete,
+-					      req->base.data);
+-		skcipher_request_set_crypt(&rctx->fallback_req, req->src,
+-					   req->dst, req->cryptlen, req->iv);
+-		return crypto_skcipher_decrypt(&rctx->fallback_req);
+-	}
+-
+ 	return sahara_aes_crypt(req, FLAGS_CBC);
+ }
+ 
+@@ -783,6 +776,7 @@ static int sahara_sha_hw_links_create(struct sahara_dev *dev,
+ 				       int start)
+ {
+ 	struct scatterlist *sg;
++	unsigned int len;
+ 	unsigned int i;
+ 	int ret;
+ 
+@@ -804,12 +798,14 @@ static int sahara_sha_hw_links_create(struct sahara_dev *dev,
+ 	if (!ret)
+ 		return -EFAULT;
+ 
++	len = rctx->total;
+ 	for (i = start; i < dev->nb_in_sg + start; i++) {
+-		dev->hw_link[i]->len = sg->length;
++		dev->hw_link[i]->len = min(len, sg->length);
+ 		dev->hw_link[i]->p = sg->dma_address;
+ 		if (i == (dev->nb_in_sg + start - 1)) {
+ 			dev->hw_link[i]->next = 0;
+ 		} else {
++			len -= min(len, sg->length);
+ 			dev->hw_link[i]->next = dev->hw_phys_link[i + 1];
+ 			sg = sg_next(sg);
+ 		}
+@@ -890,24 +886,6 @@ static int sahara_sha_hw_context_descriptor_create(struct sahara_dev *dev,
+ 	return 0;
+ }
+ 
+-static int sahara_walk_and_recalc(struct scatterlist *sg, unsigned int nbytes)
+-{
+-	if (!sg || !sg->length)
+-		return nbytes;
+-
+-	while (nbytes && sg) {
+-		if (nbytes <= sg->length) {
+-			sg->length = nbytes;
+-			sg_mark_end(sg);
+-			break;
+-		}
+-		nbytes -= sg->length;
+-		sg = sg_next(sg);
+-	}
+-
+-	return nbytes;
+-}
+-
+ static int sahara_sha_prepare_request(struct ahash_request *req)
+ {
+ 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+@@ -944,36 +922,20 @@ static int sahara_sha_prepare_request(struct ahash_request *req)
+ 					hash_later, 0);
+ 	}
+ 
+-	/* nbytes should now be multiple of blocksize */
+-	req->nbytes = req->nbytes - hash_later;
+-
+-	sahara_walk_and_recalc(req->src, req->nbytes);
+-
++	rctx->total = len - hash_later;
+ 	/* have data from previous operation and current */
+ 	if (rctx->buf_cnt && req->nbytes) {
+ 		sg_init_table(rctx->in_sg_chain, 2);
+ 		sg_set_buf(rctx->in_sg_chain, rctx->rembuf, rctx->buf_cnt);
+-
+ 		sg_chain(rctx->in_sg_chain, 2, req->src);
+-
+-		rctx->total = req->nbytes + rctx->buf_cnt;
+ 		rctx->in_sg = rctx->in_sg_chain;
+-
+-		req->src = rctx->in_sg_chain;
+ 	/* only data from previous operation */
+ 	} else if (rctx->buf_cnt) {
+-		if (req->src)
+-			rctx->in_sg = req->src;
+-		else
+-			rctx->in_sg = rctx->in_sg_chain;
+-		/* buf was copied into rembuf above */
++		rctx->in_sg = rctx->in_sg_chain;
+ 		sg_init_one(rctx->in_sg, rctx->rembuf, rctx->buf_cnt);
+-		rctx->total = rctx->buf_cnt;
+ 	/* no data from previous operation */
+ 	} else {
+ 		rctx->in_sg = req->src;
+-		rctx->total = req->nbytes;
+-		req->src = rctx->in_sg;
+ 	}
+ 
+ 	/* on next call, we only have the remaining data in the buffer */
+@@ -994,7 +956,10 @@ static int sahara_sha_process(struct ahash_request *req)
+ 		return ret;
+ 
+ 	if (rctx->first) {
+-		sahara_sha_hw_data_descriptor_create(dev, rctx, req, 0);
++		ret = sahara_sha_hw_data_descriptor_create(dev, rctx, req, 0);
++		if (ret)
++			return ret;
++
+ 		dev->hw_desc[0]->next = 0;
+ 		rctx->first = 0;
+ 	} else {
+@@ -1002,7 +967,10 @@ static int sahara_sha_process(struct ahash_request *req)
+ 
+ 		sahara_sha_hw_context_descriptor_create(dev, rctx, req, 0);
+ 		dev->hw_desc[0]->next = dev->hw_phys_desc[1];
+-		sahara_sha_hw_data_descriptor_create(dev, rctx, req, 1);
++		ret = sahara_sha_hw_data_descriptor_create(dev, rctx, req, 1);
++		if (ret)
++			return ret;
++
+ 		dev->hw_desc[1]->next = 0;
+ 	}
+ 
+@@ -1015,18 +983,19 @@ static int sahara_sha_process(struct ahash_request *req)
+ 
+ 	timeout = wait_for_completion_timeout(&dev->dma_completion,
+ 				msecs_to_jiffies(SAHARA_TIMEOUT_MS));
+-	if (!timeout) {
+-		dev_err(dev->device, "SHA timeout\n");
+-		return -ETIMEDOUT;
+-	}
+ 
+ 	if (rctx->sg_in_idx)
+ 		dma_unmap_sg(dev->device, dev->in_sg, dev->nb_in_sg,
+ 			     DMA_TO_DEVICE);
+ 
++	if (!timeout) {
++		dev_err(dev->device, "SHA timeout\n");
++		return -ETIMEDOUT;
++	}
++
+ 	memcpy(rctx->context, dev->context_base, rctx->context_size);
+ 
+-	if (req->result)
++	if (req->result && rctx->last)
+ 		memcpy(req->result, rctx->context, rctx->digest_size);
+ 
+ 	return 0;
+@@ -1170,8 +1139,7 @@ static int sahara_sha_import(struct ahash_request *req, const void *in)
+ static int sahara_sha_cra_init(struct crypto_tfm *tfm)
+ {
+ 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+-				 sizeof(struct sahara_sha_reqctx) +
+-				 SHA_BUFFER_LEN + SHA256_BLOCK_SIZE);
++				 sizeof(struct sahara_sha_reqctx));
+ 
+ 	return 0;
+ }
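
The sahara series fixes several long-standing bugs at once: the CBC IV for
decryption is saved before the engine overwrites the source, DMA buffers are
unmapped before the timeout early-return instead of being leaked, and the
hw_link descriptors clamp each entry with min(len, sg->length) because a
scatterlist entry can be longer than the remaining payload. The clamping in
isolation:

#include <stdio.h>

struct sg { unsigned int length; };

/* Build per-entry links for "total" bytes, never advertising more
 * than the data that is actually left. */
static void build_links(const struct sg *sg, int nents, unsigned int total)
{
	unsigned int len = total;

	for (int i = 0; i < nents && len; i++) {
		unsigned int chunk = len < sg[i].length ? len : sg[i].length;

		printf("link %d: %u bytes\n", i, chunk);
		len -= chunk;	/* count only what this entry carried */
	}
}

int main(void)
{
	struct sg sg[] = { { 4096 }, { 4096 } };

	/* Request 6000 bytes: the second link must be clamped to 1904,
	 * not declared as the full 4096 of its backing entry. */
	build_links(sg, 2, 6000);
	return 0;
}
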
+diff --git a/drivers/crypto/starfive/jh7110-cryp.c b/drivers/crypto/starfive/jh7110-cryp.c
+index 08e974e0dd124..3a67ddc4d9367 100644
+--- a/drivers/crypto/starfive/jh7110-cryp.c
++++ b/drivers/crypto/starfive/jh7110-cryp.c
+@@ -180,12 +180,8 @@ static int starfive_cryp_probe(struct platform_device *pdev)
+ 	spin_unlock(&dev_list.lock);
+ 
+ 	ret = starfive_dma_init(cryp);
+-	if (ret) {
+-		if (ret == -EPROBE_DEFER)
+-			goto err_probe_defer;
+-		else
+-			goto err_dma_init;
+-	}
++	if (ret)
++		goto err_dma_init;
+ 
+ 	/* Initialize crypto engine */
+ 	cryp->engine = crypto_engine_alloc_init(&pdev->dev, 1);
+@@ -233,7 +229,7 @@ err_dma_init:
+ 
+ 	tasklet_kill(&cryp->aes_done);
+ 	tasklet_kill(&cryp->hash_done);
+-err_probe_defer:
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
+index 154590e1f7643..7059bbe5a2eba 100644
+--- a/drivers/crypto/virtio/virtio_crypto_common.h
++++ b/drivers/crypto/virtio/virtio_crypto_common.h
+@@ -10,6 +10,7 @@
+ #include <linux/virtio.h>
+ #include <linux/crypto.h>
+ #include <linux/spinlock.h>
++#include <linux/interrupt.h>
+ #include <crypto/aead.h>
+ #include <crypto/aes.h>
+ #include <crypto/engine.h>
+@@ -28,6 +29,7 @@ struct data_queue {
+ 	char name[32];
+ 
+ 	struct crypto_engine *engine;
++	struct tasklet_struct done_task;
+ };
+ 
+ struct virtio_crypto {
+diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
+index 43a0838d31ff0..b909c6a2bf1c3 100644
+--- a/drivers/crypto/virtio/virtio_crypto_core.c
++++ b/drivers/crypto/virtio/virtio_crypto_core.c
+@@ -72,27 +72,28 @@ int virtio_crypto_ctrl_vq_request(struct virtio_crypto *vcrypto, struct scatterl
+ 	return 0;
+ }
+ 
+-static void virtcrypto_dataq_callback(struct virtqueue *vq)
++static void virtcrypto_done_task(unsigned long data)
+ {
+-	struct virtio_crypto *vcrypto = vq->vdev->priv;
++	struct data_queue *data_vq = (struct data_queue *)data;
++	struct virtqueue *vq = data_vq->vq;
+ 	struct virtio_crypto_request *vc_req;
+-	unsigned long flags;
+ 	unsigned int len;
+-	unsigned int qid = vq->index;
+ 
+-	spin_lock_irqsave(&vcrypto->data_vq[qid].lock, flags);
+ 	do {
+ 		virtqueue_disable_cb(vq);
+ 		while ((vc_req = virtqueue_get_buf(vq, &len)) != NULL) {
+-			spin_unlock_irqrestore(
+-				&vcrypto->data_vq[qid].lock, flags);
+ 			if (vc_req->alg_cb)
+ 				vc_req->alg_cb(vc_req, len);
+-			spin_lock_irqsave(
+-				&vcrypto->data_vq[qid].lock, flags);
+ 		}
+ 	} while (!virtqueue_enable_cb(vq));
+-	spin_unlock_irqrestore(&vcrypto->data_vq[qid].lock, flags);
++}
++
++static void virtcrypto_dataq_callback(struct virtqueue *vq)
++{
++	struct virtio_crypto *vcrypto = vq->vdev->priv;
++	struct data_queue *dq = &vcrypto->data_vq[vq->index];
++
++	tasklet_schedule(&dq->done_task);
+ }
+ 
+ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
+@@ -150,6 +151,8 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
+ 			ret = -ENOMEM;
+ 			goto err_engine;
+ 		}
++		tasklet_init(&vi->data_vq[i].done_task, virtcrypto_done_task,
++				(unsigned long)&vi->data_vq[i]);
+ 	}
+ 
+ 	kfree(names);
+@@ -497,12 +500,15 @@ static void virtcrypto_free_unused_reqs(struct virtio_crypto *vcrypto)
+ static void virtcrypto_remove(struct virtio_device *vdev)
+ {
+ 	struct virtio_crypto *vcrypto = vdev->priv;
++	int i;
+ 
+ 	dev_info(&vdev->dev, "Start virtcrypto_remove.\n");
+ 
+ 	flush_work(&vcrypto->config_work);
+ 	if (virtcrypto_dev_started(vcrypto))
+ 		virtcrypto_dev_stop(vcrypto);
++	for (i = 0; i < vcrypto->max_data_queues; i++)
++		tasklet_kill(&vcrypto->data_vq[i].done_task);
+ 	virtio_reset_device(vdev);
+ 	virtcrypto_free_unused_reqs(vcrypto);
+ 	virtcrypto_clear_crypto_engines(vcrypto);
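
virtio_crypto moves completion processing out of the virtqueue interrupt
callback into a per-queue tasklet: the callback now only schedules, and the
tasklet drains the queue without juggling the spinlock across every
per-request callback. Schematically:

#include <stdio.h>

/* Toy completion queue standing in for the virtqueue. */
static int pending[] = { 11, 22, 33, 0 };
static int scheduled;

/* Interrupt context: no request processing here, just defer. */
static void dataq_callback(void)
{
	scheduled = 1;		/* tasklet_schedule(&dq->done_task) */
}

/* Deferred context: drain every completion in one pass. */
static void done_task(void)
{
	for (int i = 0; pending[i]; i++)
		printf("completed request %d\n", pending[i]);
}

int main(void)
{
	dataq_callback();	/* the "interrupt" fires */
	if (scheduled)		/* the scheduler runs the tasklet later */
		done_task();
	return 0;
}

The matching tasklet_kill() calls in virtcrypto_remove() make sure no deferred
work outlives the device.
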
+diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
+index b7c93bb18f6e7..6b386771cc259 100644
+--- a/drivers/cxl/core/port.c
++++ b/drivers/cxl/core/port.c
+@@ -172,14 +172,10 @@ static ssize_t target_list_show(struct device *dev,
+ {
+ 	struct cxl_switch_decoder *cxlsd = to_cxl_switch_decoder(dev);
+ 	ssize_t offset;
+-	unsigned int seq;
+ 	int rc;
+ 
+-	do {
+-		seq = read_seqbegin(&cxlsd->target_lock);
+-		rc = emit_target_list(cxlsd, buf);
+-	} while (read_seqretry(&cxlsd->target_lock, seq));
+-
++	guard(rwsem_read)(&cxl_region_rwsem);
++	rc = emit_target_list(cxlsd, buf);
+ 	if (rc < 0)
+ 		return rc;
+ 	offset = rc;
+@@ -1633,7 +1629,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_find_port, CXL);
+ static int decoder_populate_targets(struct cxl_switch_decoder *cxlsd,
+ 				    struct cxl_port *port, int *target_map)
+ {
+-	int i, rc = 0;
++	int i;
+ 
+ 	if (!target_map)
+ 		return 0;
+@@ -1643,19 +1639,16 @@ static int decoder_populate_targets(struct cxl_switch_decoder *cxlsd,
+ 	if (xa_empty(&port->dports))
+ 		return -EINVAL;
+ 
+-	write_seqlock(&cxlsd->target_lock);
+-	for (i = 0; i < cxlsd->nr_targets; i++) {
++	guard(rwsem_write)(&cxl_region_rwsem);
++	for (i = 0; i < cxlsd->cxld.interleave_ways; i++) {
+ 		struct cxl_dport *dport = find_dport(port, target_map[i]);
+ 
+-		if (!dport) {
+-			rc = -ENXIO;
+-			break;
+-		}
++		if (!dport)
++			return -ENXIO;
+ 		cxlsd->target[i] = dport;
+ 	}
+-	write_sequnlock(&cxlsd->target_lock);
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ struct cxl_dport *cxl_hb_modulo(struct cxl_root_decoder *cxlrd, int pos)
+@@ -1725,7 +1718,6 @@ static int cxl_switch_decoder_init(struct cxl_port *port,
+ 		return -EINVAL;
+ 
+ 	cxlsd->nr_targets = nr_targets;
+-	seqlock_init(&cxlsd->target_lock);
+ 	return cxl_decoder_init(port, &cxlsd->cxld);
+ }
+ 
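
The cxl port change retires the decoder's private seqlock in favor of the
existing cxl_region_rwsem, with the scope-based guard() helpers dropping the
lock automatically on any return path. pthreads has no scope guard, so this
sketch unlocks explicitly, but the reader/writer split is the same:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t region_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static int target[4];
static int nr_targets;

/* Reader: one consistent snapshot under the read lock, replacing
 * the retry loop a seqlock reader needs. */
static void show_targets(void)
{
	pthread_rwlock_rdlock(&region_rwsem);
	for (int i = 0; i < nr_targets; i++)
		printf("target[%d] = %d\n", i, target[i]);
	pthread_rwlock_unlock(&region_rwsem);
}

/* Writer: may bail out mid-update; guard() would make the early
 * return below drop the lock for free. */
static int populate_targets(const int *map, int n)
{
	pthread_rwlock_wrlock(&region_rwsem);
	for (int i = 0; i < n; i++) {
		if (map[i] < 0) {
			pthread_rwlock_unlock(&region_rwsem);
			return -1;	/* -ENXIO in the driver */
		}
		target[i] = map[i];
	}
	nr_targets = n;
	pthread_rwlock_unlock(&region_rwsem);
	return 0;
}

int main(void)
{
	int map[] = { 7, 8, 9 };

	if (!populate_targets(map, 3))
		show_targets();
	return 0;
}
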
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index 3e817a6f94c6a..76fec47a13a4c 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -397,7 +397,7 @@ static ssize_t interleave_ways_store(struct device *dev,
+ 		return rc;
+ 
+ 	/*
+-	 * Even for x3, x9, and x12 interleaves the region interleave must be a
++	 * Even for x3, x6, and x12 interleaves the region interleave must be a
+ 	 * power of 2 multiple of the host bridge interleave.
+ 	 */
+ 	if (!is_power_of_2(val / cxld->interleave_ways) ||
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index 687043ece1018..62fa96f8567e5 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -412,7 +412,6 @@ struct cxl_endpoint_decoder {
+ /**
+  * struct cxl_switch_decoder - Switch specific CXL HDM Decoder
+  * @cxld: base cxl_decoder object
+- * @target_lock: coordinate coherent reads of the target list
+  * @nr_targets: number of elements in @target
+  * @target: active ordered target list in current decoder configuration
+  *
+@@ -424,7 +423,6 @@ struct cxl_endpoint_decoder {
+  */
+ struct cxl_switch_decoder {
+ 	struct cxl_decoder cxld;
+-	seqlock_t target_lock;
+ 	int nr_targets;
+ 	struct cxl_dport *target[];
+ };
+diff --git a/drivers/edac/thunderx_edac.c b/drivers/edac/thunderx_edac.c
+index b9c5772da959c..90d46e5c4ff06 100644
+--- a/drivers/edac/thunderx_edac.c
++++ b/drivers/edac/thunderx_edac.c
+@@ -1133,7 +1133,7 @@ static irqreturn_t thunderx_ocx_com_threaded_isr(int irq, void *irq_id)
+ 		decode_register(other, OCX_OTHER_SIZE,
+ 				ocx_com_errors, ctx->reg_com_int);
+ 
+-		strncat(msg, other, OCX_MESSAGE_SIZE);
++		strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 
+ 		for (lane = 0; lane < OCX_RX_LANES; lane++)
+ 			if (ctx->reg_com_int & BIT(lane)) {
+@@ -1142,12 +1142,12 @@ static irqreturn_t thunderx_ocx_com_threaded_isr(int irq, void *irq_id)
+ 					 lane, ctx->reg_lane_int[lane],
+ 					 lane, ctx->reg_lane_stat11[lane]);
+ 
+-				strncat(msg, other, OCX_MESSAGE_SIZE);
++				strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 
+ 				decode_register(other, OCX_OTHER_SIZE,
+ 						ocx_lane_errors,
+ 						ctx->reg_lane_int[lane]);
+-				strncat(msg, other, OCX_MESSAGE_SIZE);
++				strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 			}
+ 
+ 		if (ctx->reg_com_int & OCX_COM_INT_CE)
+@@ -1217,7 +1217,7 @@ static irqreturn_t thunderx_ocx_lnk_threaded_isr(int irq, void *irq_id)
+ 		decode_register(other, OCX_OTHER_SIZE,
+ 				ocx_com_link_errors, ctx->reg_com_link_int);
+ 
+-		strncat(msg, other, OCX_MESSAGE_SIZE);
++		strlcat(msg, other, OCX_MESSAGE_SIZE);
+ 
+ 		if (ctx->reg_com_link_int & OCX_COM_LINK_INT_UE)
+ 			edac_device_handle_ue(ocx->edac_dev, 0, 0, msg);
+@@ -1896,7 +1896,7 @@ static irqreturn_t thunderx_l2c_threaded_isr(int irq, void *irq_id)
+ 
+ 		decode_register(other, L2C_OTHER_SIZE, l2_errors, ctx->reg_int);
+ 
+-		strncat(msg, other, L2C_MESSAGE_SIZE);
++		strlcat(msg, other, L2C_MESSAGE_SIZE);
+ 
+ 		if (ctx->reg_int & mask_ue)
+ 			edac_device_handle_ue(l2c->edac_dev, 0, 0, msg);
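
The thunderx_edac conversion matters because strncat()'s size argument is the
number of bytes to append, while these call sites passed the total buffer
size, so a long enough message could run past msg. strlcat() takes the full
destination size and truncates. It is not in glibc, so the demo below carries
a minimal equivalent:

#include <stdio.h>
#include <string.h>

/* Minimal strlcat(): "size" is the TOTAL destination size, which is
 * exactly what the fixed call sites pass. */
static size_t my_strlcat(char *dst, const char *src, size_t size)
{
	size_t dlen = strlen(dst);

	if (dlen >= size)
		return size + strlen(src);
	snprintf(dst + dlen, size - dlen, "%s", src);
	return dlen + strlen(src);
}

int main(void)
{
	char msg[16] = "ERR:";

	my_strlcat(msg, " lane0 timeout plus more detail", sizeof(msg));
	/* Truncated, NUL-terminated, never past 16 bytes. */
	printf("\"%s\"\n", msg);
	return 0;
}

With strncat(msg, other, OCX_MESSAGE_SIZE) the append limit ignored how much
of msg was already used, which is the overflow the patch closes.
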
+diff --git a/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c b/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
+index a33acdaf7b781..32188f098ef34 100644
+--- a/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
++++ b/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
+@@ -325,8 +325,10 @@ static efi_status_t qsee_uefi_get_variable(struct qcuefi_client *qcuefi, const e
+ 	req_data->length = req_size;
+ 
+ 	status = ucs2_strscpy(((void *)req_data) + req_data->name_offset, name, name_length);
+-	if (status < 0)
+-		return EFI_INVALID_PARAMETER;
++	if (status < 0) {
++		efi_status = EFI_INVALID_PARAMETER;
++		goto out_free;
++	}
+ 
+ 	memcpy(((void *)req_data) + req_data->guid_offset, guid, req_data->guid_size);
+ 
+@@ -471,8 +473,10 @@ static efi_status_t qsee_uefi_set_variable(struct qcuefi_client *qcuefi, const e
+ 	req_data->length = req_size;
+ 
+ 	status = ucs2_strscpy(((void *)req_data) + req_data->name_offset, name, name_length);
+-	if (status < 0)
+-		return EFI_INVALID_PARAMETER;
++	if (status < 0) {
++		efi_status = EFI_INVALID_PARAMETER;
++		goto out_free;
++	}
+ 
+ 	memcpy(((void *)req_data) + req_data->guid_offset, guid, req_data->guid_size);
+ 
+@@ -563,8 +567,10 @@ static efi_status_t qsee_uefi_get_next_variable(struct qcuefi_client *qcuefi,
+ 	memcpy(((void *)req_data) + req_data->guid_offset, guid, req_data->guid_size);
+ 	status = ucs2_strscpy(((void *)req_data) + req_data->name_offset, name,
+ 			      *name_size / sizeof(*name));
+-	if (status < 0)
+-		return EFI_INVALID_PARAMETER;
++	if (status < 0) {
++		efi_status = EFI_INVALID_PARAMETER;
++		goto out_free;
++	}
+ 
+ 	status = qcom_qseecom_app_send(qcuefi->client, req_data, req_size, rsp_data, rsp_size);
+ 	if (status) {
+@@ -635,7 +641,7 @@ static efi_status_t qsee_uefi_get_next_variable(struct qcuefi_client *qcuefi,
+ 		 * have already been validated above, causing this function to
+ 		 * bail with EFI_BUFFER_TOO_SMALL.
+ 		 */
+-		return EFI_DEVICE_ERROR;
++		efi_status = EFI_DEVICE_ERROR;
+ 	}
+ 
+ out_free:
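
The three hunks above share one bug: on a ucs2_strscpy() failure the functions returned directly, skipping the out_free: label that releases the request buffer. Converting the early returns to goto out_free restores the single exit path. A reduced sketch of the idiom, with illustrative names only:

#include <stdlib.h>
#include <string.h>

/* Illustrative only: the "goto unwind" idiom the fix applies. */
static int do_request(size_t req_size)
{
        char *req_data = malloc(req_size);
        int ret = 0;

        if (!req_data)
                return -1;

        if (req_size < 8) {     /* a validation failure mid-function */
                ret = -2;
                goto out_free;  /* NOT "return -2;", which leaks req_data */
        }

        memset(req_data, 0, req_size);  /* ... the real work ... */

out_free:
        free(req_data);
        return ret;
}

int main(void)
{
        return do_request(4) == -2 ? 0 : 1;
}
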
+diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
+index 7041befc756a9..8b9a2556de16d 100644
+--- a/drivers/firmware/ti_sci.c
++++ b/drivers/firmware/ti_sci.c
+@@ -164,7 +164,7 @@ static int ti_sci_debugfs_create(struct platform_device *pdev,
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct resource *res;
+-	char debug_name[50] = "ti_sci_debug@";
++	char debug_name[50];
+ 
+ 	/* Debug region is optional */
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+@@ -181,10 +181,10 @@ static int ti_sci_debugfs_create(struct platform_device *pdev,
+ 	/* Setup NULL termination */
+ 	info->debug_buffer[info->debug_region_size] = 0;
+ 
+-	info->d = debugfs_create_file(strncat(debug_name, dev_name(dev),
+-					      sizeof(debug_name) -
+-					      sizeof("ti_sci_debug@")),
+-				      0444, NULL, info, &ti_sci_debug_fops);
++	snprintf(debug_name, sizeof(debug_name), "ti_sci_debug@%s",
++		 dev_name(dev));
++	info->d = debugfs_create_file(debug_name, 0444, NULL, info,
++				      &ti_sci_debug_fops);
+ 	if (IS_ERR(info->d))
+ 		return PTR_ERR(info->d);
+ 
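
The old code seeded debug_name with a prefix and then strncat()'ed the device name using sizeof() arithmetic that is easy to get wrong (it also counted the prefix's terminating NUL). Building the whole string with one snprintf() is simpler and always NUL-terminated. A standalone illustration, with a made-up device name:

#include <stdio.h>

int main(void)
{
        const char *dev = "2b0f0000.sysc";      /* stand-in for dev_name(dev) */
        char debug_name[50];

        /* One call: guaranteed NUL termination, truncation instead of
         * overflow, and no fragile sizeof()-of-the-prefix arithmetic. */
        snprintf(debug_name, sizeof(debug_name), "ti_sci_debug@%s", dev);
        puts(debug_name);
        return 0;
}
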
+diff --git a/drivers/gpio/gpio-mlxbf3.c b/drivers/gpio/gpio-mlxbf3.c
+index 7a3e1760fc5b7..d5906d419b0ab 100644
+--- a/drivers/gpio/gpio-mlxbf3.c
++++ b/drivers/gpio/gpio-mlxbf3.c
+@@ -215,6 +215,8 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 			gs->gpio_clr_io + MLXBF_GPIO_FW_DATA_OUT_CLEAR,
+ 			gs->gpio_set_io + MLXBF_GPIO_FW_OUTPUT_ENABLE_SET,
+ 			gs->gpio_clr_io + MLXBF_GPIO_FW_OUTPUT_ENABLE_CLEAR, 0);
++	if (ret)
++		return dev_err_probe(dev, ret, "%s: bgpio_init() failed", __func__);
+ 
+ 	gc->request = gpiochip_generic_request;
+ 	gc->free = gpiochip_generic_free;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+index 0e61ebdb3f3e5..8d4a3ff65c18c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+@@ -755,7 +755,7 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
+ 	int r;
+ 
+ 	if (!adev->smc_rreg)
+-		return -EPERM;
++		return -EOPNOTSUPP;
+ 
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+@@ -814,7 +814,7 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
+ 	int r;
+ 
+ 	if (!adev->smc_wreg)
+-		return -EPERM;
++		return -EOPNOTSUPP;
+ 
+ 	if (size & 0x3 || *pos & 0x3)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 93cf73d6fa118..3677d644183b8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -5172,7 +5172,6 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ 	struct amdgpu_device *tmp_adev = NULL;
+ 	bool need_full_reset, skip_hw_reset, vram_lost = false;
+ 	int r = 0;
+-	bool gpu_reset_for_dev_remove = 0;
+ 
+ 	/* Try reset handler method first */
+ 	tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
+@@ -5192,10 +5191,6 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ 		test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
+ 	skip_hw_reset = test_bit(AMDGPU_SKIP_HW_RESET, &reset_context->flags);
+ 
+-	gpu_reset_for_dev_remove =
+-		test_bit(AMDGPU_RESET_FOR_DEVICE_REMOVE, &reset_context->flags) &&
+-			test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
+-
+ 	/*
+ 	 * ASIC reset has to be done on all XGMI hive nodes ASAP
+ 	 * to allow proper links negotiation in FW (within 1 sec)
+@@ -5238,18 +5233,6 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ 		amdgpu_ras_intr_cleared();
+ 	}
+ 
+-	/* Since the mode1 reset affects base ip blocks, the
+-	 * phase1 ip blocks need to be resumed. Otherwise there
+-	 * will be a BIOS signature error and the psp bootloader
+-	 * can't load kdb on the next amdgpu install.
+-	 */
+-	if (gpu_reset_for_dev_remove) {
+-		list_for_each_entry(tmp_adev, device_list_handle, reset_list)
+-			amdgpu_device_ip_resume_phase1(tmp_adev);
+-
+-		goto end;
+-	}
+-
+ 	list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
+ 		if (need_full_reset) {
+ 			/* post card */
+@@ -5486,11 +5469,6 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 	int i, r = 0;
+ 	bool need_emergency_restart = false;
+ 	bool audio_suspended = false;
+-	bool gpu_reset_for_dev_remove = false;
+-
+-	gpu_reset_for_dev_remove =
+-			test_bit(AMDGPU_RESET_FOR_DEVICE_REMOVE, &reset_context->flags) &&
+-				test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
+ 
+ 	/*
+ 	 * Special case: RAS triggered and full reset isn't supported
+@@ -5528,7 +5506,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 	if (!amdgpu_sriov_vf(adev) && (adev->gmc.xgmi.num_physical_nodes > 1)) {
+ 		list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) {
+ 			list_add_tail(&tmp_adev->reset_list, &device_list);
+-			if (gpu_reset_for_dev_remove && adev->shutdown)
++			if (adev->shutdown)
+ 				tmp_adev->shutdown = true;
+ 		}
+ 		if (!list_is_first(&adev->reset_list, &device_list))
+@@ -5613,10 +5591,6 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ 
+ retry:	/* Rest of adevs pre asic reset from XGMI hive. */
+ 	list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
+-		if (gpu_reset_for_dev_remove) {
+-			/* Workaroud for ASICs need to disable SMC first */
+-			amdgpu_device_smu_fini_early(tmp_adev);
+-		}
+ 		r = amdgpu_device_pre_asic_reset(tmp_adev, reset_context);
+ 		/*TODO Should we stop ?*/
+ 		if (r) {
+@@ -5648,9 +5622,6 @@ retry:	/* Rest of adevs pre asic reset from XGMI hive. */
+ 		r = amdgpu_do_asic_reset(device_list_handle, reset_context);
+ 		if (r && r == -EAGAIN)
+ 			goto retry;
+-
+-		if (!r && gpu_reset_for_dev_remove)
+-			goto recover_end;
+ 	}
+ 
+ skip_hw_reset:
+@@ -5706,7 +5677,6 @@ skip_sched_resume:
+ 		amdgpu_ras_set_error_query_ready(tmp_adev, true);
+ 	}
+ 
+-recover_end:
+ 	tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
+ 					    reset_list);
+ 	amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 8b33b130ea36e..f4174eebe993c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2265,4 +2265,6 @@ retry_init:
+ 
++		pci_wake_from_d3(pdev, TRUE);
++
+ 		/*
+ 		 * For runpm implemented via BACO, PMFW will handle the
+ 		 * timing for BACO in and out:
+@@ -2313,38 +2315,6 @@ amdgpu_pci_remove(struct pci_dev *pdev)
+ 		pm_runtime_forbid(dev->dev);
+ 	}
+ 
+-	if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 2) &&
+-	    !amdgpu_sriov_vf(adev)) {
+-		bool need_to_reset_gpu = false;
+-
+-		if (adev->gmc.xgmi.num_physical_nodes > 1) {
+-			struct amdgpu_hive_info *hive;
+-
+-			hive = amdgpu_get_xgmi_hive(adev);
+-			if (hive->device_remove_count == 0)
+-				need_to_reset_gpu = true;
+-			hive->device_remove_count++;
+-			amdgpu_put_xgmi_hive(hive);
+-		} else {
+-			need_to_reset_gpu = true;
+-		}
+-
+-		/* Workaround for ASICs need to reset SMU.
+-		 * Called only when the first device is removed.
+-		 */
+-		if (need_to_reset_gpu) {
+-			struct amdgpu_reset_context reset_context;
+-
+-			adev->shutdown = true;
+-			memset(&reset_context, 0, sizeof(reset_context));
+-			reset_context.method = AMD_RESET_METHOD_NONE;
+-			reset_context.reset_req_dev = adev;
+-			set_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
+-			set_bit(AMDGPU_RESET_FOR_DEVICE_REMOVE, &reset_context.flags);
+-			amdgpu_device_gpu_recover(adev, NULL, &reset_context);
+-		}
+-	}
+-
+ 	amdgpu_driver_unload_kms(dev);
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 583cf03950cd7..598867919c4f9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -1105,7 +1105,12 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 			if (amdgpu_dpm_read_sensor(adev,
+ 						   AMDGPU_PP_SENSOR_GPU_AVG_POWER,
+ 						   (void *)&ui32, &ui32_size)) {
+-				return -EINVAL;
++				/* fall back to input power for backwards compat */
++				if (amdgpu_dpm_read_sensor(adev,
++							   AMDGPU_PP_SENSOR_GPU_INPUT_POWER,
++							   (void *)&ui32, &ui32_size)) {
++					return -EINVAL;
++				}
+ 			}
+ 			ui32 >>= 8;
+ 			break;
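
This hunk turns a hard failure into a graceful fallback: when the averaged-power sensor is not implemented, the query is retried with the input-power sensor before returning -EINVAL, which keeps the ioctl usable for existing userspace. A hedged sketch of the pattern, with hypothetical backends:

#include <stdio.h>
#include <errno.h>

/* Hypothetical sensor backends; only the second one is implemented. */
static int read_avg_power(unsigned int *val)
{
        (void)val;
        return -EOPNOTSUPP;
}

static int read_input_power(unsigned int *val)
{
        *val = 42 << 8;         /* 8.8 fixed point, as in the ioctl above */
        return 0;
}

static int query_power(unsigned int *val)
{
        /* Preferred source first, then a compatible fallback, then fail. */
        if (read_avg_power(val) == 0)
                return 0;
        if (read_input_power(val) == 0)
                return 0;
        return -EINVAL;
}

int main(void)
{
        unsigned int v;

        if (query_power(&v) == 0)
                printf("power: %u W\n", v >> 8);
        return 0;
}
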
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+index b0335a1c5e90c..19899f6b9b2b4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+@@ -32,7 +32,6 @@ enum AMDGPU_RESET_FLAGS {
+ 
+ 	AMDGPU_NEED_FULL_RESET = 0,
+ 	AMDGPU_SKIP_HW_RESET = 1,
+-	AMDGPU_RESET_FOR_DEVICE_REMOVE = 2,
+ };
+ 
+ struct amdgpu_reset_context {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
+index 6cab882e8061e..1592c63b3099b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
+@@ -43,7 +43,6 @@ struct amdgpu_hive_info {
+ 	} pstate;
+ 
+ 	struct amdgpu_reset_domain *reset_domain;
+-	uint32_t device_remove_count;
+ 	atomic_t ras_recovery;
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c b/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
+index 62b205dac63a0..6604a3f99c5ec 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c
+@@ -330,12 +330,6 @@ static void kfd_init_apertures_vi(struct kfd_process_device *pdd, uint8_t id)
+ 	pdd->gpuvm_limit =
+ 		pdd->dev->kfd->shared_resources.gpuvm_size - 1;
+ 
+-	/* dGPUs: the reserved space for kernel
+-	 * before SVM
+-	 */
+-	pdd->qpd.cwsr_base = SVM_CWSR_BASE;
+-	pdd->qpd.ib_base = SVM_IB_BASE;
+-
+ 	pdd->scratch_base = MAKE_SCRATCH_APP_BASE_VI();
+ 	pdd->scratch_limit = MAKE_SCRATCH_APP_LIMIT(pdd->scratch_base);
+ }
+@@ -345,18 +339,18 @@ static void kfd_init_apertures_v9(struct kfd_process_device *pdd, uint8_t id)
+ 	pdd->lds_base = MAKE_LDS_APP_BASE_V9();
+ 	pdd->lds_limit = MAKE_LDS_APP_LIMIT(pdd->lds_base);
+ 
+-	pdd->gpuvm_base = PAGE_SIZE;
++        /* Raven needs SVM to support graphic handle, etc. Leave the small
++         * reserved space before SVM on Raven as well, even though we don't
++         * have to.
++         * Set gpuvm_base and gpuvm_limit to CANONICAL addresses so that they
++         * are used in Thunk to reserve SVM.
++         */
++        pdd->gpuvm_base = SVM_USER_BASE;
+ 	pdd->gpuvm_limit =
+ 		pdd->dev->kfd->shared_resources.gpuvm_size - 1;
+ 
+ 	pdd->scratch_base = MAKE_SCRATCH_APP_BASE_V9();
+ 	pdd->scratch_limit = MAKE_SCRATCH_APP_LIMIT(pdd->scratch_base);
+-
+-	/*
+-	 * Place TBA/TMA on opposite side of VM hole to prevent
+-	 * stray faults from triggering SVM on these pages.
+-	 */
+-	pdd->qpd.cwsr_base = pdd->dev->kfd->shared_resources.gpuvm_size;
+ }
+ 
+ int kfd_init_apertures(struct kfd_process *process)
+@@ -413,6 +407,12 @@ int kfd_init_apertures(struct kfd_process *process)
+ 					return -EINVAL;
+ 				}
+ 			}
++
++                        /* dGPUs: the reserved space for kernel
++                         * before SVM
++                         */
++                        pdd->qpd.cwsr_base = SVM_CWSR_BASE;
++                        pdd->qpd.ib_base = SVM_IB_BASE;
+ 		}
+ 
+ 		dev_dbg(kfd_device, "node id %u\n", id);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+index 6c25dab051d5c..b8680e0753cab 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+@@ -1021,7 +1021,7 @@ int kgd2kfd_init_zone_device(struct amdgpu_device *adev)
+ 	} else {
+ 		res = devm_request_free_mem_region(adev->dev, &iomem_resource, size);
+ 		if (IS_ERR(res))
+-			return -ENOMEM;
++			return PTR_ERR(res);
+ 		pgmap->range.start = res->start;
+ 		pgmap->range.end = res->end;
+ 		pgmap->type = MEMORY_DEVICE_PRIVATE;
+@@ -1037,10 +1037,10 @@ int kgd2kfd_init_zone_device(struct amdgpu_device *adev)
+ 	r = devm_memremap_pages(adev->dev, pgmap);
+ 	if (IS_ERR(r)) {
+ 		pr_err("failed to register HMM device memory\n");
+-		/* Disable SVM support capability */
+-		pgmap->type = 0;
+ 		if (pgmap->type == MEMORY_DEVICE_PRIVATE)
+ 			devm_release_mem_region(adev->dev, res->start, resource_size(res));
++		/* Disable SVM support capability */
++		pgmap->type = 0;
+ 		return PTR_ERR(r);
+ 	}
+ 
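
Two fixes in one hunk: the encoded error from devm_request_free_mem_region() is propagated via PTR_ERR() instead of a blanket -ENOMEM, and pgmap->type is now cleared only after the MEMORY_DEVICE_PRIVATE test, which the old ordering had turned into dead code by zeroing the field first. The ordering bug in miniature:

#include <stdio.h>

int main(void)
{
        int type = 1;   /* stands in for pgmap->type == MEMORY_DEVICE_PRIVATE */

        /* Old (buggy) order: clearing first makes the test dead code. */
        type = 0;
        if (type == 1)
                puts("release region");         /* never reached */

        /* Fixed order: test while the value is still meaningful. */
        type = 1;
        if (type == 1)
                puts("release region");         /* reached as intended */
        type = 0;       /* disable SVM support capability afterwards */

        return 0;
}
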
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 4c8e278a0d0cc..28162bfbe1b33 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -971,7 +971,7 @@ struct kfd_process {
+ 	struct work_struct debug_event_workarea;
+ 
+ 	/* Tracks debug per-vmid request for debug flags */
+-	bool dbg_flags;
++	u32 dbg_flags;
+ 
+ 	atomic_t poison;
+ 	/* Queues are in paused stated because we are in the process of doing a CRIU checkpoint */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 057284bf50bbe..58d775a0668db 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -1342,10 +1342,11 @@ static int kfd_create_indirect_link_prop(struct kfd_topology_device *kdev, int g
+ 		num_cpu++;
+ 	}
+ 
++	if (list_empty(&kdev->io_link_props))
++		return -ENODATA;
++
+ 	gpu_link = list_first_entry(&kdev->io_link_props,
+-					struct kfd_iolink_properties, list);
+-	if (!gpu_link)
+-		return -ENOMEM;
++				    struct kfd_iolink_properties, list);
+ 
+ 	for (i = 0; i < num_cpu; i++) {
+ 		/* CPU <--> GPU */
+@@ -1423,15 +1424,17 @@ static int kfd_add_peer_prop(struct kfd_topology_device *kdev,
+ 				peer->gpu->adev))
+ 		return ret;
+ 
++	if (list_empty(&kdev->io_link_props))
++		return -ENODATA;
++
+ 	iolink1 = list_first_entry(&kdev->io_link_props,
+-							struct kfd_iolink_properties, list);
+-	if (!iolink1)
+-		return -ENOMEM;
++				   struct kfd_iolink_properties, list);
++
++	if (list_empty(&peer->io_link_props))
++		return -ENODATA;
+ 
+ 	iolink2 = list_first_entry(&peer->io_link_props,
+-							struct kfd_iolink_properties, list);
+-	if (!iolink2)
+-		return -ENOMEM;
++				   struct kfd_iolink_properties, list);
+ 
+ 	props = kfd_alloc_struct(props);
+ 	if (!props)
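
Both hunks replace a check that could never fire: list_first_entry() is container_of() applied to the head's next pointer, so it never returns NULL, and on an empty list it returns a garbage pointer into the list head itself. The correct guard is list_empty() before taking the first entry. A self-contained sketch using a minimal intrusive list modeled on include/linux/list.h:

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };
struct item { int id; struct list_head list; };

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
#define list_first_entry(head, type, member) \
        container_of((head)->next, type, member)
static int list_empty(const struct list_head *h) { return h->next == h; }

int main(void)
{
        struct list_head head = { &head, &head };       /* empty list */

        /* BUG pattern: never NULL, even when the list is empty. */
        struct item *it = list_first_entry(&head, struct item, list);
        printf("bogus non-NULL pointer: %p\n", (void *)it);

        /* FIX pattern: guard with list_empty() first. */
        if (list_empty(&head)) {
                puts("-ENODATA");
                return 1;
        }
        return 0;
}
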
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 4e82ee4d74aca..a9bd020b165a1 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2660,7 +2660,7 @@ static int dm_suspend(void *handle)
+ 	return 0;
+ }
+ 
+-struct amdgpu_dm_connector *
++struct drm_connector *
+ amdgpu_dm_find_first_crtc_matching_connector(struct drm_atomic_state *state,
+ 					     struct drm_crtc *crtc)
+ {
+@@ -2673,7 +2673,7 @@ amdgpu_dm_find_first_crtc_matching_connector(struct drm_atomic_state *state,
+ 		crtc_from_state = new_con_state->crtc;
+ 
+ 		if (crtc_from_state == crtc)
+-			return to_amdgpu_dm_connector(connector);
++			return connector;
+ 	}
+ 
+ 	return NULL;
+@@ -5527,6 +5527,7 @@ static void fill_stream_properties_from_drm_display_mode(
+ 			&& stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
+ 		timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420;
+ 	else if (drm_mode_is_420_also(info, mode_in)
++			&& aconnector
+ 			&& aconnector->force_yuv420_output)
+ 		timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420;
+ 	else if ((connector->display_info.color_formats & DRM_COLOR_FORMAT_YCBCR444)
+@@ -5562,7 +5563,7 @@ static void fill_stream_properties_from_drm_display_mode(
+ 		timing_out->hdmi_vic = hv_frame.vic;
+ 	}
+ 
+-	if (is_freesync_video_mode(mode_in, aconnector)) {
++	if (aconnector && is_freesync_video_mode(mode_in, aconnector)) {
+ 		timing_out->h_addressable = mode_in->hdisplay;
+ 		timing_out->h_total = mode_in->htotal;
+ 		timing_out->h_sync_width = mode_in->hsync_end - mode_in->hsync_start;
+@@ -6039,14 +6040,14 @@ static void apply_dsc_policy_for_stream(struct amdgpu_dm_connector *aconnector,
+ }
+ 
+ static struct dc_stream_state *
+-create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
++create_stream_for_sink(struct drm_connector *connector,
+ 		       const struct drm_display_mode *drm_mode,
+ 		       const struct dm_connector_state *dm_state,
+ 		       const struct dc_stream_state *old_stream,
+ 		       int requested_bpc)
+ {
++	struct amdgpu_dm_connector *aconnector = NULL;
+ 	struct drm_display_mode *preferred_mode = NULL;
+-	struct drm_connector *drm_connector;
+ 	const struct drm_connector_state *con_state = &dm_state->base;
+ 	struct dc_stream_state *stream = NULL;
+ 	struct drm_display_mode mode;
+@@ -6065,20 +6066,22 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 	drm_mode_init(&mode, drm_mode);
+ 	memset(&saved_mode, 0, sizeof(saved_mode));
+ 
+-	if (aconnector == NULL) {
+-		DRM_ERROR("aconnector is NULL!\n");
++	if (connector == NULL) {
++		DRM_ERROR("connector is NULL!\n");
+ 		return stream;
+ 	}
+ 
+-	drm_connector = &aconnector->base;
+-
+-	if (!aconnector->dc_sink) {
+-		sink = create_fake_sink(aconnector);
+-		if (!sink)
+-			return stream;
+-	} else {
+-		sink = aconnector->dc_sink;
+-		dc_sink_retain(sink);
++	if (connector->connector_type != DRM_MODE_CONNECTOR_WRITEBACK) {
++		aconnector = NULL;
++		aconnector = to_amdgpu_dm_connector(connector);
++		if (!aconnector->dc_sink) {
++			sink = create_fake_sink(aconnector);
++			if (!sink)
++				return stream;
++		} else {
++			sink = aconnector->dc_sink;
++			dc_sink_retain(sink);
++		}
+ 	}
+ 
+ 	stream = dc_create_stream_for_sink(sink);
+@@ -6088,12 +6091,13 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		goto finish;
+ 	}
+ 
++	/* We leave this NULL for writeback connectors */
+ 	stream->dm_stream_context = aconnector;
+ 
+ 	stream->timing.flags.LTE_340MCSC_SCRAMBLE =
+-		drm_connector->display_info.hdmi.scdc.scrambling.low_rates;
++		connector->display_info.hdmi.scdc.scrambling.low_rates;
+ 
+-	list_for_each_entry(preferred_mode, &aconnector->base.modes, head) {
++	list_for_each_entry(preferred_mode, &connector->modes, head) {
+ 		/* Search for preferred mode */
+ 		if (preferred_mode->type & DRM_MODE_TYPE_PREFERRED) {
+ 			native_mode_found = true;
+@@ -6102,7 +6106,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 	}
+ 	if (!native_mode_found)
+ 		preferred_mode = list_first_entry_or_null(
+-				&aconnector->base.modes,
++				&connector->modes,
+ 				struct drm_display_mode,
+ 				head);
+ 
+@@ -6116,7 +6120,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		 * and the modelist may not be filled in time.
+ 		 */
+ 		DRM_DEBUG_DRIVER("No preferred mode found\n");
+-	} else {
++	} else if (aconnector) {
+ 		recalculate_timing = is_freesync_video_mode(&mode, aconnector);
+ 		if (recalculate_timing) {
+ 			freesync_mode = get_highest_refresh_rate_mode(aconnector, false);
+@@ -6139,13 +6143,17 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 	 */
+ 	if (!scale || mode_refresh != preferred_refresh)
+ 		fill_stream_properties_from_drm_display_mode(
+-			stream, &mode, &aconnector->base, con_state, NULL,
++			stream, &mode, connector, con_state, NULL,
+ 			requested_bpc);
+ 	else
+ 		fill_stream_properties_from_drm_display_mode(
+-			stream, &mode, &aconnector->base, con_state, old_stream,
++			stream, &mode, connector, con_state, old_stream,
+ 			requested_bpc);
+ 
++	/* The rest isn't needed for writeback connectors */
++	if (!aconnector)
++		goto finish;
++
+ 	if (aconnector->timing_changed) {
+ 		drm_dbg(aconnector->base.dev,
+ 			"overriding timing for automated test, bpc %d, changing to %d\n",
+@@ -6163,7 +6171,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 
+ 	fill_audio_info(
+ 		&stream->audio_info,
+-		drm_connector,
++		connector,
+ 		sink);
+ 
+ 	update_stream_signal(stream, sink);
+@@ -6633,7 +6641,7 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 	enum dc_status dc_result = DC_OK;
+ 
+ 	do {
+-		stream = create_stream_for_sink(aconnector, drm_mode,
++		stream = create_stream_for_sink(connector, drm_mode,
+ 						dm_state, old_stream,
+ 						requested_bpc);
+ 		if (stream == NULL) {
+@@ -6641,6 +6649,9 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 			break;
+ 		}
+ 
++		if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
++			return stream;
++
+ 		dc_result = dc_validate_stream(adev->dm.dc, stream);
+ 		if (dc_result == DC_OK && stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ 			dc_result = dm_dp_mst_is_port_support_mode(aconnector, stream);
+@@ -6928,7 +6939,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
+ 								    max_bpc);
+ 		bpp = convert_dc_color_depth_into_bpc(color_depth) * 3;
+ 		clock = adjusted_mode->clock;
+-		dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
++		dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp << 4);
+ 	}
+ 
+ 	dm_new_connector_state->vcpi_slots =
+@@ -9354,6 +9365,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 	 * update changed items
+ 	 */
+ 	struct amdgpu_crtc *acrtc = NULL;
++	struct drm_connector *connector = NULL;
+ 	struct amdgpu_dm_connector *aconnector = NULL;
+ 	struct drm_connector_state *drm_new_conn_state = NULL, *drm_old_conn_state = NULL;
+ 	struct dm_connector_state *dm_new_conn_state = NULL, *dm_old_conn_state = NULL;
+@@ -9363,15 +9375,17 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 	dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+ 	dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+ 	acrtc = to_amdgpu_crtc(crtc);
+-	aconnector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc);
++	connector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc);
++	if (connector)
++		aconnector = to_amdgpu_dm_connector(connector);
+ 
+ 	/* TODO This hack should go away */
+-	if (aconnector && enable) {
++	if (connector && enable) {
+ 		/* Make sure fake sink is created in plug-in scenario */
+ 		drm_new_conn_state = drm_atomic_get_new_connector_state(state,
+-							    &aconnector->base);
++									connector);
+ 		drm_old_conn_state = drm_atomic_get_old_connector_state(state,
+-							    &aconnector->base);
++									connector);
+ 
+ 		if (IS_ERR(drm_new_conn_state)) {
+ 			ret = PTR_ERR_OR_ZERO(drm_new_conn_state);
+@@ -9518,7 +9532,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 		 * added MST connectors not found in existing crtc_state in the chained mode
+ 		 * TODO: need to dig out the root cause of that
+ 		 */
+-		if (!aconnector)
++		if (!connector)
+ 			goto skip_modeset;
+ 
+ 		if (modereset_required(new_crtc_state))
+@@ -9561,7 +9575,7 @@ skip_modeset:
+ 	 * We want to do dc stream updates that do not require a
+ 	 * full modeset below.
+ 	 */
+-	if (!(enable && aconnector && new_crtc_state->active))
++	if (!(enable && connector && new_crtc_state->active))
+ 		return 0;
+ 	/*
+ 	 * Given above conditions, the dc state cannot be NULL because:
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 3d480be802cb5..3710f4d0f2cb2 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -834,7 +834,7 @@ struct dc_stream_state *
+ int dm_atomic_get_state(struct drm_atomic_state *state,
+ 			struct dm_atomic_state **dm_state);
+ 
+-struct amdgpu_dm_connector *
++struct drm_connector *
+ amdgpu_dm_find_first_crtc_matching_connector(struct drm_atomic_state *state,
+ 					     struct drm_crtc *crtc);
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 11da0eebee6c4..602a2ab98abf8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1500,14 +1500,16 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ 		int ind = find_crtc_index_in_state_by_stream(state, stream);
+ 
+ 		if (ind >= 0) {
++			struct drm_connector *connector;
+ 			struct amdgpu_dm_connector *aconnector;
+ 			struct drm_connector_state *drm_new_conn_state;
+ 			struct dm_connector_state *dm_new_conn_state;
+ 			struct dm_crtc_state *dm_old_crtc_state;
+ 
+-			aconnector =
++			connector =
+ 				amdgpu_dm_find_first_crtc_matching_connector(state,
+ 									     state->crtcs[ind].ptr);
++			aconnector = to_amdgpu_dm_connector(connector);
+ 			drm_new_conn_state =
+ 				drm_atomic_get_new_connector_state(state,
+ 								   &aconnector->base);
+@@ -1642,7 +1644,7 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ 	} else {
+ 		/* check if mode could be supported within full_pbn */
+ 		bpp = convert_dc_color_depth_into_bpc(stream->timing.display_color_depth) * 3;
+-		pbn = drm_dp_calc_pbn_mode(stream->timing.pix_clk_100hz / 10, bpp, false);
++		pbn = drm_dp_calc_pbn_mode(stream->timing.pix_clk_100hz / 10, bpp << 4);
+ 		if (pbn > full_pbn)
+ 			return DC_FAIL_BANDWIDTH_VALIDATE;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index a1f1d10039927..e4bb1e25ee3b2 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -4512,7 +4512,7 @@ void dc_resource_state_copy_construct(
+ 	struct dml2_context *dml2 = NULL;
+ 
+ 	// Need to preserve allocated dml2 context
+-	if (src_ctx->clk_mgr->ctx->dc->debug.using_dml2)
++	if (src_ctx->clk_mgr && src_ctx->clk_mgr->ctx->dc->debug.using_dml2)
+ 		dml2 = dst_ctx->bw_ctx.dml2;
+ #endif
+ 
+@@ -4520,7 +4520,7 @@ void dc_resource_state_copy_construct(
+ 
+ #ifdef CONFIG_DRM_AMD_DC_FP
+ 	// Preserve allocated dml2 context
+-	if (src_ctx->clk_mgr->ctx->dc->debug.using_dml2)
++	if (src_ctx->clk_mgr && src_ctx->clk_mgr->ctx->dc->debug.using_dml2)
+ 		dst_ctx->bw_ctx.dml2 = dml2;
+ #endif
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+index 90339c2dfd848..5a0b045189569 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+@@ -807,7 +807,7 @@ void dp_decide_lane_settings(
+ 		const struct link_training_settings *lt_settings,
+ 		const union lane_adjust ln_adjust[LANE_COUNT_DP_MAX],
+ 		struct dc_lane_settings hw_lane_settings[LANE_COUNT_DP_MAX],
+-		union dpcd_training_lane dpcd_lane_settings[LANE_COUNT_DP_MAX])
++		union dpcd_training_lane *dpcd_lane_settings)
+ {
+ 	uint32_t lane;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.h b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.h
+index 7d027bac82551..851bd17317a0c 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.h
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.h
+@@ -111,7 +111,7 @@ void dp_decide_lane_settings(
+ 	const struct link_training_settings *lt_settings,
+ 	const union lane_adjust ln_adjust[LANE_COUNT_DP_MAX],
+ 	struct dc_lane_settings hw_lane_settings[LANE_COUNT_DP_MAX],
+-	union dpcd_training_lane dpcd_lane_settings[LANE_COUNT_DP_MAX]);
++	union dpcd_training_lane *dpcd_lane_settings);
+ 
+ enum dc_dp_training_pattern decide_cr_training_pattern(
+ 		const struct dc_link_settings *link_settings);
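
The prototype change is invisible to the compiler but honest to the reader: a function parameter declared as an array, such as dpcd_lane_settings[LANE_COUNT_DP_MAX], decays to a plain pointer, so the written bound is never enforced and declaration and definition can silently disagree. Spelling it as a pointer states what actually happens:

#include <stdio.h>

/* To the compiler these two declarations are identical: the bound in an
 * array parameter is discarded and the function receives a bare pointer. */
void f(int a[4]);
void f(int *a);

void f(int *a)
{
        (void)a;
}

int main(void)
{
        int two[2] = { 1, 2 };

        f(two); /* accepted: the "[4]" above enforced nothing */
        printf("%zu\n", sizeof(int [4]));       /* 16 with 4-byte int */
        return 0;
}
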
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+index 5d28c951a3197..5cb4725c773f6 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+@@ -2735,10 +2735,8 @@ static int kv_parse_power_table(struct amdgpu_device *adev)
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+ 		ps = kzalloc(sizeof(struct kv_ps), GFP_KERNEL);
+-		if (ps == NULL) {
+-			kfree(adev->pm.dpm.ps);
++		if (ps == NULL)
+ 			return -ENOMEM;
+-		}
+ 		adev->pm.dpm.ps[i].ps_priv = ps;
+ 		k = 0;
+ 		idx = (u8 *)&power_state->v2.clockInfoIndex[0];
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+index 81fb4e5dd804b..60377747bab4f 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+@@ -272,10 +272,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
+ 			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
+ 								 dep_table);
+-			if (ret) {
+-				amdgpu_free_extended_power_table(adev);
++			if (ret)
+ 				return ret;
+-			}
+ 		}
+ 		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
+ 			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+@@ -283,10 +281,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
+ 			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
+ 								 dep_table);
+-			if (ret) {
+-				amdgpu_free_extended_power_table(adev);
++			if (ret)
+ 				return ret;
+-			}
+ 		}
+ 		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
+ 			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+@@ -294,10 +290,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
+ 			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
+ 								 dep_table);
+-			if (ret) {
+-				amdgpu_free_extended_power_table(adev);
++			if (ret)
+ 				return ret;
+-			}
+ 		}
+ 		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
+ 			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+@@ -305,10 +299,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
+ 			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
+ 								 dep_table);
+-			if (ret) {
+-				amdgpu_free_extended_power_table(adev);
++			if (ret)
+ 				return ret;
+-			}
+ 		}
+ 		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
+ 			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
+@@ -339,10 +331,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				kcalloc(psl->ucNumEntries,
+ 					sizeof(struct amdgpu_phase_shedding_limits_entry),
+ 					GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries)
+ 				return -ENOMEM;
+-			}
+ 
+ 			entry = &psl->entries[0];
+ 			for (i = 0; i < psl->ucNumEntries; i++) {
+@@ -383,10 +373,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 			ATOM_PPLIB_CAC_Leakage_Record *entry;
+ 			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
+ 			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries)
+ 				return -ENOMEM;
+-			}
+ 			entry = &cac_table->entries[0];
+ 			for (i = 0; i < cac_table->ucNumEntries; i++) {
+ 				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
+@@ -438,10 +426,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
+ 			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
+ 				kzalloc(size, GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries)
+ 				return -ENOMEM;
+-			}
+ 			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
+ 				limits->numEntries;
+ 			entry = &limits->entries[0];
+@@ -493,10 +479,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
+ 			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
+ 				kzalloc(size, GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries)
+ 				return -ENOMEM;
+-			}
+ 			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
+ 				limits->numEntries;
+ 			entry = &limits->entries[0];
+@@ -525,10 +509,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				sizeof(struct amdgpu_clock_voltage_dependency_entry);
+ 			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
+ 				kzalloc(size, GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries)
+ 				return -ENOMEM;
+-			}
+ 			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
+ 				limits->numEntries;
+ 			entry = &limits->entries[0];
+@@ -548,10 +530,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				 le16_to_cpu(ext_hdr->usPPMTableOffset));
+ 			adev->pm.dpm.dyn_state.ppm_table =
+ 				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.ppm_table) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.ppm_table)
+ 				return -ENOMEM;
+-			}
+ 			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
+ 			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
+ 				le16_to_cpu(ppm->usCpuCoreNumber);
+@@ -583,10 +563,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 				sizeof(struct amdgpu_clock_voltage_dependency_entry);
+ 			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
+ 				kzalloc(size, GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries)
+ 				return -ENOMEM;
+-			}
+ 			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
+ 				limits->numEntries;
+ 			entry = &limits->entries[0];
+@@ -606,10 +584,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 			ATOM_PowerTune_Table *pt;
+ 			adev->pm.dpm.dyn_state.cac_tdp_table =
+ 				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
+-			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
+-				amdgpu_free_extended_power_table(adev);
++			if (!adev->pm.dpm.dyn_state.cac_tdp_table)
+ 				return -ENOMEM;
+-			}
+ 			if (rev > 0) {
+ 				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
+ 					(mode_info->atom_context->bios + data_offset +
+@@ -645,10 +621,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+ 			ret = amdgpu_parse_clk_voltage_dep_table(
+ 					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
+ 					dep_table);
+-			if (ret) {
+-				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
++			if (ret)
+ 				return ret;
+-			}
+ 		}
+ 	}
+ 
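
Every hunk in this file (and the matching kv_dpm.c and si_dpm.c changes nearby) deletes a cleanup call from an error branch. The caller already frees the extended power table when parsing fails, so freeing it inside the parser too was a double free. The rule being applied: unwind only what the function itself allocated and let the single owner do the rest. Reduced to its core:

#include <stdlib.h>

static char *table;     /* owned by the caller of parse() */

static int parse(int fail)
{
        table = malloc(64);
        if (!table)
                return -1;
        if (fail) {
                /* BUG pattern removed above: calling free(table) here AND
                 * in the caller below frees the buffer twice.  Correct:
                 * just report the error; the owner cleans up. */
                return -1;
        }
        return 0;
}

int main(void)
{
        if (parse(1) != 0) {
                free(table);    /* the single owner of the cleanup */
                table = NULL;
        }
        return 0;
}
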
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+index fc8e4ac6c8e76..df4f20293c16a 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+@@ -7379,10 +7379,9 @@ static int si_dpm_init(struct amdgpu_device *adev)
+ 		kcalloc(4,
+ 			sizeof(struct amdgpu_clock_voltage_dependency_entry),
+ 			GFP_KERNEL);
+-	if (!adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries) {
+-		amdgpu_free_extended_power_table(adev);
++	if (!adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries)
+ 		return -ENOMEM;
+-	}
++
+ 	adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.count = 4;
+ 	adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].clk = 0;
+ 	adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].v = 0;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index 11372fcc59c8f..b1a8799e2dee3 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -2974,6 +2974,8 @@ static int smu7_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ 		result = smu7_get_evv_voltages(hwmgr);
+ 		if (result) {
+ 			pr_info("Get EVV Voltage Failed.  Abort Driver loading!\n");
++			kfree(hwmgr->backend);
++			hwmgr->backend = NULL;
+ 			return -EINVAL;
+ 		}
+ 	} else {
+@@ -3019,8 +3021,10 @@ static int smu7_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+ 	}
+ 
+ 	result = smu7_update_edc_leakage_table(hwmgr);
+-	if (result)
++	if (result) {
++		smu7_hwmgr_backend_fini(hwmgr);
+ 		return result;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 8f740154707db..51abe42c639e5 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1464,9 +1464,6 @@ static int _anx7625_hpd_polling(struct anx7625_data *ctx,
+ 	if (ctx->pdata.intp_irq)
+ 		return 0;
+ 
+-	/* Delay 200ms for FW HPD de-bounce */
+-	msleep(200);
+-
+ 	ret = readx_poll_timeout(anx7625_read_hpd_status_p0,
+ 				 ctx, val,
+ 				 ((val & HPD_STATUS) || (val < 0)),
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-hdcp.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-hdcp.c
+index 946212a955981..5e3b8edcf7948 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-hdcp.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-hdcp.c
+@@ -403,7 +403,8 @@ static int _cdns_mhdp_hdcp_disable(struct cdns_mhdp_device *mhdp)
+ 
+ static int _cdns_mhdp_hdcp_enable(struct cdns_mhdp_device *mhdp, u8 content_type)
+ {
+-	int ret, tries = 3;
++	int ret = -EINVAL;
++	int tries = 3;
+ 	u32 i;
+ 
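
Seeding ret with -EINVAL gives the retry loop a well-defined result even on paths where the compiler cannot prove an assignment happened (and would warn about a maybe-uninitialized use); a zero-trip loop now returns a real error instead of stack garbage. The general shape:

#include <stdio.h>
#include <errno.h>

static int try_once(int attempt) { return attempt < 2 ? -EAGAIN : 0; }

static int do_with_retries(int tries)
{
        int ret = -EINVAL;      /* well-defined result even for tries == 0 */
        int i;

        for (i = 0; i < tries; i++) {
                ret = try_once(i);
                if (ret == 0)
                        break;
        }
        return ret;
}

int main(void)
{
        printf("%d\n", do_with_retries(3));     /* 0 */
        printf("%d\n", do_with_retries(0));     /* -22, not garbage */
        return 0;
}
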
+ 	for (i = 0; i < tries; i++) {
+diff --git a/drivers/gpu/drm/bridge/imx/imx93-mipi-dsi.c b/drivers/gpu/drm/bridge/imx/imx93-mipi-dsi.c
+index 3ff30ce80c5b8..2347f8dd632f9 100644
+--- a/drivers/gpu/drm/bridge/imx/imx93-mipi-dsi.c
++++ b/drivers/gpu/drm/bridge/imx/imx93-mipi-dsi.c
+@@ -226,8 +226,8 @@ dphy_pll_get_configure_from_opts(struct imx93_dsi *dsi,
+ 	unsigned long fout;
+ 	unsigned long best_fout = 0;
+ 	unsigned int fvco_div;
+-	unsigned int min_n, max_n, n, best_n;
+-	unsigned long m, best_m;
++	unsigned int min_n, max_n, n, best_n = UINT_MAX;
++	unsigned long m, best_m = 0;
+ 	unsigned long min_delta = ULONG_MAX;
+ 	unsigned long delta;
+ 	u64 tmp;
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index ef2e373606ba3..615cc8f950d7b 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -2273,7 +2273,7 @@ static int tc_probe(struct i2c_client *client)
+ 	} else {
+ 		if (tc->hpd_pin < 0 || tc->hpd_pin > 1) {
+ 			dev_err(dev, "failed to parse HPD number\n");
+-			return ret;
++			return -EINVAL;
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/bridge/ti-tpd12s015.c b/drivers/gpu/drm/bridge/ti-tpd12s015.c
+index e0e015243a602..b588fea12502d 100644
+--- a/drivers/gpu/drm/bridge/ti-tpd12s015.c
++++ b/drivers/gpu/drm/bridge/ti-tpd12s015.c
+@@ -179,7 +179,7 @@ static int tpd12s015_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int __exit tpd12s015_remove(struct platform_device *pdev)
++static int tpd12s015_remove(struct platform_device *pdev)
+ {
+ 	struct tpd12s015_device *tpd = platform_get_drvdata(pdev);
+ 
+@@ -197,7 +197,7 @@ MODULE_DEVICE_TABLE(of, tpd12s015_of_match);
+ 
+ static struct platform_driver tpd12s015_driver = {
+ 	.probe	= tpd12s015_probe,
+-	.remove	= __exit_p(tpd12s015_remove),
++	.remove = tpd12s015_remove,
+ 	.driver	= {
+ 		.name	= "tpd12s015",
+ 		.of_match_table = tpd12s015_of_match,
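
Dropping __exit/__exit_p() matters for built-in drivers: __exit code is discarded when the driver is not a module, and __exit_p() then evaluates to NULL, so the remove callback would silently never run on unbind. A remove path must live in regular text because a device can be unbound at any time. A compilable sketch with the kernel annotations stubbed out as in the built-in configuration:

#include <stdio.h>

/* Stubs standing in for the kernel's section annotations (built-in case). */
#define __exit          /* such code is discarded for built-in drivers */
#define __exit_p(x)     NULL    /* built-in: the pointer becomes NULL */

static __exit int my_remove(void) { puts("teardown"); return 0; }

struct platform_driver { int (*remove)(void); };

int main(void)
{
        /* BUG pattern: .remove compiles to NULL when built in. */
        struct platform_driver bad  = { .remove = __exit_p(my_remove) };
        /* FIX pattern: a plain function, always reachable. */
        struct platform_driver good = { .remove = my_remove };

        if (!bad.remove)
                puts("built-in: remove silently skipped");
        good.remove();
        return 0;
}
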
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index 0e0d0e76de065..772b00ebd57bd 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -4718,13 +4718,12 @@ EXPORT_SYMBOL(drm_dp_check_act_status);
+ 
+ /**
+  * drm_dp_calc_pbn_mode() - Calculate the PBN for a mode.
+- * @clock: dot clock for the mode
+- * @bpp: bpp for the mode.
+- * @dsc: DSC mode. If true, bpp has units of 1/16 of a bit per pixel
++ * @clock: dot clock
++ * @bpp: bpp as .4 binary fixed point
+  *
+  * This uses the formula in the spec to calculate the PBN value for a mode.
+  */
+-int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc)
++int drm_dp_calc_pbn_mode(int clock, int bpp)
+ {
+ 	/*
+ 	 * margin 5300ppm + 300ppm ~ 0.6% as per spec, factor is 1.006
+@@ -4735,18 +4734,9 @@ int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc)
+ 	 * peak_kbps *= (1006/1000)
+ 	 * peak_kbps *= (64/54)
+ 	 * peak_kbps *= 8    convert to bytes
+-	 *
+-	 * If the bpp is in units of 1/16, further divide by 16. Put this
+-	 * factor in the numerator rather than the denominator to avoid
+-	 * integer overflow
+ 	 */
+-
+-	if (dsc)
+-		return DIV_ROUND_UP_ULL(mul_u32_u32(clock * (bpp / 16), 64 * 1006),
+-					8 * 54 * 1000 * 1000);
+-
+-	return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp, 64 * 1006),
+-				8 * 54 * 1000 * 1000);
++	return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp, 64 * 1006 >> 4),
++				1000 * 8 * 54 * 1000);
+ }
+ EXPORT_SYMBOL(drm_dp_calc_pbn_mode);
+ 
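
The API change folds the old dsc flag into the argument itself: callers now always pass bpp in .4 binary fixed point (bpp << 4, as the amdgpu and i915 hunks elsewhere in this patch do), and the function shifts the margin factor right by 4 instead of branching. Since 64 * 1006 is divisible by 16, the integer result is unchanged for whole-number bpp. A worked check (the clock and bpp are a plausible 4k60 example, not values from the patch):

#include <stdio.h>
#include <stdint.h>

#define DIV_ROUND_UP_ULL(n, d)  (((n) + (d) - 1) / (d))
static uint64_t mul_u32_u32(uint32_t a, uint32_t b) { return (uint64_t)a * b; }

static int pbn_old(int clock, int bpp)          /* pre-patch, dsc == false */
{
        return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp, 64 * 1006),
                                8 * 54 * 1000 * 1000);
}

static int pbn_new(int clock, int bpp_x16)      /* post-patch, .4 fixed point */
{
        return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp_x16, 64 * 1006 >> 4),
                                1000 * 8 * 54 * 1000);
}

int main(void)
{
        /* 4k60: 533.25 MHz dot clock, 24 bpp */
        int clock = 533250, bpp = 24;

        printf("old: %d\n", pbn_old(clock, bpp));       /* 1908 */
        printf("new: %d\n", pbn_new(clock, bpp << 4));  /* 1908 */
        return 0;
}
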
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 535f16e7882e7..3c835c99daadd 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -949,8 +949,11 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
+ 			goto err_minors;
+ 	}
+ 
+-	if (drm_core_check_feature(dev, DRIVER_MODESET))
+-		drm_modeset_register_all(dev);
++	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
++		ret = drm_modeset_register_all(dev);
++		if (ret)
++			goto err_unload;
++	}
+ 
+ 	DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",
+ 		 driver->name, driver->major, driver->minor,
+@@ -960,6 +963,9 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
+ 
+ 	goto out_unlock;
+ 
++err_unload:
++	if (dev->driver->unload)
++		dev->driver->unload(dev);
+ err_minors:
+ 	remove_compat_control_link(dev);
+ 	drm_minor_unregister(dev, DRM_MINOR_ACCEL);
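
The fix adds the unwind step that was missing: if drm_modeset_register_all() fails after the driver's load hook has run, drm_dev_register() must unload the driver again before unregistering the minors, undoing work in exactly the reverse order it was done. The idiom, reduced:

#include <stdio.h>

static int  load(void)             { puts("load");   return 0; }
static void unload(void)           { puts("unload"); }
static int  register_minors(void)  { puts("minors"); return 0; }
static void remove_minors(void)    { puts("~minors"); }
static int  register_modeset(void) { return -1; }      /* simulate failure */

static int dev_register(void)
{
        int ret;

        if ((ret = register_minors()))
                return ret;
        if ((ret = load()))
                goto err_minors;
        if ((ret = register_modeset()))
                goto err_unload;        /* the step this patch adds */
        return 0;

err_unload:
        unload();               /* reverse order: undo load()... */
err_minors:
        remove_minors();        /* ...then undo register_minors() */
        return ret;
}

int main(void)
{
        return dev_register() ? 1 : 0;
}
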
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 62ce92772367f..1710ef24c5117 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2025,7 +2025,7 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
+ 		}
+ 	}
+ 
+-	dsc_max_bpc = intel_dp_dsc_min_src_input_bpc(i915);
++	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(i915);
+ 	if (!dsc_max_bpc)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index aa10612626136..03ac2817664e6 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -106,8 +106,7 @@ static int intel_dp_mst_find_vcpi_slots_for_bpp(struct intel_encoder *encoder,
+ 			continue;
+ 
+ 		crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock,
+-						       dsc ? bpp << 4 : bpp,
+-						       dsc);
++						       bpp << 4);
+ 
+ 		slots = drm_dp_atomic_find_time_slots(state, &intel_dp->mst_mgr,
+ 						      connector->port,
+@@ -979,7 +978,7 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
+ 		return ret;
+ 
+ 	if (mode_rate > max_rate || mode->clock > max_dotclk ||
+-	    drm_dp_calc_pbn_mode(mode->clock, min_bpp, false) > port->full_pbn) {
++	    drm_dp_calc_pbn_mode(mode->clock, min_bpp << 4) > port->full_pbn) {
+ 		*status = MODE_CLOCK_HIGH;
+ 		return 0;
+ 	}
+diff --git a/drivers/gpu/drm/i915/display/intel_frontbuffer.c b/drivers/gpu/drm/i915/display/intel_frontbuffer.c
+index ec46716b2f49e..2ea37c0414a95 100644
+--- a/drivers/gpu/drm/i915/display/intel_frontbuffer.c
++++ b/drivers/gpu/drm/i915/display/intel_frontbuffer.c
+@@ -265,8 +265,6 @@ static void frontbuffer_release(struct kref *ref)
+ 	spin_unlock(&intel_bo_to_i915(obj)->display.fb_tracking.lock);
+ 
+ 	i915_active_fini(&front->write);
+-
+-	i915_gem_object_put(obj);
+ 	kfree_rcu(front, rcu);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_frontbuffer.h b/drivers/gpu/drm/i915/gem/i915_gem_object_frontbuffer.h
+index e5e870b6f186c..9fbf14867a2a6 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object_frontbuffer.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object_frontbuffer.h
+@@ -89,6 +89,7 @@ i915_gem_object_set_frontbuffer(struct drm_i915_gem_object *obj,
+ 
+ 	if (!front) {
+ 		RCU_INIT_POINTER(obj->frontbuffer, NULL);
++		drm_gem_object_put(intel_bo_to_drm_bo(obj));
+ 	} else if (rcu_access_pointer(obj->frontbuffer)) {
+ 		cur = rcu_dereference_protected(obj->frontbuffer, true);
+ 		kref_get(&cur->ref);
+diff --git a/drivers/gpu/drm/imx/lcdc/imx-lcdc.c b/drivers/gpu/drm/imx/lcdc/imx-lcdc.c
+index 22b65f4a0e303..4beb3b4bd6942 100644
+--- a/drivers/gpu/drm/imx/lcdc/imx-lcdc.c
++++ b/drivers/gpu/drm/imx/lcdc/imx-lcdc.c
+@@ -342,21 +342,12 @@ static const struct drm_mode_config_helper_funcs imx_lcdc_mode_config_helpers =
+ 	.atomic_commit_tail = drm_atomic_helper_commit_tail_rpm,
+ };
+ 
+-static void imx_lcdc_release(struct drm_device *drm)
+-{
+-	struct imx_lcdc *lcdc = imx_lcdc_from_drmdev(drm);
+-
+-	drm_kms_helper_poll_fini(drm);
+-	kfree(lcdc);
+-}
+-
+ DEFINE_DRM_GEM_DMA_FOPS(imx_lcdc_drm_fops);
+ 
+ static struct drm_driver imx_lcdc_drm_driver = {
+ 	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
+ 	.fops = &imx_lcdc_drm_fops,
+ 	DRM_GEM_DMA_DRIVER_OPS_VMAP,
+-	.release = imx_lcdc_release,
+ 	.name = "imx-lcdc",
+ 	.desc = "i.MX LCDC driver",
+ 	.date = "20200716",
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_merge.c b/drivers/gpu/drm/mediatek/mtk_disp_merge.c
+index e525a6b9e5b0b..22f768d923d5a 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_merge.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_merge.c
+@@ -103,7 +103,7 @@ void mtk_merge_stop_cmdq(struct device *dev, struct cmdq_pkt *cmdq_pkt)
+ 	mtk_ddp_write(cmdq_pkt, 0, &priv->cmdq_reg, priv->regs,
+ 		      DISP_REG_MERGE_CTRL);
+ 
+-	if (priv->async_clk)
++	if (!cmdq_pkt && priv->async_clk)
+ 		reset_control_reset(priv->reset_ctl);
+ }
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index e4c16ba9902dc..2136a596efa18 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -2818,3 +2818,4 @@ MODULE_AUTHOR("Markus Schneider-Pargmann <msp@baylibre.com>");
+ MODULE_AUTHOR("Bo-Chen Chen <rex-bc.chen@mediatek.com>");
+ MODULE_DESCRIPTION("MediaTek DisplayPort Driver");
+ MODULE_LICENSE("GPL");
++MODULE_SOFTDEP("pre: phy_mtk_dp");
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index 4e3d9f7b4d8c9..beb7d9d08e971 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -966,20 +966,6 @@ static const struct mtk_dpi_conf mt8186_conf = {
+ 	.csc_enable_bit = CSC_ENABLE,
+ };
+ 
+-static const struct mtk_dpi_conf mt8188_dpintf_conf = {
+-	.cal_factor = mt8195_dpintf_calculate_factor,
+-	.max_clock_khz = 600000,
+-	.output_fmts = mt8195_output_fmts,
+-	.num_output_fmts = ARRAY_SIZE(mt8195_output_fmts),
+-	.pixels_per_iter = 4,
+-	.input_2pixel = false,
+-	.dimension_mask = DPINTF_HPW_MASK,
+-	.hvsize_mask = DPINTF_HSIZE_MASK,
+-	.channel_swap_shift = DPINTF_CH_SWAP,
+-	.yuv422_en_bit = DPINTF_YUV422_EN,
+-	.csc_enable_bit = DPINTF_CSC_ENABLE,
+-};
+-
+ static const struct mtk_dpi_conf mt8192_conf = {
+ 	.cal_factor = mt8183_calculate_factor,
+ 	.reg_h_fre_con = 0xe0,
+@@ -1103,7 +1089,7 @@ static const struct of_device_id mtk_dpi_of_ids[] = {
+ 	{ .compatible = "mediatek,mt8173-dpi", .data = &mt8173_conf },
+ 	{ .compatible = "mediatek,mt8183-dpi", .data = &mt8183_conf },
+ 	{ .compatible = "mediatek,mt8186-dpi", .data = &mt8186_conf },
+-	{ .compatible = "mediatek,mt8188-dp-intf", .data = &mt8188_dpintf_conf },
++	{ .compatible = "mediatek,mt8188-dp-intf", .data = &mt8195_dpintf_conf },
+ 	{ .compatible = "mediatek,mt8192-dpi", .data = &mt8192_conf },
+ 	{ .compatible = "mediatek,mt8195-dp-intf", .data = &mt8195_dpintf_conf },
+ 	{ /* sentinel */ },
+diff --git a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
+index c3adaeefd551a..c7233d0ac210f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
++++ b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
+@@ -246,8 +246,7 @@ int mtk_mdp_rdma_clk_enable(struct device *dev)
+ {
+ 	struct mtk_mdp_rdma *rdma = dev_get_drvdata(dev);
+ 
+-	clk_prepare_enable(rdma->clk);
+-	return 0;
++	return clk_prepare_enable(rdma->clk);
+ }
+ 
+ void mtk_mdp_rdma_clk_disable(struct device *dev)
+diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
+index 6309a857ca312..ad70b611b44f0 100644
+--- a/drivers/gpu/drm/msm/Kconfig
++++ b/drivers/gpu/drm/msm/Kconfig
+@@ -6,6 +6,7 @@ config DRM_MSM
+ 	depends on ARCH_QCOM || SOC_IMX5 || COMPILE_TEST
+ 	depends on COMMON_CLK
+ 	depends on IOMMU_SUPPORT
++	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+ 	depends on QCOM_OCMEM || QCOM_OCMEM=n
+ 	depends on QCOM_LLCC || QCOM_LLCC=n
+ 	depends on QCOM_COMMAND_DB || QCOM_COMMAND_DB=n
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
+index 41b13dec9befc..3ee14646ab6b9 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
+@@ -464,7 +464,7 @@ static const struct adreno_info gpulist[] = {
+ 			{ 190, 1 },
+ 		),
+ 	}, {
+-		.chip_ids = ADRENO_CHIP_IDS(0x06080000),
++		.chip_ids = ADRENO_CHIP_IDS(0x06080001),
+ 		.family = ADRENO_6XX_GEN2,
+ 		.revn = 680,
+ 		.fw = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 9392ad2b4d3fe..c022e57864a43 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -77,7 +77,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 		.name = "sspp_0", .id = SSPP_VIG0,
+ 		.base = 0x4000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_0,
++		.sblk = &sm8150_vig_sblk_0,
+ 		.xin_id = 0,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG0,
+@@ -85,7 +85,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 		.name = "sspp_1", .id = SSPP_VIG1,
+ 		.base = 0x6000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_1,
++		.sblk = &sm8150_vig_sblk_1,
+ 		.xin_id = 4,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG1,
+@@ -93,7 +93,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 		.name = "sspp_2", .id = SSPP_VIG2,
+ 		.base = 0x8000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_2,
++		.sblk = &sm8150_vig_sblk_2,
+ 		.xin_id = 8,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG2,
+@@ -101,7 +101,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 		.name = "sspp_3", .id = SSPP_VIG3,
+ 		.base = 0xa000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_3,
++		.sblk = &sm8150_vig_sblk_3,
+ 		.xin_id = 12,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG3,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index e07f4c8c25b9f..cb0758f0829de 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -76,7 +76,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 		.name = "sspp_0", .id = SSPP_VIG0,
+ 		.base = 0x4000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_0,
++		.sblk = &sm8150_vig_sblk_0,
+ 		.xin_id = 0,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG0,
+@@ -84,7 +84,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 		.name = "sspp_1", .id = SSPP_VIG1,
+ 		.base = 0x6000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_1,
++		.sblk = &sm8150_vig_sblk_1,
+ 		.xin_id = 4,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG1,
+@@ -92,7 +92,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 		.name = "sspp_2", .id = SSPP_VIG2,
+ 		.base = 0x8000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_2,
++		.sblk = &sm8150_vig_sblk_2,
+ 		.xin_id = 8,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG2,
+@@ -100,7 +100,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 		.name = "sspp_3", .id = SSPP_VIG3,
+ 		.base = 0xa000, .len = 0x1f0,
+ 		.features = VIG_SDM845_MASK,
+-		.sblk = &sdm845_vig_sblk_3,
++		.sblk = &sm8150_vig_sblk_3,
+ 		.xin_id = 12,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG3,
+@@ -367,6 +367,7 @@ static const struct dpu_perf_cfg sc8180x_perf_data = {
+ 	.min_llcc_ib = 800000,
+ 	.min_dram_ib = 800000,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff0, 0xf000, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sc7180_qos_linear),
+ 		.entries = sc7180_qos_linear
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+index 94278a3e3483c..9f8068fa0175d 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+@@ -32,7 +32,7 @@ static const struct dpu_mdp_cfg sm8250_mdp = {
+ 		[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_REG_DMA] = { .reg_off = 0x2bc, .bit_off = 20 },
+-		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x3b8, .bit_off = 24 },
++		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x2bc, .bit_off = 16 },
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+index c0d88ddccb28c..9bfa15e4e6451 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+@@ -25,7 +25,7 @@ static const struct dpu_mdp_cfg sc7180_mdp = {
+ 		[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2c4, .bit_off = 8 },
+-		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x3b8, .bit_off = 24 },
++		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x2bc, .bit_off = 16 },
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+index 15942fa5a8e06..b9c296e51e365 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+@@ -25,7 +25,7 @@ static const struct dpu_mdp_cfg sc7280_mdp = {
+ 		[DPU_CLK_CTRL_DMA0] = { .reg_off = 0x2ac, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2c4, .bit_off = 8 },
+-		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x3b8, .bit_off = 24 },
++		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x2bc, .bit_off = 16 },
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+index 7742f52be859b..72b0f547242fe 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+@@ -75,39 +75,39 @@ static const struct dpu_sspp_cfg sm8450_sspp[] = {
+ 	{
+ 		.name = "sspp_0", .id = SSPP_VIG0,
+ 		.base = 0x4000, .len = 0x32c,
+-		.features = VIG_SC7180_MASK,
+-		.sblk = &sm8250_vig_sblk_0,
++		.features = VIG_SC7180_MASK_SDMA,
++		.sblk = &sm8450_vig_sblk_0,
+ 		.xin_id = 0,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG0,
+ 	}, {
+ 		.name = "sspp_1", .id = SSPP_VIG1,
+ 		.base = 0x6000, .len = 0x32c,
+-		.features = VIG_SC7180_MASK,
+-		.sblk = &sm8250_vig_sblk_1,
++		.features = VIG_SC7180_MASK_SDMA,
++		.sblk = &sm8450_vig_sblk_1,
+ 		.xin_id = 4,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG1,
+ 	}, {
+ 		.name = "sspp_2", .id = SSPP_VIG2,
+ 		.base = 0x8000, .len = 0x32c,
+-		.features = VIG_SC7180_MASK,
+-		.sblk = &sm8250_vig_sblk_2,
++		.features = VIG_SC7180_MASK_SDMA,
++		.sblk = &sm8450_vig_sblk_2,
+ 		.xin_id = 8,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG2,
+ 	}, {
+ 		.name = "sspp_3", .id = SSPP_VIG3,
+ 		.base = 0xa000, .len = 0x32c,
+-		.features = VIG_SC7180_MASK,
+-		.sblk = &sm8250_vig_sblk_3,
++		.features = VIG_SC7180_MASK_SDMA,
++		.sblk = &sm8450_vig_sblk_3,
+ 		.xin_id = 12,
+ 		.type = SSPP_TYPE_VIG,
+ 		.clk_ctrl = DPU_CLK_CTRL_VIG3,
+ 	}, {
+ 		.name = "sspp_8", .id = SSPP_DMA0,
+ 		.base = 0x24000, .len = 0x32c,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &sdm845_dma_sblk_0,
+ 		.xin_id = 1,
+ 		.type = SSPP_TYPE_DMA,
+@@ -115,7 +115,7 @@ static const struct dpu_sspp_cfg sm8450_sspp[] = {
+ 	}, {
+ 		.name = "sspp_9", .id = SSPP_DMA1,
+ 		.base = 0x26000, .len = 0x32c,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &sdm845_dma_sblk_1,
+ 		.xin_id = 5,
+ 		.type = SSPP_TYPE_DMA,
+@@ -123,7 +123,7 @@ static const struct dpu_sspp_cfg sm8450_sspp[] = {
+ 	}, {
+ 		.name = "sspp_10", .id = SSPP_DMA2,
+ 		.base = 0x28000, .len = 0x32c,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &sdm845_dma_sblk_2,
+ 		.xin_id = 9,
+ 		.type = SSPP_TYPE_DMA,
+@@ -131,7 +131,7 @@ static const struct dpu_sspp_cfg sm8450_sspp[] = {
+ 	}, {
+ 		.name = "sspp_11", .id = SSPP_DMA3,
+ 		.base = 0x2a000, .len = 0x32c,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &sdm845_dma_sblk_3,
+ 		.xin_id = 13,
+ 		.type = SSPP_TYPE_DMA,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 3c475f8042b03..db501ce1d8551 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2014-2021 The Linux Foundation. All rights reserved.
+  * Copyright (C) 2013 Red Hat
+  * Author: Rob Clark <robdclark@gmail.com>
+@@ -125,7 +125,7 @@ static void dpu_crtc_setup_lm_misr(struct dpu_crtc_state *crtc_state)
+ 			continue;
+ 
+ 		/* Calculate MISR over 1 frame */
+-		m->hw_lm->ops.setup_misr(m->hw_lm, true, 1);
++		m->hw_lm->ops.setup_misr(m->hw_lm);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 1cf7ff6caff4e..5dbb5d27bbea0 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -2,7 +2,7 @@
+ /*
+  * Copyright (C) 2013 Red Hat
+  * Copyright (c) 2014-2018, 2020-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  *
+  * Author: Rob Clark <robdclark@gmail.com>
+  */
+@@ -255,7 +255,7 @@ void dpu_encoder_setup_misr(const struct drm_encoder *drm_enc)
+ 		if (!phys->hw_intf || !phys->hw_intf->ops.setup_misr)
+ 			continue;
+ 
+-		phys->hw_intf->ops.setup_misr(phys->hw_intf, true, 1);
++		phys->hw_intf->ops.setup_misr(phys->hw_intf);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index a1aada6307808..7056c08b9957f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -249,14 +249,17 @@ static const uint32_t wb2_formats[] = {
+  * SSPP sub blocks config
+  *************************************************************/
+ 
++#define SSPP_SCALER_VER(maj, min) (((maj) << 16) | (min))
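++/*
++ * Worked example of the encoding above (illustrative only): the version
++ * packs the major revision in the upper 16 bits and the minor revision
++ * in the lower 16 bits, so SSPP_SCALER_VER(3, 0) == 0x00030000.
++ */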
++
+ /* SSPP common configuration */
+-#define _VIG_SBLK(sdma_pri, qseed_ver) \
++#define _VIG_SBLK(sdma_pri, qseed_ver, scaler_ver) \
+ 	{ \
+ 	.maxdwnscale = MAX_DOWNSCALE_RATIO, \
+ 	.maxupscale = MAX_UPSCALE_RATIO, \
+ 	.smart_dma_priority = sdma_pri, \
+ 	.scaler_blk = {.name = "scaler", \
+ 		.id = qseed_ver, \
++		.version = scaler_ver, \
+ 		.base = 0xa00, .len = 0xa0,}, \
+ 	.csc_blk = {.name = "csc", \
+ 		.id = DPU_SSPP_CSC_10BIT, \
+@@ -268,13 +271,14 @@ static const uint32_t wb2_formats[] = {
+ 	.rotation_cfg = NULL, \
+ 	}
+ 
+-#define _VIG_SBLK_ROT(sdma_pri, qseed_ver, rot_cfg) \
++#define _VIG_SBLK_ROT(sdma_pri, qseed_ver, scaler_ver, rot_cfg) \
+ 	{ \
+ 	.maxdwnscale = MAX_DOWNSCALE_RATIO, \
+ 	.maxupscale = MAX_UPSCALE_RATIO, \
+ 	.smart_dma_priority = sdma_pri, \
+ 	.scaler_blk = {.name = "scaler", \
+ 		.id = qseed_ver, \
++		.version = scaler_ver, \
+ 		.base = 0xa00, .len = 0xa0,}, \
+ 	.csc_blk = {.name = "csc", \
+ 		.id = DPU_SSPP_CSC_10BIT, \
+@@ -298,13 +302,17 @@ static const uint32_t wb2_formats[] = {
+ 	}
+ 
+ static const struct dpu_sspp_sub_blks msm8998_vig_sblk_0 =
+-				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 2));
+ static const struct dpu_sspp_sub_blks msm8998_vig_sblk_1 =
+-				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 2));
+ static const struct dpu_sspp_sub_blks msm8998_vig_sblk_2 =
+-				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 2));
+ static const struct dpu_sspp_sub_blks msm8998_vig_sblk_3 =
+-				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(0, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 2));
+ 
+ static const struct dpu_rotation_cfg dpu_rot_sc7280_cfg_v2 = {
+ 	.rot_maxheight = 1088,
+@@ -313,13 +321,30 @@ static const struct dpu_rotation_cfg dpu_rot_sc7280_cfg_v2 = {
+ };
+ 
+ static const struct dpu_sspp_sub_blks sdm845_vig_sblk_0 =
+-				_VIG_SBLK(5, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(5, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 3));
+ static const struct dpu_sspp_sub_blks sdm845_vig_sblk_1 =
+-				_VIG_SBLK(6, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(6, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 3));
+ static const struct dpu_sspp_sub_blks sdm845_vig_sblk_2 =
+-				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 3));
+ static const struct dpu_sspp_sub_blks sdm845_vig_sblk_3 =
+-				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED3);
++				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 3));
++
++static const struct dpu_sspp_sub_blks sm8150_vig_sblk_0 =
++				_VIG_SBLK(5, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 4));
++static const struct dpu_sspp_sub_blks sm8150_vig_sblk_1 =
++				_VIG_SBLK(6, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 4));
++static const struct dpu_sspp_sub_blks sm8150_vig_sblk_2 =
++				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 4));
++static const struct dpu_sspp_sub_blks sm8150_vig_sblk_3 =
++				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED3,
++					  SSPP_SCALER_VER(1, 4));
+ 
+ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_0 = _DMA_SBLK(1);
+ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_1 = _DMA_SBLK(2);
+@@ -327,34 +352,60 @@ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_2 = _DMA_SBLK(3);
+ static const struct dpu_sspp_sub_blks sdm845_dma_sblk_3 = _DMA_SBLK(4);
+ 
+ static const struct dpu_sspp_sub_blks sc7180_vig_sblk_0 =
+-				_VIG_SBLK(4, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(4, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 0));
+ 
+ static const struct dpu_sspp_sub_blks sc7280_vig_sblk_0 =
+-			_VIG_SBLK_ROT(4, DPU_SSPP_SCALER_QSEED4, &dpu_rot_sc7280_cfg_v2);
++			_VIG_SBLK_ROT(4, DPU_SSPP_SCALER_QSEED4,
++				      SSPP_SCALER_VER(3, 0),
++				      &dpu_rot_sc7280_cfg_v2);
+ 
+ static const struct dpu_sspp_sub_blks sm6115_vig_sblk_0 =
+-				_VIG_SBLK(2, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(2, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 0));
+ 
+ static const struct dpu_sspp_sub_blks sm6125_vig_sblk_0 =
+-				_VIG_SBLK(3, DPU_SSPP_SCALER_QSEED3LITE);
++				_VIG_SBLK(3, DPU_SSPP_SCALER_QSEED3LITE,
++					  SSPP_SCALER_VER(2, 4));
+ 
+ static const struct dpu_sspp_sub_blks sm8250_vig_sblk_0 =
+-				_VIG_SBLK(5, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(5, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 0));
+ static const struct dpu_sspp_sub_blks sm8250_vig_sblk_1 =
+-				_VIG_SBLK(6, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(6, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 0));
+ static const struct dpu_sspp_sub_blks sm8250_vig_sblk_2 =
+-				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 0));
+ static const struct dpu_sspp_sub_blks sm8250_vig_sblk_3 =
+-				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 0));
++
++static const struct dpu_sspp_sub_blks sm8450_vig_sblk_0 =
++				_VIG_SBLK(5, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 1));
++static const struct dpu_sspp_sub_blks sm8450_vig_sblk_1 =
++				_VIG_SBLK(6, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 1));
++static const struct dpu_sspp_sub_blks sm8450_vig_sblk_2 =
++				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 1));
++static const struct dpu_sspp_sub_blks sm8450_vig_sblk_3 =
++				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 1));
+ 
+ static const struct dpu_sspp_sub_blks sm8550_vig_sblk_0 =
+-				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(7, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 2));
+ static const struct dpu_sspp_sub_blks sm8550_vig_sblk_1 =
+-				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(8, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 2));
+ static const struct dpu_sspp_sub_blks sm8550_vig_sblk_2 =
+-				_VIG_SBLK(9, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(9, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 2));
+ static const struct dpu_sspp_sub_blks sm8550_vig_sblk_3 =
+-				_VIG_SBLK(10, DPU_SSPP_SCALER_QSEED4);
++				_VIG_SBLK(10, DPU_SSPP_SCALER_QSEED4,
++					  SSPP_SCALER_VER(3, 2));
+ static const struct dpu_sspp_sub_blks sm8550_dma_sblk_4 = _DMA_SBLK(5);
+ static const struct dpu_sspp_sub_blks sm8550_dma_sblk_5 = _DMA_SBLK(6);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+index df024e10d3a32..62445075306af 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+@@ -265,7 +265,8 @@ enum {
+ /**
+  * struct dpu_scaler_blk: Scaler information
+  * @info:   HW register and features supported by this sub-blk
+- * @version: qseed block revision
++ * @version: qseed block revision; on QSEED3+ platforms this is the value of
++ *           the register at scaler_blk.base + QSEED3_HW_VERSION.
+  */
+ struct dpu_scaler_blk {
+ 	DPU_HW_SUBBLK_INFO;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index e8b8908d3e122..27b9373cd7846 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+  */
+ 
+@@ -318,9 +318,9 @@ static u32 dpu_hw_intf_get_line_count(struct dpu_hw_intf *intf)
+ 	return DPU_REG_READ(c, INTF_LINE_COUNT);
+ }
+ 
+-static void dpu_hw_intf_setup_misr(struct dpu_hw_intf *intf, bool enable, u32 frame_count)
++static void dpu_hw_intf_setup_misr(struct dpu_hw_intf *intf)
+ {
+-	dpu_hw_setup_misr(&intf->hw, INTF_MISR_CTRL, enable, frame_count);
++	dpu_hw_setup_misr(&intf->hw, INTF_MISR_CTRL, 0x1);
+ }
+ 
+ static int dpu_hw_intf_collect_misr(struct dpu_hw_intf *intf, u32 *misr_value)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
+index c539025c418bb..66a5603dc7eda 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+  */
+ 
+@@ -95,7 +95,7 @@ struct dpu_hw_intf_ops {
+ 
+ 	void (*bind_pingpong_blk)(struct dpu_hw_intf *intf,
+ 			const enum dpu_pingpong pp);
+-	void (*setup_misr)(struct dpu_hw_intf *intf, bool enable, u32 frame_count);
++	void (*setup_misr)(struct dpu_hw_intf *intf);
+ 	int (*collect_misr)(struct dpu_hw_intf *intf, u32 *misr_value);
+ 
+ 	// Tearcheck on INTF since DPU 5.0.0
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+index d1c3bd8379ea9..a590c1f7465fb 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2015-2021, The Linux Foundation. All rights reserved.
+  */
+ 
+@@ -81,9 +81,9 @@ static void dpu_hw_lm_setup_border_color(struct dpu_hw_mixer *ctx,
+ 	}
+ }
+ 
+-static void dpu_hw_lm_setup_misr(struct dpu_hw_mixer *ctx, bool enable, u32 frame_count)
++static void dpu_hw_lm_setup_misr(struct dpu_hw_mixer *ctx)
+ {
+-	dpu_hw_setup_misr(&ctx->hw, LM_MISR_CTRL, enable, frame_count);
++	dpu_hw_setup_misr(&ctx->hw, LM_MISR_CTRL, 0x0);
+ }
+ 
+ static int dpu_hw_lm_collect_misr(struct dpu_hw_mixer *ctx, u32 *misr_value)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
+index 36992d046a533..98b77cda65472 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
+@@ -1,5 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2015-2021, The Linux Foundation. All rights reserved.
+  */
+ 
+@@ -57,7 +58,7 @@ struct dpu_hw_lm_ops {
+ 	/**
+ 	 * setup_misr: Enable/disable MISR
+ 	 */
+-	void (*setup_misr)(struct dpu_hw_mixer *ctx, bool enable, u32 frame_count);
++	void (*setup_misr)(struct dpu_hw_mixer *ctx);
+ 
+ 	/**
+ 	 * collect_misr: Read MISR signature
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.c
+index 18b16b2d2bf52..1d4f0b97c3c19 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+  */
+ #define pr_fmt(fmt)	"[drm:%s:%d] " fmt, __func__, __LINE__
+@@ -481,9 +481,11 @@ void _dpu_hw_setup_qos_lut(struct dpu_hw_blk_reg_map *c, u32 offset,
+ 		      cfg->danger_safe_en ? QOS_QOS_CTRL_DANGER_SAFE_EN : 0);
+ }
+ 
++/*
++ * Note: aside from encoders, input_sel should be set to 0x0 by default
++ */
+ void dpu_hw_setup_misr(struct dpu_hw_blk_reg_map *c,
+-		u32 misr_ctrl_offset,
+-		bool enable, u32 frame_count)
++		u32 misr_ctrl_offset, u8 input_sel)
+ {
+ 	u32 config = 0;
+ 
+@@ -492,15 +494,9 @@ void dpu_hw_setup_misr(struct dpu_hw_blk_reg_map *c,
+ 	/* Clear old MISR value (in case it's read before a new value is calculated)*/
+ 	wmb();
+ 
+-	if (enable) {
+-		config = (frame_count & MISR_FRAME_COUNT_MASK) |
+-			MISR_CTRL_ENABLE | MISR_CTRL_FREE_RUN_MASK;
+-
+-		DPU_REG_WRITE(c, misr_ctrl_offset, config);
+-	} else {
+-		DPU_REG_WRITE(c, misr_ctrl_offset, 0);
+-	}
+-
++	config = MISR_FRAME_COUNT | MISR_CTRL_ENABLE | MISR_CTRL_FREE_RUN_MASK |
++		((input_sel & 0xF) << 24);
++	DPU_REG_WRITE(c, misr_ctrl_offset, config);
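++	/*
++	 * Worked breakdown of the write above (illustrative only): config
++	 * ORs together MISR_FRAME_COUNT (0x1), MISR_CTRL_ENABLE,
++	 * MISR_CTRL_FREE_RUN_MASK and the 4-bit input select placed in
++	 * bits 27:24, so input_sel 0x1 lands as 0x1 << 24.
++	 */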
+ }
+ 
+ int dpu_hw_collect_misr(struct dpu_hw_blk_reg_map *c,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h
+index 4bea139081bcd..ec09fc3865ab5 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) 2015-2021, The Linux Foundation. All rights reserved.
+  */
+ 
+@@ -13,7 +13,7 @@
+ #include "dpu_hw_catalog.h"
+ 
+ #define REG_MASK(n)                     ((BIT(n)) - 1)
+-#define MISR_FRAME_COUNT_MASK           0xFF
++#define MISR_FRAME_COUNT                0x1
+ #define MISR_CTRL_ENABLE                BIT(8)
+ #define MISR_CTRL_STATUS                BIT(9)
+ #define MISR_CTRL_STATUS_CLEAR          BIT(10)
+@@ -358,9 +358,7 @@ void _dpu_hw_setup_qos_lut(struct dpu_hw_blk_reg_map *c, u32 offset,
+ 			   const struct dpu_hw_qos_cfg *cfg);
+ 
+ void dpu_hw_setup_misr(struct dpu_hw_blk_reg_map *c,
+-		u32 misr_ctrl_offset,
+-		bool enable,
+-		u32 frame_count);
++		u32 misr_ctrl_offset, u8 input_sel);
+ 
+ int dpu_hw_collect_misr(struct dpu_hw_blk_reg_map *c,
+ 		u32 misr_ctrl_offset,
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
+index 169f9de4a12a7..3100957225a70 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
+@@ -269,6 +269,7 @@ static void mdp4_crtc_atomic_disable(struct drm_crtc *crtc,
+ {
+ 	struct mdp4_crtc *mdp4_crtc = to_mdp4_crtc(crtc);
+ 	struct mdp4_kms *mdp4_kms = get_kms(crtc);
++	unsigned long flags;
+ 
+ 	DBG("%s", mdp4_crtc->name);
+ 
+@@ -281,6 +282,14 @@ static void mdp4_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	mdp_irq_unregister(&mdp4_kms->base, &mdp4_crtc->err);
+ 	mdp4_disable(mdp4_kms);
+ 
++	if (crtc->state->event && !crtc->state->active) {
++		WARN_ON(mdp4_crtc->event);
++		spin_lock_irqsave(&mdp4_kms->dev->event_lock, flags);
++		drm_crtc_send_vblank_event(crtc, crtc->state->event);
++		crtc->state->event = NULL;
++		spin_unlock_irqrestore(&mdp4_kms->dev->event_lock, flags);
++	}
++
+ 	mdp4_crtc->enabled = false;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index 05621e5e7d634..b6314bb66d2fd 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -516,7 +516,9 @@ static int dsi_phy_enable_resource(struct msm_dsi_phy *phy)
+ 	struct device *dev = &phy->pdev->dev;
+ 	int ret;
+ 
+-	pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret)
++		return ret;
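++	/*
++	 * Note (illustrative): unlike pm_runtime_get_sync(),
++	 * pm_runtime_resume_and_get() drops its usage count on failure,
++	 * so this early return needs no matching pm_runtime_put().
++	 */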
+ 
+ 	ret = clk_prepare_enable(phy->ahb_clk);
+ 	if (ret) {
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 118807e38422b..d093549f6eb71 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -982,8 +982,7 @@ nv50_msto_atomic_check(struct drm_encoder *encoder,
+ 		const int clock = crtc_state->adjusted_mode.clock;
+ 
+ 		asyh->or.bpc = connector->display_info.bpc;
+-		asyh->dp.pbn = drm_dp_calc_pbn_mode(clock, asyh->or.bpc * 3,
+-						    false);
++		asyh->dp.pbn = drm_dp_calc_pbn_mode(clock, asyh->or.bpc * 3 << 4);
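++		/*
++		 * Illustrative example: drm_dp_calc_pbn_mode() now takes bpp
++		 * in .4 binary fixed point, so 8 bpc (24 bpp) is passed as
++		 * 24 << 4 = 384.
++		 */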
+ 	}
+ 
+ 	mst_state = drm_atomic_get_mst_topology_state(state, &mstm->mgr);
+diff --git a/drivers/gpu/drm/nouveau/nv04_fence.c b/drivers/gpu/drm/nouveau/nv04_fence.c
+index 5b71a5a5cd85c..cdbc75e3d1f66 100644
+--- a/drivers/gpu/drm/nouveau/nv04_fence.c
++++ b/drivers/gpu/drm/nouveau/nv04_fence.c
+@@ -39,7 +39,7 @@ struct nv04_fence_priv {
+ static int
+ nv04_fence_emit(struct nouveau_fence *fence)
+ {
+-	struct nvif_push *push = fence->channel->chan.push;
++	struct nvif_push *push = unrcu_pointer(fence->channel)->chan.push;
+ 	int ret = PUSH_WAIT(push, 2);
+ 	if (ret == 0) {
+ 		PUSH_NVSQ(push, NV_SW, 0x0150, fence->base.seqno);
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index b2835b3ea6f58..6598c9c08ba11 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -69,7 +69,6 @@ static void omap_atomic_commit_tail(struct drm_atomic_state *old_state)
+ {
+ 	struct drm_device *dev = old_state->dev;
+ 	struct omap_drm_private *priv = dev->dev_private;
+-	bool fence_cookie = dma_fence_begin_signalling();
+ 
+ 	dispc_runtime_get(priv->dispc);
+ 
+@@ -92,6 +91,8 @@ static void omap_atomic_commit_tail(struct drm_atomic_state *old_state)
+ 		omap_atomic_wait_for_completion(dev, old_state);
+ 
+ 		drm_atomic_helper_commit_planes(dev, old_state, 0);
++
++		drm_atomic_helper_commit_hw_done(old_state);
+ 	} else {
+ 		/*
+ 		 * OMAP3 DSS seems to have issues with the work-around above,
+@@ -101,11 +102,9 @@ static void omap_atomic_commit_tail(struct drm_atomic_state *old_state)
+ 		drm_atomic_helper_commit_planes(dev, old_state, 0);
+ 
+ 		drm_atomic_helper_commit_modeset_enables(dev, old_state);
+-	}
+ 
+-	drm_atomic_helper_commit_hw_done(old_state);
+-
+-	dma_fence_end_signalling(fence_cookie);
++		drm_atomic_helper_commit_hw_done(old_state);
++	}
+ 
+ 	/*
+ 	 * Wait for completion of the page flips to ensure that old buffers
+diff --git a/drivers/gpu/drm/panel/panel-elida-kd35t133.c b/drivers/gpu/drm/panel/panel-elida-kd35t133.c
+index e7be15b681021..6de1172323464 100644
+--- a/drivers/gpu/drm/panel/panel-elida-kd35t133.c
++++ b/drivers/gpu/drm/panel/panel-elida-kd35t133.c
+@@ -104,6 +104,8 @@ static int kd35t133_unprepare(struct drm_panel *panel)
+ 		return ret;
+ 	}
+ 
++	gpiod_set_value_cansleep(ctx->reset_gpio, 1);
++
+ 	regulator_disable(ctx->iovcc);
+ 	regulator_disable(ctx->vdd);
+ 
+diff --git a/drivers/gpu/drm/panel/panel-newvision-nv3051d.c b/drivers/gpu/drm/panel/panel-newvision-nv3051d.c
+index 79de6c886292b..c44c6945662f2 100644
+--- a/drivers/gpu/drm/panel/panel-newvision-nv3051d.c
++++ b/drivers/gpu/drm/panel/panel-newvision-nv3051d.c
+@@ -261,6 +261,8 @@ static int panel_nv3051d_unprepare(struct drm_panel *panel)
+ 
+ 	usleep_range(10000, 15000);
+ 
++	gpiod_set_value_cansleep(ctx->reset_gpio, 1);
++
+ 	regulator_disable(ctx->vdd);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7701.c b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+index 0459965e1b4f7..036ac403ed213 100644
+--- a/drivers/gpu/drm/panel/panel-sitronix-st7701.c
++++ b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+@@ -288,7 +288,7 @@ static void st7701_init_sequence(struct st7701 *st7701)
+ 		   FIELD_PREP(DSI_CMD2_BK1_PWRCTRL2_AVDD_MASK,
+ 			      DIV_ROUND_CLOSEST(desc->avdd_mv - 6200, 200)) |
+ 		   FIELD_PREP(DSI_CMD2_BK1_PWRCTRL2_AVCL_MASK,
+-			      DIV_ROUND_CLOSEST(-4400 + desc->avcl_mv, 200)));
++			      DIV_ROUND_CLOSEST(-4400 - desc->avcl_mv, 200)));
+ 
+ 	/* T2D = 0.2us * T2D[3:0] */
+ 	ST7701_DSI(st7701, DSI_CMD2_BK1_SPD1,
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index f0be7e19b13ed..311cf4525e1e1 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -71,7 +71,12 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev)
+ 	}
+ 
+ 	gpu_write(pfdev, GPU_INT_CLEAR, GPU_IRQ_MASK_ALL);
+-	gpu_write(pfdev, GPU_INT_MASK, GPU_IRQ_MASK_ALL);
++
++	/* Only enable the interrupts we care about */
++	gpu_write(pfdev, GPU_INT_MASK,
++		  GPU_IRQ_MASK_ERROR |
++		  GPU_IRQ_PERFCNT_SAMPLE_COMPLETED |
++		  GPU_IRQ_CLEAN_CACHES_COMPLETED);
+ 
+ 	/*
+ 	 * All in-flight jobs should have released their cycle
+@@ -362,28 +367,38 @@ unsigned long long panfrost_cycle_counter_read(struct panfrost_device *pfdev)
+ 	return ((u64)hi << 32) | lo;
+ }
+ 
++static u64 panfrost_get_core_mask(struct panfrost_device *pfdev)
++{
++	u64 core_mask;
++
++	if (pfdev->features.l2_present == 1)
++		return U64_MAX;
++
++	/*
++	 * Only support one core group now.
++	 * ~(l2_present - 1) unsets all bits in l2_present except
++	 * the bottom bit. (l2_present - 2) has all the bits in
++	 * the first core group set. AND them together to generate
++	 * a mask of cores in the first core group.
++	 */
++	core_mask = ~(pfdev->features.l2_present - 1) &
++		     (pfdev->features.l2_present - 2);
++	dev_info_once(pfdev->dev, "using only 1st core group (%lu cores from %lu)\n",
++		      hweight64(core_mask),
++		      hweight64(pfdev->features.shader_present));
++
++	return core_mask;
++}
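++
++/*
++ * Worked example (illustrative only): with l2_present = 0x11 (two core
++ * groups), ~(l2_present - 1) clears only bit 4, (l2_present - 2) = 0xf
++ * sets the first group's bits, and their AND yields core_mask = 0xf,
++ * i.e. only the first core group is selected.
++ */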
++
+ void panfrost_gpu_power_on(struct panfrost_device *pfdev)
+ {
+ 	int ret;
+ 	u32 val;
+-	u64 core_mask = U64_MAX;
++	u64 core_mask;
+ 
+ 	panfrost_gpu_init_quirks(pfdev);
++	core_mask = panfrost_get_core_mask(pfdev);
+ 
+-	if (pfdev->features.l2_present != 1) {
+-		/*
+-		 * Only support one core group now.
+-		 * ~(l2_present - 1) unsets all bits in l2_present except
+-		 * the bottom bit. (l2_present - 2) has all the bits in
+-		 * the first core group set. AND them together to generate
+-		 * a mask of cores in the first core group.
+-		 */
+-		core_mask = ~(pfdev->features.l2_present - 1) &
+-			     (pfdev->features.l2_present - 2);
+-		dev_info_once(pfdev->dev, "using only 1st core group (%lu cores from %lu)\n",
+-			      hweight64(core_mask),
+-			      hweight64(pfdev->features.shader_present));
+-	}
+ 	gpu_write(pfdev, L2_PWRON_LO, pfdev->features.l2_present & core_mask);
+ 	ret = readl_relaxed_poll_timeout(pfdev->iomem + L2_READY_LO,
+ 		val, val == (pfdev->features.l2_present & core_mask),
+@@ -408,9 +423,26 @@ void panfrost_gpu_power_on(struct panfrost_device *pfdev)
+ 
+ void panfrost_gpu_power_off(struct panfrost_device *pfdev)
+ {
+-	gpu_write(pfdev, TILER_PWROFF_LO, 0);
+-	gpu_write(pfdev, SHADER_PWROFF_LO, 0);
+-	gpu_write(pfdev, L2_PWROFF_LO, 0);
++	int ret;
++	u32 val;
++
++	gpu_write(pfdev, SHADER_PWROFF_LO, pfdev->features.shader_present);
++	ret = readl_relaxed_poll_timeout(pfdev->iomem + SHADER_PWRTRANS_LO,
++					 val, !val, 1, 1000);
++	if (ret)
++		dev_err(pfdev->dev, "shader power transition timeout");
++
++	gpu_write(pfdev, TILER_PWROFF_LO, pfdev->features.tiler_present);
++	ret = readl_relaxed_poll_timeout(pfdev->iomem + TILER_PWRTRANS_LO,
++					 val, !val, 1, 1000);
++	if (ret)
++		dev_err(pfdev->dev, "tiler power transition timeout");
++
++	gpu_write(pfdev, L2_PWROFF_LO, pfdev->features.l2_present);
++	ret = readl_poll_timeout(pfdev->iomem + L2_PWRTRANS_LO,
++				 val, !val, 0, 1000);
++	if (ret)
++		dev_err(pfdev->dev, "l2 power transition timeout");
+ }
+ 
+ int panfrost_gpu_init(struct panfrost_device *pfdev)
+diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
+index affa9e0309b27..cfeca2694d5f9 100644
+--- a/drivers/gpu/drm/radeon/r100.c
++++ b/drivers/gpu/drm/radeon/r100.c
+@@ -2321,7 +2321,7 @@ int r100_cs_track_check(struct radeon_device *rdev, struct r100_cs_track *track)
+ 	switch (prim_walk) {
+ 	case 1:
+ 		for (i = 0; i < track->num_arrays; i++) {
+-			size = track->arrays[i].esize * track->max_indx * 4;
++			size = track->arrays[i].esize * track->max_indx * 4UL;
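++			/*
++			 * Illustrative overflow case: esize 16 with max_indx
++			 * 0x10000000 needs more than 32 bits; the UL suffix
++			 * promotes the arithmetic to unsigned long so the
++			 * size check below sees the untruncated value on
++			 * 64-bit kernels.
++			 */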
+ 			if (track->arrays[i].robj == NULL) {
+ 				DRM_ERROR("(PW %u) Vertex array %u no buffer "
+ 					  "bound\n", prim_walk, i);
+@@ -2340,7 +2340,7 @@ int r100_cs_track_check(struct radeon_device *rdev, struct r100_cs_track *track)
+ 		break;
+ 	case 2:
+ 		for (i = 0; i < track->num_arrays; i++) {
+-			size = track->arrays[i].esize * (nverts - 1) * 4;
++			size = track->arrays[i].esize * (nverts - 1) * 4UL;
+ 			if (track->arrays[i].robj == NULL) {
+ 				DRM_ERROR("(PW %u) Vertex array %u no buffer "
+ 					  "bound\n", prim_walk, i);
+diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
+index 638f861af80fa..6cf54a747749d 100644
+--- a/drivers/gpu/drm/radeon/r600_cs.c
++++ b/drivers/gpu/drm/radeon/r600_cs.c
+@@ -1275,7 +1275,7 @@ static int r600_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+ 			return -EINVAL;
+ 		}
+ 		tmp = (reg - CB_COLOR0_BASE) / 4;
+-		track->cb_color_bo_offset[tmp] = radeon_get_ib_value(p, idx) << 8;
++		track->cb_color_bo_offset[tmp] = (u64)radeon_get_ib_value(p, idx) << 8;
+ 		ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
+ 		track->cb_color_base_last[tmp] = ib[idx];
+ 		track->cb_color_bo[tmp] = reloc->robj;
+@@ -1302,7 +1302,7 @@ static int r600_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+ 					"0x%04X\n", reg);
+ 			return -EINVAL;
+ 		}
+-		track->htile_offset = radeon_get_ib_value(p, idx) << 8;
++		track->htile_offset = (u64)radeon_get_ib_value(p, idx) << 8;
+ 		ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
+ 		track->htile_bo = reloc->robj;
+ 		track->db_dirty = true;
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index 901e75ec70ff4..efd18c8d84c83 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -687,11 +687,16 @@ static void radeon_crtc_init(struct drm_device *dev, int index)
+ 	if (radeon_crtc == NULL)
+ 		return;
+ 
++	radeon_crtc->flip_queue = alloc_workqueue("radeon-crtc", WQ_HIGHPRI, 0);
++	if (!radeon_crtc->flip_queue) {
++		kfree(radeon_crtc);
++		return;
++	}
++
+ 	drm_crtc_init(dev, &radeon_crtc->base, &radeon_crtc_funcs);
+ 
+ 	drm_mode_crtc_set_gamma_size(&radeon_crtc->base, 256);
+ 	radeon_crtc->crtc_id = index;
+-	radeon_crtc->flip_queue = alloc_workqueue("radeon-crtc", WQ_HIGHPRI, 0);
+ 	rdev->mode_info.crtcs[index] = radeon_crtc;
+ 
+ 	if (rdev->family >= CHIP_BONAIRE) {
+diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c
+index 987cabbf1318e..c38b4d5d6a14f 100644
+--- a/drivers/gpu/drm/radeon/radeon_vm.c
++++ b/drivers/gpu/drm/radeon/radeon_vm.c
+@@ -1204,13 +1204,17 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm)
+ 	r = radeon_bo_create(rdev, pd_size, align, true,
+ 			     RADEON_GEM_DOMAIN_VRAM, 0, NULL,
+ 			     NULL, &vm->page_directory);
+-	if (r)
++	if (r) {
++		kfree(vm->page_tables);
++		vm->page_tables = NULL;
+ 		return r;
+-
++	}
+ 	r = radeon_vm_clear_bo(rdev, vm->page_directory);
+ 	if (r) {
+ 		radeon_bo_unref(&vm->page_directory);
+ 		vm->page_directory = NULL;
++		kfree(vm->page_tables);
++		vm->page_tables = NULL;
+ 		return r;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index a91012447b56e..85e9cba49cecb 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -3611,6 +3611,10 @@ static int si_cp_start(struct radeon_device *rdev)
+ 	for (i = RADEON_RING_TYPE_GFX_INDEX; i <= CAYMAN_RING_TYPE_CP2_INDEX; ++i) {
+ 		ring = &rdev->ring[i];
+ 		r = radeon_ring_lock(rdev, ring, 2);
++		if (r) {
++			DRM_ERROR("radeon: cp failed to lock ring (%d).\n", r);
++			return r;
++		}
+ 
+ 		/* clear the compute context state */
+ 		radeon_ring_write(ring, PACKET3_COMPUTE(PACKET3_CLEAR_STATE, 0));
+diff --git a/drivers/gpu/drm/radeon/sumo_dpm.c b/drivers/gpu/drm/radeon/sumo_dpm.c
+index f74f381af05fd..d49c145db4370 100644
+--- a/drivers/gpu/drm/radeon/sumo_dpm.c
++++ b/drivers/gpu/drm/radeon/sumo_dpm.c
+@@ -1493,8 +1493,10 @@ static int sumo_parse_power_table(struct radeon_device *rdev)
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+-		if (!rdev->pm.power_state[i].clock_info)
++		if (!rdev->pm.power_state[i].clock_info) {
++			kfree(rdev->pm.dpm.ps);
+ 			return -EINVAL;
++		}
+ 		ps = kzalloc(sizeof(struct sumo_ps), GFP_KERNEL);
+ 		if (ps == NULL) {
+ 			kfree(rdev->pm.dpm.ps);
+diff --git a/drivers/gpu/drm/radeon/trinity_dpm.c b/drivers/gpu/drm/radeon/trinity_dpm.c
+index 08ea1c864cb23..ef1cc7bad20a7 100644
+--- a/drivers/gpu/drm/radeon/trinity_dpm.c
++++ b/drivers/gpu/drm/radeon/trinity_dpm.c
+@@ -1726,8 +1726,10 @@ static int trinity_parse_power_table(struct radeon_device *rdev)
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+-		if (!rdev->pm.power_state[i].clock_info)
++		if (!rdev->pm.power_state[i].clock_info) {
++			kfree(rdev->pm.dpm.ps);
+ 			return -EINVAL;
++		}
+ 		ps = kzalloc(sizeof(struct sumo_ps), GFP_KERNEL);
+ 		if (ps == NULL) {
+ 			kfree(rdev->pm.dpm.ps);
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index 409e4256f6e7d..0a7a7e4ad8d19 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -81,12 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+ 		 */
+ 		pr_warn("%s: called with uninitialized scheduler\n", __func__);
+ 	} else if (num_sched_list) {
+-		/* The "priority" of an entity cannot exceed the number
+-		 * of run-queues of a scheduler.
++		/* The "priority" of an entity cannot exceed the number of run-queues of a
++		 * scheduler. Protect against num_rqs being 0, by converting to signed.
+ 		 */
+-		if (entity->priority >= sched_list[0]->num_rqs)
+-			entity->priority = max_t(u32, sched_list[0]->num_rqs,
+-						 DRM_SCHED_PRIORITY_MIN);
++		if (entity->priority >= sched_list[0]->num_rqs) {
++			drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n",
++				entity->priority, sched_list[0]->num_rqs);
++			entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1,
++						 (s32) DRM_SCHED_PRIORITY_MIN);
++		}
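++		/*
++		 * Illustrative case: with num_rqs == 0 the signed math gives
++		 * (s32)0 - 1 == -1, and max_t() clamps the priority to
++		 * DRM_SCHED_PRIORITY_MIN instead of letting an unsigned
++		 * subtraction wrap to a huge index.
++		 */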
+ 		entity->rq = sched_list[0]->sched_rq[entity->priority];
+ 	}
+ 
+diff --git a/drivers/gpu/drm/tests/drm_dp_mst_helper_test.c b/drivers/gpu/drm/tests/drm_dp_mst_helper_test.c
+index 545beea33e8c7..e3c818dfc0e6d 100644
+--- a/drivers/gpu/drm/tests/drm_dp_mst_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_dp_mst_helper_test.c
+@@ -42,13 +42,13 @@ static const struct drm_dp_mst_calc_pbn_mode_test drm_dp_mst_calc_pbn_mode_cases
+ 		.clock = 332880,
+ 		.bpp = 24,
+ 		.dsc = true,
+-		.expected = 50
++		.expected = 1191
+ 	},
+ 	{
+ 		.clock = 324540,
+ 		.bpp = 24,
+ 		.dsc = true,
+-		.expected = 49
++		.expected = 1161
+ 	},
+ };
+ 
+@@ -56,7 +56,7 @@ static void drm_test_dp_mst_calc_pbn_mode(struct kunit *test)
+ {
+ 	const struct drm_dp_mst_calc_pbn_mode_test *params = test->param_value;
+ 
+-	KUNIT_EXPECT_EQ(test, drm_dp_calc_pbn_mode(params->clock, params->bpp, params->dsc),
++	KUNIT_EXPECT_EQ(test, drm_dp_calc_pbn_mode(params->clock, params->bpp << 4),
+ 			params->expected);
+ }
+ 
+diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
+index 9d9dee7abaefd..98efbaf3b0c23 100644
+--- a/drivers/gpu/drm/tidss/tidss_dispc.c
++++ b/drivers/gpu/drm/tidss/tidss_dispc.c
+@@ -2702,18 +2702,69 @@ static void dispc_init_errata(struct dispc_device *dispc)
+ 	}
+ }
+ 
+-static void dispc_softreset(struct dispc_device *dispc)
++static int dispc_softreset(struct dispc_device *dispc)
+ {
+ 	u32 val;
+ 	int ret = 0;
+ 
++	/* K2G display controller does not support soft reset */
++	if (dispc->feat->subrev == DISPC_K2G)
++		return 0;
++
+ 	/* Soft reset */
+ 	REG_FLD_MOD(dispc, DSS_SYSCONFIG, 1, 1, 1);
+ 	/* Wait for reset to complete */
+ 	ret = readl_poll_timeout(dispc->base_common + DSS_SYSSTATUS,
+ 				 val, val & 1, 100, 5000);
++	if (ret) {
++		dev_err(dispc->dev, "failed to reset dispc\n");
++		return ret;
++	}
++
++	return 0;
++}
++
++static int dispc_init_hw(struct dispc_device *dispc)
++{
++	struct device *dev = dispc->dev;
++	int ret;
++
++	ret = pm_runtime_set_active(dev);
++	if (ret) {
++		dev_err(dev, "Failed to set DSS PM to active\n");
++		return ret;
++	}
++
++	ret = clk_prepare_enable(dispc->fclk);
++	if (ret) {
++		dev_err(dev, "Failed to enable DSS fclk\n");
++		goto err_runtime_suspend;
++	}
++
++	ret = dispc_softreset(dispc);
+ 	if (ret)
+-		dev_warn(dispc->dev, "failed to reset dispc\n");
++		goto err_clk_disable;
++
++	clk_disable_unprepare(dispc->fclk);
++	ret = pm_runtime_set_suspended(dev);
++	if (ret) {
++		dev_err(dev, "Failed to set DSS PM to suspended\n");
++		return ret;
++	}
++
++	return 0;
++
++err_clk_disable:
++	clk_disable_unprepare(dispc->fclk);
++
++err_runtime_suspend:
++	ret = pm_runtime_set_suspended(dev);
++	if (ret) {
++		dev_err(dev, "Failed to set DSS PM to suspended\n");
++		return ret;
++	}
++
++	return ret;
+ }
+ 
+ int dispc_init(struct tidss_device *tidss)
+@@ -2777,10 +2828,6 @@ int dispc_init(struct tidss_device *tidss)
+ 			return r;
+ 	}
+ 
+-	/* K2G display controller does not support soft reset */
+-	if (feat->subrev != DISPC_K2G)
+-		dispc_softreset(dispc);
+-
+ 	for (i = 0; i < dispc->feat->num_vps; i++) {
+ 		u32 gamma_size = dispc->feat->vp_feat.color.gamma_size;
+ 		u32 *gamma_table;
+@@ -2829,6 +2876,10 @@ int dispc_init(struct tidss_device *tidss)
+ 	of_property_read_u32(dispc->dev->of_node, "max-memory-bandwidth",
+ 			     &dispc->memory_bandwidth_limit);
+ 
++	r = dispc_init_hw(dispc);
++	if (r)
++		return r;
++
+ 	tidss->dispc = dispc;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/tidss/tidss_kms.c b/drivers/gpu/drm/tidss/tidss_kms.c
+index c979ad1af2366..d096d8d2bc8f8 100644
+--- a/drivers/gpu/drm/tidss/tidss_kms.c
++++ b/drivers/gpu/drm/tidss/tidss_kms.c
+@@ -4,8 +4,6 @@
+  * Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
+  */
+ 
+-#include <linux/dma-fence.h>
+-
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_bridge.h>
+@@ -25,7 +23,6 @@ static void tidss_atomic_commit_tail(struct drm_atomic_state *old_state)
+ {
+ 	struct drm_device *ddev = old_state->dev;
+ 	struct tidss_device *tidss = to_tidss(ddev);
+-	bool fence_cookie = dma_fence_begin_signalling();
+ 
+ 	dev_dbg(ddev->dev, "%s\n", __func__);
+ 
+@@ -36,7 +33,6 @@ static void tidss_atomic_commit_tail(struct drm_atomic_state *old_state)
+ 	drm_atomic_helper_commit_modeset_enables(ddev, old_state);
+ 
+ 	drm_atomic_helper_commit_hw_done(old_state);
+-	dma_fence_end_signalling(fence_cookie);
+ 	drm_atomic_helper_wait_for_flip_done(ddev, old_state);
+ 
+ 	drm_atomic_helper_cleanup_planes(ddev, old_state);
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_drv.c b/drivers/gpu/drm/tilcdc/tilcdc_drv.c
+index 8ebd7134ee21b..2f6eaac7f659b 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_drv.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_drv.c
+@@ -138,7 +138,7 @@ static int tilcdc_irq_install(struct drm_device *dev, unsigned int irq)
+ 	if (ret)
+ 		return ret;
+ 
+-	priv->irq_enabled = false;
++	priv->irq_enabled = true;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/greybus/gb-beagleplay.c b/drivers/greybus/gb-beagleplay.c
+index 43318c1993ba9..7d98ae1a8263a 100644
+--- a/drivers/greybus/gb-beagleplay.c
++++ b/drivers/greybus/gb-beagleplay.c
+@@ -85,17 +85,31 @@ struct hdlc_payload {
+ 	void *buf;
+ };
+ 
++/**
++ * struct hdlc_greybus_frame - Structure to represent greybus HDLC frame payload
++ *
++ * @cport: cport id
++ * @hdr: greybus operation header
++ * @payload: greybus message payload
++ *
++ * The HDLC payload sent over UART for the greybus address has the cport prepended to the greybus message
++ */
++struct hdlc_greybus_frame {
++	__le16 cport;
++	struct gb_operation_msg_hdr hdr;
++	u8 payload[];
++} __packed;
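++
++/*
++ * Illustrative sizing, assuming the usual 8-byte struct
++ * gb_operation_msg_hdr: a 4-byte greybus payload travels as
++ * 2 (cport) + 8 (header) + 4 (payload) = 14 bytes of HDLC payload.
++ */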
++
+ static void hdlc_rx_greybus_frame(struct gb_beagleplay *bg, u8 *buf, u16 len)
+ {
+-	u16 cport_id;
+-	struct gb_operation_msg_hdr *hdr = (struct gb_operation_msg_hdr *)buf;
+-
+-	memcpy(&cport_id, hdr->pad, sizeof(cport_id));
++	struct hdlc_greybus_frame *gb_frame = (struct hdlc_greybus_frame *)buf;
++	u16 cport_id = le16_to_cpu(gb_frame->cport);
++	u16 gb_msg_len = le16_to_cpu(gb_frame->hdr.size);
+ 
+ 	dev_dbg(&bg->sd->dev, "Greybus Operation %u type %X cport %u status %u received",
+-		hdr->operation_id, hdr->type, cport_id, hdr->result);
++		gb_frame->hdr.operation_id, gb_frame->hdr.type, cport_id, gb_frame->hdr.result);
+ 
+-	greybus_data_rcvd(bg->gb_hd, cport_id, buf, len);
++	greybus_data_rcvd(bg->gb_hd, cport_id, (u8 *)&gb_frame->hdr, gb_msg_len);
+ }
+ 
+ static void hdlc_rx_dbg_frame(const struct gb_beagleplay *bg, const char *buf, u16 len)
+@@ -336,25 +350,39 @@ static struct serdev_device_ops gb_beagleplay_ops = {
+ 	.write_wakeup = gb_tty_wakeup,
+ };
+ 
++/**
++ * gb_message_send() - Send greybus message using HDLC over UART
++ *
++ * @hd: pointer to greybus host device
++ * @cport: AP cport where message originates
++ * @msg: greybus message to send
++ * @mask: gfp mask
++ *
++ * Greybus HDLC frame has the following payload:
++ * 1. le16 cport
++ * 2. gb_operation_msg_hdr msg_header
++ * 3. u8 *msg_payload
++ */
+ static int gb_message_send(struct gb_host_device *hd, u16 cport, struct gb_message *msg, gfp_t mask)
+ {
+ 	struct gb_beagleplay *bg = dev_get_drvdata(&hd->dev);
+-	struct hdlc_payload payloads[2];
++	struct hdlc_payload payloads[3];
++	__le16 cport_id = cpu_to_le16(cport);
+ 
+ 	dev_dbg(&hd->dev, "Sending greybus message with Operation %u, Type: %X on Cport %u",
+ 		msg->header->operation_id, msg->header->type, cport);
+ 
+-	if (msg->header->size > RX_HDLC_PAYLOAD)
++	if (le16_to_cpu(msg->header->size) > RX_HDLC_PAYLOAD)
+ 		return dev_err_probe(&hd->dev, -E2BIG, "Greybus message too big");
+ 
+-	memcpy(msg->header->pad, &cport, sizeof(cport));
+-
+-	payloads[0].buf = msg->header;
+-	payloads[0].len = sizeof(*msg->header);
+-	payloads[1].buf = msg->payload;
+-	payloads[1].len = msg->payload_size;
++	payloads[0].buf = &cport_id;
++	payloads[0].len = sizeof(cport_id);
++	payloads[1].buf = msg->header;
++	payloads[1].len = sizeof(*msg->header);
++	payloads[2].buf = msg->payload;
++	payloads[2].len = msg->payload_size;
+ 
+-	hdlc_tx_frames(bg, ADDRESS_GREYBUS, 0x03, payloads, 2);
++	hdlc_tx_frames(bg, ADDRESS_GREYBUS, 0x03, payloads, 3);
+ 	greybus_message_sent(bg->gb_hd, msg, 0);
+ 
+ 	return 0;
+diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c
+index 2eba152e8b905..26e93a331a510 100644
+--- a/drivers/hid/hid-sensor-hub.c
++++ b/drivers/hid/hid-sensor-hub.c
+@@ -632,7 +632,7 @@ static int sensor_hub_probe(struct hid_device *hdev,
+ 	}
+ 	INIT_LIST_HEAD(&hdev->inputs);
+ 
+-	ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
++	ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT | HID_CONNECT_DRIVER);
+ 	if (ret) {
+ 		hid_err(hdev, "hw start failed\n");
+ 		return ret;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 471db78dbbf02..8289ce7637044 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2649,8 +2649,8 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
+ {
+ 	struct hid_data *hid_data = &wacom_wac->hid_data;
+ 	bool mt = wacom_wac->features.touch_max > 1;
+-	bool prox = hid_data->tipswitch &&
+-		    report_touch_events(wacom_wac);
++	bool touch_down = hid_data->tipswitch && hid_data->confidence;
++	bool prox = touch_down && report_touch_events(wacom_wac);
+ 
+ 	if (touch_is_muted(wacom_wac)) {
+ 		if (!wacom_wac->shared->touch_down)
+@@ -2700,24 +2700,6 @@ static void wacom_wac_finger_slot(struct wacom_wac *wacom_wac,
+ 	}
+ }
+ 
+-static bool wacom_wac_slot_is_active(struct input_dev *dev, int key)
+-{
+-	struct input_mt *mt = dev->mt;
+-	struct input_mt_slot *s;
+-
+-	if (!mt)
+-		return false;
+-
+-	for (s = mt->slots; s != mt->slots + mt->num_slots; s++) {
+-		if (s->key == key &&
+-			input_mt_get_value(s, ABS_MT_TRACKING_ID) >= 0) {
+-			return true;
+-		}
+-	}
+-
+-	return false;
+-}
+-
+ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 		struct hid_field *field, struct hid_usage *usage, __s32 value)
+ {
+@@ -2768,14 +2750,8 @@ static void wacom_wac_finger_event(struct hid_device *hdev,
+ 	}
+ 
+ 	if (usage->usage_index + 1 == field->report_count) {
+-		if (equivalent_usage == wacom_wac->hid_data.last_slot_field) {
+-			bool touch_removed = wacom_wac_slot_is_active(wacom_wac->touch_input,
+-				wacom_wac->hid_data.id) && !wacom_wac->hid_data.tipswitch;
+-
+-			if (wacom_wac->hid_data.confidence || touch_removed) {
+-				wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
+-			}
+-		}
++		if (equivalent_usage == wacom_wac->hid_data.last_slot_field)
++			wacom_wac_finger_slot(wacom_wac, wacom_wac->touch_input);
+ 	}
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index c56886af724ea..c0fe96a4f2c4f 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -216,8 +216,17 @@ static bool is_ack(struct s3c24xx_i2c *i2c)
+ 	int tries;
+ 
+ 	for (tries = 50; tries; --tries) {
+-		if (readl(i2c->regs + S3C2410_IICCON)
+-			& S3C2410_IICCON_IRQPEND) {
++		unsigned long tmp = readl(i2c->regs + S3C2410_IICCON);
++
++		if (!(tmp & S3C2410_IICCON_ACKEN)) {
++			/*
++			 * Wait a bit for the bus to stabilize,
++			 * delay estimated experimentally.
++			 */
++			usleep_range(100, 200);
++			return true;
++		}
++		if (tmp & S3C2410_IICCON_IRQPEND) {
+ 			if (!(readl(i2c->regs + S3C2410_IICSTAT)
+ 				& S3C2410_IICSTAT_LASTBIT))
+ 				return true;
+@@ -270,16 +279,6 @@ static void s3c24xx_i2c_message_start(struct s3c24xx_i2c *i2c,
+ 
+ 	stat |= S3C2410_IICSTAT_START;
+ 	writel(stat, i2c->regs + S3C2410_IICSTAT);
+-
+-	if (i2c->quirks & QUIRK_POLL) {
+-		while ((i2c->msg_num != 0) && is_ack(i2c)) {
+-			i2c_s3c_irq_nextbyte(i2c, stat);
+-			stat = readl(i2c->regs + S3C2410_IICSTAT);
+-
+-			if (stat & S3C2410_IICSTAT_ARBITR)
+-				dev_err(i2c->dev, "deal with arbitration loss\n");
+-		}
+-	}
+ }
+ 
+ static inline void s3c24xx_i2c_stop(struct s3c24xx_i2c *i2c, int ret)
+@@ -685,7 +684,7 @@ static void s3c24xx_i2c_wait_idle(struct s3c24xx_i2c *i2c)
+ static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
+ 			      struct i2c_msg *msgs, int num)
+ {
+-	unsigned long timeout;
++	unsigned long timeout = 0;
+ 	int ret;
+ 
+ 	ret = s3c24xx_i2c_set_master(i2c);
+@@ -705,16 +704,19 @@ static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
+ 	s3c24xx_i2c_message_start(i2c, msgs);
+ 
+ 	if (i2c->quirks & QUIRK_POLL) {
+-		ret = i2c->msg_idx;
++		while ((i2c->msg_num != 0) && is_ack(i2c)) {
++			unsigned long stat = readl(i2c->regs + S3C2410_IICSTAT);
+ 
+-		if (ret != num)
+-			dev_dbg(i2c->dev, "incomplete xfer (%d)\n", ret);
++			i2c_s3c_irq_nextbyte(i2c, stat);
+ 
+-		goto out;
++			stat = readl(i2c->regs + S3C2410_IICSTAT);
++			if (stat & S3C2410_IICSTAT_ARBITR)
++				dev_err(i2c->dev, "deal with arbitration loss\n");
++		}
++	} else {
++		timeout = wait_event_timeout(i2c->wait, i2c->msg_num == 0, HZ * 5);
+ 	}
+ 
+-	timeout = wait_event_timeout(i2c->wait, i2c->msg_num == 0, HZ * 5);
+-
+ 	ret = i2c->msg_idx;
+ 
+ 	/*
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index dcda0afecfc5f..3e01a6b23e75f 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -131,11 +131,12 @@ static unsigned int mwait_substates __initdata;
+ #define MWAIT2flg(eax) ((eax & 0xFF) << 24)
+ 
+ static __always_inline int __intel_idle(struct cpuidle_device *dev,
+-					struct cpuidle_driver *drv, int index)
++					struct cpuidle_driver *drv,
++					int index, bool irqoff)
+ {
+ 	struct cpuidle_state *state = &drv->states[index];
+ 	unsigned long eax = flg2MWAIT(state->flags);
+-	unsigned long ecx = 1; /* break on interrupt flag */
++	unsigned long ecx = 1*irqoff; /* break on interrupt flag */
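++	/*
++	 * Illustrative note: 1*irqoff evaluates to 1 when irqoff is true
++	 * and 0 otherwise, so the "break on interrupt" MWAIT hint is only
++	 * requested on the irqs-off entry paths.
++	 */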
+ 
+ 	mwait_idle_with_hints(eax, ecx);
+ 
+@@ -159,19 +160,13 @@ static __always_inline int __intel_idle(struct cpuidle_device *dev,
+ static __cpuidle int intel_idle(struct cpuidle_device *dev,
+ 				struct cpuidle_driver *drv, int index)
+ {
+-	return __intel_idle(dev, drv, index);
++	return __intel_idle(dev, drv, index, true);
+ }
+ 
+ static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
+ 				    struct cpuidle_driver *drv, int index)
+ {
+-	int ret;
+-
+-	raw_local_irq_enable();
+-	ret = __intel_idle(dev, drv, index);
+-	raw_local_irq_disable();
+-
+-	return ret;
++	return __intel_idle(dev, drv, index, false);
+ }
+ 
+ static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev,
+@@ -184,7 +179,7 @@ static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev,
+ 	if (smt_active)
+ 		__update_spec_ctrl(0);
+ 
+-	ret = __intel_idle(dev, drv, index);
++	ret = __intel_idle(dev, drv, index, true);
+ 
+ 	if (smt_active)
+ 		__update_spec_ctrl(spec_ctrl);
+@@ -196,7 +191,7 @@ static __cpuidle int intel_idle_xstate(struct cpuidle_device *dev,
+ 				       struct cpuidle_driver *drv, int index)
+ {
+ 	fpu_idle_fpregs();
+-	return __intel_idle(dev, drv, index);
++	return __intel_idle(dev, drv, index, true);
+ }
+ 
+ /**
+diff --git a/drivers/iio/adc/ad7091r-base.c b/drivers/iio/adc/ad7091r-base.c
+index 8e252cde735b9..0e5d3d2e9c985 100644
+--- a/drivers/iio/adc/ad7091r-base.c
++++ b/drivers/iio/adc/ad7091r-base.c
+@@ -174,8 +174,8 @@ static const struct iio_info ad7091r_info = {
+ 
+ static irqreturn_t ad7091r_event_handler(int irq, void *private)
+ {
+-	struct ad7091r_state *st = (struct ad7091r_state *) private;
+-	struct iio_dev *iio_dev = dev_get_drvdata(st->dev);
++	struct iio_dev *iio_dev = private;
++	struct ad7091r_state *st = iio_priv(iio_dev);
+ 	unsigned int i, read_val;
+ 	int ret;
+ 	s64 timestamp = iio_get_time_ns(iio_dev);
+@@ -234,7 +234,7 @@ int ad7091r_probe(struct device *dev, const char *name,
+ 	if (irq) {
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				ad7091r_event_handler,
+-				IRQF_TRIGGER_FALLING | IRQF_ONESHOT, name, st);
++				IRQF_TRIGGER_FALLING | IRQF_ONESHOT, name, iio_dev);
+ 		if (ret)
+ 			return ret;
+ 	}
+diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
+index 39eccc28debe4..f668313730cb6 100644
+--- a/drivers/iio/adc/ad9467.c
++++ b/drivers/iio/adc/ad9467.c
+@@ -4,8 +4,9 @@
+  *
+  * Copyright 2012-2020 Analog Devices Inc.
+  */
+-
++#include <linux/cleanup.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/device.h>
+ #include <linux/kernel.h>
+ #include <linux/slab.h>
+@@ -119,9 +120,11 @@ struct ad9467_state {
+ 	struct spi_device		*spi;
+ 	struct clk			*clk;
+ 	unsigned int			output_mode;
++	unsigned int                    (*scales)[2];
+ 
+ 	struct gpio_desc		*pwrdown_gpio;
+-	struct gpio_desc		*reset_gpio;
++	/* ensure consistent state obtained on multiple related accesses */
++	struct mutex			lock;
+ };
+ 
+ static int ad9467_spi_read(struct spi_device *spi, unsigned int reg)
+@@ -162,10 +165,12 @@ static int ad9467_reg_access(struct adi_axi_adc_conv *conv, unsigned int reg,
+ 	int ret;
+ 
+ 	if (readval == NULL) {
++		guard(mutex)(&st->lock);
+ 		ret = ad9467_spi_write(spi, reg, writeval);
+-		ad9467_spi_write(spi, AN877_ADC_REG_TRANSFER,
+-				 AN877_ADC_TRANSFER_SYNC);
+-		return ret;
++		if (ret)
++			return ret;
++		return ad9467_spi_write(spi, AN877_ADC_REG_TRANSFER,
++					AN877_ADC_TRANSFER_SYNC);
+ 	}
+ 
+ 	ret = ad9467_spi_read(spi, reg);
+@@ -212,6 +217,7 @@ static void __ad9467_get_scale(struct adi_axi_adc_conv *conv, int index,
+ 	.channel = _chan,						\
+ 	.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) |		\
+ 		BIT(IIO_CHAN_INFO_SAMP_FREQ),				\
++	.info_mask_shared_by_type_available = BIT(IIO_CHAN_INFO_SCALE), \
+ 	.scan_index = _si,						\
+ 	.scan_type = {							\
+ 		.sign = _sign,						\
+@@ -273,10 +279,13 @@ static int ad9467_get_scale(struct adi_axi_adc_conv *conv, int *val, int *val2)
+ 	const struct ad9467_chip_info *info1 = to_ad9467_chip_info(info);
+ 	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
+ 	unsigned int i, vref_val;
++	int ret;
+ 
+-	vref_val = ad9467_spi_read(st->spi, AN877_ADC_REG_VREF);
++	ret = ad9467_spi_read(st->spi, AN877_ADC_REG_VREF);
++	if (ret < 0)
++		return ret;
+ 
+-	vref_val &= info1->vref_mask;
++	vref_val = ret & info1->vref_mask;
+ 
+ 	for (i = 0; i < info->num_scales; i++) {
+ 		if (vref_val == info->scale_table[i][1])
+@@ -297,6 +306,7 @@ static int ad9467_set_scale(struct adi_axi_adc_conv *conv, int val, int val2)
+ 	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
+ 	unsigned int scale_val[2];
+ 	unsigned int i;
++	int ret;
+ 
+ 	if (val != 0)
+ 		return -EINVAL;
+@@ -306,11 +316,14 @@ static int ad9467_set_scale(struct adi_axi_adc_conv *conv, int val, int val2)
+ 		if (scale_val[0] != val || scale_val[1] != val2)
+ 			continue;
+ 
+-		ad9467_spi_write(st->spi, AN877_ADC_REG_VREF,
+-				 info->scale_table[i][1]);
+-		ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
+-				 AN877_ADC_TRANSFER_SYNC);
+-		return 0;
++		guard(mutex)(&st->lock);
++		ret = ad9467_spi_write(st->spi, AN877_ADC_REG_VREF,
++				       info->scale_table[i][1]);
++		if (ret < 0)
++			return ret;
++
++		return ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
++					AN877_ADC_TRANSFER_SYNC);
+ 	}
+ 
+ 	return -EINVAL;
+@@ -359,6 +372,26 @@ static int ad9467_write_raw(struct adi_axi_adc_conv *conv,
+ 	}
+ }
+ 
++static int ad9467_read_avail(struct adi_axi_adc_conv *conv,
++			     struct iio_chan_spec const *chan,
++			     const int **vals, int *type, int *length,
++			     long mask)
++{
++	const struct adi_axi_adc_chip_info *info = conv->chip_info;
++	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
++
++	switch (mask) {
++	case IIO_CHAN_INFO_SCALE:
++		*vals = (const int *)st->scales;
++		*type = IIO_VAL_INT_PLUS_MICRO;
++		/* Values are stored in a 2D matrix */
++		*length = info->num_scales * 2;
++		return IIO_AVAIL_LIST;
++	default:
++		return -EINVAL;
++	}
++}
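++
++/*
++ * Illustrative example (hypothetical values): with four supported ranges
++ * the available list holds 4 * 2 integers, and IIO renders each
++ * (integer, micro) pair in IIO_VAL_INT_PLUS_MICRO form, e.g. {0, 38147}
++ * as "0.038147".
++ */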
++
+ static int ad9467_outputmode_set(struct spi_device *spi, unsigned int mode)
+ {
+ 	int ret;
+@@ -371,6 +404,26 @@ static int ad9467_outputmode_set(struct spi_device *spi, unsigned int mode)
+ 				AN877_ADC_TRANSFER_SYNC);
+ }
+ 
++static int ad9467_scale_fill(struct adi_axi_adc_conv *conv)
++{
++	const struct adi_axi_adc_chip_info *info = conv->chip_info;
++	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
++	unsigned int i, val1, val2;
++
++	st->scales = devm_kmalloc_array(&st->spi->dev, info->num_scales,
++					sizeof(*st->scales), GFP_KERNEL);
++	if (!st->scales)
++		return -ENOMEM;
++
++	for (i = 0; i < info->num_scales; i++) {
++		__ad9467_get_scale(conv, i, &val1, &val2);
++		st->scales[i][0] = val1;
++		st->scales[i][1] = val2;
++	}
++
++	return 0;
++}
++
+ static int ad9467_preenable_setup(struct adi_axi_adc_conv *conv)
+ {
+ 	struct ad9467_state *st = adi_axi_adc_conv_priv(conv);
+@@ -378,6 +431,21 @@ static int ad9467_preenable_setup(struct adi_axi_adc_conv *conv)
+ 	return ad9467_outputmode_set(st->spi, st->output_mode);
+ }
+ 
++static int ad9467_reset(struct device *dev)
++{
++	struct gpio_desc *gpio;
++
++	gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
++	if (IS_ERR_OR_NULL(gpio))
++		return PTR_ERR_OR_ZERO(gpio);
++
++	fsleep(1);
++	gpiod_set_value_cansleep(gpio, 0);
++	fsleep(10 * USEC_PER_MSEC);
++
++	return 0;
++}
++
+ static int ad9467_probe(struct spi_device *spi)
+ {
+ 	const struct ad9467_chip_info *info;
+@@ -408,21 +476,16 @@ static int ad9467_probe(struct spi_device *spi)
+ 	if (IS_ERR(st->pwrdown_gpio))
+ 		return PTR_ERR(st->pwrdown_gpio);
+ 
+-	st->reset_gpio = devm_gpiod_get_optional(&spi->dev, "reset",
+-						 GPIOD_OUT_LOW);
+-	if (IS_ERR(st->reset_gpio))
+-		return PTR_ERR(st->reset_gpio);
+-
+-	if (st->reset_gpio) {
+-		udelay(1);
+-		ret = gpiod_direction_output(st->reset_gpio, 1);
+-		if (ret)
+-			return ret;
+-		mdelay(10);
+-	}
++	ret = ad9467_reset(&spi->dev);
++	if (ret)
++		return ret;
+ 
+ 	conv->chip_info = &info->axi_adc_info;
+ 
++	ret = ad9467_scale_fill(conv);
++	if (ret)
++		return ret;
++
+ 	id = ad9467_spi_read(spi, AN877_ADC_REG_CHIP_ID);
+ 	if (id != conv->chip_info->id) {
+ 		dev_err(&spi->dev, "Mismatch CHIP_ID, got 0x%X, expected 0x%X\n",
+@@ -433,6 +496,7 @@ static int ad9467_probe(struct spi_device *spi)
+ 	conv->reg_access = ad9467_reg_access;
+ 	conv->write_raw = ad9467_write_raw;
+ 	conv->read_raw = ad9467_read_raw;
++	conv->read_avail = ad9467_read_avail;
+ 	conv->preenable_setup = ad9467_preenable_setup;
+ 
+ 	st->output_mode = info->default_output_mode |
+diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
+index aff0532a974aa..ae83ada7f9f2d 100644
+--- a/drivers/iio/adc/adi-axi-adc.c
++++ b/drivers/iio/adc/adi-axi-adc.c
+@@ -144,6 +144,20 @@ static int adi_axi_adc_write_raw(struct iio_dev *indio_dev,
+ 	return conv->write_raw(conv, chan, val, val2, mask);
+ }
+ 
++static int adi_axi_adc_read_avail(struct iio_dev *indio_dev,
++				  struct iio_chan_spec const *chan,
++				  const int **vals, int *type, int *length,
++				  long mask)
++{
++	struct adi_axi_adc_state *st = iio_priv(indio_dev);
++	struct adi_axi_adc_conv *conv = &st->client->conv;
++
++	if (!conv->read_avail)
++		return -EOPNOTSUPP;
++
++	return conv->read_avail(conv, chan, vals, type, length, mask);
++}
++
+ static int adi_axi_adc_update_scan_mode(struct iio_dev *indio_dev,
+ 					const unsigned long *scan_mask)
+ {
+@@ -228,69 +242,11 @@ struct adi_axi_adc_conv *devm_adi_axi_adc_conv_register(struct device *dev,
+ }
+ EXPORT_SYMBOL_NS_GPL(devm_adi_axi_adc_conv_register, IIO_ADI_AXI);
+ 
+-static ssize_t in_voltage_scale_available_show(struct device *dev,
+-					       struct device_attribute *attr,
+-					       char *buf)
+-{
+-	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+-	struct adi_axi_adc_state *st = iio_priv(indio_dev);
+-	struct adi_axi_adc_conv *conv = &st->client->conv;
+-	size_t len = 0;
+-	int i;
+-
+-	for (i = 0; i < conv->chip_info->num_scales; i++) {
+-		const unsigned int *s = conv->chip_info->scale_table[i];
+-
+-		len += scnprintf(buf + len, PAGE_SIZE - len,
+-				 "%u.%06u ", s[0], s[1]);
+-	}
+-	buf[len - 1] = '\n';
+-
+-	return len;
+-}
+-
+-static IIO_DEVICE_ATTR_RO(in_voltage_scale_available, 0);
+-
+-enum {
+-	ADI_AXI_ATTR_SCALE_AVAIL,
+-};
+-
+-#define ADI_AXI_ATTR(_en_, _file_)			\
+-	[ADI_AXI_ATTR_##_en_] = &iio_dev_attr_##_file_.dev_attr.attr
+-
+-static struct attribute *adi_axi_adc_attributes[] = {
+-	ADI_AXI_ATTR(SCALE_AVAIL, in_voltage_scale_available),
+-	NULL
+-};
+-
+-static umode_t axi_adc_attr_is_visible(struct kobject *kobj,
+-				       struct attribute *attr, int n)
+-{
+-	struct device *dev = kobj_to_dev(kobj);
+-	struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+-	struct adi_axi_adc_state *st = iio_priv(indio_dev);
+-	struct adi_axi_adc_conv *conv = &st->client->conv;
+-
+-	switch (n) {
+-	case ADI_AXI_ATTR_SCALE_AVAIL:
+-		if (!conv->chip_info->num_scales)
+-			return 0;
+-		return attr->mode;
+-	default:
+-		return attr->mode;
+-	}
+-}
+-
+-static const struct attribute_group adi_axi_adc_attribute_group = {
+-	.attrs = adi_axi_adc_attributes,
+-	.is_visible = axi_adc_attr_is_visible,
+-};
+-
+ static const struct iio_info adi_axi_adc_info = {
+ 	.read_raw = &adi_axi_adc_read_raw,
+ 	.write_raw = &adi_axi_adc_write_raw,
+-	.attrs = &adi_axi_adc_attribute_group,
+ 	.update_scan_mode = &adi_axi_adc_update_scan_mode,
++	.read_avail = &adi_axi_adc_read_avail,
+ };
+ 
+ static const struct adi_axi_adc_core_info adi_axi_adc_10_0_a_info = {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 2bca9560f32dd..aa9527ac2fe06 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -2698,6 +2698,10 @@ static int free_mr_alloc_res(struct hns_roce_dev *hr_dev)
+ 	return 0;
+ 
+ create_failed_qp:
++	for (i--; i >= 0; i--) {
++		hns_roce_v2_destroy_qp(&free_mr->rsv_qp[i]->ibqp, NULL);
++		kfree(free_mr->rsv_qp[i]);
++	}
+ 	hns_roce_destroy_cq(cq, NULL);
+ 	kfree(cq);
+ 
+@@ -5671,7 +5675,7 @@ static int hns_roce_v2_modify_srq(struct ib_srq *ibsrq,
+ 
+ 	/* Resizing SRQs is not supported yet */
+ 	if (srq_attr_mask & IB_SRQ_MAX_WR)
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 
+ 	if (srq_attr_mask & IB_SRQ_LIMIT) {
+ 		if (srq_attr->srq_limit > srq->wqe_cnt)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
+index 783e71852c503..bd1fe89ca205e 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
+@@ -150,7 +150,7 @@ int hns_roce_alloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata)
+ 	int ret;
+ 
+ 	if (!(hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC))
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 
+ 	ret = hns_roce_xrcd_alloc(hr_dev, &xrcd->xrcdn);
+ 	if (ret)
+diff --git a/drivers/infiniband/hw/mthca/mthca_cmd.c b/drivers/infiniband/hw/mthca/mthca_cmd.c
+index f330ce895d884..8fe0cef7e2be6 100644
+--- a/drivers/infiniband/hw/mthca/mthca_cmd.c
++++ b/drivers/infiniband/hw/mthca/mthca_cmd.c
+@@ -635,7 +635,7 @@ void mthca_free_mailbox(struct mthca_dev *dev, struct mthca_mailbox *mailbox)
+ 
+ int mthca_SYS_EN(struct mthca_dev *dev)
+ {
+-	u64 out;
++	u64 out = 0;
+ 	int ret;
+ 
+ 	ret = mthca_cmd_imm(dev, 0, &out, 0, 0, CMD_SYS_EN, CMD_TIME_CLASS_D);
+@@ -1955,7 +1955,7 @@ int mthca_WRITE_MGM(struct mthca_dev *dev, int index,
+ int mthca_MGID_HASH(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+ 		    u16 *hash)
+ {
+-	u64 imm;
++	u64 imm = 0;
+ 	int err;
+ 
+ 	err = mthca_cmd_imm(dev, mailbox->dma, &imm, 0, 0, CMD_MGID_HASH,
+diff --git a/drivers/infiniband/hw/mthca/mthca_main.c b/drivers/infiniband/hw/mthca/mthca_main.c
+index b54bc8865daec..1ab268b770968 100644
+--- a/drivers/infiniband/hw/mthca/mthca_main.c
++++ b/drivers/infiniband/hw/mthca/mthca_main.c
+@@ -382,7 +382,7 @@ static int mthca_init_icm(struct mthca_dev *mdev,
+ 			  struct mthca_init_hca_param *init_hca,
+ 			  u64 icm_size)
+ {
+-	u64 aux_pages;
++	u64 aux_pages = 0;
+ 	int err;
+ 
+ 	err = mthca_SET_ICM_SIZE(mdev, icm_size, &aux_pages);
+diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.h b/drivers/infiniband/ulp/iser/iscsi_iser.h
+index dee8c97ff0568..d967d55324596 100644
+--- a/drivers/infiniband/ulp/iser/iscsi_iser.h
++++ b/drivers/infiniband/ulp/iser/iscsi_iser.h
+@@ -317,12 +317,10 @@ struct iser_device {
+  *
+  * @mr:         memory region
+  * @sig_mr:     signature memory region
+- * @mr_valid:   is mr valid indicator
+  */
+ struct iser_reg_resources {
+ 	struct ib_mr                     *mr;
+ 	struct ib_mr                     *sig_mr;
+-	u8				  mr_valid:1;
+ };
+ 
+ /**
+diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
+index 39ea73f690168..f5f090dc4f1eb 100644
+--- a/drivers/infiniband/ulp/iser/iser_initiator.c
++++ b/drivers/infiniband/ulp/iser/iser_initiator.c
+@@ -581,7 +581,10 @@ static inline int iser_inv_desc(struct iser_fr_desc *desc, u32 rkey)
+ 		return -EINVAL;
+ 	}
+ 
+-	desc->rsc.mr_valid = 0;
++	if (desc->sig_protected)
++		desc->rsc.sig_mr->need_inval = false;
++	else
++		desc->rsc.mr->need_inval = false;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
+index 29ae2c6a250a3..6efcb79c8efe3 100644
+--- a/drivers/infiniband/ulp/iser/iser_memory.c
++++ b/drivers/infiniband/ulp/iser/iser_memory.c
+@@ -264,7 +264,7 @@ static int iser_reg_sig_mr(struct iscsi_iser_task *iser_task,
+ 
+ 	iser_set_prot_checks(iser_task->sc, &sig_attrs->check_mask);
+ 
+-	if (rsc->mr_valid)
++	if (rsc->sig_mr->need_inval)
+ 		iser_inv_rkey(&tx_desc->inv_wr, mr, cqe, &wr->wr);
+ 
+ 	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
+@@ -288,7 +288,7 @@ static int iser_reg_sig_mr(struct iscsi_iser_task *iser_task,
+ 	wr->access = IB_ACCESS_LOCAL_WRITE |
+ 		     IB_ACCESS_REMOTE_READ |
+ 		     IB_ACCESS_REMOTE_WRITE;
+-	rsc->mr_valid = 1;
++	rsc->sig_mr->need_inval = true;
+ 
+ 	sig_reg->sge.lkey = mr->lkey;
+ 	sig_reg->rkey = mr->rkey;
+@@ -313,7 +313,7 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
+ 	struct ib_reg_wr *wr = &tx_desc->reg_wr;
+ 	int n;
+ 
+-	if (rsc->mr_valid)
++	if (rsc->mr->need_inval)
+ 		iser_inv_rkey(&tx_desc->inv_wr, mr, cqe, &wr->wr);
+ 
+ 	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
+@@ -336,7 +336,7 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
+ 		     IB_ACCESS_REMOTE_WRITE |
+ 		     IB_ACCESS_REMOTE_READ;
+ 
+-	rsc->mr_valid = 1;
++	rsc->mr->need_inval = true;
+ 
+ 	reg->sge.lkey = mr->lkey;
+ 	reg->rkey = mr->rkey;
+diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c
+index 95b8eebf7e045..6801b70dc9e0e 100644
+--- a/drivers/infiniband/ulp/iser/iser_verbs.c
++++ b/drivers/infiniband/ulp/iser/iser_verbs.c
+@@ -129,7 +129,6 @@ iser_create_fastreg_desc(struct iser_device *device,
+ 			goto err_alloc_mr_integrity;
+ 		}
+ 	}
+-	desc->rsc.mr_valid = 0;
+ 
+ 	return desc;
+ 
+diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
+index 786f00f6b7fd8..13ef6284223da 100644
+--- a/drivers/input/keyboard/atkbd.c
++++ b/drivers/input/keyboard/atkbd.c
+@@ -791,9 +791,9 @@ static bool atkbd_is_portable_device(void)
+  * not work. So in this case simply assume a keyboard is connected to avoid
+  * confusing some laptop keyboards.
+  *
+- * Skipping ATKBD_CMD_GETID ends up using a fake keyboard id. Using a fake id is
+- * ok in translated mode, only atkbd_select_set() checks atkbd->id and in
+- * translated mode that is a no-op.
++ * Skipping ATKBD_CMD_GETID ends up using a fake keyboard id. Using the standard
++ * 0xab83 id is ok in translated mode; only atkbd_select_set() checks atkbd->id,
++ * and in translated mode that is a no-op.
+  */
+ static bool atkbd_skip_getid(struct atkbd *atkbd)
+ {
+@@ -811,6 +811,7 @@ static int atkbd_probe(struct atkbd *atkbd)
+ {
+ 	struct ps2dev *ps2dev = &atkbd->ps2dev;
+ 	unsigned char param[2];
++	bool skip_getid;
+ 
+ /*
+  * Some systems, where the bit-twiddling when testing the io-lines of the
+@@ -832,7 +833,8 @@ static int atkbd_probe(struct atkbd *atkbd)
+  */
+ 
+ 	param[0] = param[1] = 0xa5;	/* initialize with invalid values */
+-	if (atkbd_skip_getid(atkbd) || ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
++	skip_getid = atkbd_skip_getid(atkbd);
++	if (skip_getid || ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
+ 
+ /*
+  * If the get ID command was skipped or failed, we check if we can at least set
+@@ -842,7 +844,7 @@ static int atkbd_probe(struct atkbd *atkbd)
+ 		param[0] = 0;
+ 		if (ps2_command(ps2dev, param, ATKBD_CMD_SETLEDS))
+ 			return -1;
+-		atkbd->id = 0xabba;
++		atkbd->id = skip_getid ? 0xab83 : 0xabba;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 549ae4dba3a68..d326fa230b96e 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -243,6 +243,7 @@ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+ 
+ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
+ 	{ .compatible = "qcom,adreno" },
++	{ .compatible = "qcom,adreno-gmu" },
+ 	{ .compatible = "qcom,mdp4" },
+ 	{ .compatible = "qcom,mdss" },
+ 	{ .compatible = "qcom,sc7180-mdss" },
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 85163a83df2f6..037fcf826407f 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -29,6 +29,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/swiotlb.h>
+ #include <linux/vmalloc.h>
++#include <trace/events/swiotlb.h>
+ 
+ #include "dma-iommu.h"
+ 
+@@ -1156,6 +1157,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
+ 			return DMA_MAPPING_ERROR;
+ 		}
+ 
++		trace_swiotlb_bounced(dev, phys, size);
++
+ 		aligned_size = iova_align(iovad, size);
+ 		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
+ 					      iova_mask(iovad), dir, attrs);
+diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
+index 35ba090f3b5e2..42cffb0ee5e28 100644
+--- a/drivers/iommu/of_iommu.c
++++ b/drivers/iommu/of_iommu.c
+@@ -260,7 +260,14 @@ void of_iommu_get_resv_regions(struct device *dev, struct list_head *list)
+ 				phys_addr_t iova;
+ 				size_t length;
+ 
++				if (of_dma_is_coherent(dev->of_node))
++					prot |= IOMMU_CACHE;
++
+ 				maps = of_translate_dma_region(np, maps, &iova, &length);
++				if (length == 0) {
++					dev_warn(dev, "Cannot reserve IOVA region of 0 size\n");
++					continue;
++				}
+ 				type = iommu_resv_region_get_type(dev, &phys, iova, length);
+ 
+ 				region = iommu_alloc_resv_region(iova, length, prot, type,
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index 6292fddcc55cd..a3a9ac5b53384 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -110,6 +110,7 @@ config LEDS_AW200XX
+ config LEDS_AW2013
+ 	tristate "LED support for Awinic AW2013"
+ 	depends on LEDS_CLASS && I2C && OF
++	select REGMAP_I2C
+ 	help
+ 	  This option enables support for the AW2013 3-channel
+ 	  LED driver.
+diff --git a/drivers/leds/leds-aw200xx.c b/drivers/leds/leds-aw200xx.c
+index 14ca236ce29e5..f3bed4f05b34f 100644
+--- a/drivers/leds/leds-aw200xx.c
++++ b/drivers/leds/leds-aw200xx.c
+@@ -74,6 +74,10 @@
+ #define AW200XX_LED2REG(x, columns) \
+ 	((x) + (((x) / (columns)) * (AW200XX_DSIZE_COLUMNS_MAX - (columns))))
+ 
++/* DIM current configuration register on page 1 */
++#define AW200XX_REG_DIM_PAGE1(x, columns) \
++	AW200XX_REG(AW200XX_PAGE1, AW200XX_LED2REG(x, columns))
++
+ /*
+  * DIM current configuration register (page 4).
+  * The even address for current DIM configuration.
+@@ -153,7 +157,8 @@ static ssize_t dim_store(struct device *dev, struct device_attribute *devattr,
+ 
+ 	if (dim >= 0) {
+ 		ret = regmap_write(chip->regmap,
+-				   AW200XX_REG_DIM(led->num, columns), dim);
++				   AW200XX_REG_DIM_PAGE1(led->num, columns),
++				   dim);
+ 		if (ret)
+ 			goto out_unlock;
+ 	}
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 9bdd57324c376..22b54788bceab 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -543,6 +543,9 @@ static void md_end_flush(struct bio *bio)
+ 	rdev_dec_pending(rdev, mddev);
+ 
+ 	if (atomic_dec_and_test(&mddev->flush_pending)) {
++		/* This pairs with percpu_ref_get() in md_flush_request() */
++		percpu_ref_put(&mddev->active_io);
++
+ 		/* The pre-request flush has finished */
+ 		queue_work(md_wq, &mddev->flush_work);
+ 	}
+@@ -562,12 +565,8 @@ static void submit_flushes(struct work_struct *ws)
+ 	rdev_for_each_rcu(rdev, mddev)
+ 		if (rdev->raid_disk >= 0 &&
+ 		    !test_bit(Faulty, &rdev->flags)) {
+-			/* Take two references, one is dropped
+-			 * when request finishes, one after
+-			 * we reclaim rcu_read_lock
+-			 */
+ 			struct bio *bi;
+-			atomic_inc(&rdev->nr_pending);
++
+ 			atomic_inc(&rdev->nr_pending);
+ 			rcu_read_unlock();
+ 			bi = bio_alloc_bioset(rdev->bdev, 0,
+@@ -578,7 +577,6 @@ static void submit_flushes(struct work_struct *ws)
+ 			atomic_inc(&mddev->flush_pending);
+ 			submit_bio(bi);
+ 			rcu_read_lock();
+-			rdev_dec_pending(rdev, mddev);
+ 		}
+ 	rcu_read_unlock();
+ 	if (atomic_dec_and_test(&mddev->flush_pending))
+@@ -631,6 +629,18 @@ bool md_flush_request(struct mddev *mddev, struct bio *bio)
+ 	/* new request after previous flush is completed */
+ 	if (ktime_after(req_start, mddev->prev_flush_start)) {
+ 		WARN_ON(mddev->flush_bio);
++		/*
++		 * Grab a reference to make sure mddev_suspend() will wait for
++		 * this flush to be done.
++		 *
++		 * md_flush_request() is called under md_handle_request() and
++		 * 'active_io' is already grabbed, hence percpu_ref_is_zero()
++		 * won't pass. percpu_ref_tryget_live() can't be used because
++		 * percpu_ref_kill() can be called by mddev_suspend()
++		 * concurrently.
++		 */
++		WARN_ON(percpu_ref_is_zero(&mddev->active_io));
++		percpu_ref_get(&mddev->active_io);
+ 		mddev->flush_bio = bio;
+ 		bio = NULL;
+ 	}
+@@ -8112,6 +8122,19 @@ static void status_unused(struct seq_file *seq)
+ 	seq_printf(seq, "\n");
+ }
+ 
++static void status_personalities(struct seq_file *seq)
++{
++	struct md_personality *pers;
++
++	seq_puts(seq, "Personalities : ");
++	spin_lock(&pers_lock);
++	list_for_each_entry(pers, &pers_list, list)
++		seq_printf(seq, "[%s] ", pers->name);
++
++	spin_unlock(&pers_lock);
++	seq_puts(seq, "\n");
++}
++
+ static int status_resync(struct seq_file *seq, struct mddev *mddev)
+ {
+ 	sector_t max_sectors, resync, res;
+@@ -8253,20 +8276,10 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
+ static void *md_seq_start(struct seq_file *seq, loff_t *pos)
+ 	__acquires(&all_mddevs_lock)
+ {
+-	struct md_personality *pers;
+-
+-	seq_puts(seq, "Personalities : ");
+-	spin_lock(&pers_lock);
+-	list_for_each_entry(pers, &pers_list, list)
+-		seq_printf(seq, "[%s] ", pers->name);
+-
+-	spin_unlock(&pers_lock);
+-	seq_puts(seq, "\n");
+ 	seq->poll_event = atomic_read(&md_event_count);
+-
+ 	spin_lock(&all_mddevs_lock);
+ 
+-	return seq_list_start(&all_mddevs, *pos);
++	return seq_list_start_head(&all_mddevs, *pos);
+ }
+ 
+ static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+@@ -8277,16 +8290,23 @@ static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ static void md_seq_stop(struct seq_file *seq, void *v)
+ 	__releases(&all_mddevs_lock)
+ {
+-	status_unused(seq);
+ 	spin_unlock(&all_mddevs_lock);
+ }
+ 
+ static int md_seq_show(struct seq_file *seq, void *v)
+ {
+-	struct mddev *mddev = list_entry(v, struct mddev, all_mddevs);
++	struct mddev *mddev;
+ 	sector_t sectors;
+ 	struct md_rdev *rdev;
+ 
++	if (v == &all_mddevs) {
++		status_personalities(seq);
++		if (list_empty(&all_mddevs))
++			status_unused(seq);
++		return 0;
++	}
++
++	mddev = list_entry(v, struct mddev, all_mddevs);
+ 	if (!mddev_get(mddev))
+ 		return 0;
+ 
+@@ -8362,6 +8382,10 @@ static int md_seq_show(struct seq_file *seq, void *v)
+ 	}
+ 	spin_unlock(&mddev->lock);
+ 	spin_lock(&all_mddevs_lock);
++
++	if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
++		status_unused(seq);
++
+ 	if (atomic_dec_and_test(&mddev->active))
+ 		__mddev_put(mddev);
+ 
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 35d12948e0a96..e138922d51292 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1984,12 +1984,12 @@ static void end_sync_write(struct bio *bio)
+ }
+ 
+ static int r1_sync_page_io(struct md_rdev *rdev, sector_t sector,
+-			   int sectors, struct page *page, int rw)
++			   int sectors, struct page *page, blk_opf_t rw)
+ {
+ 	if (sync_page_io(rdev, sector, sectors << 9, page, rw, false))
+ 		/* success */
+ 		return 1;
+-	if (rw == WRITE) {
++	if (rw == REQ_OP_WRITE) {
+ 		set_bit(WriteErrorSeen, &rdev->flags);
+ 		if (!test_and_set_bit(WantReplacement,
+ 				      &rdev->flags))
+@@ -2106,7 +2106,7 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
+ 			rdev = conf->mirrors[d].rdev;
+ 			if (r1_sync_page_io(rdev, sect, s,
+ 					    pages[idx],
+-					    WRITE) == 0) {
++					    REQ_OP_WRITE) == 0) {
+ 				r1_bio->bios[d]->bi_end_io = NULL;
+ 				rdev_dec_pending(rdev, mddev);
+ 			}
+@@ -2121,7 +2121,7 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
+ 			rdev = conf->mirrors[d].rdev;
+ 			if (r1_sync_page_io(rdev, sect, s,
+ 					    pages[idx],
+-					    READ) != 0)
++					    REQ_OP_READ) != 0)
+ 				atomic_add(s, &rdev->corrected_errors);
+ 		}
+ 		sectors -= s;
+@@ -2333,7 +2333,7 @@ static void fix_read_error(struct r1conf *conf, int read_disk,
+ 				atomic_inc(&rdev->nr_pending);
+ 				rcu_read_unlock();
+ 				r1_sync_page_io(rdev, sect, s,
+-						conf->tmppage, WRITE);
++						conf->tmppage, REQ_OP_WRITE);
+ 				rdev_dec_pending(rdev, mddev);
+ 			} else
+ 				rcu_read_unlock();
+@@ -2350,7 +2350,7 @@ static void fix_read_error(struct r1conf *conf, int read_disk,
+ 				atomic_inc(&rdev->nr_pending);
+ 				rcu_read_unlock();
+ 				if (r1_sync_page_io(rdev, sect, s,
+-						    conf->tmppage, READ)) {
++						conf->tmppage, REQ_OP_READ)) {
+ 					atomic_add(s, &rdev->corrected_errors);
+ 					pr_info("md/raid1:%s: read error corrected (%d sectors at %llu on %pg)\n",
+ 						mdname(mddev), s,
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index b02b1a3010f71..26e1e8a5e9419 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -36,6 +36,7 @@
+  */
+ 
+ #include <linux/blkdev.h>
++#include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/raid/pq.h>
+ #include <linux/async_tx.h>
+@@ -6819,7 +6820,18 @@ static void raid5d(struct md_thread *thread)
+ 			spin_unlock_irq(&conf->device_lock);
+ 			md_check_recovery(mddev);
+ 			spin_lock_irq(&conf->device_lock);
++
++			/*
++			 * Waiting on MD_SB_CHANGE_PENDING below may deadlock,
++			 * because md_check_recovery() is needed to clear
++			 * the flag when using mdmon.
++			 */
++			continue;
+ 		}
++
++		wait_event_lock_irq(mddev->sb_wait,
++			!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags),
++			conf->device_lock);
+ 	}
+ 	pr_debug("%d stripes handled\n", handled);
+ 
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 27aee92f3eea4..19f80ff497b7d 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -2648,9 +2648,14 @@ static int __vb2_init_fileio(struct vb2_queue *q, int read)
+ 		return -EBUSY;
+ 
+ 	/*
+-	 * Start with count 1, driver can increase it in queue_setup()
++	 * Start with q->min_buffers_needed + 1, driver can increase it in
++	 * queue_setup()
++	 *
++	 * 'min_buffers_needed' buffers need to be queued up before you
++	 * can start streaming, plus 1 for userspace (or in this case,
++	 * kernelspace) processing.
+ 	 */
+-	count = 1;
++	count = max(2, q->min_buffers_needed + 1);
+ 
+ 	dprintk(q, 3, "setting up file io: mode %s, count %d, read_once %d, write_immediately %d\n",
+ 		(read) ? "read" : "write", count, q->fileio_read_once,
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 305bb21d843c8..49f0eb7d0b9d3 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -104,6 +104,8 @@ static int dvb_device_open(struct inode *inode, struct file *file)
+ 			err = file->f_op->open(inode, file);
+ 		up_read(&minor_rwsem);
+ 		mutex_unlock(&dvbdev_mutex);
++		if (err)
++			dvb_device_put(dvbdev);
+ 		return err;
+ 	}
+ fail:
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index 26c67ef05d135..e0272054fca51 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -1894,7 +1894,7 @@ static int m88ds3103_probe(struct i2c_client *client)
+ 		/* get frontend address */
+ 		ret = regmap_read(dev->regmap, 0x29, &utmp);
+ 		if (ret)
+-			goto err_kfree;
++			goto err_del_adapters;
+ 		dev->dt_addr = ((utmp & 0x80) == 0) ? 0x42 >> 1 : 0x40 >> 1;
+ 		dev_dbg(&client->dev, "dt addr is 0x%02x\n", dev->dt_addr);
+ 
+@@ -1902,11 +1902,14 @@ static int m88ds3103_probe(struct i2c_client *client)
+ 						      dev->dt_addr);
+ 		if (IS_ERR(dev->dt_client)) {
+ 			ret = PTR_ERR(dev->dt_client);
+-			goto err_kfree;
++			goto err_del_adapters;
+ 		}
+ 	}
+ 
+ 	return 0;
++
++err_del_adapters:
++	i2c_mux_del_adapters(dev->muxc);
+ err_kfree:
+ 	kfree(dev);
+ err:
+diff --git a/drivers/media/i2c/mt9m114.c b/drivers/media/i2c/mt9m114.c
+index ac19078ceda3f..1a535c098ded6 100644
+--- a/drivers/media/i2c/mt9m114.c
++++ b/drivers/media/i2c/mt9m114.c
+@@ -2112,7 +2112,7 @@ static int mt9m114_power_on(struct mt9m114 *sensor)
+ 		duration = DIV_ROUND_UP(2 * 50 * 1000000, freq);
+ 
+ 		gpiod_set_value(sensor->reset, 1);
+-		udelay(duration);
++		fsleep(duration);
+ 		gpiod_set_value(sensor->reset, 0);
+ 	} else {
+ 		/*
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 09a193bb87df3..49a3dd70ec0f7 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -1536,13 +1536,11 @@ static void buf_cleanup(struct vb2_buffer *vb)
+ 
+ static int start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+-	int ret = 1;
+ 	int seqnr = 0;
+ 	struct bttv_buffer *buf;
+ 	struct bttv *btv = vb2_get_drv_priv(q);
+ 
+-	ret = check_alloc_btres_lock(btv, RESOURCE_VIDEO_STREAM);
+-	if (ret == 0) {
++	if (!check_alloc_btres_lock(btv, RESOURCE_VIDEO_STREAM)) {
+ 		if (btv->field_count)
+ 			seqnr++;
+ 		while (!list_empty(&btv->capture)) {
+@@ -1553,7 +1551,7 @@ static int start_streaming(struct vb2_queue *q, unsigned int count)
+ 			vb2_buffer_done(&buf->vbuf.vb2_buf,
+ 					VB2_BUF_STATE_QUEUED);
+ 		}
+-		return !ret;
++		return -EBUSY;
+ 	}
+ 	if (!vb2_is_streaming(&btv->vbiq)) {
+ 		init_irqreg(btv);
+@@ -2774,6 +2772,27 @@ bttv_irq_wakeup_vbi(struct bttv *btv, struct bttv_buffer *wakeup,
+ 		return;
+ 	wakeup->vbuf.vb2_buf.timestamp = ktime_get_ns();
+ 	wakeup->vbuf.sequence = btv->field_count >> 1;
++
++	/*
++	 * Ugly hack for backwards compatibility.
++	 * Some applications expect that the last 4 bytes of
++	 * the VBI data contain the sequence number.
++	 *
++	 * This makes it possible to associate the VBI data
++	 * with the video frame if you use read() to get the
++	 * VBI data.
++	 */
++	if (vb2_fileio_is_active(wakeup->vbuf.vb2_buf.vb2_queue)) {
++		u32 *vaddr = vb2_plane_vaddr(&wakeup->vbuf.vb2_buf, 0);
++		unsigned long size =
++			vb2_get_plane_payload(&wakeup->vbuf.vb2_buf, 0) / 4;
++
++		if (vaddr && size) {
++			vaddr += size - 1;
++			*vaddr = wakeup->vbuf.sequence;
++		}
++	}
++
+ 	vb2_buffer_done(&wakeup->vbuf.vb2_buf, state);
+ 	if (btv->field_count == 0)
+ 		btor(BT848_INT_VSYNC, BT848_INT_MASK);
+diff --git a/drivers/media/pci/bt8xx/bttv-vbi.c b/drivers/media/pci/bt8xx/bttv-vbi.c
+index ab213e51ec95f..e489a3acb4b98 100644
+--- a/drivers/media/pci/bt8xx/bttv-vbi.c
++++ b/drivers/media/pci/bt8xx/bttv-vbi.c
+@@ -123,14 +123,12 @@ static void buf_cleanup_vbi(struct vb2_buffer *vb)
+ 
+ static int start_streaming_vbi(struct vb2_queue *q, unsigned int count)
+ {
+-	int ret;
+ 	int seqnr = 0;
+ 	struct bttv_buffer *buf;
+ 	struct bttv *btv = vb2_get_drv_priv(q);
+ 
+ 	btv->framedrop = 0;
+-	ret = check_alloc_btres_lock(btv, RESOURCE_VBI);
+-	if (ret == 0) {
++	if (!check_alloc_btres_lock(btv, RESOURCE_VBI)) {
+ 		if (btv->field_count)
+ 			seqnr++;
+ 		while (!list_empty(&btv->vcapture)) {
+@@ -141,13 +139,13 @@ static int start_streaming_vbi(struct vb2_queue *q, unsigned int count)
+ 			vb2_buffer_done(&buf->vbuf.vb2_buf,
+ 					VB2_BUF_STATE_QUEUED);
+ 		}
+-		return !ret;
++		return -EBUSY;
+ 	}
+ 	if (!vb2_is_streaming(&btv->capq)) {
+ 		init_irqreg(btv);
+ 		btv->field_count = 0;
+ 	}
+-	return !ret;
++	return 0;
+ }
+ 
+ static void stop_streaming_vbi(struct vb2_queue *q)
+diff --git a/drivers/media/pci/solo6x10/solo6x10-offsets.h b/drivers/media/pci/solo6x10/solo6x10-offsets.h
+index f414ee1316f29..fdbb817e63601 100644
+--- a/drivers/media/pci/solo6x10/solo6x10-offsets.h
++++ b/drivers/media/pci/solo6x10/solo6x10-offsets.h
+@@ -57,16 +57,16 @@
+ #define SOLO_MP4E_EXT_ADDR(__solo) \
+ 	(SOLO_EREF_EXT_ADDR(__solo) + SOLO_EREF_EXT_AREA(__solo))
+ #define SOLO_MP4E_EXT_SIZE(__solo) \
+-	max((__solo->nr_chans * 0x00080000),				\
+-	    min(((__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo)) -	\
+-		 __SOLO_JPEG_MIN_SIZE(__solo)), 0x00ff0000))
++	clamp(__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo) -	\
++	      __SOLO_JPEG_MIN_SIZE(__solo),			\
++	      __solo->nr_chans * 0x00080000, 0x00ff0000)
+ 
+ #define __SOLO_JPEG_MIN_SIZE(__solo)		(__solo->nr_chans * 0x00080000)
+ #define SOLO_JPEG_EXT_ADDR(__solo) \
+ 		(SOLO_MP4E_EXT_ADDR(__solo) + SOLO_MP4E_EXT_SIZE(__solo))
+ #define SOLO_JPEG_EXT_SIZE(__solo) \
+-	max(__SOLO_JPEG_MIN_SIZE(__solo),				\
+-	    min((__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo)), 0x00ff0000))
++	clamp(__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo),	\
++	      __SOLO_JPEG_MIN_SIZE(__solo), 0x00ff0000)
+ 
+ #define SOLO_SDRAM_END(__solo) \
+ 	(SOLO_JPEG_EXT_ADDR(__solo) + SOLO_JPEG_EXT_SIZE(__solo))
+diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
+index 1af6fc9460d4d..3a2030d02e45e 100644
+--- a/drivers/media/platform/amphion/vpu_core.c
++++ b/drivers/media/platform/amphion/vpu_core.c
+@@ -642,7 +642,7 @@ static int vpu_core_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	core->type = core->res->type;
+-	core->id = of_alias_get_id(dev->of_node, "vpu_core");
++	core->id = of_alias_get_id(dev->of_node, "vpu-core");
+ 	if (core->id < 0) {
+ 		dev_err(dev, "can't get vpu core id\n");
+ 		return core->id;
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+index 7194f88edc0fb..60425c99a2b8b 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+@@ -1403,7 +1403,6 @@ static void mtk_jpeg_remove(struct platform_device *pdev)
+ {
+ 	struct mtk_jpeg_dev *jpeg = platform_get_drvdata(pdev);
+ 
+-	cancel_delayed_work_sync(&jpeg->job_timeout_work);
+ 	pm_runtime_disable(&pdev->dev);
+ 	video_unregister_device(jpeg->vdev);
+ 	v4l2_m2m_release(jpeg->m2m_dev);
+diff --git a/drivers/media/platform/nxp/imx-mipi-csis.c b/drivers/media/platform/nxp/imx-mipi-csis.c
+index 6cb20b45e0a13..b08f6d2e7516d 100644
+--- a/drivers/media/platform/nxp/imx-mipi-csis.c
++++ b/drivers/media/platform/nxp/imx-mipi-csis.c
+@@ -1435,24 +1435,18 @@ static int mipi_csis_probe(struct platform_device *pdev)
+ 	/* Reset PHY and enable the clocks. */
+ 	mipi_csis_phy_reset(csis);
+ 
+-	ret = mipi_csis_clk_enable(csis);
+-	if (ret < 0) {
+-		dev_err(csis->dev, "failed to enable clocks: %d\n", ret);
+-		return ret;
+-	}
+-
+ 	/* Now that the hardware is initialized, request the interrupt. */
+ 	ret = devm_request_irq(dev, irq, mipi_csis_irq_handler, 0,
+ 			       dev_name(dev), csis);
+ 	if (ret) {
+ 		dev_err(dev, "Interrupt request failed\n");
+-		goto err_disable_clock;
++		return ret;
+ 	}
+ 
+ 	/* Initialize and register the subdev. */
+ 	ret = mipi_csis_subdev_init(csis);
+ 	if (ret < 0)
+-		goto err_disable_clock;
++		return ret;
+ 
+ 	platform_set_drvdata(pdev, &csis->sd);
+ 
+@@ -1486,8 +1480,6 @@ err_cleanup:
+ 	v4l2_async_nf_unregister(&csis->notifier);
+ 	v4l2_async_nf_cleanup(&csis->notifier);
+ 	v4l2_async_unregister_subdev(&csis->sd);
+-err_disable_clock:
+-	mipi_csis_clk_disable(csis);
+ 
+ 	return ret;
+ }
+@@ -1502,9 +1494,10 @@ static void mipi_csis_remove(struct platform_device *pdev)
+ 	v4l2_async_nf_cleanup(&csis->notifier);
+ 	v4l2_async_unregister_subdev(&csis->sd);
+ 
++	if (!pm_runtime_enabled(&pdev->dev))
++		mipi_csis_runtime_suspend(&pdev->dev);
++
+ 	pm_runtime_disable(&pdev->dev);
+-	mipi_csis_runtime_suspend(&pdev->dev);
+-	mipi_csis_clk_disable(csis);
+ 	v4l2_subdev_cleanup(&csis->sd);
+ 	media_entity_cleanup(&csis->sd.entity);
+ 	pm_runtime_set_suspended(&pdev->dev);
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+index c41abd2833f12..894d5afaff4e1 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+@@ -582,7 +582,7 @@ static int rkisp1_probe(struct platform_device *pdev)
+ 
+ 	ret = v4l2_device_register(rkisp1->dev, &rkisp1->v4l2_dev);
+ 	if (ret)
+-		goto err_pm_runtime_disable;
++		goto err_media_dev_cleanup;
+ 
+ 	ret = media_device_register(&rkisp1->media_dev);
+ 	if (ret) {
+@@ -617,6 +617,8 @@ err_unreg_media_dev:
+ 	media_device_unregister(&rkisp1->media_dev);
+ err_unreg_v4l2_dev:
+ 	v4l2_device_unregister(&rkisp1->v4l2_dev);
++err_media_dev_cleanup:
++	media_device_cleanup(&rkisp1->media_dev);
+ err_pm_runtime_disable:
+ 	pm_runtime_disable(&pdev->dev);
+ 	return ret;
+@@ -637,6 +639,8 @@ static void rkisp1_remove(struct platform_device *pdev)
+ 	media_device_unregister(&rkisp1->media_dev);
+ 	v4l2_device_unregister(&rkisp1->v4l2_dev);
+ 
++	media_device_cleanup(&rkisp1->media_dev);
++
+ 	pm_runtime_disable(&pdev->dev);
+ }
+ 
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
+index 88ca8b2283b7f..45d1ab96fc6e7 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
+@@ -933,6 +933,7 @@ void rkisp1_isp_unregister(struct rkisp1_device *rkisp1)
+ 		return;
+ 
+ 	v4l2_device_unregister_subdev(&isp->sd);
++	v4l2_subdev_cleanup(&isp->sd);
+ 	media_entity_cleanup(&isp->sd.entity);
+ }
+ 
+diff --git a/drivers/media/platform/verisilicon/hantro_drv.c b/drivers/media/platform/verisilicon/hantro_drv.c
+index a9fa05ac56a9b..3a2a0f28cbfe0 100644
+--- a/drivers/media/platform/verisilicon/hantro_drv.c
++++ b/drivers/media/platform/verisilicon/hantro_drv.c
+@@ -905,6 +905,8 @@ static int hantro_add_func(struct hantro_dev *vpu, unsigned int funcid)
+ 
+ 	if (funcid == MEDIA_ENT_F_PROC_VIDEO_ENCODER) {
+ 		vpu->encoder = func;
++		v4l2_disable_ioctl(vfd, VIDIOC_TRY_DECODER_CMD);
++		v4l2_disable_ioctl(vfd, VIDIOC_DECODER_CMD);
+ 	} else {
+ 		vpu->decoder = func;
+ 		v4l2_disable_ioctl(vfd, VIDIOC_TRY_ENCODER_CMD);
+diff --git a/drivers/media/platform/verisilicon/hantro_v4l2.c b/drivers/media/platform/verisilicon/hantro_v4l2.c
+index b3ae037a50f61..db145519fc5d3 100644
+--- a/drivers/media/platform/verisilicon/hantro_v4l2.c
++++ b/drivers/media/platform/verisilicon/hantro_v4l2.c
+@@ -785,6 +785,9 @@ const struct v4l2_ioctl_ops hantro_ioctl_ops = {
+ 	.vidioc_g_selection = vidioc_g_selection,
+ 	.vidioc_s_selection = vidioc_s_selection,
+ 
++	.vidioc_decoder_cmd = v4l2_m2m_ioctl_stateless_decoder_cmd,
++	.vidioc_try_decoder_cmd = v4l2_m2m_ioctl_stateless_try_decoder_cmd,
++
+ 	.vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd,
+ 	.vidioc_encoder_cmd = vidioc_encoder_cmd,
+ };
+diff --git a/drivers/media/test-drivers/visl/visl-video.c b/drivers/media/test-drivers/visl/visl-video.c
+index 7cac6a6456eb6..9303a3e118d77 100644
+--- a/drivers/media/test-drivers/visl/visl-video.c
++++ b/drivers/media/test-drivers/visl/visl-video.c
+@@ -525,6 +525,9 @@ const struct v4l2_ioctl_ops visl_ioctl_ops = {
+ 	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
+ 	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
+ 
++	.vidioc_decoder_cmd		= v4l2_m2m_ioctl_stateless_decoder_cmd,
++	.vidioc_try_decoder_cmd		= v4l2_m2m_ioctl_stateless_try_decoder_cmd,
++
+ 	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
+ 	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
+ };
+diff --git a/drivers/media/usb/cx231xx/cx231xx-core.c b/drivers/media/usb/cx231xx/cx231xx-core.c
+index 7b7e2a26ef93b..d8312201694f8 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-core.c
++++ b/drivers/media/usb/cx231xx/cx231xx-core.c
+@@ -1023,6 +1023,7 @@ int cx231xx_init_isoc(struct cx231xx *dev, int max_packets,
+ 	if (!dev->video_mode.isoc_ctl.urb) {
+ 		dev_err(dev->dev,
+ 			"cannot alloc memory for usb buffers\n");
++		kfree(dma_q->p_left_data);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -1032,6 +1033,7 @@ int cx231xx_init_isoc(struct cx231xx *dev, int max_packets,
+ 		dev_err(dev->dev,
+ 			"cannot allocate memory for usbtransfer\n");
+ 		kfree(dev->video_mode.isoc_ctl.urb);
++		kfree(dma_q->p_left_data);
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-context.c b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+index 14170a5d72b35..1764674de98bc 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-context.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+@@ -268,7 +268,8 @@ void pvr2_context_disconnect(struct pvr2_context *mp)
+ {
+ 	pvr2_hdw_disconnect(mp->hdw);
+ 	mp->disconnect_flag = !0;
+-	pvr2_context_notify(mp);
++	if (!pvr2_context_shutok())
++		pvr2_context_notify(mp);
+ }
+ 
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
+index 091e8cf4114ba..8cfd593d293d1 100644
+--- a/drivers/media/v4l2-core/v4l2-async.c
++++ b/drivers/media/v4l2-core/v4l2-async.c
+@@ -880,7 +880,6 @@ void v4l2_async_unregister_subdev(struct v4l2_subdev *sd)
+ 				  &asc->notifier->waiting_list);
+ 
+ 			v4l2_async_unbind_subdev_one(asc->notifier, asc);
+-			list_del(&asc->asc_subdev_entry);
+ 		}
+ 	}
+ 
+diff --git a/drivers/mfd/cs42l43-sdw.c b/drivers/mfd/cs42l43-sdw.c
+index 7392b3d2e6b96..4be4df9dd8cf1 100644
+--- a/drivers/mfd/cs42l43-sdw.c
++++ b/drivers/mfd/cs42l43-sdw.c
+@@ -17,13 +17,12 @@
+ 
+ #include "cs42l43.h"
+ 
+-enum cs42l43_sdw_ports {
+-	CS42L43_DMIC_DEC_ASP_PORT = 1,
+-	CS42L43_SPK_TX_PORT,
+-	CS42L43_SPDIF_HP_PORT,
+-	CS42L43_SPK_RX_PORT,
+-	CS42L43_ASP_PORT,
+-};
++#define CS42L43_SDW_PORT(port, chans) { \
++	.num = port, \
++	.max_ch = chans, \
++	.type = SDW_DPN_FULL, \
++	.max_word = 24, \
++}
+ 
+ static const struct regmap_config cs42l43_sdw_regmap = {
+ 	.reg_bits		= 32,
+@@ -42,65 +41,48 @@ static const struct regmap_config cs42l43_sdw_regmap = {
+ 	.num_reg_defaults	= ARRAY_SIZE(cs42l43_reg_default),
+ };
+ 
++static const struct sdw_dpn_prop cs42l43_src_port_props[] = {
++	CS42L43_SDW_PORT(1, 4),
++	CS42L43_SDW_PORT(2, 2),
++	CS42L43_SDW_PORT(3, 2),
++	CS42L43_SDW_PORT(4, 2),
++};
++
++static const struct sdw_dpn_prop cs42l43_sink_port_props[] = {
++	CS42L43_SDW_PORT(5, 2),
++	CS42L43_SDW_PORT(6, 2),
++	CS42L43_SDW_PORT(7, 2),
++};
++
+ static int cs42l43_read_prop(struct sdw_slave *sdw)
+ {
+ 	struct sdw_slave_prop *prop = &sdw->prop;
+ 	struct device *dev = &sdw->dev;
+-	struct sdw_dpn_prop *dpn;
+-	unsigned long addr;
+-	int nval;
+ 	int i;
+-	u32 bit;
+ 
+ 	prop->use_domain_irq = true;
+ 	prop->paging_support = true;
+ 	prop->wake_capable = true;
+-	prop->source_ports = BIT(CS42L43_DMIC_DEC_ASP_PORT) | BIT(CS42L43_SPK_TX_PORT);
+-	prop->sink_ports = BIT(CS42L43_SPDIF_HP_PORT) |
+-			   BIT(CS42L43_SPK_RX_PORT) | BIT(CS42L43_ASP_PORT);
+ 	prop->quirks = SDW_SLAVE_QUIRKS_INVALID_INITIAL_PARITY;
+ 	prop->scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY |
+ 			      SDW_SCP_INT1_IMPL_DEF;
+ 
+-	nval = hweight32(prop->source_ports);
+-	prop->src_dpn_prop = devm_kcalloc(dev, nval, sizeof(*prop->src_dpn_prop),
+-					  GFP_KERNEL);
++	for (i = 0; i < ARRAY_SIZE(cs42l43_src_port_props); i++)
++		prop->source_ports |= BIT(cs42l43_src_port_props[i].num);
++
++	prop->src_dpn_prop = devm_kmemdup(dev, cs42l43_src_port_props,
++					  sizeof(cs42l43_src_port_props), GFP_KERNEL);
+ 	if (!prop->src_dpn_prop)
+ 		return -ENOMEM;
+ 
+-	i = 0;
+-	dpn = prop->src_dpn_prop;
+-	addr = prop->source_ports;
+-	for_each_set_bit(bit, &addr, 32) {
+-		dpn[i].num = bit;
+-		dpn[i].max_ch = 2;
+-		dpn[i].type = SDW_DPN_FULL;
+-		dpn[i].max_word = 24;
+-		i++;
+-	}
+-	/*
+-	 * All ports are 2 channels max, except the first one,
+-	 * CS42L43_DMIC_DEC_ASP_PORT.
+-	 */
+-	dpn[CS42L43_DMIC_DEC_ASP_PORT].max_ch = 4;
++	for (i = 0; i < ARRAY_SIZE(cs42l43_sink_port_props); i++)
++		prop->sink_ports |= BIT(cs42l43_sink_port_props[i].num);
+ 
+-	nval = hweight32(prop->sink_ports);
+-	prop->sink_dpn_prop = devm_kcalloc(dev, nval, sizeof(*prop->sink_dpn_prop),
+-					   GFP_KERNEL);
++	prop->sink_dpn_prop = devm_kmemdup(dev, cs42l43_sink_port_props,
++					   sizeof(cs42l43_sink_port_props), GFP_KERNEL);
+ 	if (!prop->sink_dpn_prop)
+ 		return -ENOMEM;
+ 
+-	i = 0;
+-	dpn = prop->sink_dpn_prop;
+-	addr = prop->sink_ports;
+-	for_each_set_bit(bit, &addr, 32) {
+-		dpn[i].num = bit;
+-		dpn[i].max_ch = 2;
+-		dpn[i].type = SDW_DPN_FULL;
+-		dpn[i].max_word = 24;
+-		i++;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
+index 9591b354072ad..00e7b578bb3e8 100644
+--- a/drivers/mfd/intel-lpss.c
++++ b/drivers/mfd/intel-lpss.c
+@@ -301,8 +301,8 @@ static int intel_lpss_register_clock_divider(struct intel_lpss *lpss,
+ 
+ 	snprintf(name, sizeof(name), "%s-div", devname);
+ 	tmp = clk_register_fractional_divider(NULL, name, __clk_get_name(tmp),
++					      0, lpss->priv, 1, 15, 16, 15,
+ 					      CLK_FRAC_DIVIDER_POWER_OF_TWO_PS,
+-					      lpss->priv, 1, 15, 16, 15, 0,
+ 					      NULL);
+ 	if (IS_ERR(tmp))
+ 		return PTR_ERR(tmp);
+diff --git a/drivers/mfd/rk8xx-core.c b/drivers/mfd/rk8xx-core.c
+index c47164a3ec1da..b1ffc3b9e2be7 100644
+--- a/drivers/mfd/rk8xx-core.c
++++ b/drivers/mfd/rk8xx-core.c
+@@ -53,76 +53,68 @@ static const struct resource rk817_charger_resources[] = {
+ };
+ 
+ static const struct mfd_cell rk805s[] = {
+-	{ .name = "rk808-clkout", .id = PLATFORM_DEVID_NONE, },
+-	{ .name = "rk808-regulator", .id = PLATFORM_DEVID_NONE, },
+-	{ .name = "rk805-pinctrl", .id = PLATFORM_DEVID_NONE, },
++	{ .name = "rk808-clkout", },
++	{ .name = "rk808-regulator", },
++	{ .name = "rk805-pinctrl", },
+ 	{
+ 		.name = "rk808-rtc",
+ 		.num_resources = ARRAY_SIZE(rtc_resources),
+ 		.resources = &rtc_resources[0],
+-		.id = PLATFORM_DEVID_NONE,
+ 	},
+ 	{	.name = "rk805-pwrkey",
+ 		.num_resources = ARRAY_SIZE(rk805_key_resources),
+ 		.resources = &rk805_key_resources[0],
+-		.id = PLATFORM_DEVID_NONE,
+ 	},
+ };
+ 
+ static const struct mfd_cell rk806s[] = {
+-	{ .name = "rk805-pinctrl", .id = PLATFORM_DEVID_AUTO, },
+-	{ .name = "rk808-regulator", .id = PLATFORM_DEVID_AUTO, },
++	{ .name = "rk805-pinctrl", },
++	{ .name = "rk808-regulator", },
+ 	{
+ 		.name = "rk805-pwrkey",
+ 		.resources = rk806_pwrkey_resources,
+ 		.num_resources = ARRAY_SIZE(rk806_pwrkey_resources),
+-		.id = PLATFORM_DEVID_AUTO,
+ 	},
+ };
+ 
+ static const struct mfd_cell rk808s[] = {
+-	{ .name = "rk808-clkout", .id = PLATFORM_DEVID_NONE, },
+-	{ .name = "rk808-regulator", .id = PLATFORM_DEVID_NONE, },
++	{ .name = "rk808-clkout", },
++	{ .name = "rk808-regulator", },
+ 	{
+ 		.name = "rk808-rtc",
+ 		.num_resources = ARRAY_SIZE(rtc_resources),
+ 		.resources = rtc_resources,
+-		.id = PLATFORM_DEVID_NONE,
+ 	},
+ };
+ 
+ static const struct mfd_cell rk817s[] = {
+-	{ .name = "rk808-clkout", .id = PLATFORM_DEVID_NONE, },
+-	{ .name = "rk808-regulator", .id = PLATFORM_DEVID_NONE, },
++	{ .name = "rk808-clkout", },
++	{ .name = "rk808-regulator", },
+ 	{
+ 		.name = "rk805-pwrkey",
+ 		.num_resources = ARRAY_SIZE(rk817_pwrkey_resources),
+ 		.resources = &rk817_pwrkey_resources[0],
+-		.id = PLATFORM_DEVID_NONE,
+ 	},
+ 	{
+ 		.name = "rk808-rtc",
+ 		.num_resources = ARRAY_SIZE(rk817_rtc_resources),
+ 		.resources = &rk817_rtc_resources[0],
+-		.id = PLATFORM_DEVID_NONE,
+ 	},
+-	{ .name = "rk817-codec", .id = PLATFORM_DEVID_NONE, },
++	{ .name = "rk817-codec", },
+ 	{
+ 		.name = "rk817-charger",
+ 		.num_resources = ARRAY_SIZE(rk817_charger_resources),
+ 		.resources = &rk817_charger_resources[0],
+-		.id = PLATFORM_DEVID_NONE,
+ 	},
+ };
+ 
+ static const struct mfd_cell rk818s[] = {
+-	{ .name = "rk808-clkout", .id = PLATFORM_DEVID_NONE, },
+-	{ .name = "rk808-regulator", .id = PLATFORM_DEVID_NONE, },
++	{ .name = "rk808-clkout", },
++	{ .name = "rk808-regulator", },
+ 	{
+ 		.name = "rk808-rtc",
+ 		.num_resources = ARRAY_SIZE(rtc_resources),
+ 		.resources = rtc_resources,
+-		.id = PLATFORM_DEVID_NONE,
+ 	},
+ };
+ 
+@@ -684,7 +676,7 @@ int rk8xx_probe(struct device *dev, int variant, unsigned int irq, struct regmap
+ 					     pre_init_reg[i].addr);
+ 	}
+ 
+-	ret = devm_mfd_add_devices(dev, 0, cells, nr_cells, NULL, 0,
++	ret = devm_mfd_add_devices(dev, PLATFORM_DEVID_AUTO, cells, nr_cells, NULL, 0,
+ 			      regmap_irq_get_domain(rk808->irq_data));
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "failed to add MFD devices\n");
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index 57b29c3251312..c9550368d9ea5 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -105,6 +105,10 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ 	}
+ 
+ 	syscon_config.name = kasprintf(GFP_KERNEL, "%pOFn@%pa", np, &res.start);
++	if (!syscon_config.name) {
++		ret = -ENOMEM;
++		goto err_regmap;
++	}
+ 	syscon_config.reg_stride = reg_io_width;
+ 	syscon_config.val_bits = reg_io_width * 8;
+ 	syscon_config.max_register = resource_size(&res) - reg_io_width;
+diff --git a/drivers/mfd/tps6594-core.c b/drivers/mfd/tps6594-core.c
+index 0fb9c5cf213a4..783ee59901e86 100644
+--- a/drivers/mfd/tps6594-core.c
++++ b/drivers/mfd/tps6594-core.c
+@@ -433,6 +433,9 @@ int tps6594_device_init(struct tps6594 *tps, bool enable_crc)
+ 	tps6594_irq_chip.name = devm_kasprintf(dev, GFP_KERNEL, "%s-%ld-0x%02x",
+ 					       dev->driver->name, tps->chip_id, tps->reg);
+ 
++	if (!tps6594_irq_chip.name)
++		return -ENOMEM;
++
+ 	ret = devm_regmap_add_irq_chip(dev, tps->regmap, tps->irq, IRQF_SHARED | IRQF_ONESHOT,
+ 				       0, &tps6594_irq_chip, &tps->irq_data);
+ 	if (ret)
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index 58bd5fe4cd255..81f2c4e05287b 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -1026,14 +1026,15 @@ config MMC_SDHCI_XENON
+ 
+ config MMC_SDHCI_OMAP
+ 	tristate "TI SDHCI Controller Support"
++	depends on ARCH_OMAP2PLUS || ARCH_KEYSTONE || COMPILE_TEST
+ 	depends on MMC_SDHCI_PLTFM && OF
+ 	select THERMAL
+ 	imply TI_SOC_THERMAL
+ 	select MMC_SDHCI_EXTERNAL_DMA if DMA_ENGINE
+ 	help
+ 	  This selects the Secure Digital Host Controller Interface (SDHCI)
+-	  support present in TI's DRA7 SOCs. The controller supports
+-	  SD/MMC/SDIO devices.
++	  support present in TI's Keystone/OMAP2+/DRA7 SOCs. The controller
++	  supports SD/MMC/SDIO devices.
+ 
+ 	  If you have a controller with this interface, say Y or M here.
+ 
+@@ -1041,14 +1042,15 @@ config MMC_SDHCI_OMAP
+ 
+ config MMC_SDHCI_AM654
+ 	tristate "Support for the SDHCI Controller in TI's AM654 SOCs"
++	depends on ARCH_K3 || COMPILE_TEST
+ 	depends on MMC_SDHCI_PLTFM && OF
+ 	select MMC_SDHCI_IO_ACCESSORS
+ 	select MMC_CQHCI
+ 	select REGMAP_MMIO
+ 	help
+ 	  This selects the Secure Digital Host Controller Interface (SDHCI)
+-	  support present in TI's AM654 SOCs. The controller supports
+-	  SD/MMC/SDIO devices.
++	  support present in TI's AM65x/AM64x/AM62x/J721E SOCs. The controller
++	  supports SD/MMC/SDIO devices.
+ 
+ 	  If you have a controller with this interface, say Y or M here.
+ 
+diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
+index ff18636e08897..5bc32108ca03b 100644
+--- a/drivers/mtd/mtd_blkdevs.c
++++ b/drivers/mtd/mtd_blkdevs.c
+@@ -463,7 +463,7 @@ static void blktrans_notify_add(struct mtd_info *mtd)
+ {
+ 	struct mtd_blktrans_ops *tr;
+ 
+-	if (mtd->type == MTD_ABSENT)
++	if (mtd->type == MTD_ABSENT || mtd->type == MTD_UBIVOLUME)
+ 		return;
+ 
+ 	list_for_each_entry(tr, &blktrans_majors, list)
+@@ -503,7 +503,7 @@ int register_mtd_blktrans(struct mtd_blktrans_ops *tr)
+ 	mutex_lock(&mtd_table_mutex);
+ 	list_add(&tr->list, &blktrans_majors);
+ 	mtd_for_each_device(mtd)
+-		if (mtd->type != MTD_ABSENT)
++		if (mtd->type != MTD_ABSENT && mtd->type != MTD_UBIVOLUME)
+ 			tr->add_mtd(tr, mtd);
+ 	mutex_unlock(&mtd_table_mutex);
+ 	return 0;
+diff --git a/drivers/mtd/nand/raw/fsl_ifc_nand.c b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+index 20bb1e0cb5ebf..f0e2318ce088c 100644
+--- a/drivers/mtd/nand/raw/fsl_ifc_nand.c
++++ b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+@@ -21,7 +21,7 @@
+ 
+ #define ERR_BYTE		0xFF /* Value returned for read
+ 					bytes when read failed	*/
+-#define IFC_TIMEOUT_MSECS	500  /* Maximum number of mSecs to wait
++#define IFC_TIMEOUT_MSECS	1000 /* Maximum timeout to wait
+ 					for IFC NAND Machine	*/
+ 
+ struct fsl_ifc_ctrl;
+diff --git a/drivers/net/amt.c b/drivers/net/amt.c
+index 53415e83821ce..68e79b1272f6b 100644
+--- a/drivers/net/amt.c
++++ b/drivers/net/amt.c
+@@ -11,7 +11,7 @@
+ #include <linux/net.h>
+ #include <linux/igmp.h>
+ #include <linux/workqueue.h>
+-#include <net/sch_generic.h>
++#include <net/pkt_sched.h>
+ #include <net/net_namespace.h>
+ #include <net/ip.h>
+ #include <net/udp.h>
+@@ -80,11 +80,11 @@ static struct mld2_grec mldv2_zero_grec;
+ 
+ static struct amt_skb_cb *amt_skb_cb(struct sk_buff *skb)
+ {
+-	BUILD_BUG_ON(sizeof(struct amt_skb_cb) + sizeof(struct qdisc_skb_cb) >
++	BUILD_BUG_ON(sizeof(struct amt_skb_cb) + sizeof(struct tc_skb_cb) >
+ 		     sizeof_field(struct sk_buff, cb));
+ 
+ 	return (struct amt_skb_cb *)((void *)skb->cb +
+-		sizeof(struct qdisc_skb_cb));
++		sizeof(struct tc_skb_cb));
+ }
+ 
+ static void __amt_source_gc_work(void)
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index e6f29e4e508c1..571f037fe649a 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -1135,6 +1135,8 @@ static int vsc73xx_gpio_probe(struct vsc73xx *vsc)
+ 
+ 	vsc->gc.label = devm_kasprintf(vsc->dev, GFP_KERNEL, "VSC%04x",
+ 				       vsc->chipid);
++	if (!vsc->gc.label)
++		return -ENOMEM;
+ 	vsc->gc.ngpio = 4;
+ 	vsc->gc.owner = THIS_MODULE;
+ 	vsc->gc.parent = vsc->dev;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 71f405f8a6fee..e6b1ce76ca8a6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -2692,6 +2692,8 @@ static int ice_ptp_register_auxbus_driver(struct ice_pf *pf)
+ 	name = devm_kasprintf(dev, GFP_KERNEL, "ptp_aux_dev_%u_%u_clk%u",
+ 			      pf->pdev->bus->number, PCI_SLOT(pf->pdev->devfn),
+ 			      ice_get_ptp_src_clock_index(&pf->hw));
++	if (!name)
++		return -ENOMEM;
+ 
+ 	aux_driver->name = name;
+ 	aux_driver->shutdown = ice_ptp_auxbus_shutdown;
+@@ -2938,6 +2940,8 @@ static int ice_ptp_create_auxbus_device(struct ice_pf *pf)
+ 	name = devm_kasprintf(dev, GFP_KERNEL, "ptp_aux_dev_%u_%u_clk%u",
+ 			      pf->pdev->bus->number, PCI_SLOT(pf->pdev->devfn),
+ 			      ice_get_ptp_src_clock_index(&pf->hw));
++	if (!name)
++		return -ENOMEM;
+ 
+ 	aux_dev->name = name;
+ 	aux_dev->id = id;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+index 4728ba34b0e34..76218f1cb4595 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+@@ -506,6 +506,7 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ 	rpm_t *rpm = rpmd;
+ 	u8 num_lmacs;
+ 	u32 fifo_len;
++	u16 max_lmac;
+ 
+ 	lmac_info = rpm_read(rpm, 0, RPM2_CMRX_RX_LMACS);
+ 	/* LMACs are divided into two groups and each group
+@@ -513,7 +514,11 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ 	 * Group0 lmac_id range {0..3}
+ 	 * Group1 lmac_id range {4..7}
+ 	 */
+-	fifo_len = rpm->mac_ops->fifo_len / 2;
++	max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF;
++	if (max_lmac > 4)
++		fifo_len = rpm->mac_ops->fifo_len / 2;
++	else
++		fifo_len = rpm->mac_ops->fifo_len;
+ 
+ 	if (lmac_id < 4) {
+ 		num_lmacs = hweight8(lmac_info & 0xF);
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+index 954ba0826c610..3d09fa54598f1 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+@@ -130,9 +130,15 @@ static int mlxbf_gige_open(struct net_device *netdev)
+ {
+ 	struct mlxbf_gige *priv = netdev_priv(netdev);
+ 	struct phy_device *phydev = netdev->phydev;
++	u64 control;
+ 	u64 int_en;
+ 	int err;
+ 
++	/* Perform general init of GigE block */
++	control = readq(priv->base + MLXBF_GIGE_CONTROL);
++	control |= MLXBF_GIGE_CONTROL_PORT_EN;
++	writeq(control, priv->base + MLXBF_GIGE_CONTROL);
++
+ 	err = mlxbf_gige_request_irqs(priv);
+ 	if (err)
+ 		return err;
+@@ -147,14 +153,14 @@ static int mlxbf_gige_open(struct net_device *netdev)
+ 	 */
+ 	priv->valid_polarity = 0;
+ 
+-	err = mlxbf_gige_rx_init(priv);
++	phy_start(phydev);
++
++	err = mlxbf_gige_tx_init(priv);
+ 	if (err)
+ 		goto free_irqs;
+-	err = mlxbf_gige_tx_init(priv);
++	err = mlxbf_gige_rx_init(priv);
+ 	if (err)
+-		goto rx_deinit;
+-
+-	phy_start(phydev);
++		goto tx_deinit;
+ 
+ 	netif_napi_add(netdev, &priv->napi, mlxbf_gige_poll);
+ 	napi_enable(&priv->napi);
+@@ -176,8 +182,8 @@ static int mlxbf_gige_open(struct net_device *netdev)
+ 
+ 	return 0;
+ 
+-rx_deinit:
+-	mlxbf_gige_rx_deinit(priv);
++tx_deinit:
++	mlxbf_gige_tx_deinit(priv);
+ 
+ free_irqs:
+ 	mlxbf_gige_free_irqs(priv);
+@@ -365,7 +371,6 @@ static int mlxbf_gige_probe(struct platform_device *pdev)
+ 	void __iomem *plu_base;
+ 	void __iomem *base;
+ 	int addr, phy_irq;
+-	u64 control;
+ 	int err;
+ 
+ 	base = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MAC);
+@@ -380,11 +385,6 @@ static int mlxbf_gige_probe(struct platform_device *pdev)
+ 	if (IS_ERR(plu_base))
+ 		return PTR_ERR(plu_base);
+ 
+-	/* Perform general init of GigE block */
+-	control = readq(base + MLXBF_GIGE_CONTROL);
+-	control |= MLXBF_GIGE_CONTROL_PORT_EN;
+-	writeq(control, base + MLXBF_GIGE_CONTROL);
+-
+ 	netdev = devm_alloc_etherdev(&pdev->dev, sizeof(*priv));
+ 	if (!netdev)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
+index 227d01cace3f0..6999843584934 100644
+--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
++++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
+@@ -142,6 +142,9 @@ int mlxbf_gige_rx_init(struct mlxbf_gige *priv)
+ 	writeq(MLXBF_GIGE_RX_MAC_FILTER_COUNT_PASS_EN,
+ 	       priv->base + MLXBF_GIGE_RX_MAC_FILTER_COUNT_PASS);
+ 
++	writeq(ilog2(priv->rx_q_entries),
++	       priv->base + MLXBF_GIGE_RX_WQE_SIZE_LOG2);
++
+ 	/* Clear MLXBF_GIGE_INT_MASK 'receive pkt' bit to
+ 	 * indicate readiness to receive interrupts
+ 	 */
+@@ -154,9 +157,6 @@ int mlxbf_gige_rx_init(struct mlxbf_gige *priv)
+ 	data |= MLXBF_GIGE_RX_DMA_EN;
+ 	writeq(data, priv->base + MLXBF_GIGE_RX_DMA);
+ 
+-	writeq(ilog2(priv->rx_q_entries),
+-	       priv->base + MLXBF_GIGE_RX_WQE_SIZE_LOG2);
+-
+ 	return 0;
+ 
+ free_wqe_and_skb:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+index 4c98950380d53..d231f4d2888be 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+@@ -301,6 +301,7 @@ mlxsw_sp_acl_erp_table_alloc(struct mlxsw_sp_acl_erp_core *erp_core,
+ 			     unsigned long *p_index)
+ {
+ 	unsigned int num_rows, entry_size;
++	unsigned long index;
+ 
+ 	/* We only allow allocations of entire rows */
+ 	if (num_erps % erp_core->num_erp_banks != 0)
+@@ -309,10 +310,11 @@ mlxsw_sp_acl_erp_table_alloc(struct mlxsw_sp_acl_erp_core *erp_core,
+ 	entry_size = erp_core->erpt_entries_size[region_type];
+ 	num_rows = num_erps / erp_core->num_erp_banks;
+ 
+-	*p_index = gen_pool_alloc(erp_core->erp_tables, num_rows * entry_size);
+-	if (*p_index == 0)
++	index = gen_pool_alloc(erp_core->erp_tables, num_rows * entry_size);
++	if (!index)
+ 		return -ENOBUFS;
+-	*p_index -= MLXSW_SP_ACL_ERP_GENALLOC_OFFSET;
++
++	*p_index = index - MLXSW_SP_ACL_ERP_GENALLOC_OFFSET;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+index d50786b0a6ce4..50ea1eff02b2f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+@@ -681,13 +681,13 @@ static void
+ mlxsw_sp_acl_tcam_region_destroy(struct mlxsw_sp *mlxsw_sp,
+ 				 struct mlxsw_sp_acl_tcam_region *region)
+ {
++	struct mlxsw_sp_acl_tcam *tcam = mlxsw_sp_acl_to_tcam(mlxsw_sp->acl);
+ 	const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops;
+ 
+ 	ops->region_fini(mlxsw_sp, region->priv);
+ 	mlxsw_sp_acl_tcam_region_disable(mlxsw_sp, region);
+ 	mlxsw_sp_acl_tcam_region_free(mlxsw_sp, region);
+-	mlxsw_sp_acl_tcam_region_id_put(region->group->tcam,
+-					region->id);
++	mlxsw_sp_acl_tcam_region_id_put(tcam, region->id);
+ 	kfree(region);
+ }
+ 
+@@ -1564,6 +1564,8 @@ int mlxsw_sp_acl_tcam_init(struct mlxsw_sp *mlxsw_sp,
+ 	tcam->max_groups = max_groups;
+ 	tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core,
+ 						  ACL_MAX_GROUP_SIZE);
++	tcam->max_group_size = min_t(unsigned int, tcam->max_group_size,
++				     MLXSW_REG_PAGT_ACL_MAX_NUM);
+ 
+ 	err = ops->init(mlxsw_sp, tcam->priv, tcam);
+ 	if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 82a95125d9caf..39e6d941fd917 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -11458,6 +11458,13 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
+ 	if (err)
+ 		goto err_register_netevent_notifier;
+ 
++	mlxsw_sp->router->netdevice_nb.notifier_call =
++		mlxsw_sp_router_netdevice_event;
++	err = register_netdevice_notifier_net(mlxsw_sp_net(mlxsw_sp),
++					      &mlxsw_sp->router->netdevice_nb);
++	if (err)
++		goto err_register_netdev_notifier;
++
+ 	mlxsw_sp->router->nexthop_nb.notifier_call =
+ 		mlxsw_sp_nexthop_obj_event;
+ 	err = register_nexthop_notifier(mlxsw_sp_net(mlxsw_sp),
+@@ -11473,22 +11480,15 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp,
+ 	if (err)
+ 		goto err_register_fib_notifier;
+ 
+-	mlxsw_sp->router->netdevice_nb.notifier_call =
+-		mlxsw_sp_router_netdevice_event;
+-	err = register_netdevice_notifier_net(mlxsw_sp_net(mlxsw_sp),
+-					      &mlxsw_sp->router->netdevice_nb);
+-	if (err)
+-		goto err_register_netdev_notifier;
+-
+ 	return 0;
+ 
+-err_register_netdev_notifier:
+-	unregister_fib_notifier(mlxsw_sp_net(mlxsw_sp),
+-				&mlxsw_sp->router->fib_nb);
+ err_register_fib_notifier:
+ 	unregister_nexthop_notifier(mlxsw_sp_net(mlxsw_sp),
+ 				    &mlxsw_sp->router->nexthop_nb);
+ err_register_nexthop_notifier:
++	unregister_netdevice_notifier_net(mlxsw_sp_net(mlxsw_sp),
++					  &router->netdevice_nb);
++err_register_netdev_notifier:
+ 	unregister_netevent_notifier(&mlxsw_sp->router->netevent_nb);
+ err_register_netevent_notifier:
+ 	unregister_inet6addr_validator_notifier(&router->inet6addr_valid_nb);
+@@ -11536,11 +11536,11 @@ void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp)
+ {
+ 	struct mlxsw_sp_router *router = mlxsw_sp->router;
+ 
+-	unregister_netdevice_notifier_net(mlxsw_sp_net(mlxsw_sp),
+-					  &router->netdevice_nb);
+ 	unregister_fib_notifier(mlxsw_sp_net(mlxsw_sp), &router->fib_nb);
+ 	unregister_nexthop_notifier(mlxsw_sp_net(mlxsw_sp),
+ 				    &router->nexthop_nb);
++	unregister_netdevice_notifier_net(mlxsw_sp_net(mlxsw_sp),
++					  &router->netdevice_nb);
+ 	unregister_netevent_notifier(&router->netevent_nb);
+ 	unregister_inet6addr_validator_notifier(&router->inet6addr_valid_nb);
+ 	unregister_inetaddr_validator_notifier(&router->inetaddr_valid_nb);
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+index 39d24e07f3067..5b69b9268c757 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+@@ -396,7 +396,7 @@ nla_put_failure:
+ 
+ struct rtnl_link_ops rmnet_link_ops __read_mostly = {
+ 	.kind		= "rmnet",
+-	.maxtype	= __IFLA_RMNET_MAX,
++	.maxtype	= IFLA_RMNET_MAX,
+ 	.priv_size	= sizeof(struct rmnet_priv),
+ 	.setup		= rmnet_vnd_setup,
+ 	.validate	= rmnet_rtnl_validate,
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 8649b3e90edb2..0e3731f50fc28 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -1949,7 +1949,7 @@ static netdev_tx_t ravb_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	struct ravb_tstamp_skb *ts_skb;
+ 	struct ravb_tx_desc *desc;
+ 	unsigned long flags;
+-	u32 dma_addr;
++	dma_addr_t dma_addr;
+ 	void *buffer;
+ 	u32 entry;
+ 	u32 len;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index cd7a9768de5f1..b8c93b881a653 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -255,6 +255,7 @@ struct stmmac_priv {
+ 	u32 msg_enable;
+ 	int wolopts;
+ 	int wol_irq;
++	bool wol_irq_disabled;
+ 	int clk_csr;
+ 	struct timer_list eee_ctrl_timer;
+ 	int lpi_irq;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index f628411ae4aef..67bc98f991351 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -311,8 +311,9 @@ static int stmmac_ethtool_get_link_ksettings(struct net_device *dev,
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 
+-	if (priv->hw->pcs & STMMAC_PCS_RGMII ||
+-	    priv->hw->pcs & STMMAC_PCS_SGMII) {
++	if (!(priv->plat->flags & STMMAC_FLAG_HAS_INTEGRATED_PCS) &&
++	    (priv->hw->pcs & STMMAC_PCS_RGMII ||
++	     priv->hw->pcs & STMMAC_PCS_SGMII)) {
+ 		struct rgmii_adv adv;
+ 		u32 supported, advertising, lp_advertising;
+ 
+@@ -397,8 +398,9 @@ stmmac_ethtool_set_link_ksettings(struct net_device *dev,
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 
+-	if (priv->hw->pcs & STMMAC_PCS_RGMII ||
+-	    priv->hw->pcs & STMMAC_PCS_SGMII) {
++	if (!(priv->plat->flags & STMMAC_FLAG_HAS_INTEGRATED_PCS) &&
++	    (priv->hw->pcs & STMMAC_PCS_RGMII ||
++	     priv->hw->pcs & STMMAC_PCS_SGMII)) {
+ 		/* Only support ANE */
+ 		if (cmd->base.autoneg != AUTONEG_ENABLE)
+ 			return -EINVAL;
+@@ -543,15 +545,12 @@ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
+ 	u32 rx_cnt = priv->plat->rx_queues_to_use;
+ 	unsigned int start;
+ 	int q, stat;
+-	u64 *pos;
+ 	char *p;
+ 
+-	pos = data;
+ 	for (q = 0; q < tx_cnt; q++) {
+ 		struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
+ 		struct stmmac_txq_stats snapshot;
+ 
+-		data = pos;
+ 		do {
+ 			start = u64_stats_fetch_begin(&txq_stats->syncp);
+ 			snapshot = *txq_stats;
+@@ -559,17 +558,15 @@ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
+ 
+ 		p = (char *)&snapshot + offsetof(struct stmmac_txq_stats, tx_pkt_n);
+ 		for (stat = 0; stat < STMMAC_TXQ_STATS; stat++) {
+-			*data++ += (*(u64 *)p);
++			*data++ = (*(u64 *)p);
+ 			p += sizeof(u64);
+ 		}
+ 	}
+ 
+-	pos = data;
+ 	for (q = 0; q < rx_cnt; q++) {
+ 		struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
+ 		struct stmmac_rxq_stats snapshot;
+ 
+-		data = pos;
+ 		do {
+ 			start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ 			snapshot = *rxq_stats;
+@@ -577,7 +574,7 @@ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
+ 
+ 		p = (char *)&snapshot + offsetof(struct stmmac_rxq_stats, rx_pkt_n);
+ 		for (stat = 0; stat < STMMAC_RXQ_STATS; stat++) {
+-			*data++ += (*(u64 *)p);
++			*data++ = (*(u64 *)p);
+ 			p += sizeof(u64);
+ 		}
+ 	}
+@@ -825,10 +822,16 @@ static int stmmac_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ 	if (wol->wolopts) {
+ 		pr_info("stmmac: wakeup enable\n");
+ 		device_set_wakeup_enable(priv->device, 1);
+-		enable_irq_wake(priv->wol_irq);
++		/* Avoid unbalanced enable_irq_wake calls */
++		if (priv->wol_irq_disabled)
++			enable_irq_wake(priv->wol_irq);
++		priv->wol_irq_disabled = false;
+ 	} else {
+ 		device_set_wakeup_enable(priv->device, 0);
+-		disable_irq_wake(priv->wol_irq);
++		/* Avoid unbalanced disable_irq_wake calls */
++		if (!priv->wol_irq_disabled)
++			disable_irq_wake(priv->wol_irq);
++		priv->wol_irq_disabled = true;
+ 	}
+ 
+ 	mutex_lock(&priv->lock);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 37e64283f9107..49b81daf7411d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3565,6 +3565,7 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
+ 	/* Request the Wake IRQ in case of another line
+ 	 * is used for WoL
+ 	 */
++	priv->wol_irq_disabled = true;
+ 	if (priv->wol_irq > 0 && priv->wol_irq != dev->irq) {
+ 		int_name = priv->int_name_wol;
+ 		sprintf(int_name, "%s:%s", dev->name, "wol");
+@@ -4371,6 +4372,28 @@ dma_map_err:
+ 	return NETDEV_TX_OK;
+ }
+ 
++/**
++ * stmmac_has_ip_ethertype() - Check if packet has IP ethertype
++ * @skb: socket buffer to check
++ *
++ * Check if a packet has an ethertype that will trigger the IP header checks
++ * and IP/TCP checksum engine of the stmmac core.
++ *
++ * Return: true if the ethertype can trigger the checksum engine, false
++ * otherwise
++ */
++static bool stmmac_has_ip_ethertype(struct sk_buff *skb)
++{
++	int depth = 0;
++	__be16 proto;
++
++	proto = __vlan_get_protocol(skb, eth_header_parse_protocol(skb),
++				    &depth);
++
++	return (depth <= ETH_HLEN) &&
++		(proto == htons(ETH_P_IP) || proto == htons(ETH_P_IPV6));
++}
++
+ /**
+  *  stmmac_xmit - Tx entry point of the driver
+  *  @skb : the socket buffer
+@@ -4435,9 +4458,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	/* DWMAC IPs can be synthesized to support tx coe only for a few tx
+ 	 * queues. In that case, checksum offloading for those queues that don't
+ 	 * support tx coe needs to fallback to software checksum calculation.
++	 *
++	 * Packets that won't trigger the COE, e.g. most DSA-tagged packets,
++	 * will also have to be checksummed in software.
+ 	 */
+ 	if (csum_insertion &&
+-	    priv->plat->tx_queues_cfg[queue].coe_unsupported) {
++	    (priv->plat->tx_queues_cfg[queue].coe_unsupported ||
++	     !stmmac_has_ip_ethertype(skb))) {
+ 		if (unlikely(skb_checksum_help(skb)))
+ 			goto dma_map_err;
+ 		csum_insertion = !csum_insertion;
+@@ -4997,7 +5024,7 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue,
+ 	stmmac_rx_vlan(priv->dev, skb);
+ 	skb->protocol = eth_type_trans(skb, priv->dev);
+ 
+-	if (unlikely(!coe))
++	if (unlikely(!coe) || !stmmac_has_ip_ethertype(skb))
+ 		skb_checksum_none_assert(skb);
+ 	else
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+@@ -5513,7 +5540,7 @@ drain_data:
+ 		stmmac_rx_vlan(priv->dev, skb);
+ 		skb->protocol = eth_type_trans(skb, priv->dev);
+ 
+-		if (unlikely(!coe))
++		if (unlikely(!coe) || !stmmac_has_ip_ethertype(skb))
+ 			skb_checksum_none_assert(skb);
+ 		else
+ 			skb->ip_summed = CHECKSUM_UNNECESSARY;
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index ece9f8df98ae7..f7a6d720ebb3d 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -56,7 +56,7 @@
+ #define AM65_CPSW_MAX_PORTS	8
+ 
+ #define AM65_CPSW_MIN_PACKET_SIZE	VLAN_ETH_ZLEN
+-#define AM65_CPSW_MAX_PACKET_SIZE	(VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
++#define AM65_CPSW_MAX_PACKET_SIZE	2024
+ 
+ #define AM65_CPSW_REG_CTL		0x004
+ #define AM65_CPSW_REG_STAT_PORT_EN	0x014
+@@ -2167,7 +2167,8 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
+ 	eth_hw_addr_set(port->ndev, port->slave.mac_addr);
+ 
+ 	port->ndev->min_mtu = AM65_CPSW_MIN_PACKET_SIZE;
+-	port->ndev->max_mtu = AM65_CPSW_MAX_PACKET_SIZE;
++	port->ndev->max_mtu = AM65_CPSW_MAX_PACKET_SIZE -
++			      (VLAN_ETH_HLEN + ETH_FCS_LEN);
+ 	port->ndev->hw_features = NETIF_F_SG |
+ 				  NETIF_F_RXCSUM |
+ 				  NETIF_F_HW_CSUM |
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index aecaf5f44374f..77e8250282a51 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -369,6 +369,12 @@ static int nsim_init_netdevsim_vf(struct netdevsim *ns)
+ 	return err;
+ }
+ 
++static void nsim_exit_netdevsim(struct netdevsim *ns)
++{
++	nsim_udp_tunnels_info_destroy(ns->netdev);
++	mock_phc_destroy(ns->phc);
++}
++
+ struct netdevsim *
+ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
+ {
+@@ -417,8 +423,7 @@ void nsim_destroy(struct netdevsim *ns)
+ 	}
+ 	rtnl_unlock();
+ 	if (nsim_dev_port_is_pf(ns->nsim_dev_port))
+-		nsim_udp_tunnels_info_destroy(dev);
+-	mock_phc_destroy(ns->phc);
++		nsim_exit_netdevsim(ns);
+ 	free_netdev(dev);
+ }
+ 
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 08e3915001c3f..ce5ad4a824813 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -3335,8 +3335,10 @@ static int lan8814_probe(struct phy_device *phydev)
+ #define LAN8841_ADC_CHANNEL_MASK		198
+ #define LAN8841_PTP_RX_PARSE_L2_ADDR_EN		370
+ #define LAN8841_PTP_RX_PARSE_IP_ADDR_EN		371
++#define LAN8841_PTP_RX_VERSION			374
+ #define LAN8841_PTP_TX_PARSE_L2_ADDR_EN		434
+ #define LAN8841_PTP_TX_PARSE_IP_ADDR_EN		435
++#define LAN8841_PTP_TX_VERSION			438
+ #define LAN8841_PTP_CMD_CTL			256
+ #define LAN8841_PTP_CMD_CTL_PTP_ENABLE		BIT(2)
+ #define LAN8841_PTP_CMD_CTL_PTP_DISABLE		BIT(1)
+@@ -3380,6 +3382,12 @@ static int lan8841_config_init(struct phy_device *phydev)
+ 	phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
+ 		      LAN8841_PTP_RX_PARSE_IP_ADDR_EN, 0);
+ 
++	/* Disable checking for minorVersionPTP field */
++	phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
++		      LAN8841_PTP_RX_VERSION, 0xff00);
++	phy_write_mmd(phydev, KSZ9131RN_MMD_COMMON_CTRL_REG,
++		      LAN8841_PTP_TX_VERSION, 0xff00);
++
+ 	/* 100BT Clause 40 improvenent errata */
+ 	phy_write_mmd(phydev, LAN8841_MMD_ANALOG_REG,
+ 		      LAN8841_ANALOG_CONTROL_1,
+@@ -4842,6 +4850,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	.flags		= PHY_POLL_CABLE_TEST,
+ 	.driver_data	= &ksz9131_type,
+ 	.probe		= kszphy_probe,
++	.soft_reset	= genphy_soft_reset,
+ 	.config_init	= ksz9131_config_init,
+ 	.config_intr	= kszphy_config_intr,
+ 	.config_aneg	= ksz9131_config_aneg,
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 235336ef2a7a5..f8f5e653cd038 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -803,8 +803,8 @@ static int ath11k_core_get_rproc(struct ath11k_base *ab)
+ 
+ 	prproc = rproc_get_by_phandle(rproc_phandle);
+ 	if (!prproc) {
+-		ath11k_err(ab, "failed to get rproc\n");
+-		return -EINVAL;
++		ath11k_dbg(ab, ATH11K_DBG_AHB, "failed to get rproc, deferring\n");
++		return -EPROBE_DEFER;
+ 	}
+ 	ab_ahb->tgt_rproc = prproc;
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
+index b936760b51408..6c01b282fcd33 100644
+--- a/drivers/net/wireless/ath/ath12k/core.c
++++ b/drivers/net/wireless/ath/ath12k/core.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/module.h>
+@@ -698,13 +698,15 @@ int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab)
+ 	ret = ath12k_core_rfkill_config(ab);
+ 	if (ret && ret != -EOPNOTSUPP) {
+ 		ath12k_err(ab, "failed to config rfkill: %d\n", ret);
+-		goto err_core_stop;
++		goto err_core_pdev_destroy;
+ 	}
+ 
+ 	mutex_unlock(&ab->core_lock);
+ 
+ 	return 0;
+ 
++err_core_pdev_destroy:
++	ath12k_core_pdev_destroy(ab);
+ err_core_stop:
+ 	ath12k_core_stop(ab);
+ 	ath12k_mac_destroy(ab);
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index 6015e1255d2ac..480f8edbfd35d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -1029,7 +1029,8 @@ iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
+ 			  IEEE80211_EHT_PHY_CAP3_NG_16_MU_FEEDBACK |
+ 			  IEEE80211_EHT_PHY_CAP3_CODEBOOK_4_2_SU_FDBK |
+ 			  IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK |
+-			  IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK);
++			  IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK |
++			  IEEE80211_EHT_PHY_CAP3_TRIG_CQI_FDBK);
+ 		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[4] &=
+ 			~(IEEE80211_EHT_PHY_CAP4_PART_BW_DL_MU_MIMO |
+ 			  IEEE80211_EHT_PHY_CAP4_POWER_BOOST_FACT_SUPP);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index 329c545f65fde..7737650e56cb2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -1815,7 +1815,7 @@ static ssize_t _iwl_dbgfs_link_sta_##name##_write(struct file *file,	\
+ 	char buf[buflen] = {};						\
+ 	size_t buf_size = min(count, sizeof(buf) -  1);			\
+ 									\
+-	if (copy_from_user(buf, user_buf, sizeof(buf)))			\
++	if (copy_from_user(buf, user_buf, buf_size))			\
+ 		return -EFAULT;						\
+ 									\
+ 	return _iwl_dbgfs_link_sta_wrap_write(iwl_dbgfs_##name##_write,	\
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+index ff6cb064051b8..61170173f917a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+@@ -271,17 +271,17 @@ __iwl_mvm_mld_assign_vif_chanctx(struct iwl_mvm *mvm,
+ 		}
+ 	}
+ 
++	mvmvif->link[link_id]->phy_ctxt = phy_ctxt;
++
+ 	if (iwl_mvm_is_esr_supported(mvm->fwrt.trans) && n_active > 1) {
+ 		mvmvif->link[link_id]->listen_lmac = true;
+ 		ret = iwl_mvm_esr_mode_active(mvm, vif);
+ 		if (ret) {
+ 			IWL_ERR(mvm, "failed to activate ESR mode (%d)\n", ret);
+-			return ret;
++			goto out;
+ 		}
+ 	}
+ 
+-	mvmvif->link[link_id]->phy_ctxt = phy_ctxt;
+-
+ 	if (switching_chanctx) {
+ 		/* reactivate if we turned this off during channel switch */
+ 		if (vif->type == NL80211_IFTYPE_AP)
+@@ -716,7 +716,7 @@ void iwl_mvm_mld_select_links(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ 		}
+ 	}
+ 
+-	if (WARN_ON(!new_active_links))
++	if (!new_active_links)
+ 		return;
+ 
+ 	if (vif->active_links != new_active_links)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+index 4e1fccff3987c..334d1f59f6e46 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+@@ -99,17 +99,6 @@ static void iwl_mvm_phy_ctxt_set_rxchain(struct iwl_mvm *mvm,
+ 		active_cnt = 2;
+ 	}
+ 
+-	/*
+-	 * If the firmware requested it, then we know that it supports
+-	 * getting zero for the values to indicate "use one, but pick
+-	 * which one yourself", which means it can dynamically pick one
+-	 * that e.g. has better RSSI.
+-	 */
+-	if (mvm->fw_static_smps_request && active_cnt == 1 && idle_cnt == 1) {
+-		idle_cnt = 0;
+-		active_cnt = 0;
+-	}
+-
+ 	*rxchain_info = cpu_to_le32(iwl_mvm_get_valid_rx_ant(mvm) <<
+ 					PHY_RX_CHAIN_VALID_POS);
+ 	*rxchain_info |= cpu_to_le32(idle_cnt << PHY_RX_CHAIN_CNT_POS);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index ae5cd13cd6dda..db986bfc4dc3f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -2256,7 +2256,7 @@ int iwl_mvm_flush_sta_tids(struct iwl_mvm *mvm, u32 sta_id, u16 tids)
+ 	WARN_ON(!iwl_mvm_has_new_tx_api(mvm));
+ 
+ 	if (iwl_fw_lookup_notif_ver(mvm->fw, LONG_GROUP, TXPATH_FLUSH, 0) > 0)
+-		cmd.flags |= CMD_WANT_SKB;
++		cmd.flags |= CMD_WANT_SKB | CMD_SEND_IN_RFKILL;
+ 
+ 	IWL_DEBUG_TX_QUEUES(mvm, "flush for sta id %d tid mask 0x%x\n",
+ 			    sta_id, tids);
+diff --git a/drivers/net/wireless/marvell/libertas/Kconfig b/drivers/net/wireless/marvell/libertas/Kconfig
+index 6d62ab49aa8d4..c7d02adb3eead 100644
+--- a/drivers/net/wireless/marvell/libertas/Kconfig
++++ b/drivers/net/wireless/marvell/libertas/Kconfig
+@@ -2,8 +2,6 @@
+ config LIBERTAS
+ 	tristate "Marvell 8xxx Libertas WLAN driver support"
+ 	depends on CFG80211
+-	select WIRELESS_EXT
+-	select WEXT_SPY
+ 	select LIB80211
+ 	select FW_LOADER
+ 	help
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 7a15ea8072e6f..3604abcbcff93 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -2047,6 +2047,8 @@ static int mwifiex_cfg80211_start_ap(struct wiphy *wiphy,
+ 
+ 	mwifiex_set_sys_config_invalid_data(bss_cfg);
+ 
++	memcpy(bss_cfg->mac_addr, priv->curr_addr, ETH_ALEN);
++
+ 	if (params->beacon_interval)
+ 		bss_cfg->beacon_period = params->beacon_interval;
+ 	if (params->dtim_period)
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index 8e6db904e5b2d..62f3c9a52a1d5 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -165,6 +165,7 @@ enum MWIFIEX_802_11_PRIVACY_FILTER {
+ #define TLV_TYPE_STA_MAC_ADDR       (PROPRIETARY_TLV_BASE_ID + 32)
+ #define TLV_TYPE_BSSID              (PROPRIETARY_TLV_BASE_ID + 35)
+ #define TLV_TYPE_CHANNELBANDLIST    (PROPRIETARY_TLV_BASE_ID + 42)
++#define TLV_TYPE_UAP_MAC_ADDRESS    (PROPRIETARY_TLV_BASE_ID + 43)
+ #define TLV_TYPE_UAP_BEACON_PERIOD  (PROPRIETARY_TLV_BASE_ID + 44)
+ #define TLV_TYPE_UAP_DTIM_PERIOD    (PROPRIETARY_TLV_BASE_ID + 45)
+ #define TLV_TYPE_UAP_BCAST_SSID     (PROPRIETARY_TLV_BASE_ID + 48)
+diff --git a/drivers/net/wireless/marvell/mwifiex/ioctl.h b/drivers/net/wireless/marvell/mwifiex/ioctl.h
+index 091e7ca793762..e8825f302de8a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/ioctl.h
++++ b/drivers/net/wireless/marvell/mwifiex/ioctl.h
+@@ -107,6 +107,7 @@ struct mwifiex_uap_bss_param {
+ 	u8 qos_info;
+ 	u8 power_constraint;
+ 	struct mwifiex_types_wmm_info wmm_info;
++	u8 mac_addr[ETH_ALEN];
+ };
+ 
+ enum {
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
+index 6462a0ffe698c..75f53c2f1e1f5 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
+@@ -331,6 +331,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8786 = {
+ 	.can_dump_fw = false,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = false,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8787 = {
+@@ -346,6 +347,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8787 = {
+ 	.can_dump_fw = false,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8797 = {
+@@ -361,6 +363,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8797 = {
+ 	.can_dump_fw = false,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8897 = {
+@@ -376,6 +379,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8897 = {
+ 	.can_dump_fw = true,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8977 = {
+@@ -392,6 +396,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8977 = {
+ 	.fw_dump_enh = true,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8978 = {
+@@ -408,6 +413,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8978 = {
+ 	.fw_dump_enh = true,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = true,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8997 = {
+@@ -425,6 +431,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8997 = {
+ 	.fw_dump_enh = true,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8887 = {
+@@ -440,6 +447,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8887 = {
+ 	.can_dump_fw = false,
+ 	.can_auto_tdls = true,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8987 = {
+@@ -456,6 +464,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8987 = {
+ 	.fw_dump_enh = true,
+ 	.can_auto_tdls = true,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static const struct mwifiex_sdio_device mwifiex_sdio_sd8801 = {
+@@ -471,6 +480,7 @@ static const struct mwifiex_sdio_device mwifiex_sdio_sd8801 = {
+ 	.can_dump_fw = false,
+ 	.can_auto_tdls = false,
+ 	.can_ext_scan = true,
++	.fw_ready_extra_delay = false,
+ };
+ 
+ static struct memory_type_mapping generic_mem_type_map[] = {
+@@ -563,6 +573,7 @@ mwifiex_sdio_probe(struct sdio_func *func, const struct sdio_device_id *id)
+ 		card->fw_dump_enh = data->fw_dump_enh;
+ 		card->can_auto_tdls = data->can_auto_tdls;
+ 		card->can_ext_scan = data->can_ext_scan;
++		card->fw_ready_extra_delay = data->fw_ready_extra_delay;
+ 		INIT_WORK(&card->work, mwifiex_sdio_work);
+ 	}
+ 
+@@ -766,8 +777,9 @@ mwifiex_sdio_read_fw_status(struct mwifiex_adapter *adapter, u16 *dat)
+ static int mwifiex_check_fw_status(struct mwifiex_adapter *adapter,
+ 				   u32 poll_num)
+ {
++	struct sdio_mmc_card *card = adapter->card;
+ 	int ret = 0;
+-	u16 firmware_stat;
++	u16 firmware_stat = 0;
+ 	u32 tries;
+ 
+ 	for (tries = 0; tries < poll_num; tries++) {
+@@ -783,6 +795,13 @@ static int mwifiex_check_fw_status(struct mwifiex_adapter *adapter,
+ 		ret = -1;
+ 	}
+ 
++	if (card->fw_ready_extra_delay &&
++	    firmware_stat == FIRMWARE_READY_SDIO)
++		/* Firmware might pretend to be ready when it's not.
++		 * Wait a little bit more as a workaround.
++		 */
++		msleep(100);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.h b/drivers/net/wireless/marvell/mwifiex/sdio.h
+index b86a9263a6a81..cb63ad55d675f 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.h
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.h
+@@ -255,6 +255,7 @@ struct sdio_mmc_card {
+ 	bool fw_dump_enh;
+ 	bool can_auto_tdls;
+ 	bool can_ext_scan;
++	bool fw_ready_extra_delay;
+ 
+ 	struct mwifiex_sdio_mpa_tx mpa_tx;
+ 	struct mwifiex_sdio_mpa_rx mpa_rx;
+@@ -278,6 +279,7 @@ struct mwifiex_sdio_device {
+ 	bool fw_dump_enh;
+ 	bool can_auto_tdls;
+ 	bool can_ext_scan;
++	bool fw_ready_extra_delay;
+ };
+ 
+ /*
+diff --git a/drivers/net/wireless/marvell/mwifiex/uap_cmd.c b/drivers/net/wireless/marvell/mwifiex/uap_cmd.c
+index e78a201cd1507..491e366119096 100644
+--- a/drivers/net/wireless/marvell/mwifiex/uap_cmd.c
++++ b/drivers/net/wireless/marvell/mwifiex/uap_cmd.c
+@@ -468,6 +468,7 @@ void mwifiex_config_uap_11d(struct mwifiex_private *priv,
+ static int
+ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size)
+ {
++	struct host_cmd_tlv_mac_addr *mac_tlv;
+ 	struct host_cmd_tlv_dtim_period *dtim_period;
+ 	struct host_cmd_tlv_beacon_period *beacon_period;
+ 	struct host_cmd_tlv_ssid *ssid;
+@@ -487,6 +488,13 @@ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size)
+ 	int i;
+ 	u16 cmd_size = *param_size;
+ 
++	mac_tlv = (struct host_cmd_tlv_mac_addr *)tlv;
++	mac_tlv->header.type = cpu_to_le16(TLV_TYPE_UAP_MAC_ADDRESS);
++	mac_tlv->header.len = cpu_to_le16(ETH_ALEN);
++	memcpy(mac_tlv->mac_addr, bss_cfg->mac_addr, ETH_ALEN);
++	cmd_size += sizeof(struct host_cmd_tlv_mac_addr);
++	tlv += sizeof(struct host_cmd_tlv_mac_addr);
++
+ 	if (bss_cfg->ssid.ssid_len) {
+ 		ssid = (struct host_cmd_tlv_ssid *)tlv;
+ 		ssid->header.type = cpu_to_le16(TLV_TYPE_UAP_SSID);
+diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
+index 7725dd6763ef2..50820fe00b8bd 100644
+--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
+@@ -67,7 +67,7 @@ static int mt76_get_of_epprom_from_mtd(struct mt76_dev *dev, void *eep, int offs
+ 		goto out_put_node;
+ 	}
+ 
+-	offset = be32_to_cpup(list);
++	offset += be32_to_cpup(list);
+ 	ret = mtd_read(mtd, offset, len, &retlen, eep);
+ 	put_mtd_device(mtd);
+ 	if (mtd_is_bitflip(ret))
+@@ -106,7 +106,7 @@ out_put_node:
+ #endif
+ }
+ 
+-static int mt76_get_of_epprom_from_nvmem(struct mt76_dev *dev, void *eep, int len)
++static int mt76_get_of_eeprom_from_nvmem(struct mt76_dev *dev, void *eep, int len)
+ {
+ 	struct device_node *np = dev->dev->of_node;
+ 	struct nvmem_cell *cell;
+@@ -153,7 +153,7 @@ int mt76_get_of_eeprom(struct mt76_dev *dev, void *eep, int offset, int len)
+ 	if (!ret)
+ 		return 0;
+ 
+-	return mt76_get_of_epprom_from_nvmem(dev, eep, len);
++	return mt76_get_of_eeprom_from_nvmem(dev, eep, len);
+ }
+ EXPORT_SYMBOL_GPL(mt76_get_of_eeprom);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index ea828ba0b83ac..a17b2fbd693b4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -575,8 +575,7 @@ struct mt76_sdio {
+ 	struct mt76_worker txrx_worker;
+ 	struct mt76_worker status_worker;
+ 	struct mt76_worker net_worker;
+-
+-	struct work_struct stat_work;
++	struct mt76_worker stat_worker;
+ 
+ 	u8 *xmit_buf;
+ 	u32 xmit_buf_sz;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/sdio.c b/drivers/net/wireless/mediatek/mt76/mt7615/sdio.c
+index fc547a0031eae..67cedd2555f97 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/sdio.c
+@@ -204,8 +204,8 @@ static int mt7663s_suspend(struct device *dev)
+ 	mt76_worker_disable(&mdev->mt76.sdio.txrx_worker);
+ 	mt76_worker_disable(&mdev->mt76.sdio.status_worker);
+ 	mt76_worker_disable(&mdev->mt76.sdio.net_worker);
++	mt76_worker_disable(&mdev->mt76.sdio.stat_worker);
+ 
+-	cancel_work_sync(&mdev->mt76.sdio.stat_work);
+ 	clear_bit(MT76_READING_STATS, &mdev->mphy.state);
+ 
+ 	mt76_tx_status_check(&mdev->mt76, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+index f3e56817d36e9..adc26a222823b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+@@ -144,7 +144,8 @@ static inline bool
+ mt7915_tssi_enabled(struct mt7915_dev *dev, enum nl80211_band band)
+ {
+ 	u8 *eep = dev->mt76.eeprom.data;
+-	u8 val = eep[MT_EE_WIFI_CONF + 7];
++	u8 offs = is_mt7981(&dev->mt76) ? 8 : 7;
++	u8 val = eep[MT_EE_WIFI_CONF + offs];
+ 
+ 	if (band == NL80211_BAND_2GHZ)
+ 		return val & MT_EE_WIFI_CONF7_TSSI0_2G;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index a3fd54cc1911a..9d747eb8ab82a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -1059,8 +1059,9 @@ mt7915_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
+ 
+ 	phy->mt76->antenna_mask = tx_ant;
+ 
+-	/* handle a variant of mt7916 which has 3T3R but nss2 on 5 GHz band */
+-	if (is_mt7916(&dev->mt76) && band && hweight8(tx_ant) == max_nss)
++	/* handle a variant of mt7916/mt7981 which has 3T3R but nss2 on 5 GHz band */
++	if ((is_mt7916(&dev->mt76) || is_mt7981(&dev->mt76)) &&
++	    band && hweight8(tx_ant) == max_nss)
+ 		phy->mt76->chainmask = (dev->chainmask >> chainshift) << chainshift;
+ 	else
+ 		phy->mt76->chainmask = tx_ant << (chainshift * band);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+index e7d8e03f826f8..04a49ef67560e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+@@ -742,7 +742,7 @@ int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
+ 
+ 		res = platform_get_resource(plat_dev, IORESOURCE_MEM, 0);
+ 		if (!res)
+-			return -ENOMEM;
++			return 0;
+ 
+ 		wed->wlan.platform_dev = plat_dev;
+ 		wed->wlan.bus_type = MTK_WED_BUS_AXI;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index 7d6a9d7460111..48433c6d5e7d3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -110,24 +110,37 @@ mt7921_regd_channel_update(struct wiphy *wiphy, struct mt792x_dev *dev)
+ 	}
+ }
+ 
++void mt7921_regd_update(struct mt792x_dev *dev)
++{
++	struct mt76_dev *mdev = &dev->mt76;
++	struct ieee80211_hw *hw = mdev->hw;
++	struct wiphy *wiphy = hw->wiphy;
++
++	mt7921_mcu_set_clc(dev, mdev->alpha2, dev->country_ie_env);
++	mt7921_regd_channel_update(wiphy, dev);
++	mt76_connac_mcu_set_channel_domain(hw->priv);
++	mt7921_set_tx_sar_pwr(hw, NULL);
++}
++EXPORT_SYMBOL_GPL(mt7921_regd_update);
++
+ static void
+ mt7921_regd_notifier(struct wiphy *wiphy,
+ 		     struct regulatory_request *request)
+ {
+ 	struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
+ 	struct mt792x_dev *dev = mt792x_hw_dev(hw);
++	struct mt76_connac_pm *pm = &dev->pm;
+ 
+ 	memcpy(dev->mt76.alpha2, request->alpha2, sizeof(dev->mt76.alpha2));
+ 	dev->mt76.region = request->dfs_region;
+ 	dev->country_ie_env = request->country_ie_env;
+ 
++	if (pm->suspended)
++		return;
++
+ 	mt792x_mutex_acquire(dev);
+-	mt7921_mcu_set_clc(dev, request->alpha2, request->country_ie_env);
+-	mt76_connac_mcu_set_channel_domain(hw->priv);
+-	mt7921_set_tx_sar_pwr(hw, NULL);
++	mt7921_regd_update(dev);
+ 	mt792x_mutex_release(dev);
+-
+-	mt7921_regd_channel_update(wiphy, dev);
+ }
+ 
+ int mt7921_mac_init(struct mt792x_dev *dev)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 510a575a973b8..0645417e05825 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -683,17 +683,45 @@ static void mt7921_bss_info_changed(struct ieee80211_hw *hw,
+ }
+ 
+ static void
+-mt7921_regd_set_6ghz_power_type(struct ieee80211_vif *vif)
++mt7921_calc_vif_num(void *priv, u8 *mac, struct ieee80211_vif *vif)
++{
++	u32 *num = priv;
++
++	if (!priv)
++		return;
++
++	switch (vif->type) {
++	case NL80211_IFTYPE_STATION:
++	case NL80211_IFTYPE_P2P_CLIENT:
++	case NL80211_IFTYPE_AP:
++	case NL80211_IFTYPE_P2P_GO:
++		*num += 1;
++		break;
++	default:
++		break;
++	}
++}
++
++static void
++mt7921_regd_set_6ghz_power_type(struct ieee80211_vif *vif, bool is_add)
+ {
+ 	struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ 	struct mt792x_phy *phy = mvif->phy;
+ 	struct mt792x_dev *dev = phy->dev;
++	u32 valid_vif_num = 0;
++
++	ieee80211_iterate_active_interfaces(mt76_hw(dev),
++					    IEEE80211_IFACE_ITER_RESUME_ALL,
++					    mt7921_calc_vif_num, &valid_vif_num);
+ 
+-	if (hweight64(dev->mt76.vif_mask) > 1) {
++	if (valid_vif_num > 1) {
+ 		phy->power_type = MT_AP_DEFAULT;
+ 		goto out;
+ 	}
+ 
++	if (!is_add)
++		vif->bss_conf.power_type = IEEE80211_REG_UNSET_AP;
++
+ 	switch (vif->bss_conf.power_type) {
+ 	case IEEE80211_REG_SP_AP:
+ 		phy->power_type = MT_AP_SP;
+@@ -705,6 +733,8 @@ mt7921_regd_set_6ghz_power_type(struct ieee80211_vif *vif)
+ 		phy->power_type = MT_AP_LPI;
+ 		break;
+ 	case IEEE80211_REG_UNSET_AP:
++		phy->power_type = MT_AP_UNSET;
++		break;
+ 	default:
+ 		phy->power_type = MT_AP_DEFAULT;
+ 		break;
+@@ -749,7 +779,7 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ 	if (ret)
+ 		return ret;
+ 
+-	mt7921_regd_set_6ghz_power_type(vif);
++	mt7921_regd_set_6ghz_power_type(vif, true);
+ 
+ 	mt76_connac_power_save_sched(&dev->mphy, &dev->pm);
+ 
+@@ -811,6 +841,8 @@ void mt7921_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ 		list_del_init(&msta->wcid.poll_list);
+ 	spin_unlock_bh(&dev->mt76.sta_poll_lock);
+ 
++	mt7921_regd_set_6ghz_power_type(vif, false);
++
+ 	mt76_connac_power_save_sched(&dev->mphy, &dev->pm);
+ }
+ EXPORT_SYMBOL_GPL(mt7921_mac_sta_remove);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 2cc2d2788f831..399d7ca6bebcb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -1263,13 +1263,15 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ 		u8 env_6g;
+ 		u8 rsvd[63];
+ 	} __packed req = {
++		.ver = 1,
+ 		.idx = idx,
+ 		.env = env_cap,
+ 		.env_6g = dev->phy.power_type,
+ 		.acpi_conf = mt792x_acpi_get_flags(&dev->phy),
+ 	};
+ 	int ret, valid_cnt = 0;
+-	u8 i, *pos;
++	u16 buf_len = 0;
++	u8 *pos;
+ 
+ 	if (!clc)
+ 		return 0;
+@@ -1279,12 +1281,15 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ 	if (mt76_find_power_limits_node(&dev->mt76))
+ 		req.cap |= CLC_CAP_DTS_EN;
+ 
++	buf_len = le16_to_cpu(clc->len) - sizeof(*clc);
+ 	pos = clc->data;
+-	for (i = 0; i < clc->nr_country; i++) {
++	while (buf_len > 16) {
+ 		struct mt7921_clc_rule *rule = (struct mt7921_clc_rule *)pos;
+ 		u16 len = le16_to_cpu(rule->len);
++		u16 offset = len + sizeof(*rule);
+ 
+-		pos += len + sizeof(*rule);
++		pos += offset;
++		buf_len -= offset;
+ 		if (rule->alpha2[0] != alpha2[0] ||
+ 		    rule->alpha2[1] != alpha2[1])
+ 			continue;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index f28621121927e..5c4cc370e6ce3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -233,6 +233,7 @@ mt7921_l1_rmw(struct mt792x_dev *dev, u32 addr, u32 mask, u32 val)
+ #define mt7921_l1_set(dev, addr, val)	mt7921_l1_rmw(dev, addr, 0, val)
+ #define mt7921_l1_clear(dev, addr, val)	mt7921_l1_rmw(dev, addr, val, 0)
+ 
++void mt7921_regd_update(struct mt792x_dev *dev);
+ int mt7921_mac_init(struct mt792x_dev *dev);
+ bool mt7921_mac_wtbl_update(struct mt792x_dev *dev, int idx, u32 mask);
+ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index f04e7095e1810..42fd456eb6faf 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -507,6 +507,9 @@ static int mt7921_pci_resume(struct device *device)
+ 		mt76_connac_mcu_set_deep_sleep(&dev->mt76, false);
+ 
+ 	err = mt76_connac_mcu_set_hif_suspend(mdev, false);
++
++	mt7921_regd_update(dev);
++
+ failed:
+ 	pm->suspended = false;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
+index dc1beb76df3e1..7591e54d28973 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio.c
+@@ -228,7 +228,7 @@ static int mt7921s_suspend(struct device *__dev)
+ 	mt76_txq_schedule_all(&dev->mphy);
+ 	mt76_worker_disable(&mdev->tx_worker);
+ 	mt76_worker_disable(&mdev->sdio.status_worker);
+-	cancel_work_sync(&mdev->sdio.stat_work);
++	mt76_worker_disable(&mdev->sdio.stat_worker);
+ 	clear_bit(MT76_READING_STATS, &dev->mphy.state);
+ 	mt76_tx_status_check(mdev, true);
+ 
+@@ -260,6 +260,7 @@ restore_txrx_worker:
+ restore_worker:
+ 	mt76_worker_enable(&mdev->tx_worker);
+ 	mt76_worker_enable(&mdev->sdio.status_worker);
++	mt76_worker_enable(&mdev->sdio.stat_worker);
+ 
+ 	if (!pm->ds_enable)
+ 		mt76_connac_mcu_set_deep_sleep(mdev, false);
+@@ -292,6 +293,7 @@ static int mt7921s_resume(struct device *__dev)
+ 	mt76_worker_enable(&mdev->sdio.txrx_worker);
+ 	mt76_worker_enable(&mdev->sdio.status_worker);
+ 	mt76_worker_enable(&mdev->sdio.net_worker);
++	mt76_worker_enable(&mdev->sdio.stat_worker);
+ 
+ 	/* restore previous ds setting */
+ 	if (!pm->ds_enable)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mac.c
+index 8edd0291c1280..389eb0903807e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mac.c
+@@ -107,7 +107,7 @@ int mt7921s_mac_reset(struct mt792x_dev *dev)
+ 	mt76_worker_disable(&dev->mt76.sdio.txrx_worker);
+ 	mt76_worker_disable(&dev->mt76.sdio.status_worker);
+ 	mt76_worker_disable(&dev->mt76.sdio.net_worker);
+-	cancel_work_sync(&dev->mt76.sdio.stat_work);
++	mt76_worker_disable(&dev->mt76.sdio.stat_worker);
+ 
+ 	mt7921s_disable_irq(&dev->mt76);
+ 	mt7921s_wfsys_reset(dev);
+@@ -115,6 +115,7 @@ int mt7921s_mac_reset(struct mt792x_dev *dev)
+ 	mt76_worker_enable(&dev->mt76.sdio.txrx_worker);
+ 	mt76_worker_enable(&dev->mt76.sdio.status_worker);
+ 	mt76_worker_enable(&dev->mt76.sdio.net_worker);
++	mt76_worker_enable(&dev->mt76.sdio.stat_worker);
+ 
+ 	dev->fw_assert = false;
+ 	clear_bit(MT76_MCU_RESET, &dev->mphy.state);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 04540833485fe..fa3001e59a364 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -840,10 +840,10 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ 	struct mt76_vif *mvif;
+ 	u16 tx_count = 15;
+ 	u32 val;
+-	bool beacon = !!(changed & (BSS_CHANGED_BEACON |
+-				    BSS_CHANGED_BEACON_ENABLED));
+ 	bool inband_disc = !!(changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
+ 					 BSS_CHANGED_FILS_DISCOVERY));
++	bool beacon = !!(changed & (BSS_CHANGED_BEACON |
++				    BSS_CHANGED_BEACON_ENABLED)) && (!inband_disc);
+ 
+ 	mvif = vif ? (struct mt76_vif *)vif->drv_priv : NULL;
+ 	if (mvif) {
+@@ -1074,7 +1074,7 @@ mt7996_mac_tx_free(struct mt7996_dev *dev, void *data, int len)
+ 	struct mt76_phy *phy3 = mdev->phys[MT_BAND2];
+ 	struct mt76_txwi_cache *txwi;
+ 	struct ieee80211_sta *sta = NULL;
+-	struct mt76_wcid *wcid;
++	struct mt76_wcid *wcid = NULL;
+ 	LIST_HEAD(free_list);
+ 	struct sk_buff *skb, *tmp;
+ 	void *end = data + len;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+index a88f6af323dae..9300cd8eeb76b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+@@ -160,7 +160,7 @@ struct mt7996_mcu_all_sta_info_event {
+ 	u8 more;
+ 	u8 rsv2;
+ 	__le16 sta_num;
+-	u8 rsv3[2];
++	u8 rsv3[4];
+ 
+ 	union {
+ 		struct {
+@@ -168,15 +168,15 @@ struct mt7996_mcu_all_sta_info_event {
+ 			u8 rsv[2];
+ 			__le32 tx_bytes[IEEE80211_NUM_ACS];
+ 			__le32 rx_bytes[IEEE80211_NUM_ACS];
+-		} adm_stat[0];
++		} adm_stat[0] __packed;
+ 
+ 		struct {
+ 			__le16 wlan_idx;
+ 			u8 rsv[2];
+ 			__le32 tx_msdu_cnt;
+ 			__le32 rx_msdu_cnt;
+-		} msdu_cnt[0];
+-	};
++		} msdu_cnt[0] __packed;
++	} __packed;
+ } __packed;
+ 
+ enum mt7996_chan_mib_offs {
+@@ -247,7 +247,7 @@ struct bss_rate_tlv {
+ 	u8 short_preamble;
+ 	u8 bc_fixed_rate;
+ 	u8 mc_fixed_rate;
+-	u8 __rsv2[1];
++	u8 __rsv2[9];
+ } __packed;
+ 
+ struct bss_ra_tlv {
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
+index 419723118ded8..c52d550f0c32a 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio.c
+@@ -481,21 +481,21 @@ static void mt76s_status_worker(struct mt76_worker *w)
+ 		if (dev->drv->tx_status_data && ndata_frames > 0 &&
+ 		    !test_and_set_bit(MT76_READING_STATS, &dev->phy.state) &&
+ 		    !test_bit(MT76_STATE_SUSPEND, &dev->phy.state))
+-			ieee80211_queue_work(dev->hw, &dev->sdio.stat_work);
++			mt76_worker_schedule(&sdio->stat_worker);
+ 	} while (nframes > 0);
+ 
+ 	if (resched)
+ 		mt76_worker_schedule(&dev->tx_worker);
+ }
+ 
+-static void mt76s_tx_status_data(struct work_struct *work)
++static void mt76s_tx_status_data(struct mt76_worker *worker)
+ {
+ 	struct mt76_sdio *sdio;
+ 	struct mt76_dev *dev;
+ 	u8 update = 1;
+ 	u16 count = 0;
+ 
+-	sdio = container_of(work, struct mt76_sdio, stat_work);
++	sdio = container_of(worker, struct mt76_sdio, stat_worker);
+ 	dev = container_of(sdio, struct mt76_dev, sdio);
+ 
+ 	while (true) {
+@@ -508,7 +508,7 @@ static void mt76s_tx_status_data(struct work_struct *work)
+ 	}
+ 
+ 	if (count && test_bit(MT76_STATE_RUNNING, &dev->phy.state))
+-		ieee80211_queue_work(dev->hw, &sdio->stat_work);
++		mt76_worker_schedule(&sdio->status_worker);
+ 	else
+ 		clear_bit(MT76_READING_STATS, &dev->phy.state);
+ }
+@@ -600,8 +600,8 @@ void mt76s_deinit(struct mt76_dev *dev)
+ 	mt76_worker_teardown(&sdio->txrx_worker);
+ 	mt76_worker_teardown(&sdio->status_worker);
+ 	mt76_worker_teardown(&sdio->net_worker);
++	mt76_worker_teardown(&sdio->stat_worker);
+ 
+-	cancel_work_sync(&sdio->stat_work);
+ 	clear_bit(MT76_READING_STATS, &dev->phy.state);
+ 
+ 	mt76_tx_status_check(dev, true);
+@@ -644,10 +644,14 @@ int mt76s_init(struct mt76_dev *dev, struct sdio_func *func,
+ 	if (err)
+ 		return err;
+ 
++	err = mt76_worker_setup(dev->hw, &sdio->stat_worker, mt76s_tx_status_data,
++				"sdio-sta");
++	if (err)
++		return err;
++
+ 	sched_set_fifo_low(sdio->status_worker.task);
+ 	sched_set_fifo_low(sdio->net_worker.task);
+-
+-	INIT_WORK(&sdio->stat_work, mt76s_tx_status_data);
++	sched_set_fifo_low(sdio->stat_worker.task);
+ 
+ 	dev->queue_ops = &sdio_queue_ops;
+ 	dev->bus = bus_ops;
+diff --git a/drivers/net/wireless/purelifi/plfxlc/usb.c b/drivers/net/wireless/purelifi/plfxlc/usb.c
+index 76d0a778636a4..311676c1ece0a 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/usb.c
++++ b/drivers/net/wireless/purelifi/plfxlc/usb.c
+@@ -493,9 +493,12 @@ int plfxlc_usb_wreq_async(struct plfxlc_usb *usb, const u8 *buffer,
+ 			  void *context)
+ {
+ 	struct usb_device *udev = interface_to_usbdev(usb->ez_usb);
+-	struct urb *urb = usb_alloc_urb(0, GFP_ATOMIC);
++	struct urb *urb;
+ 	int r;
+ 
++	urb = usb_alloc_urb(0, GFP_ATOMIC);
++	if (!urb)
++		return -ENOMEM;
+ 	usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, EP_DATA_OUT),
+ 			  (void *)buffer, buffer_len, complete_fn, context);
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 9886e719739be..b118df035243c 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -164,21 +164,29 @@ static bool _rtl_pci_platform_switch_device_pci_aspm(
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
+ 
++	value &= PCI_EXP_LNKCTL_ASPMC;
++
+ 	if (rtlhal->hw_type != HARDWARE_TYPE_RTL8192SE)
+-		value |= 0x40;
++		value |= PCI_EXP_LNKCTL_CCC;
+ 
+-	pci_write_config_byte(rtlpci->pdev, 0x80, value);
++	pcie_capability_clear_and_set_word(rtlpci->pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_ASPMC | value,
++					   value);
+ 
+ 	return false;
+ }
+ 
+-/*When we set 0x01 to enable clk request. Set 0x0 to disable clk req.*/
+-static void _rtl_pci_switch_clk_req(struct ieee80211_hw *hw, u8 value)
++/* @value is PCI_EXP_LNKCTL_CLKREQ_EN or 0 to enable/disable clk request. */
++static void _rtl_pci_switch_clk_req(struct ieee80211_hw *hw, u16 value)
+ {
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
+ 
+-	pci_write_config_byte(rtlpci->pdev, 0x81, value);
++	value &= PCI_EXP_LNKCTL_CLKREQ_EN;
++
++	pcie_capability_clear_and_set_word(rtlpci->pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_CLKREQ_EN,
++					   value);
+ 
+ 	if (rtlhal->hw_type == HARDWARE_TYPE_RTL8192SE)
+ 		udelay(100);
+@@ -192,11 +200,8 @@ static void rtl_pci_disable_aspm(struct ieee80211_hw *hw)
+ 	struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	u8 pcibridge_vendor = pcipriv->ndis_adapter.pcibridge_vendor;
+-	u8 num4bytes = pcipriv->ndis_adapter.num4bytes;
+ 	/*Retrieve original configuration settings. */
+ 	u8 linkctrl_reg = pcipriv->ndis_adapter.linkctrl_reg;
+-	u16 pcibridge_linkctrlreg = pcipriv->ndis_adapter.
+-				pcibridge_linkctrlreg;
+ 	u16 aspmlevel = 0;
+ 	u8 tmp_u1b = 0;
+ 
+@@ -221,16 +226,8 @@ static void rtl_pci_disable_aspm(struct ieee80211_hw *hw)
+ 	/*Set corresponding value. */
+ 	aspmlevel |= BIT(0) | BIT(1);
+ 	linkctrl_reg &= ~aspmlevel;
+-	pcibridge_linkctrlreg &= ~(BIT(0) | BIT(1));
+ 
+ 	_rtl_pci_platform_switch_device_pci_aspm(hw, linkctrl_reg);
+-	udelay(50);
+-
+-	/*4 Disable Pci Bridge ASPM */
+-	pci_write_config_byte(rtlpci->pdev, (num4bytes << 2),
+-			      pcibridge_linkctrlreg);
+-
+-	udelay(50);
+ }
+ 
+ /*Enable RTL8192SE ASPM & Enable Pci Bridge ASPM for
+@@ -245,9 +242,7 @@ static void rtl_pci_enable_aspm(struct ieee80211_hw *hw)
+ 	struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 	u8 pcibridge_vendor = pcipriv->ndis_adapter.pcibridge_vendor;
+-	u8 num4bytes = pcipriv->ndis_adapter.num4bytes;
+ 	u16 aspmlevel;
+-	u8 u_pcibridge_aspmsetting;
+ 	u8 u_device_aspmsetting;
+ 
+ 	if (!ppsc->support_aspm)
+@@ -259,25 +254,6 @@ static void rtl_pci_enable_aspm(struct ieee80211_hw *hw)
+ 		return;
+ 	}
+ 
+-	/*4 Enable Pci Bridge ASPM */
+-
+-	u_pcibridge_aspmsetting =
+-	    pcipriv->ndis_adapter.pcibridge_linkctrlreg |
+-	    rtlpci->const_hostpci_aspm_setting;
+-
+-	if (pcibridge_vendor == PCI_BRIDGE_VENDOR_INTEL)
+-		u_pcibridge_aspmsetting &= ~BIT(0);
+-
+-	pci_write_config_byte(rtlpci->pdev, (num4bytes << 2),
+-			      u_pcibridge_aspmsetting);
+-
+-	rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+-		"PlatformEnableASPM(): Write reg[%x] = %x\n",
+-		(pcipriv->ndis_adapter.pcibridge_pciehdr_offset + 0x10),
+-		u_pcibridge_aspmsetting);
+-
+-	udelay(50);
+-
+ 	/*Get ASPM level (with/without Clock Req) */
+ 	aspmlevel = rtlpci->const_devicepci_aspm_setting;
+ 	u_device_aspmsetting = pcipriv->ndis_adapter.linkctrl_reg;
+@@ -291,7 +267,8 @@ static void rtl_pci_enable_aspm(struct ieee80211_hw *hw)
+ 
+ 	if (ppsc->reg_rfps_level & RT_RF_OFF_LEVL_CLK_REQ) {
+ 		_rtl_pci_switch_clk_req(hw, (ppsc->reg_rfps_level &
+-					     RT_RF_OFF_LEVL_CLK_REQ) ? 1 : 0);
++					     RT_RF_OFF_LEVL_CLK_REQ) ?
++					     PCI_EXP_LNKCTL_CLKREQ_EN : 0);
+ 		RT_SET_PS_LEVEL(ppsc, RT_RF_OFF_LEVL_CLK_REQ);
+ 	}
+ 	udelay(100);
+@@ -358,22 +335,6 @@ static bool rtl_pci_check_buddy_priv(struct ieee80211_hw *hw,
+ 	return tpriv != NULL;
+ }
+ 
+-static void rtl_pci_get_linkcontrol_field(struct ieee80211_hw *hw)
+-{
+-	struct rtl_pci_priv *pcipriv = rtl_pcipriv(hw);
+-	struct rtl_pci *rtlpci = rtl_pcidev(pcipriv);
+-	u8 capabilityoffset = pcipriv->ndis_adapter.pcibridge_pciehdr_offset;
+-	u8 linkctrl_reg;
+-	u8 num4bbytes;
+-
+-	num4bbytes = (capabilityoffset + 0x10) / 4;
+-
+-	/*Read  Link Control Register */
+-	pci_read_config_byte(rtlpci->pdev, (num4bbytes << 2), &linkctrl_reg);
+-
+-	pcipriv->ndis_adapter.pcibridge_linkctrlreg = linkctrl_reg;
+-}
+-
+ static void rtl_pci_parse_configuration(struct pci_dev *pdev,
+ 					struct ieee80211_hw *hw)
+ {
+@@ -2028,12 +1989,6 @@ static bool _rtl_pci_find_adapter(struct pci_dev *pdev,
+ 		    PCI_SLOT(bridge_pdev->devfn);
+ 		pcipriv->ndis_adapter.pcibridge_funcnum =
+ 		    PCI_FUNC(bridge_pdev->devfn);
+-		pcipriv->ndis_adapter.pcibridge_pciehdr_offset =
+-		    pci_pcie_cap(bridge_pdev);
+-		pcipriv->ndis_adapter.num4bytes =
+-		    (pcipriv->ndis_adapter.pcibridge_pciehdr_offset + 0x10) / 4;
+-
+-		rtl_pci_get_linkcontrol_field(hw);
+ 
+ 		if (pcipriv->ndis_adapter.pcibridge_vendor ==
+ 		    PCI_BRIDGE_VENDOR_AMD) {
+@@ -2050,13 +2005,11 @@ static bool _rtl_pci_find_adapter(struct pci_dev *pdev,
+ 		pdev->vendor, pcipriv->ndis_adapter.linkctrl_reg);
+ 
+ 	rtl_dbg(rtlpriv, COMP_INIT, DBG_DMESG,
+-		"pci_bridge busnumber:devnumber:funcnumber:vendor:pcie_cap:link_ctl_reg:amd %d:%d:%d:%x:%x:%x:%x\n",
++		"pci_bridge busnumber:devnumber:funcnumber:vendor:amd %d:%d:%d:%x:%x\n",
+ 		pcipriv->ndis_adapter.pcibridge_busnum,
+ 		pcipriv->ndis_adapter.pcibridge_devnum,
+ 		pcipriv->ndis_adapter.pcibridge_funcnum,
+ 		pcibridge_vendors[pcipriv->ndis_adapter.pcibridge_vendor],
+-		pcipriv->ndis_adapter.pcibridge_pciehdr_offset,
+-		pcipriv->ndis_adapter.pcibridge_linkctrlreg,
+ 		pcipriv->ndis_adapter.amd_l1_patch);
+ 
+ 	rtl_pci_parse_configuration(pdev, hw);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.h b/drivers/net/wireless/realtek/rtlwifi/pci.h
+index 866861626a0a1..d6307197dfea0 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.h
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.h
+@@ -236,11 +236,6 @@ struct mp_adapter {
+ 	u16 pcibridge_vendorid;
+ 	u16 pcibridge_deviceid;
+ 
+-	u8 num4bytes;
+-
+-	u8 pcibridge_pciehdr_offset;
+-	u8 pcibridge_linkctrlreg;
+-
+ 	bool amd_l1_patch;
+ };
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c
+index 12d0b3a87af7c..0fab3a0c7d49d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c
+@@ -16,12 +16,6 @@ static u32 _rtl88e_phy_rf_serial_read(struct ieee80211_hw *hw,
+ static void _rtl88e_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 					enum radio_path rfpath, u32 offset,
+ 					u32 data);
+-static u32 _rtl88e_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+ static bool _rtl88e_phy_bb8188e_config_parafile(struct ieee80211_hw *hw);
+ static bool _rtl88e_phy_config_mac_with_headerfile(struct ieee80211_hw *hw);
+ static bool phy_config_bb_with_headerfile(struct ieee80211_hw *hw,
+@@ -51,7 +45,7 @@ u32 rtl88e_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"regaddr(%#x), bitmask(%#x)\n", regaddr, bitmask);
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -74,7 +68,7 @@ void rtl88e_phy_set_bb_reg(struct ieee80211_hw *hw,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -99,7 +93,7 @@ u32 rtl88e_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 
+ 
+ 	original_value = _rtl88e_phy_rf_serial_read(hw, rfpath, regaddr);
+-	bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -127,7 +121,7 @@ void rtl88e_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl88e_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = _rtl88e_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c
+index 3d29c8dbb2559..144ee780e1b6a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c
+@@ -17,7 +17,7 @@ u32 rtl92c_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE, "regaddr(%#x), bitmask(%#x)\n",
+ 		regaddr, bitmask);
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -40,7 +40,7 @@ void rtl92c_phy_set_bb_reg(struct ieee80211_hw *hw,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -143,14 +143,6 @@ void _rtl92c_phy_rf_serial_write(struct ieee80211_hw *hw,
+ }
+ EXPORT_SYMBOL(_rtl92c_phy_rf_serial_write);
+ 
+-u32 _rtl92c_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+-EXPORT_SYMBOL(_rtl92c_phy_calculate_bit_shift);
+-
+ static void _rtl92c_phy_bb_config_1t(struct ieee80211_hw *hw)
+ {
+ 	rtl_set_bbreg(hw, RFPGA0_TXINFO, 0x3, 0x2);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h
+index 75afa6253ad02..e64d377dfe9e2 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h
+@@ -196,7 +196,6 @@ bool rtl92c_phy_set_rf_power_state(struct ieee80211_hw *hw,
+ void rtl92ce_phy_set_rf_on(struct ieee80211_hw *hw);
+ void rtl92c_phy_set_io(struct ieee80211_hw *hw);
+ void rtl92c_bb_block_on(struct ieee80211_hw *hw);
+-u32 _rtl92c_phy_calculate_bit_shift(u32 bitmask);
+ long _rtl92c_phy_txpwr_idx_to_dbm(struct ieee80211_hw *hw,
+ 				  enum wireless_mode wirelessmode,
+ 				  u8 txpwridx);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c
+index da54e51badd3a..fa70a7d5539fd 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.c
+@@ -39,7 +39,7 @@ u32 rtl92c_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 							       rfpath, regaddr);
+ 	}
+ 
+-	bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -110,7 +110,7 @@ void rtl92ce_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+@@ -122,7 +122,7 @@ void rtl92ce_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_fw_rf_serial_read(hw,
+ 								       rfpath,
+ 								       regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h
+index 7582a162bd112..c7a0d4c776f0a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h
+@@ -94,7 +94,6 @@ u32 _rtl92c_phy_rf_serial_read(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 			       u32 offset);
+ u32 _rtl92c_phy_fw_rf_serial_read(struct ieee80211_hw *hw,
+ 				  enum radio_path rfpath, u32 offset);
+-u32 _rtl92c_phy_calculate_bit_shift(u32 bitmask);
+ void _rtl92c_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 				 enum radio_path rfpath, u32 offset, u32 data);
+ void _rtl92c_phy_fw_rf_serial_write(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c
+index a8d9fe269f313..0b8cb7e61fd80 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/phy.c
+@@ -32,7 +32,7 @@ u32 rtl92cu_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 		original_value = _rtl92c_phy_fw_rf_serial_read(hw,
+ 							       rfpath, regaddr);
+ 	}
+-	bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"regaddr(%#x), rfpath(%#x), bitmask(%#x), original_value(%#x)\n",
+@@ -56,7 +56,7 @@ void rtl92cu_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+@@ -67,7 +67,7 @@ void rtl92cu_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = _rtl92c_phy_fw_rf_serial_read(hw,
+ 								       rfpath,
+ 								       regaddr);
+-			bitshift = _rtl92c_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+index d18c092b61426..d835a27429f0f 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
+@@ -169,13 +169,6 @@ static const u8 channel_all[59] = {
+ 	157, 159, 161, 163, 165
+ };
+ 
+-static u32 _rtl92d_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+-
+ u32 rtl92d_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+@@ -198,7 +191,7 @@ u32 rtl92d_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	} else {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+ 	}
+-	bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"BBR MASK=0x%x Addr[0x%x]=0x%x\n",
+@@ -230,7 +223,7 @@ void rtl92d_phy_set_bb_reg(struct ieee80211_hw *hw,
+ 					dbi_direct);
+ 		else
+ 			originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 	if (rtlhal->during_mac1init_radioa || rtlhal->during_mac0init_radiob)
+@@ -317,7 +310,7 @@ u32 rtl92d_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 		regaddr, rfpath, bitmask);
+ 	spin_lock(&rtlpriv->locks.rf_lock);
+ 	original_value = _rtl92d_phy_rf_serial_read(hw, rfpath, regaddr);
+-	bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -343,7 +336,7 @@ void rtl92d_phy_set_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 		if (bitmask != RFREG_OFFSET_MASK) {
+ 			original_value = _rtl92d_phy_rf_serial_read(hw,
+ 				rfpath, regaddr);
+-			bitshift = _rtl92d_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data = ((original_value & (~bitmask)) |
+ 				(data << bitshift));
+ 		}
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c
+index cc0bcaf13e96e..73ef602bfb01a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c
+@@ -16,7 +16,6 @@ static u32 _rtl92ee_phy_rf_serial_read(struct ieee80211_hw *hw,
+ static void _rtl92ee_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 					 enum radio_path rfpath, u32 offset,
+ 					 u32 data);
+-static u32 _rtl92ee_phy_calculate_bit_shift(u32 bitmask);
+ static bool _rtl92ee_phy_bb8192ee_config_parafile(struct ieee80211_hw *hw);
+ static bool _rtl92ee_phy_config_mac_with_headerfile(struct ieee80211_hw *hw);
+ static bool phy_config_bb_with_hdr_file(struct ieee80211_hw *hw,
+@@ -46,7 +45,7 @@ u32 rtl92ee_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+ 		"regaddr(%#x), bitmask(%#x)\n", regaddr, bitmask);
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE,
+@@ -68,7 +67,7 @@ void rtl92ee_phy_set_bb_reg(struct ieee80211_hw *hw, u32 regaddr,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -92,7 +91,7 @@ u32 rtl92ee_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 	spin_lock(&rtlpriv->locks.rf_lock);
+ 
+ 	original_value = _rtl92ee_phy_rf_serial_read(hw , rfpath, regaddr);
+-	bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -119,7 +118,7 @@ void rtl92ee_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 
+ 	if (bitmask != RFREG_OFFSET_MASK) {
+ 		original_value = _rtl92ee_phy_rf_serial_read(hw, rfpath, addr);
+-		bitshift = _rtl92ee_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = (original_value & (~bitmask)) | (data << bitshift);
+ 	}
+ 
+@@ -201,13 +200,6 @@ static void _rtl92ee_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 		pphyreg->rf3wire_offset, data_and_addr);
+ }
+ 
+-static u32 _rtl92ee_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+-
+ bool rtl92ee_phy_mac_config(struct ieee80211_hw *hw)
+ {
+ 	return _rtl92ee_phy_config_mac_with_headerfile(hw);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c
+index 09591a0b5a818..d9ef7e1da1db4 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c
+@@ -14,13 +14,6 @@
+ #include "hw.h"
+ #include "table.h"
+ 
+-static u32 _rtl92s_phy_calculate_bit_shift(u32 bitmask)
+-{
+-	u32 i = ffs(bitmask);
+-
+-	return i ? i - 1 : 32;
+-}
+-
+ u32 rtl92s_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+@@ -30,7 +23,7 @@ u32 rtl92s_phy_query_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask)
+ 		regaddr, bitmask);
+ 
+ 	originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-	bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	returnvalue = (originalvalue & bitmask) >> bitshift;
+ 
+ 	rtl_dbg(rtlpriv, COMP_RF, DBG_TRACE, "BBR MASK=0x%x Addr[0x%x]=0x%x\n",
+@@ -52,7 +45,7 @@ void rtl92s_phy_set_bb_reg(struct ieee80211_hw *hw, u32 regaddr, u32 bitmask,
+ 
+ 	if (bitmask != MASKDWORD) {
+ 		originalvalue = rtl_read_dword(rtlpriv, regaddr);
+-		bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((originalvalue & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+@@ -157,7 +150,7 @@ u32 rtl92s_phy_query_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 
+ 	original_value = _rtl92s_phy_rf_serial_read(hw, rfpath, regaddr);
+ 
+-	bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -188,7 +181,7 @@ void rtl92s_phy_set_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 	if (bitmask != RFREG_OFFSET_MASK) {
+ 		original_value = _rtl92s_phy_rf_serial_read(hw, rfpath,
+ 							    regaddr);
+-		bitshift = _rtl92s_phy_calculate_bit_shift(bitmask);
++		bitshift = calculate_bit_shift(bitmask);
+ 		data = ((original_value & (~bitmask)) | (data << bitshift));
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+index 5323ead30db03..fa1839d8ee55f 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+@@ -29,9 +29,10 @@ static void _rtl8821ae_phy_rf_serial_write(struct ieee80211_hw *hw,
+ 					   u32 data);
+ static u32 _rtl8821ae_phy_calculate_bit_shift(u32 bitmask)
+ {
+-	u32 i = ffs(bitmask);
++	if (WARN_ON_ONCE(!bitmask))
++		return 0;
+ 
+-	return i ? i - 1 : 32;
++	return __ffs(bitmask);
+ }
+ static bool _rtl8821ae_phy_bb8821a_config_parafile(struct ieee80211_hw *hw);
+ /*static bool _rtl8812ae_phy_config_mac_with_headerfile(struct ieee80211_hw *hw);*/
+diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+index 31a481f43a07d..5d842cc394aac 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
++++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+@@ -3069,4 +3069,11 @@ static inline struct ieee80211_sta *rtl_find_sta(struct ieee80211_hw *hw,
+ 	return ieee80211_find_sta(mac->vif, mac_addr);
+ }
+ 
++static inline u32 calculate_bit_shift(u32 bitmask)
++{
++	if (WARN_ON_ONCE(!bitmask))
++		return 0;
++
++	return __ffs(bitmask);
++}
+ #endif
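
The rtlwifi hunks above fold six identical per-driver helpers into one calculate_bit_shift() in wifi.h. The old open-coding returned 32 for an empty bitmask, and callers then shifted a 32-bit value by 32, which is undefined behaviour in C; the new helper warns once and returns 0 instead. A minimal userspace model of the two behaviours, with ffs() from <strings.h> standing in for the kernel's and the assert values purely illustrative:

/* Old vs. new semantics of the bit-shift helper. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <strings.h>

/* Old: ffs() is 1-based, so subtract one; a zero mask yielded 32, and
 * callers then shifted a u32 by 32, which is undefined behaviour.
 */
static uint32_t old_calculate_bit_shift(uint32_t bitmask)
{
	uint32_t i = ffs(bitmask);

	return i ? i - 1 : 32;
}

/* New: refuse an empty mask up front, mirroring WARN_ON_ONCE(!bitmask)
 * plus __ffs(bitmask) in the consolidated helper.
 */
static uint32_t new_calculate_bit_shift(uint32_t bitmask)
{
	if (bitmask == 0) {
		fprintf(stderr, "warning: empty bitmask\n");
		return 0;
	}
	return ffs(bitmask) - 1;	/* == __ffs() for nonzero input */
}

int main(void)
{
	assert(old_calculate_bit_shift(0x00f0) == 4);
	assert(new_calculate_bit_shift(0x00f0) == 4);
	/* The case that changed: an empty mask no longer yields 32. */
	printf("old: %u, new: %u\n",
	       old_calculate_bit_shift(0), new_calculate_bit_shift(0));
	return 0;
}
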
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index a99b53d442676..d8d68f16014e3 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -280,9 +280,9 @@ static void rtw_ops_configure_filter(struct ieee80211_hw *hw,
+ 
+ 	if (changed_flags & FIF_ALLMULTI) {
+ 		if (*new_flags & FIF_ALLMULTI)
+-			rtwdev->hal.rcr |= BIT_AM | BIT_AB;
++			rtwdev->hal.rcr |= BIT_AM;
+ 		else
+-			rtwdev->hal.rcr &= ~(BIT_AM | BIT_AB);
++			rtwdev->hal.rcr &= ~(BIT_AM);
+ 	}
+ 	if (changed_flags & FIF_FCSFAIL) {
+ 		if (*new_flags & FIF_FCSFAIL)
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index 2c1fb2dabd40a..0cae5746f540f 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -500,19 +500,40 @@ static u32 rtw_sdio_get_tx_addr(struct rtw_dev *rtwdev, size_t size,
+ static int rtw_sdio_read_port(struct rtw_dev *rtwdev, u8 *buf, size_t count)
+ {
+ 	struct rtw_sdio *rtwsdio = (struct rtw_sdio *)rtwdev->priv;
++	struct mmc_host *host = rtwsdio->sdio_func->card->host;
+ 	bool bus_claim = rtw_sdio_bus_claim_needed(rtwsdio);
+ 	u32 rxaddr = rtwsdio->rx_addr++;
+-	int ret;
++	int ret = 0, err;
++	size_t bytes;
+ 
+ 	if (bus_claim)
+ 		sdio_claim_host(rtwsdio->sdio_func);
+ 
+-	ret = sdio_memcpy_fromio(rtwsdio->sdio_func, buf,
+-				 RTW_SDIO_ADDR_RX_RX0FF_GEN(rxaddr), count);
+-	if (ret)
+-		rtw_warn(rtwdev,
+-			 "Failed to read %zu byte(s) from SDIO port 0x%08x",
+-			 count, rxaddr);
++	while (count > 0) {
++		bytes = min_t(size_t, host->max_req_size, count);
++
++		err = sdio_memcpy_fromio(rtwsdio->sdio_func, buf,
++					 RTW_SDIO_ADDR_RX_RX0FF_GEN(rxaddr),
++					 bytes);
++		if (err) {
++			rtw_warn(rtwdev,
++				 "Failed to read %zu byte(s) from SDIO port 0x%08x: %d",
++				 bytes, rxaddr, err);
++
++			/* Signal to the caller that reading did not work and
++			 * that the data in the buffer is short/corrupted.
++			 */
++			ret = err;
++
++			/* Don't stop here - instead drain the remaining data
++			 * from the card's buffer, else the card will return
++			 * corrupt data for the next rtw_sdio_read_port() call.
++			 */
++		}
++
++		count -= bytes;
++		buf += bytes;
++	}
+ 
+ 	if (bus_claim)
+ 		sdio_release_host(rtwsdio->sdio_func);
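
The hunk above splits one large sdio_memcpy_fromio() into chunks no bigger than the MMC host's max_req_size, and deliberately keeps draining after a failed chunk so the card's RX buffer stays in sync for the next read. A standalone userspace sketch of that loop, with read_chunk() standing in for sdio_memcpy_fromio() and an assumed 4 KiB host limit:

#include <stddef.h>
#include <stdio.h>

#define MAX_REQ_SIZE 4096u	/* assumed host limit for the demo */

static int read_chunk(unsigned char *buf, size_t bytes)
{
	(void)buf;
	(void)bytes;
	return 0;	/* pretend the transfer succeeded */
}

static int read_port(unsigned char *buf, size_t count)
{
	int ret = 0;

	while (count > 0) {
		size_t bytes = count < MAX_REQ_SIZE ? count : MAX_REQ_SIZE;
		int err = read_chunk(buf, bytes);

		if (err)
			ret = err;	/* remember it, but keep draining */

		buf += bytes;
		count -= bytes;
	}
	return ret;
}

int main(void)
{
	unsigned char buf[10000];

	printf("status: %d\n", read_port(buf, sizeof(buf)));
	return 0;
}
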
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 88f760a7cbc35..d7503aef599f0 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -463,12 +463,25 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 	}
+ 
+ 	for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS;
+-	     shinfo->nr_frags++, gop++, nr_slots--) {
++	     nr_slots--) {
++		if (unlikely(!txp->size)) {
++			unsigned long flags;
++
++			spin_lock_irqsave(&queue->response_lock, flags);
++			make_tx_response(queue, txp, 0, XEN_NETIF_RSP_OKAY);
++			push_tx_responses(queue);
++			spin_unlock_irqrestore(&queue->response_lock, flags);
++			++txp;
++			continue;
++		}
++
+ 		index = pending_index(queue->pending_cons++);
+ 		pending_idx = queue->pending_ring[index];
+ 		xenvif_tx_create_map_op(queue, pending_idx, txp,
+ 				        txp == first ? extra_count : 0, gop);
+ 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
++		++shinfo->nr_frags;
++		++gop;
+ 
+ 		if (txp == first)
+ 			txp = txfrags;
+@@ -481,20 +494,39 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 		shinfo = skb_shinfo(nskb);
+ 		frags = shinfo->frags;
+ 
+-		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots;
+-		     shinfo->nr_frags++, txp++, gop++) {
++		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; ++txp) {
++			if (unlikely(!txp->size)) {
++				unsigned long flags;
++
++				spin_lock_irqsave(&queue->response_lock, flags);
++				make_tx_response(queue, txp, 0,
++						 XEN_NETIF_RSP_OKAY);
++				push_tx_responses(queue);
++				spin_unlock_irqrestore(&queue->response_lock,
++						       flags);
++				continue;
++			}
++
+ 			index = pending_index(queue->pending_cons++);
+ 			pending_idx = queue->pending_ring[index];
+ 			xenvif_tx_create_map_op(queue, pending_idx, txp, 0,
+ 						gop);
+ 			frag_set_pending_idx(&frags[shinfo->nr_frags],
+ 					     pending_idx);
++			++shinfo->nr_frags;
++			++gop;
+ 		}
+ 
+-		skb_shinfo(skb)->frag_list = nskb;
+-	} else if (nskb) {
++		if (shinfo->nr_frags) {
++			skb_shinfo(skb)->frag_list = nskb;
++			nskb = NULL;
++		}
++	}
++
++	if (nskb) {
+ 		/* A frag_list skb was allocated but it is no longer needed
+-		 * because enough slots were converted to copy ops above.
++		 * because enough slots were converted to copy ops above or some
++		 * were empty.
+ 		 */
+ 		kfree_skb(nskb);
+ 	}
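
Both loops above now advance the fragment index only when a slot actually carries data: zero-size tx requests are completed immediately instead of consuming a frag and a grant-mapping operation, and the frag_list skb is freed when no fragment ended up on it. A userspace model of the corrected accounting, with a counter standing in for make_tx_response():

#include <stdio.h>

struct slot { unsigned int size; };

int main(void)
{
	struct slot txp[] = { { 512 }, { 0 }, { 1024 }, { 0 }, { 256 } };
	unsigned int nr_frags = 0, acked_empty = 0;
	unsigned int i, nslots = sizeof(txp) / sizeof(txp[0]);

	for (i = 0; i < nslots; i++) {
		if (txp[i].size == 0) {
			acked_empty++;	/* make_tx_response() stand-in */
			continue;	/* no frag, no map op */
		}
		nr_frags++;		/* advance only on real data */
	}
	printf("frags: %u, empty slots acked: %u\n", nr_frags, acked_empty);
	return 0;
}
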
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 4cc27856aa8fe..80db64a4b09f6 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -24,6 +24,7 @@
+ #include "nvmet.h"
+ 
+ #define NVMET_TCP_DEF_INLINE_DATA_SIZE	(4 * PAGE_SIZE)
++#define NVMET_TCP_MAXH2CDATA		0x400000 /* 16M arbitrary limit */
+ 
+ static int param_store_val(const char *str, int *val, int min, int max)
+ {
+@@ -923,7 +924,7 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
+ 	icresp->hdr.pdo = 0;
+ 	icresp->hdr.plen = cpu_to_le32(icresp->hdr.hlen);
+ 	icresp->pfv = cpu_to_le16(NVME_TCP_PFV_1_0);
+-	icresp->maxdata = cpu_to_le32(0x400000); /* 16M arbitrary limit */
++	icresp->maxdata = cpu_to_le32(NVMET_TCP_MAXH2CDATA);
+ 	icresp->cpda = 0;
+ 	if (queue->hdr_digest)
+ 		icresp->digest |= NVME_TCP_HDR_DIGEST_ENABLE;
+@@ -978,6 +979,7 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue)
+ {
+ 	struct nvme_tcp_data_pdu *data = &queue->pdu.data;
+ 	struct nvmet_tcp_cmd *cmd;
++	unsigned int exp_data_len;
+ 
+ 	if (likely(queue->nr_cmds)) {
+ 		if (unlikely(data->ttag >= queue->nr_cmds)) {
+@@ -996,12 +998,24 @@ static int nvmet_tcp_handle_h2c_data_pdu(struct nvmet_tcp_queue *queue)
+ 			data->ttag, le32_to_cpu(data->data_offset),
+ 			cmd->rbytes_done);
+ 		/* FIXME: use path and transport errors */
+-		nvmet_req_complete(&cmd->req,
+-			NVME_SC_INVALID_FIELD | NVME_SC_DNR);
++		nvmet_tcp_fatal_error(queue);
+ 		return -EPROTO;
+ 	}
+ 
++	exp_data_len = le32_to_cpu(data->hdr.plen) -
++			nvmet_tcp_hdgst_len(queue) -
++			nvmet_tcp_ddgst_len(queue) -
++			sizeof(*data);
++
+ 	cmd->pdu_len = le32_to_cpu(data->data_length);
++	if (unlikely(cmd->pdu_len != exp_data_len ||
++		     cmd->pdu_len == 0 ||
++		     cmd->pdu_len > NVMET_TCP_MAXH2CDATA)) {
++		pr_err("H2CData PDU len %u is invalid\n", cmd->pdu_len);
++		/* FIXME: use proper transport errors */
++		nvmet_tcp_fatal_error(queue);
++		return -EPROTO;
++	}
+ 	cmd->pdu_recv = 0;
+ 	nvmet_tcp_build_pdu_iovec(cmd);
+ 	queue->cmd = cmd;
+@@ -1768,7 +1782,7 @@ static int nvmet_tcp_try_peek_pdu(struct nvmet_tcp_queue *queue)
+ 		 (int)sizeof(struct nvme_tcp_icreq_pdu));
+ 	if (hdr->type == nvme_tcp_icreq &&
+ 	    hdr->hlen == sizeof(struct nvme_tcp_icreq_pdu) &&
+-	    hdr->plen == (__le32)sizeof(struct nvme_tcp_icreq_pdu)) {
++	    hdr->plen == cpu_to_le32(sizeof(struct nvme_tcp_icreq_pdu))) {
+ 		pr_debug("queue %d: icreq detected\n",
+ 			 queue->idx);
+ 		return len;
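
Two independent fixes here: the H2CData handler now recomputes the payload length implied by the PDU header (plen minus header and digest overhead) and tears the connection down when data_length mismatches, is zero, or exceeds the advertised MAXH2CDATA; and the icreq peek compares plen via cpu_to_le32() instead of a bare (__le32) cast, which only happened to work on little-endian hosts. A standalone sketch of the length check; the 24-byte header and 4-byte digests follow the NVMe/TCP data PDU layout, but treat the constants as illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAXH2CDATA 0x400000u	/* 16M, as NVMET_TCP_MAXH2CDATA */

static bool h2c_len_valid(uint32_t plen, uint32_t data_length,
			  uint32_t hdr_len, bool hdgst, bool ddgst)
{
	uint32_t exp = plen - (hdgst ? 4 : 0) - (ddgst ? 4 : 0) - hdr_len;

	return data_length == exp && data_length != 0 &&
	       data_length <= MAXH2CDATA;
}

int main(void)
{
	/* 24-byte header, both digests enabled, 4 KiB payload */
	printf("%d\n", h2c_len_valid(24 + 4 + 4 + 4096, 4096, 24,
				     true, true));
	return 0;
}
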
+diff --git a/drivers/nvme/target/trace.h b/drivers/nvme/target/trace.h
+index 6109b3806b12b..974d99d47f514 100644
+--- a/drivers/nvme/target/trace.h
++++ b/drivers/nvme/target/trace.h
+@@ -53,8 +53,7 @@ static inline void __assign_req_name(char *name, struct nvmet_req *req)
+ 		return;
+ 	}
+ 
+-	strncpy(name, req->ns->device_path,
+-		min_t(size_t, DISK_NAME_LEN, strlen(req->ns->device_path)));
++	strscpy_pad(name, req->ns->device_path, DISK_NAME_LEN);
+ }
+ #endif
+ 
+@@ -85,7 +84,7 @@ TRACE_EVENT(nvmet_req_init,
+ 		__entry->flags = cmd->common.flags;
+ 		__entry->nsid = le32_to_cpu(cmd->common.nsid);
+ 		__entry->metadata = le64_to_cpu(cmd->common.metadata);
+-		memcpy(__entry->cdw10, &cmd->common.cdw10,
++		memcpy(__entry->cdw10, &cmd->common.cdws,
+ 			sizeof(__entry->cdw10));
+ 	),
+ 	TP_printk("nvmet%s: %sqid=%d, cmdid=%u, nsid=%u, flags=%#x, "
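
The old strncpy() call clamped the copy length to strlen() of the source, so a device path of DISK_NAME_LEN bytes or more left the destination unterminated and a shorter one left trailing garbage in the fixed-size trace field; strscpy_pad() both terminates and zero-fills. A userspace model of the strscpy_pad() guarantee, hand-rolled here since it is a kernel API:

#include <stdio.h>
#include <string.h>

#define DISK_NAME_LEN 32

static void copy_padded(char *dst, const char *src, size_t size)
{
	size_t n = strnlen(src, size - 1);

	memcpy(dst, src, n);
	memset(dst + n, 0, size - n);	/* terminator plus zero padding */
}

int main(void)
{
	char name[DISK_NAME_LEN];

	copy_padded(name, "/dev/nvme0n1", sizeof(name));
	printf("%s\n", name);
	return 0;
}
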
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 8d93cb6ea9cde..b0ad8fc06e80e 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -1464,6 +1464,7 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ 		out_args->np = new;
+ 		of_node_put(cur);
+ 		cur = new;
++		new = NULL;
+ 	}
+ put:
+ 	of_node_put(cur);
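
The one-line fix clears the local `new` pointer once its reference has been handed over to cur/out_args, so the shared cleanup path cannot drop a reference the caller now owns. A hand-simplified userspace model of that ownership transfer, with a plain int standing in for the kobject refcount:

#include <stdio.h>

struct node { int refcount; };

static void node_put(struct node *n)
{
	if (n)
		n->refcount--;
}

int main(void)
{
	struct node provider = { .refcount = 1 };	/* one lookup ref */
	struct node *cur, *new = &provider;

	/* Success path: out_args->np (modeled by cur) takes over the
	 * reference that new holds.
	 */
	cur = new;
	new = NULL;	/* the one-line fix: forget the transferred ref */

	/* Shared exit path drops new; without the fix the caller's
	 * reference was released underneath it.
	 */
	node_put(new);
	printf("refcount still held for caller: %d\n", provider.refcount);
	(void)cur;
	return 0;
}
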
+diff --git a/drivers/of/unittest-data/tests-phandle.dtsi b/drivers/of/unittest-data/tests-phandle.dtsi
+index d01f92f0f0db7..554a996b2ef18 100644
+--- a/drivers/of/unittest-data/tests-phandle.dtsi
++++ b/drivers/of/unittest-data/tests-phandle.dtsi
+@@ -40,6 +40,13 @@
+ 				phandle-map-pass-thru = <0x0 0xf0>;
+ 			};
+ 
++			provider5: provider5 {
++				#phandle-cells = <2>;
++				phandle-map = <2 7 &provider4 2 3>;
++				phandle-map-mask = <0xff 0xf>;
++				phandle-map-pass-thru = <0x0 0xf0>;
++			};
++
+ 			consumer-a {
+ 				phandle-list =	<&provider1 1>,
+ 						<&provider2 2 0>,
+@@ -66,7 +73,8 @@
+ 						<&provider4 4 0x100>,
+ 						<&provider4 0 0x61>,
+ 						<&provider0>,
+-						<&provider4 19 0x20>;
++						<&provider4 19 0x20>,
++						<&provider5 2 7>;
+ 				phandle-list-bad-phandle = <12345678 0 0>;
+ 				phandle-list-bad-args = <&provider2 1 0>,
+ 							<&provider4 0>;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index e9e90e96600e4..cfd60e35a8992 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -456,6 +456,9 @@ static void __init of_unittest_parse_phandle_with_args(void)
+ 
+ 		unittest(passed, "index %i - data error on node %pOF rc=%i\n",
+ 			 i, args.np, rc);
++
++		if (rc == 0)
++			of_node_put(args.np);
+ 	}
+ 
+ 	/* Check for missing list property */
+@@ -545,8 +548,9 @@ static void __init of_unittest_parse_phandle_with_args(void)
+ 
+ static void __init of_unittest_parse_phandle_with_args_map(void)
+ {
+-	struct device_node *np, *p0, *p1, *p2, *p3;
++	struct device_node *np, *p[6] = {};
+ 	struct of_phandle_args args;
++	unsigned int prefs[6];
+ 	int i, rc;
+ 
+ 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-b");
+@@ -555,34 +559,24 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 		return;
+ 	}
+ 
+-	p0 = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
+-	if (!p0) {
+-		pr_err("missing testcase data\n");
+-		return;
+-	}
+-
+-	p1 = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
+-	if (!p1) {
+-		pr_err("missing testcase data\n");
+-		return;
+-	}
+-
+-	p2 = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
+-	if (!p2) {
+-		pr_err("missing testcase data\n");
+-		return;
+-	}
+-
+-	p3 = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
+-	if (!p3) {
+-		pr_err("missing testcase data\n");
+-		return;
++	p[0] = of_find_node_by_path("/testcase-data/phandle-tests/provider0");
++	p[1] = of_find_node_by_path("/testcase-data/phandle-tests/provider1");
++	p[2] = of_find_node_by_path("/testcase-data/phandle-tests/provider2");
++	p[3] = of_find_node_by_path("/testcase-data/phandle-tests/provider3");
++	p[4] = of_find_node_by_path("/testcase-data/phandle-tests/provider4");
++	p[5] = of_find_node_by_path("/testcase-data/phandle-tests/provider5");
++	for (i = 0; i < ARRAY_SIZE(p); ++i) {
++		if (!p[i]) {
++			pr_err("missing testcase data\n");
++			return;
++		}
++		prefs[i] = kref_read(&p[i]->kobj.kref);
+ 	}
+ 
+ 	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
+-	unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
++	unittest(rc == 8, "of_count_phandle_with_args() returned %i, expected 8\n", rc);
+ 
+-	for (i = 0; i < 8; i++) {
++	for (i = 0; i < 9; i++) {
+ 		bool passed = true;
+ 
+ 		memset(&args, 0, sizeof(args));
+@@ -593,13 +587,13 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 		switch (i) {
+ 		case 0:
+ 			passed &= !rc;
+-			passed &= (args.np == p1);
++			passed &= (args.np == p[1]);
+ 			passed &= (args.args_count == 1);
+ 			passed &= (args.args[0] == 1);
+ 			break;
+ 		case 1:
+ 			passed &= !rc;
+-			passed &= (args.np == p3);
++			passed &= (args.np == p[3]);
+ 			passed &= (args.args_count == 3);
+ 			passed &= (args.args[0] == 2);
+ 			passed &= (args.args[1] == 5);
+@@ -610,28 +604,36 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 			break;
+ 		case 3:
+ 			passed &= !rc;
+-			passed &= (args.np == p0);
++			passed &= (args.np == p[0]);
+ 			passed &= (args.args_count == 0);
+ 			break;
+ 		case 4:
+ 			passed &= !rc;
+-			passed &= (args.np == p1);
++			passed &= (args.np == p[1]);
+ 			passed &= (args.args_count == 1);
+ 			passed &= (args.args[0] == 3);
+ 			break;
+ 		case 5:
+ 			passed &= !rc;
+-			passed &= (args.np == p0);
++			passed &= (args.np == p[0]);
+ 			passed &= (args.args_count == 0);
+ 			break;
+ 		case 6:
+ 			passed &= !rc;
+-			passed &= (args.np == p2);
++			passed &= (args.np == p[2]);
+ 			passed &= (args.args_count == 2);
+ 			passed &= (args.args[0] == 15);
+ 			passed &= (args.args[1] == 0x20);
+ 			break;
+ 		case 7:
++			passed &= !rc;
++			passed &= (args.np == p[3]);
++			passed &= (args.args_count == 3);
++			passed &= (args.args[0] == 2);
++			passed &= (args.args[1] == 5);
++			passed &= (args.args[2] == 3);
++			break;
++		case 8:
+ 			passed &= (rc == -ENOENT);
+ 			break;
+ 		default:
+@@ -640,6 +642,9 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 
+ 		unittest(passed, "index %i - data error on node %s rc=%i\n",
+ 			 i, args.np->full_name, rc);
++
++		if (rc == 0)
++			of_node_put(args.np);
+ 	}
+ 
+ 	/* Check for missing list property */
+@@ -686,6 +691,13 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 		   "OF: /testcase-data/phandle-tests/consumer-b: #phandle-cells = 2 found 1");
+ 
+ 	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
++
++	for (i = 0; i < ARRAY_SIZE(p); ++i) {
++		unittest(prefs[i] == kref_read(&p[i]->kobj.kref),
++			 "provider%d: expected:%d got:%d\n",
++			 i, prefs[i], kref_read(&p[i]->kobj.kref));
++		of_node_put(p[i]);
++	}
+ }
+ 
+ static void __init of_unittest_property_string(void)
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 0def919f89faf..cf3836561316d 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -1218,7 +1218,16 @@ static int ks_pcie_probe(struct platform_device *pdev)
+ 		goto err_link;
+ 	}
+ 
++	/* Obtain references to the PHYs */
++	for (i = 0; i < num_lanes; i++)
++		phy_pm_runtime_get_sync(ks_pcie->phy[i]);
++
+ 	ret = ks_pcie_enable_phy(ks_pcie);
++
++	/* Release references to the PHYs */
++	for (i = 0; i < num_lanes; i++)
++		phy_pm_runtime_put_sync(ks_pcie->phy[i]);
++
+ 	if (ret) {
+ 		dev_err(dev, "failed to enable phy\n");
+ 		goto err_link;
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index f6207989fc6ad..bc94d7f395357 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -615,6 +615,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+ 	}
+ 
+ 	aligned_offset = msg_addr & (epc->mem->window.page_size - 1);
++	msg_addr &= ~aligned_offset;
+ 	ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
+ 				  epc->mem->window.page_size);
+ 	if (ret)
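
The added line masks the MSI-X message address down to a window-aligned base before mapping, since the outbound window can only target page_size-aligned addresses; the low bits are applied as an offset into the mapped region instead. A tiny userspace model of the split, assuming a power-of-two 4 KiB window and an illustrative target address:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t msg_addr = 0xfee01004ULL;	/* example MSI-X target */
	uint64_t page_size = 0x1000;		/* must be a power of two */
	uint64_t aligned_offset = msg_addr & (page_size - 1);
	uint64_t map_base = msg_addr & ~aligned_offset;

	printf("map %#llx, write at +%#llx\n",
	       (unsigned long long)map_base,
	       (unsigned long long)aligned_offset);
	return 0;
}
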
+diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
+index e0e27645fdf4c..975b3024fb08c 100644
+--- a/drivers/pci/controller/pcie-mediatek-gen3.c
++++ b/drivers/pci/controller/pcie-mediatek-gen3.c
+@@ -245,35 +245,60 @@ static int mtk_pcie_set_trans_table(struct mtk_gen3_pcie *pcie,
+ 				    resource_size_t cpu_addr,
+ 				    resource_size_t pci_addr,
+ 				    resource_size_t size,
+-				    unsigned long type, int num)
++				    unsigned long type, int *num)
+ {
++	resource_size_t remaining = size;
++	resource_size_t table_size;
++	resource_size_t addr_align;
++	const char *range_type;
+ 	void __iomem *table;
+ 	u32 val;
+ 
+-	if (num >= PCIE_MAX_TRANS_TABLES) {
+-		dev_err(pcie->dev, "not enough translate table for addr: %#llx, limited to [%d]\n",
+-			(unsigned long long)cpu_addr, PCIE_MAX_TRANS_TABLES);
+-		return -ENODEV;
+-	}
++	while (remaining && (*num < PCIE_MAX_TRANS_TABLES)) {
++		/* Table size needs to be a power of 2 */
++		table_size = BIT(fls(remaining) - 1);
++
++		if (cpu_addr > 0) {
++			addr_align = BIT(ffs(cpu_addr) - 1);
++			table_size = min(table_size, addr_align);
++		}
++
++		/* Minimum size of translate table is 4KiB */
++		if (table_size < 0x1000) {
++			dev_err(pcie->dev, "illegal table size %#llx\n",
++				(unsigned long long)table_size);
++			return -EINVAL;
++		}
+ 
+-	table = pcie->base + PCIE_TRANS_TABLE_BASE_REG +
+-		num * PCIE_ATR_TLB_SET_OFFSET;
++		table = pcie->base + PCIE_TRANS_TABLE_BASE_REG + *num * PCIE_ATR_TLB_SET_OFFSET;
++		writel_relaxed(lower_32_bits(cpu_addr) | PCIE_ATR_SIZE(fls(table_size) - 1), table);
++		writel_relaxed(upper_32_bits(cpu_addr), table + PCIE_ATR_SRC_ADDR_MSB_OFFSET);
++		writel_relaxed(lower_32_bits(pci_addr), table + PCIE_ATR_TRSL_ADDR_LSB_OFFSET);
++		writel_relaxed(upper_32_bits(pci_addr), table + PCIE_ATR_TRSL_ADDR_MSB_OFFSET);
+ 
+-	writel_relaxed(lower_32_bits(cpu_addr) | PCIE_ATR_SIZE(fls(size) - 1),
+-		       table);
+-	writel_relaxed(upper_32_bits(cpu_addr),
+-		       table + PCIE_ATR_SRC_ADDR_MSB_OFFSET);
+-	writel_relaxed(lower_32_bits(pci_addr),
+-		       table + PCIE_ATR_TRSL_ADDR_LSB_OFFSET);
+-	writel_relaxed(upper_32_bits(pci_addr),
+-		       table + PCIE_ATR_TRSL_ADDR_MSB_OFFSET);
++		if (type == IORESOURCE_IO) {
++			val = PCIE_ATR_TYPE_IO | PCIE_ATR_TLP_TYPE_IO;
++			range_type = "IO";
++		} else {
++			val = PCIE_ATR_TYPE_MEM | PCIE_ATR_TLP_TYPE_MEM;
++			range_type = "MEM";
++		}
+ 
+-	if (type == IORESOURCE_IO)
+-		val = PCIE_ATR_TYPE_IO | PCIE_ATR_TLP_TYPE_IO;
+-	else
+-		val = PCIE_ATR_TYPE_MEM | PCIE_ATR_TLP_TYPE_MEM;
++		writel_relaxed(val, table + PCIE_ATR_TRSL_PARAM_OFFSET);
+ 
+-	writel_relaxed(val, table + PCIE_ATR_TRSL_PARAM_OFFSET);
++		dev_dbg(pcie->dev, "set %s trans window[%d]: cpu_addr = %#llx, pci_addr = %#llx, size = %#llx\n",
++			range_type, *num, (unsigned long long)cpu_addr,
++			(unsigned long long)pci_addr, (unsigned long long)table_size);
++
++		cpu_addr += table_size;
++		pci_addr += table_size;
++		remaining -= table_size;
++		(*num)++;
++	}
++
++	if (remaining)
++		dev_warn(pcie->dev, "not enough translate table for addr: %#llx, limited to [%d]\n",
++			 (unsigned long long)cpu_addr, PCIE_MAX_TRANS_TABLES);
+ 
+ 	return 0;
+ }
+@@ -380,30 +405,20 @@ static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie)
+ 		resource_size_t cpu_addr;
+ 		resource_size_t pci_addr;
+ 		resource_size_t size;
+-		const char *range_type;
+ 
+-		if (type == IORESOURCE_IO) {
++		if (type == IORESOURCE_IO)
+ 			cpu_addr = pci_pio_to_address(res->start);
+-			range_type = "IO";
+-		} else if (type == IORESOURCE_MEM) {
++		else if (type == IORESOURCE_MEM)
+ 			cpu_addr = res->start;
+-			range_type = "MEM";
+-		} else {
++		else
+ 			continue;
+-		}
+ 
+ 		pci_addr = res->start - entry->offset;
+ 		size = resource_size(res);
+ 		err = mtk_pcie_set_trans_table(pcie, cpu_addr, pci_addr, size,
+-					       type, table_index);
++					       type, &table_index);
+ 		if (err)
+ 			return err;
+-
+-		dev_dbg(pcie->dev, "set %s trans window[%d]: cpu_addr = %#llx, pci_addr = %#llx, size = %#llx\n",
+-			range_type, table_index, (unsigned long long)cpu_addr,
+-			(unsigned long long)pci_addr, (unsigned long long)size);
+-
+-		table_index++;
+ 	}
+ 
+ 	return 0;
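
The rework turns the single fixed translation window into a greedy carve-up: each window must be a power-of-two size, naturally aligned to the current CPU address, and at least 4 KiB, so large or oddly aligned ranges are split across several tables and only a leftover beyond PCIE_MAX_TRANS_TABLES draws a warning. A standalone model of the carving loop, with printf() standing in for the register writes and made-up addresses:

#include <stdint.h>
#include <stdio.h>

static uint64_t bit_floor64(uint64_t x)
{
	while (x & (x - 1))
		x &= x - 1;	/* clear lowest set bits; highest remains */
	return x;
}

int main(void)
{
	uint64_t cpu_addr = 0x20003000, remaining = 0x15000;
	uint64_t align;
	int num = 0;

	while (remaining) {
		uint64_t size = bit_floor64(remaining);

		if (cpu_addr) {
			align = cpu_addr & -cpu_addr;	/* natural alignment */
			if (align < size)
				size = align;
		}
		if (size < 0x1000)
			break;	/* hardware minimum window size */
		printf("window %d: cpu %#llx, size %#llx\n", num++,
		       (unsigned long long)cpu_addr,
		       (unsigned long long)size);
		cpu_addr += size;
		remaining -= size;
	}
	return 0;
}

For this sample range the loop programs 0x1000-, 0x4000-, 0x8000- and 0x8000-byte windows, which is exactly the alignment-limited sequence the hardware requires.
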
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 66a8f73296fc8..48372013f26d2 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -617,12 +617,18 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc)
+ 		if (status & MSI_STATUS){
+ 			unsigned long imsi_status;
+ 
++			/*
++			 * The interrupt status can be cleared even if the
++			 * MSI status remains pending. As such, given the
++			 * edge-triggered interrupt type, its status should
++			 * be cleared before being dispatched to the
++			 * handler of the underlying device.
++			 */
++			writel(MSI_STATUS, port->base + PCIE_INT_STATUS);
+ 			while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) {
+ 				for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM)
+ 					generic_handle_domain_irq(port->inner_domain, bit);
+ 			}
+-			/* Clear MSI interrupt status */
+-			writel(MSI_STATUS, port->base + PCIE_INT_STATUS);
+ 		}
+ 	}
+ 
+diff --git a/drivers/pci/controller/pcie-xilinx-dma-pl.c b/drivers/pci/controller/pcie-xilinx-dma-pl.c
+index 2f7d676c683cc..3d7f280edade0 100644
+--- a/drivers/pci/controller/pcie-xilinx-dma-pl.c
++++ b/drivers/pci/controller/pcie-xilinx-dma-pl.c
+@@ -576,7 +576,7 @@ static int xilinx_pl_dma_pcie_init_irq_domain(struct pl_dma_pcie *port)
+ 						  &intx_domain_ops, port);
+ 	if (!port->intx_domain) {
+ 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
+-		return PTR_ERR(port->intx_domain);
++		return -ENOMEM;
+ 	}
+ 
+ 	irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
+@@ -635,14 +635,14 @@ static int xilinx_pl_dma_pcie_setup_irq(struct pl_dma_pcie *port)
+ 	err = devm_request_irq(dev, port->intx_irq, xilinx_pl_dma_pcie_intx_flow,
+ 			       IRQF_SHARED | IRQF_NO_THREAD, NULL, port);
+ 	if (err) {
+-		dev_err(dev, "Failed to request INTx IRQ %d\n", irq);
++		dev_err(dev, "Failed to request INTx IRQ %d\n", port->intx_irq);
+ 		return err;
+ 	}
+ 
+ 	err = devm_request_irq(dev, port->irq, xilinx_pl_dma_pcie_event_flow,
+ 			       IRQF_SHARED | IRQF_NO_THREAD, NULL, port);
+ 	if (err) {
+-		dev_err(dev, "Failed to request event IRQ %d\n", irq);
++		dev_err(dev, "Failed to request event IRQ %d\n", port->irq);
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+index b7b9d3e21f97d..6dc918a8a0235 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
++++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+@@ -209,28 +209,28 @@ static void pci_epf_mhi_raise_irq(struct mhi_ep_cntrl *mhi_cntrl, u32 vector)
+ 			  vector + 1);
+ }
+ 
+-static int pci_epf_mhi_iatu_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+-				 void *to, size_t size)
++static int pci_epf_mhi_iatu_read(struct mhi_ep_cntrl *mhi_cntrl,
++				 struct mhi_ep_buf_info *buf_info)
+ {
+ 	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+-	size_t offset = get_align_offset(epf_mhi, from);
++	size_t offset = get_align_offset(epf_mhi, buf_info->host_addr);
+ 	void __iomem *tre_buf;
+ 	phys_addr_t tre_phys;
+ 	int ret;
+ 
+ 	mutex_lock(&epf_mhi->lock);
+ 
+-	ret = __pci_epf_mhi_alloc_map(mhi_cntrl, from, &tre_phys, &tre_buf,
+-				      offset, size);
++	ret = __pci_epf_mhi_alloc_map(mhi_cntrl, buf_info->host_addr, &tre_phys,
++				      &tre_buf, offset, buf_info->size);
+ 	if (ret) {
+ 		mutex_unlock(&epf_mhi->lock);
+ 		return ret;
+ 	}
+ 
+-	memcpy_fromio(to, tre_buf, size);
++	memcpy_fromio(buf_info->dev_addr, tre_buf, buf_info->size);
+ 
+-	__pci_epf_mhi_unmap_free(mhi_cntrl, from, tre_phys, tre_buf, offset,
+-				 size);
++	__pci_epf_mhi_unmap_free(mhi_cntrl, buf_info->host_addr, tre_phys,
++				 tre_buf, offset, buf_info->size);
+ 
+ 	mutex_unlock(&epf_mhi->lock);
+ 
+@@ -238,27 +238,27 @@ static int pci_epf_mhi_iatu_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+ }
+ 
+ static int pci_epf_mhi_iatu_write(struct mhi_ep_cntrl *mhi_cntrl,
+-				  void *from, u64 to, size_t size)
++				  struct mhi_ep_buf_info *buf_info)
+ {
+ 	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+-	size_t offset = get_align_offset(epf_mhi, to);
++	size_t offset = get_align_offset(epf_mhi, buf_info->host_addr);
+ 	void __iomem *tre_buf;
+ 	phys_addr_t tre_phys;
+ 	int ret;
+ 
+ 	mutex_lock(&epf_mhi->lock);
+ 
+-	ret = __pci_epf_mhi_alloc_map(mhi_cntrl, to, &tre_phys, &tre_buf,
+-				      offset, size);
++	ret = __pci_epf_mhi_alloc_map(mhi_cntrl, buf_info->host_addr, &tre_phys,
++				      &tre_buf, offset, buf_info->size);
+ 	if (ret) {
+ 		mutex_unlock(&epf_mhi->lock);
+ 		return ret;
+ 	}
+ 
+-	memcpy_toio(tre_buf, from, size);
++	memcpy_toio(tre_buf, buf_info->dev_addr, buf_info->size);
+ 
+-	__pci_epf_mhi_unmap_free(mhi_cntrl, to, tre_phys, tre_buf, offset,
+-				 size);
++	__pci_epf_mhi_unmap_free(mhi_cntrl, buf_info->host_addr, tre_phys,
++				 tre_buf, offset, buf_info->size);
+ 
+ 	mutex_unlock(&epf_mhi->lock);
+ 
+@@ -270,8 +270,8 @@ static void pci_epf_mhi_dma_callback(void *param)
+ 	complete(param);
+ }
+ 
+-static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+-				 void *to, size_t size)
++static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl,
++				 struct mhi_ep_buf_info *buf_info)
+ {
+ 	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+ 	struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+@@ -284,13 +284,13 @@ static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+ 	dma_addr_t dst_addr;
+ 	int ret;
+ 
+-	if (size < SZ_4K)
+-		return pci_epf_mhi_iatu_read(mhi_cntrl, from, to, size);
++	if (buf_info->size < SZ_4K)
++		return pci_epf_mhi_iatu_read(mhi_cntrl, buf_info);
+ 
+ 	mutex_lock(&epf_mhi->lock);
+ 
+ 	config.direction = DMA_DEV_TO_MEM;
+-	config.src_addr = from;
++	config.src_addr = buf_info->host_addr;
+ 
+ 	ret = dmaengine_slave_config(chan, &config);
+ 	if (ret) {
+@@ -298,14 +298,16 @@ static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+ 		goto err_unlock;
+ 	}
+ 
+-	dst_addr = dma_map_single(dma_dev, to, size, DMA_FROM_DEVICE);
++	dst_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
++				  DMA_FROM_DEVICE);
+ 	ret = dma_mapping_error(dma_dev, dst_addr);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to map remote memory\n");
+ 		goto err_unlock;
+ 	}
+ 
+-	desc = dmaengine_prep_slave_single(chan, dst_addr, size, DMA_DEV_TO_MEM,
++	desc = dmaengine_prep_slave_single(chan, dst_addr, buf_info->size,
++					   DMA_DEV_TO_MEM,
+ 					   DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
+ 	if (!desc) {
+ 		dev_err(dev, "Failed to prepare DMA\n");
+@@ -332,15 +334,15 @@ static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+ 	}
+ 
+ err_unmap:
+-	dma_unmap_single(dma_dev, dst_addr, size, DMA_FROM_DEVICE);
++	dma_unmap_single(dma_dev, dst_addr, buf_info->size, DMA_FROM_DEVICE);
+ err_unlock:
+ 	mutex_unlock(&epf_mhi->lock);
+ 
+ 	return ret;
+ }
+ 
+-static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
+-				  u64 to, size_t size)
++static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl,
++				  struct mhi_ep_buf_info *buf_info)
+ {
+ 	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+ 	struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+@@ -353,13 +355,13 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
+ 	dma_addr_t src_addr;
+ 	int ret;
+ 
+-	if (size < SZ_4K)
+-		return pci_epf_mhi_iatu_write(mhi_cntrl, from, to, size);
++	if (buf_info->size < SZ_4K)
++		return pci_epf_mhi_iatu_write(mhi_cntrl, buf_info);
+ 
+ 	mutex_lock(&epf_mhi->lock);
+ 
+ 	config.direction = DMA_MEM_TO_DEV;
+-	config.dst_addr = to;
++	config.dst_addr = buf_info->host_addr;
+ 
+ 	ret = dmaengine_slave_config(chan, &config);
+ 	if (ret) {
+@@ -367,14 +369,16 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
+ 		goto err_unlock;
+ 	}
+ 
+-	src_addr = dma_map_single(dma_dev, from, size, DMA_TO_DEVICE);
++	src_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
++				  DMA_TO_DEVICE);
+ 	ret = dma_mapping_error(dma_dev, src_addr);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to map remote memory\n");
+ 		goto err_unlock;
+ 	}
+ 
+-	desc = dmaengine_prep_slave_single(chan, src_addr, size, DMA_MEM_TO_DEV,
++	desc = dmaengine_prep_slave_single(chan, src_addr, buf_info->size,
++					   DMA_MEM_TO_DEV,
+ 					   DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
+ 	if (!desc) {
+ 		dev_err(dev, "Failed to prepare DMA\n");
+@@ -401,7 +405,7 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl, void *from,
+ 	}
+ 
+ err_unmap:
+-	dma_unmap_single(dma_dev, src_addr, size, DMA_FROM_DEVICE);
++	dma_unmap_single(dma_dev, src_addr, buf_info->size, DMA_TO_DEVICE);
+ err_unlock:
+ 	mutex_unlock(&epf_mhi->lock);
+ 
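
The refactor above replaces the (from, to, size) argument triples with a single descriptor, and also corrects the write path's dma_unmap_single() to use DMA_TO_DEVICE, matching how the buffer was mapped. A compile-only sketch of the descriptor shape; the field names mirror how struct mhi_ep_buf_info is used in these hunks, but the layout here is illustrative, not the canonical kernel definition:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct buf_info_sketch {
	void *dev_addr;		/* local buffer on the endpoint side */
	uint64_t host_addr;	/* PCI address of the host-side buffer */
	size_t size;		/* transfer length in bytes */
};

static int mock_read(const struct buf_info_sketch *bi)
{
	printf("read %zu bytes from host %#llx\n", bi->size,
	       (unsigned long long)bi->host_addr);
	return 0;
}

int main(void)
{
	char buf[256];
	struct buf_info_sketch bi = {
		.dev_addr = buf, .host_addr = 0x8000000, .size = sizeof(buf),
	};

	return mock_read(&bi);
}
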
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 847b0dc41293d..c584165b13bab 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -811,7 +811,7 @@ static umode_t arm_cmn_event_attr_is_visible(struct kobject *kobj,
+ #define CMN_EVENT_HNF_OCC(_model, _name, _event)			\
+ 	CMN_EVENT_HN_OCC(_model, hnf_##_name, CMN_TYPE_HNF, _event)
+ #define CMN_EVENT_HNF_CLS(_model, _name, _event)			\
+-	CMN_EVENT_HN_CLS(_model, hnf_##_name, CMN_TYPE_HNS, _event)
++	CMN_EVENT_HN_CLS(_model, hnf_##_name, CMN_TYPE_HNF, _event)
+ #define CMN_EVENT_HNF_SNT(_model, _name, _event)			\
+ 	CMN_EVENT_HN_SNT(_model, hnf_##_name, CMN_TYPE_HNF, _event)
+ 
+diff --git a/drivers/perf/hisilicon/hisi_uncore_uc_pmu.c b/drivers/perf/hisilicon/hisi_uncore_uc_pmu.c
+index 63da05e5831c1..636fb79647c8c 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_uc_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_uc_pmu.c
+@@ -383,8 +383,8 @@ static struct attribute *hisi_uc_pmu_events_attr[] = {
+ 	HISI_PMU_EVENT_ATTR(cpu_rd,		0x10),
+ 	HISI_PMU_EVENT_ATTR(cpu_rd64,		0x17),
+ 	HISI_PMU_EVENT_ATTR(cpu_rs64,		0x19),
+-	HISI_PMU_EVENT_ATTR(cpu_mru,		0x1a),
+-	HISI_PMU_EVENT_ATTR(cycles,		0x9c),
++	HISI_PMU_EVENT_ATTR(cpu_mru,		0x1c),
++	HISI_PMU_EVENT_ATTR(cycles,		0x95),
+ 	HISI_PMU_EVENT_ATTR(spipe_hit,		0xb3),
+ 	HISI_PMU_EVENT_ATTR(hpipe_hit,		0xdb),
+ 	HISI_PMU_EVENT_ATTR(cring_rxdat_cnt,	0xfa),
+diff --git a/drivers/platform/x86/intel/vsec.c b/drivers/platform/x86/intel/vsec.c
+index c1f9e4471b28f..343ab6a82c017 100644
+--- a/drivers/platform/x86/intel/vsec.c
++++ b/drivers/platform/x86/intel/vsec.c
+@@ -120,6 +120,8 @@ static void intel_vsec_dev_release(struct device *dev)
+ {
+ 	struct intel_vsec_device *intel_vsec_dev = dev_to_ivdev(dev);
+ 
++	xa_erase(&auxdev_array, intel_vsec_dev->id);
++
+ 	mutex_lock(&vsec_ida_lock);
+ 	ida_free(intel_vsec_dev->ida, intel_vsec_dev->auxdev.id);
+ 	mutex_unlock(&vsec_ida_lock);
+@@ -135,19 +137,28 @@ int intel_vsec_add_aux(struct pci_dev *pdev, struct device *parent,
+ 	struct auxiliary_device *auxdev = &intel_vsec_dev->auxdev;
+ 	int ret, id;
+ 
+-	mutex_lock(&vsec_ida_lock);
+-	ret = ida_alloc(intel_vsec_dev->ida, GFP_KERNEL);
+-	mutex_unlock(&vsec_ida_lock);
++	ret = xa_alloc(&auxdev_array, &intel_vsec_dev->id, intel_vsec_dev,
++		       PMT_XA_LIMIT, GFP_KERNEL);
+ 	if (ret < 0) {
+ 		kfree(intel_vsec_dev->resource);
+ 		kfree(intel_vsec_dev);
+ 		return ret;
+ 	}
+ 
++	mutex_lock(&vsec_ida_lock);
++	id = ida_alloc(intel_vsec_dev->ida, GFP_KERNEL);
++	mutex_unlock(&vsec_ida_lock);
++	if (id < 0) {
++		xa_erase(&auxdev_array, intel_vsec_dev->id);
++		kfree(intel_vsec_dev->resource);
++		kfree(intel_vsec_dev);
++		return id;
++	}
++
+ 	if (!parent)
+ 		parent = &pdev->dev;
+ 
+-	auxdev->id = ret;
++	auxdev->id = id;
+ 	auxdev->name = name;
+ 	auxdev->dev.parent = parent;
+ 	auxdev->dev.release = intel_vsec_dev_release;
+@@ -169,12 +180,6 @@ int intel_vsec_add_aux(struct pci_dev *pdev, struct device *parent,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	/* Add auxdev to list */
+-	ret = xa_alloc(&auxdev_array, &id, intel_vsec_dev, PMT_XA_LIMIT,
+-		       GFP_KERNEL);
+-	if (ret)
+-		return ret;
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL_NS_GPL(intel_vsec_add_aux, INTEL_VSEC);
+diff --git a/drivers/platform/x86/intel/vsec.h b/drivers/platform/x86/intel/vsec.h
+index 0fd042c171ba0..0a6201b4a0e90 100644
+--- a/drivers/platform/x86/intel/vsec.h
++++ b/drivers/platform/x86/intel/vsec.h
+@@ -45,6 +45,7 @@ struct intel_vsec_device {
+ 	struct ida *ida;
+ 	struct intel_vsec_platform_info *info;
+ 	int num_resources;
++	int id; /* xa */
+ 	void *priv_data;
+ 	size_t priv_data_size;
+ };
+diff --git a/drivers/power/supply/bq256xx_charger.c b/drivers/power/supply/bq256xx_charger.c
+index 789a31bd70c39..1a935bc885108 100644
+--- a/drivers/power/supply/bq256xx_charger.c
++++ b/drivers/power/supply/bq256xx_charger.c
+@@ -1574,13 +1574,16 @@ static int bq256xx_hw_init(struct bq256xx_device *bq)
+ 			wd_reg_val = i;
+ 			break;
+ 		}
+-		if (bq->watchdog_timer > bq256xx_watchdog_time[i] &&
++		if (i + 1 < BQ256XX_NUM_WD_VAL &&
++		    bq->watchdog_timer > bq256xx_watchdog_time[i] &&
+ 		    bq->watchdog_timer < bq256xx_watchdog_time[i + 1])
+ 			wd_reg_val = i;
+ 	}
+ 	ret = regmap_update_bits(bq->regmap, BQ256XX_CHARGER_CONTROL_1,
+ 				 BQ256XX_WATCHDOG_MASK, wd_reg_val <<
+ 						BQ256XX_WDT_BIT_SHIFT);
++	if (ret)
++		return ret;
+ 
+ 	ret = power_supply_get_battery_info(bq->charger, &bat_info);
+ 	if (ret == -ENOMEM)
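
Two fixes above: the interpolation loop now guards the time[i + 1] access so the last table entry is never compared against one past the end, and the regmap_update_bits() return value is finally checked. A standalone model of the bounded lookup, with made-up timer values:

#include <stdio.h>

#define NUM_WD_VAL 4
static const int wd_time[NUM_WD_VAL] = { 0, 40, 80, 160 };

static int pick_wd_reg(int want)
{
	int reg = 0, i;

	for (i = 0; i < NUM_WD_VAL; i++) {
		if (want == wd_time[i])
			return i;
		if (i + 1 < NUM_WD_VAL &&	/* the added bound check */
		    want > wd_time[i] && want < wd_time[i + 1])
			reg = i;
	}
	return reg;
}

int main(void)
{
	printf("%d\n", pick_wd_reg(100));	/* between 80 and 160 -> 2 */
	return 0;
}
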
+diff --git a/drivers/power/supply/cw2015_battery.c b/drivers/power/supply/cw2015_battery.c
+index bb29e9ebd24a8..99f3ccdc30a6a 100644
+--- a/drivers/power/supply/cw2015_battery.c
++++ b/drivers/power/supply/cw2015_battery.c
+@@ -491,7 +491,7 @@ static int cw_battery_get_property(struct power_supply *psy,
+ 
+ 	case POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW:
+ 		if (cw_battery_valid_time_to_empty(cw_bat))
+-			val->intval = cw_bat->time_to_empty;
++			val->intval = cw_bat->time_to_empty * 60;
+ 		else
+ 			val->intval = 0;
+ 		break;
+diff --git a/drivers/power/supply/qcom_pmi8998_charger.c b/drivers/power/supply/qcom_pmi8998_charger.c
+index 8acf63ee6897f..9bb7774060138 100644
+--- a/drivers/power/supply/qcom_pmi8998_charger.c
++++ b/drivers/power/supply/qcom_pmi8998_charger.c
+@@ -972,10 +972,14 @@ static int smb2_probe(struct platform_device *pdev)
+ 	supply_config.of_node = pdev->dev.of_node;
+ 
+ 	desc = devm_kzalloc(chip->dev, sizeof(smb2_psy_desc), GFP_KERNEL);
++	if (!desc)
++		return -ENOMEM;
+ 	memcpy(desc, &smb2_psy_desc, sizeof(smb2_psy_desc));
+ 	desc->name =
+ 		devm_kasprintf(chip->dev, GFP_KERNEL, "%s-charger",
+ 			       (const char *)device_get_match_data(chip->dev));
++	if (!desc->name)
++		return -ENOMEM;
+ 
+ 	chip->chg_psy =
+ 		devm_power_supply_register(chip->dev, desc, &supply_config);
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 29078486534d4..59c44a5584e64 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -176,7 +176,7 @@ of_pwm_single_xlate(struct pwm_chip *chip, const struct of_phandle_args *args)
+ 	pwm->args.period = args->args[0];
+ 	pwm->args.polarity = PWM_POLARITY_NORMAL;
+ 
+-	if (args->args_count == 2 && args->args[2] & PWM_POLARITY_INVERTED)
++	if (args->args_count == 2 && args->args[1] & PWM_POLARITY_INVERTED)
+ 		pwm->args.polarity = PWM_POLARITY_INVERSED;
+ 
+ 	return pwm;
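
With #pwm-cells = <2>, args[0] carries the period and args[1] the flags; the old code tested args[2], which this binding never fills in, so inverted polarity was silently ignored. A userspace model of the corrected check, where PWM_POLARITY_INVERTED mirrors the DT binding's bit 0:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PWM_POLARITY_INVERTED (1 << 0)	/* as in dt-bindings/pwm/pwm.h */

struct xlate_args {
	int args_count;
	uint32_t args[3];
};

static bool polarity_inversed(const struct xlate_args *a)
{
	return a->args_count == 2 && (a->args[1] & PWM_POLARITY_INVERTED);
}

int main(void)
{
	struct xlate_args a = { .args_count = 2, .args = { 5000000, 1 } };

	printf("inversed: %d\n", polarity_inversed(&a));
	return 0;
}
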
+diff --git a/drivers/pwm/pwm-jz4740.c b/drivers/pwm/pwm-jz4740.c
+index e9375de60ad62..d0a0406b24f41 100644
+--- a/drivers/pwm/pwm-jz4740.c
++++ b/drivers/pwm/pwm-jz4740.c
+@@ -61,9 +61,10 @@ static int jz4740_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	snprintf(name, sizeof(name), "timer%u", pwm->hwpwm);
+ 
+ 	clk = clk_get(chip->dev, name);
+-	if (IS_ERR(clk))
+-		return dev_err_probe(chip->dev, PTR_ERR(clk),
+-				     "Failed to get clock\n");
++	if (IS_ERR(clk)) {
++		dev_err(chip->dev, "error %pe: Failed to get clock\n", clk);
++		return PTR_ERR(clk);
++	}
+ 
+ 	err = clk_prepare_enable(clk);
+ 	if (err < 0) {
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index 3303a754ea020..58ece15ef6c37 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -578,32 +578,23 @@ static void stm32_pwm_detect_complementary(struct stm32_pwm *priv)
+ 	priv->have_complementary_output = (ccer != 0);
+ }
+ 
+-static int stm32_pwm_detect_channels(struct stm32_pwm *priv)
++static unsigned int stm32_pwm_detect_channels(struct stm32_pwm *priv,
++					      unsigned int *num_enabled)
+ {
+-	u32 ccer;
+-	int npwm = 0;
++	u32 ccer, ccer_backup;
+ 
+ 	/*
+ 	 * If channels enable bits don't exist writing 1 will have no
+ 	 * effect so we can detect and count them.
+ 	 */
++	regmap_read(priv->regmap, TIM_CCER, &ccer_backup);
+ 	regmap_set_bits(priv->regmap, TIM_CCER, TIM_CCER_CCXE);
+ 	regmap_read(priv->regmap, TIM_CCER, &ccer);
+-	regmap_clear_bits(priv->regmap, TIM_CCER, TIM_CCER_CCXE);
+-
+-	if (ccer & TIM_CCER_CC1E)
+-		npwm++;
+-
+-	if (ccer & TIM_CCER_CC2E)
+-		npwm++;
++	regmap_write(priv->regmap, TIM_CCER, ccer_backup);
+ 
+-	if (ccer & TIM_CCER_CC3E)
+-		npwm++;
++	*num_enabled = hweight32(ccer_backup & TIM_CCER_CCXE);
+ 
+-	if (ccer & TIM_CCER_CC4E)
+-		npwm++;
+-
+-	return npwm;
++	return hweight32(ccer & TIM_CCER_CCXE);
+ }
+ 
+ static int stm32_pwm_probe(struct platform_device *pdev)
+@@ -612,6 +603,8 @@ static int stm32_pwm_probe(struct platform_device *pdev)
+ 	struct device_node *np = dev->of_node;
+ 	struct stm32_timers *ddata = dev_get_drvdata(pdev->dev.parent);
+ 	struct stm32_pwm *priv;
++	unsigned int num_enabled;
++	unsigned int i;
+ 	int ret;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -634,7 +627,11 @@ static int stm32_pwm_probe(struct platform_device *pdev)
+ 
+ 	priv->chip.dev = dev;
+ 	priv->chip.ops = &stm32pwm_ops;
+-	priv->chip.npwm = stm32_pwm_detect_channels(priv);
++	priv->chip.npwm = stm32_pwm_detect_channels(priv, &num_enabled);
++
++	/* Initialize clock refcount to number of enabled PWM channels. */
++	for (i = 0; i < num_enabled; i++)
++		clk_enable(priv->clk);
+ 
+ 	ret = devm_pwmchip_add(dev, &priv->chip);
+ 	if (ret < 0)
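
The detection rework now saves TIM_CCER, sets every CCxE bit to count which channels exist (writes to missing bits do not stick), and restores the saved value instead of clearing it, so channels the bootloader enabled stay enabled; the count of already-enabled channels then primes the clock refcount in probe. A userspace model of that sequence, with a plain variable standing in for the register, __builtin_popcount() for hweight32(), and a pretend 3-channel timer:

#include <stdint.h>
#include <stdio.h>

#define CCXE_MASK 0x1111u	/* CC1E|CC2E|CC3E|CC4E in TIM_CCER */

int main(void)
{
	const uint32_t implemented = 0x0111;	/* 3 channels exist */
	uint32_t ccer = 0x0001;			/* boot fw left ch1 on */
	uint32_t backup = ccer;

	/* Writes to enable bits of missing channels do not stick. */
	ccer = (ccer | CCXE_MASK) & implemented;
	printf("channels: %d, already enabled: %d\n",
	       __builtin_popcount(ccer & CCXE_MASK),
	       __builtin_popcount(backup & CCXE_MASK));
	ccer = backup;	/* restore instead of clearing everything */
	(void)ccer;
	return 0;
}
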
+diff --git a/drivers/scsi/bfa/bfad_bsg.c b/drivers/scsi/bfa/bfad_bsg.c
+index 520f9152f3bf2..d4ceca2d435ee 100644
+--- a/drivers/scsi/bfa/bfad_bsg.c
++++ b/drivers/scsi/bfa/bfad_bsg.c
+@@ -2550,7 +2550,7 @@ out:
+ static void bfad_reset_sdev_bflags(struct bfad_im_port_s *im_port,
+ 				   int lunmask_cfg)
+ {
+-	const u32 scan_flags = BLIST_NOREPORTLUN | BLIST_SPARSELUN;
++	const blist_flags_t scan_flags = BLIST_NOREPORTLUN | BLIST_SPARSELUN;
+ 	struct bfad_itnim_s *itnim;
+ 	struct scsi_device *sdev;
+ 	unsigned long flags;
+diff --git a/drivers/scsi/fnic/fnic_debugfs.c b/drivers/scsi/fnic/fnic_debugfs.c
+index c4d9ed0d7d753..2619a2d4f5f14 100644
+--- a/drivers/scsi/fnic/fnic_debugfs.c
++++ b/drivers/scsi/fnic/fnic_debugfs.c
+@@ -52,9 +52,10 @@ int fnic_debugfs_init(void)
+ 		fc_trc_flag->fnic_trace = 2;
+ 		fc_trc_flag->fc_trace = 3;
+ 		fc_trc_flag->fc_clear = 4;
++		return 0;
+ 	}
+ 
+-	return 0;
++	return -ENOMEM;
+ }
+ 
+ /*
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index d50058b414098..bbb7b2d9ffcfb 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1565,12 +1565,12 @@ EXPORT_SYMBOL_GPL(hisi_sas_controller_reset_done);
+ static int hisi_sas_controller_prereset(struct hisi_hba *hisi_hba)
+ {
+ 	if (!hisi_hba->hw->soft_reset)
+-		return -1;
++		return -ENOENT;
+ 
+ 	down(&hisi_hba->sem);
+ 	if (test_and_set_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags)) {
+ 		up(&hisi_hba->sem);
+-		return -1;
++		return -EPERM;
+ 	}
+ 
+ 	if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct)
+@@ -1641,7 +1641,10 @@ static int hisi_sas_abort_task(struct sas_task *task)
+ 	task->task_state_flags |= SAS_TASK_STATE_ABORTED;
+ 	spin_unlock_irqrestore(&task->task_state_lock, flags);
+ 
+-	if (slot && task->task_proto & SAS_PROTOCOL_SSP) {
++	if (!slot)
++		goto out;
++
++	if (task->task_proto & SAS_PROTOCOL_SSP) {
+ 		u16 tag = slot->idx;
+ 		int rc2;
+ 
+@@ -1688,7 +1691,7 @@ static int hisi_sas_abort_task(struct sas_task *task)
+ 				rc = hisi_sas_softreset_ata_disk(device);
+ 			}
+ 		}
+-	} else if (slot && task->task_proto & SAS_PROTOCOL_SMP) {
++	} else if (task->task_proto & SAS_PROTOCOL_SMP) {
+ 		/* SMP */
+ 		u32 tag = slot->idx;
+ 		struct hisi_sas_cq *cq = &hisi_hba->cq[slot->dlvry_queue];
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index d8437a98037b9..6d8577423d32b 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -3476,7 +3476,7 @@ static void debugfs_snapshot_global_reg_v3_hw(struct hisi_hba *hisi_hba)
+ 	u32 *databuf = hisi_hba->debugfs_regs[dump_index][DEBUGFS_GLOBAL].data;
+ 	int i;
+ 
+-	for (i = 0; i < debugfs_axi_reg.count; i++, databuf++)
++	for (i = 0; i < debugfs_global_reg.count; i++, databuf++)
+ 		*databuf = hisi_sas_read32(hisi_hba, 4 * i);
+ }
+ 
+@@ -4968,6 +4968,7 @@ static void hisi_sas_reset_done_v3_hw(struct pci_dev *pdev)
+ {
+ 	struct sas_ha_struct *sha = pci_get_drvdata(pdev);
+ 	struct hisi_hba *hisi_hba = sha->lldd_ha;
++	struct Scsi_Host *shost = hisi_hba->shost;
+ 	struct device *dev = hisi_hba->dev;
+ 	int rc;
+ 
+@@ -4976,6 +4977,10 @@ static void hisi_sas_reset_done_v3_hw(struct pci_dev *pdev)
+ 	rc = hw_init_v3_hw(hisi_hba);
+ 	if (rc) {
+ 		dev_err(dev, "FLR: hw init failed rc=%d\n", rc);
++		clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
++		scsi_unblock_requests(shost);
++		clear_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags);
++		up(&hisi_hba->sem);
+ 		return;
+ 	}
+ 
+@@ -5018,7 +5023,7 @@ static int _suspend_v3_hw(struct device *device)
+ 	}
+ 
+ 	if (test_and_set_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags))
+-		return -1;
++		return -EPERM;
+ 
+ 	dev_warn(dev, "entering suspend state\n");
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index f9627eddab085..0829fe6ddff88 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -2128,8 +2128,8 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 						NLP_EVT_DEVICE_RM);
+ 	} else {
+ 		/* Good status, call state machine */
+-		prsp = list_entry(cmdiocb->cmd_dmabuf->list.next,
+-				  struct lpfc_dmabuf, list);
++		prsp = list_get_first(&cmdiocb->cmd_dmabuf->list,
++				      struct lpfc_dmabuf, list);
+ 		if (!prsp)
+ 			goto out;
+ 		if (!lpfc_is_els_acc_rsp(prsp))
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 08645a99ad6b3..9dacbb8570c93 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -223,6 +223,22 @@ static long mpi3mr_bsg_pel_enable(struct mpi3mr_ioc *mrioc,
+ 		return rval;
+ 	}
+ 
++	if (mrioc->unrecoverable) {
++		dprint_bsg_err(mrioc, "%s: unrecoverable controller\n",
++			       __func__);
++		return -EFAULT;
++	}
++
++	if (mrioc->reset_in_progress) {
++		dprint_bsg_err(mrioc, "%s: reset in progress\n", __func__);
++		return -EAGAIN;
++	}
++
++	if (mrioc->stop_bsgs) {
++		dprint_bsg_err(mrioc, "%s: bsgs are blocked\n", __func__);
++		return -EAGAIN;
++	}
++
+ 	sg_copy_to_buffer(job->request_payload.sg_list,
+ 			  job->request_payload.sg_cnt,
+ 			  &pel_enable, sizeof(pel_enable));
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 040031eb0c12d..e2f2c205df71a 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -1047,8 +1047,9 @@ void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc)
+ 	list_for_each_entry_safe(tgtdev, tgtdev_next, &mrioc->tgtdev_list,
+ 	    list) {
+ 		if ((tgtdev->dev_handle == MPI3MR_INVALID_DEV_HANDLE) &&
+-		    tgtdev->host_exposed && tgtdev->starget &&
+-		    tgtdev->starget->hostdata) {
++		     tgtdev->is_hidden &&
++		     tgtdev->host_exposed && tgtdev->starget &&
++		     tgtdev->starget->hostdata) {
+ 			tgt_priv = tgtdev->starget->hostdata;
+ 			tgt_priv->dev_removed = 1;
+ 			atomic_set(&tgt_priv->block_io, 0);
+@@ -1064,14 +1065,24 @@ void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc)
+ 				mpi3mr_remove_tgtdev_from_host(mrioc, tgtdev);
+ 			mpi3mr_tgtdev_del_from_list(mrioc, tgtdev, true);
+ 			mpi3mr_tgtdev_put(tgtdev);
++		} else if (tgtdev->is_hidden && tgtdev->host_exposed) {
++			dprint_reset(mrioc, "hiding target device with perst_id(%d)\n",
++				     tgtdev->perst_id);
++			mpi3mr_remove_tgtdev_from_host(mrioc, tgtdev);
+ 		}
+ 	}
+ 
+ 	tgtdev = NULL;
+ 	list_for_each_entry(tgtdev, &mrioc->tgtdev_list, list) {
+ 		if ((tgtdev->dev_handle != MPI3MR_INVALID_DEV_HANDLE) &&
+-		    !tgtdev->is_hidden && !tgtdev->host_exposed)
+-			mpi3mr_report_tgtdev_to_host(mrioc, tgtdev->perst_id);
++		    !tgtdev->is_hidden) {
++			if (!tgtdev->host_exposed)
++				mpi3mr_report_tgtdev_to_host(mrioc,
++							     tgtdev->perst_id);
++			else if (tgtdev->starget)
++				starget_for_each_device(tgtdev->starget,
++							(void *)tgtdev, mpi3mr_update_sdev);
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 674abd0d67007..57d47dcf11b92 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -47,7 +47,7 @@
+ #define LLCC_TRP_STATUSn(n)           (4 + n * SZ_4K)
+ #define LLCC_TRP_ATTR0_CFGn(n)        (0x21000 + SZ_8 * n)
+ #define LLCC_TRP_ATTR1_CFGn(n)        (0x21004 + SZ_8 * n)
+-#define LLCC_TRP_ATTR2_CFGn(n)        (0x21100 + SZ_8 * n)
++#define LLCC_TRP_ATTR2_CFGn(n)        (0x21100 + SZ_4 * n)
+ 
+ #define LLCC_TRP_SCID_DIS_CAP_ALLOC   0x21f00
+ #define LLCC_TRP_PCB_ACT              0x21f04
+@@ -941,15 +941,15 @@ static int _qcom_llcc_cfg_program(const struct llcc_slice_config *config,
+ 		u32 disable_cap_alloc, retain_pc;
+ 
+ 		disable_cap_alloc = config->dis_cap_alloc << config->slice_id;
+-		ret = regmap_write(drv_data->bcast_regmap,
+-				LLCC_TRP_SCID_DIS_CAP_ALLOC, disable_cap_alloc);
++		ret = regmap_update_bits(drv_data->bcast_regmap, LLCC_TRP_SCID_DIS_CAP_ALLOC,
++					 BIT(config->slice_id), disable_cap_alloc);
+ 		if (ret)
+ 			return ret;
+ 
+ 		if (drv_data->version < LLCC_VERSION_4_1_0_0) {
+ 			retain_pc = config->retain_on_pc << config->slice_id;
+-			ret = regmap_write(drv_data->bcast_regmap,
+-					LLCC_TRP_PCB_ACT, retain_pc);
++			ret = regmap_update_bits(drv_data->bcast_regmap, LLCC_TRP_PCB_ACT,
++						 BIT(config->slice_id), retain_pc);
+ 			if (ret)
+ 				return ret;
+ 		}
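
Aside: the two hunks above replace regmap_write() with regmap_update_bits() so that programming one LLCC slice no longer clobbers the per-slice bits already configured for the other slices in the shared broadcast registers. A minimal sketch of the difference, where slice_set_bit() and the register layout (bit N belongs to slice N) are hypothetical:

#include <linux/bits.h>
#include <linux/regmap.h>
#include <linux/types.h>

/*
 * regmap_write() would store the whole 32-bit value and wipe every other
 * slice's bit; regmap_update_bits() does a read-modify-write limited to
 * the mask, leaving the other slices untouched.
 */
static int slice_set_bit(struct regmap *map, unsigned int reg,
			 unsigned int slice_id, bool enable)
{
	/* Only BIT(slice_id) can change. */
	return regmap_update_bits(map, reg, BIT(slice_id),
				  enable ? BIT(slice_id) : 0);
}
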
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 70c9dd6b6a31e..ddae0fde798ee 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -1177,9 +1177,10 @@ config SPI_ZYNQ_QSPI
+ 
+ config SPI_ZYNQMP_GQSPI
+ 	tristate "Xilinx ZynqMP GQSPI controller"
+-	depends on (SPI_MASTER && HAS_DMA) || COMPILE_TEST
++	depends on (SPI_MEM && HAS_DMA) || COMPILE_TEST
+ 	help
+ 	  Enables Xilinx GQSPI controller driver for Zynq UltraScale+ MPSoC.
++	  This controller only supports the SPI memory interface.
+ 
+ config SPI_AMD
+ 	tristate "AMD SPI controller"
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 3d7bf62da11cb..f94e0d370d466 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1840,7 +1840,7 @@ static int cqspi_probe(struct platform_device *pdev)
+ 		if (ddata->jh7110_clk_init) {
+ 			ret = cqspi_jh7110_clk_init(pdev, cqspi);
+ 			if (ret)
+-				goto probe_clk_failed;
++				goto probe_reset_failed;
+ 		}
+ 
+ 		if (of_device_is_compatible(pdev->dev.of_node,
+@@ -1901,6 +1901,8 @@ static int cqspi_probe(struct platform_device *pdev)
+ probe_setup_failed:
+ 	cqspi_controller_enable(cqspi, 0);
+ probe_reset_failed:
++	if (cqspi->is_jh7110)
++		cqspi_jh7110_disable_clk(pdev, cqspi);
+ 	clk_disable_unprepare(cqspi->clk);
+ probe_clk_failed:
+ 	return ret;
+diff --git a/drivers/spi/spi-coldfire-qspi.c b/drivers/spi/spi-coldfire-qspi.c
+index f0b630fe16c3c..b341b6908df06 100644
+--- a/drivers/spi/spi-coldfire-qspi.c
++++ b/drivers/spi/spi-coldfire-qspi.c
+@@ -441,7 +441,6 @@ static void mcfqspi_remove(struct platform_device *pdev)
+ 	mcfqspi_wr_qmr(mcfqspi, MCFQSPI_QMR_MSTR);
+ 
+ 	mcfqspi_cs_teardown(mcfqspi);
+-	clk_disable_unprepare(mcfqspi->clk);
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index fb452bc783727..cfc3b1ddbd229 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -29,12 +29,15 @@
+ 
+ #include <asm/unaligned.h>
+ 
++#define SH_MSIOF_FLAG_FIXED_DTDL_200	BIT(0)
++
+ struct sh_msiof_chipdata {
+ 	u32 bits_per_word_mask;
+ 	u16 tx_fifo_size;
+ 	u16 rx_fifo_size;
+ 	u16 ctlr_flags;
+ 	u16 min_div_pow;
++	u32 flags;
+ };
+ 
+ struct sh_msiof_spi_priv {
+@@ -1072,6 +1075,16 @@ static const struct sh_msiof_chipdata rcar_gen3_data = {
+ 	.min_div_pow = 1,
+ };
+ 
++static const struct sh_msiof_chipdata rcar_r8a7795_data = {
++	.bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16) |
++			      SPI_BPW_MASK(24) | SPI_BPW_MASK(32),
++	.tx_fifo_size = 64,
++	.rx_fifo_size = 64,
++	.ctlr_flags = SPI_CONTROLLER_MUST_TX,
++	.min_div_pow = 1,
++	.flags = SH_MSIOF_FLAG_FIXED_DTDL_200,
++};
++
+ static const struct of_device_id sh_msiof_match[] __maybe_unused = {
+ 	{ .compatible = "renesas,sh-mobile-msiof", .data = &sh_data },
+ 	{ .compatible = "renesas,msiof-r8a7743",   .data = &rcar_gen2_data },
+@@ -1082,6 +1095,7 @@ static const struct of_device_id sh_msiof_match[] __maybe_unused = {
+ 	{ .compatible = "renesas,msiof-r8a7793",   .data = &rcar_gen2_data },
+ 	{ .compatible = "renesas,msiof-r8a7794",   .data = &rcar_gen2_data },
+ 	{ .compatible = "renesas,rcar-gen2-msiof", .data = &rcar_gen2_data },
++	{ .compatible = "renesas,msiof-r8a7795",   .data = &rcar_r8a7795_data },
+ 	{ .compatible = "renesas,msiof-r8a7796",   .data = &rcar_gen3_data },
+ 	{ .compatible = "renesas,rcar-gen3-msiof", .data = &rcar_gen3_data },
+ 	{ .compatible = "renesas,rcar-gen4-msiof", .data = &rcar_gen3_data },
+@@ -1279,6 +1293,9 @@ static int sh_msiof_spi_probe(struct platform_device *pdev)
+ 		return -ENXIO;
+ 	}
+ 
++	if (chipdata->flags & SH_MSIOF_FLAG_FIXED_DTDL_200)
++		info->dtdl = 200;
++
+ 	if (info->mode == MSIOF_SPI_TARGET)
+ 		ctlr = spi_alloc_target(&pdev->dev,
+ 				        sizeof(struct sh_msiof_spi_priv));
+diff --git a/drivers/spmi/spmi-mtk-pmif.c b/drivers/spmi/spmi-mtk-pmif.c
+index b3c991e1ea40d..54c35f5535cbe 100644
+--- a/drivers/spmi/spmi-mtk-pmif.c
++++ b/drivers/spmi/spmi-mtk-pmif.c
+@@ -50,6 +50,7 @@ struct pmif {
+ 	struct clk_bulk_data clks[PMIF_MAX_CLKS];
+ 	size_t nclks;
+ 	const struct pmif_data *data;
++	raw_spinlock_t lock;
+ };
+ 
+ static const char * const pmif_clock_names[] = {
+@@ -314,6 +315,7 @@ static int pmif_spmi_read_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
+ 	struct ch_reg *inf_reg;
+ 	int ret;
+ 	u32 data, cmd;
++	unsigned long flags;
+ 
+ 	/* Check for argument validation. */
+ 	if (sid & ~0xf) {
+@@ -334,6 +336,7 @@ static int pmif_spmi_read_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
+ 	else
+ 		return -EINVAL;
+ 
++	raw_spin_lock_irqsave(&arb->lock, flags);
+ 	/* Wait for Software Interface FSM state to be IDLE. */
+ 	inf_reg = &arb->chan;
+ 	ret = readl_poll_timeout_atomic(arb->base + arb->data->regs[inf_reg->ch_sta],
+@@ -343,6 +346,7 @@ static int pmif_spmi_read_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
+ 		/* set channel ready if the data has transferred */
+ 		if (pmif_is_fsm_vldclr(arb))
+ 			pmif_writel(arb, 1, inf_reg->ch_rdy);
++		raw_spin_unlock_irqrestore(&arb->lock, flags);
+ 		dev_err(&ctrl->dev, "failed to wait for SWINF_IDLE\n");
+ 		return ret;
+ 	}
+@@ -350,6 +354,7 @@ static int pmif_spmi_read_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
+ 	/* Send the command. */
+ 	cmd = (opc << 30) | (sid << 24) | ((len - 1) << 16) | addr;
+ 	pmif_writel(arb, cmd, inf_reg->ch_send);
++	raw_spin_unlock_irqrestore(&arb->lock, flags);
+ 
+ 	/*
+ 	 * Wait for Software Interface FSM state to be WFVLDCLR,
+@@ -376,7 +381,8 @@ static int pmif_spmi_write_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
+ 	struct pmif *arb = spmi_controller_get_drvdata(ctrl);
+ 	struct ch_reg *inf_reg;
+ 	int ret;
+-	u32 data, cmd;
++	u32 data, wdata, cmd;
++	unsigned long flags;
+ 
+ 	if (len > 4) {
+ 		dev_err(&ctrl->dev, "pmif supports 1..4 bytes per trans, but:%zu requested", len);
+@@ -394,6 +400,10 @@ static int pmif_spmi_write_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
+ 	else
+ 		return -EINVAL;
+ 
++	/* Set the write data. */
++	memcpy(&wdata, buf, len);
++
++	raw_spin_lock_irqsave(&arb->lock, flags);
+ 	/* Wait for Software Interface FSM state to be IDLE. */
+ 	inf_reg = &arb->chan;
+ 	ret = readl_poll_timeout_atomic(arb->base + arb->data->regs[inf_reg->ch_sta],
+@@ -403,17 +413,17 @@ static int pmif_spmi_write_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
+ 		/* set channel ready if the data has transferred */
+ 		if (pmif_is_fsm_vldclr(arb))
+ 			pmif_writel(arb, 1, inf_reg->ch_rdy);
++		raw_spin_unlock_irqrestore(&arb->lock, flags);
+ 		dev_err(&ctrl->dev, "failed to wait for SWINF_IDLE\n");
+ 		return ret;
+ 	}
+ 
+-	/* Set the write data. */
+-	memcpy(&data, buf, len);
+-	pmif_writel(arb, data, inf_reg->wdata);
++	pmif_writel(arb, wdata, inf_reg->wdata);
+ 
+ 	/* Send the command. */
+ 	cmd = (opc << 30) | BIT(29) | (sid << 24) | ((len - 1) << 16) | addr;
+ 	pmif_writel(arb, cmd, inf_reg->ch_send);
++	raw_spin_unlock_irqrestore(&arb->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -488,6 +498,8 @@ static int mtk_spmi_probe(struct platform_device *pdev)
+ 	arb->chan.ch_send = PMIF_SWINF_0_ACC + chan_offset;
+ 	arb->chan.ch_rdy = PMIF_SWINF_0_VLD_CLR + chan_offset;
+ 
++	raw_spin_lock_init(&arb->lock);
++
+ 	platform_set_drvdata(pdev, ctrl);
+ 
+ 	err = spmi_controller_add(ctrl);
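
Aside: the spmi-mtk-pmif hunks above serialize the wait-for-IDLE, write-payload, send-command sequence on the single software-interface channel with a raw spinlock, and deliberately drop the lock before the longer wait for completion. A compressed sketch of that locking shape; struct swinf, swinf_issue() and SWINF_FSM_IDLE are made-up stand-ins for the driver's channel registers:

#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/spinlock.h>

struct swinf {
	raw_spinlock_t lock;
	void __iomem *sta;	/* FSM status register */
	void __iomem *wdata;	/* write payload register */
	void __iomem *send;	/* command trigger register */
};

#define SWINF_FSM_IDLE	0x00

static int swinf_issue(struct swinf *sw, u32 payload, u32 cmd)
{
	unsigned long flags;
	u32 sta;
	int ret;

	raw_spin_lock_irqsave(&sw->lock, flags);
	/* Waiting for IDLE and firing the command must be one atomic unit. */
	ret = readl_poll_timeout_atomic(sw->sta, sta, sta == SWINF_FSM_IDLE,
					1, 10000);
	if (ret) {
		raw_spin_unlock_irqrestore(&sw->lock, flags);
		return ret;
	}
	writel(payload, sw->wdata);	/* payload first ... */
	writel(cmd, sw->send);		/* ... then the trigger */
	raw_spin_unlock_irqrestore(&sw->lock, flags);
	/* Completion is polled outside the lock, as in the patch. */
	return 0;
}
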
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index 84a41792cb4b8..ac398b5a97360 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -461,6 +461,9 @@ static const struct v4l2_ioctl_ops rkvdec_ioctl_ops = {
+ 
+ 	.vidioc_streamon = v4l2_m2m_ioctl_streamon,
+ 	.vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
++
++	.vidioc_decoder_cmd = v4l2_m2m_ioctl_stateless_decoder_cmd,
++	.vidioc_try_decoder_cmd = v4l2_m2m_ioctl_stateless_try_decoder_cmd,
+ };
+ 
+ static int rkvdec_queue_setup(struct vb2_queue *vq, unsigned int *num_buffers,
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c
+index b3928bd8c9c60..21f9fa1a17132 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.c
+@@ -27,7 +27,7 @@ static void connected_init(void)
+  * be made immediately, otherwise it will be deferred until
+  * vchiq_call_connected_callbacks is called.
+  */
+-void vchiq_add_connected_callback(void (*callback)(void))
++void vchiq_add_connected_callback(struct vchiq_device *device, void (*callback)(void))
+ {
+ 	connected_init();
+ 
+@@ -39,7 +39,7 @@ void vchiq_add_connected_callback(void (*callback)(void))
+ 		callback();
+ 	} else {
+ 		if (g_num_deferred_callbacks >= MAX_CALLBACKS) {
+-			vchiq_log_error(NULL, VCHIQ_CORE,
++			vchiq_log_error(&device->dev, VCHIQ_CORE,
+ 					"There are already %d callbacks registered - please increase MAX_CALLBACKS",
+ 					g_num_deferred_callbacks);
+ 		} else {
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.h
+index 4caf5e30099dc..e4ed56446f8ad 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.h
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_connected.h
+@@ -1,10 +1,12 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /* Copyright (c) 2010-2012 Broadcom. All rights reserved. */
+ 
++#include "vchiq_bus.h"
++
+ #ifndef VCHIQ_CONNECTED_H
+ #define VCHIQ_CONNECTED_H
+ 
+-void vchiq_add_connected_callback(void (*callback)(void));
++void vchiq_add_connected_callback(struct vchiq_device *device, void (*callback)(void));
+ void vchiq_call_connected_callbacks(void);
+ 
+ #endif /* VCHIQ_CONNECTED_H */
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 39b857da2d429..8a9eb0101c2e1 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -245,7 +245,7 @@ find_service_by_handle(struct vchiq_instance *instance, unsigned int handle)
+ 		return service;
+ 	}
+ 	rcu_read_unlock();
+-	vchiq_log_debug(service->state->dev, VCHIQ_CORE,
++	vchiq_log_debug(instance->state->dev, VCHIQ_CORE,
+ 			"Invalid service handle 0x%x", handle);
+ 	return NULL;
+ }
+@@ -287,7 +287,7 @@ find_service_for_instance(struct vchiq_instance *instance, unsigned int handle)
+ 		return service;
+ 	}
+ 	rcu_read_unlock();
+-	vchiq_log_debug(service->state->dev, VCHIQ_CORE,
++	vchiq_log_debug(instance->state->dev, VCHIQ_CORE,
+ 			"Invalid service handle 0x%x", handle);
+ 	return NULL;
+ }
+@@ -310,7 +310,7 @@ find_closed_service_for_instance(struct vchiq_instance *instance, unsigned int h
+ 		return service;
+ 	}
+ 	rcu_read_unlock();
+-	vchiq_log_debug(service->state->dev, VCHIQ_CORE,
++	vchiq_log_debug(instance->state->dev, VCHIQ_CORE,
+ 			"Invalid service handle 0x%x", handle);
+ 	return service;
+ }
+diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
+index 4d447520bab87..4e4cf6c34a775 100644
+--- a/drivers/target/target_core_file.c
++++ b/drivers/target/target_core_file.c
+@@ -332,11 +332,13 @@ static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
+ 	}
+ 
+ 	iov_iter_bvec(&iter, is_write, bvec, sgl_nents, len);
+-	if (is_write)
++	if (is_write) {
++		file_start_write(fd);
+ 		ret = vfs_iter_write(fd, &iter, &pos, 0);
+-	else
++		file_end_write(fd);
++	} else {
+ 		ret = vfs_iter_read(fd, &iter, &pos, 0);
+-
++	}
+ 	if (is_write) {
+ 		if (ret < 0 || ret != data_length) {
+ 			pr_err("%s() write returned %d\n", __func__, ret);
+@@ -467,7 +469,9 @@ fd_execute_write_same(struct se_cmd *cmd)
+ 	}
+ 
+ 	iov_iter_bvec(&iter, ITER_SOURCE, bvec, nolb, len);
++	file_start_write(fd_dev->fd_file);
+ 	ret = vfs_iter_write(fd_dev->fd_file, &iter, &pos, 0);
++	file_end_write(fd_dev->fd_file);
+ 
+ 	kfree(bvec);
+ 	if (ret < 0 || ret != len) {
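
Aside: the two target_core_file hunks bracket vfs_iter_write() with file_start_write()/file_end_write(). Unlike vfs_write(), the iterator variant does not take superblock write (freeze) protection itself, so in-kernel callers must do it around the call. The pairing in isolation; freeze_safe_iter_write() is a hypothetical wrapper:

#include <linux/fs.h>
#include <linux/uio.h>

/* Hold sb write protection for the duration of a direct iter write. */
static ssize_t freeze_safe_iter_write(struct file *file,
				      struct iov_iter *iter, loff_t *pos)
{
	ssize_t ret;

	file_start_write(file);		/* blocks while the fs is frozen */
	ret = vfs_iter_write(file, iter, pos, 0);
	file_end_write(file);
	return ret;
}
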
+diff --git a/drivers/thermal/loongson2_thermal.c b/drivers/thermal/loongson2_thermal.c
+index 133098dc08547..99ca0c7bc41c7 100644
+--- a/drivers/thermal/loongson2_thermal.c
++++ b/drivers/thermal/loongson2_thermal.c
+@@ -127,7 +127,7 @@ static int loongson2_thermal_probe(struct platform_device *pdev)
+ 		if (!IS_ERR(tzd))
+ 			break;
+ 
+-		if (PTR_ERR(tzd) != ENODEV)
++		if (PTR_ERR(tzd) != -ENODEV)
+ 			continue;
+ 
+ 		return dev_err_probe(dev, PTR_ERR(tzd), "failed to register");
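
Aside: the loongson2_thermal one-liner fixes a classic errno sign bug: PTR_ERR() yields negative errno values, so a comparison against the positive ENODEV constant can never match, and the surrounding retry logic was dead code. The safe idiom; is_enodev() is hypothetical:

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/types.h>

/* PTR_ERR() returns -ENODEV etc.; compare with the negated constant. */
static bool is_enodev(const void *p)
{
	return IS_ERR(p) && PTR_ERR(p) == -ENODEV;
}
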
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 9c17d35ccbbd8..1bc7ba4594063 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -1369,7 +1369,6 @@ unregister:
+ 	device_del(&tz->device);
+ release_device:
+ 	put_device(&tz->device);
+-	tz = NULL;
+ remove_id:
+ 	ida_free(&thermal_tz_ida, id);
+ free_tzp:
+diff --git a/drivers/tty/serial/8250/8250_bcm2835aux.c b/drivers/tty/serial/8250/8250_bcm2835aux.c
+index 15a2387a5b258..4f4502fb5454c 100644
+--- a/drivers/tty/serial/8250/8250_bcm2835aux.c
++++ b/drivers/tty/serial/8250/8250_bcm2835aux.c
+@@ -119,6 +119,8 @@ static int bcm2835aux_serial_probe(struct platform_device *pdev)
+ 
+ 	/* get the clock - this also enables the HW */
+ 	data->clk = devm_clk_get_optional(&pdev->dev, NULL);
++	if (IS_ERR(data->clk))
++		return dev_err_probe(&pdev->dev, PTR_ERR(data->clk), "could not get clk\n");
+ 
+ 	/* get the interrupt */
+ 	ret = platform_get_irq(pdev, 0);
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 6085d356ad86d..23366f868ae3a 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -480,7 +480,7 @@ static int sealevel_rs485_config(struct uart_port *port, struct ktermios *termio
+ }
+ 
+ static const struct serial_rs485 generic_rs485_supported = {
+-	.flags = SER_RS485_ENABLED,
++	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND,
+ };
+ 
+ static const struct exar8250_platform exar8250_default_platform = {
+@@ -524,7 +524,8 @@ static int iot2040_rs485_config(struct uart_port *port, struct ktermios *termios
+ }
+ 
+ static const struct serial_rs485 iot2040_rs485_supported = {
+-	.flags = SER_RS485_ENABLED | SER_RS485_RX_DURING_TX | SER_RS485_TERMINATE_BUS,
++	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND |
++		 SER_RS485_RX_DURING_TX | SER_RS485_TERMINATE_BUS,
+ };
+ 
+ static const struct property_entry iot2040_gpio_properties[] = {
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 578f35895b273..dd2be7a813c31 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1594,7 +1594,7 @@ static int omap8250_remove(struct platform_device *pdev)
+ 
+ 	err = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (err)
+-		return err;
++		dev_err(&pdev->dev, "Failed to resume hardware\n");
+ 
+ 	up = serial8250_get_port(priv->line);
+ 	omap_8250_shutdown(&up->port);
+diff --git a/drivers/tty/serial/apbuart.c b/drivers/tty/serial/apbuart.c
+index 716cb014c028e..364599f256db8 100644
+--- a/drivers/tty/serial/apbuart.c
++++ b/drivers/tty/serial/apbuart.c
+@@ -122,7 +122,7 @@ static void apbuart_tx_chars(struct uart_port *port)
+ {
+ 	u8 ch;
+ 
+-	uart_port_tx_limited(port, ch, port->fifosize >> 1,
++	uart_port_tx_limited(port, ch, port->fifosize,
+ 		true,
+ 		UART_PUT_CHAR(port, ch),
+ 		({}));
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 708b9852a575d..81557a58f86f4 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -415,13 +415,13 @@ static void imx_uart_stop_tx(struct uart_port *port)
+ 	ucr1 = imx_uart_readl(sport, UCR1);
+ 	imx_uart_writel(sport, ucr1 & ~UCR1_TRDYEN, UCR1);
+ 
++	ucr4 = imx_uart_readl(sport, UCR4);
+ 	usr2 = imx_uart_readl(sport, USR2);
+-	if (!(usr2 & USR2_TXDC)) {
++	if ((!(usr2 & USR2_TXDC)) && (ucr4 & UCR4_TCEN)) {
+ 		/* The shifter is still busy, so retry once TC triggers */
+ 		return;
+ 	}
+ 
+-	ucr4 = imx_uart_readl(sport, UCR4);
+ 	ucr4 &= ~UCR4_TCEN;
+ 	imx_uart_writel(sport, ucr4, UCR4);
+ 
+@@ -1943,10 +1943,6 @@ static int imx_uart_rs485_config(struct uart_port *port, struct ktermios *termio
+ 	    rs485conf->flags & SER_RS485_RX_DURING_TX)
+ 		imx_uart_start_rx(port);
+ 
+-	if (port->rs485_rx_during_tx_gpio)
+-		gpiod_set_value_cansleep(port->rs485_rx_during_tx_gpio,
+-					 !!(rs485conf->flags & SER_RS485_RX_DURING_TX));
+-
+ 	return 0;
+ }
+ 
+@@ -2210,7 +2206,6 @@ static enum hrtimer_restart imx_trigger_stop_tx(struct hrtimer *t)
+ 	return HRTIMER_NORESTART;
+ }
+ 
+-static const struct serial_rs485 imx_no_rs485 = {};	/* No RS485 if no RTS */
+ static const struct serial_rs485 imx_rs485_supported = {
+ 	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND |
+ 		 SER_RS485_RX_DURING_TX,
+@@ -2294,8 +2289,6 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 	/* RTS is required to control the RS485 transmitter */
+ 	if (sport->have_rtscts || sport->have_rtsgpio)
+ 		sport->port.rs485_supported = imx_rs485_supported;
+-	else
+-		sport->port.rs485_supported = imx_no_rs485;
+ 	sport->port.flags = UPF_BOOT_AUTOCONF;
+ 	timer_setup(&sport->timer, imx_uart_timeout, 0);
+ 
+@@ -2322,19 +2315,13 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 	/* For register access, we only need to enable the ipg clock. */
+ 	ret = clk_prepare_enable(sport->clk_ipg);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "failed to enable per clk: %d\n", ret);
++		dev_err(&pdev->dev, "failed to enable ipg clk: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	ret = uart_get_rs485_mode(&sport->port);
+-	if (ret) {
+-		clk_disable_unprepare(sport->clk_ipg);
+-		return ret;
+-	}
+-
+-	if (sport->port.rs485.flags & SER_RS485_ENABLED &&
+-	    (!sport->have_rtscts && !sport->have_rtsgpio))
+-		dev_err(&pdev->dev, "no RTS control, disabling rs485\n");
++	if (ret)
++		goto err_clk;
+ 
+ 	/*
+ 	 * If using the i.MX UART RTS/CTS control then the RTS (CTS_B)
+@@ -2414,8 +2401,6 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		imx_uart_writel(sport, ucr3, UCR3);
+ 	}
+ 
+-	clk_disable_unprepare(sport->clk_ipg);
+-
+ 	hrtimer_init(&sport->trigger_start_tx, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	hrtimer_init(&sport->trigger_stop_tx, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	sport->trigger_start_tx.function = imx_trigger_start_tx;
+@@ -2431,7 +2416,7 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request rx irq: %d\n",
+ 				ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 
+ 		ret = devm_request_irq(&pdev->dev, txirq, imx_uart_txint, 0,
+@@ -2439,7 +2424,7 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request tx irq: %d\n",
+ 				ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 
+ 		ret = devm_request_irq(&pdev->dev, rtsirq, imx_uart_rtsint, 0,
+@@ -2447,14 +2432,14 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request rts irq: %d\n",
+ 				ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 	} else {
+ 		ret = devm_request_irq(&pdev->dev, rxirq, imx_uart_int, 0,
+ 				       dev_name(&pdev->dev), sport);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to request irq: %d\n", ret);
+-			return ret;
++			goto err_clk;
+ 		}
+ 	}
+ 
+@@ -2462,7 +2447,12 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, sport);
+ 
+-	return uart_add_one_port(&imx_uart_uart_driver, &sport->port);
++	ret = uart_add_one_port(&imx_uart_uart_driver, &sport->port);
++
++err_clk:
++	clk_disable_unprepare(sport->clk_ipg);
++
++	return ret;
+ }
+ 
+ static int imx_uart_remove(struct platform_device *pdev)
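
Aside: the imx_uart_probe() hunks convert bare returns into goto err_clk so the ipg clock enabled earlier in probe is always released on failure, and the last hunk makes even the success path fall through the label, since the clock is only needed while probe accesses registers. The unwind shape, sketched with one clock and one fallible step (demo_probe() is hypothetical):

#include <linux/clk.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev, struct clk *ipg)
{
	int ret;

	ret = clk_prepare_enable(ipg);		/* acquire first */
	if (ret)
		return ret;

	ret = platform_get_irq(pdev, 0);	/* any later fallible step */
	if (ret < 0)
		goto err_clk;			/* never a bare return here */

	ret = 0;
err_clk:
	/* Single unwind path, taken on success too, as in the patched probe. */
	clk_disable_unprepare(ipg);
	return ret;
}
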
+diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
+index ad4c1c5d0a7f0..f4c6ff8064652 100644
+--- a/drivers/tty/serial/omap-serial.c
++++ b/drivers/tty/serial/omap-serial.c
+@@ -1483,6 +1483,13 @@ static struct omap_uart_port_info *of_get_uart_port_info(struct device *dev)
+ 	return omap_up_info;
+ }
+ 
++static const struct serial_rs485 serial_omap_rs485_supported = {
++	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND |
++		 SER_RS485_RX_DURING_TX,
++	.delay_rts_before_send = 1,
++	.delay_rts_after_send = 1,
++};
++
+ static int serial_omap_probe_rs485(struct uart_omap_port *up,
+ 				   struct device *dev)
+ {
+@@ -1497,6 +1504,9 @@ static int serial_omap_probe_rs485(struct uart_omap_port *up,
+ 	if (!np)
+ 		return 0;
+ 
++	up->port.rs485_config = serial_omap_config_rs485;
++	up->port.rs485_supported = serial_omap_rs485_supported;
++
+ 	ret = uart_get_rs485_mode(&up->port);
+ 	if (ret)
+ 		return ret;
+@@ -1531,13 +1541,6 @@ static int serial_omap_probe_rs485(struct uart_omap_port *up,
+ 	return 0;
+ }
+ 
+-static const struct serial_rs485 serial_omap_rs485_supported = {
+-	.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND |
+-		 SER_RS485_RX_DURING_TX,
+-	.delay_rts_before_send = 1,
+-	.delay_rts_after_send = 1,
+-};
+-
+ static int serial_omap_probe(struct platform_device *pdev)
+ {
+ 	struct omap_uart_port_info *omap_up_info = dev_get_platdata(&pdev->dev);
+@@ -1604,17 +1607,11 @@ static int serial_omap_probe(struct platform_device *pdev)
+ 		dev_info(up->port.dev, "no wakeirq for uart%d\n",
+ 			 up->port.line);
+ 
+-	ret = serial_omap_probe_rs485(up, &pdev->dev);
+-	if (ret < 0)
+-		goto err_rs485;
+-
+ 	sprintf(up->name, "OMAP UART%d", up->port.line);
+ 	up->port.mapbase = mem->start;
+ 	up->port.membase = base;
+ 	up->port.flags = omap_up_info->flags;
+ 	up->port.uartclk = omap_up_info->uartclk;
+-	up->port.rs485_config = serial_omap_config_rs485;
+-	up->port.rs485_supported = serial_omap_rs485_supported;
+ 	if (!up->port.uartclk) {
+ 		up->port.uartclk = DEFAULT_CLK_SPEED;
+ 		dev_warn(&pdev->dev,
+@@ -1622,6 +1619,10 @@ static int serial_omap_probe(struct platform_device *pdev)
+ 			 DEFAULT_CLK_SPEED);
+ 	}
+ 
++	ret = serial_omap_probe_rs485(up, &pdev->dev);
++	if (ret < 0)
++		goto err_rs485;
++
+ 	up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+ 	up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+ 	cpu_latency_qos_add_request(&up->pm_qos_request, up->latency);
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index cf0c6120d30ed..d5bb0da95376f 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -24,6 +24,7 @@
+ #include <linux/tty_flip.h>
+ #include <linux/spi/spi.h>
+ #include <linux/uaccess.h>
++#include <linux/units.h>
+ #include <uapi/linux/sched/types.h>
+ 
+ #define SC16IS7XX_NAME			"sc16is7xx"
+@@ -1727,9 +1728,12 @@ static int sc16is7xx_spi_probe(struct spi_device *spi)
+ 
+ 	/* Setup SPI bus */
+ 	spi->bits_per_word	= 8;
+-	/* only supports mode 0 on SC16IS762 */
++	/* For all variants, only mode 0 is supported */
++	if ((spi->mode & SPI_MODE_X_MASK) != SPI_MODE_0)
++		return dev_err_probe(&spi->dev, -EINVAL, "Unsupported SPI mode\n");
++
+ 	spi->mode		= spi->mode ? : SPI_MODE_0;
+-	spi->max_speed_hz	= spi->max_speed_hz ? : 15000000;
++	spi->max_speed_hz	= spi->max_speed_hz ? : 4 * HZ_PER_MHZ;
+ 	ret = spi_setup(spi);
+ 	if (ret)
+ 		return ret;
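
Aside: the sc16is7xx hunk stops silently overriding whatever SPI mode firmware supplied: anything other than mode 0 is now rejected with dev_err_probe(), and the default bus ceiling is written as 4 * HZ_PER_MHZ rather than a magic number. The validation on its own; demo_spi_probe() is hypothetical:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/spi/spi.h>
#include <linux/units.h>

static int demo_spi_probe(struct spi_device *spi)
{
	/* SPI_MODE_X_MASK covers the SPI_CPOL | SPI_CPHA bits. */
	if ((spi->mode & SPI_MODE_X_MASK) != SPI_MODE_0)
		return dev_err_probe(&spi->dev, -EINVAL,
				     "Unsupported SPI mode\n");

	spi->bits_per_word = 8;
	spi->max_speed_hz = spi->max_speed_hz ?: 4 * HZ_PER_MHZ;
	return spi_setup(spi);
}
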
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index f1348a5095527..93e4e1693601f 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1371,19 +1371,27 @@ static void uart_sanitize_serial_rs485(struct uart_port *port, struct serial_rs4
+ 		return;
+ 	}
+ 
++	rs485->flags &= supported_flags;
++
+ 	/* Pick sane settings if the user hasn't */
+-	if ((supported_flags & (SER_RS485_RTS_ON_SEND|SER_RS485_RTS_AFTER_SEND)) &&
+-	    !(rs485->flags & SER_RS485_RTS_ON_SEND) ==
++	if (!(rs485->flags & SER_RS485_RTS_ON_SEND) ==
+ 	    !(rs485->flags & SER_RS485_RTS_AFTER_SEND)) {
+-		dev_warn_ratelimited(port->dev,
+-			"%s (%d): invalid RTS setting, using RTS_ON_SEND instead\n",
+-			port->name, port->line);
+-		rs485->flags |= SER_RS485_RTS_ON_SEND;
+-		rs485->flags &= ~SER_RS485_RTS_AFTER_SEND;
+-		supported_flags |= SER_RS485_RTS_ON_SEND|SER_RS485_RTS_AFTER_SEND;
+-	}
++		if (supported_flags & SER_RS485_RTS_ON_SEND) {
++			rs485->flags |= SER_RS485_RTS_ON_SEND;
++			rs485->flags &= ~SER_RS485_RTS_AFTER_SEND;
+ 
+-	rs485->flags &= supported_flags;
++			dev_warn_ratelimited(port->dev,
++				"%s (%d): invalid RTS setting, using RTS_ON_SEND instead\n",
++				port->name, port->line);
++		} else {
++			rs485->flags |= SER_RS485_RTS_AFTER_SEND;
++			rs485->flags &= ~SER_RS485_RTS_ON_SEND;
++
++			dev_warn_ratelimited(port->dev,
++				"%s (%d): invalid RTS setting, using RTS_AFTER_SEND instead\n",
++				port->name, port->line);
++		}
++	}
+ 
+ 	uart_sanitize_serial_rs485_delays(port, rs485);
+ 
+@@ -1402,6 +1410,16 @@ static void uart_set_rs485_termination(struct uart_port *port,
+ 				 !!(rs485->flags & SER_RS485_TERMINATE_BUS));
+ }
+ 
++static void uart_set_rs485_rx_during_tx(struct uart_port *port,
++					const struct serial_rs485 *rs485)
++{
++	if (!(rs485->flags & SER_RS485_ENABLED))
++		return;
++
++	gpiod_set_value_cansleep(port->rs485_rx_during_tx_gpio,
++				 !!(rs485->flags & SER_RS485_RX_DURING_TX));
++}
++
+ static int uart_rs485_config(struct uart_port *port)
+ {
+ 	struct serial_rs485 *rs485 = &port->rs485;
+@@ -1413,12 +1431,17 @@ static int uart_rs485_config(struct uart_port *port)
+ 
+ 	uart_sanitize_serial_rs485(port, rs485);
+ 	uart_set_rs485_termination(port, rs485);
++	uart_set_rs485_rx_during_tx(port, rs485);
+ 
+ 	uart_port_lock_irqsave(port, &flags);
+ 	ret = port->rs485_config(port, NULL, rs485);
+ 	uart_port_unlock_irqrestore(port, flags);
+-	if (ret)
++	if (ret) {
+ 		memset(rs485, 0, sizeof(*rs485));
++		/* unset GPIOs */
++		gpiod_set_value_cansleep(port->rs485_term_gpio, 0);
++		gpiod_set_value_cansleep(port->rs485_rx_during_tx_gpio, 0);
++	}
+ 
+ 	return ret;
+ }
+@@ -1446,7 +1469,7 @@ static int uart_set_rs485_config(struct tty_struct *tty, struct uart_port *port,
+ 	int ret;
+ 	unsigned long flags;
+ 
+-	if (!port->rs485_config)
++	if (!(port->rs485_supported.flags & SER_RS485_ENABLED))
+ 		return -ENOTTY;
+ 
+ 	if (copy_from_user(&rs485, rs485_user, sizeof(*rs485_user)))
+@@ -1457,6 +1480,7 @@ static int uart_set_rs485_config(struct tty_struct *tty, struct uart_port *port,
+ 		return ret;
+ 	uart_sanitize_serial_rs485(port, &rs485);
+ 	uart_set_rs485_termination(port, &rs485);
++	uart_set_rs485_rx_during_tx(port, &rs485);
+ 
+ 	uart_port_lock_irqsave(port, &flags);
+ 	ret = port->rs485_config(port, &tty->termios, &rs485);
+@@ -1468,8 +1492,14 @@ static int uart_set_rs485_config(struct tty_struct *tty, struct uart_port *port,
+ 			port->ops->set_mctrl(port, port->mctrl);
+ 	}
+ 	uart_port_unlock_irqrestore(port, flags);
+-	if (ret)
++	if (ret) {
++		/* restore old GPIO settings */
++		gpiod_set_value_cansleep(port->rs485_term_gpio,
++			!!(port->rs485.flags & SER_RS485_TERMINATE_BUS));
++		gpiod_set_value_cansleep(port->rs485_rx_during_tx_gpio,
++			!!(port->rs485.flags & SER_RS485_RX_DURING_TX));
+ 		return ret;
++	}
+ 
+ 	if (copy_to_user(rs485_user, &port->rs485, sizeof(port->rs485)))
+ 		return -EFAULT;
+@@ -3570,6 +3600,9 @@ int uart_get_rs485_mode(struct uart_port *port)
+ 	u32 rs485_delay[2];
+ 	int ret;
+ 
++	if (!(port->rs485_supported.flags & SER_RS485_ENABLED))
++		return 0;
++
+ 	ret = device_property_read_u32_array(dev, "rs485-rts-delay",
+ 					     rs485_delay, 2);
+ 	if (!ret) {
+@@ -3620,6 +3653,8 @@ int uart_get_rs485_mode(struct uart_port *port)
+ 	if (IS_ERR(desc))
+ 		return dev_err_probe(dev, PTR_ERR(desc), "Cannot get rs485-rx-during-tx-gpios\n");
+ 	port->rs485_rx_during_tx_gpio = desc;
++	if (port->rs485_rx_during_tx_gpio)
++		port->rs485_supported.flags |= SER_RS485_RX_DURING_TX;
+ 
+ 	return 0;
+ }
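
Aside: the reworked uart_sanitize_serial_rs485() masks the user's flags by what the driver supports first, and when the two RTS polarity flags come out inconsistent (both set or both clear) it now falls back to whichever polarity the driver can actually drive instead of always forcing RTS_ON_SEND. The core decision, reduced to a sketch (pick_rts_polarity() is hypothetical):

#include <linux/serial.h>
#include <linux/types.h>

/* "Inconsistent" means the two flags are equal after masking. */
static void pick_rts_polarity(struct serial_rs485 *rs485, u32 supported)
{
	rs485->flags &= supported;

	if (!(rs485->flags & SER_RS485_RTS_ON_SEND) ==
	    !(rs485->flags & SER_RS485_RTS_AFTER_SEND)) {
		if (supported & SER_RS485_RTS_ON_SEND) {
			rs485->flags |= SER_RS485_RTS_ON_SEND;
			rs485->flags &= ~SER_RS485_RTS_AFTER_SEND;
		} else {
			rs485->flags |= SER_RS485_RTS_AFTER_SEND;
			rs485->flags &= ~SER_RS485_RTS_ON_SEND;
		}
	}
}
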
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 3048620315d60..fc7fd40bca988 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -226,12 +226,6 @@ static int stm32_usart_config_rs485(struct uart_port *port, struct ktermios *ter
+ 
+ 	stm32_usart_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ 
+-	if (port->rs485_rx_during_tx_gpio)
+-		gpiod_set_value_cansleep(port->rs485_rx_during_tx_gpio,
+-					 !!(rs485conf->flags & SER_RS485_RX_DURING_TX));
+-	else
+-		rs485conf->flags |= SER_RS485_RX_DURING_TX;
+-
+ 	if (rs485conf->flags & SER_RS485_ENABLED) {
+ 		cr1 = readl_relaxed(port->membase + ofs->cr1);
+ 		cr3 = readl_relaxed(port->membase + ofs->cr3);
+@@ -256,6 +250,8 @@ static int stm32_usart_config_rs485(struct uart_port *port, struct ktermios *ter
+ 
+ 		writel_relaxed(cr3, port->membase + ofs->cr3);
+ 		writel_relaxed(cr1, port->membase + ofs->cr1);
++
++		rs485conf->flags |= SER_RS485_RX_DURING_TX;
+ 	} else {
+ 		stm32_usart_clr_bits(port, ofs->cr3,
+ 				     USART_CR3_DEM | USART_CR3_DEP);
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 06414e43e0b53..96617f9af819d 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -2489,6 +2489,9 @@ static int send_break(struct tty_struct *tty, unsigned int duration)
+ 	if (!retval) {
+ 		msleep_interruptible(duration);
+ 		retval = tty->ops->break_ctl(tty, 0);
++	} else if (retval == -EOPNOTSUPP) {
++		/* some drivers can only determine break support dynamically */
++		retval = 0;
+ 	}
+ 	tty_write_unlock(tty);
+ 
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 16d76325039a5..ce2f76769fb0b 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8646,7 +8646,6 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
+ 
+ 	ufs_bsg_probe(hba);
+ 	scsi_scan_host(hba->host);
+-	pm_runtime_put_sync(hba->dev);
+ 
+ out:
+ 	return ret;
+@@ -8914,15 +8913,15 @@ static void ufshcd_async_scan(void *data, async_cookie_t cookie)
+ 
+ 	/* Probe and add UFS logical units  */
+ 	ret = ufshcd_add_lus(hba);
++
+ out:
++	pm_runtime_put_sync(hba->dev);
+ 	/*
+ 	 * If we failed to initialize the device or the device is not
+ 	 * present, turn off the power/clocks etc.
+ 	 */
+-	if (ret) {
+-		pm_runtime_put_sync(hba->dev);
++	if (ret)
+ 		ufshcd_hba_exit(hba);
+-	}
+ }
+ 
+ static enum scsi_timeout_action ufshcd_eh_timed_out(struct scsi_cmnd *scmd)
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 17e24270477dd..0f4b3f16d3d7a 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -158,7 +158,7 @@ static int ufs_qcom_ice_program_key(struct ufs_hba *hba,
+ 	cap = hba->crypto_cap_array[cfg->crypto_cap_idx];
+ 	if (cap.algorithm_id != UFS_CRYPTO_ALG_AES_XTS ||
+ 	    cap.key_size != UFS_CRYPTO_KEY_SIZE_256)
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 
+ 	if (config_enable)
+ 		return qcom_ice_program_key(host->ice,
+@@ -1787,7 +1787,7 @@ static int ufs_qcom_mcq_config_resource(struct ufs_hba *hba)
+ 		if (!res->resource) {
+ 			dev_info(hba->dev, "Resource %s not provided\n", res->name);
+ 			if (i == RES_UFS)
+-				return -ENOMEM;
++				return -ENODEV;
+ 			continue;
+ 		} else if (i == RES_UFS) {
+ 			res_mem = res->resource;
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index 11a5b3437c32d..d140010257004 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -1119,6 +1119,8 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 	dma_addr_t trb_dma;
+ 	u32 togle_pcs = 1;
+ 	int sg_iter = 0;
++	int num_trb_req;
++	int trb_burst;
+ 	int num_trb;
+ 	int address;
+ 	u32 control;
+@@ -1127,15 +1129,13 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 	struct scatterlist *s = NULL;
+ 	bool sg_supported = !!(request->num_mapped_sgs);
+ 
++	num_trb_req = sg_supported ? request->num_mapped_sgs : 1;
++
++	/* ISO transfers require a TD per SOF; each TD includes several TRBs */
+ 	if (priv_ep->type == USB_ENDPOINT_XFER_ISOC)
+-		num_trb = priv_ep->interval;
++		num_trb = priv_ep->interval * num_trb_req;
+ 	else
+-		num_trb = sg_supported ? request->num_mapped_sgs : 1;
+-
+-	if (num_trb > priv_ep->free_trbs) {
+-		priv_ep->flags |= EP_RING_FULL;
+-		return -ENOBUFS;
+-	}
++		num_trb = num_trb_req;
+ 
+ 	priv_req = to_cdns3_request(request);
+ 	address = priv_ep->endpoint.desc->bEndpointAddress;
+@@ -1184,14 +1184,31 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 
+ 		link_trb->control = cpu_to_le32(((priv_ep->pcs) ? TRB_CYCLE : 0) |
+ 				    TRB_TYPE(TRB_LINK) | TRB_TOGGLE | ch_bit);
++
++		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC) {
++			/*
++			 * ISO requires the LINK TRB to be the first TRB of a TD.
++			 * Fill the remaining TRB space with LINK TRBs to simplify
++			 * the software processing logic.
++			 */
++			while (priv_ep->enqueue) {
++				*trb = *link_trb;
++				trace_cdns3_prepare_trb(priv_ep, trb);
++
++				cdns3_ep_inc_enq(priv_ep);
++				trb = priv_ep->trb_pool + priv_ep->enqueue;
++				priv_req->trb = trb;
++			}
++		}
++	}
++
++	if (num_trb > priv_ep->free_trbs) {
++		priv_ep->flags |= EP_RING_FULL;
++		return -ENOBUFS;
+ 	}
+ 
+ 	if (priv_dev->dev_ver <= DEV_VER_V2)
+ 		togle_pcs = cdns3_wa1_update_guard(priv_ep, trb);
+ 
+-	if (sg_supported)
+-		s = request->sg;
+-
+ 	/* set incorrect Cycle Bit for first trb*/
+ 	control = priv_ep->pcs ? 0 : TRB_CYCLE;
+ 	trb->length = 0;
+@@ -1209,6 +1226,9 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 	do {
+ 		u32 length;
+ 
++		if (!(sg_iter % num_trb_req) && sg_supported)
++			s = request->sg;
++
+ 		/* fill TRB */
+ 		control |= TRB_TYPE(TRB_NORMAL);
+ 		if (sg_supported) {
+@@ -1223,7 +1243,36 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 			total_tdl += DIV_ROUND_UP(length,
+ 					       priv_ep->endpoint.maxpacket);
+ 
+-		trb->length |= cpu_to_le32(TRB_BURST_LEN(priv_ep->trb_burst_size) |
++		trb_burst = priv_ep->trb_burst_size;
++
++		/*
++		 * The DMA 4 KiB boundary-crossing problem was supposedly fixed in
++		 * DEV_VER_V2, but it is still seen on ISO transfers when SG is enabled.
++		 *
++		 * With SG enabled, a 1 KiB packet size and a mult of 2, the data
++		 * pattern looks like:
++		 *       [UVC Header(8B) ] [data(3k - 8)] ...
++		 *
++		 * The data received at offset 0xd000 actually contains the data from
++		 * 0xc000 (len 0x70). The corruption follows this pattern:
++		 *	0xd000: wrong
++		 *	0xe000: wrong
++		 *	0xf000: correct
++		 *	0x10000: wrong
++		 *	0x11000: wrong
++		 *	0x12000: correct
++		 *	...
++		 *
++		 * It is still unclear why no error is seen below 0xd000, which should
++		 * also cross a 4 KiB boundary; in any case, the code below fixes it.
++		 *
++		 * To avoid the DMA crossing a 4 KiB boundary during ISO transfers,
++		 * reduce the burst length to 16.
++		 */
++		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && priv_dev->dev_ver <= DEV_VER_V2)
++			if (ALIGN_DOWN(trb->buffer, SZ_4K) !=
++			    ALIGN_DOWN(trb->buffer + length, SZ_4K))
++				trb_burst = 16;
++
++		trb->length |= cpu_to_le32(TRB_BURST_LEN(trb_burst) |
+ 					TRB_LEN(length));
+ 		pcs = priv_ep->pcs ? TRB_CYCLE : 0;
+ 
+@@ -1250,7 +1299,7 @@ static int cdns3_ep_run_transfer(struct cdns3_endpoint *priv_ep,
+ 		if (sg_supported) {
+ 			trb->control |= cpu_to_le32(TRB_ISP);
+ 			/* Don't set chain bit for last TRB */
+-			if (sg_iter < num_trb - 1)
++			if ((sg_iter % num_trb_req) < num_trb_req - 1)
+ 				trb->control |= cpu_to_le32(TRB_CHAIN);
+ 
+ 			s = sg_next(s);
+@@ -1508,6 +1557,12 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
+ 
+ 		/* The TRB was changed as link TRB, and the request was handled at ep_dequeue */
+ 		while (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {
++
++			/* ISO ep_traddr may stop at LINK TRB */
++			if (priv_ep->dequeue == cdns3_get_dma_pos(priv_dev, priv_ep) &&
++			    priv_ep->type == USB_ENDPOINT_XFER_ISOC)
++				break;
++
+ 			trace_cdns3_complete_trb(priv_ep, trb);
+ 			cdns3_ep_inc_deq(priv_ep);
+ 			trb = priv_ep->trb_pool + priv_ep->dequeue;
+@@ -1540,6 +1595,10 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
+ 			}
+ 
+ 			if (request_handled) {
++				/* TRBs are duplicated priv_ep->interval times for ISO IN */
++				if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && priv_ep->dir)
++					request->actual /= priv_ep->interval;
++
+ 				cdns3_gadget_giveback(priv_ep, priv_req, 0);
+ 				request_handled = false;
+ 				transfer_end = false;
+@@ -2035,11 +2094,10 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	bool is_iso_ep = (priv_ep->type == USB_ENDPOINT_XFER_ISOC);
+ 	struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
+ 	u32 bEndpointAddress = priv_ep->num | priv_ep->dir;
+-	u32 max_packet_size = 0;
+-	u8 maxburst = 0;
++	u32 max_packet_size = priv_ep->wMaxPacketSize;
++	u8 maxburst = priv_ep->bMaxBurst;
+ 	u32 ep_cfg = 0;
+ 	u8 buffering;
+-	u8 mult = 0;
+ 	int ret;
+ 
+ 	buffering = priv_dev->ep_buf_size - 1;
+@@ -2061,8 +2119,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 		break;
+ 	default:
+ 		ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_ISOC);
+-		mult = priv_dev->ep_iso_burst - 1;
+-		buffering = mult + 1;
++		buffering = (priv_ep->bMaxBurst + 1) * (priv_ep->mult + 1) - 1;
+ 	}
+ 
+ 	switch (priv_dev->gadget.speed) {
+@@ -2073,17 +2130,8 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 		max_packet_size = is_iso_ep ? 1024 : 512;
+ 		break;
+ 	case USB_SPEED_SUPER:
+-		/* It's limitation that driver assumes in driver. */
+-		mult = 0;
+-		max_packet_size = 1024;
+-		if (priv_ep->type == USB_ENDPOINT_XFER_ISOC) {
+-			maxburst = priv_dev->ep_iso_burst - 1;
+-			buffering = (mult + 1) *
+-				    (maxburst + 1);
+-
+-			if (priv_ep->interval > 1)
+-				buffering++;
+-		} else {
++		if (priv_ep->type != USB_ENDPOINT_XFER_ISOC) {
++			max_packet_size = 1024;
+ 			maxburst = priv_dev->ep_buf_size - 1;
+ 		}
+ 		break;
+@@ -2112,7 +2160,6 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	if (priv_dev->dev_ver < DEV_VER_V2)
+ 		priv_ep->trb_burst_size = 16;
+ 
+-	mult = min_t(u8, mult, EP_CFG_MULT_MAX);
+ 	buffering = min_t(u8, buffering, EP_CFG_BUFFERING_MAX);
+ 	maxburst = min_t(u8, maxburst, EP_CFG_MAXBURST_MAX);
+ 
+@@ -2146,7 +2193,7 @@ int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ 	}
+ 
+ 	ep_cfg |= EP_CFG_MAXPKTSIZE(max_packet_size) |
+-		  EP_CFG_MULT(mult) |
++		  EP_CFG_MULT(priv_ep->mult) |			/* must match EP setting */
+ 		  EP_CFG_BUFFERING(buffering) |
+ 		  EP_CFG_MAXBURST(maxburst);
+ 
+@@ -2236,6 +2283,13 @@ usb_ep *cdns3_gadget_match_ep(struct usb_gadget *gadget,
+ 	priv_ep->type = usb_endpoint_type(desc);
+ 	priv_ep->flags |= EP_CLAIMED;
+ 	priv_ep->interval = desc->bInterval ? BIT(desc->bInterval - 1) : 0;
++	priv_ep->wMaxPacketSize =  usb_endpoint_maxp(desc);
++	priv_ep->mult = USB_EP_MAXP_MULT(priv_ep->wMaxPacketSize);
++	priv_ep->wMaxPacketSize &= USB_ENDPOINT_MAXP_MASK;
++	if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && comp_desc) {
++		priv_ep->mult =  USB_SS_MULT(comp_desc->bmAttributes) - 1;
++		priv_ep->bMaxBurst = comp_desc->bMaxBurst;
++	}
+ 
+ 	spin_unlock_irqrestore(&priv_dev->lock, flags);
+ 	return &priv_ep->endpoint;
+@@ -3019,22 +3073,40 @@ static int cdns3_gadget_check_config(struct usb_gadget *gadget)
+ 	struct cdns3_endpoint *priv_ep;
+ 	struct usb_ep *ep;
+ 	int n_in = 0;
++	int iso = 0;
++	int out = 1;
+ 	int total;
++	int n;
+ 
+ 	list_for_each_entry(ep, &gadget->ep_list, ep_list) {
+ 		priv_ep = ep_to_cdns3_ep(ep);
+-		if ((priv_ep->flags & EP_CLAIMED) && (ep->address & USB_DIR_IN))
+-			n_in++;
++		if (!(priv_ep->flags & EP_CLAIMED))
++			continue;
++
++		n = (priv_ep->mult + 1) * (priv_ep->bMaxBurst + 1);
++		if (ep->address & USB_DIR_IN) {
++			/*
++			 * ISO transfer: the DMA starts moving data when the ISO
++			 * token arrives and transfers only min(TD size, iso), so
++			 * there is no benefit in allocating more internal memory
++			 * than 'iso'.
++			 */
++			if (priv_ep->type == USB_ENDPOINT_XFER_ISOC)
++				iso += n;
++			else
++				n_in++;
++		} else {
++			if (priv_ep->type == USB_ENDPOINT_XFER_ISOC)
++				out = max_t(int, out, n);
++		}
+ 	}
+ 
+ 	/* 2KB are reserved for EP0, 1KB for out*/
+-	total = 2 + n_in + 1;
++	total = 2 + n_in + out + iso;
+ 
+ 	if (total > priv_dev->onchip_buffers)
+ 		return -ENOMEM;
+ 
+-	priv_dev->ep_buf_size = priv_dev->ep_iso_burst =
+-			(priv_dev->onchip_buffers - 2) / (n_in + 1);
++	priv_dev->ep_buf_size = (priv_dev->onchip_buffers - 2 - iso) / (n_in + out);
+ 
+ 	return 0;
+ }
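
Aside: the rewritten cdns3_gadget_check_config() budgets on-chip buffers as 2 for EP0, one per non-ISO IN endpoint, a shared OUT allowance, and (mult + 1) * (maxburst + 1) per ISO IN endpoint; whatever remains is split among the non-ISO endpoints. The accounting restated as a sketch, assuming the same buffer units as the hunk (onchip_budget() is hypothetical):

#include <linux/errno.h>

static int onchip_budget(int onchip_buffers, int n_in, int out, int iso)
{
	int total = 2 + n_in + out + iso;	/* 2 reserved for EP0 */

	if (total > onchip_buffers)
		return -ENOMEM;

	/* per-endpoint share left for the non-ISO endpoints */
	return (onchip_buffers - 2 - iso) / (n_in + out);
}
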
+diff --git a/drivers/usb/cdns3/cdns3-gadget.h b/drivers/usb/cdns3/cdns3-gadget.h
+index fbe4a8e3aa897..086a7bb838975 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.h
++++ b/drivers/usb/cdns3/cdns3-gadget.h
+@@ -1168,6 +1168,9 @@ struct cdns3_endpoint {
+ 	u8			dir;
+ 	u8			num;
+ 	u8			type;
++	u8			mult;
++	u8			bMaxBurst;
++	u16			wMaxPacketSize;
+ 	int			interval;
+ 
+ 	int			free_trbs;
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 7ac39a281b8cb..85e9c3ab66e94 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -523,6 +523,13 @@ static irqreturn_t ci_irq_handler(int irq, void *data)
+ 	u32 otgsc = 0;
+ 
+ 	if (ci->in_lpm) {
++		/*
++		 * If a wakeup irq is already pending, just return and let the
++		 * in-progress resume finish first.
++		 */
++		if (ci->wakeup_int)
++			return IRQ_HANDLED;
++
+ 		disable_irq_nosync(irq);
+ 		ci->wakeup_int = true;
+ 		pm_runtime_get(ci->dev);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index a1f4e1ead97ff..0e7439dba8fe8 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -916,6 +916,9 @@ static int acm_tty_break_ctl(struct tty_struct *tty, int state)
+ 	struct acm *acm = tty->driver_data;
+ 	int retval;
+ 
++	if (!(acm->ctrl_caps & USB_CDC_CAP_BRK))
++		return -EOPNOTSUPP;
++
+ 	retval = acm_send_break(acm, state ? 0xffff : 0);
+ 	if (retval < 0)
+ 		dev_dbg(&acm->control->dev,
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index b101dbf8c5dcc..f50b5575d588a 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -277,48 +277,11 @@ int dwc3_core_soft_reset(struct dwc3 *dwc)
+ 	/*
+ 	 * We're resetting only the device side because, if we're in host mode,
+ 	 * XHCI driver will reset the host block. If dwc3 was configured for
+-	 * host-only mode or current role is host, then we can return early.
++	 * host-only mode, then we can return early.
+ 	 */
+ 	if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
+ 		return 0;
+ 
+-	/*
+-	 * If the dr_mode is host and the dwc->current_dr_role is not the
+-	 * corresponding DWC3_GCTL_PRTCAP_HOST, then the dwc3_core_init_mode
+-	 * isn't executed yet. Ensure the phy is ready before the controller
+-	 * updates the GCTL.PRTCAPDIR or other settings by soft-resetting
+-	 * the phy.
+-	 *
+-	 * Note: GUSB3PIPECTL[n] and GUSB2PHYCFG[n] are port settings where n
+-	 * is port index. If this is a multiport host, then we need to reset
+-	 * all active ports.
+-	 */
+-	if (dwc->dr_mode == USB_DR_MODE_HOST) {
+-		u32 usb3_port;
+-		u32 usb2_port;
+-
+-		usb3_port = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0));
+-		usb3_port |= DWC3_GUSB3PIPECTL_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port);
+-
+-		usb2_port = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+-		usb2_port |= DWC3_GUSB2PHYCFG_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port);
+-
+-		/* Small delay for phy reset assertion */
+-		usleep_range(1000, 2000);
+-
+-		usb3_port &= ~DWC3_GUSB3PIPECTL_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port);
+-
+-		usb2_port &= ~DWC3_GUSB2PHYCFG_PHYSOFTRST;
+-		dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port);
+-
+-		/* Wait for clock synchronization */
+-		msleep(50);
+-		return 0;
+-	}
+-
+ 	reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ 	reg |= DWC3_DCTL_CSFTRST;
+ 	reg &= ~DWC3_DCTL_RUN_STOP;
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index b942432372937..6ae8a36f21cf6 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -238,7 +238,10 @@ void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
+ 		struct dwc3_request	*req;
+ 
+ 		req = next_request(&dep->pending_list);
+-		dwc3_gadget_giveback(dep, req, -ECONNRESET);
++		if (!dwc->connected)
++			dwc3_gadget_giveback(dep, req, -ESHUTDOWN);
++		else
++			dwc3_gadget_giveback(dep, req, -ECONNRESET);
+ 	}
+ 
+ 	dwc->eps[0]->trb_enqueue = 0;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 858fe4c299b7a..89de363ecf8bb 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2103,7 +2103,17 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
+ 
+ 	list_for_each_entry(r, &dep->pending_list, list) {
+ 		if (r == req) {
+-			dwc3_gadget_giveback(dep, req, -ECONNRESET);
++			/*
++			 * Explicitly check for EP0/1, as dequeue for those
++			 * EPs needs to be handled differently.  The control EP
++			 * only deals with one USB req, and giveback will
++			 * occur during dwc3_ep0_stall_and_restart().  EP0
++			 * requests are never added to started_list.
++			 */
++			if (dep->number > 1)
++				dwc3_gadget_giveback(dep, req, -ECONNRESET);
++			else
++				dwc3_ep0_reset_state(dwc);
+ 			goto out;
+ 		}
+ 	}
+@@ -3973,6 +3983,13 @@ static void dwc3_gadget_disconnect_interrupt(struct dwc3 *dwc)
+ 	usb_gadget_set_state(dwc->gadget, USB_STATE_NOTATTACHED);
+ 
+ 	dwc3_ep0_reset_state(dwc);
++
++	/*
++	 * Request PM idle to address the condition where the usage count
++	 * has already been decremented to zero but we are still waiting
++	 * for the disconnect interrupt to set dwc->connected to FALSE.
++	 */
++	pm_request_idle(dwc->dev);
+ }
+ 
+ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index 786379f1b7b72..c6965e389473b 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -722,13 +722,29 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
+ 	}
+ 	uvc->enable_interrupt_ep = opts->enable_interrupt_ep;
+ 
+-	ep = usb_ep_autoconfig(cdev->gadget, &uvc_fs_streaming_ep);
++	/*
++	 * The gadget_is_{super|dual}speed() APIs check the UDC controller capability. The
++	 * highest-speed endpoint descriptor should be passed down to the UDC controller so
++	 * that its driver can reserve enough resources at check_config(), especially mult
++	 * and maxburst. The UDC driver (such as cdns3) then knows it needs at least
++	 * (mult + 1) * (maxburst + 1) * wMaxPacketSize of internal memory for this UVC
++	 * function. This is the only straightforward method to resolve the UDC resource
++	 * allocation issue in the current gadget framework.
++	 */
++	if (gadget_is_superspeed(c->cdev->gadget))
++		ep = usb_ep_autoconfig_ss(cdev->gadget, &uvc_ss_streaming_ep,
++					  &uvc_ss_streaming_comp);
++	else if (gadget_is_dualspeed(cdev->gadget))
++		ep = usb_ep_autoconfig(cdev->gadget, &uvc_hs_streaming_ep);
++	else
++		ep = usb_ep_autoconfig(cdev->gadget, &uvc_fs_streaming_ep);
++
+ 	if (!ep) {
+ 		uvcg_info(f, "Unable to allocate streaming EP\n");
+ 		goto error;
+ 	}
+ 	uvc->video.ep = ep;
+ 
++	uvc_fs_streaming_ep.bEndpointAddress = uvc->video.ep->address;
+ 	uvc_hs_streaming_ep.bEndpointAddress = uvc->video.ep->address;
+ 	uvc_ss_streaming_ep.bEndpointAddress = uvc->video.ep->address;
+ 
+@@ -960,7 +976,8 @@ static void uvc_free(struct usb_function *f)
+ 	struct uvc_device *uvc = to_uvc(f);
+ 	struct f_uvc_opts *opts = container_of(f->fi, struct f_uvc_opts,
+ 					       func_inst);
+-	config_item_put(&uvc->header->item);
++	if (!opts->header)
++		config_item_put(&uvc->header->item);
+ 	--opts->refcnt;
+ 	kfree(uvc);
+ }
+@@ -1052,25 +1069,29 @@ static struct usb_function *uvc_alloc(struct usb_function_instance *fi)
+ 	uvc->desc.hs_streaming = opts->hs_streaming;
+ 	uvc->desc.ss_streaming = opts->ss_streaming;
+ 
+-	streaming = config_group_find_item(&opts->func_inst.group, "streaming");
+-	if (!streaming)
+-		goto err_config;
+-
+-	header = config_group_find_item(to_config_group(streaming), "header");
+-	config_item_put(streaming);
+-	if (!header)
+-		goto err_config;
+-
+-	h = config_group_find_item(to_config_group(header), "h");
+-	config_item_put(header);
+-	if (!h)
+-		goto err_config;
+-
+-	uvc->header = to_uvcg_streaming_header(h);
+-	if (!uvc->header->linked) {
+-		mutex_unlock(&opts->lock);
+-		kfree(uvc);
+-		return ERR_PTR(-EBUSY);
++	if (opts->header) {
++		uvc->header = opts->header;
++	} else {
++		streaming = config_group_find_item(&opts->func_inst.group, "streaming");
++		if (!streaming)
++			goto err_config;
++
++		header = config_group_find_item(to_config_group(streaming), "header");
++		config_item_put(streaming);
++		if (!header)
++			goto err_config;
++
++		h = config_group_find_item(to_config_group(header), "h");
++		config_item_put(header);
++		if (!h)
++			goto err_config;
++
++		uvc->header = to_uvcg_streaming_header(h);
++		if (!uvc->header->linked) {
++			mutex_unlock(&opts->lock);
++			kfree(uvc);
++			return ERR_PTR(-EBUSY);
++		}
+ 	}
+ 
+ 	uvc->desc.extension_units = &opts->extension_units;
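
Aside: the resource comment added in uvc_function_bind() above implies a concrete minimum. For example, a SuperSpeed streaming endpoint with wMaxPacketSize = 1024, bMaxBurst = 3 and mult = 1 needs at least (1 + 1) * (3 + 1) * 1024 = 8192 bytes of internal UDC memory; had only the full-speed descriptor been passed down, check_config() would have budgeted for a single full-speed packet and the stream would starve at higher speeds.
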
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index 9d1c40c152d86..3c5a6f6ac3414 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -1163,6 +1163,8 @@ struct net_device *gether_connect(struct gether *link)
+ 		if (netif_running(dev->net))
+ 			eth_start(dev, GFP_ATOMIC);
+ 
++		netif_device_attach(dev->net);
++
+ 	/* on error, disable any endpoints  */
+ 	} else {
+ 		(void) usb_ep_disable(link->out_ep);
+diff --git a/drivers/usb/gadget/function/u_uvc.h b/drivers/usb/gadget/function/u_uvc.h
+index 1ce58f61253c9..3ac392cbb7794 100644
+--- a/drivers/usb/gadget/function/u_uvc.h
++++ b/drivers/usb/gadget/function/u_uvc.h
+@@ -98,6 +98,12 @@ struct f_uvc_opts {
+ 	 */
+ 	struct mutex			lock;
+ 	int				refcnt;
++
++	/*
++	 * Used only by the legacy gadget. Shall be NULL for configfs-composed
++	 * gadgets, which is guaranteed because f_uvc's alloc_inst kzalloc()s it.
++	 */
++	struct uvcg_streaming_header	*header;
+ };
+ 
+ #endif /* U_UVC_H */
+diff --git a/drivers/usb/gadget/legacy/webcam.c b/drivers/usb/gadget/legacy/webcam.c
+index c06dd1af7a0c5..c395438d39780 100644
+--- a/drivers/usb/gadget/legacy/webcam.c
++++ b/drivers/usb/gadget/legacy/webcam.c
+@@ -12,6 +12,7 @@
+ #include <linux/usb/video.h>
+ 
+ #include "u_uvc.h"
++#include "uvc_configfs.h"
+ 
+ USB_GADGET_COMPOSITE_OPTIONS();
+ 
+@@ -84,8 +85,6 @@ static struct usb_device_descriptor webcam_device_descriptor = {
+ 	.bNumConfigurations	= 0, /* dynamic */
+ };
+ 
+-DECLARE_UVC_HEADER_DESCRIPTOR(1);
+-
+ static const struct UVC_HEADER_DESCRIPTOR(1) uvc_control_header = {
+ 	.bLength		= UVC_DT_HEADER_SIZE(1),
+ 	.bDescriptorType	= USB_DT_CS_INTERFACE,
+@@ -158,43 +157,112 @@ static const struct UVC_INPUT_HEADER_DESCRIPTOR(1, 2) uvc_input_header = {
+ 	.bmaControls[1][0]	= 4,
+ };
+ 
+-static const struct uvc_format_uncompressed uvc_format_yuv = {
+-	.bLength		= UVC_DT_FORMAT_UNCOMPRESSED_SIZE,
+-	.bDescriptorType	= USB_DT_CS_INTERFACE,
+-	.bDescriptorSubType	= UVC_VS_FORMAT_UNCOMPRESSED,
+-	.bFormatIndex		= 1,
+-	.bNumFrameDescriptors	= 2,
+-	.guidFormat		=
+-		{ 'Y',  'U',  'Y',  '2', 0x00, 0x00, 0x10, 0x00,
+-		 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71},
+-	.bBitsPerPixel		= 16,
+-	.bDefaultFrameIndex	= 1,
+-	.bAspectRatioX		= 0,
+-	.bAspectRatioY		= 0,
+-	.bmInterlaceFlags	= 0,
+-	.bCopyProtect		= 0,
++static const struct uvcg_color_matching uvcg_color_matching = {
++	.desc = {
++		.bLength		= UVC_DT_COLOR_MATCHING_SIZE,
++		.bDescriptorType	= USB_DT_CS_INTERFACE,
++		.bDescriptorSubType	= UVC_VS_COLORFORMAT,
++		.bColorPrimaries	= 1,
++		.bTransferCharacteristics	= 1,
++		.bMatrixCoefficients	= 4,
++	},
++};
++
++static struct uvcg_uncompressed uvcg_format_yuv = {
++	.fmt = {
++		.type			= UVCG_UNCOMPRESSED,
++		/* add to .frames and fill .num_frames at runtime */
++		.color_matching		= (struct uvcg_color_matching *)&uvcg_color_matching,
++	},
++	.desc = {
++		.bLength		= UVC_DT_FORMAT_UNCOMPRESSED_SIZE,
++		.bDescriptorType	= USB_DT_CS_INTERFACE,
++		.bDescriptorSubType	= UVC_VS_FORMAT_UNCOMPRESSED,
++		.bFormatIndex		= 1,
++		.bNumFrameDescriptors	= 2,
++		.guidFormat		= {
++			'Y',  'U',  'Y',  '2', 0x00, 0x00, 0x10, 0x00,
++			 0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71
++		},
++		.bBitsPerPixel		= 16,
++		.bDefaultFrameIndex	= 1,
++		.bAspectRatioX		= 0,
++		.bAspectRatioY		= 0,
++		.bmInterlaceFlags	= 0,
++		.bCopyProtect		= 0,
++	},
++};
++
++static struct uvcg_format_ptr uvcg_format_ptr_yuv = {
++	.fmt = &uvcg_format_yuv.fmt,
+ };
+ 
+ DECLARE_UVC_FRAME_UNCOMPRESSED(1);
+ DECLARE_UVC_FRAME_UNCOMPRESSED(3);
+ 
++#define UVCG_WIDTH_360P			640
++#define UVCG_HEIGHT_360P		360
++#define UVCG_MIN_BITRATE_360P		18432000
++#define UVCG_MAX_BITRATE_360P		55296000
++#define UVCG_MAX_VIDEO_FB_SZ_360P	460800
++#define UVCG_FRM_INTERV_0_360P		666666
++#define UVCG_FRM_INTERV_1_360P		1000000
++#define UVCG_FRM_INTERV_2_360P		5000000
++#define UVCG_DEFAULT_FRM_INTERV_360P	UVCG_FRM_INTERV_0_360P
++
+ static const struct UVC_FRAME_UNCOMPRESSED(3) uvc_frame_yuv_360p = {
+ 	.bLength		= UVC_DT_FRAME_UNCOMPRESSED_SIZE(3),
+ 	.bDescriptorType	= USB_DT_CS_INTERFACE,
+ 	.bDescriptorSubType	= UVC_VS_FRAME_UNCOMPRESSED,
+ 	.bFrameIndex		= 1,
+ 	.bmCapabilities		= 0,
+-	.wWidth			= cpu_to_le16(640),
+-	.wHeight		= cpu_to_le16(360),
+-	.dwMinBitRate		= cpu_to_le32(18432000),
+-	.dwMaxBitRate		= cpu_to_le32(55296000),
+-	.dwMaxVideoFrameBufferSize	= cpu_to_le32(460800),
+-	.dwDefaultFrameInterval	= cpu_to_le32(666666),
++	.wWidth			= cpu_to_le16(UVCG_WIDTH_360P),
++	.wHeight		= cpu_to_le16(UVCG_HEIGHT_360P),
++	.dwMinBitRate		= cpu_to_le32(UVCG_MIN_BITRATE_360P),
++	.dwMaxBitRate		= cpu_to_le32(UVCG_MAX_BITRATE_360P),
++	.dwMaxVideoFrameBufferSize	= cpu_to_le32(UVCG_MAX_VIDEO_FB_SZ_360P),
++	.dwDefaultFrameInterval	= cpu_to_le32(UVCG_DEFAULT_FRM_INTERV_360P),
+ 	.bFrameIntervalType	= 3,
+-	.dwFrameInterval[0]	= cpu_to_le32(666666),
+-	.dwFrameInterval[1]	= cpu_to_le32(1000000),
+-	.dwFrameInterval[2]	= cpu_to_le32(5000000),
++	.dwFrameInterval[0]	= cpu_to_le32(UVCG_FRM_INTERV_0_360P),
++	.dwFrameInterval[1]	= cpu_to_le32(UVCG_FRM_INTERV_1_360P),
++	.dwFrameInterval[2]	= cpu_to_le32(UVCG_FRM_INTERV_2_360P),
++};
++
++static u32 uvcg_frame_yuv_360p_dw_frame_interval[] = {
++	[0] = UVCG_FRM_INTERV_0_360P,
++	[1] = UVCG_FRM_INTERV_1_360P,
++	[2] = UVCG_FRM_INTERV_2_360P,
++};
++
++static const struct uvcg_frame uvcg_frame_yuv_360p = {
++	.fmt_type		= UVCG_UNCOMPRESSED,
++	.frame = {
++		.b_length			= UVC_DT_FRAME_UNCOMPRESSED_SIZE(3),
++		.b_descriptor_type		= USB_DT_CS_INTERFACE,
++		.b_descriptor_subtype		= UVC_VS_FRAME_UNCOMPRESSED,
++		.b_frame_index			= 1,
++		.bm_capabilities		= 0,
++		.w_width			= UVCG_WIDTH_360P,
++		.w_height			= UVCG_HEIGHT_360P,
++		.dw_min_bit_rate		= UVCG_MIN_BITRATE_360P,
++		.dw_max_bit_rate		= UVCG_MAX_BITRATE_360P,
++		.dw_max_video_frame_buffer_size	= UVCG_MAX_VIDEO_FB_SZ_360P,
++		.dw_default_frame_interval	= UVCG_DEFAULT_FRM_INTERV_360P,
++		.b_frame_interval_type		= 3,
++	},
++	.dw_frame_interval	= uvcg_frame_yuv_360p_dw_frame_interval,
++};
++
++static struct uvcg_frame_ptr uvcg_frame_ptr_yuv_360p = {
++	.frm = (struct uvcg_frame *)&uvcg_frame_yuv_360p,
+ };
++#define UVCG_WIDTH_720P			1280
++#define UVCG_HEIGHT_720P		720
++#define UVCG_MIN_BITRATE_720P		29491200
++#define UVCG_MAX_BITRATE_720P		29491200
++#define UVCG_MAX_VIDEO_FB_SZ_720P	1843200
++#define UVCG_FRM_INTERV_0_720P		5000000
++#define UVCG_DEFAULT_FRM_INTERV_720P	UVCG_FRM_INTERV_0_720P
+ 
+ static const struct UVC_FRAME_UNCOMPRESSED(1) uvc_frame_yuv_720p = {
+ 	.bLength		= UVC_DT_FRAME_UNCOMPRESSED_SIZE(1),
+@@ -202,28 +270,66 @@ static const struct UVC_FRAME_UNCOMPRESSED(1) uvc_frame_yuv_720p = {
+ 	.bDescriptorSubType	= UVC_VS_FRAME_UNCOMPRESSED,
+ 	.bFrameIndex		= 2,
+ 	.bmCapabilities		= 0,
+-	.wWidth			= cpu_to_le16(1280),
+-	.wHeight		= cpu_to_le16(720),
+-	.dwMinBitRate		= cpu_to_le32(29491200),
+-	.dwMaxBitRate		= cpu_to_le32(29491200),
+-	.dwMaxVideoFrameBufferSize	= cpu_to_le32(1843200),
+-	.dwDefaultFrameInterval	= cpu_to_le32(5000000),
++	.wWidth			= cpu_to_le16(UVCG_WIDTH_720P),
++	.wHeight		= cpu_to_le16(UVCG_HEIGHT_720P),
++	.dwMinBitRate		= cpu_to_le32(UVCG_MIN_BITRATE_720P),
++	.dwMaxBitRate		= cpu_to_le32(UVCG_MAX_BITRATE_720P),
++	.dwMaxVideoFrameBufferSize	= cpu_to_le32(UVCG_MAX_VIDEO_FB_SZ_720P),
++	.dwDefaultFrameInterval	= cpu_to_le32(UVCG_DEFAULT_FRM_INTERV_720P),
+ 	.bFrameIntervalType	= 1,
+-	.dwFrameInterval[0]	= cpu_to_le32(5000000),
++	.dwFrameInterval[0]	= cpu_to_le32(UVCG_FRM_INTERV_0_720P),
+ };
+ 
+-static const struct uvc_format_mjpeg uvc_format_mjpg = {
+-	.bLength		= UVC_DT_FORMAT_MJPEG_SIZE,
+-	.bDescriptorType	= USB_DT_CS_INTERFACE,
+-	.bDescriptorSubType	= UVC_VS_FORMAT_MJPEG,
+-	.bFormatIndex		= 2,
+-	.bNumFrameDescriptors	= 2,
+-	.bmFlags		= 0,
+-	.bDefaultFrameIndex	= 1,
+-	.bAspectRatioX		= 0,
+-	.bAspectRatioY		= 0,
+-	.bmInterlaceFlags	= 0,
+-	.bCopyProtect		= 0,
++static u32 uvcg_frame_yuv_720p_dw_frame_interval[] = {
++	[0] = UVCG_FRM_INTERV_0_720P,
++};
++
++static const struct uvcg_frame uvcg_frame_yuv_720p = {
++	.fmt_type		= UVCG_UNCOMPRESSED,
++	.frame = {
++		.b_length			= UVC_DT_FRAME_UNCOMPRESSED_SIZE(1),
++		.b_descriptor_type		= USB_DT_CS_INTERFACE,
++		.b_descriptor_subtype		= UVC_VS_FRAME_UNCOMPRESSED,
++		.b_frame_index			= 2,
++		.bm_capabilities		= 0,
++		.w_width			= UVCG_WIDTH_720P,
++		.w_height			= UVCG_HEIGHT_720P,
++		.dw_min_bit_rate		= UVCG_MIN_BITRATE_720P,
++		.dw_max_bit_rate		= UVCG_MAX_BITRATE_720P,
++		.dw_max_video_frame_buffer_size	= UVCG_MAX_VIDEO_FB_SZ_720P,
++		.dw_default_frame_interval	= UVCG_DEFAULT_FRM_INTERV_720P,
++		.b_frame_interval_type		= 1,
++	},
++	.dw_frame_interval	= uvcg_frame_yuv_720p_dw_frame_interval,
++};
++
++static struct uvcg_frame_ptr uvcg_frame_ptr_yuv_720p = {
++	.frm = (struct uvcg_frame *)&uvcg_frame_yuv_720p,
++};
++
++static struct uvcg_mjpeg uvcg_format_mjpeg = {
++	.fmt = {
++		.type			= UVCG_MJPEG,
++		/* add to .frames and fill .num_frames at runtime */
++		.color_matching		= (struct uvcg_color_matching *)&uvcg_color_matching,
++	},
++	.desc = {
++		.bLength		= UVC_DT_FORMAT_MJPEG_SIZE,
++		.bDescriptorType	= USB_DT_CS_INTERFACE,
++		.bDescriptorSubType	= UVC_VS_FORMAT_MJPEG,
++		.bFormatIndex		= 2,
++		.bNumFrameDescriptors	= 2,
++		.bmFlags		= 0,
++		.bDefaultFrameIndex	= 1,
++		.bAspectRatioX		= 0,
++		.bAspectRatioY		= 0,
++		.bmInterlaceFlags	= 0,
++		.bCopyProtect		= 0,
++	},
++};
++
++static struct uvcg_format_ptr uvcg_format_ptr_mjpeg = {
++	.fmt = &uvcg_format_mjpeg.fmt,
+ };
+ 
+ DECLARE_UVC_FRAME_MJPEG(1);
+@@ -235,16 +341,45 @@ static const struct UVC_FRAME_MJPEG(3) uvc_frame_mjpg_360p = {
+ 	.bDescriptorSubType	= UVC_VS_FRAME_MJPEG,
+ 	.bFrameIndex		= 1,
+ 	.bmCapabilities		= 0,
+-	.wWidth			= cpu_to_le16(640),
+-	.wHeight		= cpu_to_le16(360),
+-	.dwMinBitRate		= cpu_to_le32(18432000),
+-	.dwMaxBitRate		= cpu_to_le32(55296000),
+-	.dwMaxVideoFrameBufferSize	= cpu_to_le32(460800),
+-	.dwDefaultFrameInterval	= cpu_to_le32(666666),
++	.wWidth			= cpu_to_le16(UVCG_WIDTH_360P),
++	.wHeight		= cpu_to_le16(UVCG_HEIGHT_360P),
++	.dwMinBitRate		= cpu_to_le32(UVCG_MIN_BITRATE_360P),
++	.dwMaxBitRate		= cpu_to_le32(UVCG_MAX_BITRATE_360P),
++	.dwMaxVideoFrameBufferSize	= cpu_to_le32(UVCG_MAX_VIDEO_FB_SZ_360P),
++	.dwDefaultFrameInterval	= cpu_to_le32(UVCG_DEFAULT_FRM_INTERV_360P),
+ 	.bFrameIntervalType	= 3,
+-	.dwFrameInterval[0]	= cpu_to_le32(666666),
+-	.dwFrameInterval[1]	= cpu_to_le32(1000000),
+-	.dwFrameInterval[2]	= cpu_to_le32(5000000),
++	.dwFrameInterval[0]	= cpu_to_le32(UVCG_FRM_INTERV_0_360P),
++	.dwFrameInterval[1]	= cpu_to_le32(UVCG_FRM_INTERV_1_360P),
++	.dwFrameInterval[2]	= cpu_to_le32(UVCG_FRM_INTERV_2_360P),
++};
++
++static u32 uvcg_frame_mjpeg_360p_dw_frame_interval[] = {
++	[0] = UVCG_FRM_INTERV_0_360P,
++	[1] = UVCG_FRM_INTERV_1_360P,
++	[2] = UVCG_FRM_INTERV_2_360P,
++};
++
++static const struct uvcg_frame uvcg_frame_mjpeg_360p = {
++	.fmt_type		= UVCG_MJPEG,
++	.frame = {
++		.b_length			= UVC_DT_FRAME_MJPEG_SIZE(3),
++		.b_descriptor_type		= USB_DT_CS_INTERFACE,
++		.b_descriptor_subtype		= UVC_VS_FRAME_MJPEG,
++		.b_frame_index			= 1,
++		.bm_capabilities		= 0,
++		.w_width			= UVCG_WIDTH_360P,
++		.w_height			= UVCG_HEIGHT_360P,
++		.dw_min_bit_rate		= UVCG_MIN_BITRATE_360P,
++		.dw_max_bit_rate		= UVCG_MAX_BITRATE_360P,
++		.dw_max_video_frame_buffer_size	= UVCG_MAX_VIDEO_FB_SZ_360P,
++		.dw_default_frame_interval	= UVCG_DEFAULT_FRM_INTERV_360P,
++		.b_frame_interval_type		= 3,
++	},
++	.dw_frame_interval	= uvcg_frame_mjpeg_360p_dw_frame_interval,
++};
++
++static struct uvcg_frame_ptr uvcg_frame_ptr_mjpeg_360p = {
++	.frm = (struct uvcg_frame *)&uvcg_frame_mjpeg_360p,
+ };
+ 
+ static const struct UVC_FRAME_MJPEG(1) uvc_frame_mjpg_720p = {
+@@ -253,23 +388,44 @@ static const struct UVC_FRAME_MJPEG(1) uvc_frame_mjpg_720p = {
+ 	.bDescriptorSubType	= UVC_VS_FRAME_MJPEG,
+ 	.bFrameIndex		= 2,
+ 	.bmCapabilities		= 0,
+-	.wWidth			= cpu_to_le16(1280),
+-	.wHeight		= cpu_to_le16(720),
+-	.dwMinBitRate		= cpu_to_le32(29491200),
+-	.dwMaxBitRate		= cpu_to_le32(29491200),
+-	.dwMaxVideoFrameBufferSize	= cpu_to_le32(1843200),
+-	.dwDefaultFrameInterval	= cpu_to_le32(5000000),
++	.wWidth			= cpu_to_le16(UVCG_WIDTH_720P),
++	.wHeight		= cpu_to_le16(UVCG_HEIGHT_720P),
++	.dwMinBitRate		= cpu_to_le32(UVCG_MIN_BITRATE_720P),
++	.dwMaxBitRate		= cpu_to_le32(UVCG_MAX_BITRATE_720P),
++	.dwMaxVideoFrameBufferSize	= cpu_to_le32(UVCG_MAX_VIDEO_FB_SZ_720P),
++	.dwDefaultFrameInterval	= cpu_to_le32(UVCG_DEFAULT_FRM_INTERV_720P),
+ 	.bFrameIntervalType	= 1,
+-	.dwFrameInterval[0]	= cpu_to_le32(5000000),
++	.dwFrameInterval[0]	= cpu_to_le32(UVCG_FRM_INTERV_0_720P),
+ };
+ 
+-static const struct uvc_color_matching_descriptor uvc_color_matching = {
+-	.bLength		= UVC_DT_COLOR_MATCHING_SIZE,
+-	.bDescriptorType	= USB_DT_CS_INTERFACE,
+-	.bDescriptorSubType	= UVC_VS_COLORFORMAT,
+-	.bColorPrimaries	= 1,
+-	.bTransferCharacteristics	= 1,
+-	.bMatrixCoefficients	= 4,
++static u32 uvcg_frame_mjpeg_720p_dw_frame_interval[] = {
++	[0] = UVCG_FRM_INTERV_0_720P,
++};
++
++static const struct uvcg_frame uvcg_frame_mjpeg_720p = {
++	.fmt_type		= UVCG_MJPEG,
++	.frame = {
++		.b_length			= UVC_DT_FRAME_MJPEG_SIZE(1),
++		.b_descriptor_type		= USB_DT_CS_INTERFACE,
++		.b_descriptor_subtype		= UVC_VS_FRAME_MJPEG,
++		.b_frame_index			= 2,
++		.bm_capabilities		= 0,
++		.w_width			= UVCG_WIDTH_720P,
++		.w_height			= UVCG_HEIGHT_720P,
++		.dw_min_bit_rate		= UVCG_MIN_BITRATE_720P,
++		.dw_max_bit_rate		= UVCG_MAX_BITRATE_720P,
++		.dw_max_video_frame_buffer_size	= UVCG_MAX_VIDEO_FB_SZ_720P,
++		.dw_default_frame_interval	= UVCG_DEFAULT_FRM_INTERV_720P,
++		.b_frame_interval_type		= 1,
++	},
++	.dw_frame_interval	= uvcg_frame_mjpeg_720p_dw_frame_interval,
++};
++
++static struct uvcg_frame_ptr uvcg_frame_ptr_mjpeg_720p = {
++	.frm = (struct uvcg_frame *)&uvcg_frame_mjpeg_720p,
++};
++
++static struct uvcg_streaming_header uvcg_streaming_header = {
+ };
+ 
+ static const struct uvc_descriptor_header * const uvc_fs_control_cls[] = {
+@@ -290,40 +446,40 @@ static const struct uvc_descriptor_header * const uvc_ss_control_cls[] = {
+ 
+ static const struct uvc_descriptor_header * const uvc_fs_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_input_header,
+-	(const struct uvc_descriptor_header *) &uvc_format_yuv,
++	(const struct uvc_descriptor_header *) &uvcg_format_yuv.desc,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
+-	(const struct uvc_descriptor_header *) &uvc_color_matching,
+-	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
++	(const struct uvc_descriptor_header *) &uvcg_color_matching.desc,
++	(const struct uvc_descriptor_header *) &uvcg_format_mjpeg.desc,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+-	(const struct uvc_descriptor_header *) &uvc_color_matching,
++	(const struct uvc_descriptor_header *) &uvcg_color_matching.desc,
+ 	NULL,
+ };
+ 
+ static const struct uvc_descriptor_header * const uvc_hs_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_input_header,
+-	(const struct uvc_descriptor_header *) &uvc_format_yuv,
++	(const struct uvc_descriptor_header *) &uvcg_format_yuv.desc,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
+-	(const struct uvc_descriptor_header *) &uvc_color_matching,
+-	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
++	(const struct uvc_descriptor_header *) &uvcg_color_matching.desc,
++	(const struct uvc_descriptor_header *) &uvcg_format_mjpeg.desc,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+-	(const struct uvc_descriptor_header *) &uvc_color_matching,
++	(const struct uvc_descriptor_header *) &uvcg_color_matching.desc,
+ 	NULL,
+ };
+ 
+ static const struct uvc_descriptor_header * const uvc_ss_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_input_header,
+-	(const struct uvc_descriptor_header *) &uvc_format_yuv,
++	(const struct uvc_descriptor_header *) &uvcg_format_yuv.desc,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
+-	(const struct uvc_descriptor_header *) &uvc_color_matching,
+-	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
++	(const struct uvc_descriptor_header *) &uvcg_color_matching.desc,
++	(const struct uvc_descriptor_header *) &uvcg_format_mjpeg.desc,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+-	(const struct uvc_descriptor_header *) &uvc_color_matching,
++	(const struct uvc_descriptor_header *) &uvcg_color_matching.desc,
+ 	NULL,
+ };
+ 
+@@ -387,6 +543,23 @@ webcam_bind(struct usb_composite_dev *cdev)
+ 	uvc_opts->hs_streaming = uvc_hs_streaming_cls;
+ 	uvc_opts->ss_streaming = uvc_ss_streaming_cls;
+ 
++	INIT_LIST_HEAD(&uvcg_format_yuv.fmt.frames);
++	list_add_tail(&uvcg_frame_ptr_yuv_360p.entry, &uvcg_format_yuv.fmt.frames);
++	list_add_tail(&uvcg_frame_ptr_yuv_720p.entry, &uvcg_format_yuv.fmt.frames);
++	uvcg_format_yuv.fmt.num_frames = 2;
++
++	INIT_LIST_HEAD(&uvcg_format_mjpeg.fmt.frames);
++	list_add_tail(&uvcg_frame_ptr_mjpeg_360p.entry, &uvcg_format_mjpeg.fmt.frames);
++	list_add_tail(&uvcg_frame_ptr_mjpeg_720p.entry, &uvcg_format_mjpeg.fmt.frames);
++	uvcg_format_mjpeg.fmt.num_frames = 2;
++
++	INIT_LIST_HEAD(&uvcg_streaming_header.formats);
++	list_add_tail(&uvcg_format_ptr_yuv.entry, &uvcg_streaming_header.formats);
++	list_add_tail(&uvcg_format_ptr_mjpeg.entry, &uvcg_streaming_header.formats);
++	uvcg_streaming_header.num_fmt = 2;
++
++	uvc_opts->header = &uvcg_streaming_header;
++
+ 	/* Allocate string descriptor numbers ... note that string contents
+ 	 * can be overridden by the composite_dev glue.
+ 	 */
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index bbdf1b0b7be11..3252e3d2d79cd 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -7,6 +7,7 @@
+  *  Chunfeng Yun <chunfeng.yun@mediatek.com>
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/iopoll.h>
+ #include <linux/kernel.h>
+@@ -73,6 +74,9 @@
+ #define FRMCNT_LEV1_RANG	(0x12b << 8)
+ #define FRMCNT_LEV1_RANG_MASK	GENMASK(19, 8)
+ 
++#define HSCH_CFG1		0x960
++#define SCH3_RXFIFO_DEPTH_MASK	GENMASK(21, 20)
++
+ #define SS_GEN2_EOF_CFG		0x990
+ #define SSG2EOF_OFFSET		0x3c
+ 
+@@ -114,6 +118,8 @@
+ #define SSC_IP_SLEEP_EN	BIT(4)
+ #define SSC_SPM_INT_EN		BIT(1)
+ 
++#define SCH_FIFO_TO_KB(x)	((x) >> 10)
++
+ enum ssusb_uwk_vers {
+ 	SSUSB_UWK_V1 = 1,
+ 	SSUSB_UWK_V2,
+@@ -165,6 +171,35 @@ static void xhci_mtk_set_frame_interval(struct xhci_hcd_mtk *mtk)
+ 	writel(value, hcd->regs + SS_GEN2_EOF_CFG);
+ }
+ 
++/*
++ * workaround: usb3.2 gen1 isoc rx hw issue
++ * host sends out an unexpected ACK after the device finishes a burst
++ * transfer with a short packet.
++ */
++static void xhci_mtk_rxfifo_depth_set(struct xhci_hcd_mtk *mtk)
++{
++	struct usb_hcd *hcd = mtk->hcd;
++	u32 value;
++
++	if (!mtk->rxfifo_depth)
++		return;
++
++	value = readl(hcd->regs + HSCH_CFG1);
++	value &= ~SCH3_RXFIFO_DEPTH_MASK;
++	value |= FIELD_PREP(SCH3_RXFIFO_DEPTH_MASK,
++			    SCH_FIFO_TO_KB(mtk->rxfifo_depth) - 1);
++	writel(value, hcd->regs + HSCH_CFG1);
++}
++
++static void xhci_mtk_init_quirk(struct xhci_hcd_mtk *mtk)
++{
++	/* workaround only for mt8195 */
++	xhci_mtk_set_frame_interval(mtk);
++
++	/* workaround for SoCs using SSUSB from before about IPM v1.6.0 */
++	xhci_mtk_rxfifo_depth_set(mtk);
++}
++
+ static int xhci_mtk_host_enable(struct xhci_hcd_mtk *mtk)
+ {
+ 	struct mu3c_ippc_regs __iomem *ippc = mtk->ippc_regs;
+@@ -448,8 +483,7 @@ static int xhci_mtk_setup(struct usb_hcd *hcd)
+ 		if (ret)
+ 			return ret;
+ 
+-		/* workaround only for mt8195 */
+-		xhci_mtk_set_frame_interval(mtk);
++		xhci_mtk_init_quirk(mtk);
+ 	}
+ 
+ 	ret = xhci_gen_setup(hcd, xhci_mtk_quirks);
+@@ -527,6 +561,8 @@ static int xhci_mtk_probe(struct platform_device *pdev)
+ 	of_property_read_u32(node, "mediatek,u2p-dis-msk",
+ 			     &mtk->u2p_dis_msk);
+ 
++	of_property_read_u32(node, "rx-fifo-depth", &mtk->rxfifo_depth);
++
+ 	ret = usb_wakeup_of_property_parse(mtk, node);
+ 	if (ret) {
+ 		dev_err(dev, "failed to parse uwk property\n");
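
The HSCH_CFG1 update above is the kernel's standard read-modify-write bitfield idiom: clear the field with its GENMASK()-derived mask, then insert the new value with FIELD_PREP(). Below is a minimal, self-contained userspace sketch of that idiom; the GENMASK/FIELD_PREP definitions are simplified stand-ins for the real <linux/bits.h> and <linux/bitfield.h> macros, and the register value is invented.

	#include <stdint.h>
	#include <stdio.h>

	/* toy stand-ins for the kernel macros */
	#define GENMASK(h, l)		((~0u << (l)) & (~0u >> (31 - (h))))
	#define FIELD_PREP(m, v)	(((v) << __builtin_ctz(m)) & (m))

	#define SCH3_RXFIFO_DEPTH_MASK	GENMASK(21, 20)
	#define SCH_FIFO_TO_KB(x)	((x) >> 10)

	int main(void)
	{
		uint32_t reg = 0xdeadbeef;	/* pretend HSCH_CFG1 readout */
		uint32_t depth = 4096;		/* bytes, from the DT property */

		reg &= ~SCH3_RXFIFO_DEPTH_MASK;	/* clear the 2-bit field */
		reg |= FIELD_PREP(SCH3_RXFIFO_DEPTH_MASK,
				  SCH_FIFO_TO_KB(depth) - 1);

		/* 4 KB of FIFO encodes as field value 3 */
		printf("field = %u\n",
		       (reg & SCH3_RXFIFO_DEPTH_MASK) >> 20);
		return 0;
	}
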
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index 39f7ae7d30871..f5e2bd66bb1b2 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -171,6 +171,8 @@ struct xhci_hcd_mtk {
+ 	struct regmap *uwk;
+ 	u32 uwk_reg_base;
+ 	u32 uwk_vers;
++	/* quirk */
++	u32 rxfifo_depth;
+ };
+ 
+ static inline struct xhci_hcd_mtk *hcd_to_mtk(struct usb_hcd *hcd)
+diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
+index 9ca9305243fe5..4e30de4db1c0a 100644
+--- a/drivers/usb/mon/mon_bin.c
++++ b/drivers/usb/mon/mon_bin.c
+@@ -1250,14 +1250,19 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf)
+ 	struct mon_reader_bin *rp = vmf->vma->vm_private_data;
+ 	unsigned long offset, chunk_idx;
+ 	struct page *pageptr;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&rp->b_lock, flags);
+ 	offset = vmf->pgoff << PAGE_SHIFT;
+-	if (offset >= rp->b_size)
++	if (offset >= rp->b_size) {
++		spin_unlock_irqrestore(&rp->b_lock, flags);
+ 		return VM_FAULT_SIGBUS;
++	}
+ 	chunk_idx = offset / CHUNK_SIZE;
+ 	pageptr = rp->b_vec[chunk_idx].pg;
+ 	get_page(pageptr);
+ 	vmf->page = pageptr;
++	spin_unlock_irqrestore(&rp->b_lock, flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index acd46b72899e9..920a32cd094d6 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -388,8 +388,7 @@ static void __mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool disconnect)
+ 
+ static bool mxs_phy_is_otg_host(struct mxs_phy *mxs_phy)
+ {
+-	return IS_ENABLED(CONFIG_USB_OTG) &&
+-		mxs_phy->phy.last_event == USB_EVENT_ID;
++	return mxs_phy->phy.last_event == USB_EVENT_ID;
+ }
+ 
+ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 16a670828dde1..2da19feacd915 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -263,11 +263,13 @@ static void typec_altmode_put_partner(struct altmode *altmode)
+ {
+ 	struct altmode *partner = altmode->partner;
+ 	struct typec_altmode *adev;
++	struct typec_altmode *partner_adev;
+ 
+ 	if (!partner)
+ 		return;
+ 
+ 	adev = &altmode->adev;
++	partner_adev = &partner->adev;
+ 
+ 	if (is_typec_plug(adev->dev.parent)) {
+ 		struct typec_plug *plug = to_typec_plug(adev->dev.parent);
+@@ -276,7 +278,7 @@ static void typec_altmode_put_partner(struct altmode *altmode)
+ 	} else {
+ 		partner->partner = NULL;
+ 	}
+-	put_device(&adev->dev);
++	put_device(&partner_adev->dev);
+ }
+ 
+ /**
+diff --git a/drivers/vdpa/alibaba/eni_vdpa.c b/drivers/vdpa/alibaba/eni_vdpa.c
+index 5a09a09cca709..cce3d1837104c 100644
+--- a/drivers/vdpa/alibaba/eni_vdpa.c
++++ b/drivers/vdpa/alibaba/eni_vdpa.c
+@@ -497,7 +497,7 @@ static int eni_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (!eni_vdpa->vring) {
+ 		ret = -ENOMEM;
+ 		ENI_ERR(pdev, "failed to allocate virtqueues\n");
+-		goto err;
++		goto err_remove_vp_legacy;
+ 	}
+ 
+ 	for (i = 0; i < eni_vdpa->queues; i++) {
+@@ -509,11 +509,13 @@ static int eni_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	ret = vdpa_register_device(&eni_vdpa->vdpa, eni_vdpa->queues);
+ 	if (ret) {
+ 		ENI_ERR(pdev, "failed to register to vdpa bus\n");
+-		goto err;
++		goto err_remove_vp_legacy;
+ 	}
+ 
+ 	return 0;
+ 
++err_remove_vp_legacy:
++	vp_legacy_remove(&eni_vdpa->ldev);
+ err:
+ 	put_device(&eni_vdpa->vdpa.dev);
+ 	return ret;
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+index b2f9778c8366e..4d27465c8f1a8 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+@@ -694,6 +694,7 @@ static ssize_t hisi_acc_vf_resume_write(struct file *filp, const char __user *bu
+ 					size_t len, loff_t *pos)
+ {
+ 	struct hisi_acc_vf_migration_file *migf = filp->private_data;
++	u8 *vf_data = (u8 *)&migf->vf_data;
+ 	loff_t requested_length;
+ 	ssize_t done = 0;
+ 	int ret;
+@@ -715,7 +716,7 @@ static ssize_t hisi_acc_vf_resume_write(struct file *filp, const char __user *bu
+ 		goto out_unlock;
+ 	}
+ 
+-	ret = copy_from_user(&migf->vf_data, buf, len);
++	ret = copy_from_user(vf_data + *pos, buf, len);
+ 	if (ret) {
+ 		done = -EFAULT;
+ 		goto out_unlock;
+@@ -835,7 +836,9 @@ static ssize_t hisi_acc_vf_save_read(struct file *filp, char __user *buf, size_t
+ 
+ 	len = min_t(size_t, migf->total_length - *pos, len);
+ 	if (len) {
+-		ret = copy_to_user(buf, &migf->vf_data, len);
++		u8 *vf_data = (u8 *)&migf->vf_data;
++
++		ret = copy_to_user(buf, vf_data + *pos, len);
+ 		if (ret) {
+ 			done = -EFAULT;
+ 			goto out_unlock;
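
The hisi_acc fix above makes the resume write and save read honour *pos, so a sequence of calls walks through the migration state blob instead of hitting offset zero every time. A hedged userspace sketch of the same pattern, with memcpy() standing in for copy_from_user() and every name hypothetical:

	#include <stdio.h>
	#include <string.h>

	struct migration_file {
		char vf_data[64];	/* stand-in for the device state blob */
	};

	static long resume_write(struct migration_file *migf, const char *buf,
				 size_t len, long *pos)
	{
		if ((size_t)*pos + len > sizeof(migf->vf_data))
			return -1;	/* would run past the blob */
		/* the fix: copy to vf_data + *pos, not always to offset 0 */
		memcpy((char *)&migf->vf_data + *pos, buf, len);
		*pos += len;
		return (long)len;
	}

	int main(void)
	{
		struct migration_file migf = { { 0 } };
		long pos = 0;

		resume_write(&migf, "first", 5, &pos);
		resume_write(&migf, "second", 6, &pos);	/* lands at offset 5 */
		printf("%.11s\n", migf.vf_data);	/* prints firstsecond */
		return 0;
	}
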
+diff --git a/drivers/vfio/pci/pds/dirty.c b/drivers/vfio/pci/pds/dirty.c
+index c937aa6f39546..27607d7b9030a 100644
+--- a/drivers/vfio/pci/pds/dirty.c
++++ b/drivers/vfio/pci/pds/dirty.c
+@@ -478,8 +478,7 @@ static int pds_vfio_dirty_sync(struct pds_vfio_pci_device *pds_vfio,
+ 		pds_vfio->vf_id, iova, length, pds_vfio->dirty.region_page_size,
+ 		pages, bitmap_size);
+ 
+-	if (!length || ((dirty->region_start + iova + length) >
+-			(dirty->region_start + dirty->region_size))) {
++	if (!length || ((iova - dirty->region_start + length) > dirty->region_size)) {
+ 		dev_err(dev, "Invalid iova 0x%lx and/or length 0x%lx to sync\n",
+ 			iova, length);
+ 		return -EINVAL;
+@@ -496,7 +495,8 @@ static int pds_vfio_dirty_sync(struct pds_vfio_pci_device *pds_vfio,
+ 		return -EINVAL;
+ 	}
+ 
+-	bmp_offset = DIV_ROUND_UP(iova / dirty->region_page_size, sizeof(u64));
++	bmp_offset = DIV_ROUND_UP((iova - dirty->region_start) /
++				  dirty->region_page_size, sizeof(u64));
+ 
+ 	dev_dbg(dev,
+ 		"Syncing dirty bitmap, iova 0x%lx length 0x%lx, bmp_offset %llu bmp_bytes %llu\n",
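
To put numbers on the pds_vfio fix: assuming region_start = 0x100000000, region_page_size = 4096 and a sync starting exactly at iova = region_start, the old code computed bmp_offset = DIV_ROUND_UP(0x100000000 / 4096, 8) = 131072 u64 words into a bitmap whose first page actually lives at word 0; the region-relative form gives DIV_ROUND_UP(0 / 4096, 8) = 0. The reworked range check in the first hunk is the same correction: the old expression added region_start to both sides, effectively comparing the absolute iova plus length against the bare region size, which rejects valid ranges whenever the region does not start at zero.
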
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index f75731396b7ef..ec20ecff85c7f 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -449,6 +449,7 @@ static struct virtio_transport vhost_transport = {
+ 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
+ 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
+ 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
++		.notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
+ 
+ 		.read_skb = virtio_transport_read_skb,
+ 	},
+diff --git a/drivers/video/fbdev/acornfb.c b/drivers/video/fbdev/acornfb.c
+index 163d2c9f951c3..f0600f6ca2548 100644
+--- a/drivers/video/fbdev/acornfb.c
++++ b/drivers/video/fbdev/acornfb.c
+@@ -605,7 +605,7 @@ acornfb_pan_display(struct fb_var_screeninfo *var, struct fb_info *info)
+ 
+ static const struct fb_ops acornfb_ops = {
+ 	.owner		= THIS_MODULE,
+-	FB_IOMEM_DEFAULT_OPS,
++	FB_DEFAULT_IOMEM_OPS,
+ 	.fb_check_var	= acornfb_check_var,
+ 	.fb_set_par	= acornfb_set_par,
+ 	.fb_setcolreg	= acornfb_setcolreg,
+diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
+index 274f5d0fa2471..1ae1d35a59423 100644
+--- a/drivers/video/fbdev/core/fb_defio.c
++++ b/drivers/video/fbdev/core/fb_defio.c
+@@ -132,11 +132,7 @@ int fb_deferred_io_fsync(struct file *file, loff_t start, loff_t end, int datasy
+ 		return 0;
+ 
+ 	inode_lock(inode);
+-	/* Kill off the delayed work */
+-	cancel_delayed_work_sync(&info->deferred_work);
+-
+-	/* Run it immediately */
+-	schedule_delayed_work(&info->deferred_work, 0);
++	flush_delayed_work(&info->deferred_work);
+ 	inode_unlock(inode);
+ 
+ 	return 0;
+@@ -317,7 +313,7 @@ static void fb_deferred_io_lastclose(struct fb_info *info)
+ 	struct page *page;
+ 	int i;
+ 
+-	cancel_delayed_work_sync(&info->deferred_work);
++	flush_delayed_work(&info->deferred_work);
+ 
+ 	/* clear out the mapping that we setup */
+ 	for (i = 0 ; i < info->fix.smem_len; i += PAGE_SIZE) {
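
The fb_defio change is not just a simplification: flush_delayed_work() forces a pending deferred-I/O flush to run immediately and waits for it to complete, whereas the old cancel-then-reschedule pair returned as soon as the zero-delay work was queued, so fsync() could return before the screen update had actually been written out.
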
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index 84201c9608d36..7042a43b81d85 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -42,6 +42,7 @@
+ #include <video/videomode.h>
+ 
+ #define PCR_TFT		(1 << 31)
++#define PCR_COLOR	(1 << 30)
+ #define PCR_BPIX_8	(3 << 25)
+ #define PCR_BPIX_12	(4 << 25)
+ #define PCR_BPIX_16	(5 << 25)
+@@ -150,6 +151,12 @@ enum imxfb_type {
+ 	IMX21_FB,
+ };
+ 
++enum imxfb_panel_type {
++	PANEL_TYPE_MONOCHROME,
++	PANEL_TYPE_CSTN,
++	PANEL_TYPE_TFT,
++};
++
+ struct imxfb_info {
+ 	struct platform_device  *pdev;
+ 	void __iomem		*regs;
+@@ -157,6 +164,7 @@ struct imxfb_info {
+ 	struct clk		*clk_ahb;
+ 	struct clk		*clk_per;
+ 	enum imxfb_type		devtype;
++	enum imxfb_panel_type	panel_type;
+ 	bool			enabled;
+ 
+ 	/*
+@@ -444,6 +452,13 @@ static int imxfb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ 	if (!is_imx1_fb(fbi) && imxfb_mode->aus_mode)
+ 		fbi->lauscr = LAUSCR_AUS_MODE;
+ 
++	if (imxfb_mode->pcr & PCR_TFT)
++		fbi->panel_type = PANEL_TYPE_TFT;
++	else if (imxfb_mode->pcr & PCR_COLOR)
++		fbi->panel_type = PANEL_TYPE_CSTN;
++	else
++		fbi->panel_type = PANEL_TYPE_MONOCHROME;
++
+ 	/*
+ 	 * Copy the RGB parameters for this display
+ 	 * from the machine specific parameters.
+@@ -596,6 +611,7 @@ static int imxfb_activate_var(struct fb_var_screeninfo *var, struct fb_info *inf
+ {
+ 	struct imxfb_info *fbi = info->par;
+ 	u32 ymax_mask = is_imx1_fb(fbi) ? YMAX_MASK_IMX1 : YMAX_MASK_IMX21;
++	u8 left_margin_low;
+ 
+ 	pr_debug("var: xres=%d hslen=%d lm=%d rm=%d\n",
+ 		var->xres, var->hsync_len,
+@@ -604,6 +620,13 @@ static int imxfb_activate_var(struct fb_var_screeninfo *var, struct fb_info *inf
+ 		var->yres, var->vsync_len,
+ 		var->upper_margin, var->lower_margin);
+ 
++	if (fbi->panel_type == PANEL_TYPE_TFT)
++		left_margin_low = 3;
++	else if (fbi->panel_type == PANEL_TYPE_CSTN)
++		left_margin_low = 2;
++	else
++		left_margin_low = 0;
++
+ #if DEBUG_VAR
+ 	if (var->xres < 16        || var->xres > 1024)
+ 		printk(KERN_ERR "%s: invalid xres %d\n",
+@@ -611,7 +634,7 @@ static int imxfb_activate_var(struct fb_var_screeninfo *var, struct fb_info *inf
+ 	if (var->hsync_len < 1    || var->hsync_len > 64)
+ 		printk(KERN_ERR "%s: invalid hsync_len %d\n",
+ 			info->fix.id, var->hsync_len);
+-	if (var->left_margin < 3  || var->left_margin > 255)
++	if (var->left_margin < left_margin_low  || var->left_margin > 255)
+ 		printk(KERN_ERR "%s: invalid left_margin %d\n",
+ 			info->fix.id, var->left_margin);
+ 	if (var->right_margin < 1 || var->right_margin > 255)
+@@ -637,7 +660,7 @@ static int imxfb_activate_var(struct fb_var_screeninfo *var, struct fb_info *inf
+ 
+ 	writel(HCR_H_WIDTH(var->hsync_len - 1) |
+ 		HCR_H_WAIT_1(var->right_margin - 1) |
+-		HCR_H_WAIT_2(var->left_margin - 3),
++		HCR_H_WAIT_2(var->left_margin - left_margin_low),
+ 		fbi->regs + LCDC_HCR);
+ 
+ 	writel(VCR_V_WIDTH(var->vsync_len) |
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 3f8ef50e32095..104f122e0f273 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -1347,16 +1347,14 @@ static int smtc_set_par(struct fb_info *info)
+ 
+ static const struct fb_ops smtcfb_ops = {
+ 	.owner        = THIS_MODULE,
+-	FB_DEFAULT_IOMEM_OPS,
+ 	.fb_check_var = smtc_check_var,
+ 	.fb_set_par   = smtc_set_par,
+ 	.fb_setcolreg = smtc_setcolreg,
+ 	.fb_blank     = smtc_blank,
+-	.fb_fillrect  = cfb_fillrect,
+-	.fb_imageblit = cfb_imageblit,
+-	.fb_copyarea  = cfb_copyarea,
++	__FB_DEFAULT_IOMEM_OPS_DRAW,
+ 	.fb_read      = smtcfb_read,
+ 	.fb_write     = smtcfb_write,
++	__FB_DEFAULT_IOMEM_OPS_MMAP,
+ };
+ 
+ /*
+diff --git a/drivers/watchdog/bcm2835_wdt.c b/drivers/watchdog/bcm2835_wdt.c
+index 7a855289ff5e6..bb001c5d7f17f 100644
+--- a/drivers/watchdog/bcm2835_wdt.c
++++ b/drivers/watchdog/bcm2835_wdt.c
+@@ -42,6 +42,7 @@
+ 
+ #define SECS_TO_WDOG_TICKS(x) ((x) << 16)
+ #define WDOG_TICKS_TO_SECS(x) ((x) >> 16)
++#define WDOG_TICKS_TO_MSECS(x) ((x) * 1000 >> 16)
+ 
+ struct bcm2835_wdt {
+ 	void __iomem		*base;
+@@ -140,7 +141,7 @@ static struct watchdog_device bcm2835_wdt_wdd = {
+ 	.info =		&bcm2835_wdt_info,
+ 	.ops =		&bcm2835_wdt_ops,
+ 	.min_timeout =	1,
+-	.max_timeout =	WDOG_TICKS_TO_SECS(PM_WDOG_TIME_SET),
++	.max_hw_heartbeat_ms =	WDOG_TICKS_TO_MSECS(PM_WDOG_TIME_SET),
+ 	.timeout =	WDOG_TICKS_TO_SECS(PM_WDOG_TIME_SET),
+ };
+ 
+diff --git a/drivers/watchdog/hpwdt.c b/drivers/watchdog/hpwdt.c
+index f79f932bca148..79ed1626d8ea1 100644
+--- a/drivers/watchdog/hpwdt.c
++++ b/drivers/watchdog/hpwdt.c
+@@ -178,7 +178,7 @@ static int hpwdt_pretimeout(unsigned int ulReason, struct pt_regs *regs)
+ 		"3. OA Forward Progress Log\n"
+ 		"4. iLO Event Log";
+ 
+-	if (ilo5 && ulReason == NMI_UNKNOWN && !mynmi)
++	if (ulReason == NMI_UNKNOWN && !mynmi)
+ 		return NMI_DONE;
+ 
+ 	if (ilo5 && !pretimeout && !mynmi)
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 8e1be7ba01039..9215793a1c814 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -77,6 +77,11 @@ static int rti_wdt_start(struct watchdog_device *wdd)
+ {
+ 	u32 timer_margin;
+ 	struct rti_wdt_device *wdt = watchdog_get_drvdata(wdd);
++	int ret;
++
++	ret = pm_runtime_resume_and_get(wdd->parent);
++	if (ret)
++		return ret;
+ 
+ 	/* set timeout period */
+ 	timer_margin = (u64)wdd->timeout * wdt->freq;
+@@ -343,6 +348,9 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ 	if (last_ping)
+ 		watchdog_set_last_hw_keepalive(wdd, last_ping);
+ 
++	if (!watchdog_hw_running(wdd))
++		pm_runtime_put_sync(&pdev->dev);
++
+ 	return 0;
+ 
+ err_iomap:
+@@ -357,7 +365,10 @@ static void rti_wdt_remove(struct platform_device *pdev)
+ 	struct rti_wdt_device *wdt = platform_get_drvdata(pdev);
+ 
+ 	watchdog_unregister_device(&wdt->wdd);
+-	pm_runtime_put(&pdev->dev);
++
++	if (!pm_runtime_suspended(&pdev->dev))
++		pm_runtime_put(&pdev->dev);
++
+ 	pm_runtime_disable(&pdev->dev);
+ }
+ 
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 15df74e11a595..e2bd266b1b5b3 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -1073,6 +1073,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ 
+ 	/* Fill in the data structures */
+ 	cdev_init(&wd_data->cdev, &watchdog_fops);
++	wd_data->cdev.owner = wdd->ops->owner;
+ 
+ 	/* Add the device */
+ 	err = cdev_device_add(&wd_data->cdev, &wd_data->dev);
+@@ -1087,8 +1088,6 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
+ 		return err;
+ 	}
+ 
+-	wd_data->cdev.owner = wdd->ops->owner;
+-
+ 	/* Record time of most recent heartbeat as 'just before now'. */
+ 	wd_data->last_hw_keepalive = ktime_sub(ktime_get(), 1);
+ 	watchdog_set_open_deadline(wd_data);
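
The watchdog_dev reordering closes a small race: cdev_device_add() publishes the character device, and until cdev.owner is set an open() racing with registration would not pin the watchdog driver's module, allowing it to be unloaded with the device still open. Assigning the owner before the device goes live removes that window.
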
+diff --git a/fs/ceph/Kconfig b/fs/ceph/Kconfig
+index 94df854147d35..7249d70e1a43f 100644
+--- a/fs/ceph/Kconfig
++++ b/fs/ceph/Kconfig
+@@ -7,6 +7,7 @@ config CEPH_FS
+ 	select CRYPTO_AES
+ 	select CRYPTO
+ 	select NETFS_SUPPORT
++	select FS_ENCRYPTION_ALGS if FS_ENCRYPTION
+ 	default n
+ 	help
+ 	  Choose Y or M here to include support for mounting the
+diff --git a/fs/dlm/debug_fs.c b/fs/dlm/debug_fs.c
+index 42f332f463598..c587bfadeff43 100644
+--- a/fs/dlm/debug_fs.c
++++ b/fs/dlm/debug_fs.c
+@@ -748,7 +748,7 @@ static int table_open4(struct inode *inode, struct file *file)
+ 	struct seq_file *seq;
+ 	int ret;
+ 
+-	ret = seq_open(file, &format5_seq_ops);
++	ret = seq_open(file, &format4_seq_ops);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index 77240953a92e0..edf29c15db778 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -15,6 +15,7 @@
+ #include <linux/slab.h>
+ #include <linux/magic.h>
+ #include <linux/statfs.h>
++#include <linux/printk.h>
+ 
+ #include "internal.h"
+ 
+@@ -333,9 +334,20 @@ static int efivarfs_get_tree(struct fs_context *fc)
+ 	return get_tree_single(fc, efivarfs_fill_super);
+ }
+ 
++static int efivarfs_reconfigure(struct fs_context *fc)
++{
++	if (!efivar_supports_writes() && !(fc->sb_flags & SB_RDONLY)) {
++		pr_err("Firmware does not support SetVariableRT. Can not remount with rw\n");
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static const struct fs_context_operations efivarfs_context_ops = {
+ 	.get_tree	= efivarfs_get_tree,
+ 	.parse_param	= efivarfs_parse_param,
++	.reconfigure	= efivarfs_reconfigure,
+ };
+ 
+ static int efivarfs_init_fs_context(struct fs_context *fc)
+@@ -356,6 +368,8 @@ static int efivarfs_init_fs_context(struct fs_context *fc)
+ 
+ static void efivarfs_kill_sb(struct super_block *sb)
+ {
++	struct efivarfs_fs_info *sfi = sb->s_fs_info;
++
+ 	kill_litter_super(sb);
+ 
+ 	if (!efivar_is_available())
+@@ -363,6 +377,7 @@ static void efivarfs_kill_sb(struct super_block *sb)
+ 
+ 	/* Remove all entries and destroy */
+ 	efivar_entry_iter(efivarfs_destroy, &efivarfs_list, NULL);
++	kfree(sfi);
+ }
+ 
+ static struct file_system_type efivarfs_type = {
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index 021be5feb1bcf..af98e88908eea 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -398,7 +398,7 @@ int z_erofs_parse_cfgs(struct super_block *sb, struct erofs_super_block *dsb)
+ 	int size, ret = 0;
+ 
+ 	if (!erofs_sb_has_compr_cfgs(sbi)) {
+-		sbi->available_compr_algs = Z_EROFS_COMPRESSION_LZ4;
++		sbi->available_compr_algs = 1 << Z_EROFS_COMPRESSION_LZ4;
+ 		return z_erofs_load_lz4_config(sb, dsb, NULL, 0);
+ 	}
+ 
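
A note on the decompressor hunk: Z_EROFS_COMPRESSION_LZ4 is algorithm index 0 in the mainline headers and available_compr_algs is a bitmask of such indices, so the old assignment stored the literal 0 — an empty mask — rather than bit 0. The zmap.c hunk further down depends on this: it rejects inodes whose on-disk algorithm type is not advertised in the superblock's mask, a check that would have failed even for plain LZ4 images with the old value.
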
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index a7e6847f6f8f1..a33cd6757f984 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1309,12 +1309,11 @@ out:
+ 		put_page(page);
+ 	} else {
+ 		for (i = 0; i < pclusterpages; ++i) {
+-			page = pcl->compressed_bvecs[i].page;
++			/* consider shortlived pages added when decompressing */
++			page = be->compressed_pages[i];
+ 
+ 			if (erofs_page_is_managed(sbi, page))
+ 				continue;
+-
+-			/* recycle all individual short-lived pages */
+ 			(void)z_erofs_put_shortlivedpage(be->pagepool, page);
+ 			WRITE_ONCE(pcl->compressed_bvecs[i].page, NULL);
+ 		}
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 7b55111fd5337..7a1a24ae4a2d8 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -458,7 +458,7 @@ static int z_erofs_do_map_blocks(struct inode *inode,
+ 		.map = map,
+ 	};
+ 	int err = 0;
+-	unsigned int lclusterbits, endoff;
++	unsigned int lclusterbits, endoff, afmt;
+ 	unsigned long initial_lcn;
+ 	unsigned long long ofs, end;
+ 
+@@ -547,17 +547,20 @@ static int z_erofs_do_map_blocks(struct inode *inode,
+ 			err = -EFSCORRUPTED;
+ 			goto unmap_out;
+ 		}
+-		if (vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER)
+-			map->m_algorithmformat =
+-				Z_EROFS_COMPRESSION_INTERLACED;
+-		else
+-			map->m_algorithmformat =
+-				Z_EROFS_COMPRESSION_SHIFTED;
+-	} else if (m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
+-		map->m_algorithmformat = vi->z_algorithmtype[1];
++		afmt = vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER ?
++			Z_EROFS_COMPRESSION_INTERLACED :
++			Z_EROFS_COMPRESSION_SHIFTED;
+ 	} else {
+-		map->m_algorithmformat = vi->z_algorithmtype[0];
++		afmt = m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2 ?
++			vi->z_algorithmtype[1] : vi->z_algorithmtype[0];
++		if (!(EROFS_I_SB(inode)->available_compr_algs & (1 << afmt))) {
++			erofs_err(inode->i_sb, "inconsistent algorithmtype %u for nid %llu",
++				  afmt, vi->nid);
++			err = -EFSCORRUPTED;
++			goto unmap_out;
++		}
+ 	}
++	map->m_algorithmformat = afmt;
+ 
+ 	if ((flags & EROFS_GET_BLOCKS_FIEMAP) ||
+ 	    ((flags & EROFS_GET_BLOCKS_READMORE) &&
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 4e42b5f24debe..bc3f05d43b624 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2566,9 +2566,6 @@ int f2fs_encrypt_one_page(struct f2fs_io_info *fio)
+ 
+ 	page = fio->compressed_page ? fio->compressed_page : fio->page;
+ 
+-	/* wait for GCed page writeback via META_MAPPING */
+-	f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
+-
+ 	if (fscrypt_inode_uses_inline_crypto(inode))
+ 		return 0;
+ 
+@@ -2755,6 +2752,10 @@ got_it:
+ 		goto out_writepage;
+ 	}
+ 
++	/* wait for GCed page writeback via META_MAPPING */
++	if (fio->post_read)
++		f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
++
+ 	/*
+ 	 * If current allocation needs SSR,
+ 	 * it had better in-place writes for updated data.
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index e50363583f019..8912511980ae2 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -42,7 +42,7 @@ static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
+ 	vm_fault_t ret;
+ 
+ 	ret = filemap_fault(vmf);
+-	if (!ret)
++	if (ret & VM_FAULT_LOCKED)
+ 		f2fs_update_iostat(F2FS_I_SB(inode), inode,
+ 					APP_MAPPED_READ_IO, F2FS_BLKSIZE);
+ 
+@@ -2818,6 +2818,11 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
+ 			goto out;
+ 	}
+ 
++	if (f2fs_compressed_file(src) || f2fs_compressed_file(dst)) {
++		ret = -EOPNOTSUPP;
++		goto out_unlock;
++	}
++
+ 	ret = -EINVAL;
+ 	if (pos_in + len > src->i_size || pos_in + len < pos_in)
+ 		goto out_unlock;
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index d0053b0284d8e..7f71bae2c83b5 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -1106,7 +1106,7 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 	}
+ 
+ 	if (old_dir_entry) {
+-		if (old_dir != new_dir && !whiteout)
++		if (old_dir != new_dir)
+ 			f2fs_set_link(old_inode, old_dir_entry,
+ 						old_dir_page, new_dir);
+ 		else
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 6c7f6a649d272..9b546fd210100 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2751,11 +2751,11 @@ recover_xnid:
+ 	f2fs_update_inode_page(inode);
+ 
+ 	/* 3: update and set xattr node page dirty */
+-	if (page)
++	if (page) {
+ 		memcpy(F2FS_NODE(xpage), F2FS_NODE(page),
+ 				VALID_XATTR_BLOCK_SIZE);
+-
+-	set_page_dirty(xpage);
++		set_page_dirty(xpage);
++	}
+ 	f2fs_put_page(xpage, 1);
+ 
+ 	return 0;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 033af907c3b1d..5dfbc6b4c0ac8 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3364,6 +3364,14 @@ loff_t max_file_blocks(struct inode *inode)
+ 	leaf_count *= NIDS_PER_BLOCK;
+ 	result += leaf_count;
+ 
++	/*
++	 * For compatibility with FSCRYPT_POLICY_FLAG_IV_INO_LBLK_{64,32} with
++	 * a 4K crypto data unit, we must restrict the max filesize to what can
++	 * fit within U32_MAX + 1 data units.
++	 */
++
++	result = min(result, (((loff_t)U32_MAX + 1) * 4096) >> F2FS_BLKSIZE_BITS);
++
+ 	return result;
+ }
+ 
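
Worked out, assuming 4 KiB blocks (F2FS_BLKSIZE_BITS = 12): ((loff_t)U32_MAX + 1) * 4096 = 2^32 * 2^12 = 2^44 bytes, and shifting right by 12 caps the result at 2^32 blocks — a 16 TiB file, i.e. exactly U32_MAX + 1 crypto data units of 4 KiB each, the most the 32-bit IV schemes can address.
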
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index a8fc2cac68799..f290fe9327c49 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -660,11 +660,14 @@ retry:
+ 	here = __find_xattr(base_addr, last_base_addr, NULL, index, len, name);
+ 	if (!here) {
+ 		if (!F2FS_I(inode)->i_xattr_nid) {
++			error = f2fs_recover_xattr_data(inode, NULL);
+ 			f2fs_notice(F2FS_I_SB(inode),
+-				"recover xattr in inode (%lu)", inode->i_ino);
+-			f2fs_recover_xattr_data(inode, NULL);
+-			kfree(base_addr);
+-			goto retry;
++				"recover xattr in inode (%lu), error(%d)",
++					inode->i_ino, error);
++			if (!error) {
++				kfree(base_addr);
++				goto retry;
++			}
+ 		}
+ 		f2fs_err(F2FS_I_SB(inode), "set inode (%lu) has corrupted xattr",
+ 								inode->i_ino);
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index 95dae7838b4e5..f139ce8cf5ce2 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -1505,7 +1505,8 @@ void gfs2_quota_cleanup(struct gfs2_sbd *sdp)
+ 	LIST_HEAD(dispose);
+ 	int count;
+ 
+-	BUG_ON(test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags));
++	BUG_ON(!test_bit(SDF_NORECOVERY, &sdp->sd_flags) &&
++		test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags));
+ 
+ 	spin_lock(&qd_lock);
+ 	list_for_each_entry(qd, &sdp->sd_quota_list, qd_list) {
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index c2060203b98af..396d0f4a259d5 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -2306,7 +2306,7 @@ void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_rgrpd *rgd,
+ 		       (unsigned long long)rgd->rd_addr, rgd->rd_flags,
+ 		       rgd->rd_free, rgd->rd_free_clone, rgd->rd_dinodes,
+ 		       rgd->rd_requested, rgd->rd_reserved, rgd->rd_extfail_pt);
+-	if (rgd->rd_sbd->sd_args.ar_rgrplvb) {
++	if (rgd->rd_sbd->sd_args.ar_rgrplvb && rgd->rd_rgl) {
+ 		struct gfs2_rgrp_lvb *rgl = rgd->rd_rgl;
+ 
+ 		gfs2_print_dbg(seq, "%s  L: f:%02x b:%u i:%u\n", fs_id_buf,
+diff --git a/fs/namespace.c b/fs/namespace.c
+index fbf0e596fcd30..6c39ec020a5f4 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2875,7 +2875,12 @@ static int do_remount(struct path *path, int ms_flags, int sb_flags,
+ 	if (IS_ERR(fc))
+ 		return PTR_ERR(fc);
+ 
++	/*
++	 * Indicate to the filesystem that the remount request is coming
++	 * from the legacy mount system call.
++	 */
+ 	fc->oldapi = true;
++
+ 	err = parse_monolithic_mount_data(fc, data);
+ 	if (!err) {
+ 		down_write(&sb->s_umount);
+@@ -3324,6 +3329,12 @@ static int do_new_mount(struct path *path, const char *fstype, int sb_flags,
+ 	if (IS_ERR(fc))
+ 		return PTR_ERR(fc);
+ 
++	/*
++	 * Indicate to the filesystem that the mount request is coming
++	 * from the legacy mount system call.
++	 */
++	fc->oldapi = true;
++
+ 	if (subtype)
+ 		err = vfs_parse_fs_string(fc, "subtype",
+ 					  subtype, strlen(subtype));
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 943aeea1eb160..6be13e0ec170d 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -580,6 +580,8 @@ retry:
+ 		nfs4_delete_deviceid(node->ld, node->nfs_client, id);
+ 		goto retry;
+ 	}
++
++	nfs4_put_deviceid_node(node);
+ 	return ERR_PTR(-ENODEV);
+ }
+ 
+@@ -893,10 +895,9 @@ bl_pg_init_write(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
+ 	}
+ 
+ 	if (pgio->pg_dreq == NULL)
+-		wb_size = pnfs_num_cont_bytes(pgio->pg_inode,
+-					      req->wb_index);
++		wb_size = pnfs_num_cont_bytes(pgio->pg_inode, req->wb_index);
+ 	else
+-		wb_size = nfs_dreq_bytes_left(pgio->pg_dreq);
++		wb_size = nfs_dreq_bytes_left(pgio->pg_dreq, req_offset(req));
+ 
+ 	pnfs_generic_pg_init_write(pgio, req, wb_size);
+ 
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 13dffe4201e6e..273c0b68abf44 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -2963,7 +2963,7 @@ static u64 nfs_access_login_time(const struct task_struct *task,
+ 	rcu_read_lock();
+ 	for (;;) {
+ 		parent = rcu_dereference(task->real_parent);
+-		pcred = rcu_dereference(parent->cred);
++		pcred = __task_cred(parent);
+ 		if (parent == task || cred_fscmp(pcred, cred) != 0)
+ 			break;
+ 		task = parent;
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index f6c74f4246917..5918c67dae0da 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -205,9 +205,10 @@ static void nfs_direct_req_release(struct nfs_direct_req *dreq)
+ 	kref_put(&dreq->kref, nfs_direct_req_free);
+ }
+ 
+-ssize_t nfs_dreq_bytes_left(struct nfs_direct_req *dreq)
++ssize_t nfs_dreq_bytes_left(struct nfs_direct_req *dreq, loff_t offset)
+ {
+-	return dreq->bytes_left;
++	loff_t start = offset - dreq->io_start;
++	return dreq->max_count - start;
+ }
+ EXPORT_SYMBOL_GPL(nfs_dreq_bytes_left);
+ 
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 9c9cf764f6000..b1fa81c9dff6f 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -655,7 +655,7 @@ extern int nfs_sillyrename(struct inode *dir, struct dentry *dentry);
+ /* direct.c */
+ void nfs_init_cinfo_from_dreq(struct nfs_commit_info *cinfo,
+ 			      struct nfs_direct_req *dreq);
+-extern ssize_t nfs_dreq_bytes_left(struct nfs_direct_req *dreq);
++extern ssize_t nfs_dreq_bytes_left(struct nfs_direct_req *dreq, loff_t offset);
+ 
+ /* nfs4proc.c */
+ extern struct nfs_client *nfs4_init_client(struct nfs_client *clp,
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 8a943fffaad56..23819a7565085 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -170,6 +170,7 @@ static int nfs4_map_errors(int err)
+ 	case -NFS4ERR_RESOURCE:
+ 	case -NFS4ERR_LAYOUTTRYLATER:
+ 	case -NFS4ERR_RECALLCONFLICT:
++	case -NFS4ERR_RETURNCONFLICT:
+ 		return -EREMOTEIO;
+ 	case -NFS4ERR_WRONGSEC:
+ 	case -NFS4ERR_WRONG_CRED:
+@@ -558,6 +559,7 @@ static int nfs4_do_handle_exception(struct nfs_server *server,
+ 		case -NFS4ERR_GRACE:
+ 		case -NFS4ERR_LAYOUTTRYLATER:
+ 		case -NFS4ERR_RECALLCONFLICT:
++		case -NFS4ERR_RETURNCONFLICT:
+ 			exception->delay = 1;
+ 			return 0;
+ 
+@@ -9691,6 +9693,7 @@ nfs4_layoutget_handle_exception(struct rpc_task *task,
+ 		status = -EBUSY;
+ 		break;
+ 	case -NFS4ERR_RECALLCONFLICT:
++	case -NFS4ERR_RETURNCONFLICT:
+ 		status = -ERECALLCONFLICT;
+ 		break;
+ 	case -NFS4ERR_DELEG_REVOKED:
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 21a365357629c..0c0fed1ecd0bf 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2733,7 +2733,8 @@ pnfs_generic_pg_init_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *r
+ 		if (pgio->pg_dreq == NULL)
+ 			rd_size = i_size_read(pgio->pg_inode) - req_offset(req);
+ 		else
+-			rd_size = nfs_dreq_bytes_left(pgio->pg_dreq);
++			rd_size = nfs_dreq_bytes_left(pgio->pg_dreq,
++						      req_offset(req));
+ 
+ 		pgio->pg_lseg =
+ 			pnfs_update_layout(pgio->pg_inode, nfs_req_openctx(req),
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 804a7d7894521..226e7f66b5903 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -446,6 +446,18 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ 	bool was_empty = false;
+ 	bool wake_next_writer = false;
+ 
++	/*
++	 * Reject writing to watch queue pipes before the point where we lock
++	 * the pipe.
++	 * Otherwise, lockdep would be unhappy if the caller already has another
++	 * pipe locked.
++	 * If we had to support locking a normal pipe and a notification pipe at
++	 * the same time, we could set up lockdep annotations for that, but
++	 * since we don't actually need that, it's simpler to just bail here.
++	 */
++	if (pipe_has_watch_queue(pipe))
++		return -EXDEV;
++
+ 	/* Null write succeeds. */
+ 	if (unlikely(total_len == 0))
+ 		return 0;
+@@ -458,11 +470,6 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ 		goto out;
+ 	}
+ 
+-	if (pipe_has_watch_queue(pipe)) {
+-		ret = -EXDEV;
+-		goto out;
+-	}
+-
+ 	/*
+ 	 * If it wasn't empty we try to merge new data into
+ 	 * the last buffer.
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 650e437b55e6b..f1848cdd6d348 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -190,7 +190,7 @@ static int persistent_ram_init_ecc(struct persistent_ram_zone *prz,
+ {
+ 	int numerr;
+ 	struct persistent_ram_buffer *buffer = prz->buffer;
+-	int ecc_blocks;
++	size_t ecc_blocks;
+ 	size_t ecc_total;
+ 
+ 	if (!ecc_info || !ecc_info->ecc_size)
+diff --git a/fs/smb/server/asn1.c b/fs/smb/server/asn1.c
+index 4a4b2b03ff33d..b931a99ab9c85 100644
+--- a/fs/smb/server/asn1.c
++++ b/fs/smb/server/asn1.c
+@@ -214,10 +214,15 @@ static int ksmbd_neg_token_alloc(void *context, size_t hdrlen,
+ {
+ 	struct ksmbd_conn *conn = context;
+ 
++	if (!vlen)
++		return -EINVAL;
++
+ 	conn->mechToken = kmemdup_nul(value, vlen, GFP_KERNEL);
+ 	if (!conn->mechToken)
+ 		return -ENOMEM;
+ 
++	conn->mechTokenLen = (unsigned int)vlen;
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index b6fa1e285c401..7977827c65410 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -415,13 +415,7 @@ static void stop_sessions(void)
+ again:
+ 	down_read(&conn_list_lock);
+ 	list_for_each_entry(conn, &conn_list, conns_list) {
+-		struct task_struct *task;
+-
+ 		t = conn->transport;
+-		task = t->handler;
+-		if (task)
+-			ksmbd_debug(CONN, "Stop session handler %s/%d\n",
+-				    task->comm, task_pid_nr(task));
+ 		ksmbd_conn_set_exiting(conn);
+ 		if (t->ops->shutdown) {
+ 			up_read(&conn_list_lock);
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 3c005246a32e8..0e04cf8b1d896 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -88,6 +88,7 @@ struct ksmbd_conn {
+ 	__u16				dialect;
+ 
+ 	char				*mechToken;
++	unsigned int			mechTokenLen;
+ 
+ 	struct ksmbd_conn_ops	*conn_ops;
+ 
+@@ -134,7 +135,6 @@ struct ksmbd_transport_ops {
+ struct ksmbd_transport {
+ 	struct ksmbd_conn		*conn;
+ 	struct ksmbd_transport_ops	*ops;
+-	struct task_struct		*handler;
+ };
+ 
+ #define KSMBD_TCP_RECV_TIMEOUT	(7 * HZ)
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index 562b180459a1a..e0eb7cb2a5258 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -1191,6 +1191,12 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ 	bool prev_op_has_lease;
+ 	__le32 prev_op_state = 0;
+ 
++	/* Only v2 leases handle the directory */
++	if (S_ISDIR(file_inode(fp->filp)->i_mode)) {
++		if (!lctx || lctx->version != 2)
++			return 0;
++	}
++
+ 	opinfo = alloc_opinfo(work, pid, tid);
+ 	if (!opinfo)
+ 		return -ENOMEM;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index ca2c528c9de3a..948665f8370f4 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1414,7 +1414,10 @@ static struct ksmbd_user *session_user(struct ksmbd_conn *conn,
+ 	char *name;
+ 	unsigned int name_off, name_len, secbuf_len;
+ 
+-	secbuf_len = le16_to_cpu(req->SecurityBufferLength);
++	if (conn->use_spnego && conn->mechToken)
++		secbuf_len = conn->mechTokenLen;
++	else
++		secbuf_len = le16_to_cpu(req->SecurityBufferLength);
+ 	if (secbuf_len < sizeof(struct authenticate_message)) {
+ 		ksmbd_debug(SMB, "blob len %d too small\n", secbuf_len);
+ 		return NULL;
+@@ -1505,7 +1508,10 @@ static int ntlm_authenticate(struct ksmbd_work *work,
+ 		struct authenticate_message *authblob;
+ 
+ 		authblob = user_authblob(conn, req);
+-		sz = le16_to_cpu(req->SecurityBufferLength);
++		if (conn->use_spnego && conn->mechToken)
++			sz = conn->mechTokenLen;
++		else
++			sz = le16_to_cpu(req->SecurityBufferLength);
+ 		rc = ksmbd_decode_ntlmssp_auth_blob(authblob, sz, conn, sess);
+ 		if (rc) {
+ 			set_user_flag(sess->user, KSMBD_USER_FLAG_BAD_PASSWORD);
+@@ -1778,8 +1784,7 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 
+ 	negblob_off = le16_to_cpu(req->SecurityBufferOffset);
+ 	negblob_len = le16_to_cpu(req->SecurityBufferLength);
+-	if (negblob_off < offsetof(struct smb2_sess_setup_req, Buffer) ||
+-	    negblob_len < offsetof(struct negotiate_message, NegotiateFlags)) {
++	if (negblob_off < offsetof(struct smb2_sess_setup_req, Buffer)) {
+ 		rc = -EINVAL;
+ 		goto out_err;
+ 	}
+@@ -1788,8 +1793,15 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 			negblob_off);
+ 
+ 	if (decode_negotiation_token(conn, negblob, negblob_len) == 0) {
+-		if (conn->mechToken)
++		if (conn->mechToken) {
+ 			negblob = (struct negotiate_message *)conn->mechToken;
++			negblob_len = conn->mechTokenLen;
++		}
++	}
++
++	if (negblob_len < offsetof(struct negotiate_message, NegotiateFlags)) {
++		rc = -EINVAL;
++		goto out_err;
+ 	}
+ 
+ 	if (server_conf.auth_mechs & conn->auth_mechs) {
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 6691ae68af0c0..7c98bf699772f 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -158,8 +158,12 @@ int ksmbd_verify_smb_message(struct ksmbd_work *work)
+  */
+ bool ksmbd_smb_request(struct ksmbd_conn *conn)
+ {
+-	__le32 *proto = (__le32 *)smb2_get_msg(conn->request_buf);
++	__le32 *proto;
+ 
++	if (conn->request_buf[0] != 0)
++		return false;
++
++	proto = (__le32 *)smb2_get_msg(conn->request_buf);
+ 	if (*proto == SMB2_COMPRESSION_TRANSFORM_ID) {
+ 		pr_err_ratelimited("smb2 compression not supported yet");
+ 		return false;
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index c5629a68c8b73..8faa25c6e129b 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -2039,6 +2039,7 @@ static bool rdma_frwr_is_supported(struct ib_device_attr *attrs)
+ static int smb_direct_handle_connect_request(struct rdma_cm_id *new_cm_id)
+ {
+ 	struct smb_direct_transport *t;
++	struct task_struct *handler;
+ 	int ret;
+ 
+ 	if (!rdma_frwr_is_supported(&new_cm_id->device->attrs)) {
+@@ -2056,11 +2057,11 @@ static int smb_direct_handle_connect_request(struct rdma_cm_id *new_cm_id)
+ 	if (ret)
+ 		goto out_err;
+ 
+-	KSMBD_TRANS(t)->handler = kthread_run(ksmbd_conn_handler_loop,
+-					      KSMBD_TRANS(t)->conn, "ksmbd:r%u",
+-					      smb_direct_port);
+-	if (IS_ERR(KSMBD_TRANS(t)->handler)) {
+-		ret = PTR_ERR(KSMBD_TRANS(t)->handler);
++	handler = kthread_run(ksmbd_conn_handler_loop,
++			      KSMBD_TRANS(t)->conn, "ksmbd:r%u",
++			      smb_direct_port);
++	if (IS_ERR(handler)) {
++		ret = PTR_ERR(handler);
+ 		pr_err("Can't start thread\n");
+ 		goto out_err;
+ 	}
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index eff7a1d793f00..9d4222154dcc0 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -185,6 +185,7 @@ static int ksmbd_tcp_new_connection(struct socket *client_sk)
+ 	struct sockaddr *csin;
+ 	int rc = 0;
+ 	struct tcp_transport *t;
++	struct task_struct *handler;
+ 
+ 	t = alloc_transport(client_sk);
+ 	if (!t) {
+@@ -199,13 +200,13 @@ static int ksmbd_tcp_new_connection(struct socket *client_sk)
+ 		goto out_error;
+ 	}
+ 
+-	KSMBD_TRANS(t)->handler = kthread_run(ksmbd_conn_handler_loop,
+-					      KSMBD_TRANS(t)->conn,
+-					      "ksmbd:%u",
+-					      ksmbd_tcp_get_port(csin));
+-	if (IS_ERR(KSMBD_TRANS(t)->handler)) {
++	handler = kthread_run(ksmbd_conn_handler_loop,
++			      KSMBD_TRANS(t)->conn,
++			      "ksmbd:%u",
++			      ksmbd_tcp_get_port(csin));
++	if (IS_ERR(handler)) {
+ 		pr_err("cannot start conn thread\n");
+-		rc = PTR_ERR(KSMBD_TRANS(t)->handler);
++		rc = PTR_ERR(handler);
+ 		free_transport(t);
+ 	}
+ 	return rc;
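
Both ksmbd transports now keep the kthread_run() result in a local before checking it, since the stored handler field was removed from struct ksmbd_transport. A simplified userspace model of the error-pointer idiom involved — ERR_PTR/IS_ERR/PTR_ERR below are toy stand-ins for the <linux/err.h> helpers, and start_handler() is invented:

	#include <errno.h>
	#include <stdio.h>

	#define MAX_ERRNO	4095
	#define ERR_PTR(err)	((void *)(long)(err))
	#define PTR_ERR(ptr)	((long)(ptr))
	#define IS_ERR(ptr)	\
		((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

	/* pretend thread starter: error pointers replace NULL returns */
	static void *start_handler(int fail)
	{
		return fail ? ERR_PTR(-ENOMEM) : (void *)0x1000;
	}

	int main(void)
	{
		void *handler = start_handler(1);

		if (IS_ERR(handler))
			printf("kthread_run failed: %ld\n",
			       PTR_ERR(handler));	/* prints -12 */
		return 0;
	}
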
+diff --git a/include/asm-generic/cmpxchg-local.h b/include/asm-generic/cmpxchg-local.h
+index 3df9f59a544e4..f27d66fdc00a2 100644
+--- a/include/asm-generic/cmpxchg-local.h
++++ b/include/asm-generic/cmpxchg-local.h
+@@ -34,7 +34,7 @@ static inline unsigned long __generic_cmpxchg_local(volatile void *ptr,
+ 			*(u16 *)ptr = (new & 0xffffu);
+ 		break;
+ 	case 4: prev = *(u32 *)ptr;
+-		if (prev == (old & 0xffffffffffu))
++		if (prev == (old & 0xffffffffu))
+ 			*(u32 *)ptr = (new & 0xffffffffu);
+ 		break;
+ 	case 8: prev = *(u64 *)ptr;
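
The cmpxchg-local fix is a one-character mask typo: 0xffffffffffu has ten f's and therefore masks 40 bits, so on 64-bit targets an old value carrying anything in bits 32-39 would be compared at 40-bit width against the 32-bit prev while the store still truncated to 32 bits; 0xffffffffu restores a genuine 4-byte compare, matching the other case arms.
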
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index ef8ce86b1f788..08b803a4fcde4 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -136,6 +136,7 @@ struct af_alg_async_req {
+  *			recvmsg is invoked.
+  * @init:		True if metadata has been sent.
+  * @len:		Length of memory allocated for this data structure.
++ * @inflight:		Non-zero when AIO requests are in flight.
+  */
+ struct af_alg_ctx {
+ 	struct list_head tsgl_list;
+@@ -154,6 +155,8 @@ struct af_alg_ctx {
+ 	bool init;
+ 
+ 	unsigned int len;
++
++	unsigned int inflight;
+ };
+ 
+ int af_alg_register_type(const struct af_alg_type *type);
+diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h
+index 4429d3b1745b6..655862b3d2a49 100644
+--- a/include/drm/display/drm_dp_mst_helper.h
++++ b/include/drm/display/drm_dp_mst_helper.h
+@@ -842,7 +842,7 @@ struct edid *drm_dp_mst_get_edid(struct drm_connector *connector,
+ int drm_dp_get_vc_payload_bw(const struct drm_dp_mst_topology_mgr *mgr,
+ 			     int link_rate, int link_lane_count);
+ 
+-int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
++int drm_dp_calc_pbn_mode(int clock, int bpp);
+ 
+ void drm_dp_mst_update_slots(struct drm_dp_mst_topology_state *mst_state, uint8_t link_encoding_cap);
+ 
+diff --git a/include/drm/drm_bridge.h b/include/drm/drm_bridge.h
+index cfb7dcdb66c4b..9ef461aa9b9e2 100644
+--- a/include/drm/drm_bridge.h
++++ b/include/drm/drm_bridge.h
+@@ -194,7 +194,7 @@ struct drm_bridge_funcs {
+ 	 * or &drm_encoder_helper_funcs.dpms hook.
+ 	 *
+ 	 * The bridge must assume that the display pipe (i.e. clocks and timing
+-	 * singals) feeding it is no longer running when this callback is
++	 * signals) feeding it is no longer running when this callback is
+ 	 * called.
+ 	 *
+ 	 * The @post_disable callback is optional.
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index 41d417ee13499..0286bada25ce7 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -286,6 +286,11 @@ static inline void bio_first_folio(struct folio_iter *fi, struct bio *bio,
+ {
+ 	struct bio_vec *bvec = bio_first_bvec_all(bio) + i;
+ 
++	if (unlikely(i >= bio->bi_vcnt)) {
++		fi->folio = NULL;
++		return;
++	}
++
+ 	fi->folio = page_folio(bvec->bv_page);
+ 	fi->offset = bvec->bv_offset +
+ 			PAGE_SIZE * (bvec->bv_page - &fi->folio->page);
+@@ -303,10 +308,8 @@ static inline void bio_next_folio(struct folio_iter *fi, struct bio *bio)
+ 		fi->offset = 0;
+ 		fi->length = min(folio_size(fi->folio), fi->_seg_count);
+ 		fi->_next = folio_next(fi->folio);
+-	} else if (fi->_i + 1 < bio->bi_vcnt) {
+-		bio_first_folio(fi, bio, fi->_i + 1);
+ 	} else {
+-		fi->folio = NULL;
++		bio_first_folio(fi, bio, fi->_i + 1);
+ 	}
+ }
+ 
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index cff5bb08820ec..7a7859a5cce42 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -106,7 +106,11 @@ struct bpf_map_ops {
+ 	/* funcs called by prog_array and perf_event_array map */
+ 	void *(*map_fd_get_ptr)(struct bpf_map *map, struct file *map_file,
+ 				int fd);
+-	void (*map_fd_put_ptr)(void *ptr);
++	/* If need_defer is true, the implementation should guarantee that
++	 * the to-be-put element is still alive before the bpf program, which
++	 * may manipulate it, exists.
++	 */
++	void (*map_fd_put_ptr)(struct bpf_map *map, void *ptr, bool need_defer);
+ 	int (*map_gen_lookup)(struct bpf_map *map, struct bpf_insn *insn_buf);
+ 	u32 (*map_fd_sys_lookup_elem)(void *ptr);
+ 	void (*map_seq_show_elem)(struct bpf_map *map, void *key,
+@@ -272,7 +276,11 @@ struct bpf_map {
+ 	 */
+ 	atomic64_t refcnt ____cacheline_aligned;
+ 	atomic64_t usercnt;
+-	struct work_struct work;
++	/* rcu is used before freeing and work is only used during freeing */
++	union {
++		struct work_struct work;
++		struct rcu_head rcu;
++	};
+ 	struct mutex freeze_mutex;
+ 	atomic64_t writecnt;
+ 	/* 'Ownership' of program-containing map is claimed by the first program
+@@ -288,6 +296,7 @@ struct bpf_map {
+ 	} owner;
+ 	bool bypass_spec_v1;
+ 	bool frozen; /* write-once; write-protected by freeze_mutex */
++	bool free_after_mult_rcu_gp;
+ 	s64 __percpu *elem_count;
+ };
+ 
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index ace3a4ce2fc98..1293c38ddb7f7 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -448,8 +448,8 @@ struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+  */
+ #define clk_hw_register_fixed_rate_with_accuracy_parent_hw(dev, name,	      \
+ 		parent_hw, flags, fixed_rate, fixed_accuracy)		      \
+-	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, (parent_hw)   \
+-				     NULL, NULL, (flags), (fixed_rate),	      \
++	__clk_hw_register_fixed_rate((dev), NULL, (name), NULL, (parent_hw),  \
++				     NULL, (flags), (fixed_rate),	      \
+ 				     (fixed_accuracy), 0, false)
+ /**
+  * clk_hw_register_fixed_rate_with_accuracy_parent_data - register fixed-rate
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 0aed62f0c6330..f2878b3614634 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -614,7 +614,7 @@ struct gpio_device *gpio_device_get(struct gpio_device *gdev);
+ void gpio_device_put(struct gpio_device *gdev);
+ 
+ DEFINE_FREE(gpio_device_put, struct gpio_device *,
+-	    if (IS_ERR_OR_NULL(_T)) gpio_device_put(_T));
++	    if (!IS_ERR_OR_NULL(_T)) gpio_device_put(_T))
+ 
+ struct device *gpio_device_to_device(struct gpio_device *gdev);
+ 
+diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h
+index ddc7ebb705234..4c5611d99c424 100644
+--- a/include/linux/hisi_acc_qm.h
++++ b/include/linux/hisi_acc_qm.h
+@@ -160,6 +160,11 @@ enum qm_cap_bits {
+ 	QM_SUPPORT_RPM,
+ };
+ 
++struct qm_dev_alg {
++	u64 alg_msk;
++	const char *alg;
++};
++
+ struct dfx_diff_registers {
+ 	u32 *regs;
+ 	u32 reg_offset;
+@@ -265,6 +270,16 @@ struct hisi_qm_cap_info {
+ 	u32 v3_val;
+ };
+ 
++struct hisi_qm_cap_record {
++	u32 type;
++	u32 cap_val;
++};
++
++struct hisi_qm_cap_tables {
++	struct hisi_qm_cap_record *qm_cap_table;
++	struct hisi_qm_cap_record *dev_cap_table;
++};
++
+ struct hisi_qm_list {
+ 	struct mutex lock;
+ 	struct list_head list;
+@@ -365,7 +380,6 @@ struct hisi_qm {
+ 	struct work_struct rst_work;
+ 	struct work_struct cmd_process;
+ 
+-	const char *algs;
+ 	bool use_sva;
+ 
+ 	resource_size_t phys_base;
+@@ -376,6 +390,8 @@ struct hisi_qm {
+ 	u32 mb_qos;
+ 	u32 type_rate;
+ 	struct qm_err_isolate isolate_data;
++
++	struct hisi_qm_cap_tables cap_tables;
+ };
+ 
+ struct hisi_qp_status {
+@@ -563,6 +579,8 @@ void hisi_qm_regs_dump(struct seq_file *s, struct debugfs_regset32 *regset);
+ u32 hisi_qm_get_hw_info(struct hisi_qm *qm,
+ 			const struct hisi_qm_cap_info *info_table,
+ 			u32 index, bool is_read);
++int hisi_qm_set_algs(struct hisi_qm *qm, u64 alg_msk, const struct qm_dev_alg *dev_algs,
++		     u32 dev_algs_size);
+ 
+ /* Used by VFIO ACC live migration driver */
+ struct pci_driver *hisi_sec_get_pf_driver(void);
+diff --git a/include/linux/iio/adc/adi-axi-adc.h b/include/linux/iio/adc/adi-axi-adc.h
+index 52620e5b80522..b7904992d5619 100644
+--- a/include/linux/iio/adc/adi-axi-adc.h
++++ b/include/linux/iio/adc/adi-axi-adc.h
+@@ -41,6 +41,7 @@ struct adi_axi_adc_chip_info {
+  * @reg_access		IIO debugfs_reg_access hook for the client ADC
+  * @read_raw		IIO read_raw hook for the client ADC
+  * @write_raw		IIO write_raw hook for the client ADC
++ * @read_avail		IIO read_avail hook for the client ADC
+  */
+ struct adi_axi_adc_conv {
+ 	const struct adi_axi_adc_chip_info		*chip_info;
+@@ -54,6 +55,9 @@ struct adi_axi_adc_conv {
+ 	int (*write_raw)(struct adi_axi_adc_conv *conv,
+ 			 struct iio_chan_spec const *chan,
+ 			 int val, int val2, long mask);
++	int (*read_avail)(struct adi_axi_adc_conv *conv,
++			  struct iio_chan_spec const *chan,
++			  const int **val, int *type, int *length, long mask);
+ };
+ 
+ struct adi_axi_adc_conv *devm_adi_axi_adc_conv_register(struct device *dev,
+diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
+index f198a8ac7ee72..96f3a133540db 100644
+--- a/include/linux/mhi_ep.h
++++ b/include/linux/mhi_ep.h
+@@ -49,6 +49,18 @@ struct mhi_ep_db_info {
+ 	u32 status;
+ };
+ 
++/**
++ * struct mhi_ep_buf_info - MHI Endpoint transfer buffer info
++ * @dev_addr: Address of the buffer in endpoint
++ * @host_addr: Address of the buffer in host
++ * @size: Size of the buffer
++ */
++struct mhi_ep_buf_info {
++	void *dev_addr;
++	u64 host_addr;
++	size_t size;
++};
++
+ /**
+  * struct mhi_ep_cntrl - MHI Endpoint controller structure
+  * @cntrl_dev: Pointer to the struct device of physical bus acting as the MHI
+@@ -128,14 +140,17 @@ struct mhi_ep_cntrl {
+ 	struct work_struct reset_work;
+ 	struct work_struct cmd_ring_work;
+ 	struct work_struct ch_ring_work;
++	struct kmem_cache *ring_item_cache;
++	struct kmem_cache *ev_ring_el_cache;
++	struct kmem_cache *tre_buf_cache;
+ 
+ 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
+ 	int (*alloc_map)(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t *phys_ptr,
+ 			 void __iomem **virt, size_t size);
+ 	void (*unmap_free)(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
+ 			   void __iomem *virt, size_t size);
+-	int (*read_from_host)(struct mhi_ep_cntrl *mhi_cntrl, u64 from, void *to, size_t size);
+-	int (*write_to_host)(struct mhi_ep_cntrl *mhi_cntrl, void *from, u64 to, size_t size);
++	int (*read_from_host)(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_buf_info *buf_info);
++	int (*write_to_host)(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_buf_info *buf_info);
+ 
+ 	enum mhi_state mhi_state;
+ 
+diff --git a/include/linux/netfilter_bridge.h b/include/linux/netfilter_bridge.h
+index f980edfdd2783..743475ca7e9d5 100644
+--- a/include/linux/netfilter_bridge.h
++++ b/include/linux/netfilter_bridge.h
+@@ -42,7 +42,7 @@ static inline int nf_bridge_get_physinif(const struct sk_buff *skb)
+ 	if (!nf_bridge)
+ 		return 0;
+ 
+-	return nf_bridge->physindev ? nf_bridge->physindev->ifindex : 0;
++	return nf_bridge->physinif;
+ }
+ 
+ static inline int nf_bridge_get_physoutif(const struct sk_buff *skb)
+@@ -56,11 +56,11 @@ static inline int nf_bridge_get_physoutif(const struct sk_buff *skb)
+ }
+ 
+ static inline struct net_device *
+-nf_bridge_get_physindev(const struct sk_buff *skb)
++nf_bridge_get_physindev(const struct sk_buff *skb, struct net *net)
+ {
+ 	const struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
+ 
+-	return nf_bridge ? nf_bridge->physindev : NULL;
++	return nf_bridge ? dev_get_by_index_rcu(net, nf_bridge->physinif) : NULL;
+ }
+ 
+ static inline struct net_device *
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index dea043bc1e383..bc80960fad7c4 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2130,14 +2130,14 @@ int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma);
+ 	(pci_resource_end((dev), (bar)) ? 				\
+ 	 resource_size(pci_resource_n((dev), (bar))) : 0)
+ 
+-#define __pci_dev_for_each_res0(dev, res, ...)				\
+-	for (unsigned int __b = 0;					\
+-	     res = pci_resource_n(dev, __b), __b < PCI_NUM_RESOURCES;	\
++#define __pci_dev_for_each_res0(dev, res, ...)				  \
++	for (unsigned int __b = 0;					  \
++	     __b < PCI_NUM_RESOURCES && (res = pci_resource_n(dev, __b)); \
+ 	     __b++)
+ 
+-#define __pci_dev_for_each_res1(dev, res, __b)				\
+-	for (__b = 0;							\
+-	     res = pci_resource_n(dev, __b), __b < PCI_NUM_RESOURCES;	\
++#define __pci_dev_for_each_res1(dev, res, __b)				  \
++	for (__b = 0;							  \
++	     __b < PCI_NUM_RESOURCES && (res = pci_resource_n(dev, __b)); \
+ 	     __b++)
+ 
+ #define pci_dev_for_each_resource(dev, res, ...)			\
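
The pci.h change is about evaluation order in the for-loop condition: the old
comma expression computed res = pci_resource_n(dev, __b) before testing
__b < PCI_NUM_RESOURCES, so the final iteration evaluated the accessor with an
out-of-range index. A stand-alone demo (arr and resource_n() are stand-ins for
the PCI accessors):

#include <stdio.h>

#define N 3

static int arr[N] = { 10, 20, 30 };

static int *resource_n(unsigned int i)
{
	printf("access index %u%s\n", i, i >= N ? " (out of range!)" : "");
	return i < N ? &arr[i] : NULL;
}

int main(void)
{
	int *res;

	puts("old macro shape:");
	for (unsigned int b = 0; res = resource_n(b), b < N; b++)
		;	/* evaluates resource_n(3) before stopping */

	puts("fixed macro shape:");
	for (unsigned int b = 0; b < N && (res = resource_n(b)); b++)
		;	/* bound check short-circuits the access */

	(void)res;
	return 0;
}
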
+diff --git a/include/linux/rcu_notifier.h b/include/linux/rcu_notifier.h
+index ebf371364581d..5640f024773b3 100644
+--- a/include/linux/rcu_notifier.h
++++ b/include/linux/rcu_notifier.h
+@@ -13,7 +13,7 @@
+ #define RCU_STALL_NOTIFY_NORM	1
+ #define RCU_STALL_NOTIFY_EXP	2
+ 
+-#ifdef CONFIG_RCU_STALL_COMMON
++#if defined(CONFIG_RCU_STALL_COMMON) && defined(CONFIG_RCU_CPU_STALL_NOTIFIER)
+ 
+ #include <linux/notifier.h>
+ #include <linux/types.h>
+@@ -21,12 +21,12 @@
+ int rcu_stall_chain_notifier_register(struct notifier_block *n);
+ int rcu_stall_chain_notifier_unregister(struct notifier_block *n);
+ 
+-#else // #ifdef CONFIG_RCU_STALL_COMMON
++#else // #if defined(CONFIG_RCU_STALL_COMMON) && defined(CONFIG_RCU_CPU_STALL_NOTIFIER)
+ 
+ // No RCU CPU stall warnings in Tiny RCU.
+ static inline int rcu_stall_chain_notifier_register(struct notifier_block *n) { return -EEXIST; }
+ static inline int rcu_stall_chain_notifier_unregister(struct notifier_block *n) { return -ENOENT; }
+ 
+-#endif // #else // #ifdef CONFIG_RCU_STALL_COMMON
++#endif // #else // #if defined(CONFIG_RCU_STALL_COMMON) && defined(CONFIG_RCU_CPU_STALL_NOTIFIER)
+ 
+ #endif /* __LINUX_RCU_NOTIFIER_H */
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index f7206b2623c98..31d523c4e0893 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -301,6 +301,11 @@ static inline void rcu_lock_acquire(struct lockdep_map *map)
+ 	lock_acquire(map, 0, 0, 2, 0, NULL, _THIS_IP_);
+ }
+ 
++static inline void rcu_try_lock_acquire(struct lockdep_map *map)
++{
++	lock_acquire(map, 0, 1, 2, 0, NULL, _THIS_IP_);
++}
++
+ static inline void rcu_lock_release(struct lockdep_map *map)
+ {
+ 	lock_release(map, _THIS_IP_);
+@@ -315,6 +320,7 @@ int rcu_read_lock_any_held(void);
+ #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
+ 
+ # define rcu_lock_acquire(a)		do { } while (0)
++# define rcu_try_lock_acquire(a)	do { } while (0)
+ # define rcu_lock_release(a)		do { } while (0)
+ 
+ static inline int rcu_read_lock_held(void)
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 27998f73183e1..763e9264402f0 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -295,7 +295,7 @@ struct nf_bridge_info {
+ 	u8			bridged_dnat:1;
+ 	u8			sabotage_in_done:1;
+ 	__u16			frag_max_size;
+-	struct net_device	*physindev;
++	int			physinif;
+ 
+ 	/* always valid & non-NULL from FORWARD on, for physdev match */
+ 	struct net_device	*physoutdev;
+diff --git a/include/linux/srcu.h b/include/linux/srcu.h
+index 127ef3b2e6073..236610e4a8fa5 100644
+--- a/include/linux/srcu.h
++++ b/include/linux/srcu.h
+@@ -229,7 +229,7 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp
+ 
+ 	srcu_check_nmi_safety(ssp, true);
+ 	retval = __srcu_read_lock_nmisafe(ssp);
+-	rcu_lock_acquire(&ssp->dep_map);
++	rcu_try_lock_acquire(&ssp->dep_map);
+ 	return retval;
+ }
+ 
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 27cc1d4643219..4dfa9b69ca8d9 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -3,6 +3,8 @@
+ #define _LINUX_VIRTIO_NET_H
+ 
+ #include <linux/if_vlan.h>
++#include <linux/ip.h>
++#include <linux/ipv6.h>
+ #include <linux/udp.h>
+ #include <uapi/linux/tcp.h>
+ #include <uapi/linux/virtio_net.h>
+@@ -49,6 +51,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 					const struct virtio_net_hdr *hdr,
+ 					bool little_endian)
+ {
++	unsigned int nh_min_len = sizeof(struct iphdr);
+ 	unsigned int gso_type = 0;
+ 	unsigned int thlen = 0;
+ 	unsigned int p_off = 0;
+@@ -65,6 +68,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 			gso_type = SKB_GSO_TCPV6;
+ 			ip_proto = IPPROTO_TCP;
+ 			thlen = sizeof(struct tcphdr);
++			nh_min_len = sizeof(struct ipv6hdr);
+ 			break;
+ 		case VIRTIO_NET_HDR_GSO_UDP:
+ 			gso_type = SKB_GSO_UDP;
+@@ -100,7 +104,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 		if (!skb_partial_csum_set(skb, start, off))
+ 			return -EINVAL;
+ 
+-		p_off = skb_transport_offset(skb) + thlen;
++		nh_min_len = max_t(u32, nh_min_len, skb_transport_offset(skb));
++		p_off = nh_min_len + thlen;
+ 		if (!pskb_may_pull(skb, p_off))
+ 			return -EINVAL;
+ 	} else {
+@@ -140,7 +145,7 @@ retry:
+ 
+ 			skb_set_transport_header(skb, keys.control.thoff);
+ 		} else if (gso_type) {
+-			p_off = thlen;
++			p_off = nh_min_len + thlen;
+ 			if (!pskb_may_pull(skb, p_off))
+ 				return -EINVAL;
+ 		}
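
The virtio_net.h hunk enforces a floor on how much packet data must be pulled
before GSO processing: at least the minimal network header for the claimed
protocol plus the transport header, rather than trusting a device-supplied
transport offset alone. A sketch of that arithmetic, assuming the usual fixed
header sizes (20/40/20 bytes) and a hypothetical helper name:

#include <stdio.h>

#define IPV4_HDR_MIN 20	/* sizeof(struct iphdr) */
#define IPV6_HDR_MIN 40	/* sizeof(struct ipv6hdr) */
#define TCP_HDR_MIN  20	/* sizeof(struct tcphdr) */

static unsigned int min_pull_len(int is_ipv6, unsigned int transport_off)
{
	unsigned int nh_min_len = is_ipv6 ? IPV6_HDR_MIN : IPV4_HDR_MIN;

	/* a device-supplied transport offset may only grow the floor */
	if (transport_off > nh_min_len)
		nh_min_len = transport_off;
	return nh_min_len + TCP_HDR_MIN;
}

int main(void)
{
	printf("%u\n", min_pull_len(1, 0));	/* 60: bogus offset 0 ignored */
	printf("%u\n", min_pull_len(0, 34));	/* 54: larger offset wins */
	return 0;
}
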
+diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
+index ebb3ce63d64da..c82089dee0c83 100644
+--- a/include/linux/virtio_vsock.h
++++ b/include/linux/virtio_vsock.h
+@@ -256,4 +256,5 @@ void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
+ void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
+ int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
+ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
++int virtio_transport_notify_set_rcvlowat(struct vsock_sock *vsk, int val);
+ #endif /* _LINUX_VIRTIO_VSOCK_H */
+diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
+index e302c0e804d0f..535701efc1e5c 100644
+--- a/include/net/af_vsock.h
++++ b/include/net/af_vsock.h
+@@ -137,7 +137,6 @@ struct vsock_transport {
+ 	u64 (*stream_rcvhiwat)(struct vsock_sock *);
+ 	bool (*stream_is_active)(struct vsock_sock *);
+ 	bool (*stream_allow)(u32 cid, u32 port);
+-	int (*set_rcvlowat)(struct vsock_sock *vsk, int val);
+ 
+ 	/* SEQ_PACKET. */
+ 	ssize_t (*seqpacket_dequeue)(struct vsock_sock *vsk, struct msghdr *msg,
+@@ -168,6 +167,7 @@ struct vsock_transport {
+ 		struct vsock_transport_send_notify_data *);
+ 	/* sk_lock held by the caller */
+ 	void (*notify_buffer_size)(struct vsock_sock *, u64 *);
++	int (*notify_set_rcvlowat)(struct vsock_sock *vsk, int val);
+ 
+ 	/* Shutdown. */
+ 	int (*shutdown)(struct vsock_sock *, int);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index a3a1ea2696a80..65dd286693527 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -957,7 +957,6 @@ void hci_inquiry_cache_flush(struct hci_dev *hdev);
+ /* ----- HCI Connections ----- */
+ enum {
+ 	HCI_CONN_AUTH_PEND,
+-	HCI_CONN_REAUTH_PEND,
+ 	HCI_CONN_ENCRYPT_PEND,
+ 	HCI_CONN_RSWITCH_PEND,
+ 	HCI_CONN_MODE_CHANGE_PEND,
+diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
+index d68b0a4834315..8b8ed4e13d74d 100644
+--- a/include/net/netdev_queues.h
++++ b/include/net/netdev_queues.h
+@@ -128,7 +128,7 @@ netdev_txq_completed_mb(struct netdev_queue *dev_queue,
+ 		netdev_txq_completed_mb(txq, pkts, bytes);		\
+ 									\
+ 		_res = -1;						\
+-		if (pkts && likely(get_desc > start_thrs)) {		\
++		if (pkts && likely(get_desc >= start_thrs)) {		\
+ 			_res = 1;					\
+ 			if (unlikely(netif_tx_queue_stopped(txq)) &&	\
+ 			    !(down_cond)) {				\
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 0f6cdf52b1dab..bda948a685e50 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -4517,6 +4517,8 @@ union bpf_attr {
+  * long bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, u64 flags)
+  *	Description
+  *		Return a user or a kernel stack in bpf program provided buffer.
++ *		Note: the user stack will only be populated if the *task* is
++ *		the current task; all other tasks will return -EOPNOTSUPP.
+  *		To achieve this, the helper needs *task*, which is a valid
+  *		pointer to **struct task_struct**. To store the stacktrace, the
+  *		bpf program provides *buf* with a nonnegative *size*.
+@@ -4528,6 +4530,7 @@ union bpf_attr {
+  *
+  *		**BPF_F_USER_STACK**
+  *			Collect a user space stack instead of a kernel stack.
++ *			The *task* must be the current task.
+  *		**BPF_F_USER_BUILD_ID**
+  *			Collect buildid+offset instead of ips for user stack,
+  *			only valid if **BPF_F_USER_STACK** is also specified.
+diff --git a/init/do_mounts.c b/init/do_mounts.c
+index 5fdef94f08645..279ad28bf4fb1 100644
+--- a/init/do_mounts.c
++++ b/init/do_mounts.c
+@@ -510,7 +510,10 @@ struct file_system_type rootfs_fs_type = {
+ 
+ void __init init_rootfs(void)
+ {
+-	if (IS_ENABLED(CONFIG_TMPFS) && !saved_root_name[0] &&
+-		(!root_fs_names || strstr(root_fs_names, "tmpfs")))
+-		is_tmpfs = true;
++	if (IS_ENABLED(CONFIG_TMPFS)) {
++		if (!saved_root_name[0] && !root_fs_names)
++			is_tmpfs = true;
++		else if (root_fs_names && !!strstr(root_fs_names, "tmpfs"))
++			is_tmpfs = true;
++	}
+ }
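
The init_rootfs() rewrite splits the tmpfs decision into two cases: no root
device and no rootfstype= given at all, or an explicit rootfstype= list that
contains tmpfs (which now applies even when a root name was saved). Modelled
as a pure predicate for illustration:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool is_tmpfs(const char *saved_root_name, const char *root_fs_names)
{
	if (!saved_root_name[0] && !root_fs_names)
		return true;	/* nothing specified: default to tmpfs */
	if (root_fs_names && strstr(root_fs_names, "tmpfs"))
		return true;	/* tmpfs explicitly requested */
	return false;
}

int main(void)
{
	printf("%d\n", is_tmpfs("", NULL));		/* 1 */
	printf("%d\n", is_tmpfs("/dev/sda1", NULL));	/* 0 */
	printf("%d\n", is_tmpfs("/dev/sda1", "tmpfs"));	/* 1 */
	return 0;
}
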
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 9626a363f1213..59f5791c90c31 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1346,7 +1346,7 @@ static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
+ 		nr_tw = nr_tw_prev + 1;
+ 		/* Large enough to fail the nr_wait comparison below */
+ 		if (!(flags & IOU_F_TWQ_LAZY_WAKE))
+-			nr_tw = -1U;
++			nr_tw = INT_MAX;
+ 
+ 		req->nr_tw = nr_tw;
+ 		req->io_task_work.node.next = first;
+@@ -1898,7 +1898,11 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+ 			io_req_complete_defer(req);
+ 		else
+ 			io_req_complete_post(req, issue_flags);
+-	} else if (ret != IOU_ISSUE_SKIP_COMPLETE)
++
++		return 0;
++	}
++
++	if (ret != IOU_ISSUE_SKIP_COMPLETE)
+ 		return ret;
+ 
+ 	/* If the op doesn't have a file, we're not polling for it */
+@@ -2633,8 +2637,6 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 		__set_current_state(TASK_RUNNING);
+ 		atomic_set(&ctx->cq_wait_nr, 0);
+ 
+-		if (ret < 0)
+-			break;
+ 		/*
+ 		 * Run task_work after scheduling and before io_should_wake().
+ 		 * If we got woken because of task_work being processed, run it
+@@ -2644,6 +2646,18 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 		if (!llist_empty(&ctx->work_llist))
+ 			io_run_local_work(ctx);
+ 
++		/*
++		 * Non-local task_work will be run on exit to userspace, but
++		 * if we're using DEFER_TASKRUN, then we could have waited
++		 * with a timeout for a number of requests. If the timeout
++		 * hits, we could have some requests ready to process. Ensure
++		 * this break is _after_ we have run task_work, to avoid
++		 * deferring running potentially pending requests until the
++		 * next time we wait for events.
++		 */
++		if (ret < 0)
++			break;
++
+ 		check_cq = READ_ONCE(ctx->check_cq);
+ 		if (unlikely(check_cq)) {
+ 			/* let the caller flush overflows, retry */
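
On the nr_tw change: -1U is the maximum unsigned value, but once such a
sentinel meets a signed int (for instance an atomic_t holding the wait
threshold) it converts to -1 and compares smaller than everything instead of
larger, while INT_MAX stays large under both signed and unsigned comparison.
A stand-alone illustration of that hazard (the -1U-to-int conversion shown is
the usual two's-complement behavior, stated here as an assumption):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned int sentinel = -1U;
	int nr_wait = 128;
	int stored = (int)sentinel;	/* -1 on two's-complement targets */

	printf("stored=%d, stored >= nr_wait: %d\n",
	       stored, stored >= nr_wait);	/* -1, 0: sentinel failed */

	stored = INT_MAX;
	printf("stored=%d, stored >= nr_wait: %d\n",
	       stored, stored >= nr_wait);	/* 2147483647, 1 */
	return 0;
}
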
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 64390d4e20c18..743732fba7a45 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -589,15 +589,19 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
+ 	struct iovec *iov;
+ 	int ret;
+ 
++	iorw->bytes_done = 0;
++	iorw->free_iovec = NULL;
++
+ 	/* submission path, ->uring_lock should already be taken */
+ 	ret = io_import_iovec(rw, req, &iov, &iorw->s, 0);
+ 	if (unlikely(ret < 0))
+ 		return ret;
+ 
+-	iorw->bytes_done = 0;
+-	iorw->free_iovec = iov;
+-	if (iov)
++	if (iov) {
++		iorw->free_iovec = iov;
+ 		req->flags |= REQ_F_NEED_CLEANUP;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index c85ff9162a5cd..9bfad7e969131 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -867,7 +867,7 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
+ 	}
+ 
+ 	if (old_ptr)
+-		map->ops->map_fd_put_ptr(old_ptr);
++		map->ops->map_fd_put_ptr(map, old_ptr, true);
+ 	return 0;
+ }
+ 
+@@ -890,7 +890,7 @@ static long fd_array_map_delete_elem(struct bpf_map *map, void *key)
+ 	}
+ 
+ 	if (old_ptr) {
+-		map->ops->map_fd_put_ptr(old_ptr);
++		map->ops->map_fd_put_ptr(map, old_ptr, true);
+ 		return 0;
+ 	} else {
+ 		return -ENOENT;
+@@ -913,8 +913,9 @@ static void *prog_fd_array_get_ptr(struct bpf_map *map,
+ 	return prog;
+ }
+ 
+-static void prog_fd_array_put_ptr(void *ptr)
++static void prog_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
++	/* bpf_prog is freed after one RCU or tasks trace grace period */
+ 	bpf_prog_put(ptr);
+ }
+ 
+@@ -1201,8 +1202,9 @@ err_out:
+ 	return ee;
+ }
+ 
+-static void perf_event_fd_array_put_ptr(void *ptr)
++static void perf_event_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
++	/* bpf_perf_event is freed after one RCU grace period */
+ 	bpf_event_entry_free_rcu(ptr);
+ }
+ 
+@@ -1256,7 +1258,7 @@ static void *cgroup_fd_array_get_ptr(struct bpf_map *map,
+ 	return cgroup_get_from_fd(fd);
+ }
+ 
+-static void cgroup_fd_array_put_ptr(void *ptr)
++static void cgroup_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
+ 	/* cgroup_put free cgrp after a rcu grace period */
+ 	cgroup_put(ptr);
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index fd8d4b0addfca..5b9146fa825fd 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -897,7 +897,7 @@ static void htab_put_fd_value(struct bpf_htab *htab, struct htab_elem *l)
+ 
+ 	if (map->ops->map_fd_put_ptr) {
+ 		ptr = fd_htab_map_get_ptr(map, l);
+-		map->ops->map_fd_put_ptr(ptr);
++		map->ops->map_fd_put_ptr(map, ptr, true);
+ 	}
+ }
+ 
+@@ -2484,7 +2484,7 @@ static void fd_htab_map_free(struct bpf_map *map)
+ 		hlist_nulls_for_each_entry_safe(l, n, head, hash_node) {
+ 			void *ptr = fd_htab_map_get_ptr(map, l);
+ 
+-			map->ops->map_fd_put_ptr(ptr);
++			map->ops->map_fd_put_ptr(map, ptr, false);
+ 		}
+ 	}
+ 
+@@ -2525,7 +2525,7 @@ int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
+ 
+ 	ret = htab_map_update_elem(map, key, &ptr, map_flags);
+ 	if (ret)
+-		map->ops->map_fd_put_ptr(ptr);
++		map->ops->map_fd_put_ptr(map, ptr, false);
+ 
+ 	return ret;
+ }
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 56b0c1f678ee7..6950f04616349 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -2520,7 +2520,7 @@ BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
+ BTF_ID_FLAGS(func, bpf_percpu_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
+ BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE)
+ BTF_ID_FLAGS(func, bpf_percpu_obj_drop_impl, KF_RELEASE)
+-BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE | KF_RET_NULL)
++BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE | KF_RET_NULL | KF_RCU)
+ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
+ BTF_ID_FLAGS(func, bpf_list_push_back_impl)
+ BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 17c7e7782a1f7..b32be680da6cd 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -231,6 +231,9 @@ static void *trie_lookup_elem(struct bpf_map *map, void *_key)
+ 	struct lpm_trie_node *node, *found = NULL;
+ 	struct bpf_lpm_trie_key *key = _key;
+ 
++	if (key->prefixlen > trie->max_prefixlen)
++		return NULL;
++
+ 	/* Start walking the trie from the root node ... */
+ 
+ 	for (node = rcu_dereference_check(trie->root, rcu_read_lock_bh_held());
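
The lpm_trie guard simply rejects lookup keys whose prefixlen exceeds the
trie's configured maximum instead of starting a walk with an oversized length.
The shape of the check, with stand-in structures:

#include <stdio.h>

struct trie { unsigned int max_prefixlen; };
struct key  { unsigned int prefixlen; unsigned char data[4]; };

static const char *lookup(const struct trie *t, const struct key *k)
{
	if (k->prefixlen > t->max_prefixlen)
		return NULL;	/* malformed key: fail closed */
	/* ... the normal longest-prefix walk would go here ... */
	return "walk";
}

int main(void)
{
	struct trie t = { .max_prefixlen = 32 };
	struct key bad = { .prefixlen = 2048 };
	struct key ok  = { .prefixlen = 24 };

	printf("%s\n", lookup(&t, &bad) ? "walk" : "rejected");
	printf("%s\n", lookup(&t, &ok)  ? "walk" : "rejected");
	return 0;
}
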
+diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c
+index cd5eafaba97e2..3248ff5d81617 100644
+--- a/kernel/bpf/map_in_map.c
++++ b/kernel/bpf/map_in_map.c
+@@ -127,12 +127,17 @@ void *bpf_map_fd_get_ptr(struct bpf_map *map,
+ 	return inner_map;
+ }
+ 
+-void bpf_map_fd_put_ptr(void *ptr)
++void bpf_map_fd_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
+-	/* ptr->ops->map_free() has to go through one
+-	 * rcu grace period by itself.
++	struct bpf_map *inner_map = ptr;
++
++	/* The inner map may still be used by both non-sleepable and sleepable
++	 * bpf programs, so free it after one RCU grace period and one tasks
++	 * trace RCU grace period.
+ 	 */
+-	bpf_map_put(ptr);
++	if (need_defer)
++		WRITE_ONCE(inner_map->free_after_mult_rcu_gp, true);
++	bpf_map_put(inner_map);
+ }
+ 
+ u32 bpf_map_fd_sys_lookup_elem(void *ptr)
+diff --git a/kernel/bpf/map_in_map.h b/kernel/bpf/map_in_map.h
+index bcb7534afb3c0..7d61602354de8 100644
+--- a/kernel/bpf/map_in_map.h
++++ b/kernel/bpf/map_in_map.h
+@@ -13,7 +13,7 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd);
+ void bpf_map_meta_free(struct bpf_map *map_meta);
+ void *bpf_map_fd_get_ptr(struct bpf_map *map, struct file *map_file,
+ 			 int ufd);
+-void bpf_map_fd_put_ptr(void *ptr);
++void bpf_map_fd_put_ptr(struct bpf_map *map, void *ptr, bool need_defer);
+ u32 bpf_map_fd_sys_lookup_elem(void *ptr);
+ 
+ #endif
+diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
+index 6a51cfe4c2d63..aa0fbf000a12b 100644
+--- a/kernel/bpf/memalloc.c
++++ b/kernel/bpf/memalloc.c
+@@ -490,27 +490,6 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
+ 	alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
+ }
+ 
+-static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
+-{
+-	struct llist_node *first;
+-	unsigned int obj_size;
+-
+-	first = c->free_llist.first;
+-	if (!first)
+-		return 0;
+-
+-	if (c->percpu_size)
+-		obj_size = pcpu_alloc_size(((void **)first)[1]);
+-	else
+-		obj_size = ksize(first);
+-	if (obj_size != c->unit_size) {
+-		WARN_ONCE(1, "bpf_mem_cache[%u]: percpu %d, unexpected object size %u, expect %u\n",
+-			  idx, c->percpu_size, obj_size, c->unit_size);
+-		return -EINVAL;
+-	}
+-	return 0;
+-}
+-
+ /* When size != 0 bpf_mem_cache for each cpu.
+  * This is typical bpf hash map use case when all elements have equal size.
+  *
+@@ -521,10 +500,10 @@ static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
+ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
+ {
+ 	static u16 sizes[NUM_CACHES] = {96, 192, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096};
+-	int cpu, i, err, unit_size, percpu_size = 0;
+ 	struct bpf_mem_caches *cc, __percpu *pcc;
+ 	struct bpf_mem_cache *c, __percpu *pc;
+ 	struct obj_cgroup *objcg = NULL;
++	int cpu, i, unit_size, percpu_size = 0;
+ 
+ 	/* room for llist_node and per-cpu pointer */
+ 	if (percpu)
+@@ -560,7 +539,6 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
+ 	pcc = __alloc_percpu_gfp(sizeof(*cc), 8, GFP_KERNEL);
+ 	if (!pcc)
+ 		return -ENOMEM;
+-	err = 0;
+ #ifdef CONFIG_MEMCG_KMEM
+ 	objcg = get_obj_cgroup_from_current();
+ #endif
+@@ -574,28 +552,12 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
+ 			c->tgt = c;
+ 
+ 			init_refill_work(c);
+-			/* Another bpf_mem_cache will be used when allocating
+-			 * c->unit_size in bpf_mem_alloc(), so doesn't prefill
+-			 * for the bpf_mem_cache because these free objects will
+-			 * never be used.
+-			 */
+-			if (i != bpf_mem_cache_idx(c->unit_size))
+-				continue;
+ 			prefill_mem_cache(c, cpu);
+-			err = check_obj_size(c, i);
+-			if (err)
+-				goto out;
+ 		}
+ 	}
+ 
+-out:
+ 	ma->caches = pcc;
+-	/* refill_work is either zeroed or initialized, so it is safe to
+-	 * call irq_work_sync().
+-	 */
+-	if (err)
+-		bpf_mem_alloc_destroy(ma);
+-	return err;
++	return 0;
+ }
+ 
+ static void drain_mem_cache(struct bpf_mem_cache *c)
+@@ -869,7 +831,7 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
+ 	void *ret;
+ 
+ 	if (!size)
+-		return ZERO_SIZE_PTR;
++		return NULL;
+ 
+ 	idx = bpf_mem_cache_idx(size + LLIST_NODE_SZ);
+ 	if (idx < 0)
+@@ -879,26 +841,17 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
+ 	return !ret ? NULL : ret + LLIST_NODE_SZ;
+ }
+ 
+-static notrace int bpf_mem_free_idx(void *ptr, bool percpu)
+-{
+-	size_t size;
+-
+-	if (percpu)
+-		size = pcpu_alloc_size(*((void **)ptr));
+-	else
+-		size = ksize(ptr - LLIST_NODE_SZ);
+-	return bpf_mem_cache_idx(size);
+-}
+-
+ void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
+ {
++	struct bpf_mem_cache *c;
+ 	int idx;
+ 
+ 	if (!ptr)
+ 		return;
+ 
+-	idx = bpf_mem_free_idx(ptr, ma->percpu);
+-	if (idx < 0)
++	c = *(void **)(ptr - LLIST_NODE_SZ);
++	idx = bpf_mem_cache_idx(c->unit_size);
++	if (WARN_ON_ONCE(idx < 0))
+ 		return;
+ 
+ 	unit_free(this_cpu_ptr(ma->caches)->cache + idx, ptr);
+@@ -906,13 +859,15 @@ void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
+ 
+ void notrace bpf_mem_free_rcu(struct bpf_mem_alloc *ma, void *ptr)
+ {
++	struct bpf_mem_cache *c;
+ 	int idx;
+ 
+ 	if (!ptr)
+ 		return;
+ 
+-	idx = bpf_mem_free_idx(ptr, ma->percpu);
+-	if (idx < 0)
++	c = *(void **)(ptr - LLIST_NODE_SZ);
++	idx = bpf_mem_cache_idx(c->unit_size);
++	if (WARN_ON_ONCE(idx < 0))
+ 		return;
+ 
+ 	unit_free_rcu(this_cpu_ptr(ma->caches)->cache + idx, ptr);
+@@ -986,41 +941,3 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
+ 
+ 	return !ret ? NULL : ret + LLIST_NODE_SZ;
+ }
+-
+-/* The alignment of dynamic per-cpu area is 8, so c->unit_size and the
+- * actual size of dynamic per-cpu area will always be matched and there is
+- * no need to adjust size_index for per-cpu allocation. However for the
+- * simplicity of the implementation, use an unified size_index for both
+- * kmalloc and per-cpu allocation.
+- */
+-static __init int bpf_mem_cache_adjust_size(void)
+-{
+-	unsigned int size;
+-
+-	/* Adjusting the indexes in size_index() according to the object_size
+-	 * of underlying slab cache, so bpf_mem_alloc() will select a
+-	 * bpf_mem_cache with unit_size equal to the object_size of
+-	 * the underlying slab cache.
+-	 *
+-	 * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
+-	 * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
+-	 */
+-	for (size = 192; size >= 8; size -= 8) {
+-		unsigned int kmalloc_size, index;
+-
+-		kmalloc_size = kmalloc_size_roundup(size);
+-		if (kmalloc_size == size)
+-			continue;
+-
+-		if (kmalloc_size <= 192)
+-			index = size_index[(kmalloc_size - 1) / 8];
+-		else
+-			index = fls(kmalloc_size - 1) - 1;
+-		/* Only overwrite if necessary */
+-		if (size_index[(size - 1) / 8] != index)
+-			size_index[(size - 1) / 8] = index;
+-	}
+-
+-	return 0;
+-}
+-subsys_initcall(bpf_mem_cache_adjust_size);
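
The bpf_mem_free() rework relies on the allocator having stashed the owning
cache pointer in the hidden llist-node-sized header in front of every object,
so the size class comes from c->unit_size instead of being recomputed via
ksize(). A userspace analogue with an explicit one-pointer header:

#include <stdio.h>
#include <stdlib.h>

struct cache { unsigned int unit_size; };

static void *cache_alloc(struct cache *c)
{
	void **hdr = malloc(sizeof(void *) + c->unit_size);

	if (!hdr)
		return NULL;
	hdr[0] = c;		/* hidden back-pointer to the cache */
	return hdr + 1;		/* caller sees only the payload */
}

static void cache_free(void *ptr)
{
	void **hdr = (void **)ptr - 1;
	struct cache *c = hdr[0];

	printf("freeing via cache with unit_size=%u\n", c->unit_size);
	free(hdr);
}

int main(void)
{
	struct cache c = { .unit_size = 64 };
	void *p = cache_alloc(&c);

	if (p)
		cache_free(p);
	return 0;
}
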
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index d6b277482085c..dff7ba5397015 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -388,6 +388,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ {
+ 	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
+ 	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
++	bool crosstask = task && task != current;
+ 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ 	bool user = flags & BPF_F_USER_STACK;
+ 	struct perf_callchain_entry *trace;
+@@ -410,6 +411,14 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 	if (task && user && !user_mode(regs))
+ 		goto err_fault;
+ 
++	/* get_perf_callchain does not support crosstask user stack walking
++	 * but returns an empty stack instead of NULL.
++	 */
++	if (crosstask && user) {
++		err = -EOPNOTSUPP;
++		goto clear;
++	}
++
+ 	num_elem = size / elem_size;
+ 	max_depth = num_elem + skip;
+ 	if (sysctl_perf_event_max_stack < max_depth)
+@@ -421,7 +430,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ 		trace = get_callchain_entry_for_task(task, max_depth);
+ 	else
+ 		trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
+-					   false, false);
++					   crosstask, false);
+ 	if (unlikely(!trace))
+ 		goto err_fault;
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 0ed286b8a0f0f..58e34ff811971 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -694,6 +694,7 @@ static void bpf_map_free_deferred(struct work_struct *work)
+ {
+ 	struct bpf_map *map = container_of(work, struct bpf_map, work);
+ 	struct btf_record *rec = map->record;
++	struct btf *btf = map->btf;
+ 
+ 	security_bpf_map_free(map);
+ 	bpf_map_release_memcg(map);
+@@ -709,6 +710,10 @@ static void bpf_map_free_deferred(struct work_struct *work)
+ 	 * template bpf_map struct used during verification.
+ 	 */
+ 	btf_record_free(rec);
++	/* Delay freeing of btf for maps, as map_free callback may need
++	 * struct_meta info which will be freed with btf_put().
++	 */
++	btf_put(btf);
+ }
+ 
+ static void bpf_map_put_uref(struct bpf_map *map)
+@@ -719,6 +724,28 @@ static void bpf_map_put_uref(struct bpf_map *map)
+ 	}
+ }
+ 
++static void bpf_map_free_in_work(struct bpf_map *map)
++{
++	INIT_WORK(&map->work, bpf_map_free_deferred);
++	/* Avoid spawning kworkers, since they all might contend
++	 * for the same mutex like slab_mutex.
++	 */
++	queue_work(system_unbound_wq, &map->work);
++}
++
++static void bpf_map_free_rcu_gp(struct rcu_head *rcu)
++{
++	bpf_map_free_in_work(container_of(rcu, struct bpf_map, rcu));
++}
++
++static void bpf_map_free_mult_rcu_gp(struct rcu_head *rcu)
++{
++	if (rcu_trace_implies_rcu_gp())
++		bpf_map_free_rcu_gp(rcu);
++	else
++		call_rcu(rcu, bpf_map_free_rcu_gp);
++}
++
+ /* decrement map refcnt and schedule it for freeing via workqueue
+  * (underlying map implementation ops->map_free() might sleep)
+  */
+@@ -727,12 +754,11 @@ void bpf_map_put(struct bpf_map *map)
+ 	if (atomic64_dec_and_test(&map->refcnt)) {
+ 		/* bpf_map_free_id() must be called first */
+ 		bpf_map_free_id(map);
+-		btf_put(map->btf);
+-		INIT_WORK(&map->work, bpf_map_free_deferred);
+-		/* Avoid spawning kworkers, since they all might contend
+-		 * for the same mutex like slab_mutex.
+-		 */
+-		queue_work(system_unbound_wq, &map->work);
++
++		if (READ_ONCE(map->free_after_mult_rcu_gp))
++			call_rcu_tasks_trace(&map->rcu, bpf_map_free_mult_rcu_gp);
++		else
++			bpf_map_free_in_work(map);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(bpf_map_put);
+@@ -3179,6 +3205,10 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
+ 	 *
+ 	 * - if prog->aux->dst_trampoline and tgt_prog is NULL, the program
+ 	 *   was detached and is going for re-attachment.
++	 *
++	 * - if prog->aux->dst_trampoline is NULL and tgt_prog and prog->aux->attach_btf
++	 *   are NULL, then the program was already attached and the user did not
++	 *   provide tgt_prog_fd, so we have no way to find or create the trampoline
+ 	 */
+ 	if (!prog->aux->dst_trampoline && !tgt_prog) {
+ 		/*
+@@ -3192,6 +3222,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
+ 			err = -EINVAL;
+ 			goto out_unlock;
+ 		}
++		/* We can allow re-attach only if we have valid attach_btf. */
++		if (!prog->aux->attach_btf) {
++			err = -EINVAL;
++			goto out_unlock;
++		}
+ 		btf_id = prog->aux->attach_btf_id;
+ 		key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, btf_id);
+ 	}
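
The bpf_map_put() path above chooses between an immediate workqueue free and a
chained deferral: first a tasks-trace RCU grace period, then, unless that
grace period already implies a plain RCU grace period, one more call_rcu()
hop, and only then the workqueue free. A userspace model in which the grace
periods are collapsed into direct callback invocations, purely to show the
chaining order:

#include <stdbool.h>
#include <stdio.h>

struct map { bool free_after_mult_rcu_gp; };

/* probed at runtime in the kernel; a compile-time stand-in here */
static const bool tasks_trace_gp_implies_rcu_gp = false;

static void free_in_work(struct map *m)
{
	(void)m;
	puts("freed from workqueue");
}

static void after_rcu(struct map *m)
{
	puts("plain RCU GP elapsed");
	free_in_work(m);
}

static void after_tasks_trace(struct map *m)
{
	puts("tasks-trace RCU GP elapsed");
	if (tasks_trace_gp_implies_rcu_gp)
		free_in_work(m);	/* one GP covered both flavors */
	else
		after_rcu(m);		/* chain one more plain RCU GP */
}

static void map_put_last_ref(struct map *m)
{
	if (m->free_after_mult_rcu_gp)
		after_tasks_trace(m);	/* models call_rcu_tasks_trace() */
	else
		free_in_work(m);
}

int main(void)
{
	struct map m = { .free_after_mult_rcu_gp = true };

	map_put_last_ref(&m);
	return 0;
}
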
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index af2819d5c8ee7..e215413c79a52 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1685,7 +1685,10 @@ static int resize_reference_state(struct bpf_func_state *state, size_t n)
+ 	return 0;
+ }
+ 
+-static int grow_stack_state(struct bpf_func_state *state, int size)
++/* Possibly update state->allocated_stack to be at least size bytes. Also
++ * possibly update the function's high-water mark in its bpf_subprog_info.
++ */
++static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state *state, int size)
+ {
+ 	size_t old_n = state->allocated_stack / BPF_REG_SIZE, n = size / BPF_REG_SIZE;
+ 
+@@ -1697,6 +1700,11 @@ static int grow_stack_state(struct bpf_func_state *state, int size)
+ 		return -ENOMEM;
+ 
+ 	state->allocated_stack = size;
++
++	/* update known max for given subprogram */
++	if (env->subprog_info[state->subprogno].stack_depth < size)
++		env->subprog_info[state->subprogno].stack_depth = size;
++
+ 	return 0;
+ }
+ 
+@@ -4669,14 +4677,11 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 	struct bpf_reg_state *reg = NULL;
+ 	u32 dst_reg = insn->dst_reg;
+ 
+-	err = grow_stack_state(state, round_up(slot + 1, BPF_REG_SIZE));
+-	if (err)
+-		return err;
+ 	/* caller checked that off % size == 0 and -MAX_BPF_STACK <= off < 0,
+ 	 * so it's aligned access and [off, off + size) are within stack limits
+ 	 */
+ 	if (!env->allow_ptr_leaks &&
+-	    state->stack[spi].slot_type[0] == STACK_SPILL &&
++	    is_spilled_reg(&state->stack[spi]) &&
+ 	    size != BPF_REG_SIZE) {
+ 		verbose(env, "attempt to corrupt spilled pointer on stack\n");
+ 		return -EACCES;
+@@ -4827,10 +4832,6 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
+ 	    (!value_reg && is_bpf_st_mem(insn) && insn->imm == 0))
+ 		writing_zero = true;
+ 
+-	err = grow_stack_state(state, round_up(-min_off, BPF_REG_SIZE));
+-	if (err)
+-		return err;
+-
+ 	for (i = min_off; i < max_off; i++) {
+ 		int spi;
+ 
+@@ -5959,20 +5960,6 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
+ 					   strict);
+ }
+ 
+-static int update_stack_depth(struct bpf_verifier_env *env,
+-			      const struct bpf_func_state *func,
+-			      int off)
+-{
+-	u16 stack = env->subprog_info[func->subprogno].stack_depth;
+-
+-	if (stack >= -off)
+-		return 0;
+-
+-	/* update known max for given subprogram */
+-	env->subprog_info[func->subprogno].stack_depth = -off;
+-	return 0;
+-}
+-
+ /* starting from main bpf function walk all instructions of the function
+  * and recursively walk all callees that given function can call.
+  * Ignore jump and exit insns.
+@@ -6761,13 +6748,14 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
+  * The minimum valid offset is -MAX_BPF_STACK for writes, and
+  * -state->allocated_stack for reads.
+  */
+-static int check_stack_slot_within_bounds(int off,
+-					  struct bpf_func_state *state,
+-					  enum bpf_access_type t)
++static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
++                                          s64 off,
++                                          struct bpf_func_state *state,
++                                          enum bpf_access_type t)
+ {
+ 	int min_valid_off;
+ 
+-	if (t == BPF_WRITE)
++	if (t == BPF_WRITE || env->allow_uninit_stack)
+ 		min_valid_off = -MAX_BPF_STACK;
+ 	else
+ 		min_valid_off = -state->allocated_stack;
+@@ -6790,7 +6778,7 @@ static int check_stack_access_within_bounds(
+ 	struct bpf_reg_state *regs = cur_regs(env);
+ 	struct bpf_reg_state *reg = regs + regno;
+ 	struct bpf_func_state *state = func(env, reg);
+-	int min_off, max_off;
++	s64 min_off, max_off;
+ 	int err;
+ 	char *err_extra;
+ 
+@@ -6803,11 +6791,8 @@ static int check_stack_access_within_bounds(
+ 		err_extra = " write to";
+ 
+ 	if (tnum_is_const(reg->var_off)) {
+-		min_off = reg->var_off.value + off;
+-		if (access_size > 0)
+-			max_off = min_off + access_size - 1;
+-		else
+-			max_off = min_off;
++		min_off = (s64)reg->var_off.value + off;
++		max_off = min_off + access_size;
+ 	} else {
+ 		if (reg->smax_value >= BPF_MAX_VAR_OFF ||
+ 		    reg->smin_value <= -BPF_MAX_VAR_OFF) {
+@@ -6816,15 +6801,12 @@ static int check_stack_access_within_bounds(
+ 			return -EACCES;
+ 		}
+ 		min_off = reg->smin_value + off;
+-		if (access_size > 0)
+-			max_off = reg->smax_value + off + access_size - 1;
+-		else
+-			max_off = min_off;
++		max_off = reg->smax_value + off + access_size;
+ 	}
+ 
+-	err = check_stack_slot_within_bounds(min_off, state, type);
+-	if (!err)
+-		err = check_stack_slot_within_bounds(max_off, state, type);
++	err = check_stack_slot_within_bounds(env, min_off, state, type);
++	if (!err && max_off > 0)
++		err = -EINVAL; /* out of stack access into non-negative offsets */
+ 
+ 	if (err) {
+ 		if (tnum_is_const(reg->var_off)) {
+@@ -6837,8 +6819,10 @@ static int check_stack_access_within_bounds(
+ 			verbose(env, "invalid variable-offset%s stack R%d var_off=%s size=%d\n",
+ 				err_extra, regno, tn_buf, access_size);
+ 		}
++		return err;
+ 	}
+-	return err;
++
++	return grow_stack_state(env, state, round_up(-min_off, BPF_REG_SIZE));
+ }
+ 
+ /* check whether memory at (regno + off) is accessible for t = (read | write)
+@@ -6853,7 +6837,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ {
+ 	struct bpf_reg_state *regs = cur_regs(env);
+ 	struct bpf_reg_state *reg = regs + regno;
+-	struct bpf_func_state *state;
+ 	int size, err = 0;
+ 
+ 	size = bpf_size_to_bytes(bpf_size);
+@@ -6996,11 +6979,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 		if (err)
+ 			return err;
+ 
+-		state = func(env, reg);
+-		err = update_stack_depth(env, state, off);
+-		if (err)
+-			return err;
+-
+ 		if (t == BPF_READ)
+ 			err = check_stack_read(env, regno, off, size,
+ 					       value_regno);
+@@ -7195,7 +7173,8 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
+ 
+ /* When register 'regno' is used to read the stack (either directly or through
+  * a helper function) make sure that it's within stack boundary and, depending
+- * on the access type, that all elements of the stack are initialized.
++ * on the access type and privileges, that all elements of the stack are
++ * initialized.
+  *
+  * 'off' includes 'regno->off', but not its dynamic part (if any).
+  *
+@@ -7303,8 +7282,11 @@ static int check_stack_range_initialized(
+ 
+ 		slot = -i - 1;
+ 		spi = slot / BPF_REG_SIZE;
+-		if (state->allocated_stack <= slot)
+-			goto err;
++		if (state->allocated_stack <= slot) {
++			verbose(env, "verifier bug: allocated_stack too small");
++			return -EFAULT;
++		}
++
+ 		stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE];
+ 		if (*stype == STACK_MISC)
+ 			goto mark;
+@@ -7328,7 +7310,6 @@ static int check_stack_range_initialized(
+ 			goto mark;
+ 		}
+ 
+-err:
+ 		if (tnum_is_const(reg->var_off)) {
+ 			verbose(env, "invalid%s read from stack R%d off %d+%d size %d\n",
+ 				err_extra, regno, min_off, i - min_off, access_size);
+@@ -7353,7 +7334,7 @@ mark:
+ 		 * helper may write to the entire memory range.
+ 		 */
+ 	}
+-	return update_stack_depth(env, state, min_off);
++	return 0;
+ }
+ 
+ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+@@ -9829,6 +9810,13 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+ 			verbose(env, "R0 not a scalar value\n");
+ 			return -EACCES;
+ 		}
++
++		/* we are going to rely on register's precise value */
++		err = mark_reg_read(env, r0, r0->parent, REG_LIVE_READ64);
++		err = err ?: mark_chain_precision(env, BPF_REG_0);
++		if (err)
++			return err;
++
+ 		if (!tnum_in(range, r0->var_off)) {
+ 			verbose_invalid_scalar(env, r0, &range, "callback return", "R0");
+ 			return -EINVAL;
+@@ -12866,6 +12854,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 	}
+ 
+ 	switch (base_type(ptr_reg->type)) {
++	case PTR_TO_FLOW_KEYS:
++		if (known)
++			break;
++		fallthrough;
+ 	case CONST_PTR_TO_MAP:
+ 		/* smin_val represents the known value */
+ 		if (known && smin_val == 0 && opcode == BPF_ADD)
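
The widening of min_off/max_off to s64 in the verifier matters because
reg->var_off.value is a u64: added to a negative instruction offset in 32-bit
arithmetic it can wrap into what looks like a valid negative stack slot. A
stand-alone illustration (the u64-to-int32 conversion shown is the usual
two's-complement truncation, stated as an assumption):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t var_off_value = 0x100000000ULL;	/* large "known" value */
	int off = -8, access_size = 8;

	int32_t min32 = (int32_t)(var_off_value + off);	/* wraps to -8 */
	int64_t min64 = (int64_t)var_off_value + off;	/* 4294967288 */

	printf("32-bit min_off: %d (looks like a plausible stack slot)\n",
	       min32);
	printf("64-bit min_off: %lld (far outside the stack, rejected)\n",
	       (long long)min64);

	/* the rewritten check also rejects accesses ending above offset 0 */
	printf("max_off > 0: %d\n", (long long)(min64 + access_size) > 0);
	return 0;
}
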
+diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
+index 6b213c8252d62..d05066cb40b2e 100644
+--- a/kernel/debug/kdb/kdb_main.c
++++ b/kernel/debug/kdb/kdb_main.c
+@@ -1348,8 +1348,6 @@ do_full_getstr:
+ 		/* PROMPT can only be set if we have MEM_READ permission. */
+ 		snprintf(kdb_prompt_str, CMD_BUFLEN, kdbgetenv("PROMPT"),
+ 			 raw_smp_processor_id());
+-		if (defcmd_in_progress)
+-			strncat(kdb_prompt_str, "[defcmd]", CMD_BUFLEN);
+ 
+ 		/*
+ 		 * Fetch command from keyboard
+diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
+index c21abc77c53e9..ff5683a57f771 100644
+--- a/kernel/dma/coherent.c
++++ b/kernel/dma/coherent.c
+@@ -132,8 +132,10 @@ int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+ 
+ void dma_release_coherent_memory(struct device *dev)
+ {
+-	if (dev)
++	if (dev) {
+ 		_dma_release_coherent_memory(dev->dma_mem);
++		dev->dma_mem = NULL;
++	}
+ }
+ 
+ static void *__dma_alloc_from_coherent(struct device *dev,
+diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug
+index 2984de629f749..9b0b52e1836fa 100644
+--- a/kernel/rcu/Kconfig.debug
++++ b/kernel/rcu/Kconfig.debug
+@@ -105,6 +105,31 @@ config RCU_CPU_STALL_CPUTIME
+ 	  The boot option rcupdate.rcu_cpu_stall_cputime has the same function
+ 	  as this one, but will override this if it exists.
+ 
++config RCU_CPU_STALL_NOTIFIER
++	bool "Provide RCU CPU-stall notifiers"
++	depends on RCU_STALL_COMMON
++	depends on DEBUG_KERNEL
++	depends on RCU_EXPERT
++	default n
++	help
++	  WARNING:  You almost certainly do not want this!!!
++
++	  Enable RCU CPU-stall notifiers, which are invoked just before
++	  printing the RCU CPU stall warning.  As such, bugs in notifier
++	  callbacks can prevent stall warnings from being printed.
++	  And the whole reason that a stall warning is being printed is
++	  that something is hung up somewhere.	Therefore, the notifier
++	  callbacks must be written extremely carefully, preferably
++	  containing only lockless code.  After all, it is quite possible
++	  that the whole reason that the RCU CPU stall is happening in
++	  the first place is that someone forgot to release whatever lock
++	  that you are thinking of acquiring.  In which case, having your
++	  notifier callback acquire that lock will hang, preventing the
++	  RCU CPU stall warning from appearing.
++
++	  Say Y here if you want RCU CPU stall notifiers (you don't want them).
++	  Say N if you are unsure.
++
+ config RCU_TRACE
+ 	bool "Enable tracing for RCU"
+ 	depends on DEBUG_KERNEL
+diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
+index b531c33e9545b..f94f65877f2b6 100644
+--- a/kernel/rcu/rcu.h
++++ b/kernel/rcu/rcu.h
+@@ -262,6 +262,8 @@ static inline bool rcu_stall_is_suppressed_at_boot(void)
+ 	return rcu_cpu_stall_suppress_at_boot && !rcu_inkernel_boot_has_ended();
+ }
+ 
++extern int rcu_cpu_stall_notifiers;
++
+ #ifdef CONFIG_RCU_STALL_COMMON
+ 
+ extern int rcu_cpu_stall_ftrace_dump;
+@@ -659,10 +661,10 @@ static inline bool rcu_cpu_beenfullyonline(int cpu) { return true; }
+ bool rcu_cpu_beenfullyonline(int cpu);
+ #endif
+ 
+-#ifdef CONFIG_RCU_STALL_COMMON
++#if defined(CONFIG_RCU_STALL_COMMON) && defined(CONFIG_RCU_CPU_STALL_NOTIFIER)
+ int rcu_stall_notifier_call_chain(unsigned long val, void *v);
+-#else // #ifdef CONFIG_RCU_STALL_COMMON
++#else // #if defined(CONFIG_RCU_STALL_COMMON) && defined(CONFIG_RCU_CPU_STALL_NOTIFIER)
+ static inline int rcu_stall_notifier_call_chain(unsigned long val, void *v) { return NOTIFY_DONE; }
+-#endif // #else // #ifdef CONFIG_RCU_STALL_COMMON
++#endif // #else // #if defined(CONFIG_RCU_STALL_COMMON) && defined(CONFIG_RCU_CPU_STALL_NOTIFIER)
+ 
+ #endif /* __LINUX_RCU_H */
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 30fc9d34e3297..07a6a183c5556 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -2450,10 +2450,12 @@ static int rcu_torture_stall(void *args)
+ 	unsigned long stop_at;
+ 
+ 	VERBOSE_TOROUT_STRING("rcu_torture_stall task started");
+-	ret = rcu_stall_chain_notifier_register(&rcu_torture_stall_block);
+-	if (ret)
+-		pr_info("%s: rcu_stall_chain_notifier_register() returned %d, %sexpected.\n",
+-			__func__, ret, !IS_ENABLED(CONFIG_RCU_STALL_COMMON) ? "un" : "");
++	if (rcu_cpu_stall_notifiers) {
++		ret = rcu_stall_chain_notifier_register(&rcu_torture_stall_block);
++		if (ret)
++			pr_info("%s: rcu_stall_chain_notifier_register() returned %d, %sexpected.\n",
++				__func__, ret, !IS_ENABLED(CONFIG_RCU_STALL_COMMON) ? "un" : "");
++	}
+ 	if (stall_cpu_holdoff > 0) {
+ 		VERBOSE_TOROUT_STRING("rcu_torture_stall begin holdoff");
+ 		schedule_timeout_interruptible(stall_cpu_holdoff * HZ);
+@@ -2497,7 +2499,7 @@ static int rcu_torture_stall(void *args)
+ 		cur_ops->readunlock(idx);
+ 	}
+ 	pr_alert("%s end.\n", __func__);
+-	if (!ret) {
++	if (rcu_cpu_stall_notifiers && !ret) {
+ 		ret = rcu_stall_chain_notifier_unregister(&rcu_torture_stall_block);
+ 		if (ret)
+ 			pr_info("%s: rcu_stall_chain_notifier_unregister() returned %d.\n", __func__, ret);
+diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
+index ac8e86babe449..5d666428546b0 100644
+--- a/kernel/rcu/tree_stall.h
++++ b/kernel/rcu/tree_stall.h
+@@ -1061,6 +1061,7 @@ static int __init rcu_sysrq_init(void)
+ }
+ early_initcall(rcu_sysrq_init);
+ 
++#ifdef CONFIG_RCU_CPU_STALL_NOTIFIER
+ 
+ //////////////////////////////////////////////////////////////////////////////
+ //
+@@ -1081,7 +1082,13 @@ static ATOMIC_NOTIFIER_HEAD(rcu_cpu_stall_notifier_list);
+  */
+ int rcu_stall_chain_notifier_register(struct notifier_block *n)
+ {
+-	return atomic_notifier_chain_register(&rcu_cpu_stall_notifier_list, n);
++	int rcsn = rcu_cpu_stall_notifiers;
++
++	WARN(1, "Adding %pS() to RCU stall notifier list (%s).\n", n->notifier_call,
++	     rcsn ? "possibly suppressing RCU CPU stall warnings" : "failed, so all is well");
++	if (rcsn)
++		return atomic_notifier_chain_register(&rcu_cpu_stall_notifier_list, n);
++	return -EEXIST;
+ }
+ EXPORT_SYMBOL_GPL(rcu_stall_chain_notifier_register);
+ 
+@@ -1115,3 +1122,5 @@ int rcu_stall_notifier_call_chain(unsigned long val, void *v)
+ {
+ 	return atomic_notifier_call_chain(&rcu_cpu_stall_notifier_list, val, v);
+ }
++
++#endif // #ifdef CONFIG_RCU_CPU_STALL_NOTIFIER
+diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
+index c534d6806d3d5..46aaaa9fe3390 100644
+--- a/kernel/rcu/update.c
++++ b/kernel/rcu/update.c
+@@ -538,9 +538,15 @@ long torture_sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
+ EXPORT_SYMBOL_GPL(torture_sched_setaffinity);
+ #endif
+ 
++int rcu_cpu_stall_notifiers __read_mostly; // !0 = provide stall notifiers (rarely useful)
++EXPORT_SYMBOL_GPL(rcu_cpu_stall_notifiers);
++
+ #ifdef CONFIG_RCU_STALL_COMMON
+ int rcu_cpu_stall_ftrace_dump __read_mostly;
+ module_param(rcu_cpu_stall_ftrace_dump, int, 0644);
++#ifdef CONFIG_RCU_CPU_STALL_NOTIFIER
++module_param(rcu_cpu_stall_notifiers, int, 0444);
++#endif // #ifdef CONFIG_RCU_CPU_STALL_NOTIFIER
+ int rcu_cpu_stall_suppress __read_mostly; // !0 = suppress stall warnings.
+ EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress);
+ module_param(rcu_cpu_stall_suppress, int, 0644);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index d7a3c63a2171a..4182fb118ce95 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3811,17 +3811,17 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
+ 	enqueue_load_avg(cfs_rq, se);
+ 	if (se->on_rq) {
+ 		update_load_add(&cfs_rq->load, se->load.weight);
+-		if (!curr) {
+-			/*
+-			 * The entity's vruntime has been adjusted, so let's check
+-			 * whether the rq-wide min_vruntime needs updated too. Since
+-			 * the calculations above require stable min_vruntime rather
+-			 * than up-to-date one, we do the update at the end of the
+-			 * reweight process.
+-			 */
++		if (!curr)
+ 			__enqueue_entity(cfs_rq, se);
+-			update_min_vruntime(cfs_rq);
+-		}
++
++		/*
++		 * The entity's vruntime has been adjusted, so let's check
++		 * whether the rq-wide min_vruntime needs updated too. Since
++		 * the calculations above require stable min_vruntime rather
++		 * than up-to-date one, we do the update at the end of the
++		 * reweight process.
++		 */
++		update_min_vruntime(cfs_rq);
+ 	}
+ }
+ 
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index be77b021e5d63..2305366a818a7 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -1573,13 +1573,18 @@ void tick_setup_sched_timer(void)
+ void tick_cancel_sched_timer(int cpu)
+ {
+ 	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
++	ktime_t idle_sleeptime, iowait_sleeptime;
+ 
+ # ifdef CONFIG_HIGH_RES_TIMERS
+ 	if (ts->sched_timer.base)
+ 		hrtimer_cancel(&ts->sched_timer);
+ # endif
+ 
++	idle_sleeptime = ts->idle_sleeptime;
++	iowait_sleeptime = ts->iowait_sleeptime;
+ 	memset(ts, 0, sizeof(*ts));
++	ts->idle_sleeptime = idle_sleeptime;
++	ts->iowait_sleeptime = iowait_sleeptime;
+ }
+ #endif
+ 
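
tick_cancel_sched_timer() now snapshots the sleep-time accounting around the
memset() so a CPU offline/online cycle no longer zeroes idle and iowait time.
The save/zero/restore pattern in isolation, with a stand-in struct:

#include <stdio.h>
#include <string.h>

struct tick_sched_stub {
	long timer_state;		/* must be reset */
	long long idle_sleeptime;	/* must survive */
	long long iowait_sleeptime;	/* must survive */
};

int main(void)
{
	struct tick_sched_stub ts = { 42, 1111, 2222 };
	long long idle = ts.idle_sleeptime;
	long long iowait = ts.iowait_sleeptime;

	memset(&ts, 0, sizeof(ts));
	ts.idle_sleeptime = idle;
	ts.iowait_sleeptime = iowait;

	printf("%ld %lld %lld\n", ts.timer_state,
	       ts.idle_sleeptime, ts.iowait_sleeptime);	/* 0 1111 2222 */
	return 0;
}
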
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 84e8a0f6e4e0b..652c40a14d0dc 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -41,6 +41,9 @@
+ #define bpf_event_rcu_dereference(p)					\
+ 	rcu_dereference_protected(p, lockdep_is_held(&bpf_event_mutex))
+ 
++#define MAX_UPROBE_MULTI_CNT (1U << 20)
++#define MAX_KPROBE_MULTI_CNT (1U << 20)
++
+ #ifdef CONFIG_MODULES
+ struct bpf_trace_module {
+ 	struct module *module;
+@@ -2899,6 +2902,8 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 	cnt = attr->link_create.kprobe_multi.cnt;
+ 	if (!cnt)
+ 		return -EINVAL;
++	if (cnt > MAX_KPROBE_MULTI_CNT)
++		return -E2BIG;
+ 
+ 	size = cnt * sizeof(*addrs);
+ 	addrs = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
+@@ -3202,6 +3207,8 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 
+ 	if (!upath || !uoffsets || !cnt)
+ 		return -EINVAL;
++	if (cnt > MAX_UPROBE_MULTI_CNT)
++		return -E2BIG;
+ 
+ 	uref_ctr_offsets = u64_to_user_ptr(attr->link_create.uprobe_multi.ref_ctr_offsets);
+ 	ucookies = u64_to_user_ptr(attr->link_create.uprobe_multi.cookies);
+diff --git a/lib/kunit/debugfs.c b/lib/kunit/debugfs.c
+index 270d185737e67..382706dfb47d9 100644
+--- a/lib/kunit/debugfs.c
++++ b/lib/kunit/debugfs.c
+@@ -60,12 +60,14 @@ static void debugfs_print_result(struct seq_file *seq, struct string_stream *log
+ static int debugfs_print_results(struct seq_file *seq, void *v)
+ {
+ 	struct kunit_suite *suite = (struct kunit_suite *)seq->private;
+-	enum kunit_status success = kunit_suite_has_succeeded(suite);
++	enum kunit_status success;
+ 	struct kunit_case *test_case;
+ 
+ 	if (!suite)
+ 		return 0;
+ 
++	success = kunit_suite_has_succeeded(suite);
++
+ 	/* Print KTAP header so the debugfs log can be parsed as valid KTAP. */
+ 	seq_puts(seq, "KTAP version 1\n");
+ 	seq_puts(seq, "1..1\n");
+@@ -109,14 +111,28 @@ static const struct file_operations debugfs_results_fops = {
+ void kunit_debugfs_create_suite(struct kunit_suite *suite)
+ {
+ 	struct kunit_case *test_case;
++	struct string_stream *stream;
++
++	/*
++	 * Allocate logs before creating debugfs representation.
++	 * The suite->log and test_case->log pointers are expected to be NULL
++	 * if there isn't a log, so only set it if the log stream was created
++	 * successfully.
++	 */
++	stream = alloc_string_stream(GFP_KERNEL);
++	if (IS_ERR_OR_NULL(stream))
++		return;
+ 
+-	/* Allocate logs before creating debugfs representation. */
+-	suite->log = alloc_string_stream(GFP_KERNEL);
+-	string_stream_set_append_newlines(suite->log, true);
++	string_stream_set_append_newlines(stream, true);
++	suite->log = stream;
+ 
+ 	kunit_suite_for_each_test_case(suite, test_case) {
+-		test_case->log = alloc_string_stream(GFP_KERNEL);
+-		string_stream_set_append_newlines(test_case->log, true);
++		stream = alloc_string_stream(GFP_KERNEL);
++		if (IS_ERR_OR_NULL(stream))
++			goto err;
++
++		string_stream_set_append_newlines(stream, true);
++		test_case->log = stream;
+ 	}
+ 
+ 	suite->debugfs = debugfs_create_dir(suite->name, debugfs_rootdir);
+@@ -124,6 +140,12 @@ void kunit_debugfs_create_suite(struct kunit_suite *suite)
+ 	debugfs_create_file(KUNIT_DEBUGFS_RESULTS, S_IFREG | 0444,
+ 			    suite->debugfs,
+ 			    suite, &debugfs_results_fops);
++	return;
++
++err:
++	string_stream_destroy(suite->log);
++	kunit_suite_for_each_test_case(suite, test_case)
++		string_stream_destroy(test_case->log);
+ }
+ 
+ void kunit_debugfs_destroy_suite(struct kunit_suite *suite)
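
The kunit debugfs change converts unchecked allocations into the usual
allocate-all-or-roll-back shape: each stream allocation is tested, and a
single error label tears down whatever was created so far. A userspace sketch
of the pattern using malloc/free (free(NULL) being a no-op makes the
unconditional teardown safe):

#include <stdio.h>
#include <stdlib.h>

#define NCASES 3

int main(void)
{
	char *suite_log = NULL, *case_log[NCASES] = { 0 };
	int i;

	suite_log = malloc(64);
	if (!suite_log)
		goto err;

	for (i = 0; i < NCASES; i++) {
		case_log[i] = malloc(64);
		if (!case_log[i])
			goto err;
	}

	puts("all logs allocated");
	/* ... the debugfs entries would be created here ... */

	for (i = 0; i < NCASES; i++)
		free(case_log[i]);
	free(suite_log);
	return 0;

err:
	/* roll back everything allocated before the failure */
	for (i = 0; i < NCASES; i++)
		free(case_log[i]);
	free(suite_log);
	return 1;
}
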
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index 7916503e6a6ab..3c5a1ca062190 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -6293,7 +6293,7 @@ static struct bpf_test tests[] = {
+ 	},
+ 	/* BPF_ALU64 | BPF_MOD | BPF_K off=1 (SMOD64) */
+ 	{
+-		"ALU64_SMOD_X: -7 % 2 = -1",
++		"ALU64_SMOD_K: -7 % 2 = -1",
+ 		.u.insns_int = {
+ 			BPF_LD_IMM64(R0, -7),
+ 			BPF_ALU64_IMM_OFF(BPF_MOD, R0, 2, 1),
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 2cee330188ce4..d01db89fcb469 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -2421,12 +2421,10 @@ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type)
+ 		hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED,
+ 			     sizeof(cp), &cp);
+ 
+-		/* If we're already encrypted set the REAUTH_PEND flag,
+-		 * otherwise set the ENCRYPT_PEND.
++		/* Set the ENCRYPT_PEND flag to trigger encryption after
++		 * authentication.
+ 		 */
+-		if (test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+-			set_bit(HCI_CONN_REAUTH_PEND, &conn->flags);
+-		else
++		if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 			set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags);
+ 	}
+ 
+diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c
+index 6b7741f6e95b2..233453807b509 100644
+--- a/net/bluetooth/hci_debugfs.c
++++ b/net/bluetooth/hci_debugfs.c
+@@ -1046,10 +1046,12 @@ static int min_key_size_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val > hdev->le_max_key_size || val < SMP_MIN_ENC_KEY_SIZE)
++	hci_dev_lock(hdev);
++	if (val > hdev->le_max_key_size || val < SMP_MIN_ENC_KEY_SIZE) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_min_key_size = val;
+ 	hci_dev_unlock(hdev);
+ 
+@@ -1074,10 +1076,12 @@ static int max_key_size_set(void *data, u64 val)
+ {
+ 	struct hci_dev *hdev = data;
+ 
+-	if (val > SMP_MAX_ENC_KEY_SIZE || val < hdev->le_min_key_size)
++	hci_dev_lock(hdev);
++	if (val > SMP_MAX_ENC_KEY_SIZE || val < hdev->le_min_key_size) {
++		hci_dev_unlock(hdev);
+ 		return -EINVAL;
++	}
+ 
+-	hci_dev_lock(hdev);
+ 	hdev->le_max_key_size = val;
+ 	hci_dev_unlock(hdev);
+ 
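
Both hci_debugfs setters now perform the range validation under
hci_dev_lock(), so the opposite bound cannot change between the check and the
store. The same check-under-lock shape as a userspace sketch with a pthread
mutex (names and constants are stand-ins):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int le_min_key_size = 7;
static unsigned int le_max_key_size = 16;

static int set_min_key_size(unsigned int val)
{
	pthread_mutex_lock(&lock);
	if (val > le_max_key_size || val < 7) {
		pthread_mutex_unlock(&lock);
		return -1;	/* -EINVAL */
	}
	le_min_key_size = val;	/* bound still valid: same critical section */
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	printf("%d\n", set_min_key_size(10));	/* 0 */
	printf("%d\n", set_min_key_size(40));	/* -1 */
	return 0;
}
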
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index ebf17b51072fc..ef8c3bed73617 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3500,14 +3500,8 @@ static void hci_auth_complete_evt(struct hci_dev *hdev, void *data,
+ 
+ 	if (!ev->status) {
+ 		clear_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+-
+-		if (!hci_conn_ssp_enabled(conn) &&
+-		    test_bit(HCI_CONN_REAUTH_PEND, &conn->flags)) {
+-			bt_dev_info(hdev, "re-auth of legacy device is not possible.");
+-		} else {
+-			set_bit(HCI_CONN_AUTH, &conn->flags);
+-			conn->sec_level = conn->pending_sec_level;
+-		}
++		set_bit(HCI_CONN_AUTH, &conn->flags);
++		conn->sec_level = conn->pending_sec_level;
+ 	} else {
+ 		if (ev->status == HCI_ERROR_PIN_OR_KEY_MISSING)
+ 			set_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+@@ -3516,7 +3510,6 @@ static void hci_auth_complete_evt(struct hci_dev *hdev, void *data,
+ 	}
+ 
+ 	clear_bit(HCI_CONN_AUTH_PEND, &conn->flags);
+-	clear_bit(HCI_CONN_REAUTH_PEND, &conn->flags);
+ 
+ 	if (conn->state == BT_CONFIG) {
+ 		if (!ev->status && hci_conn_ssp_enabled(conn)) {
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 6adcb45bca75d..ed17208907578 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -279,8 +279,17 @@ int br_nf_pre_routing_finish_bridge(struct net *net, struct sock *sk, struct sk_
+ 
+ 		if ((READ_ONCE(neigh->nud_state) & NUD_CONNECTED) &&
+ 		    READ_ONCE(neigh->hh.hh_len)) {
++			struct net_device *br_indev;
++
++			br_indev = nf_bridge_get_physindev(skb, net);
++			if (!br_indev) {
++				neigh_release(neigh);
++				goto free_skb;
++			}
++
+ 			neigh_hh_bridge(&neigh->hh, skb);
+-			skb->dev = nf_bridge->physindev;
++			skb->dev = br_indev;
++
+ 			ret = br_handle_frame_finish(net, sk, skb);
+ 		} else {
+ 			/* the neighbour function below overwrites the complete
+@@ -352,12 +361,18 @@ br_nf_ipv4_daddr_was_changed(const struct sk_buff *skb,
+  */
+ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+-	struct net_device *dev = skb->dev;
++	struct net_device *dev = skb->dev, *br_indev;
+ 	struct iphdr *iph = ip_hdr(skb);
+ 	struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
+ 	struct rtable *rt;
+ 	int err;
+ 
++	br_indev = nf_bridge_get_physindev(skb, net);
++	if (!br_indev) {
++		kfree_skb(skb);
++		return 0;
++	}
++
+ 	nf_bridge->frag_max_size = IPCB(skb)->frag_max_size;
+ 
+ 	if (nf_bridge->pkt_otherhost) {
+@@ -397,7 +412,7 @@ free_skb:
+ 		} else {
+ 			if (skb_dst(skb)->dev == dev) {
+ bridged_dnat:
+-				skb->dev = nf_bridge->physindev;
++				skb->dev = br_indev;
+ 				nf_bridge_update_protocol(skb);
+ 				nf_bridge_push_encap_header(skb);
+ 				br_nf_hook_thresh(NF_BR_PRE_ROUTING,
+@@ -410,7 +425,7 @@ bridged_dnat:
+ 			skb->pkt_type = PACKET_HOST;
+ 		}
+ 	} else {
+-		rt = bridge_parent_rtable(nf_bridge->physindev);
++		rt = bridge_parent_rtable(br_indev);
+ 		if (!rt) {
+ 			kfree_skb(skb);
+ 			return 0;
+@@ -419,7 +434,7 @@ bridged_dnat:
+ 		skb_dst_set_noref(skb, &rt->dst);
+ 	}
+ 
+-	skb->dev = nf_bridge->physindev;
++	skb->dev = br_indev;
+ 	nf_bridge_update_protocol(skb);
+ 	nf_bridge_push_encap_header(skb);
+ 	br_nf_hook_thresh(NF_BR_PRE_ROUTING, net, sk, skb, skb->dev, NULL,
+@@ -456,7 +471,7 @@ struct net_device *setup_pre_routing(struct sk_buff *skb, const struct net *net)
+ 	}
+ 
+ 	nf_bridge->in_prerouting = 1;
+-	nf_bridge->physindev = skb->dev;
++	nf_bridge->physinif = skb->dev->ifindex;
+ 	skb->dev = brnf_get_logical_dev(skb, skb->dev, net);
+ 
+ 	if (skb->protocol == htons(ETH_P_8021Q))
+@@ -553,7 +568,11 @@ static int br_nf_forward_finish(struct net *net, struct sock *sk, struct sk_buff
+ 		if (skb->protocol == htons(ETH_P_IPV6))
+ 			nf_bridge->frag_max_size = IP6CB(skb)->frag_max_size;
+ 
+-		in = nf_bridge->physindev;
++		in = nf_bridge_get_physindev(skb, net);
++		if (!in) {
++			kfree_skb(skb);
++			return 0;
++		}
+ 		if (nf_bridge->pkt_otherhost) {
+ 			skb->pkt_type = PACKET_OTHERHOST;
+ 			nf_bridge->pkt_otherhost = false;
+@@ -899,6 +918,13 @@ static unsigned int ip_sabotage_in(void *priv,
+ static void br_nf_pre_routing_finish_bridge_slow(struct sk_buff *skb)
+ {
+ 	struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
++	struct net_device *br_indev;
++
++	br_indev = nf_bridge_get_physindev(skb, dev_net(skb->dev));
++	if (!br_indev) {
++		kfree_skb(skb);
++		return;
++	}
+ 
+ 	skb_pull(skb, ETH_HLEN);
+ 	nf_bridge->bridged_dnat = 0;
+@@ -908,7 +934,7 @@ static void br_nf_pre_routing_finish_bridge_slow(struct sk_buff *skb)
+ 	skb_copy_to_linear_data_offset(skb, -(ETH_HLEN - ETH_ALEN),
+ 				       nf_bridge->neigh_header,
+ 				       ETH_HLEN - ETH_ALEN);
+-	skb->dev = nf_bridge->physindev;
++	skb->dev = br_indev;
+ 
+ 	nf_bridge->physoutdev = NULL;
+ 	br_handle_frame_finish(dev_net(skb->dev), NULL, skb);
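
The theme of the br_netfilter changes is to stop caching a struct
net_device pointer (physindev) in per-skb state and to store only the
interface index (physinif), re-resolving it at each use and dropping the
packet when the device has meanwhile disappeared. A hedged userspace
sketch of the store-an-index, look-up-on-use pattern (the device table is
invented for the example):

    #include <stddef.h>
    #include <stdio.h>

    struct device { int ifindex; const char *name; };

    static struct device devices[] = { { 1, "lo" }, { 2, "br0" } };

    static struct device *dev_lookup(int ifindex)
    {
            for (size_t i = 0; i < sizeof(devices) / sizeof(devices[0]); i++)
                    if (devices[i].ifindex == ifindex)
                            return &devices[i];
            return NULL;    /* the device may be gone by now */
    }

    int main(void)
    {
            int physinif = 2;       /* stored instead of a raw pointer */
            struct device *dev = dev_lookup(physinif);

            if (!dev)
                    return 1;       /* drop, as the hunks do via kfree_skb() */
            printf("resolved ifindex %d to %s\n", physinif, dev->name);
            return 0;
    }
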
+diff --git a/net/bridge/br_netfilter_ipv6.c b/net/bridge/br_netfilter_ipv6.c
+index 2e24a743f9173..e0421eaa3abc7 100644
+--- a/net/bridge/br_netfilter_ipv6.c
++++ b/net/bridge/br_netfilter_ipv6.c
+@@ -102,9 +102,15 @@ static int br_nf_pre_routing_finish_ipv6(struct net *net, struct sock *sk, struc
+ {
+ 	struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
+ 	struct rtable *rt;
+-	struct net_device *dev = skb->dev;
++	struct net_device *dev = skb->dev, *br_indev;
+ 	const struct nf_ipv6_ops *v6ops = nf_get_ipv6_ops();
+ 
++	br_indev = nf_bridge_get_physindev(skb, net);
++	if (!br_indev) {
++		kfree_skb(skb);
++		return 0;
++	}
++
+ 	nf_bridge->frag_max_size = IP6CB(skb)->frag_max_size;
+ 
+ 	if (nf_bridge->pkt_otherhost) {
+@@ -122,7 +128,7 @@ static int br_nf_pre_routing_finish_ipv6(struct net *net, struct sock *sk, struc
+ 		}
+ 
+ 		if (skb_dst(skb)->dev == dev) {
+-			skb->dev = nf_bridge->physindev;
++			skb->dev = br_indev;
+ 			nf_bridge_update_protocol(skb);
+ 			nf_bridge_push_encap_header(skb);
+ 			br_nf_hook_thresh(NF_BR_PRE_ROUTING,
+@@ -133,7 +139,7 @@ static int br_nf_pre_routing_finish_ipv6(struct net *net, struct sock *sk, struc
+ 		ether_addr_copy(eth_hdr(skb)->h_dest, dev->dev_addr);
+ 		skb->pkt_type = PACKET_HOST;
+ 	} else {
+-		rt = bridge_parent_rtable(nf_bridge->physindev);
++		rt = bridge_parent_rtable(br_indev);
+ 		if (!rt) {
+ 			kfree_skb(skb);
+ 			return 0;
+@@ -142,7 +148,7 @@ static int br_nf_pre_routing_finish_ipv6(struct net *net, struct sock *sk, struc
+ 		skb_dst_set_noref(skb, &rt->dst);
+ 	}
+ 
+-	skb->dev = nf_bridge->physindev;
++	skb->dev = br_indev;
+ 	nf_bridge_update_protocol(skb);
+ 	nf_bridge_push_encap_header(skb);
+ 	br_nf_hook_thresh(NF_BR_PRE_ROUTING, net, sk, skb,
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index e8431c6c84909..bf4c3f65ad99c 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2905,13 +2905,6 @@ static int do_setlink(const struct sk_buff *skb,
+ 		call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
+ 	}
+ 
+-	if (tb[IFLA_MASTER]) {
+-		err = do_set_master(dev, nla_get_u32(tb[IFLA_MASTER]), extack);
+-		if (err)
+-			goto errout;
+-		status |= DO_SETLINK_MODIFIED;
+-	}
+-
+ 	if (ifm->ifi_flags || ifm->ifi_change) {
+ 		err = dev_change_flags(dev, rtnl_dev_combine_flags(dev, ifm),
+ 				       extack);
+@@ -2919,6 +2912,13 @@ static int do_setlink(const struct sk_buff *skb,
+ 			goto errout;
+ 	}
+ 
++	if (tb[IFLA_MASTER]) {
++		err = do_set_master(dev, nla_get_u32(tb[IFLA_MASTER]), extack);
++		if (err)
++			goto errout;
++		status |= DO_SETLINK_MODIFIED;
++	}
++
+ 	if (tb[IFLA_CARRIER]) {
+ 		err = dev_change_carrier(dev, nla_get_u8(tb[IFLA_CARRIER]));
+ 		if (err)
+diff --git a/net/dns_resolver/dns_key.c b/net/dns_resolver/dns_key.c
+index f18ca02aa95a6..c42ddd85ff1f9 100644
+--- a/net/dns_resolver/dns_key.c
++++ b/net/dns_resolver/dns_key.c
+@@ -104,7 +104,7 @@ dns_resolver_preparse(struct key_preparsed_payload *prep)
+ 		const struct dns_server_list_v1_header *v1;
+ 
+ 		/* It may be a server list. */
+-		if (datalen <= sizeof(*v1))
++		if (datalen < sizeof(*v1))
+ 			return -EINVAL;
+ 
+ 		v1 = (const struct dns_server_list_v1_header *)data;
+diff --git a/net/dsa/user.c b/net/dsa/user.c
+index d438884a4eb0e..a82c7f5a1a120 100644
+--- a/net/dsa/user.c
++++ b/net/dsa/user.c
+@@ -2829,13 +2829,14 @@ EXPORT_SYMBOL_GPL(dsa_user_dev_check);
+ static int dsa_user_changeupper(struct net_device *dev,
+ 				struct netdev_notifier_changeupper_info *info)
+ {
+-	struct dsa_port *dp = dsa_user_to_port(dev);
+ 	struct netlink_ext_ack *extack;
+ 	int err = NOTIFY_DONE;
++	struct dsa_port *dp;
+ 
+ 	if (!dsa_user_dev_check(dev))
+ 		return err;
+ 
++	dp = dsa_user_to_port(dev);
+ 	extack = netdev_notifier_info_to_extack(&info->info);
+ 
+ 	if (netif_is_bridge_master(info->upper_dev)) {
+@@ -2888,11 +2889,13 @@ static int dsa_user_changeupper(struct net_device *dev,
+ static int dsa_user_prechangeupper(struct net_device *dev,
+ 				   struct netdev_notifier_changeupper_info *info)
+ {
+-	struct dsa_port *dp = dsa_user_to_port(dev);
++	struct dsa_port *dp;
+ 
+ 	if (!dsa_user_dev_check(dev))
+ 		return NOTIFY_DONE;
+ 
++	dp = dsa_user_to_port(dev);
++
+ 	if (netif_is_bridge_master(info->upper_dev) && !info->linking)
+ 		dsa_port_pre_bridge_leave(dp, info->upper_dev);
+ 	else if (netif_is_lag_master(info->upper_dev) && !info->linking)
+diff --git a/net/ethtool/features.c b/net/ethtool/features.c
+index a79af8c25a071..b6cb101d7f19e 100644
+--- a/net/ethtool/features.c
++++ b/net/ethtool/features.c
+@@ -234,17 +234,20 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ 	dev = req_info.dev;
+ 
+ 	rtnl_lock();
++	ret = ethnl_ops_begin(dev);
++	if (ret < 0)
++		goto out_rtnl;
+ 	ethnl_features_to_bitmap(old_active, dev->features);
+ 	ethnl_features_to_bitmap(old_wanted, dev->wanted_features);
+ 	ret = ethnl_parse_bitset(req_wanted, req_mask, NETDEV_FEATURE_COUNT,
+ 				 tb[ETHTOOL_A_FEATURES_WANTED],
+ 				 netdev_features_strings, info->extack);
+ 	if (ret < 0)
+-		goto out_rtnl;
++		goto out_ops;
+ 	if (ethnl_bitmap_to_features(req_mask) & ~NETIF_F_ETHTOOL_BITS) {
+ 		GENL_SET_ERR_MSG(info, "attempt to change non-ethtool features");
+ 		ret = -EINVAL;
+-		goto out_rtnl;
++		goto out_ops;
+ 	}
+ 
+ 	/* set req_wanted bits not in req_mask from old_wanted */
+@@ -281,6 +284,8 @@ int ethnl_set_features(struct sk_buff *skb, struct genl_info *info)
+ 	if (mod)
+ 		netdev_features_change(dev);
+ 
++out_ops:
++	ethnl_ops_complete(dev);
+ out_rtnl:
+ 	rtnl_unlock();
+ 	ethnl_parse_header_dev_put(&req_info);
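
The hunk brackets the whole feature update between ethnl_ops_begin() and
ethnl_ops_complete() and reroutes the early error exits through the new
out_ops label so the completion call can no longer be skipped. A compact
sketch of this goto-unwind idiom, with placeholder resources:

    #include <errno.h>
    #include <stdio.h>

    static int acquire_a(void) { return 0; }
    static void release_a(void) { }
    static int acquire_b(void) { return 0; }
    static void release_b(void) { }
    static int do_work(void) { return -EINVAL; }

    static int operation(void)
    {
            int ret;

            ret = acquire_a();
            if (ret)
                    return ret;     /* nothing to undo yet */
            ret = acquire_b();
            if (ret)
                    goto out_a;

            ret = do_work();        /* success or failure falls through */

            release_b();
    out_a:
            release_a();
            return ret;
    }

    int main(void)
    {
            printf("operation: %d\n", operation());
            return 0;
    }
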
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index fb81de10d3320..ea0b0334a0fb0 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1633,6 +1633,7 @@ int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
+ #endif
+ 	return -EINVAL;
+ }
++EXPORT_SYMBOL(inet_recv_error);
+ 
+ int inet_gro_complete(struct sk_buff *skb, int nhoff)
+ {
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 9e222a57bc2b4..0063a237253bf 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -1025,6 +1025,10 @@ static int ipmr_cache_report(const struct mr_table *mrt,
+ 	struct sk_buff *skb;
+ 	int ret;
+ 
++	mroute_sk = rcu_dereference(mrt->mroute_sk);
++	if (!mroute_sk)
++		return -EINVAL;
++
+ 	if (assert == IGMPMSG_WHOLEPKT || assert == IGMPMSG_WRVIFWHOLE)
+ 		skb = skb_realloc_headroom(pkt, sizeof(struct iphdr));
+ 	else
+@@ -1069,7 +1073,8 @@ static int ipmr_cache_report(const struct mr_table *mrt,
+ 		msg = (struct igmpmsg *)skb_network_header(skb);
+ 		msg->im_vif = vifi;
+ 		msg->im_vif_hi = vifi >> 8;
+-		skb_dst_set(skb, dst_clone(skb_dst(pkt)));
++		ipv4_pktinfo_prepare(mroute_sk, pkt);
++		memcpy(skb->cb, pkt->cb, sizeof(skb->cb));
+ 		/* Add our header */
+ 		igmp = skb_put(skb, sizeof(struct igmphdr));
+ 		igmp->type = assert;
+@@ -1079,12 +1084,6 @@ static int ipmr_cache_report(const struct mr_table *mrt,
+ 		skb->transport_header = skb->network_header;
+ 	}
+ 
+-	mroute_sk = rcu_dereference(mrt->mroute_sk);
+-	if (!mroute_sk) {
+-		kfree_skb(skb);
+-		return -EINVAL;
+-	}
+-
+ 	igmpmsg_netlink_event(mrt, skb);
+ 
+ 	/* Deliver to mrouted */
+diff --git a/net/ipv4/netfilter/nf_reject_ipv4.c b/net/ipv4/netfilter/nf_reject_ipv4.c
+index f01b038fc1cda..04504b2b51df5 100644
+--- a/net/ipv4/netfilter/nf_reject_ipv4.c
++++ b/net/ipv4/netfilter/nf_reject_ipv4.c
+@@ -239,7 +239,6 @@ static int nf_reject_fill_skb_dst(struct sk_buff *skb_in)
+ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ 		   int hook)
+ {
+-	struct net_device *br_indev __maybe_unused;
+ 	struct sk_buff *nskb;
+ 	struct iphdr *niph;
+ 	const struct tcphdr *oth;
+@@ -289,9 +288,13 @@ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ 	 * build the eth header using the original destination's MAC as the
+ 	 * source, and send the RST packet directly.
+ 	 */
+-	br_indev = nf_bridge_get_physindev(oldskb);
+-	if (br_indev) {
++	if (nf_bridge_info_exists(oldskb)) {
+ 		struct ethhdr *oeth = eth_hdr(oldskb);
++		struct net_device *br_indev;
++
++		br_indev = nf_bridge_get_physindev(oldskb, net);
++		if (!br_indev)
++			goto free_nskb;
+ 
+ 		nskb->dev = br_indev;
+ 		niph->tot_len = htons(nskb->len);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 89e5a806b82e9..148ffb007969f 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -805,7 +805,7 @@ void udp_flush_pending_frames(struct sock *sk)
+ 
+ 	if (up->pending) {
+ 		up->len = 0;
+-		up->pending = 0;
++		WRITE_ONCE(up->pending, 0);
+ 		ip_flush_pending_frames(sk);
+ 	}
+ }
+@@ -993,7 +993,7 @@ int udp_push_pending_frames(struct sock *sk)
+ 
+ out:
+ 	up->len = 0;
+-	up->pending = 0;
++	WRITE_ONCE(up->pending, 0);
+ 	return err;
+ }
+ EXPORT_SYMBOL(udp_push_pending_frames);
+@@ -1070,7 +1070,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	getfrag = is_udplite ? udplite_getfrag : ip_generic_getfrag;
+ 
+ 	fl4 = &inet->cork.fl.u.ip4;
+-	if (up->pending) {
++	if (READ_ONCE(up->pending)) {
+ 		/*
+ 		 * There are pending frames.
+ 		 * The socket lock must be held while it's corked.
+@@ -1269,7 +1269,7 @@ back_from_confirm:
+ 	fl4->saddr = saddr;
+ 	fl4->fl4_dport = dport;
+ 	fl4->fl4_sport = inet->inet_sport;
+-	up->pending = AF_INET;
++	WRITE_ONCE(up->pending, AF_INET);
+ 
+ do_append_data:
+ 	up->len += ulen;
+@@ -1281,7 +1281,7 @@ do_append_data:
+ 	else if (!corkreq)
+ 		err = udp_push_pending_frames(sk);
+ 	else if (unlikely(skb_queue_empty(&sk->sk_write_queue)))
+-		up->pending = 0;
++		WRITE_ONCE(up->pending, 0);
+ 	release_sock(sk);
+ 
+ out:
+@@ -1319,7 +1319,7 @@ void udp_splice_eof(struct socket *sock)
+ 	struct sock *sk = sock->sk;
+ 	struct udp_sock *up = udp_sk(sk);
+ 
+-	if (!up->pending || udp_test_bit(CORK, sk))
++	if (!READ_ONCE(up->pending) || udp_test_bit(CORK, sk))
+ 		return;
+ 
+ 	lock_sock(sk);
+@@ -3137,16 +3137,18 @@ static struct sock *bpf_iter_udp_batch(struct seq_file *seq)
+ 	struct bpf_udp_iter_state *iter = seq->private;
+ 	struct udp_iter_state *state = &iter->state;
+ 	struct net *net = seq_file_net(seq);
++	int resume_bucket, resume_offset;
+ 	struct udp_table *udptable;
+ 	unsigned int batch_sks = 0;
+ 	bool resized = false;
+ 	struct sock *sk;
+ 
++	resume_bucket = state->bucket;
++	resume_offset = iter->offset;
++
+ 	/* The current batch is done, so advance the bucket. */
+-	if (iter->st_bucket_done) {
++	if (iter->st_bucket_done)
+ 		state->bucket++;
+-		iter->offset = 0;
+-	}
+ 
+ 	udptable = udp_get_table_seq(seq, net);
+ 
+@@ -3166,19 +3168,19 @@ again:
+ 	for (; state->bucket <= udptable->mask; state->bucket++) {
+ 		struct udp_hslot *hslot2 = &udptable->hash2[state->bucket];
+ 
+-		if (hlist_empty(&hslot2->head)) {
+-			iter->offset = 0;
++		if (hlist_empty(&hslot2->head))
+ 			continue;
+-		}
+ 
++		iter->offset = 0;
+ 		spin_lock_bh(&hslot2->lock);
+ 		udp_portaddr_for_each_entry(sk, &hslot2->head) {
+ 			if (seq_sk_match(seq, sk)) {
+ 				/* Resume from the last iterated socket at the
+ 				 * offset in the bucket before iterator was stopped.
+ 				 */
+-				if (iter->offset) {
+-					--iter->offset;
++				if (state->bucket == resume_bucket &&
++				    iter->offset < resume_offset) {
++					++iter->offset;
+ 					continue;
+ 				}
+ 				if (iter->end_sk < iter->max_sk) {
+@@ -3192,9 +3194,6 @@ again:
+ 
+ 		if (iter->end_sk)
+ 			break;
+-
+-		/* Reset the current bucket's offset before moving to the next bucket. */
+-		iter->offset = 0;
+ 	}
+ 
+ 	/* All done: no batch made. */
+@@ -3213,7 +3212,6 @@ again:
+ 		/* After allocating a larger batch, retry one more time to grab
+ 		 * the whole bucket.
+ 		 */
+-		state->bucket--;
+ 		goto again;
+ 	}
+ done:
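
udp_splice_eof() now reads up->pending without the socket lock, so every
store is annotated WRITE_ONCE() and every lockless load READ_ONCE(),
preventing the compiler from tearing or re-reading the value. A rough
userspace analogue (relaxed C11 atomics stand in for the kernel macros;
ordering is still supplied by the socket lock, not by the atomics):

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int pending;

    static void writer_sets_pending(int af)
    {
            /* analogue of WRITE_ONCE(up->pending, af) */
            atomic_store_explicit(&pending, af, memory_order_relaxed);
    }

    static int lockless_reader_peeks(void)
    {
            /* analogue of READ_ONCE(up->pending) */
            return atomic_load_explicit(&pending, memory_order_relaxed);
    }

    int main(void)
    {
            writer_sets_pending(2); /* e.g. AF_INET */
            printf("pending = %d\n", lockless_reader_peeks());
            return 0;
    }
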
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 5e80e517f0710..46c19bd489901 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -399,7 +399,7 @@ __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw)
+ 	const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)raw;
+ 	unsigned int nhoff = raw - skb->data;
+ 	unsigned int off = nhoff + sizeof(*ipv6h);
+-	u8 next, nexthdr = ipv6h->nexthdr;
++	u8 nexthdr = ipv6h->nexthdr;
+ 
+ 	while (ipv6_ext_hdr(nexthdr) && nexthdr != NEXTHDR_NONE) {
+ 		struct ipv6_opt_hdr *hdr;
+@@ -410,25 +410,25 @@ __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw)
+ 
+ 		hdr = (struct ipv6_opt_hdr *)(skb->data + off);
+ 		if (nexthdr == NEXTHDR_FRAGMENT) {
+-			struct frag_hdr *frag_hdr = (struct frag_hdr *) hdr;
+-			if (frag_hdr->frag_off)
+-				break;
+ 			optlen = 8;
+ 		} else if (nexthdr == NEXTHDR_AUTH) {
+ 			optlen = ipv6_authlen(hdr);
+ 		} else {
+ 			optlen = ipv6_optlen(hdr);
+ 		}
+-		/* cache hdr->nexthdr, since pskb_may_pull() might
+-		 * invalidate hdr
+-		 */
+-		next = hdr->nexthdr;
+-		if (nexthdr == NEXTHDR_DEST) {
+-			u16 i = 2;
+ 
+-			/* Remember : hdr is no longer valid at this point. */
+-			if (!pskb_may_pull(skb, off + optlen))
++		if (!pskb_may_pull(skb, off + optlen))
++			break;
++
++		hdr = (struct ipv6_opt_hdr *)(skb->data + off);
++		if (nexthdr == NEXTHDR_FRAGMENT) {
++			struct frag_hdr *frag_hdr = (struct frag_hdr *)hdr;
++
++			if (frag_hdr->frag_off)
+ 				break;
++		}
++		if (nexthdr == NEXTHDR_DEST) {
++			u16 i = 2;
+ 
+ 			while (1) {
+ 				struct ipv6_tlv_tnl_enc_lim *tel;
+@@ -449,7 +449,7 @@ __u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw)
+ 					i++;
+ 			}
+ 		}
+-		nexthdr = next;
++		nexthdr = hdr->nexthdr;
+ 		off += optlen;
+ 	}
+ 	return 0;
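
The rewrite above pulls the full option length first and only then
re-derives hdr, because pskb_may_pull() may reallocate the skb head and
invalidate any pointer computed before the call. The same rule holds for
any buffer that can move; a small illustration using realloc():

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            size_t off = 4;
            char *buf = calloc(1, 8);
            char *p;

            if (!buf)
                    return 1;
            p = buf + off;          /* computed before the "pull" */
            *p = 1;

            char *nbuf = realloc(buf, 4096);  /* may move the data */
            if (!nbuf) {
                    free(buf);
                    return 1;
            }
            buf = nbuf;
            p = buf + off;          /* recompute from the saved offset */
            printf("still readable: %d\n", *p);
            free(buf);
            return 0;
    }
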
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index b75d3c9d41bb5..bc6e0a0bad3c1 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2722,8 +2722,12 @@ void ipv6_mc_down(struct inet6_dev *idev)
+ 	synchronize_net();
+ 	mld_query_stop_work(idev);
+ 	mld_report_stop_work(idev);
++
++	mutex_lock(&idev->mc_lock);
+ 	mld_ifc_stop_work(idev);
+ 	mld_gq_stop_work(idev);
++	mutex_unlock(&idev->mc_lock);
++
+ 	mld_dad_stop_work(idev);
+ }
+ 
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index d45bc54b7ea55..196dd4ecb5e21 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -278,7 +278,6 @@ static int nf_reject6_fill_skb_dst(struct sk_buff *skb_in)
+ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ 		    int hook)
+ {
+-	struct net_device *br_indev __maybe_unused;
+ 	struct sk_buff *nskb;
+ 	struct tcphdr _otcph;
+ 	const struct tcphdr *otcph;
+@@ -354,9 +353,15 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ 	 * build the eth header using the original destination's MAC as the
+ 	 * source, and send the RST packet directly.
+ 	 */
+-	br_indev = nf_bridge_get_physindev(oldskb);
+-	if (br_indev) {
++	if (nf_bridge_info_exists(oldskb)) {
+ 		struct ethhdr *oeth = eth_hdr(oldskb);
++		struct net_device *br_indev;
++
++		br_indev = nf_bridge_get_physindev(oldskb, net);
++		if (!br_indev) {
++			kfree_skb(nskb);
++			return;
++		}
+ 
+ 		nskb->dev = br_indev;
+ 		nskb->protocol = htons(ETH_P_IPV6);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 622b10a549f75..a1a79ff46fd93 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1135,7 +1135,7 @@ static void udp_v6_flush_pending_frames(struct sock *sk)
+ 		udp_flush_pending_frames(sk);
+ 	else if (up->pending) {
+ 		up->len = 0;
+-		up->pending = 0;
++		WRITE_ONCE(up->pending, 0);
+ 		ip6_flush_pending_frames(sk);
+ 	}
+ }
+@@ -1313,7 +1313,7 @@ static int udp_v6_push_pending_frames(struct sock *sk)
+ 			      &inet_sk(sk)->cork.base);
+ out:
+ 	up->len = 0;
+-	up->pending = 0;
++	WRITE_ONCE(up->pending, 0);
+ 	return err;
+ }
+ 
+@@ -1370,7 +1370,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		default:
+ 			return -EINVAL;
+ 		}
+-	} else if (!up->pending) {
++	} else if (!READ_ONCE(up->pending)) {
+ 		if (sk->sk_state != TCP_ESTABLISHED)
+ 			return -EDESTADDRREQ;
+ 		daddr = &sk->sk_v6_daddr;
+@@ -1401,8 +1401,8 @@ do_udp_sendmsg:
+ 		return -EMSGSIZE;
+ 
+ 	getfrag  =  is_udplite ?  udplite_getfrag : ip_generic_getfrag;
+-	if (up->pending) {
+-		if (up->pending == AF_INET)
++	if (READ_ONCE(up->pending)) {
++		if (READ_ONCE(up->pending) == AF_INET)
+ 			return udp_sendmsg(sk, msg, len);
+ 		/*
+ 		 * There are pending frames.
+@@ -1593,7 +1593,7 @@ back_from_confirm:
+ 		goto out;
+ 	}
+ 
+-	up->pending = AF_INET6;
++	WRITE_ONCE(up->pending, AF_INET6);
+ 
+ do_append_data:
+ 	if (ipc6.dontfrag < 0)
+@@ -1607,7 +1607,7 @@ do_append_data:
+ 	else if (!corkreq)
+ 		err = udp_v6_push_pending_frames(sk);
+ 	else if (unlikely(skb_queue_empty(&sk->sk_write_queue)))
+-		up->pending = 0;
++		WRITE_ONCE(up->pending, 0);
+ 
+ 	if (err > 0)
+ 		err = inet6_test_bit(RECVERR6, sk) ? net_xmit_errno(err) : 0;
+@@ -1648,7 +1648,7 @@ static void udpv6_splice_eof(struct socket *sock)
+ 	struct sock *sk = sock->sk;
+ 	struct udp_sock *up = udp_sk(sk);
+ 
+-	if (!up->pending || udp_test_bit(CORK, sk))
++	if (!READ_ONCE(up->pending) || udp_test_bit(CORK, sk))
+ 		return;
+ 
+ 	lock_sock(sk);
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index c8998cf01b7a5..dcdaab19efbd5 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -43,6 +43,9 @@
+ #define IEEE80211_ASSOC_TIMEOUT_SHORT	(HZ / 10)
+ #define IEEE80211_ASSOC_MAX_TRIES	3
+ 
++#define IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS msecs_to_jiffies(100)
++#define IEEE80211_ADV_TTLM_ST_UNDERFLOW 0xff00
++
+ static int max_nullfunc_tries = 2;
+ module_param(max_nullfunc_tries, int, 0644);
+ MODULE_PARM_DESC(max_nullfunc_tries,
+@@ -5946,6 +5949,13 @@ ieee80211_parse_adv_t2l(struct ieee80211_sub_if_data *sdata,
+ 	pos++;
+ 
+ 	ttlm_info->switch_time = get_unaligned_le16(pos);
++
++	/* Since ttlm_info->switch_time == 0 means no switch time, bump it
++	 * by 1.
++	 */
++	if (!ttlm_info->switch_time)
++		ttlm_info->switch_time = 1;
++
+ 	pos += 2;
+ 
+ 	if (control & IEEE80211_TTLM_CONTROL_EXPECTED_DUR_PRESENT) {
+@@ -6040,25 +6050,46 @@ static void ieee80211_process_adv_ttlm(struct ieee80211_sub_if_data *sdata,
+ 		}
+ 
+ 		if (ttlm_info.switch_time) {
+-			u32 st_us, delay = 0;
+-			u32 ts_l26 = beacon_ts & GENMASK(25, 0);
++			u16 beacon_ts_tu, st_tu, delay;
++			u32 delay_jiffies;
++			u64 mask;
+ 
+ 			/* The t2l map switch time is indicated with a partial
+-			 * TSF value, convert it to TSF and calc the delay
+-			 * to the start time.
++			 * TSF value (bits 10 to 25), get the partial beacon TS
++			 * as well, and calc the delay to the start time.
++			 */
++			mask = GENMASK_ULL(25, 10);
++			beacon_ts_tu = (beacon_ts & mask) >> 10;
++			st_tu = ttlm_info.switch_time;
++			delay = st_tu - beacon_ts_tu;
++
++			/*
++			 * If the switch time is far in the future, then it
++			 * could also be the previous switch still being
++			 * announced.
++			 * We can simply ignore it for now; if it is a future
++			 * switch the AP will continue to announce it anyway.
++			 */
++			if (delay > IEEE80211_ADV_TTLM_ST_UNDERFLOW)
++				return;
++
++			delay_jiffies = TU_TO_JIFFIES(delay);
++
++			/* Link switching can take time, so schedule it
++			 * 100ms before to be ready on time
+ 			 */
+-			st_us = ieee80211_tu_to_usec(ttlm_info.switch_time);
+-			if (st_us > ts_l26)
+-				delay = st_us - ts_l26;
++			if (delay_jiffies > IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS)
++				delay_jiffies -=
++					IEEE80211_ADV_TTLM_SAFETY_BUFFER_MS;
+ 			else
+-				continue;
++				delay_jiffies = 0;
+ 
+ 			sdata->u.mgd.ttlm_info = ttlm_info;
+ 			wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ 						  &sdata->u.mgd.ttlm_work);
+ 			wiphy_delayed_work_queue(sdata->local->hw.wiphy,
+ 						 &sdata->u.mgd.ttlm_work,
+-						 usecs_to_jiffies(delay));
++						 delay_jiffies);
+ 			return;
+ 		}
+ 	}
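
The new computation compares the advertised switch time with the beacon
timestamp using only TSF bits 10..25, expressed in TUs; with u16
arithmetic the subtraction wraps naturally, and a result above the 0xff00
underflow threshold is treated as a stale announcement. A standalone
sketch of the arithmetic (the sample timestamps are made up):

    #include <stdint.h>
    #include <stdio.h>

    #define ADV_TTLM_ST_UNDERFLOW 0xff00

    int main(void)
    {
            uint64_t beacon_ts = 0x0000000003fffc00ULL;     /* sample TSF */
            uint16_t st_tu = 0x0010;        /* advertised switch time */
            uint16_t beacon_ts_tu = (beacon_ts & 0x3fffc00ULL) >> 10;
            uint16_t delay = st_tu - beacon_ts_tu;  /* wraps mod 2^16 */

            if (delay > ADV_TTLM_ST_UNDERFLOW) {
                    puts("stale announcement, ignore");
                    return 0;
            }
            printf("switch in %u TUs\n", delay);
            return 0;
    }
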
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index c53914012d01d..d2527d189a799 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -123,8 +123,8 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 		break;
+ 
+ 	case MPTCPOPT_MP_JOIN:
+-		mp_opt->suboptions |= OPTIONS_MPTCP_MPJ;
+ 		if (opsize == TCPOLEN_MPTCP_MPJ_SYN) {
++			mp_opt->suboptions |= OPTION_MPTCP_MPJ_SYN;
+ 			mp_opt->backup = *ptr++ & MPTCPOPT_BACKUP;
+ 			mp_opt->join_id = *ptr++;
+ 			mp_opt->token = get_unaligned_be32(ptr);
+@@ -135,6 +135,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 				 mp_opt->backup, mp_opt->join_id,
+ 				 mp_opt->token, mp_opt->nonce);
+ 		} else if (opsize == TCPOLEN_MPTCP_MPJ_SYNACK) {
++			mp_opt->suboptions |= OPTION_MPTCP_MPJ_SYNACK;
+ 			mp_opt->backup = *ptr++ & MPTCPOPT_BACKUP;
+ 			mp_opt->join_id = *ptr++;
+ 			mp_opt->thmac = get_unaligned_be64(ptr);
+@@ -145,11 +146,10 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 				 mp_opt->backup, mp_opt->join_id,
+ 				 mp_opt->thmac, mp_opt->nonce);
+ 		} else if (opsize == TCPOLEN_MPTCP_MPJ_ACK) {
++			mp_opt->suboptions |= OPTION_MPTCP_MPJ_ACK;
+ 			ptr += 2;
+ 			memcpy(mp_opt->hmac, ptr, MPTCPOPT_HMAC_LEN);
+ 			pr_debug("MP_JOIN hmac");
+-		} else {
+-			mp_opt->suboptions &= ~OPTIONS_MPTCP_MPJ;
+ 		}
+ 		break;
+ 
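
The parser used to set one shared OPTIONS_MPTCP_MPJ mask for every
MP_JOIN variant, so later code could not tell which packet type had
actually carried the option; the fix records a distinct bit per variant
(SYN, SYNACK, ACK), and the subflow changes below test exactly the bit
they expect. A toy sketch of that flag discipline (constants are
illustrative, not the kernel's values):

    #include <stdio.h>

    enum {
            OPT_MPJ_SYN    = 1 << 0,
            OPT_MPJ_SYNACK = 1 << 1,
            OPT_MPJ_ACK    = 1 << 2,
    };

    static unsigned int parse(int opsize)
    {
            if (opsize == 12)
                    return OPT_MPJ_SYN;
            if (opsize == 16)
                    return OPT_MPJ_SYNACK;
            if (opsize == 24)
                    return OPT_MPJ_ACK;
            return 0;
    }

    int main(void)
    {
            unsigned int subopts = parse(16);

            /* A listener handling a third ACK must not accept a SYNACK. */
            printf("is ack: %d\n", !!(subopts & OPT_MPJ_ACK));
            return 0;
    }
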
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 852b3f4af000a..7785cda48e6bd 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -157,8 +157,8 @@ static int subflow_check_req(struct request_sock *req,
+ 
+ 	mptcp_get_options(skb, &mp_opt);
+ 
+-	opt_mp_capable = !!(mp_opt.suboptions & OPTIONS_MPTCP_MPC);
+-	opt_mp_join = !!(mp_opt.suboptions & OPTIONS_MPTCP_MPJ);
++	opt_mp_capable = !!(mp_opt.suboptions & OPTION_MPTCP_MPC_SYN);
++	opt_mp_join = !!(mp_opt.suboptions & OPTION_MPTCP_MPJ_SYN);
+ 	if (opt_mp_capable) {
+ 		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEPASSIVE);
+ 
+@@ -254,8 +254,8 @@ int mptcp_subflow_init_cookie_req(struct request_sock *req,
+ 	subflow_init_req(req, sk_listener);
+ 	mptcp_get_options(skb, &mp_opt);
+ 
+-	opt_mp_capable = !!(mp_opt.suboptions & OPTIONS_MPTCP_MPC);
+-	opt_mp_join = !!(mp_opt.suboptions & OPTIONS_MPTCP_MPJ);
++	opt_mp_capable = !!(mp_opt.suboptions & OPTION_MPTCP_MPC_ACK);
++	opt_mp_join = !!(mp_opt.suboptions & OPTION_MPTCP_MPJ_ACK);
+ 	if (opt_mp_capable && opt_mp_join)
+ 		return -EINVAL;
+ 
+@@ -486,7 +486,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 
+ 	mptcp_get_options(skb, &mp_opt);
+ 	if (subflow->request_mptcp) {
+-		if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPC)) {
++		if (!(mp_opt.suboptions & OPTION_MPTCP_MPC_SYNACK)) {
+ 			MPTCP_INC_STATS(sock_net(sk),
+ 					MPTCP_MIB_MPCAPABLEACTIVEFALLBACK);
+ 			mptcp_do_fallback(sk);
+@@ -506,7 +506,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 	} else if (subflow->request_join) {
+ 		u8 hmac[SHA256_DIGEST_SIZE];
+ 
+-		if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPJ)) {
++		if (!(mp_opt.suboptions & OPTION_MPTCP_MPJ_SYNACK)) {
+ 			subflow->reset_reason = MPTCP_RST_EMPTCP;
+ 			goto do_reset;
+ 		}
+@@ -783,12 +783,13 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 		 * options.
+ 		 */
+ 		mptcp_get_options(skb, &mp_opt);
+-		if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPC))
++		if (!(mp_opt.suboptions &
++		      (OPTION_MPTCP_MPC_SYN | OPTION_MPTCP_MPC_ACK)))
+ 			fallback = true;
+ 
+ 	} else if (subflow_req->mp_join) {
+ 		mptcp_get_options(skb, &mp_opt);
+-		if (!(mp_opt.suboptions & OPTIONS_MPTCP_MPJ) ||
++		if (!(mp_opt.suboptions & OPTION_MPTCP_MPJ_ACK) ||
+ 		    !subflow_hmac_valid(req, &mp_opt) ||
+ 		    !mptcp_can_accept_new_subflow(subflow_req->msk)) {
+ 			SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index 03757e76bb6b9..374412ed780b6 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -105,8 +105,11 @@ enum {
+ 
+ 
+ struct ncsi_channel_version {
+-	u32 version;		/* Supported BCD encoded NCSI version */
+-	u32 alpha2;		/* Supported BCD encoded NCSI version */
++	u8   major;		/* NCSI version major */
++	u8   minor;		/* NCSI version minor */
++	u8   update;		/* NCSI version update */
++	char alpha1;		/* NCSI version alpha1 */
++	char alpha2;		/* NCSI version alpha2 */
+ 	u8  fw_name[12];	/* Firmware name string                */
+ 	u32 fw_version;		/* Firmware version                   */
+ 	u16 pci_ids[4];		/* PCI identification                 */
+diff --git a/net/ncsi/ncsi-netlink.c b/net/ncsi/ncsi-netlink.c
+index a3a6753a1db76..2f872d064396d 100644
+--- a/net/ncsi/ncsi-netlink.c
++++ b/net/ncsi/ncsi-netlink.c
+@@ -71,8 +71,8 @@ static int ncsi_write_channel_info(struct sk_buff *skb,
+ 	if (nc == nc->package->preferred_channel)
+ 		nla_put_flag(skb, NCSI_CHANNEL_ATTR_FORCED);
+ 
+-	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MAJOR, nc->version.version);
+-	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MINOR, nc->version.alpha2);
++	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MAJOR, nc->version.major);
++	nla_put_u32(skb, NCSI_CHANNEL_ATTR_VERSION_MINOR, nc->version.minor);
+ 	nla_put_string(skb, NCSI_CHANNEL_ATTR_VERSION_STR, nc->version.fw_name);
+ 
+ 	vid_nest = nla_nest_start_noflag(skb, NCSI_CHANNEL_ATTR_VLAN_LIST);
+diff --git a/net/ncsi/ncsi-pkt.h b/net/ncsi/ncsi-pkt.h
+index ba66c7dc3a216..c9d1da34dc4dc 100644
+--- a/net/ncsi/ncsi-pkt.h
++++ b/net/ncsi/ncsi-pkt.h
+@@ -197,9 +197,12 @@ struct ncsi_rsp_gls_pkt {
+ /* Get Version ID */
+ struct ncsi_rsp_gvi_pkt {
+ 	struct ncsi_rsp_pkt_hdr rsp;          /* Response header */
+-	__be32                  ncsi_version; /* NCSI version    */
++	unsigned char           major;        /* NCSI version major */
++	unsigned char           minor;        /* NCSI version minor */
++	unsigned char           update;       /* NCSI version update */
++	unsigned char           alpha1;       /* NCSI version alpha1 */
+ 	unsigned char           reserved[3];  /* Reserved        */
+-	unsigned char           alpha2;       /* NCSI version    */
++	unsigned char           alpha2;       /* NCSI version alpha2 */
+ 	unsigned char           fw_name[12];  /* f/w name string */
+ 	__be32                  fw_version;   /* f/w version     */
+ 	__be16                  pci_ids[4];   /* PCI IDs         */
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 069c2659074bc..480e80e3c2836 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -19,6 +19,19 @@
+ #include "ncsi-pkt.h"
+ #include "ncsi-netlink.h"
+ 
++/* Nibbles in the reserved range [0xA, 0xF] decode to zero.
++ * Optional fields (encoded as 0xFF) will default to zero.
++ */
++static u8 decode_bcd_u8(u8 x)
++{
++	int lo = x & 0xF;
++	int hi = x >> 4;
++
++	lo = lo < 0xA ? lo : 0;
++	hi = hi < 0xA ? hi : 0;
++	return lo + hi * 10;
++}
++
+ static int ncsi_validate_rsp_pkt(struct ncsi_request *nr,
+ 				 unsigned short payload)
+ {
+@@ -755,9 +768,18 @@ static int ncsi_rsp_handler_gvi(struct ncsi_request *nr)
+ 	if (!nc)
+ 		return -ENODEV;
+ 
+-	/* Update to channel's version info */
++	/* Update channel's version info
++	 *
++	 * Major, minor, and update fields are supposed to be
++	 * unsigned integers encoded as packed BCD.
++	 *
++	 * Alpha1 and alpha2 are ISO/IEC 8859-1 characters.
++	 */
+ 	ncv = &nc->version;
+-	ncv->version = ntohl(rsp->ncsi_version);
++	ncv->major = decode_bcd_u8(rsp->major);
++	ncv->minor = decode_bcd_u8(rsp->minor);
++	ncv->update = decode_bcd_u8(rsp->update);
++	ncv->alpha1 = rsp->alpha1;
+ 	ncv->alpha2 = rsp->alpha2;
+ 	memcpy(ncv->fw_name, rsp->fw_name, 12);
+ 	ncv->fw_version = ntohl(rsp->fw_version);
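
decode_bcd_u8() treats each nibble as one decimal digit and maps the
reserved range [0xA, 0xF], including the 0xFF "field not present"
encoding, to zero: 0x42 decodes to 42 and 0xFF to 0. A standalone copy
with a few spot checks (the test values are ours):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t decode_bcd_u8(uint8_t x)
    {
            int lo = x & 0xF;
            int hi = x >> 4;

            lo = lo < 0xA ? lo : 0; /* reserved nibbles decode to zero */
            hi = hi < 0xA ? hi : 0;
            return lo + hi * 10;
    }

    int main(void)
    {
            assert(decode_bcd_u8(0x42) == 42);
            assert(decode_bcd_u8(0x09) == 9);
            assert(decode_bcd_u8(0xFF) == 0);  /* optional field absent */
            printf("bcd checks passed\n");
            return 0;
    }
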
+diff --git a/net/netfilter/ipset/ip_set_hash_netiface.c b/net/netfilter/ipset/ip_set_hash_netiface.c
+index 95aeb31c60e0d..30a655e5c4fdc 100644
+--- a/net/netfilter/ipset/ip_set_hash_netiface.c
++++ b/net/netfilter/ipset/ip_set_hash_netiface.c
+@@ -138,9 +138,9 @@ hash_netiface4_data_next(struct hash_netiface4_elem *next,
+ #include "ip_set_hash_gen.h"
+ 
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+-static const char *get_physindev_name(const struct sk_buff *skb)
++static const char *get_physindev_name(const struct sk_buff *skb, struct net *net)
+ {
+-	struct net_device *dev = nf_bridge_get_physindev(skb);
++	struct net_device *dev = nf_bridge_get_physindev(skb, net);
+ 
+ 	return dev ? dev->name : NULL;
+ }
+@@ -177,7 +177,7 @@ hash_netiface4_kadt(struct ip_set *set, const struct sk_buff *skb,
+ 
+ 	if (opt->cmdflags & IPSET_FLAG_PHYSDEV) {
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+-		const char *eiface = SRCDIR ? get_physindev_name(skb) :
++		const char *eiface = SRCDIR ? get_physindev_name(skb, xt_net(par)) :
+ 					      get_physoutdev_name(skb);
+ 
+ 		if (!eiface)
+@@ -395,7 +395,7 @@ hash_netiface6_kadt(struct ip_set *set, const struct sk_buff *skb,
+ 
+ 	if (opt->cmdflags & IPSET_FLAG_PHYSDEV) {
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+-		const char *eiface = SRCDIR ? get_physindev_name(skb) :
++		const char *eiface = SRCDIR ? get_physindev_name(skb, xt_net(par)) :
+ 					      get_physoutdev_name(skb);
+ 
+ 		if (!eiface)
+diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
+index 9193e109e6b38..65e0259178da4 100644
+--- a/net/netfilter/ipvs/ip_vs_xmit.c
++++ b/net/netfilter/ipvs/ip_vs_xmit.c
+@@ -271,7 +271,7 @@ static inline bool decrement_ttl(struct netns_ipvs *ipvs,
+ 			skb->dev = dst->dev;
+ 			icmpv6_send(skb, ICMPV6_TIME_EXCEED,
+ 				    ICMPV6_EXC_HOPLIMIT, 0);
+-			__IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
++			IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
+ 
+ 			return false;
+ 		}
+@@ -286,7 +286,7 @@ static inline bool decrement_ttl(struct netns_ipvs *ipvs,
+ 	{
+ 		if (ip_hdr(skb)->ttl <= 1) {
+ 			/* Tell the sender its packet died... */
+-			__IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
++			IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
+ 			icmp_send(skb, ICMP_TIME_EXCEEDED, ICMP_EXC_TTL, 0);
+ 			return false;
+ 		}
+diff --git a/net/netfilter/nf_log_syslog.c b/net/netfilter/nf_log_syslog.c
+index c66689ad2b491..58402226045e8 100644
+--- a/net/netfilter/nf_log_syslog.c
++++ b/net/netfilter/nf_log_syslog.c
+@@ -111,7 +111,8 @@ nf_log_dump_packet_common(struct nf_log_buf *m, u8 pf,
+ 			  unsigned int hooknum, const struct sk_buff *skb,
+ 			  const struct net_device *in,
+ 			  const struct net_device *out,
+-			  const struct nf_loginfo *loginfo, const char *prefix)
++			  const struct nf_loginfo *loginfo, const char *prefix,
++			  struct net *net)
+ {
+ 	const struct net_device *physoutdev __maybe_unused;
+ 	const struct net_device *physindev __maybe_unused;
+@@ -121,7 +122,7 @@ nf_log_dump_packet_common(struct nf_log_buf *m, u8 pf,
+ 			in ? in->name : "",
+ 			out ? out->name : "");
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+-	physindev = nf_bridge_get_physindev(skb);
++	physindev = nf_bridge_get_physindev(skb, net);
+ 	if (physindev && in != physindev)
+ 		nf_log_buf_add(m, "PHYSIN=%s ", physindev->name);
+ 	physoutdev = nf_bridge_get_physoutdev(skb);
+@@ -148,7 +149,7 @@ static void nf_log_arp_packet(struct net *net, u_int8_t pf,
+ 		loginfo = &default_loginfo;
+ 
+ 	nf_log_dump_packet_common(m, pf, hooknum, skb, in, out, loginfo,
+-				  prefix);
++				  prefix, net);
+ 	dump_arp_packet(m, loginfo, skb, skb_network_offset(skb));
+ 
+ 	nf_log_buf_close(m);
+@@ -845,7 +846,7 @@ static void nf_log_ip_packet(struct net *net, u_int8_t pf,
+ 		loginfo = &default_loginfo;
+ 
+ 	nf_log_dump_packet_common(m, pf, hooknum, skb, in,
+-				  out, loginfo, prefix);
++				  out, loginfo, prefix, net);
+ 
+ 	if (in)
+ 		dump_mac_header(m, loginfo, skb);
+@@ -880,7 +881,7 @@ static void nf_log_ip6_packet(struct net *net, u_int8_t pf,
+ 		loginfo = &default_loginfo;
+ 
+ 	nf_log_dump_packet_common(m, pf, hooknum, skb, in, out,
+-				  loginfo, prefix);
++				  loginfo, prefix, net);
+ 
+ 	if (in)
+ 		dump_mac_header(m, loginfo, skb);
+@@ -916,7 +917,7 @@ static void nf_log_unknown_packet(struct net *net, u_int8_t pf,
+ 		loginfo = &default_loginfo;
+ 
+ 	nf_log_dump_packet_common(m, pf, hooknum, skb, in, out, loginfo,
+-				  prefix);
++				  prefix, net);
+ 
+ 	dump_mac_header(m, loginfo, skb);
+ 
+diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
+index 63d1516816b1f..e2f334f70281f 100644
+--- a/net/netfilter/nf_queue.c
++++ b/net/netfilter/nf_queue.c
+@@ -82,11 +82,9 @@ static void __nf_queue_entry_init_physdevs(struct nf_queue_entry *entry)
+ {
+ #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER)
+ 	const struct sk_buff *skb = entry->skb;
+-	struct nf_bridge_info *nf_bridge;
+ 
+-	nf_bridge = nf_bridge_info_get(skb);
+-	if (nf_bridge) {
+-		entry->physin = nf_bridge_get_physindev(skb);
++	if (nf_bridge_info_exists(skb)) {
++		entry->physin = nf_bridge_get_physindev(skb, entry->state.net);
+ 		entry->physout = nf_bridge_get_physoutdev(skb);
+ 	} else {
+ 		entry->physin = NULL;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index be04af433988d..f032c29f1da64 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2261,7 +2261,16 @@ static int nft_chain_parse_hook(struct net *net,
+ 				return -EOPNOTSUPP;
+ 		}
+ 
+-		type = basechain->type;
++		if (nla[NFTA_CHAIN_TYPE]) {
++			type = __nf_tables_chain_type_lookup(nla[NFTA_CHAIN_TYPE],
++							     family);
++			if (!type) {
++				NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_TYPE]);
++				return -ENOENT;
++			}
++		} else {
++			type = basechain->type;
++		}
+ 	}
+ 
+ 	if (!try_module_get(type->owner)) {
+@@ -4802,8 +4811,8 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ 			       const struct nlattr *nla)
+ {
++	u32 num_regs = 0, key_num_regs = 0;
+ 	struct nlattr *attr;
+-	u32 num_regs = 0;
+ 	int rem, err, i;
+ 
+ 	nla_for_each_nested(attr, nla, rem) {
+@@ -4818,6 +4827,10 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ 	for (i = 0; i < desc->field_count; i++)
+ 		num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
+ 
++	key_num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
++	if (key_num_regs != num_regs)
++		return -EINVAL;
++
+ 	if (num_regs > NFT_REG32_COUNT)
+ 		return -E2BIG;
+ 
+@@ -5039,16 +5052,28 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ 	}
+ 
+ 	desc.policy = NFT_SET_POL_PERFORMANCE;
+-	if (nla[NFTA_SET_POLICY] != NULL)
++	if (nla[NFTA_SET_POLICY] != NULL) {
+ 		desc.policy = ntohl(nla_get_be32(nla[NFTA_SET_POLICY]));
++		switch (desc.policy) {
++		case NFT_SET_POL_PERFORMANCE:
++		case NFT_SET_POL_MEMORY:
++			break;
++		default:
++			return -EOPNOTSUPP;
++		}
++	}
+ 
+ 	if (nla[NFTA_SET_DESC] != NULL) {
+ 		err = nf_tables_set_desc_parse(&desc, nla[NFTA_SET_DESC]);
+ 		if (err < 0)
+ 			return err;
+ 
+-		if (desc.field_count > 1 && !(flags & NFT_SET_CONCAT))
++		if (desc.field_count > 1) {
++			if (!(flags & NFT_SET_CONCAT))
++				return -EINVAL;
++		} else if (flags & NFT_SET_CONCAT) {
+ 			return -EINVAL;
++		}
+ 	} else if (flags & NFT_SET_CONCAT) {
+ 		return -EINVAL;
+ 	}
+@@ -5695,7 +5720,7 @@ static int nf_tables_dump_setelem(const struct nft_ctx *ctx,
+ 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
+ 	struct nft_set_dump_args *args;
+ 
+-	if (nft_set_elem_expired(ext))
++	if (nft_set_elem_expired(ext) || nft_set_elem_is_dead(ext))
+ 		return 0;
+ 
+ 	args = container_of(iter, struct nft_set_dump_args, iter);
+@@ -6478,7 +6503,7 @@ static int nft_setelem_catchall_deactivate(const struct net *net,
+ 
+ 	list_for_each_entry(catchall, &set->catchall_list, list) {
+ 		ext = nft_set_elem_ext(set, catchall->elem);
+-		if (!nft_is_active(net, ext))
++		if (!nft_is_active_next(net, ext))
+ 			continue;
+ 
+ 		kfree(elem->priv);
+@@ -10383,6 +10408,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 				nft_trans_destroy(trans);
+ 				break;
+ 			}
++			nft_trans_set(trans)->dead = 1;
+ 			list_del_rcu(&nft_trans_set(trans)->list);
+ 			break;
+ 		case NFT_MSG_DELSET:
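
Among the validation fixes above, the set-description check requires that
the per-field register counts sum to the count implied by the total key
length, both computed with DIV_ROUND_UP(len, sizeof(u32)), i.e.
(n + d - 1) / d. A small check of the consistency rule (the field lengths
are arbitrary examples):

    #include <stdint.h>
    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned int field_len[] = { 4, 16, 2 };  /* e.g. v4 . v6 . port */
            unsigned int klen = 4 + 16 + 4; /* concat pads fields to u32 */
            unsigned int num_regs = 0;
            unsigned int i;

            for (i = 0; i < 3; i++)
                    num_regs += DIV_ROUND_UP(field_len[i], sizeof(uint32_t));

            unsigned int key_num_regs = DIV_ROUND_UP(klen, sizeof(uint32_t));

            printf("num_regs=%u key_num_regs=%u -> %s\n", num_regs,
                   key_num_regs,
                   num_regs == key_num_regs ? "consistent" : "-EINVAL");
            return 0;
    }
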
+diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
+index f03f4d4d7d889..134e05d31061e 100644
+--- a/net/netfilter/nfnetlink_log.c
++++ b/net/netfilter/nfnetlink_log.c
+@@ -508,7 +508,7 @@ __build_packet_message(struct nfnl_log_net *log,
+ 					 htonl(br_port_get_rcu(indev)->br->dev->ifindex)))
+ 				goto nla_put_failure;
+ 		} else {
+-			struct net_device *physindev;
++			int physinif;
+ 
+ 			/* Case 2: indev is bridge group, we need to look for
+ 			 * physical device (when called from ipv4) */
+@@ -516,10 +516,10 @@ __build_packet_message(struct nfnl_log_net *log,
+ 					 htonl(indev->ifindex)))
+ 				goto nla_put_failure;
+ 
+-			physindev = nf_bridge_get_physindev(skb);
+-			if (physindev &&
++			physinif = nf_bridge_get_physinif(skb);
++			if (physinif &&
+ 			    nla_put_be32(inst->skb, NFULA_IFINDEX_PHYSINDEV,
+-					 htonl(physindev->ifindex)))
++					 htonl(physinif)))
+ 				goto nla_put_failure;
+ 		}
+ #endif
+diff --git a/net/netfilter/nft_limit.c b/net/netfilter/nft_limit.c
+index 145dc62c62472..79039afde34ec 100644
+--- a/net/netfilter/nft_limit.c
++++ b/net/netfilter/nft_limit.c
+@@ -58,6 +58,7 @@ static inline bool nft_limit_eval(struct nft_limit_priv *priv, u64 cost)
+ static int nft_limit_init(struct nft_limit_priv *priv,
+ 			  const struct nlattr * const tb[], bool pkts)
+ {
++	bool invert = false;
+ 	u64 unit, tokens;
+ 
+ 	if (tb[NFTA_LIMIT_RATE] == NULL ||
+@@ -90,19 +91,23 @@ static int nft_limit_init(struct nft_limit_priv *priv,
+ 				 priv->rate);
+ 	}
+ 
++	if (tb[NFTA_LIMIT_FLAGS]) {
++		u32 flags = ntohl(nla_get_be32(tb[NFTA_LIMIT_FLAGS]));
++
++		if (flags & ~NFT_LIMIT_F_INV)
++			return -EOPNOTSUPP;
++
++		if (flags & NFT_LIMIT_F_INV)
++			invert = true;
++	}
++
+ 	priv->limit = kmalloc(sizeof(*priv->limit), GFP_KERNEL_ACCOUNT);
+ 	if (!priv->limit)
+ 		return -ENOMEM;
+ 
+ 	priv->limit->tokens = tokens;
+ 	priv->tokens_max = priv->limit->tokens;
+-
+-	if (tb[NFTA_LIMIT_FLAGS]) {
+-		u32 flags = ntohl(nla_get_be32(tb[NFTA_LIMIT_FLAGS]));
+-
+-		if (flags & NFT_LIMIT_F_INV)
+-			priv->invert = true;
+-	}
++	priv->invert = invert;
+ 	priv->limit->last = ktime_get_ns();
+ 	spin_lock_init(&priv->limit->lock);
+ 
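
The hunk moves flag parsing, including the new rejection of unknown bits,
ahead of the kmalloc() so an invalid request fails before anything is
allocated and only fully validated state is committed. A sketch of the
validate-then-allocate ordering, with illustrative names:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define F_INV 0x1       /* the only supported flag */

    struct limit { int invert; };

    static int limit_init(struct limit **out, unsigned int flags)
    {
            int invert = 0;

            if (flags & ~F_INV)
                    return -EOPNOTSUPP;     /* fail before any allocation */
            if (flags & F_INV)
                    invert = 1;

            *out = malloc(sizeof(**out));
            if (!*out)
                    return -ENOMEM;
            (*out)->invert = invert;
            return 0;
    }

    int main(void)
    {
            struct limit *l;
            int ret = limit_init(&l, 0x2);  /* unknown flag */

            printf("init: %d\n", ret);      /* -EOPNOTSUPP, nothing leaked */
            return 0;
    }
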
+diff --git a/net/netfilter/xt_physdev.c b/net/netfilter/xt_physdev.c
+index ec6ed6fda96c5..343e65f377d44 100644
+--- a/net/netfilter/xt_physdev.c
++++ b/net/netfilter/xt_physdev.c
+@@ -59,7 +59,7 @@ physdev_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 	    (!!outdev ^ !(info->invert & XT_PHYSDEV_OP_BRIDGED)))
+ 		return false;
+ 
+-	physdev = nf_bridge_get_physindev(skb);
++	physdev = nf_bridge_get_physindev(skb, xt_net(par));
+ 	indev = physdev ? physdev->name : NULL;
+ 
+ 	if ((info->bitmask & XT_PHYSDEV_OP_ISIN &&
+diff --git a/net/netlabel/netlabel_calipso.c b/net/netlabel/netlabel_calipso.c
+index f1d5b84652178..a07c2216d28b6 100644
+--- a/net/netlabel/netlabel_calipso.c
++++ b/net/netlabel/netlabel_calipso.c
+@@ -54,6 +54,28 @@ static const struct nla_policy calipso_genl_policy[NLBL_CALIPSO_A_MAX + 1] = {
+ 	[NLBL_CALIPSO_A_MTYPE] = { .type = NLA_U32 },
+ };
+ 
++static const struct netlbl_calipso_ops *calipso_ops;
++
++/**
++ * netlbl_calipso_ops_register - Register the CALIPSO operations
++ * @ops: ops to register
++ *
++ * Description:
++ * Register the CALIPSO packet engine operations.
++ *
++ */
++const struct netlbl_calipso_ops *
++netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops)
++{
++	return xchg(&calipso_ops, ops);
++}
++EXPORT_SYMBOL(netlbl_calipso_ops_register);
++
++static const struct netlbl_calipso_ops *netlbl_calipso_ops_get(void)
++{
++	return READ_ONCE(calipso_ops);
++}
++
+ /* NetLabel Command Handlers
+  */
+ /**
+@@ -96,15 +118,18 @@ static int netlbl_calipso_add_pass(struct genl_info *info,
+  *
+  */
+ static int netlbl_calipso_add(struct sk_buff *skb, struct genl_info *info)
+-
+ {
+ 	int ret_val = -EINVAL;
+ 	struct netlbl_audit audit_info;
++	const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
+ 
+ 	if (!info->attrs[NLBL_CALIPSO_A_DOI] ||
+ 	    !info->attrs[NLBL_CALIPSO_A_MTYPE])
+ 		return -EINVAL;
+ 
++	if (!ops)
++		return -EOPNOTSUPP;
++
+ 	netlbl_netlink_auditinfo(&audit_info);
+ 	switch (nla_get_u32(info->attrs[NLBL_CALIPSO_A_MTYPE])) {
+ 	case CALIPSO_MAP_PASS:
+@@ -363,28 +388,6 @@ int __init netlbl_calipso_genl_init(void)
+ 	return genl_register_family(&netlbl_calipso_gnl_family);
+ }
+ 
+-static const struct netlbl_calipso_ops *calipso_ops;
+-
+-/**
+- * netlbl_calipso_ops_register - Register the CALIPSO operations
+- * @ops: ops to register
+- *
+- * Description:
+- * Register the CALIPSO packet engine operations.
+- *
+- */
+-const struct netlbl_calipso_ops *
+-netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops)
+-{
+-	return xchg(&calipso_ops, ops);
+-}
+-EXPORT_SYMBOL(netlbl_calipso_ops_register);
+-
+-static const struct netlbl_calipso_ops *netlbl_calipso_ops_get(void)
+-{
+-	return READ_ONCE(calipso_ops);
+-}
+-
+ /**
+  * calipso_doi_add - Add a new DOI to the CALIPSO protocol engine
+  * @doi_def: the DOI structure
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index e8e14c6f904d9..e8b43408136ab 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -1076,6 +1076,7 @@ void rxrpc_send_version_request(struct rxrpc_local *local,
+ /*
+  * local_object.c
+  */
++void rxrpc_local_dont_fragment(const struct rxrpc_local *local, bool set);
+ struct rxrpc_local *rxrpc_lookup_local(struct net *, const struct sockaddr_rxrpc *);
+ struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *, enum rxrpc_local_trace);
+ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *, enum rxrpc_local_trace);
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 773eecd1e9794..f10b37c147721 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -545,8 +545,8 @@ void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
+  */
+ static void rxrpc_cleanup_ring(struct rxrpc_call *call)
+ {
+-	skb_queue_purge(&call->recvmsg_queue);
+-	skb_queue_purge(&call->rx_oos_queue);
++	rxrpc_purge_queue(&call->recvmsg_queue);
++	rxrpc_purge_queue(&call->rx_oos_queue);
+ }
+ 
+ /*
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index c553a30e9c838..34d3073681353 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -36,6 +36,17 @@ static void rxrpc_encap_err_rcv(struct sock *sk, struct sk_buff *skb, int err,
+ 		return ipv6_icmp_error(sk, skb, err, port, info, payload);
+ }
+ 
++/*
++ * Set or clear the Don't Fragment flag on a socket.
++ */
++void rxrpc_local_dont_fragment(const struct rxrpc_local *local, bool set)
++{
++	if (set)
++		ip_sock_set_mtu_discover(local->socket->sk, IP_PMTUDISC_DO);
++	else
++		ip_sock_set_mtu_discover(local->socket->sk, IP_PMTUDISC_DONT);
++}
++
+ /*
+  * Compare a local to an address.  Return -ve, 0 or +ve to indicate less than,
+  * same or greater than.
+@@ -203,7 +214,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		ip_sock_set_recverr(usk);
+ 
+ 		/* we want to set the don't fragment bit */
+-		ip_sock_set_mtu_discover(usk, IP_PMTUDISC_DO);
++		rxrpc_local_dont_fragment(local, true);
+ 
+ 		/* We want receive timestamps. */
+ 		sock_enable_timestamps(usk);
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 5e53429c69228..a0906145e8293 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -494,14 +494,12 @@ send_fragmentable:
+ 	switch (conn->local->srx.transport.family) {
+ 	case AF_INET6:
+ 	case AF_INET:
+-		ip_sock_set_mtu_discover(conn->local->socket->sk,
+-					 IP_PMTUDISC_DONT);
++		rxrpc_local_dont_fragment(conn->local, false);
+ 		rxrpc_inc_stat(call->rxnet, stat_tx_data_send_frag);
+ 		ret = do_udp_sendmsg(conn->local->socket, &msg, len);
+ 		conn->peer->last_tx_at = ktime_get_seconds();
+ 
+-		ip_sock_set_mtu_discover(conn->local->socket->sk,
+-					 IP_PMTUDISC_DO);
++		rxrpc_local_dont_fragment(conn->local, true);
+ 		break;
+ 
+ 	default:
+diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
+index 1bf571a66e020..b52dedcebce0a 100644
+--- a/net/rxrpc/rxkad.c
++++ b/net/rxrpc/rxkad.c
+@@ -724,7 +724,9 @@ static int rxkad_send_response(struct rxrpc_connection *conn,
+ 	serial = atomic_inc_return(&conn->serial);
+ 	whdr.serial = htonl(serial);
+ 
++	rxrpc_local_dont_fragment(conn->local, false);
+ 	ret = kernel_sendmsg(conn->local->socket, &msg, iov, 3, len);
++	rxrpc_local_dont_fragment(conn->local, true);
+ 	if (ret < 0) {
+ 		trace_rxrpc_tx_fail(conn->debug_id, serial, ret,
+ 				    rxrpc_tx_point_rxkad_response);
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index f69c47945175b..3d50215985d51 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -850,7 +850,6 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ 	if (err || !frag)
+ 		return err;
+ 
+-	skb_get(skb);
+ 	err = nf_ct_handle_fragments(net, skb, zone, family, &proto, &mru);
+ 	if (err)
+ 		return err;
+@@ -999,12 +998,8 @@ TC_INDIRECT_SCOPE int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ 	nh_ofs = skb_network_offset(skb);
+ 	skb_pull_rcsum(skb, nh_ofs);
+ 	err = tcf_ct_handle_fragments(net, skb, family, p->zone, &defrag);
+-	if (err == -EINPROGRESS) {
+-		retval = TC_ACT_STOLEN;
+-		goto out_clear;
+-	}
+ 	if (err)
+-		goto drop;
++		goto out_frag;
+ 
+ 	err = nf_ct_skb_network_trim(skb, family);
+ 	if (err)
+@@ -1091,6 +1086,11 @@ out_clear:
+ 		qdisc_skb_cb(skb)->pkt_len = skb->len;
+ 	return retval;
+ 
++out_frag:
++	if (err != -EINPROGRESS)
++		tcf_action_inc_drop_qstats(&c->common);
++	return TC_ACT_CONSUMED;
++
+ drop:
+ 	tcf_action_inc_drop_qstats(&c->common);
+ 	return TC_ACT_SHOT;
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 7f89e43154c09..6b9fcdb0952a0 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -2099,6 +2099,13 @@ static int sctp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 	pr_debug("%s: sk:%p, msghdr:%p, len:%zd, flags:0x%x, addr_len:%p)\n",
+ 		 __func__, sk, msg, len, flags, addr_len);
+ 
++	if (unlikely(flags & MSG_ERRQUEUE))
++		return inet_recv_error(sk, msg, len, addr_len);
++
++	if (sk_can_busy_loop(sk) &&
++	    skb_queue_empty_lockless(&sk->sk_receive_queue))
++		sk_busy_loop(sk, flags & MSG_DONTWAIT);
++
+ 	lock_sock(sk);
+ 
+ 	if (sctp_style(sk, TCP) && !sctp_sstate(sk, ESTABLISHED) &&
+@@ -9043,12 +9050,6 @@ struct sk_buff *sctp_skb_recv_datagram(struct sock *sk, int flags, int *err)
+ 		if (sk->sk_shutdown & RCV_SHUTDOWN)
+ 			break;
+ 
+-		if (sk_can_busy_loop(sk)) {
+-			sk_busy_loop(sk, flags & MSG_DONTWAIT);
+-
+-			if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+-				continue;
+-		}
+ 
+ 		/* User doesn't want to wait.  */
+ 		error = -EAGAIN;
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 2364c485540c6..6cc9ffac962d2 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -651,9 +651,9 @@ static unsigned long xprt_abs_ktime_to_jiffies(ktime_t abstime)
+ 		jiffies + nsecs_to_jiffies(-delta);
+ }
+ 
+-static unsigned long xprt_calc_majortimeo(struct rpc_rqst *req)
++static unsigned long xprt_calc_majortimeo(struct rpc_rqst *req,
++		const struct rpc_timeout *to)
+ {
+-	const struct rpc_timeout *to = req->rq_task->tk_client->cl_timeout;
+ 	unsigned long majortimeo = req->rq_timeout;
+ 
+ 	if (to->to_exponential)
+@@ -665,9 +665,10 @@ static unsigned long xprt_calc_majortimeo(struct rpc_rqst *req)
+ 	return majortimeo;
+ }
+ 
+-static void xprt_reset_majortimeo(struct rpc_rqst *req)
++static void xprt_reset_majortimeo(struct rpc_rqst *req,
++		const struct rpc_timeout *to)
+ {
+-	req->rq_majortimeo += xprt_calc_majortimeo(req);
++	req->rq_majortimeo += xprt_calc_majortimeo(req, to);
+ }
+ 
+ static void xprt_reset_minortimeo(struct rpc_rqst *req)
+@@ -675,7 +676,8 @@ static void xprt_reset_minortimeo(struct rpc_rqst *req)
+ 	req->rq_minortimeo += req->rq_timeout;
+ }
+ 
+-static void xprt_init_majortimeo(struct rpc_task *task, struct rpc_rqst *req)
++static void xprt_init_majortimeo(struct rpc_task *task, struct rpc_rqst *req,
++		const struct rpc_timeout *to)
+ {
+ 	unsigned long time_init;
+ 	struct rpc_xprt *xprt = req->rq_xprt;
+@@ -684,8 +686,9 @@ static void xprt_init_majortimeo(struct rpc_task *task, struct rpc_rqst *req)
+ 		time_init = jiffies;
+ 	else
+ 		time_init = xprt_abs_ktime_to_jiffies(task->tk_start);
+-	req->rq_timeout = task->tk_client->cl_timeout->to_initval;
+-	req->rq_majortimeo = time_init + xprt_calc_majortimeo(req);
++
++	req->rq_timeout = to->to_initval;
++	req->rq_majortimeo = time_init + xprt_calc_majortimeo(req, to);
+ 	req->rq_minortimeo = time_init + req->rq_timeout;
+ }
+ 
+@@ -713,7 +716,7 @@ int xprt_adjust_timeout(struct rpc_rqst *req)
+ 	} else {
+ 		req->rq_timeout = to->to_initval;
+ 		req->rq_retries = 0;
+-		xprt_reset_majortimeo(req);
++		xprt_reset_majortimeo(req, to);
+ 		/* Reset the RTT counters == "slow start" */
+ 		spin_lock(&xprt->transport_lock);
+ 		rpc_init_rtt(req->rq_task->tk_client->cl_rtt, to->to_initval);
+@@ -1886,7 +1889,7 @@ xprt_request_init(struct rpc_task *task)
+ 	req->rq_snd_buf.bvec = NULL;
+ 	req->rq_rcv_buf.bvec = NULL;
+ 	req->rq_release_snd_buf = NULL;
+-	xprt_init_majortimeo(task, req);
++	xprt_init_majortimeo(task, req, task->tk_client->cl_timeout);
+ 
+ 	trace_xprt_reserve(req);
+ }
+@@ -1996,6 +1999,8 @@ xprt_init_bc_request(struct rpc_rqst *req, struct rpc_task *task)
+ 	 */
+ 	xbufp->len = xbufp->head[0].iov_len + xbufp->page_len +
+ 		xbufp->tail[0].iov_len;
++
++	xprt_init_majortimeo(task, req, req->rq_xprt->timeout);
+ }
+ #endif
+ 
+diff --git a/net/sunrpc/xprtmultipath.c b/net/sunrpc/xprtmultipath.c
+index 701250b305dba..74ee2271251e3 100644
+--- a/net/sunrpc/xprtmultipath.c
++++ b/net/sunrpc/xprtmultipath.c
+@@ -284,7 +284,7 @@ struct rpc_xprt *_xprt_switch_find_current_entry(struct list_head *head,
+ 		if (cur == pos)
+ 			found = true;
+ 		if (found && ((find_active && xprt_is_active(pos)) ||
+-			      (!find_active && xprt_is_active(pos))))
++			      (!find_active && !xprt_is_active(pos))))
+ 			return pos;
+ 	}
+ 	return NULL;
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index e37b4d2e2acde..31e8a94dfc111 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1052,7 +1052,11 @@ alloc_encrypted:
+ 			if (ret < 0)
+ 				goto send_end;
+ 			tls_ctx->pending_open_record_frags = true;
+-			if (full_record || eor || sk_msg_full(msg_pl))
++
++			if (sk_msg_full(msg_pl))
++				full_record = true;
++
++			if (full_record || eor)
+ 				goto copied;
+ 			continue;
+ 		}
+diff --git a/net/unix/unix_bpf.c b/net/unix/unix_bpf.c
+index 7ea7c3a0d0d06..bd84785bf8d6c 100644
+--- a/net/unix/unix_bpf.c
++++ b/net/unix/unix_bpf.c
+@@ -161,15 +161,30 @@ int unix_stream_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool r
+ {
+ 	struct sock *sk_pair;
+ 
++	/* Restore does not decrement the sk_pair reference yet because we must
++	 * keep a reference to the socket until after an RCU grace period
++	 * and any pending sends have completed.
++	 */
+ 	if (restore) {
+ 		sk->sk_write_space = psock->saved_write_space;
+ 		sock_replace_proto(sk, psock->sk_proto);
+ 		return 0;
+ 	}
+ 
+-	sk_pair = unix_peer(sk);
+-	sock_hold(sk_pair);
+-	psock->sk_pair = sk_pair;
++	/* psock_update_sk_prot can be called multiple times if psock is
++	 * added to multiple maps and/or slots in the same map. There is
++	 * also an edge case where replacing a psock with itself can trigger
++	 * an extra psock_update_sk_prot during the insert process. So it
++	 * must be safe to do multiple calls. Here we need to ensure we don't
++	 * increment the refcnt through sock_hold many times. There will only
++	 * be a single matching destroy operation.
++	 */
++	if (!psock->sk_pair) {
++		sk_pair = unix_peer(sk);
++		sock_hold(sk_pair);
++		psock->sk_pair = sk_pair;
++	}
++
+ 	unix_stream_bpf_check_needs_rebuild(psock->sk_proto);
+ 	sock_replace_proto(sk, &unix_stream_bpf_prot);
+ 	return 0;
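
The guard makes the reference acquisition idempotent: however many times
the update hook runs for the same psock, sock_hold() is taken at most
once, matching the single release on destroy. A toy refcount model of the
fix (types are illustrative):

    #include <assert.h>
    #include <stdio.h>

    struct sock { int refcnt; };
    struct psock { struct sock *sk_pair; };

    static void sock_hold(struct sock *sk) { sk->refcnt++; }
    static void sock_put(struct sock *sk) { sk->refcnt--; }

    static void update(struct psock *p, struct sock *peer)
    {
            if (!p->sk_pair) {      /* the added guard */
                    sock_hold(peer);
                    p->sk_pair = peer;
            }
    }

    int main(void)
    {
            struct sock peer = { .refcnt = 1 };
            struct psock p = { 0 };

            update(&p, &peer);
            update(&p, &peer);      /* re-insert into another map slot */
            sock_put(p.sk_pair);    /* single destroy */
            assert(peer.refcnt == 1);
            printf("balanced refcnt: %d\n", peer.refcnt);
            return 0;
    }
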
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 816725af281f3..54ba7316f8085 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -2264,8 +2264,13 @@ static int vsock_set_rcvlowat(struct sock *sk, int val)
+ 
+ 	transport = vsk->transport;
+ 
+-	if (transport && transport->set_rcvlowat)
+-		return transport->set_rcvlowat(vsk, val);
++	if (transport && transport->notify_set_rcvlowat) {
++		int err;
++
++		err = transport->notify_set_rcvlowat(vsk, val);
++		if (err)
++			return err;
++	}
+ 
+ 	WRITE_ONCE(sk->sk_rcvlowat, val ? : 1);
+ 	return 0;
+diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
+index 7cb1a9d2cdb4f..e2157e3872177 100644
+--- a/net/vmw_vsock/hyperv_transport.c
++++ b/net/vmw_vsock/hyperv_transport.c
+@@ -816,7 +816,7 @@ int hvs_notify_send_post_enqueue(struct vsock_sock *vsk, ssize_t written,
+ }
+ 
+ static
+-int hvs_set_rcvlowat(struct vsock_sock *vsk, int val)
++int hvs_notify_set_rcvlowat(struct vsock_sock *vsk, int val)
+ {
+ 	return -EOPNOTSUPP;
+ }
+@@ -856,7 +856,7 @@ static struct vsock_transport hvs_transport = {
+ 	.notify_send_pre_enqueue  = hvs_notify_send_pre_enqueue,
+ 	.notify_send_post_enqueue = hvs_notify_send_post_enqueue,
+ 
+-	.set_rcvlowat             = hvs_set_rcvlowat
++	.notify_set_rcvlowat      = hvs_notify_set_rcvlowat
+ };
+ 
+ static bool hvs_check_transport(struct vsock_sock *vsk)
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index af5bab1acee17..f495b9e5186b2 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -537,6 +537,7 @@ static struct virtio_transport virtio_transport = {
+ 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
+ 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
+ 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
++		.notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
+ 
+ 		.read_skb = virtio_transport_read_skb,
+ 	},
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 6df246b532606..16ff976a86e3e 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -557,6 +557,8 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 	struct virtio_vsock_sock *vvs = vsk->trans;
+ 	size_t bytes, total = 0;
+ 	struct sk_buff *skb;
++	u32 fwd_cnt_delta;
++	bool low_rx_bytes;
+ 	int err = -EFAULT;
+ 	u32 free_space;
+ 
+@@ -600,7 +602,10 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 		}
+ 	}
+ 
+-	free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);
++	fwd_cnt_delta = vvs->fwd_cnt - vvs->last_fwd_cnt;
++	free_space = vvs->buf_alloc - fwd_cnt_delta;
++	low_rx_bytes = (vvs->rx_bytes <
++			sock_rcvlowat(sk_vsock(vsk), 0, INT_MAX));
+ 
+ 	spin_unlock_bh(&vvs->rx_lock);
+ 
+@@ -610,9 +615,11 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 	 * too high causes extra messages. Too low causes transmitter
+ 	 * stalls. As stalls are in theory more expensive than extra
+ 	 * messages, we set the limit to a high value. TODO: experiment
+-	 * with different values.
++	 * with different values. Also send credit update message when
++	 * number of bytes in rx queue is not enough to wake up reader.
+ 	 */
+-	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
++	if (fwd_cnt_delta &&
++	    (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE || low_rx_bytes))
+ 		virtio_transport_send_credit_update(vsk);
+ 
+ 	return total;
+@@ -1683,6 +1690,36 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
+ }
+ EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
+ 
++int virtio_transport_notify_set_rcvlowat(struct vsock_sock *vsk, int val)
++{
++	struct virtio_vsock_sock *vvs = vsk->trans;
++	bool send_update;
++
++	spin_lock_bh(&vvs->rx_lock);
++
++	/* If number of available bytes is less than new SO_RCVLOWAT value,
++	 * kick sender to send more data, because sender may sleep in its
++	 * 'send()' syscall waiting for enough space at our side. Also
++	 * don't send credit update when peer already knows actual value -
++	 * such transmission will be useless.
++	 */
++	send_update = (vvs->rx_bytes < val) &&
++		      (vvs->fwd_cnt != vvs->last_fwd_cnt);
++
++	spin_unlock_bh(&vvs->rx_lock);
++
++	if (send_update) {
++		int err;
++
++		err = virtio_transport_send_credit_update(vsk);
++		if (err < 0)
++			return err;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(virtio_transport_notify_set_rcvlowat);
++
+ MODULE_LICENSE("GPL v2");
+ MODULE_AUTHOR("Asias He");
+ MODULE_DESCRIPTION("common code for virtio vsock");
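
The dequeue change above sends a credit update only when the peer's view of
consumed bytes is stale (fwd_cnt_delta non-zero) and either free space ran low
or the receive queue cannot satisfy SO_RCVLOWAT. A worked example with assumed
constants (the real values live in include/linux/virtio_vsock.h; the 256 KiB
buffer and 64 KiB packet-buffer limit below are assumptions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PKT_BUF_SIZE	(64u * 1024)	/* assumed value */

int main(void)
{
	uint32_t buf_alloc = 256u * 1024;	/* assumed value */
	uint32_t fwd_cnt = 300000, last_fwd_cnt = 100000;
	uint32_t rx_bytes = 0, rcvlowat = 128;

	uint32_t fwd_cnt_delta = fwd_cnt - last_fwd_cnt;	/* 200000 */
	uint32_t free_space = buf_alloc - fwd_cnt_delta;	/* 62144 */
	bool low_rx_bytes = rx_bytes < rcvlowat;		/* true */

	/* Stale peer view, plus low space or an unwakeable reader. */
	if (fwd_cnt_delta &&
	    (free_space < MAX_PKT_BUF_SIZE || low_rx_bytes))
		puts("send credit update");
	return 0;
}
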
+diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
+index 0486401674117..6dea6119f5b28 100644
+--- a/net/vmw_vsock/vsock_loopback.c
++++ b/net/vmw_vsock/vsock_loopback.c
+@@ -96,6 +96,7 @@ static struct virtio_transport loopback_transport = {
+ 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
+ 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
+ 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
++		.notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
+ 
+ 		.read_skb = virtio_transport_read_skb,
+ 	},
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 9e5ccffd68684..0d6c3fc1238ab 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -2591,10 +2591,12 @@ cfg80211_tbtt_info_for_mld_ap(const u8 *ie, size_t ielen, u8 mld_id, u8 link_id,
+ 	return false;
+ }
+ 
+-static void cfg80211_parse_ml_sta_data(struct wiphy *wiphy,
+-				       struct cfg80211_inform_single_bss_data *tx_data,
+-				       struct cfg80211_bss *source_bss,
+-				       gfp_t gfp)
++static void
++cfg80211_parse_ml_elem_sta_data(struct wiphy *wiphy,
++				struct cfg80211_inform_single_bss_data *tx_data,
++				struct cfg80211_bss *source_bss,
++				const struct element *elem,
++				gfp_t gfp)
+ {
+ 	struct cfg80211_inform_single_bss_data data = {
+ 		.drv_data = tx_data->drv_data,
+@@ -2603,7 +2605,6 @@ static void cfg80211_parse_ml_sta_data(struct wiphy *wiphy,
+ 		.bss_source = BSS_SOURCE_STA_PROFILE,
+ 	};
+ 	struct ieee80211_multi_link_elem *ml_elem;
+-	const struct element *elem;
+ 	struct cfg80211_mle *mle;
+ 	u16 control;
+ 	u8 *new_ie;
+@@ -2613,15 +2614,7 @@ static void cfg80211_parse_ml_sta_data(struct wiphy *wiphy,
+ 	const u8 *pos;
+ 	u8 i;
+ 
+-	if (!source_bss)
+-		return;
+-
+-	if (tx_data->ftype != CFG80211_BSS_FTYPE_PRESP)
+-		return;
+-
+-	elem = cfg80211_find_ext_elem(WLAN_EID_EXT_EHT_MULTI_LINK,
+-				      tx_data->ie, tx_data->ielen);
+-	if (!elem || !ieee80211_mle_size_ok(elem->data + 1, elem->datalen - 1))
++	if (!ieee80211_mle_size_ok(elem->data + 1, elem->datalen - 1))
+ 		return;
+ 
+ 	ml_elem = (void *)elem->data + 1;
+@@ -2647,8 +2640,11 @@ static void cfg80211_parse_ml_sta_data(struct wiphy *wiphy,
+ 	/* MLD capabilities and operations */
+ 	pos += 2;
+ 
+-	/* Not included when the (nontransmitted) AP is responding itself,
+-	 * but defined to zero then (Draft P802.11be_D3.0, 9.4.2.170.2)
++	/*
++	 * The MLD ID of the reporting AP is always zero. It is set if the AP
++	 * is part of an MBSSID set and will be non-zero for ML Elements
++	 * relating to a nontransmitted BSS (matching the Multi-BSSID Index,
++	 * Draft P802.11be_D3.2, 35.3.4.2)
+ 	 */
+ 	if (u16_get_bits(control, IEEE80211_MLC_BASIC_PRES_MLD_ID)) {
+ 		mld_id = *pos;
+@@ -2753,6 +2749,25 @@ out:
+ 	kfree(mle);
+ }
+ 
++static void cfg80211_parse_ml_sta_data(struct wiphy *wiphy,
++				       struct cfg80211_inform_single_bss_data *tx_data,
++				       struct cfg80211_bss *source_bss,
++				       gfp_t gfp)
++{
++	const struct element *elem;
++
++	if (!source_bss)
++		return;
++
++	if (tx_data->ftype != CFG80211_BSS_FTYPE_PRESP)
++		return;
++
++	for_each_element_extid(elem, WLAN_EID_EXT_EHT_MULTI_LINK,
++			       tx_data->ie, tx_data->ielen)
++		cfg80211_parse_ml_elem_sta_data(wiphy, tx_data, source_bss,
++						elem, gfp);
++}
++
+ struct cfg80211_bss *
+ cfg80211_inform_bss_data(struct wiphy *wiphy,
+ 			 struct cfg80211_inform_bss *data,
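
The refactor above replaces a single cfg80211_find_ext_elem() lookup with
for_each_element_extid(), so every EHT Multi-Link element in the IEs is parsed
rather than only the first. A self-contained sketch of the same walk over
802.11-style id/len elements (layout assumptions: extension elements use
element ID 255 and carry the extended ID in the first payload byte; 0x6b as
the Multi-Link extended ID is also an assumption here):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define EID_EXTENSION	255	/* assumed 802.11 extension element ID */

/* Report each extension element matching extid; illustrative only,
 * the kernel's for_each_element_extid() does the real bounds checks. */
static void walk_extid(const uint8_t *ies, size_t len, uint8_t extid)
{
	size_t off = 0;

	while (off + 2 <= len && off + 2 + ies[off + 1] <= len) {
		uint8_t id = ies[off], elen = ies[off + 1];

		if (id == EID_EXTENSION && elen >= 1 &&
		    ies[off + 2] == extid)
			printf("element at offset %zu, len %u\n",
			       off, elen);
		off += 2 + elen;
	}
}

int main(void)
{
	/* two matching extension elements around an unrelated element */
	const uint8_t ies[] = {
		255, 3, 0x6b, 1, 2,
		0,   2, 'x', 'y',
		255, 2, 0x6b, 9,
	};

	walk_extid(ies, sizeof(ies), 0x6b);
	return 0;
}
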
+diff --git a/rust/bindgen_parameters b/rust/bindgen_parameters
+index 552d9a85925b9..a721d466bee4b 100644
+--- a/rust/bindgen_parameters
++++ b/rust/bindgen_parameters
+@@ -20,3 +20,7 @@
+ 
+ # `seccomp`'s comment gets understood as a doctest
+ --no-doc-comments
++
++# These functions use the `__preserve_most` calling convention, which neither bindgen
++# nor Rust currently understands, and which Clang currently declares to be unstable.
++--blocklist-function __list_.*_report
+diff --git a/security/apparmor/lib.c b/security/apparmor/lib.c
+index 4c198d273f091..cd569fbbfe36d 100644
+--- a/security/apparmor/lib.c
++++ b/security/apparmor/lib.c
+@@ -41,6 +41,7 @@ void aa_free_str_table(struct aa_str_table *t)
+ 			kfree_sensitive(t->table[i]);
+ 		kfree_sensitive(t->table);
+ 		t->table = NULL;
++		t->size = 0;
+ 	}
+ }
+ 
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 4981bdf029931..608a849a74681 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -954,7 +954,6 @@ static int apparmor_task_kill(struct task_struct *target, struct kernel_siginfo
+ 		cl = aa_get_newest_cred_label(cred);
+ 		error = aa_may_signal(cred, cl, tc, tl, sig);
+ 		aa_put_label(cl);
+-		return error;
+ 	} else {
+ 		cl = __begin_current_label_crit_section();
+ 		error = aa_may_signal(current_cred(), cl, tc, tl, sig);
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index ed4c9803c8fad..957654d253dd7 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -99,13 +99,14 @@ const char *const aa_profile_mode_names[] = {
+ };
+ 
+ 
+-static void aa_free_pdb(struct aa_policydb *policy)
++static void aa_free_pdb(struct aa_policydb *pdb)
+ {
+-	if (policy) {
+-		aa_put_dfa(policy->dfa);
+-		if (policy->perms)
+-			kvfree(policy->perms);
+-		aa_free_str_table(&policy->trans);
++	if (pdb) {
++		aa_put_dfa(pdb->dfa);
++		if (pdb->perms)
++			kvfree(pdb->perms);
++		aa_free_str_table(&pdb->trans);
++		kfree(pdb);
+ 	}
+ }
+ 
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 47ec097d6741f..5e578ef0ddffb 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -478,6 +478,8 @@ static bool unpack_trans_table(struct aa_ext *e, struct aa_str_table *strs)
+ 		if (!table)
+ 			goto fail;
+ 
++		strs->table = table;
++		strs->size = size;
+ 		for (i = 0; i < size; i++) {
+ 			char *str;
+ 			int c, j, pos, size2 = aa_unpack_strdup(e, &str, NULL);
+@@ -520,14 +522,11 @@ static bool unpack_trans_table(struct aa_ext *e, struct aa_str_table *strs)
+ 			goto fail;
+ 		if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
+ 			goto fail;
+-
+-		strs->table = table;
+-		strs->size = size;
+ 	}
+ 	return true;
+ 
+ fail:
+-	kfree_sensitive(table);
++	aa_free_str_table(strs);
+ 	e->pos = saved_pos;
+ 	return false;
+ }
+@@ -833,6 +832,10 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 
+ 	tmpname = aa_splitn_fqname(name, strlen(name), &tmpns, &ns_len);
+ 	if (tmpns) {
++		if (!tmpname) {
++			info = "empty profile name";
++			goto fail;
++		}
+ 		*ns_name = kstrndup(tmpns, ns_len, GFP_KERNEL);
+ 		if (!*ns_name) {
+ 			info = "out of memory";
+@@ -1022,8 +1025,10 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 		}
+ 	} else if (rules->policy->dfa &&
+ 		   rules->policy->start[AA_CLASS_FILE]) {
++		aa_put_pdb(rules->file);
+ 		rules->file = aa_get_pdb(rules->policy);
+ 	} else {
++		aa_put_pdb(rules->file);
+ 		rules->file = aa_get_pdb(nullpdb);
+ 	}
+ 	error = -EPROTO;
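
Moving the strs->table/strs->size assignments ahead of the fill loop lets the
failure path hand everything to aa_free_str_table() (which, per the lib.c hunk
above, now also resets size) instead of leaking the strings behind a bare
kfree_sensitive(table). A minimal userspace sketch of the publish-early idiom,
with hypothetical names:

#include <stdlib.h>
#include <string.h>

struct str_table { char **table; int size; };

static void free_str_table(struct str_table *t)
{
	if (!t->table)
		return;
	for (int i = 0; i < t->size; i++)
		free(t->table[i]);	/* unfilled slots are NULL */
	free(t->table);
	t->table = NULL;
	t->size = 0;
}

static int unpack_table(struct str_table *t, const char **src, int n)
{
	t->table = calloc(n, sizeof(*t->table));
	if (!t->table)
		return -1;
	t->size = n;			/* publish before filling */
	for (int i = 0; i < n; i++) {
		t->table[i] = strdup(src[i]);
		if (!t->table[i]) {
			free_str_table(t);	/* frees the partial table */
			return -1;
		}
	}
	return 0;
}

int main(void)
{
	struct str_table t = { 0 };
	const char *src[] = { "a", "b" };

	if (!unpack_table(&t, src, 2))
		free_str_table(&t);
	return 0;
}
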
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 340b2bbbb2dd3..0fb890bd72cb6 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -4661,6 +4661,13 @@ static int selinux_socket_bind(struct socket *sock, struct sockaddr *address, in
+ 				return -EINVAL;
+ 			addr4 = (struct sockaddr_in *)address;
+ 			if (family_sa == AF_UNSPEC) {
++				if (family == PF_INET6) {
++					/* Length check from inet6_bind_sk() */
++					if (addrlen < SIN6_LEN_RFC2133)
++						return -EINVAL;
++					/* Family check from __inet6_bind() */
++					goto err_af;
++				}
+ 				/* see __inet_bind(), we only want to allow
+ 				 * AF_UNSPEC if the address is INADDR_ANY
+ 				 */
+diff --git a/sound/drivers/aloop.c b/sound/drivers/aloop.c
+index e87dc67f33c69..1c65e0a3b13ce 100644
+--- a/sound/drivers/aloop.c
++++ b/sound/drivers/aloop.c
+@@ -322,6 +322,17 @@ static int loopback_snd_timer_close_cable(struct loopback_pcm *dpcm)
+ 	return 0;
+ }
+ 
++static bool is_access_interleaved(snd_pcm_access_t access)
++{
++	switch (access) {
++	case SNDRV_PCM_ACCESS_MMAP_INTERLEAVED:
++	case SNDRV_PCM_ACCESS_RW_INTERLEAVED:
++		return true;
++	default:
++		return false;
++	}
++}
++
+ static int loopback_check_format(struct loopback_cable *cable, int stream)
+ {
+ 	struct snd_pcm_runtime *runtime, *cruntime;
+@@ -341,7 +352,8 @@ static int loopback_check_format(struct loopback_cable *cable, int stream)
+ 	check = runtime->format != cruntime->format ||
+ 		runtime->rate != cruntime->rate ||
+ 		runtime->channels != cruntime->channels ||
+-		runtime->access != cruntime->access;
++		is_access_interleaved(runtime->access) !=
++		is_access_interleaved(cruntime->access);
+ 	if (!check)
+ 		return 0;
+ 	if (stream == SNDRV_PCM_STREAM_CAPTURE) {
+@@ -369,7 +381,8 @@ static int loopback_check_format(struct loopback_cable *cable, int stream)
+ 							&setup->channels_id);
+ 			setup->channels = runtime->channels;
+ 		}
+-		if (setup->access != runtime->access) {
++		if (is_access_interleaved(setup->access) !=
++		    is_access_interleaved(runtime->access)) {
+ 			snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE,
+ 							&setup->access_id);
+ 			setup->access = runtime->access;
+@@ -584,8 +597,7 @@ static void copy_play_buf(struct loopback_pcm *play,
+ 			size = play->pcm_buffer_size - src_off;
+ 		if (dst_off + size > capt->pcm_buffer_size)
+ 			size = capt->pcm_buffer_size - dst_off;
+-		if (runtime->access == SNDRV_PCM_ACCESS_RW_NONINTERLEAVED ||
+-		    runtime->access == SNDRV_PCM_ACCESS_MMAP_NONINTERLEAVED)
++		if (!is_access_interleaved(runtime->access))
+ 			copy_play_buf_part_n(play, capt, size, src_off, dst_off);
+ 		else
+ 			memcpy(dst + dst_off, src + src_off, size);
+@@ -1544,8 +1556,7 @@ static int loopback_access_get(struct snd_kcontrol *kcontrol,
+ 	mutex_lock(&loopback->cable_lock);
+ 	access = loopback->setup[kcontrol->id.subdevice][kcontrol->id.device].access;
+ 
+-	ucontrol->value.enumerated.item[0] = access == SNDRV_PCM_ACCESS_RW_NONINTERLEAVED ||
+-					     access == SNDRV_PCM_ACCESS_MMAP_NONINTERLEAVED;
++	ucontrol->value.enumerated.item[0] = !is_access_interleaved(access);
+ 
+ 	mutex_unlock(&loopback->cable_lock);
+ 	return 0;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 78cee53fee02a..038db8902c9ed 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2301,6 +2301,7 @@ static int generic_hdmi_build_pcms(struct hda_codec *codec)
+ 	codec_dbg(codec, "hdmi: pcm_num set to %d\n", pcm_num);
+ 
+ 	for (idx = 0; idx < pcm_num; idx++) {
++		struct hdmi_spec_per_cvt *per_cvt;
+ 		struct hda_pcm *info;
+ 		struct hda_pcm_stream *pstr;
+ 
+@@ -2316,6 +2317,11 @@ static int generic_hdmi_build_pcms(struct hda_codec *codec)
+ 		pstr = &info->stream[SNDRV_PCM_STREAM_PLAYBACK];
+ 		pstr->substreams = 1;
+ 		pstr->ops = generic_ops;
++
++		per_cvt = get_cvt(spec, 0);
++		pstr->channels_min = per_cvt->channels_min;
++		pstr->channels_max = per_cvt->channels_max;
++
+ 		/* pcm number is less than pcm_rec array size */
+ 		if (spec->pcm_used >= ARRAY_SIZE(spec->pcm_rec))
+ 			break;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 04a3dffcb4127..084857723ce6e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9848,6 +9848,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x87fe, "HP Laptop 15s-fq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+@@ -9942,6 +9943,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c71, "HP EliteBook 845 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c72, "HP EliteBook 865 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca7, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+@@ -10218,6 +10220,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
++	SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+ 	SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+diff --git a/sound/pci/oxygen/oxygen_mixer.c b/sound/pci/oxygen/oxygen_mixer.c
+index 46705ec77b481..eb3aca16359c5 100644
+--- a/sound/pci/oxygen/oxygen_mixer.c
++++ b/sound/pci/oxygen/oxygen_mixer.c
+@@ -718,7 +718,7 @@ static int ac97_fp_rec_volume_put(struct snd_kcontrol *ctl,
+ 	oldreg = oxygen_read_ac97(chip, 1, AC97_REC_GAIN);
+ 	newreg = oldreg & ~0x0707;
+ 	newreg = newreg | (value->value.integer.value[0] & 7);
+-	newreg = newreg | ((value->value.integer.value[0] & 7) << 8);
++	newreg = newreg | ((value->value.integer.value[1] & 7) << 8);
+ 	change = newreg != oldreg;
+ 	if (change)
+ 		oxygen_write_ac97(chip, 1, AC97_REC_GAIN, newreg);
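
The one-character fix above matters because AC97_REC_GAIN packs the
right-channel gain into bits 8..10; the old code built both fields from
value[0], so the right channel silently mirrored the left. A worked example
with assumed channel values:

#include <stdio.h>

int main(void)
{
	unsigned int left = 3, right = 5, oldreg = 0xabcd;
	unsigned int base = oldreg & ~0x0707;	/* 0xa8c8 */

	/* buggy: right-channel field copied from the left value */
	unsigned int buggy = base | (left & 7) | ((left & 7) << 8);
	/* fixed: right-channel field taken from value[1] */
	unsigned int fixed = base | (left & 7) | ((right & 7) << 8);

	printf("buggy=0x%04x fixed=0x%04x\n", buggy, fixed);
	/* buggy=0xabcb fixed=0xadcb */
	return 0;
}
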
+diff --git a/sound/soc/amd/vangogh/acp5x-mach.c b/sound/soc/amd/vangogh/acp5x-mach.c
+index de4b478a983d0..7878e061ecb98 100644
+--- a/sound/soc/amd/vangogh/acp5x-mach.c
++++ b/sound/soc/amd/vangogh/acp5x-mach.c
+@@ -439,7 +439,15 @@ static const struct dmi_system_id acp5x_vg_quirk_table[] = {
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Valve"),
+ 			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Jupiter"),
+-		}
++		},
++		.driver_data = (void *)&acp5x_8821_35l41_card,
++	},
++	{
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Valve"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Galileo"),
++		},
++		.driver_data = (void *)&acp5x_8821_98388_card,
+ 	},
+ 	{}
+ };
+@@ -452,25 +460,15 @@ static int acp5x_probe(struct platform_device *pdev)
+ 	struct snd_soc_card *card;
+ 	int ret;
+ 
+-	card = (struct snd_soc_card *)device_get_match_data(dev);
+-	if (!card) {
+-		/*
+-		 * This is normally the result of directly probing the driver
+-		 * in pci-acp5x through platform_device_register_full(), which
+-		 * is necessary for the CS35L41 variant, as it doesn't support
+-		 * ACPI probing and relies on DMI quirks.
+-		 */
+-		dmi_id = dmi_first_match(acp5x_vg_quirk_table);
+-		if (!dmi_id)
+-			return -ENODEV;
+-
+-		card = &acp5x_8821_35l41_card;
+-	}
++	dmi_id = dmi_first_match(acp5x_vg_quirk_table);
++	if (!dmi_id || !dmi_id->driver_data)
++		return -ENODEV;
+ 
+ 	machine = devm_kzalloc(dev, sizeof(*machine), GFP_KERNEL);
+ 	if (!machine)
+ 		return -ENOMEM;
+ 
++	card = dmi_id->driver_data;
+ 	card->dev = dev;
+ 	platform_set_drvdata(pdev, card);
+ 	snd_soc_card_set_drvdata(card, machine);
+@@ -482,17 +480,10 @@ static int acp5x_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static const struct acpi_device_id acp5x_acpi_match[] = {
+-	{ "AMDI8821", (kernel_ulong_t)&acp5x_8821_98388_card },
+-	{},
+-};
+-MODULE_DEVICE_TABLE(acpi, acp5x_acpi_match);
+-
+ static struct platform_driver acp5x_mach_driver = {
+ 	.driver = {
+ 		.name = DRV_NAME,
+ 		.pm = &snd_soc_pm_ops,
+-		.acpi_match_table = acp5x_acpi_match,
+ 	},
+ 	.probe = acp5x_probe,
+ };
+diff --git a/sound/soc/codecs/cs35l33.c b/sound/soc/codecs/cs35l33.c
+index 4010a2d33a33c..a19a2bafb37c1 100644
+--- a/sound/soc/codecs/cs35l33.c
++++ b/sound/soc/codecs/cs35l33.c
+@@ -22,13 +22,11 @@
+ #include <sound/soc-dapm.h>
+ #include <sound/initval.h>
+ #include <sound/tlv.h>
+-#include <linux/gpio.h>
+ #include <linux/gpio/consumer.h>
+ #include <sound/cs35l33.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/regulator/machine.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of.h>
+ 
+ #include "cs35l33.h"
+@@ -1165,7 +1163,7 @@ static int cs35l33_i2c_probe(struct i2c_client *i2c_client)
+ 
+ 	/* We could issue !RST or skip it based on AMP topology */
+ 	cs35l33->reset_gpio = devm_gpiod_get_optional(&i2c_client->dev,
+-			"reset-gpios", GPIOD_OUT_HIGH);
++			"reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(cs35l33->reset_gpio)) {
+ 		dev_err(&i2c_client->dev, "%s ERROR: Can't get reset GPIO\n",
+ 			__func__);
+diff --git a/sound/soc/codecs/cs35l34.c b/sound/soc/codecs/cs35l34.c
+index e5871736fa29f..cca59de66b73f 100644
+--- a/sound/soc/codecs/cs35l34.c
++++ b/sound/soc/codecs/cs35l34.c
+@@ -20,14 +20,12 @@
+ #include <linux/regulator/machine.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/of.h>
+-#include <linux/of_gpio.h>
+ #include <linux/of_irq.h>
+ #include <sound/core.h>
+ #include <sound/pcm.h>
+ #include <sound/pcm_params.h>
+ #include <sound/soc.h>
+ #include <sound/soc-dapm.h>
+-#include <linux/gpio.h>
+ #include <linux/gpio/consumer.h>
+ #include <sound/initval.h>
+ #include <sound/tlv.h>
+@@ -1061,7 +1059,7 @@ static int cs35l34_i2c_probe(struct i2c_client *i2c_client)
+ 		dev_err(&i2c_client->dev, "Failed to request IRQ: %d\n", ret);
+ 
+ 	cs35l34->reset_gpio = devm_gpiod_get_optional(&i2c_client->dev,
+-				"reset-gpios", GPIOD_OUT_LOW);
++				"reset", GPIOD_OUT_LOW);
+ 	if (IS_ERR(cs35l34->reset_gpio)) {
+ 		ret = PTR_ERR(cs35l34->reset_gpio);
+ 		goto err_regulator;
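
Both cs35l33 and cs35l34 passed the full property name as the gpiod con_id.
gpiolib composes the firmware lookup key from the con_id plus a suffix such as
"-gpios", so con_id "reset-gpios" made the core search for a
"reset-gpios-gpios" property that never matches; "reset" is the correct
con_id. A trivial demonstration of the name composition (the suffix list is an
assumption about how gpiolib builds the key):

#include <stdio.h>

int main(void)
{
	const char *suffix = "-gpios";	/* one of gpiolib's suffixes */

	printf("con_id \"reset\"       -> \"reset%s\"\n", suffix);
	printf("con_id \"reset-gpios\" -> \"reset-gpios%s\"\n", suffix);
	return 0;
}
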
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index a0d01d71d8b56..edcb85bd8ea7f 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3854,14 +3854,6 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ 		},
+ 		.driver_data = (void *)&ecs_ef20_platform_data,
+ 	},
+-	{
+-		.ident = "EF20EA",
+-		.callback = cht_rt5645_ef20_quirk_cb,
+-		.matches = {
+-			DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"),
+-		},
+-		.driver_data = (void *)&ecs_ef20_platform_data,
+-	},
+ 	{ }
+ };
+ 
+diff --git a/sound/soc/codecs/tas2781-fmwlib.c b/sound/soc/codecs/tas2781-fmwlib.c
+index 5c09e441a9368..85e14ff61769a 100644
+--- a/sound/soc/codecs/tas2781-fmwlib.c
++++ b/sound/soc/codecs/tas2781-fmwlib.c
+@@ -1982,6 +1982,7 @@ static int tasdevice_dspfw_ready(const struct firmware *fmw,
+ 	case 0x301:
+ 	case 0x302:
+ 	case 0x502:
++	case 0x503:
+ 		tas_priv->fw_parse_variable_header =
+ 			fw_parse_variable_header_kernel;
+ 		tas_priv->fw_parse_program_data =
+diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
+index be342ee03fb9c..d14061e88e58a 100644
+--- a/sound/soc/fsl/Kconfig
++++ b/sound/soc/fsl/Kconfig
+@@ -121,6 +121,7 @@ config SND_SOC_FSL_UTILS
+ config SND_SOC_FSL_RPMSG
+ 	tristate "NXP Audio Base On RPMSG support"
+ 	depends on COMMON_CLK
++	depends on OF && I2C
+ 	depends on RPMSG
+ 	depends on SND_IMX_SOC || SND_IMX_SOC = n
+ 	select SND_SOC_IMX_RPMSG if SND_IMX_SOC != n
+diff --git a/sound/soc/intel/boards/sof_sdw_rt_sdca_jack_common.c b/sound/soc/intel/boards/sof_sdw_rt_sdca_jack_common.c
+index 65bbcee88d6d2..49a513399dc41 100644
+--- a/sound/soc/intel/boards/sof_sdw_rt_sdca_jack_common.c
++++ b/sound/soc/intel/boards/sof_sdw_rt_sdca_jack_common.c
+@@ -168,6 +168,7 @@ int sof_sdw_rt_sdca_jack_exit(struct snd_soc_card *card, struct snd_soc_dai_link
+ 
+ 	device_remove_software_node(ctx->headset_codec_dev);
+ 	put_device(ctx->headset_codec_dev);
++	ctx->headset_codec_dev = NULL;
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/intel/common/soc-acpi-intel-glk-match.c b/sound/soc/intel/common/soc-acpi-intel-glk-match.c
+index 387e731008841..8911c90bbaf68 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-glk-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-glk-match.c
+@@ -19,6 +19,11 @@ static const struct snd_soc_acpi_codecs glk_codecs = {
+ 	.codecs = {"MX98357A"}
+ };
+ 
++static const struct snd_soc_acpi_codecs glk_rt5682_rt5682s_hp = {
++	.num_codecs = 2,
++	.codecs = {"10EC5682", "RTL5682"},
++};
++
+ struct snd_soc_acpi_mach snd_soc_acpi_intel_glk_machines[] = {
+ 	{
+ 		.id = "INT343A",
+@@ -35,20 +40,13 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_glk_machines[] = {
+ 		.sof_tplg_filename = "sof-glk-da7219.tplg",
+ 	},
+ 	{
+-		.id = "10EC5682",
++		.comp_ids = &glk_rt5682_rt5682s_hp,
+ 		.drv_name = "glk_rt5682_mx98357a",
+ 		.fw_filename = "intel/dsp_fw_glk.bin",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &glk_codecs,
+ 		.sof_tplg_filename = "sof-glk-rt5682.tplg",
+ 	},
+-	{
+-		.id = "RTL5682",
+-		.drv_name = "glk_rt5682_max98357a",
+-		.machine_quirk = snd_soc_acpi_codec_list,
+-		.quirk_data = &glk_codecs,
+-		.sof_tplg_filename = "sof-glk-rt5682.tplg",
+-	},
+ 	{
+ 		.id = "10134242",
+ 		.drv_name = "glk_cs4242_mx98357a",
+diff --git a/sound/soc/mediatek/common/mtk-dsp-sof-common.c b/sound/soc/mediatek/common/mtk-dsp-sof-common.c
+index f3894010f6563..7ec8965a70c06 100644
+--- a/sound/soc/mediatek/common/mtk-dsp-sof-common.c
++++ b/sound/soc/mediatek/common/mtk-dsp-sof-common.c
+@@ -24,7 +24,7 @@ int mtk_sof_dai_link_fixup(struct snd_soc_pcm_runtime *rtd,
+ 		struct snd_soc_dai_link *sof_dai_link = NULL;
+ 		const struct sof_conn_stream *conn = &sof_priv->conn_streams[i];
+ 
+-		if (strcmp(rtd->dai_link->name, conn->normal_link))
++		if (conn->normal_link && strcmp(rtd->dai_link->name, conn->normal_link))
+ 			continue;
+ 
+ 		for_each_card_rtds(card, runtime) {
+diff --git a/sound/soc/sof/intel/hda.h b/sound/soc/sof/intel/hda.h
+index d628d6a3a7e5a..1592e27bc14da 100644
+--- a/sound/soc/sof/intel/hda.h
++++ b/sound/soc/sof/intel/hda.h
+@@ -882,6 +882,7 @@ extern const struct sof_intel_dsp_desc ehl_chip_info;
+ extern const struct sof_intel_dsp_desc jsl_chip_info;
+ extern const struct sof_intel_dsp_desc adls_chip_info;
+ extern const struct sof_intel_dsp_desc mtl_chip_info;
++extern const struct sof_intel_dsp_desc arl_s_chip_info;
+ extern const struct sof_intel_dsp_desc lnl_chip_info;
+ 
+ /* Probes support */
+diff --git a/sound/soc/sof/intel/mtl.c b/sound/soc/sof/intel/mtl.c
+index 254dbbeac1d07..7946110e7adf5 100644
+--- a/sound/soc/sof/intel/mtl.c
++++ b/sound/soc/sof/intel/mtl.c
+@@ -746,3 +746,31 @@ const struct sof_intel_dsp_desc mtl_chip_info = {
+ 	.hw_ip_version = SOF_INTEL_ACE_1_0,
+ };
+ EXPORT_SYMBOL_NS(mtl_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON);
++
++const struct sof_intel_dsp_desc arl_s_chip_info = {
++	.cores_num = 2,
++	.init_core_mask = BIT(0),
++	.host_managed_cores_mask = BIT(0),
++	.ipc_req = MTL_DSP_REG_HFIPCXIDR,
++	.ipc_req_mask = MTL_DSP_REG_HFIPCXIDR_BUSY,
++	.ipc_ack = MTL_DSP_REG_HFIPCXIDA,
++	.ipc_ack_mask = MTL_DSP_REG_HFIPCXIDA_DONE,
++	.ipc_ctl = MTL_DSP_REG_HFIPCXCTL,
++	.rom_status_reg = MTL_DSP_ROM_STS,
++	.rom_init_timeout	= 300,
++	.ssp_count = MTL_SSP_COUNT,
++	.ssp_base_offset = CNL_SSP_BASE_OFFSET,
++	.sdw_shim_base = SDW_SHIM_BASE_ACE,
++	.sdw_alh_base = SDW_ALH_BASE_ACE,
++	.d0i3_offset = MTL_HDA_VS_D0I3C,
++	.read_sdw_lcount =  hda_sdw_check_lcount_common,
++	.enable_sdw_irq = mtl_enable_sdw_irq,
++	.check_sdw_irq = mtl_dsp_check_sdw_irq,
++	.check_sdw_wakeen_irq = hda_sdw_check_wakeen_irq_common,
++	.check_ipc_irq = mtl_dsp_check_ipc_irq,
++	.cl_init = mtl_dsp_cl_init,
++	.power_down_dsp = mtl_power_down_dsp,
++	.disable_interrupts = mtl_dsp_disable_interrupts,
++	.hw_ip_version = SOF_INTEL_ACE_1_0,
++};
++EXPORT_SYMBOL_NS(arl_s_chip_info, SND_SOC_SOF_INTEL_HDA_COMMON);
+diff --git a/sound/soc/sof/intel/pci-mtl.c b/sound/soc/sof/intel/pci-mtl.c
+index 0f378f45486de..60d5e73cdad26 100644
+--- a/sound/soc/sof/intel/pci-mtl.c
++++ b/sound/soc/sof/intel/pci-mtl.c
+@@ -50,7 +50,7 @@ static const struct sof_dev_desc mtl_desc = {
+ 	.ops_free = hda_ops_free,
+ };
+ 
+-static const struct sof_dev_desc arl_desc = {
++static const struct sof_dev_desc arl_s_desc = {
+ 	.use_acpi_target_states = true,
+ 	.machines               = snd_soc_acpi_intel_arl_machines,
+ 	.alt_machines           = snd_soc_acpi_intel_arl_sdw_machines,
+@@ -58,21 +58,21 @@ static const struct sof_dev_desc arl_desc = {
+ 	.resindex_pcicfg_base   = -1,
+ 	.resindex_imr_base      = -1,
+ 	.irqindex_host_ipc      = -1,
+-	.chip_info = &mtl_chip_info,
++	.chip_info = &arl_s_chip_info,
+ 	.ipc_supported_mask     = BIT(SOF_IPC_TYPE_4),
+ 	.ipc_default            = SOF_IPC_TYPE_4,
+ 	.dspless_mode_supported = true,         /* Only supported for HDaudio */
+ 	.default_fw_path = {
+-		[SOF_IPC_TYPE_4] = "intel/sof-ipc4/arl",
++		[SOF_IPC_TYPE_4] = "intel/sof-ipc4/arl-s",
+ 	},
+ 	.default_lib_path = {
+-		[SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/arl",
++		[SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/arl-s",
+ 	},
+ 	.default_tplg_path = {
+ 		[SOF_IPC_TYPE_4] = "intel/sof-ace-tplg",
+ 	},
+ 	.default_fw_filename = {
+-		[SOF_IPC_TYPE_4] = "sof-arl.ri",
++		[SOF_IPC_TYPE_4] = "sof-arl-s.ri",
+ 	},
+ 	.nocodec_tplg_filename = "sof-arl-nocodec.tplg",
+ 	.ops = &sof_mtl_ops,
+@@ -83,7 +83,7 @@ static const struct sof_dev_desc arl_desc = {
+ /* PCI IDs */
+ static const struct pci_device_id sof_pci_ids[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, HDA_MTL, &mtl_desc) },
+-	{ PCI_DEVICE_DATA(INTEL, HDA_ARL_S, &arl_desc) },
++	{ PCI_DEVICE_DATA(INTEL, HDA_ARL_S, &arl_s_desc) },
+ 	{ 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, sof_pci_ids);
+diff --git a/sound/soc/sof/ipc4-loader.c b/sound/soc/sof/ipc4-loader.c
+index eaa04762eb112..4d37e89b592a9 100644
+--- a/sound/soc/sof/ipc4-loader.c
++++ b/sound/soc/sof/ipc4-loader.c
+@@ -479,13 +479,10 @@ void sof_ipc4_update_cpc_from_manifest(struct snd_sof_dev *sdev,
+ 		msg = "No CPC match in the firmware file's manifest";
+ 
+ no_cpc:
+-	dev_warn(sdev->dev, "%s (UUID: %pUL): %s (ibs/obs: %u/%u)\n",
+-		 fw_module->man4_module_entry.name,
+-		 &fw_module->man4_module_entry.uuid, msg, basecfg->ibs,
+-		 basecfg->obs);
+-	dev_warn_once(sdev->dev, "Please try to update the firmware.\n");
+-	dev_warn_once(sdev->dev, "If the issue persists, file a bug at\n");
+-	dev_warn_once(sdev->dev, "https://github.com/thesofproject/sof/issues/\n");
++	dev_dbg(sdev->dev, "%s (UUID: %pUL): %s (ibs/obs: %u/%u)\n",
++		fw_module->man4_module_entry.name,
++		&fw_module->man4_module_entry.uuid, msg, basecfg->ibs,
++		basecfg->obs);
+ }
+ 
+ const struct sof_ipc_fw_loader_ops ipc4_loader_ops = {
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 37ec671a2d766..7133ec13322b3 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1134,7 +1134,7 @@ static void sof_disconnect_dai_widget(struct snd_soc_component *scomp,
+ 	list_for_each_entry(rtd, &card->rtd_list, list) {
+ 		/* does stream match DAI link ? */
+ 		if (!rtd->dai_link->stream_name ||
+-		    strcmp(sname, rtd->dai_link->stream_name))
++		    !strstr(rtd->dai_link->stream_name, sname))
+ 			continue;
+ 
+ 		for_each_rtd_cpu_dais(rtd, i, cpu_dai)
+diff --git a/sound/usb/mixer_scarlett2.c b/sound/usb/mixer_scarlett2.c
+index 33a3d1161885b..3b7fcd0907e67 100644
+--- a/sound/usb/mixer_scarlett2.c
++++ b/sound/usb/mixer_scarlett2.c
+@@ -1524,9 +1524,11 @@ static void scarlett2_config_save(struct usb_mixer_interface *mixer)
+ {
+ 	__le32 req = cpu_to_le32(SCARLETT2_USB_CONFIG_SAVE);
+ 
+-	scarlett2_usb(mixer, SCARLETT2_USB_DATA_CMD,
+-		      &req, sizeof(u32),
+-		      NULL, 0);
++	int err = scarlett2_usb(mixer, SCARLETT2_USB_DATA_CMD,
++				&req, sizeof(u32),
++				NULL, 0);
++	if (err < 0)
++		usb_audio_err(mixer->chip, "config save failed: %d\n", err);
+ }
+ 
+ /* Delayed work to save config */
+@@ -1575,7 +1577,10 @@ static int scarlett2_usb_set_config(
+ 		size = 1;
+ 		offset = config_item->offset;
+ 
+-		scarlett2_usb_get(mixer, offset, &tmp, 1);
++		err = scarlett2_usb_get(mixer, offset, &tmp, 1);
++		if (err < 0)
++			return err;
++
+ 		if (value)
+ 			tmp |= (1 << index);
+ 		else
+@@ -2095,14 +2100,20 @@ static int scarlett2_sync_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->sync_updated)
+-		scarlett2_update_sync(mixer);
++
++	if (private->sync_updated) {
++		err = scarlett2_update_sync(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.enumerated.item[0] = private->sync;
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static const struct snd_kcontrol_new scarlett2_sync_ctl = {
+@@ -2185,14 +2196,20 @@ static int scarlett2_master_volume_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->vol_updated)
+-		scarlett2_update_volumes(mixer);
+-	mutex_unlock(&private->data_mutex);
+ 
++	if (private->vol_updated) {
++		err = scarlett2_update_volumes(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.integer.value[0] = private->master_vol;
+-	return 0;
++
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int line_out_remap(struct scarlett2_data *private, int index)
+@@ -2218,14 +2235,20 @@ static int scarlett2_volume_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
+ 	int index = line_out_remap(private, elem->control);
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->vol_updated)
+-		scarlett2_update_volumes(mixer);
+-	mutex_unlock(&private->data_mutex);
+ 
++	if (private->vol_updated) {
++		err = scarlett2_update_volumes(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.integer.value[0] = private->vol[index];
+-	return 0;
++
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_volume_ctl_put(struct snd_kcontrol *kctl,
+@@ -2292,14 +2315,20 @@ static int scarlett2_mute_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
+ 	int index = line_out_remap(private, elem->control);
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->vol_updated)
+-		scarlett2_update_volumes(mixer);
+-	mutex_unlock(&private->data_mutex);
+ 
++	if (private->vol_updated) {
++		err = scarlett2_update_volumes(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.integer.value[0] = private->mute_switch[index];
+-	return 0;
++
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_mute_ctl_put(struct snd_kcontrol *kctl,
+@@ -2545,14 +2574,20 @@ static int scarlett2_level_enum_ctl_get(struct snd_kcontrol *kctl,
+ 	const struct scarlett2_device_info *info = private->info;
+ 
+ 	int index = elem->control + info->level_input_first;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->input_other_updated)
+-		scarlett2_update_input_other(mixer);
++
++	if (private->input_other_updated) {
++		err = scarlett2_update_input_other(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.enumerated.item[0] = private->level_switch[index];
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_level_enum_ctl_put(struct snd_kcontrol *kctl,
+@@ -2603,15 +2638,21 @@ static int scarlett2_pad_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->input_other_updated)
+-		scarlett2_update_input_other(mixer);
++
++	if (private->input_other_updated) {
++		err = scarlett2_update_input_other(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.integer.value[0] =
+ 		private->pad_switch[elem->control];
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_pad_ctl_put(struct snd_kcontrol *kctl,
+@@ -2661,14 +2702,20 @@ static int scarlett2_air_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->input_other_updated)
+-		scarlett2_update_input_other(mixer);
++
++	if (private->input_other_updated) {
++		err = scarlett2_update_input_other(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.integer.value[0] = private->air_switch[elem->control];
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_air_ctl_put(struct snd_kcontrol *kctl,
+@@ -2718,15 +2765,21 @@ static int scarlett2_phantom_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->input_other_updated)
+-		scarlett2_update_input_other(mixer);
++
++	if (private->input_other_updated) {
++		err = scarlett2_update_input_other(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.integer.value[0] =
+ 		private->phantom_switch[elem->control];
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_phantom_ctl_put(struct snd_kcontrol *kctl,
+@@ -2898,14 +2951,20 @@ static int scarlett2_direct_monitor_ctl_get(
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = elem->head.mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->monitor_other_updated)
+-		scarlett2_update_monitor_other(mixer);
++
++	if (private->monitor_other_updated) {
++		err = scarlett2_update_monitor_other(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.enumerated.item[0] = private->direct_monitor_switch;
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_direct_monitor_ctl_put(
+@@ -3005,14 +3064,20 @@ static int scarlett2_speaker_switch_enum_ctl_get(
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->monitor_other_updated)
+-		scarlett2_update_monitor_other(mixer);
++
++	if (private->monitor_other_updated) {
++		err = scarlett2_update_monitor_other(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.enumerated.item[0] = private->speaker_switching_switch;
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ /* when speaker switching gets enabled, switch the main/alt speakers
+@@ -3160,14 +3225,20 @@ static int scarlett2_talkback_enum_ctl_get(
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->monitor_other_updated)
+-		scarlett2_update_monitor_other(mixer);
++
++	if (private->monitor_other_updated) {
++		err = scarlett2_update_monitor_other(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.enumerated.item[0] = private->talkback_switch;
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_talkback_enum_ctl_put(
+@@ -3315,14 +3386,20 @@ static int scarlett2_dim_mute_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_elem_info *elem = kctl->private_data;
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->vol_updated)
+-		scarlett2_update_volumes(mixer);
+-	mutex_unlock(&private->data_mutex);
+ 
++	if (private->vol_updated) {
++		err = scarlett2_update_volumes(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.integer.value[0] = private->dim_mute[elem->control];
+-	return 0;
++
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_dim_mute_ctl_put(struct snd_kcontrol *kctl,
+@@ -3586,7 +3663,8 @@ static int scarlett2_mixer_ctl_put(struct snd_kcontrol *kctl,
+ 	mutex_lock(&private->data_mutex);
+ 
+ 	oval = private->mix[index];
+-	val = ucontrol->value.integer.value[0];
++	val = clamp(ucontrol->value.integer.value[0],
++		    0L, (long)SCARLETT2_MIXER_MAX_VALUE);
+ 	num_mixer_in = port_count[SCARLETT2_PORT_TYPE_MIX][SCARLETT2_PORT_OUT];
+ 	mix_num = index / num_mixer_in;
+ 
+@@ -3693,14 +3771,20 @@ static int scarlett2_mux_src_enum_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_data *private = mixer->private_data;
+ 	int index = line_out_remap(private, elem->control);
++	int err = 0;
+ 
+ 	mutex_lock(&private->data_mutex);
+-	if (private->mux_updated)
+-		scarlett2_usb_get_mux(mixer);
++
++	if (private->mux_updated) {
++		err = scarlett2_usb_get_mux(mixer);
++		if (err < 0)
++			goto unlock;
++	}
+ 	ucontrol->value.enumerated.item[0] = private->mux[index];
+-	mutex_unlock(&private->data_mutex);
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++	return err;
+ }
+ 
+ static int scarlett2_mux_src_enum_ctl_put(struct snd_kcontrol *kctl,
+@@ -3796,10 +3880,12 @@ static int scarlett2_meter_ctl_get(struct snd_kcontrol *kctl,
+ 	u16 meter_levels[SCARLETT2_MAX_METERS];
+ 	int i, err;
+ 
++	mutex_lock(&private->data_mutex);
++
+ 	err = scarlett2_usb_get_meter_levels(elem->head.mixer, elem->channels,
+ 					     meter_levels);
+ 	if (err < 0)
+-		return err;
++		goto unlock;
+ 
+ 	/* copy & translate from meter_levels[] using meter_level_map[] */
+ 	for (i = 0; i < elem->channels; i++) {
+@@ -3814,7 +3900,10 @@ static int scarlett2_meter_ctl_get(struct snd_kcontrol *kctl,
+ 		ucontrol->value.integer.value[i] = value;
+ 	}
+ 
+-	return 0;
++unlock:
++	mutex_unlock(&private->data_mutex);
++
++	return err;
+ }
+ 
+ static const struct snd_kcontrol_new scarlett2_meter_ctl = {
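
Every getter above is rewritten to the same shape: hold data_mutex across both
the staleness check and the cached read, and propagate a failed refresh
instead of returning stale values as success. A compact userspace sketch of
that shape (names hypothetical; refresh() stands in for the USB reads that can
fail):

#include <pthread.h>
#include <stdio.h>

struct cache {
	pthread_mutex_t lock;
	int updated;	/* set by the notification path when stale */
	int value;
};

/* Stand-in for a device read that can fail; returns 0 or -errno. */
static int refresh(struct cache *c)
{
	c->value = 42;
	c->updated = 0;
	return 0;
}

static int cache_get(struct cache *c, int *out)
{
	int err = 0;

	pthread_mutex_lock(&c->lock);
	if (c->updated) {
		err = refresh(c);
		if (err < 0)
			goto unlock;
	}
	*out = c->value;	/* read the cache under the same lock */
unlock:
	pthread_mutex_unlock(&c->lock);
	return err;
}

int main(void)
{
	struct cache c = { PTHREAD_MUTEX_INITIALIZER, 1, 0 };
	int v;

	if (!cache_get(&c, &v))
		printf("%d\n", v);	/* 42 */
	return 0;
}
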
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 0f6cdf52b1dab..bda948a685e50 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -4517,6 +4517,8 @@ union bpf_attr {
+  * long bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, u64 flags)
+  *	Description
+  *		Return a user or a kernel stack in bpf program provided buffer.
++ *		Note: the user stack will only be populated if the *task* is
++ *		the current task; all other tasks will return -EOPNOTSUPP.
+  *		To achieve this, the helper needs *task*, which is a valid
+  *		pointer to **struct task_struct**. To store the stacktrace, the
+  *		bpf program provides *buf* with a nonnegative *size*.
+@@ -4528,6 +4530,7 @@ union bpf_attr {
+  *
+  *		**BPF_F_USER_STACK**
+  *			Collect a user space stack instead of a kernel stack.
++ *			The *task* must be the current task.
+  *		**BPF_F_USER_BUILD_ID**
+  *			Collect buildid+offset instead of ips for user stack,
+  *			only valid if **BPF_F_USER_STACK** is also specified.
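
The documentation addition records a runtime restriction rather than new
behavior: BPF_F_USER_STACK is only honored for the calling task. A hedged
fragment of a task-iterator program exercising it (the helper and flag are
real; the skeleton is illustrative, and vmlinux.h must be generated for the
running kernel):

#include "vmlinux.h"		/* generated, e.g. via bpftool btf dump */
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

char stack[512];

SEC("iter/task")
int dump_task_stack(struct bpf_iter__task *ctx)
{
	struct task_struct *task = ctx->task;
	long ret;

	if (!task)
		return 0;
	ret = bpf_get_task_stack(task, stack, sizeof(stack),
				 BPF_F_USER_STACK);
	/* Per the text above: for any task other than the current one,
	 * expect ret == -EOPNOTSUPP. */
	return 0;
}
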
+diff --git a/tools/lib/api/io.h b/tools/lib/api/io.h
+index a77b74c5fb655..2a7fe97588131 100644
+--- a/tools/lib/api/io.h
++++ b/tools/lib/api/io.h
+@@ -12,6 +12,7 @@
+ #include <stdlib.h>
+ #include <string.h>
+ #include <unistd.h>
++#include <linux/types.h>
+ 
+ struct io {
+ 	/* File descriptor being read. */
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index a3af805a1d572..78c1049221810 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -2695,15 +2695,19 @@ int cmd_stat(int argc, const char **argv)
+ 	 */
+ 	if (metrics) {
+ 		const char *pmu = parse_events_option_args.pmu_filter ?: "all";
++		int ret = metricgroup__parse_groups(evsel_list, pmu, metrics,
++						stat_config.metric_no_group,
++						stat_config.metric_no_merge,
++						stat_config.metric_no_threshold,
++						stat_config.user_requested_cpu_list,
++						stat_config.system_wide,
++						&stat_config.metric_events);
+ 
+-		metricgroup__parse_groups(evsel_list, pmu, metrics,
+-					stat_config.metric_no_group,
+-					stat_config.metric_no_merge,
+-					stat_config.metric_no_threshold,
+-					stat_config.user_requested_cpu_list,
+-					stat_config.system_wide,
+-					&stat_config.metric_events);
+ 		zfree(&metrics);
++		if (ret) {
++			status = ret;
++			goto out;
++		}
+ 	}
+ 
+ 	if (add_default_attributes())
+diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json
+index 88b23b85e33cd..879ff21e0b177 100644
+--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json
++++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/core-imp-def.json
+@@ -110,7 +110,7 @@
+     {
+         "PublicDescription": "Flushes due to memory hazards",
+         "EventCode": "0x121",
+-        "EventName": "BPU_FLUSH_MEM_FAULT",
++        "EventName": "GPC_FLUSH_MEM_FAULT",
+         "BriefDescription": "Flushes due to memory hazards"
+     },
+     {
+diff --git a/tools/perf/pmu-events/arch/arm64/arm/cmn/sys/cmn.json b/tools/perf/pmu-events/arch/arm64/arm/cmn/sys/cmn.json
+index 428605c37d10b..5ec157c39f0df 100644
+--- a/tools/perf/pmu-events/arch/arm64/arm/cmn/sys/cmn.json
++++ b/tools/perf/pmu-events/arch/arm64/arm/cmn/sys/cmn.json
+@@ -107,7 +107,7 @@
+ 		"EventName": "hnf_qos_hh_retry",
+ 		"EventidCode": "0xe",
+ 		"NodeType": "0x5",
+-		"BriefDescription": "Counts number of times a HighHigh priority request is protocolretried at the HN‑F.",
+		"BriefDescription": "Counts number of times a HighHigh priority request is protocol retried at the HN-F.",
+ 		"Unit": "arm_cmn",
+ 		"Compat": "(434|436|43c|43a).*"
+ 	},
+diff --git a/tools/perf/pmu-events/arch/powerpc/power10/datasource.json b/tools/perf/pmu-events/arch/powerpc/power10/datasource.json
+index 6b0356f2d3013..0eeaaf1a95b86 100644
+--- a/tools/perf/pmu-events/arch/powerpc/power10/datasource.json
++++ b/tools/perf/pmu-events/arch/powerpc/power10/datasource.json
+@@ -99,6 +99,11 @@
+     "EventName": "PM_INST_FROM_L2MISS",
+     "BriefDescription": "The processor's instruction cache was reloaded from a source beyond the local core's L2 due to a demand miss."
+   },
++  {
++    "EventCode": "0x0003C0000000C040",
++    "EventName": "PM_DATA_FROM_L2MISS_DSRC",
++    "BriefDescription": "The processor's L1 data cache was reloaded from a source beyond the local core's L2 due to a demand miss."
++  },
+   {
+     "EventCode": "0x000380000010C040",
+     "EventName": "PM_INST_FROM_L2MISS_ALL",
+@@ -161,9 +166,14 @@
+   },
+   {
+     "EventCode": "0x000780000000C040",
+-    "EventName": "PM_INST_FROM_L3MISS",
++    "EventName": "PM_INST_FROM_L3MISS_DSRC",
+     "BriefDescription": "The processor's instruction cache was reloaded from beyond the local core's L3 due to a demand miss."
+   },
++  {
++    "EventCode": "0x0007C0000000C040",
++    "EventName": "PM_DATA_FROM_L3MISS_DSRC",
++    "BriefDescription": "The processor's L1 data cache was reloaded from beyond the local core's L3 due to a demand miss."
++  },
+   {
+     "EventCode": "0x000780000010C040",
+     "EventName": "PM_INST_FROM_L3MISS_ALL",
+@@ -981,7 +991,7 @@
+   },
+   {
+     "EventCode": "0x0003C0000000C142",
+-    "EventName": "PM_MRK_DATA_FROM_L2MISS",
++    "EventName": "PM_MRK_DATA_FROM_L2MISS_DSRC",
+     "BriefDescription": "The processor's L1 data cache was reloaded from a source beyond the local core's L2 due to a demand miss for a marked instruction."
+   },
+   {
+@@ -1046,12 +1056,12 @@
+   },
+   {
+     "EventCode": "0x000780000000C142",
+-    "EventName": "PM_MRK_INST_FROM_L3MISS",
++    "EventName": "PM_MRK_INST_FROM_L3MISS_DSRC",
+     "BriefDescription": "The processor's instruction cache was reloaded from beyond the local core's L3 due to a demand miss for a marked instruction."
+   },
+   {
+     "EventCode": "0x0007C0000000C142",
+-    "EventName": "PM_MRK_DATA_FROM_L3MISS",
++    "EventName": "PM_MRK_DATA_FROM_L3MISS_DSRC",
+     "BriefDescription": "The processor's L1 data cache was reloaded from beyond the local core's L3 due to a demand miss for a marked instruction."
+   },
+   {
+diff --git a/tools/perf/tests/attr/test-record-user-regs-no-sve-aarch64 b/tools/perf/tests/attr/test-record-user-regs-no-sve-aarch64
+index fbb065842880f..bed765450ca97 100644
+--- a/tools/perf/tests/attr/test-record-user-regs-no-sve-aarch64
++++ b/tools/perf/tests/attr/test-record-user-regs-no-sve-aarch64
+@@ -6,4 +6,4 @@ args    = --no-bpf-event --user-regs=vg kill >/dev/null 2>&1
+ ret     = 129
+ test_ret = true
+ arch    = aarch64
+-auxv    = auxv["AT_HWCAP"] & 0x200000 == 0
++auxv    = auxv["AT_HWCAP"] & 0x400000 == 0
+diff --git a/tools/perf/tests/attr/test-record-user-regs-sve-aarch64 b/tools/perf/tests/attr/test-record-user-regs-sve-aarch64
+index c598c803221da..a65113cd7311b 100644
+--- a/tools/perf/tests/attr/test-record-user-regs-sve-aarch64
++++ b/tools/perf/tests/attr/test-record-user-regs-sve-aarch64
+@@ -6,7 +6,7 @@ args    = --no-bpf-event --user-regs=vg kill >/dev/null 2>&1
+ ret     = 1
+ test_ret = true
+ arch    = aarch64
+-auxv    = auxv["AT_HWCAP"] & 0x200000 == 0x200000
++auxv    = auxv["AT_HWCAP"] & 0x400000 == 0x400000
+ kernel_since = 6.1
+ 
+ [event:base-record]
+diff --git a/tools/perf/tests/workloads/thloop.c b/tools/perf/tests/workloads/thloop.c
+index af05269c2eb8a..457b29f91c3ee 100644
+--- a/tools/perf/tests/workloads/thloop.c
++++ b/tools/perf/tests/workloads/thloop.c
+@@ -7,7 +7,6 @@
+ #include "../tests.h"
+ 
+ static volatile sig_atomic_t done;
+-static volatile unsigned count;
+ 
+ /* We want to check this symbol in perf report */
+ noinline void test_loop(void);
+@@ -19,8 +18,7 @@ static void sighandler(int sig __maybe_unused)
+ 
+ noinline void test_loop(void)
+ {
+-	while (!done)
+-		__atomic_fetch_add(&count, 1, __ATOMIC_RELAXED);
++	while (!done);
+ }
+ 
+ static void *thfunc(void *arg)
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index 38fcf3ba5749d..b00b5a2634c3d 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -542,9 +542,9 @@ int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env)
+ 	return evlist__add_sb_event(evlist, &attr, bpf_event__sb_cb, env);
+ }
+ 
+-void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+-				    struct perf_env *env,
+-				    FILE *fp)
++void __bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
++				      struct perf_env *env,
++				      FILE *fp)
+ {
+ 	__u32 *prog_lens = (__u32 *)(uintptr_t)(info->jited_func_lens);
+ 	__u64 *prog_addrs = (__u64 *)(uintptr_t)(info->jited_ksyms);
+@@ -560,7 +560,7 @@ void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+ 	if (info->btf_id) {
+ 		struct btf_node *node;
+ 
+-		node = perf_env__find_btf(env, info->btf_id);
++		node = __perf_env__find_btf(env, info->btf_id);
+ 		if (node)
+ 			btf = btf__new((__u8 *)(node->data),
+ 				       node->data_size);
+diff --git a/tools/perf/util/bpf-event.h b/tools/perf/util/bpf-event.h
+index 1bcbd4fb6c669..e2f0420905f59 100644
+--- a/tools/perf/util/bpf-event.h
++++ b/tools/perf/util/bpf-event.h
+@@ -33,9 +33,9 @@ struct btf_node {
+ int machine__process_bpf(struct machine *machine, union perf_event *event,
+ 			 struct perf_sample *sample);
+ int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env);
+-void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
+-				    struct perf_env *env,
+-				    FILE *fp);
++void __bpf_event__print_bpf_prog_info(struct bpf_prog_info *info,
++				      struct perf_env *env,
++				      FILE *fp);
+ #else
+ static inline int machine__process_bpf(struct machine *machine __maybe_unused,
+ 				       union perf_event *event __maybe_unused,
+@@ -50,9 +50,9 @@ static inline int evlist__add_bpf_sb_event(struct evlist *evlist __maybe_unused,
+ 	return 0;
+ }
+ 
+-static inline void bpf_event__print_bpf_prog_info(struct bpf_prog_info *info __maybe_unused,
+-						  struct perf_env *env __maybe_unused,
+-						  FILE *fp __maybe_unused)
++static inline void __bpf_event__print_bpf_prog_info(struct bpf_prog_info *info __maybe_unused,
++						    struct perf_env *env __maybe_unused,
++						    FILE *fp __maybe_unused)
+ {
+ 
+ }
+diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
+index b9fb71ab7a730..106429155c2e9 100644
+--- a/tools/perf/util/db-export.c
++++ b/tools/perf/util/db-export.c
+@@ -253,8 +253,8 @@ static struct call_path *call_path_from_sample(struct db_export *dbe,
+ 		 */
+ 		addr_location__init(&al);
+ 		al.sym = node->ms.sym;
+-		al.map = node->ms.map;
+-		al.maps = thread__maps(thread);
++		al.map = map__get(node->ms.map);
++		al.maps = maps__get(thread__maps(thread));
+ 		al.addr = node->ip;
+ 
+ 		if (al.map && !al.sym)
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index 44140b7f596a3..8da0e2c763e2b 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -22,13 +22,19 @@ struct perf_env perf_env;
+ 
+ void perf_env__insert_bpf_prog_info(struct perf_env *env,
+ 				    struct bpf_prog_info_node *info_node)
++{
++	down_write(&env->bpf_progs.lock);
++	__perf_env__insert_bpf_prog_info(env, info_node);
++	up_write(&env->bpf_progs.lock);
++}
++
++void __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node)
+ {
+ 	__u32 prog_id = info_node->info_linear->info.id;
+ 	struct bpf_prog_info_node *node;
+ 	struct rb_node *parent = NULL;
+ 	struct rb_node **p;
+ 
+-	down_write(&env->bpf_progs.lock);
+ 	p = &env->bpf_progs.infos.rb_node;
+ 
+ 	while (*p != NULL) {
+@@ -40,15 +46,13 @@ void perf_env__insert_bpf_prog_info(struct perf_env *env,
+ 			p = &(*p)->rb_right;
+ 		} else {
+ 			pr_debug("duplicated bpf prog info %u\n", prog_id);
+-			goto out;
++			return;
+ 		}
+ 	}
+ 
+ 	rb_link_node(&info_node->rb_node, parent, p);
+ 	rb_insert_color(&info_node->rb_node, &env->bpf_progs.infos);
+ 	env->bpf_progs.infos_cnt++;
+-out:
+-	up_write(&env->bpf_progs.lock);
+ }
+ 
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+@@ -77,14 +81,22 @@ out:
+ }
+ 
+ bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
++{
++	bool ret;
++
++	down_write(&env->bpf_progs.lock);
++	ret = __perf_env__insert_btf(env, btf_node);
++	up_write(&env->bpf_progs.lock);
++	return ret;
++}
++
++bool __perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
+ {
+ 	struct rb_node *parent = NULL;
+ 	__u32 btf_id = btf_node->id;
+ 	struct btf_node *node;
+ 	struct rb_node **p;
+-	bool ret = true;
+ 
+-	down_write(&env->bpf_progs.lock);
+ 	p = &env->bpf_progs.btfs.rb_node;
+ 
+ 	while (*p != NULL) {
+@@ -96,25 +108,31 @@ bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node)
+ 			p = &(*p)->rb_right;
+ 		} else {
+ 			pr_debug("duplicated btf %u\n", btf_id);
+-			ret = false;
+-			goto out;
++			return false;
+ 		}
+ 	}
+ 
+ 	rb_link_node(&btf_node->rb_node, parent, p);
+ 	rb_insert_color(&btf_node->rb_node, &env->bpf_progs.btfs);
+ 	env->bpf_progs.btfs_cnt++;
+-out:
+-	up_write(&env->bpf_progs.lock);
+-	return ret;
++	return true;
+ }
+ 
+ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id)
++{
++	struct btf_node *res;
++
++	down_read(&env->bpf_progs.lock);
++	res = __perf_env__find_btf(env, btf_id);
++	up_read(&env->bpf_progs.lock);
++	return res;
++}
++
++struct btf_node *__perf_env__find_btf(struct perf_env *env, __u32 btf_id)
+ {
+ 	struct btf_node *node = NULL;
+ 	struct rb_node *n;
+ 
+-	down_read(&env->bpf_progs.lock);
+ 	n = env->bpf_progs.btfs.rb_node;
+ 
+ 	while (n) {
+@@ -124,13 +142,9 @@ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id)
+ 		else if (btf_id > node->id)
+ 			n = n->rb_right;
+ 		else
+-			goto out;
++			return node;
+ 	}
+-	node = NULL;
+-
+-out:
+-	up_read(&env->bpf_progs.lock);
+-	return node;
++	return NULL;
+ }
+ 
+ /* purge data in bpf_progs.infos tree */
+diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
+index 4566c51f2fd95..359eff51cb85b 100644
+--- a/tools/perf/util/env.h
++++ b/tools/perf/util/env.h
+@@ -164,12 +164,16 @@ const char *perf_env__raw_arch(struct perf_env *env);
+ int perf_env__nr_cpus_avail(struct perf_env *env);
+ 
+ void perf_env__init(struct perf_env *env);
++void __perf_env__insert_bpf_prog_info(struct perf_env *env,
++				      struct bpf_prog_info_node *info_node);
+ void perf_env__insert_bpf_prog_info(struct perf_env *env,
+ 				    struct bpf_prog_info_node *info_node);
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+ 							__u32 prog_id);
+ bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node);
++bool __perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node);
+ struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id);
++struct btf_node *__perf_env__find_btf(struct perf_env *env, __u32 btf_id);
+ 
+ int perf_env__numa_node(struct perf_env *env, struct perf_cpu cpu);
+ char *perf_env__find_pmu_cap(struct perf_env *env, const char *pmu_name,
+diff --git a/tools/perf/util/genelf.c b/tools/perf/util/genelf.c
+index fefc72066c4e8..ac17a3cb59dc0 100644
+--- a/tools/perf/util/genelf.c
++++ b/tools/perf/util/genelf.c
+@@ -293,9 +293,9 @@ jit_write_elf(int fd, uint64_t load_addr, const char *sym,
+ 	 */
+ 	phdr = elf_newphdr(e, 1);
+ 	phdr[0].p_type = PT_LOAD;
+-	phdr[0].p_offset = 0;
+-	phdr[0].p_vaddr = 0;
+-	phdr[0].p_paddr = 0;
++	phdr[0].p_offset = GEN_ELF_TEXT_OFFSET;
++	phdr[0].p_vaddr = GEN_ELF_TEXT_OFFSET;
++	phdr[0].p_paddr = GEN_ELF_TEXT_OFFSET;
+ 	phdr[0].p_filesz = csize;
+ 	phdr[0].p_memsz = csize;
+ 	phdr[0].p_flags = PF_X | PF_R;
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index e86b9439ffee0..8b274ccab7a97 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -1444,7 +1444,9 @@ static int build_mem_topology(struct memory_node **nodesp, u64 *cntp)
+ 			nodes = new_nodes;
+ 			size += 4;
+ 		}
+-		ret = memory_node__read(&nodes[cnt++], idx);
++		ret = memory_node__read(&nodes[cnt], idx);
++		if (!ret)
++			cnt += 1;
+ 	}
+ out:
+ 	closedir(dir);
+@@ -1847,8 +1849,8 @@ static void print_bpf_prog_info(struct feat_fd *ff, FILE *fp)
+ 		node = rb_entry(next, struct bpf_prog_info_node, rb_node);
+ 		next = rb_next(&node->rb_node);
+ 
+-		bpf_event__print_bpf_prog_info(&node->info_linear->info,
+-					       env, fp);
++		__bpf_event__print_bpf_prog_info(&node->info_linear->info,
++						 env, fp);
+ 	}
+ 
+ 	up_read(&env->bpf_progs.lock);
+@@ -3178,7 +3180,7 @@ static int process_bpf_prog_info(struct feat_fd *ff, void *data __maybe_unused)
+ 		/* after reading from file, translate offset to address */
+ 		bpil_offs_to_addr(info_linear);
+ 		info_node->info_linear = info_linear;
+-		perf_env__insert_bpf_prog_info(env, info_node);
++		__perf_env__insert_bpf_prog_info(env, info_node);
+ 	}
+ 
+ 	up_write(&env->bpf_progs.lock);
+@@ -3225,7 +3227,7 @@ static int process_bpf_btf(struct feat_fd *ff, void *data __maybe_unused)
+ 		if (__do_read(ff, node->data, data_size))
+ 			goto out;
+ 
+-		perf_env__insert_btf(env, node);
++		__perf_env__insert_btf(env, node);
+ 		node = NULL;
+ 	}
+ 
+@@ -4369,9 +4371,10 @@ size_t perf_event__fprintf_event_update(union perf_event *event, FILE *fp)
+ 		ret += fprintf(fp, "... ");
+ 
+ 		map = cpu_map__new_data(&ev->cpus.cpus);
+-		if (map)
++		if (map) {
+ 			ret += cpu_map__fprintf(map, fp);
+-		else
++			perf_cpu_map__put(map);
++		} else
+ 			ret += fprintf(fp, "failed to get cpus\n");
+ 		break;
+ 	default:
+diff --git a/tools/perf/util/hisi-ptt.c b/tools/perf/util/hisi-ptt.c
+index 43bd1ca62d582..52d0ce302ca04 100644
+--- a/tools/perf/util/hisi-ptt.c
++++ b/tools/perf/util/hisi-ptt.c
+@@ -123,6 +123,7 @@ static int hisi_ptt_process_auxtrace_event(struct perf_session *session,
+ 	if (dump_trace)
+ 		hisi_ptt_dump_event(ptt, data, size);
+ 
++	free(data);
+ 	return 0;
+ }
+ 
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index 954b235e12e51..3a2e3687878c1 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -100,11 +100,14 @@ int perf_mem_events__parse(const char *str)
+ 	return -1;
+ }
+ 
+-static bool perf_mem_event__supported(const char *mnt, char *sysfs_name)
++static bool perf_mem_event__supported(const char *mnt, struct perf_pmu *pmu,
++				      struct perf_mem_event *e)
+ {
++	char sysfs_name[100];
+ 	char path[PATH_MAX];
+ 	struct stat st;
+ 
++	scnprintf(sysfs_name, sizeof(sysfs_name), e->sysfs_name, pmu->name);
+ 	scnprintf(path, PATH_MAX, "%s/devices/%s", mnt, sysfs_name);
+ 	return !stat(path, &st);
+ }
+@@ -120,7 +123,6 @@ int perf_mem_events__init(void)
+ 
+ 	for (j = 0; j < PERF_MEM_EVENTS__MAX; j++) {
+ 		struct perf_mem_event *e = perf_mem_events__ptr(j);
+-		char sysfs_name[100];
+ 		struct perf_pmu *pmu = NULL;
+ 
+ 		/*
+@@ -136,12 +138,12 @@ int perf_mem_events__init(void)
+ 		 * of core PMU.
+ 		 */
+ 		while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+-			scnprintf(sysfs_name, sizeof(sysfs_name), e->sysfs_name, pmu->name);
+-			e->supported |= perf_mem_event__supported(mnt, sysfs_name);
++			e->supported |= perf_mem_event__supported(mnt, pmu, e);
++			if (e->supported) {
++				found = true;
++				break;
++			}
+ 		}
+-
+-		if (e->supported)
+-			found = true;
+ 	}
+ 
+ 	return found ? 0 : -ENOENT;
+@@ -167,13 +169,10 @@ static void perf_mem_events__print_unsupport_hybrid(struct perf_mem_event *e,
+ 						    int idx)
+ {
+ 	const char *mnt = sysfs__mount();
+-	char sysfs_name[100];
+ 	struct perf_pmu *pmu = NULL;
+ 
+ 	while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+-		scnprintf(sysfs_name, sizeof(sysfs_name), e->sysfs_name,
+-			  pmu->name);
+-		if (!perf_mem_event__supported(mnt, sysfs_name)) {
++		if (!perf_mem_event__supported(mnt, pmu, e)) {
+ 			pr_err("failed: event '%s' not supported\n",
+ 			       perf_mem_events__name(idx, pmu->name));
+ 		}
+@@ -183,6 +182,7 @@ static void perf_mem_events__print_unsupport_hybrid(struct perf_mem_event *e,
+ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr,
+ 				 char **rec_tmp, int *tmp_nr)
+ {
++	const char *mnt = sysfs__mount();
+ 	int i = *argv_nr, k = 0;
+ 	struct perf_mem_event *e;
+ 
+@@ -211,6 +211,9 @@ int perf_mem_events__record_args(const char **rec_argv, int *argv_nr,
+ 			while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+ 				const char *s = perf_mem_events__name(j, pmu->name);
+ 
++				if (!perf_mem_event__supported(mnt, pmu, e))
++					continue;
++
+ 				rec_argv[i++] = "-e";
+ 				if (s) {
+ 					char *copy = strdup(s);
+diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
+index 1c5c3eeba4cfb..e31426167852a 100644
+--- a/tools/perf/util/stat-shadow.c
++++ b/tools/perf/util/stat-shadow.c
+@@ -264,7 +264,7 @@ static void print_ll_miss(struct perf_stat_config *config,
+ 	static const double color_ratios[3] = {20.0, 10.0, 5.0};
+ 
+ 	print_ratio(config, evsel, aggr_idx, misses, out, STAT_LL_CACHE, color_ratios,
+-		    "of all L1-icache accesses");
++		    "of all LL-cache accesses");
+ }
+ 
+ static void print_dtlb_miss(struct perf_stat_config *config,
+diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
+index 8554db3fc0d7c..6013335a8daea 100644
+--- a/tools/perf/util/unwind-libdw.c
++++ b/tools/perf/util/unwind-libdw.c
+@@ -46,6 +46,7 @@ static int __report_module(struct addr_location *al, u64 ip,
+ {
+ 	Dwfl_Module *mod;
+ 	struct dso *dso = NULL;
++	Dwarf_Addr base;
+ 	/*
+ 	 * Some callers will use al->sym, so we can't just use the
+ 	 * cheaper thread__find_map() here.
+@@ -58,13 +59,25 @@ static int __report_module(struct addr_location *al, u64 ip,
+ 	if (!dso)
+ 		return 0;
+ 
++	/*
++	 * The generated JIT DSO files only map the code segment without
++	 * ELF headers.  Since JIT code used to be packed in a memory
++	 * segment, calculating the base address using pgoff can land in
++	 * a different DSO's code.  So just use map->start directly to
++	 * pick the correct one.
++	 */
++	if (!strncmp(dso->long_name, "/tmp/jitted-", 12))
++		base = map__start(al->map);
++	else
++		base = map__start(al->map) - map__pgoff(al->map);
++
+ 	mod = dwfl_addrmodule(ui->dwfl, ip);
+ 	if (mod) {
+ 		Dwarf_Addr s;
+ 
+ 		dwfl_module_info(mod, NULL, &s, NULL, NULL, NULL, NULL, NULL);
+-		if (s != map__start(al->map) - map__pgoff(al->map))
+-			mod = 0;
++		if (s != base)
++			mod = NULL;
+ 	}
+ 
+ 	if (!mod) {
+@@ -72,14 +85,14 @@ static int __report_module(struct addr_location *al, u64 ip,
+ 
+ 		__symbol__join_symfs(filename, sizeof(filename), dso->long_name);
+ 		mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
+-				      map__start(al->map) - map__pgoff(al->map), false);
++				      base, false);
+ 	}
+ 	if (!mod) {
+ 		char filename[PATH_MAX];
+ 
+ 		if (dso__build_id_filename(dso, filename, sizeof(filename), false))
+ 			mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
+-					      map__start(al->map) - map__pgoff(al->map), false);
++					      base, false);
+ 	}
+ 
+ 	if (mod) {
+diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
+index c0641882fd2fd..5e5c3395a4998 100644
+--- a/tools/perf/util/unwind-libunwind-local.c
++++ b/tools/perf/util/unwind-libunwind-local.c
+@@ -327,7 +327,7 @@ static int read_unwind_spec_eh_frame(struct dso *dso, struct unwind_info *ui,
+ 
+ 	maps__for_each_entry(thread__maps(ui->thread), map_node) {
+ 		struct map *map = map_node->map;
+-		u64 start = map__start(map);
++		u64 start = map__start(map) - map__pgoff(map);
+ 
+ 		if (map__dso(map) == dso && start < base_addr)
+ 			base_addr = start;
+diff --git a/tools/testing/selftests/alsa/conf.c b/tools/testing/selftests/alsa/conf.c
+index 00925eb8d9f47..89e3656a042d7 100644
+--- a/tools/testing/selftests/alsa/conf.c
++++ b/tools/testing/selftests/alsa/conf.c
+@@ -179,7 +179,7 @@ static char *sysfs_get(const char *sysfs_root, const char *id)
+ 	close(fd);
+ 	if (len < 0)
+ 		ksft_exit_fail_msg("sysfs: unable to read value '%s': %s\n",
+-				   path, errno);
++				   path, strerror(errno));
+ 	while (len > 0 && path[len-1] == '\n')
+ 		len--;
+ 	path[len] = '\0';
+diff --git a/tools/testing/selftests/alsa/mixer-test.c b/tools/testing/selftests/alsa/mixer-test.c
+index 23df154fcdd77..df942149c6f6c 100644
+--- a/tools/testing/selftests/alsa/mixer-test.c
++++ b/tools/testing/selftests/alsa/mixer-test.c
+@@ -166,7 +166,7 @@ static void find_controls(void)
+ 		err = snd_ctl_poll_descriptors(card_data->handle,
+ 					       &card_data->pollfd, 1);
+ 		if (err != 1) {
+-			ksft_exit_fail_msg("snd_ctl_poll_descriptors() failed for %d\n",
++			ksft_exit_fail_msg("snd_ctl_poll_descriptors() failed for card %d: %d\n",
+ 				       card, err);
+ 		}
+ 
+@@ -319,7 +319,7 @@ static bool ctl_value_index_valid(struct ctl_data *ctl,
+ 		}
+ 
+ 		if (int64_val > snd_ctl_elem_info_get_max64(ctl->info)) {
+-			ksft_print_msg("%s.%d value %lld more than maximum %lld\n",
++			ksft_print_msg("%s.%d value %lld more than maximum %ld\n",
+ 				       ctl->name, index, int64_val,
+ 				       snd_ctl_elem_info_get_max(ctl->info));
+ 			return false;
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+index e3498f607b49d..2cacc8fa96977 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+@@ -334,6 +334,8 @@ static void test_task_stack(void)
+ 	do_dummy_read(skel->progs.dump_task_stack);
+ 	do_dummy_read(skel->progs.get_task_user_stacks);
+ 
++	ASSERT_EQ(skel->bss->num_user_stacks, 1, "num_user_stacks");
++
+ 	bpf_iter_task_stack__destroy(skel);
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/time_tai.c b/tools/testing/selftests/bpf/prog_tests/time_tai.c
+index a311198236661..f45af1b0ef2c4 100644
+--- a/tools/testing/selftests/bpf/prog_tests/time_tai.c
++++ b/tools/testing/selftests/bpf/prog_tests/time_tai.c
+@@ -56,7 +56,7 @@ void test_time_tai(void)
+ 	ASSERT_NEQ(ts2, 0, "tai_ts2");
+ 
+ 	/* TAI is moving forward only */
+-	ASSERT_GT(ts2, ts1, "tai_forward");
++	ASSERT_GE(ts2, ts1, "tai_forward");
+ 
+ 	/* Check for future */
+ 	ret = clock_gettime(CLOCK_TAI, &now_tai);
+diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c b/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
+index f2b8167b72a84..442f4ca39fd76 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
++++ b/tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c
+@@ -35,6 +35,8 @@ int dump_task_stack(struct bpf_iter__task *ctx)
+ 	return 0;
+ }
+ 
++int num_user_stacks = 0;
++
+ SEC("iter/task")
+ int get_task_user_stacks(struct bpf_iter__task *ctx)
+ {
+@@ -51,6 +53,9 @@ int get_task_user_stacks(struct bpf_iter__task *ctx)
+ 	if (res <= 0)
+ 		return 0;
+ 
++	/* Only one task, the current one, should succeed */
++	++num_user_stacks;
++
+ 	buf_sz += res;
+ 
+ 	/* If the verifier doesn't refine bpf_get_task_stack res, and instead
+diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c
+index c20c4e38b71c5..844d968c27d68 100644
+--- a/tools/testing/selftests/bpf/progs/iters.c
++++ b/tools/testing/selftests/bpf/progs/iters.c
+@@ -846,7 +846,7 @@ __naked int delayed_precision_mark(void)
+ 		"call %[bpf_iter_num_next];"
+ 		"if r0 == 0 goto 2f;"
+ 		"if r6 != 42 goto 3f;"
+-		"r7 = -32;"
++		"r7 = -33;"
+ 		"call %[bpf_get_prandom_u32];"
+ 		"r6 = r0;"
+ 		"goto 1b;\n"
+diff --git a/tools/testing/selftests/bpf/progs/test_global_func16.c b/tools/testing/selftests/bpf/progs/test_global_func16.c
+index e7206304632e1..e3e64bc472cda 100644
+--- a/tools/testing/selftests/bpf/progs/test_global_func16.c
++++ b/tools/testing/selftests/bpf/progs/test_global_func16.c
+@@ -13,7 +13,7 @@ __noinline int foo(int (*arr)[10])
+ }
+ 
+ SEC("cgroup_skb/ingress")
+-__failure __msg("invalid indirect read from stack")
++__success
+ int global_func16(struct __sk_buff *skb)
+ {
+ 	int array[10];
+diff --git a/tools/testing/selftests/bpf/progs/verifier_basic_stack.c b/tools/testing/selftests/bpf/progs/verifier_basic_stack.c
+index 359df865a8f3e..8d77cc5323d33 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_basic_stack.c
++++ b/tools/testing/selftests/bpf/progs/verifier_basic_stack.c
+@@ -27,8 +27,8 @@ __naked void stack_out_of_bounds(void)
+ 
+ SEC("socket")
+ __description("uninitialized stack1")
+-__failure __msg("invalid indirect read from stack")
+-__failure_unpriv
++__success __log_level(4) __msg("stack depth 8")
++__failure_unpriv __msg_unpriv("invalid indirect read from stack")
+ __naked void uninitialized_stack1(void)
+ {
+ 	asm volatile ("					\
+@@ -45,8 +45,8 @@ __naked void uninitialized_stack1(void)
+ 
+ SEC("socket")
+ __description("uninitialized stack2")
+-__failure __msg("invalid read from stack")
+-__failure_unpriv
++__success __log_level(4) __msg("stack depth 8")
++__failure_unpriv __msg_unpriv("invalid read from stack")
+ __naked void uninitialized_stack2(void)
+ {
+ 	asm volatile ("					\
+diff --git a/tools/testing/selftests/bpf/progs/verifier_int_ptr.c b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
+index b054f9c481433..589e8270de462 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
++++ b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c
+@@ -5,9 +5,10 @@
+ #include <bpf/bpf_helpers.h>
+ #include "bpf_misc.h"
+ 
+-SEC("cgroup/sysctl")
++SEC("socket")
+ __description("ARG_PTR_TO_LONG uninitialized")
+-__failure __msg("invalid indirect read from stack R4 off -16+0 size 8")
++__success
++__failure_unpriv __msg_unpriv("invalid indirect read from stack R4 off -16+0 size 8")
+ __naked void arg_ptr_to_long_uninitialized(void)
+ {
+ 	asm volatile ("					\
+diff --git a/tools/testing/selftests/bpf/progs/verifier_raw_stack.c b/tools/testing/selftests/bpf/progs/verifier_raw_stack.c
+index efbfc3a4ad6a9..f67390224a9cf 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_raw_stack.c
++++ b/tools/testing/selftests/bpf/progs/verifier_raw_stack.c
+@@ -5,9 +5,10 @@
+ #include <bpf/bpf_helpers.h>
+ #include "bpf_misc.h"
+ 
+-SEC("tc")
++SEC("socket")
+ __description("raw_stack: no skb_load_bytes")
+-__failure __msg("invalid read from stack R6 off=-8 size=8")
++__success
++__failure_unpriv __msg_unpriv("invalid read from stack R6 off=-8 size=8")
+ __naked void stack_no_skb_load_bytes(void)
+ {
+ 	asm volatile ("					\
+diff --git a/tools/testing/selftests/bpf/progs/verifier_var_off.c b/tools/testing/selftests/bpf/progs/verifier_var_off.c
+index 83a90afba7857..d1f23c1a7c5b4 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_var_off.c
++++ b/tools/testing/selftests/bpf/progs/verifier_var_off.c
+@@ -59,9 +59,10 @@ __naked void stack_read_priv_vs_unpriv(void)
+ "	::: __clobber_all);
+ }
+ 
+-SEC("lwt_in")
++SEC("cgroup/skb")
+ __description("variable-offset stack read, uninitialized")
+-__failure __msg("invalid variable-offset read from stack R2")
++__success
++__failure_unpriv __msg_unpriv("R2 variable stack access prohibited for !root")
+ __naked void variable_offset_stack_read_uninitialized(void)
+ {
+ 	asm volatile ("					\
+@@ -83,12 +84,55 @@ __naked void variable_offset_stack_read_uninitialized(void)
+ 
+ SEC("socket")
+ __description("variable-offset stack write, priv vs unpriv")
+-__success __failure_unpriv
++__success
++/* Check that the maximum stack depth is correctly maintained according to the
++ * maximum possible variable offset.
++ */
++__log_level(4) __msg("stack depth 16")
++__failure_unpriv
+ /* Variable stack access is rejected for unprivileged.
+  */
+ __msg_unpriv("R2 variable stack access prohibited for !root")
+ __retval(0)
+ __naked void stack_write_priv_vs_unpriv(void)
++{
++	asm volatile ("                               \
++	/* Get an unknown value */                    \
++	r2 = *(u32*)(r1 + 0);                         \
++	/* Make it small and 8-byte aligned */        \
++	r2 &= 8;                                      \
++	r2 -= 16;                                     \
++	/* Add it to fp. We now have either fp-8 or   \
++	 * fp-16, but we don't know which             \
++	 */                                           \
++	r2 += r10;                                    \
++	/* Dereference it for a stack write */        \
++	r0 = 0;                                       \
++	*(u64*)(r2 + 0) = r0;                         \
++	exit;                                         \
++"	::: __clobber_all);
++}
++
++/* Similar to the previous test, but this time also perform a read from the
++ * address written to with a variable offset. The read is allowed, showing that,
++ * after a variable-offset write, a privileged program can read the slots that
++ * were in the range of that write (even if the verifier doesn't actually know
++ * whether the slot being read was really written to or not).
++ *
++ * Despite this test being mostly a superset, the previous test is also kept for
++ * the sake of it checking the stack depth in the case where there is no read.
++ */
++SEC("socket")
++__description("variable-offset stack write followed by read")
++__success
++/* Check that the maximum stack depth is correctly maintained according to the
++ * maximum possible variable offset.
++ */
++__log_level(4) __msg("stack depth 16")
++__failure_unpriv
++__msg_unpriv("R2 variable stack access prohibited for !root")
++__retval(0)
++__naked void stack_write_followed_by_read(void)
+ {
+ 	asm volatile ("					\
+ 	/* Get an unknown value */			\
+@@ -103,12 +147,7 @@ __naked void stack_write_priv_vs_unpriv(void)
+ 	/* Dereference it for a stack write */		\
+ 	r0 = 0;						\
+ 	*(u64*)(r2 + 0) = r0;				\
+-	/* Now read from the address we just wrote. This shows\
+-	 * that, after a variable-offset write, a priviledged\
+-	 * program can read the slots that were in the range of\
+-	 * that write (even if the verifier doesn't actually know\
+-	 * if the slot being read was really written to or not.\
+-	 */						\
++	/* Now read from the address we just wrote. */ \
+ 	r3 = *(u64*)(r2 + 0);				\
+ 	r0 = 0;						\
+ 	exit;						\
+@@ -253,9 +292,10 @@ __naked void access_min_out_of_bound(void)
+ 	: __clobber_all);
+ }
+ 
+-SEC("lwt_in")
++SEC("cgroup/skb")
+ __description("indirect variable-offset stack access, min_off < min_initialized")
+-__failure __msg("invalid indirect read from stack R2 var_off")
++__success
++__failure_unpriv __msg_unpriv("R2 variable stack access prohibited for !root")
+ __naked void access_min_off_min_initialized(void)
+ {
+ 	asm volatile ("					\
+diff --git a/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
+index 80f620602d50f..518329c666e93 100644
+--- a/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
++++ b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
+@@ -467,13 +467,13 @@ static __always_inline int tcp_lookup(void *ctx, struct header_pointers *hdr, bo
+ 		unsigned long status = ct->status;
+ 
+ 		bpf_ct_release(ct);
+-		if (status & IPS_CONFIRMED_BIT)
++		if (status & IPS_CONFIRMED)
+ 			return XDP_PASS;
+ 	} else if (ct_lookup_opts.error != -ENOENT) {
+ 		return XDP_ABORTED;
+ 	}
+ 
+-	/* error == -ENOENT || !(status & IPS_CONFIRMED_BIT) */
++	/* error == -ENOENT || !(status & IPS_CONFIRMED) */
+ 	return XDP_TX;
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
+index 319337bdcfc85..9a7b1106fda81 100644
+--- a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
++++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
+@@ -83,17 +83,6 @@
+ 	.result = REJECT,
+ 	.errstr = "!read_ok",
+ },
+-{
+-	"Can't use cmpxchg on uninit memory",
+-	.insns = {
+-		BPF_MOV64_IMM(BPF_REG_0, 3),
+-		BPF_MOV64_IMM(BPF_REG_2, 4),
+-		BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_2, -8),
+-		BPF_EXIT_INSN(),
+-	},
+-	.result = REJECT,
+-	.errstr = "invalid read from stack",
+-},
+ {
+ 	"BPF_W cmpxchg should zero top 32 bits",
+ 	.insns = {
+diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
+index 3d5cd51071f04..ab25a81fd3a10 100644
+--- a/tools/testing/selftests/bpf/verifier/calls.c
++++ b/tools/testing/selftests/bpf/verifier/calls.c
+@@ -1505,7 +1505,9 @@
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.fixup_map_hash_8b = { 23 },
+ 	.result = REJECT,
+-	.errstr = "invalid read from stack R7 off=-16 size=8",
++	.errstr = "R0 invalid mem access 'scalar'",
++	.result_unpriv = REJECT,
++	.errstr_unpriv = "invalid read from stack R7 off=-16 size=8",
+ },
+ {
+ 	"calls: two calls that receive map_value via arg=ptr_stack_of_caller. test1",
+diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
+index b604c570309a7..b1102ee13faa1 100644
+--- a/tools/testing/selftests/bpf/xskxceiver.c
++++ b/tools/testing/selftests/bpf/xskxceiver.c
+@@ -634,16 +634,24 @@ static u32 pkt_nb_frags(u32 frame_size, struct pkt_stream *pkt_stream, struct pk
+ 	return nb_frags;
+ }
+ 
++static bool set_pkt_valid(int offset, u32 len)
++{
++	return len <= MAX_ETH_JUMBO_SIZE;
++}
++
+ static void pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt, int offset, u32 len)
+ {
+ 	pkt->offset = offset;
+ 	pkt->len = len;
+-	if (len > MAX_ETH_JUMBO_SIZE) {
+-		pkt->valid = false;
+-	} else {
+-		pkt->valid = true;
+-		pkt_stream->nb_valid_entries++;
+-	}
++	pkt->valid = set_pkt_valid(offset, len);
++}
++
++static void pkt_stream_pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt, int offset, u32 len)
++{
++	bool prev_pkt_valid = pkt->valid;
++
++	pkt_set(pkt_stream, pkt, offset, len);
++	pkt_stream->nb_valid_entries += pkt->valid - prev_pkt_valid;
+ }
+ 
+ static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
+@@ -665,7 +673,7 @@ static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb
+ 	for (i = 0; i < nb_pkts; i++) {
+ 		struct pkt *pkt = &pkt_stream->pkts[i];
+ 
+-		pkt_set(pkt_stream, pkt, 0, pkt_len);
++		pkt_stream_pkt_set(pkt_stream, pkt, 0, pkt_len);
+ 		pkt->pkt_nb = nb_start + i * nb_off;
+ 	}
+ 
+@@ -700,10 +708,9 @@ static void __pkt_stream_replace_half(struct ifobject *ifobj, u32 pkt_len,
+ 
+ 	pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream);
+ 	for (i = 1; i < ifobj->xsk->pkt_stream->nb_pkts; i += 2)
+-		pkt_set(pkt_stream, &pkt_stream->pkts[i], offset, pkt_len);
++		pkt_stream_pkt_set(pkt_stream, &pkt_stream->pkts[i], offset, pkt_len);
+ 
+ 	ifobj->xsk->pkt_stream = pkt_stream;
+-	pkt_stream->nb_valid_entries /= 2;
+ }
+ 
+ static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
+diff --git a/tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh b/tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh
+index ad4c845a4ac7c..b76bf50309524 100755
+--- a/tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh
++++ b/tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ # Regression Test:
+diff --git a/tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh b/tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh
+index 2330d37453f95..8c26190021479 100755
+--- a/tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh
++++ b/tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ # Regression Test:
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
+index 42ce602d8d492..e71d811656bb5 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh
+@@ -120,6 +120,9 @@ h2_destroy()
+ 
+ switch_create()
+ {
++	local lanes_swp4
++	local pg1_size
++
+ 	# pools
+ 	# -----
+ 
+@@ -229,7 +232,20 @@ switch_create()
+ 	dcb pfc set dev $swp4 prio-pfc all:off 1:on
+ 	# PG0 will get autoconfigured to Xoff, give PG1 arbitrarily 100K, which
+ 	# is (-2*MTU) about 80K of delay provision.
+-	dcb buffer set dev $swp4 buffer-size all:0 1:$_100KB
++	pg1_size=$_100KB
++
++	setup_wait_dev_with_timeout $swp4
++
++	lanes_swp4=$(ethtool $swp4 | grep 'Lanes:')
++	lanes_swp4=${lanes_swp4#*"Lanes: "}
++
++	# 8-lane ports use two buffers among which the configured buffer
++	# is split, so double the size to get twice (20K + 80K).
++	if [[ $lanes_swp4 -eq 8 ]]; then
++		pg1_size=$((pg1_size * 2))
++	fi
++
++	dcb buffer set dev $swp4 buffer-size all:0 1:$pg1_size
+ 
+ 	# bridges
+ 	# -------
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+index fb850e0ec8375..616d3581419ca 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh
+@@ -10,7 +10,8 @@ lib_dir=$(dirname $0)/../../../../net/forwarding
+ ALL_TESTS="single_mask_test identical_filters_test two_masks_test \
+ 	multiple_masks_test ctcam_edge_cases_test delta_simple_test \
+ 	delta_two_masks_one_key_test delta_simple_rehash_test \
+-	bloom_simple_test bloom_complex_test bloom_delta_test"
++	bloom_simple_test bloom_complex_test bloom_delta_test \
++	max_erp_entries_test max_group_size_test"
+ NUM_NETIFS=2
+ source $lib_dir/lib.sh
+ source $lib_dir/tc_common.sh
+@@ -983,6 +984,109 @@ bloom_delta_test()
+ 	log_test "bloom delta test ($tcflags)"
+ }
+ 
++max_erp_entries_test()
++{
++	# The number of eRP entries is limited. Once the maximum number of eRPs
++	# has been reached, filters cannot be added. This test verifies that
++	# when this limit is reached, insertion fails without crashing.
++
++	RET=0
++
++	local num_masks=32
++	local num_regions=15
++	local chain_failed
++	local mask_failed
++	local ret
++
++	if [[ "$tcflags" != "skip_sw" ]]; then
++		return 0;
++	fi
++
++	for ((i=1; i < $num_regions; i++)); do
++		for ((j=$num_masks; j >= 0; j--)); do
++			tc filter add dev $h2 ingress chain $i protocol ip \
++				pref $i	handle $j flower $tcflags \
++				dst_ip 192.1.0.0/$j &> /dev/null
++			ret=$?
++
++			if [ $ret -ne 0 ]; then
++				chain_failed=$i
++				mask_failed=$j
++				break 2
++			fi
++		done
++	done
++
++	# We expect to exceed the maximum number of eRP entries, so that
++	# insertion eventually fails. Otherwise, the test should be adjusted to
++	# add more filters.
++	check_fail $ret "expected to exceed number of eRP entries"
++
++	for ((; i >= 1; i--)); do
++		for ((j=0; j <= $num_masks; j++)); do
++			tc filter del dev $h2 ingress chain $i protocol ip \
++				pref $i handle $j flower &> /dev/null
++		done
++	done
++
++	log_test "max eRP entries test ($tcflags). " \
++		"max chain $chain_failed, mask $mask_failed"
++}
++
++max_group_size_test()
++{
++	# The number of ACLs in an ACL group is limited. Once the maximum
++	# number of ACLs has been reached, filters cannot be added. This test
++	# verifies that when this limit is reached, insertion fails without
++	# crashing.
++
++	RET=0
++
++	local num_acls=32
++	local max_size
++	local ret
++
++	if [[ "$tcflags" != "skip_sw" ]]; then
++		return 0;
++	fi
++
++	for ((i=1; i < $num_acls; i++)); do
++		if [[ $(( i % 2 )) == 1 ]]; then
++			tc filter add dev $h2 ingress pref $i proto ipv4 \
++				flower $tcflags dst_ip 198.51.100.1/32 \
++				ip_proto tcp tcp_flags 0x01/0x01 \
++				action drop &> /dev/null
++		else
++			tc filter add dev $h2 ingress pref $i proto ipv6 \
++				flower $tcflags dst_ip 2001:db8:1::1/128 \
++				action drop &> /dev/null
++		fi
++
++		ret=$?
++		[[ $ret -ne 0 ]] && max_size=$((i - 1)) && break
++	done
++
++	# We expect to exceed the maximum number of ACLs in a group, so that
++	# insertion eventually fails. Otherwise, the test should be adjusted to
++	# add more filters.
++	check_fail $ret "expected to exceed number of ACLs in a group"
++
++	for ((; i >= 1; i--)); do
++		if [[ $(( i % 2 )) == 1 ]]; then
++			tc filter del dev $h2 ingress pref $i proto ipv4 \
++				flower $tcflags dst_ip 198.51.100.1/32 \
++				ip_proto tcp tcp_flags 0x01/0x01 \
++				action drop &> /dev/null
++		else
++			tc filter del dev $h2 ingress pref $i proto ipv6 \
++				flower $tcflags dst_ip 2001:db8:1::1/128 \
++				action drop &> /dev/null
++		fi
++	done
++
++	log_test "max ACL group size test ($tcflags). max size $max_size"
++}
++
+ setup_prepare()
+ {
+ 	h1=${NETIFS[p1]}
+diff --git a/tools/testing/selftests/net/arp_ndisc_untracked_subnets.sh b/tools/testing/selftests/net/arp_ndisc_untracked_subnets.sh
+index c899b446acb62..327427ec10f56 100755
+--- a/tools/testing/selftests/net/arp_ndisc_untracked_subnets.sh
++++ b/tools/testing/selftests/net/arp_ndisc_untracked_subnets.sh
+@@ -150,7 +150,7 @@ arp_test_gratuitous() {
+ 	fi
+ 	# Supply arp_accept option to set up which sets it in sysctl
+ 	setup ${arp_accept}
+-	ip netns exec ${HOST_NS} arping -A -U ${HOST_ADDR} -c1 2>&1 >/dev/null
++	ip netns exec ${HOST_NS} arping -A -I ${HOST_INTF} -U ${HOST_ADDR} -c1 2>&1 >/dev/null
+ 
+ 	if verify_arp $1 $2; then
+ 		printf "    TEST: %-60s  [ OK ]\n" "${test_msg[*]}"
+diff --git a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+index 51df5e305855a..b52d59547fc59 100755
+--- a/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
++++ b/tools/testing/selftests/net/fib_nexthop_multiprefix.sh
+@@ -209,12 +209,12 @@ validate_v6_exception()
+ 		echo "Route get"
+ 		ip -netns h0 -6 ro get ${dst}
+ 		echo "Searching for:"
+-		echo "    ${dst} from :: via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
++		echo "    ${dst}.* via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
+ 		echo
+ 	fi
+ 
+ 	ip -netns h0 -6 ro get ${dst} | \
+-	grep -q "${dst} from :: via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
++	grep -q "${dst}.* via ${r1} dev eth0 src ${h0} .* mtu ${mtu}"
+ 	rc=$?
+ 
+ 	log_test $rc 0 "IPv6: host 0 to host ${i}, mtu ${mtu}"
+diff --git a/tools/testing/selftests/powerpc/math/fpu_preempt.c b/tools/testing/selftests/powerpc/math/fpu_preempt.c
+index 5235bdc8c0b11..3e5b5663d2449 100644
+--- a/tools/testing/selftests/powerpc/math/fpu_preempt.c
++++ b/tools/testing/selftests/powerpc/math/fpu_preempt.c
+@@ -37,19 +37,20 @@ __thread double darray[] = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0,
+ int threads_starting;
+ int running;
+ 
+-extern void preempt_fpu(double *darray, int *threads_starting, int *running);
++extern int preempt_fpu(double *darray, int *threads_starting, int *running);
+ 
+ void *preempt_fpu_c(void *p)
+ {
++	long rc;
+ 	int i;
++
+ 	srand(pthread_self());
+ 	for (i = 0; i < 21; i++)
+ 		darray[i] = rand();
+ 
+-	/* Test failed if it ever returns */
+-	preempt_fpu(darray, &threads_starting, &running);
++	rc = preempt_fpu(darray, &threads_starting, &running);
+ 
+-	return p;
++	return (void *)rc;
+ }
+ 
+ int test_preempt_fpu(void)
+diff --git a/tools/testing/selftests/powerpc/math/vmx_preempt.c b/tools/testing/selftests/powerpc/math/vmx_preempt.c
+index 6761d6ce30eca..6f7cf400c6875 100644
+--- a/tools/testing/selftests/powerpc/math/vmx_preempt.c
++++ b/tools/testing/selftests/powerpc/math/vmx_preempt.c
+@@ -37,19 +37,21 @@ __thread vector int varray[] = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10,11,12},
+ int threads_starting;
+ int running;
+ 
+-extern void preempt_vmx(vector int *varray, int *threads_starting, int *running);
++extern int preempt_vmx(vector int *varray, int *threads_starting, int *running);
+ 
+ void *preempt_vmx_c(void *p)
+ {
+ 	int i, j;
++	long rc;
++
+ 	srand(pthread_self());
+ 	for (i = 0; i < 12; i++)
+ 		for (j = 0; j < 4; j++)
+ 			varray[i][j] = rand();
+ 
+-	/* Test fails if it ever returns */
+-	preempt_vmx(varray, &threads_starting, &running);
+-	return p;
++	rc = preempt_vmx(varray, &threads_starting, &running);
++
++	return (void *)rc;
+ }
+ 
+ int test_preempt_vmx(void)
+diff --git a/tools/testing/selftests/sgx/Makefile b/tools/testing/selftests/sgx/Makefile
+index 50aab6b57da34..01abe4969b0f9 100644
+--- a/tools/testing/selftests/sgx/Makefile
++++ b/tools/testing/selftests/sgx/Makefile
+@@ -16,10 +16,10 @@ HOST_CFLAGS := -Wall -Werror -g $(INCLUDES) -fPIC -z noexecstack
+ ENCL_CFLAGS := -Wall -Werror -static -nostdlib -nostartfiles -fPIC \
+ 	       -fno-stack-protector -mrdrnd $(INCLUDES)
+ 
++ifeq ($(CAN_BUILD_X86_64), 1)
+ TEST_CUSTOM_PROGS := $(OUTPUT)/test_sgx
+ TEST_FILES := $(OUTPUT)/test_encl.elf
+ 
+-ifeq ($(CAN_BUILD_X86_64), 1)
+ all: $(TEST_CUSTOM_PROGS) $(OUTPUT)/test_encl.elf
+ endif
+ 
+diff --git a/tools/testing/selftests/sgx/load.c b/tools/testing/selftests/sgx/load.c
+index 94bdeac1cf041..c9f658e44de6c 100644
+--- a/tools/testing/selftests/sgx/load.c
++++ b/tools/testing/selftests/sgx/load.c
+@@ -136,11 +136,11 @@ static bool encl_ioc_add_pages(struct encl *encl, struct encl_segment *seg)
+  */
+ uint64_t encl_get_entry(struct encl *encl, const char *symbol)
+ {
++	Elf64_Sym *symtab = NULL;
++	char *sym_names = NULL;
+ 	Elf64_Shdr *sections;
+-	Elf64_Sym *symtab;
+ 	Elf64_Ehdr *ehdr;
+-	char *sym_names;
+-	int num_sym;
++	int num_sym = 0;
+ 	int i;
+ 
+ 	ehdr = encl->bin;
+@@ -161,6 +161,9 @@ uint64_t encl_get_entry(struct encl *encl, const char *symbol)
+ 		}
+ 	}
+ 
++	if (!symtab || !sym_names)
++		return 0;
++
+ 	for (i = 0; i < num_sym; i++) {
+ 		Elf64_Sym *sym = &symtab[i];
+ 
+diff --git a/tools/testing/selftests/sgx/sigstruct.c b/tools/testing/selftests/sgx/sigstruct.c
+index a07896a463643..d73b29becf5b0 100644
+--- a/tools/testing/selftests/sgx/sigstruct.c
++++ b/tools/testing/selftests/sgx/sigstruct.c
+@@ -318,9 +318,9 @@ bool encl_measure(struct encl *encl)
+ 	struct sgx_sigstruct *sigstruct = &encl->sigstruct;
+ 	struct sgx_sigstruct_payload payload;
+ 	uint8_t digest[SHA256_DIGEST_LENGTH];
++	EVP_MD_CTX *ctx = NULL;
+ 	unsigned int siglen;
+ 	RSA *key = NULL;
+-	EVP_MD_CTX *ctx;
+ 	int i;
+ 
+ 	memset(sigstruct, 0, sizeof(*sigstruct));
+@@ -384,7 +384,8 @@ bool encl_measure(struct encl *encl)
+ 	return true;
+ 
+ err:
+-	EVP_MD_CTX_destroy(ctx);
++	if (ctx)
++		EVP_MD_CTX_destroy(ctx);
+ 	RSA_free(key);
+ 	return false;
+ }
+diff --git a/tools/testing/selftests/sgx/test_encl.c b/tools/testing/selftests/sgx/test_encl.c
+index c0d6397295e31..ae791df3e5a57 100644
+--- a/tools/testing/selftests/sgx/test_encl.c
++++ b/tools/testing/selftests/sgx/test_encl.c
+@@ -24,10 +24,11 @@ static void do_encl_emodpe(void *_op)
+ 	secinfo.flags = op->flags;
+ 
+ 	asm volatile(".byte 0x0f, 0x01, 0xd7"
+-				:
++				: /* no outputs */
+ 				: "a" (EMODPE),
+ 				  "b" (&secinfo),
+-				  "c" (op->epc_addr));
++				  "c" (op->epc_addr)
++				: "memory" /* read from secinfo pointer */);
+ }
+ 
+ static void do_encl_eaccept(void *_op)
+@@ -42,7 +43,8 @@ static void do_encl_eaccept(void *_op)
+ 				: "=a" (rax)
+ 				: "a" (EACCEPT),
+ 				  "b" (&secinfo),
+-				  "c" (op->epc_addr));
++				  "c" (op->epc_addr)
++				: "memory" /* read from secinfo pointer */);
+ 
+ 	op->ret = rax;
+ }


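The test_encl.c change at the tail of the patch deserves a note as
well: the EMODPE/EACCEPT asm statements read through the secinfo
pointer, but only the pointer value is listed as an input, so without a
"memory" clobber the compiler may assume the asm touches no memory and
sink the stores that initialize secinfo past it. A stand-alone sketch
of the hazard, reusing the byte sequence from the test purely as a
placeholder opcode:

	struct args { unsigned long flags; };

	static unsigned long run_insn(struct args *arg)
	{
		unsigned long ret;

		arg->flags = 0x2;	/* must reach memory before the insn */
		asm volatile(".byte 0x0f, 0x01, 0xd7"	/* placeholder */
			     : "=a" (ret)
			     : "b" (arg)	/* pointer in, contents read */
			     : "memory");	/* asm reads/writes memory */
		return ret;
	}

Without the clobber, nothing the compiler can see reads arg->flags
before the asm, so it is free to move that store after the instruction
that depends on it.
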
^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-26  0:20 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-26  0:20 UTC (permalink / raw
  To: gentoo-commits

commit:     cb734b7108233eb38031bb37bd5ca6dd00c6be2e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 26 00:19:58 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 26 00:19:58 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cb734b71

Remove redundant patch

Removed:
media: solo6x10: replace max(a, min(b, c)) by clamp(b, a, c)

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 --
 2700_solo6x10-mem-resource-reduction-fix.patch | 61 --------------------------
 2 files changed, 65 deletions(-)

diff --git a/0000_README b/0000_README
index 1eabc2b1..c1c228e5 100644
--- a/0000_README
+++ b/0000_README
@@ -71,10 +71,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2700_solo6x10-mem-resource-reduction-fix.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
-Desc:   media: solo6x10: replace max(a, min(b, c)) by clamp(b, a, c)
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_solo6x10-mem-resource-reduction-fix.patch b/2700_solo6x10-mem-resource-reduction-fix.patch
deleted file mode 100644
index bf406a92..00000000
--- a/2700_solo6x10-mem-resource-reduction-fix.patch
+++ /dev/null
@@ -1,61 +0,0 @@
-From 31e97d7c9ae3de072d7b424b2cf706a03ec10720 Mon Sep 17 00:00:00 2001
-From: Aurelien Jarno <aurelien@aurel32.net>
-Date: Sat, 13 Jan 2024 19:33:31 +0100
-Subject: media: solo6x10: replace max(a, min(b, c)) by clamp(b, a, c)
-
-This patch replaces max(a, min(b, c)) by clamp(b, a, c) in the solo6x10
-driver.  This improves the readability and more importantly, for the
-solo6x10-p2m.c file, this reduces on my system (x86-64, gcc 13):
-
- - the preprocessed size from 121 MiB to 4.5 MiB;
-
- - the build CPU time from 46.8 s to 1.6 s;
-
- - the build memory from 2786 MiB to 98MiB.
-
-In fine, this allows this relatively simple C file to be built on a
-32-bit system.
-
-Reported-by: Jiri Slaby <jirislaby@gmail.com>
-Closes: https://lore.kernel.org/lkml/18c6df0d-45ed-450c-9eda-95160a2bbb8e@gmail.com/
-Cc:  <stable@vger.kernel.org> # v6.7+
-Suggested-by: David Laight <David.Laight@ACULAB.COM>
-Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
-Reviewed-by: David Laight <David.Laight@ACULAB.COM>
-Reviewed-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
----
- drivers/media/pci/solo6x10/solo6x10-offsets.h | 10 +++++-----
- 1 file changed, 5 insertions(+), 5 deletions(-)
-
-(limited to 'drivers/media/pci/solo6x10/solo6x10-offsets.h')
-
-diff --git a/drivers/media/pci/solo6x10/solo6x10-offsets.h b/drivers/media/pci/solo6x10/solo6x10-offsets.h
-index f414ee1316f29c..fdbb817e63601c 100644
---- a/drivers/media/pci/solo6x10/solo6x10-offsets.h
-+++ b/drivers/media/pci/solo6x10/solo6x10-offsets.h
-@@ -57,16 +57,16 @@
- #define SOLO_MP4E_EXT_ADDR(__solo) \
- 	(SOLO_EREF_EXT_ADDR(__solo) + SOLO_EREF_EXT_AREA(__solo))
- #define SOLO_MP4E_EXT_SIZE(__solo) \
--	max((__solo->nr_chans * 0x00080000),				\
--	    min(((__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo)) -	\
--		 __SOLO_JPEG_MIN_SIZE(__solo)), 0x00ff0000))
-+	clamp(__solo->sdram_size - SOLO_MP4E_EXT_ADDR(__solo) -	\
-+	      __SOLO_JPEG_MIN_SIZE(__solo),			\
-+	      __solo->nr_chans * 0x00080000, 0x00ff0000)
- 
- #define __SOLO_JPEG_MIN_SIZE(__solo)		(__solo->nr_chans * 0x00080000)
- #define SOLO_JPEG_EXT_ADDR(__solo) \
- 		(SOLO_MP4E_EXT_ADDR(__solo) + SOLO_MP4E_EXT_SIZE(__solo))
- #define SOLO_JPEG_EXT_SIZE(__solo) \
--	max(__SOLO_JPEG_MIN_SIZE(__solo),				\
--	    min((__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo)), 0x00ff0000))
-+	clamp(__solo->sdram_size - SOLO_JPEG_EXT_ADDR(__solo),	\
-+	      __SOLO_JPEG_MIN_SIZE(__solo), 0x00ff0000)
- 
- #define SOLO_SDRAM_END(__solo) \
- 	(SOLO_JPEG_EXT_ADDR(__solo) + SOLO_JPEG_EXT_SIZE(__solo))
--- 
-cgit 1.2.3-korg
-
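
For the record, the rewrite above is behavior-preserving: for a <= c,
clamp(b, a, c) selects the same value as max(a, min(b, c)), and the
build-time win comes from not re-expanding the whole inner min()
expression inside the outer macro's argument copies. A small user-space
sketch of the equivalence (plain ternary macros here; the kernel's
clamp() in include/linux/minmax.h layers type checking on top):

	#include <assert.h>

	#define MIN(a, b)		((a) < (b) ? (a) : (b))
	#define MAX(a, b)		((a) > (b) ? (a) : (b))
	#define CLAMP(val, lo, hi)	MAX(lo, MIN(val, hi))

	int main(void)
	{
		assert(CLAMP(5, 1, 10) == MAX(1, MIN(5, 10)));	/* in range */
		assert(CLAMP(-3, 1, 10) == 1);			/* below lo */
		assert(CLAMP(42, 1, 10) == 10);			/* above hi */
		return 0;
	}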


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-26 22:44 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-26 22:44 UTC (permalink / raw
  To: gentoo-commits

commit:     4421698e9ae682a70d6f2f22e2eeeef77251f58f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 26 22:40:04 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 26 22:40:04 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4421698e

amdgpu: Adjust kmalloc_array calls for new -Walloc-size

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 ...j-kmalloc-array-calls-for-new-Walloc-size.patch | 124 +++++++++++++++++++++
 2 files changed, 128 insertions(+)

diff --git a/0000_README b/0000_README
index c1c228e5..34a20863 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2800_amdgpu-Adj-kmalloc-array-calls-for-new-Walloc-size.patch
+From:   sam@gentoo.org
+Desc:   amdgpu: Adjust kmalloc_array calls for new -Walloc-size
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2800_amdgpu-Adj-kmalloc-array-calls-for-new-Walloc-size.patch b/2800_amdgpu-Adj-kmalloc-array-calls-for-new-Walloc-size.patch
new file mode 100644
index 00000000..798ac040
--- /dev/null
+++ b/2800_amdgpu-Adj-kmalloc-array-calls-for-new-Walloc-size.patch
@@ -0,0 +1,124 @@
+From: Sam James <sam@gentoo.org>
+To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
+Cc: linux-kernel@vger.kernel.org, Sam James <sam@gentoo.org>
+Subject: [PATCH] amdgpu: Adjust kmalloc_array calls for new -Walloc-size
+Date: Sun,  5 Nov 2023 16:06:50 +0000	[thread overview]
+Message-ID: <20231105160652.374422-1-sam@gentoo.org> (raw)
+In-Reply-To: <87wmuwo7i3.fsf@gentoo.org>
+
+GCC 14 introduces a new -Walloc-size included in -Wextra which errors out
+on various files in drivers/gpu/drm/amd/amdgpu like:
+```
+amdgpu_amdkfd_gfx_v8.c:241:15: error: allocation of insufficient size ‘4’ for type ‘uint32_t[2]’ {aka ‘unsigned int[2]'} with size ‘8’ [-Werror=alloc-size]
+```
+
+This is because each of the HQD_N_REGS entries is actually a uint32_t[2]. Move
+the * 2 into the size argument so GCC sees we're allocating enough.
+
+Originally did 'sizeof(uint32_t) * 2' for the size but a friend suggested
+'sizeof(**dump)' better communicates the intent.
+
+Link: https://lore.kernel.org/all/87wmuwo7i3.fsf@gentoo.org/
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+ drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c | 2 +-
+ drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gc_9_4_3.c | 2 +-
+ drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c   | 4 ++--
+ drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c   | 4 ++--
+ drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c   | 4 ++--
+ 5 files changed, 8 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
+index 625db444df1c..0ba15dcbe4e1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
+@@ -200,7 +200,7 @@ int kgd_arcturus_hqd_sdma_dump(struct amdgpu_device *adev,
+ #undef HQD_N_REGS
+ #define HQD_N_REGS (19+6+7+10)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gc_9_4_3.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gc_9_4_3.c
+index 490c8f5ddb60..ca7238b5535b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gc_9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gc_9_4_3.c
+@@ -141,7 +141,7 @@ static int kgd_gfx_v9_4_3_hqd_sdma_dump(struct amdgpu_device *adev,
+ 		(*dump)[i++][1] = RREG32(addr);         \
+ 	} while (0)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+index 6bf448ab3dff..ca4a6b82817f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+@@ -214,7 +214,7 @@ static int kgd_hqd_dump(struct amdgpu_device *adev,
+ 		(*dump)[i++][1] = RREG32(addr);		\
+ 	} while (0)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+@@ -301,7 +301,7 @@ static int kgd_hqd_sdma_dump(struct amdgpu_device *adev,
+ #undef HQD_N_REGS
+ #define HQD_N_REGS (19+4)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
+index cd06e4a6d1da..0f3e2944edd7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c
+@@ -238,7 +238,7 @@ static int kgd_hqd_dump(struct amdgpu_device *adev,
+ 		(*dump)[i++][1] = RREG32(addr);		\
+ 	} while (0)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+@@ -324,7 +324,7 @@ static int kgd_hqd_sdma_dump(struct amdgpu_device *adev,
+ #undef HQD_N_REGS
+ #define HQD_N_REGS (19+4+2+3+7)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+index 51011e8ee90d..a3355b90aac5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+@@ -365,7 +365,7 @@ int kgd_gfx_v9_hqd_dump(struct amdgpu_device *adev,
+ 		(*dump)[i++][1] = RREG32(addr);		\
+ 	} while (0)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+@@ -462,7 +462,7 @@ static int kgd_hqd_sdma_dump(struct amdgpu_device *adev,
+ #undef HQD_N_REGS
+ #define HQD_N_REGS (19+6+7+10)
+ 
+-	*dump = kmalloc_array(HQD_N_REGS * 2, sizeof(uint32_t), GFP_KERNEL);
++	*dump = kmalloc_array(HQD_N_REGS, sizeof(**dump), GFP_KERNEL);
+ 	if (*dump == NULL)
+ 		return -ENOMEM;
+ 
+-- 
+2.42.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-26 22:44 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-26 22:44 UTC (permalink / raw
  To: gentoo-commits

commit:     7ebd3d5eb40bc16b8efe35e288abc36fb859c16c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 26 22:44:00 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 26 22:44:00 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7ebd3d5e

mm: huge_memory: don't force huge page alignment on 32 bit

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |  4 ++
 1800_mm-32-bit-huge-page-alignment-fix.patch | 57 ++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/0000_README b/0000_README
index 34a20863..19339793 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:	  https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:	  prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
+Patch:  1800_mm-32-bit-huge-page-alignment-fix.patch
+From:   https://lore.kernel.org/all/20240118180505.2914778-1-shy828301@gmail.com/T/
+Desc:   mm: huge_memory: don't force huge page alignment on 32 bit
+
 Patch:  1805_mm-disable-CONFIG-PER-VMA-LOCK-by-def.patch
 From:   https://lore.kernel.org/all/20230703182150.2193578-1-surenb@google.com/
 Desc:   mm: disable CONFIG_PER_VMA_LOCK by default until its fixed

diff --git a/1800_mm-32-bit-huge-page-alignment-fix.patch b/1800_mm-32-bit-huge-page-alignment-fix.patch
new file mode 100644
index 00000000..5ad5d49b
--- /dev/null
+++ b/1800_mm-32-bit-huge-page-alignment-fix.patch
@@ -0,0 +1,57 @@
+From: Yang Shi @ 2024-01-18 13:35 UTC (permalink / raw)
+  To: jirislaby, surenb, riel, willy, cl, akpm; +Cc: yang, linux-mm, linux-kernel
+
+From: Yang Shi <yang@os.amperecomputing.com>
+
+The commit efa7df3e3bb5 ("mm: align larger anonymous mappings on THP
+boundaries") caused two issues [1] [2] reported on 32 bit system or compat
+userspace.
+
+It doesn't make too much sense to force huge page alignment on a 32-bit
+system due to the constrained virtual address space.
+
+[1] https://lore.kernel.org/linux-mm/CAHbLzkqa1SCBA10yjWTtA2mKCsoK5+M1BthSDL8ROvUq2XxZMw@mail.gmail.com/T/#mf211643a0427f8d6495b5b53f8132f453d60ab95
+[2] https://lore.kernel.org/linux-mm/CAHbLzkqa1SCBA10yjWTtA2mKCsoK5+M1BthSDL8ROvUq2XxZMw@mail.gmail.com/T/#me93dff2ccbd9902c3e395e1c022fb454e48ecb1d
+
+Fixes: efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries")
+Reported-by: Jiri Slaby <jirislaby@kernel.org>
+Reported-by: Suren Baghdasaryan <surenb@google.com>
+Tested-by: Jiri Slaby <jirislaby@kernel.org>
+Tested-by: Suren Baghdasaryan <surenb@google.com>
+Cc: Rik van Riel <riel@surriel.com>
+Cc: Matthew Wilcox <willy@infradead.org>
+Cc: Christopher Lameter <cl@linux.com>
+Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
+---
+ mm/huge_memory.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 94ef5c02b459..e9fbaccbe0c0 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -37,6 +37,7 @@
+ #include <linux/page_owner.h>
+ #include <linux/sched/sysctl.h>
+ #include <linux/memory-tiers.h>
++#include <linux/compat.h>
+ 
+ #include <asm/tlb.h>
+ #include <asm/pgalloc.h>
+@@ -811,6 +812,14 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
+ 	loff_t off_align = round_up(off, size);
+ 	unsigned long len_pad, ret;
+ 
++	/*
++	 * It doesn't make too much sense to force huge page alignment on
++	 * a 32-bit system or compat userspace due to the constrained virtual
++	 * address space and address entropy.
++	 */
++	if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall())
++		return 0;
++
+ 	if (off_end <= off_align || (off_end - off_align) < size)
+ 		return 0;
+ 
+-- 
+2.41.0
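
The cost the new guard avoids is easy to put a number on: rounding a
mapping up to the next PMD boundary can skip almost a full PMD (2 MiB
on x86) of virtual address space per mapping, which is negligible in a
64-bit address space but adds up fast in the roughly 3 GiB available to
a 32-bit or compat process, and the forced alignment also strips
entropy from mmap address randomization. A quick sketch of the round-up
arithmetic (2 MiB PMD assumed, hypothetical hint address):

	#include <stdio.h>

	#define PMD_SIZE	(2UL << 20)	/* 2 MiB, x86 assumption */

	/* round_up() for power-of-two alignments, as in the kernel */
	static unsigned long round_up_pow2(unsigned long x, unsigned long a)
	{
		return (x + a - 1) & ~(a - 1);
	}

	int main(void)
	{
		unsigned long addr = 0x57e000;	/* arbitrary unaligned hint */
		unsigned long aligned = round_up_pow2(addr, PMD_SIZE);

		printf("skipped: %lu KiB of address space\n",
		       (aligned - addr) >> 10);
		return 0;
	}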


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-30 19:20 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-30 19:20 UTC (permalink / raw
  To: gentoo-commits

commit:     1d697bf344ab2e768e87ae7a6b4039fa7cb7971e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 30 19:19:28 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jan 30 19:19:28 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1d697bf3

Removed redundant patch

Removed:
mm: disable CONFIG_PER_VMA_LOCK by default until its fixed

Bug: https://bugs.gentoo.org/923337

Reported-by: Holger Hoffstätte
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                      |  4 ---
 1805_mm-disable-CONFIG-PER-VMA-LOCK-by-def.patch | 35 ------------------------
 2 files changed, 39 deletions(-)

diff --git a/0000_README b/0000_README
index 19339793..6a261619 100644
--- a/0000_README
+++ b/0000_README
@@ -67,10 +67,6 @@ Patch:  1800_mm-32-bit-huge-page-alignment-fix.patch
 From:   https://lore.kernel.org/all/20240118180505.2914778-1-shy828301@gmail.com/T/
 Desc:   mm: huge_memory: don't force huge page alignment on 32 bit
 
-Patch:  1805_mm-disable-CONFIG-PER-VMA-LOCK-by-def.patch
-From:   https://lore.kernel.org/all/20230703182150.2193578-1-surenb@google.com/
-Desc:   mm: disable CONFIG_PER_VMA_LOCK by default until its fixed
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1805_mm-disable-CONFIG-PER-VMA-LOCK-by-def.patch b/1805_mm-disable-CONFIG-PER-VMA-LOCK-by-def.patch
deleted file mode 100644
index c98255a6..00000000
--- a/1805_mm-disable-CONFIG-PER-VMA-LOCK-by-def.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-Subject: [PATCH 1/1] mm: disable CONFIG_PER_VMA_LOCK by default until its fixed
-Date: Mon,  3 Jul 2023 11:21:50 -0700	[thread overview]
-Message-ID: <20230703182150.2193578-1-surenb@google.com> (raw)
-
-A memory corruption was reported in [1] with bisection pointing to the
-patch [2] enabling per-VMA locks for x86.
-Disable per-VMA locks config to prevent this issue while the problem is
-being investigated. This is expected to be a temporary measure.
-
-[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624
-[2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
-
-Reported-by: Jiri Slaby <jirislaby@kernel.org>
-Reported-by: Jacob Young <jacobly.alt@gmail.com>
-Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
-Signed-off-by: Suren Baghdasaryan <surenb@google.com>
----
- mm/Kconfig | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/mm/Kconfig b/mm/Kconfig
-index 09130434e30d..de94b2497600 100644
---- a/mm/Kconfig
-+++ b/mm/Kconfig
-@@ -1224,7 +1224,7 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
-        def_bool n
- 
- config PER_VMA_LOCK
--	def_bool y
-+	bool "Enable per-vma locking during page fault handling."
- 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
- 	help
- 	  Allow per-vma locking during page fault handling.
--- 
-2.41.0.255.g8b1d071c50-goog



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-01-30 19:22 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-01-30 19:22 UTC (permalink / raw
  To: gentoo-commits

commit:     60370db1f47fd3fbd5afff386b079c05452cef7b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 30 19:21:57 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jan 30 19:21:57 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=60370db1

Bump BMQ to v6.7-r2

Bug: https://bugs.gentoo.org/923337

Reported-by: Holger Hoffstätte
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   2 +-
 ... => 5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch | 376 ++++++++++++++++-----
 2 files changed, 295 insertions(+), 83 deletions(-)

diff --git a/0000_README b/0000_README
index 6a261619..4b219a2e 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,6 @@ Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
 
-Patch:  5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
 From:   https://gitlab.com/alfredchen/projectc
 Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
similarity index 97%
rename from 5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch
rename to 5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
index d0ff29ff..977bdcd6 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v6.7-r0.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
@@ -1,22 +1,5 @@
-diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
-index 65731b060e3f..56f8c9453e24 100644
---- a/Documentation/admin-guide/kernel-parameters.txt
-+++ b/Documentation/admin-guide/kernel-parameters.txt
-@@ -5714,6 +5714,12 @@
- 	sa1100ir	[NET]
- 			See drivers/net/irda/sa1100_ir.c.
- 
-+	sched_timeslice=
-+			[KNL] Time slice in ms for Project C BMQ/PDS scheduler.
-+			Format: integer 2, 4
-+			Default: 4
-+			See Documentation/scheduler/sched-BMQ.txt
-+
- 	sched_verbose	[KNL] Enables verbose scheduler debug messages.
- 
- 	schedstats=	[KNL,X86] Enable or disable scheduled statistics.
 diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
-index 6584a1f9bfe3..e332d9eff0d4 100644
+index 6584a1f9bfe3..226c79dd34cc 100644
 --- a/Documentation/admin-guide/sysctl/kernel.rst
 +++ b/Documentation/admin-guide/sysctl/kernel.rst
 @@ -1646,3 +1646,13 @@ is 10 seconds.
@@ -28,11 +11,11 @@ index 6584a1f9bfe3..e332d9eff0d4 100644
 +===========
 +
 +BMQ/PDS CPU scheduler only. This determines what type of yield calls
-+to sched_yield will perform.
++to sched_yield() will be performed.
 +
 +  0 - No yield.
-+  1 - Deboost and requeue task. (default)
-+  2 - Set run queue skip task.
++  1 - Requeue task. (default)
++  2 - Set run queue skip task. Same as CFS.
 diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
 new file mode 100644
 index 000000000000..05c84eec0f31
@@ -683,10 +666,10 @@ index 976092b7bd45..31d587c16ec1 100644
  obj-y += build_utility.o
 diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
 new file mode 100644
-index 000000000000..b0c4ade2788c
+index 000000000000..5b6bdff6e630
 --- /dev/null
 +++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,8715 @@
+@@ -0,0 +1,8944 @@
 +/*
 + *  kernel/sched/alt_core.c
 + *
@@ -761,7 +744,7 @@ index 000000000000..b0c4ade2788c
 +#define sched_feat(x)	(0)
 +#endif /* CONFIG_SCHED_DEBUG */
 +
-+#define ALT_SCHED_VERSION "v6.7-r0"
++#define ALT_SCHED_VERSION "v6.7-r2"
 +
 +/*
 + * Compile time debug macro
@@ -775,8 +758,11 @@ index 000000000000..b0c4ade2788c
 +
 +#define STOP_PRIO		(MAX_RT_PRIO - 1)
 +
-+/* Default time slice is 4 in ms, can be set via kernel parameter "sched_timeslice" */
-+u64 sched_timeslice_ns __read_mostly = (4 << 20);
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
 +
 +static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx);
 +
@@ -793,28 +779,14 @@ index 000000000000..b0c4ade2788c
 +	unsigned int flags;
 +};
 +
-+static int __init sched_timeslice(char *str)
-+{
-+	int timeslice_ms;
-+
-+	get_option(&str, &timeslice_ms);
-+	if (2 != timeslice_ms)
-+		timeslice_ms = 4;
-+	sched_timeslice_ns = timeslice_ms << 20;
-+	sched_timeslice_imp(timeslice_ms);
-+
-+	return 0;
-+}
-+early_param("sched_timeslice", sched_timeslice);
-+
 +/* Reschedule if less than this many μs left */
 +#define RESCHED_NS		(100 << 10)
 +
 +/**
-+ * sched_yield_type - Choose what sort of yield sched_yield will perform.
++ * sched_yield_type - Type of sched_yield() will be performed.
 + * 0: No yield.
-+ * 1: Deboost and requeue task. (default)
-+ * 2: Set rq skip task.
++ * 1: Requeue task. (default)
++ * 2: Set rq skip task. (Same as mainline)
 + */
 +int sched_yield_type __read_mostly = 1;
 +
@@ -3937,7 +3909,7 @@ index 000000000000..b0c4ade2788c
 +#endif
 +
 +	if (p->time_slice < RESCHED_NS) {
-+		p->time_slice = sched_timeslice_ns;
++		p->time_slice = sysctl_sched_base_slice;
 +		resched_curr(rq);
 +	}
 +	sched_task_fork(p, rq);
@@ -5330,7 +5302,7 @@ index 000000000000..b0c4ade2788c
 +
 +static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
 +{
-+	p->time_slice = sched_timeslice_ns;
++	p->time_slice = sysctl_sched_base_slice;
 +
 +	sched_task_renew(p, rq);
 +
@@ -5666,7 +5638,7 @@ index 000000000000..b0c4ade2788c
 +{
 +	struct task_struct *tsk = current;
 +
-+#ifdef CONFIG_RT_MUTEXE
++#ifdef CONFIG_RT_MUTEXES
 +	lockdep_assert(!tsk->sched_rt_mutex);
 +#endif
 +
@@ -7073,6 +7045,7 @@ index 000000000000..b0c4ade2788c
 +{
 +	struct rq *rq;
 +	struct rq_flags rf;
++	struct task_struct *p;
 +
 +	if (!sched_yield_type)
 +		return;
@@ -7081,12 +7054,18 @@ index 000000000000..b0c4ade2788c
 +
 +	schedstat_inc(rq->yld_count);
 +
-+	if (1 == sched_yield_type) {
-+		if (!rt_task(current))
-+			do_sched_yield_type_1(current, rq);
-+	} else if (2 == sched_yield_type) {
-+		if (rq->nr_running > 1)
-+			rq->skip = current;
++	p = current;
++	if (rt_task(p)) {
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq, task_sched_prio_idx(p, rq));
++	} else if (rq->nr_running > 1) {
++		if (1 == sched_yield_type) {
++			do_sched_yield_type_1(p, rq);
++			if (task_on_rq_queued(p))
++				requeue_task(p, rq, task_sched_prio_idx(p, rq));
++		} else if (2 == sched_yield_type) {
++			rq->skip = p;
++		}
 +	}
 +
 +	preempt_disable();
@@ -7608,7 +7587,7 @@ index 000000000000..b0c4ade2788c
 +	if (retval)
 +		return retval;
 +
-+	*t = ns_to_timespec64(sched_timeslice_ns);
++	*t = ns_to_timespec64(sysctl_sched_base_slice);
 +	return 0;
 +}
 +
@@ -8289,7 +8268,7 @@ index 000000000000..b0c4ade2788c
 +
 +	for_each_online_cpu(cpu) {
 +		/* take chance to reset time slice for idle tasks */
-+		cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
++		cpu_rq(cpu)->idle->time_slice = sysctl_sched_base_slice;
 +
 +		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
 +
@@ -8337,7 +8316,7 @@ index 000000000000..b0c4ade2788c
 +#else
 +void __init sched_init_smp(void)
 +{
-+	cpu_rq(0)->idle->time_slice = sched_timeslice_ns;
++	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
 +}
 +#endif /* CONFIG_SMP */
 +
@@ -8841,6 +8820,100 @@ index 000000000000..b0c4ade2788c
 +}
 +#endif
 +
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, s64 cfs_quota_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 cfs_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, u64 cfs_burst_us)
++{
++	return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 val)
++{
++	return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 rt_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
 +static struct cftype cpu_legacy_files[] = {
 +#ifdef CONFIG_FAIR_GROUP_SCHED
 +	{
@@ -8849,11 +8922,144 @@ index 000000000000..b0c4ade2788c
 +		.write_u64 = cpu_shares_write_u64,
 +	},
 +#endif
++	{
++		.name = "cfs_quota_us",
++		.read_s64 = cpu_cfs_quota_read_s64,
++		.write_s64 = cpu_cfs_quota_write_s64,
++	},
++	{
++		.name = "cfs_period_us",
++		.read_u64 = cpu_cfs_period_read_u64,
++		.write_u64 = cpu_cfs_period_write_u64,
++	},
++	{
++		.name = "cfs_burst_us",
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "stat",
++		.seq_show = cpu_cfs_stat_show,
++	},
++	{
++		.name = "stat.local",
++		.seq_show = cpu_cfs_local_stat_show,
++	},
++	{
++		.name = "rt_runtime_us",
++		.read_s64 = cpu_rt_runtime_read,
++		.write_s64 = cpu_rt_runtime_write,
++	},
++	{
++		.name = "rt_period_us",
++		.read_u64 = cpu_rt_period_read_uint,
++		.write_u64 = cpu_rt_period_write_uint,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
 +	{ }	/* Terminate */
 +};
 +
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cft, u64 weight)
++{
++	return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++				    struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++				     struct cftype *cft, s64 nice)
++{
++	return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 idle)
++{
++	return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++			     char *buf, size_t nbytes, loff_t off)
++{
++	return nbytes;
++}
 +
 +static struct cftype cpu_files[] = {
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_weight_read_u64,
++		.write_u64 = cpu_weight_write_u64,
++	},
++	{
++		.name = "weight.nice",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_weight_nice_read_s64,
++		.write_s64 = cpu_weight_nice_write_s64,
++	},
++	{
++		.name = "idle",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++	{
++		.name = "max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_max_show,
++		.write = cpu_max_write,
++	},
++	{
++		.name = "max.burst",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
 +	{ }	/* terminate */
 +};
 +
@@ -8863,17 +9069,23 @@ index 000000000000..b0c4ade2788c
 +	return 0;
 +}
 +
++static int cpu_local_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
 +struct cgroup_subsys cpu_cgrp_subsys = {
 +	.css_alloc	= cpu_cgroup_css_alloc,
 +	.css_online	= cpu_cgroup_css_online,
 +	.css_released	= cpu_cgroup_css_released,
 +	.css_free	= cpu_cgroup_css_free,
 +	.css_extra_stat_show = cpu_extra_stat_show,
++	.css_local_stat_show = cpu_local_stat_show,
 +#ifdef CONFIG_RT_GROUP_SCHED
 +	.can_attach	= cpu_cgroup_can_attach,
 +#endif
 +	.attach		= cpu_cgroup_attach,
-+	.legacy_cftypes	= cpu_files,
 +	.legacy_cftypes	= cpu_legacy_files,
 +	.dfl_cftypes	= cpu_files,
 +	.early_init	= true,
@@ -9441,10 +9653,10 @@ index 000000000000..1212a031700e
 +{}
 diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
 new file mode 100644
-index 000000000000..1a4dab2b2bb5
+index 000000000000..0eff5391092c
 --- /dev/null
 +++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,921 @@
+@@ -0,0 +1,923 @@
 +#ifndef ALT_SCHED_H
 +#define ALT_SCHED_H
 +
@@ -9710,6 +9922,8 @@ index 000000000000..1a4dab2b2bb5
 +	cpumask_var_t		scratch_mask;
 +};
 +
++extern unsigned int sysctl_sched_base_slice;
++
 +extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
 +
 +extern unsigned long calc_load_update;
@@ -10368,17 +10582,17 @@ index 000000000000..1a4dab2b2bb5
 +#endif /* ALT_SCHED_H */
 diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
 new file mode 100644
-index 000000000000..d8f6381c27a9
+index 000000000000..840009dc1e8d
 --- /dev/null
 +++ b/kernel/sched/bmq.h
-@@ -0,0 +1,101 @@
+@@ -0,0 +1,99 @@
 +#define ALT_SCHED_NAME "BMQ"
 +
 +/*
 + * BMQ only routines
 + */
 +#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
-+#define boost_threshold(p)	(sched_timeslice_ns >> ((14 - (p)->boost_prio) / 2))
++#define boost_threshold(p)	(sysctl_sched_base_slice >> ((14 - (p)->boost_prio) / 2))
 +
 +static inline void boost_task(struct task_struct *p)
 +{
@@ -10445,14 +10659,16 @@ index 000000000000..d8f6381c27a9
 +}
 +
 +static inline void sched_update_rq_clock(struct rq *rq) {}
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq) {}
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
 +
-+static void sched_task_fork(struct task_struct *p, struct rq *rq)
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
 +{
-+	p->boost_prio = MAX_PRIORITY_ADJ;
++	if (rq_switch_time(rq) > sysctl_sched_base_slice)
++		deboost_task(p);
 +}
 +
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
 +static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
 +{
 +	p->boost_prio = MAX_PRIORITY_ADJ;
@@ -10460,18 +10676,14 @@ index 000000000000..d8f6381c27a9
 +
 +static inline void sched_task_ttwu(struct task_struct *p)
 +{
-+	if(this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
++	if(this_rq()->clock_task - p->last_ran > sysctl_sched_base_slice)
 +		boost_task(p);
 +}
 +
 +static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
 +{
-+	u64 switch_ns = rq_switch_time(rq);
-+
-+	if (switch_ns < boost_threshold(p))
++	if (rq_switch_time(rq) < boost_threshold(p))
 +		boost_task(p);
-+	else if (switch_ns > sched_timeslice_ns)
-+		deboost_task(p);
 +}
 diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
 index d9dc9ab3773f..71a25540d65e 100644
@@ -10605,7 +10817,7 @@ index af7952f12e6c..6461cbbb734d 100644
  
  	if (task_cputime(p, &cputime.utime, &cputime.stime))
 diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
-index 4580a450700e..21c15b4488fe 100644
+index 4580a450700e..8c8fd7da4617 100644
 --- a/kernel/sched/debug.c
 +++ b/kernel/sched/debug.c
 @@ -7,6 +7,7 @@
@@ -10640,24 +10852,25 @@ index 4580a450700e..21c15b4488fe 100644
  
  static struct dentry *debugfs_sched;
  
-@@ -341,12 +345,16 @@ static __init int sched_init_debug(void)
+@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
  
  	debugfs_sched = debugfs_create_dir("sched", NULL);
  
 +#ifndef CONFIG_SCHED_ALT
  	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
  	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
-+	debugfs_create_bool("verbose", 0644, debugfs_sched, &sched_debug_verbose);
 +#endif /* !CONFIG_SCHED_ALT */
  #ifdef CONFIG_PREEMPT_DYNAMIC
  	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
  #endif
  
-+#ifndef CONFIG_SCHED_ALT
  	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
  
++#ifndef CONFIG_SCHED_ALT
  	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
-@@ -373,11 +381,13 @@ static __init int sched_init_debug(void)
+ 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+ 
+@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
  #endif
  
  	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
@@ -10671,7 +10884,7 @@ index 4580a450700e..21c15b4488fe 100644
  #ifdef CONFIG_SMP
  
  static cpumask_var_t		sd_sysctl_cpus;
-@@ -1106,6 +1116,7 @@ void proc_sched_set_task(struct task_struct *p)
+@@ -1106,6 +1115,7 @@ void proc_sched_set_task(struct task_struct *p)
  	memset(&p->stats, 0, sizeof(p->stats));
  #endif
  }
@@ -10698,10 +10911,10 @@ index 565f8374ddbb..67d51e05a8ac 100644
 +#endif
 diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
 new file mode 100644
-index 000000000000..b20226ed47cc
+index 000000000000..c35dfb909f23
 --- /dev/null
 +++ b/kernel/sched/pds.h
-@@ -0,0 +1,142 @@
+@@ -0,0 +1,141 @@
 +#define ALT_SCHED_NAME "PDS"
 +
 +static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
@@ -10835,11 +11048,10 @@ index 000000000000..b20226ed47cc
 +	sched_task_renew(p, rq);
 +}
 +
-+static inline void time_slice_expired(struct task_struct *p, struct rq *rq);
-+
 +static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
 +{
-+	time_slice_expired(p, rq);
++	p->time_slice = sysctl_sched_base_slice;
++	sched_task_renew(p, rq);
 +}
 +
 +static inline void sched_task_ttwu(struct task_struct *p) {}
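
A small user-space sketch (illustrative only, not part of the commit) of the yield_type knob touched above. It assumes a kernel built with this BMQ/PDS patch, where the sysctl surfaces as /proc/sys/kernel/yield_type; on a stock kernel the fopen() simply fails and the program still runs.

#include <sched.h>
#include <stdio.h>

int main(void)
{
	/* kernel.yield_type per the sysctl doc hunk above:
	 * 0 = no yield, 1 = requeue task (default), 2 = set rq skip task. */
	FILE *f = fopen("/proc/sys/kernel/yield_type", "r");
	int type = -1;

	if (f) {
		if (fscanf(f, "%d", &type) != 1)
			type = -1;
		fclose(f);
	}
	printf("yield_type = %d\n", type);

	/* The effect of this call is selected by the value read above. */
	sched_yield();
	return 0;
}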



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-01  1:22 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-01  1:22 UTC (permalink / raw
  To: gentoo-commits

commit:     2348849bc1eee6a0c237405a2398afc7821efcd4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb  1 01:21:50 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb  1 01:21:50 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2348849b

Linux patch 6.7.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1002_linux-6.7.3.patch | 18749 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 18753 insertions(+)

diff --git a/0000_README b/0000_README
index 4b219a2e..8b508855 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-6.7.2.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.2
 
+Patch:  1002_linux-6.7.3.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.3
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1002_linux-6.7.3.patch b/1002_linux-6.7.3.patch
new file mode 100644
index 00000000..a92db1e5
--- /dev/null
+++ b/1002_linux-6.7.3.patch
@@ -0,0 +1,18749 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-devfreq b/Documentation/ABI/testing/sysfs-class-devfreq
+index 5e6b74f304062..1e7e0bb4c14ec 100644
+--- a/Documentation/ABI/testing/sysfs-class-devfreq
++++ b/Documentation/ABI/testing/sysfs-class-devfreq
+@@ -52,6 +52,9 @@ Description:
+ 
+ 			echo 0 > /sys/class/devfreq/.../trans_stat
+ 
++		If the transition table is bigger than PAGE_SIZE, reading
++		this will return an -EFBIG error.
++
+ What:		/sys/class/devfreq/.../available_frequencies
+ Date:		October 2012
+ Contact:	Nishanth Menon <nm@ti.com>
+diff --git a/Documentation/admin-guide/abi-obsolete.rst b/Documentation/admin-guide/abi-obsolete.rst
+index d095867899c59..594e697aa1b2f 100644
+--- a/Documentation/admin-guide/abi-obsolete.rst
++++ b/Documentation/admin-guide/abi-obsolete.rst
+@@ -7,5 +7,5 @@ marked to be removed at some later point in time.
+ The description of the interface will document the reason why it is
+ obsolete and when it can be expected to be removed.
+ 
+-.. kernel-abi:: $srctree/Documentation/ABI/obsolete
++.. kernel-abi:: ABI/obsolete
+    :rst:
+diff --git a/Documentation/admin-guide/abi-removed.rst b/Documentation/admin-guide/abi-removed.rst
+index f7e9e43023c13..f9e000c81828e 100644
+--- a/Documentation/admin-guide/abi-removed.rst
++++ b/Documentation/admin-guide/abi-removed.rst
+@@ -1,5 +1,5 @@
+ ABI removed symbols
+ ===================
+ 
+-.. kernel-abi:: $srctree/Documentation/ABI/removed
++.. kernel-abi:: ABI/removed
+    :rst:
+diff --git a/Documentation/admin-guide/abi-stable.rst b/Documentation/admin-guide/abi-stable.rst
+index 70490736e0d30..fc3361d847b12 100644
+--- a/Documentation/admin-guide/abi-stable.rst
++++ b/Documentation/admin-guide/abi-stable.rst
+@@ -10,5 +10,5 @@ for at least 2 years.
+ Most interfaces (like syscalls) are expected to never change and always
+ be available.
+ 
+-.. kernel-abi:: $srctree/Documentation/ABI/stable
++.. kernel-abi:: ABI/stable
+    :rst:
+diff --git a/Documentation/admin-guide/abi-testing.rst b/Documentation/admin-guide/abi-testing.rst
+index b205b16a72d08..19767926b3440 100644
+--- a/Documentation/admin-guide/abi-testing.rst
++++ b/Documentation/admin-guide/abi-testing.rst
+@@ -16,5 +16,5 @@ Programs that use these interfaces are strongly encouraged to add their
+ name to the description of these interfaces, so that the kernel
+ developers can easily notify them if any changes occur.
+ 
+-.. kernel-abi:: $srctree/Documentation/ABI/testing
++.. kernel-abi:: ABI/testing
+    :rst:
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index f47f63bcf67c9..7acd64c61f50c 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -71,6 +71,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A510     | #2658417        | ARM64_ERRATUM_2658417       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A510     | #3117295        | ARM64_ERRATUM_3117295       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A520     | #2966298        | ARM64_ERRATUM_2966298       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A53      | #826319         | ARM64_ERRATUM_826319        |
+diff --git a/Documentation/filesystems/directory-locking.rst b/Documentation/filesystems/directory-locking.rst
+index dccd61c7c5c3b..193c22687851a 100644
+--- a/Documentation/filesystems/directory-locking.rst
++++ b/Documentation/filesystems/directory-locking.rst
+@@ -22,13 +22,16 @@ exclusive.
+ 3) object removal.  Locking rules: caller locks parent, finds victim,
+ locks victim and calls the method.  Locks are exclusive.
+ 
+-4) rename() that is _not_ cross-directory.  Locking rules: caller locks the
+-parent and finds source and target.  We lock both (provided they exist).  If we
+-need to lock two inodes of different type (dir vs non-dir), we lock directory
+-first.  If we need to lock two inodes of the same type, lock them in inode
+-pointer order.  Then call the method.  All locks are exclusive.
+-NB: we might get away with locking the source (and target in exchange
+-case) shared.
++4) rename() that is _not_ cross-directory.  Locking rules: caller locks
++the parent and finds source and target.  Then we decide which of the
++source and target need to be locked.  Source needs to be locked if it's a
++non-directory; target - if it's a non-directory or about to be removed.
++Take the locks that need to be taken, in inode pointer order if need
++to take both (that can happen only when both source and target are
++non-directories - the source because it wouldn't be locked otherwise
++and the target because mixing directory and non-directory is allowed
++only with RENAME_EXCHANGE, and that won't be removing the target).
++After the locks had been taken, call the method.  All locks are exclusive.
+ 
+ 5) link creation.  Locking rules:
+ 
+@@ -44,20 +47,17 @@ rules:
+ 
+ 	* lock the filesystem
+ 	* lock parents in "ancestors first" order. If one is not ancestor of
+-	  the other, lock them in inode pointer order.
++	  the other, lock the parent of source first.
+ 	* find source and target.
+ 	* if old parent is equal to or is a descendent of target
+ 	  fail with -ENOTEMPTY
+ 	* if new parent is equal to or is a descendent of source
+ 	  fail with -ELOOP
+-	* Lock both the source and the target provided they exist. If we
+-	  need to lock two inodes of different type (dir vs non-dir), we lock
+-	  the directory first. If we need to lock two inodes of the same type,
+-	  lock them in inode pointer order.
++	* Lock subdirectories involved (source before target).
++	* Lock non-directories involved, in inode pointer order.
+ 	* call the method.
+ 
+-All ->i_rwsem are taken exclusive.  Again, we might get away with locking
+-the source (and target in exchange case) shared.
++All ->i_rwsem are taken exclusive.
+ 
+ The rules above obviously guarantee that all directories that are going to be
+ read, modified or removed by method will be locked by caller.
+@@ -67,6 +67,7 @@ If no directory is its own ancestor, the scheme above is deadlock-free.
+ 
+ Proof:
+ 
++[XXX: will be updated once we are done massaging the lock_rename()]
+ 	First of all, at any moment we have a linear ordering of the
+ 	objects - A < B iff (A is an ancestor of B) or (B is not an ancestor
+         of A and ptr(A) < ptr(B)).
+diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
+index 7be2900806c85..bd12f2f850ad3 100644
+--- a/Documentation/filesystems/locking.rst
++++ b/Documentation/filesystems/locking.rst
+@@ -101,7 +101,7 @@ symlink:	exclusive
+ mkdir:		exclusive
+ unlink:		exclusive (both)
+ rmdir:		exclusive (both)(see below)
+-rename:		exclusive (all)	(see below)
++rename:		exclusive (both parents, some children)	(see below)
+ readlink:	no
+ get_link:	no
+ setattr:	exclusive
+@@ -123,6 +123,9 @@ get_offset_ctx  no
+ 	Additionally, ->rmdir(), ->unlink() and ->rename() have ->i_rwsem
+ 	exclusive on victim.
+ 	cross-directory ->rename() has (per-superblock) ->s_vfs_rename_sem.
++	->unlink() and ->rename() have ->i_rwsem exclusive on all non-directories
++	involved.
++	->rename() has ->i_rwsem exclusive on any subdirectory that changes parent.
+ 
+ See Documentation/filesystems/directory-locking.rst for more detailed discussion
+ of the locking scheme for directory operations.
+diff --git a/Documentation/filesystems/overlayfs.rst b/Documentation/filesystems/overlayfs.rst
+index 0407f361f32a2..b28e5e3c239f6 100644
+--- a/Documentation/filesystems/overlayfs.rst
++++ b/Documentation/filesystems/overlayfs.rst
+@@ -145,7 +145,9 @@ filesystem, an overlay filesystem needs to record in the upper filesystem
+ that files have been removed.  This is done using whiteouts and opaque
+ directories (non-directories are always opaque).
+ 
+-A whiteout is created as a character device with 0/0 device number.
++A whiteout is created as a character device with 0/0 device number or
++as a zero-size regular file with the xattr "trusted.overlay.whiteout".
++
+ When a whiteout is found in the upper level of a merged directory, any
+ matching name in the lower level is ignored, and the whiteout itself
+ is also hidden.
+@@ -154,6 +156,13 @@ A directory is made opaque by setting the xattr "trusted.overlay.opaque"
+ to "y".  Where the upper filesystem contains an opaque directory, any
+ directory in the lower filesystem with the same name is ignored.
+ 
++An opaque directory should not contain any whiteouts, because they do not
++serve any purpose.  A merge directory containing regular files with the xattr
++"trusted.overlay.whiteout", should be additionally marked by setting the xattr
++"trusted.overlay.opaque" to "x" on the merge directory itself.
++This is needed to avoid the overhead of checking the "trusted.overlay.whiteout"
++on all entries during readdir in the common case.
++
+ readdir
+ -------
+ 
+@@ -534,8 +543,9 @@ A lower dir with a regular whiteout will always be handled by the overlayfs
+ mount, so to support storing an effective whiteout file in an overlayfs mount an
+ alternative form of whiteout is supported. This form is a regular, zero-size
+ file with the "overlay.whiteout" xattr set, inside a directory with the
+-"overlay.whiteouts" xattr set. Such whiteouts are never created by overlayfs,
+-but can be used by userspace tools (like containers) that generate lower layers.
++"overlay.opaque" xattr set to "x" (see `whiteouts and opaque directories`_).
++These alternative whiteouts are never created by overlayfs, but can be used by
++userspace tools (like containers) that generate lower layers.
+ These alternative whiteouts can be escaped using the standard xattr escape
+ mechanism in order to properly nest to any depth.
+ 
+diff --git a/Documentation/filesystems/porting.rst b/Documentation/filesystems/porting.rst
+index 878e72b2f8b76..9100969e7de62 100644
+--- a/Documentation/filesystems/porting.rst
++++ b/Documentation/filesystems/porting.rst
+@@ -1061,3 +1061,21 @@ export_operations ->encode_fh() no longer has a default implementation to
+ encode FILEID_INO32_GEN* file handles.
+ Filesystems that used the default implementation may use the generic helper
+ generic_encode_ino32_fh() explicitly.
++
++---
++
++**mandatory**
++
++If ->rename() update of .. on cross-directory move needs an exclusion with
++directory modifications, do *not* lock the subdirectory in question in your
++->rename() - it's done by the caller now [that item should've been added in
++28eceeda130f "fs: Lock moved directories"].
++
++---
++
++**mandatory**
++
++On same-directory ->rename() the (tautological) update of .. is not protected
++by any locks; just don't do it if the old parent is the same as the new one.
++We really can't lock two subdirectories in same-directory rename - not without
++deadlocks.
+diff --git a/Documentation/gpu/drm-kms.rst b/Documentation/gpu/drm-kms.rst
+index 270d320407c7c..a98a7e04e86f3 100644
+--- a/Documentation/gpu/drm-kms.rst
++++ b/Documentation/gpu/drm-kms.rst
+@@ -548,6 +548,8 @@ Plane Composition Properties
+ .. kernel-doc:: drivers/gpu/drm/drm_blend.c
+    :doc: overview
+ 
++.. _damage_tracking_properties:
++
+ Damage Tracking Properties
+ --------------------------
+ 
+diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
+index 03fe5d1247be2..85bbe05436098 100644
+--- a/Documentation/gpu/todo.rst
++++ b/Documentation/gpu/todo.rst
+@@ -337,8 +337,8 @@ connector register/unregister fixes
+ 
+ Level: Intermediate
+ 
+-Remove load/unload callbacks from all non-DRIVER_LEGACY drivers
+----------------------------------------------------------------
++Remove load/unload callbacks
++----------------------------
+ 
+ The load/unload callbacks in struct &drm_driver are very much midlayers, plus
+ for historical reasons they get the ordering wrong (and we can't fix that)
+@@ -347,8 +347,7 @@ between setting up the &drm_driver structure and calling drm_dev_register().
+ - Rework drivers to no longer use the load/unload callbacks, directly coding the
+   load/unload sequence into the driver's probe function.
+ 
+-- Once all non-DRIVER_LEGACY drivers are converted, disallow the load/unload
+-  callbacks for all modern drivers.
++- Once all drivers are converted, remove the load/unload callbacks.
+ 
+ Contact: Daniel Vetter
+ 
+diff --git a/Documentation/sphinx/kernel_abi.py b/Documentation/sphinx/kernel_abi.py
+index 49797c55479c4..5911bd0d79657 100644
+--- a/Documentation/sphinx/kernel_abi.py
++++ b/Documentation/sphinx/kernel_abi.py
+@@ -39,8 +39,6 @@ import sys
+ import re
+ import kernellog
+ 
+-from os import path
+-
+ from docutils import nodes, statemachine
+ from docutils.statemachine import ViewList
+ from docutils.parsers.rst import directives, Directive
+@@ -73,60 +71,26 @@ class KernelCmd(Directive):
+     }
+ 
+     def run(self):
+-
+         doc = self.state.document
+         if not doc.settings.file_insertion_enabled:
+             raise self.warning("docutils: file insertion disabled")
+ 
+-        env = doc.settings.env
+-        cwd = path.dirname(doc.current_source)
+-        cmd = "get_abi.pl rest --enable-lineno --dir "
+-        cmd += self.arguments[0]
+-
+-        if 'rst' in self.options:
+-            cmd += " --rst-source"
++        srctree = os.path.abspath(os.environ["srctree"])
+ 
+-        srctree = path.abspath(os.environ["srctree"])
++        args = [
++            os.path.join(srctree, 'scripts/get_abi.pl'),
++            'rest',
++            '--enable-lineno',
++            '--dir', os.path.join(srctree, 'Documentation', self.arguments[0]),
++        ]
+ 
+-        fname = cmd
+-
+-        # extend PATH with $(srctree)/scripts
+-        path_env = os.pathsep.join([
+-            srctree + os.sep + "scripts",
+-            os.environ["PATH"]
+-        ])
+-        shell_env = os.environ.copy()
+-        shell_env["PATH"]    = path_env
+-        shell_env["srctree"] = srctree
++        if 'rst' in self.options:
++            args.append('--rst-source')
+ 
+-        lines = self.runCmd(cmd, shell=True, cwd=cwd, env=shell_env)
++        lines = subprocess.check_output(args, cwd=os.path.dirname(doc.current_source)).decode('utf-8')
+         nodeList = self.nestedParse(lines, self.arguments[0])
+         return nodeList
+ 
+-    def runCmd(self, cmd, **kwargs):
+-        u"""Run command ``cmd`` and return its stdout as unicode."""
+-
+-        try:
+-            proc = subprocess.Popen(
+-                cmd
+-                , stdout = subprocess.PIPE
+-                , stderr = subprocess.PIPE
+-                , **kwargs
+-            )
+-            out, err = proc.communicate()
+-
+-            out, err = codecs.decode(out, 'utf-8'), codecs.decode(err, 'utf-8')
+-
+-            if proc.returncode != 0:
+-                raise self.severe(
+-                    u"command '%s' failed with return code %d"
+-                    % (cmd, proc.returncode)
+-                )
+-        except OSError as exc:
+-            raise self.severe(u"problems with '%s' directive: %s."
+-                              % (self.name, ErrorString(exc)))
+-        return out
+-
+     def nestedParse(self, lines, fname):
+         env = self.state.document.settings.env
+         content = ViewList()
+diff --git a/Makefile b/Makefile
+index 0564d3d093954..96a08c9f0faa1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/alpha/kernel/rtc.c b/arch/alpha/kernel/rtc.c
+index fb3025396ac96..cfdf90bc8b3f8 100644
+--- a/arch/alpha/kernel/rtc.c
++++ b/arch/alpha/kernel/rtc.c
+@@ -80,7 +80,7 @@ init_rtc_epoch(void)
+ static int
+ alpha_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ {
+-	int ret = mc146818_get_time(tm);
++	int ret = mc146818_get_time(tm, 10);
+ 
+ 	if (ret < 0) {
+ 		dev_err_ratelimited(dev, "unable to read current time\n");
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6q-apalis-ixora-v1.2.dts b/arch/arm/boot/dts/nxp/imx/imx6q-apalis-ixora-v1.2.dts
+index 717decda0cebd..3ac7a45016205 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6q-apalis-ixora-v1.2.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx6q-apalis-ixora-v1.2.dts
+@@ -76,6 +76,7 @@
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_enable_can1_power>;
+ 		regulator-name = "can1_supply";
++		startup-delay-us = <1000>;
+ 	};
+ 
+ 	reg_can2_supply: regulator-can2-supply {
+@@ -85,6 +86,7 @@
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_enable_can2_power>;
+ 		regulator-name = "can2_supply";
++		startup-delay-us = <1000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
+index a88f186fcf03c..a976370264fcf 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
+@@ -585,10 +585,10 @@
+ 					  <&gcc GCC_USB30_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 198 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 51 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 11 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 10 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+@@ -612,7 +612,7 @@
+ 			compatible = "qcom,sdx55-pdc", "qcom,pdc";
+ 			reg = <0x0b210000 0x30000>;
+ 			qcom,pdc-ranges = <0 179 52>;
+-			#interrupt-cells = <3>;
++			#interrupt-cells = <2>;
+ 			interrupt-parent = <&intc>;
+ 			interrupt-controller;
+ 		};
+diff --git a/arch/arm/boot/dts/samsung/exynos4210-i9100.dts b/arch/arm/boot/dts/samsung/exynos4210-i9100.dts
+index a9ec1f6c1dea1..a076a1dfe41f8 100644
+--- a/arch/arm/boot/dts/samsung/exynos4210-i9100.dts
++++ b/arch/arm/boot/dts/samsung/exynos4210-i9100.dts
+@@ -527,6 +527,14 @@
+ 				regulator-name = "VT_CAM_1.8V";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
++
++				/*
++				 * Force-enable this regulator; otherwise the
++				 * kernel hangs very early in the boot process
++				 * for about 12 seconds, without apparent
++				 * reason.
++				 */
++				regulator-always-on;
+ 			};
+ 
+ 			vcclcd_reg: LDO13 {
+diff --git a/arch/arm/boot/dts/samsung/exynos4212-tab3.dtsi b/arch/arm/boot/dts/samsung/exynos4212-tab3.dtsi
+index d7954ff466b49..e5254e32aa8fc 100644
+--- a/arch/arm/boot/dts/samsung/exynos4212-tab3.dtsi
++++ b/arch/arm/boot/dts/samsung/exynos4212-tab3.dtsi
+@@ -434,6 +434,7 @@
+ };
+ 
+ &fimd {
++	samsung,invert-vclk;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 7b071a00425d2..456e8680e16ea 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1037,8 +1037,12 @@ config ARM64_ERRATUM_2645198
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
++	bool
++
+ config ARM64_ERRATUM_2966298
+ 	bool "Cortex-A520: 2966298: workaround for speculatively executed unprivileged load"
++	select ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+ 	default y
+ 	help
+ 	  This option adds the workaround for ARM Cortex-A520 erratum 2966298.
+@@ -1050,6 +1054,20 @@ config ARM64_ERRATUM_2966298
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_3117295
++	bool "Cortex-A510: 3117295: workaround for speculatively executed unprivileged load"
++	select ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
++	default y
++	help
++	  This option adds the workaround for ARM Cortex-A510 erratum 3117295.
++
++	  On an affected Cortex-A510 core, a speculatively executed unprivileged
++	  load might leak data from a privileged level via a cache side channel.
++
++	  Work around this problem by executing a TLBI before returning to EL0.
++
++	  If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ 	bool "Cavium erratum 22375, 24313"
+ 	default y
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
+index 47d1c5cb13f4e..aad4a2e5e67ec 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
+@@ -93,6 +93,7 @@
+ 		#size-cells = <0>;
+ 
+ 		vcc-supply = <&pm8916_l17>;
++		vio-supply = <&pm8916_l6>;
+ 
+ 		led@0 {
+ 			reg = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts b/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts
+index 419f35c1fc92e..241c3a73c82d7 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-wingtech-wt88047.dts
+@@ -118,6 +118,7 @@
+ 		#size-cells = <0>;
+ 
+ 		vcc-supply = <&pm8916_l16>;
++		vio-supply = <&pm8916_l5>;
+ 
+ 		led@0 {
+ 			reg = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 4f799b536a92a..057c1a0b7ab0b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -2106,6 +2106,7 @@
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,controlled-remotely;
+ 		};
+ 
+ 		blsp_uart1: serial@78af000 {
+diff --git a/arch/arm64/boot/dts/qcom/msm8939.dtsi b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+index 324b5d26db400..f9b6122e65462 100644
+--- a/arch/arm64/boot/dts/qcom/msm8939.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+@@ -1682,6 +1682,7 @@
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,controlled-remotely;
+ 		};
+ 
+ 		blsp_uart1: serial@78af000 {
+diff --git a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-mido.dts b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-mido.dts
+index ed95d09cedb1e..6b9245cd8b0c3 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-mido.dts
++++ b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-mido.dts
+@@ -111,6 +111,7 @@
+ 		reg = <0x45>;
+ 
+ 		vcc-supply = <&pm8953_l10>;
++		vio-supply = <&pm8953_l5>;
+ 
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-tissot.dts b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-tissot.dts
+index 61ff629c9bf34..9ac4f507e321a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-tissot.dts
++++ b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-tissot.dts
+@@ -104,6 +104,7 @@
+ 		reg = <0x45>;
+ 
+ 		vcc-supply = <&pm8953_l10>;
++		vio-supply = <&pm8953_l5>;
+ 
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts
+index 1a1d3f92a5116..b0588f30f8f1a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts
++++ b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts
+@@ -113,6 +113,7 @@
+ 		reg = <0x45>;
+ 
+ 		vcc-supply = <&pm8953_l10>;
++		vio-supply = <&pm8953_l5>;
+ 
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index c0365832c3152..5b7ffe2081593 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -2966,8 +2966,8 @@
+ 
+ 			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+ 					      <&pdc 6 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 8 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 9 IRQ_TYPE_LEVEL_HIGH>;
++					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index f7f616906f6f3..84de20c88a869 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -3685,9 +3685,9 @@
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+ 			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-					      <&pdc 14 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 14 IRQ_TYPE_EDGE_BOTH>,
+ 					      <&pdc 15 IRQ_TYPE_EDGE_BOTH>,
+-					      <&pdc 17 IRQ_TYPE_EDGE_BOTH>;
++					      <&pdc 17 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "hs_phy_irq",
+ 					  "dp_hs_phy_irq",
+ 					  "dm_hs_phy_irq",
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index b5eb842878703..b389d49d3ec8c 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -2552,10 +2552,10 @@
+ 		usb_prim: usb@a6f8800 {
+ 			compatible = "qcom,sc8180x-dwc3", "qcom,dwc3";
+ 			reg = <0 0x0a6f8800 0 0x400>;
+-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 6 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq",
+ 					  "ss_phy_irq",
+ 					  "dm_hs_phy_irq",
+@@ -2626,10 +2626,10 @@
+ 				      "xo";
+ 			resets = <&gcc GCC_USB30_SEC_BCR>;
+ 			power-domains = <&gcc USB30_SEC_GDSC>;
+-			interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 490 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 491 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 7 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 10 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 11 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts b/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
+index e4861c61a65bd..ffc4406422ae2 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
+@@ -458,6 +458,8 @@
+ };
+ 
+ &mdss0_dp3_phy {
++	compatible = "qcom,sc8280xp-edp-phy";
++
+ 	vdda-phy-supply = <&vreg_l6b>;
+ 	vdda-pll-supply = <&vreg_l3b>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm670.dtsi b/arch/arm64/boot/dts/qcom/sdm670.dtsi
+index ba2043d67370a..730c8351bcaa3 100644
+--- a/arch/arm64/boot/dts/qcom/sdm670.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm670.dtsi
+@@ -1295,10 +1295,10 @@
+ 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <150000000>;
+ 
+-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 6 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 9648505644ff1..91169e438b463 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4053,10 +4053,10 @@
+ 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <150000000>;
+ 
+-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc_intc 6 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc_intc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc_intc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+@@ -4104,10 +4104,10 @@
+ 					  <&gcc GCC_USB30_SEC_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <150000000>;
+ 
+-			interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 490 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 491 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc_intc 7 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc_intc 10 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc_intc 11 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 0e1aa86758795..e302eb7821af0 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -3565,10 +3565,10 @@
+ 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 6 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+@@ -3618,10 +3618,10 @@
+ 					  <&gcc GCC_USB30_SEC_MASTER_CLK>;
+ 			assigned-clock-rates = <19200000>, <200000000>;
+ 
+-			interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 490 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 491 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 7 IRQ_TYPE_LEVEL_HIGH>,
++					      <&pdc 10 IRQ_TYPE_EDGE_BOTH>,
++					      <&pdc 11 IRQ_TYPE_EDGE_BOTH>;
+ 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
+ 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts b/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
+index 5d7d567283e52..4237f2ee8fee3 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
+@@ -26,9 +26,11 @@
+ 			compatible = "ethernet-phy-ieee802.3-c22";
+ 			reg = <0>;
+ 
++			motorcomm,auto-sleep-disabled;
+ 			motorcomm,clk-out-frequency-hz = <125000000>;
+ 			motorcomm,keep-pll-enabled;
+-			motorcomm,auto-sleep-disabled;
++			motorcomm,rx-clk-drv-microamp = <5020>;
++			motorcomm,rx-data-drv-microamp = <5020>;
+ 
+ 			pinctrl-0 = <&eth_phy_reset_pin>;
+ 			pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
+index 8aa0499f9b032..f9f9749848bd9 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
+@@ -916,6 +916,7 @@
+ 				reg = <RK3588_PD_USB>;
+ 				clocks = <&cru PCLK_PHP_ROOT>,
+ 					 <&cru ACLK_USB_ROOT>,
++					 <&cru ACLK_USB>,
+ 					 <&cru HCLK_USB_ROOT>,
+ 					 <&cru HCLK_HOST0>,
+ 					 <&cru HCLK_HOST_ARB0>,
+diff --git a/arch/arm64/boot/dts/sprd/ums512.dtsi b/arch/arm64/boot/dts/sprd/ums512.dtsi
+index 024be594c47d1..97ac550af2f11 100644
+--- a/arch/arm64/boot/dts/sprd/ums512.dtsi
++++ b/arch/arm64/boot/dts/sprd/ums512.dtsi
+@@ -96,7 +96,7 @@
+ 
+ 		CPU6: cpu@600 {
+ 			device_type = "cpu";
+-			compatible = "arm,cortex-a55";
++			compatible = "arm,cortex-a75";
+ 			reg = <0x0 0x600>;
+ 			enable-method = "psci";
+ 			cpu-idle-states = <&CORE_PD>;
+@@ -104,7 +104,7 @@
+ 
+ 		CPU7: cpu@700 {
+ 			device_type = "cpu";
+-			compatible = "arm,cortex-a55";
++			compatible = "arm,cortex-a75";
+ 			reg = <0x0 0x700>;
+ 			enable-method = "psci";
+ 			cpu-idle-states = <&CORE_PD>;
+diff --git a/arch/arm64/boot/install.sh b/arch/arm64/boot/install.sh
+index 7399d706967a4..9b7a09808a3dd 100755
+--- a/arch/arm64/boot/install.sh
++++ b/arch/arm64/boot/install.sh
+@@ -17,7 +17,8 @@
+ #   $3 - kernel map file
+ #   $4 - default install path (blank if root directory)
+ 
+-if [ "$(basename $2)" = "Image.gz" ]; then
++if [ "$(basename $2)" = "Image.gz" ] || [ "$(basename $2)" = "vmlinuz.efi" ]
++then
+ # Compressed install
+   echo "Installing compressed kernel"
+   base=vmlinuz
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index e29e0fea63fb6..967c7c7a4e7db 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -416,6 +416,19 @@ static struct midr_range broken_aarch32_aes[] = {
+ };
+ #endif /* CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE */
+ 
++#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
++static const struct midr_range erratum_spec_unpriv_load_list[] = {
++#ifdef CONFIG_ARM64_ERRATUM_3117295
++	MIDR_ALL_VERSIONS(MIDR_CORTEX_A510),
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_2966298
++	/* Cortex-A520 r0p0 to r0p1 */
++	MIDR_REV_RANGE(MIDR_CORTEX_A520, 0, 0, 1),
++#endif
++	{},
++};
++#endif
++
+ const struct arm64_cpu_capabilities arm64_errata[] = {
+ #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+ 	{
+@@ -713,12 +726,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		MIDR_FIXED(MIDR_CPU_VAR_REV(1,1), BIT(25)),
+ 	},
+ #endif
+-#ifdef CONFIG_ARM64_ERRATUM_2966298
++#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+ 	{
+-		.desc = "ARM erratum 2966298",
+-		.capability = ARM64_WORKAROUND_2966298,
++		.desc = "ARM errata 2966298, 3117295",
++		.capability = ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD,
+ 		/* Cortex-A520 r0p0 - r0p1 */
+-		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A520, 0, 0, 1),
++		ERRATA_MIDR_RANGE_LIST(erratum_spec_unpriv_load_list),
+ 	},
+ #endif
+ #ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index a6030913cd58c..7fcbee0f6c0e4 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -428,16 +428,9 @@ alternative_else_nop_endif
+ 	ldp	x28, x29, [sp, #16 * 14]
+ 
+ 	.if	\el == 0
+-alternative_if ARM64_WORKAROUND_2966298
+-	tlbi	vale1, xzr
+-	dsb	nsh
+-alternative_else_nop_endif
+-alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
+-	ldr	lr, [sp, #S_LR]
+-	add	sp, sp, #PT_REGS_SIZE		// restore sp
+-	eret
+-alternative_else_nop_endif
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++	alternative_insn "b .L_skip_tramp_exit_\@", nop, ARM64_UNMAP_KERNEL_AT_EL0
++
+ 	msr	far_el1, x29
+ 
+ 	ldr_this_cpu	x30, this_cpu_vector, x29
+@@ -446,7 +439,18 @@ alternative_else_nop_endif
+ 	ldr		lr, [sp, #S_LR]		// restore x30
+ 	add		sp, sp, #PT_REGS_SIZE	// restore sp
+ 	br		x29
++
++.L_skip_tramp_exit_\@:
+ #endif
++	ldr	lr, [sp, #S_LR]
++	add	sp, sp, #PT_REGS_SIZE		// restore sp
++
++	/* This must be after the last explicit memory access */
++alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
++	tlbi	vale1, xzr
++	dsb	nsh
++alternative_else_nop_endif
++	eret
+ 	.else
+ 	ldr	lr, [sp, #S_LR]
+ 	add	sp, sp, #PT_REGS_SIZE		// restore sp
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 1559c706d32d1..7363f2eb98e81 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1245,8 +1245,10 @@ void fpsimd_release_task(struct task_struct *dead_task)
+  */
+ void sme_alloc(struct task_struct *task, bool flush)
+ {
+-	if (task->thread.sme_state && flush) {
+-		memset(task->thread.sme_state, 0, sme_state_size(task));
++	if (task->thread.sme_state) {
++		if (flush)
++			memset(task->thread.sme_state, 0,
++			       sme_state_size(task));
+ 		return;
+ 	}
+ 
+diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
+index b98c38288a9d3..3781ad1d0b265 100644
+--- a/arch/arm64/tools/cpucaps
++++ b/arch/arm64/tools/cpucaps
+@@ -84,7 +84,6 @@ WORKAROUND_2077057
+ WORKAROUND_2457168
+ WORKAROUND_2645198
+ WORKAROUND_2658417
+-WORKAROUND_2966298
+ WORKAROUND_AMPERE_AC03_CPU_38
+ WORKAROUND_TRBE_OVERWRITE_FILL_MODE
+ WORKAROUND_TSB_FLUSH_FAILURE
+@@ -100,3 +99,4 @@ WORKAROUND_NVIDIA_CARMEL_CNP
+ WORKAROUND_QCOM_FALKOR_E1003
+ WORKAROUND_REPEAT_TLBI
+ WORKAROUND_SPECULATIVE_AT
++WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
+index 5bca12d16e069..2fbf541d7b4f6 100644
+--- a/arch/loongarch/kernel/smp.c
++++ b/arch/loongarch/kernel/smp.c
+@@ -506,7 +506,6 @@ asmlinkage void start_secondary(void)
+ 	sync_counter();
+ 	cpu = raw_smp_processor_id();
+ 	set_my_cpu_offset(per_cpu_offset(cpu));
+-	rcutree_report_cpu_starting(cpu);
+ 
+ 	cpu_probe();
+ 	constant_clockevent_init();
+diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
+index 2c0a411f23aa7..0b95d32b30c94 100644
+--- a/arch/loongarch/mm/tlb.c
++++ b/arch/loongarch/mm/tlb.c
+@@ -284,12 +284,16 @@ static void setup_tlb_handler(int cpu)
+ 		set_handler(EXCCODE_TLBNR * VECSIZE, handle_tlb_protect, VECSIZE);
+ 		set_handler(EXCCODE_TLBNX * VECSIZE, handle_tlb_protect, VECSIZE);
+ 		set_handler(EXCCODE_TLBPE * VECSIZE, handle_tlb_protect, VECSIZE);
+-	}
++	} else {
++		int vec_sz __maybe_unused;
++		void *addr __maybe_unused;
++		struct page *page __maybe_unused;
++
++		/* Avoid lockdep warning */
++		rcutree_report_cpu_starting(cpu);
++
+ #ifdef CONFIG_NUMA
+-	else {
+-		void *addr;
+-		struct page *page;
+-		const int vec_sz = sizeof(exception_handlers);
++		vec_sz = sizeof(exception_handlers);
+ 
+ 		if (pcpu_handlers[cpu])
+ 			return;
+@@ -305,8 +309,8 @@ static void setup_tlb_handler(int cpu)
+ 		csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY);
+ 		csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY);
+ 		csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY);
+-	}
+ #endif
++	}
+ }
+ 
+ void tlb_init(int cpu)
+diff --git a/arch/mips/kernel/elf.c b/arch/mips/kernel/elf.c
+index 5582a4ca1e9e3..7aa2c2360ff60 100644
+--- a/arch/mips/kernel/elf.c
++++ b/arch/mips/kernel/elf.c
+@@ -11,6 +11,7 @@
+ 
+ #include <asm/cpu-features.h>
+ #include <asm/cpu-info.h>
++#include <asm/fpu.h>
+ 
+ #ifdef CONFIG_MIPS_FP_SUPPORT
+ 
+@@ -309,6 +310,11 @@ void mips_set_personality_nan(struct arch_elf_state *state)
+ 	struct cpuinfo_mips *c = &boot_cpu_data;
+ 	struct task_struct *t = current;
+ 
++	/* Do this early so t->thread.fpu.fcr31 won't be clobbered in case
++	 * we are preempted before the lose_fpu(0) in start_thread.
++	 */
++	lose_fpu(0);
++
+ 	t->thread.fpu.fcr31 = c->fpu_csr31;
+ 	switch (state->nan_2008) {
+ 	case 0:
+diff --git a/arch/mips/lantiq/prom.c b/arch/mips/lantiq/prom.c
+index a3cf293658581..0c45767eacf67 100644
+--- a/arch/mips/lantiq/prom.c
++++ b/arch/mips/lantiq/prom.c
+@@ -108,10 +108,9 @@ void __init prom_init(void)
+ 	prom_init_cmdline();
+ 
+ #if defined(CONFIG_MIPS_MT_SMP)
+-	if (cpu_has_mipsmt) {
+-		lantiq_smp_ops = vsmp_smp_ops;
++	lantiq_smp_ops = vsmp_smp_ops;
++	if (cpu_has_mipsmt)
+ 		lantiq_smp_ops.init_secondary = lantiq_init_secondary;
+-		register_smp_ops(&lantiq_smp_ops);
+-	}
++	register_smp_ops(&lantiq_smp_ops);
+ #endif
+ }
+diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
+index 5dcb525a89954..6e368a4658b54 100644
+--- a/arch/mips/mm/init.c
++++ b/arch/mips/mm/init.c
+@@ -422,7 +422,12 @@ void __init paging_init(void)
+ 		       (highend_pfn - max_low_pfn) << (PAGE_SHIFT - 10));
+ 		max_zone_pfns[ZONE_HIGHMEM] = max_low_pfn;
+ 	}
++
++	max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
++#else
++	max_mapnr = max_low_pfn;
+ #endif
++	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
+ 
+ 	free_area_init(max_zone_pfns);
+ }
+@@ -458,13 +463,6 @@ void __init mem_init(void)
+ 	 */
+ 	BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (PFN_PTE_SHIFT > PAGE_SHIFT));
+ 
+-#ifdef CONFIG_HIGHMEM
+-	max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
+-#else
+-	max_mapnr = max_low_pfn;
+-#endif
+-	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
+-
+ 	maar_init();
+ 	memblock_free_all();
+ 	setup_zero_pages();	/* Setup zeroed pages.  */
+diff --git a/arch/parisc/kernel/firmware.c b/arch/parisc/kernel/firmware.c
+index 904ca3b9e7a71..c69f6d5946e90 100644
+--- a/arch/parisc/kernel/firmware.c
++++ b/arch/parisc/kernel/firmware.c
+@@ -123,10 +123,10 @@ static unsigned long f_extend(unsigned long address)
+ #ifdef CONFIG_64BIT
+ 	if(unlikely(parisc_narrow_firmware)) {
+ 		if((address & 0xff000000) == 0xf0000000)
+-			return 0xf0f0f0f000000000UL | (u32)address;
++			return (0xfffffff0UL << 32) | (u32)address;
+ 
+ 		if((address & 0xf0000000) == 0xf0000000)
+-			return 0xffffffff00000000UL | (u32)address;
++			return (0xffffffffUL << 32) | (u32)address;
+ 	}
+ #endif
+ 	return address;
+diff --git a/arch/powerpc/configs/ps3_defconfig b/arch/powerpc/configs/ps3_defconfig
+index 2b175ddf82f0b..aa8bb0208bcc8 100644
+--- a/arch/powerpc/configs/ps3_defconfig
++++ b/arch/powerpc/configs/ps3_defconfig
+@@ -24,6 +24,7 @@ CONFIG_PS3_VRAM=m
+ CONFIG_PS3_LPM=m
+ # CONFIG_PPC_OF_BOOT_TRAMPOLINE is not set
+ CONFIG_KEXEC=y
++# CONFIG_PPC64_BIG_ENDIAN_ELF_ABI_V2 is not set
+ CONFIG_PPC_4K_PAGES=y
+ CONFIG_SCHED_SMT=y
+ CONFIG_PM=y
+diff --git a/arch/riscv/boot/dts/sophgo/sg2042.dtsi b/arch/riscv/boot/dts/sophgo/sg2042.dtsi
+index 93256540d0788..ead1cc35d88b2 100644
+--- a/arch/riscv/boot/dts/sophgo/sg2042.dtsi
++++ b/arch/riscv/boot/dts/sophgo/sg2042.dtsi
+@@ -93,144 +93,160 @@
+ 					      <&cpu63_intc 3>;
+ 		};
+ 
+-		clint_mtimer0: timer@70ac000000 {
++		clint_mtimer0: timer@70ac004000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac000000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac004000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu0_intc 7>,
+ 					      <&cpu1_intc 7>,
+ 					      <&cpu2_intc 7>,
+ 					      <&cpu3_intc 7>;
+ 		};
+ 
+-		clint_mtimer1: timer@70ac010000 {
++		clint_mtimer1: timer@70ac014000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac010000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac014000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu4_intc 7>,
+ 					      <&cpu5_intc 7>,
+ 					      <&cpu6_intc 7>,
+ 					      <&cpu7_intc 7>;
+ 		};
+ 
+-		clint_mtimer2: timer@70ac020000 {
++		clint_mtimer2: timer@70ac024000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac020000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac024000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu8_intc 7>,
+ 					      <&cpu9_intc 7>,
+ 					      <&cpu10_intc 7>,
+ 					      <&cpu11_intc 7>;
+ 		};
+ 
+-		clint_mtimer3: timer@70ac030000 {
++		clint_mtimer3: timer@70ac034000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac030000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac034000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu12_intc 7>,
+ 					      <&cpu13_intc 7>,
+ 					      <&cpu14_intc 7>,
+ 					      <&cpu15_intc 7>;
+ 		};
+ 
+-		clint_mtimer4: timer@70ac040000 {
++		clint_mtimer4: timer@70ac044000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac040000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac044000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu16_intc 7>,
+ 					      <&cpu17_intc 7>,
+ 					      <&cpu18_intc 7>,
+ 					      <&cpu19_intc 7>;
+ 		};
+ 
+-		clint_mtimer5: timer@70ac050000 {
++		clint_mtimer5: timer@70ac054000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac050000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac054000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu20_intc 7>,
+ 					      <&cpu21_intc 7>,
+ 					      <&cpu22_intc 7>,
+ 					      <&cpu23_intc 7>;
+ 		};
+ 
+-		clint_mtimer6: timer@70ac060000 {
++		clint_mtimer6: timer@70ac064000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac060000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac064000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu24_intc 7>,
+ 					      <&cpu25_intc 7>,
+ 					      <&cpu26_intc 7>,
+ 					      <&cpu27_intc 7>;
+ 		};
+ 
+-		clint_mtimer7: timer@70ac070000 {
++		clint_mtimer7: timer@70ac074000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac070000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac074000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu28_intc 7>,
+ 					      <&cpu29_intc 7>,
+ 					      <&cpu30_intc 7>,
+ 					      <&cpu31_intc 7>;
+ 		};
+ 
+-		clint_mtimer8: timer@70ac080000 {
++		clint_mtimer8: timer@70ac084000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac080000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac084000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu32_intc 7>,
+ 					      <&cpu33_intc 7>,
+ 					      <&cpu34_intc 7>,
+ 					      <&cpu35_intc 7>;
+ 		};
+ 
+-		clint_mtimer9: timer@70ac090000 {
++		clint_mtimer9: timer@70ac094000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac090000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac094000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu36_intc 7>,
+ 					      <&cpu37_intc 7>,
+ 					      <&cpu38_intc 7>,
+ 					      <&cpu39_intc 7>;
+ 		};
+ 
+-		clint_mtimer10: timer@70ac0a0000 {
++		clint_mtimer10: timer@70ac0a4000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac0a0000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac0a4000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu40_intc 7>,
+ 					      <&cpu41_intc 7>,
+ 					      <&cpu42_intc 7>,
+ 					      <&cpu43_intc 7>;
+ 		};
+ 
+-		clint_mtimer11: timer@70ac0b0000 {
++		clint_mtimer11: timer@70ac0b4000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac0b0000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac0b4000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu44_intc 7>,
+ 					      <&cpu45_intc 7>,
+ 					      <&cpu46_intc 7>,
+ 					      <&cpu47_intc 7>;
+ 		};
+ 
+-		clint_mtimer12: timer@70ac0c0000 {
++		clint_mtimer12: timer@70ac0c4000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac0c0000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac0c4000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu48_intc 7>,
+ 					      <&cpu49_intc 7>,
+ 					      <&cpu50_intc 7>,
+ 					      <&cpu51_intc 7>;
+ 		};
+ 
+-		clint_mtimer13: timer@70ac0d0000 {
++		clint_mtimer13: timer@70ac0d4000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac0d0000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac0d4000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu52_intc 7>,
+ 					      <&cpu53_intc 7>,
+ 					      <&cpu54_intc 7>,
+ 					      <&cpu55_intc 7>;
+ 		};
+ 
+-		clint_mtimer14: timer@70ac0e0000 {
++		clint_mtimer14: timer@70ac0e4000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac0e0000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac0e4000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu56_intc 7>,
+ 					      <&cpu57_intc 7>,
+ 					      <&cpu58_intc 7>,
+ 					      <&cpu59_intc 7>;
+ 		};
+ 
+-		clint_mtimer15: timer@70ac0f0000 {
++		clint_mtimer15: timer@70ac0f4000 {
+ 			compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer";
+-			reg = <0x00000070 0xac0f0000 0x00000000 0x00007ff8>;
++			reg = <0x00000070 0xac0f4000 0x00000000 0x0000c000>;
++			reg-names = "mtimecmp";
+ 			interrupts-extended = <&cpu60_intc 7>,
+ 					      <&cpu61_intc 7>,
+ 					      <&cpu62_intc 7>,
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index ab00235b018f8..74ffb2178f545 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -881,7 +881,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
+ #define TASK_SIZE_MIN	(PGDIR_SIZE_L3 * PTRS_PER_PGD / 2)
+ 
+ #ifdef CONFIG_COMPAT
+-#define TASK_SIZE_32	(_AC(0x80000000, UL) - PAGE_SIZE)
++#define TASK_SIZE_32	(_AC(0x80000000, UL))
+ #define TASK_SIZE	(test_thread_flag(TIF_32BIT) ? \
+ 			 TASK_SIZE_32 : TASK_SIZE_64)
+ #else
+diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
+index f19f861cda549..e1944ff0757a8 100644
+--- a/arch/riscv/include/asm/processor.h
++++ b/arch/riscv/include/asm/processor.h
+@@ -16,7 +16,7 @@
+ 
+ #ifdef CONFIG_64BIT
+ #define DEFAULT_MAP_WINDOW	(UL(1) << (MMAP_VA_BITS - 1))
+-#define STACK_TOP_MAX		TASK_SIZE_64
++#define STACK_TOP_MAX		TASK_SIZE
+ 
+ #define arch_get_mmap_end(addr, len, flags)			\
+ ({								\
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 862834bb1d643..c9d59a5448b6c 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -723,8 +723,8 @@ static int add_relocation_to_accumulate(struct module *me, int type,
+ 
+ 			if (!bucket) {
+ 				kfree(entry);
+-				kfree(rel_head);
+ 				kfree(rel_head->rel_entry);
++				kfree(rel_head);
+ 				return -ENOMEM;
+ 			}
+ 
+@@ -747,6 +747,10 @@ initialize_relocation_hashtable(unsigned int num_relocations,
+ {
+ 	/* Can safely assume that bits is not greater than sizeof(long) */
+ 	unsigned long hashtable_size = roundup_pow_of_two(num_relocations);
++	/*
++	 * When hashtable_size == 1, hashtable_bits == 0.
++	 * This is valid because the hashing algorithm returns 0 in this case.
++	 */
+ 	unsigned int hashtable_bits = ilog2(hashtable_size);
+ 
+ 	/*
+@@ -760,10 +764,10 @@ initialize_relocation_hashtable(unsigned int num_relocations,
+ 	hashtable_size <<= should_double_size;
+ 
+ 	*relocation_hashtable = kmalloc_array(hashtable_size,
+-					      sizeof(*relocation_hashtable),
++					      sizeof(**relocation_hashtable),
+ 					      GFP_KERNEL);
+ 	if (!*relocation_hashtable)
+-		return -ENOMEM;
++		return 0;
+ 
+ 	__hash_init(*relocation_hashtable, hashtable_size);
+ 
+@@ -789,8 +793,8 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+ 	hashtable_bits = initialize_relocation_hashtable(num_relocations,
+ 							 &relocation_hashtable);
+ 
+-	if (hashtable_bits < 0)
+-		return hashtable_bits;
++	if (!relocation_hashtable)
++		return -ENOMEM;
+ 
+ 	INIT_LIST_HEAD(&used_buckets_list);
+ 
+diff --git a/arch/riscv/kernel/pi/cmdline_early.c b/arch/riscv/kernel/pi/cmdline_early.c
+index 68e786c84c949..f6d4dedffb842 100644
+--- a/arch/riscv/kernel/pi/cmdline_early.c
++++ b/arch/riscv/kernel/pi/cmdline_early.c
+@@ -38,8 +38,7 @@ static char *get_early_cmdline(uintptr_t dtb_pa)
+ 	if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) ||
+ 	    IS_ENABLED(CONFIG_CMDLINE_FORCE) ||
+ 	    fdt_cmdline_size == 0 /* CONFIG_CMDLINE_FALLBACK */) {
+-		strncat(early_cmdline, CONFIG_CMDLINE,
+-			COMMAND_LINE_SIZE - fdt_cmdline_size);
++		strlcat(early_cmdline, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
+ 	}
+ 
+ 	return early_cmdline;
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index c773820e4af90..c6fe5405de4a4 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
+ 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
+ 	 */
+ 	if (nbytes) {
+-		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
++		memset(buf, 0, AES_BLOCK_SIZE);
++		memcpy(buf, walk.src.virt.addr, nbytes);
++		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
+ 			    AES_BLOCK_SIZE, walk.iv);
+ 		memcpy(walk.dst.virt.addr, buf, nbytes);
+ 		crypto_inc(walk.iv, AES_BLOCK_SIZE);
+diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
+index 8b541e44151d4..55ee5567a5ea9 100644
+--- a/arch/s390/crypto/paes_s390.c
++++ b/arch/s390/crypto/paes_s390.c
+@@ -693,9 +693,11 @@ static int ctr_paes_crypt(struct skcipher_request *req)
+ 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
+ 	 */
+ 	if (nbytes) {
++		memset(buf, 0, AES_BLOCK_SIZE);
++		memcpy(buf, walk.src.virt.addr, nbytes);
+ 		while (1) {
+ 			if (cpacf_kmctr(ctx->fc, &param, buf,
+-					walk.src.virt.addr, AES_BLOCK_SIZE,
++					buf, AES_BLOCK_SIZE,
+ 					walk.iv) == AES_BLOCK_SIZE)
+ 				break;
+ 			if (__paes_convert_key(ctx))
+diff --git a/arch/sh/boards/mach-ecovec24/setup.c b/arch/sh/boards/mach-ecovec24/setup.c
+index 0f279360838a4..30d117f9ad7ee 100644
+--- a/arch/sh/boards/mach-ecovec24/setup.c
++++ b/arch/sh/boards/mach-ecovec24/setup.c
+@@ -1220,7 +1220,7 @@ static int __init arch_setup(void)
+ 		lcdc_info.ch[0].num_modes		= ARRAY_SIZE(ecovec_dvi_modes);
+ 
+ 		/* No backlight */
+-		gpio_backlight_data.fbdev = NULL;
++		gpio_backlight_data.dev = NULL;
+ 
+ 		gpio_set_value(GPIO_PTA2, 1);
+ 		gpio_set_value(GPIO_PTU1, 1);
+diff --git a/arch/x86/include/asm/syscall_wrapper.h b/arch/x86/include/asm/syscall_wrapper.h
+index 21f9407be5d35..7e88705e907f4 100644
+--- a/arch/x86/include/asm/syscall_wrapper.h
++++ b/arch/x86/include/asm/syscall_wrapper.h
+@@ -58,12 +58,29 @@ extern long __ia32_sys_ni_syscall(const struct pt_regs *regs);
+ 		,,regs->di,,regs->si,,regs->dx				\
+ 		,,regs->r10,,regs->r8,,regs->r9)			\
+ 
++
++/* SYSCALL_PT_ARGS is adapted from s390x */
++#define SYSCALL_PT_ARG6(m, t1, t2, t3, t4, t5, t6)			\
++	SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5), m(t6, (regs->bp))
++#define SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5)				\
++	SYSCALL_PT_ARG4(m, t1, t2, t3, t4),  m(t5, (regs->di))
++#define SYSCALL_PT_ARG4(m, t1, t2, t3, t4)				\
++	SYSCALL_PT_ARG3(m, t1, t2, t3),  m(t4, (regs->si))
++#define SYSCALL_PT_ARG3(m, t1, t2, t3)					\
++	SYSCALL_PT_ARG2(m, t1, t2), m(t3, (regs->dx))
++#define SYSCALL_PT_ARG2(m, t1, t2)					\
++	SYSCALL_PT_ARG1(m, t1), m(t2, (regs->cx))
++#define SYSCALL_PT_ARG1(m, t1) m(t1, (regs->bx))
++#define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__)
++
++#define __SC_COMPAT_CAST(t, a)						\
++	(__typeof(__builtin_choose_expr(__TYPE_IS_L(t), 0, 0U)))	\
++	(unsigned int)a
++
+ /* Mapping of registers to parameters for syscalls on i386 */
+ #define SC_IA32_REGS_TO_ARGS(x, ...)					\
+-	__MAP(x,__SC_ARGS						\
+-	      ,,(unsigned int)regs->bx,,(unsigned int)regs->cx		\
+-	      ,,(unsigned int)regs->dx,,(unsigned int)regs->si		\
+-	      ,,(unsigned int)regs->di,,(unsigned int)regs->bp)
++	SYSCALL_PT_ARGS(x, __SC_COMPAT_CAST,				\
++			__MAP(x, __SC_TYPE, __VA_ARGS__))		\
+ 
+ #define __SYS_STUB0(abi, name)						\
+ 	long __##abi##_##name(const struct pt_regs *regs);		\
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 41eecf180b7f4..17adad4cbe78c 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -1438,7 +1438,7 @@ irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
+ 	memset(&curr_time, 0, sizeof(struct rtc_time));
+ 
+ 	if (hpet_rtc_flags & (RTC_UIE | RTC_AIE)) {
+-		if (unlikely(mc146818_get_time(&curr_time) < 0)) {
++		if (unlikely(mc146818_get_time(&curr_time, 10) < 0)) {
+ 			pr_err_ratelimited("unable to read current time from RTC\n");
+ 			return IRQ_HANDLED;
+ 		}
+diff --git a/arch/x86/kernel/rtc.c b/arch/x86/kernel/rtc.c
+index 1309b9b053386..2e7066980f3e8 100644
+--- a/arch/x86/kernel/rtc.c
++++ b/arch/x86/kernel/rtc.c
+@@ -67,7 +67,7 @@ void mach_get_cmos_time(struct timespec64 *now)
+ 		return;
+ 	}
+ 
+-	if (mc146818_get_time(&tm)) {
++	if (mc146818_get_time(&tm, 1000)) {
+ 		pr_err("Unable to read current time from RTC\n");
+ 		now->tv_sec = now->tv_nsec = 0;
+ 		return;
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 9c73a763ef883..438f79c564cfc 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -20,8 +20,6 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ 	struct blkpg_partition p;
+ 	sector_t start, length;
+ 
+-	if (disk->flags & GENHD_FL_NO_PART)
+-		return -EINVAL;
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EACCES;
+ 	if (copy_from_user(&p, upart, sizeof(struct blkpg_partition)))
+diff --git a/block/partitions/core.c b/block/partitions/core.c
+index f47ffcfdfcec2..f14602022c5eb 100644
+--- a/block/partitions/core.c
++++ b/block/partitions/core.c
+@@ -447,6 +447,11 @@ int bdev_add_partition(struct gendisk *disk, int partno, sector_t start,
+ 		goto out;
+ 	}
+ 
++	if (disk->flags & GENHD_FL_NO_PART) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	if (partition_overlaps(disk, start, length, -1)) {
+ 		ret = -EBUSY;
+ 		goto out;
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 4fe95c4480473..85bc279b4233f 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -341,6 +341,7 @@ __crypto_register_alg(struct crypto_alg *alg, struct list_head *algs_to_put)
+ 		}
+ 
+ 		if (!strcmp(q->cra_driver_name, alg->cra_name) ||
++		    !strcmp(q->cra_driver_name, alg->cra_driver_name) ||
+ 		    !strcmp(q->cra_name, alg->cra_driver_name))
+ 			goto err;
+ 	}
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index f85f3515c258f..9c5a5f4dba5a6 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -579,7 +579,7 @@ bool dev_pm_skip_resume(struct device *dev)
+ }
+ 
+ /**
+- * device_resume_noirq - Execute a "noirq resume" callback for given device.
++ * __device_resume_noirq - Execute a "noirq resume" callback for given device.
+  * @dev: Device to handle.
+  * @state: PM transition of the system being carried out.
+  * @async: If true, the device is being resumed asynchronously.
+@@ -587,7 +587,7 @@ bool dev_pm_skip_resume(struct device *dev)
+  * The driver of @dev will not receive interrupts while this function is being
+  * executed.
+  */
+-static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
++static void __device_resume_noirq(struct device *dev, pm_message_t state, bool async)
+ {
+ 	pm_callback_t callback = NULL;
+ 	const char *info = NULL;
+@@ -655,7 +655,13 @@ Skip:
+ Out:
+ 	complete_all(&dev->power.completion);
+ 	TRACE_RESUME(error);
+-	return error;
++
++	if (error) {
++		suspend_stats.failed_resume_noirq++;
++		dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
++		dpm_save_failed_dev(dev_name(dev));
++		pm_dev_err(dev, state, async ? " async noirq" : " noirq", error);
++	}
+ }
+ 
+ static bool is_async(struct device *dev)
+@@ -668,11 +674,15 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
+ {
+ 	reinit_completion(&dev->power.completion);
+ 
+-	if (is_async(dev)) {
+-		get_device(dev);
+-		async_schedule_dev(func, dev);
++	if (!is_async(dev))
++		return false;
++
++	get_device(dev);
++
++	if (async_schedule_dev_nocall(func, dev))
+ 		return true;
+-	}
++
++	put_device(dev);
+ 
+ 	return false;
+ }
+@@ -680,15 +690,19 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
+ static void async_resume_noirq(void *data, async_cookie_t cookie)
+ {
+ 	struct device *dev = data;
+-	int error;
+-
+-	error = device_resume_noirq(dev, pm_transition, true);
+-	if (error)
+-		pm_dev_err(dev, pm_transition, " async", error);
+ 
++	__device_resume_noirq(dev, pm_transition, true);
+ 	put_device(dev);
+ }
+ 
++static void device_resume_noirq(struct device *dev)
++{
++	if (dpm_async_fn(dev, async_resume_noirq))
++		return;
++
++	__device_resume_noirq(dev, pm_transition, false);
++}
++
+ static void dpm_noirq_resume_devices(pm_message_t state)
+ {
+ 	struct device *dev;
+@@ -698,14 +712,6 @@ static void dpm_noirq_resume_devices(pm_message_t state)
+ 	mutex_lock(&dpm_list_mtx);
+ 	pm_transition = state;
+ 
+-	/*
+-	 * Advanced the async threads upfront,
+-	 * in case the starting of async threads is
+-	 * delayed by non-async resuming devices.
+-	 */
+-	list_for_each_entry(dev, &dpm_noirq_list, power.entry)
+-		dpm_async_fn(dev, async_resume_noirq);
+-
+ 	while (!list_empty(&dpm_noirq_list)) {
+ 		dev = to_device(dpm_noirq_list.next);
+ 		get_device(dev);
+@@ -713,17 +719,7 @@ static void dpm_noirq_resume_devices(pm_message_t state)
+ 
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+-		if (!is_async(dev)) {
+-			int error;
+-
+-			error = device_resume_noirq(dev, state, false);
+-			if (error) {
+-				suspend_stats.failed_resume_noirq++;
+-				dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+-				dpm_save_failed_dev(dev_name(dev));
+-				pm_dev_err(dev, state, " noirq", error);
+-			}
+-		}
++		device_resume_noirq(dev);
+ 
+ 		put_device(dev);
+ 
+@@ -751,14 +747,14 @@ void dpm_resume_noirq(pm_message_t state)
+ }
+ 
+ /**
+- * device_resume_early - Execute an "early resume" callback for given device.
++ * __device_resume_early - Execute an "early resume" callback for given device.
+  * @dev: Device to handle.
+  * @state: PM transition of the system being carried out.
+  * @async: If true, the device is being resumed asynchronously.
+  *
+  * Runtime PM is disabled for @dev while this function is being executed.
+  */
+-static int device_resume_early(struct device *dev, pm_message_t state, bool async)
++static void __device_resume_early(struct device *dev, pm_message_t state, bool async)
+ {
+ 	pm_callback_t callback = NULL;
+ 	const char *info = NULL;
+@@ -811,21 +807,31 @@ Out:
+ 
+ 	pm_runtime_enable(dev);
+ 	complete_all(&dev->power.completion);
+-	return error;
++
++	if (error) {
++		suspend_stats.failed_resume_early++;
++		dpm_save_failed_step(SUSPEND_RESUME_EARLY);
++		dpm_save_failed_dev(dev_name(dev));
++		pm_dev_err(dev, state, async ? " async early" : " early", error);
++	}
+ }
+ 
+ static void async_resume_early(void *data, async_cookie_t cookie)
+ {
+ 	struct device *dev = data;
+-	int error;
+-
+-	error = device_resume_early(dev, pm_transition, true);
+-	if (error)
+-		pm_dev_err(dev, pm_transition, " async", error);
+ 
++	__device_resume_early(dev, pm_transition, true);
+ 	put_device(dev);
+ }
+ 
++static void device_resume_early(struct device *dev)
++{
++	if (dpm_async_fn(dev, async_resume_early))
++		return;
++
++	__device_resume_early(dev, pm_transition, false);
++}
++
+ /**
+  * dpm_resume_early - Execute "early resume" callbacks for all devices.
+  * @state: PM transition of the system being carried out.
+@@ -839,14 +845,6 @@ void dpm_resume_early(pm_message_t state)
+ 	mutex_lock(&dpm_list_mtx);
+ 	pm_transition = state;
+ 
+-	/*
+-	 * Advanced the async threads upfront,
+-	 * in case the starting of async threads is
+-	 * delayed by non-async resuming devices.
+-	 */
+-	list_for_each_entry(dev, &dpm_late_early_list, power.entry)
+-		dpm_async_fn(dev, async_resume_early);
+-
+ 	while (!list_empty(&dpm_late_early_list)) {
+ 		dev = to_device(dpm_late_early_list.next);
+ 		get_device(dev);
+@@ -854,17 +852,7 @@ void dpm_resume_early(pm_message_t state)
+ 
+ 		mutex_unlock(&dpm_list_mtx);
+ 
+-		if (!is_async(dev)) {
+-			int error;
+-
+-			error = device_resume_early(dev, state, false);
+-			if (error) {
+-				suspend_stats.failed_resume_early++;
+-				dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+-				dpm_save_failed_dev(dev_name(dev));
+-				pm_dev_err(dev, state, " early", error);
+-			}
+-		}
++		device_resume_early(dev);
+ 
+ 		put_device(dev);
+ 
+@@ -888,12 +876,12 @@ void dpm_resume_start(pm_message_t state)
+ EXPORT_SYMBOL_GPL(dpm_resume_start);
+ 
+ /**
+- * device_resume - Execute "resume" callbacks for given device.
++ * __device_resume - Execute "resume" callbacks for given device.
+  * @dev: Device to handle.
+  * @state: PM transition of the system being carried out.
+  * @async: If true, the device is being resumed asynchronously.
+  */
+-static int device_resume(struct device *dev, pm_message_t state, bool async)
++static void __device_resume(struct device *dev, pm_message_t state, bool async)
+ {
+ 	pm_callback_t callback = NULL;
+ 	const char *info = NULL;
+@@ -975,20 +963,30 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
+ 
+ 	TRACE_RESUME(error);
+ 
+-	return error;
++	if (error) {
++		suspend_stats.failed_resume++;
++		dpm_save_failed_step(SUSPEND_RESUME);
++		dpm_save_failed_dev(dev_name(dev));
++		pm_dev_err(dev, state, async ? " async" : "", error);
++	}
+ }
+ 
+ static void async_resume(void *data, async_cookie_t cookie)
+ {
+ 	struct device *dev = data;
+-	int error;
+ 
+-	error = device_resume(dev, pm_transition, true);
+-	if (error)
+-		pm_dev_err(dev, pm_transition, " async", error);
++	__device_resume(dev, pm_transition, true);
+ 	put_device(dev);
+ }
+ 
++static void device_resume(struct device *dev)
++{
++	if (dpm_async_fn(dev, async_resume))
++		return;
++
++	__device_resume(dev, pm_transition, false);
++}
++
+ /**
+  * dpm_resume - Execute "resume" callbacks for non-sysdev devices.
+  * @state: PM transition of the system being carried out.
+@@ -1008,27 +1006,17 @@ void dpm_resume(pm_message_t state)
+ 	pm_transition = state;
+ 	async_error = 0;
+ 
+-	list_for_each_entry(dev, &dpm_suspended_list, power.entry)
+-		dpm_async_fn(dev, async_resume);
+-
+ 	while (!list_empty(&dpm_suspended_list)) {
+ 		dev = to_device(dpm_suspended_list.next);
++
+ 		get_device(dev);
+-		if (!is_async(dev)) {
+-			int error;
+ 
+-			mutex_unlock(&dpm_list_mtx);
++		mutex_unlock(&dpm_list_mtx);
++
++		device_resume(dev);
+ 
+-			error = device_resume(dev, state, false);
+-			if (error) {
+-				suspend_stats.failed_resume++;
+-				dpm_save_failed_step(SUSPEND_RESUME);
+-				dpm_save_failed_dev(dev_name(dev));
+-				pm_dev_err(dev, state, "", error);
+-			}
++		mutex_lock(&dpm_list_mtx);
+ 
+-			mutex_lock(&dpm_list_mtx);
+-		}
+ 		if (!list_empty(&dev->power.entry))
+ 			list_move_tail(&dev->power.entry, &dpm_prepared_list);
+ 
+diff --git a/drivers/base/power/trace.c b/drivers/base/power/trace.c
+index 72b7a92337b18..cd6e559648b21 100644
+--- a/drivers/base/power/trace.c
++++ b/drivers/base/power/trace.c
+@@ -120,7 +120,7 @@ static unsigned int read_magic_time(void)
+ 	struct rtc_time time;
+ 	unsigned int val;
+ 
+-	if (mc146818_get_time(&time) < 0) {
++	if (mc146818_get_time(&time, 1000) < 0) {
+ 		pr_err("Unable to read current time from RTC\n");
+ 		return 0;
+ 	}
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index b6414e1e645b7..aa65313aabb8d 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -510,7 +510,7 @@ static int __sock_xmit(struct nbd_device *nbd, struct socket *sock, int send,
+ 		       struct iov_iter *iter, int msg_flags, int *sent)
+ {
+ 	int result;
+-	struct msghdr msg;
++	struct msghdr msg = {};
+ 	unsigned int noreclaim_flag;
+ 
+ 	if (unlikely(!sock)) {
+@@ -526,10 +526,6 @@ static int __sock_xmit(struct nbd_device *nbd, struct socket *sock, int send,
+ 	do {
+ 		sock->sk->sk_allocation = GFP_NOIO | __GFP_MEMALLOC;
+ 		sock->sk->sk_use_task_frag = false;
+-		msg.msg_name = NULL;
+-		msg.msg_namelen = 0;
+-		msg.msg_control = NULL;
+-		msg.msg_controllen = 0;
+ 		msg.msg_flags = msg_flags | MSG_NOSIGNAL;
+ 
+ 		if (send)
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index a999b698b131f..1e2596c5efd81 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -3452,14 +3452,15 @@ static bool rbd_lock_add_request(struct rbd_img_request *img_req)
+ static void rbd_lock_del_request(struct rbd_img_request *img_req)
+ {
+ 	struct rbd_device *rbd_dev = img_req->rbd_dev;
+-	bool need_wakeup;
++	bool need_wakeup = false;
+ 
+ 	lockdep_assert_held(&rbd_dev->lock_rwsem);
+ 	spin_lock(&rbd_dev->lock_lists_lock);
+-	rbd_assert(!list_empty(&img_req->lock_item));
+-	list_del_init(&img_req->lock_item);
+-	need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
+-		       list_empty(&rbd_dev->running_list));
++	if (!list_empty(&img_req->lock_item)) {
++		list_del_init(&img_req->lock_item);
++		need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
++			       list_empty(&rbd_dev->running_list));
++	}
+ 	spin_unlock(&rbd_dev->lock_lists_lock);
+ 	if (need_wakeup)
+ 		complete(&rbd_dev->releasing_wait);
+@@ -3842,14 +3843,19 @@ static void wake_lock_waiters(struct rbd_device *rbd_dev, int result)
+ 		return;
+ 	}
+ 
+-	list_for_each_entry(img_req, &rbd_dev->acquiring_list, lock_item) {
++	while (!list_empty(&rbd_dev->acquiring_list)) {
++		img_req = list_first_entry(&rbd_dev->acquiring_list,
++					   struct rbd_img_request, lock_item);
+ 		mutex_lock(&img_req->state_mutex);
+ 		rbd_assert(img_req->state == RBD_IMG_EXCLUSIVE_LOCK);
++		if (!result)
++			list_move_tail(&img_req->lock_item,
++				       &rbd_dev->running_list);
++		else
++			list_del_init(&img_req->lock_item);
+ 		rbd_img_schedule(img_req, result);
+ 		mutex_unlock(&img_req->state_mutex);
+ 	}
+-
+-	list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list);
+ }
+ 
+ static bool locker_equal(const struct ceph_locker *lhs,
+diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
+index dcf627b36e829..d6653cbcf94a2 100644
+--- a/drivers/bus/mhi/host/main.c
++++ b/drivers/bus/mhi/host/main.c
+@@ -268,7 +268,8 @@ static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
+ 
+ static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
+ {
+-	return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
++	return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len &&
++			!(addr & (sizeof(struct mhi_ring_element) - 1));
+ }
+ 
+ int mhi_destroy_device(struct device *dev, void *data)
+@@ -642,6 +643,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ 			mhi_del_ring_element(mhi_cntrl, tre_ring);
+ 			local_rp = tre_ring->rp;
+ 
++			read_unlock_bh(&mhi_chan->lock);
++
+ 			/* notify client */
+ 			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ 
+@@ -667,6 +670,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ 					kfree(buf_info->cb_buf);
+ 				}
+ 			}
++
++			read_lock_bh(&mhi_chan->lock);
+ 		}
+ 		break;
+ 	} /* CC_EOT */
+@@ -1122,17 +1127,15 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
+ 	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
+ 		return -EIO;
+ 
+-	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+-
+ 	ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
+-	if (unlikely(ret)) {
+-		ret = -EAGAIN;
+-		goto exit_unlock;
+-	}
++	if (unlikely(ret))
++		return -EAGAIN;
+ 
+ 	ret = mhi_gen_tre(mhi_cntrl, mhi_chan, buf_info, mflags);
+ 	if (unlikely(ret))
+-		goto exit_unlock;
++		return ret;
++
++	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+ 
+ 	/* Packet is queued, take a usage ref to exit M3 if necessary
+ 	 * for host->device buffer, balanced put is done on buffer completion
+@@ -1152,7 +1155,6 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
+ 	if (dir == DMA_FROM_DEVICE)
+ 		mhi_cntrl->runtime_put(mhi_cntrl);
+ 
+-exit_unlock:
+ 	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+ 
+ 	return ret;
+@@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ 	int eot, eob, chain, bei;
+ 	int ret;
+ 
++	/* Protect accesses for reading and incrementing WP */
++	write_lock_bh(&mhi_chan->lock);
++
+ 	buf_ring = &mhi_chan->buf_ring;
+ 	tre_ring = &mhi_chan->tre_ring;
+ 
+@@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ 
+ 	if (!info->pre_mapped) {
+ 		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+-		if (ret)
++		if (ret) {
++			write_unlock_bh(&mhi_chan->lock);
+ 			return ret;
++		}
+ 	}
+ 
+ 	eob = !!(flags & MHI_EOB);
+@@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ 	mhi_add_ring_element(mhi_cntrl, tre_ring);
+ 	mhi_add_ring_element(mhi_cntrl, buf_ring);
+ 
++	write_unlock_bh(&mhi_chan->lock);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 420f155d251fb..a3bbdd6e60fca 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -23,10 +23,13 @@
+ #include <linux/sched.h>
+ #include <linux/sched/signal.h>
+ #include <linux/slab.h>
++#include <linux/string.h>
+ #include <linux/uaccess.h>
+ 
+ #define RNG_MODULE_NAME		"hw_random"
+ 
++#define RNG_BUFFER_SIZE (SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES)
++
+ static struct hwrng *current_rng;
+ /* the current rng has been explicitly chosen by user via sysfs */
+ static int cur_rng_set_by_user;
+@@ -58,7 +61,7 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
+ 
+ static size_t rng_buffer_size(void)
+ {
+-	return SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES;
++	return RNG_BUFFER_SIZE;
+ }
+ 
+ static void add_early_randomness(struct hwrng *rng)
+@@ -209,6 +212,7 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
+ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ 			    size_t size, loff_t *offp)
+ {
++	u8 buffer[RNG_BUFFER_SIZE];
+ 	ssize_t ret = 0;
+ 	int err = 0;
+ 	int bytes_read, len;
+@@ -236,34 +240,37 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ 			if (bytes_read < 0) {
+ 				err = bytes_read;
+ 				goto out_unlock_reading;
++			} else if (bytes_read == 0 &&
++				   (filp->f_flags & O_NONBLOCK)) {
++				err = -EAGAIN;
++				goto out_unlock_reading;
+ 			}
++
+ 			data_avail = bytes_read;
+ 		}
+ 
+-		if (!data_avail) {
+-			if (filp->f_flags & O_NONBLOCK) {
+-				err = -EAGAIN;
+-				goto out_unlock_reading;
+-			}
+-		} else {
+-			len = data_avail;
++		len = data_avail;
++		if (len) {
+ 			if (len > size)
+ 				len = size;
+ 
+ 			data_avail -= len;
+ 
+-			if (copy_to_user(buf + ret, rng_buffer + data_avail,
+-								len)) {
++			memcpy(buffer, rng_buffer + data_avail, len);
++		}
++		mutex_unlock(&reading_mutex);
++		put_rng(rng);
++
++		if (len) {
++			if (copy_to_user(buf + ret, buffer, len)) {
+ 				err = -EFAULT;
+-				goto out_unlock_reading;
++				goto out;
+ 			}
+ 
+ 			size -= len;
+ 			ret += len;
+ 		}
+ 
+-		mutex_unlock(&reading_mutex);
+-		put_rng(rng);
+ 
+ 		if (need_resched())
+ 			schedule_timeout_interruptible(1);
+@@ -274,6 +281,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
+ 		}
+ 	}
+ out:
++	memzero_explicit(buffer, sizeof(buffer));
+ 	return ret ? : err;
+ 
+ out_unlock_reading:
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 1f6186475715e..1791d37fbc53c 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -1232,14 +1232,13 @@ static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
+ 	max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq);
+ 	min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq);
+ 
++	WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf);
++	WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf);
++
+ 	max_perf = clamp_t(unsigned long, max_perf, cpudata->min_limit_perf,
+ 			cpudata->max_limit_perf);
+ 	min_perf = clamp_t(unsigned long, min_perf, cpudata->min_limit_perf,
+ 			cpudata->max_limit_perf);
+-
+-	WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf);
+-	WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf);
+-
+ 	value = READ_ONCE(cpudata->cppc_req_cached);
+ 
+ 	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index a534a1f7f1ee7..f5c69fa230d9b 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -526,6 +526,30 @@ static int intel_pstate_cppc_get_scaling(int cpu)
+ }
+ #endif /* CONFIG_ACPI_CPPC_LIB */
+ 
++static int intel_pstate_freq_to_hwp_rel(struct cpudata *cpu, int freq,
++					unsigned int relation)
++{
++	if (freq == cpu->pstate.turbo_freq)
++		return cpu->pstate.turbo_pstate;
++
++	if (freq == cpu->pstate.max_freq)
++		return cpu->pstate.max_pstate;
++
++	switch (relation) {
++	case CPUFREQ_RELATION_H:
++		return freq / cpu->pstate.scaling;
++	case CPUFREQ_RELATION_C:
++		return DIV_ROUND_CLOSEST(freq, cpu->pstate.scaling);
++	}
++
++	return DIV_ROUND_UP(freq, cpu->pstate.scaling);
++}
++
++static int intel_pstate_freq_to_hwp(struct cpudata *cpu, int freq)
++{
++	return intel_pstate_freq_to_hwp_rel(cpu, freq, CPUFREQ_RELATION_L);
++}
++
+ /**
+  * intel_pstate_hybrid_hwp_adjust - Calibrate HWP performance levels.
+  * @cpu: Target CPU.
+@@ -543,6 +567,7 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu)
+ 	int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
+ 	int perf_ctl_turbo = pstate_funcs.get_turbo(cpu->cpu);
+ 	int scaling = cpu->pstate.scaling;
++	int freq;
+ 
+ 	pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys);
+ 	pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo);
+@@ -556,16 +581,16 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu)
+ 	cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling,
+ 					 perf_ctl_scaling);
+ 
+-	cpu->pstate.max_pstate_physical =
+-			DIV_ROUND_UP(perf_ctl_max_phys * perf_ctl_scaling,
+-				     scaling);
++	freq = perf_ctl_max_phys * perf_ctl_scaling;
++	cpu->pstate.max_pstate_physical = intel_pstate_freq_to_hwp(cpu, freq);
+ 
+-	cpu->pstate.min_freq = cpu->pstate.min_pstate * perf_ctl_scaling;
++	freq = cpu->pstate.min_pstate * perf_ctl_scaling;
++	cpu->pstate.min_freq = freq;
+ 	/*
+ 	 * Cast the min P-state value retrieved via pstate_funcs.get_min() to
+ 	 * the effective range of HWP performance levels.
+ 	 */
+-	cpu->pstate.min_pstate = DIV_ROUND_UP(cpu->pstate.min_freq, scaling);
++	cpu->pstate.min_pstate = intel_pstate_freq_to_hwp(cpu, freq);
+ }
+ 
+ static inline void update_turbo_state(void)
+@@ -2524,13 +2549,12 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
+ 	 * abstract values to represent performance rather than pure ratios.
+ 	 */
+ 	if (hwp_active && cpu->pstate.scaling != perf_ctl_scaling) {
+-		int scaling = cpu->pstate.scaling;
+ 		int freq;
+ 
+ 		freq = max_policy_perf * perf_ctl_scaling;
+-		max_policy_perf = DIV_ROUND_UP(freq, scaling);
++		max_policy_perf = intel_pstate_freq_to_hwp(cpu, freq);
+ 		freq = min_policy_perf * perf_ctl_scaling;
+-		min_policy_perf = DIV_ROUND_UP(freq, scaling);
++		min_policy_perf = intel_pstate_freq_to_hwp(cpu, freq);
+ 	}
+ 
+ 	pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n",
+@@ -2904,18 +2928,7 @@ static int intel_cpufreq_target(struct cpufreq_policy *policy,
+ 
+ 	cpufreq_freq_transition_begin(policy, &freqs);
+ 
+-	switch (relation) {
+-	case CPUFREQ_RELATION_L:
+-		target_pstate = DIV_ROUND_UP(freqs.new, cpu->pstate.scaling);
+-		break;
+-	case CPUFREQ_RELATION_H:
+-		target_pstate = freqs.new / cpu->pstate.scaling;
+-		break;
+-	default:
+-		target_pstate = DIV_ROUND_CLOSEST(freqs.new, cpu->pstate.scaling);
+-		break;
+-	}
+-
++	target_pstate = intel_pstate_freq_to_hwp_rel(cpu, freqs.new, relation);
+ 	target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, false);
+ 
+ 	freqs.new = target_pstate * cpu->pstate.scaling;
+@@ -2933,7 +2946,7 @@ static unsigned int intel_cpufreq_fast_switch(struct cpufreq_policy *policy,
+ 
+ 	update_turbo_state();
+ 
+-	target_pstate = DIV_ROUND_UP(target_freq, cpu->pstate.scaling);
++	target_pstate = intel_pstate_freq_to_hwp(cpu, target_freq);
+ 
+ 	target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, true);
+ 
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index 76fec47a13a4c..7bb656237fa0c 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -525,7 +525,7 @@ static int alloc_hpa(struct cxl_region *cxlr, resource_size_t size)
+ 	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
+ 	struct cxl_region_params *p = &cxlr->params;
+ 	struct resource *res;
+-	u32 remainder = 0;
++	u64 remainder = 0;
+ 
+ 	lockdep_assert_held_write(&cxl_region_rwsem);
+ 
+@@ -545,7 +545,7 @@ static int alloc_hpa(struct cxl_region *cxlr, resource_size_t size)
+ 	    (cxlr->mode == CXL_DECODER_PMEM && uuid_is_null(&p->uuid)))
+ 		return -ENXIO;
+ 
+-	div_u64_rem(size, SZ_256M * p->interleave_ways, &remainder);
++	div64_u64_rem(size, (u64)SZ_256M * p->interleave_ways, &remainder);
+ 	if (remainder)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index b3a68d5833bd6..907f50ab70ed9 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1688,7 +1688,7 @@ static ssize_t trans_stat_show(struct device *dev,
+ 			       struct device_attribute *attr, char *buf)
+ {
+ 	struct devfreq *df = to_devfreq(dev);
+-	ssize_t len;
++	ssize_t len = 0;
+ 	int i, j;
+ 	unsigned int max_state;
+ 
+@@ -1697,7 +1697,7 @@ static ssize_t trans_stat_show(struct device *dev,
+ 	max_state = df->max_state;
+ 
+ 	if (max_state == 0)
+-		return sprintf(buf, "Not Supported.\n");
++		return scnprintf(buf, PAGE_SIZE, "Not Supported.\n");
+ 
+ 	mutex_lock(&df->lock);
+ 	if (!df->stop_polling &&
+@@ -1707,31 +1707,52 @@ static ssize_t trans_stat_show(struct device *dev,
+ 	}
+ 	mutex_unlock(&df->lock);
+ 
+-	len = sprintf(buf, "     From  :   To\n");
+-	len += sprintf(buf + len, "           :");
+-	for (i = 0; i < max_state; i++)
+-		len += sprintf(buf + len, "%10lu",
+-				df->freq_table[i]);
++	len += scnprintf(buf + len, PAGE_SIZE - len, "     From  :   To\n");
++	len += scnprintf(buf + len, PAGE_SIZE - len, "           :");
++	for (i = 0; i < max_state; i++) {
++		if (len >= PAGE_SIZE - 1)
++			break;
++		len += scnprintf(buf + len, PAGE_SIZE - len, "%10lu",
++				 df->freq_table[i]);
++	}
++	if (len >= PAGE_SIZE - 1)
++		return PAGE_SIZE - 1;
+ 
+-	len += sprintf(buf + len, "   time(ms)\n");
++	len += scnprintf(buf + len, PAGE_SIZE - len, "   time(ms)\n");
+ 
+ 	for (i = 0; i < max_state; i++) {
++		if (len >= PAGE_SIZE - 1)
++			break;
+ 		if (df->freq_table[i] == df->previous_freq)
+-			len += sprintf(buf + len, "*");
++			len += scnprintf(buf + len, PAGE_SIZE - len, "*");
+ 		else
+-			len += sprintf(buf + len, " ");
++			len += scnprintf(buf + len, PAGE_SIZE - len, " ");
++		if (len >= PAGE_SIZE - 1)
++			break;
++
++		len += scnprintf(buf + len, PAGE_SIZE - len, "%10lu:",
++				 df->freq_table[i]);
++		for (j = 0; j < max_state; j++) {
++			if (len >= PAGE_SIZE - 1)
++				break;
++			len += scnprintf(buf + len, PAGE_SIZE - len, "%10u",
++					 df->stats.trans_table[(i * max_state) + j]);
++		}
++		if (len >= PAGE_SIZE - 1)
++			break;
++		len += scnprintf(buf + len, PAGE_SIZE - len, "%10llu\n", (u64)
++				 jiffies64_to_msecs(df->stats.time_in_state[i]));
++	}
+ 
+-		len += sprintf(buf + len, "%10lu:", df->freq_table[i]);
+-		for (j = 0; j < max_state; j++)
+-			len += sprintf(buf + len, "%10u",
+-				df->stats.trans_table[(i * max_state) + j]);
++	if (len < PAGE_SIZE - 1)
++		len += scnprintf(buf + len, PAGE_SIZE - len, "Total transition : %u\n",
++				 df->stats.total_trans);
+ 
+-		len += sprintf(buf + len, "%10llu\n", (u64)
+-			jiffies64_to_msecs(df->stats.time_in_state[i]));
++	if (len >= PAGE_SIZE - 1) {
++		pr_warn_once("devfreq transition table exceeds PAGE_SIZE. Disabling\n");
++		return -EFBIG;
+ 	}
+ 
+-	len += sprintf(buf + len, "Total transition : %u\n",
+-					df->stats.total_trans);
+ 	return len;
+ }
+ 
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index b7388ae62d7f1..491b222402216 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -1103,6 +1103,9 @@ EXPORT_SYMBOL_GPL(dma_async_device_channel_register);
+ static void __dma_async_device_channel_unregister(struct dma_device *device,
+ 						  struct dma_chan *chan)
+ {
++	if (chan->local == NULL)
++		return;
++
+ 	WARN_ONCE(!device->device_release && chan->client_count,
+ 		  "%s called while %d clients hold a reference\n",
+ 		  __func__, chan->client_count);
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index 238a69bd0d6f5..75cae7ccae270 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -24,6 +24,8 @@
+ #define ARGS_RX                         BIT(0)
+ #define ARGS_REMOTE                     BIT(1)
+ #define ARGS_MULTI_FIFO                 BIT(2)
++#define ARGS_EVEN_CH                    BIT(3)
++#define ARGS_ODD_CH                     BIT(4)
+ 
+ static void fsl_edma_synchronize(struct dma_chan *chan)
+ {
+@@ -157,6 +159,12 @@ static struct dma_chan *fsl_edma3_xlate(struct of_phandle_args *dma_spec,
+ 		fsl_chan->is_remote = dma_spec->args[2] & ARGS_REMOTE;
+ 		fsl_chan->is_multi_fifo = dma_spec->args[2] & ARGS_MULTI_FIFO;
+ 
++		if ((dma_spec->args[2] & ARGS_EVEN_CH) && (i & 0x1))
++			continue;
++
++		if ((dma_spec->args[2] & ARGS_ODD_CH) && !(i & 0x1))
++			continue;
++
+ 		if (!b_chmux && i == dma_spec->args[0]) {
+ 			chan = dma_get_slave_channel(chan);
+ 			chan->device->privatecnt++;
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 8f754f922217d..fa0f880beae64 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -802,6 +802,9 @@ err_bmap:
+ 
+ static void idxd_device_evl_free(struct idxd_device *idxd)
+ {
++	void *evl_log;
++	unsigned int evl_log_size;
++	dma_addr_t evl_dma;
+ 	union gencfg_reg gencfg;
+ 	union genctrl_reg genctrl;
+ 	struct device *dev = &idxd->pdev->dev;
+@@ -822,11 +825,15 @@ static void idxd_device_evl_free(struct idxd_device *idxd)
+ 	iowrite64(0, idxd->reg_base + IDXD_EVLCFG_OFFSET);
+ 	iowrite64(0, idxd->reg_base + IDXD_EVLCFG_OFFSET + 8);
+ 
+-	dma_free_coherent(dev, evl->log_size, evl->log, evl->dma);
+ 	bitmap_free(evl->bmap);
++	evl_log = evl->log;
++	evl_log_size = evl->log_size;
++	evl_dma = evl->dma;
+ 	evl->log = NULL;
+ 	evl->size = IDXD_EVL_SIZE_MIN;
+ 	spin_unlock(&evl->lock);
++
++	dma_free_coherent(dev, evl_log_size, evl_log, evl_dma);
+ }
+ 
+ static void idxd_group_config_write(struct idxd_group *group)
+diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
+index 84a88029226fd..2c9c72d4b5a20 100644
+--- a/drivers/dma/xilinx/xdma.c
++++ b/drivers/dma/xilinx/xdma.c
+@@ -754,9 +754,9 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
+ 	if (ret)
+ 		goto out;
+ 
+-	desc->completed_desc_num += complete_desc_num;
+-
+ 	if (desc->cyclic) {
++		desc->completed_desc_num = complete_desc_num;
++
+ 		ret = regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS,
+ 				  &st);
+ 		if (ret)
+@@ -768,6 +768,8 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
+ 		goto out;
+ 	}
+ 
++	desc->completed_desc_num += complete_desc_num;
++
+ 	/*
+ 	 * if all data blocks are transferred, remove and complete the request
+ 	 */
+diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
+index 3568149b95620..f8fbf03942888 100644
+--- a/drivers/dpll/dpll_core.c
++++ b/drivers/dpll/dpll_core.c
+@@ -28,8 +28,6 @@ static u32 dpll_xa_id;
+ 	WARN_ON_ONCE(!xa_get_mark(&dpll_device_xa, (d)->id, DPLL_REGISTERED))
+ #define ASSERT_DPLL_NOT_REGISTERED(d)	\
+ 	WARN_ON_ONCE(xa_get_mark(&dpll_device_xa, (d)->id, DPLL_REGISTERED))
+-#define ASSERT_PIN_REGISTERED(p)	\
+-	WARN_ON_ONCE(!xa_get_mark(&dpll_pin_xa, (p)->id, DPLL_REGISTERED))
+ 
+ struct dpll_device_registration {
+ 	struct list_head list;
+@@ -424,6 +422,53 @@ void dpll_device_unregister(struct dpll_device *dpll,
+ }
+ EXPORT_SYMBOL_GPL(dpll_device_unregister);
+ 
++static void dpll_pin_prop_free(struct dpll_pin_properties *prop)
++{
++	kfree(prop->package_label);
++	kfree(prop->panel_label);
++	kfree(prop->board_label);
++	kfree(prop->freq_supported);
++}
++
++static int dpll_pin_prop_dup(const struct dpll_pin_properties *src,
++			     struct dpll_pin_properties *dst)
++{
++	memcpy(dst, src, sizeof(*dst));
++	if (src->freq_supported && src->freq_supported_num) {
++		size_t freq_size = src->freq_supported_num *
++				   sizeof(*src->freq_supported);
++		dst->freq_supported = kmemdup(src->freq_supported,
++					      freq_size, GFP_KERNEL);
++		if (!dst->freq_supported)
++			return -ENOMEM;
++	}
++	if (src->board_label) {
++		dst->board_label = kstrdup(src->board_label, GFP_KERNEL);
++		if (!dst->board_label)
++			goto err_board_label;
++	}
++	if (src->panel_label) {
++		dst->panel_label = kstrdup(src->panel_label, GFP_KERNEL);
++		if (!dst->panel_label)
++			goto err_panel_label;
++	}
++	if (src->package_label) {
++		dst->package_label = kstrdup(src->package_label, GFP_KERNEL);
++		if (!dst->package_label)
++			goto err_package_label;
++	}
++
++	return 0;
++
++err_package_label:
++	kfree(dst->panel_label);
++err_panel_label:
++	kfree(dst->board_label);
++err_board_label:
++	kfree(dst->freq_supported);
++	return -ENOMEM;
++}
++
+ static struct dpll_pin *
+ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
+ 	       const struct dpll_pin_properties *prop)
+@@ -440,19 +485,23 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
+ 	if (WARN_ON(prop->type < DPLL_PIN_TYPE_MUX ||
+ 		    prop->type > DPLL_PIN_TYPE_MAX)) {
+ 		ret = -EINVAL;
+-		goto err;
++		goto err_pin_prop;
+ 	}
+-	pin->prop = prop;
++	ret = dpll_pin_prop_dup(prop, &pin->prop);
++	if (ret)
++		goto err_pin_prop;
+ 	refcount_set(&pin->refcount, 1);
+ 	xa_init_flags(&pin->dpll_refs, XA_FLAGS_ALLOC);
+ 	xa_init_flags(&pin->parent_refs, XA_FLAGS_ALLOC);
+ 	ret = xa_alloc(&dpll_pin_xa, &pin->id, pin, xa_limit_16b, GFP_KERNEL);
+ 	if (ret)
+-		goto err;
++		goto err_xa_alloc;
+ 	return pin;
+-err:
++err_xa_alloc:
+ 	xa_destroy(&pin->dpll_refs);
+ 	xa_destroy(&pin->parent_refs);
++	dpll_pin_prop_free(&pin->prop);
++err_pin_prop:
+ 	kfree(pin);
+ 	return ERR_PTR(ret);
+ }
+@@ -512,6 +561,7 @@ void dpll_pin_put(struct dpll_pin *pin)
+ 		xa_destroy(&pin->dpll_refs);
+ 		xa_destroy(&pin->parent_refs);
+ 		xa_erase(&dpll_pin_xa, pin->id);
++		dpll_pin_prop_free(&pin->prop);
+ 		kfree(pin);
+ 	}
+ 	mutex_unlock(&dpll_lock);
+@@ -562,8 +612,6 @@ dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
+ 	    WARN_ON(!ops->state_on_dpll_get) ||
+ 	    WARN_ON(!ops->direction_get))
+ 		return -EINVAL;
+-	if (ASSERT_DPLL_REGISTERED(dpll))
+-		return -EINVAL;
+ 
+ 	mutex_lock(&dpll_lock);
+ 	if (WARN_ON(!(dpll->module == pin->module &&
+@@ -634,15 +682,13 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
+ 	unsigned long i, stop;
+ 	int ret;
+ 
+-	if (WARN_ON(parent->prop->type != DPLL_PIN_TYPE_MUX))
++	if (WARN_ON(parent->prop.type != DPLL_PIN_TYPE_MUX))
+ 		return -EINVAL;
+ 
+ 	if (WARN_ON(!ops) ||
+ 	    WARN_ON(!ops->state_on_pin_get) ||
+ 	    WARN_ON(!ops->direction_get))
+ 		return -EINVAL;
+-	if (ASSERT_PIN_REGISTERED(parent))
+-		return -EINVAL;
+ 
+ 	mutex_lock(&dpll_lock);
+ 	ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv);
+diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
+index 5585873c5c1b0..717f715015c74 100644
+--- a/drivers/dpll/dpll_core.h
++++ b/drivers/dpll/dpll_core.h
+@@ -44,7 +44,7 @@ struct dpll_device {
+  * @module:		module of creator
+  * @dpll_refs:		hold referencees to dplls pin was registered with
+  * @parent_refs:	hold references to parent pins pin was registered with
+- * @prop:		pointer to pin properties given by registerer
++ * @prop:		pin properties copied from the registerer
+  * @rclk_dev_name:	holds name of device when pin can recover clock from it
+  * @refcount:		refcount
+  **/
+@@ -55,7 +55,7 @@ struct dpll_pin {
+ 	struct module *module;
+ 	struct xarray dpll_refs;
+ 	struct xarray parent_refs;
+-	const struct dpll_pin_properties *prop;
++	struct dpll_pin_properties prop;
+ 	refcount_t refcount;
+ };
+ 
+diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
+index ce7cf736f0208..7cc99d627942e 100644
+--- a/drivers/dpll/dpll_netlink.c
++++ b/drivers/dpll/dpll_netlink.c
+@@ -278,17 +278,17 @@ dpll_msg_add_pin_freq(struct sk_buff *msg, struct dpll_pin *pin,
+ 	if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY, sizeof(freq), &freq,
+ 			  DPLL_A_PIN_PAD))
+ 		return -EMSGSIZE;
+-	for (fs = 0; fs < pin->prop->freq_supported_num; fs++) {
++	for (fs = 0; fs < pin->prop.freq_supported_num; fs++) {
+ 		nest = nla_nest_start(msg, DPLL_A_PIN_FREQUENCY_SUPPORTED);
+ 		if (!nest)
+ 			return -EMSGSIZE;
+-		freq = pin->prop->freq_supported[fs].min;
++		freq = pin->prop.freq_supported[fs].min;
+ 		if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY_MIN, sizeof(freq),
+ 				  &freq, DPLL_A_PIN_PAD)) {
+ 			nla_nest_cancel(msg, nest);
+ 			return -EMSGSIZE;
+ 		}
+-		freq = pin->prop->freq_supported[fs].max;
++		freq = pin->prop.freq_supported[fs].max;
+ 		if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY_MAX, sizeof(freq),
+ 				  &freq, DPLL_A_PIN_PAD)) {
+ 			nla_nest_cancel(msg, nest);
+@@ -304,9 +304,9 @@ static bool dpll_pin_is_freq_supported(struct dpll_pin *pin, u32 freq)
+ {
+ 	int fs;
+ 
+-	for (fs = 0; fs < pin->prop->freq_supported_num; fs++)
+-		if (freq >= pin->prop->freq_supported[fs].min &&
+-		    freq <= pin->prop->freq_supported[fs].max)
++	for (fs = 0; fs < pin->prop.freq_supported_num; fs++)
++		if (freq >= pin->prop.freq_supported[fs].min &&
++		    freq <= pin->prop.freq_supported[fs].max)
+ 			return true;
+ 	return false;
+ }
+@@ -396,7 +396,7 @@ static int
+ dpll_cmd_pin_get_one(struct sk_buff *msg, struct dpll_pin *pin,
+ 		     struct netlink_ext_ack *extack)
+ {
+-	const struct dpll_pin_properties *prop = pin->prop;
++	const struct dpll_pin_properties *prop = &pin->prop;
+ 	struct dpll_pin_ref *ref;
+ 	int ret;
+ 
+@@ -525,6 +525,24 @@ __dpll_device_change_ntf(struct dpll_device *dpll)
+ 	return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll);
+ }
+ 
++static bool dpll_pin_available(struct dpll_pin *pin)
++{
++	struct dpll_pin_ref *par_ref;
++	unsigned long i;
++
++	if (!xa_get_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED))
++		return false;
++	xa_for_each(&pin->parent_refs, i, par_ref)
++		if (xa_get_mark(&dpll_pin_xa, par_ref->pin->id,
++				DPLL_REGISTERED))
++			return true;
++	xa_for_each(&pin->dpll_refs, i, par_ref)
++		if (xa_get_mark(&dpll_device_xa, par_ref->dpll->id,
++				DPLL_REGISTERED))
++			return true;
++	return false;
++}
++
+ /**
+  * dpll_device_change_ntf - notify that the dpll device has been changed
+  * @dpll: registered dpll pointer
+@@ -551,7 +569,7 @@ dpll_pin_event_send(enum dpll_cmd event, struct dpll_pin *pin)
+ 	int ret = -ENOMEM;
+ 	void *hdr;
+ 
+-	if (WARN_ON(!xa_get_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED)))
++	if (!dpll_pin_available(pin))
+ 		return -ENODEV;
+ 
+ 	msg = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+@@ -689,7 +707,7 @@ dpll_pin_on_pin_state_set(struct dpll_pin *pin, u32 parent_idx,
+ 	int ret;
+ 
+ 	if (!(DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE &
+-	      pin->prop->capabilities)) {
++	      pin->prop.capabilities)) {
+ 		NL_SET_ERR_MSG(extack, "state changing is not allowed");
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -725,7 +743,7 @@ dpll_pin_state_set(struct dpll_device *dpll, struct dpll_pin *pin,
+ 	int ret;
+ 
+ 	if (!(DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE &
+-	      pin->prop->capabilities)) {
++	      pin->prop.capabilities)) {
+ 		NL_SET_ERR_MSG(extack, "state changing is not allowed");
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -752,7 +770,7 @@ dpll_pin_prio_set(struct dpll_device *dpll, struct dpll_pin *pin,
+ 	int ret;
+ 
+ 	if (!(DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE &
+-	      pin->prop->capabilities)) {
++	      pin->prop.capabilities)) {
+ 		NL_SET_ERR_MSG(extack, "prio changing is not allowed");
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -780,7 +798,7 @@ dpll_pin_direction_set(struct dpll_pin *pin, struct dpll_device *dpll,
+ 	int ret;
+ 
+ 	if (!(DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE &
+-	      pin->prop->capabilities)) {
++	      pin->prop.capabilities)) {
+ 		NL_SET_ERR_MSG(extack, "direction changing is not allowed");
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -810,8 +828,8 @@ dpll_pin_phase_adj_set(struct dpll_pin *pin, struct nlattr *phase_adj_attr,
+ 	int ret;
+ 
+ 	phase_adj = nla_get_s32(phase_adj_attr);
+-	if (phase_adj > pin->prop->phase_range.max ||
+-	    phase_adj < pin->prop->phase_range.min) {
++	if (phase_adj > pin->prop.phase_range.max ||
++	    phase_adj < pin->prop.phase_range.min) {
+ 		NL_SET_ERR_MSG_ATTR(extack, phase_adj_attr,
+ 				    "phase adjust value not supported");
+ 		return -EINVAL;
+@@ -995,7 +1013,7 @@ dpll_pin_find(u64 clock_id, struct nlattr *mod_name_attr,
+ 	unsigned long i;
+ 
+ 	xa_for_each_marked(&dpll_pin_xa, i, pin, DPLL_REGISTERED) {
+-		prop = pin->prop;
++		prop = &pin->prop;
+ 		cid_match = clock_id ? pin->clock_id == clock_id : true;
+ 		mod_match = mod_name_attr && module_name(pin->module) ?
+ 			!nla_strcmp(mod_name_attr,
+@@ -1102,6 +1120,10 @@ int dpll_nl_pin_id_get_doit(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 	pin = dpll_pin_find_from_nlattr(info);
+ 	if (!IS_ERR(pin)) {
++		if (!dpll_pin_available(pin)) {
++			nlmsg_free(msg);
++			return -ENODEV;
++		}
+ 		ret = dpll_msg_add_pin_handle(msg, pin);
+ 		if (ret) {
+ 			nlmsg_free(msg);
+@@ -1151,6 +1173,8 @@ int dpll_nl_pin_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	xa_for_each_marked_start(&dpll_pin_xa, i, pin, DPLL_REGISTERED,
+ 				 ctx->idx) {
++		if (!dpll_pin_available(pin))
++			continue;
+ 		hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+ 				  cb->nlh->nlmsg_seq,
+ 				  &dpll_nl_family, NLM_F_MULTI,
+@@ -1413,7 +1437,8 @@ int dpll_pin_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ 	}
+ 	info->user_ptr[0] = xa_load(&dpll_pin_xa,
+ 				    nla_get_u32(info->attrs[DPLL_A_PIN_ID]));
+-	if (!info->user_ptr[0]) {
++	if (!info->user_ptr[0] ||
++	    !dpll_pin_available(info->user_ptr[0])) {
+ 		NL_SET_ERR_MSG(info->extack, "pin not found");
+ 		ret = -ENODEV;
+ 		goto unlock_dev;
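
dpll_pin_available(), added in this file, replaces the bare DPLL_REGISTERED
mark test everywhere a pin is exposed to userspace (the change notification,
the id-get doit, the dumpit loop, and pre-doit lookup): a pin now counts as
available only if it is marked registered and is still reachable through at
least one registered parent pin or dpll device. The gating rides on XArray
search marks; a minimal sketch of that mechanism, assuming the registered
flag is simply one of the per-entry marks an XArray offers:

#include <linux/xarray.h>

#define OBJ_REGISTERED	XA_MARK_1

static DEFINE_XARRAY_FLAGS(obj_xa, XA_FLAGS_ALLOC);

struct my_obj { u32 id; };

/* Publish: store first, set the mark only once fully initialized. */
static int obj_publish(struct my_obj *obj)
{
	int ret = xa_alloc(&obj_xa, &obj->id, obj, xa_limit_16b, GFP_KERNEL);

	if (ret)
		return ret;
	xa_set_mark(&obj_xa, obj->id, OBJ_REGISTERED);
	return 0;
}

/* Consumers test the mark, so half-registered entries stay invisible. */
static bool obj_visible(u32 id)
{
	return xa_get_mark(&obj_xa, id, OBJ_REGISTERED);
}
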
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 6146b2927d5c5..0ea1dd6e55c40 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -733,6 +733,11 @@ static void __do_sched_recv_cb(u16 part_id, u16 vcpu, bool is_per_vcpu)
+ 	void *cb_data;
+ 
+ 	partition = xa_load(&drv_info->partition_info, part_id);
++	if (!partition) {
++		pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id);
++		return;
++	}
++
+ 	read_lock(&partition->rw_lock);
+ 	callback = partition->callback;
+ 	cb_data = partition->cb_data;
+@@ -915,6 +920,11 @@ static int ffa_sched_recv_cb_update(u16 part_id, ffa_sched_recv_cb callback,
+ 		return -EOPNOTSUPP;
+ 
+ 	partition = xa_load(&drv_info->partition_info, part_id);
++	if (!partition) {
++		pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id);
++		return -EINVAL;
++	}
++
+ 	write_lock(&partition->rw_lock);
+ 
+ 	cb_valid = !!partition->callback;
+@@ -1226,6 +1236,7 @@ static void ffa_setup_partitions(void)
+ 			ffa_device_unregister(ffa_dev);
+ 			continue;
+ 		}
++		rwlock_init(&info->rw_lock);
+ 		xa_store(&drv_info->partition_info, tpbuf->id, info, GFP_KERNEL);
+ 	}
+ 	drv_info->partition_count = count;
+@@ -1236,6 +1247,7 @@ static void ffa_setup_partitions(void)
+ 	info = kzalloc(sizeof(*info), GFP_KERNEL);
+ 	if (!info)
+ 		return;
++	rwlock_init(&info->rw_lock);
+ 	xa_store(&drv_info->partition_info, drv_info->vm_id, info, GFP_KERNEL);
+ 	drv_info->partition_count++;
+ }
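
Both FFA fixes follow one publish-safely rule: xa_load() returns NULL for IDs
that were never stored, so callbacks keyed by a partition ID coming from the
firmware must check before dereferencing, and the rwlock has to be initialized
before the info structure becomes reachable through the XArray (the missing
rwlock_init() meant the very first read_lock() operated on zeroed memory).
A condensed sketch of the corrected ordering:

#include <linux/xarray.h>
#include <linux/rwlock.h>
#include <linux/slab.h>

struct part_info {
	rwlock_t rw_lock;
	void (*callback)(void *cb_data);
	void *cb_data;
};

static DEFINE_XARRAY(part_xa);

static int part_add(u16 id)
{
	struct part_info *info = kzalloc(sizeof(*info), GFP_KERNEL);

	if (!info)
		return -ENOMEM;
	rwlock_init(&info->rw_lock);		/* init before publishing */
	return xa_err(xa_store(&part_xa, id, info, GFP_KERNEL));
}

static void part_dispatch(u16 id)
{
	struct part_info *info = xa_load(&part_xa, id);

	if (!info)				/* unknown partition ID */
		return;
	read_lock(&info->rw_lock);
	if (info->callback)
		info->callback(info->cb_data);
	read_unlock(&info->rw_lock);
}
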
+diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c
+index 42b81c181d687..96b4f0694fed0 100644
+--- a/drivers/firmware/arm_scmi/clock.c
++++ b/drivers/firmware/arm_scmi/clock.c
+@@ -951,8 +951,7 @@ static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph)
+ 			scmi_clock_describe_rates_get(ph, clkid, clk);
+ 	}
+ 
+-	if (PROTOCOL_REV_MAJOR(version) >= 0x2 &&
+-	    PROTOCOL_REV_MINOR(version) >= 0x1) {
++	if (PROTOCOL_REV_MAJOR(version) >= 0x3) {
+ 		cinfo->clock_config_set = scmi_clock_config_set_v2;
+ 		cinfo->clock_config_get = scmi_clock_config_get_v2;
+ 	} else {
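
The dropped compound check is the actual bug: requiring MAJOR >= 0x2 &&
MINOR >= 0x1 can never match an x.0 release, so v3.0 firmware that does
implement the v2 config interface fell back to the legacy handlers. Testing
the major version alone is monotonic. A worked example, assuming the usual
SCMI packing of major in the upper and minor in the lower 16 bits of the
version word:

#define REV_MAJOR(v)	(((v) >> 16) & 0xffff)	/* assumed packing */
#define REV_MINOR(v)	((v) & 0xffff)

/* clock protocol v3.0 reports version = 0x30000:
 *   old: REV_MAJOR(0x30000) >= 0x2 && REV_MINOR(0x30000) >= 0x1
 *        -> 3 >= 2 && 0 >= 1 -> false   (v3.0 wrongly demoted)
 *   new: REV_MAJOR(0x30000) >= 0x3 -> true
 */
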
+diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
+index c46dc5215af7a..00b165d1f502d 100644
+--- a/drivers/firmware/arm_scmi/common.h
++++ b/drivers/firmware/arm_scmi/common.h
+@@ -314,6 +314,7 @@ void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+ void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem);
+ bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
+ 		     struct scmi_xfer *xfer);
++bool shmem_channel_free(struct scmi_shared_mem __iomem *shmem);
+ 
+ /* declarations for message passing transports */
+ struct scmi_msg_payld;
+diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c
+index 19246ed1f01ff..b8d470417e8f9 100644
+--- a/drivers/firmware/arm_scmi/mailbox.c
++++ b/drivers/firmware/arm_scmi/mailbox.c
+@@ -45,6 +45,20 @@ static void rx_callback(struct mbox_client *cl, void *m)
+ {
+ 	struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl);
+ 
++	/*
++	 * An A2P IRQ is NOT valid when received while the platform still has
++	 * the ownership of the channel, because the platform at first releases
++	 * the SMT channel and then sends the completion interrupt.
++	 *
++	 * This addresses a possible race condition in which a spurious IRQ from
++	 * a previous timed-out reply which arrived late could be wrongly
++	 * associated with the next pending transaction.
++	 */
++	if (cl->knows_txdone && !shmem_channel_free(smbox->shmem)) {
++		dev_warn(smbox->cinfo->dev, "Ignoring spurious A2P IRQ !\n");
++		return;
++	}
++
+ 	scmi_rx_callback(smbox->cinfo, shmem_read_header(smbox->shmem), NULL);
+ }
+ 
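
The guard encodes the SMT ownership protocol spelled out in the new comment:
the platform marks the shared-memory channel free before raising the
completion interrupt, so an interrupt that arrives while the channel is still
busy can only be left over from an earlier, timed-out transfer.
shmem_channel_free() itself is added by the shmem.c hunk further down; its
essence is a single flag test, roughly:

/* Mirrors the shmem.c hunk below in spirit. */
static bool channel_free(u32 __iomem *channel_status)
{
	return ioread32(channel_status) & SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE;
}

Only clients that track tx completion themselves can observe this race,
hence the cl->knows_txdone qualifier before the IRQ is dropped.
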
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index e11555de99ab8..d26eca37dc143 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -347,8 +347,8 @@ process_response_opp(struct scmi_opp *opp, unsigned int loop_idx,
+ }
+ 
+ static inline void
+-process_response_opp_v4(struct perf_dom_info *dom, struct scmi_opp *opp,
+-			unsigned int loop_idx,
++process_response_opp_v4(struct device *dev, struct perf_dom_info *dom,
++			struct scmi_opp *opp, unsigned int loop_idx,
+ 			const struct scmi_msg_resp_perf_describe_levels_v4 *r)
+ {
+ 	opp->perf = le32_to_cpu(r->opp[loop_idx].perf_val);
+@@ -359,10 +359,23 @@ process_response_opp_v4(struct perf_dom_info *dom, struct scmi_opp *opp,
+ 	/* Note that PERF v4 reports always five 32-bit words */
+ 	opp->indicative_freq = le32_to_cpu(r->opp[loop_idx].indicative_freq);
+ 	if (dom->level_indexing_mode) {
++		int ret;
++
+ 		opp->level_index = le32_to_cpu(r->opp[loop_idx].level_index);
+ 
+-		xa_store(&dom->opps_by_idx, opp->level_index, opp, GFP_KERNEL);
+-		xa_store(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
++		ret = xa_insert(&dom->opps_by_idx, opp->level_index, opp,
++				GFP_KERNEL);
++		if (ret)
++			dev_warn(dev,
++				 "Failed to add opps_by_idx at %d - ret:%d\n",
++				 opp->level_index, ret);
++
++		ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
++		if (ret)
++			dev_warn(dev,
++				 "Failed to add opps_by_lvl at %d - ret:%d\n",
++				 opp->perf, ret);
++
+ 		hash_add(dom->opps_by_freq, &opp->hash, opp->indicative_freq);
+ 	}
+ }
+@@ -379,7 +392,7 @@ iter_perf_levels_process_response(const struct scmi_protocol_handle *ph,
+ 	if (PROTOCOL_REV_MAJOR(p->version) <= 0x3)
+ 		process_response_opp(opp, st->loop_idx, response);
+ 	else
+-		process_response_opp_v4(p->perf_dom, opp, st->loop_idx,
++		process_response_opp_v4(ph->dev, p->perf_dom, opp, st->loop_idx,
+ 					response);
+ 	p->perf_dom->opp_count++;
+ 
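
The xa_store() -> xa_insert() switch here is semantic, not cosmetic:
xa_store() silently replaces whatever already sits at the index and hands back
the old entry, while xa_insert() refuses to overwrite and returns -EBUSY,
which is what is wanted when duplicate level indices coming from firmware
should be reported (the dev_warn above) rather than corrupt the lookup
tables. A minimal contrast:

#include <linux/xarray.h>

static DEFINE_XARRAY(demo_xa);

static int contrast(void)
{
	static int a = 1, b = 2;
	void *old;
	int ret;

	xa_store(&demo_xa, 5, &a, GFP_KERNEL);

	old = xa_store(&demo_xa, 5, &b, GFP_KERNEL);  /* replaces; old == &a */
	ret = xa_insert(&demo_xa, 5, &b, GFP_KERNEL); /* occupied: -EBUSY   */

	return ret;
}
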
+diff --git a/drivers/firmware/arm_scmi/raw_mode.c b/drivers/firmware/arm_scmi/raw_mode.c
+index 0493aa3c12bf5..3505735185033 100644
+--- a/drivers/firmware/arm_scmi/raw_mode.c
++++ b/drivers/firmware/arm_scmi/raw_mode.c
+@@ -1111,7 +1111,6 @@ static int scmi_raw_mode_setup(struct scmi_raw_mode_info *raw,
+ 		int i;
+ 
+ 		for (i = 0; i < num_chans; i++) {
+-			void *xret;
+ 			struct scmi_raw_queue *q;
+ 
+ 			q = scmi_raw_queue_init(raw);
+@@ -1120,13 +1119,12 @@ static int scmi_raw_mode_setup(struct scmi_raw_mode_info *raw,
+ 				goto err_xa;
+ 			}
+ 
+-			xret = xa_store(&raw->chans_q, channels[i], q,
++			ret = xa_insert(&raw->chans_q, channels[i], q,
+ 					GFP_KERNEL);
+-			if (xa_err(xret)) {
++			if (ret) {
+ 				dev_err(dev,
+ 					"Fail to allocate Raw queue 0x%02X\n",
+ 					channels[i]);
+-				ret = xa_err(xret);
+ 				goto err_xa;
+ 			}
+ 		}
+@@ -1322,6 +1320,12 @@ void scmi_raw_message_report(void *r, struct scmi_xfer *xfer,
+ 	dev = raw->handle->dev;
+ 	q = scmi_raw_queue_select(raw, idx,
+ 				  SCMI_XFER_IS_CHAN_SET(xfer) ? chan_id : 0);
++	if (!q) {
++		dev_warn(dev,
++			 "RAW[%d] - NO queue for chan 0x%X. Dropping report.\n",
++			 idx, chan_id);
++		return;
++	}
+ 
+ 	/*
+ 	 * Grab the msg_q_lock upfront to avoid a possible race between
+diff --git a/drivers/firmware/arm_scmi/shmem.c b/drivers/firmware/arm_scmi/shmem.c
+index 87b4f4d35f062..517d52fb3bcbb 100644
+--- a/drivers/firmware/arm_scmi/shmem.c
++++ b/drivers/firmware/arm_scmi/shmem.c
+@@ -122,3 +122,9 @@ bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
+ 		(SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR |
+ 		 SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
+ }
++
++bool shmem_channel_free(struct scmi_shared_mem __iomem *shmem)
++{
++	return (ioread32(&shmem->channel_status) &
++			SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
++}
+diff --git a/drivers/firmware/sysfb.c b/drivers/firmware/sysfb.c
+index 82fcfd29bc4d2..3c197db42c9d9 100644
+--- a/drivers/firmware/sysfb.c
++++ b/drivers/firmware/sysfb.c
+@@ -128,4 +128,4 @@ unlock_mutex:
+ }
+ 
+ /* must execute after PCI subsystem for EFI quirks */
+-subsys_initcall_sync(sysfb_init);
++device_initcall(sysfb_init);
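
Demoting sysfb_init from subsys_initcall_sync to device_initcall moves it two
levels later in the boot sequence, past the subsys-level PCI enumeration the
comment says it must follow:

/* Built-in initcall levels, earliest to latest:
 *   core -> postcore -> arch -> subsys -> subsys_sync -> fs -> device -> late
 * module_init() on built-in code is an alias for device_initcall(), so sysfb
 * now initializes alongside ordinary drivers, after PCI/EFI quirk handling.
 */
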
+diff --git a/drivers/gpio/gpio-eic-sprd.c b/drivers/gpio/gpio-eic-sprd.c
+index be7f2fa5aa7b6..806b88d8dfb7b 100644
+--- a/drivers/gpio/gpio-eic-sprd.c
++++ b/drivers/gpio/gpio-eic-sprd.c
+@@ -330,20 +330,27 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 		switch (flow_type) {
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1);
+ 			break;
+ 		case IRQ_TYPE_EDGE_RISING:
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			state = sprd_eic_get(chip, offset);
+-			if (state)
++			if (state) {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_DBNC_IEV, 0);
+-			else
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_DBNC_IC, 1);
++			} else {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_DBNC_IEV, 1);
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_DBNC_IC, 1);
++			}
+ 			break;
+ 		default:
+ 			return -ENOTSUPP;
+@@ -355,20 +362,27 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 		switch (flow_type) {
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1);
+ 			break;
+ 		case IRQ_TYPE_EDGE_RISING:
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			state = sprd_eic_get(chip, offset);
+-			if (state)
++			if (state) {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_LATCH_INTPOL, 0);
+-			else
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_LATCH_INTCLR, 1);
++			} else {
+ 				sprd_eic_update(chip, offset,
+ 						SPRD_EIC_LATCH_INTPOL, 1);
++				sprd_eic_update(chip, offset,
++						SPRD_EIC_LATCH_INTCLR, 1);
++			}
+ 			break;
+ 		default:
+ 			return -ENOTSUPP;
+@@ -382,29 +396,34 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		default:
+@@ -417,29 +436,34 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_FALLING:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_BOTH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_HIGH:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		case IRQ_TYPE_LEVEL_LOW:
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0);
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
+ 			irq_set_handler_locked(data, handle_level_irq);
+ 			break;
+ 		default:
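
Every trigger-type branch now finishes by writing 1 to the matching
interrupt-clear field (DBNC_IC, LATCH_INTCLR, ASYNC_INTCLR or SYNC_INTCLR).
Reprogramming polarity on these detect blocks can latch a phantom event
against the old configuration; clearing status right after the mode/polarity
write discards it before the line is unmasked. All of the repeated pairs
reduce to one pattern, sketched for the async block:

/* Sketch of the repeated sequence; sprd_eic_update() is the driver's own
 * register-field helper used throughout the hunk above.                  */
static void eic_async_set_edge(struct sprd_eic *chip, u32 offset, bool rising)
{
	sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, rising ? 1 : 0);
	/* write-1-to-clear: drop any event latched under the old polarity */
	sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
}
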
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 88066826d8e5b..cd3e9657cc36d 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -1651,6 +1651,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ 			.ignore_interrupt = "INT33FC:00@3",
+ 		},
+ 	},
++	{
++		/*
++		 * Spurious wakeups from TP_ATTN# pin
++		 * Found in BIOS 0.35
++		 * https://gitlab.freedesktop.org/drm/amd/-/issues/3073
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "GPD"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "G1619-04"),
++		},
++		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++			.ignore_wake = "PNP0C50:00@8",
++		},
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 9d92ca1576771..50f57d4dfd8ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -757,6 +757,7 @@ struct amdgpu_mqd_prop {
+ 	uint64_t eop_gpu_addr;
+ 	uint32_t hqd_pipe_priority;
+ 	uint32_t hqd_queue_priority;
++	bool allow_tunneling;
+ 	bool hqd_active;
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 75dc58470393f..3d126f2967a58 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -684,10 +684,8 @@ err:
+ void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle)
+ {
+ 	enum amd_powergating_state state = idle ? AMD_PG_STATE_GATE : AMD_PG_STATE_UNGATE;
+-	/* Temporary workaround to fix issues observed in some
+-	 * compute applications when GFXOFF is enabled on GFX11.
+-	 */
+-	if (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11) {
++	if (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11 &&
++	    ((adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK) <= 64)) {
+ 		pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled");
+ 		amdgpu_gfx_off_ctrl(adev, idle);
+ 	} else if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 9) &&
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 0431eafa86b53..c7d60dd0fb975 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -1963,8 +1963,6 @@ static int amdgpu_discovery_set_gc_ip_blocks(struct amdgpu_device *adev)
+ 		amdgpu_device_ip_block_add(adev, &gfx_v9_0_ip_block);
+ 		break;
+ 	case IP_VERSION(9, 4, 3):
+-		if (!amdgpu_exp_hw_support)
+-			return -EINVAL;
+ 		amdgpu_device_ip_block_add(adev, &gfx_v9_4_3_ip_block);
+ 		break;
+ 	case IP_VERSION(10, 1, 10):
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index f4174eebe993c..a7ad77ed09ca4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -115,9 +115,10 @@
+ * - 3.54.0 - Add AMDGPU_CTX_QUERY2_FLAGS_RESET_IN_PROGRESS support
+  * - 3.55.0 - Add AMDGPU_INFO_GPUVM_FAULT query
+  * - 3.56.0 - Update IB start address and size alignment for decode and encode
++ * - 3.57.0 - Compute tunneling on GFX10+
+  */
+ #define KMS_DRIVER_MAJOR	3
+-#define KMS_DRIVER_MINOR	56
++#define KMS_DRIVER_MINOR	57
+ #define KMS_DRIVER_PATCHLEVEL	0
+ 
+ /*
+@@ -2265,8 +2266,6 @@ retry_init:
+ 
+ 		pci_wake_from_d3(pdev, TRUE);
+ 
+-		pci_wake_from_d3(pdev, TRUE);
+-
+ 		/*
+ 		 * For runpm implemented via BACO, PMFW will handle the
+ 		 * timing for BACO in and out:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+index 73b8cca35bab8..c623e23049d1d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+@@ -121,6 +121,7 @@ int amdgpu_gart_table_ram_alloc(struct amdgpu_device *adev)
+ 	struct amdgpu_bo_param bp;
+ 	dma_addr_t dma_addr;
+ 	struct page *p;
++	unsigned long x;
+ 	int ret;
+ 
+ 	if (adev->gart.bo != NULL)
+@@ -130,6 +131,10 @@ int amdgpu_gart_table_ram_alloc(struct amdgpu_device *adev)
+ 	if (!p)
+ 		return -ENOMEM;
+ 
++	/* assign pages to this device */
++	for (x = 0; x < (1UL << order); x++)
++		p[x].mapping = adev->mman.bdev.dev_mapping;
++
+ 	/* If the hardware does not support UTCL2 snooping of the CPU caches
+ 	 * then set_memory_wc() could be used as a workaround to mark the pages
+ 	 * as write combine memory.
+@@ -223,6 +228,7 @@ void amdgpu_gart_table_ram_free(struct amdgpu_device *adev)
+ 	unsigned int order = get_order(adev->gart.table_size);
+ 	struct sg_table *sg = adev->gart.bo->tbo.sg;
+ 	struct page *p;
++	unsigned long x;
+ 	int ret;
+ 
+ 	ret = amdgpu_bo_reserve(adev->gart.bo, false);
+@@ -234,6 +240,8 @@ void amdgpu_gart_table_ram_free(struct amdgpu_device *adev)
+ 	sg_free_table(sg);
+ 	kfree(sg);
+ 	p = virt_to_page(adev->gart.ptr);
++	for (x = 0; x < (1UL << order); x++)
++		p[x].mapping = NULL;
+ 	__free_pages(p, order);
+ 
+ 	adev->gart.ptr = NULL;
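
The alloc and free paths stay symmetric: every page backing the system-memory
GART table gets page->mapping pointed at the device's bdev mapping right after
allocation, and the same loop clears it again before __free_pages(), since
returning a page with a stale mapping trips page sanity checks. The pairing,
condensed (order and the mapping source taken as given):

static struct page *gart_pages_get(struct address_space *dev_mapping,
				   unsigned int order)
{
	struct page *p = alloc_pages(GFP_KERNEL, order);
	unsigned long x;

	if (!p)
		return NULL;
	for (x = 0; x < (1UL << order); x++)
		p[x].mapping = dev_mapping;	/* tag: owned by this device */
	return p;
}

static void gart_pages_put(struct page *p, unsigned int order)
{
	unsigned long x;

	for (x = 0; x < (1UL << order); x++)
		p[x].mapping = NULL;		/* untag before freeing */
	__free_pages(p, order);
}
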
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 6a80d3ec887e9..45424ebf96814 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -642,6 +642,10 @@ static void amdgpu_ring_to_mqd_prop(struct amdgpu_ring *ring,
+ 				    struct amdgpu_mqd_prop *prop)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
++	bool is_high_prio_compute = ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE &&
++				    amdgpu_gfx_is_high_priority_compute_queue(adev, ring);
++	bool is_high_prio_gfx = ring->funcs->type == AMDGPU_RING_TYPE_GFX &&
++				amdgpu_gfx_is_high_priority_graphics_queue(adev, ring);
+ 
+ 	memset(prop, 0, sizeof(*prop));
+ 
+@@ -659,10 +663,8 @@ static void amdgpu_ring_to_mqd_prop(struct amdgpu_ring *ring,
+ 	 */
+ 	prop->hqd_active = ring->funcs->type == AMDGPU_RING_TYPE_KIQ;
+ 
+-	if ((ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE &&
+-	     amdgpu_gfx_is_high_priority_compute_queue(adev, ring)) ||
+-	    (ring->funcs->type == AMDGPU_RING_TYPE_GFX &&
+-	     amdgpu_gfx_is_high_priority_graphics_queue(adev, ring))) {
++	prop->allow_tunneling = is_high_prio_compute;
++	if (is_high_prio_compute || is_high_prio_gfx) {
+ 		prop->hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_HIGH;
+ 		prop->hqd_queue_priority = AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 08916538a615f..8db880244324f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -221,8 +221,23 @@ static struct attribute *amdgpu_vram_mgr_attributes[] = {
+ 	NULL
+ };
+ 
++static umode_t amdgpu_vram_attrs_is_visible(struct kobject *kobj,
++					    struct attribute *attr, int i)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct drm_device *ddev = dev_get_drvdata(dev);
++	struct amdgpu_device *adev = drm_to_adev(ddev);
++
++	if (attr == &dev_attr_mem_info_vram_vendor.attr &&
++	    !adev->gmc.vram_vendor)
++		return 0;
++
++	return attr->mode;
++}
++
+ const struct attribute_group amdgpu_vram_mgr_attr_group = {
+-	.attrs = amdgpu_vram_mgr_attributes
++	.attrs = amdgpu_vram_mgr_attributes,
++	.is_visible = amdgpu_vram_attrs_is_visible
+ };
+ 
+ /**
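
The is_visible callback is the stock sysfs mechanism for publishing an
attribute group conditionally: it runs once per attribute when the group is
created, returning 0 hides the file entirely and returning attr->mode keeps
its declared permissions. Here mem_info_vram_vendor simply never appears on
boards where the vendor field was never populated, instead of reporting a
bogus value. The shape of the mechanism in miniature (feature_x is
hypothetical, and drvdata stands in for a real capability check):

static ssize_t feature_x_show(struct device *dev,
			      struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%d\n", 1);
}
static DEVICE_ATTR_RO(feature_x);

static struct attribute *demo_attrs[] = {
	&dev_attr_feature_x.attr,
	NULL
};

static umode_t demo_attrs_visible(struct kobject *kobj,
				  struct attribute *attr, int i)
{
	struct device *dev = kobj_to_dev(kobj);

	if (attr == &dev_attr_feature_x.attr && !dev_get_drvdata(dev))
		return 0;		/* hide: the file is never created */
	return attr->mode;		/* keep declared permissions */
}

static const struct attribute_group demo_group = {
	.attrs = demo_attrs,
	.is_visible = demo_attrs_visible,
};
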
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index c8a3bf01743f6..ecb622b7f9709 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -3996,16 +3996,13 @@ static int gfx_v10_0_init_microcode(struct amdgpu_device *adev)
+ 
+ 	if (!amdgpu_sriov_vf(adev)) {
+ 		snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_rlc.bin", ucode_prefix);
+-		err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, fw_name);
+-		/* don't check this.  There are apparently firmwares in the wild with
+-		 * incorrect size in the header
+-		 */
+-		if (err == -ENODEV)
+-			goto out;
++		err = request_firmware(&adev->gfx.rlc_fw, fw_name, adev->dev);
+ 		if (err)
+-			dev_dbg(adev->dev,
+-				"gfx10: amdgpu_ucode_request() failed \"%s\"\n",
+-				fw_name);
++			goto out;
++
++		/* don't validate this firmware. There are apparently firmwares
++		 * in the wild with incorrect size in the header
++		 */
+ 		rlc_hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+ 		version_major = le16_to_cpu(rlc_hdr->header.header_version_major);
+ 		version_minor = le16_to_cpu(rlc_hdr->header.header_version_minor);
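
The RLC load deliberately drops from amdgpu_ucode_request() back to the raw
request_firmware(): the wrapper validates header sizes that some shipped RLC
binaries get wrong, while the raw call only fetches bytes and leaves
interpretation to the caller. Its canonical error handling, as a generic
sketch rather than the amdgpu code path:

static int load_blob(struct device *dev, const struct firmware **fw_out)
{
	const struct firmware *fw;
	int err;

	err = request_firmware(&fw, "vendor/blob.bin", dev);
	if (err)
		return err;		/* e.g. -ENOENT; fw is untouched */

	/* fw->data / fw->size stay valid until release_firmware(). The
	 * caller decides how strictly to validate the contents.        */
	*fw_out = fw;
	return 0;
}
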
+@@ -6592,8 +6589,9 @@ static int gfx_v10_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ #ifdef __BIG_ENDIAN
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
+ #endif
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
++	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
++	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH,
++			    prop->allow_tunneling);
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+ 	mqd->cp_hqd_pq_control = tmp;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 8ed4a6fb147a2..806a8cf90487e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -114,7 +114,7 @@ static const struct soc15_reg_golden golden_settings_gc_11_5_0[] = {
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_ADDR_MATCH_MASK, 0xffffffff, 0xfffffff3),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL, 0xffffffff, 0xf37fff3f),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL3, 0xfffffffb, 0x00f40188),
+-	SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL4, 0xf0ffffff, 0x8000b007),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL4, 0xf0ffffff, 0x80009007),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, regPA_CL_ENHANCE, 0xf1ffffff, 0x00880007),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, regPC_CONFIG_CNTL_1, 0xffffffff, 0x00010000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, regTA_CNTL_AUX, 0xf7f7ffff, 0x01030000),
+@@ -3838,8 +3838,9 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ 			    (order_base_2(prop->queue_size / 4) - 1));
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+ 			    (order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1));
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
++	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
++	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH,
++			    prop->allow_tunneling);
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+ 	mqd->cp_hqd_pq_control = tmp;
+@@ -6328,6 +6329,9 @@ static int gfx_v11_0_get_cu_info(struct amdgpu_device *adev,
+ 	mutex_lock(&adev->grbm_idx_mutex);
+ 	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+ 		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
++			bitmap = i * adev->gfx.config.max_sh_per_se + j;
++			if (!((gfx_v11_0_get_sa_active_bitmap(adev) >> bitmap) & 1))
++				continue;
+ 			mask = 1;
+ 			counter = 0;
+ 			gfx_v11_0_select_se_sh(adev, i, j, 0xffffffff, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 2ac5820e9c924..77e625f24cd07 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1950,7 +1950,8 @@ static void gmc_v9_4_3_init_vram_info(struct amdgpu_device *adev)
+ 	static const u32 regBIF_BIOS_SCRATCH_4 = 0x50;
+ 	u32 vram_info;
+ 
+-	if (!amdgpu_sriov_vf(adev)) {
++	/* Only for dGPU, vendor information is reliable */
++	if (!amdgpu_sriov_vf(adev) && !(adev->flags & AMD_IS_APU)) {
+ 		vram_info = RREG32(regBIF_BIOS_SCRATCH_4);
+ 		adev->gmc.vram_vendor = vram_info & 0xF;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+index 8b7fed9135269..22cbfa1bdaddb 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+@@ -170,6 +170,7 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+ 	m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
+ 	m->cp_hqd_pq_control |=
+ 			ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
++	m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ 	pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+ 
+ 	m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+index 15277f1d5cf0a..d722cbd317834 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+@@ -224,6 +224,7 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+ 	m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
+ 	m->cp_hqd_pq_control |=
+ 			ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
++	m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ 	pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+ 
+ 	m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index a9bd020b165a1..9dbbaeb8c6cf1 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2656,6 +2656,7 @@ static int dm_suspend(void *handle)
+ 	hpd_rx_irq_work_suspend(dm);
+ 
+ 	dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);
++	dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D3);
+ 
+ 	return 0;
+ }
+@@ -2824,7 +2825,7 @@ static int dm_resume(void *handle)
+ 	bool need_hotplug = false;
+ 
+ 	if (dm->dc->caps.ips_support) {
+-		dc_dmub_srv_exit_low_power_state(dm->dc);
++		dc_dmub_srv_apply_idle_power_optimizations(dm->dc, false);
+ 	}
+ 
+ 	if (amdgpu_in_reset(adev)) {
+@@ -2851,6 +2852,7 @@ static int dm_resume(void *handle)
+ 		if (r)
+ 			DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r);
+ 
++		dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D0);
+ 		dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
+ 
+ 		dc_resume(dm->dc);
+@@ -2901,6 +2903,7 @@ static int dm_resume(void *handle)
+ 	}
+ 
+ 	/* power on hardware */
++	dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D0);
+ 	dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
+ 
+ 	/* program HPD filter */
+@@ -8768,7 +8771,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 			if (new_con_state->crtc &&
+ 				new_con_state->crtc->state->active &&
+ 				drm_atomic_crtc_needs_modeset(new_con_state->crtc->state)) {
+-				dc_dmub_srv_exit_low_power_state(dm->dc);
++				dc_dmub_srv_apply_idle_power_optimizations(dm->dc, false);
+ 				break;
+ 			}
+ 		}
+@@ -10617,7 +10620,7 @@ static bool dm_edid_parser_send_cea(struct amdgpu_display_manager *dm,
+ 	input->cea_total_length = total_length;
+ 	memcpy(input->payload, data, length);
+ 
+-	res = dm_execute_dmub_cmd(dm->dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY);
++	res = dc_wake_and_execute_dmub_cmd(dm->dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY);
+ 	if (!res) {
+ 		DRM_ERROR("EDID CEA parser failed\n");
+ 		return false;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 13a177d343762..45c972f2630d8 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -3647,12 +3647,16 @@ static int capabilities_show(struct seq_file *m, void *unused)
+ 	bool mall_supported = dc->caps.mall_size_total;
+ 	bool subvp_supported = dc->caps.subvp_fw_processing_delay_us;
+ 	unsigned int mall_in_use = false;
+-	unsigned int subvp_in_use = dc->cap_funcs.get_subvp_en(dc, dc->current_state);
++	unsigned int subvp_in_use = false;
++
+ 	struct hubbub *hubbub = dc->res_pool->hubbub;
+ 
+ 	if (hubbub->funcs->get_mall_en)
+ 		hubbub->funcs->get_mall_en(hubbub, &mall_in_use);
+ 
++	if (dc->cap_funcs.get_subvp_en)
++		subvp_in_use = dc->cap_funcs.get_subvp_en(dc, dc->current_state);
++
+ 	seq_printf(m, "mall supported: %s, enabled: %s\n",
+ 			   mall_supported ? "yes" : "no", mall_in_use ? "yes" : "no");
+ 	seq_printf(m, "sub-viewport supported: %s, enabled: %s\n",
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index aac98f93545a2..4fad9c478c8aa 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -965,6 +965,11 @@ int dm_helper_dmub_aux_transfer_sync(
+ 		struct aux_payload *payload,
+ 		enum aux_return_code_type *operation_result)
+ {
++	if (!link->hpd_status) {
++		*operation_result = AUX_RET_ERROR_HPD_DISCON;
++		return -1;
++	}
++
+ 	return amdgpu_dm_process_dmub_aux_transfer_sync(ctx, link->link_index, payload,
+ 			operation_result);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+index ab0adabf9dd4c..293a919d605d1 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+@@ -123,7 +123,7 @@ static void encoder_control_dmcub(
+ 		sizeof(cmd.digx_encoder_control.header);
+ 	cmd.digx_encoder_control.encoder_control.dig.stream_param = *dig;
+ 
+-	dm_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static enum bp_result encoder_control_digx_v1_5(
+@@ -259,7 +259,7 @@ static void transmitter_control_dmcub(
+ 		sizeof(cmd.dig1_transmitter_control.header);
+ 	cmd.dig1_transmitter_control.transmitter_control.dig = *dig;
+ 
+-	dm_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static enum bp_result transmitter_control_v1_6(
+@@ -321,7 +321,7 @@ static void transmitter_control_dmcub_v1_7(
+ 		sizeof(cmd.dig1_transmitter_control.header);
+ 	cmd.dig1_transmitter_control.transmitter_control.dig_v1_7 = *dig;
+ 
+-	dm_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static enum bp_result transmitter_control_v1_7(
+@@ -429,7 +429,7 @@ static void set_pixel_clock_dmcub(
+ 		sizeof(cmd.set_pixel_clock.header);
+ 	cmd.set_pixel_clock.pixel_clock.clk = *clk;
+ 
+-	dm_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static enum bp_result set_pixel_clock_v7(
+@@ -796,7 +796,7 @@ static void enable_disp_power_gating_dmcub(
+ 		sizeof(cmd.enable_disp_power_gating.header);
+ 	cmd.enable_disp_power_gating.power_gating.pwr = *pwr;
+ 
+-	dm_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static enum bp_result enable_disp_power_gating_v2_1(
+@@ -1006,7 +1006,7 @@ static void enable_lvtma_control_dmcub(
+ 			pwrseq_instance;
+ 	cmd.lvtma_control.data.bypass_panel_control_wait =
+ 			bypass_panel_control_wait;
+-	dm_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmcub->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static enum bp_result enable_lvtma_control(
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+index 3db4ef564b997..ce1386e22576e 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+@@ -253,7 +253,7 @@ void dcn31_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	cmd.notify_clocks.clocks.dispclk_khz = clk_mgr_base->clks.dispclk_khz;
+ 	cmd.notify_clocks.clocks.dppclk_khz = clk_mgr_base->clks.dppclk_khz;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static int get_vco_frequency_from_reg(struct clk_mgr_internal *clk_mgr)
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+index 7326b75658461..59c2a3545db34 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+@@ -131,30 +131,27 @@ static int dcn314_get_active_display_cnt_wa(
+ 	return display_count;
+ }
+ 
+-static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
++static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context,
++				  bool safe_to_lower, bool disable)
+ {
+ 	struct dc *dc = clk_mgr_base->ctx->dc;
+ 	int i;
+ 
+ 	for (i = 0; i < dc->res_pool->pipe_count; ++i) {
+-		struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
++		struct pipe_ctx *pipe = safe_to_lower
++			? &context->res_ctx.pipe_ctx[i]
++			: &dc->current_state->res_ctx.pipe_ctx[i];
+ 
+ 		if (pipe->top_pipe || pipe->prev_odm_pipe)
+ 			continue;
+ 		if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) {
+-			struct stream_encoder *stream_enc = pipe->stream_res.stream_enc;
+-
+ 			if (disable) {
+-				if (stream_enc && stream_enc->funcs->disable_fifo)
+-					pipe->stream_res.stream_enc->funcs->disable_fifo(stream_enc);
++				if (pipe->stream_res.tg && pipe->stream_res.tg->funcs->immediate_disable_crtc)
++					pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
+ 
+-				pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
+ 				reset_sync_context_for_pipe(dc, context, i);
+ 			} else {
+ 				pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
+-
+-				if (stream_enc && stream_enc->funcs->enable_fifo)
+-					pipe->stream_res.stream_enc->funcs->enable_fifo(stream_enc);
+ 			}
+ 		}
+ 	}
+@@ -252,11 +249,11 @@ void dcn314_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	}
+ 
+ 	if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
+-		dcn314_disable_otg_wa(clk_mgr_base, context, true);
++		dcn314_disable_otg_wa(clk_mgr_base, context, safe_to_lower, true);
+ 
+ 		clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+ 		dcn314_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
+-		dcn314_disable_otg_wa(clk_mgr_base, context, false);
++		dcn314_disable_otg_wa(clk_mgr_base, context, safe_to_lower, false);
+ 
+ 		update_dispclk = true;
+ 	}
+@@ -284,7 +281,7 @@ void dcn314_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	cmd.notify_clocks.clocks.dispclk_khz = clk_mgr_base->clks.dispclk_khz;
+ 	cmd.notify_clocks.clocks.dppclk_khz = clk_mgr_base->clks.dppclk_khz;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static int get_vco_frequency_from_reg(struct clk_mgr_internal *clk_mgr)
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+index 8776055bbeaae..644da46373209 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+@@ -232,7 +232,7 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	cmd.notify_clocks.clocks.dispclk_khz = clk_mgr_base->clks.dispclk_khz;
+ 	cmd.notify_clocks.clocks.dppclk_khz = clk_mgr_base->clks.dppclk_khz;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static void dcn315_dump_clk_registers(struct clk_state_registers_and_bypass *regs_and_bypass,
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
+index 09151cc56ce4f..12f3e8aa46d8d 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
+@@ -239,7 +239,7 @@ static void dcn316_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	cmd.notify_clocks.clocks.dispclk_khz = clk_mgr_base->clks.dispclk_khz;
+ 	cmd.notify_clocks.clocks.dppclk_khz = clk_mgr_base->clks.dppclk_khz;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static void dcn316_dump_clk_registers(struct clk_state_registers_and_bypass *regs_and_bypass,
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index d5fde7d23fbf8..45ede6440a79f 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -349,7 +349,7 @@ void dcn35_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	cmd.notify_clocks.clocks.dispclk_khz = clk_mgr_base->clks.dispclk_khz;
+ 	cmd.notify_clocks.clocks.dppclk_khz = clk_mgr_base->clks.dppclk_khz;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static int get_vco_frequency_from_reg(struct clk_mgr_internal *clk_mgr)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 5c11852066459..bc098098345cf 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -519,7 +519,7 @@ dc_stream_forward_dmub_crc_window(struct dc_dmub_srv *dmub_srv,
+ 		cmd.secure_display.roi_info.y_end = rect->y + rect->height;
+ 	}
+ 
+-	dm_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ }
+ 
+ static inline void
+@@ -3386,7 +3386,7 @@ void dc_dmub_update_dirty_rect(struct dc *dc,
+ 
+ 			update_dirty_rect->panel_inst = panel_inst;
+ 			update_dirty_rect->pipe_idx = j;
+-			dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
++			dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 		}
+ 	}
+ }
+@@ -3626,7 +3626,7 @@ static void commit_planes_for_stream(struct dc *dc,
+ 	top_pipe_to_program = resource_get_otg_master_for_stream(
+ 				&context->res_ctx,
+ 				stream);
+-
++	ASSERT(top_pipe_to_program != NULL);
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+ 		struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+ 
+@@ -4457,6 +4457,8 @@ static bool should_commit_minimal_transition_for_windowed_mpo_odm(struct dc *dc,
+ 
+ 	cur_pipe = resource_get_otg_master_for_stream(&dc->current_state->res_ctx, stream);
+ 	new_pipe = resource_get_otg_master_for_stream(&context->res_ctx, stream);
++	if (!cur_pipe || !new_pipe)
++		return false;
+ 	cur_is_odm_in_use = resource_get_odm_slice_count(cur_pipe) > 1;
+ 	new_is_odm_in_use = resource_get_odm_slice_count(new_pipe) > 1;
+ 	if (cur_is_odm_in_use == new_is_odm_in_use)
+@@ -5213,7 +5215,7 @@ bool dc_process_dmub_aux_transfer_async(struct dc *dc,
+ 			);
+ 	}
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -5267,7 +5269,7 @@ bool dc_process_dmub_set_config_async(struct dc *dc,
+ 	cmd.set_config_access.set_config_control.cmd_pkt.msg_type = payload->msg_type;
+ 	cmd.set_config_access.set_config_control.cmd_pkt.msg_data = payload->msg_data;
+ 
+-	if (!dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY)) {
++	if (!dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY)) {
+ 		/* command is not processed by dmub */
+ 		notify->sc_status = SET_CONFIG_UNKNOWN_ERROR;
+ 		return is_cmd_complete;
+@@ -5310,7 +5312,7 @@ enum dc_status dc_process_dmub_set_mst_slots(const struct dc *dc,
+ 	cmd.set_mst_alloc_slots.mst_slots_control.instance = dc->links[link_index]->ddc_hw_inst;
+ 	cmd.set_mst_alloc_slots.mst_slots_control.mst_alloc_slots = mst_alloc_slots;
+ 
+-	if (!dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY))
++	if (!dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY))
+ 		/* command is not processed by dmub */
+ 		return DC_ERROR_UNEXPECTED;
+ 
+@@ -5348,7 +5350,7 @@ void dc_process_dmub_dpia_hpd_int_enable(const struct dc *dc,
+ 	cmd.dpia_hpd_int_enable.header.type = DMUB_CMD__DPIA_HPD_INT_ENABLE;
+ 	cmd.dpia_hpd_int_enable.enable = hpd_int_enable;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	DC_LOG_DEBUG("%s: hpd_int_enable(%d)\n", __func__, hpd_int_enable);
+ }
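
Every dm_execute_dmub_cmd() call site converted in this file (and in the bios
and clk_mgr hunks above) now routes through dc_wake_and_execute_dmub_cmd(),
whose implementation appears at the end of this excerpt: if the DMCUB has been
allowed into its low-power idle state, the wrapper first forces it awake via
dc_dmub_srv_apply_idle_power_optimizations(dc, false), executes the command
list, then re-allows idle. The deliberate exception is the notify-idle command
itself, which keeps the raw path because it is the idle transition. The
wrapper's skeleton, condensed from that hunk (its tail is cut off in this
excerpt, so the re-allow step is inferred):

bool wake_and_execute(const struct dc_context *ctx, unsigned int count,
		      union dmub_rb_cmd *cmd, enum dm_dmub_wait_type wait)
{
	struct dc_dmub_srv *srv = ctx->dmub_srv;
	bool reallow = false, ok;

	if (!srv || !srv->dmub)
		return false;
	if (count == 0)
		return true;

	if (srv->idle_allowed) {	/* DMCUB may be powered down */
		dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, false);
		reallow = true;
	}

	ok = dm_execute_dmub_cmd_list(ctx, count, cmd, wait);

	if (reallow)			/* restore idle eligibility */
		dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, true);

	return ok;
}
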
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index fe07160932d69..fc18b9dc946f9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -724,7 +724,7 @@ void hwss_send_dmcub_cmd(union block_sequence_params *params)
+ 	union dmub_rb_cmd *cmd = params->send_dmcub_cmd_params.cmd;
+ 	enum dm_dmub_wait_type wait_type = params->send_dmcub_cmd_params.wait_type;
+ 
+-	dm_execute_dmub_cmd(ctx, cmd, wait_type);
++	dc_wake_and_execute_dmub_cmd(ctx, cmd, wait_type);
+ }
+ 
+ void hwss_program_manual_trigger(union block_sequence_params *params)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index e4bb1e25ee3b2..990d775e4cea2 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -2170,6 +2170,10 @@ void resource_log_pipe_topology_update(struct dc *dc, struct dc_state *state)
+ 	for (stream_idx = 0; stream_idx < state->stream_count; stream_idx++) {
+ 		otg_master = resource_get_otg_master_for_stream(
+ 				&state->res_ctx, state->streams[stream_idx]);
++		if (!otg_master	|| otg_master->stream_res.tg == NULL) {
++			DC_LOG_DC("topology update: otg_master NULL stream_idx %d!\n", stream_idx);
++			return;
++		}
+ 		slice_count = resource_get_opp_heads_for_otg_master(otg_master,
+ 				&state->res_ctx, opp_heads);
+ 		for (slice_idx = 0; slice_idx < slice_count; slice_idx++) {
+@@ -2233,7 +2237,7 @@ static struct pipe_ctx *get_last_dpp_pipe_in_mpcc_combine(
+ }
+ 
+ static bool update_pipe_params_after_odm_slice_count_change(
+-		const struct dc_stream_state *stream,
++		struct pipe_ctx *otg_master,
+ 		struct dc_state *context,
+ 		const struct resource_pool *pool)
+ {
+@@ -2243,9 +2247,12 @@ static bool update_pipe_params_after_odm_slice_count_change(
+ 
+ 	for (i = 0; i < pool->pipe_count && result; i++) {
+ 		pipe = &context->res_ctx.pipe_ctx[i];
+-		if (pipe->stream == stream && pipe->plane_state)
++		if (pipe->stream == otg_master->stream && pipe->plane_state)
+ 			result = resource_build_scaling_params(pipe);
+ 	}
++
++	if (pool->funcs->build_pipe_pix_clk_params)
++		pool->funcs->build_pipe_pix_clk_params(otg_master);
+ 	return result;
+ }
+ 
+@@ -2928,7 +2935,7 @@ bool resource_update_pipes_for_stream_with_slice_count(
+ 					otg_master, new_ctx, pool);
+ 	if (result)
+ 		result = update_pipe_params_after_odm_slice_count_change(
+-				otg_master->stream, new_ctx, pool);
++				otg_master, new_ctx, pool);
+ 	return result;
+ }
+ 
+@@ -2990,7 +2997,8 @@ bool dc_add_plane_to_context(
+ 
+ 	otg_master_pipe = resource_get_otg_master_for_stream(
+ 			&context->res_ctx, stream);
+-	added = resource_append_dpp_pipes_for_plane_composition(context,
++	if (otg_master_pipe)
++		added = resource_append_dpp_pipes_for_plane_composition(context,
+ 			dc->current_state, pool, otg_master_pipe, plane_state);
+ 
+ 	if (added) {
+@@ -3766,7 +3774,8 @@ void dc_resource_state_construct(
+ 	dst_ctx->clk_mgr = dc->clk_mgr;
+ 
+ 	/* Initialise DIG link encoder resource tracking variables. */
+-	link_enc_cfg_init(dc, dst_ctx);
++	if (dc->res_pool)
++		link_enc_cfg_init(dc, dst_ctx);
+ }
+ 
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+index 0e07699c1e835..61d1b4eadbee3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+@@ -282,17 +282,11 @@ bool dc_dmub_srv_optimized_init_done(struct dc_dmub_srv *dc_dmub_srv)
+ bool dc_dmub_srv_notify_stream_mask(struct dc_dmub_srv *dc_dmub_srv,
+ 				    unsigned int stream_mask)
+ {
+-	struct dmub_srv *dmub;
+-	const uint32_t timeout = 30;
+-
+ 	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
+ 		return false;
+ 
+-	dmub = dc_dmub_srv->dmub;
+-
+-	return dmub_srv_send_gpint_command(
+-		       dmub, DMUB_GPINT__IDLE_OPT_NOTIFY_STREAM_MASK,
+-		       stream_mask, timeout) == DMUB_STATUS_OK;
++	return dc_wake_and_execute_gpint(dc_dmub_srv->ctx, DMUB_GPINT__IDLE_OPT_NOTIFY_STREAM_MASK,
++					 stream_mask, NULL, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ bool dc_dmub_srv_is_restore_required(struct dc_dmub_srv *dc_dmub_srv)
+@@ -341,7 +335,7 @@ void dc_dmub_srv_drr_update_cmd(struct dc *dc, uint32_t tg_inst, uint32_t vtotal
+ 	cmd.drr_update.header.payload_bytes = sizeof(cmd.drr_update) - sizeof(cmd.drr_update.header);
+ 
+ 	// Send the command to the DMCUB.
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ void dc_dmub_srv_set_drr_manual_trigger_cmd(struct dc *dc, uint32_t tg_inst)
+@@ -355,7 +349,7 @@ void dc_dmub_srv_set_drr_manual_trigger_cmd(struct dc *dc, uint32_t tg_inst)
+ 	cmd.drr_update.header.payload_bytes = sizeof(cmd.drr_update) - sizeof(cmd.drr_update.header);
+ 
+ 	// Send the command to the DMCUB.
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ static uint8_t dc_dmub_srv_get_pipes_for_stream(struct dc *dc, struct dc_stream_state *stream)
+@@ -448,7 +442,7 @@ bool dc_dmub_srv_p_state_delegate(struct dc *dc, bool should_manage_pstate, stru
+ 		sizeof(cmd.fw_assisted_mclk_switch) - sizeof(cmd.fw_assisted_mclk_switch.header);
+ 
+ 	// Send the command to the DMCUB.
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -469,7 +463,7 @@ void dc_dmub_srv_query_caps_cmd(struct dc_dmub_srv *dc_dmub_srv)
+ 	cmd.query_feature_caps.header.payload_bytes = sizeof(struct dmub_cmd_query_feature_caps_data);
+ 
+ 	/* If command was processed, copy feature caps to dmub srv */
+-	if (dm_execute_dmub_cmd(dc_dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
++	if (dc_wake_and_execute_dmub_cmd(dc_dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
+ 	    cmd.query_feature_caps.header.ret_status == 0) {
+ 		memcpy(&dc_dmub_srv->dmub->feature_caps,
+ 		       &cmd.query_feature_caps.query_feature_caps_data,
+@@ -494,7 +488,7 @@ void dc_dmub_srv_get_visual_confirm_color_cmd(struct dc *dc, struct pipe_ctx *pi
+ 	cmd.visual_confirm_color.visual_confirm_color_data.visual_confirm_color.panel_inst = panel_inst;
+ 
+ 	// If command was processed, copy feature caps to dmub srv
+-	if (dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
++	if (dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
+ 		cmd.visual_confirm_color.header.ret_status == 0) {
+ 		memcpy(&dc->ctx->dmub_srv->dmub->visual_confirm_color,
+ 			&cmd.visual_confirm_color.visual_confirm_color_data,
+@@ -856,7 +850,7 @@ void dc_dmub_setup_subvp_dmub_command(struct dc *dc,
+ 		cmd.fw_assisted_mclk_switch_v2.config_data.watermark_a_cache = wm_val_refclk < 0xFFFF ? wm_val_refclk : 0xFFFF;
+ 	}
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ bool dc_dmub_srv_get_diagnostic_data(struct dc_dmub_srv *dc_dmub_srv, struct dmub_diagnostic_data *diag_data)
+@@ -1093,7 +1087,7 @@ void dc_send_update_cursor_info_to_dmu(
+ 				pipe_idx, pCtx->plane_res.hubp, pCtx->plane_res.dpp);
+ 
+ 		/* Combine 2nd cmds update_cursor_info to DMU */
+-		dm_execute_dmub_cmd_list(pCtx->stream->ctx, 2, cmd, DM_DMUB_WAIT_TYPE_WAIT);
++		dc_wake_and_execute_dmub_cmd_list(pCtx->stream->ctx, 2, cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 	}
+ }
+ 
+@@ -1107,25 +1101,20 @@ bool dc_dmub_check_min_version(struct dmub_srv *srv)
+ void dc_dmub_srv_enable_dpia_trace(const struct dc *dc)
+ {
+ 	struct dc_dmub_srv *dc_dmub_srv = dc->ctx->dmub_srv;
+-	struct dmub_srv *dmub;
+-	enum dmub_status status;
+-	static const uint32_t timeout_us = 30;
+ 
+ 	if (!dc_dmub_srv || !dc_dmub_srv->dmub) {
+ 		DC_LOG_ERROR("%s: invalid parameters.", __func__);
+ 		return;
+ 	}
+ 
+-	dmub = dc_dmub_srv->dmub;
+-
+-	status = dmub_srv_send_gpint_command(dmub, DMUB_GPINT__SET_TRACE_BUFFER_MASK_WORD1, 0x0010, timeout_us);
+-	if (status != DMUB_STATUS_OK) {
++	if (!dc_wake_and_execute_gpint(dc->ctx, DMUB_GPINT__SET_TRACE_BUFFER_MASK_WORD1,
++				       0x0010, NULL, DM_DMUB_WAIT_TYPE_WAIT)) {
+ 		DC_LOG_ERROR("timeout updating trace buffer mask word\n");
+ 		return;
+ 	}
+ 
+-	status = dmub_srv_send_gpint_command(dmub, DMUB_GPINT__UPDATE_TRACE_BUFFER_MASK, 0x0000, timeout_us);
+-	if (status != DMUB_STATUS_OK) {
++	if (!dc_wake_and_execute_gpint(dc->ctx, DMUB_GPINT__UPDATE_TRACE_BUFFER_MASK,
++				       0x0000, NULL, DM_DMUB_WAIT_TYPE_WAIT)) {
+ 		DC_LOG_ERROR("timeout updating trace buffer mask word\n");
+ 		return;
+ 	}
+@@ -1143,6 +1132,9 @@ bool dc_dmub_srv_is_hw_pwr_up(struct dc_dmub_srv *dc_dmub_srv, bool wait)
+ 	struct dc_context *dc_ctx = dc_dmub_srv->ctx;
+ 	enum dmub_status status;
+ 
++	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
++		return true;
++
+ 	if (dc_dmub_srv->ctx->dc->debug.dmcub_emulation)
+ 		return true;
+ 
+@@ -1158,7 +1150,7 @@ bool dc_dmub_srv_is_hw_pwr_up(struct dc_dmub_srv *dc_dmub_srv, bool wait)
+ 	return true;
+ }
+ 
+-void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle)
++static void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle)
+ {
+ 	union dmub_rb_cmd cmd = {0};
+ 
+@@ -1179,10 +1171,11 @@ void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle)
+ 			dc->hwss.set_idle_state(dc, true);
+ 	}
+ 
++	/* NOTE: This does not use the "wake" interface since this is part of the wake path. */
+ 	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+-void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
++static void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
+ {
+ 	const uint32_t max_num_polls = 10000;
+ 	uint32_t allow_state = 0;
+@@ -1195,6 +1188,9 @@ void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
+ 	if (!dc->idle_optimizations_allowed)
+ 		return;
+ 
++	if (!dc->ctx->dmub_srv || !dc->ctx->dmub_srv->dmub)
++		return;
++
+ 	if (dc->hwss.get_idle_state &&
+ 		dc->hwss.set_idle_state &&
+ 		dc->clk_mgr->funcs->exit_low_power_state) {
+@@ -1251,3 +1247,131 @@ void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
+ 		ASSERT(0);
+ }
+ 
++void dc_dmub_srv_set_power_state(struct dc_dmub_srv *dc_dmub_srv, enum dc_acpi_cm_power_state powerState)
++{
++	struct dmub_srv *dmub;
++
++	if (!dc_dmub_srv)
++		return;
++
++	dmub = dc_dmub_srv->dmub;
++
++	if (powerState == DC_ACPI_CM_POWER_STATE_D0)
++		dmub_srv_set_power_state(dmub, DMUB_POWER_STATE_D0);
++	else
++		dmub_srv_set_power_state(dmub, DMUB_POWER_STATE_D3);
++}
++
++void dc_dmub_srv_apply_idle_power_optimizations(const struct dc *dc, bool allow_idle)
++{
++	struct dc_dmub_srv *dc_dmub_srv = dc->ctx->dmub_srv;
++
++	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
++		return;
++
++	if (dc_dmub_srv->idle_allowed == allow_idle)
++		return;
++
++	/*
++	 * Entering a low power state requires a driver notification.
++	 * Powering up the hardware requires notifying PMFW and DMCUB.
++	 * Clearing the driver idle allow requires a DMCUB command.
++	 * DMCUB commands require the DMCUB to be powered up and restored.
++	 *
++	 * Exit out early to prevent an infinite loop of DMCUB commands
++	 * triggering exit low power - use software state to track this.
++	 */
++	dc_dmub_srv->idle_allowed = allow_idle;
++
++	if (!allow_idle)
++		dc_dmub_srv_exit_low_power_state(dc);
++	else
++		dc_dmub_srv_notify_idle(dc, allow_idle);
++}
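The ordering in dc_dmub_srv_apply_idle_power_optimizations() above is load-bearing: waking the hardware itself sends a DMCUB command, and a DMCUB command would normally try to wake the hardware first, so the software flag is flipped before the command goes out. A minimal user-space sketch of that guard (all names hypothetical, illustration only, not driver code):

#include <stdbool.h>
#include <stdio.h>

static bool idle_allowed = true;

static void send_command(void);

/* Flip the software state first so the nested send_command() sees it
 * and does not recurse back into the wake path. */
static void apply_idle(bool allow)
{
	if (idle_allowed == allow)
		return;

	idle_allowed = allow;

	if (!allow)
		send_command();		/* waking requires a command */
}

static void send_command(void)
{
	if (idle_allowed)
		apply_idle(false);	/* skipped on the nested call: flag already cleared */
	puts("command sent");
}

int main(void)
{
	send_command();	/* wakes once, then sends; no infinite recursion */
	return 0;
}

If the flag were set after the command was sent instead, apply_idle() and send_command() would call each other forever, which is exactly the loop the comment above warns about.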
++
++bool dc_wake_and_execute_dmub_cmd(const struct dc_context *ctx, union dmub_rb_cmd *cmd,
++				  enum dm_dmub_wait_type wait_type)
++{
++	return dc_wake_and_execute_dmub_cmd_list(ctx, 1, cmd, wait_type);
++}
++
++bool dc_wake_and_execute_dmub_cmd_list(const struct dc_context *ctx, unsigned int count,
++				       union dmub_rb_cmd *cmd, enum dm_dmub_wait_type wait_type)
++{
++	struct dc_dmub_srv *dc_dmub_srv = ctx->dmub_srv;
++	bool result = false, reallow_idle = false;
++
++	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
++		return false;
++
++	if (count == 0)
++		return true;
++
++	if (dc_dmub_srv->idle_allowed) {
++		dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, false);
++		reallow_idle = true;
++	}
++
++	/*
++	 * The single- and multi-command paths may have different
++	 * implementations in DM, so route the call to the expected helper.
++	 */
++	if (count > 1)
++		result = dm_execute_dmub_cmd_list(ctx, count, cmd, wait_type);
++	else
++		result = dm_execute_dmub_cmd(ctx, cmd, wait_type);
++
++	if (result && reallow_idle)
++		dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, true);
++
++	return result;
++}
++
++static bool dc_dmub_execute_gpint(const struct dc_context *ctx, enum dmub_gpint_command command_code,
++				  uint16_t param, uint32_t *response, enum dm_dmub_wait_type wait_type)
++{
++	struct dc_dmub_srv *dc_dmub_srv = ctx->dmub_srv;
++	const uint32_t wait_us = wait_type == DM_DMUB_WAIT_TYPE_NO_WAIT ? 0 : 30;
++	enum dmub_status status;
++
++	if (response)
++		*response = 0;
++
++	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
++		return false;
++
++	status = dmub_srv_send_gpint_command(dc_dmub_srv->dmub, command_code, param, wait_us);
++	if (status != DMUB_STATUS_OK) {
++		if (status == DMUB_STATUS_TIMEOUT && wait_type == DM_DMUB_WAIT_TYPE_NO_WAIT)
++			return true;
++
++		return false;
++	}
++
++	if (response && wait_type == DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY)
++		dmub_srv_get_gpint_response(dc_dmub_srv->dmub, response);
++
++	return true;
++}
++
++bool dc_wake_and_execute_gpint(const struct dc_context *ctx, enum dmub_gpint_command command_code,
++			       uint16_t param, uint32_t *response, enum dm_dmub_wait_type wait_type)
++{
++	struct dc_dmub_srv *dc_dmub_srv = ctx->dmub_srv;
++	bool result = false, reallow_idle = false;
++
++	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
++		return false;
++
++	if (dc_dmub_srv->idle_allowed) {
++		dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, false);
++		reallow_idle = true;
++	}
++
++	result = dc_dmub_execute_gpint(ctx, command_code, param, response, wait_type);
++
++	if (result && reallow_idle)
++		dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, true);
++
++	return result;
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h
+index d4a60f53faab1..952bfb368886e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h
+@@ -50,6 +50,8 @@ struct dc_dmub_srv {
+ 
+ 	struct dc_context *ctx;
+ 	void *dm;
++
++	bool idle_allowed;
+ };
+ 
+ void dc_dmub_srv_wait_idle(struct dc_dmub_srv *dc_dmub_srv);
+@@ -100,6 +102,59 @@ void dc_dmub_srv_enable_dpia_trace(const struct dc *dc);
+ void dc_dmub_srv_subvp_save_surf_addr(const struct dc_dmub_srv *dc_dmub_srv, const struct dc_plane_address *addr, uint8_t subvp_index);
+ 
+ bool dc_dmub_srv_is_hw_pwr_up(struct dc_dmub_srv *dc_dmub_srv, bool wait);
+-void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle);
+-void dc_dmub_srv_exit_low_power_state(const struct dc *dc);
++
++void dc_dmub_srv_apply_idle_power_optimizations(const struct dc *dc, bool allow_idle);
++
++void dc_dmub_srv_set_power_state(struct dc_dmub_srv *dc_dmub_srv, enum dc_acpi_cm_power_state powerState);
++
++/**
++ * dc_wake_and_execute_dmub_cmd() - Wrapper for DMUB command execution.
++ *
++ * Refer to dc_wake_and_execute_dmub_cmd_list() for usage and limitations;
++ * this function is a convenience wrapper for a single command execution.
++ *
++ * @ctx: DC context
++ * @cmd: The command to send/receive
++ * @wait_type: The wait behavior for the execution
++ *
++ * Return: true on command submission success, false otherwise
++ */
++bool dc_wake_and_execute_dmub_cmd(const struct dc_context *ctx, union dmub_rb_cmd *cmd,
++				  enum dm_dmub_wait_type wait_type);
++
++/**
++ * dc_wake_and_execute_dmub_cmd_list() - Wrapper for DMUB command list execution.
++ *
++ * If the DMCUB hardware was asleep then it wakes the DMUB before
++ * executing the command and attempts to re-enter idle optimizations if
++ * the command submission was successful.
++ *
++ * This should be the preferred command submission interface provided
++ * the DC lock is acquired.
++ *
++ * Otherwise, entry to and exit from idle power optimizations must be
++ * performed manually through dc_allow_idle_optimizations().
++ *
++ * @ctx: DC context
++ * @count: Number of commands to send/receive
++ * @cmd: Array of commands to send
++ * @wait_type: The wait behavior for the execution
++ *
++ * Return: true on command submission success, false otherwise
++ */
++bool dc_wake_and_execute_dmub_cmd_list(const struct dc_context *ctx, unsigned int count,
++				       union dmub_rb_cmd *cmd, enum dm_dmub_wait_type wait_type);
++
++/**
++ * dc_wake_and_execute_gpint() - Wrapper for DMUB GPINT execution.
++ *
++ * Wakes the DMCUB if required, sends the GPINT command, and attempts to
++ * re-enter idle optimizations afterwards on success.
++ *
++ * @ctx: DC context
++ * @command_code: The command ID to send to DMCUB
++ * @param: The parameter to message DMCUB
++ * @response: Optional response out value - may be NULL.
++ * @wait_type: The wait behavior for the execution
++ *
++ * Return: true on GPINT execution success, false otherwise
++ */
++bool dc_wake_and_execute_gpint(const struct dc_context *ctx, enum dmub_gpint_command command_code,
++			       uint16_t param, uint32_t *response, enum dm_dmub_wait_type wait_type);
++
+ #endif /* _DMUB_DC_SRV_H_ */
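Taken together, the kernel-docs above describe one calling pattern: build the command on the stack, hand it to the wake wrapper, and read any reply back out. A condensed caller sketch using only the functions declared in this header, modeled on the dmub_psr.c conversions later in this patch (kernel context assumed; ctx and panel_inst are hypothetical locals, so this is not compilable standalone):

	union dmub_rb_cmd cmd = {0};
	uint32_t raw_state = 0;

	/* header/payload fields filled in as in dc_dmub_srv_query_caps_cmd() */
	if (!dc_wake_and_execute_dmub_cmd(ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT))
		return false;

	/* GPINT with a reply: the response is only fetched for
	 * DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY (otherwise *response stays zeroed) */
	if (dc_wake_and_execute_gpint(ctx, DMUB_GPINT__GET_PSR_STATE, panel_inst,
				      &raw_state, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY)) {
		/* raw_state now holds the DMCUB response */
	}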
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_helper.c b/drivers/gpu/drm/amd/display/dc/dc_helper.c
+index cb6eaddab7205..8f9a678256150 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_helper.c
+@@ -50,7 +50,7 @@ static inline void submit_dmub_read_modify_write(
+ 	cmd_buf->header.payload_bytes =
+ 			sizeof(struct dmub_cmd_read_modify_write_sequence) * offload->reg_seq_count;
+ 
+-	dm_execute_dmub_cmd(ctx, &offload->cmd_data, DM_DMUB_WAIT_TYPE_NO_WAIT);
++	dc_wake_and_execute_dmub_cmd(ctx, &offload->cmd_data, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 
+ 	memset(cmd_buf, 0, sizeof(*cmd_buf));
+ 
+@@ -67,7 +67,7 @@ static inline void submit_dmub_burst_write(
+ 	cmd_buf->header.payload_bytes =
+ 			sizeof(uint32_t) * offload->reg_seq_count;
+ 
+-	dm_execute_dmub_cmd(ctx, &offload->cmd_data, DM_DMUB_WAIT_TYPE_NO_WAIT);
++	dc_wake_and_execute_dmub_cmd(ctx, &offload->cmd_data, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 
+ 	memset(cmd_buf, 0, sizeof(*cmd_buf));
+ 
+@@ -80,7 +80,7 @@ static inline void submit_dmub_reg_wait(
+ {
+ 	struct dmub_rb_cmd_reg_wait *cmd_buf = &offload->cmd_data.reg_wait;
+ 
+-	dm_execute_dmub_cmd(ctx, &offload->cmd_data, DM_DMUB_WAIT_TYPE_NO_WAIT);
++	dc_wake_and_execute_dmub_cmd(ctx, &offload->cmd_data, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 
+ 	memset(cmd_buf, 0, sizeof(*cmd_buf));
+ 	offload->reg_seq_count = 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
+index 42c802afc4681..4cff36351f40a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
+@@ -76,7 +76,7 @@ static void dmub_abm_enable_fractional_pwm(struct dc_context *dc)
+ 	cmd.abm_set_pwm_frac.abm_set_pwm_frac_data.panel_mask = panel_mask;
+ 	cmd.abm_set_pwm_frac.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_pwm_frac_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ void dmub_abm_init(struct abm *abm, uint32_t backlight)
+@@ -155,7 +155,7 @@ bool dmub_abm_set_level(struct abm *abm, uint32_t level, uint8_t panel_mask)
+ 	cmd.abm_set_level.abm_set_level_data.panel_mask = panel_mask;
+ 	cmd.abm_set_level.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_level_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -186,7 +186,7 @@ void dmub_abm_init_config(struct abm *abm,
+ 
+ 	cmd.abm_init_config.header.payload_bytes = sizeof(struct dmub_cmd_abm_init_config_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ }
+ 
+@@ -203,7 +203,7 @@ bool dmub_abm_set_pause(struct abm *abm, bool pause, unsigned int panel_inst, un
+ 	cmd.abm_pause.abm_pause_data.panel_mask = panel_mask;
+ 	cmd.abm_set_level.header.payload_bytes = sizeof(struct dmub_cmd_abm_pause_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -246,7 +246,7 @@ bool dmub_abm_save_restore(
+ 
+ 	cmd.abm_save_restore.header.payload_bytes = sizeof(struct dmub_rb_cmd_abm_save_restore);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	// Copy iramtable data into local structure
+ 	memcpy((void *)pData, dc->dmub_srv->dmub->scratch_mem_fb.cpu_addr, bytes);
+@@ -274,7 +274,7 @@ bool dmub_abm_set_pipe(struct abm *abm,
+ 	cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
+ 	cmd.abm_set_pipe.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_pipe_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -296,7 +296,7 @@ bool dmub_abm_set_backlight_level(struct abm *abm,
+ 	cmd.abm_set_backlight.abm_set_backlight_data.panel_mask = (0x01 << panel_inst);
+ 	cmd.abm_set_backlight.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_backlight_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index 2aa0e01a6891b..ba1fec3016d5b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -47,7 +47,7 @@ void dmub_hw_lock_mgr_cmd(struct dc_dmub_srv *dmub_srv,
+ 	if (!lock)
+ 		cmd.lock_hw.lock_hw_data.should_release = 1;
+ 
+-	dm_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c
+index d8009b2dc56a0..98a778996e1a9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c
+@@ -48,5 +48,5 @@ void dmub_enable_outbox_notification(struct dc_dmub_srv *dmub_srv)
+ 		sizeof(cmd.outbox1_enable.header);
+ 	cmd.outbox1_enable.enable = true;
+ 
+-	dm_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+index 9d4170a356a20..3e243e407bb87 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+@@ -105,23 +105,18 @@ static enum dc_psr_state convert_psr_state(uint32_t raw_state)
+  */
+ static void dmub_psr_get_state(struct dmub_psr *dmub, enum dc_psr_state *state, uint8_t panel_inst)
+ {
+-	struct dmub_srv *srv = dmub->ctx->dmub_srv->dmub;
+ 	uint32_t raw_state = 0;
+ 	uint32_t retry_count = 0;
+-	enum dmub_status status;
+ 
+ 	do {
+ 		// Send gpint command and wait for ack
+-		status = dmub_srv_send_gpint_command(srv, DMUB_GPINT__GET_PSR_STATE, panel_inst, 30);
+-
+-		if (status == DMUB_STATUS_OK) {
+-			// GPINT was executed, get response
+-			dmub_srv_get_gpint_response(srv, &raw_state);
++		if (dc_wake_and_execute_gpint(dmub->ctx, DMUB_GPINT__GET_PSR_STATE, panel_inst, &raw_state,
++					      DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY)) {
+ 			*state = convert_psr_state(raw_state);
+-		} else
++		} else {
+ 			// Return invalid state when GPINT times out
+ 			*state = PSR_STATE_INVALID;
+-
++		}
+ 	} while (++retry_count <= 1000 && *state == PSR_STATE_INVALID);
+ 
+ 	// Assert if max retry hit
+@@ -171,7 +166,7 @@ static bool dmub_psr_set_version(struct dmub_psr *dmub, struct dc_stream_state *
+ 	cmd.psr_set_version.psr_set_version_data.panel_inst = panel_inst;
+ 	cmd.psr_set_version.header.payload_bytes = sizeof(struct dmub_cmd_psr_set_version_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -199,7 +194,7 @@ static void dmub_psr_enable(struct dmub_psr *dmub, bool enable, bool wait, uint8
+ 
+ 	cmd.psr_enable.header.payload_bytes = 0; // Send header only
+ 
+-	dm_execute_dmub_cmd(dc->dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	/* Below loops 1000 x 500us = 500 ms.
+ 	 *  Exit PSR may need to wait 1-2 frames to power up. Timeout after at
+@@ -248,7 +243,7 @@ static void dmub_psr_set_level(struct dmub_psr *dmub, uint16_t psr_level, uint8_
+ 	cmd.psr_set_level.psr_set_level_data.psr_level = psr_level;
+ 	cmd.psr_set_level.psr_set_level_data.cmd_version = DMUB_CMD_PSR_CONTROL_VERSION_1;
+ 	cmd.psr_set_level.psr_set_level_data.panel_inst = panel_inst;
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ /*
+@@ -267,7 +262,7 @@ static void dmub_psr_set_sink_vtotal_in_psr_active(struct dmub_psr *dmub,
+ 	cmd.psr_set_vtotal.psr_set_vtotal_data.psr_vtotal_idle = psr_vtotal_idle;
+ 	cmd.psr_set_vtotal.psr_set_vtotal_data.psr_vtotal_su = psr_vtotal_su;
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ /*
+@@ -286,7 +281,7 @@ static void dmub_psr_set_power_opt(struct dmub_psr *dmub, unsigned int power_opt
+ 	cmd.psr_set_power_opt.psr_set_power_opt_data.power_opt = power_opt;
+ 	cmd.psr_set_power_opt.psr_set_power_opt_data.panel_inst = panel_inst;
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ /*
+@@ -423,7 +418,7 @@ static bool dmub_psr_copy_settings(struct dmub_psr *dmub,
+ 		copy_settings_data->relock_delay_frame_cnt = 2;
+ 	copy_settings_data->dsc_slice_height = psr_context->dsc_slice_height;
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -444,7 +439,7 @@ static void dmub_psr_force_static(struct dmub_psr *dmub, uint8_t panel_inst)
+ 	cmd.psr_force_static.header.sub_type = DMUB_CMD__PSR_FORCE_STATIC;
+ 	cmd.psr_enable.header.payload_bytes = 0;
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ /*
+@@ -452,13 +447,11 @@ static void dmub_psr_force_static(struct dmub_psr *dmub, uint8_t panel_inst)
+  */
+ static void dmub_psr_get_residency(struct dmub_psr *dmub, uint32_t *residency, uint8_t panel_inst)
+ {
+-	struct dmub_srv *srv = dmub->ctx->dmub_srv->dmub;
+ 	uint16_t param = (uint16_t)(panel_inst << 8);
+ 
+ 	/* Send gpint command and wait for ack */
+-	dmub_srv_send_gpint_command(srv, DMUB_GPINT__PSR_RESIDENCY, param, 30);
+-
+-	dmub_srv_get_gpint_response(srv, residency);
++	dc_wake_and_execute_gpint(dmub->ctx, DMUB_GPINT__PSR_RESIDENCY, param, residency,
++				  DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY);
+ }
+ 
+ static const struct dmub_psr_funcs psr_funcs = {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 0a422fbb14bc8..e73e597548375 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -1273,15 +1273,19 @@ static void build_clamping_params(struct dc_stream_state *stream)
+ 	stream->clamping.pixel_encoding = stream->timing.pixel_encoding;
+ }
+ 
+-static enum dc_status build_pipe_hw_param(struct pipe_ctx *pipe_ctx)
++void dcn20_build_pipe_pix_clk_params(struct pipe_ctx *pipe_ctx)
+ {
+-
+ 	get_pixel_clock_parameters(pipe_ctx, &pipe_ctx->stream_res.pix_clk_params);
+-
+ 	pipe_ctx->clock_source->funcs->get_pix_clk_dividers(
+-		pipe_ctx->clock_source,
+-		&pipe_ctx->stream_res.pix_clk_params,
+-		&pipe_ctx->pll_settings);
++			pipe_ctx->clock_source,
++			&pipe_ctx->stream_res.pix_clk_params,
++			&pipe_ctx->pll_settings);
++}
++
++static enum dc_status build_pipe_hw_param(struct pipe_ctx *pipe_ctx)
++{
++
++	dcn20_build_pipe_pix_clk_params(pipe_ctx);
+ 
+ 	pipe_ctx->stream->clamping.pixel_encoding = pipe_ctx->stream->timing.pixel_encoding;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
+index 37ecaccc5d128..4cee3fa11a7ff 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
+@@ -165,6 +165,7 @@ enum dc_status dcn20_add_stream_to_ctx(struct dc *dc, struct dc_state *new_ctx,
+ enum dc_status dcn20_add_dsc_to_stream_resource(struct dc *dc, struct dc_state *dc_ctx, struct dc_stream_state *dc_stream);
+ enum dc_status dcn20_remove_stream_from_ctx(struct dc *dc, struct dc_state *new_ctx, struct dc_stream_state *dc_stream);
+ enum dc_status dcn20_patch_unknown_plane_state(struct dc_plane_state *plane_state);
++void dcn20_build_pipe_pix_clk_params(struct pipe_ctx *pipe_ctx);
+ 
+ #endif /* __DC_RESOURCE_DCN20_H__ */
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
+index 68cad55c72ab8..e13d69a22c1c7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
+@@ -691,7 +691,7 @@ static void dmcub_PLAT_54186_wa(struct hubp *hubp,
+ 	cmd.PLAT_54186_wa.flip.flip_params.vmid = flip_regs->vmid;
+ 
+ 	PERF_TRACE();  // TODO: remove after performance is stable.
+-	dm_execute_dmub_cmd(hubp->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(hubp->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 	PERF_TRACE();  // TODO: remove after performance is stable.
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c
+index 4596f3bac1b4c..26be5fee7411d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dio_link_encoder.c
+@@ -125,7 +125,7 @@ static bool query_dp_alt_from_dmub(struct link_encoder *enc,
+ 	cmd->query_dp_alt.header.payload_bytes = sizeof(cmd->query_dp_alt.data);
+ 	cmd->query_dp_alt.data.phy_id = phy_id_from_transmitter(enc10->base.transmitter);
+ 
+-	if (!dm_execute_dmub_cmd(enc->ctx, cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY))
++	if (!dc_wake_and_execute_dmub_cmd(enc->ctx, cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY))
+ 		return false;
+ 
+ 	return true;
+@@ -436,7 +436,7 @@ static bool link_dpia_control(struct dc_context *dc_ctx,
+ 
+ 	cmd.dig1_dpia_control.dpia_control = *dpia_control;
+ 
+-	dm_execute_dmub_cmd(dc_ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc_ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
+index d849b1eaa4a5c..03248422d6ffd 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
+@@ -52,7 +52,7 @@ static bool dcn31_query_backlight_info(struct panel_cntl *panel_cntl, union dmub
+ 	cmd->panel_cntl.header.payload_bytes = sizeof(cmd->panel_cntl.data);
+ 	cmd->panel_cntl.data.pwrseq_inst = dcn31_panel_cntl->base.pwrseq_inst;
+ 
+-	return dm_execute_dmub_cmd(dc_dmub_srv->ctx, cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY);
++	return dc_wake_and_execute_dmub_cmd(dc_dmub_srv->ctx, cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY);
+ }
+ 
+ static uint32_t dcn31_get_16_bit_backlight_from_pwm(struct panel_cntl *panel_cntl)
+@@ -85,7 +85,7 @@ static uint32_t dcn31_panel_cntl_hw_init(struct panel_cntl *panel_cntl)
+ 		panel_cntl->stored_backlight_registers.LVTMA_PWRSEQ_REF_DIV_BL_PWM_REF_DIV;
+ 	cmd.panel_cntl.data.bl_pwm_ref_div2 =
+ 		panel_cntl->stored_backlight_registers.PANEL_PWRSEQ_REF_DIV2;
+-	if (!dm_execute_dmub_cmd(dc_dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY))
++	if (!dc_wake_and_execute_dmub_cmd(dc_dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY))
+ 		return 0;
+ 
+ 	panel_cntl->stored_backlight_registers.BL_PWM_CNTL = cmd.panel_cntl.data.bl_pwm_cntl;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
+index a2c4db2cebdd6..8234935433254 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
+@@ -166,6 +166,16 @@ static bool optc32_disable_crtc(struct timing_generator *optc)
+ {
+ 	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+ 
++	REG_UPDATE_5(OPTC_DATA_SOURCE_SELECT,
++			OPTC_SEG0_SRC_SEL, 0xf,
++			OPTC_SEG1_SRC_SEL, 0xf,
++			OPTC_SEG2_SRC_SEL, 0xf,
++			OPTC_SEG3_SRC_SEL, 0xf,
++			OPTC_NUM_OF_INPUT_SEGMENT, 0);
++
++	REG_UPDATE(OPTC_MEMORY_CONFIG,
++			OPTC_MEM_SEL, 0);
++
+ 	/* disable otg request until end of the first line
+ 	 * in the vertical blank region
+ 	 */
+@@ -198,6 +208,13 @@ static void optc32_disable_phantom_otg(struct timing_generator *optc)
+ {
+ 	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+ 
++	REG_UPDATE_5(OPTC_DATA_SOURCE_SELECT,
++			OPTC_SEG0_SRC_SEL, 0xf,
++			OPTC_SEG1_SRC_SEL, 0xf,
++			OPTC_SEG2_SRC_SEL, 0xf,
++			OPTC_SEG3_SRC_SEL, 0xf,
++			OPTC_NUM_OF_INPUT_SEGMENT, 0);
++
+ 	REG_UPDATE(OTG_CONTROL, OTG_MASTER_EN, 0);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+index 89b072447dba9..e940dd0f92b73 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+@@ -2041,6 +2041,7 @@ static struct resource_funcs dcn32_res_pool_funcs = {
+ 	.retain_phantom_pipes = dcn32_retain_phantom_pipes,
+ 	.save_mall_state = dcn32_save_mall_state,
+ 	.restore_mall_state = dcn32_restore_mall_state,
++	.build_pipe_pix_clk_params = dcn20_build_pipe_pix_clk_params,
+ };
+ 
+ static uint32_t read_pipe_fuses(struct dc_context *ctx)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+index f7de3eca1225a..4156a8cc2bc7e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+@@ -1609,6 +1609,7 @@ static struct resource_funcs dcn321_res_pool_funcs = {
+ 	.retain_phantom_pipes = dcn32_retain_phantom_pipes,
+ 	.save_mall_state = dcn32_save_mall_state,
+ 	.restore_mall_state = dcn32_restore_mall_state,
++	.build_pipe_pix_clk_params = dcn20_build_pipe_pix_clk_params,
+ };
+ 
+ static uint32_t read_pipe_fuses(struct dc_context *ctx)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_optc.c b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_optc.c
+index a4a39f1638cf2..5b15475088503 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_optc.c
+@@ -138,6 +138,16 @@ static bool optc35_disable_crtc(struct timing_generator *optc)
+ {
+ 	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+ 
++	REG_UPDATE_5(OPTC_DATA_SOURCE_SELECT,
++			OPTC_SEG0_SRC_SEL, 0xf,
++			OPTC_SEG1_SRC_SEL, 0xf,
++			OPTC_SEG2_SRC_SEL, 0xf,
++			OPTC_SEG3_SRC_SEL, 0xf,
++			OPTC_NUM_OF_INPUT_SEGMENT, 0);
++
++	REG_UPDATE(OPTC_MEMORY_CONFIG,
++			OPTC_MEM_SEL, 0);
++
+ 	/* disable otg request until end of the first line
+ 	 * in the vertical blank region
+ 	 */
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index b46cde5250669..92e2ddc9ab7e2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -1237,15 +1237,11 @@ static void update_pipes_with_slice_table(struct dc *dc, struct dc_state *contex
+ {
+ 	int i;
+ 
+-	for (i = 0; i < table->odm_combine_count; i++) {
++	for (i = 0; i < table->odm_combine_count; i++)
+ 		resource_update_pipes_for_stream_with_slice_count(context,
+ 				dc->current_state, dc->res_pool,
+ 				table->odm_combines[i].stream,
+ 				table->odm_combines[i].slice_count);
+-		/* TODO: move this into the function above */
+-		dcn20_build_mapped_resource(dc, context,
+-				table->odm_combines[i].stream);
+-	}
+ 
+ 	for (i = 0; i < table->mpc_combine_count; i++)
+ 		resource_update_pipes_for_plane_with_slice_count(context,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+index b95bf27f2fe2f..62ce95bac8f2b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+@@ -6329,7 +6329,7 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ 				mode_lib->ms.NoOfDPPThisState,
+ 				mode_lib->ms.dpte_group_bytes,
+ 				s->HostVMInefficiencyFactor,
+-				mode_lib->ms.soc.hostvm_min_page_size_kbytes,
++				mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
+ 				mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels);
+ 
+ 		s->NextMaxVStartup = s->MaxVStartupAllPlanes[j];
+@@ -6542,7 +6542,7 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ 						mode_lib->ms.cache_display_cfg.plane.HostVMEnable,
+ 						mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels,
+ 						mode_lib->ms.cache_display_cfg.plane.GPUVMEnable,
+-						mode_lib->ms.soc.hostvm_min_page_size_kbytes,
++						mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
+ 						mode_lib->ms.PDEAndMetaPTEBytesPerFrame[j][k],
+ 						mode_lib->ms.MetaRowBytes[j][k],
+ 						mode_lib->ms.DPTEBytesPerRow[j][k],
+@@ -7687,7 +7687,7 @@ dml_bool_t dml_core_mode_support(struct display_mode_lib_st *mode_lib)
+ 		CalculateVMRowAndSwath_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
+ 		CalculateVMRowAndSwath_params->GPUVMMaxPageTableLevels = mode_lib->ms.cache_display_cfg.plane.GPUVMMaxPageTableLevels;
+ 		CalculateVMRowAndSwath_params->GPUVMMinPageSizeKBytes = mode_lib->ms.cache_display_cfg.plane.GPUVMMinPageSizeKBytes;
+-		CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
++		CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
+ 		CalculateVMRowAndSwath_params->PTEBufferModeOverrideEn = mode_lib->ms.cache_display_cfg.plane.PTEBufferModeOverrideEn;
+ 		CalculateVMRowAndSwath_params->PTEBufferModeOverrideVal = mode_lib->ms.cache_display_cfg.plane.PTEBufferMode;
+ 		CalculateVMRowAndSwath_params->PTEBufferSizeNotExceeded = mode_lib->ms.PTEBufferSizeNotExceededPerState;
+@@ -7957,7 +7957,7 @@ dml_bool_t dml_core_mode_support(struct display_mode_lib_st *mode_lib)
+ 		UseMinimumDCFCLK_params->GPUVMMaxPageTableLevels = mode_lib->ms.cache_display_cfg.plane.GPUVMMaxPageTableLevels;
+ 		UseMinimumDCFCLK_params->HostVMEnable = mode_lib->ms.cache_display_cfg.plane.HostVMEnable;
+ 		UseMinimumDCFCLK_params->NumberOfActiveSurfaces = mode_lib->ms.num_active_planes;
+-		UseMinimumDCFCLK_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
++		UseMinimumDCFCLK_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
+ 		UseMinimumDCFCLK_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
+ 		UseMinimumDCFCLK_params->DynamicMetadataVMEnabled = mode_lib->ms.ip.dynamic_metadata_vm_enabled;
+ 		UseMinimumDCFCLK_params->ImmediateFlipRequirement = s->ImmediateFlipRequiredFinal;
+@@ -8699,7 +8699,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ 	CalculateVMRowAndSwath_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
+ 	CalculateVMRowAndSwath_params->GPUVMMaxPageTableLevels = mode_lib->ms.cache_display_cfg.plane.GPUVMMaxPageTableLevels;
+ 	CalculateVMRowAndSwath_params->GPUVMMinPageSizeKBytes = mode_lib->ms.cache_display_cfg.plane.GPUVMMinPageSizeKBytes;
+-	CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
++	CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
+ 	CalculateVMRowAndSwath_params->PTEBufferModeOverrideEn = mode_lib->ms.cache_display_cfg.plane.PTEBufferModeOverrideEn;
+ 	CalculateVMRowAndSwath_params->PTEBufferModeOverrideVal = mode_lib->ms.cache_display_cfg.plane.PTEBufferMode;
+ 	CalculateVMRowAndSwath_params->PTEBufferSizeNotExceeded = s->dummy_boolean_array[0];
+@@ -8805,7 +8805,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ 			mode_lib->ms.cache_display_cfg.hw.DPPPerSurface,
+ 			locals->dpte_group_bytes,
+ 			s->HostVMInefficiencyFactor,
+-			mode_lib->ms.soc.hostvm_min_page_size_kbytes,
++			mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
+ 			mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels);
+ 
+ 	locals->TCalc = 24.0 / locals->DCFCLKDeepSleep;
+@@ -8995,7 +8995,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ 			CalculatePrefetchSchedule_params->GPUVMEnable = mode_lib->ms.cache_display_cfg.plane.GPUVMEnable;
+ 			CalculatePrefetchSchedule_params->HostVMEnable = mode_lib->ms.cache_display_cfg.plane.HostVMEnable;
+ 			CalculatePrefetchSchedule_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
+-			CalculatePrefetchSchedule_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
++			CalculatePrefetchSchedule_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
+ 			CalculatePrefetchSchedule_params->DynamicMetadataEnable = mode_lib->ms.cache_display_cfg.plane.DynamicMetadataEnable[k];
+ 			CalculatePrefetchSchedule_params->DynamicMetadataVMEnabled = mode_lib->ms.ip.dynamic_metadata_vm_enabled;
+ 			CalculatePrefetchSchedule_params->DynamicMetadataLinesBeforeActiveRequired = mode_lib->ms.cache_display_cfg.plane.DynamicMetadataLinesBeforeActiveRequired[k];
+@@ -9240,7 +9240,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ 						mode_lib->ms.cache_display_cfg.plane.HostVMEnable,
+ 						mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels,
+ 						mode_lib->ms.cache_display_cfg.plane.GPUVMEnable,
+-						mode_lib->ms.soc.hostvm_min_page_size_kbytes,
++						mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
+ 						locals->PDEAndMetaPTEBytesFrame[k],
+ 						locals->MetaRowByte[k],
+ 						locals->PixelPTEBytesPerRow[k],
+@@ -9446,13 +9446,13 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ 		CalculateWatermarks_params->CompressedBufferSizeInkByte = locals->CompressedBufferSizeInkByte;
+ 
+ 		// Output
+-		CalculateWatermarks_params->Watermark = &s->dummy_watermark; // Watermarks *Watermark
+-		CalculateWatermarks_params->DRAMClockChangeSupport = &mode_lib->ms.support.DRAMClockChangeSupport[0];
+-		CalculateWatermarks_params->MaxActiveDRAMClockChangeLatencySupported = &s->dummy_single_array[0][0]; // dml_float_t *MaxActiveDRAMClockChangeLatencySupported[]
+-		CalculateWatermarks_params->SubViewportLinesNeededInMALL = &mode_lib->ms.SubViewportLinesNeededInMALL[j]; // dml_uint_t SubViewportLinesNeededInMALL[]
+-		CalculateWatermarks_params->FCLKChangeSupport = &mode_lib->ms.support.FCLKChangeSupport[0];
+-		CalculateWatermarks_params->MaxActiveFCLKChangeLatencySupported = &s->dummy_single[0]; // dml_float_t *MaxActiveFCLKChangeLatencySupported
+-		CalculateWatermarks_params->USRRetrainingSupport = &mode_lib->ms.support.USRRetrainingSupport[0];
++		CalculateWatermarks_params->Watermark = &locals->Watermark; // Watermarks *Watermark
++		CalculateWatermarks_params->DRAMClockChangeSupport = &locals->DRAMClockChangeSupport;
++		CalculateWatermarks_params->MaxActiveDRAMClockChangeLatencySupported = locals->MaxActiveDRAMClockChangeLatencySupported; // dml_float_t *MaxActiveDRAMClockChangeLatencySupported[]
++		CalculateWatermarks_params->SubViewportLinesNeededInMALL = locals->SubViewportLinesNeededInMALL; // dml_uint_t SubViewportLinesNeededInMALL[]
++		CalculateWatermarks_params->FCLKChangeSupport = &locals->FCLKChangeSupport;
++		CalculateWatermarks_params->MaxActiveFCLKChangeLatencySupported = &locals->MaxActiveFCLKChangeLatencySupported; // dml_float_t *MaxActiveFCLKChangeLatencySupported
++		CalculateWatermarks_params->USRRetrainingSupport = &locals->USRRetrainingSupport;
+ 
+ 		CalculateWatermarksMALLUseAndDRAMSpeedChangeSupport(
+ 			&mode_lib->scratch,
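Several hunks in this file apply the same one-line fix: the DML helpers evidently expect HostVMMinPageSize in bytes, while the SoC bounding box stores hostvm_min_page_size_kbytes in kilobytes, so each call site now scales the field by 1024. A worked example of the conversion (the value 64 is illustrative):

#include <stdio.h>

int main(void)
{
	unsigned int hostvm_min_page_size_kbytes = 64;	/* illustrative */

	/* Before the fix, the raw kilobyte count (64) was passed where
	 * a byte count was expected; the fix multiplies by 1024. */
	printf("HostVMMinPageSize = %u bytes\n",
	       hostvm_min_page_size_kbytes * 1024);	/* 65536 */
	return 0;
}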
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index db06a5b749b40..bcc03b184e5ea 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -624,8 +624,8 @@ static void populate_dml_output_cfg_from_stream_state(struct dml_output_cfg_st *
+ 		if (is_dp2p0_output_encoder(pipe))
+ 			out->OutputEncoder[location] = dml_dp2p0;
+ 		break;
+-		out->OutputEncoder[location] = dml_edp;
+ 	case SIGNAL_TYPE_EDP:
++		out->OutputEncoder[location] = dml_edp;
+ 		break;
+ 	case SIGNAL_TYPE_HDMI_TYPE_A:
+ 	case SIGNAL_TYPE_DVI_SINGLE_LINK:
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+index 08783ad097d21..8e88dcaf88f5b 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+@@ -154,7 +154,7 @@ static bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst,
+ 	cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
+ 	cmd.abm_set_pipe.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_pipe_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+@@ -173,7 +173,7 @@ static void dmub_abm_set_backlight(struct dc_context *dc, uint32_t backlight_pwm
+ 	cmd.abm_set_backlight.abm_set_backlight_data.panel_mask = (0x01 << panel_inst);
+ 	cmd.abm_set_backlight.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_backlight_data);
+ 
+-	dm_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ void dcn21_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index d71faf2ecd413..772dc0db916f7 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -750,7 +750,7 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 				cmd.mall.header.sub_type = DMUB_CMD__MALL_ACTION_NO_DF_REQ;
+ 				cmd.mall.header.payload_bytes = sizeof(cmd.mall) - sizeof(cmd.mall.header);
+ 
+-				dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
++				dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 
+ 				return true;
+ 			}
+@@ -872,7 +872,7 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 					cmd.mall.cursor_height = cursor_attr.height;
+ 					cmd.mall.cursor_pitch = cursor_attr.pitch;
+ 
+-					dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++					dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 					/* Use copied cursor, and it's okay to not switch back */
+ 					cursor_attr.address.quad_part = cmd.mall.cursor_copy_dst.quad_part;
+@@ -888,7 +888,7 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 				cmd.mall.tmr_scale = tmr_scale;
+ 				cmd.mall.debug_bits = dc->debug.mall_error_as_fatal;
+ 
+-				dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
++				dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 
+ 				return true;
+ 			}
+@@ -905,7 +905,7 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 	cmd.mall.header.payload_bytes =
+ 		sizeof(cmd.mall) - sizeof(cmd.mall.header);
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+index 97798cee876e2..52656691ae484 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+@@ -415,7 +415,7 @@ void dcn31_z10_save_init(struct dc *dc)
+ 	cmd.dcn_restore.header.type = DMUB_CMD__IDLE_OPT;
+ 	cmd.dcn_restore.header.sub_type = DMUB_CMD__IDLE_OPT_DCN_SAVE_INIT;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ void dcn31_z10_restore(const struct dc *dc)
+@@ -433,7 +433,7 @@ void dcn31_z10_restore(const struct dc *dc)
+ 	cmd.dcn_restore.header.type = DMUB_CMD__IDLE_OPT;
+ 	cmd.dcn_restore.header.sub_type = DMUB_CMD__IDLE_OPT_DCN_RESTORE;
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ }
+ 
+ void dcn31_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+index c1a9b746c43fe..5bf9e7c1e052f 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+@@ -277,7 +277,7 @@ bool dcn32_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 				cmd.cab.header.sub_type = DMUB_CMD__CAB_NO_DCN_REQ;
+ 				cmd.cab.header.payload_bytes = sizeof(cmd.cab) - sizeof(cmd.cab.header);
+ 
+-				dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
++				dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 
+ 				return true;
+ 			}
+@@ -311,7 +311,7 @@ bool dcn32_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 				cmd.cab.header.payload_bytes = sizeof(cmd.cab) - sizeof(cmd.cab.header);
+ 				cmd.cab.cab_alloc_ways = (uint8_t)ways;
+ 
+-				dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
++				dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
+ 
+ 				return true;
+ 			}
+@@ -327,7 +327,7 @@ bool dcn32_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 	cmd.cab.header.payload_bytes =
+ 			sizeof(cmd.cab) - sizeof(cmd.cab.header);
+ 
+-	dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
++	dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+ 
+ 	return true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index 5a8258287438e..cf26d2ad40083 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -671,11 +671,7 @@ bool dcn35_apply_idle_power_optimizations(struct dc *dc, bool enable)
+ 	}
+ 
+ 	// TODO: review other cases when idle optimization is allowed
+-
+-	if (!enable)
+-		dc_dmub_srv_exit_low_power_state(dc);
+-	else
+-		dc_dmub_srv_notify_idle(dc, enable);
++	dc_dmub_srv_apply_idle_power_optimizations(dc, enable);
+ 
+ 	return true;
+ }
+@@ -685,7 +681,7 @@ void dcn35_z10_restore(const struct dc *dc)
+ 	if (dc->debug.disable_z10)
+ 		return;
+ 
+-	dc_dmub_srv_exit_low_power_state(dc);
++	dc_dmub_srv_apply_idle_power_optimizations(dc, false);
+ 
+ 	dcn31_z10_restore(dc);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+index bac1420b1de84..10397d4dfb075 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+@@ -205,6 +205,7 @@ struct resource_funcs {
+ 	void (*get_panel_config_defaults)(struct dc_panel_config *panel_config);
+ 	void (*save_mall_state)(struct dc *dc, struct dc_state *context, struct mall_temp_config *temp_config);
+ 	void (*restore_mall_state)(struct dc *dc, struct dc_state *context, struct mall_temp_config *temp_config);
++	void (*build_pipe_pix_clk_params)(struct pipe_ctx *pipe_ctx);
+ };
+ 
+ struct audio_support{
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+index f8e01ca09d964..04d1ecd6e5934 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+@@ -875,11 +875,15 @@ bool link_set_dsc_pps_packet(struct pipe_ctx *pipe_ctx, bool enable, bool immedi
+ {
+ 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
+ 	struct dc_stream_state *stream = pipe_ctx->stream;
+-	DC_LOGGER_INIT(dsc->ctx->logger);
+ 
+-	if (!pipe_ctx->stream->timing.flags.DSC || !dsc)
++	if (!pipe_ctx->stream->timing.flags.DSC)
+ 		return false;
+ 
++	if (!dsc)
++		return false;
++
++	DC_LOGGER_INIT(dsc->ctx->logger);
++
+ 	if (enable) {
+ 		struct dsc_config dsc_cfg;
+ 		uint8_t dsc_packed_pps[128];
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index db87aa7b5c90f..2f11eaabbe5f6 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -1392,7 +1392,7 @@ static bool get_usbc_cable_id(struct dc_link *link, union dp_cable_id *cable_id)
+ 	cmd.cable_id.header.payload_bytes = sizeof(cmd.cable_id.data);
+ 	cmd.cable_id.data.input.phy_inst = resource_transmitter_to_phy_idx(
+ 			link->dc, link->link_enc->transmitter);
+-	if (dm_execute_dmub_cmd(link->dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
++	if (dc_wake_and_execute_dmub_cmd(link->dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
+ 			cmd.cable_id.header.ret_status == 1) {
+ 		cable_id->raw = cmd.cable_id.data.output_raw;
+ 		DC_LOG_DC("usbc_cable_id = %d.\n", cable_id->raw);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
+index 0bb7491339098..982eda3c46f56 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
+@@ -90,7 +90,8 @@ bool dpia_query_hpd_status(struct dc_link *link)
+ 	cmd.query_hpd.data.ch_type = AUX_CHANNEL_DPIA;
+ 
+ 	/* Return HPD status reported by DMUB if query successfully executed. */
+-	if (dm_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) && cmd.query_hpd.data.status == AUX_RET_SUCCESS)
++	if (dc_wake_and_execute_dmub_cmd(dmub_srv->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
++	    cmd.query_hpd.data.status == AUX_RET_SUCCESS)
+ 		is_hpd_high = cmd.query_hpd.data.result;
+ 
+ 	DC_LOG_DEBUG("%s: link(%d) dpia(%d) cmd_status(%d) result(%d)\n",
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c
+index 5c9a30211c109..fc50931c2aecb 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dpcd.c
+@@ -205,7 +205,7 @@ enum dc_status core_link_read_dpcd(
+ 	uint32_t extended_size;
+ 	/* size of the remaining partitioned address space */
+ 	uint32_t size_left_to_read;
+-	enum dc_status status;
++	enum dc_status status = DC_ERROR_UNEXPECTED;
+ 	/* size of the next partition to be read from */
+ 	uint32_t partition_size;
+ 	uint32_t data_index = 0;
+@@ -234,7 +234,7 @@ enum dc_status core_link_write_dpcd(
+ {
+ 	uint32_t partition_size;
+ 	uint32_t data_index = 0;
+-	enum dc_status status;
++	enum dc_status status = DC_ERROR_UNEXPECTED;
+ 
+ 	while (size) {
+ 		partition_size = dpcd_get_next_partition_size(address, size);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+index e5cfaaef70b3f..bdae54c4648bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+@@ -927,8 +927,8 @@ bool edp_get_replay_state(const struct dc_link *link, uint64_t *state)
+ bool edp_setup_replay(struct dc_link *link, const struct dc_stream_state *stream)
+ {
+ 	/* To-do: Setup Replay */
+-	struct dc *dc = link->ctx->dc;
+-	struct dmub_replay *replay = dc->res_pool->replay;
++	struct dc *dc;
++	struct dmub_replay *replay;
+ 	int i;
+ 	unsigned int panel_inst;
+ 	struct replay_context replay_context = { 0 };
+@@ -944,6 +944,10 @@ bool edp_setup_replay(struct dc_link *link, const struct dc_stream_state *stream
+ 	if (!link)
+ 		return false;
+ 
++	dc = link->ctx->dc;
++
++	replay = dc->res_pool->replay;
++
+ 	if (!replay)
+ 		return false;
+ 
+@@ -972,8 +976,7 @@ bool edp_setup_replay(struct dc_link *link, const struct dc_stream_state *stream
+ 
+ 	replay_context.line_time_in_ns = lineTimeInNs;
+ 
+-	if (replay)
+-		link->replay_settings.replay_feature_enabled =
++	link->replay_settings.replay_feature_enabled =
+ 			replay->funcs->replay_copy_settings(replay, link, &replay_context, panel_inst);
+ 	if (link->replay_settings.replay_feature_enabled) {
+ 
+diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+index df63aa8f01e98..d1a4ed6f5916e 100644
+--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
++++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+@@ -150,6 +150,13 @@ enum dmub_memory_access_type {
+ 	DMUB_MEMORY_ACCESS_DMA
+ };
+ 
++/* enum dmub_srv_power_state_type - to track DC power state in dmub_srv */
++enum dmub_srv_power_state_type {
++	DMUB_POWER_STATE_UNDEFINED = 0,
++	DMUB_POWER_STATE_D0 = 1,
++	DMUB_POWER_STATE_D3 = 8
++};
++
+ /**
+  * struct dmub_region - dmub hw memory region
+  * @base: base address for region, must be 256 byte aligned
+@@ -485,6 +492,8 @@ struct dmub_srv {
+ 	/* Feature capabilities reported by fw */
+ 	struct dmub_feature_caps feature_caps;
+ 	struct dmub_visual_confirm_color visual_confirm_color;
++
++	enum dmub_srv_power_state_type power_state;
+ };
+ 
+ /**
+@@ -889,6 +898,18 @@ enum dmub_status dmub_srv_clear_inbox0_ack(struct dmub_srv *dmub);
+  */
+ void dmub_srv_subvp_save_surf_addr(struct dmub_srv *dmub, const struct dc_plane_address *addr, uint8_t subvp_index);
+ 
++/**
++ * dmub_srv_set_power_state() - Track DC power state in dmub_srv
++ * @dmub: The dmub service
++ * @power_state: DC power state setting
++ *
++ * Store the DC power state in dmub_srv. While dmub_srv is in D3, messages are not sent to the DMUB.
++ *
++ * Return:
++ *   void
++ */
++void dmub_srv_set_power_state(struct dmub_srv *dmub, enum dmub_srv_power_state_type dmub_srv_power_state);
++
+ #if defined(__cplusplus)
+ }
+ #endif
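The bookkeeping this declaration introduces is deliberately simple: dmub_srv_hw_init() sets the state to D0, the queue/execute entry points refuse work in any other state (see the dmub_srv.c hunks below), and a caller is expected to set D3 around hardware power-down. A standalone C model of that gate (names hypothetical, illustration only):

#include <stdio.h>

enum power_state { D0 = 1, D3 = 8 };

static enum power_state state = D3;	/* before hw_init */

/* Mirrors the check added to dmub_srv_cmd_queue()/dmub_srv_cmd_execute(). */
static int cmd_queue(const char *cmd)
{
	if (state != D0)
		return -1;
	printf("queued: %s\n", cmd);
	return 0;
}

int main(void)
{
	if (cmd_queue("psr_enable") < 0)
		puts("rejected: DMUB not in D0");

	state = D0;			/* as done at the end of hw_init */
	cmd_queue("psr_enable");	/* accepted */
	return 0;
}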
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+index 38360adc53d97..59d4e64845ca2 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+@@ -713,6 +713,7 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
+ 		dmub->hw_funcs.reset_release(dmub);
+ 
+ 	dmub->hw_init = true;
++	dmub->power_state = DMUB_POWER_STATE_D0;
+ 
+ 	return DMUB_STATUS_OK;
+ }
+@@ -766,6 +767,9 @@ enum dmub_status dmub_srv_cmd_queue(struct dmub_srv *dmub,
+ 	if (!dmub->hw_init)
+ 		return DMUB_STATUS_INVALID;
+ 
++	if (dmub->power_state != DMUB_POWER_STATE_D0)
++		return DMUB_STATUS_INVALID;
++
+ 	if (dmub->inbox1_rb.rptr > dmub->inbox1_rb.capacity ||
+ 	    dmub->inbox1_rb.wrpt > dmub->inbox1_rb.capacity) {
+ 		return DMUB_STATUS_HW_FAILURE;
+@@ -784,6 +788,9 @@ enum dmub_status dmub_srv_cmd_execute(struct dmub_srv *dmub)
+ 	if (!dmub->hw_init)
+ 		return DMUB_STATUS_INVALID;
+ 
++	if (dmub->power_state != DMUB_POWER_STATE_D0)
++		return DMUB_STATUS_INVALID;
++
+ 	/**
+ 	 * Read back all the queued commands to ensure that they've
+ 	 * been flushed to framebuffer memory. Otherwise DMCUB might
+@@ -1100,3 +1107,11 @@ void dmub_srv_subvp_save_surf_addr(struct dmub_srv *dmub, const struct dc_plane_
+ 				subvp_index);
+ 	}
+ }
++
++void dmub_srv_set_power_state(struct dmub_srv *dmub, enum dmub_srv_power_state_type dmub_srv_power_state)
++{
++	if (!dmub || !dmub->hw_init)
++		return;
++
++	dmub->power_state = dmub_srv_power_state;
++}
+diff --git a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+index 1675314a3ff20..7806056ca1dd1 100644
+--- a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
++++ b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+@@ -841,6 +841,8 @@ bool is_psr_su_specific_panel(struct dc_link *link)
+ 				isPSRSUSupported = false;
+ 			else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
+ 				isPSRSUSupported = false;
++			else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
++				isPSRSUSupported = false;
+ 			else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
+ 				isPSRSUSupported = true;
+ 		}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index e1a5ee911dbbc..cd252e2b584c6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -24,6 +24,7 @@
+ 
+ #include <linux/firmware.h>
+ #include <linux/pci.h>
++#include <linux/power_supply.h>
+ #include <linux/reboot.h>
+ 
+ #include "amdgpu.h"
+@@ -817,16 +818,8 @@ static int smu_late_init(void *handle)
+ 	 * handle the switch automatically. Driver involvement
+ 	 * is unnecessary.
+ 	 */
+-	if (!smu->dc_controlled_by_gpio) {
+-		ret = smu_set_power_source(smu,
+-					   adev->pm.ac_power ? SMU_POWER_SOURCE_AC :
+-					   SMU_POWER_SOURCE_DC);
+-		if (ret) {
+-			dev_err(adev->dev, "Failed to switch to %s mode!\n",
+-				adev->pm.ac_power ? "AC" : "DC");
+-			return ret;
+-		}
+-	}
++	adev->pm.ac_power = power_supply_is_system_supplied() > 0;
++	smu_set_ac_dc(smu);
+ 
+ 	if ((amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 1)) ||
+ 	    (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 3)))
+@@ -2502,6 +2495,7 @@ int smu_get_power_limit(void *handle,
+ 		case SMU_PPT_LIMIT_CURRENT:
+ 			switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
+ 			case IP_VERSION(13, 0, 2):
++			case IP_VERSION(13, 0, 6):
+ 			case IP_VERSION(11, 0, 7):
+ 			case IP_VERSION(11, 0, 11):
+ 			case IP_VERSION(11, 0, 12):
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+index 5a314d0316c1c..c7bfa68bf00f4 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+@@ -1442,10 +1442,12 @@ static int smu_v11_0_irq_process(struct amdgpu_device *adev,
+ 			case 0x3:
+ 				dev_dbg(adev->dev, "Switched to AC mode!\n");
+ 				schedule_work(&smu->interrupt_work);
++				adev->pm.ac_power = true;
+ 				break;
+ 			case 0x4:
+ 				dev_dbg(adev->dev, "Switched to DC mode!\n");
+ 				schedule_work(&smu->interrupt_work);
++				adev->pm.ac_power = false;
+ 				break;
+ 			case 0x7:
+ 				/*
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index cf1b84060bc3d..68981086b5a24 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -1379,10 +1379,12 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
+ 			case 0x3:
+ 				dev_dbg(adev->dev, "Switched to AC mode!\n");
+ 				smu_v13_0_ack_ac_dc_interrupt(smu);
++				adev->pm.ac_power = true;
+ 				break;
+ 			case 0x4:
+ 				dev_dbg(adev->dev, "Switched to DC mode!\n");
+ 				smu_v13_0_ack_ac_dc_interrupt(smu);
++				adev->pm.ac_power = false;
+ 				break;
+ 			case 0x7:
+ 				/*
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 82c4e1f1c6f07..2a76064d909fb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2352,6 +2352,7 @@ static int smu_v13_0_0_get_power_limit(struct smu_context *smu,
+ 	PPTable_t *pptable = table_context->driver_pptable;
+ 	SkuTable_t *skutable = &pptable->SkuTable;
+ 	uint32_t power_limit, od_percent_upper, od_percent_lower;
++	uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
+ 
+ 	if (smu_v13_0_get_current_power_limit(smu, &power_limit))
+ 		power_limit = smu->adev->pm.ac_power ?
+@@ -2375,7 +2376,7 @@ static int smu_v13_0_0_get_power_limit(struct smu_context *smu,
+ 					od_percent_upper, od_percent_lower, power_limit);
+ 
+ 	if (max_power_limit) {
+-		*max_power_limit = power_limit * (100 + od_percent_upper);
++		*max_power_limit = msg_limit * (100 + od_percent_upper);
+ 		*max_power_limit /= 100;
+ 	}
+ 
+@@ -2970,6 +2971,55 @@ static ssize_t smu_v13_0_0_get_ecc_info(struct smu_context *smu,
+ 	return ret;
+ }
+ 
++static int smu_v13_0_0_set_power_limit(struct smu_context *smu,
++				       enum smu_ppt_limit_type limit_type,
++				       uint32_t limit)
++{
++	PPTable_t *pptable = smu->smu_table.driver_pptable;
++	SkuTable_t *skutable = &pptable->SkuTable;
++	uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
++	struct smu_table_context *table_context = &smu->smu_table;
++	OverDriveTableExternal_t *od_table =
++		(OverDriveTableExternal_t *)table_context->overdrive_table;
++	int ret = 0;
++
++	if (limit_type != SMU_DEFAULT_PPT_LIMIT)
++		return -EINVAL;
++
++	if (limit <= msg_limit) {
++		if (smu->current_power_limit > msg_limit) {
++			od_table->OverDriveTable.Ppt = 0;
++			od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT;
++
++			ret = smu_v13_0_0_upload_overdrive_table(smu, od_table);
++			if (ret) {
++				dev_err(smu->adev->dev, "Failed to upload overdrive table!\n");
++				return ret;
++			}
++		}
++		return smu_v13_0_set_power_limit(smu, limit_type, limit);
++	} else if (smu->od_enabled) {
++		ret = smu_v13_0_set_power_limit(smu, limit_type, msg_limit);
++		if (ret)
++			return ret;
++
++		od_table->OverDriveTable.Ppt = (limit * 100) / msg_limit - 100;
++		od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT;
++
++		ret = smu_v13_0_0_upload_overdrive_table(smu, od_table);
++		if (ret) {
++			dev_err(smu->adev->dev, "Failed to upload overdrive table!\n");
++			return ret;
++		}
++
++		smu->current_power_limit = limit;
++	} else {
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
+ 	.get_allowed_feature_mask = smu_v13_0_0_get_allowed_feature_mask,
+ 	.set_default_dpm_table = smu_v13_0_0_set_default_dpm_table,
+@@ -3024,7 +3074,7 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
+ 	.set_fan_control_mode = smu_v13_0_set_fan_control_mode,
+ 	.enable_mgpu_fan_boost = smu_v13_0_0_enable_mgpu_fan_boost,
+ 	.get_power_limit = smu_v13_0_0_get_power_limit,
+-	.set_power_limit = smu_v13_0_set_power_limit,
++	.set_power_limit = smu_v13_0_0_set_power_limit,
+ 	.set_power_source = smu_v13_0_set_power_source,
+ 	.get_power_profile_mode = smu_v13_0_0_get_power_profile_mode,
+ 	.set_power_profile_mode = smu_v13_0_0_set_power_profile_mode,
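The set_power_limit hunk above maps a request above the SKU board limit onto an overdrive percentage, and get_power_limit now scales the board limit (msg_limit) rather than the current limit. A minimal user-space sketch of that arithmetic, assuming an illustrative 300 W board limit; MSG_LIMIT and the function names below are stand-ins, not the driver's API:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Board power limit from the SKU table (mW), i.e. msg_limit above. */
#define MSG_LIMIT 300000u

/* Requests above MSG_LIMIT become a percentage boost (the Ppt field). */
static uint32_t od_percent(uint32_t limit)
{
	return (limit * 100u) / MSG_LIMIT - 100u;
}

/* Reported maximum scales MSG_LIMIT by the +% overdrive cap. */
static uint32_t max_limit(uint32_t od_percent_upper)
{
	return MSG_LIMIT * (100u + od_percent_upper) / 100u;
}

int main(void)
{
	assert(od_percent(330000u) == 10u);	/* 330 W -> +10% */
	assert(max_limit(15u) == 345000u);	/* +15% cap -> 345 W */
	printf("Ppt=+%u%%, max=%u mW\n",
	       (unsigned)od_percent(330000u), (unsigned)max_limit(15u));
	return 0;
}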
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+index b64e07b759374..8cd2b8cc3d58a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+@@ -924,7 +924,9 @@ static int smu_v13_0_6_print_clks(struct smu_context *smu, char *buf, int size,
+ 			if (i < (clocks.num_levels - 1))
+ 				clk2 = clocks.data[i + 1].clocks_in_khz / 1000;
+ 
+-			if (curr_clk >= clk1 && curr_clk < clk2) {
++			if (curr_clk == clk1) {
++				level = i;
++			} else if (curr_clk >= clk1 && curr_clk < clk2) {
+ 				level = (curr_clk - clk1) <= (clk2 - curr_clk) ?
+ 						i :
+ 						i + 1;
+@@ -2190,17 +2192,18 @@ static int smu_v13_0_6_mode2_reset(struct smu_context *smu)
+ 			continue;
+ 		}
+ 
+-		if (ret) {
+-			dev_err(adev->dev,
+-				"failed to send mode2 message \tparam: 0x%08x error code %d\n",
+-				SMU_RESET_MODE_2, ret);
++		if (ret)
+ 			goto out;
+-		}
++
+ 	} while (ret == -ETIME && timeout);
+ 
+ out:
+ 	mutex_unlock(&smu->message_lock);
+ 
++	if (ret)
++		dev_err(adev->dev, "failed to send mode2 reset, error code %d",
++			ret);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 81eafed76045e..d380a53e8f772 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2316,6 +2316,7 @@ static int smu_v13_0_7_get_power_limit(struct smu_context *smu,
+ 	PPTable_t *pptable = table_context->driver_pptable;
+ 	SkuTable_t *skutable = &pptable->SkuTable;
+ 	uint32_t power_limit, od_percent_upper, od_percent_lower;
++	uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
+ 
+ 	if (smu_v13_0_get_current_power_limit(smu, &power_limit))
+ 		power_limit = smu->adev->pm.ac_power ?
+@@ -2339,7 +2340,7 @@ static int smu_v13_0_7_get_power_limit(struct smu_context *smu,
+ 					od_percent_upper, od_percent_lower, power_limit);
+ 
+ 	if (max_power_limit) {
+-		*max_power_limit = power_limit * (100 + od_percent_upper);
++		*max_power_limit = msg_limit * (100 + od_percent_upper);
+ 		*max_power_limit /= 100;
+ 	}
+ 
+@@ -2567,6 +2568,55 @@ static int smu_v13_0_7_set_df_cstate(struct smu_context *smu,
+ 					       NULL);
+ }
+ 
++static int smu_v13_0_7_set_power_limit(struct smu_context *smu,
++				       enum smu_ppt_limit_type limit_type,
++				       uint32_t limit)
++{
++	PPTable_t *pptable = smu->smu_table.driver_pptable;
++	SkuTable_t *skutable = &pptable->SkuTable;
++	uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
++	struct smu_table_context *table_context = &smu->smu_table;
++	OverDriveTableExternal_t *od_table =
++		(OverDriveTableExternal_t *)table_context->overdrive_table;
++	int ret = 0;
++
++	if (limit_type != SMU_DEFAULT_PPT_LIMIT)
++		return -EINVAL;
++
++	if (limit <= msg_limit) {
++		if (smu->current_power_limit > msg_limit) {
++			od_table->OverDriveTable.Ppt = 0;
++			od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT;
++
++			ret = smu_v13_0_7_upload_overdrive_table(smu, od_table);
++			if (ret) {
++				dev_err(smu->adev->dev, "Failed to upload overdrive table!\n");
++				return ret;
++			}
++		}
++		return smu_v13_0_set_power_limit(smu, limit_type, limit);
++	} else if (smu->od_enabled) {
++		ret = smu_v13_0_set_power_limit(smu, limit_type, msg_limit);
++		if (ret)
++			return ret;
++
++		od_table->OverDriveTable.Ppt = (limit * 100) / msg_limit - 100;
++		od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT;
++
++		ret = smu_v13_0_7_upload_overdrive_table(smu, od_table);
++		if (ret) {
++			dev_err(smu->adev->dev, "Failed to upload overdrive table!\n");
++			return ret;
++		}
++
++		smu->current_power_limit = limit;
++	} else {
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
+ 	.get_allowed_feature_mask = smu_v13_0_7_get_allowed_feature_mask,
+ 	.set_default_dpm_table = smu_v13_0_7_set_default_dpm_table,
+@@ -2618,7 +2668,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
+ 	.set_fan_control_mode = smu_v13_0_set_fan_control_mode,
+ 	.enable_mgpu_fan_boost = smu_v13_0_7_enable_mgpu_fan_boost,
+ 	.get_power_limit = smu_v13_0_7_get_power_limit,
+-	.set_power_limit = smu_v13_0_set_power_limit,
++	.set_power_limit = smu_v13_0_7_set_power_limit,
+ 	.set_power_source = smu_v13_0_set_power_source,
+ 	.get_power_profile_mode = smu_v13_0_7_get_power_profile_mode,
+ 	.set_power_profile_mode = smu_v13_0_7_set_power_profile_mode,
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 51abe42c639e5..5168628f11cff 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1741,6 +1741,7 @@ static ssize_t anx7625_aux_transfer(struct drm_dp_aux *aux,
+ 	u8 request = msg->request & ~DP_AUX_I2C_MOT;
+ 	int ret = 0;
+ 
++	mutex_lock(&ctx->aux_lock);
+ 	pm_runtime_get_sync(dev);
+ 	msg->reply = 0;
+ 	switch (request) {
+@@ -1757,6 +1758,7 @@ static ssize_t anx7625_aux_transfer(struct drm_dp_aux *aux,
+ 					msg->size, msg->buffer);
+ 	pm_runtime_mark_last_busy(dev);
+ 	pm_runtime_put_autosuspend(dev);
++	mutex_unlock(&ctx->aux_lock);
+ 
+ 	return ret;
+ }
+@@ -2453,7 +2455,9 @@ static void anx7625_bridge_atomic_disable(struct drm_bridge *bridge,
+ 	ctx->connector = NULL;
+ 	anx7625_dp_stop(ctx);
+ 
+-	pm_runtime_put_sync(dev);
++	mutex_lock(&ctx->aux_lock);
++	pm_runtime_put_sync_suspend(dev);
++	mutex_unlock(&ctx->aux_lock);
+ }
+ 
+ static enum drm_connector_status
+@@ -2647,6 +2651,7 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 
+ 	mutex_init(&platform->lock);
+ 	mutex_init(&platform->hdcp_wq_lock);
++	mutex_init(&platform->aux_lock);
+ 
+ 	INIT_DELAYED_WORK(&platform->hdcp_work, hdcp_check_work_func);
+ 	platform->hdcp_workqueue = create_workqueue("hdcp workqueue");
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.h b/drivers/gpu/drm/bridge/analogix/anx7625.h
+index 5af819611ebce..80d3fb4e985f1 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.h
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.h
+@@ -471,6 +471,8 @@ struct anx7625_data {
+ 	struct workqueue_struct *hdcp_workqueue;
+ 	/* Lock for hdcp work queue */
+ 	struct mutex hdcp_wq_lock;
++	/* Lock for aux transfer and disable */
++	struct mutex aux_lock;
+ 	char edid_block;
+ 	struct display_timing dt;
+ 	u8 display_timing_valid;
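The new aux_lock serializes AUX transfers against the disable path, so the forced runtime suspend in atomic_disable() can never land in the middle of a transfer. A rough pthread analogy of the pattern (the names and the int flag are illustrative, not the driver's types); the same idea reappears in the ps8640 hunks further down:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t aux_lock = PTHREAD_MUTEX_INITIALIZER;
static int powered = 1;

/* Transfer path: hold the lock for the whole transfer. */
static void aux_transfer(void)
{
	pthread_mutex_lock(&aux_lock);
	if (powered)
		printf("transfer while powered\n");
	pthread_mutex_unlock(&aux_lock);
}

/* Disable path: the same lock guarantees no transfer is in flight
 * when power is forced off (pm_runtime_put_sync_suspend() above). */
static void bridge_disable(void)
{
	pthread_mutex_lock(&aux_lock);
	powered = 0;
	pthread_mutex_unlock(&aux_lock);
}

int main(void)
{
	aux_transfer();
	bridge_disable();
	aux_transfer();	/* now a no-op: device already suspended */
	return 0;
}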
+diff --git a/drivers/gpu/drm/bridge/nxp-ptn3460.c b/drivers/gpu/drm/bridge/nxp-ptn3460.c
+index d81920227a8ae..7c0076e499533 100644
+--- a/drivers/gpu/drm/bridge/nxp-ptn3460.c
++++ b/drivers/gpu/drm/bridge/nxp-ptn3460.c
+@@ -54,13 +54,13 @@ static int ptn3460_read_bytes(struct ptn3460_bridge *ptn_bridge, char addr,
+ 	int ret;
+ 
+ 	ret = i2c_master_send(ptn_bridge->client, &addr, 1);
+-	if (ret <= 0) {
++	if (ret < 0) {
+ 		DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	ret = i2c_master_recv(ptn_bridge->client, buf, len);
+-	if (ret <= 0) {
++	if (ret < 0) {
+ 		DRM_ERROR("Failed to recv i2c data, ret=%d\n", ret);
+ 		return ret;
+ 	}
+@@ -78,7 +78,7 @@ static int ptn3460_write_byte(struct ptn3460_bridge *ptn_bridge, char addr,
+ 	buf[1] = val;
+ 
+ 	ret = i2c_master_send(ptn_bridge->client, buf, ARRAY_SIZE(buf));
+-	if (ret <= 0) {
++	if (ret < 0) {
+ 		DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/gpu/drm/bridge/parade-ps8640.c b/drivers/gpu/drm/bridge/parade-ps8640.c
+index 541e4f5afc4c8..14d4dcf239da8 100644
+--- a/drivers/gpu/drm/bridge/parade-ps8640.c
++++ b/drivers/gpu/drm/bridge/parade-ps8640.c
+@@ -107,6 +107,7 @@ struct ps8640 {
+ 	struct device_link *link;
+ 	bool pre_enabled;
+ 	bool need_post_hpd_delay;
++	struct mutex aux_lock;
+ };
+ 
+ static const struct regmap_config ps8640_regmap_config[] = {
+@@ -345,11 +346,20 @@ static ssize_t ps8640_aux_transfer(struct drm_dp_aux *aux,
+ 	struct device *dev = &ps_bridge->page[PAGE0_DP_CNTL]->dev;
+ 	int ret;
+ 
++	mutex_lock(&ps_bridge->aux_lock);
+ 	pm_runtime_get_sync(dev);
++	ret = _ps8640_wait_hpd_asserted(ps_bridge, 200 * 1000);
++	if (ret) {
++		pm_runtime_put_sync_suspend(dev);
++		goto exit;
++	}
+ 	ret = ps8640_aux_transfer_msg(aux, msg);
+ 	pm_runtime_mark_last_busy(dev);
+ 	pm_runtime_put_autosuspend(dev);
+ 
++exit:
++	mutex_unlock(&ps_bridge->aux_lock);
++
+ 	return ret;
+ }
+ 
+@@ -470,7 +480,18 @@ static void ps8640_atomic_post_disable(struct drm_bridge *bridge,
+ 	ps_bridge->pre_enabled = false;
+ 
+ 	ps8640_bridge_vdo_control(ps_bridge, DISABLE);
++
++	/*
++	 * The bridge seems to expect everything to be power cycled during
++	 * the disable process, so grab the lock here to make sure
++	 * ps8640_aux_transfer() is not holding a runtime PM reference and
++	 * preventing the bridge from suspending.
++	 */
++	mutex_lock(&ps_bridge->aux_lock);
++
+ 	pm_runtime_put_sync_suspend(&ps_bridge->page[PAGE0_DP_CNTL]->dev);
++
++	mutex_unlock(&ps_bridge->aux_lock);
+ }
+ 
+ static int ps8640_bridge_attach(struct drm_bridge *bridge,
+@@ -619,6 +640,8 @@ static int ps8640_probe(struct i2c_client *client)
+ 	if (!ps_bridge)
+ 		return -ENOMEM;
+ 
++	mutex_init(&ps_bridge->aux_lock);
++
+ 	ps_bridge->supplies[0].supply = "vdd12";
+ 	ps_bridge->supplies[1].supply = "vdd33";
+ 	ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ps_bridge->supplies),
+diff --git a/drivers/gpu/drm/bridge/samsung-dsim.c b/drivers/gpu/drm/bridge/samsung-dsim.c
+index be5914caa17d5..63a1a0c88be4d 100644
+--- a/drivers/gpu/drm/bridge/samsung-dsim.c
++++ b/drivers/gpu/drm/bridge/samsung-dsim.c
+@@ -969,10 +969,6 @@ static int samsung_dsim_init_link(struct samsung_dsim *dsi)
+ 	reg = samsung_dsim_read(dsi, DSIM_ESCMODE_REG);
+ 	reg &= ~DSIM_STOP_STATE_CNT_MASK;
+ 	reg |= DSIM_STOP_STATE_CNT(driver_data->reg_values[STOP_STATE_CNT]);
+-
+-	if (!samsung_dsim_hw_is_exynos(dsi->plat_data->hw_type))
+-		reg |= DSIM_FORCE_STOP_STATE;
+-
+ 	samsung_dsim_write(dsi, DSIM_ESCMODE_REG, reg);
+ 
+ 	reg = DSIM_BTA_TIMEOUT(0xff) | DSIM_LPDR_TIMEOUT(0xffff);
+@@ -1431,18 +1427,6 @@ static void samsung_dsim_disable_irq(struct samsung_dsim *dsi)
+ 	disable_irq(dsi->irq);
+ }
+ 
+-static void samsung_dsim_set_stop_state(struct samsung_dsim *dsi, bool enable)
+-{
+-	u32 reg = samsung_dsim_read(dsi, DSIM_ESCMODE_REG);
+-
+-	if (enable)
+-		reg |= DSIM_FORCE_STOP_STATE;
+-	else
+-		reg &= ~DSIM_FORCE_STOP_STATE;
+-
+-	samsung_dsim_write(dsi, DSIM_ESCMODE_REG, reg);
+-}
+-
+ static int samsung_dsim_init(struct samsung_dsim *dsi)
+ {
+ 	const struct samsung_dsim_driver_data *driver_data = dsi->driver_data;
+@@ -1492,9 +1476,6 @@ static void samsung_dsim_atomic_pre_enable(struct drm_bridge *bridge,
+ 		ret = samsung_dsim_init(dsi);
+ 		if (ret)
+ 			return;
+-
+-		samsung_dsim_set_display_mode(dsi);
+-		samsung_dsim_set_display_enable(dsi, true);
+ 	}
+ }
+ 
+@@ -1503,12 +1484,8 @@ static void samsung_dsim_atomic_enable(struct drm_bridge *bridge,
+ {
+ 	struct samsung_dsim *dsi = bridge_to_dsi(bridge);
+ 
+-	if (samsung_dsim_hw_is_exynos(dsi->plat_data->hw_type)) {
+-		samsung_dsim_set_display_mode(dsi);
+-		samsung_dsim_set_display_enable(dsi, true);
+-	} else {
+-		samsung_dsim_set_stop_state(dsi, false);
+-	}
++	samsung_dsim_set_display_mode(dsi);
++	samsung_dsim_set_display_enable(dsi, true);
+ 
+ 	dsi->state |= DSIM_STATE_VIDOUT_AVAILABLE;
+ }
+@@ -1521,9 +1498,6 @@ static void samsung_dsim_atomic_disable(struct drm_bridge *bridge,
+ 	if (!(dsi->state & DSIM_STATE_ENABLED))
+ 		return;
+ 
+-	if (!samsung_dsim_hw_is_exynos(dsi->plat_data->hw_type))
+-		samsung_dsim_set_stop_state(dsi, true);
+-
+ 	dsi->state &= ~DSIM_STATE_VIDOUT_AVAILABLE;
+ }
+ 
+@@ -1828,8 +1802,6 @@ static ssize_t samsung_dsim_host_transfer(struct mipi_dsi_host *host,
+ 	if (ret)
+ 		return ret;
+ 
+-	samsung_dsim_set_stop_state(dsi, false);
+-
+ 	ret = mipi_dsi_create_packet(&xfer.packet, msg);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/gpu/drm/bridge/sii902x.c b/drivers/gpu/drm/bridge/sii902x.c
+index 2bdc5b439bebd..4560ae9cbce15 100644
+--- a/drivers/gpu/drm/bridge/sii902x.c
++++ b/drivers/gpu/drm/bridge/sii902x.c
+@@ -1080,6 +1080,26 @@ static int sii902x_init(struct sii902x *sii902x)
+ 			return ret;
+ 	}
+ 
++	ret = sii902x_audio_codec_init(sii902x, dev);
++	if (ret)
++		return ret;
++
++	i2c_set_clientdata(sii902x->i2c, sii902x);
++
++	sii902x->i2cmux = i2c_mux_alloc(sii902x->i2c->adapter, dev,
++					1, 0, I2C_MUX_GATE,
++					sii902x_i2c_bypass_select,
++					sii902x_i2c_bypass_deselect);
++	if (!sii902x->i2cmux) {
++		ret = -ENOMEM;
++		goto err_unreg_audio;
++	}
++
++	sii902x->i2cmux->priv = sii902x;
++	ret = i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0);
++	if (ret)
++		goto err_unreg_audio;
++
+ 	sii902x->bridge.funcs = &sii902x_bridge_funcs;
+ 	sii902x->bridge.of_node = dev->of_node;
+ 	sii902x->bridge.timings = &default_sii902x_timings;
+@@ -1090,19 +1110,13 @@ static int sii902x_init(struct sii902x *sii902x)
+ 
+ 	drm_bridge_add(&sii902x->bridge);
+ 
+-	sii902x_audio_codec_init(sii902x, dev);
+-
+-	i2c_set_clientdata(sii902x->i2c, sii902x);
++	return 0;
+ 
+-	sii902x->i2cmux = i2c_mux_alloc(sii902x->i2c->adapter, dev,
+-					1, 0, I2C_MUX_GATE,
+-					sii902x_i2c_bypass_select,
+-					sii902x_i2c_bypass_deselect);
+-	if (!sii902x->i2cmux)
+-		return -ENOMEM;
++err_unreg_audio:
++	if (!PTR_ERR_OR_ZERO(sii902x->audio.pdev))
++		platform_device_unregister(sii902x->audio.pdev);
+ 
+-	sii902x->i2cmux->priv = sii902x;
+-	return i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0);
++	return ret;
+ }
+ 
+ static int sii902x_probe(struct i2c_client *client)
+@@ -1170,12 +1184,14 @@ static int sii902x_probe(struct i2c_client *client)
+ }
+ 
+ static void sii902x_remove(struct i2c_client *client)
+-
+ {
+ 	struct sii902x *sii902x = i2c_get_clientdata(client);
+ 
+-	i2c_mux_del_adapters(sii902x->i2cmux);
+ 	drm_bridge_remove(&sii902x->bridge);
++	i2c_mux_del_adapters(sii902x->i2cmux);
++
++	if (!PTR_ERR_OR_ZERO(sii902x->audio.pdev))
++		platform_device_unregister(sii902x->audio.pdev);
+ }
+ 
+ static const struct of_device_id sii902x_dt_ids[] = {
+diff --git a/drivers/gpu/drm/drm_damage_helper.c b/drivers/gpu/drm/drm_damage_helper.c
+index d8b2955e88fd0..afb02aae707b4 100644
+--- a/drivers/gpu/drm/drm_damage_helper.c
++++ b/drivers/gpu/drm/drm_damage_helper.c
+@@ -241,7 +241,8 @@ drm_atomic_helper_damage_iter_init(struct drm_atomic_helper_damage_iter *iter,
+ 	iter->plane_src.x2 = (src.x2 >> 16) + !!(src.x2 & 0xFFFF);
+ 	iter->plane_src.y2 = (src.y2 >> 16) + !!(src.y2 & 0xFFFF);
+ 
+-	if (!iter->clips || !drm_rect_equals(&state->src, &old_state->src)) {
++	if (!iter->clips || state->ignore_damage_clips ||
++	    !drm_rect_equals(&state->src, &old_state->src)) {
+ 		iter->clips = NULL;
+ 		iter->num_clips = 0;
+ 		iter->full_update = true;
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 24e7998d17313..311e179904a2a 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -678,6 +678,19 @@ int drm_mode_getplane_res(struct drm_device *dev, void *data,
+ 		    !file_priv->universal_planes)
+ 			continue;
+ 
++		/*
++		 * If we're running on a virtualized driver then,
++		 * unless userspace advertises support for the
++		 * virtualized cursor plane, disable cursor planes
++		 * because they'll be broken due to missing cursor
++		 * hotspot info.
++		 */
++		if (plane->type == DRM_PLANE_TYPE_CURSOR &&
++		    drm_core_check_feature(dev, DRIVER_CURSOR_HOTSPOT) &&
++		    file_priv->atomic &&
++		    !file_priv->supports_virtualized_cursor_plane)
++			continue;
++
+ 		if (drm_lease_held(file_priv, plane->base.id)) {
+ 			if (count < plane_resp->count_planes &&
+ 			    put_user(plane->base.id, plane_ptr + count))
+@@ -1387,6 +1400,7 @@ retry:
+ out:
+ 	if (fb)
+ 		drm_framebuffer_put(fb);
++	fb = NULL;
+ 	if (plane->old_fb)
+ 		drm_framebuffer_put(plane->old_fb);
+ 	plane->old_fb = NULL;
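The drm_mode_getplane_res() hunk above hides cursor planes from atomic userspace that cannot supply hotspot data on virtualized drivers. The gating condition, restated as a standalone predicate over plain booleans (a sketch, not the DRM API):

#include <stdbool.h>
#include <stdio.h>

/* Mirror of the condition added in drm_mode_getplane_res(). */
static bool hide_cursor_plane(bool is_cursor, bool driver_has_hotspot,
			      bool client_is_atomic,
			      bool client_supports_vcursor)
{
	return is_cursor && driver_has_hotspot &&
	       client_is_atomic && !client_supports_vcursor;
}

int main(void)
{
	/* Atomic client without virtualized-cursor support: hidden. */
	printf("%d\n", hide_cursor_plane(true, true, true, false));	/* 1 */
	/* Legacy (non-atomic) client keeps seeing the cursor plane. */
	printf("%d\n", hide_cursor_plane(true, true, false, false));	/* 0 */
	return 0;
}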
+diff --git a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+index 4d986077738b9..bce027552474a 100644
+--- a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+@@ -319,9 +319,9 @@ static void decon_win_set_bldmod(struct decon_context *ctx, unsigned int win,
+ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
+ 				 struct drm_framebuffer *fb)
+ {
+-	struct exynos_drm_plane plane = ctx->planes[win];
++	struct exynos_drm_plane *plane = &ctx->planes[win];
+ 	struct exynos_drm_plane_state *state =
+-		to_exynos_plane_state(plane.base.state);
++		to_exynos_plane_state(plane->base.state);
+ 	unsigned int alpha = state->base.alpha;
+ 	unsigned int pixel_alpha;
+ 	unsigned long val;
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index 8dde7b1e9b35d..5bdc246f5fad0 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -661,9 +661,9 @@ static void fimd_win_set_bldmod(struct fimd_context *ctx, unsigned int win,
+ static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win,
+ 				struct drm_framebuffer *fb, int width)
+ {
+-	struct exynos_drm_plane plane = ctx->planes[win];
++	struct exynos_drm_plane *plane = &ctx->planes[win];
+ 	struct exynos_drm_plane_state *state =
+-		to_exynos_plane_state(plane.base.state);
++		to_exynos_plane_state(plane->base.state);
+ 	uint32_t pixel_format = fb->format->format;
+ 	unsigned int alpha = state->base.alpha;
+ 	u32 val = WINCONx_ENWIN;
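Both Exynos hunks replace a by-value copy of the plane with a pointer: the copy is a stale snapshot that misses later writes through ctx->planes[win], besides putting a large struct on the stack on every call. A tiny self-contained demonstration of the pitfall:

#include <stdio.h>

struct plane { int state; };

int main(void)
{
	struct plane planes[1] = { { .state = 1 } };

	struct plane copy = planes[0];	/* snapshot, like the old code */
	struct plane *ptr = &planes[0];	/* alias, like the fixed code  */

	planes[0].state = 2;		/* someone updates the original */

	printf("copy sees %d, pointer sees %d\n", copy.state, ptr->state);
	/* prints: copy sees 1, pointer sees 2 */
	return 0;
}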
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_gsc.c b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+index 34cdabc30b4f5..5302bebbe38c9 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_gsc.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+@@ -1342,7 +1342,7 @@ static int __maybe_unused gsc_runtime_resume(struct device *dev)
+ 	for (i = 0; i < ctx->num_clocks; i++) {
+ 		ret = clk_prepare_enable(ctx->clocks[i]);
+ 		if (ret) {
+-			while (--i > 0)
++			while (--i >= 0)
+ 				clk_disable_unprepare(ctx->clocks[i]);
+ 			return ret;
+ 		}
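The gsc_runtime_resume() hunk fixes a classic unwind off-by-one: with `while (--i > 0)` clock 0 is never disabled after a mid-loop failure. A self-contained sketch of the corrected pattern, with stub clock calls standing in for the clk API:

#include <stdio.h>

#define NCLOCKS 3

static int clk_enable(int i)   { printf("enable %d\n", i); return i == 2 ? -1 : 0; }
static void clk_disable(int i) { printf("disable %d\n", i); }

int main(void)
{
	int i;

	for (i = 0; i < NCLOCKS; i++) {
		if (clk_enable(i)) {
			/* --i skips the failed clock; >= 0 reaches clock 0 */
			while (--i >= 0)
				clk_disable(i);
			return 1;
		}
	}
	return 0;
}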
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index 67143a0f51893..bf0dbd7d0697d 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1155,6 +1155,7 @@ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder)
+ 	}
+ 
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_INIT_OTP);
++	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
+ 
+ 	/* ensure all panel commands dispatched before enabling transcoder */
+ 	wait_for_cmds_dispatched_to_panel(encoder);
+@@ -1255,8 +1256,6 @@ static void gen11_dsi_enable(struct intel_atomic_state *state,
+ 	/* step6d: enable dsi transcoder */
+ 	gen11_dsi_enable_transcoder(encoder);
+ 
+-	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
+-
+ 	/* step7: enable backlight */
+ 	intel_backlight_enable(crtc_state, conn_state);
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_ON);
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 4f1f31fc9529d..614b4534ef96b 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -1401,8 +1401,18 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp,
+ 	 * can rely on frontbuffer tracking.
+ 	 */
+ 	mask = EDP_PSR_DEBUG_MASK_MEMUP |
+-	       EDP_PSR_DEBUG_MASK_HPD |
+-	       EDP_PSR_DEBUG_MASK_LPSP;
++	       EDP_PSR_DEBUG_MASK_HPD;
++
++	/*
++	 * For some unknown reason on HSW non-ULT (or at least on
++	 * Dell Latitude E6540) external displays start to flicker
++	 * when PSR is enabled on the eDP. SR/PC6 residency is much
++	 * higher than should be possible with an external display.
++	 * As a workaround leave LPSP unmasked to prevent PSR entry
++	 * when external displays are active.
++	 */
++	if (DISPLAY_VER(dev_priv) >= 8 || IS_HASWELL_ULT(dev_priv))
++		mask |= EDP_PSR_DEBUG_MASK_LPSP;
+ 
+ 	if (DISPLAY_VER(dev_priv) < 20)
+ 		mask |= EDP_PSR_DEBUG_MASK_MAX_SLEEP;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
+index 5057d976fa578..ca762ea554136 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
+@@ -62,7 +62,7 @@ nouveau_fence_signal(struct nouveau_fence *fence)
+ 	if (test_bit(DMA_FENCE_FLAG_USER_BITS, &fence->base.flags)) {
+ 		struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+ 
+-		if (atomic_dec_and_test(&fctx->notify_ref))
++		if (!--fctx->notify_ref)
+ 			drop = 1;
+ 	}
+ 
+@@ -103,7 +103,6 @@ nouveau_fence_context_kill(struct nouveau_fence_chan *fctx, int error)
+ void
+ nouveau_fence_context_del(struct nouveau_fence_chan *fctx)
+ {
+-	cancel_work_sync(&fctx->allow_block_work);
+ 	nouveau_fence_context_kill(fctx, 0);
+ 	nvif_event_dtor(&fctx->event);
+ 	fctx->dead = 1;
+@@ -168,18 +167,6 @@ nouveau_fence_wait_uevent_handler(struct nvif_event *event, void *repv, u32 repc
+ 	return ret;
+ }
+ 
+-static void
+-nouveau_fence_work_allow_block(struct work_struct *work)
+-{
+-	struct nouveau_fence_chan *fctx = container_of(work, struct nouveau_fence_chan,
+-						       allow_block_work);
+-
+-	if (atomic_read(&fctx->notify_ref) == 0)
+-		nvif_event_block(&fctx->event);
+-	else
+-		nvif_event_allow(&fctx->event);
+-}
+-
+ void
+ nouveau_fence_context_new(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx)
+ {
+@@ -191,7 +178,6 @@ nouveau_fence_context_new(struct nouveau_channel *chan, struct nouveau_fence_cha
+ 	} args;
+ 	int ret;
+ 
+-	INIT_WORK(&fctx->allow_block_work, nouveau_fence_work_allow_block);
+ 	INIT_LIST_HEAD(&fctx->flip);
+ 	INIT_LIST_HEAD(&fctx->pending);
+ 	spin_lock_init(&fctx->lock);
+@@ -535,19 +521,15 @@ static bool nouveau_fence_enable_signaling(struct dma_fence *f)
+ 	struct nouveau_fence *fence = from_fence(f);
+ 	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+ 	bool ret;
+-	bool do_work;
+ 
+-	if (atomic_inc_return(&fctx->notify_ref) == 0)
+-		do_work = true;
++	if (!fctx->notify_ref++)
++		nvif_event_allow(&fctx->event);
+ 
+ 	ret = nouveau_fence_no_signaling(f);
+ 	if (ret)
+ 		set_bit(DMA_FENCE_FLAG_USER_BITS, &fence->base.flags);
+-	else if (atomic_dec_and_test(&fctx->notify_ref))
+-		do_work = true;
+-
+-	if (do_work)
+-		schedule_work(&fctx->allow_block_work);
++	else if (!--fctx->notify_ref)
++		nvif_event_block(&fctx->event);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h b/drivers/gpu/drm/nouveau/nouveau_fence.h
+index 28f5cf013b898..64d33ae7f3561 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
++++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
+@@ -3,7 +3,6 @@
+ #define __NOUVEAU_FENCE_H__
+ 
+ #include <linux/dma-fence.h>
+-#include <linux/workqueue.h>
+ #include <nvif/event.h>
+ 
+ struct nouveau_drm;
+@@ -46,9 +45,7 @@ struct nouveau_fence_chan {
+ 	char name[32];
+ 
+ 	struct nvif_event event;
+-	struct work_struct allow_block_work;
+-	atomic_t notify_ref;
+-	int dead, killed;
++	int notify_ref, dead, killed;
+ };
+ 
+ struct nouveau_fence_priv {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_vmm.c b/drivers/gpu/drm/nouveau/nouveau_vmm.c
+index a6602c0126715..3dda885df5b22 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_vmm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_vmm.c
+@@ -108,6 +108,9 @@ nouveau_vma_new(struct nouveau_bo *nvbo, struct nouveau_vmm *vmm,
+ 	} else {
+ 		ret = nvif_vmm_get(&vmm->vmm, PTES, false, mem->mem.page, 0,
+ 				   mem->mem.size, &tmp);
++		if (ret)
++			goto done;
++
+ 		vma->addr = tmp.addr;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
+index c8ce7ff187135..e74493a4569ed 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
+@@ -550,6 +550,10 @@ ga100_fifo_nonstall_ctor(struct nvkm_fifo *fifo)
+ 		struct nvkm_engn *engn = list_first_entry(&runl->engns, typeof(*engn), head);
+ 
+ 		runl->nonstall.vector = engn->func->nonstall(engn);
++
++		/* if no nonstall vector just keep going */
++		if (runl->nonstall.vector == -1)
++			continue;
+ 		if (runl->nonstall.vector < 0) {
+ 			RUNL_ERROR(runl, "nonstall %d", runl->nonstall.vector);
+ 			return runl->nonstall.vector;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/r535.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/r535.c
+index b903785056b5d..3454c7d295029 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/r535.c
+@@ -351,7 +351,7 @@ r535_engn_nonstall(struct nvkm_engn *engn)
+ 	int ret;
+ 
+ 	ret = nvkm_gsp_intr_nonstall(subdev->device->gsp, subdev->type, subdev->inst);
+-	WARN_ON(ret < 0);
++	WARN_ON(ret == -ENOENT);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c
+index 04bceaa28a197..da1bebb896f7f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c
+@@ -25,12 +25,8 @@ int
+ nvkm_gsp_intr_nonstall(struct nvkm_gsp *gsp, enum nvkm_subdev_type type, int inst)
+ {
+ 	for (int i = 0; i < gsp->intr_nr; i++) {
+-		if (gsp->intr[i].type == type && gsp->intr[i].inst == inst) {
+-			if (gsp->intr[i].nonstall != ~0)
+-				return gsp->intr[i].nonstall;
+-
+-			return -EINVAL;
+-		}
++		if (gsp->intr[i].type == type && gsp->intr[i].inst == inst)
++			return gsp->intr[i].nonstall;
+ 	}
+ 
+ 	return -ENOENT;
+diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig
+index 99e14dc212ecb..a4ac4b47777fe 100644
+--- a/drivers/gpu/drm/panel/Kconfig
++++ b/drivers/gpu/drm/panel/Kconfig
+@@ -530,6 +530,8 @@ config DRM_PANEL_RAYDIUM_RM692E5
+ 	depends on OF
+ 	depends on DRM_MIPI_DSI
+ 	depends on BACKLIGHT_CLASS_DEVICE
++	select DRM_DISPLAY_DP_HELPER
++	select DRM_DISPLAY_HELPER
+ 	help
+ 	  Say Y here if you want to enable support for Raydium RM692E5-based
+ 	  display panels, such as the one found in the Fairphone 5 smartphone.
+diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
+index 95c8472d878a9..7dc6fb7308ce3 100644
+--- a/drivers/gpu/drm/panel/panel-edp.c
++++ b/drivers/gpu/drm/panel/panel-edp.c
+@@ -973,6 +973,8 @@ static const struct panel_desc auo_b116xak01 = {
+ 	},
+ 	.delay = {
+ 		.hpd_absent = 200,
++		.unprepare = 500,
++		.enable = 50,
+ 	},
+ };
+ 
+@@ -1840,7 +1842,8 @@ static const struct edp_panel_entry edp_panels[] = {
+ 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x145c, &delay_200_500_e50, "B116XAB01.4"),
+ 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1e9b, &delay_200_500_e50, "B133UAN02.1"),
+ 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1ea5, &delay_200_500_e50, "B116XAK01.6"),
+-	EDP_PANEL_ENTRY('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01"),
++	EDP_PANEL_ENTRY('A', 'U', 'O', 0x235c, &delay_200_500_e50, "B116XTN02.3"),
++	EDP_PANEL_ENTRY('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01.0"),
+ 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x582d, &delay_200_500_e50, "B133UAN01.0"),
+ 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x615c, &delay_200_500_e50, "B116XAN06.1"),
+ 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"),
+@@ -1848,8 +1851,10 @@ static const struct edp_panel_entry edp_panels[] = {
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0786, &delay_200_500_p2e80, "NV116WHM-T01"),
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x07d1, &boe_nv133fhm_n61.delay, "NV133FHM-N61"),
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x082d, &boe_nv133fhm_n61.delay, "NV133FHM-N62"),
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x094b, &delay_200_500_e50, "NT116WHM-N21"),
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x095f, &delay_200_500_e50, "NE135FBM-N41 v8.1"),
++	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0979, &delay_200_500_e50, "NV116WHM-N49 V8.0"),
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x098d, &boe_nv110wtm_n61.delay, "NV110WTM-N61"),
++	EDP_PANEL_ENTRY('B', 'O', 'E', 0x09c3, &delay_200_500_e50, "NT116WHM-N21,836X2"),
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x09dd, &delay_200_500_e50, "NT116WHM-N21"),
+ 	EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a5d, &delay_200_500_e50, "NV116WHM-N45"),
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6d7aa0.c b/drivers/gpu/drm/panel/panel-samsung-s6d7aa0.c
+index ea5a857793827..f23d8832a1ad0 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6d7aa0.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6d7aa0.c
+@@ -309,7 +309,7 @@ static const struct s6d7aa0_panel_desc s6d7aa0_lsl080al02_desc = {
+ 	.off_func = s6d7aa0_lsl080al02_off,
+ 	.drm_mode = &s6d7aa0_lsl080al02_mode,
+ 	.mode_flags = MIPI_DSI_MODE_VSYNC_FLUSH | MIPI_DSI_MODE_VIDEO_NO_HFP,
+-	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
++	.bus_flags = 0,
+ 
+ 	.has_backlight = false,
+ 	.use_passwd3 = false,
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 9367a4572dcf6..1ba633fd56961 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -3861,6 +3861,7 @@ static const struct panel_desc tianma_tm070jdhg30 = {
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ };
+ 
+ static const struct panel_desc tianma_tm070jvhg33 = {
+@@ -3873,6 +3874,7 @@ static const struct panel_desc tianma_tm070jvhg33 = {
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ };
+ 
+ static const struct display_timing tianma_tm070rvhg71_timing = {
+diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
+index 46de4f171970b..beee5563031aa 100644
+--- a/drivers/gpu/drm/qxl/qxl_drv.c
++++ b/drivers/gpu/drm/qxl/qxl_drv.c
+@@ -285,7 +285,7 @@ static const struct drm_ioctl_desc qxl_ioctls[] = {
+ };
+ 
+ static struct drm_driver qxl_driver = {
+-	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
++	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC | DRIVER_CURSOR_HOTSPOT,
+ 
+ 	.dumb_create = qxl_mode_dumb_create,
+ 	.dumb_map_offset = drm_gem_ttm_dumb_map_offset,
+diff --git a/drivers/gpu/drm/tidss/tidss_crtc.c b/drivers/gpu/drm/tidss/tidss_crtc.c
+index 5e5e466f35d10..7c78c074e3a2e 100644
+--- a/drivers/gpu/drm/tidss/tidss_crtc.c
++++ b/drivers/gpu/drm/tidss/tidss_crtc.c
+@@ -169,13 +169,13 @@ static void tidss_crtc_atomic_flush(struct drm_crtc *crtc,
+ 	struct tidss_device *tidss = to_tidss(ddev);
+ 	unsigned long flags;
+ 
+-	dev_dbg(ddev->dev,
+-		"%s: %s enabled %d, needs modeset %d, event %p\n", __func__,
+-		crtc->name, drm_atomic_crtc_needs_modeset(crtc->state),
+-		crtc->state->enable, crtc->state->event);
++	dev_dbg(ddev->dev, "%s: %s is %sactive, %s modeset, event %p\n",
++		__func__, crtc->name, crtc->state->active ? "" : "not ",
++		drm_atomic_crtc_needs_modeset(crtc->state) ? "needs" : "doesn't need",
++		crtc->state->event);
+ 
+ 	/* There is nothing to do if CRTC is not going to be enabled. */
+-	if (!crtc->state->enable)
++	if (!crtc->state->active)
+ 		return;
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/vboxvideo/vbox_drv.c b/drivers/gpu/drm/vboxvideo/vbox_drv.c
+index 047b958123341..cd9e66a06596a 100644
+--- a/drivers/gpu/drm/vboxvideo/vbox_drv.c
++++ b/drivers/gpu/drm/vboxvideo/vbox_drv.c
+@@ -182,7 +182,7 @@ DEFINE_DRM_GEM_FOPS(vbox_fops);
+ 
+ static const struct drm_driver driver = {
+ 	.driver_features =
+-	    DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC,
++	    DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC | DRIVER_CURSOR_HOTSPOT,
+ 
+ 	.fops = &vbox_fops,
+ 	.name = DRIVER_NAME,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
+index 4334c76084084..f8e9abe647b92 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
+@@ -177,7 +177,7 @@ static const struct drm_driver driver = {
+ 	 * out via drm_device::driver_features:
+ 	 */
+ 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC |
+-			   DRIVER_SYNCOBJ | DRIVER_SYNCOBJ_TIMELINE,
++			   DRIVER_SYNCOBJ | DRIVER_SYNCOBJ_TIMELINE | DRIVER_CURSOR_HOTSPOT,
+ 	.open = virtio_gpu_driver_open,
+ 	.postclose = virtio_gpu_driver_postclose,
+ 
+diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
+index a2e045f3a0004..a1ef657eba077 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
++++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
+@@ -79,6 +79,8 @@ static int virtio_gpu_plane_atomic_check(struct drm_plane *plane,
+ {
+ 	struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
+ 										 plane);
++	struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state,
++										 plane);
+ 	bool is_cursor = plane->type == DRM_PLANE_TYPE_CURSOR;
+ 	struct drm_crtc_state *crtc_state;
+ 	int ret;
+@@ -86,6 +88,14 @@ static int virtio_gpu_plane_atomic_check(struct drm_plane *plane,
+ 	if (!new_plane_state->fb || WARN_ON(!new_plane_state->crtc))
+ 		return 0;
+ 
++	/*
++	 * Ignore damage clips if the framebuffer attached to the plane's state
++	 * has changed since the last plane update (page-flip). In this case, a
++	 * full plane update should happen because uploads are done per-buffer.
++	 */
++	if (old_plane_state->fb != new_plane_state->fb)
++		new_plane_state->ignore_damage_clips = true;
++
+ 	crtc_state = drm_atomic_get_crtc_state(state,
+ 					       new_plane_state->crtc);
+ 	if (IS_ERR(crtc_state))
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 8b24ecf60e3ec..d3e308fdfd5be 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -1611,7 +1611,7 @@ static const struct file_operations vmwgfx_driver_fops = {
+ 
+ static const struct drm_driver driver = {
+ 	.driver_features =
+-	DRIVER_MODESET | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_GEM,
++	DRIVER_MODESET | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_GEM | DRIVER_CURSOR_HOTSPOT,
+ 	.ioctls = vmw_ioctls,
+ 	.num_ioctls = ARRAY_SIZE(vmw_ioctls),
+ 	.master_set = vmw_master_set,
+diff --git a/drivers/iio/adc/ad7091r-base.c b/drivers/iio/adc/ad7091r-base.c
+index 0e5d3d2e9c985..76002b91c86a4 100644
+--- a/drivers/iio/adc/ad7091r-base.c
++++ b/drivers/iio/adc/ad7091r-base.c
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include <linux/bitops.h>
++#include <linux/bitfield.h>
+ #include <linux/iio/events.h>
+ #include <linux/iio/iio.h>
+ #include <linux/interrupt.h>
+@@ -28,6 +29,7 @@
+ #define AD7091R_REG_RESULT_CONV_RESULT(x)   ((x) & 0xfff)
+ 
+ /* AD7091R_REG_CONF */
++#define AD7091R_REG_CONF_ALERT_EN   BIT(4)
+ #define AD7091R_REG_CONF_AUTO   BIT(8)
+ #define AD7091R_REG_CONF_CMD    BIT(10)
+ 
+@@ -49,6 +51,27 @@ struct ad7091r_state {
+ 	struct mutex lock; /*lock to prevent concurent reads */
+ };
+ 
++const struct iio_event_spec ad7091r_events[] = {
++	{
++		.type = IIO_EV_TYPE_THRESH,
++		.dir = IIO_EV_DIR_RISING,
++		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
++				 BIT(IIO_EV_INFO_ENABLE),
++	},
++	{
++		.type = IIO_EV_TYPE_THRESH,
++		.dir = IIO_EV_DIR_FALLING,
++		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
++				 BIT(IIO_EV_INFO_ENABLE),
++	},
++	{
++		.type = IIO_EV_TYPE_THRESH,
++		.dir = IIO_EV_DIR_EITHER,
++		.mask_separate = BIT(IIO_EV_INFO_HYSTERESIS),
++	},
++};
++EXPORT_SYMBOL_NS_GPL(ad7091r_events, IIO_AD7091R);
++
+ static int ad7091r_set_mode(struct ad7091r_state *st, enum ad7091r_mode mode)
+ {
+ 	int ret, conf;
+@@ -168,8 +191,142 @@ unlock:
+ 	return ret;
+ }
+ 
++static int ad7091r_read_event_config(struct iio_dev *indio_dev,
++				     const struct iio_chan_spec *chan,
++				     enum iio_event_type type,
++				     enum iio_event_direction dir)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++	int val, ret;
++
++	switch (dir) {
++	case IIO_EV_DIR_RISING:
++		ret = regmap_read(st->map,
++				  AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++				  &val);
++		if (ret)
++			return ret;
++		return val != AD7091R_HIGH_LIMIT;
++	case IIO_EV_DIR_FALLING:
++		ret = regmap_read(st->map,
++				  AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++				  &val);
++		if (ret)
++			return ret;
++		return val != AD7091R_LOW_LIMIT;
++	default:
++		return -EINVAL;
++	}
++}
++
++static int ad7091r_write_event_config(struct iio_dev *indio_dev,
++				      const struct iio_chan_spec *chan,
++				      enum iio_event_type type,
++				      enum iio_event_direction dir, int state)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++
++	if (state) {
++		return regmap_set_bits(st->map, AD7091R_REG_CONF,
++				       AD7091R_REG_CONF_ALERT_EN);
++	} else {
++		/*
++		 * Set thresholds either to 0 or to 2^12 - 1 as appropriate to
++		 * prevent alerts and thus disable event generation.
++		 */
++		switch (dir) {
++		case IIO_EV_DIR_RISING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++					    AD7091R_HIGH_LIMIT);
++		case IIO_EV_DIR_FALLING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++					    AD7091R_LOW_LIMIT);
++		default:
++			return -EINVAL;
++		}
++	}
++}
++
++static int ad7091r_read_event_value(struct iio_dev *indio_dev,
++				    const struct iio_chan_spec *chan,
++				    enum iio_event_type type,
++				    enum iio_event_direction dir,
++				    enum iio_event_info info, int *val, int *val2)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++	int ret;
++
++	switch (info) {
++	case IIO_EV_INFO_VALUE:
++		switch (dir) {
++		case IIO_EV_DIR_RISING:
++			ret = regmap_read(st->map,
++					  AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++					  val);
++			if (ret)
++				return ret;
++			return IIO_VAL_INT;
++		case IIO_EV_DIR_FALLING:
++			ret = regmap_read(st->map,
++					  AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++					  val);
++			if (ret)
++				return ret;
++			return IIO_VAL_INT;
++		default:
++			return -EINVAL;
++		}
++	case IIO_EV_INFO_HYSTERESIS:
++		ret = regmap_read(st->map,
++				  AD7091R_REG_CH_HYSTERESIS(chan->channel),
++				  val);
++		if (ret)
++			return ret;
++		return IIO_VAL_INT;
++	default:
++		return -EINVAL;
++	}
++}
++
++static int ad7091r_write_event_value(struct iio_dev *indio_dev,
++				     const struct iio_chan_spec *chan,
++				     enum iio_event_type type,
++				     enum iio_event_direction dir,
++				     enum iio_event_info info, int val, int val2)
++{
++	struct ad7091r_state *st = iio_priv(indio_dev);
++
++	switch (info) {
++	case IIO_EV_INFO_VALUE:
++		switch (dir) {
++		case IIO_EV_DIR_RISING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
++					    val);
++		case IIO_EV_DIR_FALLING:
++			return regmap_write(st->map,
++					    AD7091R_REG_CH_LOW_LIMIT(chan->channel),
++					    val);
++		default:
++			return -EINVAL;
++		}
++	case IIO_EV_INFO_HYSTERESIS:
++		return regmap_write(st->map,
++				    AD7091R_REG_CH_HYSTERESIS(chan->channel),
++				    val);
++	default:
++		return -EINVAL;
++	}
++}
++
+ static const struct iio_info ad7091r_info = {
+ 	.read_raw = ad7091r_read_raw,
++	.read_event_config = &ad7091r_read_event_config,
++	.write_event_config = &ad7091r_write_event_config,
++	.read_event_value = &ad7091r_read_event_value,
++	.write_event_value = &ad7091r_write_event_value,
+ };
+ 
+ static irqreturn_t ad7091r_event_handler(int irq, void *private)
+@@ -232,6 +389,11 @@ int ad7091r_probe(struct device *dev, const char *name,
+ 	iio_dev->channels = chip_info->channels;
+ 
+ 	if (irq) {
++		ret = regmap_set_bits(st->map, AD7091R_REG_CONF,
++				      AD7091R_REG_CONF_ALERT_EN);
++		if (ret)
++			return ret;
++
+ 		ret = devm_request_threaded_irq(dev, irq, NULL,
+ 				ad7091r_event_handler,
+ 				IRQF_TRIGGER_FALLING | IRQF_ONESHOT, name, iio_dev);
+@@ -243,7 +405,14 @@ int ad7091r_probe(struct device *dev, const char *name,
+ 	if (IS_ERR(st->vref)) {
+ 		if (PTR_ERR(st->vref) == -EPROBE_DEFER)
+ 			return -EPROBE_DEFER;
++
+ 		st->vref = NULL;
++		/* Enable internal vref */
++		ret = regmap_set_bits(st->map, AD7091R_REG_CONF,
++				      AD7091R_REG_CONF_INT_VREF);
++		if (ret)
++			return dev_err_probe(st->dev, ret,
++					     "Error on enable internal reference\n");
+ 	} else {
+ 		ret = regulator_enable(st->vref);
+ 		if (ret)
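Lacking a per-event enable bit, the new write_event_config() disables an event by parking its threshold at the rail, where no 12-bit sample can ever trip it, and read_event_config() reports a threshold still at the rail as disabled. A sketch of the idea; the alert() helper is illustrative, while the constants mirror AD7091R_HIGH_LIMIT and AD7091R_LOW_LIMIT:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HIGH_LIMIT 0xFFFu	/* 2^12 - 1: no sample can exceed it */
#define LOW_LIMIT  0x0u		/* no sample can fall below it       */

static uint16_t high = 0x800, low = 0x100;

static bool alert(uint16_t sample)
{
	return sample > high || sample < low;
}

int main(void)
{
	printf("%d\n", alert(0x900));	/* 1: above the high threshold */
	high = HIGH_LIMIT;		/* "disable" the rising event  */
	low = LOW_LIMIT;		/* "disable" the falling event */
	printf("%d\n", alert(0x900));	/* 0: thresholds at the rails  */
	return 0;
}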
+diff --git a/drivers/iio/adc/ad7091r-base.h b/drivers/iio/adc/ad7091r-base.h
+index 509748aef9b19..b9e1c8bf3440a 100644
+--- a/drivers/iio/adc/ad7091r-base.h
++++ b/drivers/iio/adc/ad7091r-base.h
+@@ -8,6 +8,12 @@
+ #ifndef __DRIVERS_IIO_ADC_AD7091R_BASE_H__
+ #define __DRIVERS_IIO_ADC_AD7091R_BASE_H__
+ 
++#define AD7091R_REG_CONF_INT_VREF	BIT(0)
++
++/* AD7091R_REG_CH_LIMIT */
++#define AD7091R_HIGH_LIMIT		0xFFF
++#define AD7091R_LOW_LIMIT		0x0
++
+ struct device;
+ struct ad7091r_state;
+ 
+@@ -17,6 +23,8 @@ struct ad7091r_chip_info {
+ 	unsigned int vref_mV;
+ };
+ 
++extern const struct iio_event_spec ad7091r_events[3];
++
+ extern const struct regmap_config ad7091r_regmap_config;
+ 
+ int ad7091r_probe(struct device *dev, const char *name,
+diff --git a/drivers/iio/adc/ad7091r5.c b/drivers/iio/adc/ad7091r5.c
+index 2f048527b7b78..dae98c95ebb87 100644
+--- a/drivers/iio/adc/ad7091r5.c
++++ b/drivers/iio/adc/ad7091r5.c
+@@ -12,26 +12,6 @@
+ 
+ #include "ad7091r-base.h"
+ 
+-static const struct iio_event_spec ad7091r5_events[] = {
+-	{
+-		.type = IIO_EV_TYPE_THRESH,
+-		.dir = IIO_EV_DIR_RISING,
+-		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
+-				 BIT(IIO_EV_INFO_ENABLE),
+-	},
+-	{
+-		.type = IIO_EV_TYPE_THRESH,
+-		.dir = IIO_EV_DIR_FALLING,
+-		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
+-				 BIT(IIO_EV_INFO_ENABLE),
+-	},
+-	{
+-		.type = IIO_EV_TYPE_THRESH,
+-		.dir = IIO_EV_DIR_EITHER,
+-		.mask_separate = BIT(IIO_EV_INFO_HYSTERESIS),
+-	},
+-};
+-
+ #define AD7091R_CHANNEL(idx, bits, ev, num_ev) { \
+ 	.type = IIO_VOLTAGE, \
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+@@ -44,10 +24,10 @@ static const struct iio_event_spec ad7091r5_events[] = {
+ 	.scan_type.realbits = bits, \
+ }
+ static const struct iio_chan_spec ad7091r5_channels_irq[] = {
+-	AD7091R_CHANNEL(0, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
+-	AD7091R_CHANNEL(1, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
+-	AD7091R_CHANNEL(2, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
+-	AD7091R_CHANNEL(3, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
++	AD7091R_CHANNEL(0, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
++	AD7091R_CHANNEL(1, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
++	AD7091R_CHANNEL(2, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
++	AD7091R_CHANNEL(3, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
+ };
+ 
+ static const struct iio_chan_spec ad7091r5_channels_noirq[] = {
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+index 28f3fdfe23a29..6975a71d740f6 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+@@ -487,9 +487,15 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
+ static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf,
+ 				      struct iosys_map *map)
+ {
+-	struct vb2_dma_sg_buf *buf = dbuf->priv;
++	struct vb2_dma_sg_buf *buf;
++	void *vaddr;
++
++	buf = dbuf->priv;
++	vaddr = vb2_dma_sg_vaddr(buf->vb, buf);
++	if (!vaddr)
++		return -EINVAL;
+ 
+-	iosys_map_set_vaddr(map, buf->vaddr);
++	iosys_map_set_vaddr(map, vaddr);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/i2c/imx290.c b/drivers/media/i2c/imx290.c
+index 29098612813cb..c6fea5837a19f 100644
+--- a/drivers/media/i2c/imx290.c
++++ b/drivers/media/i2c/imx290.c
+@@ -41,18 +41,18 @@
+ #define IMX290_WINMODE_720P				(1 << 4)
+ #define IMX290_WINMODE_CROP				(4 << 4)
+ #define IMX290_FR_FDG_SEL				CCI_REG8(0x3009)
+-#define IMX290_BLKLEVEL					CCI_REG16(0x300a)
++#define IMX290_BLKLEVEL					CCI_REG16_LE(0x300a)
+ #define IMX290_GAIN					CCI_REG8(0x3014)
+-#define IMX290_VMAX					CCI_REG24(0x3018)
++#define IMX290_VMAX					CCI_REG24_LE(0x3018)
+ #define IMX290_VMAX_MAX					0x3ffff
+-#define IMX290_HMAX					CCI_REG16(0x301c)
++#define IMX290_HMAX					CCI_REG16_LE(0x301c)
+ #define IMX290_HMAX_MAX					0xffff
+-#define IMX290_SHS1					CCI_REG24(0x3020)
++#define IMX290_SHS1					CCI_REG24_LE(0x3020)
+ #define IMX290_WINWV_OB					CCI_REG8(0x303a)
+-#define IMX290_WINPV					CCI_REG16(0x303c)
+-#define IMX290_WINWV					CCI_REG16(0x303e)
+-#define IMX290_WINPH					CCI_REG16(0x3040)
+-#define IMX290_WINWH					CCI_REG16(0x3042)
++#define IMX290_WINPV					CCI_REG16_LE(0x303c)
++#define IMX290_WINWV					CCI_REG16_LE(0x303e)
++#define IMX290_WINPH					CCI_REG16_LE(0x3040)
++#define IMX290_WINWH					CCI_REG16_LE(0x3042)
+ #define IMX290_OUT_CTRL					CCI_REG8(0x3046)
+ #define IMX290_ODBIT_10BIT				(0 << 0)
+ #define IMX290_ODBIT_12BIT				(1 << 0)
+@@ -78,28 +78,28 @@
+ #define IMX290_ADBIT2					CCI_REG8(0x317c)
+ #define IMX290_ADBIT2_10BIT				0x12
+ #define IMX290_ADBIT2_12BIT				0x00
+-#define IMX290_CHIP_ID					CCI_REG16(0x319a)
++#define IMX290_CHIP_ID					CCI_REG16_LE(0x319a)
+ #define IMX290_ADBIT3					CCI_REG8(0x31ec)
+ #define IMX290_ADBIT3_10BIT				0x37
+ #define IMX290_ADBIT3_12BIT				0x0e
+ #define IMX290_REPETITION				CCI_REG8(0x3405)
+ #define IMX290_PHY_LANE_NUM				CCI_REG8(0x3407)
+ #define IMX290_OPB_SIZE_V				CCI_REG8(0x3414)
+-#define IMX290_Y_OUT_SIZE				CCI_REG16(0x3418)
+-#define IMX290_CSI_DT_FMT				CCI_REG16(0x3441)
++#define IMX290_Y_OUT_SIZE				CCI_REG16_LE(0x3418)
++#define IMX290_CSI_DT_FMT				CCI_REG16_LE(0x3441)
+ #define IMX290_CSI_DT_FMT_RAW10				0x0a0a
+ #define IMX290_CSI_DT_FMT_RAW12				0x0c0c
+ #define IMX290_CSI_LANE_MODE				CCI_REG8(0x3443)
+-#define IMX290_EXTCK_FREQ				CCI_REG16(0x3444)
+-#define IMX290_TCLKPOST					CCI_REG16(0x3446)
+-#define IMX290_THSZERO					CCI_REG16(0x3448)
+-#define IMX290_THSPREPARE				CCI_REG16(0x344a)
+-#define IMX290_TCLKTRAIL				CCI_REG16(0x344c)
+-#define IMX290_THSTRAIL					CCI_REG16(0x344e)
+-#define IMX290_TCLKZERO					CCI_REG16(0x3450)
+-#define IMX290_TCLKPREPARE				CCI_REG16(0x3452)
+-#define IMX290_TLPX					CCI_REG16(0x3454)
+-#define IMX290_X_OUT_SIZE				CCI_REG16(0x3472)
++#define IMX290_EXTCK_FREQ				CCI_REG16_LE(0x3444)
++#define IMX290_TCLKPOST					CCI_REG16_LE(0x3446)
++#define IMX290_THSZERO					CCI_REG16_LE(0x3448)
++#define IMX290_THSPREPARE				CCI_REG16_LE(0x344a)
++#define IMX290_TCLKTRAIL				CCI_REG16_LE(0x344c)
++#define IMX290_THSTRAIL					CCI_REG16_LE(0x344e)
++#define IMX290_TCLKZERO					CCI_REG16_LE(0x3450)
++#define IMX290_TCLKPREPARE				CCI_REG16_LE(0x3452)
++#define IMX290_TLPX					CCI_REG16_LE(0x3454)
++#define IMX290_X_OUT_SIZE				CCI_REG16_LE(0x3472)
+ #define IMX290_INCKSEL7					CCI_REG8(0x3480)
+ 
+ #define IMX290_PGCTRL_REGEN				BIT(0)
+diff --git a/drivers/media/i2c/imx355.c b/drivers/media/i2c/imx355.c
+index 9c58c1a80cba3..81bb61407bdce 100644
+--- a/drivers/media/i2c/imx355.c
++++ b/drivers/media/i2c/imx355.c
+@@ -1748,10 +1748,6 @@ static int imx355_probe(struct i2c_client *client)
+ 		goto error_handler_free;
+ 	}
+ 
+-	ret = v4l2_async_register_subdev_sensor(&imx355->sd);
+-	if (ret < 0)
+-		goto error_media_entity;
+-
+ 	/*
+ 	 * Device is already turned on by i2c-core with ACPI domain PM.
+ 	 * Enable runtime PM and turn off the device.
+@@ -1760,9 +1756,15 @@ static int imx355_probe(struct i2c_client *client)
+ 	pm_runtime_enable(&client->dev);
+ 	pm_runtime_idle(&client->dev);
+ 
++	ret = v4l2_async_register_subdev_sensor(&imx355->sd);
++	if (ret < 0)
++		goto error_media_entity_runtime_pm;
++
+ 	return 0;
+ 
+-error_media_entity:
++error_media_entity_runtime_pm:
++	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ 	media_entity_cleanup(&imx355->sd.entity);
+ 
+ error_handler_free:
+diff --git a/drivers/media/i2c/ov01a10.c b/drivers/media/i2c/ov01a10.c
+index bbd5740d2280b..02de09edc3f96 100644
+--- a/drivers/media/i2c/ov01a10.c
++++ b/drivers/media/i2c/ov01a10.c
+@@ -859,6 +859,7 @@ static void ov01a10_remove(struct i2c_client *client)
+ 	v4l2_ctrl_handler_free(sd->ctrl_handler);
+ 
+ 	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ }
+ 
+ static int ov01a10_probe(struct i2c_client *client)
+@@ -905,17 +906,26 @@ static int ov01a10_probe(struct i2c_client *client)
+ 		goto err_media_entity_cleanup;
+ 	}
+ 
++	/*
++	 * Device is already turned on by i2c-core with ACPI domain PM.
++	 * Enable runtime PM and turn off the device.
++	 */
++	pm_runtime_set_active(&client->dev);
++	pm_runtime_enable(dev);
++	pm_runtime_idle(dev);
++
+ 	ret = v4l2_async_register_subdev_sensor(&ov01a10->sd);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to register subdev: %d\n", ret);
+-		goto err_media_entity_cleanup;
++		goto err_pm_disable;
+ 	}
+ 
+-	pm_runtime_enable(dev);
+-	pm_runtime_idle(dev);
+-
+ 	return 0;
+ 
++err_pm_disable:
++	pm_runtime_disable(dev);
++	pm_runtime_set_suspended(&client->dev);
++
+ err_media_entity_cleanup:
+ 	media_entity_cleanup(&ov01a10->sd.entity);
+ 
+diff --git a/drivers/media/i2c/ov13b10.c b/drivers/media/i2c/ov13b10.c
+index 970d2caeb3d62..2793e4675c6da 100644
+--- a/drivers/media/i2c/ov13b10.c
++++ b/drivers/media/i2c/ov13b10.c
+@@ -1556,24 +1556,27 @@ static int ov13b10_probe(struct i2c_client *client)
+ 		goto error_handler_free;
+ 	}
+ 
+-	ret = v4l2_async_register_subdev_sensor(&ov13b->sd);
+-	if (ret < 0)
+-		goto error_media_entity;
+ 
+ 	/*
+ 	 * Device is already turned on by i2c-core with ACPI domain PM.
+ 	 * Enable runtime PM and turn off the device.
+ 	 */
+-
+ 	/* Set the device's state to active if it's in D0 state. */
+ 	if (full_power)
+ 		pm_runtime_set_active(&client->dev);
+ 	pm_runtime_enable(&client->dev);
+ 	pm_runtime_idle(&client->dev);
+ 
++	ret = v4l2_async_register_subdev_sensor(&ov13b->sd);
++	if (ret < 0)
++		goto error_media_entity_runtime_pm;
++
+ 	return 0;
+ 
+-error_media_entity:
++error_media_entity_runtime_pm:
++	pm_runtime_disable(&client->dev);
++	if (full_power)
++		pm_runtime_set_suspended(&client->dev);
+ 	media_entity_cleanup(&ov13b->sd.entity);
+ 
+ error_handler_free:
+@@ -1596,6 +1599,7 @@ static void ov13b10_remove(struct i2c_client *client)
+ 	ov13b10_free_controls(ov13b);
+ 
+ 	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ }
+ 
+ static DEFINE_RUNTIME_DEV_PM_OPS(ov13b10_pm_ops, ov13b10_suspend,
+diff --git a/drivers/media/i2c/ov9734.c b/drivers/media/i2c/ov9734.c
+index ee33152996055..f1377c305bcef 100644
+--- a/drivers/media/i2c/ov9734.c
++++ b/drivers/media/i2c/ov9734.c
+@@ -894,6 +894,7 @@ static void ov9734_remove(struct i2c_client *client)
+ 	media_entity_cleanup(&sd->entity);
+ 	v4l2_ctrl_handler_free(sd->ctrl_handler);
+ 	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ 	mutex_destroy(&ov9734->mutex);
+ }
+ 
+@@ -939,13 +940,6 @@ static int ov9734_probe(struct i2c_client *client)
+ 		goto probe_error_v4l2_ctrl_handler_free;
+ 	}
+ 
+-	ret = v4l2_async_register_subdev_sensor(&ov9734->sd);
+-	if (ret < 0) {
+-		dev_err(&client->dev, "failed to register V4L2 subdev: %d",
+-			ret);
+-		goto probe_error_media_entity_cleanup;
+-	}
+-
+ 	/*
+ 	 * Device is already turned on by i2c-core with ACPI domain PM.
+ 	 * Enable runtime PM and turn off the device.
+@@ -954,9 +948,18 @@ static int ov9734_probe(struct i2c_client *client)
+ 	pm_runtime_enable(&client->dev);
+ 	pm_runtime_idle(&client->dev);
+ 
++	ret = v4l2_async_register_subdev_sensor(&ov9734->sd);
++	if (ret < 0) {
++		dev_err(&client->dev, "failed to register V4L2 subdev: %d",
++			ret);
++		goto probe_error_media_entity_cleanup_pm;
++	}
++
+ 	return 0;
+ 
+-probe_error_media_entity_cleanup:
++probe_error_media_entity_cleanup_pm:
++	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ 	media_entity_cleanup(&ov9734->sd.entity);
+ 
+ probe_error_v4l2_ctrl_handler_free:
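The imx355, ov01a10, ov13b10 and ov9734 hunks all apply the same reordering: runtime PM must be set up before v4l2_async_register_subdev_sensor(), since userspace may start using the subdev the moment it registers, and the late error path must now undo the PM setup as well. A stub sketch of the corrected sequence and its unwind; every function below is a stand-in, not the V4L2 API:

#include <stdio.h>

static int  register_subdev(void) { return -1; }	/* simulate failure */
static void pm_enable(void)       { printf("pm enabled\n"); }
static void pm_disable(void)      { printf("pm disabled\n"); }
static void entity_cleanup(void)  { printf("entity cleaned\n"); }

static int probe(void)
{
	int ret;

	pm_enable();			/* 1: device usable under runtime PM */

	ret = register_subdev();	/* 2: only now visible to userspace  */
	if (ret < 0)
		goto err_pm;		/* unwind in reverse order           */

	return 0;

err_pm:
	pm_disable();
	entity_cleanup();
	return ret;
}

int main(void) { return probe() ? 1 : 0; }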
+diff --git a/drivers/media/i2c/st-mipid02.c b/drivers/media/i2c/st-mipid02.c
+index fa27638edc072..dab14787116b6 100644
+--- a/drivers/media/i2c/st-mipid02.c
++++ b/drivers/media/i2c/st-mipid02.c
+@@ -770,6 +770,7 @@ static void mipid02_set_fmt_sink(struct v4l2_subdev *sd,
+ 				 struct v4l2_subdev_format *format)
+ {
+ 	struct mipid02_dev *bridge = to_mipid02_dev(sd);
++	struct v4l2_subdev_format source_fmt;
+ 	struct v4l2_mbus_framefmt *fmt;
+ 
+ 	format->format.code = get_fmt_code(format->format.code);
+@@ -781,8 +782,12 @@ static void mipid02_set_fmt_sink(struct v4l2_subdev *sd,
+ 
+ 	*fmt = format->format;
+ 
+-	/* Propagate the format change to the source pad */
+-	mipid02_set_fmt_source(sd, sd_state, format);
++	/*
++	 * Propagate the format change to the source pad, taking
++	 * care not to update the format pointer given back to user
++	 */
++	source_fmt = *format;
++	mipid02_set_fmt_source(sd, sd_state, &source_fmt);
+ }
+ 
+ static int mipid02_set_fmt(struct v4l2_subdev *sd,
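The st-mipid02 fix propagates the sink format through a local copy, so writes made while configuring the source pad cannot leak into the format struct handed back to the caller. A minimal illustration of the scratch-copy pattern:

#include <stdio.h>

struct fmt { int code; };

/* Downstream helper mutates its argument, like mipid02_set_fmt_source(). */
static void set_source(struct fmt *f)
{
	f->code = 99;
}

static void set_sink(struct fmt *user_fmt)
{
	struct fmt source_fmt = *user_fmt;	/* scratch copy */

	set_source(&source_fmt);		/* mutate the copy only */
	printf("user still sees %d\n", user_fmt->code);
}

int main(void)
{
	struct fmt f = { .code = 7 };
	set_sink(&f);	/* prints: user still sees 7 */
	return 0;
}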
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+index 60425c99a2b8b..c3456c700c07e 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+@@ -1021,13 +1021,13 @@ static void mtk_jpeg_dec_device_run(void *priv)
+ 	if (ret < 0)
+ 		goto dec_end;
+ 
+-	schedule_delayed_work(&jpeg->job_timeout_work,
+-			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
+-
+ 	mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
+ 	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, &dst_buf->vb2_buf, &fb))
+ 		goto dec_end;
+ 
++	schedule_delayed_work(&jpeg->job_timeout_work,
++			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
++
+ 	spin_lock_irqsave(&jpeg->hw_lock, flags);
+ 	mtk_jpeg_dec_reset(jpeg->reg_base);
+ 	mtk_jpeg_dec_set_config(jpeg->reg_base,
+@@ -1749,9 +1749,6 @@ retry_select:
+ 	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ 	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ 
+-	schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work,
+-			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
+-
+ 	mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
+ 	if (mtk_jpeg_set_dec_dst(ctx,
+ 				 &jpeg_src_buf->dec_param,
+@@ -1761,6 +1758,9 @@ retry_select:
+ 		goto setdst_end;
+ 	}
+ 
++	schedule_delayed_work(&comp_jpeg[hw_id]->job_timeout_work,
++			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
++
+ 	spin_lock_irqsave(&comp_jpeg[hw_id]->hw_lock, flags);
+ 	ctx->total_frame_num++;
+ 	mtk_jpeg_dec_reset(comp_jpeg[hw_id]->reg_base);
+diff --git a/drivers/media/v4l2-core/v4l2-cci.c b/drivers/media/v4l2-core/v4l2-cci.c
+index bc2dbec019b04..10005c80f43b5 100644
+--- a/drivers/media/v4l2-core/v4l2-cci.c
++++ b/drivers/media/v4l2-core/v4l2-cci.c
+@@ -18,6 +18,7 @@
+ 
+ int cci_read(struct regmap *map, u32 reg, u64 *val, int *err)
+ {
++	bool little_endian;
+ 	unsigned int len;
+ 	u8 buf[8];
+ 	int ret;
+@@ -25,8 +26,9 @@ int cci_read(struct regmap *map, u32 reg, u64 *val, int *err)
+ 	if (err && *err)
+ 		return *err;
+ 
+-	len = FIELD_GET(CCI_REG_WIDTH_MASK, reg);
+-	reg = FIELD_GET(CCI_REG_ADDR_MASK, reg);
++	little_endian = reg & CCI_REG_LE;
++	len = CCI_REG_WIDTH_BYTES(reg);
++	reg = CCI_REG_ADDR(reg);
+ 
+ 	ret = regmap_bulk_read(map, reg, buf, len);
+ 	if (ret) {
+@@ -40,16 +42,28 @@ int cci_read(struct regmap *map, u32 reg, u64 *val, int *err)
+ 		*val = buf[0];
+ 		break;
+ 	case 2:
+-		*val = get_unaligned_be16(buf);
++		if (little_endian)
++			*val = get_unaligned_le16(buf);
++		else
++			*val = get_unaligned_be16(buf);
+ 		break;
+ 	case 3:
+-		*val = get_unaligned_be24(buf);
++		if (little_endian)
++			*val = get_unaligned_le24(buf);
++		else
++			*val = get_unaligned_be24(buf);
+ 		break;
+ 	case 4:
+-		*val = get_unaligned_be32(buf);
++		if (little_endian)
++			*val = get_unaligned_le32(buf);
++		else
++			*val = get_unaligned_be32(buf);
+ 		break;
+ 	case 8:
+-		*val = get_unaligned_be64(buf);
++		if (little_endian)
++			*val = get_unaligned_le64(buf);
++		else
++			*val = get_unaligned_be64(buf);
+ 		break;
+ 	default:
+ 		dev_err(regmap_get_device(map), "Error invalid reg-width %u for reg 0x%04x\n",
+@@ -68,6 +82,7 @@ EXPORT_SYMBOL_GPL(cci_read);
+ 
+ int cci_write(struct regmap *map, u32 reg, u64 val, int *err)
+ {
++	bool little_endian;
+ 	unsigned int len;
+ 	u8 buf[8];
+ 	int ret;
+@@ -75,24 +90,37 @@ int cci_write(struct regmap *map, u32 reg, u64 val, int *err)
+ 	if (err && *err)
+ 		return *err;
+ 
+-	len = FIELD_GET(CCI_REG_WIDTH_MASK, reg);
+-	reg = FIELD_GET(CCI_REG_ADDR_MASK, reg);
++	little_endian = reg & CCI_REG_LE;
++	len = CCI_REG_WIDTH_BYTES(reg);
++	reg = CCI_REG_ADDR(reg);
+ 
+ 	switch (len) {
+ 	case 1:
+ 		buf[0] = val;
+ 		break;
+ 	case 2:
+-		put_unaligned_be16(val, buf);
++		if (little_endian)
++			put_unaligned_le16(val, buf);
++		else
++			put_unaligned_be16(val, buf);
+ 		break;
+ 	case 3:
+-		put_unaligned_be24(val, buf);
++		if (little_endian)
++			put_unaligned_le24(val, buf);
++		else
++			put_unaligned_be24(val, buf);
+ 		break;
+ 	case 4:
+-		put_unaligned_be32(val, buf);
++		if (little_endian)
++			put_unaligned_le32(val, buf);
++		else
++			put_unaligned_be32(val, buf);
+ 		break;
+ 	case 8:
+-		put_unaligned_be64(val, buf);
++		if (little_endian)
++			put_unaligned_le64(val, buf);
++		else
++			put_unaligned_be64(val, buf);
+ 		break;
+ 	default:
+ 		dev_err(regmap_get_device(map), "Error invalid reg-width %u for reg 0x%04x\n",
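The v4l2-cci change extends the register encoding so one u32 carries address, width, and now a little-endian flag, with the byte order picked at decode time. A standalone sketch of the decode side; the bit layout below is illustrative (the real macros are CCI_REG_ADDR(), CCI_REG_WIDTH_BYTES() and CCI_REG_LE in the kernel headers):

#include <stdint.h>
#include <stdio.h>

/* Illustrative packing: low 16 bits = address, bits 16-19 = width in
 * bytes, bit 20 = little-endian flag. */
#define REG_ADDR(r)	((r) & 0xffff)
#define REG_WIDTH(r)	(((r) >> 16) & 0xf)
#define REG_LE		(1u << 20)

static uint64_t decode(uint32_t reg, const uint8_t *buf)
{
	unsigned int len = REG_WIDTH(reg);
	int le = reg & REG_LE;
	uint64_t val = 0;

	/* Equivalent of get_unaligned_le{16,24,32,64}() vs the _be
	 * variants, collapsed into one loop. */
	for (unsigned int i = 0; i < len; i++)
		val |= (uint64_t)buf[i] << 8 * (le ? i : len - 1 - i);
	return val;
}

int main(void)
{
	uint8_t buf[2] = { 0x12, 0x34 };
	uint32_t reg_be = (2u << 16) | 0x3000;
	uint32_t reg_le = reg_be | REG_LE;

	printf("BE: 0x%llx LE: 0x%llx\n",
	       (unsigned long long)decode(reg_be, buf),
	       (unsigned long long)decode(reg_le, buf));
	return 0;
}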
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 134c36edb6cf7..32d49100dff51 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -400,6 +400,10 @@ struct mmc_blk_ioc_data {
+ 	struct mmc_ioc_cmd ic;
+ 	unsigned char *buf;
+ 	u64 buf_bytes;
++	unsigned int flags;
++#define MMC_BLK_IOC_DROP	BIT(0)	/* drop this mrq */
++#define MMC_BLK_IOC_SBC	BIT(1)	/* use mrq.sbc */
++
+ 	struct mmc_rpmb_data *rpmb;
+ };
+ 
+@@ -465,7 +469,7 @@ static int mmc_blk_ioctl_copy_to_user(struct mmc_ioc_cmd __user *ic_ptr,
+ }
+ 
+ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+-			       struct mmc_blk_ioc_data *idata)
++			       struct mmc_blk_ioc_data **idatas, int i)
+ {
+ 	struct mmc_command cmd = {}, sbc = {};
+ 	struct mmc_data data = {};
+@@ -475,10 +479,18 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	unsigned int busy_timeout_ms;
+ 	int err;
+ 	unsigned int target_part;
++	struct mmc_blk_ioc_data *idata = idatas[i];
++	struct mmc_blk_ioc_data *prev_idata = NULL;
+ 
+ 	if (!card || !md || !idata)
+ 		return -EINVAL;
+ 
++	if (idata->flags & MMC_BLK_IOC_DROP)
++		return 0;
++
++	if (idata->flags & MMC_BLK_IOC_SBC)
++		prev_idata = idatas[i - 1];
++
+ 	/*
+ 	 * The RPMB accesses come in from the character device, so we
+ 	 * need to target these explicitly. Else we just target the
+@@ -532,7 +544,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 			return err;
+ 	}
+ 
+-	if (idata->rpmb) {
++	if (idata->rpmb || prev_idata) {
+ 		sbc.opcode = MMC_SET_BLOCK_COUNT;
+ 		/*
+ 		 * We don't do any blockcount validation because the max size
+@@ -540,6 +552,8 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 		 * 'Reliable Write' bit here.
+ 		 */
+ 		sbc.arg = data.blocks | (idata->ic.write_flag & BIT(31));
++		if (prev_idata)
++			sbc.arg = prev_idata->ic.arg;
+ 		sbc.flags = MMC_RSP_R1 | MMC_CMD_AC;
+ 		mrq.sbc = &sbc;
+ 	}
+@@ -557,6 +571,15 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	mmc_wait_for_req(card->host, &mrq);
+ 	memcpy(&idata->ic.response, cmd.resp, sizeof(cmd.resp));
+ 
++	if (prev_idata) {
++		memcpy(&prev_idata->ic.response, sbc.resp, sizeof(sbc.resp));
++		if (sbc.error) {
++			dev_err(mmc_dev(card->host), "%s: sbc error %d\n",
++							__func__, sbc.error);
++			return sbc.error;
++		}
++	}
++
+ 	if (cmd.error) {
+ 		dev_err(mmc_dev(card->host), "%s: cmd error %d\n",
+ 						__func__, cmd.error);
+@@ -1034,6 +1057,20 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
+ 	md->reset_done &= ~type;
+ }
+ 
++static void mmc_blk_check_sbc(struct mmc_queue_req *mq_rq)
++{
++	struct mmc_blk_ioc_data **idata = mq_rq->drv_op_data;
++	int i;
++
++	for (i = 1; i < mq_rq->ioc_count; i++) {
++		if (idata[i - 1]->ic.opcode == MMC_SET_BLOCK_COUNT &&
++		    mmc_op_multi(idata[i]->ic.opcode)) {
++			idata[i - 1]->flags |= MMC_BLK_IOC_DROP;
++			idata[i]->flags |= MMC_BLK_IOC_SBC;
++		}
++	}
++}
++
+ /*
+  * The non-block commands come back from the block layer after it queued it and
+  * processed it with all other requests and then they get issued in this
+@@ -1061,11 +1098,14 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+ 			if (ret)
+ 				break;
+ 		}
++
++		mmc_blk_check_sbc(mq_rq);
++
+ 		fallthrough;
+ 	case MMC_DRV_OP_IOCTL_RPMB:
+ 		idata = mq_rq->drv_op_data;
+ 		for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
+-			ret = __mmc_blk_ioctl_cmd(card, md, idata[i]);
++			ret = __mmc_blk_ioctl_cmd(card, md, idata, i);
+ 			if (ret)
+ 				break;
+ 		}
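The mmc_blk_check_sbc() pre-scan is the heart of this block.c change: a user-space CMD23 (SET_BLOCK_COUNT) is dropped as a standalone request and instead folded into the multi-block command that follows it as mrq.sbc, so the controller sees one atomic sequence. The marking pass in isolation (opcodes are the standard SD/MMC ones; flag values are illustrative):

#include <stdio.h>

#define MMC_SET_BLOCK_COUNT		23
#define MMC_READ_MULTIPLE_BLOCK		18
#define MMC_WRITE_MULTIPLE_BLOCK	25

#define IOC_DROP	(1u << 0)
#define IOC_SBC		(1u << 1)

struct ioc { unsigned int opcode; unsigned int flags; };

static int op_multi(unsigned int op)
{
	return op == MMC_READ_MULTIPLE_BLOCK ||
	       op == MMC_WRITE_MULTIPLE_BLOCK;
}

/* Pair each CMD23 with the multi-block command right after it: the
 * CMD23 request itself is dropped and the follower carries it. */
static void check_sbc(struct ioc *idata, int count)
{
	for (int i = 1; i < count; i++) {
		if (idata[i - 1].opcode == MMC_SET_BLOCK_COUNT &&
		    op_multi(idata[i].opcode)) {
			idata[i - 1].flags |= IOC_DROP;
			idata[i].flags |= IOC_SBC;
		}
	}
}

int main(void)
{
	struct ioc cmds[] = {
		{ MMC_SET_BLOCK_COUNT, 0 },
		{ MMC_WRITE_MULTIPLE_BLOCK, 0 },
		{ 13, 0 },	/* unrelated command stays untouched */
	};

	check_sbc(cmds, 3);
	for (int i = 0; i < 3; i++)
		printf("opcode %u flags %#x\n", cmds[i].opcode, cmds[i].flags);
	return 0;
}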
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index cc333ad67cac8..2a99ffb61f8c0 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -15,7 +15,7 @@
+ #include <linux/slab.h>
+ #include <linux/module.h>
+ #include <linux/bio.h>
+-#include <linux/dma-mapping.h>
++#include <linux/dma-direction.h>
+ #include <linux/crc7.h>
+ #include <linux/crc-itu-t.h>
+ #include <linux/scatterlist.h>
+@@ -119,19 +119,14 @@ struct mmc_spi_host {
+ 	struct spi_transfer	status;
+ 	struct spi_message	readback;
+ 
+-	/* underlying DMA-aware controller, or null */
+-	struct device		*dma_dev;
+-
+ 	/* buffer used for commands and for message "overhead" */
+ 	struct scratch		*data;
+-	dma_addr_t		data_dma;
+ 
+ 	/* Specs say to write ones most of the time, even when the card
+ 	 * has no need to read its input data; and many cards won't care.
+ 	 * This is our source of those ones.
+ 	 */
+ 	void			*ones;
+-	dma_addr_t		ones_dma;
+ };
+ 
+ 
+@@ -147,11 +142,8 @@ static inline int mmc_cs_off(struct mmc_spi_host *host)
+ 	return spi_setup(host->spi);
+ }
+ 
+-static int
+-mmc_spi_readbytes(struct mmc_spi_host *host, unsigned len)
++static int mmc_spi_readbytes(struct mmc_spi_host *host, unsigned int len)
+ {
+-	int status;
+-
+ 	if (len > sizeof(*host->data)) {
+ 		WARN_ON(1);
+ 		return -EIO;
+@@ -159,19 +151,7 @@ mmc_spi_readbytes(struct mmc_spi_host *host, unsigned len)
+ 
+ 	host->status.len = len;
+ 
+-	if (host->dma_dev)
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_FROM_DEVICE);
+-
+-	status = spi_sync_locked(host->spi, &host->readback);
+-
+-	if (host->dma_dev)
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_FROM_DEVICE);
+-
+-	return status;
++	return spi_sync_locked(host->spi, &host->readback);
+ }
+ 
+ static int mmc_spi_skip(struct mmc_spi_host *host, unsigned long timeout,
+@@ -506,23 +486,11 @@ mmc_spi_command_send(struct mmc_spi_host *host,
+ 	t = &host->t;
+ 	memset(t, 0, sizeof(*t));
+ 	t->tx_buf = t->rx_buf = data->status;
+-	t->tx_dma = t->rx_dma = host->data_dma;
+ 	t->len = cp - data->status;
+ 	t->cs_change = 1;
+ 	spi_message_add_tail(t, &host->m);
+ 
+-	if (host->dma_dev) {
+-		host->m.is_dma_mapped = 1;
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_BIDIRECTIONAL);
+-	}
+ 	status = spi_sync_locked(host->spi, &host->m);
+-
+-	if (host->dma_dev)
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*host->data),
+-				DMA_BIDIRECTIONAL);
+ 	if (status < 0) {
+ 		dev_dbg(&host->spi->dev, "  ... write returned %d\n", status);
+ 		cmd->error = status;
+@@ -540,9 +508,6 @@ mmc_spi_command_send(struct mmc_spi_host *host,
+  * We always provide TX data for data and CRC.  The MMC/SD protocol
+  * requires us to write ones; but Linux defaults to writing zeroes;
+  * so we explicitly initialize it to all ones on RX paths.
+- *
+- * We also handle DMA mapping, so the underlying SPI controller does
+- * not need to (re)do it for each message.
+  */
+ static void
+ mmc_spi_setup_data_message(
+@@ -552,11 +517,8 @@ mmc_spi_setup_data_message(
+ {
+ 	struct spi_transfer	*t;
+ 	struct scratch		*scratch = host->data;
+-	dma_addr_t		dma = host->data_dma;
+ 
+ 	spi_message_init(&host->m);
+-	if (dma)
+-		host->m.is_dma_mapped = 1;
+ 
+ 	/* for reads, readblock() skips 0xff bytes before finding
+ 	 * the token; for writes, this transfer issues that token.
+@@ -570,8 +532,6 @@ mmc_spi_setup_data_message(
+ 		else
+ 			scratch->data_token = SPI_TOKEN_SINGLE;
+ 		t->tx_buf = &scratch->data_token;
+-		if (dma)
+-			t->tx_dma = dma + offsetof(struct scratch, data_token);
+ 		spi_message_add_tail(t, &host->m);
+ 	}
+ 
+@@ -581,7 +541,6 @@ mmc_spi_setup_data_message(
+ 	t = &host->t;
+ 	memset(t, 0, sizeof(*t));
+ 	t->tx_buf = host->ones;
+-	t->tx_dma = host->ones_dma;
+ 	/* length and actual buffer info are written later */
+ 	spi_message_add_tail(t, &host->m);
+ 
+@@ -591,14 +550,9 @@ mmc_spi_setup_data_message(
+ 	if (direction == DMA_TO_DEVICE) {
+ 		/* the actual CRC may get written later */
+ 		t->tx_buf = &scratch->crc_val;
+-		if (dma)
+-			t->tx_dma = dma + offsetof(struct scratch, crc_val);
+ 	} else {
+ 		t->tx_buf = host->ones;
+-		t->tx_dma = host->ones_dma;
+ 		t->rx_buf = &scratch->crc_val;
+-		if (dma)
+-			t->rx_dma = dma + offsetof(struct scratch, crc_val);
+ 	}
+ 	spi_message_add_tail(t, &host->m);
+ 
+@@ -621,10 +575,7 @@ mmc_spi_setup_data_message(
+ 		memset(t, 0, sizeof(*t));
+ 		t->len = (direction == DMA_TO_DEVICE) ? sizeof(scratch->status) : 1;
+ 		t->tx_buf = host->ones;
+-		t->tx_dma = host->ones_dma;
+ 		t->rx_buf = scratch->status;
+-		if (dma)
+-			t->rx_dma = dma + offsetof(struct scratch, status);
+ 		t->cs_change = 1;
+ 		spi_message_add_tail(t, &host->m);
+ 	}
+@@ -653,23 +604,13 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 
+ 	if (host->mmc->use_spi_crc)
+ 		scratch->crc_val = cpu_to_be16(crc_itu_t(0, t->tx_buf, t->len));
+-	if (host->dma_dev)
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+ 
+ 	status = spi_sync_locked(spi, &host->m);
+-
+ 	if (status != 0) {
+ 		dev_dbg(&spi->dev, "write error (%d)\n", status);
+ 		return status;
+ 	}
+ 
+-	if (host->dma_dev)
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+-
+ 	/*
+ 	 * Get the transmission data-response reply.  It must follow
+ 	 * immediately after the data block we transferred.  This reply
+@@ -718,8 +659,6 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 
+ 	t->tx_buf += t->len;
+-	if (host->dma_dev)
+-		t->tx_dma += t->len;
+ 
+ 	/* Return when not busy.  If we didn't collect that status yet,
+ 	 * we'll need some more I/O.
+@@ -783,30 +722,12 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 	leftover = status << 1;
+ 
+-	if (host->dma_dev) {
+-		dma_sync_single_for_device(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+-		dma_sync_single_for_device(host->dma_dev,
+-				t->rx_dma, t->len,
+-				DMA_FROM_DEVICE);
+-	}
+-
+ 	status = spi_sync_locked(spi, &host->m);
+ 	if (status < 0) {
+ 		dev_dbg(&spi->dev, "read error %d\n", status);
+ 		return status;
+ 	}
+ 
+-	if (host->dma_dev) {
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				host->data_dma, sizeof(*scratch),
+-				DMA_BIDIRECTIONAL);
+-		dma_sync_single_for_cpu(host->dma_dev,
+-				t->rx_dma, t->len,
+-				DMA_FROM_DEVICE);
+-	}
+-
+ 	if (bitshift) {
+ 		/* Walk through the data and the crc and do
+ 		 * all the magic to get byte-aligned data.
+@@ -841,8 +762,6 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 
+ 	t->rx_buf += t->len;
+-	if (host->dma_dev)
+-		t->rx_dma += t->len;
+ 
+ 	return 0;
+ }
+@@ -857,7 +776,6 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 		struct mmc_data *data, u32 blk_size)
+ {
+ 	struct spi_device	*spi = host->spi;
+-	struct device		*dma_dev = host->dma_dev;
+ 	struct spi_transfer	*t;
+ 	enum dma_data_direction	direction = mmc_get_dma_dir(data);
+ 	struct scatterlist	*sg;
+@@ -884,31 +802,8 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 	 */
+ 	for_each_sg(data->sg, sg, data->sg_len, n_sg) {
+ 		int			status = 0;
+-		dma_addr_t		dma_addr = 0;
+ 		void			*kmap_addr;
+ 		unsigned		length = sg->length;
+-		enum dma_data_direction	dir = direction;
+-
+-		/* set up dma mapping for controller drivers that might
+-		 * use DMA ... though they may fall back to PIO
+-		 */
+-		if (dma_dev) {
+-			/* never invalidate whole *shared* pages ... */
+-			if ((sg->offset != 0 || length != PAGE_SIZE)
+-					&& dir == DMA_FROM_DEVICE)
+-				dir = DMA_BIDIRECTIONAL;
+-
+-			dma_addr = dma_map_page(dma_dev, sg_page(sg), 0,
+-						PAGE_SIZE, dir);
+-			if (dma_mapping_error(dma_dev, dma_addr)) {
+-				data->error = -EFAULT;
+-				break;
+-			}
+-			if (direction == DMA_TO_DEVICE)
+-				t->tx_dma = dma_addr + sg->offset;
+-			else
+-				t->rx_dma = dma_addr + sg->offset;
+-		}
+ 
+ 		/* allow pio too; we don't allow highmem */
+ 		kmap_addr = kmap(sg_page(sg));
+@@ -941,8 +836,6 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 		if (direction == DMA_FROM_DEVICE)
+ 			flush_dcache_page(sg_page(sg));
+ 		kunmap(sg_page(sg));
+-		if (dma_dev)
+-			dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir);
+ 
+ 		if (status < 0) {
+ 			data->error = status;
+@@ -977,21 +870,9 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
+ 		scratch->status[0] = SPI_TOKEN_STOP_TRAN;
+ 
+ 		host->early_status.tx_buf = host->early_status.rx_buf;
+-		host->early_status.tx_dma = host->early_status.rx_dma;
+ 		host->early_status.len = statlen;
+ 
+-		if (host->dma_dev)
+-			dma_sync_single_for_device(host->dma_dev,
+-					host->data_dma, sizeof(*scratch),
+-					DMA_BIDIRECTIONAL);
+-
+ 		tmp = spi_sync_locked(spi, &host->m);
+-
+-		if (host->dma_dev)
+-			dma_sync_single_for_cpu(host->dma_dev,
+-					host->data_dma, sizeof(*scratch),
+-					DMA_BIDIRECTIONAL);
+-
+ 		if (tmp < 0) {
+ 			if (!data->error)
+ 				data->error = tmp;
+@@ -1265,52 +1146,6 @@ mmc_spi_detect_irq(int irq, void *mmc)
+ 	return IRQ_HANDLED;
+ }
+ 
+-#ifdef CONFIG_HAS_DMA
+-static int mmc_spi_dma_alloc(struct mmc_spi_host *host)
+-{
+-	struct spi_device *spi = host->spi;
+-	struct device *dev;
+-
+-	if (!spi->master->dev.parent->dma_mask)
+-		return 0;
+-
+-	dev = spi->master->dev.parent;
+-
+-	host->ones_dma = dma_map_single(dev, host->ones, MMC_SPI_BLOCKSIZE,
+-					DMA_TO_DEVICE);
+-	if (dma_mapping_error(dev, host->ones_dma))
+-		return -ENOMEM;
+-
+-	host->data_dma = dma_map_single(dev, host->data, sizeof(*host->data),
+-					DMA_BIDIRECTIONAL);
+-	if (dma_mapping_error(dev, host->data_dma)) {
+-		dma_unmap_single(dev, host->ones_dma, MMC_SPI_BLOCKSIZE,
+-				 DMA_TO_DEVICE);
+-		return -ENOMEM;
+-	}
+-
+-	dma_sync_single_for_cpu(dev, host->data_dma, sizeof(*host->data),
+-				DMA_BIDIRECTIONAL);
+-
+-	host->dma_dev = dev;
+-	return 0;
+-}
+-
+-static void mmc_spi_dma_free(struct mmc_spi_host *host)
+-{
+-	if (!host->dma_dev)
+-		return;
+-
+-	dma_unmap_single(host->dma_dev, host->ones_dma, MMC_SPI_BLOCKSIZE,
+-			 DMA_TO_DEVICE);
+-	dma_unmap_single(host->dma_dev, host->data_dma,	sizeof(*host->data),
+-			 DMA_BIDIRECTIONAL);
+-}
+-#else
+-static inline int mmc_spi_dma_alloc(struct mmc_spi_host *host) { return 0; }
+-static inline void mmc_spi_dma_free(struct mmc_spi_host *host) {}
+-#endif
+-
+ static int mmc_spi_probe(struct spi_device *spi)
+ {
+ 	void			*ones;
+@@ -1402,24 +1237,17 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 			host->powerup_msecs = 250;
+ 	}
+ 
+-	/* preallocate dma buffers */
++	/* Preallocate buffers */
+ 	host->data = kmalloc(sizeof(*host->data), GFP_KERNEL);
+ 	if (!host->data)
+ 		goto fail_nobuf1;
+ 
+-	status = mmc_spi_dma_alloc(host);
+-	if (status)
+-		goto fail_dma;
+-
+ 	/* setup message for status/busy readback */
+ 	spi_message_init(&host->readback);
+-	host->readback.is_dma_mapped = (host->dma_dev != NULL);
+ 
+ 	spi_message_add_tail(&host->status, &host->readback);
+ 	host->status.tx_buf = host->ones;
+-	host->status.tx_dma = host->ones_dma;
+ 	host->status.rx_buf = &host->data->status;
+-	host->status.rx_dma = host->data_dma + offsetof(struct scratch, status);
+ 	host->status.cs_change = 1;
+ 
+ 	/* register card detect irq */
+@@ -1464,9 +1292,8 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 	if (!status)
+ 		has_ro = true;
+ 
+-	dev_info(&spi->dev, "SD/MMC host %s%s%s%s%s\n",
++	dev_info(&spi->dev, "SD/MMC host %s%s%s%s\n",
+ 			dev_name(&mmc->class_dev),
+-			host->dma_dev ? "" : ", no DMA",
+ 			has_ro ? "" : ", no WP",
+ 			(host->pdata && host->pdata->setpower)
+ 				? "" : ", no poweroff",
+@@ -1477,8 +1304,6 @@ static int mmc_spi_probe(struct spi_device *spi)
+ fail_gpiod_request:
+ 	mmc_remove_host(mmc);
+ fail_glue_init:
+-	mmc_spi_dma_free(host);
+-fail_dma:
+ 	kfree(host->data);
+ fail_nobuf1:
+ 	mmc_spi_put_pdata(spi);
+@@ -1500,7 +1325,6 @@ static void mmc_spi_remove(struct spi_device *spi)
+ 
+ 	mmc_remove_host(mmc);
+ 
+-	mmc_spi_dma_free(host);
+ 	kfree(host->data);
+ 	kfree(host->ones);
+ 
+diff --git a/drivers/mtd/maps/vmu-flash.c b/drivers/mtd/maps/vmu-flash.c
+index a7ec947a3ebb1..53019d313db71 100644
+--- a/drivers/mtd/maps/vmu-flash.c
++++ b/drivers/mtd/maps/vmu-flash.c
+@@ -719,7 +719,7 @@ static int vmu_can_unload(struct maple_device *mdev)
+ 	card = maple_get_drvdata(mdev);
+ 	for (x = 0; x < card->partitions; x++) {
+ 		mtd = &((card->mtd)[x]);
+-		if (mtd->usecount > 0)
++		if (kref_read(&mtd->refcnt))
+ 			return 0;
+ 	}
+ 	return 1;
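The vmu-flash one-liner swaps the removed mtd->usecount field for kref_read() on the embedded refcount. The busy-check shape it implements, in portable C11 (struct layout is illustrative):

#include <stdatomic.h>
#include <stdio.h>

struct mtd { atomic_int refcnt; };

/* In the spirit of kref_read(): any live reference on any partition
 * means the device cannot be unloaded. */
static int can_unload(struct mtd *parts, int n)
{
	for (int i = 0; i < n; i++)
		if (atomic_load(&parts[i].refcnt))
			return 0;
	return 1;
}

int main(void)
{
	struct mtd parts[2] = { { .refcnt = 0 }, { .refcnt = 1 } };

	printf("can unload: %d\n", can_unload(parts, 2));
	return 0;
}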
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index 9e24bedffd89a..bbdcfbe643f3f 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -1207,6 +1207,23 @@ static int nand_lp_exec_read_page_op(struct nand_chip *chip, unsigned int page,
+ 	return nand_exec_op(chip, &op);
+ }
+ 
++static void rawnand_cap_cont_reads(struct nand_chip *chip)
++{
++	struct nand_memory_organization *memorg;
++	unsigned int pages_per_lun, first_lun, last_lun;
++
++	memorg = nanddev_get_memorg(&chip->base);
++	pages_per_lun = memorg->pages_per_eraseblock * memorg->eraseblocks_per_lun;
++	first_lun = chip->cont_read.first_page / pages_per_lun;
++	last_lun = chip->cont_read.last_page / pages_per_lun;
++
++	/* Prevent sequential cache reads across LUN boundaries */
++	if (first_lun != last_lun)
++		chip->cont_read.pause_page = first_lun * pages_per_lun + pages_per_lun - 1;
++	else
++		chip->cont_read.pause_page = chip->cont_read.last_page;
++}
++
+ static int nand_lp_exec_cont_read_page_op(struct nand_chip *chip, unsigned int page,
+ 					  unsigned int offset_in_page, void *buf,
+ 					  unsigned int len, bool check_only)
+@@ -1225,7 +1242,7 @@ static int nand_lp_exec_cont_read_page_op(struct nand_chip *chip, unsigned int p
+ 		NAND_OP_DATA_IN(len, buf, 0),
+ 	};
+ 	struct nand_op_instr cont_instrs[] = {
+-		NAND_OP_CMD(page == chip->cont_read.last_page ?
++		NAND_OP_CMD(page == chip->cont_read.pause_page ?
+ 			    NAND_CMD_READCACHEEND : NAND_CMD_READCACHESEQ,
+ 			    NAND_COMMON_TIMING_NS(conf, tWB_max)),
+ 		NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tR_max),
+@@ -1262,16 +1279,29 @@ static int nand_lp_exec_cont_read_page_op(struct nand_chip *chip, unsigned int p
+ 	}
+ 
+ 	if (page == chip->cont_read.first_page)
+-		return nand_exec_op(chip, &start_op);
++		ret = nand_exec_op(chip, &start_op);
+ 	else
+-		return nand_exec_op(chip, &cont_op);
++		ret = nand_exec_op(chip, &cont_op);
++	if (ret)
++		return ret;
++
++	if (!chip->cont_read.ongoing)
++		return 0;
++
++	if (page == chip->cont_read.pause_page &&
++	    page != chip->cont_read.last_page) {
++		chip->cont_read.first_page = chip->cont_read.pause_page + 1;
++		rawnand_cap_cont_reads(chip);
++	} else if (page == chip->cont_read.last_page) {
++		chip->cont_read.ongoing = false;
++	}
++
++	return 0;
+ }
+ 
+ static bool rawnand_cont_read_ongoing(struct nand_chip *chip, unsigned int page)
+ {
+-	return chip->cont_read.ongoing &&
+-		page >= chip->cont_read.first_page &&
+-		page <= chip->cont_read.last_page;
++	return chip->cont_read.ongoing && page >= chip->cont_read.first_page;
+ }
+ 
+ /**
+@@ -3430,21 +3460,42 @@ static void rawnand_enable_cont_reads(struct nand_chip *chip, unsigned int page,
+ 				      u32 readlen, int col)
+ {
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
++	unsigned int end_page, end_col;
++
++	chip->cont_read.ongoing = false;
+ 
+ 	if (!chip->controller->supported_op.cont_read)
+ 		return;
+ 
+-	if ((col && col + readlen < (3 * mtd->writesize)) ||
+-	    (!col && readlen < (2 * mtd->writesize))) {
+-		chip->cont_read.ongoing = false;
++	end_page = DIV_ROUND_UP(col + readlen, mtd->writesize);
++	end_col = (col + readlen) % mtd->writesize;
++
++	if (col)
++		page++;
++
++	if (end_col && end_page)
++		end_page--;
++
++	if (page + 1 > end_page)
+ 		return;
+-	}
+ 
+-	chip->cont_read.ongoing = true;
+ 	chip->cont_read.first_page = page;
+-	if (col)
++	chip->cont_read.last_page = end_page;
++	chip->cont_read.ongoing = true;
++
++	rawnand_cap_cont_reads(chip);
++}
++
++static void rawnand_cont_read_skip_first_page(struct nand_chip *chip, unsigned int page)
++{
++	if (!chip->cont_read.ongoing || page != chip->cont_read.first_page)
++		return;
++
++	chip->cont_read.first_page++;
++	if (chip->cont_read.first_page == chip->cont_read.pause_page)
+ 		chip->cont_read.first_page++;
+-	chip->cont_read.last_page = page + ((readlen >> chip->page_shift) & chip->pagemask);
++	if (chip->cont_read.first_page >= chip->cont_read.last_page)
++		chip->cont_read.ongoing = false;
+ }
+ 
+ /**
+@@ -3621,6 +3672,8 @@ read_retry:
+ 			buf += bytes;
+ 			max_bitflips = max_t(unsigned int, max_bitflips,
+ 					     chip->pagecache.bitflips);
++
++			rawnand_cont_read_skip_first_page(chip, page);
+ 		}
+ 
+ 		readlen -= bytes;
+@@ -5125,6 +5178,14 @@ static void rawnand_late_check_supported_ops(struct nand_chip *chip)
+ 	/* The supported_op fields should not be set by individual drivers */
+ 	WARN_ON_ONCE(chip->controller->supported_op.cont_read);
+ 
++	/*
++	 * Too many devices do not support sequential cached reads with on-die
++	 * ECC correction enabled, so in this case refuse to perform the
++	 * automation.
++	 */
++	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_DIE)
++		return;
++
+ 	if (!nand_has_exec_op(chip))
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index e1f1e646cf480..22c8bfb5ed9dd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -10627,10 +10627,12 @@ int bnxt_half_open_nic(struct bnxt *bp)
+ 		netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
+ 		goto half_open_err;
+ 	}
++	bnxt_init_napi(bp);
+ 	set_bit(BNXT_STATE_HALF_OPEN, &bp->state);
+ 	rc = bnxt_init_nic(bp, true);
+ 	if (rc) {
+ 		clear_bit(BNXT_STATE_HALF_OPEN, &bp->state);
++		bnxt_del_napi(bp);
+ 		netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc);
+ 		goto half_open_err;
+ 	}
+@@ -10649,6 +10651,7 @@ half_open_err:
+ void bnxt_half_close_nic(struct bnxt *bp)
+ {
+ 	bnxt_hwrm_resource_free(bp, false, true);
++	bnxt_del_napi(bp);
+ 	bnxt_free_skbs(bp);
+ 	bnxt_free_mem(bp, true);
+ 	clear_bit(BNXT_STATE_HALF_OPEN, &bp->state);
+@@ -12298,6 +12301,11 @@ static int bnxt_fw_init_one_p1(struct bnxt *bp)
+ 
+ 	bp->fw_cap = 0;
+ 	rc = bnxt_hwrm_ver_get(bp);
++	/* FW may be unresponsive after FLR. FLR must complete within 100 msec
++	 * so wait before continuing with recovery.
++	 */
++	if (rc)
++		msleep(100);
+ 	bnxt_try_map_fw_health_reg(bp);
+ 	if (rc) {
+ 		rc = bnxt_try_recover_fw(bp);
+diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
+index df40c720e7b23..9aeff2b37a612 100644
+--- a/drivers/net/ethernet/engleder/tsnep_main.c
++++ b/drivers/net/ethernet/engleder/tsnep_main.c
+@@ -1485,7 +1485,7 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi,
+ 
+ 			xdp_prepare_buff(&xdp, page_address(entry->page),
+ 					 XDP_PACKET_HEADROOM + TSNEP_RX_INLINE_METADATA_SIZE,
+-					 length, false);
++					 length - ETH_FCS_LEN, false);
+ 
+ 			consume = tsnep_xdp_run_prog(rx, prog, &xdp,
+ 						     &xdp_status, tx_nq, tx);
+@@ -1568,7 +1568,7 @@ static int tsnep_rx_poll_zc(struct tsnep_rx *rx, struct napi_struct *napi,
+ 		prefetch(entry->xdp->data);
+ 		length = __le32_to_cpu(entry->desc_wb->properties) &
+ 			 TSNEP_DESC_LENGTH_MASK;
+-		xsk_buff_set_size(entry->xdp, length);
++		xsk_buff_set_size(entry->xdp, length - ETH_FCS_LEN);
+ 		xsk_buff_dma_sync_for_cpu(entry->xdp, rx->xsk_pool);
+ 
+ 		/* RX metadata with timestamps is in front of actual data,
+@@ -1762,6 +1762,19 @@ static void tsnep_rx_reopen_xsk(struct tsnep_rx *rx)
+ 			allocated--;
+ 		}
+ 	}
++
++	/* Set the need-wakeup flag immediately if the ring is not completely
++	 * filled; the first poll would be too late, as the need-wakeup signal
++	 * would be delayed indefinitely.
++	 */
++	if (xsk_uses_need_wakeup(rx->xsk_pool)) {
++		int desc_available = tsnep_rx_desc_available(rx);
++
++		if (desc_available)
++			xsk_set_rx_need_wakeup(rx->xsk_pool);
++		else
++			xsk_clear_rx_need_wakeup(rx->xsk_pool);
++	}
+ }
+ 
+ static bool tsnep_pending(struct tsnep_queue *queue)
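Two tsnep fixes sit in this hunk: the hardware length includes the 4-byte FCS and must be trimmed before XDP/XSK sees the frame, and the XSK need-wakeup flag has to be published at reopen time rather than waiting for the first poll. A sketch of the second rule (ring accounting is simplified to a size/filled pair):

#include <stdbool.h>
#include <stdio.h>

struct ring { int size, filled; bool need_wakeup; };

/* On reopen, derive the flag from how full the ring is right now:
 * any unfilled descriptor means user space must be told to wake the
 * driver, and waiting for the first poll would delay that signal. */
static void reopen_xsk(struct ring *rx)
{
	int desc_available = rx->size - rx->filled;

	rx->need_wakeup = desc_available != 0;
}

int main(void)
{
	struct ring rx = { .size = 256, .filled = 200 };

	reopen_xsk(&rx);
	printf("need_wakeup=%d\n", rx.need_wakeup);
	return 0;
}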
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index e08c7b572497d..c107680985e48 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2036,6 +2036,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
+ 
+ 		/* if any of the above changed restart the FEC */
+ 		if (status_change) {
++			netif_stop_queue(ndev);
+ 			napi_disable(&fep->napi);
+ 			netif_tx_lock_bh(ndev);
+ 			fec_restart(ndev);
+@@ -2045,6 +2046,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
+ 		}
+ 	} else {
+ 		if (fep->link) {
++			netif_stop_queue(ndev);
+ 			napi_disable(&fep->napi);
+ 			netif_tx_lock_bh(ndev);
+ 			fec_stop(ndev);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index d5519af346577..2bd7b29fb2516 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -3588,40 +3588,55 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
+ 	struct i40e_hmc_obj_rxq rx_ctx;
+ 	int err = 0;
+ 	bool ok;
+-	int ret;
+ 
+ 	bitmap_zero(ring->state, __I40E_RING_STATE_NBITS);
+ 
+ 	/* clear the context structure first */
+ 	memset(&rx_ctx, 0, sizeof(rx_ctx));
+ 
+-	if (ring->vsi->type == I40E_VSI_MAIN)
+-		xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
++	ring->rx_buf_len = vsi->rx_buf_len;
++
++	/* XDP RX-queue info only needed for RX rings exposed to XDP */
++	if (ring->vsi->type != I40E_VSI_MAIN)
++		goto skip;
++
++	if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
++		err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
++					 ring->queue_index,
++					 ring->q_vector->napi.napi_id,
++					 ring->rx_buf_len);
++		if (err)
++			return err;
++	}
+ 
+ 	ring->xsk_pool = i40e_xsk_pool(ring);
+ 	if (ring->xsk_pool) {
+-		ring->rx_buf_len =
+-		  xsk_pool_get_rx_frame_size(ring->xsk_pool);
+-		ret = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
++		xdp_rxq_info_unreg(&ring->xdp_rxq);
++		ring->rx_buf_len = xsk_pool_get_rx_frame_size(ring->xsk_pool);
++		err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
++					 ring->queue_index,
++					 ring->q_vector->napi.napi_id,
++					 ring->rx_buf_len);
++		if (err)
++			return err;
++		err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+ 						 MEM_TYPE_XSK_BUFF_POOL,
+ 						 NULL);
+-		if (ret)
+-			return ret;
++		if (err)
++			return err;
+ 		dev_info(&vsi->back->pdev->dev,
+ 			 "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
+ 			 ring->queue_index);
+ 
+ 	} else {
+-		ring->rx_buf_len = vsi->rx_buf_len;
+-		if (ring->vsi->type == I40E_VSI_MAIN) {
+-			ret = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+-							 MEM_TYPE_PAGE_SHARED,
+-							 NULL);
+-			if (ret)
+-				return ret;
+-		}
++		err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
++						 MEM_TYPE_PAGE_SHARED,
++						 NULL);
++		if (err)
++			return err;
+ 	}
+ 
++skip:
+ 	xdp_init_buff(&ring->xdp, i40e_rx_pg_size(ring) / 2, &ring->xdp_rxq);
+ 
+ 	rx_ctx.dbuff = DIV_ROUND_UP(ring->rx_buf_len,
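The i40e rework moves all xdp_rxq_info registration into the ring-configure path: register once with the default buffer length, and if an XSK pool with its own frame size is attached, unregister and re-register so the recorded frame size stays accurate, checking every return code along the way. A runnable sketch of that state machine; the rxq_info_* functions below are stand-ins for the kernel helpers, not their real signatures:

#include <stdbool.h>
#include <stdio.h>

static bool registered;
static unsigned int reg_frame_sz;

static int rxq_info_reg(unsigned int frame_sz)
{
	registered = true;
	reg_frame_sz = frame_sz;
	return 0;
}

static void rxq_info_unreg(void)
{
	registered = false;
}

static int configure_rx_ring(bool xsk_pool, unsigned int pool_frame_sz,
			     unsigned int default_len)
{
	int err;

	/* Register once; reconfiguration re-enters here with the
	 * registration still live. */
	if (!registered) {
		err = rxq_info_reg(default_len);
		if (err)
			return err;
	}

	/* XSK pool dictates its own frame size: re-register. */
	if (xsk_pool) {
		rxq_info_unreg();
		err = rxq_info_reg(pool_frame_sz);
		if (err)
			return err;
	}

	return 0;
}

int main(void)
{
	configure_rx_ring(false, 0, 2048);
	printf("frame_sz=%u\n", reg_frame_sz);
	configure_rx_ring(true, 4096, 2048);
	printf("frame_sz=%u\n", reg_frame_sz);
	return 0;
}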
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index dd410b15000f7..071ef309a3a42 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -1555,7 +1555,6 @@ void i40e_free_rx_resources(struct i40e_ring *rx_ring)
+ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
+ {
+ 	struct device *dev = rx_ring->dev;
+-	int err;
+ 
+ 	u64_stats_init(&rx_ring->syncp);
+ 
+@@ -1576,14 +1575,6 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
+ 	rx_ring->next_to_process = 0;
+ 	rx_ring->next_to_use = 0;
+ 
+-	/* XDP RX-queue info only needed for RX rings exposed to XDP */
+-	if (rx_ring->vsi->type == I40E_VSI_MAIN) {
+-		err = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
+-				       rx_ring->queue_index, rx_ring->q_vector->napi.napi_id);
+-		if (err < 0)
+-			return err;
+-	}
+-
+ 	rx_ring->xdp_prog = rx_ring->vsi->xdp_prog;
+ 
+ 	rx_ring->rx_bi =
+@@ -2099,7 +2090,8 @@ static void i40e_put_rx_buffer(struct i40e_ring *rx_ring,
+ static void i40e_process_rx_buffs(struct i40e_ring *rx_ring, int xdp_res,
+ 				  struct xdp_buff *xdp)
+ {
+-	u32 next = rx_ring->next_to_clean;
++	u32 nr_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
++	u32 next = rx_ring->next_to_clean, i = 0;
+ 	struct i40e_rx_buffer *rx_buffer;
+ 
+ 	xdp->flags = 0;
+@@ -2112,10 +2104,10 @@ static void i40e_process_rx_buffs(struct i40e_ring *rx_ring, int xdp_res,
+ 		if (!rx_buffer->page)
+ 			continue;
+ 
+-		if (xdp_res == I40E_XDP_CONSUMED)
+-			rx_buffer->pagecnt_bias++;
+-		else
++		if (xdp_res != I40E_XDP_CONSUMED)
+ 			i40e_rx_buffer_flip(rx_buffer, xdp->frame_sz);
++		else if (i++ <= nr_frags)
++			rx_buffer->pagecnt_bias++;
+ 
+ 		/* EOP buffer will be put in i40e_clean_rx_irq() */
+ 		if (next == rx_ring->next_to_process)
+@@ -2129,20 +2121,20 @@ static void i40e_process_rx_buffs(struct i40e_ring *rx_ring, int xdp_res,
+  * i40e_construct_skb - Allocate skb and populate it
+  * @rx_ring: rx descriptor ring to transact packets on
+  * @xdp: xdp_buff pointing to the data
+- * @nr_frags: number of buffers for the packet
+  *
+  * This function allocates an skb.  It then populates it with the page
+  * data from the current receive descriptor, taking care to set up the
+  * skb correctly.
+  */
+ static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,
+-					  struct xdp_buff *xdp,
+-					  u32 nr_frags)
++					  struct xdp_buff *xdp)
+ {
+ 	unsigned int size = xdp->data_end - xdp->data;
+ 	struct i40e_rx_buffer *rx_buffer;
++	struct skb_shared_info *sinfo;
+ 	unsigned int headlen;
+ 	struct sk_buff *skb;
++	u32 nr_frags = 0;
+ 
+ 	/* prefetch first cache line of first page */
+ 	net_prefetch(xdp->data);
+@@ -2180,6 +2172,10 @@ static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,
+ 	memcpy(__skb_put(skb, headlen), xdp->data,
+ 	       ALIGN(headlen, sizeof(long)));
+ 
++	if (unlikely(xdp_buff_has_frags(xdp))) {
++		sinfo = xdp_get_shared_info_from_buff(xdp);
++		nr_frags = sinfo->nr_frags;
++	}
+ 	rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
+ 	/* update all of the pointers */
+ 	size -= headlen;
+@@ -2199,9 +2195,8 @@ static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,
+ 	}
+ 
+ 	if (unlikely(xdp_buff_has_frags(xdp))) {
+-		struct skb_shared_info *sinfo, *skinfo = skb_shinfo(skb);
++		struct skb_shared_info *skinfo = skb_shinfo(skb);
+ 
+-		sinfo = xdp_get_shared_info_from_buff(xdp);
+ 		memcpy(&skinfo->frags[skinfo->nr_frags], &sinfo->frags[0],
+ 		       sizeof(skb_frag_t) * nr_frags);
+ 
+@@ -2224,17 +2219,17 @@ static struct sk_buff *i40e_construct_skb(struct i40e_ring *rx_ring,
+  * i40e_build_skb - Build skb around an existing buffer
+  * @rx_ring: Rx descriptor ring to transact packets on
+  * @xdp: xdp_buff pointing to the data
+- * @nr_frags: number of buffers for the packet
+  *
+  * This function builds an skb around an existing Rx buffer, taking care
+  * to set up the skb correctly and avoid any memcpy overhead.
+  */
+ static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring,
+-				      struct xdp_buff *xdp,
+-				      u32 nr_frags)
++				      struct xdp_buff *xdp)
+ {
+ 	unsigned int metasize = xdp->data - xdp->data_meta;
++	struct skb_shared_info *sinfo;
+ 	struct sk_buff *skb;
++	u32 nr_frags;
+ 
+ 	/* Prefetch first cache line of first page. If xdp->data_meta
+ 	 * is unused, this points exactly as xdp->data, otherwise we
+@@ -2243,6 +2238,11 @@ static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring,
+ 	 */
+ 	net_prefetch(xdp->data_meta);
+ 
++	if (unlikely(xdp_buff_has_frags(xdp))) {
++		sinfo = xdp_get_shared_info_from_buff(xdp);
++		nr_frags = sinfo->nr_frags;
++	}
++
+ 	/* build an skb around the page buffer */
+ 	skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz);
+ 	if (unlikely(!skb))
+@@ -2255,9 +2255,6 @@ static struct sk_buff *i40e_build_skb(struct i40e_ring *rx_ring,
+ 		skb_metadata_set(skb, metasize);
+ 
+ 	if (unlikely(xdp_buff_has_frags(xdp))) {
+-		struct skb_shared_info *sinfo;
+-
+-		sinfo = xdp_get_shared_info_from_buff(xdp);
+ 		xdp_update_skb_shared_info(skb, nr_frags,
+ 					   sinfo->xdp_frags_size,
+ 					   nr_frags * xdp->frame_sz,
+@@ -2602,9 +2599,9 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
+ 			total_rx_bytes += size;
+ 		} else {
+ 			if (ring_uses_build_skb(rx_ring))
+-				skb = i40e_build_skb(rx_ring, xdp, nfrags);
++				skb = i40e_build_skb(rx_ring, xdp);
+ 			else
+-				skb = i40e_construct_skb(rx_ring, xdp, nfrags);
++				skb = i40e_construct_skb(rx_ring, xdp);
+ 
+ 			/* drop if we failed to retrieve a buffer */
+ 			if (!skb) {
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index e99fa854d17f1..65f38a57b3dfe 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -414,7 +414,8 @@ i40e_add_xsk_frag(struct i40e_ring *rx_ring, struct xdp_buff *first,
+ 	}
+ 
+ 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++,
+-				   virt_to_page(xdp->data_hard_start), 0, size);
++				   virt_to_page(xdp->data_hard_start),
++				   XDP_PACKET_HEADROOM, size);
+ 	sinfo->xdp_frags_size += size;
+ 	xsk_buff_add_frag(xdp);
+ 
+@@ -499,7 +500,6 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
+ 		xdp_res = i40e_run_xdp_zc(rx_ring, first, xdp_prog);
+ 		i40e_handle_xdp_result_zc(rx_ring, first, rx_desc, &rx_packets,
+ 					  &rx_bytes, xdp_res, &failure);
+-		first->flags = 0;
+ 		next_to_clean = next_to_process;
+ 		if (failure)
+ 			break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
+index 7fa43827a3f06..4f3e65b47cdc3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_base.c
++++ b/drivers/net/ethernet/intel/ice/ice_base.c
+@@ -534,19 +534,27 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
+ 	ring->rx_buf_len = ring->vsi->rx_buf_len;
+ 
+ 	if (ring->vsi->type == ICE_VSI_PF) {
+-		if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
+-			/* coverity[check_return] */
+-			__xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
+-					   ring->q_index,
+-					   ring->q_vector->napi.napi_id,
+-					   ring->vsi->rx_buf_len);
++		if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
++			err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
++						 ring->q_index,
++						 ring->q_vector->napi.napi_id,
++						 ring->rx_buf_len);
++			if (err)
++				return err;
++		}
+ 
+ 		ring->xsk_pool = ice_xsk_pool(ring);
+ 		if (ring->xsk_pool) {
+-			xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
++			xdp_rxq_info_unreg(&ring->xdp_rxq);
+ 
+ 			ring->rx_buf_len =
+ 				xsk_pool_get_rx_frame_size(ring->xsk_pool);
++			err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
++						 ring->q_index,
++						 ring->q_vector->napi.napi_id,
++						 ring->rx_buf_len);
++			if (err)
++				return err;
+ 			err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+ 							 MEM_TYPE_XSK_BUFF_POOL,
+ 							 NULL);
+@@ -557,13 +565,14 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
+ 			dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
+ 				 ring->q_index);
+ 		} else {
+-			if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
+-				/* coverity[check_return] */
+-				__xdp_rxq_info_reg(&ring->xdp_rxq,
+-						   ring->netdev,
+-						   ring->q_index,
+-						   ring->q_vector->napi.napi_id,
+-						   ring->vsi->rx_buf_len);
++			if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
++				err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
++							 ring->q_index,
++							 ring->q_vector->napi.napi_id,
++							 ring->rx_buf_len);
++				if (err)
++					return err;
++			}
+ 
+ 			err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+ 							 MEM_TYPE_PAGE_SHARED,
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 9e97ea8630686..9170a3e8f0884 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -513,11 +513,6 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
+ 	if (ice_is_xdp_ena_vsi(rx_ring->vsi))
+ 		WRITE_ONCE(rx_ring->xdp_prog, rx_ring->vsi->xdp_prog);
+ 
+-	if (rx_ring->vsi->type == ICE_VSI_PF &&
+-	    !xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
+-		if (xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
+-				     rx_ring->q_index, rx_ring->q_vector->napi.napi_id))
+-			goto err;
+ 	return 0;
+ 
+ err:
+@@ -600,9 +595,7 @@ out_failure:
+ 		ret = ICE_XDP_CONSUMED;
+ 	}
+ exit:
+-	rx_buf->act = ret;
+-	if (unlikely(xdp_buff_has_frags(xdp)))
+-		ice_set_rx_bufs_act(xdp, rx_ring, ret);
++	ice_set_rx_bufs_act(xdp, rx_ring, ret);
+ }
+ 
+ /**
+@@ -890,14 +883,17 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ 	}
+ 
+ 	if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
+-		if (unlikely(xdp_buff_has_frags(xdp)))
+-			ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
++		ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
+ 		return -ENOMEM;
+ 	}
+ 
+ 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
+ 				   rx_buf->page_offset, size);
+ 	sinfo->xdp_frags_size += size;
++	/* remember frag count before XDP prog execution; bpf_xdp_adjust_tail()
++	 * can pop off frags, but the driver has to handle the accounting itself
++	 */
++	rx_ring->nr_frags = sinfo->nr_frags;
+ 
+ 	if (page_is_pfmemalloc(rx_buf->page))
+ 		xdp_buff_set_frag_pfmemalloc(xdp);
+@@ -1249,6 +1245,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 
+ 		xdp->data = NULL;
+ 		rx_ring->first_desc = ntc;
++		rx_ring->nr_frags = 0;
+ 		continue;
+ construct_skb:
+ 		if (likely(ice_ring_uses_build_skb(rx_ring)))
+@@ -1264,10 +1261,12 @@ construct_skb:
+ 						    ICE_XDP_CONSUMED);
+ 			xdp->data = NULL;
+ 			rx_ring->first_desc = ntc;
++			rx_ring->nr_frags = 0;
+ 			break;
+ 		}
+ 		xdp->data = NULL;
+ 		rx_ring->first_desc = ntc;
++		rx_ring->nr_frags = 0;
+ 
+ 		stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
+ 		if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index daf7b9dbb1435..b28b9826bbcd3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -333,6 +333,7 @@ struct ice_rx_ring {
+ 	struct ice_channel *ch;
+ 	struct ice_tx_ring *xdp_ring;
+ 	struct xsk_buff_pool *xsk_pool;
++	u32 nr_frags;
+ 	dma_addr_t dma;			/* physical address of ring */
+ 	u64 cached_phctime;
+ 	u16 rx_buf_len;
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+index 115969ecdf7b9..b0e56675f98b2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+@@ -12,26 +12,39 @@
+  * act: action to store onto Rx buffers related to XDP buffer parts
+  *
+  * Set action that should be taken before putting Rx buffer from first frag
+- * to one before last. Last one is handled by caller of this function as it
+- * is the EOP frag that is currently being processed. This function is
+- * supposed to be called only when XDP buffer contains frags.
++ * to the last.
+  */
+ static inline void
+ ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring,
+ 		    const unsigned int act)
+ {
+-	const struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+-	u32 first = rx_ring->first_desc;
+-	u32 nr_frags = sinfo->nr_frags;
++	u32 sinfo_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
++	u32 nr_frags = rx_ring->nr_frags + 1;
++	u32 idx = rx_ring->first_desc;
+ 	u32 cnt = rx_ring->count;
+ 	struct ice_rx_buf *buf;
+ 
+ 	for (int i = 0; i < nr_frags; i++) {
+-		buf = &rx_ring->rx_buf[first];
++		buf = &rx_ring->rx_buf[idx];
+ 		buf->act = act;
+ 
+-		if (++first == cnt)
+-			first = 0;
++		if (++idx == cnt)
++			idx = 0;
++	}
++
++	/* adjust pagecnt_bias on frags freed by XDP prog */
++	if (sinfo_frags < rx_ring->nr_frags && act == ICE_XDP_CONSUMED) {
++		u32 delta = rx_ring->nr_frags - sinfo_frags;
++
++		while (delta) {
++			if (idx == 0)
++				idx = cnt - 1;
++			else
++				idx--;
++			buf = &rx_ring->rx_buf[idx];
++			buf->pagecnt_bias--;
++			delta--;
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 99954508184f9..f3663b3f6390e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -820,7 +820,8 @@ ice_add_xsk_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *first,
+ 	}
+ 
+ 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++,
+-				   virt_to_page(xdp->data_hard_start), 0, size);
++				   virt_to_page(xdp->data_hard_start),
++				   XDP_PACKET_HEADROOM, size);
+ 	sinfo->xdp_frags_size += size;
+ 	xsk_buff_add_frag(xdp);
+ 
+@@ -891,7 +892,6 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
+ 
+ 		if (!first) {
+ 			first = xdp;
+-			xdp_buff_clear_frags_flag(first);
+ 		} else if (ice_add_xsk_frag(rx_ring, first, xdp, size)) {
+ 			break;
+ 		}
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 19809b0ddcd90..0241e498cc20f 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -783,6 +783,8 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
+ 	/* setup watchdog timeout value to be 5 second */
+ 	netdev->watchdog_timeo = 5 * HZ;
+ 
++	netdev->dev_port = idx;
++
+ 	/* configure default MTU size */
+ 	netdev->min_mtu = ETH_MIN_MTU;
+ 	netdev->max_mtu = vport->max_mtu;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 93137606869e2..065f07392c964 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -614,12 +614,38 @@ static void mvpp23_bm_set_8pool_mode(struct mvpp2 *priv)
+ 	mvpp2_write(priv, MVPP22_BM_POOL_BASE_ADDR_HIGH_REG, val);
+ }
+ 
++/* Cleanup pool before actual initialization in the OS */
++static void mvpp2_bm_pool_cleanup(struct mvpp2 *priv, int pool_id)
++{
++	unsigned int thread = mvpp2_cpu_to_thread(priv, get_cpu());
++	u32 val;
++	int i;
++
++	/* Drain the BM from all possible residues left by firmware */
++	for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++)
++		mvpp2_thread_read(priv, thread, MVPP2_BM_PHY_ALLOC_REG(pool_id));
++
++	put_cpu();
++
++	/* Stop the BM pool */
++	val = mvpp2_read(priv, MVPP2_BM_POOL_CTRL_REG(pool_id));
++	val |= MVPP2_BM_STOP_MASK;
++	mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(pool_id), val);
++}
++
+ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
+ {
+ 	enum dma_data_direction dma_dir = DMA_FROM_DEVICE;
+ 	int i, err, poolnum = MVPP2_BM_POOLS_NUM;
+ 	struct mvpp2_port *port;
+ 
++	if (priv->percpu_pools)
++		poolnum = mvpp2_get_nrxqs(priv) * 2;
++
++	/* Clean up the pools in case they contain stale state */
++	for (i = 0; i < poolnum; i++)
++		mvpp2_bm_pool_cleanup(priv, i);
++
+ 	if (priv->percpu_pools) {
+ 		for (i = 0; i < priv->port_count; i++) {
+ 			port = priv->port_list[i];
+@@ -629,7 +655,6 @@ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
+ 			}
+ 		}
+ 
+-		poolnum = mvpp2_get_nrxqs(priv) * 2;
+ 		for (i = 0; i < poolnum; i++) {
+ 			/* the pool in use */
+ 			int pn = i / (poolnum / 2);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index a7b1f9686c09a..4957412ff1f65 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1923,6 +1923,7 @@ static void cmd_status_log(struct mlx5_core_dev *dev, u16 opcode, u8 status,
+ {
+ 	const char *namep = mlx5_command_str(opcode);
+ 	struct mlx5_cmd_stats *stats;
++	unsigned long flags;
+ 
+ 	if (!err || !(strcmp(namep, "unknown command opcode")))
+ 		return;
+@@ -1930,7 +1931,7 @@ static void cmd_status_log(struct mlx5_core_dev *dev, u16 opcode, u8 status,
+ 	stats = xa_load(&dev->cmd.stats, opcode);
+ 	if (!stats)
+ 		return;
+-	spin_lock_irq(&stats->lock);
++	spin_lock_irqsave(&stats->lock, flags);
+ 	stats->failed++;
+ 	if (err < 0)
+ 		stats->last_failed_errno = -err;
+@@ -1939,7 +1940,7 @@ static void cmd_status_log(struct mlx5_core_dev *dev, u16 opcode, u8 status,
+ 		stats->last_failed_mbox_status = status;
+ 		stats->last_failed_syndrome = syndrome;
+ 	}
+-	spin_unlock_irq(&stats->lock);
++	spin_unlock_irqrestore(&stats->lock, flags);
+ }
+ 
+ /* preserve -EREMOTEIO for outbox.status != OK, otherwise return err as is */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs_tt_redirect.c b/drivers/net/ethernet/mellanox/mlx5/core/en/fs_tt_redirect.c
+index e1283531e0b81..671adbad0a40f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs_tt_redirect.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs_tt_redirect.c
+@@ -436,6 +436,7 @@ static int fs_any_create_groups(struct mlx5e_flow_table *ft)
+ 	in = kvzalloc(inlen, GFP_KERNEL);
+ 	if  (!in || !ft->g) {
+ 		kfree(ft->g);
++		ft->g = NULL;
+ 		kvfree(in);
+ 		return -ENOMEM;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index e097f336e1c4a..30507b7c2fb17 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -1062,8 +1062,8 @@ void mlx5e_build_sq_param(struct mlx5_core_dev *mdev,
+ 	void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
+ 	bool allow_swp;
+ 
+-	allow_swp =
+-		mlx5_geneve_tx_allowed(mdev) || !!mlx5_ipsec_device_caps(mdev);
++	allow_swp = mlx5_geneve_tx_allowed(mdev) ||
++		    (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_CRYPTO);
+ 	mlx5e_build_sq_param_common(mdev, param);
+ 	MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size);
+ 	MLX5_SET(sqc, sqc, allow_swp, allow_swp);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+index af3928eddafd1..803035d4e5976 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+@@ -213,7 +213,7 @@ static void mlx5e_ptp_handle_ts_cqe(struct mlx5e_ptpsq *ptpsq,
+ 	mlx5e_ptpsq_mark_ts_cqes_undelivered(ptpsq, hwtstamp);
+ out:
+ 	napi_consume_skb(skb, budget);
+-	md_buff[*md_buff_sz++] = metadata_id;
++	md_buff[(*md_buff_sz)++] = metadata_id;
+ 	if (unlikely(mlx5e_ptp_metadata_map_unhealthy(&ptpsq->metadata_map)) &&
+ 	    !test_and_set_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state))
+ 		queue_work(ptpsq->txqsq.priv->wq, &ptpsq->report_unhealthy_work);
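The ptp.c change is a one-character precedence fix: `*md_buff_sz++` parses as `*(md_buff_sz++)`, which reads the counter and then advances the pointer, so the counter never grows and the caller's pointer is left corrupted; `(*md_buff_sz)++` increments the counter itself. A compilable demonstration of the difference:

#include <stdio.h>

int main(void)
{
	unsigned char buf[4] = { 0 };
	unsigned char sz = 0, *psz = &sz;

	/* Buggy form: dereference, then advance the *pointer*. */
	buf[*psz++] = 7;
	printf("after *psz++:   sz=%u, pointer moved=%d\n", sz, psz != &sz);

	psz = &sz;
	/* Fixed form: increment the pointed-to counter. */
	buf[(*psz)++] = 8;
	printf("after (*psz)++: sz=%u\n", sz);
	return 0;
}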
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index 161c5190c236a..05612d9c6080c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -336,12 +336,17 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
+ 	/* iv len */
+ 	aes_gcm->icv_len = x->aead->alg_icv_len;
+ 
++	attrs->dir = x->xso.dir;
++
+ 	/* esn */
+ 	if (x->props.flags & XFRM_STATE_ESN) {
+ 		attrs->replay_esn.trigger = true;
+ 		attrs->replay_esn.esn = sa_entry->esn_state.esn;
+ 		attrs->replay_esn.esn_msb = sa_entry->esn_state.esn_msb;
+ 		attrs->replay_esn.overlap = sa_entry->esn_state.overlap;
++		if (attrs->dir == XFRM_DEV_OFFLOAD_OUT)
++			goto skip_replay_window;
++
+ 		switch (x->replay_esn->replay_window) {
+ 		case 32:
+ 			attrs->replay_esn.replay_window =
+@@ -365,7 +370,7 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
+ 		}
+ 	}
+ 
+-	attrs->dir = x->xso.dir;
++skip_replay_window:
+ 	/* spi */
+ 	attrs->spi = be32_to_cpu(x->id.spi);
+ 
+@@ -501,7 +506,8 @@ static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (x->replay_esn && x->replay_esn->replay_window != 32 &&
++		if (x->replay_esn && x->xso.dir == XFRM_DEV_OFFLOAD_IN &&
++		    x->replay_esn->replay_window != 32 &&
+ 		    x->replay_esn->replay_window != 64 &&
+ 		    x->replay_esn->replay_window != 128 &&
+ 		    x->replay_esn->replay_window != 256) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
+index bb7f86c993e55..e66f486faafe1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
+@@ -254,11 +254,13 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 
+ 	ft->g = kcalloc(MLX5E_ARFS_NUM_GROUPS,
+ 			sizeof(*ft->g), GFP_KERNEL);
+-	in = kvzalloc(inlen, GFP_KERNEL);
+-	if  (!in || !ft->g) {
+-		kfree(ft->g);
+-		kvfree(in);
++	if (!ft->g)
+ 		return -ENOMEM;
++
++	in = kvzalloc(inlen, GFP_KERNEL);
++	if (!in) {
++		err = -ENOMEM;
++		goto err_free_g;
+ 	}
+ 
+ 	mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+@@ -278,7 +280,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 		break;
+ 	default:
+ 		err = -EINVAL;
+-		goto out;
++		goto err_free_in;
+ 	}
+ 
+ 	switch (type) {
+@@ -300,7 +302,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 		break;
+ 	default:
+ 		err = -EINVAL;
+-		goto out;
++		goto err_free_in;
+ 	}
+ 
+ 	MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+@@ -309,7 +311,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 	MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ 	ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+ 	if (IS_ERR(ft->g[ft->num_groups]))
+-		goto err;
++		goto err_clean_group;
+ 	ft->num_groups++;
+ 
+ 	memset(in, 0, inlen);
+@@ -318,18 +320,20 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ 	MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ 	ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+ 	if (IS_ERR(ft->g[ft->num_groups]))
+-		goto err;
++		goto err_clean_group;
+ 	ft->num_groups++;
+ 
+ 	kvfree(in);
+ 	return 0;
+ 
+-err:
++err_clean_group:
+ 	err = PTR_ERR(ft->g[ft->num_groups]);
+ 	ft->g[ft->num_groups] = NULL;
+-out:
++err_free_in:
+ 	kvfree(in);
+-
++err_free_g:
++	kfree(ft->g);
++	ft->g = NULL;
+ 	return err;
+ }
+ 
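The arfs_create_groups() restructure replaces the combined "allocate both, free both on any failure" check with one goto label per resource, unwound in reverse allocation order, and NULLs ft->g so later teardown cannot double-free (the same NULLing applied in the fs_tt_redirect hunk earlier). The ladder shape in plain C, under simplified allocations:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct table { int *g; };

static int create_groups(struct table *ft, int fail_second)
{
	void *in;
	int err;

	ft->g = calloc(4, sizeof(*ft->g));
	if (!ft->g)
		return -ENOMEM;

	in = fail_second ? NULL : calloc(1, 64);
	if (!in) {
		err = -ENOMEM;
		goto err_free_g;	/* frees exactly what exists so far */
	}

	/* ... build the flow groups using ft->g and in ... */

	free(in);
	return 0;

err_free_g:
	free(ft->g);
	ft->g = NULL;	/* keep teardown paths from freeing it again */
	return err;
}

int main(void)
{
	struct table ft;

	printf("ok path: %d\n", create_groups(&ft, 0));
	printf("fail path: %d, g=%p\n", create_groups(&ft, 1), (void *)ft.g);
	return 0;
}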
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 96af9e2ab1d87..404dd1d9b28bf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -761,7 +761,7 @@ static int mlx5e_hairpin_create_indirect_rqt(struct mlx5e_hairpin *hp)
+ 
+ 	err = mlx5e_rss_params_indir_init(&indir, mdev,
+ 					  mlx5e_rqt_size(mdev, hp->num_channels),
+-					  mlx5e_rqt_size(mdev, priv->max_nch));
++					  mlx5e_rqt_size(mdev, hp->num_channels));
+ 	if (err)
+ 		return err;
+ 
+@@ -2014,9 +2014,10 @@ static void mlx5e_tc_del_fdb_peer_flow(struct mlx5e_tc_flow *flow,
+ 	list_for_each_entry_safe(peer_flow, tmp, &flow->peer_flows, peer_flows) {
+ 		if (peer_index != mlx5_get_dev_index(peer_flow->priv->mdev))
+ 			continue;
++
++		list_del(&peer_flow->peer_flows);
+ 		if (refcount_dec_and_test(&peer_flow->refcnt)) {
+ 			mlx5e_tc_del_fdb_flow(peer_flow->priv, peer_flow);
+-			list_del(&peer_flow->peer_flows);
+ 			kfree(peer_flow);
+ 		}
+ 	}
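The en_tc.c peer-flow fix moves list_del() out of the refcount-zero branch: the node must leave the list whenever this path gives up its reference, because unlinking only on the last put leaves a dangling entry whenever another holder remains. The ordering in a minimal singly linked list with a plain refcount:

#include <stdio.h>

struct flow {
	int refcnt;
	struct flow *next;
};

/* Unlink unconditionally, then drop the reference. */
static void del_peer(struct flow **head, struct flow *f)
{
	for (struct flow **p = head; *p; p = &(*p)->next) {
		if (*p == f) {
			*p = f->next;	/* list_del() */
			break;
		}
	}
	if (--f->refcnt == 0)
		printf("freeing flow\n");	/* kfree(f) */
	else
		printf("flow still referenced (refcnt=%d)\n", f->refcnt);
}

int main(void)
{
	struct flow a = { .refcnt = 2, .next = NULL };
	struct flow *head = &a;

	del_peer(&head, &a);
	printf("list empty even though flow survives: %d\n", head == NULL);
	return 0;
}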
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
+index a7ed87e9d8426..22dd30cf8033f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
+@@ -83,6 +83,7 @@ mlx5_esw_bridge_mdb_flow_create(u16 esw_owner_vhca_id, struct mlx5_esw_bridge_md
+ 		i++;
+ 	}
+ 
++	rule_spec->flow_context.flags |= FLOW_CONTEXT_UPLINK_HAIRPIN_EN;
+ 	rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ 	dmac_v = MLX5_ADDR_OF(fte_match_param, rule_spec->match_value, outer_headers.dmac_47_16);
+ 	ether_addr_copy(dmac_v, entry->key.addr);
+@@ -587,6 +588,7 @@ mlx5_esw_bridge_mcast_vlan_flow_create(u16 vlan_proto, struct mlx5_esw_bridge_po
+ 	if (!rule_spec)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	rule_spec->flow_context.flags |= FLOW_CONTEXT_UPLINK_HAIRPIN_EN;
+ 	rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ 
+ 	flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+@@ -662,6 +664,7 @@ mlx5_esw_bridge_mcast_fwd_flow_create(struct mlx5_esw_bridge_port *port)
+ 		dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID;
+ 		dest.vport.vhca_id = port->esw_owner_vhca_id;
+ 	}
++	rule_spec->flow_context.flags |= FLOW_CONTEXT_UPLINK_HAIRPIN_EN;
+ 	handle = mlx5_add_flow_rules(port->mcast.ft, rule_spec, &flow_act, &dest, 1);
+ 
+ 	kvfree(rule_spec);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
+index a4b9253316618..b29299c49ab3d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
+@@ -566,6 +566,8 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
+ 		 fte->flow_context.flow_tag);
+ 	MLX5_SET(flow_context, in_flow_context, flow_source,
+ 		 fte->flow_context.flow_source);
++	MLX5_SET(flow_context, in_flow_context, uplink_hairpin_en,
++		 !!(fte->flow_context.flags & FLOW_CONTEXT_UPLINK_HAIRPIN_EN));
+ 
+ 	MLX5_SET(flow_context, in_flow_context, extended_destination,
+ 		 extended_dest);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c
+index 40c7be1240416..58bd749b5e4de 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/aso.c
+@@ -98,7 +98,7 @@ static int create_aso_cq(struct mlx5_aso_cq *cq, void *cqc_data)
+ 	mlx5_fill_page_frag_array(&cq->wq_ctrl.buf,
+ 				  (__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas));
+ 
+-	MLX5_SET(cqc,   cqc, cq_period_mode, DIM_CQ_PERIOD_MODE_START_FROM_EQE);
++	MLX5_SET(cqc,   cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE);
+ 	MLX5_SET(cqc,   cqc, c_eqn_or_apu_element, eqn);
+ 	MLX5_SET(cqc,   cqc, uar_page,      mdev->priv.uar->index);
+ 	MLX5_SET(cqc,   cqc, log_page_size, cq->wq_ctrl.buf.page_shift -
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+index e3ec559369fa0..d2b65a0ce47b6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+@@ -788,6 +788,7 @@ int mlx5dr_actions_build_ste_arr(struct mlx5dr_matcher *matcher,
+ 		switch (action_type) {
+ 		case DR_ACTION_TYP_DROP:
+ 			attr.final_icm_addr = nic_dmn->drop_icm_addr;
++			attr.hit_gvmi = nic_dmn->drop_icm_addr >> 48;
+ 			break;
+ 		case DR_ACTION_TYP_FT:
+ 			dest_action = action;
+@@ -873,11 +874,17 @@ int mlx5dr_actions_build_ste_arr(struct mlx5dr_matcher *matcher,
+ 							action->sampler->tx_icm_addr;
+ 			break;
+ 		case DR_ACTION_TYP_VPORT:
+-			attr.hit_gvmi = action->vport->caps->vhca_gvmi;
+-			dest_action = action;
+-			attr.final_icm_addr = rx_rule ?
+-				action->vport->caps->icm_address_rx :
+-				action->vport->caps->icm_address_tx;
++			if (unlikely(rx_rule && action->vport->caps->num == MLX5_VPORT_UPLINK)) {
++				/* can't go to uplink on RX rule - dropping instead */
++				attr.final_icm_addr = nic_dmn->drop_icm_addr;
++				attr.hit_gvmi = nic_dmn->drop_icm_addr >> 48;
++			} else {
++				attr.hit_gvmi = action->vport->caps->vhca_gvmi;
++				dest_action = action;
++				attr.final_icm_addr = rx_rule ?
++						      action->vport->caps->icm_address_rx :
++						      action->vport->caps->icm_address_tx;
++			}
+ 			break;
+ 		case DR_ACTION_TYP_POP_VLAN:
+ 			if (!rx_rule && !(dmn->ste_ctx->actions_caps &
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 49b81daf7411d..d094c3c1e2ee7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7467,6 +7467,9 @@ int stmmac_dvr_probe(struct device *device,
+ 		dev_err(priv->device, "unable to bring out of ahb reset: %pe\n",
+ 			ERR_PTR(ret));
+ 
++	/* Wait a bit for the reset to take effect */
++	udelay(10);
++
+ 	/* Init MAC and get the capabilities */
+ 	ret = stmmac_hw_init(priv);
+ 	if (ret)
+diff --git a/drivers/net/fjes/fjes_hw.c b/drivers/net/fjes/fjes_hw.c
+index 704e949484d0c..b9b5554ea8620 100644
+--- a/drivers/net/fjes/fjes_hw.c
++++ b/drivers/net/fjes/fjes_hw.c
+@@ -221,21 +221,25 @@ static int fjes_hw_setup(struct fjes_hw *hw)
+ 
+ 	mem_size = FJES_DEV_REQ_BUF_SIZE(hw->max_epid);
+ 	hw->hw_info.req_buf = kzalloc(mem_size, GFP_KERNEL);
+-	if (!(hw->hw_info.req_buf))
+-		return -ENOMEM;
++	if (!(hw->hw_info.req_buf)) {
++		result = -ENOMEM;
++		goto free_ep_info;
++	}
+ 
+ 	hw->hw_info.req_buf_size = mem_size;
+ 
+ 	mem_size = FJES_DEV_RES_BUF_SIZE(hw->max_epid);
+ 	hw->hw_info.res_buf = kzalloc(mem_size, GFP_KERNEL);
+-	if (!(hw->hw_info.res_buf))
+-		return -ENOMEM;
++	if (!(hw->hw_info.res_buf)) {
++		result = -ENOMEM;
++		goto free_req_buf;
++	}
+ 
+ 	hw->hw_info.res_buf_size = mem_size;
+ 
+ 	result = fjes_hw_alloc_shared_status_region(hw);
+ 	if (result)
+-		return result;
++		goto free_res_buf;
+ 
+ 	hw->hw_info.buffer_share_bit = 0;
+ 	hw->hw_info.buffer_unshare_reserve_bit = 0;
+@@ -246,11 +250,11 @@ static int fjes_hw_setup(struct fjes_hw *hw)
+ 
+ 			result = fjes_hw_alloc_epbuf(&buf_pair->tx);
+ 			if (result)
+-				return result;
++				goto free_epbuf;
+ 
+ 			result = fjes_hw_alloc_epbuf(&buf_pair->rx);
+ 			if (result)
+-				return result;
++				goto free_epbuf;
+ 
+ 			spin_lock_irqsave(&hw->rx_status_lock, flags);
+ 			fjes_hw_setup_epbuf(&buf_pair->tx, mac,
+@@ -273,6 +277,25 @@ static int fjes_hw_setup(struct fjes_hw *hw)
+ 	fjes_hw_init_command_registers(hw, &param);
+ 
+ 	return 0;
++
++free_epbuf:
++	for (epidx = 0; epidx < hw->max_epid ; epidx++) {
++		if (epidx == hw->my_epid)
++			continue;
++		fjes_hw_free_epbuf(&hw->ep_shm_info[epidx].tx);
++		fjes_hw_free_epbuf(&hw->ep_shm_info[epidx].rx);
++	}
++	fjes_hw_free_shared_status_region(hw);
++free_res_buf:
++	kfree(hw->hw_info.res_buf);
++	hw->hw_info.res_buf = NULL;
++free_req_buf:
++	kfree(hw->hw_info.req_buf);
++	hw->hw_info.req_buf = NULL;
++free_ep_info:
++	kfree(hw->ep_shm_info);
++	hw->ep_shm_info = NULL;
++	return result;
+ }
+ 
+ static void fjes_hw_cleanup(struct fjes_hw *hw)
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 706ea5263e879..cd15d7b380ab5 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -44,7 +44,7 @@
+ 
+ static unsigned int ring_size __ro_after_init = 128;
+ module_param(ring_size, uint, 0444);
+-MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)");
++MODULE_PARM_DESC(ring_size, "Ring buffer size (# of 4K pages)");
+ unsigned int netvsc_ring_bytes __ro_after_init;
+ 
+ static const u32 default_msg = NETIF_MSG_DRV | NETIF_MSG_PROBE |
+@@ -2805,7 +2805,7 @@ static int __init netvsc_drv_init(void)
+ 		pr_info("Increased ring_size to %u (min allowed)\n",
+ 			ring_size);
+ 	}
+-	netvsc_ring_bytes = ring_size * PAGE_SIZE;
++	netvsc_ring_bytes = VMBUS_RING_SIZE(ring_size * 4096);
+ 
+ 	register_netdevice_notifier(&netvsc_netdev_notifier);
+ 
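The fix above sizes the ring in fixed 4 KiB units and rounds the result with VMBUS_RING_SIZE() instead of multiplying by the host's PAGE_SIZE, so a 64K-page kernel no longer over-allocates by 16x. A rough userspace sketch of the arithmetic; the rounding rule and header size here are illustrative assumptions, not the exact VMBUS_RING_SIZE() definition:

#include <stdio.h>

#define HV_RING_HDR	4096UL	/* assumed ring-buffer header size */

/* Illustrative stand-in for VMBUS_RING_SIZE(): payload plus header,
 * rounded up to the page size. */
static unsigned long vmbus_ring_size(unsigned long payload, unsigned long page_sz)
{
	unsigned long total = HV_RING_HDR + payload;

	return (total + page_sz - 1) & ~(page_sz - 1);
}

int main(void)
{
	unsigned long ring_size = 128;	/* module parameter: # of 4K pages */

	/* Old code scaled by PAGE_SIZE: 16x too big on a 64K-page kernel. */
	printf("old, 64K pages: %lu bytes\n", ring_size * 65536);
	/* New code is PAGE_SIZE-independent apart from the final alignment. */
	printf("new: %lu bytes\n", vmbus_ring_size(ring_size * 4096, 4096));
	return 0;
}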
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index ce5ad4a824813..858175ca58cd8 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -120,6 +120,11 @@
+  */
+ #define LAN8814_1PPM_FORMAT			17179
+ 
++#define PTP_RX_VERSION				0x0248
++#define PTP_TX_VERSION				0x0288
++#define PTP_MAX_VERSION(x)			(((x) & GENMASK(7, 0)) << 8)
++#define PTP_MIN_VERSION(x)			((x) & GENMASK(7, 0))
++
+ #define PTP_RX_MOD				0x024F
+ #define PTP_RX_MOD_BAD_UDPV4_CHKSUM_FORCE_FCS_DIS_ BIT(3)
+ #define PTP_RX_TIMESTAMP_EN			0x024D
+@@ -3147,6 +3152,12 @@ static void lan8814_ptp_init(struct phy_device *phydev)
+ 	lanphy_write_page_reg(phydev, 5, PTP_TX_PARSE_IP_ADDR_EN, 0);
+ 	lanphy_write_page_reg(phydev, 5, PTP_RX_PARSE_IP_ADDR_EN, 0);
+ 
++	/* Disable checking for minorVersionPTP field */
++	lanphy_write_page_reg(phydev, 5, PTP_RX_VERSION,
++			      PTP_MAX_VERSION(0xff) | PTP_MIN_VERSION(0x0));
++	lanphy_write_page_reg(phydev, 5, PTP_TX_VERSION,
++			      PTP_MAX_VERSION(0xff) | PTP_MIN_VERSION(0x0));
++
+ 	skb_queue_head_init(&ptp_priv->tx_queue);
+ 	skb_queue_head_init(&ptp_priv->rx_queue);
+ 	INIT_LIST_HEAD(&ptp_priv->rx_ts_list);
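The new PTP_MAX_VERSION()/PTP_MIN_VERSION() helpers pack the accepted versionPTP bounds into one 16-bit register value: the maximum in bits 15:8, the minimum in bits 7:0, so writing 0xff/0x00 effectively disables the minorVersionPTP check. The bit packing in isolation (GENMASK(7, 0) expands to 0xff):

#include <stdio.h>

#define PTP_MAX_VERSION(x)	(((x) & 0xff) << 8)	/* bits 15:8 */
#define PTP_MIN_VERSION(x)	((x) & 0xff)		/* bits 7:0  */

int main(void)
{
	/* Accept any version: max 0xff, min 0x00 -> register value 0xff00 */
	unsigned int val = PTP_MAX_VERSION(0xff) | PTP_MIN_VERSION(0x0);

	printf("PTP_*_VERSION register value: 0x%04x\n", val);
	return 0;
}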
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index afa5497f7c35c..4a4f8c8e79fa1 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1630,13 +1630,19 @@ static int tun_xdp_act(struct tun_struct *tun, struct bpf_prog *xdp_prog,
+ 	switch (act) {
+ 	case XDP_REDIRECT:
+ 		err = xdp_do_redirect(tun->dev, xdp, xdp_prog);
+-		if (err)
++		if (err) {
++			dev_core_stats_rx_dropped_inc(tun->dev);
+ 			return err;
++		}
++		dev_sw_netstats_rx_add(tun->dev, xdp->data_end - xdp->data);
+ 		break;
+ 	case XDP_TX:
+ 		err = tun_xdp_tx(tun->dev, xdp);
+-		if (err < 0)
++		if (err < 0) {
++			dev_core_stats_rx_dropped_inc(tun->dev);
+ 			return err;
++		}
++		dev_sw_netstats_rx_add(tun->dev, xdp->data_end - xdp->data);
+ 		break;
+ 	case XDP_PASS:
+ 		break;
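Both success paths above now bump the software RX counters with the frame length taken from the XDP buffer, the span between xdp->data and xdp->data_end, while the failure paths count a dropped packet instead. The length computation in miniature (the struct is a simplified stand-in for xdp_buff, not the real layout):

#include <stdio.h>

/* Simplified stand-in for struct xdp_buff: data..data_end bounds the frame. */
struct xdp_buf {
	unsigned char *data;
	unsigned char *data_end;
};

int main(void)
{
	unsigned char frame[1514];
	struct xdp_buf xdp = { .data = frame, .data_end = frame + 42 };

	/* dev_sw_netstats_rx_add(dev, len) would be called with this length */
	printf("rx bytes accounted: %td\n", xdp.data_end - xdp.data);
	return 0;
}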
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index f12b606e2d2e5..667d55e261560 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -368,10 +368,6 @@ struct ath11k_vif {
+ 	struct ieee80211_chanctx_conf chanctx;
+ 	struct ath11k_arp_ns_offload arp_ns_offload;
+ 	struct ath11k_rekey_data rekey_data;
+-
+-#ifdef CONFIG_ATH11K_DEBUGFS
+-	struct dentry *debugfs_twt;
+-#endif /* CONFIG_ATH11K_DEBUGFS */
+ };
+ 
+ struct ath11k_vif_iter {
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
+index be76e7d1c4366..0796f4d92b477 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.c
++++ b/drivers/net/wireless/ath/ath11k/debugfs.c
+@@ -1893,35 +1893,30 @@ static const struct file_operations ath11k_fops_twt_resume_dialog = {
+ 	.open = simple_open
+ };
+ 
+-void ath11k_debugfs_add_interface(struct ath11k_vif *arvif)
++void ath11k_debugfs_op_vif_add(struct ieee80211_hw *hw,
++			       struct ieee80211_vif *vif)
+ {
++	struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
+ 	struct ath11k_base *ab = arvif->ar->ab;
++	struct dentry *debugfs_twt;
+ 
+ 	if (arvif->vif->type != NL80211_IFTYPE_AP &&
+ 	    !(arvif->vif->type == NL80211_IFTYPE_STATION &&
+ 	      test_bit(WMI_TLV_SERVICE_STA_TWT, ab->wmi_ab.svc_map)))
+ 		return;
+ 
+-	arvif->debugfs_twt = debugfs_create_dir("twt",
+-						arvif->vif->debugfs_dir);
+-	debugfs_create_file("add_dialog", 0200, arvif->debugfs_twt,
++	debugfs_twt = debugfs_create_dir("twt",
++					 arvif->vif->debugfs_dir);
++	debugfs_create_file("add_dialog", 0200, debugfs_twt,
+ 			    arvif, &ath11k_fops_twt_add_dialog);
+ 
+-	debugfs_create_file("del_dialog", 0200, arvif->debugfs_twt,
++	debugfs_create_file("del_dialog", 0200, debugfs_twt,
+ 			    arvif, &ath11k_fops_twt_del_dialog);
+ 
+-	debugfs_create_file("pause_dialog", 0200, arvif->debugfs_twt,
++	debugfs_create_file("pause_dialog", 0200, debugfs_twt,
+ 			    arvif, &ath11k_fops_twt_pause_dialog);
+ 
+-	debugfs_create_file("resume_dialog", 0200, arvif->debugfs_twt,
++	debugfs_create_file("resume_dialog", 0200, debugfs_twt,
+ 			    arvif, &ath11k_fops_twt_resume_dialog);
+ }
+ 
+-void ath11k_debugfs_remove_interface(struct ath11k_vif *arvif)
+-{
+-	if (!arvif->debugfs_twt)
+-		return;
+-
+-	debugfs_remove_recursive(arvif->debugfs_twt);
+-	arvif->debugfs_twt = NULL;
+-}
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.h b/drivers/net/wireless/ath/ath11k/debugfs.h
+index 3af0169f6cf21..6f630b42e95c7 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.h
++++ b/drivers/net/wireless/ath/ath11k/debugfs.h
+@@ -306,8 +306,8 @@ static inline int ath11k_debugfs_rx_filter(struct ath11k *ar)
+ 	return ar->debug.rx_filter;
+ }
+ 
+-void ath11k_debugfs_add_interface(struct ath11k_vif *arvif);
+-void ath11k_debugfs_remove_interface(struct ath11k_vif *arvif);
++void ath11k_debugfs_op_vif_add(struct ieee80211_hw *hw,
++			       struct ieee80211_vif *vif);
+ void ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
+ 				     enum wmi_direct_buffer_module id,
+ 				     enum ath11k_dbg_dbr_event event,
+@@ -386,14 +386,6 @@ static inline int ath11k_debugfs_get_fw_stats(struct ath11k *ar,
+ 	return 0;
+ }
+ 
+-static inline void ath11k_debugfs_add_interface(struct ath11k_vif *arvif)
+-{
+-}
+-
+-static inline void ath11k_debugfs_remove_interface(struct ath11k_vif *arvif)
+-{
+-}
+-
+ static inline void
+ ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
+ 				enum wmi_direct_buffer_module id,
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 7f7b398177737..71c6dab1aedba 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6750,13 +6750,6 @@ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ 		goto err;
+ 	}
+ 
+-	/* In the case of hardware recovery, debugfs files are
+-	 * not deleted since ieee80211_ops.remove_interface() is
+-	 * not invoked. In such cases, try to delete the files.
+-	 * These will be re-created later.
+-	 */
+-	ath11k_debugfs_remove_interface(arvif);
+-
+ 	memset(arvif, 0, sizeof(*arvif));
+ 
+ 	arvif->ar = ar;
+@@ -6933,8 +6926,6 @@ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ 
+ 	ath11k_dp_vdev_tx_attach(ar, arvif);
+ 
+-	ath11k_debugfs_add_interface(arvif);
+-
+ 	if (vif->type != NL80211_IFTYPE_MONITOR &&
+ 	    test_bit(ATH11K_FLAG_MONITOR_CONF_ENABLED, &ar->monitor_flags)) {
+ 		ret = ath11k_mac_monitor_vdev_create(ar);
+@@ -7050,8 +7041,6 @@ err_vdev_del:
+ 	/* Recalc txpower for remaining vdev */
+ 	ath11k_mac_txpower_recalc(ar);
+ 
+-	ath11k_debugfs_remove_interface(arvif);
+-
+ 	/* TODO: recal traffic pause state based on the available vdevs */
+ 
+ 	mutex_unlock(&ar->conf_mutex);
+@@ -9149,6 +9138,7 @@ static const struct ieee80211_ops ath11k_ops = {
+ #endif
+ 
+ #ifdef CONFIG_ATH11K_DEBUGFS
++	.vif_add_debugfs		= ath11k_debugfs_op_vif_add,
+ 	.sta_add_debugfs		= ath11k_debugfs_sta_op_add,
+ #endif
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index b658cf228fbe2..9160d81a871ee 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2018-2023 Intel Corporation
++ * Copyright (C) 2018-2024 Intel Corporation
+  */
+ #include <linux/firmware.h>
+ #include "iwl-drv.h"
+@@ -1096,7 +1096,7 @@ static int iwl_dbg_tlv_override_trig_node(struct iwl_fw_runtime *fwrt,
+ 		node_trig = (void *)node_tlv->data;
+ 	}
+ 
+-	memcpy(node_trig->data + offset, trig->data, trig_data_len);
++	memcpy((u8 *)node_trig->data + offset, trig->data, trig_data_len);
+ 	node_tlv->length = cpu_to_le32(size);
+ 
+ 	if (policy & IWL_FW_INI_APPLY_POLICY_OVERRIDE_CFG) {
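The (u8 *) cast matters because node_trig->data is not a byte array: without the cast, node_trig->data + offset advances by offset elements (each wider than one byte), not offset bytes, landing the copy at the wrong destination. A minimal demonstration of the scaling difference, with hypothetical types:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t data[8];
	size_t offset = 4;	/* intended as a byte offset */

	/* Pointer arithmetic scales by the pointee size: */
	printf("data + 4 advances %zu bytes\n",
	       (size_t)((char *)(data + offset) - (char *)data));	/* 16 */
	printf("(uint8_t *)data + 4 advances %zu bytes\n",
	       (size_t)(((uint8_t *)data + offset) - (uint8_t *)data));	/* 4 */
	return 0;
}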
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 84f345c69ea56..b2971dd953358 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -1378,12 +1378,12 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ 		 * value of the frequency. In such a case, do not abort but
+ 		 * configure the hardware to the desired frequency forcefully.
+ 		 */
+-		forced = opp_table->rate_clk_single != target_freq;
++		forced = opp_table->rate_clk_single != freq;
+ 	}
+ 
+-	ret = _set_opp(dev, opp_table, opp, &target_freq, forced);
++	ret = _set_opp(dev, opp_table, opp, &freq, forced);
+ 
+-	if (target_freq)
++	if (freq)
+ 		dev_pm_opp_put(opp);
+ 
+ put_opp_table:
+diff --git a/drivers/parisc/power.c b/drivers/parisc/power.c
+index bb0d92461b08b..7a6a3e7f2825b 100644
+--- a/drivers/parisc/power.c
++++ b/drivers/parisc/power.c
+@@ -213,7 +213,7 @@ static int __init power_init(void)
+ 	if (running_on_qemu && soft_power_reg)
+ 		register_sys_off_handler(SYS_OFF_MODE_POWER_OFF, SYS_OFF_PRIO_DEFAULT,
+ 					qemu_power_off, (void *)soft_power_reg);
+-	else
++	if (!running_on_qemu || soft_power_reg)
+ 		power_task = kthread_run(kpowerswd, (void*)soft_power_reg,
+ 					KTHREAD_NAME);
+ 	if (IS_ERR(power_task)) {
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index 1dd84c7a79de9..b1995ac268d77 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -1170,7 +1170,7 @@ static int mlxbf_pmc_program_crspace_counter(int blk_num, uint32_t cnt_num,
+ 	int ret;
+ 
+ 	addr = pmc->block[blk_num].mmio_base +
+-		(rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);
++		((cnt_num / 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);
+ 	ret = mlxbf_pmc_readl(addr, &word);
+ 	if (ret)
+ 		return ret;
+@@ -1413,7 +1413,7 @@ static int mlxbf_pmc_read_crspace_event(int blk_num, uint32_t cnt_num,
+ 	int ret;
+ 
+ 	addr = pmc->block[blk_num].mmio_base +
+-		(rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);
++		((cnt_num / 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);
+ 	ret = mlxbf_pmc_readl(addr, &word);
+ 	if (ret)
+ 		return ret;
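rounddown(cnt_num, 2) and cnt_num / 2 differ for every cnt_num >= 2: the former rounds to the nearest even counter number (0, 0, 2, 2, 4, ...), while the latter yields the index of the PERFSEL register shared by each counter pair (0, 0, 1, 1, 2, ...), so the old expression skipped every other register. A quick comparison, with a hypothetical register stride:

#include <stdio.h>

#define PERFSEL_SZ 8	/* hypothetical register stride, for illustration */

int main(void)
{
	for (unsigned int cnt = 0; cnt < 6; cnt++)
		printf("cnt %u: old offset %u, new offset %u\n", cnt,
		       (cnt - (cnt % 2)) * PERFSEL_SZ,	/* rounddown(cnt, 2) */
		       (cnt / 2) * PERFSEL_SZ);
	return 0;
}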
+diff --git a/drivers/platform/x86/intel/ifs/load.c b/drivers/platform/x86/intel/ifs/load.c
+index a1ee1a74fc3c4..2cf3b4a8813f9 100644
+--- a/drivers/platform/x86/intel/ifs/load.c
++++ b/drivers/platform/x86/intel/ifs/load.c
+@@ -399,7 +399,8 @@ int ifs_load_firmware(struct device *dev)
+ 	if (fw->size != expected_size) {
+ 		dev_err(dev, "File size mismatch (expected %u, actual %zu). Corrupted IFS image.\n",
+ 			expected_size, fw->size);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto release;
+ 	}
+ 
+ 	ret = image_sanity_check(dev, (struct microcode_header_intel *)fw->data);
+diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
+index 33ab207493e3e..33bb58dc3f78c 100644
+--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
++++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
+@@ -23,23 +23,23 @@ static int (*uncore_read)(struct uncore_data *data, unsigned int *min, unsigned
+ static int (*uncore_write)(struct uncore_data *data, unsigned int input, unsigned int min_max);
+ static int (*uncore_read_freq)(struct uncore_data *data, unsigned int *freq);
+ 
+-static ssize_t show_domain_id(struct device *dev, struct device_attribute *attr, char *buf)
++static ssize_t show_domain_id(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+ {
+-	struct uncore_data *data = container_of(attr, struct uncore_data, domain_id_dev_attr);
++	struct uncore_data *data = container_of(attr, struct uncore_data, domain_id_kobj_attr);
+ 
+ 	return sprintf(buf, "%u\n", data->domain_id);
+ }
+ 
+-static ssize_t show_fabric_cluster_id(struct device *dev, struct device_attribute *attr, char *buf)
++static ssize_t show_fabric_cluster_id(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+ {
+-	struct uncore_data *data = container_of(attr, struct uncore_data, fabric_cluster_id_dev_attr);
++	struct uncore_data *data = container_of(attr, struct uncore_data, fabric_cluster_id_kobj_attr);
+ 
+ 	return sprintf(buf, "%u\n", data->cluster_id);
+ }
+ 
+-static ssize_t show_package_id(struct device *dev, struct device_attribute *attr, char *buf)
++static ssize_t show_package_id(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+ {
+-	struct uncore_data *data = container_of(attr, struct uncore_data, package_id_dev_attr);
++	struct uncore_data *data = container_of(attr, struct uncore_data, package_id_kobj_attr);
+ 
+ 	return sprintf(buf, "%u\n", data->package_id);
+ }
+@@ -97,30 +97,30 @@ static ssize_t show_perf_status_freq_khz(struct uncore_data *data, char *buf)
+ }
+ 
+ #define store_uncore_min_max(name, min_max)				\
+-	static ssize_t store_##name(struct device *dev,		\
+-				     struct device_attribute *attr,	\
++	static ssize_t store_##name(struct kobject *kobj,		\
++				     struct kobj_attribute *attr,	\
+ 				     const char *buf, size_t count)	\
+ 	{								\
+-		struct uncore_data *data = container_of(attr, struct uncore_data, name##_dev_attr);\
++		struct uncore_data *data = container_of(attr, struct uncore_data, name##_kobj_attr);\
+ 									\
+ 		return store_min_max_freq_khz(data, buf, count,	\
+ 					      min_max);		\
+ 	}
+ 
+ #define show_uncore_min_max(name, min_max)				\
+-	static ssize_t show_##name(struct device *dev,		\
+-				    struct device_attribute *attr, char *buf)\
++	static ssize_t show_##name(struct kobject *kobj,		\
++				    struct kobj_attribute *attr, char *buf)\
+ 	{                                                               \
+-		struct uncore_data *data = container_of(attr, struct uncore_data, name##_dev_attr);\
++		struct uncore_data *data = container_of(attr, struct uncore_data, name##_kobj_attr);\
+ 									\
+ 		return show_min_max_freq_khz(data, buf, min_max);	\
+ 	}
+ 
+ #define show_uncore_perf_status(name)					\
+-	static ssize_t show_##name(struct device *dev,		\
+-				   struct device_attribute *attr, char *buf)\
++	static ssize_t show_##name(struct kobject *kobj,		\
++				   struct kobj_attribute *attr, char *buf)\
+ 	{                                                               \
+-		struct uncore_data *data = container_of(attr, struct uncore_data, name##_dev_attr);\
++		struct uncore_data *data = container_of(attr, struct uncore_data, name##_kobj_attr);\
+ 									\
+ 		return show_perf_status_freq_khz(data, buf); \
+ 	}
+@@ -134,11 +134,11 @@ show_uncore_min_max(max_freq_khz, 1);
+ show_uncore_perf_status(current_freq_khz);
+ 
+ #define show_uncore_data(member_name)					\
+-	static ssize_t show_##member_name(struct device *dev,	\
+-					   struct device_attribute *attr, char *buf)\
++	static ssize_t show_##member_name(struct kobject *kobj,	\
++					   struct kobj_attribute *attr, char *buf)\
+ 	{                                                               \
+ 		struct uncore_data *data = container_of(attr, struct uncore_data,\
+-							  member_name##_dev_attr);\
++							  member_name##_kobj_attr);\
+ 									\
+ 		return sysfs_emit(buf, "%u\n",				\
+ 				 data->member_name);			\
+@@ -149,29 +149,29 @@ show_uncore_data(initial_max_freq_khz);
+ 
+ #define init_attribute_rw(_name)					\
+ 	do {								\
+-		sysfs_attr_init(&data->_name##_dev_attr.attr);	\
+-		data->_name##_dev_attr.show = show_##_name;		\
+-		data->_name##_dev_attr.store = store_##_name;		\
+-		data->_name##_dev_attr.attr.name = #_name;		\
+-		data->_name##_dev_attr.attr.mode = 0644;		\
++		sysfs_attr_init(&data->_name##_kobj_attr.attr);	\
++		data->_name##_kobj_attr.show = show_##_name;		\
++		data->_name##_kobj_attr.store = store_##_name;		\
++		data->_name##_kobj_attr.attr.name = #_name;		\
++		data->_name##_kobj_attr.attr.mode = 0644;		\
+ 	} while (0)
+ 
+ #define init_attribute_ro(_name)					\
+ 	do {								\
+-		sysfs_attr_init(&data->_name##_dev_attr.attr);	\
+-		data->_name##_dev_attr.show = show_##_name;		\
+-		data->_name##_dev_attr.store = NULL;			\
+-		data->_name##_dev_attr.attr.name = #_name;		\
+-		data->_name##_dev_attr.attr.mode = 0444;		\
++		sysfs_attr_init(&data->_name##_kobj_attr.attr);	\
++		data->_name##_kobj_attr.show = show_##_name;		\
++		data->_name##_kobj_attr.store = NULL;			\
++		data->_name##_kobj_attr.attr.name = #_name;		\
++		data->_name##_kobj_attr.attr.mode = 0444;		\
+ 	} while (0)
+ 
+ #define init_attribute_root_ro(_name)					\
+ 	do {								\
+-		sysfs_attr_init(&data->_name##_dev_attr.attr);	\
+-		data->_name##_dev_attr.show = show_##_name;		\
+-		data->_name##_dev_attr.store = NULL;			\
+-		data->_name##_dev_attr.attr.name = #_name;		\
+-		data->_name##_dev_attr.attr.mode = 0400;		\
++		sysfs_attr_init(&data->_name##_kobj_attr.attr);	\
++		data->_name##_kobj_attr.show = show_##_name;		\
++		data->_name##_kobj_attr.store = NULL;			\
++		data->_name##_kobj_attr.attr.name = #_name;		\
++		data->_name##_kobj_attr.attr.mode = 0400;		\
+ 	} while (0)
+ 
+ static int create_attr_group(struct uncore_data *data, char *name)
+@@ -186,21 +186,21 @@ static int create_attr_group(struct uncore_data *data, char *name)
+ 
+ 	if (data->domain_id != UNCORE_DOMAIN_ID_INVALID) {
+ 		init_attribute_root_ro(domain_id);
+-		data->uncore_attrs[index++] = &data->domain_id_dev_attr.attr;
++		data->uncore_attrs[index++] = &data->domain_id_kobj_attr.attr;
+ 		init_attribute_root_ro(fabric_cluster_id);
+-		data->uncore_attrs[index++] = &data->fabric_cluster_id_dev_attr.attr;
++		data->uncore_attrs[index++] = &data->fabric_cluster_id_kobj_attr.attr;
+ 		init_attribute_root_ro(package_id);
+-		data->uncore_attrs[index++] = &data->package_id_dev_attr.attr;
++		data->uncore_attrs[index++] = &data->package_id_kobj_attr.attr;
+ 	}
+ 
+-	data->uncore_attrs[index++] = &data->max_freq_khz_dev_attr.attr;
+-	data->uncore_attrs[index++] = &data->min_freq_khz_dev_attr.attr;
+-	data->uncore_attrs[index++] = &data->initial_min_freq_khz_dev_attr.attr;
+-	data->uncore_attrs[index++] = &data->initial_max_freq_khz_dev_attr.attr;
++	data->uncore_attrs[index++] = &data->max_freq_khz_kobj_attr.attr;
++	data->uncore_attrs[index++] = &data->min_freq_khz_kobj_attr.attr;
++	data->uncore_attrs[index++] = &data->initial_min_freq_khz_kobj_attr.attr;
++	data->uncore_attrs[index++] = &data->initial_max_freq_khz_kobj_attr.attr;
+ 
+ 	ret = uncore_read_freq(data, &freq);
+ 	if (!ret)
+-		data->uncore_attrs[index++] = &data->current_freq_khz_dev_attr.attr;
++		data->uncore_attrs[index++] = &data->current_freq_khz_kobj_attr.attr;
+ 
+ 	data->uncore_attrs[index] = NULL;
+ 
+diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.h b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.h
+index 7afb69977c7e8..0e5bf507e5552 100644
+--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.h
++++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.h
+@@ -26,14 +26,14 @@
+  * @instance_id:	Unique instance id to append to directory name
+  * @name:		Sysfs entry name for this instance
+  * @uncore_attr_group:	Attribute group storage
+- * @max_freq_khz_dev_attr: Storage for device attribute max_freq_khz
+- * @mix_freq_khz_dev_attr: Storage for device attribute min_freq_khz
+- * @initial_max_freq_khz_dev_attr: Storage for device attribute initial_max_freq_khz
+- * @initial_min_freq_khz_dev_attr: Storage for device attribute initial_min_freq_khz
+- * @current_freq_khz_dev_attr: Storage for device attribute current_freq_khz
+- * @domain_id_dev_attr: Storage for device attribute domain_id
+- * @fabric_cluster_id_dev_attr: Storage for device attribute fabric_cluster_id
+- * @package_id_dev_attr: Storage for device attribute package_id
++ * @max_freq_khz_kobj_attr: Storage for kobject attribute max_freq_khz
++ * @mix_freq_khz_kobj_attr: Storage for kobject attribute min_freq_khz
++ * @initial_max_freq_khz_kobj_attr: Storage for kobject attribute initial_max_freq_khz
++ * @initial_min_freq_khz_kobj_attr: Storage for kobject attribute initial_min_freq_khz
++ * @current_freq_khz_kobj_attr: Storage for kobject attribute current_freq_khz
++ * @domain_id_kobj_attr: Storage for kobject attribute domain_id
++ * @fabric_cluster_id_kobj_attr: Storage for kobject attribute fabric_cluster_id
++ * @package_id_kobj_attr: Storage for kobject attribute package_id
+  * @uncore_attrs:	Attribute storage for group creation
+  *
+  * This structure is used to encapsulate all data related to uncore sysfs
+@@ -53,14 +53,14 @@ struct uncore_data {
+ 	char name[32];
+ 
+ 	struct attribute_group uncore_attr_group;
+-	struct device_attribute max_freq_khz_dev_attr;
+-	struct device_attribute min_freq_khz_dev_attr;
+-	struct device_attribute initial_max_freq_khz_dev_attr;
+-	struct device_attribute initial_min_freq_khz_dev_attr;
+-	struct device_attribute current_freq_khz_dev_attr;
+-	struct device_attribute domain_id_dev_attr;
+-	struct device_attribute fabric_cluster_id_dev_attr;
+-	struct device_attribute package_id_dev_attr;
++	struct kobj_attribute max_freq_khz_kobj_attr;
++	struct kobj_attribute min_freq_khz_kobj_attr;
++	struct kobj_attribute initial_max_freq_khz_kobj_attr;
++	struct kobj_attribute initial_min_freq_khz_kobj_attr;
++	struct kobj_attribute current_freq_khz_kobj_attr;
++	struct kobj_attribute domain_id_kobj_attr;
++	struct kobj_attribute fabric_cluster_id_kobj_attr;
++	struct kobj_attribute package_id_kobj_attr;
+ 	struct attribute *uncore_attrs[9];
+ };
+ 
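The conversion works because each attribute is embedded in struct uncore_data, so container_of() can recover the per-instance data from the attribute pointer that sysfs hands to the callback; only the attribute type changes from device_attribute to kobj_attribute. The pattern in a self-contained form, with simplified stand-in types:

#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified stand-ins for kobj_attribute and uncore_data */
struct kobj_attr { const char *name; };

struct uncore_data {
	unsigned int domain_id;
	struct kobj_attr domain_id_kobj_attr;	/* embedded attribute */
};

static void show_domain_id(struct kobj_attr *attr)
{
	struct uncore_data *data =
		container_of(attr, struct uncore_data, domain_id_kobj_attr);

	printf("%u\n", data->domain_id);
}

int main(void)
{
	struct uncore_data d = { .domain_id = 3,
				 .domain_id_kobj_attr = { "domain_id" } };

	/* sysfs would pass only the attribute pointer to the callback */
	show_domain_id(&d.domain_id_kobj_attr);
	return 0;
}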
+diff --git a/drivers/platform/x86/p2sb.c b/drivers/platform/x86/p2sb.c
+index 1cf2471d54dde..17cc4b45e0239 100644
+--- a/drivers/platform/x86/p2sb.c
++++ b/drivers/platform/x86/p2sb.c
+@@ -26,6 +26,21 @@ static const struct x86_cpu_id p2sb_cpu_ids[] = {
+ 	{}
+ };
+ 
++/*
++ * Cache BAR0 of P2SB device functions 0 to 7.
++ * TODO: The constant 8 is the number of functions that the PCI specification
++ *       defines. The same definition exists tree-wide. Unify these
++ *       definitions, then move the result to include/uapi/linux/pci.h.
++ */
++#define NR_P2SB_RES_CACHE 8
++
++struct p2sb_res_cache {
++	u32 bus_dev_id;
++	struct resource res;
++};
++
++static struct p2sb_res_cache p2sb_resources[NR_P2SB_RES_CACHE];
++
+ static int p2sb_get_devfn(unsigned int *devfn)
+ {
+ 	unsigned int fn = P2SB_DEVFN_DEFAULT;
+@@ -39,8 +54,16 @@ static int p2sb_get_devfn(unsigned int *devfn)
+ 	return 0;
+ }
+ 
++static bool p2sb_valid_resource(struct resource *res)
++{
++	if (res->flags)
++		return true;
++
++	return false;
++}
++
+ /* Copy resource from the first BAR of the device in question */
+-static int p2sb_read_bar0(struct pci_dev *pdev, struct resource *mem)
++static void p2sb_read_bar0(struct pci_dev *pdev, struct resource *mem)
+ {
+ 	struct resource *bar0 = &pdev->resource[0];
+ 
+@@ -56,49 +79,66 @@ static int p2sb_read_bar0(struct pci_dev *pdev, struct resource *mem)
+ 	mem->end = bar0->end;
+ 	mem->flags = bar0->flags;
+ 	mem->desc = bar0->desc;
+-
+-	return 0;
+ }
+ 
+-static int p2sb_scan_and_read(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
++static void p2sb_scan_and_cache_devfn(struct pci_bus *bus, unsigned int devfn)
+ {
++	struct p2sb_res_cache *cache = &p2sb_resources[PCI_FUNC(devfn)];
+ 	struct pci_dev *pdev;
+-	int ret;
+ 
+ 	pdev = pci_scan_single_device(bus, devfn);
+ 	if (!pdev)
+-		return -ENODEV;
++		return;
+ 
+-	ret = p2sb_read_bar0(pdev, mem);
++	p2sb_read_bar0(pdev, &cache->res);
++	cache->bus_dev_id = bus->dev.id;
+ 
+ 	pci_stop_and_remove_bus_device(pdev);
+-	return ret;
+ }
+ 
+-/**
+- * p2sb_bar - Get Primary to Sideband (P2SB) bridge device BAR
+- * @bus: PCI bus to communicate with
+- * @devfn: PCI slot and function to communicate with
+- * @mem: memory resource to be filled in
+- *
+- * The BIOS prevents the P2SB device from being enumerated by the PCI
+- * subsystem, so we need to unhide and hide it back to lookup the BAR.
+- *
+- * if @bus is NULL, the bus 0 in domain 0 will be used.
+- * If @devfn is 0, it will be replaced by devfn of the P2SB device.
+- *
+- * Caller must provide a valid pointer to @mem.
+- *
+- * Locking is handled by pci_rescan_remove_lock mutex.
+- *
+- * Return:
+- * 0 on success or appropriate errno value on error.
+- */
+-int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
++static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn)
++{
++	unsigned int slot, fn;
++
++	if (PCI_FUNC(devfn) == 0) {
++		/*
++		 * When function number of the P2SB device is zero, scan it and
++		 * other function numbers, and if devices are available, cache
++		 * their BAR0s.
++		 */
++		slot = PCI_SLOT(devfn);
++		for (fn = 0; fn < NR_P2SB_RES_CACHE; fn++)
++			p2sb_scan_and_cache_devfn(bus, PCI_DEVFN(slot, fn));
++	} else {
++		/* Scan the P2SB device and cache its BAR0 */
++		p2sb_scan_and_cache_devfn(bus, devfn);
++	}
++
++	if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res))
++		return -ENOENT;
++
++	return 0;
++}
++
++static struct pci_bus *p2sb_get_bus(struct pci_bus *bus)
++{
++	static struct pci_bus *p2sb_bus;
++
++	bus = bus ?: p2sb_bus;
++	if (bus)
++		return bus;
++
++	/* Assume the P2SB is on bus 0 in domain 0 */
++	p2sb_bus = pci_find_bus(0, 0);
++	return p2sb_bus;
++}
++
++static int p2sb_cache_resources(void)
+ {
+-	struct pci_dev *pdev_p2sb;
+ 	unsigned int devfn_p2sb;
+ 	u32 value = P2SBC_HIDE;
++	struct pci_bus *bus;
++	u16 class;
+ 	int ret;
+ 
+ 	/* Get devfn for P2SB device itself */
+@@ -106,8 +146,17 @@ int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+ 	if (ret)
+ 		return ret;
+ 
+-	/* if @bus is NULL, use bus 0 in domain 0 */
+-	bus = bus ?: pci_find_bus(0, 0);
++	bus = p2sb_get_bus(NULL);
++	if (!bus)
++		return -ENODEV;
++
++	/*
++	 * When a device with the same devfn exists but its device class is not
++	 * PCI_CLASS_MEMORY_OTHER, as expected for the P2SB, do not touch it.
++	 */
++	pci_bus_read_config_word(bus, devfn_p2sb, PCI_CLASS_DEVICE, &class);
++	if (!PCI_POSSIBLE_ERROR(class) && class != PCI_CLASS_MEMORY_OTHER)
++		return -ENODEV;
+ 
+ 	/*
+ 	 * Prevent concurrent PCI bus scan from seeing the P2SB device and
+@@ -115,17 +164,16 @@ int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+ 	 */
+ 	pci_lock_rescan_remove();
+ 
+-	/* Unhide the P2SB device, if needed */
++	/*
++	 * The BIOS prevents the P2SB device from being enumerated by the PCI
++	 * subsystem, so we need to unhide it and then hide it again to look up
++	 * the BAR. Unhide the P2SB device here, if needed.
++	 */
+ 	pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value);
+ 	if (value & P2SBC_HIDE)
+ 		pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0);
+ 
+-	pdev_p2sb = pci_scan_single_device(bus, devfn_p2sb);
+-	if (devfn)
+-		ret = p2sb_scan_and_read(bus, devfn, mem);
+-	else
+-		ret = p2sb_read_bar0(pdev_p2sb, mem);
+-	pci_stop_and_remove_bus_device(pdev_p2sb);
++	ret = p2sb_scan_and_cache(bus, devfn_p2sb);
+ 
+ 	/* Hide the P2SB device, if it was hidden */
+ 	if (value & P2SBC_HIDE)
+@@ -133,12 +181,62 @@ int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+ 
+ 	pci_unlock_rescan_remove();
+ 
+-	if (ret)
+-		return ret;
++	return ret;
++}
++
++/**
++ * p2sb_bar - Get Primary to Sideband (P2SB) bridge device BAR
++ * @bus: PCI bus to communicate with
++ * @devfn: PCI slot and function to communicate with
++ * @mem: memory resource to be filled in
++ *
++ * If @bus is NULL, the bus 0 in domain 0 will be used.
++ * If @devfn is 0, it will be replaced by devfn of the P2SB device.
++ *
++ * Caller must provide a valid pointer to @mem.
++ *
++ * Return:
++ * 0 on success or appropriate errno value on error.
++ */
++int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
++{
++	struct p2sb_res_cache *cache;
++	int ret;
++
++	bus = p2sb_get_bus(bus);
++	if (!bus)
++		return -ENODEV;
++
++	if (!devfn) {
++		ret = p2sb_get_devfn(&devfn);
++		if (ret)
++			return ret;
++	}
+ 
+-	if (mem->flags == 0)
++	cache = &p2sb_resources[PCI_FUNC(devfn)];
++	if (cache->bus_dev_id != bus->dev.id)
+ 		return -ENODEV;
+ 
++	if (!p2sb_valid_resource(&cache->res))
++		return -ENOENT;
++
++	memcpy(mem, &cache->res, sizeof(*mem));
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(p2sb_bar);
++
++static int __init p2sb_fs_init(void)
++{
++	p2sb_cache_resources();
++	return 0;
++}
++
++/*
++ * pci_rescan_remove_lock, which guards access to the unhidden P2SB
++ * devices, cannot be taken in the sysfs PCI bus rescan path because of a
++ * deadlock. To avoid the deadlock, access the P2SB devices with the lock
++ * held at an early step in kernel initialization and cache the required
++ * resources. This should happen after subsys_initcall, which initializes
++ * the PCI subsystem, and before device_initcall, which requires the P2SB
++ * resources.
++ */
++fs_initcall(p2sb_fs_init);
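After this change p2sb_bar() never rescans the bus: p2sb_fs_init() snapshots BAR0 of functions 0 to 7 once, with pci_rescan_remove_lock held, and later lookups simply index the cache by PCI function number. A sketch of the lookup side under those assumptions, with simplified types in place of struct resource:

#include <stdio.h>
#include <string.h>
#include <errno.h>

#define NR_CACHE 8	/* PCI functions 0..7 */

struct res { unsigned long start, end, flags; };

struct cache_entry {
	unsigned int bus_dev_id;
	struct res res;
};

static struct cache_entry cache[NR_CACHE];	/* filled once at early init */

/* Mirrors p2sb_bar()'s cached path: validate, then copy the saved BAR0. */
static int p2sb_bar_cached(unsigned int bus_dev_id, unsigned int fn,
			   struct res *mem)
{
	struct cache_entry *e = &cache[fn & 7];

	if (e->bus_dev_id != bus_dev_id)
		return -ENODEV;
	if (!e->res.flags)		/* p2sb_valid_resource() */
		return -ENOENT;

	memcpy(mem, &e->res, sizeof(*mem));
	return 0;
}

int main(void)
{
	struct res mem;

	cache[1] = (struct cache_entry){ 0, { 0xfd000000, 0xfdffffff, 0x200 } };
	printf("fn1: %d\n", p2sb_bar_cached(0, 1, &mem));	/* 0  */
	printf("fn2: %d\n", p2sb_bar_cached(0, 2, &mem));	/* -ENOENT */
	return 0;
}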
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index 5dd22258cb3bc..bd017478e61b6 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -686,9 +686,10 @@ acpi_status wmi_install_notify_handler(const char *guid,
+ 			block->handler_data = data;
+ 
+ 			wmi_status = wmi_method_enable(block, true);
+-			if ((wmi_status != AE_OK) ||
+-			    ((wmi_status == AE_OK) && (status == AE_NOT_EXIST)))
+-				status = wmi_status;
++			if (ACPI_FAILURE(wmi_status))
++				dev_warn(&block->dev.dev, "Failed to enable device\n");
++
++			status = AE_OK;
+ 		}
+ 	}
+ 
+@@ -729,12 +730,13 @@ acpi_status wmi_remove_notify_handler(const char *guid)
+ 				status = AE_OK;
+ 			} else {
+ 				wmi_status = wmi_method_enable(block, false);
++				if (ACPI_FAILURE(wmi_status))
++					dev_warn(&block->dev.dev, "Failed to disable device\n");
++
+ 				block->handler = NULL;
+ 				block->handler_data = NULL;
+-				if ((wmi_status != AE_OK) ||
+-				    ((wmi_status == AE_OK) &&
+-				     (status == AE_NOT_EXIST)))
+-					status = wmi_status;
++
++				status = AE_OK;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
+index dc87965f81641..1062939c32645 100644
+--- a/drivers/rpmsg/virtio_rpmsg_bus.c
++++ b/drivers/rpmsg/virtio_rpmsg_bus.c
+@@ -378,6 +378,7 @@ static void virtio_rpmsg_release_device(struct device *dev)
+ 	struct rpmsg_device *rpdev = to_rpmsg_device(dev);
+ 	struct virtio_rpmsg_channel *vch = to_virtio_rpmsg_channel(rpdev);
+ 
++	kfree(rpdev->driver_override);
+ 	kfree(vch);
+ }
+ 
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 228fb2d11c709..7d99cd2c37a0b 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -231,7 +231,7 @@ static int cmos_read_time(struct device *dev, struct rtc_time *t)
+ 	if (!pm_trace_rtc_valid())
+ 		return -EIO;
+ 
+-	ret = mc146818_get_time(t);
++	ret = mc146818_get_time(t, 1000);
+ 	if (ret < 0) {
+ 		dev_err_ratelimited(dev, "unable to read current time\n");
+ 		return ret;
+@@ -292,7 +292,7 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 
+ 	/* This not only a rtc_op, but also called directly */
+ 	if (!is_valid_irq(cmos->irq))
+-		return -EIO;
++		return -ETIMEDOUT;
+ 
+ 	/* Basic alarms only support hour, minute, and seconds fields.
+ 	 * Some also support day and month, for alarms up to a year in
+@@ -307,7 +307,7 @@ static int cmos_read_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	 *
+ 	 * Use the mc146818_avoid_UIP() function to avoid this.
+ 	 */
+-	if (!mc146818_avoid_UIP(cmos_read_alarm_callback, &p))
++	if (!mc146818_avoid_UIP(cmos_read_alarm_callback, 10, &p))
+ 		return -EIO;
+ 
+ 	if (!(p.rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+@@ -556,8 +556,8 @@ static int cmos_set_alarm(struct device *dev, struct rtc_wkalrm *t)
+ 	 *
+ 	 * Use mc146818_avoid_UIP() to avoid this.
+ 	 */
+-	if (!mc146818_avoid_UIP(cmos_set_alarm_callback, &p))
+-		return -EIO;
++	if (!mc146818_avoid_UIP(cmos_set_alarm_callback, 10, &p))
++		return -ETIMEDOUT;
+ 
+ 	cmos->alarm_expires = rtc_tm_to_time64(&t->time);
+ 
+@@ -818,18 +818,24 @@ static void rtc_wake_off(struct device *dev)
+ }
+ 
+ #ifdef CONFIG_X86
+-/* Enable use_acpi_alarm mode for Intel platforms no earlier than 2015 */
+ static void use_acpi_alarm_quirks(void)
+ {
+-	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++	switch (boot_cpu_data.x86_vendor) {
++	case X86_VENDOR_INTEL:
++		if (dmi_get_bios_year() < 2015)
++			return;
++		break;
++	case X86_VENDOR_AMD:
++	case X86_VENDOR_HYGON:
++		if (dmi_get_bios_year() < 2021)
++			return;
++		break;
++	default:
+ 		return;
+-
++	}
+ 	if (!is_hpet_enabled())
+ 		return;
+ 
+-	if (dmi_get_bios_year() < 2015)
+-		return;
+-
+ 	use_acpi_alarm = true;
+ }
+ #else
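The rewritten quirk is now a per-vendor BIOS-year cutoff: Intel from 2015, AMD and Hygon from 2021, all other vendors never, and HPET must still be enabled in every case. The decision logic distilled into a standalone predicate (hpet_enabled stands in for is_hpet_enabled()):

#include <stdio.h>
#include <stdbool.h>

enum vendor { INTEL, AMD, HYGON, OTHER };

static bool use_acpi_alarm(enum vendor v, int bios_year, bool hpet_enabled)
{
	switch (v) {
	case INTEL:
		if (bios_year < 2015)
			return false;
		break;
	case AMD:
	case HYGON:
		if (bios_year < 2021)
			return false;
		break;
	default:
		return false;
	}
	return hpet_enabled;
}

int main(void)
{
	printf("AMD 2022 + HPET: %d\n", use_acpi_alarm(AMD, 2022, true));	/* 1 */
	printf("AMD 2019 + HPET: %d\n", use_acpi_alarm(AMD, 2019, true));	/* 0 */
	printf("Intel 2016 + HPET: %d\n", use_acpi_alarm(INTEL, 2016, true));	/* 1 */
	return 0;
}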
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index f1c09f1db044c..651bf3c279c74 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -8,26 +8,31 @@
+ #include <linux/acpi.h>
+ #endif
+ 
++#define UIP_RECHECK_DELAY		100	/* usec */
++#define UIP_RECHECK_DELAY_MS		(USEC_PER_MSEC / UIP_RECHECK_DELAY)
++#define UIP_RECHECK_LOOPS_MS(x)		(x / UIP_RECHECK_DELAY_MS)
++
+ /*
+  * Execute a function while the UIP (Update-in-progress) bit of the RTC is
+- * unset.
++ * unset. The timeout is configurable by the caller in ms.
+  *
+  * Warning: callback may be executed more than once.
+  */
+ bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
++			int timeout,
+ 			void *param)
+ {
+ 	int i;
+ 	unsigned long flags;
+ 	unsigned char seconds;
+ 
+-	for (i = 0; i < 100; i++) {
++	for (i = 0; UIP_RECHECK_LOOPS_MS(i) < timeout; i++) {
+ 		spin_lock_irqsave(&rtc_lock, flags);
+ 
+ 		/*
+ 		 * Check whether there is an update in progress during which the
+ 		 * readout is unspecified. The maximum update time is ~2ms. Poll
+-		 * every 100 usec for completion.
++		 * for completion.
+ 		 *
+ 		 * Store the second value before checking UIP so a long lasting
+ 		 * NMI which happens to hit after the UIP check cannot make
+@@ -37,7 +42,7 @@ bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
+ 
+ 		if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) {
+ 			spin_unlock_irqrestore(&rtc_lock, flags);
+-			udelay(100);
++			udelay(UIP_RECHECK_DELAY);
+ 			continue;
+ 		}
+ 
+@@ -56,7 +61,7 @@ bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
+ 		 */
+ 		if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) {
+ 			spin_unlock_irqrestore(&rtc_lock, flags);
+-			udelay(100);
++			udelay(UIP_RECHECK_DELAY);
+ 			continue;
+ 		}
+ 
+@@ -72,6 +77,10 @@ bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
+ 		}
+ 		spin_unlock_irqrestore(&rtc_lock, flags);
+ 
++		if (UIP_RECHECK_LOOPS_MS(i) >= 100)
++			pr_warn("Reading current time from RTC took around %li ms\n",
++				UIP_RECHECK_LOOPS_MS(i));
++
+ 		return true;
+ 	}
+ 	return false;
+@@ -84,7 +93,7 @@ EXPORT_SYMBOL_GPL(mc146818_avoid_UIP);
+  */
+ bool mc146818_does_rtc_work(void)
+ {
+-	return mc146818_avoid_UIP(NULL, NULL);
++	return mc146818_avoid_UIP(NULL, 1000, NULL);
+ }
+ EXPORT_SYMBOL_GPL(mc146818_does_rtc_work);
+ 
+@@ -130,15 +139,27 @@ static void mc146818_get_time_callback(unsigned char seconds, void *param_in)
+ 	p->ctrl = CMOS_READ(RTC_CONTROL);
+ }
+ 
+-int mc146818_get_time(struct rtc_time *time)
++/**
++ * mc146818_get_time - Get the current time from the RTC
++ * @time: pointer to struct rtc_time to store the current time
++ * @timeout: timeout value in ms
++ *
++ * This function reads the current time from the RTC and stores it in the
++ * provided struct rtc_time. The timeout parameter specifies the maximum
++ * time to wait for the RTC to become ready.
++ *
++ * Return: 0 on success, -ETIMEDOUT if the RTC did not become ready within
++ * the specified timeout, or another error code if an error occurred.
++ */
++int mc146818_get_time(struct rtc_time *time, int timeout)
+ {
+ 	struct mc146818_get_time_callback_param p = {
+ 		.time = time
+ 	};
+ 
+-	if (!mc146818_avoid_UIP(mc146818_get_time_callback, &p)) {
++	if (!mc146818_avoid_UIP(mc146818_get_time_callback, timeout, &p)) {
+ 		memset(time, 0, sizeof(*time));
+-		return -EIO;
++		return -ETIMEDOUT;
+ 	}
+ 
+ 	if (!(p.ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
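With UIP_RECHECK_DELAY at 100 usec, UIP_RECHECK_DELAY_MS is 10 iterations per millisecond, so UIP_RECHECK_LOOPS_MS(i) converts loop iterations back into elapsed milliseconds, and the loop condition UIP_RECHECK_LOOPS_MS(i) < timeout caps the busy-wait at the caller's budget: 1000 ms for time reads, 10 ms for the alarm paths. The conversion in isolation:

#include <stdio.h>

#define USEC_PER_MSEC		1000
#define UIP_RECHECK_DELAY	100	/* usec per retry */
#define UIP_RECHECK_DELAY_MS	(USEC_PER_MSEC / UIP_RECHECK_DELAY)
#define UIP_RECHECK_LOOPS_MS(x)	((x) / UIP_RECHECK_DELAY_MS)

int main(void)
{
	int timeout_ms = 10;	/* e.g. the alarm paths' budget */
	int i;

	for (i = 0; UIP_RECHECK_LOOPS_MS(i) < timeout_ms; i++)
		;	/* each iteration stands for one udelay(100) retry */

	printf("gave up after %d retries = %d ms\n", i, UIP_RECHECK_LOOPS_MS(i));
	return 0;
}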
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index 4db538a551925..88f41f95cc94d 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -32,7 +32,8 @@
+ 
+ #define AP_RESET_INTERVAL		20	/* Reset sleep interval (20ms)		*/
+ 
+-static int vfio_ap_mdev_reset_queues(struct ap_queue_table *qtable);
++static int vfio_ap_mdev_reset_queues(struct ap_matrix_mdev *matrix_mdev);
++static int vfio_ap_mdev_reset_qlist(struct list_head *qlist);
+ static struct vfio_ap_queue *vfio_ap_find_queue(int apqn);
+ static const struct vfio_device_ops vfio_ap_matrix_dev_ops;
+ static void vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q);
+@@ -457,6 +458,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
+ 		VFIO_AP_DBF_WARN("%s: gisc registration failed: nisc=%d, isc=%d, apqn=%#04x\n",
+ 				 __func__, nisc, isc, q->apqn);
+ 
++		vfio_unpin_pages(&q->matrix_mdev->vdev, nib, 1);
+ 		status.response_code = AP_RESPONSE_INVALID_GISA;
+ 		return status;
+ 	}
+@@ -661,17 +663,23 @@ static bool vfio_ap_mdev_filter_cdoms(struct ap_matrix_mdev *matrix_mdev)
+  *				device driver.
+  *
+  * @matrix_mdev: the matrix mdev whose matrix is to be filtered.
++ * @apm_filtered: a 256-bit bitmap for storing the APIDs filtered from the
++ *		  guest's AP configuration that are still in the host's AP
++ *		  configuration.
+  *
+  * Note: If an APQN referencing a queue device that is not bound to the vfio_ap
+  *	 driver, its APID will be filtered from the guest's APCB. The matrix
+  *	 structure precludes filtering an individual APQN, so its APID will be
+- *	 filtered.
++ *	 filtered. Consequently, all queues associated with the adapter that
++ *	 are in the host's AP configuration must be reset. If queues are
++ *	 subsequently made available again to the guest, they should re-appear
++ *	 in a reset state.
+  *
+  * Return: a boolean value indicating whether the KVM guest's APCB was changed
+  *	   by the filtering or not.
+  */
+-static bool vfio_ap_mdev_filter_matrix(unsigned long *apm, unsigned long *aqm,
+-				       struct ap_matrix_mdev *matrix_mdev)
++static bool vfio_ap_mdev_filter_matrix(struct ap_matrix_mdev *matrix_mdev,
++				       unsigned long *apm_filtered)
+ {
+ 	unsigned long apid, apqi, apqn;
+ 	DECLARE_BITMAP(prev_shadow_apm, AP_DEVICES);
+@@ -681,6 +689,7 @@ static bool vfio_ap_mdev_filter_matrix(unsigned long *apm, unsigned long *aqm,
+ 	bitmap_copy(prev_shadow_apm, matrix_mdev->shadow_apcb.apm, AP_DEVICES);
+ 	bitmap_copy(prev_shadow_aqm, matrix_mdev->shadow_apcb.aqm, AP_DOMAINS);
+ 	vfio_ap_matrix_init(&matrix_dev->info, &matrix_mdev->shadow_apcb);
++	bitmap_clear(apm_filtered, 0, AP_DEVICES);
+ 
+ 	/*
+ 	 * Copy the adapters, domains and control domains to the shadow_apcb
+@@ -692,8 +701,9 @@ static bool vfio_ap_mdev_filter_matrix(unsigned long *apm, unsigned long *aqm,
+ 	bitmap_and(matrix_mdev->shadow_apcb.aqm, matrix_mdev->matrix.aqm,
+ 		   (unsigned long *)matrix_dev->info.aqm, AP_DOMAINS);
+ 
+-	for_each_set_bit_inv(apid, apm, AP_DEVICES) {
+-		for_each_set_bit_inv(apqi, aqm, AP_DOMAINS) {
++	for_each_set_bit_inv(apid, matrix_mdev->shadow_apcb.apm, AP_DEVICES) {
++		for_each_set_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm,
++				     AP_DOMAINS) {
+ 			/*
+ 			 * If the APQN is not bound to the vfio_ap device
+ 			 * driver, then we can't assign it to the guest's
+@@ -705,8 +715,16 @@ static bool vfio_ap_mdev_filter_matrix(unsigned long *apm, unsigned long *aqm,
+ 			apqn = AP_MKQID(apid, apqi);
+ 			q = vfio_ap_mdev_get_queue(matrix_mdev, apqn);
+ 			if (!q || q->reset_status.response_code) {
+-				clear_bit_inv(apid,
+-					      matrix_mdev->shadow_apcb.apm);
++				clear_bit_inv(apid, matrix_mdev->shadow_apcb.apm);
++
++				/*
++				 * If the adapter was previously plugged into
++				 * the guest, let's let the caller know that
++				 * the APID was filtered.
++				 */
++				if (test_bit_inv(apid, prev_shadow_apm))
++					set_bit_inv(apid, apm_filtered);
++
+ 				break;
+ 			}
+ 		}
+@@ -808,7 +826,7 @@ static void vfio_ap_mdev_remove(struct mdev_device *mdev)
+ 
+ 	mutex_lock(&matrix_dev->guests_lock);
+ 	mutex_lock(&matrix_dev->mdevs_lock);
+-	vfio_ap_mdev_reset_queues(&matrix_mdev->qtable);
++	vfio_ap_mdev_reset_queues(matrix_mdev);
+ 	vfio_ap_mdev_unlink_fr_queues(matrix_mdev);
+ 	list_del(&matrix_mdev->node);
+ 	mutex_unlock(&matrix_dev->mdevs_lock);
+@@ -918,6 +936,47 @@ static void vfio_ap_mdev_link_adapter(struct ap_matrix_mdev *matrix_mdev,
+ 				       AP_MKQID(apid, apqi));
+ }
+ 
++static void collect_queues_to_reset(struct ap_matrix_mdev *matrix_mdev,
++				    unsigned long apid,
++				    struct list_head *qlist)
++{
++	struct vfio_ap_queue *q;
++	unsigned long  apqi;
++
++	for_each_set_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm, AP_DOMAINS) {
++		q = vfio_ap_mdev_get_queue(matrix_mdev, AP_MKQID(apid, apqi));
++		if (q)
++			list_add_tail(&q->reset_qnode, qlist);
++	}
++}
++
++static void reset_queues_for_apid(struct ap_matrix_mdev *matrix_mdev,
++				  unsigned long apid)
++{
++	struct list_head qlist;
++
++	INIT_LIST_HEAD(&qlist);
++	collect_queues_to_reset(matrix_mdev, apid, &qlist);
++	vfio_ap_mdev_reset_qlist(&qlist);
++}
++
++static int reset_queues_for_apids(struct ap_matrix_mdev *matrix_mdev,
++				  unsigned long *apm_reset)
++{
++	struct list_head qlist;
++	unsigned long apid;
++
++	if (bitmap_empty(apm_reset, AP_DEVICES))
++		return 0;
++
++	INIT_LIST_HEAD(&qlist);
++
++	for_each_set_bit_inv(apid, apm_reset, AP_DEVICES)
++		collect_queues_to_reset(matrix_mdev, apid, &qlist);
++
++	return vfio_ap_mdev_reset_qlist(&qlist);
++}
++
+ /**
+  * assign_adapter_store - parses the APID from @buf and sets the
+  * corresponding bit in the mediated matrix device's APM
+@@ -958,7 +1017,7 @@ static ssize_t assign_adapter_store(struct device *dev,
+ {
+ 	int ret;
+ 	unsigned long apid;
+-	DECLARE_BITMAP(apm_delta, AP_DEVICES);
++	DECLARE_BITMAP(apm_filtered, AP_DEVICES);
+ 	struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
+ 
+ 	mutex_lock(&ap_perms_mutex);
+@@ -987,12 +1046,11 @@ static ssize_t assign_adapter_store(struct device *dev,
+ 	}
+ 
+ 	vfio_ap_mdev_link_adapter(matrix_mdev, apid);
+-	memset(apm_delta, 0, sizeof(apm_delta));
+-	set_bit_inv(apid, apm_delta);
+ 
+-	if (vfio_ap_mdev_filter_matrix(apm_delta,
+-				       matrix_mdev->matrix.aqm, matrix_mdev))
++	if (vfio_ap_mdev_filter_matrix(matrix_mdev, apm_filtered)) {
+ 		vfio_ap_mdev_update_guest_apcb(matrix_mdev);
++		reset_queues_for_apids(matrix_mdev, apm_filtered);
++	}
+ 
+ 	ret = count;
+ done:
+@@ -1023,11 +1081,12 @@ static struct vfio_ap_queue
+  *				 adapter was assigned.
+  * @matrix_mdev: the matrix mediated device to which the adapter was assigned.
+  * @apid: the APID of the unassigned adapter.
+- * @qtable: table for storing queues associated with unassigned adapter.
++ * @qlist: list for storing queues associated with unassigned adapter that
++ *	   need to be reset.
+  */
+ static void vfio_ap_mdev_unlink_adapter(struct ap_matrix_mdev *matrix_mdev,
+ 					unsigned long apid,
+-					struct ap_queue_table *qtable)
++					struct list_head *qlist)
+ {
+ 	unsigned long apqi;
+ 	struct vfio_ap_queue *q;
+@@ -1035,11 +1094,10 @@ static void vfio_ap_mdev_unlink_adapter(struct ap_matrix_mdev *matrix_mdev,
+ 	for_each_set_bit_inv(apqi, matrix_mdev->matrix.aqm, AP_DOMAINS) {
+ 		q = vfio_ap_unlink_apqn_fr_mdev(matrix_mdev, apid, apqi);
+ 
+-		if (q && qtable) {
++		if (q && qlist) {
+ 			if (test_bit_inv(apid, matrix_mdev->shadow_apcb.apm) &&
+ 			    test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm))
+-				hash_add(qtable->queues, &q->mdev_qnode,
+-					 q->apqn);
++				list_add_tail(&q->reset_qnode, qlist);
+ 		}
+ 	}
+ }
+@@ -1047,26 +1105,23 @@ static void vfio_ap_mdev_unlink_adapter(struct ap_matrix_mdev *matrix_mdev,
+ static void vfio_ap_mdev_hot_unplug_adapter(struct ap_matrix_mdev *matrix_mdev,
+ 					    unsigned long apid)
+ {
+-	int loop_cursor;
+-	struct vfio_ap_queue *q;
+-	struct ap_queue_table *qtable = kzalloc(sizeof(*qtable), GFP_KERNEL);
++	struct vfio_ap_queue *q, *tmpq;
++	struct list_head qlist;
+ 
+-	hash_init(qtable->queues);
+-	vfio_ap_mdev_unlink_adapter(matrix_mdev, apid, qtable);
++	INIT_LIST_HEAD(&qlist);
++	vfio_ap_mdev_unlink_adapter(matrix_mdev, apid, &qlist);
+ 
+ 	if (test_bit_inv(apid, matrix_mdev->shadow_apcb.apm)) {
+ 		clear_bit_inv(apid, matrix_mdev->shadow_apcb.apm);
+ 		vfio_ap_mdev_update_guest_apcb(matrix_mdev);
+ 	}
+ 
+-	vfio_ap_mdev_reset_queues(qtable);
++	vfio_ap_mdev_reset_qlist(&qlist);
+ 
+-	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
++	list_for_each_entry_safe(q, tmpq, &qlist, reset_qnode) {
+ 		vfio_ap_unlink_mdev_fr_queue(q);
+-		hash_del(&q->mdev_qnode);
++		list_del(&q->reset_qnode);
+ 	}
+-
+-	kfree(qtable);
+ }
+ 
+ /**
+@@ -1167,7 +1222,7 @@ static ssize_t assign_domain_store(struct device *dev,
+ {
+ 	int ret;
+ 	unsigned long apqi;
+-	DECLARE_BITMAP(aqm_delta, AP_DOMAINS);
++	DECLARE_BITMAP(apm_filtered, AP_DEVICES);
+ 	struct ap_matrix_mdev *matrix_mdev = dev_get_drvdata(dev);
+ 
+ 	mutex_lock(&ap_perms_mutex);
+@@ -1196,12 +1251,11 @@ static ssize_t assign_domain_store(struct device *dev,
+ 	}
+ 
+ 	vfio_ap_mdev_link_domain(matrix_mdev, apqi);
+-	memset(aqm_delta, 0, sizeof(aqm_delta));
+-	set_bit_inv(apqi, aqm_delta);
+ 
+-	if (vfio_ap_mdev_filter_matrix(matrix_mdev->matrix.apm, aqm_delta,
+-				       matrix_mdev))
++	if (vfio_ap_mdev_filter_matrix(matrix_mdev, apm_filtered)) {
+ 		vfio_ap_mdev_update_guest_apcb(matrix_mdev);
++		reset_queues_for_apids(matrix_mdev, apm_filtered);
++	}
+ 
+ 	ret = count;
+ done:
+@@ -1214,7 +1268,7 @@ static DEVICE_ATTR_WO(assign_domain);
+ 
+ static void vfio_ap_mdev_unlink_domain(struct ap_matrix_mdev *matrix_mdev,
+ 				       unsigned long apqi,
+-				       struct ap_queue_table *qtable)
++				       struct list_head *qlist)
+ {
+ 	unsigned long apid;
+ 	struct vfio_ap_queue *q;
+@@ -1222,11 +1276,10 @@ static void vfio_ap_mdev_unlink_domain(struct ap_matrix_mdev *matrix_mdev,
+ 	for_each_set_bit_inv(apid, matrix_mdev->matrix.apm, AP_DEVICES) {
+ 		q = vfio_ap_unlink_apqn_fr_mdev(matrix_mdev, apid, apqi);
+ 
+-		if (q && qtable) {
++		if (q && qlist) {
+ 			if (test_bit_inv(apid, matrix_mdev->shadow_apcb.apm) &&
+ 			    test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm))
+-				hash_add(qtable->queues, &q->mdev_qnode,
+-					 q->apqn);
++				list_add_tail(&q->reset_qnode, qlist);
+ 		}
+ 	}
+ }
+@@ -1234,26 +1287,23 @@ static void vfio_ap_mdev_unlink_domain(struct ap_matrix_mdev *matrix_mdev,
+ static void vfio_ap_mdev_hot_unplug_domain(struct ap_matrix_mdev *matrix_mdev,
+ 					   unsigned long apqi)
+ {
+-	int loop_cursor;
+-	struct vfio_ap_queue *q;
+-	struct ap_queue_table *qtable = kzalloc(sizeof(*qtable), GFP_KERNEL);
++	struct vfio_ap_queue *q, *tmpq;
++	struct list_head qlist;
+ 
+-	hash_init(qtable->queues);
+-	vfio_ap_mdev_unlink_domain(matrix_mdev, apqi, qtable);
++	INIT_LIST_HEAD(&qlist);
++	vfio_ap_mdev_unlink_domain(matrix_mdev, apqi, &qlist);
+ 
+ 	if (test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm)) {
+ 		clear_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm);
+ 		vfio_ap_mdev_update_guest_apcb(matrix_mdev);
+ 	}
+ 
+-	vfio_ap_mdev_reset_queues(qtable);
++	vfio_ap_mdev_reset_qlist(&qlist);
+ 
+-	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
++	list_for_each_entry_safe(q, tmpq, &qlist, reset_qnode) {
+ 		vfio_ap_unlink_mdev_fr_queue(q);
+-		hash_del(&q->mdev_qnode);
++		list_del(&q->reset_qnode);
+ 	}
+-
+-	kfree(qtable);
+ }
+ 
+ /**
+@@ -1608,7 +1658,7 @@ static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
+ 		get_update_locks_for_kvm(kvm);
+ 
+ 		kvm_arch_crypto_clear_masks(kvm);
+-		vfio_ap_mdev_reset_queues(&matrix_mdev->qtable);
++		vfio_ap_mdev_reset_queues(matrix_mdev);
+ 		kvm_put_kvm(kvm);
+ 		matrix_mdev->kvm = NULL;
+ 
+@@ -1744,15 +1794,33 @@ static void vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q)
+ 	}
+ }
+ 
+-static int vfio_ap_mdev_reset_queues(struct ap_queue_table *qtable)
++static int vfio_ap_mdev_reset_queues(struct ap_matrix_mdev *matrix_mdev)
+ {
+ 	int ret = 0, loop_cursor;
+ 	struct vfio_ap_queue *q;
+ 
+-	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode)
++	hash_for_each(matrix_mdev->qtable.queues, loop_cursor, q, mdev_qnode)
+ 		vfio_ap_mdev_reset_queue(q);
+ 
+-	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
++	hash_for_each(matrix_mdev->qtable.queues, loop_cursor, q, mdev_qnode) {
++		flush_work(&q->reset_work);
++
++		if (q->reset_status.response_code)
++			ret = -EIO;
++	}
++
++	return ret;
++}
++
++static int vfio_ap_mdev_reset_qlist(struct list_head *qlist)
++{
++	int ret = 0;
++	struct vfio_ap_queue *q;
++
++	list_for_each_entry(q, qlist, reset_qnode)
++		vfio_ap_mdev_reset_queue(q);
++
++	list_for_each_entry(q, qlist, reset_qnode) {
+ 		flush_work(&q->reset_work);
+ 
+ 		if (q->reset_status.response_code)
+@@ -1938,7 +2006,7 @@ static ssize_t vfio_ap_mdev_ioctl(struct vfio_device *vdev,
+ 		ret = vfio_ap_mdev_get_device_info(arg);
+ 		break;
+ 	case VFIO_DEVICE_RESET:
+-		ret = vfio_ap_mdev_reset_queues(&matrix_mdev->qtable);
++		ret = vfio_ap_mdev_reset_queues(matrix_mdev);
+ 		break;
+ 	case VFIO_DEVICE_GET_IRQ_INFO:
+ 			ret = vfio_ap_get_irq_info(arg);
+@@ -2070,6 +2138,7 @@ int vfio_ap_mdev_probe_queue(struct ap_device *apdev)
+ {
+ 	int ret;
+ 	struct vfio_ap_queue *q;
++	DECLARE_BITMAP(apm_filtered, AP_DEVICES);
+ 	struct ap_matrix_mdev *matrix_mdev;
+ 
+ 	ret = sysfs_create_group(&apdev->device.kobj, &vfio_queue_attr_group);
+@@ -2091,15 +2160,28 @@ int vfio_ap_mdev_probe_queue(struct ap_device *apdev)
+ 	if (matrix_mdev) {
+ 		vfio_ap_mdev_link_queue(matrix_mdev, q);
+ 
+-		if (vfio_ap_mdev_filter_matrix(matrix_mdev->matrix.apm,
+-					       matrix_mdev->matrix.aqm,
+-					       matrix_mdev))
++		/*
++		 * If we're in the process of handling the addition of adapters or
++		 * domains to the host's AP configuration, then let the
++		 * vfio_ap device driver's on_scan_complete callback filter the
++		 * matrix and update the guest's AP configuration after all of
++		 * the new queue devices are probed.
++		 */
++		if (!bitmap_empty(matrix_mdev->apm_add, AP_DEVICES) ||
++		    !bitmap_empty(matrix_mdev->aqm_add, AP_DOMAINS))
++			goto done;
++
++		if (vfio_ap_mdev_filter_matrix(matrix_mdev, apm_filtered)) {
+ 			vfio_ap_mdev_update_guest_apcb(matrix_mdev);
++			reset_queues_for_apids(matrix_mdev, apm_filtered);
++		}
+ 	}
++
++done:
+ 	dev_set_drvdata(&apdev->device, q);
+ 	release_update_locks_for_mdev(matrix_mdev);
+ 
+-	return 0;
++	return ret;
+ 
+ err_remove_group:
+ 	sysfs_remove_group(&apdev->device.kobj, &vfio_queue_attr_group);
+@@ -2116,26 +2198,40 @@ void vfio_ap_mdev_remove_queue(struct ap_device *apdev)
+ 	q = dev_get_drvdata(&apdev->device);
+ 	get_update_locks_for_queue(q);
+ 	matrix_mdev = q->matrix_mdev;
++	apid = AP_QID_CARD(q->apqn);
++	apqi = AP_QID_QUEUE(q->apqn);
+ 
+ 	if (matrix_mdev) {
+-		vfio_ap_unlink_queue_fr_mdev(q);
+-
+-		apid = AP_QID_CARD(q->apqn);
+-		apqi = AP_QID_QUEUE(q->apqn);
+-
+-		/*
+-		 * If the queue is assigned to the guest's APCB, then remove
+-		 * the adapter's APID from the APCB and hot it into the guest.
+-		 */
++		/* If the queue is assigned to the guest's AP configuration */
+ 		if (test_bit_inv(apid, matrix_mdev->shadow_apcb.apm) &&
+ 		    test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm)) {
++			/*
++			 * Since the queues are defined via a matrix of adapters
++			 * and domains, it is not possible to hot unplug a
++			 * single queue; so, let's unplug the adapter.
++			 */
+ 			clear_bit_inv(apid, matrix_mdev->shadow_apcb.apm);
+ 			vfio_ap_mdev_update_guest_apcb(matrix_mdev);
++			reset_queues_for_apid(matrix_mdev, apid);
++			goto done;
+ 		}
+ 	}
+ 
+-	vfio_ap_mdev_reset_queue(q);
+-	flush_work(&q->reset_work);
++	/*
++	 * If the queue is not in the host's AP configuration, then resetting
++	 * it will fail with response code 01 (APQN not valid); so, let's make
++	 * sure it is in the host's config.
++	 */
++	if (test_bit_inv(apid, (unsigned long *)matrix_dev->info.apm) &&
++	    test_bit_inv(apqi, (unsigned long *)matrix_dev->info.aqm)) {
++		vfio_ap_mdev_reset_queue(q);
++		flush_work(&q->reset_work);
++	}
++
++done:
++	if (matrix_mdev)
++		vfio_ap_unlink_queue_fr_mdev(q);
++
+ 	dev_set_drvdata(&apdev->device, NULL);
+ 	kfree(q);
+ 	release_update_locks_for_mdev(matrix_mdev);
+@@ -2443,39 +2539,30 @@ void vfio_ap_on_cfg_changed(struct ap_config_info *cur_cfg_info,
+ 
+ static void vfio_ap_mdev_hot_plug_cfg(struct ap_matrix_mdev *matrix_mdev)
+ {
+-	bool do_hotplug = false;
+-	int filter_domains = 0;
+-	int filter_adapters = 0;
+-	DECLARE_BITMAP(apm, AP_DEVICES);
+-	DECLARE_BITMAP(aqm, AP_DOMAINS);
++	DECLARE_BITMAP(apm_filtered, AP_DEVICES);
++	bool filter_domains, filter_adapters, filter_cdoms, do_hotplug = false;
+ 
+ 	mutex_lock(&matrix_mdev->kvm->lock);
+ 	mutex_lock(&matrix_dev->mdevs_lock);
+ 
+-	filter_adapters = bitmap_and(apm, matrix_mdev->matrix.apm,
+-				     matrix_mdev->apm_add, AP_DEVICES);
+-	filter_domains = bitmap_and(aqm, matrix_mdev->matrix.aqm,
+-				    matrix_mdev->aqm_add, AP_DOMAINS);
+-
+-	if (filter_adapters && filter_domains)
+-		do_hotplug |= vfio_ap_mdev_filter_matrix(apm, aqm, matrix_mdev);
+-	else if (filter_adapters)
+-		do_hotplug |=
+-			vfio_ap_mdev_filter_matrix(apm,
+-						   matrix_mdev->shadow_apcb.aqm,
+-						   matrix_mdev);
+-	else
+-		do_hotplug |=
+-			vfio_ap_mdev_filter_matrix(matrix_mdev->shadow_apcb.apm,
+-						   aqm, matrix_mdev);
++	filter_adapters = bitmap_intersects(matrix_mdev->matrix.apm,
++					    matrix_mdev->apm_add, AP_DEVICES);
++	filter_domains = bitmap_intersects(matrix_mdev->matrix.aqm,
++					   matrix_mdev->aqm_add, AP_DOMAINS);
++	filter_cdoms = bitmap_intersects(matrix_mdev->matrix.adm,
++					 matrix_mdev->adm_add, AP_DOMAINS);
+ 
+-	if (bitmap_intersects(matrix_mdev->matrix.adm, matrix_mdev->adm_add,
+-			      AP_DOMAINS))
++	if (filter_adapters || filter_domains)
++		do_hotplug = vfio_ap_mdev_filter_matrix(matrix_mdev, apm_filtered);
++
++	if (filter_cdoms)
+ 		do_hotplug |= vfio_ap_mdev_filter_cdoms(matrix_mdev);
+ 
+ 	if (do_hotplug)
+ 		vfio_ap_mdev_update_guest_apcb(matrix_mdev);
+ 
++	reset_queues_for_apids(matrix_mdev, apm_filtered);
++
+ 	mutex_unlock(&matrix_dev->mdevs_lock);
+ 	mutex_unlock(&matrix_mdev->kvm->lock);
+ }
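
The vfio_ap changes above hang together around one idea: queues that need a reset are now collected on a dedicated list (the reset_qnode member added in vfio_ap_private.h below) instead of always being driven through the per-mdev hashtable, and both vfio_ap_mdev_reset_queues() and vfio_ap_mdev_reset_qlist() share the same two-pass shape: start every reset first, only then wait, so the resets proceed in parallel. A minimal standalone sketch of that shape, with hypothetical names (struct item, do_reset) rather than the driver's:

#include <linux/list.h>
#include <linux/workqueue.h>

struct item {
        struct list_head node;
        struct work_struct work;        /* INIT_WORK(&it->work, do_reset) at setup */
        int status;                     /* outcome recorded by the handler */
};

static void do_reset(struct work_struct *work)
{
        struct item *it = container_of(work, struct item, work);

        it->status = 0;                 /* perform the reset, record the result */
}

/* Pass 1: start every reset so they run in parallel. */
static void start_all(struct list_head *items)
{
        struct item *it;

        list_for_each_entry(it, items, node)
                schedule_work(&it->work);
}

/* Pass 2: only now wait, accumulating the worst status. */
static int wait_all(struct list_head *items)
{
        struct item *it;
        int ret = 0;

        list_for_each_entry(it, items, node) {
                flush_work(&it->work);
                if (it->status)
                        ret = -EIO;
        }
        return ret;
}
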
+diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h
+index 88aff8b81f2fc..98d37aa27044a 100644
+--- a/drivers/s390/crypto/vfio_ap_private.h
++++ b/drivers/s390/crypto/vfio_ap_private.h
+@@ -133,6 +133,8 @@ struct ap_matrix_mdev {
+  * @apqn: the APQN of the AP queue device
+  * @saved_isc: the guest ISC registered with the GIB interface
+  * @mdev_qnode: allows the vfio_ap_queue struct to be added to a hashtable
++ * @reset_qnode: allows the vfio_ap_queue struct to be added to a list of queues
++ *		 that need to be reset
+  * @reset_status: the status from the last reset of the queue
+  * @reset_work: work to wait for queue reset to complete
+  */
+@@ -143,6 +145,7 @@ struct vfio_ap_queue {
+ #define VFIO_AP_ISC_INVALID 0xff
+ 	unsigned char saved_isc;
+ 	struct hlist_node mdev_qnode;
++	struct list_head reset_qnode;
+ 	struct ap_queue_status reset_status;
+ 	struct work_struct reset_work;
+ };
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 1223d34c04da3..d983f4a0e9f14 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -2196,15 +2196,18 @@ void scsi_eh_flush_done_q(struct list_head *done_q)
+ 	struct scsi_cmnd *scmd, *next;
+ 
+ 	list_for_each_entry_safe(scmd, next, done_q, eh_entry) {
++		struct scsi_device *sdev = scmd->device;
++
+ 		list_del_init(&scmd->eh_entry);
+-		if (scsi_device_online(scmd->device) &&
+-		    !scsi_noretry_cmd(scmd) && scsi_cmd_retry_allowed(scmd) &&
+-			scsi_eh_should_retry_cmd(scmd)) {
++		if (scsi_device_online(sdev) && !scsi_noretry_cmd(scmd) &&
++		    scsi_cmd_retry_allowed(scmd) &&
++		    scsi_eh_should_retry_cmd(scmd)) {
+ 			SCSI_LOG_ERROR_RECOVERY(3,
+ 				scmd_printk(KERN_INFO, scmd,
+ 					     "%s: flush retry cmd\n",
+ 					     current->comm));
+ 				scsi_queue_insert(scmd, SCSI_MLQUEUE_EH_RETRY);
++				blk_mq_kick_requeue_list(sdev->request_queue);
+ 		} else {
+ 			/*
+ 			 * If just we got sense for the device (called
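
The functional change in the scsi_eh_flush_done_q() hunk is the added blk_mq_kick_requeue_list() call: scsi_queue_insert() only parks the retried command on the block layer's requeue list, and nothing there guarantees the list is run promptly, so the retry could sit until unrelated I/O happened to run the queue. Roughly, the pattern is as follows (retry_allowed() is a hypothetical stand-in for the three checks in the hunk; sdev is cached up front as above):

list_for_each_entry_safe(scmd, next, done_q, eh_entry) {
        struct scsi_device *sdev = scmd->device;

        list_del_init(&scmd->eh_entry);
        if (retry_allowed(scmd)) {
                /* Park the command on the requeue list ... */
                scsi_queue_insert(scmd, SCSI_MLQUEUE_EH_RETRY);
                /* ... and run that list now rather than waiting. */
                blk_mq_kick_requeue_list(sdev->request_queue);
        }
}
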
+diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
+index 92ec76c03965b..2312152a44b3e 100644
+--- a/drivers/soc/fsl/qe/qmc.c
++++ b/drivers/soc/fsl/qe/qmc.c
+@@ -175,7 +175,7 @@ struct qmc_chan {
+ 	struct list_head list;
+ 	unsigned int id;
+ 	struct qmc *qmc;
+-	void *__iomem s_param;
++	void __iomem *s_param;
+ 	enum qmc_mode mode;
+ 	u64	tx_ts_mask;
+ 	u64	rx_ts_mask;
+@@ -203,9 +203,9 @@ struct qmc_chan {
+ struct qmc {
+ 	struct device *dev;
+ 	struct tsa_serial *tsa_serial;
+-	void *__iomem scc_regs;
+-	void *__iomem scc_pram;
+-	void *__iomem dpram;
++	void __iomem *scc_regs;
++	void __iomem *scc_pram;
++	void __iomem *dpram;
+ 	u16 scc_pram_offset;
+ 	cbd_t __iomem *bd_table;
+ 	dma_addr_t bd_dma_addr;
+@@ -218,37 +218,37 @@ struct qmc {
+ 	struct qmc_chan *chans[64];
+ };
+ 
+-static inline void qmc_write16(void *__iomem addr, u16 val)
++static inline void qmc_write16(void __iomem *addr, u16 val)
+ {
+ 	iowrite16be(val, addr);
+ }
+ 
+-static inline u16 qmc_read16(void *__iomem addr)
++static inline u16 qmc_read16(void __iomem *addr)
+ {
+ 	return ioread16be(addr);
+ }
+ 
+-static inline void qmc_setbits16(void *__iomem addr, u16 set)
++static inline void qmc_setbits16(void __iomem *addr, u16 set)
+ {
+ 	qmc_write16(addr, qmc_read16(addr) | set);
+ }
+ 
+-static inline void qmc_clrbits16(void *__iomem addr, u16 clr)
++static inline void qmc_clrbits16(void __iomem *addr, u16 clr)
+ {
+ 	qmc_write16(addr, qmc_read16(addr) & ~clr);
+ }
+ 
+-static inline void qmc_write32(void *__iomem addr, u32 val)
++static inline void qmc_write32(void __iomem *addr, u32 val)
+ {
+ 	iowrite32be(val, addr);
+ }
+ 
+-static inline u32 qmc_read32(void *__iomem addr)
++static inline u32 qmc_read32(void __iomem *addr)
+ {
+ 	return ioread32be(addr);
+ }
+ 
+-static inline void qmc_setbits32(void *__iomem addr, u32 set)
++static inline void qmc_setbits32(void __iomem *addr, u32 set)
+ {
+ 	qmc_write32(addr, qmc_read32(addr) | set);
+ }
+@@ -318,7 +318,7 @@ int qmc_chan_write_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
+ {
+ 	struct qmc_xfer_desc *xfer_desc;
+ 	unsigned long flags;
+-	cbd_t *__iomem bd;
++	cbd_t __iomem *bd;
+ 	u16 ctrl;
+ 	int ret;
+ 
+@@ -374,7 +374,7 @@ static void qmc_chan_write_done(struct qmc_chan *chan)
+ 	void (*complete)(void *context);
+ 	unsigned long flags;
+ 	void *context;
+-	cbd_t *__iomem bd;
++	cbd_t __iomem *bd;
+ 	u16 ctrl;
+ 
+ 	/*
+@@ -425,7 +425,7 @@ int qmc_chan_read_submit(struct qmc_chan *chan, dma_addr_t addr, size_t length,
+ {
+ 	struct qmc_xfer_desc *xfer_desc;
+ 	unsigned long flags;
+-	cbd_t *__iomem bd;
++	cbd_t __iomem *bd;
+ 	u16 ctrl;
+ 	int ret;
+ 
+@@ -488,7 +488,7 @@ static void qmc_chan_read_done(struct qmc_chan *chan)
+ 	void (*complete)(void *context, size_t size);
+ 	struct qmc_xfer_desc *xfer_desc;
+ 	unsigned long flags;
+-	cbd_t *__iomem bd;
++	cbd_t __iomem *bd;
+ 	void *context;
+ 	u16 datalen;
+ 	u16 ctrl;
+@@ -663,7 +663,7 @@ static void qmc_chan_reset_rx(struct qmc_chan *chan)
+ {
+ 	struct qmc_xfer_desc *xfer_desc;
+ 	unsigned long flags;
+-	cbd_t *__iomem bd;
++	cbd_t __iomem *bd;
+ 	u16 ctrl;
+ 
+ 	spin_lock_irqsave(&chan->rx_lock, flags);
+@@ -685,7 +685,6 @@ static void qmc_chan_reset_rx(struct qmc_chan *chan)
+ 		    qmc_read16(chan->s_param + QMC_SPE_RBASE));
+ 
+ 	chan->rx_pending = 0;
+-	chan->is_rx_stopped = false;
+ 
+ 	spin_unlock_irqrestore(&chan->rx_lock, flags);
+ }
+@@ -694,7 +693,7 @@ static void qmc_chan_reset_tx(struct qmc_chan *chan)
+ {
+ 	struct qmc_xfer_desc *xfer_desc;
+ 	unsigned long flags;
+-	cbd_t *__iomem bd;
++	cbd_t __iomem *bd;
+ 	u16 ctrl;
+ 
+ 	spin_lock_irqsave(&chan->tx_lock, flags);
+diff --git a/drivers/soc/fsl/qe/tsa.c b/drivers/soc/fsl/qe/tsa.c
+index 3f99813355900..6c5741cf5e9d2 100644
+--- a/drivers/soc/fsl/qe/tsa.c
++++ b/drivers/soc/fsl/qe/tsa.c
+@@ -98,9 +98,9 @@
+ #define TSA_SIRP	0x10
+ 
+ struct tsa_entries_area {
+-	void *__iomem entries_start;
+-	void *__iomem entries_next;
+-	void *__iomem last_entry;
++	void __iomem *entries_start;
++	void __iomem *entries_next;
++	void __iomem *last_entry;
+ };
+ 
+ struct tsa_tdm {
+@@ -117,8 +117,8 @@ struct tsa_tdm {
+ 
+ struct tsa {
+ 	struct device *dev;
+-	void *__iomem si_regs;
+-	void *__iomem si_ram;
++	void __iomem *si_regs;
++	void __iomem *si_ram;
+ 	resource_size_t si_ram_sz;
+ 	spinlock_t	lock;
+ 	int tdms; /* TSA_TDMx ORed */
+@@ -135,27 +135,27 @@ static inline struct tsa *tsa_serial_get_tsa(struct tsa_serial *tsa_serial)
+ 	return container_of(tsa_serial, struct tsa, serials[tsa_serial->id]);
+ }
+ 
+-static inline void tsa_write32(void *__iomem addr, u32 val)
++static inline void tsa_write32(void __iomem *addr, u32 val)
+ {
+ 	iowrite32be(val, addr);
+ }
+ 
+-static inline void tsa_write8(void *__iomem addr, u32 val)
++static inline void tsa_write8(void __iomem *addr, u32 val)
+ {
+ 	iowrite8(val, addr);
+ }
+ 
+-static inline u32 tsa_read32(void *__iomem addr)
++static inline u32 tsa_read32(void __iomem *addr)
+ {
+ 	return ioread32be(addr);
+ }
+ 
+-static inline void tsa_clrbits32(void *__iomem addr, u32 clr)
++static inline void tsa_clrbits32(void __iomem *addr, u32 clr)
+ {
+ 	tsa_write32(addr, tsa_read32(addr) & ~clr);
+ }
+ 
+-static inline void tsa_clrsetbits32(void *__iomem addr, u32 clr, u32 set)
++static inline void tsa_clrsetbits32(void __iomem *addr, u32 clr, u32 set)
+ {
+ 	tsa_write32(addr, (tsa_read32(addr) & ~clr) | set);
+ }
+@@ -313,7 +313,7 @@ static u32 tsa_serial_id2csel(struct tsa *tsa, u32 serial_id)
+ static int tsa_add_entry(struct tsa *tsa, struct tsa_entries_area *area,
+ 			 u32 count, u32 serial_id)
+ {
+-	void *__iomem addr;
++	void __iomem *addr;
+ 	u32 left;
+ 	u32 val;
+ 	u32 cnt;
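
The qmc.c and tsa.c hunks above are one mechanical fix: __iomem is a sparse address-space qualifier on the pointed-to memory, so it belongs between the pointee type and the star. Written as void *__iomem, it qualifies the pointer variable itself, and sparse can no longer catch a plain pointer being handed to MMIO accessors. A minimal illustration:

#include <linux/io.h>

void *__iomem wrong;    /* qualifies the pointer variable: sparse is blind here */
void __iomem *right;    /* qualifies the pointed-to MMIO: what accessors expect */

static u16 example_read(void __iomem *addr)
{
        return ioread16be(addr);        /* clean under sparse (make C=1) */
}
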
+diff --git a/drivers/soc/qcom/pmic_glink_altmode.c b/drivers/soc/qcom/pmic_glink_altmode.c
+index b78279e2f54cf..7ee52cf2570fa 100644
+--- a/drivers/soc/qcom/pmic_glink_altmode.c
++++ b/drivers/soc/qcom/pmic_glink_altmode.c
+@@ -285,7 +285,7 @@ static void pmic_glink_altmode_sc8180xp_notify(struct pmic_glink_altmode *altmod
+ 
+ 	svid = mux == 2 ? USB_TYPEC_DP_SID : 0;
+ 
+-	if (!altmode->ports[port].altmode) {
++	if (port >= ARRAY_SIZE(altmode->ports) || !altmode->ports[port].altmode) {
+ 		dev_dbg(altmode->dev, "notification on undefined port %d\n", port);
+ 		return;
+ 	}
+@@ -328,7 +328,7 @@ static void pmic_glink_altmode_sc8280xp_notify(struct pmic_glink_altmode *altmod
+ 	hpd_state = FIELD_GET(SC8280XP_HPD_STATE_MASK, notify->payload[8]);
+ 	hpd_irq = FIELD_GET(SC8280XP_HPD_IRQ_MASK, notify->payload[8]);
+ 
+-	if (!altmode->ports[port].altmode) {
++	if (port >= ARRAY_SIZE(altmode->ports) || !altmode->ports[port].altmode) {
+ 		dev_dbg(altmode->dev, "notification on undefined port %d\n", port);
+ 		return;
+ 	}
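
Both pmic_glink notify handlers gain the same guard: port arrives in a firmware message payload, so it must be range-checked against the fixed-size ports[] array before being used as an index. The pattern in isolation, with hypothetical struct names:

#include <linux/kernel.h>       /* ARRAY_SIZE() */

struct ctx {
        struct {
                void *altmode;
        } ports[2];
};

static void handle_notify(struct ctx *c, unsigned int port)
{
        /* The index comes from firmware; never trust it to be in range. */
        if (port >= ARRAY_SIZE(c->ports) || !c->ports[port].altmode)
                return;

        /* ... c->ports[port] is now safe to dereference ... */
}
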
+diff --git a/drivers/soundwire/amd_manager.c b/drivers/soundwire/amd_manager.c
+index 3a99f6dcdfafa..a3b1f4e6f0f90 100644
+--- a/drivers/soundwire/amd_manager.c
++++ b/drivers/soundwire/amd_manager.c
+@@ -927,6 +927,14 @@ static int amd_sdw_manager_probe(struct platform_device *pdev)
+ 	amd_manager->bus.clk_stop_timeout = 200;
+ 	amd_manager->bus.link_id = amd_manager->instance;
+ 
++	/*
++	 * Due to BIOS compatibility, the two links are exposed within
++	 * the scope of a single controller. If this changes, the
++	 * controller_id will have to be updated with drv_data
++	 * information.
++	 */
++	amd_manager->bus.controller_id = 0;
++
+ 	switch (amd_manager->instance) {
+ 	case ACP_SDW0:
+ 		amd_manager->num_dout_ports = AMD_SDW0_MAX_TX_PORTS;
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 41b0d9adf68ef..f3fec15c31122 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -22,6 +22,10 @@ static int sdw_get_id(struct sdw_bus *bus)
+ 		return rc;
+ 
+ 	bus->id = rc;
++
++	if (bus->controller_id == -1)
++		bus->controller_id = rc;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soundwire/debugfs.c b/drivers/soundwire/debugfs.c
+index d1553cb771874..67abd7e52f092 100644
+--- a/drivers/soundwire/debugfs.c
++++ b/drivers/soundwire/debugfs.c
+@@ -20,7 +20,7 @@ void sdw_bus_debugfs_init(struct sdw_bus *bus)
+ 		return;
+ 
+ 	/* create the debugfs master-N */
+-	snprintf(name, sizeof(name), "master-%d-%d", bus->id, bus->link_id);
++	snprintf(name, sizeof(name), "master-%d-%d", bus->controller_id, bus->link_id);
+ 	bus->debugfs = debugfs_create_dir(name, sdw_debugfs_root);
+ }
+ 
+diff --git a/drivers/soundwire/intel_auxdevice.c b/drivers/soundwire/intel_auxdevice.c
+index 7f15e3549e539..93698532deac4 100644
+--- a/drivers/soundwire/intel_auxdevice.c
++++ b/drivers/soundwire/intel_auxdevice.c
+@@ -234,6 +234,9 @@ static int intel_link_probe(struct auxiliary_device *auxdev,
+ 	cdns->instance = sdw->instance;
+ 	cdns->msg_count = 0;
+ 
++	/* single controller for all SoundWire links */
++	bus->controller_id = 0;
++
+ 	bus->link_id = auxdev->id;
+ 	bus->clk_stop_timeout = 1;
+ 
+diff --git a/drivers/soundwire/master.c b/drivers/soundwire/master.c
+index 9b05c9e25ebe4..51abedbbaa663 100644
+--- a/drivers/soundwire/master.c
++++ b/drivers/soundwire/master.c
+@@ -145,7 +145,7 @@ int sdw_master_device_add(struct sdw_bus *bus, struct device *parent,
+ 	md->dev.fwnode = fwnode;
+ 	md->dev.dma_mask = parent->dma_mask;
+ 
+-	dev_set_name(&md->dev, "sdw-master-%d", bus->id);
++	dev_set_name(&md->dev, "sdw-master-%d-%d", bus->controller_id, bus->link_id);
+ 
+ 	ret = device_register(&md->dev);
+ 	if (ret) {
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index a1e2d6c981862..8e027eee8b733 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -1624,6 +1624,9 @@ static int qcom_swrm_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	/* FIXME: is there a DT-defined value to use? */
++	ctrl->bus.controller_id = -1;
++
+ 	ret = sdw_bus_master_add(&ctrl->bus, dev, dev->fwnode);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register Soundwire controller (%d)\n",
+diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c
+index c1c1a2ac293af..060c2982e26b0 100644
+--- a/drivers/soundwire/slave.c
++++ b/drivers/soundwire/slave.c
+@@ -39,14 +39,14 @@ int sdw_slave_add(struct sdw_bus *bus,
+ 	slave->dev.fwnode = fwnode;
+ 
+ 	if (id->unique_id == SDW_IGNORED_UNIQUE_ID) {
+-		/* name shall be sdw:link:mfg:part:class */
+-		dev_set_name(&slave->dev, "sdw:%01x:%04x:%04x:%02x",
+-			     bus->link_id, id->mfg_id, id->part_id,
++		/* name shall be sdw:ctrl:link:mfg:part:class */
++		dev_set_name(&slave->dev, "sdw:%01x:%01x:%04x:%04x:%02x",
++			     bus->controller_id, bus->link_id, id->mfg_id, id->part_id,
+ 			     id->class_id);
+ 	} else {
+-		/* name shall be sdw:link:mfg:part:class:unique */
+-		dev_set_name(&slave->dev, "sdw:%01x:%04x:%04x:%02x:%01x",
+-			     bus->link_id, id->mfg_id, id->part_id,
++		/* name shall be sdw:ctrl:link:mfg:part:class:unique */
++		dev_set_name(&slave->dev, "sdw:%01x:%01x:%04x:%04x:%02x:%01x",
++			     bus->controller_id, bus->link_id, id->mfg_id, id->part_id,
+ 			     id->class_id, id->unique_id);
+ 	}
+ 
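The soundwire hunks are one coordinated change: each bus gains a controller_id (fixed at 0 on Intel and AMD, where a single controller hosts all links; taken from the IDA allocation on Qualcomm via the -1 fallback in sdw_get_id()), and that id is folded into the debugfs directory, master, and slave device names so identical link numbers under different controllers no longer collide. With illustrative values (controller 0, link 1, mfg_id 0x025d, part_id 0x0711, class_id 0x01), the new calls would produce names like:

/* master device: "sdw-master-<controller>-<link>" */
dev_set_name(&md->dev, "sdw-master-%d-%d", bus->controller_id, bus->link_id);
        /* -> "sdw-master-0-1" */

/* slave device: "sdw:<ctrl>:<link>:<mfg>:<part>:<class>" */
dev_set_name(&slave->dev, "sdw:%01x:%01x:%04x:%04x:%02x",
             bus->controller_id, bus->link_id, id->mfg_id, id->part_id,
             id->class_id);
        /* -> "sdw:0:1:025d:0711:01" */
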
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index ef08fcac2f6da..0407b91183caa 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -19,7 +19,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/spi/spi.h>
+-#include <linux/spi/spi-mem.h>
++#include <linux/mtd/spi-nor.h>
+ #include <linux/sysfs.h>
+ #include <linux/types.h>
+ #include "spi-bcm-qspi.h"
+@@ -1221,7 +1221,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ 
+ 	/* non-aligned and very short transfers are handled by MSPI */
+ 	if (!IS_ALIGNED((uintptr_t)addr, 4) || !IS_ALIGNED((uintptr_t)buf, 4) ||
+-	    len < 4)
++	    len < 4 || op->cmd.opcode == SPINOR_OP_RDSFDP)
+ 		mspi_read = true;
+ 
+ 	if (!has_bspi(qspi) || mspi_read)
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index a50eb4db79de8..e5140532071d2 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -317,6 +317,15 @@ static void cdns_spi_process_fifo(struct cdns_spi *xspi, int ntx, int nrx)
+ 	xspi->rx_bytes -= nrx;
+ 
+ 	while (ntx || nrx) {
++		if (nrx) {
++			u8 data = cdns_spi_read(xspi, CDNS_SPI_RXD);
++
++			if (xspi->rxbuf)
++				*xspi->rxbuf++ = data;
++
++			nrx--;
++		}
++
+ 		if (ntx) {
+ 			if (xspi->txbuf)
+ 				cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++);
+@@ -326,14 +335,6 @@ static void cdns_spi_process_fifo(struct cdns_spi *xspi, int ntx, int nrx)
+ 			ntx--;
+ 		}
+ 
+-		if (nrx) {
+-			u8 data = cdns_spi_read(xspi, CDNS_SPI_RXD);
+-
+-			if (xspi->rxbuf)
+-				*xspi->rxbuf++ = data;
+-
+-			nrx--;
+-		}
+ 	}
+ }
+ 
+diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c
+index 57d767a68e7b2..b9918dcc38027 100644
+--- a/drivers/spi/spi-intel-pci.c
++++ b/drivers/spi/spi-intel-pci.c
+@@ -84,7 +84,6 @@ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0xa2a4), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa324), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa3a4), (unsigned long)&cnl_info },
+-	{ PCI_VDEVICE(INTEL, 0xae23), (unsigned long)&cnl_info },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(pci, intel_spi_pci_ids);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 8ead7acb99f34..4adc56dabf55c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1624,6 +1624,10 @@ static int __spi_pump_transfer_message(struct spi_controller *ctlr,
+ 			pm_runtime_put_noidle(ctlr->dev.parent);
+ 			dev_err(&ctlr->dev, "Failed to power device: %d\n",
+ 				ret);
++
++			msg->status = ret;
++			spi_finalize_current_message(ctlr);
++
+ 			return ret;
+ 		}
+ 	}
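
The spi.c fix closes a hang on the runtime-PM error path: returning an error from __spi_pump_transfer_message() is not enough once a message has been pulled off the queue; the message must also be finalized, otherwise a spi_sync() caller waits on its completion forever. Simplified, the rule is:

ret = pm_runtime_get_sync(ctlr->dev.parent);
if (ret < 0) {
        pm_runtime_put_noidle(ctlr->dev.parent);
        dev_err(&ctlr->dev, "Failed to power device: %d\n", ret);

        /* Complete the message exactly once, even on failure,
         * or the submitter blocks on its completion forever.
         */
        msg->status = ret;
        spi_finalize_current_message(ctlr);
        return ret;
}
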
+diff --git a/drivers/thermal/gov_power_allocator.c b/drivers/thermal/gov_power_allocator.c
+index 83d4f451b1a97..931cd88425e49 100644
+--- a/drivers/thermal/gov_power_allocator.c
++++ b/drivers/thermal/gov_power_allocator.c
+@@ -693,7 +693,7 @@ static int power_allocator_throttle(struct thermal_zone_device *tz,
+ 
+ 	trip = params->trip_switch_on;
+ 	if (trip && tz->temperature < trip->temperature) {
+-		update = tz->last_temperature >= trip->temperature;
++		update = tz->passive;
+ 		tz->passive = 0;
+ 		reset_pid_controller(params);
+ 		allow_maximum_power(tz, update);
+diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
+index c69db6c90869c..1c5a429b2e3e9 100644
+--- a/drivers/thermal/intel/intel_hfi.c
++++ b/drivers/thermal/intel/intel_hfi.c
+@@ -24,6 +24,7 @@
+ #include <linux/bitops.h>
+ #include <linux/cpufeature.h>
+ #include <linux/cpumask.h>
++#include <linux/delay.h>
+ #include <linux/gfp.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+@@ -34,7 +35,9 @@
+ #include <linux/processor.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
++#include <linux/suspend.h>
+ #include <linux/string.h>
++#include <linux/syscore_ops.h>
+ #include <linux/topology.h>
+ #include <linux/workqueue.h>
+ 
+@@ -347,6 +350,52 @@ static void init_hfi_instance(struct hfi_instance *hfi_instance)
+ 	hfi_instance->data = hfi_instance->hdr + hfi_features.hdr_size;
+ }
+ 
++/* Caller must hold hfi_instance_lock. */
++static void hfi_enable(void)
++{
++	u64 msr_val;
++
++	rdmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
++	msr_val |= HW_FEEDBACK_CONFIG_HFI_ENABLE_BIT;
++	wrmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
++}
++
++static void hfi_set_hw_table(struct hfi_instance *hfi_instance)
++{
++	phys_addr_t hw_table_pa;
++	u64 msr_val;
++
++	hw_table_pa = virt_to_phys(hfi_instance->hw_table);
++	msr_val = hw_table_pa | HW_FEEDBACK_PTR_VALID_BIT;
++	wrmsrl(MSR_IA32_HW_FEEDBACK_PTR, msr_val);
++}
++
++/* Caller must hold hfi_instance_lock. */
++static void hfi_disable(void)
++{
++	u64 msr_val;
++	int i;
++
++	rdmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
++	msr_val &= ~HW_FEEDBACK_CONFIG_HFI_ENABLE_BIT;
++	wrmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
++
++	/*
++	 * Wait for hardware to acknowledge the disabling of HFI. Some
++	 * processors may not do it. Wait for ~2ms. This is a reasonable
++	 * time for hardware to complete any pending actions on the HFI
++	 * memory.
++	 */
++	for (i = 0; i < 2000; i++) {
++		rdmsrl(MSR_IA32_PACKAGE_THERM_STATUS, msr_val);
++		if (msr_val & PACKAGE_THERM_STATUS_HFI_UPDATED)
++			break;
++
++		udelay(1);
++		cpu_relax();
++	}
++}
++
+ /**
+  * intel_hfi_online() - Enable HFI on @cpu
+  * @cpu:	CPU in which the HFI will be enabled
+@@ -364,8 +413,6 @@ void intel_hfi_online(unsigned int cpu)
+ {
+ 	struct hfi_instance *hfi_instance;
+ 	struct hfi_cpu_info *info;
+-	phys_addr_t hw_table_pa;
+-	u64 msr_val;
+ 	u16 die_id;
+ 
+ 	/* Nothing to do if hfi_instances are missing. */
+@@ -403,14 +450,16 @@ void intel_hfi_online(unsigned int cpu)
+ 	/*
+ 	 * Hardware is programmed with the physical address of the first page
+ 	 * frame of the table. Hence, the allocated memory must be page-aligned.
++	 *
++	 * Some processors do not forget the initial address of the HFI table
++	 * even after having been reprogrammed. Keep using the same pages. Do
++	 * not free them.
+ 	 */
+ 	hfi_instance->hw_table = alloc_pages_exact(hfi_features.nr_table_pages,
+ 						   GFP_KERNEL | __GFP_ZERO);
+ 	if (!hfi_instance->hw_table)
+ 		goto unlock;
+ 
+-	hw_table_pa = virt_to_phys(hfi_instance->hw_table);
+-
+ 	/*
+ 	 * Allocate memory to keep a local copy of the table that
+ 	 * hardware generates.
+@@ -420,16 +469,6 @@ void intel_hfi_online(unsigned int cpu)
+ 	if (!hfi_instance->local_table)
+ 		goto free_hw_table;
+ 
+-	/*
+-	 * Program the address of the feedback table of this die/package. On
+-	 * some processors, hardware remembers the old address of the HFI table
+-	 * even after having been reprogrammed and re-enabled. Thus, do not free
+-	 * the pages allocated for the table or reprogram the hardware with a
+-	 * new base address. Namely, program the hardware only once.
+-	 */
+-	msr_val = hw_table_pa | HW_FEEDBACK_PTR_VALID_BIT;
+-	wrmsrl(MSR_IA32_HW_FEEDBACK_PTR, msr_val);
+-
+ 	init_hfi_instance(hfi_instance);
+ 
+ 	INIT_DELAYED_WORK(&hfi_instance->update_work, hfi_update_work_fn);
+@@ -438,13 +477,8 @@ void intel_hfi_online(unsigned int cpu)
+ 
+ 	cpumask_set_cpu(cpu, hfi_instance->cpus);
+ 
+-	/*
+-	 * Enable the hardware feedback interface and never disable it. See
+-	 * comment on programming the address of the table.
+-	 */
+-	rdmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
+-	msr_val |= HW_FEEDBACK_CONFIG_HFI_ENABLE_BIT;
+-	wrmsrl(MSR_IA32_HW_FEEDBACK_CONFIG, msr_val);
++	hfi_set_hw_table(hfi_instance);
++	hfi_enable();
+ 
+ unlock:
+ 	mutex_unlock(&hfi_instance_lock);
+@@ -484,6 +518,10 @@ void intel_hfi_offline(unsigned int cpu)
+ 
+ 	mutex_lock(&hfi_instance_lock);
+ 	cpumask_clear_cpu(cpu, hfi_instance->cpus);
++
++	if (!cpumask_weight(hfi_instance->cpus))
++		hfi_disable();
++
+ 	mutex_unlock(&hfi_instance_lock);
+ }
+ 
+@@ -532,6 +570,30 @@ static __init int hfi_parse_features(void)
+ 	return 0;
+ }
+ 
++static void hfi_do_enable(void)
++{
++	/* This code runs only on the boot CPU. */
++	struct hfi_cpu_info *info = &per_cpu(hfi_cpu_info, 0);
++	struct hfi_instance *hfi_instance = info->hfi_instance;
++
++	/* No locking needed. There is no concurrency with CPU online. */
++	hfi_set_hw_table(hfi_instance);
++	hfi_enable();
++}
++
++static int hfi_do_disable(void)
++{
++	/* No locking needed. There is no concurrency with CPU offline. */
++	hfi_disable();
++
++	return 0;
++}
++
++static struct syscore_ops hfi_pm_ops = {
++	.resume = hfi_do_enable,
++	.suspend = hfi_do_disable,
++};
++
+ void __init intel_hfi_init(void)
+ {
+ 	struct hfi_instance *hfi_instance;
+@@ -563,6 +625,8 @@ void __init intel_hfi_init(void)
+ 	if (!hfi_updates_wq)
+ 		goto err_nomem;
+ 
++	register_syscore_ops(&hfi_pm_ops);
++
+ 	return;
+ 
+ err_nomem:
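
The intel_hfi rework factors the enable sequence into hfi_set_hw_table()/hfi_enable() so the same code runs at CPU online and at system resume, adds hfi_disable() for offlining the last CPU of an instance and for suspend, and wires suspend/resume through syscore ops. Syscore callbacks run on a single CPU with interrupts disabled, which is why the helpers need no locking there. The registration pattern in isolation, with hypothetical names:

#include <linux/init.h>
#include <linux/syscore_ops.h>

static void my_resume(void)
{
        /* Late in resume, boot CPU only, IRQs off: reprogram state
         * the hardware or firmware may have dropped across suspend.
         */
}

static int my_suspend(void)
{
        /* Last step of suspend; return 0 on success. */
        return 0;
}

static struct syscore_ops my_pm_ops = {
        .suspend = my_suspend,
        .resume  = my_resume,
};

static int __init my_init(void)
{
        register_syscore_ops(&my_pm_ops);
        return 0;
}
device_initcall(my_init);
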
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index d5bb0da95376f..dba25e70903c3 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -301,8 +301,8 @@
+ 
+ 
+ /* Misc definitions */
++#define SC16IS7XX_SPI_READ_BIT		BIT(7)
+ #define SC16IS7XX_FIFO_SIZE		(64)
+-#define SC16IS7XX_REG_SHIFT		2
+ #define SC16IS7XX_GPIOS_PER_BANK	4
+ 
+ struct sc16is7xx_devtype {
+@@ -323,7 +323,8 @@ struct sc16is7xx_one_config {
+ 
+ struct sc16is7xx_one {
+ 	struct uart_port		port;
+-	u8				line;
++	struct regmap			*regmap;
++	struct mutex			efr_lock; /* EFR registers access */
+ 	struct kthread_work		tx_work;
+ 	struct kthread_work		reg_work;
+ 	struct kthread_delayed_work	ms_work;
+@@ -334,7 +335,6 @@ struct sc16is7xx_one {
+ 
+ struct sc16is7xx_port {
+ 	const struct sc16is7xx_devtype	*devtype;
+-	struct regmap			*regmap;
+ 	struct clk			*clk;
+ #ifdef CONFIG_GPIOLIB
+ 	struct gpio_chip		gpio;
+@@ -344,7 +344,6 @@ struct sc16is7xx_port {
+ 	unsigned char			buf[SC16IS7XX_FIFO_SIZE];
+ 	struct kthread_worker		kworker;
+ 	struct task_struct		*kworker_task;
+-	struct mutex			efr_lock;
+ 	struct sc16is7xx_one		p[];
+ };
+ 
+@@ -361,48 +360,35 @@ static void sc16is7xx_stop_tx(struct uart_port *port);
+ 
+ #define to_sc16is7xx_one(p,e)	((container_of((p), struct sc16is7xx_one, e)))
+ 
+-static int sc16is7xx_line(struct uart_port *port)
+-{
+-	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+-
+-	return one->line;
+-}
+-
+ static u8 sc16is7xx_port_read(struct uart_port *port, u8 reg)
+ {
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 	unsigned int val = 0;
+-	const u8 line = sc16is7xx_line(port);
+ 
+-	regmap_read(s->regmap, (reg << SC16IS7XX_REG_SHIFT) | line, &val);
++	regmap_read(one->regmap, reg, &val);
+ 
+ 	return val;
+ }
+ 
+ static void sc16is7xx_port_write(struct uart_port *port, u8 reg, u8 val)
+ {
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+-	const u8 line = sc16is7xx_line(port);
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 
+-	regmap_write(s->regmap, (reg << SC16IS7XX_REG_SHIFT) | line, val);
++	regmap_write(one->regmap, reg, val);
+ }
+ 
+ static void sc16is7xx_fifo_read(struct uart_port *port, unsigned int rxlen)
+ {
+ 	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+-	const u8 line = sc16is7xx_line(port);
+-	u8 addr = (SC16IS7XX_RHR_REG << SC16IS7XX_REG_SHIFT) | line;
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 
+-	regcache_cache_bypass(s->regmap, true);
+-	regmap_raw_read(s->regmap, addr, s->buf, rxlen);
+-	regcache_cache_bypass(s->regmap, false);
++	regmap_noinc_read(one->regmap, SC16IS7XX_RHR_REG, s->buf, rxlen);
+ }
+ 
+ static void sc16is7xx_fifo_write(struct uart_port *port, u8 to_send)
+ {
+ 	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+-	const u8 line = sc16is7xx_line(port);
+-	u8 addr = (SC16IS7XX_THR_REG << SC16IS7XX_REG_SHIFT) | line;
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 
+ 	/*
+ 	 * Don't send zero-length data, at least on SPI it confuses the chip
+@@ -411,32 +397,15 @@ static void sc16is7xx_fifo_write(struct uart_port *port, u8 to_send)
+ 	if (unlikely(!to_send))
+ 		return;
+ 
+-	regcache_cache_bypass(s->regmap, true);
+-	regmap_raw_write(s->regmap, addr, s->buf, to_send);
+-	regcache_cache_bypass(s->regmap, false);
++	regmap_noinc_write(one->regmap, SC16IS7XX_THR_REG, s->buf, to_send);
+ }
+ 
+ static void sc16is7xx_port_update(struct uart_port *port, u8 reg,
+ 				  u8 mask, u8 val)
+ {
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+-	const u8 line = sc16is7xx_line(port);
+-
+-	regmap_update_bits(s->regmap, (reg << SC16IS7XX_REG_SHIFT) | line,
+-			   mask, val);
+-}
+-
+-static int sc16is7xx_alloc_line(void)
+-{
+-	int i;
+-
+-	BUILD_BUG_ON(SC16IS7XX_MAX_DEVS > BITS_PER_LONG);
+-
+-	for (i = 0; i < SC16IS7XX_MAX_DEVS; i++)
+-		if (!test_and_set_bit(i, &sc16is7xx_lines))
+-			break;
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 
+-	return i;
++	regmap_update_bits(one->regmap, reg, mask, val);
+ }
+ 
+ static void sc16is7xx_power(struct uart_port *port, int on)
+@@ -478,7 +447,7 @@ static const struct sc16is7xx_devtype sc16is762_devtype = {
+ 
+ static bool sc16is7xx_regmap_volatile(struct device *dev, unsigned int reg)
+ {
+-	switch (reg >> SC16IS7XX_REG_SHIFT) {
++	switch (reg) {
+ 	case SC16IS7XX_RHR_REG:
+ 	case SC16IS7XX_IIR_REG:
+ 	case SC16IS7XX_LSR_REG:
+@@ -497,7 +466,7 @@ static bool sc16is7xx_regmap_volatile(struct device *dev, unsigned int reg)
+ 
+ static bool sc16is7xx_regmap_precious(struct device *dev, unsigned int reg)
+ {
+-	switch (reg >> SC16IS7XX_REG_SHIFT) {
++	switch (reg) {
+ 	case SC16IS7XX_RHR_REG:
+ 		return true;
+ 	default:
+@@ -507,9 +476,14 @@ static bool sc16is7xx_regmap_precious(struct device *dev, unsigned int reg)
+ 	return false;
+ }
+ 
++static bool sc16is7xx_regmap_noinc(struct device *dev, unsigned int reg)
++{
++	return reg == SC16IS7XX_RHR_REG;
++}
++
+ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ {
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 	u8 lcr;
+ 	u8 prescaler = 0;
+ 	unsigned long clk = port->uartclk, div = clk / 16 / baud;
+@@ -532,7 +506,7 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ 	 * because the bulk of the interrupt processing is run as a workqueue
+ 	 * job in thread context.
+ 	 */
+-	mutex_lock(&s->efr_lock);
++	mutex_lock(&one->efr_lock);
+ 
+ 	lcr = sc16is7xx_port_read(port, SC16IS7XX_LCR_REG);
+ 
+@@ -541,17 +515,17 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ 			     SC16IS7XX_LCR_CONF_MODE_B);
+ 
+ 	/* Enable enhanced features */
+-	regcache_cache_bypass(s->regmap, true);
++	regcache_cache_bypass(one->regmap, true);
+ 	sc16is7xx_port_update(port, SC16IS7XX_EFR_REG,
+ 			      SC16IS7XX_EFR_ENABLE_BIT,
+ 			      SC16IS7XX_EFR_ENABLE_BIT);
+ 
+-	regcache_cache_bypass(s->regmap, false);
++	regcache_cache_bypass(one->regmap, false);
+ 
+ 	/* Put LCR back to the normal mode */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr);
+ 
+-	mutex_unlock(&s->efr_lock);
++	mutex_unlock(&one->efr_lock);
+ 
+ 	sc16is7xx_port_update(port, SC16IS7XX_MCR_REG,
+ 			      SC16IS7XX_MCR_CLKSEL_BIT,
+@@ -562,10 +536,10 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
+ 			     SC16IS7XX_LCR_CONF_MODE_A);
+ 
+ 	/* Write the new divisor */
+-	regcache_cache_bypass(s->regmap, true);
++	regcache_cache_bypass(one->regmap, true);
+ 	sc16is7xx_port_write(port, SC16IS7XX_DLH_REG, div / 256);
+ 	sc16is7xx_port_write(port, SC16IS7XX_DLL_REG, div % 256);
+-	regcache_cache_bypass(s->regmap, false);
++	regcache_cache_bypass(one->regmap, false);
+ 
+ 	/* Put LCR back to the normal mode */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr);
+@@ -701,6 +675,8 @@ static void sc16is7xx_handle_tx(struct uart_port *port)
+ 
+ 	if (uart_circ_empty(xmit))
+ 		sc16is7xx_stop_tx(port);
++	else
++		sc16is7xx_ier_set(port, SC16IS7XX_IER_THRI_BIT);
+ 	uart_port_unlock_irqrestore(port, flags);
+ }
+ 
+@@ -719,11 +695,10 @@ static unsigned int sc16is7xx_get_hwmctrl(struct uart_port *port)
+ static void sc16is7xx_update_mlines(struct sc16is7xx_one *one)
+ {
+ 	struct uart_port *port = &one->port;
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+ 	unsigned long flags;
+ 	unsigned int status, changed;
+ 
+-	lockdep_assert_held_once(&s->efr_lock);
++	lockdep_assert_held_once(&one->efr_lock);
+ 
+ 	status = sc16is7xx_get_hwmctrl(port);
+ 	changed = status ^ one->old_mctrl;
+@@ -749,74 +724,77 @@ static void sc16is7xx_update_mlines(struct sc16is7xx_one *one)
+ 
+ static bool sc16is7xx_port_irq(struct sc16is7xx_port *s, int portno)
+ {
++	bool rc = true;
++	unsigned int iir, rxlen;
+ 	struct uart_port *port = &s->p[portno].port;
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 
+-	do {
+-		unsigned int iir, rxlen;
+-		struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+-
+-		iir = sc16is7xx_port_read(port, SC16IS7XX_IIR_REG);
+-		if (iir & SC16IS7XX_IIR_NO_INT_BIT)
+-			return false;
+-
+-		iir &= SC16IS7XX_IIR_ID_MASK;
+-
+-		switch (iir) {
+-		case SC16IS7XX_IIR_RDI_SRC:
+-		case SC16IS7XX_IIR_RLSE_SRC:
+-		case SC16IS7XX_IIR_RTOI_SRC:
+-		case SC16IS7XX_IIR_XOFFI_SRC:
+-			rxlen = sc16is7xx_port_read(port, SC16IS7XX_RXLVL_REG);
+-
+-			/*
+-			 * There is a silicon bug that makes the chip report a
+-			 * time-out interrupt but no data in the FIFO. This is
+-			 * described in errata section 18.1.4.
+-			 *
+-			 * When this happens, read one byte from the FIFO to
+-			 * clear the interrupt.
+-			 */
+-			if (iir == SC16IS7XX_IIR_RTOI_SRC && !rxlen)
+-				rxlen = 1;
+-
+-			if (rxlen)
+-				sc16is7xx_handle_rx(port, rxlen, iir);
+-			break;
++	mutex_lock(&one->efr_lock);
++
++	iir = sc16is7xx_port_read(port, SC16IS7XX_IIR_REG);
++	if (iir & SC16IS7XX_IIR_NO_INT_BIT) {
++		rc = false;
++		goto out_port_irq;
++	}
++
++	iir &= SC16IS7XX_IIR_ID_MASK;
++
++	switch (iir) {
++	case SC16IS7XX_IIR_RDI_SRC:
++	case SC16IS7XX_IIR_RLSE_SRC:
++	case SC16IS7XX_IIR_RTOI_SRC:
++	case SC16IS7XX_IIR_XOFFI_SRC:
++		rxlen = sc16is7xx_port_read(port, SC16IS7XX_RXLVL_REG);
++
++		/*
++		 * There is a silicon bug that makes the chip report a
++		 * time-out interrupt but no data in the FIFO. This is
++		 * described in errata section 18.1.4.
++		 *
++		 * When this happens, read one byte from the FIFO to
++		 * clear the interrupt.
++		 */
++		if (iir == SC16IS7XX_IIR_RTOI_SRC && !rxlen)
++			rxlen = 1;
++
++		if (rxlen)
++			sc16is7xx_handle_rx(port, rxlen, iir);
++		break;
+ 		/* CTSRTS interrupt comes only when CTS goes inactive */
+-		case SC16IS7XX_IIR_CTSRTS_SRC:
+-		case SC16IS7XX_IIR_MSI_SRC:
+-			sc16is7xx_update_mlines(one);
+-			break;
+-		case SC16IS7XX_IIR_THRI_SRC:
+-			sc16is7xx_handle_tx(port);
+-			break;
+-		default:
+-			dev_err_ratelimited(port->dev,
+-					    "ttySC%i: Unexpected interrupt: %x",
+-					    port->line, iir);
+-			break;
+-		}
+-	} while (0);
+-	return true;
++	case SC16IS7XX_IIR_CTSRTS_SRC:
++	case SC16IS7XX_IIR_MSI_SRC:
++		sc16is7xx_update_mlines(one);
++		break;
++	case SC16IS7XX_IIR_THRI_SRC:
++		sc16is7xx_handle_tx(port);
++		break;
++	default:
++		dev_err_ratelimited(port->dev,
++				    "ttySC%i: Unexpected interrupt: %x",
++				    port->line, iir);
++		break;
++	}
++
++out_port_irq:
++	mutex_unlock(&one->efr_lock);
++
++	return rc;
+ }
+ 
+ static irqreturn_t sc16is7xx_irq(int irq, void *dev_id)
+ {
+-	struct sc16is7xx_port *s = (struct sc16is7xx_port *)dev_id;
++	bool keep_polling;
+ 
+-	mutex_lock(&s->efr_lock);
++	struct sc16is7xx_port *s = (struct sc16is7xx_port *)dev_id;
+ 
+-	while (1) {
+-		bool keep_polling = false;
++	do {
+ 		int i;
+ 
++		keep_polling = false;
++
+ 		for (i = 0; i < s->devtype->nr_uart; ++i)
+ 			keep_polling |= sc16is7xx_port_irq(s, i);
+-		if (!keep_polling)
+-			break;
+-	}
+-
+-	mutex_unlock(&s->efr_lock);
++	} while (keep_polling);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -824,20 +802,15 @@ static irqreturn_t sc16is7xx_irq(int irq, void *dev_id)
+ static void sc16is7xx_tx_proc(struct kthread_work *ws)
+ {
+ 	struct uart_port *port = &(to_sc16is7xx_one(ws, tx_work)->port);
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+-	unsigned long flags;
++	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 
+ 	if ((port->rs485.flags & SER_RS485_ENABLED) &&
+ 	    (port->rs485.delay_rts_before_send > 0))
+ 		msleep(port->rs485.delay_rts_before_send);
+ 
+-	mutex_lock(&s->efr_lock);
++	mutex_lock(&one->efr_lock);
+ 	sc16is7xx_handle_tx(port);
+-	mutex_unlock(&s->efr_lock);
+-
+-	uart_port_lock_irqsave(port, &flags);
+-	sc16is7xx_ier_set(port, SC16IS7XX_IER_THRI_BIT);
+-	uart_port_unlock_irqrestore(port, flags);
++	mutex_unlock(&one->efr_lock);
+ }
+ 
+ static void sc16is7xx_reconf_rs485(struct uart_port *port)
+@@ -940,9 +913,9 @@ static void sc16is7xx_ms_proc(struct kthread_work *ws)
+ 	struct sc16is7xx_port *s = dev_get_drvdata(one->port.dev);
+ 
+ 	if (one->port.state) {
+-		mutex_lock(&s->efr_lock);
++		mutex_lock(&one->efr_lock);
+ 		sc16is7xx_update_mlines(one);
+-		mutex_unlock(&s->efr_lock);
++		mutex_unlock(&one->efr_lock);
+ 
+ 		kthread_queue_delayed_work(&s->kworker, &one->ms_work, HZ);
+ 	}
+@@ -1026,7 +999,6 @@ static void sc16is7xx_set_termios(struct uart_port *port,
+ 				  struct ktermios *termios,
+ 				  const struct ktermios *old)
+ {
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+ 	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+ 	unsigned int lcr, flow = 0;
+ 	int baud;
+@@ -1085,13 +1057,13 @@ static void sc16is7xx_set_termios(struct uart_port *port,
+ 		port->ignore_status_mask |= SC16IS7XX_LSR_BRK_ERROR_MASK;
+ 
+ 	/* As above, claim the mutex while accessing the EFR. */
+-	mutex_lock(&s->efr_lock);
++	mutex_lock(&one->efr_lock);
+ 
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+ 			     SC16IS7XX_LCR_CONF_MODE_B);
+ 
+ 	/* Configure flow control */
+-	regcache_cache_bypass(s->regmap, true);
++	regcache_cache_bypass(one->regmap, true);
+ 	sc16is7xx_port_write(port, SC16IS7XX_XON1_REG, termios->c_cc[VSTART]);
+ 	sc16is7xx_port_write(port, SC16IS7XX_XOFF1_REG, termios->c_cc[VSTOP]);
+ 
+@@ -1110,12 +1082,12 @@ static void sc16is7xx_set_termios(struct uart_port *port,
+ 			      SC16IS7XX_EFR_REG,
+ 			      SC16IS7XX_EFR_FLOWCTRL_BITS,
+ 			      flow);
+-	regcache_cache_bypass(s->regmap, false);
++	regcache_cache_bypass(one->regmap, false);
+ 
+ 	/* Update LCR register */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr);
+ 
+-	mutex_unlock(&s->efr_lock);
++	mutex_unlock(&one->efr_lock);
+ 
+ 	/* Get baud rate generator configuration */
+ 	baud = uart_get_baud_rate(port, termios, old,
+@@ -1161,7 +1133,6 @@ static int sc16is7xx_config_rs485(struct uart_port *port, struct ktermios *termi
+ static int sc16is7xx_startup(struct uart_port *port)
+ {
+ 	struct sc16is7xx_one *one = to_sc16is7xx_one(port, port);
+-	struct sc16is7xx_port *s = dev_get_drvdata(port->dev);
+ 	unsigned int val;
+ 	unsigned long flags;
+ 
+@@ -1178,7 +1149,7 @@ static int sc16is7xx_startup(struct uart_port *port)
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+ 			     SC16IS7XX_LCR_CONF_MODE_B);
+ 
+-	regcache_cache_bypass(s->regmap, true);
++	regcache_cache_bypass(one->regmap, true);
+ 
+ 	/* Enable write access to enhanced features and internal clock div */
+ 	sc16is7xx_port_update(port, SC16IS7XX_EFR_REG,
+@@ -1196,7 +1167,7 @@ static int sc16is7xx_startup(struct uart_port *port)
+ 			     SC16IS7XX_TCR_RX_RESUME(24) |
+ 			     SC16IS7XX_TCR_RX_HALT(48));
+ 
+-	regcache_cache_bypass(s->regmap, false);
++	regcache_cache_bypass(one->regmap, false);
+ 
+ 	/* Now, initialize the UART */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, SC16IS7XX_LCR_WORD_LEN_8);
+@@ -1447,7 +1418,8 @@ static void sc16is7xx_setup_irda_ports(struct sc16is7xx_port *s)
+ /*
+  * Configure ports designated to operate as modem control lines.
+  */
+-static int sc16is7xx_setup_mctrl_ports(struct sc16is7xx_port *s)
++static int sc16is7xx_setup_mctrl_ports(struct sc16is7xx_port *s,
++				       struct regmap *regmap)
+ {
+ 	int i;
+ 	int ret;
+@@ -1476,8 +1448,8 @@ static int sc16is7xx_setup_mctrl_ports(struct sc16is7xx_port *s)
+ 
+ 	if (s->mctrl_mask)
+ 		regmap_update_bits(
+-			s->regmap,
+-			SC16IS7XX_IOCONTROL_REG << SC16IS7XX_REG_SHIFT,
++			regmap,
++			SC16IS7XX_IOCONTROL_REG,
+ 			SC16IS7XX_IOCONTROL_MODEM_A_BIT |
+ 			SC16IS7XX_IOCONTROL_MODEM_B_BIT, s->mctrl_mask);
+ 
+@@ -1492,7 +1464,7 @@ static const struct serial_rs485 sc16is7xx_rs485_supported = {
+ 
+ static int sc16is7xx_probe(struct device *dev,
+ 			   const struct sc16is7xx_devtype *devtype,
+-			   struct regmap *regmap, int irq)
++			   struct regmap *regmaps[], int irq)
+ {
+ 	unsigned long freq = 0, *pfreq = dev_get_platdata(dev);
+ 	unsigned int val;
+@@ -1500,16 +1472,20 @@ static int sc16is7xx_probe(struct device *dev,
+ 	int i, ret;
+ 	struct sc16is7xx_port *s;
+ 
+-	if (IS_ERR(regmap))
+-		return PTR_ERR(regmap);
++	for (i = 0; i < devtype->nr_uart; i++)
++		if (IS_ERR(regmaps[i]))
++			return PTR_ERR(regmaps[i]);
+ 
+ 	/*
+ 	 * This device does not have an identification register that would
+ 	 * tell us if we are really connected to the correct device.
+ 	 * The best we can do is to check if communication is at all possible.
++	 *
++	 * Note: regmaps[0] is used in the probe function to access registers
++	 * common to all channels/ports, as it is guaranteed to be present on
++	 * all variants.
+ 	 */
+-	ret = regmap_read(regmap,
+-			  SC16IS7XX_LSR_REG << SC16IS7XX_REG_SHIFT, &val);
++	ret = regmap_read(regmaps[0], SC16IS7XX_LSR_REG, &val);
+ 	if (ret < 0)
+ 		return -EPROBE_DEFER;
+ 
+@@ -1543,10 +1519,8 @@ static int sc16is7xx_probe(struct device *dev,
+ 			return -EINVAL;
+ 	}
+ 
+-	s->regmap = regmap;
+ 	s->devtype = devtype;
+ 	dev_set_drvdata(dev, s);
+-	mutex_init(&s->efr_lock);
+ 
+ 	kthread_init_worker(&s->kworker);
+ 	s->kworker_task = kthread_run(kthread_worker_fn, &s->kworker,
+@@ -1558,11 +1532,17 @@ static int sc16is7xx_probe(struct device *dev,
+ 	sched_set_fifo(s->kworker_task);
+ 
+ 	/* reset device, purging any pending irq / data */
+-	regmap_write(s->regmap, SC16IS7XX_IOCONTROL_REG << SC16IS7XX_REG_SHIFT,
+-			SC16IS7XX_IOCONTROL_SRESET_BIT);
++	regmap_write(regmaps[0], SC16IS7XX_IOCONTROL_REG,
++		     SC16IS7XX_IOCONTROL_SRESET_BIT);
+ 
+ 	for (i = 0; i < devtype->nr_uart; ++i) {
+-		s->p[i].line		= i;
++		s->p[i].port.line = find_first_zero_bit(&sc16is7xx_lines,
++							SC16IS7XX_MAX_DEVS);
++		if (s->p[i].port.line >= SC16IS7XX_MAX_DEVS) {
++			ret = -ERANGE;
++			goto out_ports;
++		}
++
+ 		/* Initialize port data */
+ 		s->p[i].port.dev	= dev;
+ 		s->p[i].port.irq	= irq;
+@@ -1582,12 +1562,9 @@ static int sc16is7xx_probe(struct device *dev,
+ 		s->p[i].port.rs485_supported = sc16is7xx_rs485_supported;
+ 		s->p[i].port.ops	= &sc16is7xx_ops;
+ 		s->p[i].old_mctrl	= 0;
+-		s->p[i].port.line	= sc16is7xx_alloc_line();
++		s->p[i].regmap		= regmaps[i];
+ 
+-		if (s->p[i].port.line >= SC16IS7XX_MAX_DEVS) {
+-			ret = -ENOMEM;
+-			goto out_ports;
+-		}
++		mutex_init(&s->p[i].efr_lock);
+ 
+ 		ret = uart_get_rs485_mode(&s->p[i].port);
+ 		if (ret)
+@@ -1604,20 +1581,25 @@ static int sc16is7xx_probe(struct device *dev,
+ 		kthread_init_work(&s->p[i].tx_work, sc16is7xx_tx_proc);
+ 		kthread_init_work(&s->p[i].reg_work, sc16is7xx_reg_proc);
+ 		kthread_init_delayed_work(&s->p[i].ms_work, sc16is7xx_ms_proc);
++
+ 		/* Register port */
+-		uart_add_one_port(&sc16is7xx_uart, &s->p[i].port);
++		ret = uart_add_one_port(&sc16is7xx_uart, &s->p[i].port);
++		if (ret)
++			goto out_ports;
++
++		set_bit(s->p[i].port.line, &sc16is7xx_lines);
+ 
+ 		/* Enable EFR */
+ 		sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_LCR_REG,
+ 				     SC16IS7XX_LCR_CONF_MODE_B);
+ 
+-		regcache_cache_bypass(s->regmap, true);
++		regcache_cache_bypass(regmaps[i], true);
+ 
+ 		/* Enable write access to enhanced features */
+ 		sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_EFR_REG,
+ 				     SC16IS7XX_EFR_ENABLE_BIT);
+ 
+-		regcache_cache_bypass(s->regmap, false);
++		regcache_cache_bypass(regmaps[i], false);
+ 
+ 		/* Restore access to general registers */
+ 		sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_LCR_REG, 0x00);
+@@ -1628,7 +1610,7 @@ static int sc16is7xx_probe(struct device *dev,
+ 
+ 	sc16is7xx_setup_irda_ports(s);
+ 
+-	ret = sc16is7xx_setup_mctrl_ports(s);
++	ret = sc16is7xx_setup_mctrl_ports(s, regmaps[0]);
+ 	if (ret)
+ 		goto out_ports;
+ 
+@@ -1663,10 +1645,9 @@ static int sc16is7xx_probe(struct device *dev,
+ #endif
+ 
+ out_ports:
+-	for (i--; i >= 0; i--) {
+-		uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
+-		clear_bit(s->p[i].port.line, &sc16is7xx_lines);
+-	}
++	for (i = 0; i < devtype->nr_uart; i++)
++		if (test_and_clear_bit(s->p[i].port.line, &sc16is7xx_lines))
++			uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
+ 
+ 	kthread_stop(s->kworker_task);
+ 
+@@ -1688,8 +1669,8 @@ static void sc16is7xx_remove(struct device *dev)
+ 
+ 	for (i = 0; i < s->devtype->nr_uart; i++) {
+ 		kthread_cancel_delayed_work_sync(&s->p[i].ms_work);
+-		uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
+-		clear_bit(s->p[i].port.line, &sc16is7xx_lines);
++		if (test_and_clear_bit(s->p[i].port.line, &sc16is7xx_lines))
++			uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
+ 		sc16is7xx_power(&s->p[i].port, 0);
+ 	}
+ 
+@@ -1711,19 +1692,42 @@ static const struct of_device_id __maybe_unused sc16is7xx_dt_ids[] = {
+ MODULE_DEVICE_TABLE(of, sc16is7xx_dt_ids);
+ 
+ static struct regmap_config regcfg = {
+-	.reg_bits = 7,
+-	.pad_bits = 1,
++	.reg_bits = 5,
++	.pad_bits = 3,
+ 	.val_bits = 8,
+ 	.cache_type = REGCACHE_RBTREE,
+ 	.volatile_reg = sc16is7xx_regmap_volatile,
+ 	.precious_reg = sc16is7xx_regmap_precious,
++	.writeable_noinc_reg = sc16is7xx_regmap_noinc,
++	.readable_noinc_reg = sc16is7xx_regmap_noinc,
++	.max_raw_read = SC16IS7XX_FIFO_SIZE,
++	.max_raw_write = SC16IS7XX_FIFO_SIZE,
++	.max_register = SC16IS7XX_EFCR_REG,
+ };
+ 
++static const char *sc16is7xx_regmap_name(u8 port_id)
++{
++	switch (port_id) {
++	case 0:	return "port0";
++	case 1:	return "port1";
++	default:
++		WARN_ON(true);
++		return NULL;
++	}
++}
++
++static unsigned int sc16is7xx_regmap_port_mask(unsigned int port_id)
++{
++	/* CH1,CH0 are at bits 2:1. */
++	return port_id << 1;
++}
++
+ #ifdef CONFIG_SERIAL_SC16IS7XX_SPI
+ static int sc16is7xx_spi_probe(struct spi_device *spi)
+ {
+ 	const struct sc16is7xx_devtype *devtype;
+-	struct regmap *regmap;
++	struct regmap *regmaps[2];
++	unsigned int i;
+ 	int ret;
+ 
+ 	/* Setup SPI bus */
+@@ -1748,11 +1752,20 @@ static int sc16is7xx_spi_probe(struct spi_device *spi)
+ 		devtype = (struct sc16is7xx_devtype *)id_entry->driver_data;
+ 	}
+ 
+-	regcfg.max_register = (0xf << SC16IS7XX_REG_SHIFT) |
+-			      (devtype->nr_uart - 1);
+-	regmap = devm_regmap_init_spi(spi, &regcfg);
++	for (i = 0; i < devtype->nr_uart; i++) {
++		regcfg.name = sc16is7xx_regmap_name(i);
++		/*
++		 * If read_flag_mask is 0, the regmap code sets it to a default
++		 * of 0x80. Since we specify our own mask, we must add the READ
++		 * bit ourselves:
++		 */
++		regcfg.read_flag_mask = sc16is7xx_regmap_port_mask(i) |
++			SC16IS7XX_SPI_READ_BIT;
++		regcfg.write_flag_mask = sc16is7xx_regmap_port_mask(i);
++		regmaps[i] = devm_regmap_init_spi(spi, &regcfg);
++	}
+ 
+-	return sc16is7xx_probe(&spi->dev, devtype, regmap, spi->irq);
++	return sc16is7xx_probe(&spi->dev, devtype, regmaps, spi->irq);
+ }
+ 
+ static void sc16is7xx_spi_remove(struct spi_device *spi)
+@@ -1791,7 +1804,8 @@ static int sc16is7xx_i2c_probe(struct i2c_client *i2c)
+ {
+ 	const struct i2c_device_id *id = i2c_client_get_device_id(i2c);
+ 	const struct sc16is7xx_devtype *devtype;
+-	struct regmap *regmap;
++	struct regmap *regmaps[2];
++	unsigned int i;
+ 
+ 	if (i2c->dev.of_node) {
+ 		devtype = device_get_match_data(&i2c->dev);
+@@ -1801,11 +1815,14 @@ static int sc16is7xx_i2c_probe(struct i2c_client *i2c)
+ 		devtype = (struct sc16is7xx_devtype *)id->driver_data;
+ 	}
+ 
+-	regcfg.max_register = (0xf << SC16IS7XX_REG_SHIFT) |
+-			      (devtype->nr_uart - 1);
+-	regmap = devm_regmap_init_i2c(i2c, &regcfg);
++	for (i = 0; i < devtype->nr_uart; i++) {
++		regcfg.name = sc16is7xx_regmap_name(i);
++		regcfg.read_flag_mask = sc16is7xx_regmap_port_mask(i);
++		regcfg.write_flag_mask = sc16is7xx_regmap_port_mask(i);
++		regmaps[i] = devm_regmap_init_i2c(i2c, &regcfg);
++	}
+ 
+-	return sc16is7xx_probe(&i2c->dev, devtype, regmap, i2c->irq);
++	return sc16is7xx_probe(&i2c->dev, devtype, regmaps, i2c->irq);
+ }
+ 
+ static void sc16is7xx_i2c_remove(struct i2c_client *client)
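
The sc16is7xx rewrite replaces the single shared regmap, where the channel number was folded into the register address in software via the old SC16IS7XX_REG_SHIFT, with one named regmap per UART channel ("port0"/"port1"); the channel bits now travel in the regmap read/write flag masks, and FIFO access goes through regmap_noinc_read()/regmap_noinc_write() instead of cache-bypassed raw I/O, since the RHR/THR FIFO register must not auto-increment. With .reg_bits = 5 and .pad_bits = 3, the register lands in the upper bits of the on-wire subaddress byte, and the flag masks OR in the channel bits (2:1) and, for SPI reads, BIT(7). A sketch of the resulting byte:

#include <linux/bits.h>
#include <linux/types.h>

/* On-wire subaddress byte: R/W | A3 A2 A1 A0 | CH1 CH0 | 0 */
static u8 sc16is7xx_wire_addr(u8 reg, u8 ch, bool is_read)
{
        u8 addr = (reg << 3) | (ch << 1);

        if (is_read)            /* SPI only; I2C direction is implicit */
                addr |= BIT(7);

        return addr;
}
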
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index ce2f76769fb0b..1f8d86b9c4fa0 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8916,12 +8916,9 @@ static void ufshcd_async_scan(void *data, async_cookie_t cookie)
+ 
+ out:
+ 	pm_runtime_put_sync(hba->dev);
+-	/*
+-	 * If we failed to initialize the device or the device is not
+-	 * present, turn off the power/clocks etc.
+-	 */
++
+ 	if (ret)
+-		ufshcd_hba_exit(hba);
++		dev_err(hba->dev, "%s failed: %d\n", __func__, ret);
+ }
+ 
+ static enum scsi_timeout_action ufshcd_eh_timed_out(struct scsi_cmnd *scmd)
+diff --git a/fs/afs/addr_list.c b/fs/afs/addr_list.c
+index de1ae0bead3ba..f4837c3b8ae2b 100644
+--- a/fs/afs/addr_list.c
++++ b/fs/afs/addr_list.c
+@@ -13,26 +13,33 @@
+ #include "internal.h"
+ #include "afs_fs.h"
+ 
++static void afs_free_addrlist(struct rcu_head *rcu)
++{
++	struct afs_addr_list *alist = container_of(rcu, struct afs_addr_list, rcu);
++	unsigned int i;
++
++	for (i = 0; i < alist->nr_addrs; i++)
++		rxrpc_kernel_put_peer(alist->addrs[i].peer);
++}
++
+ /*
+  * Release an address list.
+  */
+ void afs_put_addrlist(struct afs_addr_list *alist)
+ {
+ 	if (alist && refcount_dec_and_test(&alist->usage))
+-		kfree_rcu(alist, rcu);
++		call_rcu(&alist->rcu, afs_free_addrlist);
+ }
+ 
+ /*
+  * Allocate an address list.
+  */
+-struct afs_addr_list *afs_alloc_addrlist(unsigned int nr,
+-					 unsigned short service,
+-					 unsigned short port)
++struct afs_addr_list *afs_alloc_addrlist(unsigned int nr, u16 service_id)
+ {
+ 	struct afs_addr_list *alist;
+ 	unsigned int i;
+ 
+-	_enter("%u,%u,%u", nr, service, port);
++	_enter("%u,%u", nr, service_id);
+ 
+ 	if (nr > AFS_MAX_ADDRESSES)
+ 		nr = AFS_MAX_ADDRESSES;
+@@ -44,16 +51,8 @@ struct afs_addr_list *afs_alloc_addrlist(unsigned int nr,
+ 	refcount_set(&alist->usage, 1);
+ 	alist->max_addrs = nr;
+ 
+-	for (i = 0; i < nr; i++) {
+-		struct sockaddr_rxrpc *srx = &alist->addrs[i];
+-		srx->srx_family			= AF_RXRPC;
+-		srx->srx_service		= service;
+-		srx->transport_type		= SOCK_DGRAM;
+-		srx->transport_len		= sizeof(srx->transport.sin6);
+-		srx->transport.sin6.sin6_family	= AF_INET6;
+-		srx->transport.sin6.sin6_port	= htons(port);
+-	}
+-
++	for (i = 0; i < nr; i++)
++		alist->addrs[i].service_id = service_id;
+ 	return alist;
+ }
+ 
+@@ -126,7 +125,7 @@ struct afs_vlserver_list *afs_parse_text_addrs(struct afs_net *net,
+ 	if (!vllist->servers[0].server)
+ 		goto error_vl;
+ 
+-	alist = afs_alloc_addrlist(nr, service, AFS_VL_PORT);
++	alist = afs_alloc_addrlist(nr, service);
+ 	if (!alist)
+ 		goto error;
+ 
+@@ -197,9 +196,11 @@ struct afs_vlserver_list *afs_parse_text_addrs(struct afs_net *net,
+ 		}
+ 
+ 		if (family == AF_INET)
+-			afs_merge_fs_addr4(alist, x[0], xport);
++			ret = afs_merge_fs_addr4(net, alist, x[0], xport);
+ 		else
+-			afs_merge_fs_addr6(alist, x, xport);
++			ret = afs_merge_fs_addr6(net, alist, x, xport);
++		if (ret < 0)
++			goto error;
+ 
+ 	} while (p < end);
+ 
+@@ -271,25 +272,33 @@ struct afs_vlserver_list *afs_dns_query(struct afs_cell *cell, time64_t *_expiry
+ /*
+  * Merge an IPv4 entry into a fileserver address list.
+  */
+-void afs_merge_fs_addr4(struct afs_addr_list *alist, __be32 xdr, u16 port)
++int afs_merge_fs_addr4(struct afs_net *net, struct afs_addr_list *alist,
++		       __be32 xdr, u16 port)
+ {
+-	struct sockaddr_rxrpc *srx;
+-	u32 addr = ntohl(xdr);
++	struct sockaddr_rxrpc srx;
++	struct rxrpc_peer *peer;
+ 	int i;
+ 
+ 	if (alist->nr_addrs >= alist->max_addrs)
+-		return;
++		return 0;
+ 
+-	for (i = 0; i < alist->nr_ipv4; i++) {
+-		struct sockaddr_in *a = &alist->addrs[i].transport.sin;
+-		u32 a_addr = ntohl(a->sin_addr.s_addr);
+-		u16 a_port = ntohs(a->sin_port);
++	srx.srx_family = AF_RXRPC;
++	srx.transport_type = SOCK_DGRAM;
++	srx.transport_len = sizeof(srx.transport.sin);
++	srx.transport.sin.sin_family = AF_INET;
++	srx.transport.sin.sin_port = htons(port);
++	srx.transport.sin.sin_addr.s_addr = xdr;
+ 
+-		if (addr == a_addr && port == a_port)
+-			return;
+-		if (addr == a_addr && port < a_port)
+-			break;
+-		if (addr < a_addr)
++	peer = rxrpc_kernel_lookup_peer(net->socket, &srx, GFP_KERNEL);
++	if (!peer)
++		return -ENOMEM;
++
++	for (i = 0; i < alist->nr_ipv4; i++) {
++		if (peer == alist->addrs[i].peer) {
++			rxrpc_kernel_put_peer(peer);
++			return 0;
++		}
++		if (peer <= alist->addrs[i].peer)
+ 			break;
+ 	}
+ 
+@@ -298,38 +307,42 @@ void afs_merge_fs_addr4(struct afs_addr_list *alist, __be32 xdr, u16 port)
+ 			alist->addrs + i,
+ 			sizeof(alist->addrs[0]) * (alist->nr_addrs - i));
+ 
+-	srx = &alist->addrs[i];
+-	srx->srx_family = AF_RXRPC;
+-	srx->transport_type = SOCK_DGRAM;
+-	srx->transport_len = sizeof(srx->transport.sin);
+-	srx->transport.sin.sin_family = AF_INET;
+-	srx->transport.sin.sin_port = htons(port);
+-	srx->transport.sin.sin_addr.s_addr = xdr;
++	alist->addrs[i].peer = peer;
+ 	alist->nr_ipv4++;
+ 	alist->nr_addrs++;
++	return 0;
+ }
+ 
+ /*
+  * Merge an IPv6 entry into a fileserver address list.
+  */
+-void afs_merge_fs_addr6(struct afs_addr_list *alist, __be32 *xdr, u16 port)
++int afs_merge_fs_addr6(struct afs_net *net, struct afs_addr_list *alist,
++		       __be32 *xdr, u16 port)
+ {
+-	struct sockaddr_rxrpc *srx;
+-	int i, diff;
++	struct sockaddr_rxrpc srx;
++	struct rxrpc_peer *peer;
++	int i;
+ 
+ 	if (alist->nr_addrs >= alist->max_addrs)
+-		return;
++		return 0;
+ 
+-	for (i = alist->nr_ipv4; i < alist->nr_addrs; i++) {
+-		struct sockaddr_in6 *a = &alist->addrs[i].transport.sin6;
+-		u16 a_port = ntohs(a->sin6_port);
++	srx.srx_family = AF_RXRPC;
++	srx.transport_type = SOCK_DGRAM;
++	srx.transport_len = sizeof(srx.transport.sin6);
++	srx.transport.sin6.sin6_family = AF_INET6;
++	srx.transport.sin6.sin6_port = htons(port);
++	memcpy(&srx.transport.sin6.sin6_addr, xdr, 16);
+ 
+-		diff = memcmp(xdr, &a->sin6_addr, 16);
+-		if (diff == 0 && port == a_port)
+-			return;
+-		if (diff == 0 && port < a_port)
+-			break;
+-		if (diff < 0)
++	peer = rxrpc_kernel_lookup_peer(net->socket, &srx, GFP_KERNEL);
++	if (!peer)
++		return -ENOMEM;
++
++	for (i = alist->nr_ipv4; i < alist->nr_addrs; i++) {
++		if (peer == alist->addrs[i].peer) {
++			rxrpc_kernel_put_peer(peer);
++			return 0;
++		}
++		if (peer <= alist->addrs[i].peer)
+ 			break;
+ 	}
+ 
+@@ -337,15 +350,9 @@ void afs_merge_fs_addr6(struct afs_addr_list *alist, __be32 *xdr, u16 port)
+ 		memmove(alist->addrs + i + 1,
+ 			alist->addrs + i,
+ 			sizeof(alist->addrs[0]) * (alist->nr_addrs - i));
+-
+-	srx = &alist->addrs[i];
+-	srx->srx_family = AF_RXRPC;
+-	srx->transport_type = SOCK_DGRAM;
+-	srx->transport_len = sizeof(srx->transport.sin6);
+-	srx->transport.sin6.sin6_family = AF_INET6;
+-	srx->transport.sin6.sin6_port = htons(port);
+-	memcpy(&srx->transport.sin6.sin6_addr, xdr, 16);
++	alist->addrs[i].peer = peer;
+ 	alist->nr_addrs++;
++	return 0;
+ }
+ 
+ /*
+@@ -379,26 +386,24 @@ bool afs_iterate_addresses(struct afs_addr_cursor *ac)
+ selected:
+ 	ac->index = index;
+ 	set_bit(index, &ac->tried);
+-	ac->responded = false;
++	ac->call_responded = false;
+ 	return true;
+ }
+ 
+ /*
+  * Release an address list cursor.
+  */
+-int afs_end_cursor(struct afs_addr_cursor *ac)
++void afs_end_cursor(struct afs_addr_cursor *ac)
+ {
+ 	struct afs_addr_list *alist;
+ 
+ 	alist = ac->alist;
+ 	if (alist) {
+-		if (ac->responded &&
++		if (ac->call_responded &&
+ 		    ac->index != alist->preferred &&
+ 		    test_bit(ac->alist->preferred, &ac->tried))
+ 			WRITE_ONCE(alist->preferred, ac->index);
+ 		afs_put_addrlist(alist);
+ 		ac->alist = NULL;
+ 	}
+-
+-	return ac->error;
+ }
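
The afs_put_addrlist() change is the heart of this hunk: kfree_rcu() can only free the object, but an afs_addr_list now pins an rxrpc_peer per entry, so destruction must go through call_rcu() with a callback that drops those references after the grace period. The general pattern, with hypothetical names (a real destructor frees the outer object once the per-entry puts are done):

#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct entry {
        refcount_t ref;
};

static void entry_put(struct entry *e)
{
        if (refcount_dec_and_test(&e->ref))
                kfree(e);
}

struct holder {
        struct rcu_head rcu;
        refcount_t usage;
        unsigned int nr;
        struct entry *refs[];
};

static void holder_free_rcu(struct rcu_head *rcu)
{
        struct holder *h = container_of(rcu, struct holder, rcu);
        unsigned int i;

        /* Safe now: no RCU reader can still see h. Drop what it pinned. */
        for (i = 0; i < h->nr; i++)
                entry_put(h->refs[i]);
        kfree(h);
}

static void holder_put(struct holder *h)
{
        if (h && refcount_dec_and_test(&h->usage))
                call_rcu(&h->rcu, holder_free_rcu);     /* not kfree_rcu() */
}
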
+diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
+index d4ddb20d67320..99a3f20bc7869 100644
+--- a/fs/afs/cmservice.c
++++ b/fs/afs/cmservice.c
+@@ -146,10 +146,11 @@ static int afs_find_cm_server_by_peer(struct afs_call *call)
+ {
+ 	struct sockaddr_rxrpc srx;
+ 	struct afs_server *server;
++	struct rxrpc_peer *peer;
+ 
+-	rxrpc_kernel_get_peer(call->net->socket, call->rxcall, &srx);
++	peer = rxrpc_kernel_get_call_peer(call->net->socket, call->rxcall);
+ 
+-	server = afs_find_server(call->net, &srx);
++	server = afs_find_server(call->net, peer);
+ 	if (!server) {
+ 		trace_afs_cm_no_server(call, &srx);
+ 		return 0;
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 5219182e52e1a..9140780be5a43 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -474,6 +474,14 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ 			continue;
+ 		}
+ 
++		/* Don't expose silly rename entries to userspace. */
++		if (nlen > 6 &&
++		    dire->u.name[0] == '.' &&
++		    ctx->actor != afs_lookup_filldir &&
++		    ctx->actor != afs_lookup_one_filldir &&
++		    memcmp(dire->u.name, ".__afs", 6) == 0)
++			continue;
++
+ 		/* found the next entry */
+ 		if (!dir_emit(ctx, dire->u.name, nlen,
+ 			      ntohl(dire->u.vnode),
+@@ -693,8 +701,9 @@ static void afs_do_lookup_success(struct afs_operation *op)
+ 			vp = &op->file[0];
+ 			abort_code = vp->scb.status.abort_code;
+ 			if (abort_code != 0) {
+-				op->ac.abort_code = abort_code;
+-				op->error = afs_abort_to_error(abort_code);
++				op->call_abort_code = abort_code;
++				afs_op_set_error(op, afs_abort_to_error(abort_code));
++				op->cumul_error.abort_code = abort_code;
+ 			}
+ 			break;
+ 
+@@ -707,6 +716,8 @@ static void afs_do_lookup_success(struct afs_operation *op)
+ 			break;
+ 		}
+ 
++		if (vp->scb.status.abort_code)
++			trace_afs_bulkstat_error(op, &vp->fid, i, vp->scb.status.abort_code);
+ 		if (!vp->scb.have_status && !vp->scb.have_error)
+ 			continue;
+ 
+@@ -846,13 +857,14 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
+ 	_debug("nr_files %u", op->nr_files);
+ 
+ 	/* Need space for examining all the selected files */
+-	op->error = -ENOMEM;
+ 	if (op->nr_files > 2) {
+ 		op->more_files = kvcalloc(op->nr_files - 2,
+ 					  sizeof(struct afs_vnode_param),
+ 					  GFP_KERNEL);
+-		if (!op->more_files)
++		if (!op->more_files) {
++			afs_op_nomem(op);
+ 			goto out_op;
++		}
+ 
+ 		for (i = 2; i < op->nr_files; i++) {
+ 			vp = &op->more_files[i - 2];
+@@ -878,14 +890,14 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
+ 	 * lookups contained therein are stored in the reply without aborting
+ 	 * the whole operation.
+ 	 */
+-	op->error = -ENOTSUPP;
++	afs_op_set_error(op, -ENOTSUPP);
+ 	if (!cookie->one_only) {
+ 		op->ops = &afs_inline_bulk_status_operation;
+ 		afs_begin_vnode_operation(op);
+ 		afs_wait_for_operation(op);
+ 	}
+ 
+-	if (op->error == -ENOTSUPP) {
++	if (afs_op_error(op) == -ENOTSUPP) {
+ 		/* We could try FS.BulkStatus next, but this aborts the entire
+ 		 * op if any of the lookups fails - so, for the moment, revert
+ 		 * to FS.FetchStatus for op->file[1].
+@@ -895,12 +907,16 @@ static struct inode *afs_do_lookup(struct inode *dir, struct dentry *dentry,
+ 		afs_begin_vnode_operation(op);
+ 		afs_wait_for_operation(op);
+ 	}
+-	inode = ERR_PTR(op->error);
+ 
+ out_op:
+-	if (op->error == 0) {
+-		inode = &op->file[1].vnode->netfs.inode;
+-		op->file[1].vnode = NULL;
++	if (!afs_op_error(op)) {
++		if (op->file[1].scb.status.abort_code) {
++			afs_op_accumulate_error(op, -ECONNABORTED,
++						op->file[1].scb.status.abort_code);
++		} else {
++			inode = &op->file[1].vnode->netfs.inode;
++			op->file[1].vnode = NULL;
++		}
+ 	}
+ 
+ 	if (op->file[0].scb.have_status)
+@@ -1255,7 +1271,7 @@ void afs_check_for_remote_deletion(struct afs_operation *op)
+ {
+ 	struct afs_vnode *vnode = op->file[0].vnode;
+ 
+-	switch (op->ac.abort_code) {
++	switch (afs_op_abort_code(op)) {
+ 	case VNOVNODE:
+ 		set_bit(AFS_VNODE_DELETED, &vnode->flags);
+ 		afs_break_callback(vnode, afs_cb_break_for_deleted);
+@@ -1273,20 +1289,20 @@ static void afs_vnode_new_inode(struct afs_operation *op)
+ 
+ 	_enter("");
+ 
+-	ASSERTCMP(op->error, ==, 0);
++	ASSERTCMP(afs_op_error(op), ==, 0);
+ 
+ 	inode = afs_iget(op, vp);
+ 	if (IS_ERR(inode)) {
+ 		/* ENOMEM or EINTR at a really inconvenient time - just abandon
+ 		 * the new directory on the server.
+ 		 */
+-		op->error = PTR_ERR(inode);
++		afs_op_accumulate_error(op, PTR_ERR(inode), 0);
+ 		return;
+ 	}
+ 
+ 	vnode = AFS_FS_I(inode);
+ 	set_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
+-	if (!op->error)
++	if (!afs_op_error(op))
+ 		afs_cache_permit(vnode, op->key, vnode->cb_break, &vp->scb);
+ 	d_instantiate(op->dentry, inode);
+ }
+@@ -1320,7 +1336,7 @@ static void afs_create_put(struct afs_operation *op)
+ {
+ 	_enter("op=%08x", op->debug_id);
+ 
+-	if (op->error)
++	if (afs_op_error(op))
+ 		d_drop(op->dentry);
+ }
+ 
+@@ -1480,7 +1496,7 @@ static void afs_dir_remove_link(struct afs_operation *op)
+ 	struct dentry *dentry = op->dentry;
+ 	int ret;
+ 
+-	if (op->error != 0 ||
++	if (afs_op_error(op) ||
+ 	    (op->file[1].scb.have_status && op->file[1].scb.have_error))
+ 		return;
+ 	if (d_really_is_positive(dentry))
+@@ -1504,10 +1520,10 @@ static void afs_dir_remove_link(struct afs_operation *op)
+ 
+ 		ret = afs_validate(vnode, op->key);
+ 		if (ret != -ESTALE)
+-			op->error = ret;
++			afs_op_set_error(op, ret);
+ 	}
+ 
+-	_debug("nlink %d [val %d]", vnode->netfs.inode.i_nlink, op->error);
++	_debug("nlink %d [val %d]", vnode->netfs.inode.i_nlink, afs_op_error(op));
+ }
+ 
+ static void afs_unlink_success(struct afs_operation *op)
+@@ -1538,7 +1554,7 @@ static void afs_unlink_edit_dir(struct afs_operation *op)
+ static void afs_unlink_put(struct afs_operation *op)
+ {
+ 	_enter("op=%08x", op->debug_id);
+-	if (op->unlink.need_rehash && op->error < 0 && op->error != -ENOENT)
++	if (op->unlink.need_rehash && afs_op_error(op) < 0 && afs_op_error(op) != -ENOENT)
+ 		d_rehash(op->dentry);
+ }
+ 
+@@ -1579,7 +1595,7 @@ static int afs_unlink(struct inode *dir, struct dentry *dentry)
+ 	/* Try to make sure we have a callback promise on the victim. */
+ 	ret = afs_validate(vnode, op->key);
+ 	if (ret < 0) {
+-		op->error = ret;
++		afs_op_set_error(op, ret);
+ 		goto error;
+ 	}
+ 
+@@ -1588,7 +1604,7 @@ static int afs_unlink(struct inode *dir, struct dentry *dentry)
+ 		spin_unlock(&dentry->d_lock);
+ 		/* Start asynchronous writeout of the inode */
+ 		write_inode_now(d_inode(dentry), 0);
+-		op->error = afs_sillyrename(dvnode, vnode, dentry, op->key);
++		afs_op_set_error(op, afs_sillyrename(dvnode, vnode, dentry, op->key));
+ 		goto error;
+ 	}
+ 	if (!d_unhashed(dentry)) {
+@@ -1609,7 +1625,7 @@ static int afs_unlink(struct inode *dir, struct dentry *dentry)
+ 	/* If there was a conflict with a third party, check the status of the
+ 	 * unlinked vnode.
+ 	 */
+-	if (op->error == 0 && (op->flags & AFS_OPERATION_DIR_CONFLICT)) {
++	if (afs_op_error(op) == 0 && (op->flags & AFS_OPERATION_DIR_CONFLICT)) {
+ 		op->file[1].update_ctime = false;
+ 		op->fetch_status.which = 1;
+ 		op->ops = &afs_fetch_status_operation;
+@@ -1691,7 +1707,7 @@ static void afs_link_success(struct afs_operation *op)
+ static void afs_link_put(struct afs_operation *op)
+ {
+ 	_enter("op=%08x", op->debug_id);
+-	if (op->error)
++	if (afs_op_error(op))
+ 		d_drop(op->dentry);
+ }
+ 
+@@ -1889,7 +1905,7 @@ static void afs_rename_put(struct afs_operation *op)
+ 	if (op->rename.rehash)
+ 		d_rehash(op->rename.rehash);
+ 	dput(op->rename.tmp);
+-	if (op->error)
++	if (afs_op_error(op))
+ 		d_rehash(op->dentry);
+ }
+ 
+@@ -1934,7 +1950,7 @@ static int afs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 		return PTR_ERR(op);
+ 
+ 	ret = afs_validate(vnode, op->key);
+-	op->error = ret;
++	afs_op_set_error(op, ret);
+ 	if (ret < 0)
+ 		goto error;
+ 
+@@ -1971,7 +1987,7 @@ static int afs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 			op->rename.tmp = d_alloc(new_dentry->d_parent,
+ 						 &new_dentry->d_name);
+ 			if (!op->rename.tmp) {
+-				op->error = -ENOMEM;
++				afs_op_nomem(op);
+ 				goto error;
+ 			}
+ 
+@@ -1979,7 +1995,7 @@ static int afs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 					      AFS_FS_I(d_inode(new_dentry)),
+ 					      new_dentry, op->key);
+ 			if (ret) {
+-				op->error = ret;
++				afs_op_set_error(op, ret);
+ 				goto error;
+ 			}
+ 
+diff --git a/fs/afs/dir_silly.c b/fs/afs/dir_silly.c
+index bb5807e87fa4c..a1e581946b930 100644
+--- a/fs/afs/dir_silly.c
++++ b/fs/afs/dir_silly.c
+@@ -218,7 +218,7 @@ static int afs_do_silly_unlink(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 	/* If there was a conflict with a third party, check the status of the
+ 	 * unlinked vnode.
+ 	 */
+-	if (op->error == 0 && (op->flags & AFS_OPERATION_DIR_CONFLICT)) {
++	if (op->cumul_error.error == 0 && (op->flags & AFS_OPERATION_DIR_CONFLICT)) {
+ 		op->file[1].update_ctime = false;
+ 		op->fetch_status.which = 1;
+ 		op->ops = &afs_fetch_status_operation;
+diff --git a/fs/afs/file.c b/fs/afs/file.c
+index d37dd201752ba..8f9b424275698 100644
+--- a/fs/afs/file.c
++++ b/fs/afs/file.c
+@@ -243,12 +243,9 @@ static void afs_fetch_data_notify(struct afs_operation *op)
+ {
+ 	struct afs_read *req = op->fetch.req;
+ 	struct netfs_io_subrequest *subreq = req->subreq;
+-	int error = op->error;
++	int error = afs_op_error(op);
+ 
+-	if (error == -ECONNABORTED)
+-		error = afs_abort_to_error(op->ac.abort_code);
+ 	req->error = error;
+-
+ 	if (subreq) {
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 		netfs_subreq_terminated(subreq, error ?: req->actual_len, false);
+@@ -271,7 +268,7 @@ static void afs_fetch_data_success(struct afs_operation *op)
+ 
+ static void afs_fetch_data_put(struct afs_operation *op)
+ {
+-	op->fetch.req->error = op->error;
++	op->fetch.req->error = afs_op_error(op);
+ 	afs_put_read(op->fetch.req);
+ }
+ 
+diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
+index 7a3803ce3a229..cebe4fad81922 100644
+--- a/fs/afs/fs_operation.c
++++ b/fs/afs/fs_operation.c
+@@ -40,8 +40,8 @@ struct afs_operation *afs_alloc_operation(struct key *key, struct afs_volume *vo
+ 	op->net		= volume->cell->net;
+ 	op->cb_v_break	= volume->cb_v_break;
+ 	op->debug_id	= atomic_inc_return(&afs_operation_debug_counter);
+-	op->error	= -EDESTADDRREQ;
+-	op->ac.error	= SHRT_MAX;
++	op->nr_iterations = -1;
++	afs_op_set_error(op, -EDESTADDRREQ);
+ 
+ 	_leave(" = [op=%08x]", op->debug_id);
+ 	return op;
+@@ -71,7 +71,7 @@ static bool afs_get_io_locks(struct afs_operation *op)
+ 		swap(vnode, vnode2);
+ 
+ 	if (mutex_lock_interruptible(&vnode->io_lock) < 0) {
+-		op->error = -ERESTARTSYS;
++		afs_op_set_error(op, -ERESTARTSYS);
+ 		op->flags |= AFS_OPERATION_STOP;
+ 		_leave(" = f [I 0]");
+ 		return false;
+@@ -80,7 +80,7 @@ static bool afs_get_io_locks(struct afs_operation *op)
+ 
+ 	if (vnode2) {
+ 		if (mutex_lock_interruptible_nested(&vnode2->io_lock, 1) < 0) {
+-			op->error = -ERESTARTSYS;
++			afs_op_set_error(op, -ERESTARTSYS);
+ 			op->flags |= AFS_OPERATION_STOP;
+ 			mutex_unlock(&vnode->io_lock);
+ 			op->flags &= ~AFS_OPERATION_LOCK_0;
+@@ -159,16 +159,16 @@ static void afs_end_vnode_operation(struct afs_operation *op)
+ {
+ 	_enter("");
+ 
+-	if (op->error == -EDESTADDRREQ ||
+-	    op->error == -EADDRNOTAVAIL ||
+-	    op->error == -ENETUNREACH ||
+-	    op->error == -EHOSTUNREACH)
++	switch (afs_op_error(op)) {
++	case -EDESTADDRREQ:
++	case -EADDRNOTAVAIL:
++	case -ENETUNREACH:
++	case -EHOSTUNREACH:
+ 		afs_dump_edestaddrreq(op);
++		break;
++	}
+ 
+ 	afs_drop_io_locks(op);
+-
+-	if (op->error == -ECONNABORTED)
+-		op->error = afs_abort_to_error(op->ac.abort_code);
+ }
+ 
+ /*
+@@ -179,6 +179,8 @@ void afs_wait_for_operation(struct afs_operation *op)
+ 	_enter("");
+ 
+ 	while (afs_select_fileserver(op)) {
++		op->call_error = 0;
++		op->call_abort_code = 0;
+ 		op->cb_s_break = op->server->cb_s_break;
+ 		if (test_bit(AFS_SERVER_FL_IS_YFS, &op->server->flags) &&
+ 		    op->ops->issue_yfs_rpc)
+@@ -186,30 +188,34 @@ void afs_wait_for_operation(struct afs_operation *op)
+ 		else if (op->ops->issue_afs_rpc)
+ 			op->ops->issue_afs_rpc(op);
+ 		else
+-			op->ac.error = -ENOTSUPP;
+-
+-		if (op->call)
+-			op->error = afs_wait_for_call_to_complete(op->call, &op->ac);
++			op->call_error = -ENOTSUPP;
++
++		if (op->call) {
++			afs_wait_for_call_to_complete(op->call, &op->ac);
++			op->call_abort_code = op->call->abort_code;
++			op->call_error = op->call->error;
++			op->call_responded = op->call->responded;
++			op->ac.call_responded = true;
++			WRITE_ONCE(op->ac.alist->addrs[op->ac.index].last_error,
++				   op->call_error);
++			afs_put_call(op->call);
++		}
+ 	}
+ 
+-	switch (op->error) {
+-	case 0:
++	if (!afs_op_error(op)) {
+ 		_debug("success");
+ 		op->ops->success(op);
+-		break;
+-	case -ECONNABORTED:
++	} else if (op->cumul_error.aborted) {
+ 		if (op->ops->aborted)
+ 			op->ops->aborted(op);
+-		fallthrough;
+-	default:
++	} else {
+ 		if (op->ops->failed)
+ 			op->ops->failed(op);
+-		break;
+ 	}
+ 
+ 	afs_end_vnode_operation(op);
+ 
+-	if (op->error == 0 && op->ops->edit_dir) {
++	if (!afs_op_error(op) && op->ops->edit_dir) {
+ 		_debug("edit_dir");
+ 		op->ops->edit_dir(op);
+ 	}
+@@ -221,7 +227,7 @@ void afs_wait_for_operation(struct afs_operation *op)
+  */
+ int afs_put_operation(struct afs_operation *op)
+ {
+-	int i, ret = op->error;
++	int i, ret = afs_op_error(op);
+ 
+ 	_enter("op=%08x,%d", op->debug_id, ret);
+ 
+diff --git a/fs/afs/fs_probe.c b/fs/afs/fs_probe.c
+index daaf3810cc925..58d28b82571e3 100644
+--- a/fs/afs/fs_probe.c
++++ b/fs/afs/fs_probe.c
+@@ -101,6 +101,7 @@ static void afs_fs_probe_not_done(struct afs_net *net,
+ void afs_fileserver_probe_result(struct afs_call *call)
+ {
+ 	struct afs_addr_list *alist = call->alist;
++	struct afs_address *addr = &alist->addrs[call->addr_ix];
+ 	struct afs_server *server = call->server;
+ 	unsigned int index = call->addr_ix;
+ 	unsigned int rtt_us = 0, cap0;
+@@ -153,12 +154,12 @@ responded:
+ 	if (call->service_id == YFS_FS_SERVICE) {
+ 		server->probe.is_yfs = true;
+ 		set_bit(AFS_SERVER_FL_IS_YFS, &server->flags);
+-		alist->addrs[index].srx_service = call->service_id;
++		addr->service_id = call->service_id;
+ 	} else {
+ 		server->probe.not_yfs = true;
+ 		if (!server->probe.is_yfs) {
+ 			clear_bit(AFS_SERVER_FL_IS_YFS, &server->flags);
+-			alist->addrs[index].srx_service = call->service_id;
++			addr->service_id = call->service_id;
+ 		}
+ 		cap0 = ntohl(call->tmp);
+ 		if (cap0 & AFS3_VICED_CAPABILITY_64BITFILES)
+@@ -167,7 +168,7 @@ responded:
+ 			clear_bit(AFS_SERVER_FL_HAS_FS64, &server->flags);
+ 	}
+ 
+-	rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us);
++	rtt_us = rxrpc_kernel_get_srtt(addr->peer);
+ 	if (rtt_us < server->probe.rtt) {
+ 		server->probe.rtt = rtt_us;
+ 		server->rtt = rtt_us;
+@@ -181,8 +182,8 @@ responded:
+ out:
+ 	spin_unlock(&server->probe_lock);
+ 
+-	_debug("probe %pU [%u] %pISpc rtt=%u ret=%d",
+-	       &server->uuid, index, &alist->addrs[index].transport,
++	_debug("probe %pU [%u] %pISpc rtt=%d ret=%d",
++	       &server->uuid, index, rxrpc_kernel_remote_addr(alist->addrs[index].peer),
+ 	       rtt_us, ret);
+ 
+ 	return afs_done_one_fs_probe(call->net, server);
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index 7d37f63ef0f09..2a56dea225190 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -1612,6 +1612,7 @@ int afs_fs_give_up_all_callbacks(struct afs_net *net,
+ {
+ 	struct afs_call *call;
+ 	__be32 *bp;
++	int ret;
+ 
+ 	_enter("");
+ 
+@@ -1627,7 +1628,10 @@ int afs_fs_give_up_all_callbacks(struct afs_net *net,
+ 
+ 	call->server = afs_use_server(server, afs_server_trace_give_up_cb);
+ 	afs_make_call(ac, call, GFP_NOFS);
+-	return afs_wait_for_call_to_complete(call, ac);
++	afs_wait_for_call_to_complete(call, ac);
++	ret = call->error;
++	afs_put_call(call);
++	return ret;
+ }
+ 
+ /*
+@@ -1899,7 +1903,7 @@ void afs_fs_inline_bulk_status(struct afs_operation *op)
+ 	int i;
+ 
+ 	if (test_bit(AFS_SERVER_FL_NO_IBULK, &op->server->flags)) {
+-		op->error = -ENOTSUPP;
++		afs_op_set_error(op, -ENOTSUPP);
+ 		return;
+ 	}
+ 
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 78efc97193493..d6eed332507f0 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -331,7 +331,7 @@ static void afs_fetch_status_success(struct afs_operation *op)
+ 
+ 	if (vnode->netfs.inode.i_state & I_NEW) {
+ 		ret = afs_inode_init_from_status(op, vp, vnode);
+-		op->error = ret;
++		afs_op_set_error(op, ret);
+ 		if (ret == 0)
+ 			afs_cache_permit(vnode, op->key, vp->cb_break_before, &vp->scb);
+ 	} else {
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 7385d62c8cf50..5f6db0ac06ac6 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -72,6 +72,12 @@ enum afs_call_state {
+ 	AFS_CALL_COMPLETE,		/* Completed or failed */
+ };
+ 
++struct afs_address {
++	struct rxrpc_peer	*peer;
++	u16			service_id;
++	short			last_error;	/* Last error from this address */
++};
++
+ /*
+  * List of server addresses.
+  */
+@@ -87,7 +93,7 @@ struct afs_addr_list {
+ 	enum dns_lookup_status	status:8;
+ 	unsigned long		failed;		/* Mask of addrs that failed locally/ICMP */
+ 	unsigned long		responded;	/* Mask of addrs that responded */
+-	struct sockaddr_rxrpc	addrs[] __counted_by(max_addrs);
++	struct afs_address	addrs[] __counted_by(max_addrs);
+ #define AFS_MAX_ADDRESSES ((unsigned int)(sizeof(unsigned long) * 8))
+ };
+ 
+@@ -116,7 +122,6 @@ struct afs_call {
+ 	};
+ 	void			*buffer;	/* reply receive buffer */
+ 	union {
+-		long			ret0;	/* Value to reply with instead of 0 */
+ 		struct afs_addr_list	*ret_alist;
+ 		struct afs_vldb_entry	*ret_vldb;
+ 		char			*ret_str;
+@@ -140,6 +145,7 @@ struct afs_call {
+ 	bool			upgrade;	/* T to request service upgrade */
+ 	bool			intr;		/* T if interruptible */
+ 	bool			unmarshalling_error; /* T if an unmarshalling error occurred */
++	bool			responded;	/* Got a response from the call (may be abort) */
+ 	u16			service_id;	/* Actual service ID (after upgrade) */
+ 	unsigned int		debug_id;	/* Trace ID */
+ 	u32			operation_ID;	/* operation ID for an incoming call */
+@@ -418,7 +424,7 @@ struct afs_vlserver {
+ 	atomic_t		probe_outstanding;
+ 	spinlock_t		probe_lock;
+ 	struct {
+-		unsigned int	rtt;		/* RTT in uS */
++		unsigned int	rtt;		/* Best RTT in uS (or UINT_MAX) */
+ 		u32		abort_code;
+ 		short		error;
+ 		unsigned short	flags;
+@@ -535,7 +541,7 @@ struct afs_server {
+ 	atomic_t		probe_outstanding;
+ 	spinlock_t		probe_lock;
+ 	struct {
+-		unsigned int	rtt;		/* RTT in uS */
++		unsigned int	rtt;		/* Best RTT in uS (or UINT_MAX) */
+ 		u32		abort_code;
+ 		short		error;
+ 		bool		responded:1;
+@@ -714,8 +720,10 @@ struct afs_permits {
+  * Error prioritisation and accumulation.
+  */
+ struct afs_error {
+-	short	error;			/* Accumulated error */
++	s32	abort_code;		/* Cumulative abort code */
++	short	error;			/* Cumulative error */
+ 	bool	responded;		/* T if server responded */
++	bool	aborted;		/* T if ->error is from an abort */
+ };
+ 
+ /*
+@@ -725,10 +733,8 @@ struct afs_addr_cursor {
+ 	struct afs_addr_list	*alist;		/* Current address list (pins ref) */
+ 	unsigned long		tried;		/* Tried addresses */
+ 	signed char		index;		/* Current address */
+-	bool			responded;	/* T if the current address responded */
+ 	unsigned short		nr_iterations;	/* Number of address iterations */
+-	short			error;
+-	u32			abort_code;
++	bool			call_responded;
+ };
+ 
+ /*
+@@ -741,13 +747,16 @@ struct afs_vl_cursor {
+ 	struct afs_vlserver	*server;	/* Server on which this resides */
+ 	struct key		*key;		/* Key for the server */
+ 	unsigned long		untried;	/* Bitmask of untried servers */
++	struct afs_error	cumul_error;	/* Cumulative error */
++	s32			call_abort_code;
+ 	short			index;		/* Current server */
+-	short			error;
++	short			call_error;	/* Error from single call */
+ 	unsigned short		flags;
+ #define AFS_VL_CURSOR_STOP	0x0001		/* Set to cease iteration */
+ #define AFS_VL_CURSOR_RETRY	0x0002		/* Set to do a retry */
+ #define AFS_VL_CURSOR_RETRIED	0x0004		/* Set if started a retry */
+-	unsigned short		nr_iterations;	/* Number of server iterations */
++	short			nr_iterations;	/* Number of server iterations */
++	bool			call_responded;	/* T if the current address responded */
+ };
+ 
+ /*
+@@ -798,8 +807,10 @@ struct afs_operation {
+ 	struct dentry		*dentry_2;	/* Second dentry to be altered */
+ 	struct timespec64	mtime;		/* Modification time to record */
+ 	struct timespec64	ctime;		/* Change time to set */
++	struct afs_error	cumul_error;	/* Cumulative error */
+ 	short			nr_files;	/* Number of entries in file[], more_files */
+-	short			error;
++	short			call_error;	/* Error from single call */
++	s32			call_abort_code; /* Abort code from single call */
+ 	unsigned int		debug_id;
+ 
+ 	unsigned int		cb_v_break;	/* Volume break counter before op */
+@@ -854,7 +865,9 @@ struct afs_operation {
+ 	struct afs_call		*call;
+ 	unsigned long		untried;	/* Bitmask of untried servers */
+ 	short			index;		/* Current server */
+-	unsigned short		nr_iterations;	/* Number of server iterations */
++	short			nr_iterations;	/* Number of server iterations */
++	bool			call_responded;	/* T if the current address responded */
++
+ 
+ 	unsigned int		flags;
+ #define AFS_OPERATION_STOP		0x0001	/* Set to cease iteration */
+@@ -962,19 +975,21 @@ static inline struct afs_addr_list *afs_get_addrlist(struct afs_addr_list *alist
+ 		refcount_inc(&alist->usage);
+ 	return alist;
+ }
+-extern struct afs_addr_list *afs_alloc_addrlist(unsigned int,
+-						unsigned short,
+-						unsigned short);
++extern struct afs_addr_list *afs_alloc_addrlist(unsigned int nr, u16 service_id);
+ extern void afs_put_addrlist(struct afs_addr_list *);
+ extern struct afs_vlserver_list *afs_parse_text_addrs(struct afs_net *,
+ 						      const char *, size_t, char,
+ 						      unsigned short, unsigned short);
++bool afs_addr_list_same(const struct afs_addr_list *a,
++			const struct afs_addr_list *b);
+ extern struct afs_vlserver_list *afs_dns_query(struct afs_cell *, time64_t *);
+ extern bool afs_iterate_addresses(struct afs_addr_cursor *);
+-extern int afs_end_cursor(struct afs_addr_cursor *);
++extern void afs_end_cursor(struct afs_addr_cursor *ac);
+ 
+-extern void afs_merge_fs_addr4(struct afs_addr_list *, __be32, u16);
+-extern void afs_merge_fs_addr6(struct afs_addr_list *, __be32 *, u16);
++extern int afs_merge_fs_addr4(struct afs_net *net, struct afs_addr_list *addr,
++			      __be32 xdr, u16 port);
++extern int afs_merge_fs_addr6(struct afs_net *net, struct afs_addr_list *addr,
++			      __be32 *xdr, u16 port);
+ 
+ /*
+  * callback.c
+@@ -1133,11 +1148,6 @@ extern bool afs_begin_vnode_operation(struct afs_operation *);
+ extern void afs_wait_for_operation(struct afs_operation *);
+ extern int afs_do_sync_operation(struct afs_operation *);
+ 
+-static inline void afs_op_nomem(struct afs_operation *op)
+-{
+-	op->error = -ENOMEM;
+-}
+-
+ static inline void afs_op_set_vnode(struct afs_operation *op, unsigned int n,
+ 				    struct afs_vnode *vnode)
+ {
+@@ -1231,6 +1241,31 @@ static inline void __afs_stat(atomic_t *s)
+ extern int afs_abort_to_error(u32);
+ extern void afs_prioritise_error(struct afs_error *, int, u32);
+ 
++static inline void afs_op_nomem(struct afs_operation *op)
++{
++	op->cumul_error.error = -ENOMEM;
++}
++
++static inline int afs_op_error(const struct afs_operation *op)
++{
++	return op->cumul_error.error;
++}
++
++static inline s32 afs_op_abort_code(const struct afs_operation *op)
++{
++	return op->cumul_error.abort_code;
++}
++
++static inline int afs_op_set_error(struct afs_operation *op, int error)
++{
++	return op->cumul_error.error = error;
++}
++
++static inline void afs_op_accumulate_error(struct afs_operation *op, int error, s32 abort_code)
++{
++	afs_prioritise_error(&op->cumul_error, error, abort_code);
++}
++
+ /*
+  * mntpt.c
+  */
+@@ -1274,7 +1309,7 @@ extern void __net_exit afs_close_socket(struct afs_net *);
+ extern void afs_charge_preallocation(struct work_struct *);
+ extern void afs_put_call(struct afs_call *);
+ extern void afs_make_call(struct afs_addr_cursor *, struct afs_call *, gfp_t);
+-extern long afs_wait_for_call_to_complete(struct afs_call *, struct afs_addr_cursor *);
++void afs_wait_for_call_to_complete(struct afs_call *call, struct afs_addr_cursor *ac);
+ extern struct afs_call *afs_alloc_flat_call(struct afs_net *,
+ 					    const struct afs_call_type *,
+ 					    size_t, size_t);
+@@ -1401,8 +1436,7 @@ extern void __exit afs_clean_up_permit_cache(void);
+  */
+ extern spinlock_t afs_server_peer_lock;
+ 
+-extern struct afs_server *afs_find_server(struct afs_net *,
+-					  const struct sockaddr_rxrpc *);
++extern struct afs_server *afs_find_server(struct afs_net *, const struct rxrpc_peer *);
+ extern struct afs_server *afs_find_server_by_uuid(struct afs_net *, const uuid_t *);
+ extern struct afs_server *afs_lookup_server(struct afs_cell *, struct key *, const uuid_t *, u32);
+ extern struct afs_server *afs_get_server(struct afs_server *, enum afs_server_trace);
+@@ -1603,7 +1637,7 @@ static inline void afs_update_dentry_version(struct afs_operation *op,
+ 					     struct afs_vnode_param *dir_vp,
+ 					     struct dentry *dentry)
+ {
+-	if (!op->error)
++	if (!op->cumul_error.error)
+ 		dentry->d_fsdata =
+ 			(void *)(unsigned long)dir_vp->scb.status.data_version;
+ }
+diff --git a/fs/afs/misc.c b/fs/afs/misc.c
+index 805328ca54284..b8180bf2281f7 100644
+--- a/fs/afs/misc.c
++++ b/fs/afs/misc.c
+@@ -116,6 +116,8 @@ void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code)
+ {
+ 	switch (error) {
+ 	case 0:
++		e->aborted = false;
++		e->error = 0;
+ 		return;
+ 	default:
+ 		if (e->error == -ETIMEDOUT ||
+@@ -161,12 +163,16 @@ void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code)
+ 		if (e->responded)
+ 			return;
+ 		e->error = error;
++		e->aborted = false;
+ 		return;
+ 
+ 	case -ECONNABORTED:
+-		error = afs_abort_to_error(abort_code);
+-		fallthrough;
++		e->error = afs_abort_to_error(abort_code);
++		e->aborted = true;
++		e->responded = true;
++		return;
+ 	case -ENETRESET: /* Responded, but we seem to have changed address */
++		e->aborted = false;
+ 		e->responded = true;
+ 		e->error = error;
+ 		return;
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 2a0c83d715659..8a65a06908d21 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -307,7 +307,7 @@ static int afs_proc_cell_vlservers_show(struct seq_file *m, void *v)
+ 		for (i = 0; i < alist->nr_addrs; i++)
+ 			seq_printf(m, " %c %pISpc\n",
+ 				   alist->preferred == i ? '>' : '-',
+-				   &alist->addrs[i].transport);
++				   rxrpc_kernel_remote_addr(alist->addrs[i].peer));
+ 	}
+ 	seq_printf(m, " info: fl=%lx rtt=%d\n", vlserver->flags, vlserver->rtt);
+ 	seq_printf(m, " probe: fl=%x e=%d ac=%d out=%d\n",
+@@ -398,9 +398,10 @@ static int afs_proc_servers_show(struct seq_file *m, void *v)
+ 	seq_printf(m, "  - ALIST v=%u rsp=%lx f=%lx\n",
+ 		   alist->version, alist->responded, alist->failed);
+ 	for (i = 0; i < alist->nr_addrs; i++)
+-		seq_printf(m, "    [%x] %pISpc%s\n",
+-			   i, &alist->addrs[i].transport,
+-			   alist->preferred == i ? "*" : "");
++		seq_printf(m, "    [%x] %pISpc%s rtt=%d\n",
++			   i, rxrpc_kernel_remote_addr(alist->addrs[i].peer),
++			   alist->preferred == i ? "*" : "",
++			   rxrpc_kernel_get_srtt(alist->addrs[i].peer));
+ 	return 0;
+ }
+ 
+diff --git a/fs/afs/rotate.c b/fs/afs/rotate.c
+index a840c3588ebbb..68c88e3a0916b 100644
+--- a/fs/afs/rotate.c
++++ b/fs/afs/rotate.c
+@@ -13,6 +13,7 @@
+ #include <linux/sched/signal.h>
+ #include "internal.h"
+ #include "afs_fs.h"
++#include "protocol_uae.h"
+ 
+ /*
+  * Begin iteration through a server list, starting with the vnode's last used
+@@ -50,7 +51,7 @@ static bool afs_start_fs_iteration(struct afs_operation *op,
+ 		 * and have to return an error.
+ 		 */
+ 		if (op->flags & AFS_OPERATION_CUR_ONLY) {
+-			op->error = -ESTALE;
++			afs_op_set_error(op, -ESTALE);
+ 			return false;
+ 		}
+ 
+@@ -92,7 +93,7 @@ static bool afs_sleep_and_retry(struct afs_operation *op)
+ 	if (!(op->flags & AFS_OPERATION_UNINTR)) {
+ 		msleep_interruptible(1000);
+ 		if (signal_pending(current)) {
+-			op->error = -ERESTARTSYS;
++			afs_op_set_error(op, -ERESTARTSYS);
+ 			return false;
+ 		}
+ 	} else {
+@@ -111,31 +112,34 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 	struct afs_addr_list *alist;
+ 	struct afs_server *server;
+ 	struct afs_vnode *vnode = op->file[0].vnode;
+-	struct afs_error e;
+-	u32 rtt;
+-	int error = op->ac.error, i;
++	unsigned int rtt;
++	s32 abort_code = op->call_abort_code;
++	int error = op->call_error, i;
+ 
+-	_enter("%lx[%d],%lx[%d],%d,%d",
++	op->nr_iterations++;
++
++	_enter("OP=%x+%x,%llx,%lx[%d],%lx[%d],%d,%d",
++	       op->debug_id, op->nr_iterations, op->volume->vid,
+ 	       op->untried, op->index,
+ 	       op->ac.tried, op->ac.index,
+-	       error, op->ac.abort_code);
++	       error, abort_code);
+ 
+ 	if (op->flags & AFS_OPERATION_STOP) {
+ 		_leave(" = f [stopped]");
+ 		return false;
+ 	}
+ 
+-	op->nr_iterations++;
+-
+-	/* Evaluate the result of the previous operation, if there was one. */
+-	switch (error) {
+-	case SHRT_MAX:
++	if (op->nr_iterations == 0)
+ 		goto start;
+ 
++	/* Evaluate the result of the previous operation, if there was one. */
++	switch (op->call_error) {
+ 	case 0:
++		op->cumul_error.responded = true;
++		fallthrough;
+ 	default:
+ 		/* Success or local failure.  Stop. */
+-		op->error = error;
++		afs_op_set_error(op, error);
+ 		op->flags |= AFS_OPERATION_STOP;
+ 		_leave(" = f [okay/local %d]", error);
+ 		return false;
+@@ -143,16 +147,27 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 	case -ECONNABORTED:
+ 		/* The far side rejected the operation on some grounds.  This
+ 		 * might involve the server being busy or the volume having been moved.
++		 *
++		 * Note that various V* errors should not be sent to a cache manager
++		 * by a fileserver as they should be translated to more modern UAE*
++		 * errors instead.  IBM AFS and OpenAFS fileservers, however, do leak
++		 * these abort codes.
+ 		 */
+-		switch (op->ac.abort_code) {
++		op->cumul_error.responded = true;
++		switch (abort_code) {
+ 		case VNOVOL:
+ 			/* This fileserver doesn't know about the volume.
+ 			 * - May indicate that the VL is wrong - retry once and compare
+ 			 *   the results.
+ 			 * - May indicate that the fileserver couldn't attach to the vol.
++			 * - The volume might have been temporarily removed so that it can
++			 *   be replaced by a volume restore.  "vos" might have ended one
++			 *   transaction and has yet to create the next.
++			 * - The volume might not be blessed or might not be in-service
++			 *   (administrative action).
+ 			 */
+ 			if (op->flags & AFS_OPERATION_VNOVOL) {
+-				op->error = -EREMOTEIO;
++				afs_op_accumulate_error(op, -EREMOTEIO, abort_code);
+ 				goto next_server;
+ 			}
+ 
+@@ -162,11 +177,13 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 
+ 			set_bit(AFS_VOLUME_NEEDS_UPDATE, &op->volume->flags);
+ 			error = afs_check_volume_status(op->volume, op);
+-			if (error < 0)
+-				goto failed_set_error;
++			if (error < 0) {
++				afs_op_set_error(op, error);
++				goto failed;
++			}
+ 
+ 			if (test_bit(AFS_VOLUME_DELETED, &op->volume->flags)) {
+-				op->error = -ENOMEDIUM;
++				afs_op_set_error(op, -ENOMEDIUM);
+ 				goto failed;
+ 			}
+ 
+@@ -174,7 +191,7 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 			 * it's the fileserver having trouble.
+ 			 */
+ 			if (rcu_access_pointer(op->volume->servers) == op->server_list) {
+-				op->error = -EREMOTEIO;
++				afs_op_accumulate_error(op, -EREMOTEIO, abort_code);
+ 				goto next_server;
+ 			}
+ 
+@@ -183,42 +200,91 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 			_leave(" = t [vnovol]");
+ 			return true;
+ 
+-		case VSALVAGE: /* TODO: Should this return an error or iterate? */
+ 		case VVOLEXISTS:
+-		case VNOSERVICE:
+ 		case VONLINE:
+-		case VDISKFULL:
+-		case VOVERQUOTA:
+-			op->error = afs_abort_to_error(op->ac.abort_code);
++			/* These should not be returned from the fileserver. */
++			pr_warn("Fileserver returned unexpected abort %d\n",
++				abort_code);
++			afs_op_accumulate_error(op, -EREMOTEIO, abort_code);
++			goto next_server;
++
++		case VNOSERVICE:
++			/* Prior to AFS 3.2 VNOSERVICE was returned from the fileserver
++			 * if the volume was neither in-service nor administratively
++			 * blessed.  All usage was replaced by VNOVOL because AFS 3.1 and
++			 * earlier cache managers did not handle VNOSERVICE and assumed
++			 * it was the client OS's errno 105.
++			 *
++			 * Starting with OpenAFS 1.4.8 VNOSERVICE was repurposed as the
++			 * fileserver idle dead time error which was sent in place of
++			 * RX_CALL_TIMEOUT (-3).  The error was intended to be sent if the
++			 * fileserver took too long to send a reply to the client.
++			 * RX_CALL_TIMEOUT would have caused the cache manager to mark the
++			 * server down whereas VNOSERVICE since AFS 3.2 would cause cache
++			 * server down whereas VNOSERVICE since AFS 3.2 would cause the cache
++			 * instance as unusable.
++			 *
++			 * The idle dead logic resulted in cache inconsistency since a
++			 * state-changing call that the cache manager assumed was dead
++			 * could still be processed to completion by the fileserver.  This
++			 * logic was removed in OpenAFS 1.8.0 and VNOSERVICE is no longer
++			 * returned.  However, many 1.4.8 through 1.6.24 fileservers are
++			 * still in existence.
++			 *
++			 * AuriStorFS fileservers have never returned VNOSERVICE.
++			 *
++			 * VNOSERVICE should be treated as an alias for RX_CALL_TIMEOUT.
++			 */
++		case RX_CALL_TIMEOUT:
++			afs_op_accumulate_error(op, -ETIMEDOUT, abort_code);
+ 			goto next_server;
+ 
++		case VSALVAGING: /* This error should not be leaked to cache managers
++				  * but is from OpenAFS demand attach fileservers.
++				  * It should be treated as an alias for VOFFLINE.
++				  */
++		case VSALVAGE: /* VSALVAGE should be treated as a synonym of VOFFLINE */
+ 		case VOFFLINE:
++			/* The volume is in use by the volserver or another volume utility
++			 * for an operation that might alter the contents.  The volume is
++			 * expected to come back but it might take a long time (could be
++			 * days).
++			 */
+ 			if (!test_and_set_bit(AFS_VOLUME_OFFLINE, &op->volume->flags)) {
+-				afs_busy(op->volume, op->ac.abort_code);
++				afs_busy(op->volume, abort_code);
+ 				clear_bit(AFS_VOLUME_BUSY, &op->volume->flags);
+ 			}
+ 			if (op->flags & AFS_OPERATION_NO_VSLEEP) {
+-				op->error = -EADV;
++				afs_op_set_error(op, -EADV);
+ 				goto failed;
+ 			}
+ 			if (op->flags & AFS_OPERATION_CUR_ONLY) {
+-				op->error = -ESTALE;
++				afs_op_set_error(op, -ESTALE);
+ 				goto failed;
+ 			}
+ 			goto busy;
+ 
+-		case VSALVAGING:
+-		case VRESTARTING:
++		case VRESTARTING: /* The fileserver is either shutting down or starting up. */
+ 		case VBUSY:
+-			/* Retry after going round all the servers unless we
+-			 * have a file lock we need to maintain.
++			/* The volume is in use by the volserver or another volume
++			 * utility for an operation that is not expected to alter the
++			 * contents of the volume.  VBUSY does not need to be returned
++			 * for a ROVOL or BACKVOL bound to an ITBusy volserver
++			 * transaction.  The fileserver is permitted to continue serving
++			 * content from ROVOLs and BACKVOLs during an ITBusy transaction
++			 * because the content will not change.  However, many fileserver
++			 * releases do return VBUSY for ROVOL and BACKVOL instances under
++			 * many circumstances.
++			 *
++			 * Retry after going round all the servers unless we have a file
++			 * lock we need to maintain.
+ 			 */
+ 			if (op->flags & AFS_OPERATION_NO_VSLEEP) {
+-				op->error = -EBUSY;
++				afs_op_set_error(op, -EBUSY);
+ 				goto failed;
+ 			}
+ 			if (!test_and_set_bit(AFS_VOLUME_BUSY, &op->volume->flags)) {
+-				afs_busy(op->volume, op->ac.abort_code);
++				afs_busy(op->volume, abort_code);
+ 				clear_bit(AFS_VOLUME_OFFLINE, &op->volume->flags);
+ 			}
+ 		busy:
+@@ -226,7 +292,7 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 				if (!afs_sleep_and_retry(op))
+ 					goto failed;
+ 
+-				 /* Retry with same server & address */
++				/* Retry with same server & address */
+ 				_leave(" = t [vbusy]");
+ 				return true;
+ 			}
+@@ -243,7 +309,7 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 			 * honour, just in case someone sets up a loop.
+ 			 */
+ 			if (op->flags & AFS_OPERATION_VMOVED) {
+-				op->error = -EREMOTEIO;
++				afs_op_set_error(op, -EREMOTEIO);
+ 				goto failed;
+ 			}
+ 			op->flags |= AFS_OPERATION_VMOVED;
+@@ -251,8 +317,10 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 			set_bit(AFS_VOLUME_WAIT, &op->volume->flags);
+ 			set_bit(AFS_VOLUME_NEEDS_UPDATE, &op->volume->flags);
+ 			error = afs_check_volume_status(op->volume, op);
+-			if (error < 0)
+-				goto failed_set_error;
++			if (error < 0) {
++				afs_op_set_error(op, error);
++				goto failed;
++			}
+ 
+ 			/* If the server list didn't change, then the VLDB is
+ 			 * out of sync with the fileservers.  This is hopefully
+@@ -264,22 +332,48 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 			 * TODO: Retry a few times with sleeps.
+ 			 */
+ 			if (rcu_access_pointer(op->volume->servers) == op->server_list) {
+-				op->error = -ENOMEDIUM;
++				afs_op_accumulate_error(op, -ENOMEDIUM, abort_code);
+ 				goto failed;
+ 			}
+ 
+ 			goto restart_from_beginning;
+ 
++		case UAEIO:
++		case VIO:
++			afs_op_accumulate_error(op, -EREMOTEIO, abort_code);
++			if (op->volume->type != AFSVL_RWVOL)
++				goto next_server;
++			goto failed;
++
++		case VDISKFULL:
++		case UAENOSPC:
++			/* The partition is full.  Only applies to RWVOLs.
++			 * Translate locally and return ENOSPC.
++			 * No replicas to failover to.
++			 * No replicas to fail over to.
++			afs_op_set_error(op, -ENOSPC);
++			goto failed_but_online;
++
++		case VOVERQUOTA:
++		case UAEDQUOT:
++			/* Volume is full.  Only applies to RWVOLs.
++			 * Translate locally and return EDQUOT.
++			 * No replicas to fail over to.
++			 */
++			afs_op_set_error(op, -EDQUOT);
++			goto failed_but_online;
++
+ 		default:
++			afs_op_accumulate_error(op, error, abort_code);
++		failed_but_online:
+ 			clear_bit(AFS_VOLUME_OFFLINE, &op->volume->flags);
+ 			clear_bit(AFS_VOLUME_BUSY, &op->volume->flags);
+-			op->error = afs_abort_to_error(op->ac.abort_code);
+ 			goto failed;
+ 		}
+ 
+ 	case -ETIMEDOUT:
+ 	case -ETIME:
+-		if (op->error != -EDESTADDRREQ)
++		if (afs_op_error(op) != -EDESTADDRREQ)
+ 			goto iterate_address;
+ 		fallthrough;
+ 	case -ERFKILL:
+@@ -289,7 +383,7 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 	case -EHOSTDOWN:
+ 	case -ECONNREFUSED:
+ 		_debug("no conn");
+-		op->error = error;
++		afs_op_accumulate_error(op, error, 0);
+ 		goto iterate_address;
+ 
+ 	case -ENETRESET:
+@@ -298,7 +392,7 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 		fallthrough;
+ 	case -ECONNRESET:
+ 		_debug("call reset");
+-		op->error = error;
++		afs_op_set_error(op, error);
+ 		goto failed;
+ 	}
+ 
+@@ -314,8 +408,10 @@ start:
+ 	 * volume may have moved or even have been deleted.
+ 	 */
+ 	error = afs_check_volume_status(op->volume, op);
+-	if (error < 0)
+-		goto failed_set_error;
++	if (error < 0) {
++		afs_op_set_error(op, error);
++		goto failed;
++	}
+ 
+ 	if (!afs_start_fs_iteration(op, vnode))
+ 		goto failed;
+@@ -326,8 +422,10 @@ pick_server:
+ 	_debug("pick [%lx]", op->untried);
+ 
+ 	error = afs_wait_for_fs_probes(op->server_list, op->untried);
+-	if (error < 0)
+-		goto failed_set_error;
++	if (error < 0) {
++		afs_op_set_error(op, error);
++		goto failed;
++	}
+ 
+ 	/* Pick the untried server with the lowest RTT.  If we have outstanding
+ 	 * callbacks, we stick with the server we're already using if we can.
+@@ -341,7 +439,7 @@ pick_server:
+ 	}
+ 
+ 	op->index = -1;
+-	rtt = U32_MAX;
++	rtt = UINT_MAX;
+ 	for (i = 0; i < op->server_list->nr_servers; i++) {
+ 		struct afs_server *s = op->server_list->servers[i].server;
+ 
+@@ -409,8 +507,9 @@ iterate_address:
+ 
+ 	_debug("address [%u] %u/%u %pISp",
+ 	       op->index, op->ac.index, op->ac.alist->nr_addrs,
+-	       &op->ac.alist->addrs[op->ac.index].transport);
++	       rxrpc_kernel_remote_addr(op->ac.alist->addrs[op->ac.index].peer));
+ 
++	op->call_responded = false;
+ 	_leave(" = t");
+ 	return true;
+ 
+@@ -428,7 +527,8 @@ out_of_addresses:
+ 			op->flags &= ~AFS_OPERATION_RETRY_SERVER;
+ 			goto retry_server;
+ 		case -ERESTARTSYS:
+-			goto failed_set_error;
++			afs_op_set_error(op, error);
++			goto failed;
+ 		case -ETIME:
+ 		case -EDESTADDRREQ:
+ 			goto next_server;
+@@ -447,23 +547,18 @@ no_more_servers:
+ 	if (op->flags & AFS_OPERATION_VBUSY)
+ 		goto restart_from_beginning;
+ 
+-	e.error = -EDESTADDRREQ;
+-	e.responded = false;
+ 	for (i = 0; i < op->server_list->nr_servers; i++) {
+ 		struct afs_server *s = op->server_list->servers[i].server;
+ 
+-		afs_prioritise_error(&e, READ_ONCE(s->probe.error),
+-				     s->probe.abort_code);
++		error = READ_ONCE(s->probe.error);
++		if (error < 0)
++			afs_op_accumulate_error(op, error, s->probe.abort_code);
+ 	}
+ 
+-	error = e.error;
+-
+-failed_set_error:
+-	op->error = error;
+ failed:
+ 	op->flags |= AFS_OPERATION_STOP;
+ 	afs_end_cursor(&op->ac);
+-	_leave(" = f [failed %d]", op->error);
++	_leave(" = f [failed %d]", afs_op_error(op));
+ 	return false;
+ }
+ 
+@@ -482,11 +577,13 @@ void afs_dump_edestaddrreq(const struct afs_operation *op)
+ 	rcu_read_lock();
+ 
+ 	pr_notice("EDESTADDR occurred\n");
+-	pr_notice("FC: cbb=%x cbb2=%x fl=%x err=%hd\n",
++	pr_notice("OP: cbb=%x cbb2=%x fl=%x err=%hd\n",
+ 		  op->file[0].cb_break_before,
+-		  op->file[1].cb_break_before, op->flags, op->error);
+-	pr_notice("FC: ut=%lx ix=%d ni=%u\n",
++		  op->file[1].cb_break_before, op->flags, op->cumul_error.error);
++	pr_notice("OP: ut=%lx ix=%d ni=%u\n",
+ 		  op->untried, op->index, op->nr_iterations);
++	pr_notice("OP: call  er=%d ac=%d r=%u\n",
++		  op->call_error, op->call_abort_code, op->call_responded);
+ 
+ 	if (op->server_list) {
+ 		const struct afs_server_list *sl = op->server_list;
+@@ -511,8 +608,7 @@ void afs_dump_edestaddrreq(const struct afs_operation *op)
+ 		}
+ 	}
+ 
+-	pr_notice("AC: t=%lx ax=%u ac=%d er=%d r=%u ni=%u\n",
+-		  op->ac.tried, op->ac.index, op->ac.abort_code, op->ac.error,
+-		  op->ac.responded, op->ac.nr_iterations);
++	pr_notice("AC: t=%lx ax=%u ni=%u\n",
++		  op->ac.tried, op->ac.index, op->ac.nr_iterations);
+ 	rcu_read_unlock();
+ }
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index d642d06a453be..0b3e2f20b0e08 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -296,7 +296,8 @@ static void afs_notify_end_request_tx(struct sock *sock,
+  */
+ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ {
+-	struct sockaddr_rxrpc *srx = &ac->alist->addrs[ac->index];
++	struct afs_address *addr = &ac->alist->addrs[ac->index];
++	struct rxrpc_peer *peer = addr->peer;
+ 	struct rxrpc_call *rxcall;
+ 	struct msghdr msg;
+ 	struct kvec iov[1];
+@@ -304,7 +305,7 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ 	s64 tx_total_len;
+ 	int ret;
+ 
+-	_enter(",{%pISp},", &srx->transport);
++	_enter(",{%pISp},", rxrpc_kernel_remote_addr(addr->peer));
+ 
+ 	ASSERT(call->type != NULL);
+ 	ASSERT(call->type->name != NULL);
+@@ -333,7 +334,7 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ 	}
+ 
+ 	/* create a call */
+-	rxcall = rxrpc_kernel_begin_call(call->net->socket, srx, call->key,
++	rxcall = rxrpc_kernel_begin_call(call->net->socket, peer, call->key,
+ 					 (unsigned long)call,
+ 					 tx_total_len,
+ 					 call->max_lifespan,
+@@ -341,6 +342,7 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
+ 					 (call->async ?
+ 					  afs_wake_up_async_call :
+ 					  afs_wake_up_call_waiter),
++					 addr->service_id,
+ 					 call->upgrade,
+ 					 (call->intr ? RXRPC_PREINTERRUPTIBLE :
+ 					  RXRPC_UNINTERRUPTIBLE),
+@@ -406,8 +408,7 @@ error_do_abort:
+ 		rxrpc_kernel_recv_data(call->net->socket, rxcall,
+ 				       &msg.msg_iter, &len, false,
+ 				       &call->abort_code, &call->service_id);
+-		ac->abort_code = call->abort_code;
+-		ac->responded = true;
++		call->responded = true;
+ 	}
+ 	call->error = ret;
+ 	trace_afs_call_done(call);
+@@ -427,7 +428,7 @@ error_kill_call:
+ 		afs_set_call_complete(call, ret, 0);
+ 	}
+ 
+-	ac->error = ret;
++	call->error = ret;
+ 	call->state = AFS_CALL_COMPLETE;
+ 	_leave(" = %d", ret);
+ }
+@@ -461,7 +462,7 @@ static void afs_log_error(struct afs_call *call, s32 remote_abort)
+ 		max = m + 1;
+ 		pr_notice("kAFS: Peer reported %s failure on %s [%pISp]\n",
+ 			  msg, call->type->name,
+-			  &call->alist->addrs[call->addr_ix].transport);
++			  rxrpc_kernel_remote_addr(call->alist->addrs[call->addr_ix].peer));
+ 	}
+ }
+ 
+@@ -508,6 +509,7 @@ static void afs_deliver_to_call(struct afs_call *call)
+ 			ret = -EBADMSG;
+ 		switch (ret) {
+ 		case 0:
++			call->responded = true;
+ 			afs_queue_call_work(call);
+ 			if (state == AFS_CALL_CL_PROC_REPLY) {
+ 				if (call->op)
+@@ -522,9 +524,11 @@ static void afs_deliver_to_call(struct afs_call *call)
+ 			goto out;
+ 		case -ECONNABORTED:
+ 			ASSERTCMP(state, ==, AFS_CALL_COMPLETE);
++			call->responded = true;
+ 			afs_log_error(call, call->abort_code);
+ 			goto done;
+ 		case -ENOTSUPP:
++			call->responded = true;
+ 			abort_code = RXGEN_OPCODE;
+ 			rxrpc_kernel_abort_call(call->net->socket, call->rxcall,
+ 						abort_code, ret,
+@@ -571,50 +575,46 @@ call_complete:
+ }
+ 
+ /*
+- * Wait synchronously for a call to complete and clean up the call struct.
++ * Wait synchronously for a call to complete.
+  */
+-long afs_wait_for_call_to_complete(struct afs_call *call,
+-				   struct afs_addr_cursor *ac)
++void afs_wait_for_call_to_complete(struct afs_call *call, struct afs_addr_cursor *ac)
+ {
+-	long ret;
+ 	bool rxrpc_complete = false;
+ 
+-	DECLARE_WAITQUEUE(myself, current);
+-
+ 	_enter("");
+ 
+-	ret = call->error;
+-	if (ret < 0)
+-		goto out;
++	if (!afs_check_call_state(call, AFS_CALL_COMPLETE)) {
++		DECLARE_WAITQUEUE(myself, current);
++
++		add_wait_queue(&call->waitq, &myself);
++		for (;;) {
++			set_current_state(TASK_UNINTERRUPTIBLE);
++
++			/* deliver any messages that are in the queue */
++			if (!afs_check_call_state(call, AFS_CALL_COMPLETE) &&
++			    call->need_attention) {
++				call->need_attention = false;
++				__set_current_state(TASK_RUNNING);
++				afs_deliver_to_call(call);
++				continue;
++			}
+ 
+-	add_wait_queue(&call->waitq, &myself);
+-	for (;;) {
+-		set_current_state(TASK_UNINTERRUPTIBLE);
+-
+-		/* deliver any messages that are in the queue */
+-		if (!afs_check_call_state(call, AFS_CALL_COMPLETE) &&
+-		    call->need_attention) {
+-			call->need_attention = false;
+-			__set_current_state(TASK_RUNNING);
+-			afs_deliver_to_call(call);
+-			continue;
+-		}
++			if (afs_check_call_state(call, AFS_CALL_COMPLETE))
++				break;
+ 
+-		if (afs_check_call_state(call, AFS_CALL_COMPLETE))
+-			break;
++			if (!rxrpc_kernel_check_life(call->net->socket, call->rxcall)) {
++				/* rxrpc terminated the call. */
++				rxrpc_complete = true;
++				break;
++			}
+ 
+-		if (!rxrpc_kernel_check_life(call->net->socket, call->rxcall)) {
+-			/* rxrpc terminated the call. */
+-			rxrpc_complete = true;
+-			break;
++			schedule();
+ 		}
+ 
+-		schedule();
++		remove_wait_queue(&call->waitq, &myself);
++		__set_current_state(TASK_RUNNING);
+ 	}
+ 
+-	remove_wait_queue(&call->waitq, &myself);
+-	__set_current_state(TASK_RUNNING);
+-
+ 	if (!afs_check_call_state(call, AFS_CALL_COMPLETE)) {
+ 		if (rxrpc_complete) {
+ 			afs_set_call_complete(call, call->error, call->abort_code);
+@@ -628,28 +628,8 @@ long afs_wait_for_call_to_complete(struct afs_call *call,
+ 		}
+ 	}
+ 
+-	spin_lock_bh(&call->state_lock);
+-	ac->abort_code = call->abort_code;
+-	ac->error = call->error;
+-	spin_unlock_bh(&call->state_lock);
+-
+-	ret = ac->error;
+-	switch (ret) {
+-	case 0:
+-		ret = call->ret0;
+-		call->ret0 = 0;
+-
+-		fallthrough;
+-	case -ECONNABORTED:
+-		ac->responded = true;
+-		break;
+-	}
+-
+-out:
+-	_debug("call complete");
+-	afs_put_call(call);
+-	_leave(" = %p", (void *)ret);
+-	return ret;
++	if (call->error == 0 || call->error == -ECONNABORTED)
++		call->responded = true;
+ }
+ 
+ /*
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index b5237206eac3e..f7791ef13618a 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -21,13 +21,12 @@ static void __afs_put_server(struct afs_net *, struct afs_server *);
+ /*
+  * Find a server by one of its addresses.
+  */
+-struct afs_server *afs_find_server(struct afs_net *net,
+-				   const struct sockaddr_rxrpc *srx)
++struct afs_server *afs_find_server(struct afs_net *net, const struct rxrpc_peer *peer)
+ {
+ 	const struct afs_addr_list *alist;
+ 	struct afs_server *server = NULL;
+ 	unsigned int i;
+-	int seq = 0, diff;
++	int seq = 1;
+ 
+ 	rcu_read_lock();
+ 
+@@ -35,39 +34,14 @@ struct afs_server *afs_find_server(struct afs_net *net,
+ 		if (server)
+ 			afs_unuse_server_notime(net, server, afs_server_trace_put_find_rsq);
+ 		server = NULL;
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&net->fs_addr_lock, &seq);
+ 
+-		if (srx->transport.family == AF_INET6) {
+-			const struct sockaddr_in6 *a = &srx->transport.sin6, *b;
+-			hlist_for_each_entry_rcu(server, &net->fs_addresses6, addr6_link) {
+-				alist = rcu_dereference(server->addresses);
+-				for (i = alist->nr_ipv4; i < alist->nr_addrs; i++) {
+-					b = &alist->addrs[i].transport.sin6;
+-					diff = ((u16 __force)a->sin6_port -
+-						(u16 __force)b->sin6_port);
+-					if (diff == 0)
+-						diff = memcmp(&a->sin6_addr,
+-							      &b->sin6_addr,
+-							      sizeof(struct in6_addr));
+-					if (diff == 0)
+-						goto found;
+-				}
+-			}
+-		} else {
+-			const struct sockaddr_in *a = &srx->transport.sin, *b;
+-			hlist_for_each_entry_rcu(server, &net->fs_addresses4, addr4_link) {
+-				alist = rcu_dereference(server->addresses);
+-				for (i = 0; i < alist->nr_ipv4; i++) {
+-					b = &alist->addrs[i].transport.sin;
+-					diff = ((u16 __force)a->sin_port -
+-						(u16 __force)b->sin_port);
+-					if (diff == 0)
+-						diff = ((u32 __force)a->sin_addr.s_addr -
+-							(u32 __force)b->sin_addr.s_addr);
+-					if (diff == 0)
+-						goto found;
+-				}
+-			}
++		hlist_for_each_entry_rcu(server, &net->fs_addresses6, addr6_link) {
++			alist = rcu_dereference(server->addresses);
++			for (i = 0; i < alist->nr_addrs; i++)
++				if (alist->addrs[i].peer == peer)
++					goto found;
+ 		}
+ 
+ 		server = NULL;
+@@ -90,7 +64,7 @@ struct afs_server *afs_find_server_by_uuid(struct afs_net *net, const uuid_t *uu
+ {
+ 	struct afs_server *server = NULL;
+ 	struct rb_node *p;
+-	int diff, seq = 0;
++	int diff, seq = 1;
+ 
+ 	_enter("%pU", uuid);
+ 
+@@ -102,7 +76,7 @@ struct afs_server *afs_find_server_by_uuid(struct afs_net *net, const uuid_t *uu
+ 		if (server)
+ 			afs_unuse_server(net, server, afs_server_trace_put_uuid_rsq);
+ 		server = NULL;
+-
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&net->fs_lock, &seq);
+ 
+ 		p = net->fs_servers.rb_node;
+@@ -463,7 +437,6 @@ static void afs_give_up_callbacks(struct afs_net *net, struct afs_server *server
+ 	struct afs_addr_cursor ac = {
+ 		.alist	= alist,
+ 		.index	= alist->preferred,
+-		.error	= 0,
+ 	};
+ 
+ 	afs_fs_give_up_all_callbacks(net, server, &ac, NULL);
+@@ -655,8 +628,8 @@ static noinline bool afs_update_server_record(struct afs_operation *op,
+ 			_leave(" = t [intr]");
+ 			return true;
+ 		}
+-		op->error = PTR_ERR(alist);
+-		_leave(" = f [%d]", op->error);
++		afs_op_set_error(op, PTR_ERR(alist));
++		_leave(" = f [%d]", afs_op_error(op));
+ 		return false;
+ 	}
+ 
+@@ -710,7 +683,7 @@ wait:
+ 			  (op->flags & AFS_OPERATION_UNINTR) ?
+ 			  TASK_UNINTERRUPTIBLE : TASK_INTERRUPTIBLE);
+ 	if (ret == -ERESTARTSYS) {
+-		op->error = ret;
++		afs_op_set_error(op, ret);
+ 		_leave(" = f [intr]");
+ 		return false;
+ 	}
+diff --git a/fs/afs/vl_alias.c b/fs/afs/vl_alias.c
+index f04a80e4f5c3f..89cadd9a69e12 100644
+--- a/fs/afs/vl_alias.c
++++ b/fs/afs/vl_alias.c
+@@ -32,55 +32,6 @@ static struct afs_volume *afs_sample_volume(struct afs_cell *cell, struct key *k
+ 	return volume;
+ }
+ 
+-/*
+- * Compare two addresses.
+- */
+-static int afs_compare_addrs(const struct sockaddr_rxrpc *srx_a,
+-			     const struct sockaddr_rxrpc *srx_b)
+-{
+-	short port_a, port_b;
+-	int addr_a, addr_b, diff;
+-
+-	diff = (short)srx_a->transport_type - (short)srx_b->transport_type;
+-	if (diff)
+-		goto out;
+-
+-	switch (srx_a->transport_type) {
+-	case AF_INET: {
+-		const struct sockaddr_in *a = &srx_a->transport.sin;
+-		const struct sockaddr_in *b = &srx_b->transport.sin;
+-		addr_a = ntohl(a->sin_addr.s_addr);
+-		addr_b = ntohl(b->sin_addr.s_addr);
+-		diff = addr_a - addr_b;
+-		if (diff == 0) {
+-			port_a = ntohs(a->sin_port);
+-			port_b = ntohs(b->sin_port);
+-			diff = port_a - port_b;
+-		}
+-		break;
+-	}
+-
+-	case AF_INET6: {
+-		const struct sockaddr_in6 *a = &srx_a->transport.sin6;
+-		const struct sockaddr_in6 *b = &srx_b->transport.sin6;
+-		diff = memcmp(&a->sin6_addr, &b->sin6_addr, 16);
+-		if (diff == 0) {
+-			port_a = ntohs(a->sin6_port);
+-			port_b = ntohs(b->sin6_port);
+-			diff = port_a - port_b;
+-		}
+-		break;
+-	}
+-
+-	default:
+-		WARN_ON(1);
+-		diff = 1;
+-	}
+-
+-out:
+-	return diff;
+-}
+-
+ /*
+  * Compare the address lists of a pair of fileservers.
+  */
+@@ -94,9 +45,9 @@ static int afs_compare_fs_alists(const struct afs_server *server_a,
+ 	lb = rcu_dereference(server_b->addresses);
+ 
+ 	while (a < la->nr_addrs && b < lb->nr_addrs) {
+-		const struct sockaddr_rxrpc *srx_a = &la->addrs[a];
+-		const struct sockaddr_rxrpc *srx_b = &lb->addrs[b];
+-		int diff = afs_compare_addrs(srx_a, srx_b);
++		unsigned long pa = (unsigned long)la->addrs[a].peer;
++		unsigned long pb = (unsigned long)lb->addrs[b].peer;
++		long diff = pa - pb;
+ 
+ 		if (diff < 0) {
+ 			a++;
+@@ -285,7 +236,7 @@ static char *afs_vl_get_cell_name(struct afs_cell *cell, struct key *key)
+ 
+ 	while (afs_select_vlserver(&vc)) {
+ 		if (!test_bit(AFS_VLSERVER_FL_IS_YFS, &vc.server->flags)) {
+-			vc.ac.error = -EOPNOTSUPP;
++			vc.call_error = -EOPNOTSUPP;
+ 			skipped = true;
+ 			continue;
+ 		}
+diff --git a/fs/afs/vl_list.c b/fs/afs/vl_list.c
+index acc48216136ae..ba89140eee9e9 100644
+--- a/fs/afs/vl_list.c
++++ b/fs/afs/vl_list.c
+@@ -83,14 +83,15 @@ static u16 afs_extract_le16(const u8 **_b)
+ /*
+  * Build a VL server address list from a DNS queried server list.
+  */
+-static struct afs_addr_list *afs_extract_vl_addrs(const u8 **_b, const u8 *end,
++static struct afs_addr_list *afs_extract_vl_addrs(struct afs_net *net,
++						  const u8 **_b, const u8 *end,
+ 						  u8 nr_addrs, u16 port)
+ {
+ 	struct afs_addr_list *alist;
+ 	const u8 *b = *_b;
+ 	int ret = -EINVAL;
+ 
+-	alist = afs_alloc_addrlist(nr_addrs, VL_SERVICE, port);
++	alist = afs_alloc_addrlist(nr_addrs, VL_SERVICE);
+ 	if (!alist)
+ 		return ERR_PTR(-ENOMEM);
+ 	if (nr_addrs == 0)
+@@ -109,7 +110,9 @@ static struct afs_addr_list *afs_extract_vl_addrs(const u8 **_b, const u8 *end,
+ 				goto error;
+ 			}
+ 			memcpy(x, b, 4);
+-			afs_merge_fs_addr4(alist, x[0], port);
++			ret = afs_merge_fs_addr4(net, alist, x[0], port);
++			if (ret < 0)
++				goto error;
+ 			b += 4;
+ 			break;
+ 
+@@ -119,7 +122,9 @@ static struct afs_addr_list *afs_extract_vl_addrs(const u8 **_b, const u8 *end,
+ 				goto error;
+ 			}
+ 			memcpy(x, b, 16);
+-			afs_merge_fs_addr6(alist, x, port);
++			ret = afs_merge_fs_addr6(net, alist, x, port);
++			if (ret < 0)
++				goto error;
+ 			b += 16;
+ 			break;
+ 
+@@ -247,7 +252,7 @@ struct afs_vlserver_list *afs_extract_vlserver_list(struct afs_cell *cell,
+ 		/* Extract the addresses - note that we can't skip this as we
+ 		 * have to advance the payload pointer.
+ 		 */
+-		addrs = afs_extract_vl_addrs(&b, end, bs.nr_addrs, bs.port);
++		addrs = afs_extract_vl_addrs(cell->net, &b, end, bs.nr_addrs, bs.port);
+ 		if (IS_ERR(addrs)) {
+ 			ret = PTR_ERR(addrs);
+ 			goto error_2;
+diff --git a/fs/afs/vl_probe.c b/fs/afs/vl_probe.c
+index 58452b86e6727..2f8a13c2bf0c8 100644
+--- a/fs/afs/vl_probe.c
++++ b/fs/afs/vl_probe.c
+@@ -48,6 +48,7 @@ void afs_vlserver_probe_result(struct afs_call *call)
+ {
+ 	struct afs_addr_list *alist = call->alist;
+ 	struct afs_vlserver *server = call->vlserver;
++	struct afs_address *addr = &alist->addrs[call->addr_ix];
+ 	unsigned int server_index = call->server_index;
+ 	unsigned int rtt_us = 0;
+ 	unsigned int index = call->addr_ix;
+@@ -106,16 +107,16 @@ responded:
+ 	if (call->service_id == YFS_VL_SERVICE) {
+ 		server->probe.flags |= AFS_VLSERVER_PROBE_IS_YFS;
+ 		set_bit(AFS_VLSERVER_FL_IS_YFS, &server->flags);
+-		alist->addrs[index].srx_service = call->service_id;
++		addr->service_id = call->service_id;
+ 	} else {
+ 		server->probe.flags |= AFS_VLSERVER_PROBE_NOT_YFS;
+ 		if (!(server->probe.flags & AFS_VLSERVER_PROBE_IS_YFS)) {
+ 			clear_bit(AFS_VLSERVER_FL_IS_YFS, &server->flags);
+-			alist->addrs[index].srx_service = call->service_id;
++			addr->service_id = call->service_id;
+ 		}
+ 	}
+ 
+-	rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us);
++	rtt_us = rxrpc_kernel_get_srtt(addr->peer);
+ 	if (rtt_us < server->probe.rtt) {
+ 		server->probe.rtt = rtt_us;
+ 		server->rtt = rtt_us;
+@@ -130,8 +131,9 @@ responded:
+ out:
+ 	spin_unlock(&server->probe_lock);
+ 
+-	_debug("probe [%u][%u] %pISpc rtt=%u ret=%d",
+-	       server_index, index, &alist->addrs[index].transport, rtt_us, ret);
++	_debug("probe [%u][%u] %pISpc rtt=%d ret=%d",
++	       server_index, index, rxrpc_kernel_remote_addr(addr->peer),
++	       rtt_us, ret);
+ 
+ 	afs_done_one_vl_probe(server, have_result);
+ }
+@@ -167,10 +169,11 @@ static bool afs_do_probe_vlserver(struct afs_net *net,
+ 		call = afs_vl_get_capabilities(net, &ac, key, server,
+ 					       server_index);
+ 		if (!IS_ERR(call)) {
++			afs_prioritise_error(_e, call->error, call->abort_code);
+ 			afs_put_call(call);
+ 			in_progress = true;
+ 		} else {
+-			afs_prioritise_error(_e, PTR_ERR(call), ac.abort_code);
++			afs_prioritise_error(_e, PTR_ERR(call), 0);
+ 			afs_done_one_vl_probe(server, false);
+ 		}
+ 	}
+@@ -185,12 +188,10 @@ int afs_send_vl_probes(struct afs_net *net, struct key *key,
+ 		       struct afs_vlserver_list *vllist)
+ {
+ 	struct afs_vlserver *server;
+-	struct afs_error e;
++	struct afs_error e = {};
+ 	bool in_progress = false;
+ 	int i;
+ 
+-	e.error = 0;
+-	e.responded = false;
+ 	for (i = 0; i < vllist->nr_servers; i++) {
+ 		server = vllist->servers[i].server;
+ 		if (test_bit(AFS_VLSERVER_FL_PROBED, &server->flags))
+diff --git a/fs/afs/vl_rotate.c b/fs/afs/vl_rotate.c
+index eb415ce563600..e2dc54082a05d 100644
+--- a/fs/afs/vl_rotate.c
++++ b/fs/afs/vl_rotate.c
+@@ -20,11 +20,11 @@ bool afs_begin_vlserver_operation(struct afs_vl_cursor *vc, struct afs_cell *cel
+ 	memset(vc, 0, sizeof(*vc));
+ 	vc->cell = cell;
+ 	vc->key = key;
+-	vc->error = -EDESTADDRREQ;
+-	vc->ac.error = SHRT_MAX;
++	vc->cumul_error.error = -EDESTADDRREQ;
++	vc->nr_iterations = -1;
+ 
+ 	if (signal_pending(current)) {
+-		vc->error = -EINTR;
++		vc->cumul_error.error = -EINTR;
+ 		vc->flags |= AFS_VL_CURSOR_STOP;
+ 		return false;
+ 	}
+@@ -52,7 +52,7 @@ static bool afs_start_vl_iteration(struct afs_vl_cursor *vc)
+ 				    &cell->dns_lookup_count,
+ 				    smp_load_acquire(&cell->dns_lookup_count)
+ 				    != dns_lookup_count) < 0) {
+-				vc->error = -ERESTARTSYS;
++				vc->cumul_error.error = -ERESTARTSYS;
+ 				return false;
+ 			}
+ 		}
+@@ -60,12 +60,12 @@ static bool afs_start_vl_iteration(struct afs_vl_cursor *vc)
+ 		/* Status load is ordered after lookup counter load */
+ 		if (cell->dns_status == DNS_LOOKUP_GOT_NOT_FOUND) {
+ 			pr_warn("No record of cell %s\n", cell->name);
+-			vc->error = -ENOENT;
++			vc->cumul_error.error = -ENOENT;
+ 			return false;
+ 		}
+ 
+ 		if (cell->dns_source == DNS_RECORD_UNAVAILABLE) {
+-			vc->error = -EDESTADDRREQ;
++			vc->cumul_error.error = -EDESTADDRREQ;
+ 			return false;
+ 		}
+ 	}
+@@ -91,52 +91,52 @@ bool afs_select_vlserver(struct afs_vl_cursor *vc)
+ {
+ 	struct afs_addr_list *alist;
+ 	struct afs_vlserver *vlserver;
+-	struct afs_error e;
+-	u32 rtt;
+-	int error = vc->ac.error, i;
++	unsigned int rtt;
++	s32 abort_code = vc->call_abort_code;
++	int error = vc->call_error, i;
++
++	vc->nr_iterations++;
+ 
+ 	_enter("%lx[%d],%lx[%d],%d,%d",
+ 	       vc->untried, vc->index,
+ 	       vc->ac.tried, vc->ac.index,
+-	       error, vc->ac.abort_code);
++	       error, abort_code);
+ 
+ 	if (vc->flags & AFS_VL_CURSOR_STOP) {
+ 		_leave(" = f [stopped]");
+ 		return false;
+ 	}
+ 
+-	vc->nr_iterations++;
++	if (vc->nr_iterations == 0)
++		goto start;
+ 
+ 	/* Evaluate the result of the previous operation, if there was one. */
+ 	switch (error) {
+-	case SHRT_MAX:
+-		goto start;
+-
+ 	default:
+ 	case 0:
+ 		/* Success or local failure.  Stop. */
+-		vc->error = error;
++		vc->cumul_error.error = error;
+ 		vc->flags |= AFS_VL_CURSOR_STOP;
+-		_leave(" = f [okay/local %d]", vc->ac.error);
++		_leave(" = f [okay/local %d]", vc->cumul_error.error);
+ 		return false;
+ 
+ 	case -ECONNABORTED:
+ 		/* The far side rejected the operation on some grounds.  This
+ 		 * might involve the server being busy or the volume having been moved.
+ 		 */
+-		switch (vc->ac.abort_code) {
++		switch (abort_code) {
+ 		case AFSVL_IO:
+ 		case AFSVL_BADVOLOPER:
+ 		case AFSVL_NOMEM:
+ 			/* The server went weird. */
+-			vc->error = -EREMOTEIO;
++			afs_prioritise_error(&vc->cumul_error, -EREMOTEIO, abort_code);
+ 			//write_lock(&vc->cell->vl_servers_lock);
+ 			//vc->server_list->weird_mask |= 1 << vc->index;
+ 			//write_unlock(&vc->cell->vl_servers_lock);
+ 			goto next_server;
+ 
+ 		default:
+-			vc->error = afs_abort_to_error(vc->ac.abort_code);
++			afs_prioritise_error(&vc->cumul_error, error, abort_code);
+ 			goto failed;
+ 		}
+ 
+@@ -149,12 +149,12 @@ bool afs_select_vlserver(struct afs_vl_cursor *vc)
+ 	case -ETIMEDOUT:
+ 	case -ETIME:
+ 		_debug("no conn %d", error);
+-		vc->error = error;
++		afs_prioritise_error(&vc->cumul_error, error, 0);
+ 		goto iterate_address;
+ 
+ 	case -ECONNRESET:
+ 		_debug("call reset");
+-		vc->error = error;
++		afs_prioritise_error(&vc->cumul_error, error, 0);
+ 		vc->flags |= AFS_VL_CURSOR_RETRY;
+ 		goto next_server;
+ 
+@@ -178,15 +178,19 @@ start:
+ 		goto failed;
+ 
+ 	error = afs_send_vl_probes(vc->cell->net, vc->key, vc->server_list);
+-	if (error < 0)
+-		goto failed_set_error;
++	if (error < 0) {
++		afs_prioritise_error(&vc->cumul_error, error, 0);
++		goto failed;
++	}
+ 
+ pick_server:
+ 	_debug("pick [%lx]", vc->untried);
+ 
+ 	error = afs_wait_for_vl_probes(vc->server_list, vc->untried);
+-	if (error < 0)
+-		goto failed_set_error;
++	if (error < 0) {
++		afs_prioritise_error(&vc->cumul_error, error, 0);
++		goto failed;
++	}
+ 
+ 	/* Pick the untried server with the lowest RTT. */
+ 	vc->index = vc->server_list->preferred;
+@@ -194,7 +198,7 @@ pick_server:
+ 		goto selected_server;
+ 
+ 	vc->index = -1;
+-	rtt = U32_MAX;
++	rtt = UINT_MAX;
+ 	for (i = 0; i < vc->server_list->nr_servers; i++) {
+ 		struct afs_vlserver *s = vc->server_list->servers[i].server;
+ 
+@@ -249,7 +253,8 @@ iterate_address:
+ 
+ 	_debug("VL address %d/%d", vc->ac.index, vc->ac.alist->nr_addrs);
+ 
+-	_leave(" = t %pISpc", &vc->ac.alist->addrs[vc->ac.index].transport);
++	vc->call_responded = false;
++	_leave(" = t %pISpc", rxrpc_kernel_remote_addr(vc->ac.alist->addrs[vc->ac.index].peer));
+ 	return true;
+ 
+ next_server:
+@@ -264,25 +269,19 @@ no_more_servers:
+ 	if (vc->flags & AFS_VL_CURSOR_RETRY)
+ 		goto restart_from_beginning;
+ 
+-	e.error = -EDESTADDRREQ;
+-	e.responded = false;
+ 	for (i = 0; i < vc->server_list->nr_servers; i++) {
+ 		struct afs_vlserver *s = vc->server_list->servers[i].server;
+ 
+ 		if (test_bit(AFS_VLSERVER_FL_RESPONDING, &s->flags))
+-			e.responded = true;
+-		afs_prioritise_error(&e, READ_ONCE(s->probe.error),
++			vc->cumul_error.responded = true;
++		afs_prioritise_error(&vc->cumul_error, READ_ONCE(s->probe.error),
+ 				     s->probe.abort_code);
+ 	}
+ 
+-	error = e.error;
+-
+-failed_set_error:
+-	vc->error = error;
+ failed:
+ 	vc->flags |= AFS_VL_CURSOR_STOP;
+ 	afs_end_cursor(&vc->ac);
+-	_leave(" = f [failed %d]", vc->error);
++	_leave(" = f [failed %d]", vc->cumul_error.error);
+ 	return false;
+ }
+ 
+@@ -305,7 +304,10 @@ static void afs_vl_dump_edestaddrreq(const struct afs_vl_cursor *vc)
+ 	pr_notice("DNS: src=%u st=%u lc=%x\n",
+ 		  cell->dns_source, cell->dns_status, cell->dns_lookup_count);
+ 	pr_notice("VC: ut=%lx ix=%u ni=%hu fl=%hx err=%hd\n",
+-		  vc->untried, vc->index, vc->nr_iterations, vc->flags, vc->error);
++		  vc->untried, vc->index, vc->nr_iterations, vc->flags,
++		  vc->cumul_error.error);
++	pr_notice("VC: call  er=%d ac=%d r=%u\n",
++		  vc->call_error, vc->call_abort_code, vc->call_responded);
+ 
+ 	if (vc->server_list) {
+ 		const struct afs_vlserver_list *sl = vc->server_list;
+@@ -329,9 +331,8 @@ static void afs_vl_dump_edestaddrreq(const struct afs_vl_cursor *vc)
+ 		}
+ 	}
+ 
+-	pr_notice("AC: t=%lx ax=%u ac=%d er=%d r=%u ni=%u\n",
+-		  vc->ac.tried, vc->ac.index, vc->ac.abort_code, vc->ac.error,
+-		  vc->ac.responded, vc->ac.nr_iterations);
++	pr_notice("AC: t=%lx ax=%u ni=%u\n",
++		  vc->ac.tried, vc->ac.index, vc->ac.nr_iterations);
+ 	rcu_read_unlock();
+ }
+ 
+@@ -342,17 +343,16 @@ int afs_end_vlserver_operation(struct afs_vl_cursor *vc)
+ {
+ 	struct afs_net *net = vc->cell->net;
+ 
+-	if (vc->error == -EDESTADDRREQ ||
+-	    vc->error == -EADDRNOTAVAIL ||
+-	    vc->error == -ENETUNREACH ||
+-	    vc->error == -EHOSTUNREACH)
++	switch (vc->cumul_error.error) {
++	case -EDESTADDRREQ:
++	case -EADDRNOTAVAIL:
++	case -ENETUNREACH:
++	case -EHOSTUNREACH:
+ 		afs_vl_dump_edestaddrreq(vc);
++		break;
++	}
+ 
+ 	afs_end_cursor(&vc->ac);
+ 	afs_put_vlserverlist(net, vc->server_list);
+-
+-	if (vc->error == -ECONNABORTED)
+-		vc->error = afs_abort_to_error(vc->ac.abort_code);
+-
+-	return vc->error;
++	return vc->cumul_error.error;
+ }
+diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
+index 00fca3c66ba61..db7e94584e874 100644
+--- a/fs/afs/vlclient.c
++++ b/fs/afs/vlclient.c
+@@ -106,12 +106,6 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call)
+ 	return 0;
+ }
+ 
+-static void afs_destroy_vl_get_entry_by_name_u(struct afs_call *call)
+-{
+-	kfree(call->ret_vldb);
+-	afs_flat_call_destructor(call);
+-}
+-
+ /*
+  * VL.GetEntryByNameU operation type.
+  */
+@@ -119,7 +113,7 @@ static const struct afs_call_type afs_RXVLGetEntryByNameU = {
+ 	.name		= "VL.GetEntryByNameU",
+ 	.op		= afs_VL_GetEntryByNameU,
+ 	.deliver	= afs_deliver_vl_get_entry_by_name_u,
+-	.destructor	= afs_destroy_vl_get_entry_by_name_u,
++	.destructor	= afs_flat_call_destructor,
+ };
+ 
+ /*
+@@ -166,7 +160,16 @@ struct afs_vldb_entry *afs_vl_get_entry_by_name_u(struct afs_vl_cursor *vc,
+ 
+ 	trace_afs_make_vl_call(call);
+ 	afs_make_call(&vc->ac, call, GFP_KERNEL);
+-	return (struct afs_vldb_entry *)afs_wait_for_call_to_complete(call, &vc->ac);
++	afs_wait_for_call_to_complete(call, &vc->ac);
++	vc->call_abort_code	= call->abort_code;
++	vc->call_error		= call->error;
++	vc->call_responded	= call->responded;
++	afs_put_call(call);
++	if (vc->call_error) {
++		kfree(entry);
++		return ERR_PTR(vc->call_error);
++	}
++	return entry;
+ }
+ 
+ /*
+@@ -208,7 +211,7 @@ static int afs_deliver_vl_get_addrs_u(struct afs_call *call)
+ 		count		= ntohl(*bp);
+ 
+ 		nentries = min(nentries, count);
+-		alist = afs_alloc_addrlist(nentries, FS_SERVICE, AFS_FS_PORT);
++		alist = afs_alloc_addrlist(nentries, FS_SERVICE);
+ 		if (!alist)
+ 			return -ENOMEM;
+ 		alist->version = uniquifier;
+@@ -230,9 +233,13 @@ static int afs_deliver_vl_get_addrs_u(struct afs_call *call)
+ 		alist = call->ret_alist;
+ 		bp = call->buffer;
+ 		count = min(call->count, 4U);
+-		for (i = 0; i < count; i++)
+-			if (alist->nr_addrs < call->count2)
+-				afs_merge_fs_addr4(alist, *bp++, AFS_FS_PORT);
++		for (i = 0; i < count; i++) {
++			if (alist->nr_addrs < call->count2) {
++				ret = afs_merge_fs_addr4(call->net, alist, *bp++, AFS_FS_PORT);
++				if (ret < 0)
++					return ret;
++			}
++		}
+ 
+ 		call->count -= count;
+ 		if (call->count > 0)
+@@ -245,12 +252,6 @@ static int afs_deliver_vl_get_addrs_u(struct afs_call *call)
+ 	return 0;
+ }
+ 
+-static void afs_vl_get_addrs_u_destructor(struct afs_call *call)
+-{
+-	afs_put_addrlist(call->ret_alist);
+-	return afs_flat_call_destructor(call);
+-}
+-
+ /*
+  * VL.GetAddrsU operation type.
+  */
+@@ -258,7 +259,7 @@ static const struct afs_call_type afs_RXVLGetAddrsU = {
+ 	.name		= "VL.GetAddrsU",
+ 	.op		= afs_VL_GetAddrsU,
+ 	.deliver	= afs_deliver_vl_get_addrs_u,
+-	.destructor	= afs_vl_get_addrs_u_destructor,
++	.destructor	= afs_flat_call_destructor,
+ };
+ 
+ /*
+@@ -269,6 +270,7 @@ struct afs_addr_list *afs_vl_get_addrs_u(struct afs_vl_cursor *vc,
+ 					 const uuid_t *uuid)
+ {
+ 	struct afs_ListAddrByAttributes__xdr *r;
++	struct afs_addr_list *alist;
+ 	const struct afs_uuid *u = (const struct afs_uuid *)uuid;
+ 	struct afs_call *call;
+ 	struct afs_net *net = vc->cell->net;
+@@ -305,7 +307,17 @@ struct afs_addr_list *afs_vl_get_addrs_u(struct afs_vl_cursor *vc,
+ 
+ 	trace_afs_make_vl_call(call);
+ 	afs_make_call(&vc->ac, call, GFP_KERNEL);
+-	return (struct afs_addr_list *)afs_wait_for_call_to_complete(call, &vc->ac);
++	afs_wait_for_call_to_complete(call, &vc->ac);
++	vc->call_abort_code	= call->abort_code;
++	vc->call_error		= call->error;
++	vc->call_responded	= call->responded;
++	alist			= call->ret_alist;
++	afs_put_call(call);
++	if (vc->call_error) {
++		afs_put_addrlist(alist);
++		return ERR_PTR(vc->call_error);
++	}
++	return alist;
+ }
+ 
+ /*
+@@ -450,7 +462,7 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ 		if (call->count > YFS_MAXENDPOINTS)
+ 			return afs_protocol_error(call, afs_eproto_yvl_fsendpt_num);
+ 
+-		alist = afs_alloc_addrlist(call->count, FS_SERVICE, AFS_FS_PORT);
++		alist = afs_alloc_addrlist(call->count, FS_SERVICE);
+ 		if (!alist)
+ 			return -ENOMEM;
+ 		alist->version = uniquifier;
+@@ -488,14 +500,18 @@ static int afs_deliver_yfsvl_get_endpoints(struct afs_call *call)
+ 			if (ntohl(bp[0]) != sizeof(__be32) * 2)
+ 				return afs_protocol_error(
+ 					call, afs_eproto_yvl_fsendpt4_len);
+-			afs_merge_fs_addr4(alist, bp[1], ntohl(bp[2]));
++			ret = afs_merge_fs_addr4(call->net, alist, bp[1], ntohl(bp[2]));
++			if (ret < 0)
++				return ret;
+ 			bp += 3;
+ 			break;
+ 		case YFS_ENDPOINT_IPV6:
+ 			if (ntohl(bp[0]) != sizeof(__be32) * 5)
+ 				return afs_protocol_error(
+ 					call, afs_eproto_yvl_fsendpt6_len);
+-			afs_merge_fs_addr6(alist, bp + 1, ntohl(bp[5]));
++			ret = afs_merge_fs_addr6(call->net, alist, bp + 1, ntohl(bp[5]));
++			if (ret < 0)
++				return ret;
+ 			bp += 6;
+ 			break;
+ 		default:
+@@ -610,7 +626,7 @@ static const struct afs_call_type afs_YFSVLGetEndpoints = {
+ 	.name		= "YFSVL.GetEndpoints",
+ 	.op		= afs_YFSVL_GetEndpoints,
+ 	.deliver	= afs_deliver_yfsvl_get_endpoints,
+-	.destructor	= afs_vl_get_addrs_u_destructor,
++	.destructor	= afs_flat_call_destructor,
+ };
+ 
+ /*
+@@ -620,6 +636,7 @@ static const struct afs_call_type afs_YFSVLGetEndpoints = {
+ struct afs_addr_list *afs_yfsvl_get_endpoints(struct afs_vl_cursor *vc,
+ 					      const uuid_t *uuid)
+ {
++	struct afs_addr_list *alist;
+ 	struct afs_call *call;
+ 	struct afs_net *net = vc->cell->net;
+ 	__be32 *bp;
+@@ -644,7 +661,17 @@ struct afs_addr_list *afs_yfsvl_get_endpoints(struct afs_vl_cursor *vc,
+ 
+ 	trace_afs_make_vl_call(call);
+ 	afs_make_call(&vc->ac, call, GFP_KERNEL);
+-	return (struct afs_addr_list *)afs_wait_for_call_to_complete(call, &vc->ac);
++	afs_wait_for_call_to_complete(call, &vc->ac);
++	vc->call_abort_code	= call->abort_code;
++	vc->call_error		= call->error;
++	vc->call_responded	= call->responded;
++	alist			= call->ret_alist;
++	afs_put_call(call);
++	if (vc->call_error) {
++		afs_put_addrlist(alist);
++		return ERR_PTR(vc->call_error);
++	}
++	return alist;
+ }
+ 
+ /*
+@@ -709,12 +736,6 @@ static int afs_deliver_yfsvl_get_cell_name(struct afs_call *call)
+ 	return 0;
+ }
+ 
+-static void afs_destroy_yfsvl_get_cell_name(struct afs_call *call)
+-{
+-	kfree(call->ret_str);
+-	afs_flat_call_destructor(call);
+-}
+-
+ /*
+  * VL.GetCapabilities operation type
+  */
+@@ -722,7 +743,7 @@ static const struct afs_call_type afs_YFSVLGetCellName = {
+ 	.name		= "YFSVL.GetCellName",
+ 	.op		= afs_YFSVL_GetCellName,
+ 	.deliver	= afs_deliver_yfsvl_get_cell_name,
+-	.destructor	= afs_destroy_yfsvl_get_cell_name,
++	.destructor	= afs_flat_call_destructor,
+ };
+ 
+ /*
+@@ -737,6 +758,7 @@ char *afs_yfsvl_get_cell_name(struct afs_vl_cursor *vc)
+ 	struct afs_call *call;
+ 	struct afs_net *net = vc->cell->net;
+ 	__be32 *bp;
++	char *cellname;
+ 
+ 	_enter("");
+ 
+@@ -755,5 +777,15 @@ char *afs_yfsvl_get_cell_name(struct afs_vl_cursor *vc)
+ 	/* Can't take a ref on server */
+ 	trace_afs_make_vl_call(call);
+ 	afs_make_call(&vc->ac, call, GFP_KERNEL);
+-	return (char *)afs_wait_for_call_to_complete(call, &vc->ac);
++	afs_wait_for_call_to_complete(call, &vc->ac);
++	vc->call_abort_code	= call->abort_code;
++	vc->call_error		= call->error;
++	vc->call_responded	= call->responded;
++	cellname		= call->ret_str;
++	afs_put_call(call);
++	if (vc->call_error) {
++		kfree(cellname);
++		return ERR_PTR(vc->call_error);
++	}
++	return cellname;
+ }
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 4a168781936b5..9f90d8970ce9e 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -366,7 +366,7 @@ static void afs_store_data_success(struct afs_operation *op)
+ 
+ 	op->ctime = op->file[0].scb.status.mtime_client;
+ 	afs_vnode_commit_status(op, &op->file[0]);
+-	if (op->error == 0) {
++	if (!afs_op_error(op)) {
+ 		if (!op->store.laundering)
+ 			afs_pages_written_back(vnode, op->store.pos, op->store.size);
+ 		afs_stat_v(vnode, n_stores);
+@@ -428,7 +428,7 @@ try_next_key:
+ 
+ 	afs_wait_for_operation(op);
+ 
+-	switch (op->error) {
++	switch (afs_op_error(op)) {
+ 	case -EACCES:
+ 	case -EPERM:
+ 	case -ENOKEY:
+@@ -447,7 +447,7 @@ try_next_key:
+ 	}
+ 
+ 	afs_put_wb_key(wbk);
+-	_leave(" = %d", op->error);
++	_leave(" = %d", afs_op_error(op));
+ 	return afs_put_operation(op);
+ }
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 01423670bc8a2..31d64812bb60a 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -1260,7 +1260,8 @@ static int btrfs_issue_discard(struct block_device *bdev, u64 start, u64 len,
+ 	u64 bytes_left, end;
+ 	u64 aligned_start = ALIGN(start, 1 << SECTOR_SHIFT);
+ 
+-	if (WARN_ON(start != aligned_start)) {
++	/* Adjust the range to be aligned to 512B sectors if necessary. */
++	if (start != aligned_start) {
+ 		len -= aligned_start - start;
+ 		len = round_down(len, 1 << SECTOR_SHIFT);
+ 		start = aligned_start;
+@@ -4300,6 +4301,42 @@ static int prepare_allocation_clustered(struct btrfs_fs_info *fs_info,
+ 	return 0;
+ }
+ 
++static int prepare_allocation_zoned(struct btrfs_fs_info *fs_info,
++				    struct find_free_extent_ctl *ffe_ctl)
++{
++	if (ffe_ctl->for_treelog) {
++		spin_lock(&fs_info->treelog_bg_lock);
++		if (fs_info->treelog_bg)
++			ffe_ctl->hint_byte = fs_info->treelog_bg;
++		spin_unlock(&fs_info->treelog_bg_lock);
++	} else if (ffe_ctl->for_data_reloc) {
++		spin_lock(&fs_info->relocation_bg_lock);
++		if (fs_info->data_reloc_bg)
++			ffe_ctl->hint_byte = fs_info->data_reloc_bg;
++		spin_unlock(&fs_info->relocation_bg_lock);
++	} else if (ffe_ctl->flags & BTRFS_BLOCK_GROUP_DATA) {
++		struct btrfs_block_group *block_group;
++
++		spin_lock(&fs_info->zone_active_bgs_lock);
++		list_for_each_entry(block_group, &fs_info->zone_active_bgs, active_bg_list) {
++			/*
++			 * No lock is OK here because avail is monotonically
++			 * decreasing, and this is just a hint.
++			 */
++			u64 avail = block_group->zone_capacity - block_group->alloc_offset;
++
++			if (block_group_bits(block_group, ffe_ctl->flags) &&
++			    avail >= ffe_ctl->num_bytes) {
++				ffe_ctl->hint_byte = block_group->start;
++				break;
++			}
++		}
++		spin_unlock(&fs_info->zone_active_bgs_lock);
++	}
++
++	return 0;
++}
++
+ static int prepare_allocation(struct btrfs_fs_info *fs_info,
+ 			      struct find_free_extent_ctl *ffe_ctl,
+ 			      struct btrfs_space_info *space_info,
+@@ -4310,19 +4347,7 @@ static int prepare_allocation(struct btrfs_fs_info *fs_info,
+ 		return prepare_allocation_clustered(fs_info, ffe_ctl,
+ 						    space_info, ins);
+ 	case BTRFS_EXTENT_ALLOC_ZONED:
+-		if (ffe_ctl->for_treelog) {
+-			spin_lock(&fs_info->treelog_bg_lock);
+-			if (fs_info->treelog_bg)
+-				ffe_ctl->hint_byte = fs_info->treelog_bg;
+-			spin_unlock(&fs_info->treelog_bg_lock);
+-		}
+-		if (ffe_ctl->for_data_reloc) {
+-			spin_lock(&fs_info->relocation_bg_lock);
+-			if (fs_info->data_reloc_bg)
+-				ffe_ctl->hint_byte = fs_info->data_reloc_bg;
+-			spin_unlock(&fs_info->relocation_bg_lock);
+-		}
+-		return 0;
++		return prepare_allocation_zoned(fs_info, ffe_ctl);
+ 	default:
+ 		BUG();
+ 	}
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index fb3c3f43c3fa4..97cd57e710cb2 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4449,6 +4449,8 @@ int btrfs_delete_subvolume(struct btrfs_inode *dir, struct dentry *dentry)
+ 	u64 root_flags;
+ 	int ret;
+ 
++	down_write(&fs_info->subvol_sem);
++
+ 	/*
+ 	 * Don't allow to delete a subvolume with send in progress. This is
+ 	 * inside the inode lock so the error handling that has to drop the bit
+@@ -4460,25 +4462,25 @@ int btrfs_delete_subvolume(struct btrfs_inode *dir, struct dentry *dentry)
+ 		btrfs_warn(fs_info,
+ 			   "attempt to delete subvolume %llu during send",
+ 			   dest->root_key.objectid);
+-		return -EPERM;
++		ret = -EPERM;
++		goto out_up_write;
+ 	}
+ 	if (atomic_read(&dest->nr_swapfiles)) {
+ 		spin_unlock(&dest->root_item_lock);
+ 		btrfs_warn(fs_info,
+ 			   "attempt to delete subvolume %llu with active swapfile",
+ 			   root->root_key.objectid);
+-		return -EPERM;
++		ret = -EPERM;
++		goto out_up_write;
+ 	}
+ 	root_flags = btrfs_root_flags(&dest->root_item);
+ 	btrfs_set_root_flags(&dest->root_item,
+ 			     root_flags | BTRFS_ROOT_SUBVOL_DEAD);
+ 	spin_unlock(&dest->root_item_lock);
+ 
+-	down_write(&fs_info->subvol_sem);
+-
+ 	ret = may_destroy_subvol(dest);
+ 	if (ret)
+-		goto out_up_write;
++		goto out_undead;
+ 
+ 	btrfs_init_block_rsv(&block_rsv, BTRFS_BLOCK_RSV_TEMP);
+ 	/*
+@@ -4488,7 +4490,7 @@ int btrfs_delete_subvolume(struct btrfs_inode *dir, struct dentry *dentry)
+ 	 */
+ 	ret = btrfs_subvolume_reserve_metadata(root, &block_rsv, 5, true);
+ 	if (ret)
+-		goto out_up_write;
++		goto out_undead;
+ 
+ 	trans = btrfs_start_transaction(root, 0);
+ 	if (IS_ERR(trans)) {
+@@ -4554,15 +4556,17 @@ out_end_trans:
+ 	inode->i_flags |= S_DEAD;
+ out_release:
+ 	btrfs_subvolume_release_metadata(root, &block_rsv);
+-out_up_write:
+-	up_write(&fs_info->subvol_sem);
++out_undead:
+ 	if (ret) {
+ 		spin_lock(&dest->root_item_lock);
+ 		root_flags = btrfs_root_flags(&dest->root_item);
+ 		btrfs_set_root_flags(&dest->root_item,
+ 				root_flags & ~BTRFS_ROOT_SUBVOL_DEAD);
+ 		spin_unlock(&dest->root_item_lock);
+-	} else {
++	}
++out_up_write:
++	up_write(&fs_info->subvol_sem);
++	if (!ret) {
+ 		d_invalidate(dentry);
+ 		btrfs_prune_dentries(dest);
+ 		ASSERT(dest->send_in_progress == 0);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index a1743904202b7..69e91341fd637 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -790,6 +790,9 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (btrfs_root_refs(&root->root_item) == 0)
++		return -ENOENT;
++
+ 	if (!test_bit(BTRFS_ROOT_SHAREABLE, &root->state))
+ 		return -EINVAL;
+ 
+@@ -2608,6 +2611,10 @@ static int btrfs_ioctl_defrag(struct file *file, void __user *argp)
+ 				ret = -EFAULT;
+ 				goto out;
+ 			}
++			if (range.flags & ~BTRFS_DEFRAG_RANGE_FLAGS_SUPP) {
++				ret = -EOPNOTSUPP;
++				goto out;
++			}
+ 			/* compression requires us to start the IO */
+ 			if ((range.flags & BTRFS_DEFRAG_RANGE_COMPRESS)) {
+ 				range.flags |= BTRFS_DEFRAG_RANGE_START_IO;
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index 6486f0d7e9931..8c4fc98ca9ce7 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -889,8 +889,10 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
+ out_unlock:
+ 	spin_unlock(&fs_info->ref_verify_lock);
+ out:
+-	if (ret)
++	if (ret) {
++		btrfs_free_ref_cache(fs_info);
+ 		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
++	}
+ 	return ret;
+ }
+ 
+@@ -1021,8 +1023,8 @@ int btrfs_build_ref_tree(struct btrfs_fs_info *fs_info)
+ 		}
+ 	}
+ 	if (ret) {
+-		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
+ 		btrfs_free_ref_cache(fs_info);
++		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
+ 	}
+ 	btrfs_free_path(path);
+ 	return ret;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index f62a408671cbc..443d2519f0a9d 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -1099,12 +1099,22 @@ out:
+ static void scrub_read_endio(struct btrfs_bio *bbio)
+ {
+ 	struct scrub_stripe *stripe = bbio->private;
++	struct bio_vec *bvec;
++	int sector_nr = calc_sector_number(stripe, bio_first_bvec_all(&bbio->bio));
++	int num_sectors;
++	u32 bio_size = 0;
++	int i;
++
++	ASSERT(sector_nr < stripe->nr_sectors);
++	bio_for_each_bvec_all(bvec, &bbio->bio, i)
++		bio_size += bvec->bv_len;
++	num_sectors = bio_size >> stripe->bg->fs_info->sectorsize_bits;
+ 
+ 	if (bbio->bio.bi_status) {
+-		bitmap_set(&stripe->io_error_bitmap, 0, stripe->nr_sectors);
+-		bitmap_set(&stripe->error_bitmap, 0, stripe->nr_sectors);
++		bitmap_set(&stripe->io_error_bitmap, sector_nr, num_sectors);
++		bitmap_set(&stripe->error_bitmap, sector_nr, num_sectors);
+ 	} else {
+-		bitmap_clear(&stripe->io_error_bitmap, 0, stripe->nr_sectors);
++		bitmap_clear(&stripe->io_error_bitmap, sector_nr, num_sectors);
+ 	}
+ 	bio_put(&bbio->bio);
+ 	if (atomic_dec_and_test(&stripe->pending_io)) {
+@@ -1705,6 +1715,9 @@ static void scrub_submit_initial_read(struct scrub_ctx *sctx,
+ {
+ 	struct btrfs_fs_info *fs_info = sctx->fs_info;
+ 	struct btrfs_bio *bbio;
++	unsigned int nr_sectors = min(BTRFS_STRIPE_LEN, stripe->bg->start +
++				      stripe->bg->length - stripe->logical) >>
++				  fs_info->sectorsize_bits;
+ 	int mirror = stripe->mirror_num;
+ 
+ 	ASSERT(stripe->bg);
+@@ -1719,14 +1732,16 @@ static void scrub_submit_initial_read(struct scrub_ctx *sctx,
+ 	bbio = btrfs_bio_alloc(SCRUB_STRIPE_PAGES, REQ_OP_READ, fs_info,
+ 			       scrub_read_endio, stripe);
+ 
+-	/* Read the whole stripe. */
+ 	bbio->bio.bi_iter.bi_sector = stripe->logical >> SECTOR_SHIFT;
+-	for (int i = 0; i < BTRFS_STRIPE_LEN >> PAGE_SHIFT; i++) {
++	/* Read the whole range inside the chunk boundary. */
++	for (unsigned int cur = 0; cur < nr_sectors; cur++) {
++		struct page *page = scrub_stripe_get_page(stripe, cur);
++		unsigned int pgoff = scrub_stripe_get_page_offset(stripe, cur);
+ 		int ret;
+ 
+-		ret = bio_add_page(&bbio->bio, stripe->pages[i], PAGE_SIZE, 0);
++		ret = bio_add_page(&bbio->bio, page, fs_info->sectorsize, pgoff);
+ 		/* We should have allocated enough bio vectors. */
+-		ASSERT(ret == PAGE_SIZE);
++		ASSERT(ret == fs_info->sectorsize);
+ 	}
+ 	atomic_inc(&stripe->pending_io);
+ 
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index e6b51fb3ddc1e..84c05246ffd8a 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1783,6 +1783,10 @@ static ssize_t btrfs_devinfo_scrub_speed_max_store(struct kobject *kobj,
+ 	unsigned long long limit;
+ 
+ 	limit = memparse(buf, &endptr);
++	/* There could be a trailing '\n'; also catch any typos after the value. */
++	endptr = skip_spaces(endptr);
++	if (*endptr != 0)
++		return -EINVAL;
+ 	WRITE_ONCE(device->scrub_speed_max, limit);
+ 	return len;
+ }
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 50fdc69fdddf9..6eccf8496486c 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1436,7 +1436,7 @@ static int check_extent_item(struct extent_buffer *leaf,
+ 		if (unlikely(ptr + btrfs_extent_inline_ref_size(inline_type) > end)) {
+ 			extent_err(leaf, slot,
+ "inline ref item overflows extent item, ptr %lu iref size %u end %lu",
+-				   ptr, inline_type, end);
++				   ptr, btrfs_extent_inline_ref_size(inline_type), end);
+ 			return -EUCLEAN;
+ 		}
+ 
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 188378ca19c7f..3779e76a15d64 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -2094,6 +2094,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ 
+ 	map = block_group->physical_map;
+ 
++	spin_lock(&fs_info->zone_active_bgs_lock);
+ 	spin_lock(&block_group->lock);
+ 	if (test_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags)) {
+ 		ret = true;
+@@ -2106,7 +2107,6 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ 		goto out_unlock;
+ 	}
+ 
+-	spin_lock(&fs_info->zone_active_bgs_lock);
+ 	for (i = 0; i < map->num_stripes; i++) {
+ 		struct btrfs_zoned_device_info *zinfo;
+ 		int reserved = 0;
+@@ -2126,20 +2126,17 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ 		 */
+ 		if (atomic_read(&zinfo->active_zones_left) <= reserved) {
+ 			ret = false;
+-			spin_unlock(&fs_info->zone_active_bgs_lock);
+ 			goto out_unlock;
+ 		}
+ 
+ 		if (!btrfs_dev_set_active_zone(device, physical)) {
+ 			/* Cannot activate the zone */
+ 			ret = false;
+-			spin_unlock(&fs_info->zone_active_bgs_lock);
+ 			goto out_unlock;
+ 		}
+ 		if (!is_data)
+ 			zinfo->reserved_active_zones--;
+ 	}
+-	spin_unlock(&fs_info->zone_active_bgs_lock);
+ 
+ 	/* Successfully activated all the zones */
+ 	set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &block_group->runtime_flags);
+@@ -2147,8 +2144,6 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ 
+ 	/* For the active block group list */
+ 	btrfs_get_block_group(block_group);
+-
+-	spin_lock(&fs_info->zone_active_bgs_lock);
+ 	list_add_tail(&block_group->active_bg_list, &fs_info->zone_active_bgs);
+ 	spin_unlock(&fs_info->zone_active_bgs_lock);
+ 
+@@ -2156,6 +2151,7 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ 
+ out_unlock:
+ 	spin_unlock(&block_group->lock);
++	spin_unlock(&fs_info->zone_active_bgs_lock);
+ 	return ret;
+ }
+ 
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 67f8dd8a05ef2..6296c62c10fa9 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -1817,8 +1817,8 @@ static int dlm_tcp_bind(struct socket *sock)
+ 	memcpy(&src_addr, &dlm_local_addr[0], sizeof(src_addr));
+ 	make_sockaddr(&src_addr, 0, &addr_len);
+ 
+-	result = sock->ops->bind(sock, (struct sockaddr *)&src_addr,
+-				 addr_len);
++	result = kernel_bind(sock, (struct sockaddr *)&src_addr,
++			     addr_len);
+ 	if (result < 0) {
+ 		/* This *may* not indicate a critical error */
+ 		log_print("could not bind for connect: %d", result);
+@@ -1830,7 +1830,7 @@ static int dlm_tcp_bind(struct socket *sock)
+ static int dlm_tcp_connect(struct connection *con, struct socket *sock,
+ 			   struct sockaddr *addr, int addr_len)
+ {
+-	return sock->ops->connect(sock, addr, addr_len, O_NONBLOCK);
++	return kernel_connect(sock, addr, addr_len, O_NONBLOCK);
+ }
+ 
+ static int dlm_tcp_listen_validate(void)
+@@ -1862,8 +1862,8 @@ static int dlm_tcp_listen_bind(struct socket *sock)
+ 
+ 	/* Bind to our port */
+ 	make_sockaddr(&dlm_local_addr[0], dlm_config.ci_tcp_port, &addr_len);
+-	return sock->ops->bind(sock, (struct sockaddr *)&dlm_local_addr[0],
+-			       addr_len);
++	return kernel_bind(sock, (struct sockaddr *)&dlm_local_addr[0],
++			   addr_len);
+ }
+ 
+ static const struct dlm_proto_ops dlm_tcp_ops = {
+@@ -1888,12 +1888,12 @@ static int dlm_sctp_connect(struct connection *con, struct socket *sock,
+ 	int ret;
+ 
+ 	/*
+-	 * Make sock->ops->connect() function return in specified time,
++	 * Make kernel_connect() return within the specified time,
+ 	 * since O_NONBLOCK argument in connect() function does not work here,
+ 	 * then, we should restore the default value of this attribute.
+ 	 */
+ 	sock_set_sndtimeo(sock->sk, 5);
+-	ret = sock->ops->connect(sock, addr, addr_len, 0);
++	ret = kernel_connect(sock, addr, addr_len, 0);
+ 	sock_set_sndtimeo(sock->sk, 0);
+ 	return ret;
+ }
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index af98e88908eea..066ddc03b7b4d 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -121,11 +121,11 @@ static int z_erofs_lz4_prepare_dstpages(struct z_erofs_lz4_decompress_ctx *ctx,
+ }
+ 
+ static void *z_erofs_lz4_handle_overlap(struct z_erofs_lz4_decompress_ctx *ctx,
+-			void *inpage, unsigned int *inputmargin, int *maptype,
+-			bool may_inplace)
++			void *inpage, void *out, unsigned int *inputmargin,
++			int *maptype, bool may_inplace)
+ {
+ 	struct z_erofs_decompress_req *rq = ctx->rq;
+-	unsigned int omargin, total, i, j;
++	unsigned int omargin, total, i;
+ 	struct page **in;
+ 	void *src, *tmp;
+ 
+@@ -135,12 +135,13 @@ static void *z_erofs_lz4_handle_overlap(struct z_erofs_lz4_decompress_ctx *ctx,
+ 		    omargin < LZ4_DECOMPRESS_INPLACE_MARGIN(rq->inputsize))
+ 			goto docopy;
+ 
+-		for (i = 0; i < ctx->inpages; ++i) {
+-			DBG_BUGON(rq->in[i] == NULL);
+-			for (j = 0; j < ctx->outpages - ctx->inpages + i; ++j)
+-				if (rq->out[j] == rq->in[i])
+-					goto docopy;
+-		}
++		for (i = 0; i < ctx->inpages; ++i)
++			if (rq->out[ctx->outpages - ctx->inpages + i] !=
++			    rq->in[i])
++				goto docopy;
++		kunmap_local(inpage);
++		*maptype = 3;
++		return out + ((ctx->outpages - ctx->inpages) << PAGE_SHIFT);
+ 	}
+ 
+ 	if (ctx->inpages <= 1) {
+@@ -148,7 +149,6 @@ static void *z_erofs_lz4_handle_overlap(struct z_erofs_lz4_decompress_ctx *ctx,
+ 		return inpage;
+ 	}
+ 	kunmap_local(inpage);
+-	might_sleep();
+ 	src = erofs_vm_map_ram(rq->in, ctx->inpages);
+ 	if (!src)
+ 		return ERR_PTR(-ENOMEM);
+@@ -204,12 +204,12 @@ int z_erofs_fixup_insize(struct z_erofs_decompress_req *rq, const char *padbuf,
+ }
+ 
+ static int z_erofs_lz4_decompress_mem(struct z_erofs_lz4_decompress_ctx *ctx,
+-				      u8 *out)
++				      u8 *dst)
+ {
+ 	struct z_erofs_decompress_req *rq = ctx->rq;
+ 	bool support_0padding = false, may_inplace = false;
+ 	unsigned int inputmargin;
+-	u8 *headpage, *src;
++	u8 *out, *headpage, *src;
+ 	int ret, maptype;
+ 
+ 	DBG_BUGON(*rq->in == NULL);
+@@ -230,11 +230,12 @@ static int z_erofs_lz4_decompress_mem(struct z_erofs_lz4_decompress_ctx *ctx,
+ 	}
+ 
+ 	inputmargin = rq->pageofs_in;
+-	src = z_erofs_lz4_handle_overlap(ctx, headpage, &inputmargin,
++	src = z_erofs_lz4_handle_overlap(ctx, headpage, dst, &inputmargin,
+ 					 &maptype, may_inplace);
+ 	if (IS_ERR(src))
+ 		return PTR_ERR(src);
+ 
++	out = dst + rq->pageofs_out;
+ 	/* legacy format could compress extra data in a pcluster. */
+ 	if (rq->partial_decoding || !support_0padding)
+ 		ret = LZ4_decompress_safe_partial(src + inputmargin, out,
+@@ -265,7 +266,7 @@ static int z_erofs_lz4_decompress_mem(struct z_erofs_lz4_decompress_ctx *ctx,
+ 		vm_unmap_ram(src, ctx->inpages);
+ 	} else if (maptype == 2) {
+ 		erofs_put_pcpubuf(src);
+-	} else {
++	} else if (maptype != 3) {
+ 		DBG_BUGON(1);
+ 		return -EFAULT;
+ 	}
+@@ -308,7 +309,7 @@ static int z_erofs_lz4_decompress(struct z_erofs_decompress_req *rq,
+ 	}
+ 
+ dstmap_out:
+-	ret = z_erofs_lz4_decompress_mem(&ctx, dst + rq->pageofs_out);
++	ret = z_erofs_lz4_decompress_mem(&ctx, dst);
+ 	if (!dst_maptype)
+ 		kunmap_local(dst);
+ 	else if (dst_maptype == 2)
+diff --git a/fs/exec.c b/fs/exec.c
+index 4aa19b24f2810..6d9ed2d765efe 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1408,6 +1408,9 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 
+ out_unlock:
+ 	up_write(&me->signal->exec_update_lock);
++	if (!bprm->cred)
++		mutex_unlock(&me->signal->cred_guard_mutex);
++
+ out:
+ 	return retval;
+ }
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index d72b5e3c92ec4..ab023d709f72b 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6735,11 +6735,16 @@ __acquires(bitlock)
+ static ext4_grpblk_t ext4_last_grp_cluster(struct super_block *sb,
+ 					   ext4_group_t grp)
+ {
+-	if (grp < ext4_get_groups_count(sb))
+-		return EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+-	return (ext4_blocks_count(EXT4_SB(sb)->s_es) -
+-		ext4_group_first_block_no(sb, grp) - 1) >>
+-					EXT4_CLUSTER_BITS(sb);
++	unsigned long nr_clusters_in_group;
++
++	if (grp < (ext4_get_groups_count(sb) - 1))
++		nr_clusters_in_group = EXT4_CLUSTERS_PER_GROUP(sb);
++	else
++		nr_clusters_in_group = (ext4_blocks_count(EXT4_SB(sb)->s_es) -
++					ext4_group_first_block_no(sb, grp))
++				       >> EXT4_CLUSTER_BITS(sb);
++
++	return nr_clusters_in_group - 1;
+ }
+ 
+ static bool ext4_trim_interrupted(void)
+diff --git a/fs/fscache/cache.c b/fs/fscache/cache.c
+index d645f8b302a27..9397ed39b0b4e 100644
+--- a/fs/fscache/cache.c
++++ b/fs/fscache/cache.c
+@@ -179,13 +179,14 @@ EXPORT_SYMBOL(fscache_acquire_cache);
+ void fscache_put_cache(struct fscache_cache *cache,
+ 		       enum fscache_cache_trace where)
+ {
+-	unsigned int debug_id = cache->debug_id;
++	unsigned int debug_id;
+ 	bool zero;
+ 	int ref;
+ 
+ 	if (IS_ERR_OR_NULL(cache))
+ 		return;
+ 
++	debug_id = cache->debug_id;
+ 	zero = __refcount_dec_and_test(&cache->ref, &ref);
+ 	trace_fscache_cache(debug_id, ref - 1, where);
+ 
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index f5fd99d6b0d4e..76cf22ac97d76 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -920,8 +920,7 @@ COMPAT_SYSCALL_DEFINE3(ioctl, unsigned int, fd, unsigned int, cmd,
+ 	if (!f.file)
+ 		return -EBADF;
+ 
+-	/* RED-PEN how should LSM module know it's handling 32bit? */
+-	error = security_file_ioctl(f.file, cmd, arg);
++	error = security_file_ioctl_compat(f.file, cmd, arg);
+ 	if (error)
+ 		goto out;
+ 
+diff --git a/fs/namei.c b/fs/namei.c
+index 71c13b2990b44..29bafbdb44ca8 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3021,20 +3021,14 @@ static struct dentry *lock_two_directories(struct dentry *p1, struct dentry *p2)
+ 	p = d_ancestor(p2, p1);
+ 	if (p) {
+ 		inode_lock_nested(p2->d_inode, I_MUTEX_PARENT);
+-		inode_lock_nested(p1->d_inode, I_MUTEX_CHILD);
++		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT2);
+ 		return p;
+ 	}
+ 
+ 	p = d_ancestor(p1, p2);
+-	if (p) {
+-		inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
+-		inode_lock_nested(p2->d_inode, I_MUTEX_CHILD);
+-		return p;
+-	}
+-
+-	lock_two_inodes(p1->d_inode, p2->d_inode,
+-			I_MUTEX_PARENT, I_MUTEX_PARENT2);
+-	return NULL;
++	inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
++	inode_lock_nested(p2->d_inode, I_MUTEX_PARENT2);
++	return p;
+ }
+ 
+ /*
+@@ -4716,11 +4710,12 @@ SYSCALL_DEFINE2(link, const char __user *, oldname, const char __user *, newname
+  *
+  *	a) we can get into loop creation.
+  *	b) race potential - two innocent renames can create a loop together.
+- *	   That's where 4.4 screws up. Current fix: serialization on
++ *	   That's where 4.4BSD screws up. Current fix: serialization on
+  *	   sb->s_vfs_rename_mutex. We might be more accurate, but that's another
+  *	   story.
+- *	c) we have to lock _four_ objects - parents and victim (if it exists),
+- *	   and source.
++ *	c) we may have to lock up to _four_ objects - parents and victim (if it exists),
++ *	   and source (if it's a non-directory or a subdirectory that moves to
++ *	   a different parent).
+  *	   And that - after we got ->i_mutex on parents (until then we don't know
+  *	   whether the target exists).  Solution: try to be smart with locking
+  *	   order for inodes.  We rely on the fact that tree topology may change
+@@ -4752,6 +4747,7 @@ int vfs_rename(struct renamedata *rd)
+ 	bool new_is_dir = false;
+ 	unsigned max_links = new_dir->i_sb->s_max_links;
+ 	struct name_snapshot old_name;
++	bool lock_old_subdir, lock_new_subdir;
+ 
+ 	if (source == target)
+ 		return 0;
+@@ -4805,15 +4801,32 @@ int vfs_rename(struct renamedata *rd)
+ 	take_dentry_name_snapshot(&old_name, old_dentry);
+ 	dget(new_dentry);
+ 	/*
+-	 * Lock all moved children. Moved directories may need to change parent
+-	 * pointer so they need the lock to prevent against concurrent
+-	 * directory changes moving parent pointer. For regular files we've
+-	 * historically always done this. The lockdep locking subclasses are
+-	 * somewhat arbitrary but RENAME_EXCHANGE in particular can swap
+-	 * regular files and directories so it's difficult to tell which
+-	 * subclasses to use.
++	 * Lock children.
++	 * The source subdirectory needs to be locked on cross-directory
++	 * rename or cross-directory exchange since its parent changes.
++	 * The target subdirectory needs to be locked on cross-directory
++	 * exchange due to parent change and on any rename due to becoming
++	 * a victim.
++	 * Non-directories need locking in all cases (for NFS reasons);
++	 * they get locked after any subdirectories (in inode address order).
++	 *
++	 * NOTE: WE ONLY LOCK UNRELATED DIRECTORIES IN CROSS-DIRECTORY CASE.
++	 * NEVER, EVER DO THAT WITHOUT ->s_vfs_rename_mutex.
+ 	 */
+-	lock_two_inodes(source, target, I_MUTEX_NORMAL, I_MUTEX_NONDIR2);
++	lock_old_subdir = new_dir != old_dir;
++	lock_new_subdir = new_dir != old_dir || !(flags & RENAME_EXCHANGE);
++	if (is_dir) {
++		if (lock_old_subdir)
++			inode_lock_nested(source, I_MUTEX_CHILD);
++		if (target && (!new_is_dir || lock_new_subdir))
++			inode_lock(target);
++	} else if (new_is_dir) {
++		if (lock_new_subdir)
++			inode_lock_nested(target, I_MUTEX_CHILD);
++		inode_lock(source);
++	} else {
++		lock_two_nondirectories(source, target);
++	}
+ 
+ 	error = -EPERM;
+ 	if (IS_SWAPFILE(source) || (target && IS_SWAPFILE(target)))
+@@ -4861,8 +4874,9 @@ int vfs_rename(struct renamedata *rd)
+ 			d_exchange(old_dentry, new_dentry);
+ 	}
+ out:
+-	inode_unlock(source);
+-	if (target)
++	if (!is_dir || lock_old_subdir)
++		inode_unlock(source);
++	if (target && (!new_is_dir || lock_new_subdir))
+ 		inode_unlock(target);
+ 	dput(new_dentry);
+ 	if (!error) {
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 3edbfa0233e68..21180fa953806 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -7911,14 +7911,16 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ {
+ 	struct file_lock *fl;
+ 	int status = false;
+-	struct nfsd_file *nf = find_any_file(fp);
++	struct nfsd_file *nf;
+ 	struct inode *inode;
+ 	struct file_lock_context *flctx;
+ 
++	spin_lock(&fp->fi_lock);
++	nf = find_any_file_locked(fp);
+ 	if (!nf) {
+ 		/* Any valid lock stateid should have some sort of access */
+ 		WARN_ON_ONCE(1);
+-		return status;
++		goto out;
+ 	}
+ 
+ 	inode = file_inode(nf->nf_file);
+@@ -7934,7 +7936,8 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ 		}
+ 		spin_unlock(&flctx->flc_lock);
+ 	}
+-	nfsd_file_put(nf);
++out:
++	spin_unlock(&fp->fi_lock);
+ 	return status;
+ }
+ 
+@@ -7944,10 +7947,8 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+  * @cstate: NFSv4 COMPOUND state
+  * @u: RELEASE_LOCKOWNER arguments
+  *
+- * The lockowner's so_count is bumped when a lock record is added
+- * or when copying a conflicting lock. The latter case is brief,
+- * but can lead to fleeting false positives when looking for
+- * locks-in-use.
++ * Check if there are any locks still held and, if not, free the lockowner
++ * and any lock state that is owned.
+  *
+  * Return values:
+  *   %nfs_ok: lockowner released or not found
+@@ -7983,10 +7984,13 @@ nfsd4_release_lockowner(struct svc_rqst *rqstp,
+ 		spin_unlock(&clp->cl_lock);
+ 		return nfs_ok;
+ 	}
+-	if (atomic_read(&lo->lo_owner.so_count) != 2) {
+-		spin_unlock(&clp->cl_lock);
+-		nfs4_put_stateowner(&lo->lo_owner);
+-		return nfserr_locks_held;
++
++	list_for_each_entry(stp, &lo->lo_owner.so_stateids, st_perstateowner) {
++		if (check_for_locks(stp->st_stid.sc_file, lo)) {
++			spin_unlock(&clp->cl_lock);
++			nfs4_put_stateowner(&lo->lo_owner);
++			return nfserr_locks_held;
++		}
+ 	}
+ 	unhash_lockowner_locked(lo);
+ 	while (!list_empty(&lo->lo_owner.so_stateids)) {
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index 03bc8d5dfa318..22c29e540e3ce 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -18,10 +18,11 @@
+ 
+ struct ovl_lookup_data {
+ 	struct super_block *sb;
+-	struct vfsmount *mnt;
++	const struct ovl_layer *layer;
+ 	struct qstr name;
+ 	bool is_dir;
+ 	bool opaque;
++	bool xwhiteouts;
+ 	bool stop;
+ 	bool last;
+ 	char *redirect;
+@@ -201,17 +202,13 @@ struct dentry *ovl_decode_real_fh(struct ovl_fs *ofs, struct ovl_fh *fh,
+ 	return real;
+ }
+ 
+-static bool ovl_is_opaquedir(struct ovl_fs *ofs, const struct path *path)
+-{
+-	return ovl_path_check_dir_xattr(ofs, path, OVL_XATTR_OPAQUE);
+-}
+-
+ static struct dentry *ovl_lookup_positive_unlocked(struct ovl_lookup_data *d,
+ 						   const char *name,
+ 						   struct dentry *base, int len,
+ 						   bool drop_negative)
+ {
+-	struct dentry *ret = lookup_one_unlocked(mnt_idmap(d->mnt), name, base, len);
++	struct dentry *ret = lookup_one_unlocked(mnt_idmap(d->layer->mnt), name,
++						 base, len);
+ 
+ 	if (!IS_ERR(ret) && d_flags_negative(smp_load_acquire(&ret->d_flags))) {
+ 		if (drop_negative && ret->d_lockref.count == 1) {
+@@ -232,10 +229,13 @@ static int ovl_lookup_single(struct dentry *base, struct ovl_lookup_data *d,
+ 			     size_t prelen, const char *post,
+ 			     struct dentry **ret, bool drop_negative)
+ {
++	struct ovl_fs *ofs = OVL_FS(d->sb);
+ 	struct dentry *this;
+ 	struct path path;
+ 	int err;
+ 	bool last_element = !post[0];
++	bool is_upper = d->layer->idx == 0;
++	char val;
+ 
+ 	this = ovl_lookup_positive_unlocked(d, name, base, namelen, drop_negative);
+ 	if (IS_ERR(this)) {
+@@ -253,8 +253,8 @@ static int ovl_lookup_single(struct dentry *base, struct ovl_lookup_data *d,
+ 	}
+ 
+ 	path.dentry = this;
+-	path.mnt = d->mnt;
+-	if (ovl_path_is_whiteout(OVL_FS(d->sb), &path)) {
++	path.mnt = d->layer->mnt;
++	if (ovl_path_is_whiteout(ofs, &path)) {
+ 		d->stop = d->opaque = true;
+ 		goto put_and_out;
+ 	}
+@@ -272,7 +272,7 @@ static int ovl_lookup_single(struct dentry *base, struct ovl_lookup_data *d,
+ 			d->stop = true;
+ 			goto put_and_out;
+ 		}
+-		err = ovl_check_metacopy_xattr(OVL_FS(d->sb), &path, NULL);
++		err = ovl_check_metacopy_xattr(ofs, &path, NULL);
+ 		if (err < 0)
+ 			goto out_err;
+ 
+@@ -292,7 +292,12 @@ static int ovl_lookup_single(struct dentry *base, struct ovl_lookup_data *d,
+ 		if (d->last)
+ 			goto out;
+ 
+-		if (ovl_is_opaquedir(OVL_FS(d->sb), &path)) {
++		/* overlay.opaque=x means xwhiteouts directory */
++		val = ovl_get_opaquedir_val(ofs, &path);
++		if (last_element && !is_upper && val == 'x') {
++			d->xwhiteouts = true;
++			ovl_layer_set_xwhiteouts(ofs, d->layer);
++		} else if (val == 'y') {
+ 			d->stop = true;
+ 			if (last_element)
+ 				d->opaque = true;
+@@ -863,7 +868,8 @@ fail:
+  * Returns next layer in stack starting from top.
+  * Returns -1 if this is the last layer.
+  */
+-int ovl_path_next(int idx, struct dentry *dentry, struct path *path)
++int ovl_path_next(int idx, struct dentry *dentry, struct path *path,
++		  const struct ovl_layer **layer)
+ {
+ 	struct ovl_entry *oe = OVL_E(dentry);
+ 	struct ovl_path *lowerstack = ovl_lowerstack(oe);
+@@ -871,13 +877,16 @@ int ovl_path_next(int idx, struct dentry *dentry, struct path *path)
+ 	BUG_ON(idx < 0);
+ 	if (idx == 0) {
+ 		ovl_path_upper(dentry, path);
+-		if (path->dentry)
++		if (path->dentry) {
++			*layer = &OVL_FS(dentry->d_sb)->layers[0];
+ 			return ovl_numlower(oe) ? 1 : -1;
++		}
+ 		idx++;
+ 	}
+ 	BUG_ON(idx > ovl_numlower(oe));
+ 	path->dentry = lowerstack[idx - 1].dentry;
+-	path->mnt = lowerstack[idx - 1].layer->mnt;
++	*layer = lowerstack[idx - 1].layer;
++	path->mnt = (*layer)->mnt;
+ 
+ 	return (idx < ovl_numlower(oe)) ? idx + 1 : -1;
+ }
+@@ -1055,7 +1064,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 	old_cred = ovl_override_creds(dentry->d_sb);
+ 	upperdir = ovl_dentry_upper(dentry->d_parent);
+ 	if (upperdir) {
+-		d.mnt = ovl_upper_mnt(ofs);
++		d.layer = &ofs->layers[0];
+ 		err = ovl_lookup_layer(upperdir, &d, &upperdentry, true);
+ 		if (err)
+ 			goto out;
+@@ -1111,7 +1120,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 		else if (d.is_dir || !ofs->numdatalayer)
+ 			d.last = lower.layer->idx == ovl_numlower(roe);
+ 
+-		d.mnt = lower.layer->mnt;
++		d.layer = lower.layer;
+ 		err = ovl_lookup_layer(lower.dentry, &d, &this, false);
+ 		if (err)
+ 			goto out_put;
+@@ -1278,6 +1287,8 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 
+ 	if (upperopaque)
+ 		ovl_dentry_set_opaque(dentry);
++	if (d.xwhiteouts)
++		ovl_dentry_set_xwhiteouts(dentry);
+ 
+ 	if (upperdentry)
+ 		ovl_dentry_set_upper_alias(dentry);
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 05c3dd597fa8d..18132ab04ead4 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -50,7 +50,6 @@ enum ovl_xattr {
+ 	OVL_XATTR_METACOPY,
+ 	OVL_XATTR_PROTATTR,
+ 	OVL_XATTR_XWHITEOUT,
+-	OVL_XATTR_XWHITEOUTS,
+ };
+ 
+ enum ovl_inode_flag {
+@@ -70,6 +69,8 @@ enum ovl_entry_flag {
+ 	OVL_E_UPPER_ALIAS,
+ 	OVL_E_OPAQUE,
+ 	OVL_E_CONNECTED,
++	/* Lower stack may contain xwhiteout entries */
++	OVL_E_XWHITEOUTS,
+ };
+ 
+ enum {
+@@ -471,6 +472,10 @@ bool ovl_dentry_test_flag(unsigned long flag, struct dentry *dentry);
+ bool ovl_dentry_is_opaque(struct dentry *dentry);
+ bool ovl_dentry_is_whiteout(struct dentry *dentry);
+ void ovl_dentry_set_opaque(struct dentry *dentry);
++bool ovl_dentry_has_xwhiteouts(struct dentry *dentry);
++void ovl_dentry_set_xwhiteouts(struct dentry *dentry);
++void ovl_layer_set_xwhiteouts(struct ovl_fs *ofs,
++			      const struct ovl_layer *layer);
+ bool ovl_dentry_has_upper_alias(struct dentry *dentry);
+ void ovl_dentry_set_upper_alias(struct dentry *dentry);
+ bool ovl_dentry_needs_data_copy_up(struct dentry *dentry, int flags);
+@@ -488,11 +493,10 @@ struct file *ovl_path_open(const struct path *path, int flags);
+ int ovl_copy_up_start(struct dentry *dentry, int flags);
+ void ovl_copy_up_end(struct dentry *dentry);
+ bool ovl_already_copied_up(struct dentry *dentry, int flags);
+-bool ovl_path_check_dir_xattr(struct ovl_fs *ofs, const struct path *path,
+-			      enum ovl_xattr ox);
++char ovl_get_dir_xattr_val(struct ovl_fs *ofs, const struct path *path,
++			   enum ovl_xattr ox);
+ bool ovl_path_check_origin_xattr(struct ovl_fs *ofs, const struct path *path);
+ bool ovl_path_check_xwhiteout_xattr(struct ovl_fs *ofs, const struct path *path);
+-bool ovl_path_check_xwhiteouts_xattr(struct ovl_fs *ofs, const struct path *path);
+ bool ovl_init_uuid_xattr(struct super_block *sb, struct ovl_fs *ofs,
+ 			 const struct path *upperpath);
+ 
+@@ -567,7 +571,13 @@ static inline bool ovl_is_impuredir(struct super_block *sb,
+ 		.mnt = ovl_upper_mnt(ofs),
+ 	};
+ 
+-	return ovl_path_check_dir_xattr(ofs, &upperpath, OVL_XATTR_IMPURE);
++	return ovl_get_dir_xattr_val(ofs, &upperpath, OVL_XATTR_IMPURE) == 'y';
++}
++
++static inline char ovl_get_opaquedir_val(struct ovl_fs *ofs,
++					 const struct path *path)
++{
++	return ovl_get_dir_xattr_val(ofs, path, OVL_XATTR_OPAQUE);
+ }
+ 
+ static inline bool ovl_redirect_follow(struct ovl_fs *ofs)
+@@ -674,7 +684,8 @@ int ovl_get_index_name(struct ovl_fs *ofs, struct dentry *origin,
+ struct dentry *ovl_get_index_fh(struct ovl_fs *ofs, struct ovl_fh *fh);
+ struct dentry *ovl_lookup_index(struct ovl_fs *ofs, struct dentry *upper,
+ 				struct dentry *origin, bool verify);
+-int ovl_path_next(int idx, struct dentry *dentry, struct path *path);
++int ovl_path_next(int idx, struct dentry *dentry, struct path *path,
++		  const struct ovl_layer **layer);
+ int ovl_verify_lowerdata(struct dentry *dentry);
+ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 			  unsigned int flags);
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index d82d2a043da2c..f02c4bf231825 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -40,6 +40,8 @@ struct ovl_layer {
+ 	int idx;
+ 	/* One fsid per unique underlying sb (upper fsid == 0) */
+ 	int fsid;
++	/* xwhiteouts were found on this layer */
++	bool has_xwhiteouts;
+ };
+ 
+ struct ovl_path {
+@@ -59,7 +61,7 @@ struct ovl_fs {
+ 	unsigned int numfs;
+ 	/* Number of data-only lower layers */
+ 	unsigned int numdatalayer;
+-	const struct ovl_layer *layers;
++	struct ovl_layer *layers;
+ 	struct ovl_sb *fs;
+ 	/* workbasedir is the path at workdir= mount option */
+ 	struct dentry *workbasedir;
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index a490fc47c3e7e..8e8545bc273f6 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -305,8 +305,6 @@ static inline int ovl_dir_read(const struct path *realpath,
+ 	if (IS_ERR(realfile))
+ 		return PTR_ERR(realfile);
+ 
+-	rdd->in_xwhiteouts_dir = rdd->dentry &&
+-		ovl_path_check_xwhiteouts_xattr(OVL_FS(rdd->dentry->d_sb), realpath);
+ 	rdd->first_maybe_whiteout = NULL;
+ 	rdd->ctx.pos = 0;
+ 	do {
+@@ -359,10 +357,13 @@ static int ovl_dir_read_merged(struct dentry *dentry, struct list_head *list,
+ 		.is_lowest = false,
+ 	};
+ 	int idx, next;
++	const struct ovl_layer *layer;
+ 
+ 	for (idx = 0; idx != -1; idx = next) {
+-		next = ovl_path_next(idx, dentry, &realpath);
++		next = ovl_path_next(idx, dentry, &realpath, &layer);
+ 		rdd.is_upper = ovl_dentry_upper(dentry) == realpath.dentry;
++		rdd.in_xwhiteouts_dir = layer->has_xwhiteouts &&
++					ovl_dentry_has_xwhiteouts(dentry);
+ 
+ 		if (next != -1) {
+ 			err = ovl_dir_read(&realpath, &rdd);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index a0967bb250036..36543e9ffd744 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1250,6 +1250,7 @@ static struct dentry *ovl_get_root(struct super_block *sb,
+ 				   struct ovl_entry *oe)
+ {
+ 	struct dentry *root;
++	struct ovl_fs *ofs = OVL_FS(sb);
+ 	struct ovl_path *lowerpath = ovl_lowerstack(oe);
+ 	unsigned long ino = d_inode(lowerpath->dentry)->i_ino;
+ 	int fsid = lowerpath->layer->fsid;
+@@ -1271,6 +1272,20 @@ static struct dentry *ovl_get_root(struct super_block *sb,
+ 			ovl_set_flag(OVL_IMPURE, d_inode(root));
+ 	}
+ 
++	/* Look for xwhiteouts marker except in the lowermost layer */
++	for (int i = 0; i < ovl_numlower(oe) - 1; i++, lowerpath++) {
++		struct path path = {
++			.mnt = lowerpath->layer->mnt,
++			.dentry = lowerpath->dentry,
++		};
++
++		/* overlay.opaque=x means xwhiteouts directory */
++		if (ovl_get_opaquedir_val(ofs, &path) == 'x') {
++			ovl_layer_set_xwhiteouts(ofs, lowerpath->layer);
++			ovl_dentry_set_xwhiteouts(root);
++		}
++	}
++
+ 	/* Root is always merge -> can have whiteouts */
+ 	ovl_set_flag(OVL_WHITEOUTS, d_inode(root));
+ 	ovl_dentry_set_flag(OVL_E_CONNECTED, root);
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index c3f020ca13a8c..cd0722309e5d1 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -461,6 +461,33 @@ void ovl_dentry_set_opaque(struct dentry *dentry)
+ 	ovl_dentry_set_flag(OVL_E_OPAQUE, dentry);
+ }
+ 
++bool ovl_dentry_has_xwhiteouts(struct dentry *dentry)
++{
++	return ovl_dentry_test_flag(OVL_E_XWHITEOUTS, dentry);
++}
++
++void ovl_dentry_set_xwhiteouts(struct dentry *dentry)
++{
++	ovl_dentry_set_flag(OVL_E_XWHITEOUTS, dentry);
++}
++
++/*
++ * ovl_layer_set_xwhiteouts() is called before adding the overlay dir
++ * dentry to dcache, while readdir of that same directory happens after
++ * the overlay dir dentry is in dcache, so if some CPU observes
++ * ovl_dentry_has_xwhiteouts(), it will also observe layer->has_xwhiteouts
++ * for the layers where the xwhiteouts marker was found in that merge dir.
++ */
++void ovl_layer_set_xwhiteouts(struct ovl_fs *ofs,
++			      const struct ovl_layer *layer)
++{
++	if (layer->has_xwhiteouts)
++		return;
++
++	/* Write once to read-mostly layer properties */
++	ofs->layers[layer->idx].has_xwhiteouts = true;
++}
++
+ /*
+  * For hard links and decoded file handles, it's possible for ovl_dentry_upper()
+  * to return positive, while there's no actual upper alias for the inode.
+@@ -739,19 +766,6 @@ bool ovl_path_check_xwhiteout_xattr(struct ovl_fs *ofs, const struct path *path)
+ 	return res >= 0;
+ }
+ 
+-bool ovl_path_check_xwhiteouts_xattr(struct ovl_fs *ofs, const struct path *path)
+-{
+-	struct dentry *dentry = path->dentry;
+-	int res;
+-
+-	/* xattr.whiteouts must be a directory */
+-	if (!d_is_dir(dentry))
+-		return false;
+-
+-	res = ovl_path_getxattr(ofs, path, OVL_XATTR_XWHITEOUTS, NULL, 0);
+-	return res >= 0;
+-}
+-
+ /*
+  * Load persistent uuid from xattr into s_uuid if found, or store a new
+  * random generated value in s_uuid and in xattr.
+@@ -811,20 +825,17 @@ fail:
+ 	return false;
+ }
+ 
+-bool ovl_path_check_dir_xattr(struct ovl_fs *ofs, const struct path *path,
+-			       enum ovl_xattr ox)
++char ovl_get_dir_xattr_val(struct ovl_fs *ofs, const struct path *path,
++			   enum ovl_xattr ox)
+ {
+ 	int res;
+ 	char val;
+ 
+ 	if (!d_is_dir(path->dentry))
+-		return false;
++		return 0;
+ 
+ 	res = ovl_path_getxattr(ofs, path, ox, &val, 1);
+-	if (res == 1 && val == 'y')
+-		return true;
+-
+-	return false;
++	return res == 1 ? val : 0;
+ }
+ 
+ #define OVL_XATTR_OPAQUE_POSTFIX	"opaque"
+@@ -837,7 +848,6 @@ bool ovl_path_check_dir_xattr(struct ovl_fs *ofs, const struct path *path,
+ #define OVL_XATTR_METACOPY_POSTFIX	"metacopy"
+ #define OVL_XATTR_PROTATTR_POSTFIX	"protattr"
+ #define OVL_XATTR_XWHITEOUT_POSTFIX	"whiteout"
+-#define OVL_XATTR_XWHITEOUTS_POSTFIX	"whiteouts"
+ 
+ #define OVL_XATTR_TAB_ENTRY(x) \
+ 	[x] = { [false] = OVL_XATTR_TRUSTED_PREFIX x ## _POSTFIX, \
+@@ -854,7 +864,6 @@ const char *const ovl_xattr_table[][2] = {
+ 	OVL_XATTR_TAB_ENTRY(OVL_XATTR_METACOPY),
+ 	OVL_XATTR_TAB_ENTRY(OVL_XATTR_PROTATTR),
+ 	OVL_XATTR_TAB_ENTRY(OVL_XATTR_XWHITEOUT),
+-	OVL_XATTR_TAB_ENTRY(OVL_XATTR_XWHITEOUTS),
+ };
+ 
+ int ovl_check_setxattr(struct ovl_fs *ofs, struct dentry *upperdentry,
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 226e7f66b5903..8d9286a1f2e85 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -1324,6 +1324,11 @@ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots)
+ 	pipe->tail = tail;
+ 	pipe->head = head;
+ 
++	if (!pipe_has_watch_queue(pipe)) {
++		pipe->max_usage = nr_slots;
++		pipe->nr_accounted = nr_slots;
++	}
++
+ 	spin_unlock_irq(&pipe->rd_wait.lock);
+ 
+ 	/* This might have made more room for writers */
+@@ -1375,8 +1380,6 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned int arg)
+ 	if (ret < 0)
+ 		goto out_revert_acct;
+ 
+-	pipe->max_usage = nr_slots;
+-	pipe->nr_accounted = nr_slots;
+ 	return pipe->max_usage * PAGE_SIZE;
+ 
+ out_revert_acct:
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 435b61054b5b9..4905420d339f4 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -2415,7 +2415,6 @@ static long pagemap_scan_flush_buffer(struct pagemap_scan_private *p)
+ 
+ static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
+ {
+-	struct mmu_notifier_range range;
+ 	struct pagemap_scan_private p = {0};
+ 	unsigned long walk_start;
+ 	size_t n_ranges_out = 0;
+@@ -2431,15 +2430,9 @@ static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Protection change for the range is going to happen. */
+-	if (p.arg.flags & PM_SCAN_WP_MATCHING) {
+-		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
+-					mm, p.arg.start, p.arg.end);
+-		mmu_notifier_invalidate_range_start(&range);
+-	}
+-
+ 	for (walk_start = p.arg.start; walk_start < p.arg.end;
+ 			walk_start = p.arg.walk_end) {
++		struct mmu_notifier_range range;
+ 		long n_out;
+ 
+ 		if (fatal_signal_pending(current)) {
+@@ -2450,8 +2443,20 @@ static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
+ 		ret = mmap_read_lock_killable(mm);
+ 		if (ret)
+ 			break;
++
++		/* Protection change for the range is going to happen. */
++		if (p.arg.flags & PM_SCAN_WP_MATCHING) {
++			mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
++						mm, walk_start, p.arg.end);
++			mmu_notifier_invalidate_range_start(&range);
++		}
++
+ 		ret = walk_page_range(mm, walk_start, p.arg.end,
+ 				      &pagemap_scan_ops, &p);
++
++		if (p.arg.flags & PM_SCAN_WP_MATCHING)
++			mmu_notifier_invalidate_range_end(&range);
++
+ 		mmap_read_unlock(mm);
+ 
+ 		n_out = pagemap_scan_flush_buffer(&p);
+@@ -2477,9 +2482,6 @@ static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
+ 	if (pagemap_scan_writeback_args(&p.arg, uarg))
+ 		ret = -EFAULT;
+ 
+-	if (p.arg.flags & PM_SCAN_WP_MATCHING)
+-		mmu_notifier_invalidate_range_end(&range);
+-
+ 	kfree(p.vec_buf);
+ 	return ret;
+ }
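
For context, a minimal userspace sketch of the shape of this fix (simplified: the real hunk only brackets when PM_SCAN_WP_MATCHING is set): the invalidation window is opened and closed inside each loop iteration, under the same lock as the walk, and starts at the current walk position instead of re-covering pages already walked. range_begin()/range_end() are illustrative stand-ins, not kernel API.

#include <stdio.h>

/* Illustrative stand-ins for mmu_notifier_invalidate_range_{start,end}(). */
static void range_begin(unsigned long start, unsigned long end)
{
	printf("begin [%#lx, %#lx)\n", start, end);
}

static void range_end(unsigned long start, unsigned long end)
{
	printf("end   [%#lx, %#lx)\n", start, end);
}

int main(void)
{
	unsigned long start = 0x1000, end = 0x5000, step = 0x1000;

	/*
	 * Before the fix, one bracket was opened before the loop and
	 * closed after it, spanning lock drops in between.  After the
	 * fix, each iteration opens and closes its own bracket, paired
	 * with exactly one walk.
	 */
	for (unsigned long pos = start; pos < end; pos += step) {
		range_begin(pos, end);
		/* walk_page_range(mm, pos, end, ...) happens here */
		range_end(pos, end);
	}
	return 0;
}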
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 14bc745de199b..beb81fa00cff4 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -614,7 +614,8 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 				 "multichannel not available\n"
+ 				 "Empty network interface list returned by server %s\n",
+ 				 ses->server->hostname);
+-		rc = -EINVAL;
++		rc = -EOPNOTSUPP;
++		ses->iface_last_update = jiffies;
+ 		goto out;
+ 	}
+ 
+@@ -712,7 +713,6 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 
+ 		ses->iface_count++;
+ 		spin_unlock(&ses->iface_lock);
+-		ses->iface_last_update = jiffies;
+ next_iface:
+ 		nb_iface++;
+ 		next = le32_to_cpu(p->Next);
+@@ -734,11 +734,7 @@ next_iface:
+ 	if ((bytes_left > 8) || p->Next)
+ 		cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);
+ 
+-
+-	if (!ses->iface_count) {
+-		rc = -EINVAL;
+-		goto out;
+-	}
++	ses->iface_last_update = jiffies;
+ 
+ out:
+ 	/*
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 4f971c1061f0a..f5006aa97f5b3 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -156,6 +156,56 @@ out:
+ 	return;
+ }
+ 
++/* helper function for code reuse */
++static int
++cifs_chan_skip_or_disable(struct cifs_ses *ses,
++			  struct TCP_Server_Info *server,
++			  bool from_reconnect)
++{
++	struct TCP_Server_Info *pserver;
++	unsigned int chan_index;
++
++	if (SERVER_IS_CHAN(server)) {
++		cifs_dbg(VFS,
++			"server %s does not support multichannel anymore. Skip secondary channel\n",
++			 ses->server->hostname);
++
++		spin_lock(&ses->chan_lock);
++		chan_index = cifs_ses_get_chan_index(ses, server);
++		if (chan_index == CIFS_INVAL_CHAN_INDEX) {
++			spin_unlock(&ses->chan_lock);
++			goto skip_terminate;
++		}
++
++		ses->chans[chan_index].server = NULL;
++		spin_unlock(&ses->chan_lock);
++
++		/*
++		 * the above reference of server by channel
++		 * needs to be dropped without holding chan_lock
++		 * as cifs_put_tcp_session takes a higher lock
++		 * i.e. cifs_tcp_ses_lock
++		 */
++		cifs_put_tcp_session(server, from_reconnect);
++
++		server->terminate = true;
++		cifs_signal_cifsd_for_reconnect(server, false);
++
++		/* mark primary server as needing reconnect */
++		pserver = server->primary_server;
++		cifs_signal_cifsd_for_reconnect(pserver, false);
++skip_terminate:
++		return -EHOSTDOWN;
++	}
++
++	cifs_server_dbg(VFS,
++		"server does not support multichannel anymore. Disable all other channels\n");
++	cifs_disable_secondary_channels(ses);
++
++
++	return 0;
++}
++
+ static int
+ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 	       struct TCP_Server_Info *server, bool from_reconnect)
+@@ -164,8 +214,6 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 	struct nls_table *nls_codepage = NULL;
+ 	struct cifs_ses *ses;
+ 	int xid;
+-	struct TCP_Server_Info *pserver;
+-	unsigned int chan_index;
+ 
+ 	/*
+ 	 * SMB2s NegProt, SessSetup, Logoff do not have tcon yet so
+@@ -310,44 +358,11 @@ again:
+ 		 */
+ 		if (ses->chan_count > 1 &&
+ 		    !(server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
+-			if (SERVER_IS_CHAN(server)) {
+-				cifs_dbg(VFS, "server %s does not support " \
+-					 "multichannel anymore. skipping secondary channel\n",
+-					 ses->server->hostname);
+-
+-				spin_lock(&ses->chan_lock);
+-				chan_index = cifs_ses_get_chan_index(ses, server);
+-				if (chan_index == CIFS_INVAL_CHAN_INDEX) {
+-					spin_unlock(&ses->chan_lock);
+-					goto skip_terminate;
+-				}
+-
+-				ses->chans[chan_index].server = NULL;
+-				spin_unlock(&ses->chan_lock);
+-
+-				/*
+-				 * the above reference of server by channel
+-				 * needs to be dropped without holding chan_lock
+-				 * as cifs_put_tcp_session takes a higher lock
+-				 * i.e. cifs_tcp_ses_lock
+-				 */
+-				cifs_put_tcp_session(server, from_reconnect);
+-
+-				server->terminate = true;
+-				cifs_signal_cifsd_for_reconnect(server, false);
+-
+-				/* mark primary server as needing reconnect */
+-				pserver = server->primary_server;
+-				cifs_signal_cifsd_for_reconnect(pserver, false);
+-
+-skip_terminate:
++			rc = cifs_chan_skip_or_disable(ses, server,
++						       from_reconnect);
++			if (rc) {
+ 				mutex_unlock(&ses->session_mutex);
+-				rc = -EHOSTDOWN;
+ 				goto out;
+-			} else {
+-				cifs_server_dbg(VFS, "does not support " \
+-					 "multichannel anymore. disabling all other channels\n");
+-				cifs_disable_secondary_channels(ses);
+ 			}
+ 		}
+ 
+@@ -395,11 +410,23 @@ skip_sess_setup:
+ 		rc = SMB3_request_interfaces(xid, tcon, false);
+ 		free_xid(xid);
+ 
+-		if (rc)
++		if (rc == -EOPNOTSUPP) {
++			/*
++			 * Some servers, such as the Azure SMB server, do
++			 * not advertise disabled multichannel via server
++			 * capabilities; they return STATUS_NOT_IMPLEMENTED
++			 * instead. Treat that as no multichannel support.
++			 */
++
++			rc = cifs_chan_skip_or_disable(ses, server,
++						       from_reconnect);
++			goto skip_add_channels;
++		} else if (rc)
+ 			cifs_dbg(FYI, "%s: failed to query server interfaces: %d\n",
+ 				 __func__, rc);
+ 
+ 		if (ses->chan_max > ses->chan_count &&
++		    ses->iface_count &&
+ 		    !SERVER_IS_CHAN(server)) {
+ 			if (ses->chan_count == 1)
+ 				cifs_server_dbg(VFS, "supports multichannel now\n");
+@@ -409,6 +436,7 @@ skip_sess_setup:
+ 	} else {
+ 		mutex_unlock(&ses->session_mutex);
+ 	}
++skip_add_channels:
+ 
+ 	if (smb2_command != SMB2_INTERNAL_CMD)
+ 		mod_delayed_work(cifsiod_wq, &server->reconnect, 0);
+@@ -2279,7 +2307,7 @@ int smb2_parse_contexts(struct TCP_Server_Info *server,
+ 
+ 		noff = le16_to_cpu(cc->NameOffset);
+ 		nlen = le16_to_cpu(cc->NameLength);
+-		if (noff + nlen >= doff)
++		if (noff + nlen > doff)
+ 			return -EINVAL;
+ 
+ 		name = (char *)cc + noff;
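
The one-character change above fixes a classic boundary off-by-one: a name that ends exactly where the data section begins is valid, so only a strictly-greater offset overflows. A tiny standalone check with illustrative names:

#include <stdbool.h>
#include <stdio.h>

/*
 * A name occupying [noff, noff + nlen) must not run past the start of
 * the data section at doff.  Ending exactly at doff is fine, so the
 * '>=' variant wrongly rejected back-to-back name/data layouts.
 */
static bool name_fits(unsigned int noff, unsigned int nlen, unsigned int doff)
{
	return noff + nlen <= doff;
}

int main(void)
{
	printf("%d\n", name_fits(8, 4, 12));	/* 1: ends exactly at doff */
	printf("%d\n", name_fits(8, 5, 12));	/* 0: overlaps the data */
	return 0;
}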
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index 7977827c65410..09e1e7771592f 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -284,6 +284,7 @@ int ksmbd_conn_handler_loop(void *p)
+ 		goto out;
+ 
+ 	conn->last_active = jiffies;
++	set_freezable();
+ 	while (ksmbd_conn_alive(conn)) {
+ 		if (try_to_freeze())
+ 			continue;
+diff --git a/fs/smb/server/ksmbd_netlink.h b/fs/smb/server/ksmbd_netlink.h
+index b7521e41402e0..0ebf91ffa2361 100644
+--- a/fs/smb/server/ksmbd_netlink.h
++++ b/fs/smb/server/ksmbd_netlink.h
+@@ -304,7 +304,8 @@ enum ksmbd_event {
+ 	KSMBD_EVENT_SPNEGO_AUTHEN_REQUEST,
+ 	KSMBD_EVENT_SPNEGO_AUTHEN_RESPONSE	= 15,
+ 
+-	KSMBD_EVENT_MAX
++	__KSMBD_EVENT_MAX,
++	KSMBD_EVENT_MAX = __KSMBD_EVENT_MAX - 1
+ };
+ 
+ /*
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index e0eb7cb2a5258..53dfaac425c68 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -105,7 +105,7 @@ static int alloc_lease(struct oplock_info *opinfo, struct lease_ctx_info *lctx)
+ 	lease->is_dir = lctx->is_dir;
+ 	memcpy(lease->parent_lease_key, lctx->parent_lease_key, SMB2_LEASE_KEY_SIZE);
+ 	lease->version = lctx->version;
+-	lease->epoch = le16_to_cpu(lctx->epoch);
++	lease->epoch = le16_to_cpu(lctx->epoch) + 1;
+ 	INIT_LIST_HEAD(&opinfo->lease_entry);
+ 	opinfo->o_lease = lease;
+ 
+@@ -546,6 +546,7 @@ static struct oplock_info *same_client_has_lease(struct ksmbd_inode *ci,
+ 			     atomic_read(&ci->sop_count)) == 1) {
+ 				if (lease->state != SMB2_LEASE_NONE_LE &&
+ 				    lease->state == (lctx->req_state & lease->state)) {
++					lease->epoch++;
+ 					lease->state |= lctx->req_state;
+ 					if (lctx->req_state &
+ 						SMB2_LEASE_WRITE_CACHING_LE)
+@@ -556,13 +557,17 @@ static struct oplock_info *same_client_has_lease(struct ksmbd_inode *ci,
+ 				    atomic_read(&ci->sop_count)) > 1) {
+ 				if (lctx->req_state ==
+ 				    (SMB2_LEASE_READ_CACHING_LE |
+-				     SMB2_LEASE_HANDLE_CACHING_LE))
++				     SMB2_LEASE_HANDLE_CACHING_LE)) {
++					lease->epoch++;
+ 					lease->state = lctx->req_state;
++				}
+ 			}
+ 
+ 			if (lctx->req_state && lease->state ==
+-			    SMB2_LEASE_NONE_LE)
++			    SMB2_LEASE_NONE_LE) {
++				lease->epoch++;
+ 				lease_none_upgrade(opinfo, lctx->req_state);
++			}
+ 		}
+ 		read_lock(&ci->m_lock);
+ 	}
+@@ -1035,7 +1040,8 @@ static void copy_lease(struct oplock_info *op1, struct oplock_info *op2)
+ 	       SMB2_LEASE_KEY_SIZE);
+ 	lease2->duration = lease1->duration;
+ 	lease2->flags = lease1->flags;
+-	lease2->epoch = lease1->epoch++;
++	lease2->epoch = lease1->epoch;
++	lease2->version = lease1->version;
+ }
+ 
+ static int add_lease_global_list(struct oplock_info *opinfo)
+@@ -1453,7 +1459,7 @@ void create_lease_buf(u8 *rbuf, struct lease *lease)
+ 		memcpy(buf->lcontext.LeaseKey, lease->lease_key,
+ 		       SMB2_LEASE_KEY_SIZE);
+ 		buf->lcontext.LeaseFlags = lease->flags;
+-		buf->lcontext.Epoch = cpu_to_le16(++lease->epoch);
++		buf->lcontext.Epoch = cpu_to_le16(lease->epoch);
+ 		buf->lcontext.LeaseState = lease->state;
+ 		memcpy(buf->lcontext.ParentLeaseKey, lease->parent_lease_key,
+ 		       SMB2_LEASE_KEY_SIZE);
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 948665f8370f4..ba7a72a6a4f45 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -2323,11 +2323,12 @@ out:
+  * @eabuf:	set info command buffer
+  * @buf_len:	set info command buffer length
+  * @path:	dentry path for get ea
++ * @get_write:	get write access to a mount
+  *
+  * Return:	0 on success, otherwise error
+  */
+ static int smb2_set_ea(struct smb2_ea_info *eabuf, unsigned int buf_len,
+-		       const struct path *path)
++		       const struct path *path, bool get_write)
+ {
+ 	struct mnt_idmap *idmap = mnt_idmap(path->mnt);
+ 	char *attr_name = NULL, *value;
+@@ -3015,7 +3016,7 @@ int smb2_open(struct ksmbd_work *work)
+ 
+ 			rc = smb2_set_ea(&ea_buf->ea,
+ 					 le32_to_cpu(ea_buf->ccontext.DataLength),
+-					 &path);
++					 &path, false);
+ 			if (rc == -EOPNOTSUPP)
+ 				rc = 0;
+ 			else if (rc)
+@@ -5580,6 +5581,7 @@ static int smb2_rename(struct ksmbd_work *work,
+ 	if (!file_info->ReplaceIfExists)
+ 		flags = RENAME_NOREPLACE;
+ 
++	smb_break_all_levII_oplock(work, fp, 0);
+ 	rc = ksmbd_vfs_rename(work, &fp->filp->f_path, new_name, flags);
+ out:
+ 	kfree(new_name);
+@@ -5992,7 +5994,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			return -EINVAL;
+ 
+ 		return smb2_set_ea((struct smb2_ea_info *)req->Buffer,
+-				   buf_len, &fp->filp->f_path);
++				   buf_len, &fp->filp->f_path, true);
+ 	}
+ 	case FILE_POSITION_INFORMATION:
+ 	{
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index b49d47bdafc94..f29bb03f0dc47 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -74,7 +74,7 @@ static int handle_unsupported_event(struct sk_buff *skb, struct genl_info *info)
+ static int handle_generic_event(struct sk_buff *skb, struct genl_info *info);
+ static int ksmbd_ipc_heartbeat_request(void);
+ 
+-static const struct nla_policy ksmbd_nl_policy[KSMBD_EVENT_MAX] = {
++static const struct nla_policy ksmbd_nl_policy[KSMBD_EVENT_MAX + 1] = {
+ 	[KSMBD_EVENT_UNSPEC] = {
+ 		.len = 0,
+ 	},
+@@ -403,7 +403,7 @@ static int handle_generic_event(struct sk_buff *skb, struct genl_info *info)
+ 		return -EPERM;
+ #endif
+ 
+-	if (type >= KSMBD_EVENT_MAX) {
++	if (type > KSMBD_EVENT_MAX) {
+ 		WARN_ON(1);
+ 		return -EINVAL;
+ 	}
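
Both ksmbd changes follow from one idiom, sketched below with made-up event names: when FOO_MAX names the highest valid value rather than the count, arrays indexed by the value need FOO_MAX + 1 slots and range checks must use '> FOO_MAX'.

#include <stdio.h>

enum demo_event {
	DEMO_EVENT_UNSPEC,
	DEMO_EVENT_PING,
	DEMO_EVENT_PONG,

	__DEMO_EVENT_MAX,
	DEMO_EVENT_MAX = __DEMO_EVENT_MAX - 1
};

/* Indexed by event value, so the last valid index is DEMO_EVENT_MAX. */
static const char *const demo_names[DEMO_EVENT_MAX + 1] = {
	[DEMO_EVENT_UNSPEC] = "unspec",
	[DEMO_EVENT_PING]   = "ping",
	[DEMO_EVENT_PONG]   = "pong",
};

static const char *demo_name(unsigned int type)
{
	if (type > DEMO_EVENT_MAX)	/* '>= MAX' would reject PONG */
		return "invalid";
	return demo_names[type];
}

int main(void)
{
	printf("%s %s\n", demo_name(DEMO_EVENT_PONG), demo_name(7));
	return 0;
}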
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 3b13c648d4900..e413a9cf8ee38 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1234,6 +1234,8 @@ out_cancel:
+ 	dir_ui->ui_size = dir->i_size;
+ 	mutex_unlock(&dir_ui->ui_mutex);
+ out_inode:
++	/* Free inode->i_link before inode is marked as bad. */
++	fscrypt_free_inode(inode);
+ 	make_bad_inode(inode);
+ 	iput(inode);
+ out_fname:
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index 764304595e8b0..3f8e6233fff7c 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -1510,6 +1510,18 @@ xfs_fs_fill_super(
+ 
+ 	mp->m_super = sb;
+ 
++	/*
++	 * Copy VFS mount flags from the context now that all parameter parsing
++	 * is guaranteed to have been completed by either the old mount API or
++	 * the newer fsopen/fsconfig API.
++	 */
++	if (fc->sb_flags & SB_RDONLY)
++		set_bit(XFS_OPSTATE_READONLY, &mp->m_opstate);
++	if (fc->sb_flags & SB_DIRSYNC)
++		mp->m_features |= XFS_FEAT_DIRSYNC;
++	if (fc->sb_flags & SB_SYNCHRONOUS)
++		mp->m_features |= XFS_FEAT_WSYNC;
++
+ 	error = xfs_fs_validate_params(mp);
+ 	if (error)
+ 		return error;
+@@ -1979,6 +1991,11 @@ static const struct fs_context_operations xfs_context_ops = {
+ 	.free        = xfs_fs_free,
+ };
+ 
++/*
++ * WARNING: do not initialise any parameters in this function that depend on
++ * mount option parsing having already been performed as this can be called from
++ * fsopen() before any parameters have been set.
++ */
+ static int xfs_init_fs_context(
+ 	struct fs_context	*fc)
+ {
+@@ -2010,16 +2027,6 @@ static int xfs_init_fs_context(
+ 	mp->m_logbsize = -1;
+ 	mp->m_allocsize_log = 16; /* 64k */
+ 
+-	/*
+-	 * Copy binary VFS mount flags we are interested in.
+-	 */
+-	if (fc->sb_flags & SB_RDONLY)
+-		set_bit(XFS_OPSTATE_READONLY, &mp->m_opstate);
+-	if (fc->sb_flags & SB_DIRSYNC)
+-		mp->m_features |= XFS_FEAT_DIRSYNC;
+-	if (fc->sb_flags & SB_SYNCHRONOUS)
+-		mp->m_features |= XFS_FEAT_WSYNC;
+-
+ 	fc->s_fs_info = mp;
+ 	fc->ops = &xfs_context_ops;
+ 
+diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
+index e2640dc64e081..ea36aa79dca25 100644
+--- a/include/drm/drm_drv.h
++++ b/include/drm/drm_drv.h
+@@ -110,6 +110,15 @@ enum drm_driver_feature {
+ 	 * Driver supports user defined GPU VA bindings for GEM objects.
+ 	 */
+ 	DRIVER_GEM_GPUVA		= BIT(8),
++	/**
++	 * @DRIVER_CURSOR_HOTSPOT:
++	 *
++	 * Driver supports and requires cursor hotspot information in the
++	 * cursor plane (e.g. cursor plane has to actually track the mouse
++	 * cursor and the clients are required to set hotspot in order for
++	 * the cursor planes to work correctly).
++	 */
++	DRIVER_CURSOR_HOTSPOT           = BIT(9),
+ 
+ 	/* IMPORTANT: Below are all the legacy flags, add new ones above. */
+ 
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index e1b5b4282f75d..8f35dcea82d3b 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -226,6 +226,18 @@ struct drm_file {
+ 	 */
+ 	bool is_master;
+ 
++	/**
++	 * @supports_virtualized_cursor_plane:
++	 *
++	 * This client is capable of handling the cursor plane with the
++	 * restrictions imposed on it by the virtualized drivers.
++	 *
++	 * This implies that the cursor plane has to behave like a cursor
++	 * i.e. track cursor movement. It also requires setting of the
++	 * hotspot properties by the client on the cursor plane.
++	 */
++	bool supports_virtualized_cursor_plane;
++
+ 	/**
+ 	 * @master:
+ 	 *
+diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h
+index 79d62856defbf..fef775200a81f 100644
+--- a/include/drm/drm_plane.h
++++ b/include/drm/drm_plane.h
+@@ -190,6 +190,16 @@ struct drm_plane_state {
+ 	 */
+ 	struct drm_property_blob *fb_damage_clips;
+ 
++	/**
++	 * @ignore_damage_clips:
++	 *
++	 * Set by drivers to indicate the drm_atomic_helper_damage_iter_init()
++	 * helper that the @fb_damage_clips blob property should be ignored.
++	 *
++	 * See :ref:`damage_tracking_properties` for more information.
++	 */
++	bool ignore_damage_clips;
++
+ 	/**
+ 	 * @src:
+ 	 *
+diff --git a/include/linux/async.h b/include/linux/async.h
+index cce4ad31e8fcf..33c9ff4afb492 100644
+--- a/include/linux/async.h
++++ b/include/linux/async.h
+@@ -90,6 +90,8 @@ async_schedule_dev(async_func_t func, struct device *dev)
+ 	return async_schedule_node(func, dev, dev_to_node(dev));
+ }
+ 
++bool async_schedule_dev_nocall(async_func_t func, struct device *dev);
++
+ /**
+  * async_schedule_dev_domain - A device specific version of async_schedule_domain
+  * @func: function to execute asynchronously
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index ff217a5ce5521..472cb16458b0b 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -171,6 +171,8 @@ LSM_HOOK(int, 0, file_alloc_security, struct file *file)
+ LSM_HOOK(void, LSM_RET_VOID, file_free_security, struct file *file)
+ LSM_HOOK(int, 0, file_ioctl, struct file *file, unsigned int cmd,
+ 	 unsigned long arg)
++LSM_HOOK(int, 0, file_ioctl_compat, struct file *file, unsigned int cmd,
++	 unsigned long arg)
+ LSM_HOOK(int, 0, mmap_addr, unsigned long addr)
+ LSM_HOOK(int, 0, mmap_file, struct file *file, unsigned long reqprot,
+ 	 unsigned long prot, unsigned long flags)
+diff --git a/include/linux/mc146818rtc.h b/include/linux/mc146818rtc.h
+index b0da04fe087bb..34dfcc77f505a 100644
+--- a/include/linux/mc146818rtc.h
++++ b/include/linux/mc146818rtc.h
+@@ -126,10 +126,11 @@ struct cmos_rtc_board_info {
+ #endif /* ARCH_RTC_LOCATION */
+ 
+ bool mc146818_does_rtc_work(void);
+-int mc146818_get_time(struct rtc_time *time);
++int mc146818_get_time(struct rtc_time *time, int timeout);
+ int mc146818_set_time(struct rtc_time *time);
+ 
+ bool mc146818_avoid_UIP(void (*callback)(unsigned char seconds, void *param),
++			int timeout,
+ 			void *param);
+ 
+ #endif /* _MC146818RTC_H */
+diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
+index 6f7725238abc2..3fb428ce7d1c7 100644
+--- a/include/linux/mlx5/fs.h
++++ b/include/linux/mlx5/fs.h
+@@ -132,6 +132,7 @@ struct mlx5_flow_handle;
+ 
+ enum {
+ 	FLOW_CONTEXT_HAS_TAG = BIT(0),
++	FLOW_CONTEXT_UPLINK_HAIRPIN_EN = BIT(1),
+ };
+ 
+ struct mlx5_flow_context {
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 3f7b664d625b9..fb8d26a15df47 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -3557,7 +3557,7 @@ struct mlx5_ifc_flow_context_bits {
+ 	u8         action[0x10];
+ 
+ 	u8         extended_destination[0x1];
+-	u8         reserved_at_81[0x1];
++	u8         uplink_hairpin_en[0x1];
+ 	u8         flow_source[0x2];
+ 	u8         encrypt_decrypt_type[0x4];
+ 	u8         destination_list_size[0x18];
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 9db36e1977125..a7c32324c2639 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1793,6 +1793,7 @@ static inline unsigned long section_nr_to_pfn(unsigned long sec)
+ #define SUBSECTION_ALIGN_DOWN(pfn) ((pfn) & PAGE_SUBSECTION_MASK)
+ 
+ struct mem_section_usage {
++	struct rcu_head rcu;
+ #ifdef CONFIG_SPARSEMEM_VMEMMAP
+ 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
+ #endif
+@@ -1986,7 +1987,7 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+ {
+ 	int idx = subsection_map_index(pfn);
+ 
+-	return test_bit(idx, ms->usage->subsection_map);
++	return test_bit(idx, READ_ONCE(ms->usage)->subsection_map);
+ }
+ #else
+ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+@@ -2010,6 +2011,7 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+ static inline int pfn_valid(unsigned long pfn)
+ {
+ 	struct mem_section *ms;
++	int ret;
+ 
+ 	/*
+ 	 * Ensure the upper PAGE_SHIFT bits are clear in the
+@@ -2023,13 +2025,19 @@ static inline int pfn_valid(unsigned long pfn)
+ 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
+ 		return 0;
+ 	ms = __pfn_to_section(pfn);
+-	if (!valid_section(ms))
++	rcu_read_lock();
++	if (!valid_section(ms)) {
++		rcu_read_unlock();
+ 		return 0;
++	}
+ 	/*
+ 	 * Traditionally early sections always returned pfn_valid() for
+ 	 * the entire section-sized span.
+ 	 */
+-	return early_section(ms) || pfn_section_valid(ms, pfn);
++	ret = early_section(ms) || pfn_section_valid(ms, pfn);
++	rcu_read_unlock();
++
++	return ret;
+ }
+ #endif
+ 
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index c29ace15a053a..9d0fc5109af66 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -1265,6 +1265,7 @@ struct nand_secure_region {
+  * @cont_read: Sequential page read internals
+  * @cont_read.ongoing: Whether a continuous read is ongoing or not
+  * @cont_read.first_page: Start of the continuous read operation
++ * @cont_read.pause_page: End of the current sequential cache read operation
+  * @cont_read.last_page: End of the continuous read operation
+  * @controller: The hardware controller	structure which is shared among multiple
+  *              independent devices
+@@ -1321,6 +1322,7 @@ struct nand_chip {
+ 	struct {
+ 		bool ongoing;
+ 		unsigned int first_page;
++		unsigned int pause_page;
+ 		unsigned int last_page;
+ 	} cont_read;
+ 
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index b26fe858fd444..3c2fc291b071d 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -261,8 +261,8 @@ static inline int page_try_dup_anon_rmap(struct page *page, bool compound,
+ 	 * guarantee the pinned page won't be randomly replaced in the
+ 	 * future on write faults.
+ 	 */
+-	if (likely(!is_device_private_page(page) &&
+-	    unlikely(page_needs_cow_for_dma(vma, page))))
++	if (likely(!is_device_private_page(page)) &&
++	    unlikely(page_needs_cow_for_dma(vma, page)))
+ 		return -EBUSY;
+ 
+ 	ClearPageAnonExclusive(page);
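
Note the subtlety in the hunk above: both forms of the condition compute the same boolean, and only the placement of the branch-prediction hint changes. The old form told the compiler the combined, almost-always-false condition was likely. A compilable illustration:

#include <stdbool.h>
#include <stdio.h>

#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

static bool device_private;	/* almost always false */
static bool needs_cow;		/* almost always false */

int main(void)
{
	/* Misplaced parenthesis: hints that the whole rarely-true
	 * compound condition is likely. */
	if (likely(!device_private && unlikely(needs_cow)))
		puts("busy");

	/* Fixed: each hint is scoped to the sub-expression it is
	 * actually true for. */
	if (likely(!device_private) && unlikely(needs_cow))
		puts("busy");
	return 0;
}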
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 1d1df326c881c..9d3138c6364c3 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -389,6 +389,8 @@ int security_file_permission(struct file *file, int mask);
+ int security_file_alloc(struct file *file);
+ void security_file_free(struct file *file);
+ int security_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
++int security_file_ioctl_compat(struct file *file, unsigned int cmd,
++			       unsigned long arg);
+ int security_mmap_file(struct file *file, unsigned long prot,
+ 			unsigned long flags);
+ int security_mmap_addr(unsigned long addr);
+@@ -987,6 +989,13 @@ static inline int security_file_ioctl(struct file *file, unsigned int cmd,
+ 	return 0;
+ }
+ 
++static inline int security_file_ioctl_compat(struct file *file,
++					     unsigned int cmd,
++					     unsigned long arg)
++{
++	return 0;
++}
++
+ static inline int security_mmap_file(struct file *file, unsigned long prot,
+ 				     unsigned long flags)
+ {
+diff --git a/include/linux/seq_buf.h b/include/linux/seq_buf.h
+index 5fb1f12c33f90..c44f4b47b9453 100644
+--- a/include/linux/seq_buf.h
++++ b/include/linux/seq_buf.h
+@@ -22,9 +22,8 @@ struct seq_buf {
+ };
+ 
+ #define DECLARE_SEQ_BUF(NAME, SIZE)			\
+-	char __ ## NAME ## _buffer[SIZE] = "";		\
+ 	struct seq_buf NAME = {				\
+-		.buffer = &__ ## NAME ## _buffer,	\
++		.buffer = (char[SIZE]) { 0 },		\
+ 		.size = SIZE,				\
+ 	}
+ 
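
The compound literal keeps the macro to a single declaration, instead of a second named array whose address ('&__NAME_buffer') also had type char (*)[SIZE] rather than char *. A standalone sketch of the same macro outside the kernel, with the struct trimmed to a few fields:

#include <stddef.h>
#include <stdio.h>

struct seq_buf {
	char *buffer;
	size_t size;
	size_t len;
};

/* One declaration, one object: the buffer is an anonymous compound
 * literal with the same storage duration as the struct. */
#define DECLARE_SEQ_BUF(NAME, SIZE)		\
	struct seq_buf NAME = {			\
		.buffer = (char[SIZE]) { 0 },	\
		.size = SIZE,			\
	}

int main(void)
{
	DECLARE_SEQ_BUF(sb, 32);

	sb.len = snprintf(sb.buffer, sb.size, "hello");
	printf("%s (%zu/%zu)\n", sb.buffer, sb.len, sb.size);
	return 0;
}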
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index c953b8c0d2f43..bd4418377bacf 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -500,12 +500,6 @@ static inline bool sk_psock_strp_enabled(struct sk_psock *psock)
+ 	return !!psock->saved_data_ready;
+ }
+ 
+-static inline bool sk_is_udp(const struct sock *sk)
+-{
+-	return sk->sk_type == SOCK_DGRAM &&
+-	       sk->sk_protocol == IPPROTO_UDP;
+-}
+-
+ #if IS_ENABLED(CONFIG_NET_SOCK_MSG)
+ 
+ #define BPF_F_STRPARSER	(1UL << 1)
+diff --git a/include/linux/soundwire/sdw.h b/include/linux/soundwire/sdw.h
+index 4f3d14bb15385..c383579a008ba 100644
+--- a/include/linux/soundwire/sdw.h
++++ b/include/linux/soundwire/sdw.h
+@@ -886,7 +886,8 @@ struct sdw_master_ops {
+  * struct sdw_bus - SoundWire bus
+  * @dev: Shortcut to &bus->md->dev to avoid changing the entire code.
+  * @md: Master device
+- * @link_id: Link id number, can be 0 to N, unique for each Master
++ * @controller_id: system-unique controller ID. If set to -1, the bus @id will be used.
++ * @link_id: Link id number, can be 0 to N, unique for each Controller
+  * @id: bus system-wide unique id
+  * @slaves: list of Slaves on this bus
+  * @assigned: Bitmap for Slave device numbers.
+@@ -918,6 +919,7 @@ struct sdw_master_ops {
+ struct sdw_bus {
+ 	struct device *dev;
+ 	struct sdw_master_device *md;
++	int controller_id;
+ 	unsigned int link_id;
+ 	int id;
+ 	struct list_head slaves;
+diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
+index fd9d12de7e929..59fdd40740558 100644
+--- a/include/linux/syscalls.h
++++ b/include/linux/syscalls.h
+@@ -125,6 +125,7 @@ struct cachestat;
+ #define __TYPE_IS_LL(t) (__TYPE_AS(t, 0LL) || __TYPE_AS(t, 0ULL))
+ #define __SC_LONG(t, a) __typeof(__builtin_choose_expr(__TYPE_IS_LL(t), 0LL, 0L)) a
+ #define __SC_CAST(t, a)	(__force t) a
++#define __SC_TYPE(t, a)	t
+ #define __SC_ARGS(t, a)	a
+ #define __SC_TEST(t, a) (void)BUILD_BUG_ON_ZERO(!__TYPE_IS_LL(t) && sizeof(t) > sizeof(long))
+ 
+diff --git a/include/media/v4l2-cci.h b/include/media/v4l2-cci.h
+index 0f6803e4b17e9..8b0b361b464cc 100644
+--- a/include/media/v4l2-cci.h
++++ b/include/media/v4l2-cci.h
+@@ -7,6 +7,8 @@
+ #ifndef _V4L2_CCI_H
+ #define _V4L2_CCI_H
+ 
++#include <linux/bitfield.h>
++#include <linux/bits.h>
+ #include <linux/types.h>
+ 
+ struct i2c_client;
+@@ -33,11 +35,20 @@ struct cci_reg_sequence {
+ #define CCI_REG_WIDTH_SHIFT		16
+ #define CCI_REG_WIDTH_MASK		GENMASK(19, 16)
+ 
++#define CCI_REG_WIDTH_BYTES(x)		FIELD_GET(CCI_REG_WIDTH_MASK, x)
++#define CCI_REG_WIDTH(x)		(CCI_REG_WIDTH_BYTES(x) << 3)
++#define CCI_REG_ADDR(x)			FIELD_GET(CCI_REG_ADDR_MASK, x)
++#define CCI_REG_LE			BIT(20)
++
+ #define CCI_REG8(x)			((1 << CCI_REG_WIDTH_SHIFT) | (x))
+ #define CCI_REG16(x)			((2 << CCI_REG_WIDTH_SHIFT) | (x))
+ #define CCI_REG24(x)			((3 << CCI_REG_WIDTH_SHIFT) | (x))
+ #define CCI_REG32(x)			((4 << CCI_REG_WIDTH_SHIFT) | (x))
+ #define CCI_REG64(x)			((8 << CCI_REG_WIDTH_SHIFT) | (x))
++#define CCI_REG16_LE(x)			(CCI_REG_LE | (2U << CCI_REG_WIDTH_SHIFT) | (x))
++#define CCI_REG24_LE(x)			(CCI_REG_LE | (3U << CCI_REG_WIDTH_SHIFT) | (x))
++#define CCI_REG32_LE(x)			(CCI_REG_LE | (4U << CCI_REG_WIDTH_SHIFT) | (x))
++#define CCI_REG64_LE(x)			(CCI_REG_LE | (8U << CCI_REG_WIDTH_SHIFT) | (x))
+ 
+ /**
+  * cci_read() - Read a value from a single CCI register
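
The new macros pack a register address, a byte width, and now an endianness flag into one 32-bit token and pull the fields back out with FIELD_GET(). A userspace sketch with minimal GENMASK()/FIELD_GET() stand-ins (simplified to 32-bit masks):

#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)		(((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define FIELD_GET(mask, val)	(((val) & (mask)) / ((mask) & -(mask)))

#define REG_ADDR_MASK	GENMASK(15, 0)
#define REG_WIDTH_MASK	GENMASK(19, 16)	/* width in bytes */
#define REG_LE		(1u << 20)	/* little-endian flag */

#define REG16_LE(x)	(REG_LE | (2u << 16) | (x))

int main(void)
{
	uint32_t reg = REG16_LE(0x3062);

	printf("addr=0x%x width=%u bytes le=%u\n",
	       (unsigned)FIELD_GET(REG_ADDR_MASK, reg),
	       (unsigned)FIELD_GET(REG_WIDTH_MASK, reg),
	       !!(reg & REG_LE));
	return 0;
}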
+diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
+index 5531dd08061e5..0754c463224a4 100644
+--- a/include/net/af_rxrpc.h
++++ b/include/net/af_rxrpc.h
+@@ -15,6 +15,7 @@ struct key;
+ struct sock;
+ struct socket;
+ struct rxrpc_call;
++struct rxrpc_peer;
+ enum rxrpc_abort_reason;
+ 
+ enum rxrpc_interruptibility {
+@@ -41,13 +42,14 @@ void rxrpc_kernel_new_call_notification(struct socket *,
+ 					rxrpc_notify_new_call_t,
+ 					rxrpc_discard_new_call_t);
+ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+-					   struct sockaddr_rxrpc *srx,
++					   struct rxrpc_peer *peer,
+ 					   struct key *key,
+ 					   unsigned long user_call_ID,
+ 					   s64 tx_total_len,
+ 					   u32 hard_timeout,
+ 					   gfp_t gfp,
+ 					   rxrpc_notify_rx_t notify_rx,
++					   u16 service_id,
+ 					   bool upgrade,
+ 					   enum rxrpc_interruptibility interruptibility,
+ 					   unsigned int debug_id);
+@@ -60,9 +62,14 @@ bool rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *,
+ 			     u32, int, enum rxrpc_abort_reason);
+ void rxrpc_kernel_shutdown_call(struct socket *sock, struct rxrpc_call *call);
+ void rxrpc_kernel_put_call(struct socket *sock, struct rxrpc_call *call);
+-void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
+-			   struct sockaddr_rxrpc *);
+-bool rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *, u32 *);
++struct rxrpc_peer *rxrpc_kernel_lookup_peer(struct socket *sock,
++					    struct sockaddr_rxrpc *srx, gfp_t gfp);
++void rxrpc_kernel_put_peer(struct rxrpc_peer *peer);
++struct rxrpc_peer *rxrpc_kernel_get_peer(struct rxrpc_peer *peer);
++struct rxrpc_peer *rxrpc_kernel_get_call_peer(struct socket *sock, struct rxrpc_call *call);
++const struct sockaddr_rxrpc *rxrpc_kernel_remote_srx(const struct rxrpc_peer *peer);
++const struct sockaddr *rxrpc_kernel_remote_addr(const struct rxrpc_peer *peer);
++unsigned int rxrpc_kernel_get_srtt(const struct rxrpc_peer *);
+ int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
+ 			       rxrpc_user_attach_call_t, unsigned long, gfp_t,
+ 			       unsigned int);
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index d0a2f827d5f20..9ab4bf704e864 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -357,4 +357,12 @@ static inline bool inet_csk_has_ulp(const struct sock *sk)
+ 	return inet_test_bit(IS_ICSK, sk) && !!inet_csk(sk)->icsk_ulp_ops;
+ }
+ 
++static inline void inet_init_csk_locks(struct sock *sk)
++{
++	struct inet_connection_sock *icsk = inet_csk(sk);
++
++	spin_lock_init(&icsk->icsk_accept_queue.rskq_lock);
++	spin_lock_init(&icsk->icsk_accept_queue.fastopenq.lock);
++}
++
+ #endif /* _INET_CONNECTION_SOCK_H */
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 74db6d97cae10..8d5fe15b0f6f2 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -310,11 +310,6 @@ static inline unsigned long inet_cmsg_flags(const struct inet_sock *inet)
+ #define inet_assign_bit(nr, sk, val)		\
+ 	assign_bit(INET_FLAGS_##nr, &inet_sk(sk)->inet_flags, val)
+ 
+-static inline bool sk_is_inet(struct sock *sk)
+-{
+-	return sk->sk_family == AF_INET || sk->sk_family == AF_INET6;
+-}
+-
+ /**
+  * sk_to_full_sk - Access to a full socket
+  * @sk: pointer to a socket
+diff --git a/include/net/llc_pdu.h b/include/net/llc_pdu.h
+index 7e73f8e5e4970..1d55ba7c45be1 100644
+--- a/include/net/llc_pdu.h
++++ b/include/net/llc_pdu.h
+@@ -262,8 +262,7 @@ static inline void llc_pdu_header_init(struct sk_buff *skb, u8 type,
+  */
+ static inline void llc_pdu_decode_sa(struct sk_buff *skb, u8 *sa)
+ {
+-	if (skb->protocol == htons(ETH_P_802_2))
+-		memcpy(sa, eth_hdr(skb)->h_source, ETH_ALEN);
++	memcpy(sa, eth_hdr(skb)->h_source, ETH_ALEN);
+ }
+ 
+ /**
+@@ -275,8 +274,7 @@ static inline void llc_pdu_decode_sa(struct sk_buff *skb, u8 *sa)
+  */
+ static inline void llc_pdu_decode_da(struct sk_buff *skb, u8 *da)
+ {
+-	if (skb->protocol == htons(ETH_P_802_2))
+-		memcpy(da, eth_hdr(skb)->h_dest, ETH_ALEN);
++	memcpy(da, eth_hdr(skb)->h_dest, ETH_ALEN);
+ }
+ 
+ /**
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index dcb9160e64671..959a7725c27bb 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -375,6 +375,10 @@ struct tcf_proto_ops {
+ 						struct nlattr **tca,
+ 						struct netlink_ext_ack *extack);
+ 	void			(*tmplt_destroy)(void *tmplt_priv);
++	void			(*tmplt_reoffload)(struct tcf_chain *chain,
++						   bool add,
++						   flow_setup_cb_t *cb,
++						   void *cb_priv);
+ 	struct tcf_exts *	(*get_exts)(const struct tcf_proto *tp,
+ 					    u32 handle);
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 0201136b0b9ca..f9a9f61fa1222 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2794,9 +2794,25 @@ static inline void skb_setup_tx_timestamp(struct sk_buff *skb, __u16 tsflags)
+ 			   &skb_shinfo(skb)->tskey);
+ }
+ 
++static inline bool sk_is_inet(const struct sock *sk)
++{
++	int family = READ_ONCE(sk->sk_family);
++
++	return family == AF_INET || family == AF_INET6;
++}
++
+ static inline bool sk_is_tcp(const struct sock *sk)
+ {
+-	return sk->sk_type == SOCK_STREAM && sk->sk_protocol == IPPROTO_TCP;
++	return sk_is_inet(sk) &&
++	       sk->sk_type == SOCK_STREAM &&
++	       sk->sk_protocol == IPPROTO_TCP;
++}
++
++static inline bool sk_is_udp(const struct sock *sk)
++{
++	return sk_is_inet(sk) &&
++	       sk->sk_type == SOCK_DGRAM &&
++	       sk->sk_protocol == IPPROTO_UDP;
+ }
+ 
+ static inline bool sk_is_stream_unix(const struct sock *sk)
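
The reason for the extra family check, sketched standalone: the protocol field is only an IPPROTO_* number for inet sockets, so type/protocol alone can match unrelated families (toy field names below, not the real struct sock layout).

#include <stdbool.h>
#include <stdio.h>

enum family { FAM_UNIX, FAM_INET, FAM_INET6, FAM_PACKET };

struct sock {
	enum family family;
	int type;	/* 1 = stream, 2 = dgram */
	int protocol;	/* meaning depends on the family */
};

static bool sk_is_inet(const struct sock *sk)
{
	return sk->family == FAM_INET || sk->family == FAM_INET6;
}

/* For an AF_PACKET socket the protocol field holds an ethertype, which
 * can numerically collide with IPPROTO_UDP (17); guarding on the
 * family first keeps the predicate from matching such sockets. */
static bool sk_is_udp(const struct sock *sk)
{
	return sk_is_inet(sk) && sk->type == 2 && sk->protocol == 17;
}

int main(void)
{
	struct sock pkt = { FAM_PACKET, 2, 17 };	/* ethertype 0x0011 */
	struct sock udp4 = { FAM_INET, 2, 17 };

	printf("pkt=%d udp4=%d\n", sk_is_udp(&pkt), sk_is_udp(&udp4));
	return 0;
}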
+diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
+index 1f6fc8c7a84c6..5425f7ad5ebde 100644
+--- a/include/net/xdp_sock_drv.h
++++ b/include/net/xdp_sock_drv.h
+@@ -147,11 +147,29 @@ static inline struct xdp_buff *xsk_buff_get_frag(struct xdp_buff *first)
+ 	return ret;
+ }
+ 
++static inline void xsk_buff_del_tail(struct xdp_buff *tail)
++{
++	struct xdp_buff_xsk *xskb = container_of(tail, struct xdp_buff_xsk, xdp);
++
++	list_del(&xskb->xskb_list_node);
++}
++
++static inline struct xdp_buff *xsk_buff_get_tail(struct xdp_buff *first)
++{
++	struct xdp_buff_xsk *xskb = container_of(first, struct xdp_buff_xsk, xdp);
++	struct xdp_buff_xsk *frag;
++
++	frag = list_last_entry(&xskb->pool->xskb_list, struct xdp_buff_xsk,
++			       xskb_list_node);
++	return &frag->xdp;
++}
++
+ static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
+ {
+ 	xdp->data = xdp->data_hard_start + XDP_PACKET_HEADROOM;
+ 	xdp->data_meta = xdp->data;
+ 	xdp->data_end = xdp->data + size;
++	xdp->flags = 0;
+ }
+ 
+ static inline dma_addr_t xsk_buff_raw_get_dma(struct xsk_buff_pool *pool,
+@@ -309,6 +327,15 @@ static inline struct xdp_buff *xsk_buff_get_frag(struct xdp_buff *first)
+ 	return NULL;
+ }
+ 
++static inline void xsk_buff_del_tail(struct xdp_buff *tail)
++{
++}
++
++static inline struct xdp_buff *xsk_buff_get_tail(struct xdp_buff *first)
++{
++	return NULL;
++}
++
+ static inline void xsk_buff_set_size(struct xdp_buff *xdp, u32 size)
+ {
+ }
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index e9d412d19dbbb..caec276515dcc 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -1216,6 +1216,31 @@ TRACE_EVENT(afs_file_error,
+ 		      __print_symbolic(__entry->where, afs_file_errors))
+ 	    );
+ 
++TRACE_EVENT(afs_bulkstat_error,
++	    TP_PROTO(struct afs_operation *op, struct afs_fid *fid, unsigned int index, s32 abort),
++
++	    TP_ARGS(op, fid, index, abort),
++
++	    TP_STRUCT__entry(
++		    __field_struct(struct afs_fid,	fid)
++		    __field(unsigned int,		op)
++		    __field(unsigned int,		index)
++		    __field(s32,			abort)
++			     ),
++
++	    TP_fast_assign(
++		    __entry->op = op->debug_id;
++		    __entry->fid = *fid;
++		    __entry->index = index;
++		    __entry->abort = abort;
++			   ),
++
++	    TP_printk("OP=%08x[%02x] %llx:%llx:%x a=%d",
++		      __entry->op, __entry->index,
++		      __entry->fid.vid, __entry->fid.vnode, __entry->fid.unique,
++		      __entry->abort)
++	    );
++
+ TRACE_EVENT(afs_cm_no_server,
+ 	    TP_PROTO(struct afs_call *call, struct sockaddr_rxrpc *srx),
+ 
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index f7e537f64db45..4c1ef7b3705c2 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -178,7 +178,9 @@
+ #define rxrpc_peer_traces \
+ 	EM(rxrpc_peer_free,			"FREE        ") \
+ 	EM(rxrpc_peer_get_accept,		"GET accept  ") \
++	EM(rxrpc_peer_get_application,		"GET app     ") \
+ 	EM(rxrpc_peer_get_bundle,		"GET bundle  ") \
++	EM(rxrpc_peer_get_call,			"GET call    ") \
+ 	EM(rxrpc_peer_get_client_conn,		"GET cln-conn") \
+ 	EM(rxrpc_peer_get_input,		"GET input   ") \
+ 	EM(rxrpc_peer_get_input_error,		"GET inpt-err") \
+@@ -187,6 +189,7 @@
+ 	EM(rxrpc_peer_get_service_conn,		"GET srv-conn") \
+ 	EM(rxrpc_peer_new_client,		"NEW client  ") \
+ 	EM(rxrpc_peer_new_prealloc,		"NEW prealloc") \
++	EM(rxrpc_peer_put_application,		"PUT app     ") \
+ 	EM(rxrpc_peer_put_bundle,		"PUT bundle  ") \
+ 	EM(rxrpc_peer_put_call,			"PUT call    ") \
+ 	EM(rxrpc_peer_put_conn,			"PUT conn    ") \
+diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
+index 7c29d82db9ee0..f8bc34a6bcfa2 100644
+--- a/include/uapi/linux/btrfs.h
++++ b/include/uapi/linux/btrfs.h
+@@ -614,6 +614,9 @@ struct btrfs_ioctl_clone_range_args {
+  */
+ #define BTRFS_DEFRAG_RANGE_COMPRESS 1
+ #define BTRFS_DEFRAG_RANGE_START_IO 2
++#define BTRFS_DEFRAG_RANGE_FLAGS_SUPP	(BTRFS_DEFRAG_RANGE_COMPRESS |		\
++					 BTRFS_DEFRAG_RANGE_START_IO)
++
+ struct btrfs_ioctl_defrag_range_args {
+ 	/* start of the defrag operation */
+ 	__u64 start;
+diff --git a/kernel/async.c b/kernel/async.c
+index b2c4ba5686ee4..673bba6bdf3a0 100644
+--- a/kernel/async.c
++++ b/kernel/async.c
+@@ -145,6 +145,39 @@ static void async_run_entry_fn(struct work_struct *work)
+ 	wake_up(&async_done);
+ }
+ 
++static async_cookie_t __async_schedule_node_domain(async_func_t func,
++						   void *data, int node,
++						   struct async_domain *domain,
++						   struct async_entry *entry)
++{
++	async_cookie_t newcookie;
++	unsigned long flags;
++
++	INIT_LIST_HEAD(&entry->domain_list);
++	INIT_LIST_HEAD(&entry->global_list);
++	INIT_WORK(&entry->work, async_run_entry_fn);
++	entry->func = func;
++	entry->data = data;
++	entry->domain = domain;
++
++	spin_lock_irqsave(&async_lock, flags);
++
++	/* allocate cookie and queue */
++	newcookie = entry->cookie = next_cookie++;
++
++	list_add_tail(&entry->domain_list, &domain->pending);
++	if (domain->registered)
++		list_add_tail(&entry->global_list, &async_global_pending);
++
++	atomic_inc(&entry_count);
++	spin_unlock_irqrestore(&async_lock, flags);
++
++	/* schedule for execution */
++	queue_work_node(node, system_unbound_wq, &entry->work);
++
++	return newcookie;
++}
++
+ /**
+  * async_schedule_node_domain - NUMA specific version of async_schedule_domain
+  * @func: function to execute asynchronously
+@@ -186,29 +219,8 @@ async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+ 		func(data, newcookie);
+ 		return newcookie;
+ 	}
+-	INIT_LIST_HEAD(&entry->domain_list);
+-	INIT_LIST_HEAD(&entry->global_list);
+-	INIT_WORK(&entry->work, async_run_entry_fn);
+-	entry->func = func;
+-	entry->data = data;
+-	entry->domain = domain;
+-
+-	spin_lock_irqsave(&async_lock, flags);
+-
+-	/* allocate cookie and queue */
+-	newcookie = entry->cookie = next_cookie++;
+-
+-	list_add_tail(&entry->domain_list, &domain->pending);
+-	if (domain->registered)
+-		list_add_tail(&entry->global_list, &async_global_pending);
+-
+-	atomic_inc(&entry_count);
+-	spin_unlock_irqrestore(&async_lock, flags);
+-
+-	/* schedule for execution */
+-	queue_work_node(node, system_unbound_wq, &entry->work);
+ 
+-	return newcookie;
++	return __async_schedule_node_domain(func, data, node, domain, entry);
+ }
+ EXPORT_SYMBOL_GPL(async_schedule_node_domain);
+ 
+@@ -231,6 +243,35 @@ async_cookie_t async_schedule_node(async_func_t func, void *data, int node)
+ }
+ EXPORT_SYMBOL_GPL(async_schedule_node);
+ 
++/**
++ * async_schedule_dev_nocall - A simplified variant of async_schedule_dev()
++ * @func: function to execute asynchronously
++ * @dev: device argument to be passed to function
++ *
++ * @dev is used as both the argument for the function and to provide NUMA
++ * context for where to run the function.
++ *
++ * If the asynchronous execution of @func is scheduled successfully, return
++ * true. Otherwise, do nothing and return false, unlike async_schedule_dev(),
++ * which would run the function synchronously in that case.
++ */
++bool async_schedule_dev_nocall(async_func_t func, struct device *dev)
++{
++	struct async_entry *entry;
++
++	entry = kzalloc(sizeof(struct async_entry), GFP_KERNEL);
++
++	/* Give up if there is no memory or too much work. */
++	if (!entry || atomic_read(&entry_count) > MAX_WORK) {
++		kfree(entry);
++		return false;
++	}
++
++	__async_schedule_node_domain(func, dev, dev_to_node(dev),
++				     &async_dfl_domain, entry);
++	return true;
++}
++
+ /**
+  * async_synchronize_full - synchronize all asynchronous function calls
+  *
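
The refactoring above splits allocation from queueing so the new entry point can fail fast instead of silently executing in the caller's context. A userspace sketch of the two calling conventions (names and the trivial queue are illustrative):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*async_func_t)(void *data);

/* Stand-in for handing work to a worker thread. */
static void enqueue(async_func_t func, void *data)
{
	func(data);	/* a real queue would defer this */
}

/* async_schedule()-style behaviour: on allocation failure, fall back
 * to running the function synchronously -- the caller cannot opt out. */
static void schedule_or_run(async_func_t func, void *data)
{
	void *entry = malloc(sizeof(int));

	if (!entry) {
		func(data);
		return;
	}
	free(entry);
	enqueue(func, data);
}

/* The _nocall variant reports failure instead, so a caller that must
 * not run the function in the current context stays in control. */
static bool schedule_nocall(async_func_t func, void *data)
{
	void *entry = malloc(sizeof(int));

	if (!entry)
		return false;
	free(entry);
	enqueue(func, data);
	return true;
}

static void work(void *data)
{
	printf("ran: %s\n", (const char *)data);
}

int main(void)
{
	schedule_or_run(work, "auto-fallback");
	if (!schedule_nocall(work, "nocall"))
		work("explicit fallback");
	return 0;
}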
+diff --git a/kernel/crash_core.c b/kernel/crash_core.c
+index d4313b53837e3..755d8d4ef5b08 100644
+--- a/kernel/crash_core.c
++++ b/kernel/crash_core.c
+@@ -377,7 +377,6 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
+ 
+ 	crashk_low_res.start = low_base;
+ 	crashk_low_res.end   = low_base + low_size - 1;
+-	insert_resource(&iomem_resource, &crashk_low_res);
+ #endif
+ 	return 0;
+ }
+@@ -459,8 +458,19 @@ retry:
+ 
+ 	crashk_res.start = crash_base;
+ 	crashk_res.end = crash_base + crash_size - 1;
+-	insert_resource(&iomem_resource, &crashk_res);
+ }
++
++static __init int insert_crashkernel_resources(void)
++{
++	if (crashk_res.start < crashk_res.end)
++		insert_resource(&iomem_resource, &crashk_res);
++
++	if (crashk_low_res.start < crashk_low_res.end)
++		insert_resource(&iomem_resource, &crashk_low_res);
++
++	return 0;
++}
++early_initcall(insert_crashkernel_resources);
+ #endif
+ 
+ int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
+diff --git a/kernel/futex/core.c b/kernel/futex/core.c
+index dad981a865b84..52d0bf67e715f 100644
+--- a/kernel/futex/core.c
++++ b/kernel/futex/core.c
+@@ -626,12 +626,21 @@ retry:
+ }
+ 
+ /*
+- * PI futexes can not be requeued and must remove themselves from the
+- * hash bucket. The hash bucket lock (i.e. lock_ptr) is held.
++ * PI futexes can not be requeued and must remove themselves from the hash
++ * bucket. The hash bucket lock (i.e. lock_ptr) is held.
+  */
+ void futex_unqueue_pi(struct futex_q *q)
+ {
+-	__futex_unqueue(q);
++	/*
++	 * If the lock was not acquired (due to timeout or signal) then the
++	 * rt_waiter is removed before futex_q is. If this is observed by
++	 * an unlocker after dropping the rtmutex wait lock and before
++	 * acquiring the hash bucket lock, then the unlocker dequeues the
++	 * futex_q from the hash bucket list to guarantee consistent state
++	 * vs. userspace. Therefore the dequeue here must be conditional.
++	 */
++	if (!plist_node_empty(&q->list))
++		__futex_unqueue(q);
+ 
+ 	BUG_ON(!q->pi_state);
+ 	put_pi_state(q->pi_state);
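
The guarded dequeue above is an instance of a general pattern: when another path may have already unlinked the node, removal must be conditional on it still being queued. A small self-contained sketch, where a self-linked node means "not queued":

#include <stdbool.h>
#include <stdio.h>

struct node {
	struct node *prev, *next;
};

static void list_del_init(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->next = n->prev = n;	/* self-linked == not queued */
}

static bool node_empty(const struct node *n)
{
	return n->next == n;
}

int main(void)
{
	struct node head = { &head, &head };
	struct node waiter = { &head, &head };

	head.next = head.prev = &waiter;

	/* Another path (the unlocker, in the futex case) may already
	 * have dequeued us; an unconditional unlink would then corrupt
	 * the list.  Dequeue only if still queued: */
	list_del_init(&waiter);
	if (!node_empty(&waiter))
		list_del_init(&waiter);	/* skipped: already removed */

	printf("queued=%d\n", !node_empty(&waiter));
	return 0;
}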
+diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
+index 90e5197f4e569..5722467f27379 100644
+--- a/kernel/futex/pi.c
++++ b/kernel/futex/pi.c
+@@ -1135,6 +1135,7 @@ retry:
+ 
+ 	hb = futex_hash(&key);
+ 	spin_lock(&hb->lock);
++retry_hb:
+ 
+ 	/*
+ 	 * Check waiters first. We do not trust user space values at
+@@ -1177,12 +1178,17 @@ retry:
+ 		/*
+ 		 * Futex vs rt_mutex waiter state -- if there are no rt_mutex
+ 		 * waiters even though futex thinks there are, then the waiter
+-		 * is leaving and the uncontended path is safe to take.
++		 * is leaving. The entry needs to be removed from the list so a
++		 * new futex_lock_pi() is not using this stale PI-state while
++		 * the futex is available in user space again.
++		 * There can be more than one task on its way out so it needs
++		 * to retry.
+ 		 */
+ 		rt_waiter = rt_mutex_top_waiter(&pi_state->pi_mutex);
+ 		if (!rt_waiter) {
++			__futex_unqueue(top_waiter);
+ 			raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+-			goto do_uncontended;
++			goto retry_hb;
+ 		}
+ 
+ 		get_pi_state(pi_state);
+@@ -1217,7 +1223,6 @@ retry:
+ 		return ret;
+ 	}
+ 
+-do_uncontended:
+ 	/*
+ 	 * We have no kernel internal state, i.e. no waiters in the
+ 	 * kernel. Waiters which are about to queue themselves are stuck
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 27ca1c866f298..371eb1711d346 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -600,7 +600,7 @@ int __init early_irq_init(void)
+ 		mutex_init(&desc[i].request_mutex);
+ 		init_waitqueue_head(&desc[i].wait_for_threads);
+ 		desc_set_defaults(i, &desc[i], node, NULL, NULL);
+-		irq_resend_init(desc);
++		irq_resend_init(&desc[i]);
+ 	}
+ 	return arch_early_irq_init();
+ }
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index be5642a4ec490..b926c4db8a91d 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -1254,6 +1254,7 @@ int kernel_kexec(void)
+ 		kexec_in_progress = true;
+ 		kernel_restart_prepare("kexec reboot");
+ 		migrate_to_reboot_cpu();
++		syscore_shutdown();
+ 
+ 		/*
+ 		 * migrate_to_reboot_cpu() disables CPU hotplug assuming that
+diff --git a/kernel/power/swap.c b/kernel/power/swap.c
+index a2cb0babb5ec9..d44f5937f1e55 100644
+--- a/kernel/power/swap.c
++++ b/kernel/power/swap.c
+@@ -606,11 +606,11 @@ static int crc32_threadfn(void *data)
+ 	unsigned i;
+ 
+ 	while (1) {
+-		wait_event(d->go, atomic_read(&d->ready) ||
++		wait_event(d->go, atomic_read_acquire(&d->ready) ||
+ 		                  kthread_should_stop());
+ 		if (kthread_should_stop()) {
+ 			d->thr = NULL;
+-			atomic_set(&d->stop, 1);
++			atomic_set_release(&d->stop, 1);
+ 			wake_up(&d->done);
+ 			break;
+ 		}
+@@ -619,7 +619,7 @@ static int crc32_threadfn(void *data)
+ 		for (i = 0; i < d->run_threads; i++)
+ 			*d->crc32 = crc32_le(*d->crc32,
+ 			                     d->unc[i], *d->unc_len[i]);
+-		atomic_set(&d->stop, 1);
++		atomic_set_release(&d->stop, 1);
+ 		wake_up(&d->done);
+ 	}
+ 	return 0;
+@@ -649,12 +649,12 @@ static int lzo_compress_threadfn(void *data)
+ 	struct cmp_data *d = data;
+ 
+ 	while (1) {
+-		wait_event(d->go, atomic_read(&d->ready) ||
++		wait_event(d->go, atomic_read_acquire(&d->ready) ||
+ 		                  kthread_should_stop());
+ 		if (kthread_should_stop()) {
+ 			d->thr = NULL;
+ 			d->ret = -1;
+-			atomic_set(&d->stop, 1);
++			atomic_set_release(&d->stop, 1);
+ 			wake_up(&d->done);
+ 			break;
+ 		}
+@@ -663,7 +663,7 @@ static int lzo_compress_threadfn(void *data)
+ 		d->ret = lzo1x_1_compress(d->unc, d->unc_len,
+ 		                          d->cmp + LZO_HEADER, &d->cmp_len,
+ 		                          d->wrk);
+-		atomic_set(&d->stop, 1);
++		atomic_set_release(&d->stop, 1);
+ 		wake_up(&d->done);
+ 	}
+ 	return 0;
+@@ -798,7 +798,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
+ 
+ 			data[thr].unc_len = off;
+ 
+-			atomic_set(&data[thr].ready, 1);
++			atomic_set_release(&data[thr].ready, 1);
+ 			wake_up(&data[thr].go);
+ 		}
+ 
+@@ -806,12 +806,12 @@ static int save_image_lzo(struct swap_map_handle *handle,
+ 			break;
+ 
+ 		crc->run_threads = thr;
+-		atomic_set(&crc->ready, 1);
++		atomic_set_release(&crc->ready, 1);
+ 		wake_up(&crc->go);
+ 
+ 		for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
+ 			wait_event(data[thr].done,
+-			           atomic_read(&data[thr].stop));
++				atomic_read_acquire(&data[thr].stop));
+ 			atomic_set(&data[thr].stop, 0);
+ 
+ 			ret = data[thr].ret;
+@@ -850,7 +850,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
+ 			}
+ 		}
+ 
+-		wait_event(crc->done, atomic_read(&crc->stop));
++		wait_event(crc->done, atomic_read_acquire(&crc->stop));
+ 		atomic_set(&crc->stop, 0);
+ 	}
+ 
+@@ -1132,12 +1132,12 @@ static int lzo_decompress_threadfn(void *data)
+ 	struct dec_data *d = data;
+ 
+ 	while (1) {
+-		wait_event(d->go, atomic_read(&d->ready) ||
++		wait_event(d->go, atomic_read_acquire(&d->ready) ||
+ 		                  kthread_should_stop());
+ 		if (kthread_should_stop()) {
+ 			d->thr = NULL;
+ 			d->ret = -1;
+-			atomic_set(&d->stop, 1);
++			atomic_set_release(&d->stop, 1);
+ 			wake_up(&d->done);
+ 			break;
+ 		}
+@@ -1150,7 +1150,7 @@ static int lzo_decompress_threadfn(void *data)
+ 			flush_icache_range((unsigned long)d->unc,
+ 					   (unsigned long)d->unc + d->unc_len);
+ 
+-		atomic_set(&d->stop, 1);
++		atomic_set_release(&d->stop, 1);
+ 		wake_up(&d->done);
+ 	}
+ 	return 0;
+@@ -1335,7 +1335,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 		}
+ 
+ 		if (crc->run_threads) {
+-			wait_event(crc->done, atomic_read(&crc->stop));
++			wait_event(crc->done, atomic_read_acquire(&crc->stop));
+ 			atomic_set(&crc->stop, 0);
+ 			crc->run_threads = 0;
+ 		}
+@@ -1371,7 +1371,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 					pg = 0;
+ 			}
+ 
+-			atomic_set(&data[thr].ready, 1);
++			atomic_set_release(&data[thr].ready, 1);
+ 			wake_up(&data[thr].go);
+ 		}
+ 
+@@ -1390,7 +1390,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 
+ 		for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
+ 			wait_event(data[thr].done,
+-			           atomic_read(&data[thr].stop));
++				atomic_read_acquire(&data[thr].stop));
+ 			atomic_set(&data[thr].stop, 0);
+ 
+ 			ret = data[thr].ret;
+@@ -1421,7 +1421,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 				ret = snapshot_write_next(snapshot);
+ 				if (ret <= 0) {
+ 					crc->run_threads = thr + 1;
+-					atomic_set(&crc->ready, 1);
++					atomic_set_release(&crc->ready, 1);
+ 					wake_up(&crc->go);
+ 					goto out_finish;
+ 				}
+@@ -1429,13 +1429,13 @@ static int load_image_lzo(struct swap_map_handle *handle,
+ 		}
+ 
+ 		crc->run_threads = thr;
+-		atomic_set(&crc->ready, 1);
++		atomic_set_release(&crc->ready, 1);
+ 		wake_up(&crc->go);
+ 	}
+ 
+ out_finish:
+ 	if (crc->run_threads) {
+-		wait_event(crc->done, atomic_read(&crc->stop));
++		wait_event(crc->done, atomic_read_acquire(&crc->stop));
+ 		atomic_set(&crc->stop, 0);
+ 	}
+ 	stop = ktime_get();
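
The acquire/release conversions above implement a message-passing handshake: data written before the release store of a flag must be visible to whoever observes the flag with an acquire load. A runnable C11 analog (the kernel code sleeps in wait_event() rather than spinning):

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int payload;
static atomic_int ready;

static int worker(void *arg)
{
	(void)arg;
	/* The acquire load pairs with the release store below and
	 * guarantees the write to 'payload' is visible here. */
	while (!atomic_load_explicit(&ready, memory_order_acquire))
		thrd_yield();
	printf("payload=%d\n", payload);
	return 0;
}

int main(void)
{
	thrd_t t;

	thrd_create(&t, worker, NULL);
	payload = 42;	/* plain write, ordered by the release below */
	atomic_store_explicit(&ready, 1, memory_order_release);
	thrd_join(t, NULL);
	return 0;
}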
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 3ac3c846105fb..157f3ca2a9b56 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1013,6 +1013,38 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
+ 	return needmore;
+ }
+ 
++static void swake_up_one_online_ipi(void *arg)
++{
++	struct swait_queue_head *wqh = arg;
++
++	swake_up_one(wqh);
++}
++
++static void swake_up_one_online(struct swait_queue_head *wqh)
++{
++	int cpu = get_cpu();
++
++	/*
++	 * If called from rcutree_report_cpu_starting(), wake up
++	 * is dangerous that late in the CPU-down hotplug process. The
++	 * scheduler might queue an ignored hrtimer. Defer the wake up
++	 * to an online CPU instead.
++	 */
++	if (unlikely(cpu_is_offline(cpu))) {
++		int target;
++
++		target = cpumask_any_and(housekeeping_cpumask(HK_TYPE_RCU),
++					 cpu_online_mask);
++
++		smp_call_function_single(target, swake_up_one_online_ipi,
++					 wqh, 0);
++		put_cpu();
++	} else {
++		put_cpu();
++		swake_up_one(wqh);
++	}
++}
++
+ /*
+  * Awaken the grace-period kthread.  Don't do a self-awaken (unless in an
+  * interrupt or softirq handler, in which case we just might immediately
+@@ -1037,7 +1069,7 @@ static void rcu_gp_kthread_wake(void)
+ 		return;
+ 	WRITE_ONCE(rcu_state.gp_wake_time, jiffies);
+ 	WRITE_ONCE(rcu_state.gp_wake_seq, READ_ONCE(rcu_state.gp_seq));
+-	swake_up_one(&rcu_state.gp_wq);
++	swake_up_one_online(&rcu_state.gp_wq);
+ }
+ 
+ /*
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 6d7cea5d591f9..2ac440bc7e10b 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -173,7 +173,6 @@ static bool sync_rcu_exp_done_unlocked(struct rcu_node *rnp)
+ 	return ret;
+ }
+ 
+-
+ /*
+  * Report the exit from RCU read-side critical section for the last task
+  * that queued itself during or before the current expedited preemptible-RCU
+@@ -201,7 +200,7 @@ static void __rcu_report_exp_rnp(struct rcu_node *rnp,
+ 			raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 			if (wake) {
+ 				smp_mb(); /* EGP done before wake_up(). */
+-				swake_up_one(&rcu_state.expedited_wq);
++				swake_up_one_online(&rcu_state.expedited_wq);
+ 			}
+ 			break;
+ 		}
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index c108ed8a9804a..3052b1f1168e2 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -99,6 +99,7 @@ static u64 suspend_start;
+  * Interval: 0.5sec.
+  */
+ #define WATCHDOG_INTERVAL (HZ >> 1)
++#define WATCHDOG_INTERVAL_MAX_NS ((2 * WATCHDOG_INTERVAL) * (NSEC_PER_SEC / HZ))
+ 
+ /*
+  * Threshold: 0.0312s, when doubled: 0.0625s.
+@@ -134,6 +135,7 @@ static DECLARE_WORK(watchdog_work, clocksource_watchdog_work);
+ static DEFINE_SPINLOCK(watchdog_lock);
+ static int watchdog_running;
+ static atomic_t watchdog_reset_pending;
++static int64_t watchdog_max_interval;
+ 
+ static inline void clocksource_watchdog_lock(unsigned long *flags)
+ {
+@@ -399,8 +401,8 @@ static inline void clocksource_reset_watchdog(void)
+ static void clocksource_watchdog(struct timer_list *unused)
+ {
+ 	u64 csnow, wdnow, cslast, wdlast, delta;
++	int64_t wd_nsec, cs_nsec, interval;
+ 	int next_cpu, reset_pending;
+-	int64_t wd_nsec, cs_nsec;
+ 	struct clocksource *cs;
+ 	enum wd_read_status read_ret;
+ 	unsigned long extra_wait = 0;
+@@ -470,6 +472,27 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 		if (atomic_read(&watchdog_reset_pending))
+ 			continue;
+ 
++		/*
++		 * The processing of timer softirqs can get delayed (usually
++		 * on account of ksoftirqd not getting to run in a timely
++		 * manner), which causes the watchdog interval to stretch.
++		 * Skew detection may fail for longer watchdog intervals
++		 * on account of fixed margins being used.
++		 * Some clocksources, e.g. acpi_pm, cannot tolerate
++		 * watchdog intervals longer than a few seconds.
++		 */
++		interval = max(cs_nsec, wd_nsec);
++		if (unlikely(interval > WATCHDOG_INTERVAL_MAX_NS)) {
++			if (system_state > SYSTEM_SCHEDULING &&
++			    interval > 2 * watchdog_max_interval) {
++				watchdog_max_interval = interval;
++				pr_warn("Long readout interval, skipping watchdog check: cs_nsec: %lld wd_nsec: %lld\n",
++					cs_nsec, wd_nsec);
++			}
++			watchdog_timer.expires = jiffies;
++			continue;
++		}
++
+ 		/* Check the deviation from the watchdog clocksource. */
+ 		md = cs->uncertainty_margin + watchdog->uncertainty_margin;
+ 		if (abs(cs_nsec - wd_nsec) > md) {
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 2305366a818a7..ca2d59579fce9 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -1574,6 +1574,7 @@ void tick_cancel_sched_timer(int cpu)
+ {
+ 	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
+ 	ktime_t idle_sleeptime, iowait_sleeptime;
++	unsigned long idle_calls, idle_sleeps;
+ 
+ # ifdef CONFIG_HIGH_RES_TIMERS
+ 	if (ts->sched_timer.base)
+@@ -1582,9 +1583,13 @@ void tick_cancel_sched_timer(int cpu)
+ 
+ 	idle_sleeptime = ts->idle_sleeptime;
+ 	iowait_sleeptime = ts->iowait_sleeptime;
++	idle_calls = ts->idle_calls;
++	idle_sleeps = ts->idle_sleeps;
+ 	memset(ts, 0, sizeof(*ts));
+ 	ts->idle_sleeptime = idle_sleeptime;
+ 	ts->iowait_sleeptime = iowait_sleeptime;
++	ts->idle_calls = idle_calls;
++	ts->idle_sleeps = idle_sleeps;
+ }
+ #endif
+ 
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index c774e560f2f95..a4dcf0f243521 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -574,7 +574,12 @@ __tracing_map_insert(struct tracing_map *map, void *key, bool lookup_only)
+ 				}
+ 
+ 				memcpy(elt->key, key, map->key_size);
+-				entry->val = elt;
++				/*
++				 * Ensure the initialization is visible and
++				 * publish the elt.
++				 */
++				smp_wmb();
++				WRITE_ONCE(entry->val, elt);
+ 				atomic64_inc(&map->hits);
+ 
+ 				return entry->val;
+diff --git a/lib/crypto/mpi/ec.c b/lib/crypto/mpi/ec.c
+index 40f5908e57a4f..e16dca1e23d52 100644
+--- a/lib/crypto/mpi/ec.c
++++ b/lib/crypto/mpi/ec.c
+@@ -584,6 +584,9 @@ void mpi_ec_init(struct mpi_ec_ctx *ctx, enum gcry_mpi_ec_models model,
+ 	ctx->a = mpi_copy(a);
+ 	ctx->b = mpi_copy(b);
+ 
++	ctx->d = NULL;
++	ctx->t.two_inv_p = NULL;
++
+ 	ctx->t.p_barrett = use_barrett > 0 ? mpi_barrett_init(ctx->p, 0) : NULL;
+ 
+ 	mpi_ec_get_reset(ctx);
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 5a88d6d24d793..4823ad979b72f 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -2141,6 +2141,9 @@ static void __init memmap_init_reserved_pages(void)
+ 			start = region->base;
+ 			end = start + region->size;
+ 
++			if (nid == NUMA_NO_NODE || nid >= MAX_NUMNODES)
++				nid = early_pfn_to_nid(PFN_DOWN(start));
++
+ 			reserve_bootmem_region(start, end, nid);
+ 		}
+ 	}
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 397f2a6e34cbd..bad3039d165e6 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1025,38 +1025,31 @@ out:
+ }
+ 
+ /*
+- * To record some information during migration, we use some unused
+- * fields (mapping and private) of struct folio of the newly allocated
+- * destination folio.  This is safe because nobody is using them
+- * except us.
++ * To record some information during migration, we use the unused private
++ * field of the struct folio of the newly allocated destination folio.
++ * This is safe because nobody is using it except us.
+  */
+-union migration_ptr {
+-	struct anon_vma *anon_vma;
+-	struct address_space *mapping;
+-};
+-
+ enum {
+ 	PAGE_WAS_MAPPED = BIT(0),
+ 	PAGE_WAS_MLOCKED = BIT(1),
++	PAGE_OLD_STATES = PAGE_WAS_MAPPED | PAGE_WAS_MLOCKED,
+ };
+ 
+ static void __migrate_folio_record(struct folio *dst,
+-				   unsigned long old_page_state,
++				   int old_page_state,
+ 				   struct anon_vma *anon_vma)
+ {
+-	union migration_ptr ptr = { .anon_vma = anon_vma };
+-	dst->mapping = ptr.mapping;
+-	dst->private = (void *)old_page_state;
++	dst->private = (void *)anon_vma + old_page_state;
+ }
+ 
+ static void __migrate_folio_extract(struct folio *dst,
+ 				   int *old_page_state,
+ 				   struct anon_vma **anon_vmap)
+ {
+-	union migration_ptr ptr = { .mapping = dst->mapping };
+-	*anon_vmap = ptr.anon_vma;
+-	*old_page_state = (unsigned long)dst->private;
+-	dst->mapping = NULL;
++	unsigned long private = (unsigned long)dst->private;
++
++	*anon_vmap = (struct anon_vma *)(private & ~PAGE_OLD_STATES);
++	*old_page_state = private & PAGE_OLD_STATES;
+ 	dst->private = NULL;
+ }
+ 
+diff --git a/mm/mm_init.c b/mm/mm_init.c
+index 077bfe393b5e2..513bad6727089 100644
+--- a/mm/mm_init.c
++++ b/mm/mm_init.c
+@@ -26,6 +26,7 @@
+ #include <linux/pgtable.h>
+ #include <linux/swap.h>
+ #include <linux/cma.h>
++#include <linux/crash_dump.h>
+ #include "internal.h"
+ #include "slab.h"
+ #include "shuffle.h"
+@@ -381,6 +382,11 @@ static void __init find_zone_movable_pfns_for_nodes(void)
+ 			goto out;
+ 		}
+ 
++		if (is_kdump_kernel()) {
++			pr_warn("The system is under kdump, ignore kernelcore=mirror.\n");
++			goto out;
++		}
++
+ 		for_each_mem_region(r) {
+ 			if (memblock_is_mirror(r))
+ 				continue;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 733732e7e0ba7..6d2a74138f456 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3951,14 +3951,9 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
+ 	else
+ 		(*no_progress_loops)++;
+ 
+-	/*
+-	 * Make sure we converge to OOM if we cannot make any progress
+-	 * several times in the row.
+-	 */
+-	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+-		/* Before OOM, exhaust highatomic_reserve */
+-		return unreserve_highatomic_pageblock(ac, true);
+-	}
++	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
++		goto out;
++
+ 
+ 	/*
+ 	 * Keep reclaiming pages while there is a chance this will lead
+@@ -4001,6 +3996,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
+ 		schedule_timeout_uninterruptible(1);
+ 	else
+ 		cond_resched();
++out:
++	/* Before OOM, exhaust highatomic_reserve */
++	if (!ret)
++		return unreserve_highatomic_pageblock(ac, true);
++
+ 	return ret;
+ }
+ 
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 77d91e565045c..338cf946dee8d 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -791,6 +791,13 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+ 	if (empty) {
+ 		unsigned long section_nr = pfn_to_section_nr(pfn);
+ 
++		/*
++		 * Mark the section invalid so that valid_section()
++		 * returns false. This prevents code from dereferencing
++		 * the ms->usage array.
++		 */
++		ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
++
+ 		/*
+ 		 * When removing an early section, the usage map is kept (as the
+ 		 * usage maps of other sections fall into the same page). It
+@@ -799,16 +806,10 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+ 		 * was allocated during boot.
+ 		 */
+ 		if (!PageReserved(virt_to_page(ms->usage))) {
+-			kfree(ms->usage);
+-			ms->usage = NULL;
++			kfree_rcu(ms->usage, rcu);
++			WRITE_ONCE(ms->usage, NULL);
+ 		}
+ 		memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+-		/*
+-		 * Mark the section invalid so that valid_section()
+-		 * return false. This prevents code from dereferencing
+-		 * ms->usage array.
+-		 */
+-		ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
+ 	}
+ 
+ 	/*
+diff --git a/net/8021q/vlan_netlink.c b/net/8021q/vlan_netlink.c
+index 214532173536b..a3b68243fd4b1 100644
+--- a/net/8021q/vlan_netlink.c
++++ b/net/8021q/vlan_netlink.c
+@@ -118,12 +118,16 @@ static int vlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ 	}
+ 	if (data[IFLA_VLAN_INGRESS_QOS]) {
+ 		nla_for_each_nested(attr, data[IFLA_VLAN_INGRESS_QOS], rem) {
++			if (nla_type(attr) != IFLA_VLAN_QOS_MAPPING)
++				continue;
+ 			m = nla_data(attr);
+ 			vlan_dev_set_ingress_priority(dev, m->to, m->from);
+ 		}
+ 	}
+ 	if (data[IFLA_VLAN_EGRESS_QOS]) {
+ 		nla_for_each_nested(attr, data[IFLA_VLAN_EGRESS_QOS], rem) {
++			if (nla_type(attr) != IFLA_VLAN_QOS_MAPPING)
++				continue;
+ 			m = nla_data(attr);
+ 			err = vlan_dev_set_egress_priority(dev, m->from, m->to);
+ 			if (err)
+diff --git a/net/core/dev.c b/net/core/dev.c
+index ad20bebe153fc..add22ca0dff95 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -11509,6 +11509,7 @@ static struct pernet_operations __net_initdata netdev_net_ops = {
+ 
+ static void __net_exit default_device_exit_net(struct net *net)
+ {
++	struct netdev_name_node *name_node, *tmp;
+ 	struct net_device *dev, *aux;
+ 	/*
+ 	 * Push all migratable network devices back to the
+@@ -11531,6 +11532,14 @@ static void __net_exit default_device_exit_net(struct net *net)
+ 		snprintf(fb_name, IFNAMSIZ, "dev%d", dev->ifindex);
+ 		if (netdev_name_in_use(&init_net, fb_name))
+ 			snprintf(fb_name, IFNAMSIZ, "dev%%d");
++
++		netdev_for_each_altname_safe(dev, name_node, tmp)
++			if (netdev_name_in_use(&init_net, name_node->name)) {
++				netdev_name_node_del(name_node);
++				synchronize_rcu();
++				__netdev_name_node_alt_destroy(name_node);
++			}
++
+ 		err = dev_change_net_namespace(dev, &init_net, fb_name);
+ 		if (err) {
+ 			pr_emerg("%s: failed to move %s to init_net: %d\n",
+diff --git a/net/core/dev.h b/net/core/dev.h
+index 5aa45f0fd4ae5..3f5eb92396b68 100644
+--- a/net/core/dev.h
++++ b/net/core/dev.h
+@@ -64,6 +64,9 @@ int dev_change_name(struct net_device *dev, const char *newname);
+ 
+ #define netdev_for_each_altname(dev, namenode)				\
+ 	list_for_each_entry((namenode), &(dev)->name_node->list, list)
++#define netdev_for_each_altname_safe(dev, namenode, next)		\
++	list_for_each_entry_safe((namenode), (next), &(dev)->name_node->list, \
++				 list)
+ 
+ int netdev_name_node_alt_create(struct net_device *dev, const char *name);
+ int netdev_name_node_alt_destroy(struct net_device *dev, const char *name);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 1737884be52f8..cee53838310ff 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -83,6 +83,7 @@
+ #include <net/netfilter/nf_conntrack_bpf.h>
+ #include <net/netkit.h>
+ #include <linux/un.h>
++#include <net/xdp_sock_drv.h>
+ 
+ #include "dev.h"
+ 
+@@ -4090,10 +4091,46 @@ static int bpf_xdp_frags_increase_tail(struct xdp_buff *xdp, int offset)
+ 	memset(skb_frag_address(frag) + skb_frag_size(frag), 0, offset);
+ 	skb_frag_size_add(frag, offset);
+ 	sinfo->xdp_frags_size += offset;
++	if (rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL)
++		xsk_buff_get_tail(xdp)->data_end += offset;
+ 
+ 	return 0;
+ }
+ 
++static void bpf_xdp_shrink_data_zc(struct xdp_buff *xdp, int shrink,
++				   struct xdp_mem_info *mem_info, bool release)
++{
++	struct xdp_buff *zc_frag = xsk_buff_get_tail(xdp);
++
++	if (release) {
++		xsk_buff_del_tail(zc_frag);
++		__xdp_return(NULL, mem_info, false, zc_frag);
++	} else {
++		zc_frag->data_end -= shrink;
++	}
++}
++
++static bool bpf_xdp_shrink_data(struct xdp_buff *xdp, skb_frag_t *frag,
++				int shrink)
++{
++	struct xdp_mem_info *mem_info = &xdp->rxq->mem;
++	bool release = skb_frag_size(frag) == shrink;
++
++	if (mem_info->type == MEM_TYPE_XSK_BUFF_POOL) {
++		bpf_xdp_shrink_data_zc(xdp, shrink, mem_info, release);
++		goto out;
++	}
++
++	if (release) {
++		struct page *page = skb_frag_page(frag);
++
++		__xdp_return(page_address(page), mem_info, false, NULL);
++	}
++
++out:
++	return release;
++}
++
+ static int bpf_xdp_frags_shrink_tail(struct xdp_buff *xdp, int offset)
+ {
+ 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+@@ -4108,12 +4145,7 @@ static int bpf_xdp_frags_shrink_tail(struct xdp_buff *xdp, int offset)
+ 
+ 		len_free += shrink;
+ 		offset -= shrink;
+-
+-		if (skb_frag_size(frag) == shrink) {
+-			struct page *page = skb_frag_page(frag);
+-
+-			__xdp_return(page_address(page), &xdp->rxq->mem,
+-				     false, NULL);
++		if (bpf_xdp_shrink_data(xdp, frag, shrink)) {
+ 			n_frags_free++;
+ 		} else {
+ 			skb_frag_size_sub(frag, shrink);
+diff --git a/net/core/request_sock.c b/net/core/request_sock.c
+index f35c2e9984062..63de5c635842b 100644
+--- a/net/core/request_sock.c
++++ b/net/core/request_sock.c
+@@ -33,9 +33,6 @@
+ 
+ void reqsk_queue_alloc(struct request_sock_queue *queue)
+ {
+-	spin_lock_init(&queue->rskq_lock);
+-
+-	spin_lock_init(&queue->fastopenq.lock);
+ 	queue->fastopenq.rskq_rst_head = NULL;
+ 	queue->fastopenq.rskq_rst_tail = NULL;
+ 	queue->fastopenq.qlen = 0;
+diff --git a/net/core/sock.c b/net/core/sock.c
+index d02534c77413f..e5d43a068f8ed 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -107,6 +107,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/poll.h>
+ #include <linux/tcp.h>
++#include <linux/udp.h>
+ #include <linux/init.h>
+ #include <linux/highmem.h>
+ #include <linux/user_namespace.h>
+@@ -4148,8 +4149,14 @@ bool sk_busy_loop_end(void *p, unsigned long start_time)
+ {
+ 	struct sock *sk = p;
+ 
+-	return !skb_queue_empty_lockless(&sk->sk_receive_queue) ||
+-	       sk_busy_loop_timeout(sk, start_time);
++	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
++		return true;
++
++	if (sk_is_udp(sk) &&
++	    !skb_queue_empty_lockless(&udp_sk(sk)->reader_queue))
++		return true;
++
++	return sk_busy_loop_timeout(sk, start_time);
+ }
+ EXPORT_SYMBOL(sk_busy_loop_end);
+ #endif /* CONFIG_NET_RX_BUSY_POLL */
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index ea0b0334a0fb0..1c58bd72e1245 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -330,6 +330,9 @@ lookup_protocol:
+ 	if (INET_PROTOSW_REUSE & answer_flags)
+ 		sk->sk_reuse = SK_CAN_REUSE;
+ 
++	if (INET_PROTOSW_ICSK & answer_flags)
++		inet_init_csk_locks(sk);
++
+ 	inet = inet_sk(sk);
+ 	inet_assign_bit(IS_ICSK, sk, INET_PROTOSW_ICSK & answer_flags);
+ 
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 394a498c28232..762817d6c8d70 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -730,6 +730,10 @@ out:
+ 	}
+ 	if (req)
+ 		reqsk_put(req);
++
++	if (newsk)
++		inet_init_csk_locks(newsk);
++
+ 	return newsk;
+ out_err:
+ 	newsk = NULL;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index ff6838ca2e580..7bce79beca2b9 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -722,6 +722,7 @@ void tcp_push(struct sock *sk, int flags, int mss_now,
+ 		if (!test_bit(TSQ_THROTTLED, &sk->sk_tsq_flags)) {
+ 			NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAUTOCORKING);
+ 			set_bit(TSQ_THROTTLED, &sk->sk_tsq_flags);
++			smp_mb__after_atomic();
+ 		}
+ 		/* It is possible TX completion already happened
+ 		 * before we set TSQ_THROTTLED.
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 13a1833a4df52..959bfd9f6344f 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -199,6 +199,9 @@ lookup_protocol:
+ 	if (INET_PROTOSW_REUSE & answer_flags)
+ 		sk->sk_reuse = SK_CAN_REUSE;
+ 
++	if (INET_PROTOSW_ICSK & answer_flags)
++		inet_init_csk_locks(sk);
++
+ 	inet = inet_sk(sk);
+ 	inet_assign_bit(IS_ICSK, sk, INET_PROTOSW_ICSK & answer_flags);
+ 
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 9b06c380866b5..20551cfb7da6d 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -928,14 +928,15 @@ copy_uaddr:
+  */
+ static int llc_ui_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ {
++	DECLARE_SOCKADDR(struct sockaddr_llc *, addr, msg->msg_name);
+ 	struct sock *sk = sock->sk;
+ 	struct llc_sock *llc = llc_sk(sk);
+-	DECLARE_SOCKADDR(struct sockaddr_llc *, addr, msg->msg_name);
+ 	int flags = msg->msg_flags;
+ 	int noblock = flags & MSG_DONTWAIT;
++	int rc = -EINVAL, copied = 0, hdrlen, hh_len;
+ 	struct sk_buff *skb = NULL;
++	struct net_device *dev;
+ 	size_t size = 0;
+-	int rc = -EINVAL, copied = 0, hdrlen;
+ 
+ 	dprintk("%s: sending from %02X to %02X\n", __func__,
+ 		llc->laddr.lsap, llc->daddr.lsap);
+@@ -955,22 +956,29 @@ static int llc_ui_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 		if (rc)
+ 			goto out;
+ 	}
+-	hdrlen = llc->dev->hard_header_len + llc_ui_header_len(sk, addr);
++	dev = llc->dev;
++	hh_len = LL_RESERVED_SPACE(dev);
++	hdrlen = llc_ui_header_len(sk, addr);
+ 	size = hdrlen + len;
+-	if (size > llc->dev->mtu)
+-		size = llc->dev->mtu;
++	size = min_t(size_t, size, READ_ONCE(dev->mtu));
+ 	copied = size - hdrlen;
+ 	rc = -EINVAL;
+ 	if (copied < 0)
+ 		goto out;
+ 	release_sock(sk);
+-	skb = sock_alloc_send_skb(sk, size, noblock, &rc);
++	skb = sock_alloc_send_skb(sk, hh_len + size, noblock, &rc);
+ 	lock_sock(sk);
+ 	if (!skb)
+ 		goto out;
+-	skb->dev      = llc->dev;
++	if (sock_flag(sk, SOCK_ZAPPED) ||
++	    llc->dev != dev ||
++	    hdrlen != llc_ui_header_len(sk, addr) ||
++	    hh_len != LL_RESERVED_SPACE(dev) ||
++	    size > READ_ONCE(dev->mtu))
++		goto out;
++	skb->dev      = dev;
+ 	skb->protocol = llc_proto_type(addr->sllc_arphrd);
+-	skb_reserve(skb, hdrlen);
++	skb_reserve(skb, hh_len + hdrlen);
+ 	rc = memcpy_from_msg(skb_put(skb, copied), msg, copied);
+ 	if (rc)
+ 		goto out;
+diff --git a/net/llc/llc_core.c b/net/llc/llc_core.c
+index 6e387aadffcec..4f16d9c88350b 100644
+--- a/net/llc/llc_core.c
++++ b/net/llc/llc_core.c
+@@ -135,22 +135,15 @@ static struct packet_type llc_packet_type __read_mostly = {
+ 	.func = llc_rcv,
+ };
+ 
+-static struct packet_type llc_tr_packet_type __read_mostly = {
+-	.type = cpu_to_be16(ETH_P_TR_802_2),
+-	.func = llc_rcv,
+-};
+-
+ static int __init llc_init(void)
+ {
+ 	dev_add_pack(&llc_packet_type);
+-	dev_add_pack(&llc_tr_packet_type);
+ 	return 0;
+ }
+ 
+ static void __exit llc_exit(void)
+ {
+ 	dev_remove_pack(&llc_packet_type);
+-	dev_remove_pack(&llc_tr_packet_type);
+ }
+ 
+ module_init(llc_init);
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 0ba613dd1cc47..c33decbb97f2d 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -404,7 +404,10 @@ void sta_info_free(struct ieee80211_local *local, struct sta_info *sta)
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(sta->link); i++) {
+-		if (!(sta->sta.valid_links & BIT(i)))
++		struct link_sta_info *link_sta;
++
++		link_sta = rcu_access_pointer(sta->link[i]);
++		if (!link_sta)
+ 			continue;
+ 
+ 		sta_remove_link(sta, i, false);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f032c29f1da64..6a987b36d0bb0 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -24,6 +24,7 @@
+ #include <net/sock.h>
+ 
+ #define NFT_MODULE_AUTOLOAD_LIMIT (MODULE_NAME_LEN - sizeof("nft-expr-255-"))
++#define NFT_SET_MAX_ANONLEN 16
+ 
+ unsigned int nf_tables_net_id __read_mostly;
+ 
+@@ -4411,6 +4412,9 @@ static int nf_tables_set_alloc_name(struct nft_ctx *ctx, struct nft_set *set,
+ 		if (p[1] != 'd' || strchr(p + 2, '%'))
+ 			return -EINVAL;
+ 
++		if (strnlen(name, NFT_SET_MAX_ANONLEN) >= NFT_SET_MAX_ANONLEN)
++			return -EINVAL;
++
+ 		inuse = (unsigned long *)get_zeroed_page(GFP_KERNEL);
+ 		if (inuse == NULL)
+ 			return -ENOMEM;
+@@ -10905,16 +10909,10 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 	data->verdict.code = ntohl(nla_get_be32(tb[NFTA_VERDICT_CODE]));
+ 
+ 	switch (data->verdict.code) {
+-	default:
+-		switch (data->verdict.code & NF_VERDICT_MASK) {
+-		case NF_ACCEPT:
+-		case NF_DROP:
+-		case NF_QUEUE:
+-			break;
+-		default:
+-			return -EINVAL;
+-		}
+-		fallthrough;
++	case NF_ACCEPT:
++	case NF_DROP:
++	case NF_QUEUE:
++		break;
+ 	case NFT_CONTINUE:
+ 	case NFT_BREAK:
+ 	case NFT_RETURN:
+@@ -10949,6 +10947,8 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 
+ 		data->verdict.chain = chain;
+ 		break;
++	default:
++		return -EINVAL;
+ 	}
+ 
+ 	desc->len = sizeof(data->verdict);
+diff --git a/net/netfilter/nft_chain_filter.c b/net/netfilter/nft_chain_filter.c
+index 680fe557686e4..274b6f7e6bb57 100644
+--- a/net/netfilter/nft_chain_filter.c
++++ b/net/netfilter/nft_chain_filter.c
+@@ -357,9 +357,10 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 				  unsigned long event, void *ptr)
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
++	struct nft_base_chain *basechain;
+ 	struct nftables_pernet *nft_net;
+-	struct nft_table *table;
+ 	struct nft_chain *chain, *nr;
++	struct nft_table *table;
+ 	struct nft_ctx ctx = {
+ 		.net	= dev_net(dev),
+ 	};
+@@ -371,7 +372,8 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 	nft_net = nft_pernet(ctx.net);
+ 	mutex_lock(&nft_net->commit_mutex);
+ 	list_for_each_entry(table, &nft_net->tables, list) {
+-		if (table->family != NFPROTO_NETDEV)
++		if (table->family != NFPROTO_NETDEV &&
++		    table->family != NFPROTO_INET)
+ 			continue;
+ 
+ 		ctx.family = table->family;
+@@ -380,6 +382,11 @@ static int nf_tables_netdev_event(struct notifier_block *this,
+ 			if (!nft_is_base_chain(chain))
+ 				continue;
+ 
++			basechain = nft_base_chain(chain);
++			if (table->family == NFPROTO_INET &&
++			    basechain->ops.hooknum != NF_INET_INGRESS)
++				continue;
++
+ 			ctx.chain = chain;
+ 			nft_netdev_event(event, dev, &ctx);
+ 		}
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 5284cd2ad5327..f0eeda97bfcd9 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -350,6 +350,12 @@ static int nft_target_validate(const struct nft_ctx *ctx,
+ 	unsigned int hook_mask = 0;
+ 	int ret;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_BRIDGE &&
++	    ctx->family != NFPROTO_ARP)
++		return -EOPNOTSUPP;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+@@ -595,6 +601,12 @@ static int nft_match_validate(const struct nft_ctx *ctx,
+ 	unsigned int hook_mask = 0;
+ 	int ret;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_BRIDGE &&
++	    ctx->family != NFPROTO_ARP)
++		return -EOPNOTSUPP;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index ab3362c483b4a..397351fa4d5f8 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -384,6 +384,11 @@ static int nft_flow_offload_validate(const struct nft_ctx *ctx,
+ {
+ 	unsigned int hook_mask = (1 << NF_INET_FORWARD);
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain, hook_mask);
+ }
+ 
+diff --git a/net/netfilter/nft_limit.c b/net/netfilter/nft_limit.c
+index 79039afde34ec..cefa25e0dbb0a 100644
+--- a/net/netfilter/nft_limit.c
++++ b/net/netfilter/nft_limit.c
+@@ -58,17 +58,19 @@ static inline bool nft_limit_eval(struct nft_limit_priv *priv, u64 cost)
+ static int nft_limit_init(struct nft_limit_priv *priv,
+ 			  const struct nlattr * const tb[], bool pkts)
+ {
++	u64 unit, tokens, rate_with_burst;
+ 	bool invert = false;
+-	u64 unit, tokens;
+ 
+ 	if (tb[NFTA_LIMIT_RATE] == NULL ||
+ 	    tb[NFTA_LIMIT_UNIT] == NULL)
+ 		return -EINVAL;
+ 
+ 	priv->rate = be64_to_cpu(nla_get_be64(tb[NFTA_LIMIT_RATE]));
++	if (priv->rate == 0)
++		return -EINVAL;
++
+ 	unit = be64_to_cpu(nla_get_be64(tb[NFTA_LIMIT_UNIT]));
+-	priv->nsecs = unit * NSEC_PER_SEC;
+-	if (priv->rate == 0 || priv->nsecs < unit)
++	if (check_mul_overflow(unit, NSEC_PER_SEC, &priv->nsecs))
+ 		return -EOVERFLOW;
+ 
+ 	if (tb[NFTA_LIMIT_BURST])
+@@ -77,18 +79,25 @@ static int nft_limit_init(struct nft_limit_priv *priv,
+ 	if (pkts && priv->burst == 0)
+ 		priv->burst = NFT_LIMIT_PKT_BURST_DEFAULT;
+ 
+-	if (priv->rate + priv->burst < priv->rate)
++	if (check_add_overflow(priv->rate, priv->burst, &rate_with_burst))
+ 		return -EOVERFLOW;
+ 
+ 	if (pkts) {
+-		tokens = div64_u64(priv->nsecs, priv->rate) * priv->burst;
++		u64 tmp = div64_u64(priv->nsecs, priv->rate);
++
++		if (check_mul_overflow(tmp, priv->burst, &tokens))
++			return -EOVERFLOW;
+ 	} else {
++		u64 tmp;
++
+ 		/* The token bucket size limits the number of tokens can be
+ 		 * accumulated. tokens_max specifies the bucket size.
+ 		 * tokens_max = unit * (rate + burst) / rate.
+ 		 */
+-		tokens = div64_u64(priv->nsecs * (priv->rate + priv->burst),
+-				 priv->rate);
++		if (check_mul_overflow(priv->nsecs, rate_with_burst, &tmp))
++			return -EOVERFLOW;
++
++		tokens = div64_u64(tmp, priv->rate);
+ 	}
+ 
+ 	if (tb[NFTA_LIMIT_FLAGS]) {
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index 583885ce72328..808f5802c2704 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -143,6 +143,11 @@ static int nft_nat_validate(const struct nft_ctx *ctx,
+ 	struct nft_nat *priv = nft_expr_priv(expr);
+ 	int err;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	err = nft_chain_validate_dependency(ctx->chain, NFT_CHAIN_T_NAT);
+ 	if (err < 0)
+ 		return err;
+diff --git a/net/netfilter/nft_rt.c b/net/netfilter/nft_rt.c
+index 35a2c28caa60b..24d9771385729 100644
+--- a/net/netfilter/nft_rt.c
++++ b/net/netfilter/nft_rt.c
+@@ -166,6 +166,11 @@ static int nft_rt_validate(const struct nft_ctx *ctx, const struct nft_expr *exp
+ 	const struct nft_rt *priv = nft_expr_priv(expr);
+ 	unsigned int hooks;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	switch (priv->key) {
+ 	case NFT_RT_NEXTHOP4:
+ 	case NFT_RT_NEXTHOP6:
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index 9ed85be79452d..f30163e2ca620 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -242,6 +242,11 @@ static int nft_socket_validate(const struct nft_ctx *ctx,
+ 			       const struct nft_expr *expr,
+ 			       const struct nft_data **data)
+ {
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain,
+ 					(1 << NF_INET_PRE_ROUTING) |
+ 					(1 << NF_INET_LOCAL_IN) |
+diff --git a/net/netfilter/nft_synproxy.c b/net/netfilter/nft_synproxy.c
+index 13da882669a4e..1d737f89dfc18 100644
+--- a/net/netfilter/nft_synproxy.c
++++ b/net/netfilter/nft_synproxy.c
+@@ -186,7 +186,6 @@ static int nft_synproxy_do_init(const struct nft_ctx *ctx,
+ 		break;
+ #endif
+ 	case NFPROTO_INET:
+-	case NFPROTO_BRIDGE:
+ 		err = nf_synproxy_ipv4_init(snet, ctx->net);
+ 		if (err)
+ 			goto nf_ct_failure;
+@@ -219,7 +218,6 @@ static void nft_synproxy_do_destroy(const struct nft_ctx *ctx)
+ 		break;
+ #endif
+ 	case NFPROTO_INET:
+-	case NFPROTO_BRIDGE:
+ 		nf_synproxy_ipv4_fini(snet, ctx->net);
+ 		nf_synproxy_ipv6_fini(snet, ctx->net);
+ 		break;
+@@ -253,6 +251,11 @@ static int nft_synproxy_validate(const struct nft_ctx *ctx,
+ 				 const struct nft_expr *expr,
+ 				 const struct nft_data **data)
+ {
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain, (1 << NF_INET_LOCAL_IN) |
+ 						    (1 << NF_INET_FORWARD));
+ }
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index ae15cd693f0ec..71412adb73d41 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -316,6 +316,11 @@ static int nft_tproxy_validate(const struct nft_ctx *ctx,
+ 			       const struct nft_expr *expr,
+ 			       const struct nft_data **data)
+ {
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	return nft_chain_validate_hooks(ctx->chain, 1 << NF_INET_PRE_ROUTING);
+ }
+ 
+diff --git a/net/netfilter/nft_xfrm.c b/net/netfilter/nft_xfrm.c
+index 452f8587addad..1c866757db552 100644
+--- a/net/netfilter/nft_xfrm.c
++++ b/net/netfilter/nft_xfrm.c
+@@ -235,6 +235,11 @@ static int nft_xfrm_validate(const struct nft_ctx *ctx, const struct nft_expr *e
+ 	const struct nft_xfrm *priv = nft_expr_priv(expr);
+ 	unsigned int hooks;
+ 
++	if (ctx->family != NFPROTO_IPV4 &&
++	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET)
++		return -EOPNOTSUPP;
++
+ 	switch (priv->dir) {
+ 	case XFRM_POLICY_IN:
+ 		hooks = (1 << NF_INET_FORWARD) |
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index eb086b06d60da..d9107b545d360 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -374,7 +374,7 @@ static void netlink_skb_destructor(struct sk_buff *skb)
+ 	if (is_vmalloc_addr(skb->head)) {
+ 		if (!skb->cloned ||
+ 		    !atomic_dec_return(&(skb_shinfo(skb)->dataref)))
+-			vfree(skb->head);
++			vfree_atomic(skb->head);
+ 
+ 		skb->head = NULL;
+ 	}
+diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
+index 01c4cdfef45df..8435a20968ef5 100644
+--- a/net/rds/af_rds.c
++++ b/net/rds/af_rds.c
+@@ -419,7 +419,7 @@ static int rds_recv_track_latency(struct rds_sock *rs, sockptr_t optval,
+ 
+ 	rs->rs_rx_traces = trace.rx_traces;
+ 	for (i = 0; i < rs->rs_rx_traces; i++) {
+-		if (trace.rx_trace_pos[i] > RDS_MSG_RX_DGRAM_TRACE_MAX) {
++		if (trace.rx_trace_pos[i] >= RDS_MSG_RX_DGRAM_TRACE_MAX) {
+ 			rs->rs_rx_traces = 0;
+ 			return -EFAULT;
+ 		}
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index fa8aec78f63d7..465bfe5eb0617 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -258,16 +258,62 @@ static int rxrpc_listen(struct socket *sock, int backlog)
+ 	return ret;
+ }
+ 
++/**
++ * rxrpc_kernel_lookup_peer - Obtain remote transport endpoint for an address
++ * @sock: The socket through which it will be accessed
++ * @srx: The network address
++ * @gfp: Allocation flags
++ *
++ * Look up or create a remote transport endpoint record for the specified
++ * address and return it with a ref held.
++ */
++struct rxrpc_peer *rxrpc_kernel_lookup_peer(struct socket *sock,
++					    struct sockaddr_rxrpc *srx, gfp_t gfp)
++{
++	struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
++	int ret;
++
++	ret = rxrpc_validate_address(rx, srx, sizeof(*srx));
++	if (ret < 0)
++		return ERR_PTR(ret);
++
++	return rxrpc_lookup_peer(rx->local, srx, gfp);
++}
++EXPORT_SYMBOL(rxrpc_kernel_lookup_peer);
++
++/**
++ * rxrpc_kernel_get_peer - Get a reference on a peer
++ * @peer: The peer to get a reference on.
++ *
++ * Get an additional reference on a remote peer record.
++ */
++struct rxrpc_peer *rxrpc_kernel_get_peer(struct rxrpc_peer *peer)
++{
++	return peer ? rxrpc_get_peer(peer, rxrpc_peer_get_application) : NULL;
++}
++EXPORT_SYMBOL(rxrpc_kernel_get_peer);
++
++/**
++ * rxrpc_kernel_put_peer - Allow a kernel app to drop a peer reference
++ * @peer: The peer to drop a ref on
++ */
++void rxrpc_kernel_put_peer(struct rxrpc_peer *peer)
++{
++	rxrpc_put_peer(peer, rxrpc_peer_put_application);
++}
++EXPORT_SYMBOL(rxrpc_kernel_put_peer);
++
+ /**
+  * rxrpc_kernel_begin_call - Allow a kernel service to begin a call
+  * @sock: The socket on which to make the call
+- * @srx: The address of the peer to contact
++ * @peer: The peer to contact
+  * @key: The security context to use (defaults to socket setting)
+  * @user_call_ID: The ID to use
+  * @tx_total_len: Total length of data to transmit during the call (or -1)
+  * @hard_timeout: The maximum lifespan of the call in sec
+  * @gfp: The allocation constraints
+  * @notify_rx: Where to send notifications instead of socket queue
++ * @service_id: The ID of the service to contact
+  * @upgrade: Request service upgrade for call
+  * @interruptibility: The call is interruptible, or can be canceled.
+  * @debug_id: The debug ID for tracing to be assigned to the call
+@@ -280,13 +326,14 @@ static int rxrpc_listen(struct socket *sock, int backlog)
+  * supplying @srx and @key.
+  */
+ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+-					   struct sockaddr_rxrpc *srx,
++					   struct rxrpc_peer *peer,
+ 					   struct key *key,
+ 					   unsigned long user_call_ID,
+ 					   s64 tx_total_len,
+ 					   u32 hard_timeout,
+ 					   gfp_t gfp,
+ 					   rxrpc_notify_rx_t notify_rx,
++					   u16 service_id,
+ 					   bool upgrade,
+ 					   enum rxrpc_interruptibility interruptibility,
+ 					   unsigned int debug_id)
+@@ -295,13 +342,11 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+ 	struct rxrpc_call_params p;
+ 	struct rxrpc_call *call;
+ 	struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
+-	int ret;
+ 
+ 	_enter(",,%x,%lx", key_serial(key), user_call_ID);
+ 
+-	ret = rxrpc_validate_address(rx, srx, sizeof(*srx));
+-	if (ret < 0)
+-		return ERR_PTR(ret);
++	if (WARN_ON_ONCE(peer->local != rx->local))
++		return ERR_PTR(-EIO);
+ 
+ 	lock_sock(&rx->sk);
+ 
+@@ -319,12 +364,13 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 	cp.local		= rx->local;
++	cp.peer			= peer;
+ 	cp.key			= key;
+ 	cp.security_level	= rx->min_sec_level;
+ 	cp.exclusive		= false;
+ 	cp.upgrade		= upgrade;
+-	cp.service_id		= srx->srx_service;
+-	call = rxrpc_new_client_call(rx, &cp, srx, &p, gfp, debug_id);
++	cp.service_id		= service_id;
++	call = rxrpc_new_client_call(rx, &cp, &p, gfp, debug_id);
+ 	/* The socket has been unlocked. */
+ 	if (!IS_ERR(call)) {
+ 		call->notify_rx = notify_rx;
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index e8b43408136ab..5d5b19f20d1eb 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -364,6 +364,7 @@ struct rxrpc_conn_proto {
+ 
+ struct rxrpc_conn_parameters {
+ 	struct rxrpc_local	*local;		/* Representation of local endpoint */
++	struct rxrpc_peer	*peer;		/* Representation of remote endpoint */
+ 	struct key		*key;		/* Security details */
+ 	bool			exclusive;	/* T if conn is exclusive */
+ 	bool			upgrade;	/* T if service ID can be upgraded */
+@@ -867,7 +868,6 @@ struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *, unsigned long
+ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *, gfp_t, unsigned int);
+ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *,
+ 					 struct rxrpc_conn_parameters *,
+-					 struct sockaddr_rxrpc *,
+ 					 struct rxrpc_call_params *, gfp_t,
+ 					 unsigned int);
+ void rxrpc_start_call_timer(struct rxrpc_call *call);
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f10b37c147721..0943e54370ba0 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -193,7 +193,6 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
+  * Allocate a new client call.
+  */
+ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
+-						  struct sockaddr_rxrpc *srx,
+ 						  struct rxrpc_conn_parameters *cp,
+ 						  struct rxrpc_call_params *p,
+ 						  gfp_t gfp,
+@@ -211,10 +210,12 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
+ 	now = ktime_get_real();
+ 	call->acks_latest_ts	= now;
+ 	call->cong_tstamp	= now;
+-	call->dest_srx		= *srx;
++	call->dest_srx		= cp->peer->srx;
++	call->dest_srx.srx_service = cp->service_id;
+ 	call->interruptibility	= p->interruptibility;
+ 	call->tx_total_len	= p->tx_total_len;
+ 	call->key		= key_get(cp->key);
++	call->peer		= rxrpc_get_peer(cp->peer, rxrpc_peer_get_call);
+ 	call->local		= rxrpc_get_local(cp->local, rxrpc_local_get_call);
+ 	call->security_level	= cp->security_level;
+ 	if (p->kernel)
+@@ -306,10 +307,6 @@ static int rxrpc_connect_call(struct rxrpc_call *call, gfp_t gfp)
+ 
+ 	_enter("{%d,%lx},", call->debug_id, call->user_call_ID);
+ 
+-	call->peer = rxrpc_lookup_peer(local, &call->dest_srx, gfp);
+-	if (!call->peer)
+-		goto error;
+-
+ 	ret = rxrpc_look_up_bundle(call, gfp);
+ 	if (ret < 0)
+ 		goto error;
+@@ -334,7 +331,6 @@ error:
+  */
+ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ 					 struct rxrpc_conn_parameters *cp,
+-					 struct sockaddr_rxrpc *srx,
+ 					 struct rxrpc_call_params *p,
+ 					 gfp_t gfp,
+ 					 unsigned int debug_id)
+@@ -349,13 +345,18 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ 
+ 	_enter("%p,%lx", rx, p->user_call_ID);
+ 
++	if (WARN_ON_ONCE(!cp->peer)) {
++		release_sock(&rx->sk);
++		return ERR_PTR(-EIO);
++	}
++
+ 	limiter = rxrpc_get_call_slot(p, gfp);
+ 	if (!limiter) {
+ 		release_sock(&rx->sk);
+ 		return ERR_PTR(-ERESTARTSYS);
+ 	}
+ 
+-	call = rxrpc_alloc_client_call(rx, srx, cp, p, gfp, debug_id);
++	call = rxrpc_alloc_client_call(rx, cp, p, gfp, debug_id);
+ 	if (IS_ERR(call)) {
+ 		release_sock(&rx->sk);
+ 		up(limiter);
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 8d7a715a0bb1c..49dcda67a0d59 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -22,6 +22,8 @@
+ #include <net/ip6_route.h>
+ #include "ar-internal.h"
+ 
++static const struct sockaddr_rxrpc rxrpc_null_addr;
++
+ /*
+  * Hash a peer key.
+  */
+@@ -457,39 +459,53 @@ void rxrpc_destroy_all_peers(struct rxrpc_net *rxnet)
+ }
+ 
+ /**
+- * rxrpc_kernel_get_peer - Get the peer address of a call
++ * rxrpc_kernel_get_call_peer - Get the peer address of a call
+  * @sock: The socket on which the call is in progress.
+  * @call: The call to query
+- * @_srx: Where to place the result
+  *
+- * Get the address of the remote peer in a call.
++ * Get a record for the remote peer in a call.
+  */
+-void rxrpc_kernel_get_peer(struct socket *sock, struct rxrpc_call *call,
+-			   struct sockaddr_rxrpc *_srx)
++struct rxrpc_peer *rxrpc_kernel_get_call_peer(struct socket *sock, struct rxrpc_call *call)
+ {
+-	*_srx = call->peer->srx;
++	return call->peer;
+ }
+-EXPORT_SYMBOL(rxrpc_kernel_get_peer);
++EXPORT_SYMBOL(rxrpc_kernel_get_call_peer);
+ 
+ /**
+  * rxrpc_kernel_get_srtt - Get a call's peer smoothed RTT
+- * @sock: The socket on which the call is in progress.
+- * @call: The call to query
+- * @_srtt: Where to store the SRTT value.
++ * @peer: The peer to query
+  *
+- * Get the call's peer smoothed RTT in uS.
++ * Get the peer's smoothed RTT in uS, or UINT_MAX if there are no samples.
+  */
+-bool rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call,
+-			   u32 *_srtt)
++unsigned int rxrpc_kernel_get_srtt(const struct rxrpc_peer *peer)
+ {
+-	struct rxrpc_peer *peer = call->peer;
++	return peer->rtt_count > 0 ? peer->srtt_us >> 3 : UINT_MAX;
++}
++EXPORT_SYMBOL(rxrpc_kernel_get_srtt);
+ 
+-	if (peer->rtt_count == 0) {
+-		*_srtt = 1000000; /* 1S */
+-		return false;
+-	}
++/**
++ * rxrpc_kernel_remote_srx - Get the address of a peer
++ * @peer: The peer to query
++ *
++ * Get a pointer to the address from a peer record.  The caller is responsible
++ * for making sure that the address is not deallocated.
++ */
++const struct sockaddr_rxrpc *rxrpc_kernel_remote_srx(const struct rxrpc_peer *peer)
++{
++	return peer ? &peer->srx : &rxrpc_null_addr;
++}
++EXPORT_SYMBOL(rxrpc_kernel_remote_srx);
+ 
+-	*_srtt = call->peer->srtt_us >> 3;
+-	return true;
++/**
++ * rxrpc_kernel_remote_addr - Get the peer transport address of a call
++ * @peer: The peer to query
++ *
++ * Get a pointer to the transport address from a peer record.  The caller is
++ * responsible for making sure that the address is not deallocated.
++ */
++const struct sockaddr *rxrpc_kernel_remote_addr(const struct rxrpc_peer *peer)
++{
++	return (const struct sockaddr *)
++		(peer ? &peer->srx.transport : &rxrpc_null_addr.transport);
+ }
+-EXPORT_SYMBOL(rxrpc_kernel_get_srtt);
++EXPORT_SYMBOL(rxrpc_kernel_remote_addr);
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 8e0b94714e849..5677d5690a022 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -572,6 +572,7 @@ rxrpc_new_client_call_for_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg,
+ 	__acquires(&call->user_mutex)
+ {
+ 	struct rxrpc_conn_parameters cp;
++	struct rxrpc_peer *peer;
+ 	struct rxrpc_call *call;
+ 	struct key *key;
+ 
+@@ -584,21 +585,29 @@ rxrpc_new_client_call_for_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg,
+ 		return ERR_PTR(-EDESTADDRREQ);
+ 	}
+ 
++	peer = rxrpc_lookup_peer(rx->local, srx, GFP_KERNEL);
++	if (!peer) {
++		release_sock(&rx->sk);
++		return ERR_PTR(-ENOMEM);
++	}
++
+ 	key = rx->key;
+ 	if (key && !rx->key->payload.data[0])
+ 		key = NULL;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 	cp.local		= rx->local;
++	cp.peer			= peer;
+ 	cp.key			= rx->key;
+ 	cp.security_level	= rx->min_sec_level;
+ 	cp.exclusive		= rx->exclusive | p->exclusive;
+ 	cp.upgrade		= p->upgrade;
+ 	cp.service_id		= srx->srx_service;
+-	call = rxrpc_new_client_call(rx, &cp, srx, &p->call, GFP_KERNEL,
++	call = rxrpc_new_client_call(rx, &cp, &p->call, GFP_KERNEL,
+ 				     atomic_inc_return(&rxrpc_debug_id));
+ 	/* The socket is now unlocked */
+ 
++	rxrpc_put_peer(peer, rxrpc_peer_put_application);
+ 	_leave(" = %p\n", call);
+ 	return call;
+ }
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 1976bd1639863..02c594baa1d93 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1536,6 +1536,9 @@ tcf_block_playback_offloads(struct tcf_block *block, flow_setup_cb_t *cb,
+ 	     chain_prev = chain,
+ 		     chain = __tcf_get_next_chain(block, chain),
+ 		     tcf_chain_put(chain_prev)) {
++		if (chain->tmplt_ops && add)
++			chain->tmplt_ops->tmplt_reoffload(chain, true, cb,
++							  cb_priv);
+ 		for (tp = __tcf_get_next_proto(chain, NULL); tp;
+ 		     tp_prev = tp,
+ 			     tp = __tcf_get_next_proto(chain, tp),
+@@ -1551,6 +1554,9 @@ tcf_block_playback_offloads(struct tcf_block *block, flow_setup_cb_t *cb,
+ 				goto err_playback_remove;
+ 			}
+ 		}
++		if (chain->tmplt_ops && !add)
++			chain->tmplt_ops->tmplt_reoffload(chain, false, cb,
++							  cb_priv);
+ 	}
+ 
+ 	return 0;
+@@ -2971,7 +2977,8 @@ static int tc_chain_tmplt_add(struct tcf_chain *chain, struct net *net,
+ 	ops = tcf_proto_lookup_ops(name, true, extack);
+ 	if (IS_ERR(ops))
+ 		return PTR_ERR(ops);
+-	if (!ops->tmplt_create || !ops->tmplt_destroy || !ops->tmplt_dump) {
++	if (!ops->tmplt_create || !ops->tmplt_destroy || !ops->tmplt_dump ||
++	    !ops->tmplt_reoffload) {
+ 		NL_SET_ERR_MSG(extack, "Chain templates are not supported with specified classifier");
+ 		module_put(ops->owner);
+ 		return -EOPNOTSUPP;
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index e5314a31f75ae..efb9d2811b73d 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -2721,6 +2721,28 @@ static void fl_tmplt_destroy(void *tmplt_priv)
+ 	kfree(tmplt);
+ }
+ 
++static void fl_tmplt_reoffload(struct tcf_chain *chain, bool add,
++			       flow_setup_cb_t *cb, void *cb_priv)
++{
++	struct fl_flow_tmplt *tmplt = chain->tmplt_priv;
++	struct flow_cls_offload cls_flower = {};
++
++	cls_flower.rule = flow_rule_alloc(0);
++	if (!cls_flower.rule)
++		return;
++
++	cls_flower.common.chain_index = chain->index;
++	cls_flower.command = add ? FLOW_CLS_TMPLT_CREATE :
++				   FLOW_CLS_TMPLT_DESTROY;
++	cls_flower.cookie = (unsigned long) tmplt;
++	cls_flower.rule->match.dissector = &tmplt->dissector;
++	cls_flower.rule->match.mask = &tmplt->mask;
++	cls_flower.rule->match.key = &tmplt->dummy_key;
++
++	cb(TC_SETUP_CLSFLOWER, &cls_flower, cb_priv);
++	kfree(cls_flower.rule);
++}
++
+ static int fl_dump_key_val(struct sk_buff *skb,
+ 			   void *val, int val_type,
+ 			   void *mask, int mask_type, int len)
+@@ -3628,6 +3650,7 @@ static struct tcf_proto_ops cls_fl_ops __read_mostly = {
+ 	.bind_class	= fl_bind_class,
+ 	.tmplt_create	= fl_tmplt_create,
+ 	.tmplt_destroy	= fl_tmplt_destroy,
++	.tmplt_reoffload = fl_tmplt_reoffload,
+ 	.tmplt_dump	= fl_tmplt_dump,
+ 	.get_exts	= fl_get_exts,
+ 	.owner		= THIS_MODULE,
+diff --git a/net/smc/smc_diag.c b/net/smc/smc_diag.c
+index 5cc376834c575..fb9e5cc1285ec 100644
+--- a/net/smc/smc_diag.c
++++ b/net/smc/smc_diag.c
+@@ -163,7 +163,7 @@ static int __smc_diag_dump(struct sock *sk, struct sk_buff *skb,
+ 	}
+ 	if (smc_conn_lgr_valid(&smc->conn) && smc->conn.lgr->is_smcd &&
+ 	    (req->diag_ext & (1 << (SMC_DIAG_DMBINFO - 1))) &&
+-	    !list_empty(&smc->conn.lgr->list)) {
++	    !list_empty(&smc->conn.lgr->list) && smc->conn.rmb_desc) {
+ 		struct smc_connection *conn = &smc->conn;
+ 		struct smcd_diag_dmbinfo dinfo;
+ 		struct smcd_dev *smcd = conn->lgr->smcd;
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 998687421fa6a..e0ce4276274be 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -717,12 +717,12 @@ static int svc_udp_sendto(struct svc_rqst *rqstp)
+ 				ARRAY_SIZE(rqstp->rq_bvec), xdr);
+ 
+ 	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec,
+-		      count, 0);
++		      count, rqstp->rq_res.len);
+ 	err = sock_sendmsg(svsk->sk_sock, &msg);
+ 	if (err == -ECONNREFUSED) {
+ 		/* ICMP error on earlier request. */
+ 		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec,
+-			      count, 0);
++			      count, rqstp->rq_res.len);
+ 		err = sock_sendmsg(svsk->sk_sock, &msg);
+ 	}
+ 
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 3da0b52f308d4..688e641cd2784 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -167,8 +167,10 @@ static int xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
+ 		contd = XDP_PKT_CONTD;
+ 
+ 	err = __xsk_rcv_zc(xs, xskb, len, contd);
+-	if (err || likely(!frags))
+-		goto out;
++	if (err)
++		goto err;
++	if (likely(!frags))
++		return 0;
+ 
+ 	xskb_list = &xskb->pool->xskb_list;
+ 	list_for_each_entry_safe(pos, tmp, xskb_list, xskb_list_node) {
+@@ -177,11 +179,13 @@ static int xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
+ 		len = pos->xdp.data_end - pos->xdp.data;
+ 		err = __xsk_rcv_zc(xs, pos, len, contd);
+ 		if (err)
+-			return err;
++			goto err;
+ 		list_del(&pos->xskb_list_node);
+ 	}
+ 
+-out:
++	return 0;
++err:
++	xsk_buff_free(xdp);
+ 	return err;
+ }
+ 
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index 49cb9f9a09bee..b0a611677865d 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -541,6 +541,7 @@ struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
+ 
+ 	xskb->xdp.data = xskb->xdp.data_hard_start + XDP_PACKET_HEADROOM;
+ 	xskb->xdp.data_meta = xskb->xdp.data;
++	xskb->xdp.flags = 0;
+ 
+ 	if (pool->dma_need_sync) {
+ 		dma_sync_single_range_for_device(pool->dev, xskb->dma, 0,
+diff --git a/scripts/get_abi.pl b/scripts/get_abi.pl
+index 0ffd5531242aa..408bfd0216da0 100755
+--- a/scripts/get_abi.pl
++++ b/scripts/get_abi.pl
+@@ -98,7 +98,7 @@ sub parse_abi {
+ 	$name =~ s,.*/,,;
+ 
+ 	my $fn = $file;
+-	$fn =~ s,Documentation/ABI/,,;
++	$fn =~ s,.*Documentation/ABI/,,;
+ 
+ 	my $nametag = "File $fn";
+ 	$data{$nametag}->{what} = "File $name";
+diff --git a/security/security.c b/security/security.c
+index dcb3e7014f9bd..266cec94369b2 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -2648,6 +2648,24 @@ int security_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ }
+ EXPORT_SYMBOL_GPL(security_file_ioctl);
+ 
++/**
++ * security_file_ioctl_compat() - Check if an ioctl is allowed in compat mode
++ * @file: associated file
++ * @cmd: ioctl cmd
++ * @arg: ioctl arguments
++ *
++ * Compat version of security_file_ioctl() that correctly handles 32-bit
++ * processes running on 64-bit kernels.
++ *
++ * Return: Returns 0 if permission is granted.
++ */
++int security_file_ioctl_compat(struct file *file, unsigned int cmd,
++			       unsigned long arg)
++{
++	return call_int_hook(file_ioctl_compat, 0, file, cmd, arg);
++}
++EXPORT_SYMBOL_GPL(security_file_ioctl_compat);
++
+ static inline unsigned long mmap_prot(struct file *file, unsigned long prot)
+ {
+ 	/*
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 0fb890bd72cb6..102a1f0c09e0c 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3725,6 +3725,33 @@ static int selinux_file_ioctl(struct file *file, unsigned int cmd,
+ 	return error;
+ }
+ 
++static int selinux_file_ioctl_compat(struct file *file, unsigned int cmd,
++			      unsigned long arg)
++{
++	/*
++	 * If we are in a 64-bit kernel running 32-bit userspace, we need to
++	 * make sure we don't compare 32-bit flags to 64-bit flags.
++	 */
++	switch (cmd) {
++	case FS_IOC32_GETFLAGS:
++		cmd = FS_IOC_GETFLAGS;
++		break;
++	case FS_IOC32_SETFLAGS:
++		cmd = FS_IOC_SETFLAGS;
++		break;
++	case FS_IOC32_GETVERSION:
++		cmd = FS_IOC_GETVERSION;
++		break;
++	case FS_IOC32_SETVERSION:
++		cmd = FS_IOC_SETVERSION;
++		break;
++	default:
++		break;
++	}
++
++	return selinux_file_ioctl(file, cmd, arg);
++}
++
+ static int default_noexec __ro_after_init;
+ 
+ static int file_map_prot_check(struct file *file, unsigned long prot, int shared)
+@@ -7037,6 +7064,7 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(file_permission, selinux_file_permission),
+ 	LSM_HOOK_INIT(file_alloc_security, selinux_file_alloc_security),
+ 	LSM_HOOK_INIT(file_ioctl, selinux_file_ioctl),
++	LSM_HOOK_INIT(file_ioctl_compat, selinux_file_ioctl_compat),
+ 	LSM_HOOK_INIT(mmap_file, selinux_mmap_file),
+ 	LSM_HOOK_INIT(mmap_addr, selinux_mmap_addr),
+ 	LSM_HOOK_INIT(file_mprotect, selinux_file_mprotect),
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 65130a791f573..1f1ea8529421f 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -4973,6 +4973,7 @@ static struct security_hook_list smack_hooks[] __ro_after_init = {
+ 
+ 	LSM_HOOK_INIT(file_alloc_security, smack_file_alloc_security),
+ 	LSM_HOOK_INIT(file_ioctl, smack_file_ioctl),
++	LSM_HOOK_INIT(file_ioctl_compat, smack_file_ioctl),
+ 	LSM_HOOK_INIT(file_lock, smack_file_lock),
+ 	LSM_HOOK_INIT(file_fcntl, smack_file_fcntl),
+ 	LSM_HOOK_INIT(mmap_file, smack_mmap_file),
+diff --git a/security/tomoyo/tomoyo.c b/security/tomoyo/tomoyo.c
+index 255f1b4702955..1bf4b3da9cbc5 100644
+--- a/security/tomoyo/tomoyo.c
++++ b/security/tomoyo/tomoyo.c
+@@ -568,6 +568,7 @@ static struct security_hook_list tomoyo_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(path_rename, tomoyo_path_rename),
+ 	LSM_HOOK_INIT(inode_getattr, tomoyo_inode_getattr),
+ 	LSM_HOOK_INIT(file_ioctl, tomoyo_file_ioctl),
++	LSM_HOOK_INIT(file_ioctl_compat, tomoyo_file_ioctl),
+ 	LSM_HOOK_INIT(path_chmod, tomoyo_path_chmod),
+ 	LSM_HOOK_INIT(path_chown, tomoyo_path_chown),
+ 	LSM_HOOK_INIT(path_chroot, tomoyo_path_chroot),
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 4e42847297732..1e788859c863b 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -1232,11 +1232,11 @@ static int fill_sdw_codec_dlc(struct device *dev,
+ 	else if (is_unique_device(adr_link, sdw_version, mfg_id, part_id,
+ 				  class_id, adr_index))
+ 		codec->name = devm_kasprintf(dev, GFP_KERNEL,
+-					     "sdw:%01x:%04x:%04x:%02x", link_id,
++					     "sdw:0:%01x:%04x:%04x:%02x", link_id,
+ 					     mfg_id, part_id, class_id);
+ 	else
+ 		codec->name = devm_kasprintf(dev, GFP_KERNEL,
+-					     "sdw:%01x:%04x:%04x:%02x:%01x", link_id,
++					     "sdw:0:%01x:%04x:%04x:%02x:%01x", link_id,
+ 					     mfg_id, part_id, class_id, unique_id);
+ 
+ 	if (!codec->name)
+diff --git a/tools/testing/selftests/drivers/net/bonding/bond_options.sh b/tools/testing/selftests/drivers/net/bonding/bond_options.sh
+index c54d1697f439a..d508486cc0bdc 100755
+--- a/tools/testing/selftests/drivers/net/bonding/bond_options.sh
++++ b/tools/testing/selftests/drivers/net/bonding/bond_options.sh
+@@ -162,7 +162,7 @@ prio_arp()
+ 	local mode=$1
+ 
+ 	for primary_reselect in 0 1 2; do
+-		prio_test "mode active-backup arp_interval 100 arp_ip_target ${g_ip4} primary eth1 primary_reselect $primary_reselect"
++		prio_test "mode $mode arp_interval 100 arp_ip_target ${g_ip4} primary eth1 primary_reselect $primary_reselect"
+ 		log_test "prio" "$mode arp_ip_target primary_reselect $primary_reselect"
+ 	done
+ }
+@@ -178,7 +178,7 @@ prio_ns()
+ 	fi
+ 
+ 	for primary_reselect in 0 1 2; do
+-		prio_test "mode active-backup arp_interval 100 ns_ip6_target ${g_ip6} primary eth1 primary_reselect $primary_reselect"
++		prio_test "mode $mode arp_interval 100 ns_ip6_target ${g_ip6} primary eth1 primary_reselect $primary_reselect"
+ 		log_test "prio" "$mode ns_ip6_target primary_reselect $primary_reselect"
+ 	done
+ }
+@@ -194,9 +194,9 @@ prio()
+ 
+ 	for mode in $modes; do
+ 		prio_miimon $mode
+-		prio_arp $mode
+-		prio_ns $mode
+ 	done
++	prio_arp "active-backup"
++	prio_ns "active-backup"
+ }
+ 
+ arp_validate_test()
+diff --git a/tools/testing/selftests/drivers/net/bonding/settings b/tools/testing/selftests/drivers/net/bonding/settings
+index 6091b45d226ba..79b65bdf05db6 100644
+--- a/tools/testing/selftests/drivers/net/bonding/settings
++++ b/tools/testing/selftests/drivers/net/bonding/settings
+@@ -1 +1 @@
+-timeout=120
++timeout=1200
+diff --git a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+index 1b08e042cf942..185b02d2d4cd1 100755
+--- a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
++++ b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+@@ -269,6 +269,7 @@ for port in 0 1; do
+ 	echo 1 > $NSIM_DEV_SYS/new_port
+     fi
+     NSIM_NETDEV=`get_netdev_name old_netdevs`
++    ifconfig $NSIM_NETDEV up
+ 
+     msg="new NIC device created"
+     exp0=( 0 0 0 0 )
+@@ -430,6 +431,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     overflow_table0 "overflow NIC table"
+@@ -487,6 +489,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     overflow_table0 "overflow NIC table"
+@@ -543,6 +546,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     overflow_table0 "destroy NIC"
+@@ -572,6 +576,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     msg="create VxLANs v6"
+@@ -632,6 +637,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
+@@ -687,6 +693,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     msg="create VxLANs v6"
+@@ -746,6 +753,7 @@ for port in 0 1; do
+     fi
+ 
+     echo $port > $NSIM_DEV_SYS/new_port
++    NSIM_NETDEV=`get_netdev_name old_netdevs`
+     ifconfig $NSIM_NETDEV up
+ 
+     msg="create VxLANs v6"
+@@ -876,6 +884,7 @@ msg="re-add a port"
+ 
+ echo 2 > $NSIM_DEV_SYS/del_port
+ echo 2 > $NSIM_DEV_SYS/new_port
++NSIM_NETDEV=`get_netdev_name old_netdevs`
+ check_tables
+ 
+ msg="replace VxLAN in overflow table"
+diff --git a/tools/testing/selftests/mm/hugepage-vmemmap.c b/tools/testing/selftests/mm/hugepage-vmemmap.c
+index 5b354c209e936..894d28c3dd478 100644
+--- a/tools/testing/selftests/mm/hugepage-vmemmap.c
++++ b/tools/testing/selftests/mm/hugepage-vmemmap.c
+@@ -10,10 +10,7 @@
+ #include <unistd.h>
+ #include <sys/mman.h>
+ #include <fcntl.h>
+-
+-#define MAP_LENGTH		(2UL * 1024 * 1024)
+-
+-#define PAGE_SIZE		4096
++#include "vm_util.h"
+ 
+ #define PAGE_COMPOUND_HEAD	(1UL << 15)
+ #define PAGE_COMPOUND_TAIL	(1UL << 16)
+@@ -39,6 +36,9 @@
+ #define MAP_FLAGS		(MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)
+ #endif
+ 
++static size_t pagesize;
++static size_t maplength;
++
+ static void write_bytes(char *addr, size_t length)
+ {
+ 	unsigned long i;
+@@ -56,7 +56,7 @@ static unsigned long virt_to_pfn(void *addr)
+ 	if (fd < 0)
+ 		return -1UL;
+ 
+-	lseek(fd, (unsigned long)addr / PAGE_SIZE * sizeof(pagemap), SEEK_SET);
++	lseek(fd, (unsigned long)addr / pagesize * sizeof(pagemap), SEEK_SET);
+ 	read(fd, &pagemap, sizeof(pagemap));
+ 	close(fd);
+ 
+@@ -86,7 +86,7 @@ static int check_page_flags(unsigned long pfn)
+ 	 * this also verifies kernel has correctly set the fake page_head to tail
+ 	 * while hugetlb_free_vmemmap is enabled.
+ 	 */
+-	for (i = 1; i < MAP_LENGTH / PAGE_SIZE; i++) {
++	for (i = 1; i < maplength / pagesize; i++) {
+ 		read(fd, &pageflags, sizeof(pageflags));
+ 		if ((pageflags & TAIL_PAGE_FLAGS) != TAIL_PAGE_FLAGS ||
+ 		    (pageflags & HEAD_PAGE_FLAGS) == HEAD_PAGE_FLAGS) {
+@@ -106,18 +106,25 @@ int main(int argc, char **argv)
+ 	void *addr;
+ 	unsigned long pfn;
+ 
+-	addr = mmap(MAP_ADDR, MAP_LENGTH, PROT_READ | PROT_WRITE, MAP_FLAGS, -1, 0);
++	pagesize  = psize();
++	maplength = default_huge_page_size();
++	if (!maplength) {
++		printf("Unable to determine huge page size\n");
++		exit(1);
++	}
++
++	addr = mmap(MAP_ADDR, maplength, PROT_READ | PROT_WRITE, MAP_FLAGS, -1, 0);
+ 	if (addr == MAP_FAILED) {
+ 		perror("mmap");
+ 		exit(1);
+ 	}
+ 
+ 	/* Trigger allocation of HugeTLB page. */
+-	write_bytes(addr, MAP_LENGTH);
++	write_bytes(addr, maplength);
+ 
+ 	pfn = virt_to_pfn(addr);
+ 	if (pfn == -1UL) {
+-		munmap(addr, MAP_LENGTH);
++		munmap(addr, maplength);
+ 		perror("virt_to_pfn");
+ 		exit(1);
+ 	}
+@@ -125,13 +132,13 @@ int main(int argc, char **argv)
+ 	printf("Returned address is %p whose pfn is %lx\n", addr, pfn);
+ 
+ 	if (check_page_flags(pfn) < 0) {
+-		munmap(addr, MAP_LENGTH);
++		munmap(addr, maplength);
+ 		perror("check_page_flags");
+ 		exit(1);
+ 	}
+ 
+ 	/* munmap() length of MAP_HUGETLB memory must be hugepage aligned */
+-	if (munmap(addr, MAP_LENGTH)) {
++	if (munmap(addr, maplength)) {
+ 		perror("munmap");
+ 		exit(1);
+ 	}
+diff --git a/tools/testing/selftests/net/config b/tools/testing/selftests/net/config
+index 8da562a9ae87e..19ff750516609 100644
+--- a/tools/testing/selftests/net/config
++++ b/tools/testing/selftests/net/config
+@@ -1,5 +1,6 @@
+ CONFIG_USER_NS=y
+ CONFIG_NET_NS=y
++CONFIG_BONDING=m
+ CONFIG_BPF_SYSCALL=y
+ CONFIG_TEST_BPF=m
+ CONFIG_NUMA=y
+@@ -14,9 +15,13 @@ CONFIG_VETH=y
+ CONFIG_NET_IPVTI=y
+ CONFIG_IPV6_VTI=y
+ CONFIG_DUMMY=y
++CONFIG_BRIDGE_VLAN_FILTERING=y
+ CONFIG_BRIDGE=y
++CONFIG_CRYPTO_CHACHA20POLY1305=m
+ CONFIG_VLAN_8021Q=y
+ CONFIG_IFB=y
++CONFIG_INET_DIAG=y
++CONFIG_IP_GRE=m
+ CONFIG_NETFILTER=y
+ CONFIG_NETFILTER_ADVANCED=y
+ CONFIG_NF_CONNTRACK=m
+@@ -25,15 +30,36 @@ CONFIG_IP6_NF_IPTABLES=m
+ CONFIG_IP_NF_IPTABLES=m
+ CONFIG_IP6_NF_NAT=m
+ CONFIG_IP_NF_NAT=m
++CONFIG_IPV6_GRE=m
++CONFIG_IPV6_SEG6_LWTUNNEL=y
++CONFIG_L2TP_ETH=m
++CONFIG_L2TP_IP=m
++CONFIG_L2TP=m
++CONFIG_L2TP_V3=y
++CONFIG_MACSEC=m
++CONFIG_MACVLAN=y
++CONFIG_MACVTAP=y
++CONFIG_MPLS=y
++CONFIG_MPTCP=y
+ CONFIG_NF_TABLES=m
+ CONFIG_NF_TABLES_IPV6=y
+ CONFIG_NF_TABLES_IPV4=y
+ CONFIG_NFT_NAT=m
++CONFIG_NET_ACT_GACT=m
++CONFIG_NET_CLS_BASIC=m
++CONFIG_NET_CLS_U32=m
++CONFIG_NET_IPGRE_DEMUX=m
++CONFIG_NET_IPGRE=m
++CONFIG_NET_SCH_FQ_CODEL=m
++CONFIG_NET_SCH_HTB=m
+ CONFIG_NET_SCH_FQ=m
+ CONFIG_NET_SCH_ETF=m
+ CONFIG_NET_SCH_NETEM=y
++CONFIG_PSAMPLE=m
++CONFIG_TCP_MD5SIG=y
+ CONFIG_TEST_BLACKHOLE_DEV=m
+ CONFIG_KALLSYMS=y
++CONFIG_TLS=m
+ CONFIG_TRACEPOINTS=y
+ CONFIG_NET_DROP_MONITOR=m
+ CONFIG_NETDEVSIM=m
+@@ -48,7 +74,9 @@ CONFIG_BAREUDP=m
+ CONFIG_IPV6_IOAM6_LWTUNNEL=y
+ CONFIG_CRYPTO_SM4_GENERIC=y
+ CONFIG_AMT=m
++CONFIG_TUN=y
+ CONFIG_VXLAN=m
+ CONFIG_IP_SCTP=m
+ CONFIG_NETFILTER_XT_MATCH_POLICY=m
+ CONFIG_CRYPTO_ARIA=y
++CONFIG_XFRM_INTERFACE=m
+diff --git a/tools/testing/selftests/net/rps_default_mask.sh b/tools/testing/selftests/net/rps_default_mask.sh
+index a26c5624429fb..4287a85298907 100755
+--- a/tools/testing/selftests/net/rps_default_mask.sh
++++ b/tools/testing/selftests/net/rps_default_mask.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ readonly ksft_skip=4
+@@ -33,6 +33,10 @@ chk_rps() {
+ 
+ 	rps_mask=$($cmd /sys/class/net/$dev_name/queues/rx-0/rps_cpus)
+ 	printf "%-60s" "$msg"
++
++	# In case there is more than 32 CPUs we need to remove commas from masks
++	rps_mask=${rps_mask//,}
++	expected_rps_mask=${expected_rps_mask//,}
+ 	if [ $rps_mask -eq $expected_rps_mask ]; then
+ 		echo "[ ok ]"
+ 	else
+diff --git a/tools/testing/selftests/net/so_incoming_cpu.c b/tools/testing/selftests/net/so_incoming_cpu.c
+index a148181641026..e9fa14e107322 100644
+--- a/tools/testing/selftests/net/so_incoming_cpu.c
++++ b/tools/testing/selftests/net/so_incoming_cpu.c
+@@ -3,19 +3,16 @@
+ #define _GNU_SOURCE
+ #include <sched.h>
+ 
++#include <fcntl.h>
++
+ #include <netinet/in.h>
+ #include <sys/socket.h>
+ #include <sys/sysinfo.h>
+ 
+ #include "../kselftest_harness.h"
+ 
+-#define CLIENT_PER_SERVER	32 /* More sockets, more reliable */
+-#define NR_SERVER		self->nproc
+-#define NR_CLIENT		(CLIENT_PER_SERVER * NR_SERVER)
+-
+ FIXTURE(so_incoming_cpu)
+ {
+-	int nproc;
+ 	int *servers;
+ 	union {
+ 		struct sockaddr addr;
+@@ -56,12 +53,47 @@ FIXTURE_VARIANT_ADD(so_incoming_cpu, after_all_listen)
+ 	.when_to_set = AFTER_ALL_LISTEN,
+ };
+ 
++static void write_sysctl(struct __test_metadata *_metadata,
++			 char *filename, char *string)
++{
++	int fd, len, ret;
++
++	fd = open(filename, O_WRONLY);
++	ASSERT_NE(fd, -1);
++
++	len = strlen(string);
++	ret = write(fd, string, len);
++	ASSERT_EQ(ret, len);
++}
++
++static void setup_netns(struct __test_metadata *_metadata)
++{
++	ASSERT_EQ(unshare(CLONE_NEWNET), 0);
++	ASSERT_EQ(system("ip link set lo up"), 0);
++
++	write_sysctl(_metadata, "/proc/sys/net/ipv4/ip_local_port_range", "10000 60001");
++	write_sysctl(_metadata, "/proc/sys/net/ipv4/tcp_tw_reuse", "0");
++}
++
++#define NR_PORT				(60001 - 10000 - 1)
++#define NR_CLIENT_PER_SERVER_DEFAULT	32
++static int nr_client_per_server, nr_server, nr_client;
++
+ FIXTURE_SETUP(so_incoming_cpu)
+ {
+-	self->nproc = get_nprocs();
+-	ASSERT_LE(2, self->nproc);
++	setup_netns(_metadata);
++
++	nr_server = get_nprocs();
++	ASSERT_LE(2, nr_server);
++
++	if (NR_CLIENT_PER_SERVER_DEFAULT * nr_server < NR_PORT)
++		nr_client_per_server = NR_CLIENT_PER_SERVER_DEFAULT;
++	else
++		nr_client_per_server = NR_PORT / nr_server;
++
++	nr_client = nr_client_per_server * nr_server;
+ 
+-	self->servers = malloc(sizeof(int) * NR_SERVER);
++	self->servers = malloc(sizeof(int) * nr_server);
+ 	ASSERT_NE(self->servers, NULL);
+ 
+ 	self->in_addr.sin_family = AF_INET;
+@@ -74,7 +106,7 @@ FIXTURE_TEARDOWN(so_incoming_cpu)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < NR_SERVER; i++)
++	for (i = 0; i < nr_server; i++)
+ 		close(self->servers[i]);
+ 
+ 	free(self->servers);
+@@ -110,10 +142,10 @@ int create_server(struct __test_metadata *_metadata,
+ 	if (variant->when_to_set == BEFORE_LISTEN)
+ 		set_so_incoming_cpu(_metadata, fd, cpu);
+ 
+-	/* We don't use CLIENT_PER_SERVER here not to block
++	/* We don't use nr_client_per_server here not to block
+ 	 * this test at connect() if SO_INCOMING_CPU is broken.
+ 	 */
+-	ret = listen(fd, NR_CLIENT);
++	ret = listen(fd, nr_client);
+ 	ASSERT_EQ(ret, 0);
+ 
+ 	if (variant->when_to_set == AFTER_LISTEN)
+@@ -128,7 +160,7 @@ void create_servers(struct __test_metadata *_metadata,
+ {
+ 	int i, ret;
+ 
+-	for (i = 0; i < NR_SERVER; i++) {
++	for (i = 0; i < nr_server; i++) {
+ 		self->servers[i] = create_server(_metadata, self, variant, i);
+ 
+ 		if (i == 0) {
+@@ -138,7 +170,7 @@ void create_servers(struct __test_metadata *_metadata,
+ 	}
+ 
+ 	if (variant->when_to_set == AFTER_ALL_LISTEN) {
+-		for (i = 0; i < NR_SERVER; i++)
++		for (i = 0; i < nr_server; i++)
+ 			set_so_incoming_cpu(_metadata, self->servers[i], i);
+ 	}
+ }
+@@ -149,7 +181,7 @@ void create_clients(struct __test_metadata *_metadata,
+ 	cpu_set_t cpu_set;
+ 	int i, j, fd, ret;
+ 
+-	for (i = 0; i < NR_SERVER; i++) {
++	for (i = 0; i < nr_server; i++) {
+ 		CPU_ZERO(&cpu_set);
+ 
+ 		CPU_SET(i, &cpu_set);
+@@ -162,7 +194,7 @@ void create_clients(struct __test_metadata *_metadata,
+ 		ret = sched_setaffinity(0, sizeof(cpu_set), &cpu_set);
+ 		ASSERT_EQ(ret, 0);
+ 
+-		for (j = 0; j < CLIENT_PER_SERVER; j++) {
++		for (j = 0; j < nr_client_per_server; j++) {
+ 			fd  = socket(AF_INET, SOCK_STREAM, 0);
+ 			ASSERT_NE(fd, -1);
+ 
+@@ -180,8 +212,8 @@ void verify_incoming_cpu(struct __test_metadata *_metadata,
+ 	int i, j, fd, cpu, ret, total = 0;
+ 	socklen_t len = sizeof(int);
+ 
+-	for (i = 0; i < NR_SERVER; i++) {
+-		for (j = 0; j < CLIENT_PER_SERVER; j++) {
++	for (i = 0; i < nr_server; i++) {
++		for (j = 0; j < nr_client_per_server; j++) {
+ 			/* If we see -EAGAIN here, SO_INCOMING_CPU is broken */
+ 			fd = accept(self->servers[i], &self->addr, &self->addrlen);
+ 			ASSERT_NE(fd, -1);
+@@ -195,7 +227,7 @@ void verify_incoming_cpu(struct __test_metadata *_metadata,
+ 		}
+ 	}
+ 
+-	ASSERT_EQ(total, NR_CLIENT);
++	ASSERT_EQ(total, nr_client);
+ 	TH_LOG("SO_INCOMING_CPU is very likely to be "
+ 	       "working correctly with %d sockets.", total);
+ }
+diff --git a/tools/testing/selftests/riscv/hwprobe/cbo.c b/tools/testing/selftests/riscv/hwprobe/cbo.c
+index 50a2cc8aef387..c000942c80483 100644
+--- a/tools/testing/selftests/riscv/hwprobe/cbo.c
++++ b/tools/testing/selftests/riscv/hwprobe/cbo.c
+@@ -36,16 +36,14 @@ static void sigill_handler(int sig, siginfo_t *info, void *context)
+ 	regs[0] += 4;
+ }
+ 
+-static void cbo_insn(char *base, int fn)
+-{
+-	uint32_t insn = MK_CBO(fn);
+-
+-	asm volatile(
+-	"mv	a0, %0\n"
+-	"li	a1, %1\n"
+-	".4byte	%2\n"
+-	: : "r" (base), "i" (fn), "i" (insn) : "a0", "a1", "memory");
+-}
++#define cbo_insn(base, fn)							\
++({										\
++	asm volatile(								\
++	"mv	a0, %0\n"							\
++	"li	a1, %1\n"							\
++	".4byte	%2\n"								\
++	: : "r" (base), "i" (fn), "i" (MK_CBO(fn)) : "a0", "a1", "memory");	\
++})
+ 
+ static void cbo_inval(char *base) { cbo_insn(base, 0); }
+ static void cbo_clean(char *base) { cbo_insn(base, 1); }
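
For context on the hugepage-vmemmap hunks above: the test drops the hardcoded 4 KiB PAGE_SIZE and 2 MiB MAP_LENGTH in favor of runtime lookups, so it also passes on kernels built with other base or huge page sizes (e.g. 64 KiB pages on arm64). The real helpers live in the selftests' vm_util.[ch]; the following is only a minimal sketch of equivalent logic, assuming nothing beyond sysconf(3) and /proc/meminfo, with hypothetical names (psize_sketch, default_huge_page_size_sketch):

#include <stdio.h>
#include <unistd.h>

static size_t psize_sketch(void)
{
	return (size_t)sysconf(_SC_PAGESIZE);	/* runtime base page size */
}

static size_t default_huge_page_size_sketch(void)
{
	unsigned long kb = 0;
	char line[128];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Hugepagesize: %lu kB", &kb) == 1)
			break;
	fclose(f);
	return (size_t)kb * 1024;	/* 0 when hugepages are unavailable */
}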



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-02 17:52 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-02 17:52 UTC (permalink / raw
  To: gentoo-commits

commit:     e1da7b68300d4d9a80469833fa0e47732bed5f5a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb  2 17:51:26 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb  2 17:51:26 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e1da7b68

Add BMQ fix: sched/topology: Introduce sched_numa_hop_mask

See: https://gitlab.com/alfredchen/linux-prjc/-/commit/4693ad051b34ea70e3d39d3a6cdd072fec9f4878.patch
Bug: https://bugs.gentoo.org/923454

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch b/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
index 977bdcd6..4e71c3ef 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
@@ -11450,3 +11450,22 @@ index 2989b57e154a..7313d9f5585f 100644
  	work_data = *work_data_bits(work);
  	worker->current_color = get_work_color(work_data);
  
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 120933a5b206..dc717683342e 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -2813,5 +2813,11 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
+ {
+ 	return cpumask_nth(cpu, cpus);
+ }
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++	return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
+ #endif /* CONFIG_NUMA */
+ #endif
+-- 
+GitLab
+
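
The stub above returns ERR_PTR(-EOPNOTSUPP) so in-tree callers of sched_numa_hop_mask() keep building and can fall back gracefully when the BMQ/PDS scheduler compiles out the NUMA-aware topology code. A hedged sketch of a typical caller-side fallback (pick_irq_mask is a hypothetical name; the IS_ERR() check and cpumask_of_node() fallback are the standard pattern):

#include <linux/err.h>
#include <linux/topology.h>

static const struct cpumask *pick_irq_mask(unsigned int node)
{
	const struct cpumask *mask = sched_numa_hop_mask(node, 0);

	if (IS_ERR(mask))			/* -EOPNOTSUPP from the stub */
		mask = cpumask_of_node(node);	/* plain per-node fallback */
	return mask;
}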



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-05 20:59 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-05 20:59 UTC (permalink / raw
  To: gentoo-commits

commit:     f6b094e72946ad3e6ee62d9980742069bcc28041
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb  5 20:59:47 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb  5 20:59:47 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f6b094e7

Linux patch 6.7.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1003_linux-6.7.4.patch | 21813 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 21817 insertions(+)

diff --git a/0000_README b/0000_README
index 8b508855..9c8bcf02 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-6.7.3.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.3
 
+Patch:  1003_linux-6.7.4.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.4
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1003_linux-6.7.4.patch b/1003_linux-6.7.4.patch
new file mode 100644
index 00000000..18a490a6
--- /dev/null
+++ b/1003_linux-6.7.4.patch
@@ -0,0 +1,21813 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-net-queues b/Documentation/ABI/testing/sysfs-class-net-queues
+index 906ff3ca928ac..5bff64d256c20 100644
+--- a/Documentation/ABI/testing/sysfs-class-net-queues
++++ b/Documentation/ABI/testing/sysfs-class-net-queues
+@@ -1,4 +1,4 @@
+-What:		/sys/class/<iface>/queues/rx-<queue>/rps_cpus
++What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_cpus
+ Date:		March 2010
+ KernelVersion:	2.6.35
+ Contact:	netdev@vger.kernel.org
+@@ -8,7 +8,7 @@ Description:
+ 		network device queue. Possible values depend on the number
+ 		of available CPU(s) in the system.
+ 
+-What:		/sys/class/<iface>/queues/rx-<queue>/rps_flow_cnt
++What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt
+ Date:		April 2010
+ KernelVersion:	2.6.35
+ Contact:	netdev@vger.kernel.org
+@@ -16,7 +16,7 @@ Description:
+ 		Number of Receive Packet Steering flows being currently
+ 		processed by this particular network device receive queue.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/tx_timeout
++What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_timeout
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -24,7 +24,7 @@ Description:
+ 		Indicates the number of transmit timeout events seen by this
+ 		network interface transmit queue.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/tx_maxrate
++What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate
+ Date:		March 2015
+ KernelVersion:	4.1
+ Contact:	netdev@vger.kernel.org
+@@ -32,7 +32,7 @@ Description:
+ 		A Mbps max-rate set for the queue, a value of zero means disabled,
+ 		default is disabled.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/xps_cpus
++What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_cpus
+ Date:		November 2010
+ KernelVersion:	2.6.38
+ Contact:	netdev@vger.kernel.org
+@@ -42,7 +42,7 @@ Description:
+ 		network device transmit queue. Possible values depend on the
+ 		number of available CPU(s) in the system.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/xps_rxqs
++What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_rxqs
+ Date:		June 2018
+ KernelVersion:	4.18.0
+ Contact:	netdev@vger.kernel.org
+@@ -53,7 +53,7 @@ Description:
+ 		number of available receive queue(s) in the network device.
+ 		Default is disabled.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -62,7 +62,7 @@ Description:
+ 		of this particular network device transmit queue.
+ 		Default value is 1000.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -70,7 +70,7 @@ Description:
+ 		Indicates the number of bytes (objects) in flight on this
+ 		network device transmit queue.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -79,7 +79,7 @@ Description:
+ 		on this network device transmit queue. This value is clamped
+ 		to be within the bounds defined by limit_max and limit_min.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+@@ -88,7 +88,7 @@ Description:
+ 		queued on this network device transmit queue. See
+ 		include/linux/dynamic_queue_limits.h for the default value.
+ 
+-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
++What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
+ Date:		November 2011
+ KernelVersion:	3.3
+ Contact:	netdev@vger.kernel.org
+diff --git a/Documentation/sound/soc/dapm.rst b/Documentation/sound/soc/dapm.rst
+index 8e44107933abf..c3154ce6e1b27 100644
+--- a/Documentation/sound/soc/dapm.rst
++++ b/Documentation/sound/soc/dapm.rst
+@@ -234,7 +234,7 @@ corresponding soft power control. In this case it is necessary to create
+ a virtual widget - a widget with no control bits e.g.
+ ::
+ 
+-  SND_SOC_DAPM_MIXER("AC97 Mixer", SND_SOC_DAPM_NOPM, 0, 0, NULL, 0),
++  SND_SOC_DAPM_MIXER("AC97 Mixer", SND_SOC_NOPM, 0, 0, NULL, 0),
+ 
+ This can be used to merge to signal paths together in software.
+ 
+diff --git a/Makefile b/Makefile
+index 96a08c9f0faa1..73a208d9d4c48 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/alpha/kernel/asm-offsets.c b/arch/alpha/kernel/asm-offsets.c
+index b121294bee266..bf1eedd27cf75 100644
+--- a/arch/alpha/kernel/asm-offsets.c
++++ b/arch/alpha/kernel/asm-offsets.c
+@@ -12,7 +12,7 @@
+ #include <linux/kbuild.h>
+ #include <asm/io.h>
+ 
+-void foo(void)
++static void __used foo(void)
+ {
+ 	DEFINE(TI_TASK, offsetof(struct thread_info, task));
+ 	DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
+diff --git a/arch/arm/boot/dts/intel/ixp/intel-ixp42x-usrobotics-usr8200.dts b/arch/arm/boot/dts/intel/ixp/intel-ixp42x-usrobotics-usr8200.dts
+index 90fd51b36e7da..2c89db34c8d88 100644
+--- a/arch/arm/boot/dts/intel/ixp/intel-ixp42x-usrobotics-usr8200.dts
++++ b/arch/arm/boot/dts/intel/ixp/intel-ixp42x-usrobotics-usr8200.dts
+@@ -165,6 +165,24 @@
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+ 
++				/*
++				 * PHY 0..4 are internal to the MV88E6060 switch but appear
++				 * as independent devices.
++				 */
++				phy0: ethernet-phy@0 {
++					reg = <0>;
++				};
++				phy1: ethernet-phy@1 {
++					reg = <1>;
++				};
++				phy2: ethernet-phy@2 {
++					reg = <2>;
++				};
++				phy3: ethernet-phy@3 {
++					reg = <3>;
++				};
++
++				/* Altima AMI101L used by the WAN port */
+ 				phy9: ethernet-phy@9 {
+ 					reg = <9>;
+ 				};
+@@ -181,21 +199,25 @@
+ 						port@0 {
+ 							reg = <0>;
+ 							label = "lan1";
++							phy-handle = <&phy0>;
+ 						};
+ 
+ 						port@1 {
+ 							reg = <1>;
+ 							label = "lan2";
++							phy-handle = <&phy1>;
+ 						};
+ 
+ 						port@2 {
+ 							reg = <2>;
+ 							label = "lan3";
++							phy-handle = <&phy2>;
+ 						};
+ 
+ 						port@3 {
+ 							reg = <3>;
+ 							label = "lan4";
++							phy-handle = <&phy3>;
+ 						};
+ 
+ 						port@5 {
+diff --git a/arch/arm/boot/dts/marvell/armada-370-rd.dts b/arch/arm/boot/dts/marvell/armada-370-rd.dts
+index b459a670f6158..1b241da11e942 100644
+--- a/arch/arm/boot/dts/marvell/armada-370-rd.dts
++++ b/arch/arm/boot/dts/marvell/armada-370-rd.dts
+@@ -149,39 +149,37 @@
+ 		};
+ 	};
+ 
+-	switch: switch@10 {
++	switch: ethernet-switch@10 {
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <0x10>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@0 {
++			ethernet-port@0 {
+ 				reg = <0>;
+ 				label = "lan0";
+ 			};
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan1";
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan2";
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan3";
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				ethernet = <&eth1>;
+ 				phy-mode = "rgmii-id";
+@@ -196,25 +194,25 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switchphy0: switchphy@0 {
++			switchphy0: ethernet-phy@0 {
+ 				reg = <0>;
+ 				interrupt-parent = <&switch>;
+ 				interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
+ 			};
+ 
+-			switchphy1: switchphy@1 {
++			switchphy1: ethernet-phy@1 {
+ 				reg = <1>;
+ 				interrupt-parent = <&switch>;
+ 				interrupts = <1 IRQ_TYPE_LEVEL_HIGH>;
+ 			};
+ 
+-			switchphy2: switchphy@2 {
++			switchphy2: ethernet-phy@2 {
+ 				reg = <2>;
+ 				interrupt-parent = <&switch>;
+ 				interrupts = <2 IRQ_TYPE_LEVEL_HIGH>;
+ 			};
+ 
+-			switchphy3: switchphy@3 {
++			switchphy3: ethernet-phy@3 {
+ 				reg = <3>;
+ 				interrupt-parent = <&switch>;
+ 				interrupts = <3 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/marvell/armada-381-netgear-gs110emx.dts b/arch/arm/boot/dts/marvell/armada-381-netgear-gs110emx.dts
+index f4c4b213ef4ed..5baf83e5253d8 100644
+--- a/arch/arm/boot/dts/marvell/armada-381-netgear-gs110emx.dts
++++ b/arch/arm/boot/dts/marvell/armada-381-netgear-gs110emx.dts
+@@ -77,51 +77,49 @@
+ 	pinctrl-0 = <&mdio_pins>;
+ 	status = "okay";
+ 
+-	switch@0 {
++	ethernet-switch@0 {
+ 		compatible = "marvell,mv88e6190";
+-		#address-cells = <1>;
+ 		#interrupt-cells = <2>;
+ 		interrupt-controller;
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-0 = <&switch_interrupt_pins>;
+ 		pinctrl-names = "default";
+-		#size-cells = <0>;
+ 		reg = <0>;
+ 
+ 		mdio {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy1: switch0phy1@1 {
++			switch0phy1: ethernet-phy@1 {
+ 				reg = <0x1>;
+ 			};
+ 
+-			switch0phy2: switch0phy2@2 {
++			switch0phy2: ethernet-phy@2 {
+ 				reg = <0x2>;
+ 			};
+ 
+-			switch0phy3: switch0phy3@3 {
++			switch0phy3: ethernet-phy@3 {
+ 				reg = <0x3>;
+ 			};
+ 
+-			switch0phy4: switch0phy4@4 {
++			switch0phy4: ethernet-phy@4 {
+ 				reg = <0x4>;
+ 			};
+ 
+-			switch0phy5: switch0phy5@5 {
++			switch0phy5: ethernet-phy@5 {
+ 				reg = <0x5>;
+ 			};
+ 
+-			switch0phy6: switch0phy6@6 {
++			switch0phy6: ethernet-phy@6 {
+ 				reg = <0x6>;
+ 			};
+ 
+-			switch0phy7: switch0phy7@7 {
++			switch0phy7: ethernet-phy@7 {
+ 				reg = <0x7>;
+ 			};
+ 
+-			switch0phy8: switch0phy8@8 {
++			switch0phy8: ethernet-phy@8 {
+ 				reg = <0x8>;
+ 			};
+ 		};
+@@ -142,11 +140,11 @@
+ 			};
+ 		};
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@0 {
++			ethernet-port@0 {
+ 				ethernet = <&eth0>;
+ 				phy-mode = "rgmii";
+ 				reg = <0>;
+@@ -158,55 +156,55 @@
+ 				};
+ 			};
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				label = "lan1";
+ 				phy-handle = <&switch0phy1>;
+ 				reg = <1>;
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				label = "lan2";
+ 				phy-handle = <&switch0phy2>;
+ 				reg = <2>;
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				label = "lan3";
+ 				phy-handle = <&switch0phy3>;
+ 				reg = <3>;
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				label = "lan4";
+ 				phy-handle = <&switch0phy4>;
+ 				reg = <4>;
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				label = "lan5";
+ 				phy-handle = <&switch0phy5>;
+ 				reg = <5>;
+ 			};
+ 
+-			port@6 {
++			ethernet-port@6 {
+ 				label = "lan6";
+ 				phy-handle = <&switch0phy6>;
+ 				reg = <6>;
+ 			};
+ 
+-			port@7 {
++			ethernet-port@7 {
+ 				label = "lan7";
+ 				phy-handle = <&switch0phy7>;
+ 				reg = <7>;
+ 			};
+ 
+-			port@8 {
++			ethernet-port@8 {
+ 				label = "lan8";
+ 				phy-handle = <&switch0phy8>;
+ 				reg = <8>;
+ 			};
+ 
+-			port@9 {
++			ethernet-port@9 {
+ 				/* 88X3310P external phy */
+ 				label = "lan9";
+ 				phy-handle = <&phy1>;
+@@ -214,7 +212,7 @@
+ 				reg = <9>;
+ 			};
+ 
+-			port@a {
++			ethernet-port@a {
+ 				/* 88X3310P external phy */
+ 				label = "lan10";
+ 				phy-handle = <&phy2>;
+diff --git a/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-l8.dts b/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-l8.dts
+index 1990f7d0cc79a..1707d1b015452 100644
+--- a/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-l8.dts
++++ b/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-l8.dts
+@@ -7,66 +7,66 @@
+ };
+ 
+ &mdio {
+-	switch0: switch0@4 {
++	switch0: ethernet-switch@4 {
+ 		compatible = "marvell,mv88e6190";
+ 		reg = <4>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&cf_gtr_switch_reset_pins>;
+ 		reset-gpios = <&gpio0 18 GPIO_ACTIVE_LOW>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan8";
+ 				phy-handle = <&switch0phy0>;
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan7";
+ 				phy-handle = <&switch0phy1>;
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan6";
+ 				phy-handle = <&switch0phy2>;
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "lan5";
+ 				phy-handle = <&switch0phy3>;
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				label = "lan4";
+ 				phy-handle = <&switch0phy4>;
+ 			};
+ 
+-			port@6 {
++			ethernet-port@6 {
+ 				reg = <6>;
+ 				label = "lan3";
+ 				phy-handle = <&switch0phy5>;
+ 			};
+ 
+-			port@7 {
++			ethernet-port@7 {
+ 				reg = <7>;
+ 				label = "lan2";
+ 				phy-handle = <&switch0phy6>;
+ 			};
+ 
+-			port@8 {
++			ethernet-port@8 {
+ 				reg = <8>;
+ 				label = "lan1";
+ 				phy-handle = <&switch0phy7>;
+ 			};
+ 
+-			port@10 {
++			ethernet-port@10 {
+ 				reg = <10>;
+ 				phy-mode = "2500base-x";
+ 
+@@ -83,35 +83,35 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy0: switch0phy0@1 {
++			switch0phy0: ethernet-phy@1 {
+ 				reg = <0x1>;
+ 			};
+ 
+-			switch0phy1: switch0phy1@2 {
++			switch0phy1: ethernet-phy@2 {
+ 				reg = <0x2>;
+ 			};
+ 
+-			switch0phy2: switch0phy2@3 {
++			switch0phy2: ethernet-phy@3 {
+ 				reg = <0x3>;
+ 			};
+ 
+-			switch0phy3: switch0phy3@4 {
++			switch0phy3: ethernet-phy@4 {
+ 				reg = <0x4>;
+ 			};
+ 
+-			switch0phy4: switch0phy4@5 {
++			switch0phy4: ethernet-phy@5 {
+ 				reg = <0x5>;
+ 			};
+ 
+-			switch0phy5: switch0phy5@6 {
++			switch0phy5: ethernet-phy@6 {
+ 				reg = <0x6>;
+ 			};
+ 
+-			switch0phy6: switch0phy6@7 {
++			switch0phy6: ethernet-phy@7 {
+ 				reg = <0x7>;
+ 			};
+ 
+-			switch0phy7: switch0phy7@8 {
++			switch0phy7: ethernet-phy@8 {
+ 				reg = <0x8>;
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-s4.dts b/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-s4.dts
+index b795ad573891e..a7678a784c180 100644
+--- a/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-s4.dts
++++ b/arch/arm/boot/dts/marvell/armada-385-clearfog-gtr-s4.dts
+@@ -11,42 +11,42 @@
+ };
+ 
+ &mdio {
+-	switch0: switch0@4 {
++	switch0: ethernet-switch@4 {
+ 		compatible = "marvell,mv88e6085";
+ 		reg = <4>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&cf_gtr_switch_reset_pins>;
+ 		reset-gpios = <&gpio0 18 GPIO_ACTIVE_LOW>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan2";
+ 				phy-handle = <&switch0phy0>;
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan1";
+ 				phy-handle = <&switch0phy1>;
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan4";
+ 				phy-handle = <&switch0phy2>;
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "lan3";
+ 				phy-handle = <&switch0phy3>;
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				phy-mode = "2500base-x";
+ 				ethernet = <&eth1>;
+@@ -63,19 +63,19 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy0: switch0phy0@11 {
++			switch0phy0: ethernet-phy@11 {
+ 				reg = <0x11>;
+ 			};
+ 
+-			switch0phy1: switch0phy1@12 {
++			switch0phy1: ethernet-phy@12 {
+ 				reg = <0x12>;
+ 			};
+ 
+-			switch0phy2: switch0phy2@13 {
++			switch0phy2: ethernet-phy@13 {
+ 				reg = <0x13>;
+ 			};
+ 
+-			switch0phy3: switch0phy3@14 {
++			switch0phy3: ethernet-phy@14 {
+ 				reg = <0x14>;
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/marvell/armada-385-linksys.dtsi b/arch/arm/boot/dts/marvell/armada-385-linksys.dtsi
+index fc8216fd9f600..4116ed60f7092 100644
+--- a/arch/arm/boot/dts/marvell/armada-385-linksys.dtsi
++++ b/arch/arm/boot/dts/marvell/armada-385-linksys.dtsi
+@@ -158,42 +158,40 @@
+ &mdio {
+ 	status = "okay";
+ 
+-	switch@0 {
++	ethernet-switch@0 {
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <0>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@0 {
++			ethernet-port@0 {
+ 				reg = <0>;
+ 				label = "lan4";
+ 			};
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan3";
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan2";
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan1";
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "wan";
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				phy-mode = "sgmii";
+ 				ethernet = <&eth2>;
+diff --git a/arch/arm/boot/dts/marvell/armada-385-turris-omnia.dts b/arch/arm/boot/dts/marvell/armada-385-turris-omnia.dts
+index 2d8d319bec830..7b755bb4e4e75 100644
+--- a/arch/arm/boot/dts/marvell/armada-385-turris-omnia.dts
++++ b/arch/arm/boot/dts/marvell/armada-385-turris-omnia.dts
+@@ -435,12 +435,10 @@
+ 	};
+ 
+ 	/* Switch MV88E6176 at address 0x10 */
+-	switch@10 {
++	ethernet-switch@10 {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&swint_pins>;
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 
+ 		dsa,member = <0 0>;
+ 		reg = <0x10>;
+@@ -448,36 +446,36 @@
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <13 IRQ_TYPE_LEVEL_LOW>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			ports@0 {
++			ethernet-port@0 {
+ 				reg = <0>;
+ 				label = "lan0";
+ 			};
+ 
+-			ports@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan1";
+ 			};
+ 
+-			ports@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan2";
+ 			};
+ 
+-			ports@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan3";
+ 			};
+ 
+-			ports@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "lan4";
+ 			};
+ 
+-			ports@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				ethernet = <&eth1>;
+ 				phy-mode = "rgmii-id";
+@@ -488,7 +486,7 @@
+ 				};
+ 			};
+ 
+-			ports@6 {
++			ethernet-port@6 {
+ 				reg = <6>;
+ 				ethernet = <&eth0>;
+ 				phy-mode = "rgmii-id";
+diff --git a/arch/arm/boot/dts/marvell/armada-388-clearfog.dts b/arch/arm/boot/dts/marvell/armada-388-clearfog.dts
+index 32c569df142ff..3290ccad23745 100644
+--- a/arch/arm/boot/dts/marvell/armada-388-clearfog.dts
++++ b/arch/arm/boot/dts/marvell/armada-388-clearfog.dts
+@@ -92,44 +92,42 @@
+ &mdio {
+ 	status = "okay";
+ 
+-	switch@4 {
++	ethernet-switch@4 {
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <4>;
+ 		pinctrl-0 = <&clearfog_dsa0_clk_pins &clearfog_dsa0_pins>;
+ 		pinctrl-names = "default";
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@0 {
++			ethernet-port@0 {
+ 				reg = <0>;
+ 				label = "lan5";
+ 			};
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan4";
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan3";
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan2";
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "lan1";
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				ethernet = <&eth1>;
+ 				phy-mode = "1000base-x";
+@@ -140,7 +138,7 @@
+ 				};
+ 			};
+ 
+-			port@6 {
++			ethernet-port@6 {
+ 				/* 88E1512 external phy */
+ 				reg = <6>;
+ 				label = "lan6";
+diff --git a/arch/arm/boot/dts/marvell/armada-xp-linksys-mamba.dts b/arch/arm/boot/dts/marvell/armada-xp-linksys-mamba.dts
+index 7a0614fd0c93a..ea859f7ea0422 100644
+--- a/arch/arm/boot/dts/marvell/armada-xp-linksys-mamba.dts
++++ b/arch/arm/boot/dts/marvell/armada-xp-linksys-mamba.dts
+@@ -265,42 +265,40 @@
+ &mdio {
+ 	status = "okay";
+ 
+-	switch@0 {
++	ethernet-switch@0 {
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <0>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@0 {
++			ethernet-port@0 {
+ 				reg = <0>;
+ 				label = "lan4";
+ 			};
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan3";
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan2";
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan1";
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "internet";
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				phy-mode = "rgmii-id";
+ 				ethernet = <&eth0>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx1-ads.dts b/arch/arm/boot/dts/nxp/imx/imx1-ads.dts
+index 5833fb6f15d88..2c817c4a4c68f 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx1-ads.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx1-ads.dts
+@@ -65,7 +65,7 @@
+ 	pinctrl-0 = <&pinctrl_weim>;
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		compatible = "cfi-flash";
+ 		reg = <0 0x00000000 0x02000000>;
+ 		bank-width = <4>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx1-apf9328.dts b/arch/arm/boot/dts/nxp/imx/imx1-apf9328.dts
+index 1f11e9542a72d..e66eef87a7a4f 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx1-apf9328.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx1-apf9328.dts
+@@ -45,7 +45,7 @@
+ 	pinctrl-0 = <&pinctrl_weim>;
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		compatible = "cfi-flash";
+ 		reg = <0 0x00000000 0x02000000>;
+ 		bank-width = <2>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx1.dtsi b/arch/arm/boot/dts/nxp/imx/imx1.dtsi
+index e312f1e74e2fe..4aeb74479f44e 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx1.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx1.dtsi
+@@ -268,9 +268,12 @@
+ 			status = "disabled";
+ 		};
+ 
+-		esram: esram@300000 {
++		esram: sram@300000 {
+ 			compatible = "mmio-sram";
+ 			reg = <0x00300000 0x20000>;
++			ranges = <0 0x00300000 0x20000>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-cpuimx25.dtsi b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-cpuimx25.dtsi
+index 0703f62d10d1c..93a6e4e680b45 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-cpuimx25.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-cpuimx25.dtsi
+@@ -27,7 +27,7 @@
+ 	pinctrl-0 = <&pinctrl_i2c1>;
+ 	status = "okay";
+ 
+-	pcf8563@51 {
++	rtc@51 {
+ 		compatible = "nxp,pcf8563";
+ 		reg = <0x51>;
+ 	};
+diff --git a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts
+index fc8a502fc957f..6cddb2cc36fe2 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-cmo-qvga.dts
+@@ -16,7 +16,7 @@
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&qvga_timings>;
+-			qvga_timings: 320x240 {
++			qvga_timings: timing0 {
+ 				clock-frequency = <6500000>;
+ 				hactive = <320>;
+ 				vactive = <240>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts
+index 80a7f96de4c6a..64b2ffac463b2 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-svga.dts
+@@ -16,7 +16,7 @@
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&dvi_svga_timings>;
+-			dvi_svga_timings: 800x600 {
++			dvi_svga_timings: timing0 {
+ 				clock-frequency = <40000000>;
+ 				hactive = <800>;
+ 				vactive = <600>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts
+index 24027a1fb46d1..fb074bfdaa8dc 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx25-eukrea-mbimxsd25-baseboard-dvi-vga.dts
+@@ -16,7 +16,7 @@
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&dvi_vga_timings>;
+-			dvi_vga_timings: 640x480 {
++			dvi_vga_timings: timing0 {
+ 				clock-frequency = <31250000>;
+ 				hactive = <640>;
+ 				vactive = <480>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx25-pdk.dts b/arch/arm/boot/dts/nxp/imx/imx25-pdk.dts
+index 04f4b127a1725..e93bf3b7115fa 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx25-pdk.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx25-pdk.dts
+@@ -68,7 +68,7 @@
+ 		bus-width = <18>;
+ 		display-timings {
+ 			native-mode = <&wvga_timings>;
+-			wvga_timings: 640x480 {
++			wvga_timings: timing0 {
+ 				hactive = <640>;
+ 				vactive = <480>;
+ 				hback-porch = <45>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx25.dtsi b/arch/arm/boot/dts/nxp/imx/imx25.dtsi
+index 534c70b8d79df..f65c7234f9e7f 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx25.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx25.dtsi
+@@ -542,7 +542,7 @@
+ 			};
+ 
+ 			iim: efuse@53ff0000 {
+-				compatible = "fsl,imx25-iim", "fsl,imx27-iim";
++				compatible = "fsl,imx25-iim";
+ 				reg = <0x53ff0000 0x4000>;
+ 				interrupts = <19>;
+ 				clocks = <&clks 99>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx27-apf27dev.dts b/arch/arm/boot/dts/nxp/imx/imx27-apf27dev.dts
+index a21f1f7c24b88..849306cb4532d 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx27-apf27dev.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx27-apf27dev.dts
+@@ -16,7 +16,7 @@
+ 		fsl,pcr = <0xfae80083>;	/* non-standard but required */
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 800x480 {
++			timing0: timing0 {
+ 				clock-frequency = <33000033>;
+ 				hactive = <800>;
+ 				vactive = <480>;
+@@ -47,7 +47,7 @@
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 
+-		user {
++		led-user {
+ 			label = "Heartbeat";
+ 			gpios = <&gpio6 14 GPIO_ACTIVE_HIGH>;
+ 			linux,default-trigger = "heartbeat";
+diff --git a/arch/arm/boot/dts/nxp/imx/imx27-eukrea-cpuimx27.dtsi b/arch/arm/boot/dts/nxp/imx/imx27-eukrea-cpuimx27.dtsi
+index 74110bbcd9d4f..c7e9235848782 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx27-eukrea-cpuimx27.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx27-eukrea-cpuimx27.dtsi
+@@ -33,7 +33,7 @@
+ 	pinctrl-0 = <&pinctrl_i2c1>;
+ 	status = "okay";
+ 
+-	pcf8563@51 {
++	rtc@51 {
+ 		compatible = "nxp,pcf8563";
+ 		reg = <0x51>;
+ 	};
+@@ -90,7 +90,7 @@
+ &weim {
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		compatible = "cfi-flash";
+diff --git a/arch/arm/boot/dts/nxp/imx/imx27-eukrea-mbimxsd27-baseboard.dts b/arch/arm/boot/dts/nxp/imx/imx27-eukrea-mbimxsd27-baseboard.dts
+index 145e459625b32..d78793601306c 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx27-eukrea-mbimxsd27-baseboard.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx27-eukrea-mbimxsd27-baseboard.dts
+@@ -16,7 +16,7 @@
+ 
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 320x240 {
++			timing0: timing0 {
+ 				clock-frequency = <6500000>;
+ 				hactive = <320>;
+ 				vactive = <240>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycard-s-rdk.dts b/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycard-s-rdk.dts
+index 25442eba21c1e..27c93b9fe0499 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycard-s-rdk.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycard-s-rdk.dts
+@@ -19,7 +19,7 @@
+ 		fsl,pcr = <0xf0c88080>;	/* non-standard but required */
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 640x480 {
++			timing0: timing0 {
+ 				hactive = <640>;
+ 				vactive = <480>;
+ 				hback-porch = <112>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-rdk.dts b/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-rdk.dts
+index 7f0cd4d3ec2de..67b235044b708 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-rdk.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-rdk.dts
+@@ -19,7 +19,7 @@
+ 
+ 		display-timings {
+ 			native-mode = <&timing0>;
+-			timing0: 240x320 {
++			timing0: timing0 {
+ 				clock-frequency = <5500000>;
+ 				hactive = <240>;
+ 				vactive = <320>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-som.dtsi b/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-som.dtsi
+index 7b2ea4cdae58c..8d428c8446665 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-som.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-som.dtsi
+@@ -314,7 +314,7 @@
+ &weim {
+ 	status = "okay";
+ 
+-	nor: nor@0,0 {
++	nor: flash@0,0 {
+ 		compatible = "cfi-flash";
+ 		reg = <0 0x00000000 0x02000000>;
+ 		bank-width = <2>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx27.dtsi b/arch/arm/boot/dts/nxp/imx/imx27.dtsi
+index faba12ee7465e..cac4b3d68986a 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx27.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx27.dtsi
+@@ -588,6 +588,9 @@
+ 		iram: sram@ffff4c00 {
+ 			compatible = "mmio-sram";
+ 			reg = <0xffff4c00 0xb400>;
++			ranges = <0 0xffff4c00 0xb400>;
++			#address-cells = <1>;
++			#size-cells = <1>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7d.dtsi b/arch/arm/boot/dts/nxp/imx/imx7d.dtsi
+index 4b94b8afb55d9..0484e349e064e 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7d.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx7d.dtsi
+@@ -217,9 +217,6 @@
+ };
+ 
+ &ca_funnel_in_ports {
+-	#address-cells = <1>;
+-	#size-cells = <0>;
+-
+ 	port@1 {
+ 		reg = <1>;
+ 		ca_funnel_in_port1: endpoint {
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7s.dtsi b/arch/arm/boot/dts/nxp/imx/imx7s.dtsi
+index 5387da8a2a0a3..4569d2b8edef9 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7s.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx7s.dtsi
+@@ -190,7 +190,11 @@
+ 			clock-names = "apb_pclk";
+ 
+ 			ca_funnel_in_ports: in-ports {
+-				port {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				port@0 {
++					reg = <0>;
+ 					ca_funnel_in_port0: endpoint {
+ 						remote-endpoint = <&etm0_out_port>;
+ 					};
+@@ -811,7 +815,7 @@
+ 			};
+ 
+ 			lcdif: lcdif@30730000 {
+-				compatible = "fsl,imx7d-lcdif", "fsl,imx28-lcdif";
++				compatible = "fsl,imx7d-lcdif", "fsl,imx6sx-lcdif";
+ 				reg = <0x30730000 0x10000>;
+ 				interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>,
+@@ -1275,7 +1279,7 @@
+ 		gpmi: nand-controller@33002000 {
+ 			compatible = "fsl,imx7d-gpmi-nand";
+ 			#address-cells = <1>;
+-			#size-cells = <1>;
++			#size-cells = <0>;
+ 			reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
+ 			reg-names = "gpmi-nand", "bch";
+ 			interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/nxp/mxs/imx23-sansa.dts b/arch/arm/boot/dts/nxp/mxs/imx23-sansa.dts
+index 636cf09a2b375..b23e7ada9c804 100644
+--- a/arch/arm/boot/dts/nxp/mxs/imx23-sansa.dts
++++ b/arch/arm/boot/dts/nxp/mxs/imx23-sansa.dts
+@@ -175,10 +175,8 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 		compatible = "i2c-gpio";
+-		gpios = <
+-			&gpio1 24 0		/* SDA */
+-			&gpio1 22 0		/* SCL */
+-		>;
++		sda-gpios = <&gpio1 24 0>;
++		scl-gpios = <&gpio1 22 0>;
+ 		i2c-gpio,delay-us = <2>;	/* ~100 kHz */
+ 	};
+ 
+@@ -186,10 +184,8 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 		compatible = "i2c-gpio";
+-		gpios = <
+-			&gpio0 31 0		/* SDA */
+-			&gpio0 30 0		/* SCL */
+-		>;
++		sda-gpios = <&gpio0 31 0>;
++		scl-gpios = <&gpio0 30 0>;
+ 		i2c-gpio,delay-us = <2>;	/* ~100 kHz */
+ 
+ 		touch: touch@20 {
+diff --git a/arch/arm/boot/dts/nxp/mxs/imx23.dtsi b/arch/arm/boot/dts/nxp/mxs/imx23.dtsi
+index fdf18b7cb2f6a..9cba1d0224f49 100644
+--- a/arch/arm/boot/dts/nxp/mxs/imx23.dtsi
++++ b/arch/arm/boot/dts/nxp/mxs/imx23.dtsi
+@@ -412,7 +412,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			dma_apbx: dma-apbx@80024000 {
++			dma_apbx: dma-controller@80024000 {
+ 				compatible = "fsl,imx23-dma-apbx";
+ 				reg = <0x80024000 0x2000>;
+ 				interrupts = <7>, <5>, <9>, <26>,
+diff --git a/arch/arm/boot/dts/nxp/mxs/imx28.dtsi b/arch/arm/boot/dts/nxp/mxs/imx28.dtsi
+index 6932d23fb29de..37b9d409a5cd7 100644
+--- a/arch/arm/boot/dts/nxp/mxs/imx28.dtsi
++++ b/arch/arm/boot/dts/nxp/mxs/imx28.dtsi
+@@ -990,7 +990,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			dma_apbx: dma-apbx@80024000 {
++			dma_apbx: dma-controller@80024000 {
+ 				compatible = "fsl,imx28-dma-apbx";
+ 				reg = <0x80024000 0x2000>;
+ 				interrupts = <78>, <79>, <66>, <0>,
+diff --git a/arch/arm/boot/dts/qcom/pm8226.dtsi b/arch/arm/boot/dts/qcom/pm8226.dtsi
+new file mode 100644
+index 0000000000000..2413778f37150
+--- /dev/null
++++ b/arch/arm/boot/dts/qcom/pm8226.dtsi
+@@ -0,0 +1,180 @@
++// SPDX-License-Identifier: BSD-3-Clause
++#include <dt-bindings/iio/qcom,spmi-vadc.h>
++#include <dt-bindings/input/linux-event-codes.h>
++#include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/spmi/spmi.h>
++
++/ {
++	thermal-zones {
++		pm8226-thermal {
++			polling-delay-passive = <100>;
++			polling-delay = <0>;
++			thermal-sensors = <&pm8226_temp>;
++
++			trips {
++				trip0 {
++					temperature = <105000>;
++					hysteresis = <2000>;
++					type = "passive";
++				};
++
++				trip1 {
++					temperature = <125000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				crit {
++					temperature = <145000>;
++					hysteresis = <2000>;
++					type = "critical";
++				};
++			};
++		};
++	};
++};
++
++&spmi_bus {
++	pm8226_0: pm8226@0 {
++		compatible = "qcom,pm8226", "qcom,spmi-pmic";
++		reg = <0x0 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		pon@800 {
++			compatible = "qcom,pm8916-pon";
++			reg = <0x800>;
++
++			pwrkey {
++				compatible = "qcom,pm8941-pwrkey";
++				interrupts = <0x0 0x8 0 IRQ_TYPE_EDGE_BOTH>;
++				debounce = <15625>;
++				bias-pull-up;
++				linux,code = <KEY_POWER>;
++			};
++
++			pm8226_resin: resin {
++				compatible = "qcom,pm8941-resin";
++				interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
++				debounce = <15625>;
++				bias-pull-up;
++				status = "disabled";
++			};
++		};
++
++		smbb: charger@1000 {
++			compatible = "qcom,pm8226-charger";
++			reg = <0x1000>;
++			interrupts = <0x0 0x10 7 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x10 5 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x10 4 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x12 1 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x12 0 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x13 2 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x13 1 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x14 1 IRQ_TYPE_EDGE_BOTH>;
++			interrupt-names = "chg-done",
++					  "chg-fast",
++					  "chg-trkl",
++					  "bat-temp-ok",
++					  "bat-present",
++					  "chg-gone",
++					  "usb-valid",
++					  "dc-valid";
++
++			chg_otg: otg-vbus { };
++		};
++
++		pm8226_temp: temp-alarm@2400 {
++			compatible = "qcom,spmi-temp-alarm";
++			reg = <0x2400>;
++			interrupts = <0 0x24 0 IRQ_TYPE_EDGE_RISING>;
++			io-channels = <&pm8226_vadc VADC_DIE_TEMP>;
++			io-channel-names = "thermal";
++			#thermal-sensor-cells = <0>;
++		};
++
++		pm8226_vadc: adc@3100 {
++			compatible = "qcom,spmi-vadc";
++			reg = <0x3100>;
++			interrupts = <0x0 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			#io-channel-cells = <1>;
++
++			channel@7 {
++				reg = <VADC_VSYS>;
++				qcom,pre-scaling = <1 3>;
++				label = "vph_pwr";
++			};
++			channel@8 {
++				reg = <VADC_DIE_TEMP>;
++				label = "die_temp";
++			};
++			channel@9 {
++				reg = <VADC_REF_625MV>;
++				label = "ref_625mv";
++			};
++			channel@a {
++				reg = <VADC_REF_1250MV>;
++				label = "ref_1250mv";
++			};
++			channel@e {
++				reg = <VADC_GND_REF>;
++			};
++			channel@f {
++				reg = <VADC_VDD_VADC>;
++			};
++		};
++
++		pm8226_iadc: adc@3600 {
++			compatible = "qcom,pm8226-iadc", "qcom,spmi-iadc";
++			reg = <0x3600>;
++			interrupts = <0x0 0x36 0x0 IRQ_TYPE_EDGE_RISING>;
++		};
++
++		rtc@6000 {
++			compatible = "qcom,pm8941-rtc";
++			reg = <0x6000>, <0x6100>;
++			reg-names = "rtc", "alarm";
++			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
++		};
++
++		pm8226_mpps: mpps@a000 {
++			compatible = "qcom,pm8226-mpp", "qcom,spmi-mpp";
++			reg = <0xa000>;
++			gpio-controller;
++			#gpio-cells = <2>;
++			gpio-ranges = <&pm8226_mpps 0 0 8>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++
++		pm8226_gpios: gpio@c000 {
++			compatible = "qcom,pm8226-gpio", "qcom,spmi-gpio";
++			reg = <0xc000>;
++			gpio-controller;
++			#gpio-cells = <2>;
++			gpio-ranges = <&pm8226_gpios 0 0 8>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++	};
++
++	pm8226_1: pm8226@1 {
++		compatible = "qcom,pm8226", "qcom,spmi-pmic";
++		reg = <0x1 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		pm8226_spmi_regulators: regulators {
++			compatible = "qcom,pm8226-regulators";
++		};
++
++		pm8226_vib: vibrator@c000 {
++			compatible = "qcom,pm8916-vib";
++			reg = <0xc000>;
++			status = "disabled";
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/qcom/pm8841.dtsi b/arch/arm/boot/dts/qcom/pm8841.dtsi
+new file mode 100644
+index 0000000000000..3bf2ce5c86a64
+--- /dev/null
++++ b/arch/arm/boot/dts/qcom/pm8841.dtsi
+@@ -0,0 +1,68 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/spmi/spmi.h>
++
++
++/ {
++	thermal-zones {
++		pm8841-thermal {
++			polling-delay-passive = <100>;
++			polling-delay = <0>;
++			thermal-sensors = <&pm8841_temp>;
++
++			trips {
++				trip0 {
++					temperature = <105000>;
++					hysteresis = <2000>;
++					type = "passive";
++				};
++
++				trip1 {
++					temperature = <125000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				crit {
++					temperature = <140000>;
++					hysteresis = <2000>;
++					type = "critical";
++				};
++			};
++		};
++	};
++};
++
++&spmi_bus {
++
++	pm8841_0: pm8841@4 {
++		compatible = "qcom,pm8841", "qcom,spmi-pmic";
++		reg = <0x4 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		pm8841_mpps: mpps@a000 {
++			compatible = "qcom,pm8841-mpp", "qcom,spmi-mpp";
++			reg = <0xa000>;
++			gpio-controller;
++			#gpio-cells = <2>;
++			gpio-ranges = <&pm8841_mpps 0 0 4>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++
++		pm8841_temp: temp-alarm@2400 {
++			compatible = "qcom,spmi-temp-alarm";
++			reg = <0x2400>;
++			interrupts = <4 0x24 0 IRQ_TYPE_EDGE_RISING>;
++			#thermal-sensor-cells = <0>;
++		};
++	};
++
++	pm8841_1: pm8841@5 {
++		compatible = "qcom,pm8841", "qcom,spmi-pmic";
++		reg = <0x5 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++	};
++};
+diff --git a/arch/arm/boot/dts/qcom/pm8941.dtsi b/arch/arm/boot/dts/qcom/pm8941.dtsi
+new file mode 100644
+index 0000000000000..ed0ba591c7558
+--- /dev/null
++++ b/arch/arm/boot/dts/qcom/pm8941.dtsi
+@@ -0,0 +1,254 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <dt-bindings/iio/qcom,spmi-vadc.h>
++#include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/spmi/spmi.h>
++
++
++/ {
++	thermal-zones {
++		pm8941-thermal {
++			polling-delay-passive = <100>;
++			polling-delay = <0>;
++			thermal-sensors = <&pm8941_temp>;
++
++			trips {
++				trip0 {
++					temperature = <105000>;
++					hysteresis = <2000>;
++					type = "passive";
++				};
++
++				trip1 {
++					temperature = <125000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				crit {
++					temperature = <145000>;
++					hysteresis = <2000>;
++					type = "critical";
++				};
++			};
++		};
++	};
++};
++
++&spmi_bus {
++
++	pm8941_0: pm8941@0 {
++		compatible = "qcom,pm8941", "qcom,spmi-pmic";
++		reg = <0x0 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		rtc@6000 {
++			compatible = "qcom,pm8941-rtc";
++			reg = <0x6000>,
++			      <0x6100>;
++			reg-names = "rtc", "alarm";
++			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
++		};
++
++		pon@800 {
++			compatible = "qcom,pm8941-pon";
++			reg = <0x800>;
++
++			pwrkey {
++				compatible = "qcom,pm8941-pwrkey";
++				interrupts = <0x0 0x8 0 IRQ_TYPE_EDGE_BOTH>;
++				debounce = <15625>;
++				bias-pull-up;
++			};
++
++			pm8941_resin: resin {
++				compatible = "qcom,pm8941-resin";
++				interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
++				debounce = <15625>;
++				bias-pull-up;
++				status = "disabled";
++			};
++		};
++
++		usb_id: usb-detect@900 {
++			compatible = "qcom,pm8941-misc";
++			reg = <0x900>;
++			interrupts = <0x0 0x9 0 IRQ_TYPE_EDGE_BOTH>;
++			interrupt-names = "usb_id";
++		};
++
++		smbb: charger@1000 {
++			compatible = "qcom,pm8941-charger";
++			reg = <0x1000>;
++			interrupts = <0x0 0x10 7 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x10 5 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x10 4 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x12 1 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x12 0 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x13 2 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x13 1 IRQ_TYPE_EDGE_BOTH>,
++				     <0x0 0x14 1 IRQ_TYPE_EDGE_BOTH>;
++			interrupt-names = "chg-done",
++					  "chg-fast",
++					  "chg-trkl",
++					  "bat-temp-ok",
++					  "bat-present",
++					  "chg-gone",
++					  "usb-valid",
++					  "dc-valid";
++
++			usb-otg-in-supply = <&pm8941_5vs1>;
++
++			chg_otg: otg-vbus { };
++		};
++
++		pm8941_gpios: gpio@c000 {
++			compatible = "qcom,pm8941-gpio", "qcom,spmi-gpio";
++			reg = <0xc000>;
++			gpio-controller;
++			gpio-ranges = <&pm8941_gpios 0 0 36>;
++			#gpio-cells = <2>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++
++			boost_bypass_n_pin: boost-bypass-state {
++				pins = "gpio21";
++				function = "normal";
++			};
++		};
++
++		pm8941_mpps: mpps@a000 {
++			compatible = "qcom,pm8941-mpp", "qcom,spmi-mpp";
++			reg = <0xa000>;
++			gpio-controller;
++			#gpio-cells = <2>;
++			gpio-ranges = <&pm8941_mpps 0 0 8>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++
++		pm8941_temp: temp-alarm@2400 {
++			compatible = "qcom,spmi-temp-alarm";
++			reg = <0x2400>;
++			interrupts = <0 0x24 0 IRQ_TYPE_EDGE_RISING>;
++			io-channels = <&pm8941_vadc VADC_DIE_TEMP>;
++			io-channel-names = "thermal";
++			#thermal-sensor-cells = <0>;
++		};
++
++		pm8941_vadc: adc@3100 {
++			compatible = "qcom,spmi-vadc";
++			reg = <0x3100>;
++			interrupts = <0x0 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			#io-channel-cells = <1>;
++
++
++			channel@6 {
++				reg = <VADC_VBAT_SNS>;
++			};
++
++			channel@8 {
++				reg = <VADC_DIE_TEMP>;
++			};
++
++			channel@9 {
++				reg = <VADC_REF_625MV>;
++			};
++
++			channel@a {
++				reg = <VADC_REF_1250MV>;
++			};
++
++			channel@e {
++				reg = <VADC_GND_REF>;
++			};
++
++			channel@f {
++				reg = <VADC_VDD_VADC>;
++			};
++
++			channel@30 {
++				reg = <VADC_LR_MUX1_BAT_THERM>;
++			};
++		};
++
++		pm8941_iadc: adc@3600 {
++			compatible = "qcom,pm8941-iadc", "qcom,spmi-iadc";
++			reg = <0x3600>;
++			interrupts = <0x0 0x36 0x0 IRQ_TYPE_EDGE_RISING>;
++			qcom,external-resistor-micro-ohms = <10000>;
++		};
++
++		pm8941_coincell: charger@2800 {
++			compatible = "qcom,pm8941-coincell";
++			reg = <0x2800>;
++			status = "disabled";
++		};
++	};
++
++	pm8941_1: pm8941@1 {
++		compatible = "qcom,pm8941", "qcom,spmi-pmic";
++		reg = <0x1 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		pm8941_lpg: pwm {
++			compatible = "qcom,pm8941-lpg";
++
++			#address-cells = <1>;
++			#size-cells = <0>;
++			#pwm-cells = <2>;
++
++			status = "disabled";
++		};
++
++		pm8941_vib: vibrator@c000 {
++			compatible = "qcom,pm8916-vib";
++			reg = <0xc000>;
++			status = "disabled";
++		};
++
++		pm8941_wled: wled@d800 {
++			compatible = "qcom,pm8941-wled";
++			reg = <0xd800>;
++			label = "backlight";
++
++			status = "disabled";
++		};
++
++		regulators {
++			compatible = "qcom,pm8941-regulators";
++			interrupts = <0x1 0x83 0x2 0>, <0x1 0x84 0x2 0>;
++			interrupt-names = "ocp-5vs1", "ocp-5vs2";
++			vin_5vs-supply = <&pm8941_5v>;
++
++			pm8941_5v: s4 {
++				regulator-min-microvolt = <5000000>;
++				regulator-max-microvolt = <5000000>;
++				regulator-enable-ramp-delay = <500>;
++			};
++
++			pm8941_5vs1: 5vs1 {
++				regulator-enable-ramp-delay = <1000>;
++				regulator-pull-down;
++				regulator-over-current-protection;
++				qcom,ocp-max-retries = <10>;
++				qcom,ocp-retry-delay = <30>;
++				qcom,vs-soft-start-strength = <0>;
++				regulator-initial-mode = <1>;
++			};
++
++			pm8941_5vs2: 5vs2 {
++				regulator-enable-ramp-delay = <1000>;
++				regulator-pull-down;
++				regulator-over-current-protection;
++				qcom,ocp-max-retries = <10>;
++				qcom,ocp-retry-delay = <30>;
++				qcom,vs-soft-start-strength = <0>;
++				regulator-initial-mode = <1>;
++			};
++		};
++	};
++};
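The pm8941.dtsi content above, together with pma8084.dtsi, pmx55.dtsi and pmx65.dtsi below, recreates these PMIC include files without the historical qcom- filename prefix; the text is carried over verbatim from the qcom-*.dtsi files deleted further down, so the rename is expressed as matching add/delete pairs. A board consuming such a file typically enables just the nodes it actually wires up; a minimal fragment as a sketch (the &pm8941_wled override is illustrative only, not part of this patch):

	#include "qcom-msm8974.dtsi"
	#include "pm8941.dtsi"

	&pm8941_wled {
		status = "okay";	/* this board has a WLED backlight */
	};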
+diff --git a/arch/arm/boot/dts/qcom/pma8084.dtsi b/arch/arm/boot/dts/qcom/pma8084.dtsi
+new file mode 100644
+index 0000000000000..2985f4805b93e
+--- /dev/null
++++ b/arch/arm/boot/dts/qcom/pma8084.dtsi
+@@ -0,0 +1,99 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <dt-bindings/iio/qcom,spmi-vadc.h>
++#include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/spmi/spmi.h>
++
++&spmi_bus {
++
++	pma8084_0: pma8084@0 {
++		compatible = "qcom,pma8084", "qcom,spmi-pmic";
++		reg = <0x0 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		rtc@6000 {
++			compatible = "qcom,pm8941-rtc";
++			reg = <0x6000>,
++			      <0x6100>;
++			reg-names = "rtc", "alarm";
++			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
++		};
++
++		pwrkey@800 {
++			compatible = "qcom,pm8941-pwrkey";
++			reg = <0x800>;
++			interrupts = <0x0 0x8 0 IRQ_TYPE_EDGE_BOTH>;
++			debounce = <15625>;
++			bias-pull-up;
++		};
++
++		pma8084_gpios: gpio@c000 {
++			compatible = "qcom,pma8084-gpio", "qcom,spmi-gpio";
++			reg = <0xc000>;
++			gpio-controller;
++			gpio-ranges = <&pma8084_gpios 0 0 22>;
++			#gpio-cells = <2>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++
++		pma8084_mpps: mpps@a000 {
++			compatible = "qcom,pma8084-mpp", "qcom,spmi-mpp";
++			reg = <0xa000>;
++			gpio-controller;
++			#gpio-cells = <2>;
++			gpio-ranges = <&pma8084_mpps 0 0 8>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++
++		pma8084_temp: temp-alarm@2400 {
++			compatible = "qcom,spmi-temp-alarm";
++			reg = <0x2400>;
++			interrupts = <0 0x24 0 IRQ_TYPE_EDGE_RISING>;
++			#thermal-sensor-cells = <0>;
++			io-channels = <&pma8084_vadc VADC_DIE_TEMP>;
++			io-channel-names = "thermal";
++		};
++
++		pma8084_vadc: adc@3100 {
++			compatible = "qcom,spmi-vadc";
++			reg = <0x3100>;
++			interrupts = <0x0 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			#io-channel-cells = <1>;
++
++			channel@8 {
++				reg = <VADC_DIE_TEMP>;
++			};
++
++			channel@9 {
++				reg = <VADC_REF_625MV>;
++			};
++
++			channel@a {
++				reg = <VADC_REF_1250MV>;
++			};
++
++			channel@c {
++				reg = <VADC_SPARE1>;
++			};
++
++			channel@e {
++				reg = <VADC_GND_REF>;
++			};
++
++			channel@f {
++				reg = <VADC_VDD_VADC>;
++			};
++		};
++	};
++
++	pma8084_1: pma8084@1 {
++		compatible = "qcom,pma8084", "qcom,spmi-pmic";
++		reg = <0x1 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++	};
++};
+diff --git a/arch/arm/boot/dts/qcom/pmx55.dtsi b/arch/arm/boot/dts/qcom/pmx55.dtsi
+new file mode 100644
+index 0000000000000..da0851173c699
+--- /dev/null
++++ b/arch/arm/boot/dts/qcom/pmx55.dtsi
+@@ -0,0 +1,85 @@
++// SPDX-License-Identifier: BSD-3-Clause
++
++/*
++ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2020, Linaro Limited
++ */
++
++#include <dt-bindings/iio/qcom,spmi-vadc.h>
++#include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/spmi/spmi.h>
++
++&spmi_bus {
++	pmic@8 {
++		compatible = "qcom,pmx55", "qcom,spmi-pmic";
++		reg = <0x8 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		pon@800 {
++			compatible = "qcom,pm8916-pon";
++			reg = <0x0800>;
++
++			status = "disabled";
++		};
++
++		pmx55_temp: temp-alarm@2400 {
++			compatible = "qcom,spmi-temp-alarm";
++			reg = <0x2400>;
++			interrupts = <0x8 0x24 0x0 IRQ_TYPE_EDGE_BOTH>;
++			io-channels = <&pmx55_adc ADC5_DIE_TEMP>;
++			io-channel-names = "thermal";
++			#thermal-sensor-cells = <0>;
++		};
++
++		pmx55_adc: adc@3100 {
++			compatible = "qcom,spmi-adc5";
++			reg = <0x3100>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			#io-channel-cells = <1>;
++			interrupts = <0x8 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
++
++			channel@0 {
++				reg = <ADC5_REF_GND>;
++				qcom,pre-scaling = <1 1>;
++				label = "ref_gnd";
++			};
++
++			channel@1 {
++				reg = <ADC5_1P25VREF>;
++				qcom,pre-scaling = <1 1>;
++				label = "vref_1p25";
++			};
++
++			channel@6 {
++				reg = <ADC5_DIE_TEMP>;
++				qcom,pre-scaling = <1 1>;
++				label = "die_temp";
++			};
++
++			channel@9 {
++				reg = <ADC5_CHG_TEMP>;
++				qcom,pre-scaling = <1 1>;
++				label = "chg_temp";
++			};
++		};
++
++		pmx55_gpios: gpio@c000 {
++			compatible = "qcom,pmx55-gpio", "qcom,spmi-gpio";
++			reg = <0xc000>;
++			gpio-controller;
++			gpio-ranges = <&pmx55_gpios 0 0 11>;
++			#gpio-cells = <2>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++	};
++
++	pmic@9 {
++		compatible = "qcom,pmx55", "qcom,spmi-pmic";
++		reg = <0x9 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++	};
++};
+diff --git a/arch/arm/boot/dts/qcom/pmx65.dtsi b/arch/arm/boot/dts/qcom/pmx65.dtsi
+new file mode 100644
+index 0000000000000..1c7fdf59c1f56
+--- /dev/null
++++ b/arch/arm/boot/dts/qcom/pmx65.dtsi
+@@ -0,0 +1,33 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
++ */
++
++#include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/spmi/spmi.h>
++
++&spmi_bus {
++	pmic@1 {
++		compatible = "qcom,pmx65", "qcom,spmi-pmic";
++		reg = <1 SPMI_USID>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++
++		pmx65_temp: temp-alarm@a00 {
++			compatible = "qcom,spmi-temp-alarm";
++			reg = <0xa00>;
++			interrupts = <0x1 0xa 0x0 IRQ_TYPE_EDGE_BOTH>;
++			#thermal-sensor-cells = <0>;
++		};
++
++		pmx65_gpios: gpio@8800 {
++			compatible = "qcom,pmx65-gpio", "qcom,spmi-gpio";
++			reg = <0x8800>;
++			gpio-controller;
++			gpio-ranges = <&pmx65_gpios 0 0 16>;
++			#gpio-cells = <2>;
++			interrupt-controller;
++			#interrupt-cells = <2>;
++		};
++	};
++};
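The remainder of the rename is mechanical: each board .dts that included one of the old qcom-*.dtsi PMIC files is repointed at the new name in the one-line hunks that follow, and the old prefixed files themselves are deleted at the end of this group.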
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8026-asus-sparrow.dts b/arch/arm/boot/dts/qcom/qcom-apq8026-asus-sparrow.dts
+index aa0e0e8d2a973..a39f5a161b03b 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8026-asus-sparrow.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8026-asus-sparrow.dts
+@@ -6,7 +6,7 @@
+ /dts-v1/;
+ 
+ #include "qcom-msm8226.dtsi"
+-#include "qcom-pm8226.dtsi"
++#include "pm8226.dtsi"
+ 
+ /delete-node/ &adsp_region;
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8026-huawei-sturgeon.dts b/arch/arm/boot/dts/qcom/qcom-apq8026-huawei-sturgeon.dts
+index de19640efe553..59b218042d32d 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8026-huawei-sturgeon.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8026-huawei-sturgeon.dts
+@@ -6,7 +6,7 @@
+ /dts-v1/;
+ 
+ #include "qcom-msm8226.dtsi"
+-#include "qcom-pm8226.dtsi"
++#include "pm8226.dtsi"
+ #include <dt-bindings/input/ti-drv260x.h>
+ 
+ /delete-node/ &adsp_region;
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8026-lg-lenok.dts b/arch/arm/boot/dts/qcom/qcom-apq8026-lg-lenok.dts
+index b887e5361ec3a..feb78afef3a6e 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8026-lg-lenok.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8026-lg-lenok.dts
+@@ -6,7 +6,7 @@
+ /dts-v1/;
+ 
+ #include "qcom-msm8226.dtsi"
+-#include "qcom-pm8226.dtsi"
++#include "pm8226.dtsi"
+ 
+ /delete-node/ &adsp_region;
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8026-samsung-matisse-wifi.dts b/arch/arm/boot/dts/qcom/qcom-apq8026-samsung-matisse-wifi.dts
+index f516e0426bb9e..cffc069712b2f 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8026-samsung-matisse-wifi.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8026-samsung-matisse-wifi.dts
+@@ -7,7 +7,7 @@
+ 
+ #include <dt-bindings/input/input.h>
+ #include "qcom-msm8226.dtsi"
+-#include "qcom-pm8226.dtsi"
++#include "pm8226.dtsi"
+ 
+ /delete-node/ &adsp_region;
+ /delete-node/ &smem_region;
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8060-dragonboard.dts b/arch/arm/boot/dts/qcom/qcom-apq8060-dragonboard.dts
+index 569cbf0d8df87..94351c9bf94b9 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8060-dragonboard.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8060-dragonboard.dts
+@@ -71,7 +71,7 @@
+ 		/* Trig on both edges - getting close or far away */
+ 		interrupts-extended = <&pm8058_gpio 34 IRQ_TYPE_EDGE_BOTH>;
+ 		/* MPP05 analog input to the XOADC */
+-		io-channels = <&xoadc 0x00 0x05>;
++		io-channels = <&pm8058_xoadc 0x00 0x05>;
+ 		io-channel-names = "aout";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&dragon_cm3605_gpios>, <&dragon_cm3605_mpps>;
+@@ -944,7 +944,7 @@
+ 	};
+ };
+ 
+-&xoadc {
++&pm8058_xoadc {
+ 	/* Reference voltage 2.2 V */
+ 	xoadc-ref-supply = <&pm8058_l18>;
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8074-dragonboard.dts b/arch/arm/boot/dts/qcom/qcom-apq8074-dragonboard.dts
+index 6d1b2439ae3ac..950fa652f9856 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8074-dragonboard.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8074-dragonboard.dts
+@@ -4,8 +4,8 @@
+ #include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+ #include "qcom-msm8974.dtsi"
+-#include "qcom-pm8841.dtsi"
+-#include "qcom-pm8941.dtsi"
++#include "pm8841.dtsi"
++#include "pm8941.dtsi"
+ 
+ /delete-node/ &mpss_region;
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8084-ifc6540.dts b/arch/arm/boot/dts/qcom/qcom-apq8084-ifc6540.dts
+index 116e59a3b76d0..1df24c922be9f 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8084-ifc6540.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8084-ifc6540.dts
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-apq8084.dtsi"
+-#include "qcom-pma8084.dtsi"
++#include "pma8084.dtsi"
+ 
+ / {
+ 	model = "Qualcomm APQ8084/IFC6540";
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8084-mtp.dts b/arch/arm/boot/dts/qcom/qcom-apq8084-mtp.dts
+index c6b6680248a69..d4e6aee034afd 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8084-mtp.dts
++++ b/arch/arm/boot/dts/qcom/qcom-apq8084-mtp.dts
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-apq8084.dtsi"
+-#include "qcom-pma8084.dtsi"
++#include "pma8084.dtsi"
+ 
+ / {
+ 	model = "Qualcomm APQ 8084-MTP";
+diff --git a/arch/arm/boot/dts/qcom/qcom-mdm9615-wp8548.dtsi b/arch/arm/boot/dts/qcom/qcom-mdm9615-wp8548.dtsi
+index 92c8003dac252..dac3aa793f711 100644
+--- a/arch/arm/boot/dts/qcom/qcom-mdm9615-wp8548.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-mdm9615-wp8548.dtsi
+@@ -76,7 +76,7 @@
+ 	};
+ };
+ 
+-&pmicgpio {
++&pm8018_gpio {
+ 	usb_vbus_5v_pins: usb-vbus-5v-state {
+ 		pins = "gpio4";
+ 		function = "normal";
+diff --git a/arch/arm/boot/dts/qcom/qcom-mdm9615.dtsi b/arch/arm/boot/dts/qcom/qcom-mdm9615.dtsi
+index 63e21aa236429..c0a60bae703b1 100644
+--- a/arch/arm/boot/dts/qcom/qcom-mdm9615.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-mdm9615.dtsi
+@@ -261,7 +261,7 @@
+ 			reg = <0x500000 0x1000>;
+ 			qcom,controller-type = "pmic-arbiter";
+ 
+-			pmicintc: pmic {
++			pm8018: pmic {
+ 				compatible = "qcom,pm8018", "qcom,pm8921";
+ 				interrupts = <GIC_PPI 226 IRQ_TYPE_LEVEL_HIGH>;
+ 				#interrupt-cells = <2>;
+@@ -272,38 +272,38 @@
+ 				pwrkey@1c {
+ 					compatible = "qcom,pm8018-pwrkey", "qcom,pm8921-pwrkey";
+ 					reg = <0x1c>;
+-					interrupt-parent = <&pmicintc>;
++					interrupt-parent = <&pm8018>;
+ 					interrupts = <50 IRQ_TYPE_EDGE_RISING>,
+ 						     <51 IRQ_TYPE_EDGE_RISING>;
+ 					debounce = <15625>;
+ 					pull-up;
+ 				};
+ 
+-				pmicmpp: mpps@50 {
++				pm8018_mpps: mpps@50 {
+ 					compatible = "qcom,pm8018-mpp", "qcom,ssbi-mpp";
+ 					interrupt-controller;
+ 					#interrupt-cells = <2>;
+ 					reg = <0x50>;
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+-					gpio-ranges = <&pmicmpp 0 0 6>;
++					gpio-ranges = <&pm8018_mpps 0 0 6>;
+ 				};
+ 
+ 				rtc@11d {
+ 					compatible = "qcom,pm8018-rtc", "qcom,pm8921-rtc";
+-					interrupt-parent = <&pmicintc>;
++					interrupt-parent = <&pm8018>;
+ 					interrupts = <39 IRQ_TYPE_EDGE_RISING>;
+ 					reg = <0x11d>;
+ 					allow-set-time;
+ 				};
+ 
+-				pmicgpio: gpio@150 {
++				pm8018_gpio: gpio@150 {
+ 					compatible = "qcom,pm8018-gpio", "qcom,ssbi-gpio";
+ 					reg = <0x150>;
+ 					interrupt-controller;
+ 					#interrupt-cells = <2>;
+ 					gpio-controller;
+-					gpio-ranges = <&pmicgpio 0 0 6>;
++					gpio-ranges = <&pm8018_gpio 0 0 6>;
+ 					#gpio-cells = <2>;
+ 				};
+ 			};
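Both mdm9615 hunks above touch only DTS labels (the identifier left of the colon), renaming the generic pmicintc/pmicgpio/pmicmpp handles to chip-specific pm8018 ones; node names and register offsets are unchanged, so only &label references elsewhere need updating, as the &pm8018_gpio hunk in qcom-mdm9615-wp8548.dtsi shows. For reference:

	pm8018_gpio: gpio@150 {		/* "pm8018_gpio" is the label,          */
		gpio-controller;	/* "gpio@150" is the node name;         */
		#gpio-cells = <2>;	/* references are written &pm8018_gpio */
	};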
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8660.dtsi b/arch/arm/boot/dts/qcom/qcom-msm8660.dtsi
+index 78023ed2fdf71..9217ced108c42 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8660.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-msm8660.dtsi
+@@ -80,13 +80,13 @@
+ 	 */
+ 	iio-hwmon {
+ 		compatible = "iio-hwmon";
+-		io-channels = <&xoadc 0x00 0x01>, /* Battery */
+-			    <&xoadc 0x00 0x02>, /* DC in (charger) */
+-			    <&xoadc 0x00 0x04>, /* VPH the main system voltage */
+-			    <&xoadc 0x00 0x0b>, /* Die temperature */
+-			    <&xoadc 0x00 0x0c>, /* Reference voltage 1.25V */
+-			    <&xoadc 0x00 0x0d>, /* Reference voltage 0.625V */
+-			    <&xoadc 0x00 0x0e>; /* Reference voltage 0.325V */
++		io-channels = <&pm8058_xoadc 0x00 0x01>, /* Battery */
++			      <&pm8058_xoadc 0x00 0x02>, /* DC in (charger) */
++			      <&pm8058_xoadc 0x00 0x04>, /* VPH the main system voltage */
++			      <&pm8058_xoadc 0x00 0x0b>, /* Die temperature */
++			      <&pm8058_xoadc 0x00 0x0c>, /* Reference voltage 1.25V */
++			      <&pm8058_xoadc 0x00 0x0d>, /* Reference voltage 0.625V */
++			      <&pm8058_xoadc 0x00 0x0e>; /* Reference voltage 0.325V */
+ 	};
+ 
+ 	soc: soc {
+@@ -390,7 +390,7 @@
+ 					row-hold = <91500>;
+ 				};
+ 
+-				xoadc: xoadc@197 {
++				pm8058_xoadc: xoadc@197 {
+ 					compatible = "qcom,pm8058-adc";
+ 					reg = <0x197>;
+ 					interrupts-extended = <&pm8058 76 IRQ_TYPE_EDGE_RISING>;
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8974-lge-nexus5-hammerhead.dts b/arch/arm/boot/dts/qcom/qcom-msm8974-lge-nexus5-hammerhead.dts
+index 60bdfddeae69e..da99f770d4f57 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8974-lge-nexus5-hammerhead.dts
++++ b/arch/arm/boot/dts/qcom/qcom-msm8974-lge-nexus5-hammerhead.dts
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-msm8974.dtsi"
+-#include "qcom-pm8841.dtsi"
+-#include "qcom-pm8941.dtsi"
++#include "pm8841.dtsi"
++#include "pm8941.dtsi"
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8974-sony-xperia-rhine.dtsi b/arch/arm/boot/dts/qcom/qcom-msm8974-sony-xperia-rhine.dtsi
+index 68a2f9094e536..23ae474698aa7 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8974-sony-xperia-rhine.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-msm8974-sony-xperia-rhine.dtsi
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-msm8974.dtsi"
+-#include "qcom-pm8841.dtsi"
+-#include "qcom-pm8941.dtsi"
++#include "pm8841.dtsi"
++#include "pm8941.dtsi"
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8974pro-fairphone-fp2.dts b/arch/arm/boot/dts/qcom/qcom-msm8974pro-fairphone-fp2.dts
+index 42d253b75dad0..6c4153689b39e 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8974pro-fairphone-fp2.dts
++++ b/arch/arm/boot/dts/qcom/qcom-msm8974pro-fairphone-fp2.dts
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-msm8974pro.dtsi"
+-#include "qcom-pm8841.dtsi"
+-#include "qcom-pm8941.dtsi"
++#include "pm8841.dtsi"
++#include "pm8941.dtsi"
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8974pro-oneplus-bacon.dts b/arch/arm/boot/dts/qcom/qcom-msm8974pro-oneplus-bacon.dts
+index 8230d0e1d95d1..c0ca264d8140d 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8974pro-oneplus-bacon.dts
++++ b/arch/arm/boot/dts/qcom/qcom-msm8974pro-oneplus-bacon.dts
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-msm8974pro.dtsi"
+-#include "qcom-pm8841.dtsi"
+-#include "qcom-pm8941.dtsi"
++#include "pm8841.dtsi"
++#include "pm8941.dtsi"
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8974pro-samsung-klte.dts b/arch/arm/boot/dts/qcom/qcom-msm8974pro-samsung-klte.dts
+index 3e2c86591ee2f..325feb89b343a 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8974pro-samsung-klte.dts
++++ b/arch/arm/boot/dts/qcom/qcom-msm8974pro-samsung-klte.dts
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-msm8974pro.dtsi"
+-#include "qcom-pma8084.dtsi"
++#include "pma8084.dtsi"
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+ #include <dt-bindings/leds/common.h>
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8974pro-sony-xperia-shinano-castor.dts b/arch/arm/boot/dts/qcom/qcom-msm8974pro-sony-xperia-shinano-castor.dts
+index 11468d1409f72..0798cce3dbea0 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8974pro-sony-xperia-shinano-castor.dts
++++ b/arch/arm/boot/dts/qcom/qcom-msm8974pro-sony-xperia-shinano-castor.dts
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "qcom-msm8974pro.dtsi"
+-#include "qcom-pm8841.dtsi"
+-#include "qcom-pm8941.dtsi"
++#include "pm8841.dtsi"
++#include "pm8941.dtsi"
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/leds/common.h>
+ #include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+diff --git a/arch/arm/boot/dts/qcom/qcom-pm8226.dtsi b/arch/arm/boot/dts/qcom/qcom-pm8226.dtsi
+deleted file mode 100644
+index 2413778f37150..0000000000000
+--- a/arch/arm/boot/dts/qcom/qcom-pm8226.dtsi
++++ /dev/null
+@@ -1,180 +0,0 @@
+-// SPDX-License-Identifier: BSD-3-Clause
+-#include <dt-bindings/iio/qcom,spmi-vadc.h>
+-#include <dt-bindings/input/linux-event-codes.h>
+-#include <dt-bindings/interrupt-controller/irq.h>
+-#include <dt-bindings/spmi/spmi.h>
+-
+-/ {
+-	thermal-zones {
+-		pm8226-thermal {
+-			polling-delay-passive = <100>;
+-			polling-delay = <0>;
+-			thermal-sensors = <&pm8226_temp>;
+-
+-			trips {
+-				trip0 {
+-					temperature = <105000>;
+-					hysteresis = <2000>;
+-					type = "passive";
+-				};
+-
+-				trip1 {
+-					temperature = <125000>;
+-					hysteresis = <2000>;
+-					type = "hot";
+-				};
+-
+-				crit {
+-					temperature = <145000>;
+-					hysteresis = <2000>;
+-					type = "critical";
+-				};
+-			};
+-		};
+-	};
+-};
+-
+-&spmi_bus {
+-	pm8226_0: pm8226@0 {
+-		compatible = "qcom,pm8226", "qcom,spmi-pmic";
+-		reg = <0x0 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		pon@800 {
+-			compatible = "qcom,pm8916-pon";
+-			reg = <0x800>;
+-
+-			pwrkey {
+-				compatible = "qcom,pm8941-pwrkey";
+-				interrupts = <0x0 0x8 0 IRQ_TYPE_EDGE_BOTH>;
+-				debounce = <15625>;
+-				bias-pull-up;
+-				linux,code = <KEY_POWER>;
+-			};
+-
+-			pm8226_resin: resin {
+-				compatible = "qcom,pm8941-resin";
+-				interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+-				debounce = <15625>;
+-				bias-pull-up;
+-				status = "disabled";
+-			};
+-		};
+-
+-		smbb: charger@1000 {
+-			compatible = "qcom,pm8226-charger";
+-			reg = <0x1000>;
+-			interrupts = <0x0 0x10 7 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x10 5 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x10 4 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x12 1 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x12 0 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x13 2 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x13 1 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x14 1 IRQ_TYPE_EDGE_BOTH>;
+-			interrupt-names = "chg-done",
+-					  "chg-fast",
+-					  "chg-trkl",
+-					  "bat-temp-ok",
+-					  "bat-present",
+-					  "chg-gone",
+-					  "usb-valid",
+-					  "dc-valid";
+-
+-			chg_otg: otg-vbus { };
+-		};
+-
+-		pm8226_temp: temp-alarm@2400 {
+-			compatible = "qcom,spmi-temp-alarm";
+-			reg = <0x2400>;
+-			interrupts = <0 0x24 0 IRQ_TYPE_EDGE_RISING>;
+-			io-channels = <&pm8226_vadc VADC_DIE_TEMP>;
+-			io-channel-names = "thermal";
+-			#thermal-sensor-cells = <0>;
+-		};
+-
+-		pm8226_vadc: adc@3100 {
+-			compatible = "qcom,spmi-vadc";
+-			reg = <0x3100>;
+-			interrupts = <0x0 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			#io-channel-cells = <1>;
+-
+-			channel@7 {
+-				reg = <VADC_VSYS>;
+-				qcom,pre-scaling = <1 3>;
+-				label = "vph_pwr";
+-			};
+-			channel@8 {
+-				reg = <VADC_DIE_TEMP>;
+-				label = "die_temp";
+-			};
+-			channel@9 {
+-				reg = <VADC_REF_625MV>;
+-				label = "ref_625mv";
+-			};
+-			channel@a {
+-				reg = <VADC_REF_1250MV>;
+-				label = "ref_1250mv";
+-			};
+-			channel@e {
+-				reg = <VADC_GND_REF>;
+-			};
+-			channel@f {
+-				reg = <VADC_VDD_VADC>;
+-			};
+-		};
+-
+-		pm8226_iadc: adc@3600 {
+-			compatible = "qcom,pm8226-iadc", "qcom,spmi-iadc";
+-			reg = <0x3600>;
+-			interrupts = <0x0 0x36 0x0 IRQ_TYPE_EDGE_RISING>;
+-		};
+-
+-		rtc@6000 {
+-			compatible = "qcom,pm8941-rtc";
+-			reg = <0x6000>, <0x6100>;
+-			reg-names = "rtc", "alarm";
+-			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
+-		};
+-
+-		pm8226_mpps: mpps@a000 {
+-			compatible = "qcom,pm8226-mpp", "qcom,spmi-mpp";
+-			reg = <0xa000>;
+-			gpio-controller;
+-			#gpio-cells = <2>;
+-			gpio-ranges = <&pm8226_mpps 0 0 8>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-
+-		pm8226_gpios: gpio@c000 {
+-			compatible = "qcom,pm8226-gpio", "qcom,spmi-gpio";
+-			reg = <0xc000>;
+-			gpio-controller;
+-			#gpio-cells = <2>;
+-			gpio-ranges = <&pm8226_gpios 0 0 8>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-	};
+-
+-	pm8226_1: pm8226@1 {
+-		compatible = "qcom,pm8226", "qcom,spmi-pmic";
+-		reg = <0x1 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		pm8226_spmi_regulators: regulators {
+-			compatible = "qcom,pm8226-regulators";
+-		};
+-
+-		pm8226_vib: vibrator@c000 {
+-			compatible = "qcom,pm8916-vib";
+-			reg = <0xc000>;
+-			status = "disabled";
+-		};
+-	};
+-};
+diff --git a/arch/arm/boot/dts/qcom/qcom-pm8841.dtsi b/arch/arm/boot/dts/qcom/qcom-pm8841.dtsi
+deleted file mode 100644
+index 3bf2ce5c86a64..0000000000000
+--- a/arch/arm/boot/dts/qcom/qcom-pm8841.dtsi
++++ /dev/null
+@@ -1,68 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-#include <dt-bindings/interrupt-controller/irq.h>
+-#include <dt-bindings/spmi/spmi.h>
+-
+-
+-/ {
+-	thermal-zones {
+-		pm8841-thermal {
+-			polling-delay-passive = <100>;
+-			polling-delay = <0>;
+-			thermal-sensors = <&pm8841_temp>;
+-
+-			trips {
+-				trip0 {
+-					temperature = <105000>;
+-					hysteresis = <2000>;
+-					type = "passive";
+-				};
+-
+-				trip1 {
+-					temperature = <125000>;
+-					hysteresis = <2000>;
+-					type = "hot";
+-				};
+-
+-				crit {
+-					temperature = <140000>;
+-					hysteresis = <2000>;
+-					type = "critical";
+-				};
+-			};
+-		};
+-	};
+-};
+-
+-&spmi_bus {
+-
+-	pm8841_0: pm8841@4 {
+-		compatible = "qcom,pm8841", "qcom,spmi-pmic";
+-		reg = <0x4 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		pm8841_mpps: mpps@a000 {
+-			compatible = "qcom,pm8841-mpp", "qcom,spmi-mpp";
+-			reg = <0xa000>;
+-			gpio-controller;
+-			#gpio-cells = <2>;
+-			gpio-ranges = <&pm8841_mpps 0 0 4>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-
+-		pm8841_temp: temp-alarm@2400 {
+-			compatible = "qcom,spmi-temp-alarm";
+-			reg = <0x2400>;
+-			interrupts = <4 0x24 0 IRQ_TYPE_EDGE_RISING>;
+-			#thermal-sensor-cells = <0>;
+-		};
+-	};
+-
+-	pm8841_1: pm8841@5 {
+-		compatible = "qcom,pm8841", "qcom,spmi-pmic";
+-		reg = <0x5 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-	};
+-};
+diff --git a/arch/arm/boot/dts/qcom/qcom-pm8941.dtsi b/arch/arm/boot/dts/qcom/qcom-pm8941.dtsi
+deleted file mode 100644
+index ed0ba591c7558..0000000000000
+--- a/arch/arm/boot/dts/qcom/qcom-pm8941.dtsi
++++ /dev/null
+@@ -1,254 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-#include <dt-bindings/iio/qcom,spmi-vadc.h>
+-#include <dt-bindings/interrupt-controller/irq.h>
+-#include <dt-bindings/spmi/spmi.h>
+-
+-
+-/ {
+-	thermal-zones {
+-		pm8941-thermal {
+-			polling-delay-passive = <100>;
+-			polling-delay = <0>;
+-			thermal-sensors = <&pm8941_temp>;
+-
+-			trips {
+-				trip0 {
+-					temperature = <105000>;
+-					hysteresis = <2000>;
+-					type = "passive";
+-				};
+-
+-				trip1 {
+-					temperature = <125000>;
+-					hysteresis = <2000>;
+-					type = "hot";
+-				};
+-
+-				crit {
+-					temperature = <145000>;
+-					hysteresis = <2000>;
+-					type = "critical";
+-				};
+-			};
+-		};
+-	};
+-};
+-
+-&spmi_bus {
+-
+-	pm8941_0: pm8941@0 {
+-		compatible = "qcom,pm8941", "qcom,spmi-pmic";
+-		reg = <0x0 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		rtc@6000 {
+-			compatible = "qcom,pm8941-rtc";
+-			reg = <0x6000>,
+-			      <0x6100>;
+-			reg-names = "rtc", "alarm";
+-			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
+-		};
+-
+-		pon@800 {
+-			compatible = "qcom,pm8941-pon";
+-			reg = <0x800>;
+-
+-			pwrkey {
+-				compatible = "qcom,pm8941-pwrkey";
+-				interrupts = <0x0 0x8 0 IRQ_TYPE_EDGE_BOTH>;
+-				debounce = <15625>;
+-				bias-pull-up;
+-			};
+-
+-			pm8941_resin: resin {
+-				compatible = "qcom,pm8941-resin";
+-				interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+-				debounce = <15625>;
+-				bias-pull-up;
+-				status = "disabled";
+-			};
+-		};
+-
+-		usb_id: usb-detect@900 {
+-			compatible = "qcom,pm8941-misc";
+-			reg = <0x900>;
+-			interrupts = <0x0 0x9 0 IRQ_TYPE_EDGE_BOTH>;
+-			interrupt-names = "usb_id";
+-		};
+-
+-		smbb: charger@1000 {
+-			compatible = "qcom,pm8941-charger";
+-			reg = <0x1000>;
+-			interrupts = <0x0 0x10 7 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x10 5 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x10 4 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x12 1 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x12 0 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x13 2 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x13 1 IRQ_TYPE_EDGE_BOTH>,
+-				     <0x0 0x14 1 IRQ_TYPE_EDGE_BOTH>;
+-			interrupt-names = "chg-done",
+-					  "chg-fast",
+-					  "chg-trkl",
+-					  "bat-temp-ok",
+-					  "bat-present",
+-					  "chg-gone",
+-					  "usb-valid",
+-					  "dc-valid";
+-
+-			usb-otg-in-supply = <&pm8941_5vs1>;
+-
+-			chg_otg: otg-vbus { };
+-		};
+-
+-		pm8941_gpios: gpio@c000 {
+-			compatible = "qcom,pm8941-gpio", "qcom,spmi-gpio";
+-			reg = <0xc000>;
+-			gpio-controller;
+-			gpio-ranges = <&pm8941_gpios 0 0 36>;
+-			#gpio-cells = <2>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-
+-			boost_bypass_n_pin: boost-bypass-state {
+-				pins = "gpio21";
+-				function = "normal";
+-			};
+-		};
+-
+-		pm8941_mpps: mpps@a000 {
+-			compatible = "qcom,pm8941-mpp", "qcom,spmi-mpp";
+-			reg = <0xa000>;
+-			gpio-controller;
+-			#gpio-cells = <2>;
+-			gpio-ranges = <&pm8941_mpps 0 0 8>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-
+-		pm8941_temp: temp-alarm@2400 {
+-			compatible = "qcom,spmi-temp-alarm";
+-			reg = <0x2400>;
+-			interrupts = <0 0x24 0 IRQ_TYPE_EDGE_RISING>;
+-			io-channels = <&pm8941_vadc VADC_DIE_TEMP>;
+-			io-channel-names = "thermal";
+-			#thermal-sensor-cells = <0>;
+-		};
+-
+-		pm8941_vadc: adc@3100 {
+-			compatible = "qcom,spmi-vadc";
+-			reg = <0x3100>;
+-			interrupts = <0x0 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			#io-channel-cells = <1>;
+-
+-
+-			channel@6 {
+-				reg = <VADC_VBAT_SNS>;
+-			};
+-
+-			channel@8 {
+-				reg = <VADC_DIE_TEMP>;
+-			};
+-
+-			channel@9 {
+-				reg = <VADC_REF_625MV>;
+-			};
+-
+-			channel@a {
+-				reg = <VADC_REF_1250MV>;
+-			};
+-
+-			channel@e {
+-				reg = <VADC_GND_REF>;
+-			};
+-
+-			channel@f {
+-				reg = <VADC_VDD_VADC>;
+-			};
+-
+-			channel@30 {
+-				reg = <VADC_LR_MUX1_BAT_THERM>;
+-			};
+-		};
+-
+-		pm8941_iadc: adc@3600 {
+-			compatible = "qcom,pm8941-iadc", "qcom,spmi-iadc";
+-			reg = <0x3600>;
+-			interrupts = <0x0 0x36 0x0 IRQ_TYPE_EDGE_RISING>;
+-			qcom,external-resistor-micro-ohms = <10000>;
+-		};
+-
+-		pm8941_coincell: charger@2800 {
+-			compatible = "qcom,pm8941-coincell";
+-			reg = <0x2800>;
+-			status = "disabled";
+-		};
+-	};
+-
+-	pm8941_1: pm8941@1 {
+-		compatible = "qcom,pm8941", "qcom,spmi-pmic";
+-		reg = <0x1 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		pm8941_lpg: pwm {
+-			compatible = "qcom,pm8941-lpg";
+-
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			#pwm-cells = <2>;
+-
+-			status = "disabled";
+-		};
+-
+-		pm8941_vib: vibrator@c000 {
+-			compatible = "qcom,pm8916-vib";
+-			reg = <0xc000>;
+-			status = "disabled";
+-		};
+-
+-		pm8941_wled: wled@d800 {
+-			compatible = "qcom,pm8941-wled";
+-			reg = <0xd800>;
+-			label = "backlight";
+-
+-			status = "disabled";
+-		};
+-
+-		regulators {
+-			compatible = "qcom,pm8941-regulators";
+-			interrupts = <0x1 0x83 0x2 0>, <0x1 0x84 0x2 0>;
+-			interrupt-names = "ocp-5vs1", "ocp-5vs2";
+-			vin_5vs-supply = <&pm8941_5v>;
+-
+-			pm8941_5v: s4 {
+-				regulator-min-microvolt = <5000000>;
+-				regulator-max-microvolt = <5000000>;
+-				regulator-enable-ramp-delay = <500>;
+-			};
+-
+-			pm8941_5vs1: 5vs1 {
+-				regulator-enable-ramp-delay = <1000>;
+-				regulator-pull-down;
+-				regulator-over-current-protection;
+-				qcom,ocp-max-retries = <10>;
+-				qcom,ocp-retry-delay = <30>;
+-				qcom,vs-soft-start-strength = <0>;
+-				regulator-initial-mode = <1>;
+-			};
+-
+-			pm8941_5vs2: 5vs2 {
+-				regulator-enable-ramp-delay = <1000>;
+-				regulator-pull-down;
+-				regulator-over-current-protection;
+-				qcom,ocp-max-retries = <10>;
+-				qcom,ocp-retry-delay = <30>;
+-				qcom,vs-soft-start-strength = <0>;
+-				regulator-initial-mode = <1>;
+-			};
+-		};
+-	};
+-};
+diff --git a/arch/arm/boot/dts/qcom/qcom-pma8084.dtsi b/arch/arm/boot/dts/qcom/qcom-pma8084.dtsi
+deleted file mode 100644
+index 2985f4805b93e..0000000000000
+--- a/arch/arm/boot/dts/qcom/qcom-pma8084.dtsi
++++ /dev/null
+@@ -1,99 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-#include <dt-bindings/iio/qcom,spmi-vadc.h>
+-#include <dt-bindings/interrupt-controller/irq.h>
+-#include <dt-bindings/spmi/spmi.h>
+-
+-&spmi_bus {
+-
+-	pma8084_0: pma8084@0 {
+-		compatible = "qcom,pma8084", "qcom,spmi-pmic";
+-		reg = <0x0 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		rtc@6000 {
+-			compatible = "qcom,pm8941-rtc";
+-			reg = <0x6000>,
+-			      <0x6100>;
+-			reg-names = "rtc", "alarm";
+-			interrupts = <0x0 0x61 0x1 IRQ_TYPE_EDGE_RISING>;
+-		};
+-
+-		pwrkey@800 {
+-			compatible = "qcom,pm8941-pwrkey";
+-			reg = <0x800>;
+-			interrupts = <0x0 0x8 0 IRQ_TYPE_EDGE_BOTH>;
+-			debounce = <15625>;
+-			bias-pull-up;
+-		};
+-
+-		pma8084_gpios: gpio@c000 {
+-			compatible = "qcom,pma8084-gpio", "qcom,spmi-gpio";
+-			reg = <0xc000>;
+-			gpio-controller;
+-			gpio-ranges = <&pma8084_gpios 0 0 22>;
+-			#gpio-cells = <2>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-
+-		pma8084_mpps: mpps@a000 {
+-			compatible = "qcom,pma8084-mpp", "qcom,spmi-mpp";
+-			reg = <0xa000>;
+-			gpio-controller;
+-			#gpio-cells = <2>;
+-			gpio-ranges = <&pma8084_mpps 0 0 8>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-
+-		pma8084_temp: temp-alarm@2400 {
+-			compatible = "qcom,spmi-temp-alarm";
+-			reg = <0x2400>;
+-			interrupts = <0 0x24 0 IRQ_TYPE_EDGE_RISING>;
+-			#thermal-sensor-cells = <0>;
+-			io-channels = <&pma8084_vadc VADC_DIE_TEMP>;
+-			io-channel-names = "thermal";
+-		};
+-
+-		pma8084_vadc: adc@3100 {
+-			compatible = "qcom,spmi-vadc";
+-			reg = <0x3100>;
+-			interrupts = <0x0 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			#io-channel-cells = <1>;
+-
+-			channel@8 {
+-				reg = <VADC_DIE_TEMP>;
+-			};
+-
+-			channel@9 {
+-				reg = <VADC_REF_625MV>;
+-			};
+-
+-			channel@a {
+-				reg = <VADC_REF_1250MV>;
+-			};
+-
+-			channel@c {
+-				reg = <VADC_SPARE1>;
+-			};
+-
+-			channel@e {
+-				reg = <VADC_GND_REF>;
+-			};
+-
+-			channel@f {
+-				reg = <VADC_VDD_VADC>;
+-			};
+-		};
+-	};
+-
+-	pma8084_1: pma8084@1 {
+-		compatible = "qcom,pma8084", "qcom,spmi-pmic";
+-		reg = <0x1 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-	};
+-};
+diff --git a/arch/arm/boot/dts/qcom/qcom-pmx55.dtsi b/arch/arm/boot/dts/qcom/qcom-pmx55.dtsi
+deleted file mode 100644
+index da0851173c699..0000000000000
+--- a/arch/arm/boot/dts/qcom/qcom-pmx55.dtsi
++++ /dev/null
+@@ -1,85 +0,0 @@
+-// SPDX-License-Identifier: BSD-3-Clause
+-
+-/*
+- * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+- * Copyright (c) 2020, Linaro Limited
+- */
+-
+-#include <dt-bindings/iio/qcom,spmi-vadc.h>
+-#include <dt-bindings/interrupt-controller/irq.h>
+-#include <dt-bindings/spmi/spmi.h>
+-
+-&spmi_bus {
+-	pmic@8 {
+-		compatible = "qcom,pmx55", "qcom,spmi-pmic";
+-		reg = <0x8 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		pon@800 {
+-			compatible = "qcom,pm8916-pon";
+-			reg = <0x0800>;
+-
+-			status = "disabled";
+-		};
+-
+-		pmx55_temp: temp-alarm@2400 {
+-			compatible = "qcom,spmi-temp-alarm";
+-			reg = <0x2400>;
+-			interrupts = <0x8 0x24 0x0 IRQ_TYPE_EDGE_BOTH>;
+-			io-channels = <&pmx55_adc ADC5_DIE_TEMP>;
+-			io-channel-names = "thermal";
+-			#thermal-sensor-cells = <0>;
+-		};
+-
+-		pmx55_adc: adc@3100 {
+-			compatible = "qcom,spmi-adc5";
+-			reg = <0x3100>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			#io-channel-cells = <1>;
+-			interrupts = <0x8 0x31 0x0 IRQ_TYPE_EDGE_RISING>;
+-
+-			channel@0 {
+-				reg = <ADC5_REF_GND>;
+-				qcom,pre-scaling = <1 1>;
+-				label = "ref_gnd";
+-			};
+-
+-			channel@1 {
+-				reg = <ADC5_1P25VREF>;
+-				qcom,pre-scaling = <1 1>;
+-				label = "vref_1p25";
+-			};
+-
+-			channel@6 {
+-				reg = <ADC5_DIE_TEMP>;
+-				qcom,pre-scaling = <1 1>;
+-				label = "die_temp";
+-			};
+-
+-			channel@9 {
+-				reg = <ADC5_CHG_TEMP>;
+-				qcom,pre-scaling = <1 1>;
+-				label = "chg_temp";
+-			};
+-		};
+-
+-		pmx55_gpios: gpio@c000 {
+-			compatible = "qcom,pmx55-gpio", "qcom,spmi-gpio";
+-			reg = <0xc000>;
+-			gpio-controller;
+-			gpio-ranges = <&pmx55_gpios 0 0 11>;
+-			#gpio-cells = <2>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-	};
+-
+-	pmic@9 {
+-		compatible = "qcom,pmx55", "qcom,spmi-pmic";
+-		reg = <0x9 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-	};
+-};
+diff --git a/arch/arm/boot/dts/qcom/qcom-pmx65.dtsi b/arch/arm/boot/dts/qcom/qcom-pmx65.dtsi
+deleted file mode 100644
+index 1c7fdf59c1f56..0000000000000
+--- a/arch/arm/boot/dts/qcom/qcom-pmx65.dtsi
++++ /dev/null
+@@ -1,33 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
+- */
+-
+-#include <dt-bindings/interrupt-controller/irq.h>
+-#include <dt-bindings/spmi/spmi.h>
+-
+-&spmi_bus {
+-	pmic@1 {
+-		compatible = "qcom,pmx65", "qcom,spmi-pmic";
+-		reg = <1 SPMI_USID>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-
+-		pmx65_temp: temp-alarm@a00 {
+-			compatible = "qcom,spmi-temp-alarm";
+-			reg = <0xa00>;
+-			interrupts = <0x1 0xa 0x0 IRQ_TYPE_EDGE_BOTH>;
+-			#thermal-sensor-cells = <0>;
+-		};
+-
+-		pmx65_gpios: gpio@8800 {
+-			compatible = "qcom,pmx65-gpio", "qcom,spmi-gpio";
+-			reg = <0x8800>;
+-			gpio-controller;
+-			gpio-ranges = <&pmx65_gpios 0 0 16>;
+-			#gpio-cells = <2>;
+-			interrupt-controller;
+-			#interrupt-cells = <2>;
+-		};
+-	};
+-};
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx55-mtp.dts b/arch/arm/boot/dts/qcom/qcom-sdx55-mtp.dts
+index 7e97ad5803d87..2470693619090 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx55-mtp.dts
++++ b/arch/arm/boot/dts/qcom/qcom-sdx55-mtp.dts
+@@ -9,7 +9,7 @@
+ #include "qcom-sdx55.dtsi"
+ #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+ #include <arm64/qcom/pm8150b.dtsi>
+-#include "qcom-pmx55.dtsi"
++#include "pmx55.dtsi"
+ 
+ / {
+ 	model = "Qualcomm Technologies, Inc. SDX55 MTP";
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx55-t55.dts b/arch/arm/boot/dts/qcom/qcom-sdx55-t55.dts
+index 51058b0652797..082f7ed1a01fb 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx55-t55.dts
++++ b/arch/arm/boot/dts/qcom/qcom-sdx55-t55.dts
+@@ -8,7 +8,7 @@
+ #include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+ #include "qcom-sdx55.dtsi"
+-#include "qcom-pmx55.dtsi"
++#include "pmx55.dtsi"
+ 
+ / {
+ 	model = "Thundercomm T55 Development Kit";
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx55-telit-fn980-tlb.dts b/arch/arm/boot/dts/qcom/qcom-sdx55-telit-fn980-tlb.dts
+index 8fadc6e70692a..e336a15b45c4c 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx55-telit-fn980-tlb.dts
++++ b/arch/arm/boot/dts/qcom/qcom-sdx55-telit-fn980-tlb.dts
+@@ -8,7 +8,7 @@
+ #include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+ #include "qcom-sdx55.dtsi"
+-#include "qcom-pmx55.dtsi"
++#include "pmx55.dtsi"
+ 
+ / {
+ 	model = "Telit FN980 TLB";
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts b/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts
+index 9649c859a2c36..07c10c84eefa1 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts
++++ b/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts
+@@ -12,7 +12,7 @@
+ #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+ #include <arm64/qcom/pmk8350.dtsi>
+ #include <arm64/qcom/pm7250b.dtsi>
+-#include "qcom-pmx65.dtsi"
++#include "pmx65.dtsi"
+ 
+ / {
+ 	model = "Qualcomm Technologies, Inc. SDX65 MTP";
+diff --git a/arch/arm/boot/dts/rockchip/rk3036.dtsi b/arch/arm/boot/dts/rockchip/rk3036.dtsi
+index 78686fc72ce69..c420c7c642cb0 100644
+--- a/arch/arm/boot/dts/rockchip/rk3036.dtsi
++++ b/arch/arm/boot/dts/rockchip/rk3036.dtsi
+@@ -402,12 +402,20 @@
+ 		pinctrl-0 = <&hdmi_ctl>;
+ 		status = "disabled";
+ 
+-		hdmi_in: port {
++		ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+-			hdmi_in_vop: endpoint@0 {
++
++			hdmi_in: port@0 {
+ 				reg = <0>;
+-				remote-endpoint = <&vop_out_hdmi>;
++
++				hdmi_in_vop: endpoint {
++					remote-endpoint = <&vop_out_hdmi>;
++				};
++			};
++
++			hdmi_out: port@1 {
++				reg = <1>;
+ 			};
+ 		};
+ 	};
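The rk3036 HDMI controller is converted to the standard of-graph layout: instead of a single bare port, it now has a ports container with an addressed input port (port@0, fed by the VOP endpoint) plus a newly added output port@1 for the board's sink. As a hypothetical illustration (the connector endpoint below is not part of this patch), a board could wire the new output like so:

	&hdmi_out {
		hdmi_out_con: endpoint {
			remote-endpoint = <&hdmi_con_in>;
		};
	};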
+diff --git a/arch/arm/boot/dts/samsung/exynos4.dtsi b/arch/arm/boot/dts/samsung/exynos4.dtsi
+index f775b9377a38b..7f981b5c0d64b 100644
+--- a/arch/arm/boot/dts/samsung/exynos4.dtsi
++++ b/arch/arm/boot/dts/samsung/exynos4.dtsi
+@@ -203,16 +203,16 @@
+ 
+ 		camera: camera@11800000 {
+ 			compatible = "samsung,fimc";
++			ranges = <0x0 0x11800000 0xa0000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 			#clock-cells = <1>;
+ 			clock-output-names = "cam_a_clkout", "cam_b_clkout";
+-			ranges;
+ 
+-			fimc_0: fimc@11800000 {
++			fimc_0: fimc@0 {
+ 				compatible = "samsung,exynos4210-fimc";
+-				reg = <0x11800000 0x1000>;
++				reg = <0x0 0x1000>;
+ 				interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clock CLK_FIMC0>,
+ 					 <&clock CLK_SCLK_FIMC0>;
+@@ -223,9 +223,9 @@
+ 				status = "disabled";
+ 			};
+ 
+-			fimc_1: fimc@11810000 {
++			fimc_1: fimc@10000 {
+ 				compatible = "samsung,exynos4210-fimc";
+-				reg = <0x11810000 0x1000>;
++				reg = <0x00010000 0x1000>;
+ 				interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clock CLK_FIMC1>,
+ 					 <&clock CLK_SCLK_FIMC1>;
+@@ -236,9 +236,9 @@
+ 				status = "disabled";
+ 			};
+ 
+-			fimc_2: fimc@11820000 {
++			fimc_2: fimc@20000 {
+ 				compatible = "samsung,exynos4210-fimc";
+-				reg = <0x11820000 0x1000>;
++				reg = <0x00020000 0x1000>;
+ 				interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clock CLK_FIMC2>,
+ 					 <&clock CLK_SCLK_FIMC2>;
+@@ -249,9 +249,9 @@
+ 				status = "disabled";
+ 			};
+ 
+-			fimc_3: fimc@11830000 {
++			fimc_3: fimc@30000 {
+ 				compatible = "samsung,exynos4210-fimc";
+-				reg = <0x11830000 0x1000>;
++				reg = <0x00030000 0x1000>;
+ 				interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clock CLK_FIMC3>,
+ 					 <&clock CLK_SCLK_FIMC3>;
+@@ -262,9 +262,9 @@
+ 				status = "disabled";
+ 			};
+ 
+-			csis_0: csis@11880000 {
++			csis_0: csis@80000 {
+ 				compatible = "samsung,exynos4210-csis";
+-				reg = <0x11880000 0x4000>;
++				reg = <0x00080000 0x4000>;
+ 				interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clock CLK_CSIS0>,
+ 					 <&clock CLK_SCLK_CSIS0>;
+@@ -278,9 +278,9 @@
+ 				#size-cells = <0>;
+ 			};
+ 
+-			csis_1: csis@11890000 {
++			csis_1: csis@90000 {
+ 				compatible = "samsung,exynos4210-csis";
+-				reg = <0x11890000 0x4000>;
++				reg = <0x00090000 0x4000>;
+ 				interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clock CLK_CSIS1>,
+ 					 <&clock CLK_SCLK_CSIS1>;
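The exynos4 camera node moves from an empty ranges (identity mapping) to an explicit translation of the form ranges = <child-base parent-base size>, with the children's reg values rewritten as offsets from that child base; the parent address is simply parent-base plus the offset. For example:

	camera: camera@11800000 {
		ranges = <0x0 0x11800000 0xa0000>;	/* child 0x0 -> parent 0x11800000 */

		fimc_1: fimc@10000 {
			reg = <0x00010000 0x1000>;	/* 0x11800000 + 0x10000 = 0x11810000,
							   the old absolute address */
		};
	};

The exynos4x12 and s5pv210 hunks below apply the same scheme with their own base addresses and window sizes.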
+diff --git a/arch/arm/boot/dts/samsung/exynos4x12.dtsi b/arch/arm/boot/dts/samsung/exynos4x12.dtsi
+index 84c1db221c984..83d9d0a0a6175 100644
+--- a/arch/arm/boot/dts/samsung/exynos4x12.dtsi
++++ b/arch/arm/boot/dts/samsung/exynos4x12.dtsi
+@@ -451,14 +451,15 @@
+ };
+ 
+ &camera {
++	ranges = <0x0 0x11800000 0xba1000>;
+ 	clocks = <&clock CLK_SCLK_CAM0>, <&clock CLK_SCLK_CAM1>,
+ 		 <&clock CLK_PIXELASYNCM0>, <&clock CLK_PIXELASYNCM1>;
+ 	clock-names = "sclk_cam0", "sclk_cam1", "pxl_async0", "pxl_async1";
+ 
+ 	/* fimc_[0-3] are configured outside, under phandles */
+-	fimc_lite_0: fimc-lite@12390000 {
++	fimc_lite_0: fimc-lite@b90000 {
+ 		compatible = "samsung,exynos4212-fimc-lite";
+-		reg = <0x12390000 0x1000>;
++		reg = <0x00b90000 0x1000>;
+ 		interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
+ 		power-domains = <&pd_isp>;
+ 		clocks = <&isp_clock CLK_ISP_FIMC_LITE0>;
+@@ -467,9 +468,9 @@
+ 		status = "disabled";
+ 	};
+ 
+-	fimc_lite_1: fimc-lite@123a0000 {
++	fimc_lite_1: fimc-lite@ba0000 {
+ 		compatible = "samsung,exynos4212-fimc-lite";
+-		reg = <0x123a0000 0x1000>;
++		reg = <0x00ba0000 0x1000>;
+ 		interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
+ 		power-domains = <&pd_isp>;
+ 		clocks = <&isp_clock CLK_ISP_FIMC_LITE1>;
+@@ -478,9 +479,9 @@
+ 		status = "disabled";
+ 	};
+ 
+-	fimc_is: fimc-is@12000000 {
++	fimc_is: fimc-is@800000 {
+ 		compatible = "samsung,exynos4212-fimc-is";
+-		reg = <0x12000000 0x260000>;
++		reg = <0x00800000 0x260000>;
+ 		interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>;
+ 		power-domains = <&pd_isp>;
+@@ -525,9 +526,9 @@
+ 			reg = <0x10020000 0x3000>;
+ 		};
+ 
+-		i2c1_isp: i2c-isp@12140000 {
++		i2c1_isp: i2c-isp@940000 {
+ 			compatible = "samsung,exynos4212-i2c-isp";
+-			reg = <0x12140000 0x100>;
++			reg = <0x00940000 0x100>;
+ 			clocks = <&isp_clock CLK_ISP_I2C1_ISP>;
+ 			clock-names = "i2c_isp";
+ 			#address-cells = <1>;
+diff --git a/arch/arm/boot/dts/samsung/s5pv210.dtsi b/arch/arm/boot/dts/samsung/s5pv210.dtsi
+index f7de5b5f2f383..ed560c9a3aa1e 100644
+--- a/arch/arm/boot/dts/samsung/s5pv210.dtsi
++++ b/arch/arm/boot/dts/samsung/s5pv210.dtsi
+@@ -549,17 +549,17 @@
+ 
+ 		camera: camera@fa600000 {
+ 			compatible = "samsung,fimc";
++			ranges = <0x0 0xfa600000 0xe01000>;
+ 			clocks = <&clocks SCLK_CAM0>, <&clocks SCLK_CAM1>;
+ 			clock-names = "sclk_cam0", "sclk_cam1";
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 			#clock-cells = <1>;
+ 			clock-output-names = "cam_a_clkout", "cam_b_clkout";
+-			ranges;
+ 
+-			csis0: csis@fa600000 {
++			csis0: csis@0 {
+ 				compatible = "samsung,s5pv210-csis";
+-				reg = <0xfa600000 0x4000>;
++				reg = <0x00000000 0x4000>;
+ 				interrupt-parent = <&vic2>;
+ 				interrupts = <29>;
+ 				clocks = <&clocks CLK_CSIS>,
+@@ -572,9 +572,9 @@
+ 				#size-cells = <0>;
+ 			};
+ 
+-			fimc0: fimc@fb200000 {
++			fimc0: fimc@c00000 {
+ 				compatible = "samsung,s5pv210-fimc";
+-				reg = <0xfb200000 0x1000>;
++				reg = <0x00c00000 0x1000>;
+ 				interrupts = <5>;
+ 				interrupt-parent = <&vic2>;
+ 				clocks = <&clocks CLK_FIMC0>,
+@@ -586,9 +586,9 @@
+ 				samsung,cam-if;
+ 			};
+ 
+-			fimc1: fimc@fb300000 {
++			fimc1: fimc@d00000 {
+ 				compatible = "samsung,s5pv210-fimc";
+-				reg = <0xfb300000 0x1000>;
++				reg = <0x00d00000 0x1000>;
+ 				interrupt-parent = <&vic2>;
+ 				interrupts = <6>;
+ 				clocks = <&clocks CLK_FIMC1>,
+@@ -602,9 +602,9 @@
+ 				samsung,lcd-wb;
+ 			};
+ 
+-			fimc2: fimc@fb400000 {
++			fimc2: fimc@e00000 {
+ 				compatible = "samsung,s5pv210-fimc";
+-				reg = <0xfb400000 0x1000>;
++				reg = <0x00e00000 0x1000>;
+ 				interrupt-parent = <&vic2>;
+ 				interrupts = <7>;
+ 				clocks = <&clocks CLK_FIMC2>,
+diff --git a/arch/arm/include/asm/irq_work.h b/arch/arm/include/asm/irq_work.h
+index 3149e4dc1b540..8895999834cc0 100644
+--- a/arch/arm/include/asm/irq_work.h
++++ b/arch/arm/include/asm/irq_work.h
+@@ -9,6 +9,4 @@ static inline bool arch_irq_work_has_interrupt(void)
+ 	return is_smp();
+ }
+ 
+-extern void arch_irq_work_raise(void);
+-
+ #endif /* _ASM_ARM_IRQ_WORK_H */
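The arm irq_work.h hunk drops the per-architecture extern declaration of arch_irq_work_raise(); the prototype is presumably provided by a shared generic header now, making the arch-local copy redundant.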
+diff --git a/arch/arm64/boot/dts/amlogic/meson-s4-s805x2-aq222.dts b/arch/arm64/boot/dts/amlogic/meson-s4-s805x2-aq222.dts
+index c1f322c739826..b1b81ac032009 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-s4-s805x2-aq222.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-s4-s805x2-aq222.dts
+@@ -15,7 +15,7 @@
+ 	#size-cells = <2>;
+ 
+ 	aliases {
+-		serial0 = &uart_B;
++		serial0 = &uart_b;
+ 	};
+ 
+ 	memory@0 {
+@@ -25,7 +25,7 @@
+ 
+ };
+ 
+-&uart_B {
++&uart_b {
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/amlogic/meson-s4.dtsi b/arch/arm64/boot/dts/amlogic/meson-s4.dtsi
+index e0cfc54ebccb1..dac18eb634d7b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-s4.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-s4.dtsi
+@@ -126,14 +126,14 @@
+ 					<10 11 12 13 14 15 16 17 18 19 20 21>;
+ 			};
+ 
+-			uart_B: serial@7a000 {
++			uart_b: serial@7a000 {
+ 				compatible = "amlogic,meson-s4-uart",
+ 					     "amlogic,meson-ao-uart";
+ 				reg = <0x0 0x7a000 0x0 0x18>;
+ 				interrupts = <GIC_SPI 169 IRQ_TYPE_EDGE_RISING>;
+-				status = "disabled";
+ 				clocks = <&xtal>, <&xtal>, <&xtal>;
+ 				clock-names = "xtal", "pclk", "baud";
++				status = "disabled";
+ 			};
+ 
+ 			reset: reset-controller@2000 {
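The meson-s4 hunks above rename the uart_B label (and the serial0 alias that references it) to lowercase uart_b, matching the usual lowercase-label convention, and move status = "disabled" to the end of the node; neither change alters the resulting hardware description.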
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
+index f9abef8dcc948..870bb380a40a6 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
+@@ -126,32 +126,32 @@
+ 
+ 	reset-gpios = <&gpiosb 23 GPIO_ACTIVE_LOW>;
+ 
+-	ports {
+-		switch0port1: port@1 {
++	ethernet-ports {
++		switch0port1: ethernet-port@1 {
+ 			reg = <1>;
+ 			label = "lan0";
+ 			phy-handle = <&switch0phy0>;
+ 		};
+ 
+-		switch0port2: port@2 {
++		switch0port2: ethernet-port@2 {
+ 			reg = <2>;
+ 			label = "lan1";
+ 			phy-handle = <&switch0phy1>;
+ 		};
+ 
+-		switch0port3: port@3 {
++		switch0port3: ethernet-port@3 {
+ 			reg = <3>;
+ 			label = "lan2";
+ 			phy-handle = <&switch0phy2>;
+ 		};
+ 
+-		switch0port4: port@4 {
++		switch0port4: ethernet-port@4 {
+ 			reg = <4>;
+ 			label = "lan3";
+ 			phy-handle = <&switch0phy3>;
+ 		};
+ 
+-		switch0port5: port@5 {
++		switch0port5: ethernet-port@5 {
+ 			reg = <5>;
+ 			label = "wan";
+ 			phy-handle = <&extphy>;
+@@ -160,7 +160,7 @@
+ 	};
+ 
+ 	mdio {
+-		switch0phy3: switch0phy3@14 {
++		switch0phy3: ethernet-phy@14 {
+ 			reg = <0x14>;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+index 49cbdb55b4b36..fed2dcecb323f 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+@@ -145,19 +145,17 @@
+ };
+ 
+ &mdio {
+-	switch0: switch0@1 {
++	switch0: ethernet-switch@1 {
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <1>;
+ 
+ 		dsa,member = <0 0>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0port0: port@0 {
++			switch0port0: ethernet-port@0 {
+ 				reg = <0>;
+ 				label = "cpu";
+ 				ethernet = <&eth0>;
+@@ -168,19 +166,19 @@
+ 				};
+ 			};
+ 
+-			switch0port1: port@1 {
++			switch0port1: ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "wan";
+ 				phy-handle = <&switch0phy0>;
+ 			};
+ 
+-			switch0port2: port@2 {
++			switch0port2: ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan0";
+ 				phy-handle = <&switch0phy1>;
+ 			};
+ 
+-			switch0port3: port@3 {
++			switch0port3: ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan1";
+ 				phy-handle = <&switch0phy2>;
+@@ -192,13 +190,13 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy0: switch0phy0@11 {
++			switch0phy0: ethernet-phy@11 {
+ 				reg = <0x11>;
+ 			};
+-			switch0phy1: switch0phy1@12 {
++			switch0phy1: ethernet-phy@12 {
+ 				reg = <0x12>;
+ 			};
+-			switch0phy2: switch0phy2@13 {
++			switch0phy2: ethernet-phy@13 {
+ 				reg = <0x13>;
+ 			};
+ 		};
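The Espressobin hunks above and the similar Marvell hunks that follow (GL.iNet MV1000, Turris Mox, MOCHAbin, Clearfog GT-8K, CN9130) align the DSA switch subtrees with the generic node names the binding schema expects: ethernet-switch, ethernet-ports, ethernet-port@N and ethernet-phy@N (on Turris Mox the top-level switch node names are deliberately left alone, as the new comments there explain). Unit addresses and labels are kept, so existing phy-handle and ethernet references still resolve, and the redundant #address-cells/#size-cells on the switch node are dropped where the ports container already declares them. Condensed shape of the result:

	switch0: ethernet-switch@1 {
		compatible = "marvell,mv88e6085";
		reg = <1>;

		ethernet-ports {
			#address-cells = <1>;
			#size-cells = <0>;

			ethernet-port@1 {
				reg = <1>;
				label = "wan";
				phy-handle = <&switch0phy0>;
			};
		};
	};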
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-gl-mv1000.dts b/arch/arm64/boot/dts/marvell/armada-3720-gl-mv1000.dts
+index b1b45b4fa9d4a..63fbc83521616 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-gl-mv1000.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-gl-mv1000.dts
+@@ -152,31 +152,29 @@
+ };
+ 
+ &mdio {
+-	switch0: switch0@1 {
++	switch0: ethernet-switch@1 {
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <1>;
+ 
+ 		dsa,member = <0 0>;
+ 
+-		ports: ports {
++		ports: ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@0 {
++			ethernet-port@0 {
+ 				reg = <0>;
+ 				label = "cpu";
+ 				ethernet = <&eth0>;
+ 			};
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "wan";
+ 				phy-handle = <&switch0phy0>;
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan0";
+ 				phy-handle = <&switch0phy1>;
+@@ -185,7 +183,7 @@
+ 				nvmem-cell-names = "mac-address";
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan1";
+ 				phy-handle = <&switch0phy2>;
+@@ -199,13 +197,13 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy0: switch0phy0@11 {
++			switch0phy0: ethernet-phy@11 {
+ 				reg = <0x11>;
+ 			};
+-			switch0phy1: switch0phy1@12 {
++			switch0phy1: ethernet-phy@12 {
+ 				reg = <0x12>;
+ 			};
+-			switch0phy2: switch0phy2@13 {
++			switch0phy2: ethernet-phy@13 {
+ 				reg = <0x13>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index 805ef2d79b401..4249acdec5ae2 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -304,7 +304,13 @@
+ 		reg = <1>;
+ 	};
+ 
+-	/* switch nodes are enabled by U-Boot if modules are present */
++	/*
++	 * NOTE: switch nodes are enabled by U-Boot if modules are present
++	 * DO NOT change this node name (switch0@10) even if it is not following
++	 * conventions! Deployed U-Boot binaries are explicitly looking for
++	 * this node in order to augment the device tree!
++	 * Also do not touch the "ports" or "port@n" nodes. These are also ABI.
++	 */
+ 	switch0@10 {
+ 		compatible = "marvell,mv88e6190";
+ 		reg = <0x10>;
+@@ -317,35 +323,35 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy1: switch0phy1@1 {
++			switch0phy1: ethernet-phy@1 {
+ 				reg = <0x1>;
+ 			};
+ 
+-			switch0phy2: switch0phy2@2 {
++			switch0phy2: ethernet-phy@2 {
+ 				reg = <0x2>;
+ 			};
+ 
+-			switch0phy3: switch0phy3@3 {
++			switch0phy3: ethernet-phy@3 {
+ 				reg = <0x3>;
+ 			};
+ 
+-			switch0phy4: switch0phy4@4 {
++			switch0phy4: ethernet-phy@4 {
+ 				reg = <0x4>;
+ 			};
+ 
+-			switch0phy5: switch0phy5@5 {
++			switch0phy5: ethernet-phy@5 {
+ 				reg = <0x5>;
+ 			};
+ 
+-			switch0phy6: switch0phy6@6 {
++			switch0phy6: ethernet-phy@6 {
+ 				reg = <0x6>;
+ 			};
+ 
+-			switch0phy7: switch0phy7@7 {
++			switch0phy7: ethernet-phy@7 {
+ 				reg = <0x7>;
+ 			};
+ 
+-			switch0phy8: switch0phy8@8 {
++			switch0phy8: ethernet-phy@8 {
+ 				reg = <0x8>;
+ 			};
+ 		};
+@@ -430,6 +436,7 @@
+ 		};
+ 	};
+ 
++	/* NOTE: this node name is ABI, don't change it! */
+ 	switch0@2 {
+ 		compatible = "marvell,mv88e6085";
+ 		reg = <0x2>;
+@@ -442,19 +449,19 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy1_topaz: switch0phy1@11 {
++			switch0phy1_topaz: ethernet-phy@11 {
+ 				reg = <0x11>;
+ 			};
+ 
+-			switch0phy2_topaz: switch0phy2@12 {
++			switch0phy2_topaz: ethernet-phy@12 {
+ 				reg = <0x12>;
+ 			};
+ 
+-			switch0phy3_topaz: switch0phy3@13 {
++			switch0phy3_topaz: ethernet-phy@13 {
+ 				reg = <0x13>;
+ 			};
+ 
+-			switch0phy4_topaz: switch0phy4@14 {
++			switch0phy4_topaz: ethernet-phy@14 {
+ 				reg = <0x14>;
+ 			};
+ 		};
+@@ -497,6 +504,7 @@
+ 		};
+ 	};
+ 
++	/* NOTE: this node name is ABI, don't change it! */
+ 	switch1@11 {
+ 		compatible = "marvell,mv88e6190";
+ 		reg = <0x11>;
+@@ -509,35 +517,35 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch1phy1: switch1phy1@1 {
++			switch1phy1: ethernet-phy@1 {
+ 				reg = <0x1>;
+ 			};
+ 
+-			switch1phy2: switch1phy2@2 {
++			switch1phy2: ethernet-phy@2 {
+ 				reg = <0x2>;
+ 			};
+ 
+-			switch1phy3: switch1phy3@3 {
++			switch1phy3: ethernet-phy@3 {
+ 				reg = <0x3>;
+ 			};
+ 
+-			switch1phy4: switch1phy4@4 {
++			switch1phy4: ethernet-phy@4 {
+ 				reg = <0x4>;
+ 			};
+ 
+-			switch1phy5: switch1phy5@5 {
++			switch1phy5: ethernet-phy@5 {
+ 				reg = <0x5>;
+ 			};
+ 
+-			switch1phy6: switch1phy6@6 {
++			switch1phy6: ethernet-phy@6 {
+ 				reg = <0x6>;
+ 			};
+ 
+-			switch1phy7: switch1phy7@7 {
++			switch1phy7: ethernet-phy@7 {
+ 				reg = <0x7>;
+ 			};
+ 
+-			switch1phy8: switch1phy8@8 {
++			switch1phy8: ethernet-phy@8 {
+ 				reg = <0x8>;
+ 			};
+ 		};
+@@ -622,6 +630,7 @@
+ 		};
+ 	};
+ 
++	/* NOTE: this node name is ABI, don't change it! */
+ 	switch1@2 {
+ 		compatible = "marvell,mv88e6085";
+ 		reg = <0x2>;
+@@ -634,19 +643,19 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch1phy1_topaz: switch1phy1@11 {
++			switch1phy1_topaz: ethernet-phy@11 {
+ 				reg = <0x11>;
+ 			};
+ 
+-			switch1phy2_topaz: switch1phy2@12 {
++			switch1phy2_topaz: ethernet-phy@12 {
+ 				reg = <0x12>;
+ 			};
+ 
+-			switch1phy3_topaz: switch1phy3@13 {
++			switch1phy3_topaz: ethernet-phy@13 {
+ 				reg = <0x13>;
+ 			};
+ 
+-			switch1phy4_topaz: switch1phy4@14 {
++			switch1phy4_topaz: ethernet-phy@14 {
+ 				reg = <0x14>;
+ 			};
+ 		};
+@@ -689,6 +698,7 @@
+ 		};
+ 	};
+ 
++	/* NOTE: this node name is ABI, don't change it! */
+ 	switch2@12 {
+ 		compatible = "marvell,mv88e6190";
+ 		reg = <0x12>;
+@@ -701,35 +711,35 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch2phy1: switch2phy1@1 {
++			switch2phy1: ethernet-phy@1 {
+ 				reg = <0x1>;
+ 			};
+ 
+-			switch2phy2: switch2phy2@2 {
++			switch2phy2: ethernet-phy@2 {
+ 				reg = <0x2>;
+ 			};
+ 
+-			switch2phy3: switch2phy3@3 {
++			switch2phy3: ethernet-phy@3 {
+ 				reg = <0x3>;
+ 			};
+ 
+-			switch2phy4: switch2phy4@4 {
++			switch2phy4: ethernet-phy@4 {
+ 				reg = <0x4>;
+ 			};
+ 
+-			switch2phy5: switch2phy5@5 {
++			switch2phy5: ethernet-phy@5 {
+ 				reg = <0x5>;
+ 			};
+ 
+-			switch2phy6: switch2phy6@6 {
++			switch2phy6: ethernet-phy@6 {
+ 				reg = <0x6>;
+ 			};
+ 
+-			switch2phy7: switch2phy7@7 {
++			switch2phy7: ethernet-phy@7 {
+ 				reg = <0x7>;
+ 			};
+ 
+-			switch2phy8: switch2phy8@8 {
++			switch2phy8: ethernet-phy@8 {
+ 				reg = <0x8>;
+ 			};
+ 		};
+@@ -805,6 +815,7 @@
+ 		};
+ 	};
+ 
++	/* NOTE: this node name is ABI, don't change it! */
+ 	switch2@2 {
+ 		compatible = "marvell,mv88e6085";
+ 		reg = <0x2>;
+@@ -817,19 +828,19 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch2phy1_topaz: switch2phy1@11 {
++			switch2phy1_topaz: ethernet-phy@11 {
+ 				reg = <0x11>;
+ 			};
+ 
+-			switch2phy2_topaz: switch2phy2@12 {
++			switch2phy2_topaz: ethernet-phy@12 {
+ 				reg = <0x12>;
+ 			};
+ 
+-			switch2phy3_topaz: switch2phy3@13 {
++			switch2phy3_topaz: ethernet-phy@13 {
+ 				reg = <0x13>;
+ 			};
+ 
+-			switch2phy4_topaz: switch2phy4@14 {
++			switch2phy4_topaz: ethernet-phy@14 {
+ 				reg = <0x14>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/marvell/armada-7040-mochabin.dts b/arch/arm64/boot/dts/marvell/armada-7040-mochabin.dts
+index 48202810bf786..40b7ee7ead72e 100644
+--- a/arch/arm64/boot/dts/marvell/armada-7040-mochabin.dts
++++ b/arch/arm64/boot/dts/marvell/armada-7040-mochabin.dts
+@@ -301,10 +301,8 @@
+ 	};
+ 
+ 	/* 88E6141 Topaz switch */
+-	switch: switch@3 {
++	switch: ethernet-switch@3 {
+ 		compatible = "marvell,mv88e6085";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <3>;
+ 
+ 		pinctrl-names = "default";
+@@ -314,35 +312,35 @@
+ 		interrupt-parent = <&cp0_gpio1>;
+ 		interrupts = <1 IRQ_TYPE_LEVEL_LOW>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			swport1: port@1 {
++			swport1: ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan0";
+ 				phy-handle = <&swphy1>;
+ 			};
+ 
+-			swport2: port@2 {
++			swport2: ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan1";
+ 				phy-handle = <&swphy2>;
+ 			};
+ 
+-			swport3: port@3 {
++			swport3: ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan2";
+ 				phy-handle = <&swphy3>;
+ 			};
+ 
+-			swport4: port@4 {
++			swport4: ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "lan3";
+ 				phy-handle = <&swphy4>;
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				label = "cpu";
+ 				ethernet = <&cp0_eth1>;
+@@ -355,19 +353,19 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			swphy1: swphy1@17 {
++			swphy1: ethernet-phy@17 {
+ 				reg = <17>;
+ 			};
+ 
+-			swphy2: swphy2@18 {
++			swphy2: ethernet-phy@18 {
+ 				reg = <18>;
+ 			};
+ 
+-			swphy3: swphy3@19 {
++			swphy3: ethernet-phy@19 {
+ 				reg = <19>;
+ 			};
+ 
+-			swphy4: swphy4@20 {
++			swphy4: ethernet-phy@20 {
+ 				reg = <20>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts b/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
+index 4125202028c85..67892f0d28633 100644
+--- a/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
++++ b/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
+@@ -497,42 +497,42 @@
+ 		reset-deassert-us = <10000>;
+ 	};
+ 
+-	switch0: switch0@4 {
++	switch0: ethernet-switch@4 {
+ 		compatible = "marvell,mv88e6085";
+ 		reg = <4>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&cp1_switch_reset_pins>;
+ 		reset-gpios = <&cp1_gpio1 24 GPIO_ACTIVE_LOW>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "lan2";
+ 				phy-handle = <&switch0phy0>;
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "lan1";
+ 				phy-handle = <&switch0phy1>;
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "lan4";
+ 				phy-handle = <&switch0phy2>;
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "lan3";
+ 				phy-handle = <&switch0phy3>;
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				label = "cpu";
+ 				ethernet = <&cp1_eth2>;
+@@ -545,19 +545,19 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy0: switch0phy0@11 {
++			switch0phy0: ethernet-phy@11 {
+ 				reg = <0x11>;
+ 			};
+ 
+-			switch0phy1: switch0phy1@12 {
++			switch0phy1: ethernet-phy@12 {
+ 				reg = <0x12>;
+ 			};
+ 
+-			switch0phy2: switch0phy2@13 {
++			switch0phy2: ethernet-phy@13 {
+ 				reg = <0x13>;
+ 			};
+ 
+-			switch0phy3: switch0phy3@14 {
++			switch0phy3: ethernet-phy@14 {
+ 				reg = <0x14>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/marvell/cn9130-crb.dtsi b/arch/arm64/boot/dts/marvell/cn9130-crb.dtsi
+index 47d45ff3d6f57..6fcc34f7b4647 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130-crb.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130-crb.dtsi
+@@ -207,11 +207,9 @@
+ 		reg = <0>;
+ 	};
+ 
+-	switch6: switch0@6 {
++	switch6: ethernet-switch@6 {
+ 		/* Actual device is MV88E6393X */
+ 		compatible = "marvell,mv88e6190";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		reg = <6>;
+ 		interrupt-parent = <&cp0_gpio1>;
+ 		interrupts = <28 IRQ_TYPE_LEVEL_LOW>;
+@@ -220,59 +218,59 @@
+ 
+ 		dsa,member = <0 0>;
+ 
+-		ports {
++		ethernet-ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			port@1 {
++			ethernet-port@1 {
+ 				reg = <1>;
+ 				label = "p1";
+ 				phy-handle = <&switch0phy1>;
+ 			};
+ 
+-			port@2 {
++			ethernet-port@2 {
+ 				reg = <2>;
+ 				label = "p2";
+ 				phy-handle = <&switch0phy2>;
+ 			};
+ 
+-			port@3 {
++			ethernet-port@3 {
+ 				reg = <3>;
+ 				label = "p3";
+ 				phy-handle = <&switch0phy3>;
+ 			};
+ 
+-			port@4 {
++			ethernet-port@4 {
+ 				reg = <4>;
+ 				label = "p4";
+ 				phy-handle = <&switch0phy4>;
+ 			};
+ 
+-			port@5 {
++			ethernet-port@5 {
+ 				reg = <5>;
+ 				label = "p5";
+ 				phy-handle = <&switch0phy5>;
+ 			};
+ 
+-			port@6 {
++			ethernet-port@6 {
+ 				reg = <6>;
+ 				label = "p6";
+ 				phy-handle = <&switch0phy6>;
+ 			};
+ 
+-			port@7 {
++			ethernet-port@7 {
+ 				reg = <7>;
+ 				label = "p7";
+ 				phy-handle = <&switch0phy7>;
+ 			};
+ 
+-			port@8 {
++			ethernet-port@8 {
+ 				reg = <8>;
+ 				label = "p8";
+ 				phy-handle = <&switch0phy8>;
+ 			};
+ 
+-			port@9 {
++			ethernet-port@9 {
+ 				reg = <9>;
+ 				label = "p9";
+ 				phy-mode = "10gbase-r";
+@@ -280,7 +278,7 @@
+ 				managed = "in-band-status";
+ 			};
+ 
+-			port@a {
++			ethernet-port@a {
+ 				reg = <10>;
+ 				ethernet = <&cp0_eth0>;
+ 				phy-mode = "10gbase-r";
+@@ -293,35 +291,35 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			switch0phy1: switch0phy1@1 {
++			switch0phy1: ethernet-phy@1 {
+ 				reg = <0x1>;
+ 			};
+ 
+-			switch0phy2: switch0phy2@2 {
++			switch0phy2: ethernet-phy@2 {
+ 				reg = <0x2>;
+ 			};
+ 
+-			switch0phy3: switch0phy3@3 {
++			switch0phy3: ethernet-phy@3 {
+ 				reg = <0x3>;
+ 			};
+ 
+-			switch0phy4: switch0phy4@4 {
++			switch0phy4: ethernet-phy@4 {
+ 				reg = <0x4>;
+ 			};
+ 
+-			switch0phy5: switch0phy5@5 {
++			switch0phy5: ethernet-phy@5 {
+ 				reg = <0x5>;
+ 			};
+ 
+-			switch0phy6: switch0phy6@6 {
++			switch0phy6: ethernet-phy@6 {
+ 				reg = <0x6>;
+ 			};
+ 
+-			switch0phy7: switch0phy7@7 {
++			switch0phy7: ethernet-phy@7 {
+ 				reg = <0x7>;
+ 			};
+ 
+-			switch0phy8: switch0phy8@8 {
++			switch0phy8: ethernet-phy@8 {
+ 				reg = <0x8>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 6ba9da9e6a8b9..fa8ec92ce4904 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -444,6 +444,19 @@
+ 		reg = <0x0 0x80000000 0x0 0x0>;
+ 	};
+ 
++	etm {
++		compatible = "qcom,coresight-remote-etm";
++
++		out-ports {
++			port {
++				modem_etm_out_funnel_in2: endpoint {
++					remote-endpoint =
++					  <&funnel_in2_in_modem_etm>;
++				};
++			};
++		};
++	};
++
+ 	psci {
+ 		compatible = "arm,psci-1.0";
+ 		method = "smc";
+@@ -2644,6 +2657,14 @@
+ 			clocks = <&rpmcc RPM_QDSS_CLK>, <&rpmcc RPM_QDSS_A_CLK>;
+ 			clock-names = "apb_pclk", "atclk";
+ 
++			in-ports {
++				port {
++					funnel_in2_in_modem_etm: endpoint {
++						remote-endpoint =
++						  <&modem_etm_out_funnel_in2>;
++					};
++				};
++			};
+ 
+ 			out-ports {
+ 				port {
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index b485bf925ce61..ebc5ba1b369e9 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -2031,9 +2031,11 @@
+ 
+ 			cpu = <&CPU4>;
+ 
+-			port {
+-				etm4_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in4>;
++			out-ports {
++				port {
++					etm4_out: endpoint {
++						remote-endpoint = <&apss_funnel_in4>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -2048,9 +2050,11 @@
+ 
+ 			cpu = <&CPU5>;
+ 
+-			port {
+-				etm5_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in5>;
++			out-ports {
++				port {
++					etm5_out: endpoint {
++						remote-endpoint = <&apss_funnel_in5>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -2065,9 +2069,11 @@
+ 
+ 			cpu = <&CPU6>;
+ 
+-			port {
+-				etm6_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in6>;
++			out-ports {
++				port {
++					etm6_out: endpoint {
++						remote-endpoint = <&apss_funnel_in6>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -2082,9 +2088,11 @@
+ 
+ 			cpu = <&CPU7>;
+ 
+-			port {
+-				etm7_out: endpoint {
+-					remote-endpoint = <&apss_funnel_in7>;
++			out-ports {
++				port {
++					etm7_out: endpoint {
++						remote-endpoint = <&apss_funnel_in7>;
++					};
+ 				};
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 91169e438b463..9e594d21ecd80 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -3545,11 +3545,8 @@
+ 			};
+ 
+ 			in-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 
+-				port@1 {
+-					reg = <1>;
++				port {
+ 					etf_in: endpoint {
+ 						remote-endpoint =
+ 						  <&merge_funnel_out>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index e302eb7821af0..3221478663ac6 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -2957,11 +2957,8 @@
+ 			};
+ 
+ 			in-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 
+-				port@1 {
+-					reg = <1>;
++				port {
+ 					replicator1_in: endpoint {
+ 						remote-endpoint = <&replicator_out1>;
+ 					};
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 72db75ca77310..d59a99ca872c0 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -3095,11 +3095,8 @@
+ 			clock-names = "apb_pclk";
+ 
+ 			out-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 
+-				port@0 {
+-					reg = <0>;
++				port {
+ 					tpda_out_funnel_qatb: endpoint {
+ 						remote-endpoint = <&funnel_qatb_in_tpda>;
+ 					};
+@@ -3142,11 +3139,7 @@
+ 			};
+ 
+ 			in-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				port@0 {
+-					reg = <0>;
++				port {
+ 					funnel_qatb_in_tpda: endpoint {
+ 						remote-endpoint = <&tpda_out_funnel_qatb>;
+ 					};
+@@ -3355,11 +3348,8 @@
+ 			};
+ 
+ 			in-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 
+-				port@0 {
+-					reg = <0>;
++				port {
+ 					etf_in_funnel_swao_out: endpoint {
+ 						remote-endpoint = <&funnel_swao_out_etf>;
+ 					};
+@@ -3443,8 +3433,6 @@
+ 			clock-names = "apb_pclk";
+ 
+ 			out-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 				port {
+ 					tpdm_mm_out_tpda9: endpoint {
+ 						remote-endpoint = <&tpda_9_in_tpdm_mm>;
+@@ -3710,11 +3698,7 @@
+ 			};
+ 
+ 			in-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				port@0 {
+-					reg = <0>;
++				port {
+ 					funnel_apss_merg_in_funnel_apss: endpoint {
+ 					remote-endpoint = <&funnel_apss_out_funnel_apss_merg>;
+ 					};
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 1d597af15bb3e..a72f3c470089b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -2021,7 +2021,7 @@
+ 			compatible = "qcom,sm8350-mpss-pas";
+ 			reg = <0x0 0x04080000 0x0 0x4040>;
+ 
+-			interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_modem_in 1 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_modem_in 2 IRQ_TYPE_EDGE_RISING>,
+@@ -2063,7 +2063,7 @@
+ 			compatible = "qcom,sm8350-slpi-pas";
+ 			reg = <0 0x05c00000 0 0x4000>;
+ 
+-			interrupts-extended = <&pdc 9 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&pdc 9 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_slpi_in 0 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_slpi_in 1 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_slpi_in 2 IRQ_TYPE_EDGE_RISING>,
+@@ -3207,7 +3207,7 @@
+ 			compatible = "qcom,sm8350-adsp-pas";
+ 			reg = <0 0x17300000 0 0x100>;
+ 
+-			interrupts-extended = <&pdc 6 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+@@ -3512,7 +3512,7 @@
+ 			compatible = "qcom,sm8350-cdsp-pas";
+ 			reg = <0 0x98900000 0 0x1400000>;
+ 
+-			interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp_in 1 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp_in 2 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index dc904ccb3d6c2..f82480570d4b6 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -2160,7 +2160,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		swr4: soundwire-controller@31f0000 {
++		swr4: soundwire@31f0000 {
+ 			compatible = "qcom,soundwire-v1.7.0";
+ 			reg = <0 0x031f0000 0 0x2000>;
+ 			interrupts = <GIC_SPI 171 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2208,7 +2208,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		swr1: soundwire-controller@3210000 {
++		swr1: soundwire@3210000 {
+ 			compatible = "qcom,soundwire-v1.7.0";
+ 			reg = <0 0x03210000 0 0x2000>;
+ 			interrupts = <GIC_SPI 155 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2275,7 +2275,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		swr0: soundwire-controller@3250000 {
++		swr0: soundwire@3250000 {
+ 			compatible = "qcom,soundwire-v1.7.0";
+ 			reg = <0 0x03250000 0 0x2000>;
+ 			interrupts = <GIC_SPI 170 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2302,7 +2302,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		swr2: soundwire-controller@33b0000 {
++		swr2: soundwire@33b0000 {
+ 			compatible = "qcom,soundwire-v1.7.0";
+ 			reg = <0 0x033b0000 0 0x2000>;
+ 			interrupts = <GIC_SPI 496 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 5cf813a579d58..e15564ed54cef 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -2060,7 +2060,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		swr3: soundwire-controller@6ab0000 {
++		swr3: soundwire@6ab0000 {
+ 			compatible = "qcom,soundwire-v2.0.0";
+ 			reg = <0 0x06ab0000 0 0x10000>;
+ 			interrupts = <GIC_SPI 171 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2106,7 +2106,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		swr1: soundwire-controller@6ad0000 {
++		swr1: soundwire@6ad0000 {
+ 			compatible = "qcom,soundwire-v2.0.0";
+ 			reg = <0 0x06ad0000 0 0x10000>;
+ 			interrupts = <GIC_SPI 155 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2171,7 +2171,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		swr0: soundwire-controller@6b10000 {
++		swr0: soundwire@6b10000 {
+ 			compatible = "qcom,soundwire-v2.0.0";
+ 			reg = <0 0x06b10000 0 0x10000>;
+ 			interrupts = <GIC_SPI 170 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2198,7 +2198,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		swr2: soundwire-controller@6d30000 {
++		swr2: soundwire@6d30000 {
+ 			compatible = "qcom,soundwire-v2.0.0";
+ 			reg = <0 0x06d30000 0 0x10000>;
+ 			interrupts = <GIC_SPI 496 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/sprd/ums512.dtsi b/arch/arm64/boot/dts/sprd/ums512.dtsi
+index 97ac550af2f11..cc4459551e05e 100644
+--- a/arch/arm64/boot/dts/sprd/ums512.dtsi
++++ b/arch/arm64/boot/dts/sprd/ums512.dtsi
+@@ -113,7 +113,7 @@
+ 
+ 	idle-states {
+ 		entry-method = "psci";
+-		CORE_PD: core-pd {
++		CORE_PD: cpu-pd {
+ 			compatible = "arm,idle-state";
+ 			entry-latency-us = <4000>;
+ 			exit-latency-us = <4000>;
+@@ -291,6 +291,7 @@
+ 			pll2: clock-controller@0 {
+ 				compatible = "sprd,ums512-gc-pll";
+ 				reg = <0x0 0x100>;
++				clocks = <&ext_26m>;
+ 				clock-names = "ext-26m";
+ 				#clock-cells = <1>;
+ 			};
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revA.dtso b/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revA.dtso
+index ae1b9b2bdbee2..92f4190d564db 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revA.dtso
++++ b/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revA.dtso
+@@ -21,57 +21,57 @@
+ /dts-v1/;
+ /plugin/;
+ 
+-&i2c1 { /* I2C_SCK C23/C24 - MIO from SOM */
+-	#address-cells = <1>;
+-	#size-cells = <0>;
+-	pinctrl-names = "default", "gpio";
+-	pinctrl-0 = <&pinctrl_i2c1_default>;
+-	pinctrl-1 = <&pinctrl_i2c1_gpio>;
+-	scl-gpios = <&gpio 24 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+-	sda-gpios = <&gpio 25 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+-
+-	/* u14 - 0x40 - ina260 */
+-	/* u27 - 0xe0 - STDP4320 DP/HDMI splitter */
+-};
+-
+-&amba {
+-	si5332_0: si5332_0 { /* u17 */
++&{/} {
++	si5332_0: si5332-0 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <125000000>;
+ 	};
+ 
+-	si5332_1: si5332_1 { /* u17 */
++	si5332_1: si5332-1 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <25000000>;
+ 	};
+ 
+-	si5332_2: si5332_2 { /* u17 */
++	si5332_2: si5332-2 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <48000000>;
+ 	};
+ 
+-	si5332_3: si5332_3 { /* u17 */
++	si5332_3: si5332-3 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <24000000>;
+ 	};
+ 
+-	si5332_4: si5332_4 { /* u17 */
++	si5332_4: si5332-4 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <26000000>;
+ 	};
+ 
+-	si5332_5: si5332_5 { /* u17 */
++	si5332_5: si5332-5 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <27000000>;
+ 	};
+ };
+ 
++&i2c1 { /* I2C_SCK C23/C24 - MIO from SOM */
++	#address-cells = <1>;
++	#size-cells = <0>;
++	pinctrl-names = "default", "gpio";
++	pinctrl-0 = <&pinctrl_i2c1_default>;
++	pinctrl-1 = <&pinctrl_i2c1_gpio>;
++	scl-gpios = <&gpio 24 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio 25 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++
++	/* u14 - 0x40 - ina260 */
++	/* u27 - 0xe0 - STDP4320 DP/HDMI splitter */
++};
++
+ /* DP/USB 3.0 and SATA */
+ &psgtr {
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revB.dtso b/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revB.dtso
+index b59e48be6465a..f88b71f5b07a6 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revB.dtso
++++ b/arch/arm64/boot/dts/xilinx/zynqmp-sck-kv-g-revB.dtso
+@@ -16,58 +16,58 @@
+ /dts-v1/;
+ /plugin/;
+ 
+-&i2c1 { /* I2C_SCK C23/C24 - MIO from SOM */
+-	#address-cells = <1>;
+-	#size-cells = <0>;
+-	pinctrl-names = "default", "gpio";
+-	pinctrl-0 = <&pinctrl_i2c1_default>;
+-	pinctrl-1 = <&pinctrl_i2c1_gpio>;
+-	scl-gpios = <&gpio 24 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+-	sda-gpios = <&gpio 25 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+-
+-	/* u14 - 0x40 - ina260 */
+-	/* u43 - 0x2d - usb5744 */
+-	/* u27 - 0xe0 - STDP4320 DP/HDMI splitter */
+-};
+-
+-&amba {
+-	si5332_0: si5332_0 { /* u17 */
++&{/} {
++	si5332_0: si5332-0 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <125000000>;
+ 	};
+ 
+-	si5332_1: si5332_1 { /* u17 */
++	si5332_1: si5332-1 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <25000000>;
+ 	};
+ 
+-	si5332_2: si5332_2 { /* u17 */
++	si5332_2: si5332-2 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <48000000>;
+ 	};
+ 
+-	si5332_3: si5332_3 { /* u17 */
++	si5332_3: si5332-3 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <24000000>;
+ 	};
+ 
+-	si5332_4: si5332_4 { /* u17 */
++	si5332_4: si5332-4 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <26000000>;
+ 	};
+ 
+-	si5332_5: si5332_5 { /* u17 */
++	si5332_5: si5332-5 { /* u17 */
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <27000000>;
+ 	};
+ };
+ 
++&i2c1 { /* I2C_SCK C23/C24 - MIO from SOM */
++	#address-cells = <1>;
++	#size-cells = <0>;
++	pinctrl-names = "default", "gpio";
++	pinctrl-0 = <&pinctrl_i2c1_default>;
++	pinctrl-1 = <&pinctrl_i2c1_gpio>;
++	scl-gpios = <&gpio 24 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio 25 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++
++	/* u14 - 0x40 - ina260 */
++	/* u43 - 0x2d - usb5744 */
++	/* u27 - 0xe0 - STDP4320 DP/HDMI splitter */
++};
++
+ /* DP/USB 3.0 */
+ &psgtr {
+ 	status = "okay";
+diff --git a/arch/arm64/include/asm/irq_work.h b/arch/arm64/include/asm/irq_work.h
+index 81bbfa3a035bd..a1020285ea750 100644
+--- a/arch/arm64/include/asm/irq_work.h
++++ b/arch/arm64/include/asm/irq_work.h
+@@ -2,8 +2,6 @@
+ #ifndef __ASM_IRQ_WORK_H
+ #define __ASM_IRQ_WORK_H
+ 
+-extern void arch_irq_work_raise(void);
+-
+ static inline bool arch_irq_work_has_interrupt(void)
+ {
+ 	return true;
+diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
+index 6ad5c6ef53296..85087e2df5649 100644
+--- a/arch/arm64/kernel/irq.c
++++ b/arch/arm64/kernel/irq.c
+@@ -22,6 +22,7 @@
+ #include <linux/vmalloc.h>
+ #include <asm/daifflags.h>
+ #include <asm/exception.h>
++#include <asm/numa.h>
+ #include <asm/softirq_stack.h>
+ #include <asm/stacktrace.h>
+ #include <asm/vmap_stack.h>
+@@ -47,17 +48,17 @@ static void init_irq_scs(void)
+ 
+ 	for_each_possible_cpu(cpu)
+ 		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+-			scs_alloc(cpu_to_node(cpu));
++			scs_alloc(early_cpu_to_node(cpu));
+ }
+ 
+ #ifdef CONFIG_VMAP_STACK
+-static void init_irq_stacks(void)
++static void __init init_irq_stacks(void)
+ {
+ 	int cpu;
+ 	unsigned long *p;
+ 
+ 	for_each_possible_cpu(cpu) {
+-		p = arch_alloc_vmap_stack(IRQ_STACK_SIZE, cpu_to_node(cpu));
++		p = arch_alloc_vmap_stack(IRQ_STACK_SIZE, early_cpu_to_node(cpu));
+ 		per_cpu(irq_stack_ptr, cpu) = p;
+ 	}
+ }
+diff --git a/arch/csky/include/asm/irq_work.h b/arch/csky/include/asm/irq_work.h
+index 33aaf39d6f94f..d39fcc1f5395f 100644
+--- a/arch/csky/include/asm/irq_work.h
++++ b/arch/csky/include/asm/irq_work.h
+@@ -7,5 +7,5 @@ static inline bool arch_irq_work_has_interrupt(void)
+ {
+ 	return true;
+ }
+-extern void arch_irq_work_raise(void);
++
+ #endif /* __ASM_CSKY_IRQ_WORK_H */
+diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
+index 173fe514fc9ec..bee9f7a3108f0 100644
+--- a/arch/loongarch/kernel/asm-offsets.c
++++ b/arch/loongarch/kernel/asm-offsets.c
+@@ -15,7 +15,7 @@
+ #include <asm/processor.h>
+ #include <asm/ftrace.h>
+ 
+-void output_ptreg_defines(void)
++static void __used output_ptreg_defines(void)
+ {
+ 	COMMENT("LoongArch pt_regs offsets.");
+ 	OFFSET(PT_R0, pt_regs, regs[0]);
+@@ -62,7 +62,7 @@ void output_ptreg_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_task_defines(void)
++static void __used output_task_defines(void)
+ {
+ 	COMMENT("LoongArch task_struct offsets.");
+ 	OFFSET(TASK_STATE, task_struct, __state);
+@@ -77,7 +77,7 @@ void output_task_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_thread_info_defines(void)
++static void __used output_thread_info_defines(void)
+ {
+ 	COMMENT("LoongArch thread_info offsets.");
+ 	OFFSET(TI_TASK, thread_info, task);
+@@ -93,7 +93,7 @@ void output_thread_info_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_thread_defines(void)
++static void __used output_thread_defines(void)
+ {
+ 	COMMENT("LoongArch specific thread_struct offsets.");
+ 	OFFSET(THREAD_REG01, task_struct, thread.reg01);
+@@ -129,7 +129,7 @@ void output_thread_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_thread_fpu_defines(void)
++static void __used output_thread_fpu_defines(void)
+ {
+ 	OFFSET(THREAD_FPR0, loongarch_fpu, fpr[0]);
+ 	OFFSET(THREAD_FPR1, loongarch_fpu, fpr[1]);
+@@ -170,7 +170,7 @@ void output_thread_fpu_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_thread_lbt_defines(void)
++static void __used output_thread_lbt_defines(void)
+ {
+ 	OFFSET(THREAD_SCR0,  loongarch_lbt, scr0);
+ 	OFFSET(THREAD_SCR1,  loongarch_lbt, scr1);
+@@ -180,7 +180,7 @@ void output_thread_lbt_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_mm_defines(void)
++static void __used output_mm_defines(void)
+ {
+ 	COMMENT("Size of struct page");
+ 	DEFINE(STRUCT_PAGE_SIZE, sizeof(struct page));
+@@ -212,7 +212,7 @@ void output_mm_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_sc_defines(void)
++static void __used output_sc_defines(void)
+ {
+ 	COMMENT("Linux sigcontext offsets.");
+ 	OFFSET(SC_REGS, sigcontext, sc_regs);
+@@ -220,7 +220,7 @@ void output_sc_defines(void)
+ 	BLANK();
+ }
+ 
+-void output_signal_defines(void)
++static void __used output_signal_defines(void)
+ {
+ 	COMMENT("Linux signal numbers.");
+ 	DEFINE(_SIGHUP, SIGHUP);
+@@ -258,7 +258,7 @@ void output_signal_defines(void)
+ }
+ 
+ #ifdef CONFIG_SMP
+-void output_smpboot_defines(void)
++static void __used output_smpboot_defines(void)
+ {
+ 	COMMENT("Linux smp cpu boot offsets.");
+ 	OFFSET(CPU_BOOT_STACK, secondary_data, stack);
+@@ -268,7 +268,7 @@ void output_smpboot_defines(void)
+ #endif
+ 
+ #ifdef CONFIG_HIBERNATION
+-void output_pbe_defines(void)
++static void __used output_pbe_defines(void)
+ {
+ 	COMMENT("Linux struct pbe offsets.");
+ 	OFFSET(PBE_ADDRESS, pbe, address);
+@@ -280,7 +280,7 @@ void output_pbe_defines(void)
+ #endif
+ 
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+-void output_fgraph_ret_regs_defines(void)
++static void __used output_fgraph_ret_regs_defines(void)
+ {
+ 	COMMENT("LoongArch fgraph_ret_regs offsets.");
+ 	OFFSET(FGRET_REGS_A0, fgraph_ret_regs, regs[0]);
+@@ -291,7 +291,7 @@ void output_fgraph_ret_regs_defines(void)
+ }
+ #endif
+ 
+-void output_kvm_defines(void)
++static void __used output_kvm_defines(void)
+ {
+ 	COMMENT("KVM/LoongArch Specific offsets.");
+ 
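
[Reviewer aside, not part of the patch above.] The repeated rewrite in asm-offsets.c, turning "void output_*_defines(void)" into "static void __used output_*_defines(void)", silences missing-prototype warnings while still forcing the compiler to emit the functions, whose only consumer is the assembly the build system extracts rather than any C caller. A standalone sketch of that attribute combination; the function name is invented and __used is spelled out so it builds outside the kernel:

	#include <stdio.h>

	#define __used __attribute__((__used__))

	/* static: no external linkage, so no prototype is expected.
	 * __used: emitted even though nothing in C ever calls it. */
	static void __used output_example_defines(void)
	{
		/* asm-offsets helpers exist only for the assembly they emit */
		__asm__ volatile("" : : "i" (sizeof(long)));
	}

	int main(void)
	{
		puts("output_example_defines() retained despite no C callers");
		return 0;
	}
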
+diff --git a/arch/powerpc/crypto/aes-gcm-p10-glue.c b/arch/powerpc/crypto/aes-gcm-p10-glue.c
+index 4b6e899895e7b..f62ee54076c06 100644
+--- a/arch/powerpc/crypto/aes-gcm-p10-glue.c
++++ b/arch/powerpc/crypto/aes-gcm-p10-glue.c
+@@ -37,7 +37,7 @@ asmlinkage void aes_p10_gcm_encrypt(u8 *in, u8 *out, size_t len,
+ 				    void *rkey, u8 *iv, void *Xi);
+ asmlinkage void aes_p10_gcm_decrypt(u8 *in, u8 *out, size_t len,
+ 				    void *rkey, u8 *iv, void *Xi);
+-asmlinkage void gcm_init_htable(unsigned char htable[256], unsigned char Xi[16]);
++asmlinkage void gcm_init_htable(unsigned char htable[], unsigned char Xi[]);
+ asmlinkage void gcm_ghash_p10(unsigned char *Xi, unsigned char *Htable,
+ 		unsigned char *aad, unsigned int alen);
+ 
+diff --git a/arch/powerpc/include/asm/irq_work.h b/arch/powerpc/include/asm/irq_work.h
+index b8b0be8f1a07e..c6d3078bd8c3b 100644
+--- a/arch/powerpc/include/asm/irq_work.h
++++ b/arch/powerpc/include/asm/irq_work.h
+@@ -6,6 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
+ {
+ 	return true;
+ }
+-extern void arch_irq_work_raise(void);
+ 
+ #endif /* _ASM_POWERPC_IRQ_WORK_H */
+diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
+index 52cc25864a1be..d8b7e246a32f5 100644
+--- a/arch/powerpc/include/asm/mmu.h
++++ b/arch/powerpc/include/asm/mmu.h
+@@ -412,5 +412,9 @@ extern void *abatron_pteptrs[2];
+ #include <asm/nohash/mmu.h>
+ #endif
+ 
++#if defined(CONFIG_FA_DUMP) || defined(CONFIG_PRESERVE_FA_DUMP)
++#define __HAVE_ARCH_RESERVED_KERNEL_PAGES
++#endif
++
+ #endif /* __KERNEL__ */
+ #endif /* _ASM_POWERPC_MMU_H_ */
+diff --git a/arch/powerpc/include/asm/mmzone.h b/arch/powerpc/include/asm/mmzone.h
+index 4c6c6dbd182f4..da827d2d08666 100644
+--- a/arch/powerpc/include/asm/mmzone.h
++++ b/arch/powerpc/include/asm/mmzone.h
+@@ -42,14 +42,6 @@ u64 memory_hotplug_max(void);
+ #else
+ #define memory_hotplug_max() memblock_end_of_DRAM()
+ #endif /* CONFIG_NUMA */
+-#ifdef CONFIG_FA_DUMP
+-#define __HAVE_ARCH_RESERVED_KERNEL_PAGES
+-#endif
+-
+-#ifdef CONFIG_MEMORY_HOTPLUG
+-extern int create_section_mapping(unsigned long start, unsigned long end,
+-				  int nid, pgprot_t prot);
+-#endif
+ 
+ #endif /* __KERNEL__ */
+ #endif /* _ASM_MMZONE_H_ */
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 5ea2014aff90d..11e062b47d3f8 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -1439,10 +1439,12 @@ static int emulate_instruction(struct pt_regs *regs)
+ 	return -EINVAL;
+ }
+ 
++#ifdef CONFIG_GENERIC_BUG
+ int is_valid_bugaddr(unsigned long addr)
+ {
+ 	return is_kernel_addr(addr);
+ }
++#endif
+ 
+ #ifdef CONFIG_MATH_EMULATION
+ static int emulate_math(struct pt_regs *regs)
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index a4ab8625061a6..6af97dc0f6d5a 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -586,6 +586,8 @@ static int do_fp_load(struct instruction_op *op, unsigned long ea,
+ 	} u;
+ 
+ 	nb = GETSIZE(op->type);
++	if (nb > sizeof(u))
++		return -EINVAL;
+ 	if (!address_ok(regs, ea, nb))
+ 		return -EFAULT;
+ 	rn = op->reg;
+@@ -636,6 +638,8 @@ static int do_fp_store(struct instruction_op *op, unsigned long ea,
+ 	} u;
+ 
+ 	nb = GETSIZE(op->type);
++	if (nb > sizeof(u))
++		return -EINVAL;
+ 	if (!address_ok(regs, ea, nb))
+ 		return -EFAULT;
+ 	rn = op->reg;
+@@ -680,6 +684,9 @@ static nokprobe_inline int do_vec_load(int rn, unsigned long ea,
+ 		u8 b[sizeof(__vector128)];
+ 	} u = {};
+ 
++	if (size > sizeof(u))
++		return -EINVAL;
++
+ 	if (!address_ok(regs, ea & ~0xfUL, 16))
+ 		return -EFAULT;
+ 	/* align to multiple of size */
+@@ -707,6 +714,9 @@ static nokprobe_inline int do_vec_store(int rn, unsigned long ea,
+ 		u8 b[sizeof(__vector128)];
+ 	} u;
+ 
++	if (size > sizeof(u))
++		return -EINVAL;
++
+ 	if (!address_ok(regs, ea & ~0xfUL, 16))
+ 		return -EFAULT;
+ 	/* align to multiple of size */
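
[Reviewer aside, not part of the patch above.] The guards added to sstep.c all share one shape: the operand size decoded from the instruction is checked against the on-stack union before any copy happens. A reduced, self-contained version of that check; the union layout and names are invented:

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>

	union operand {
		double dval;
		unsigned char bytes[16];	/* stand-in for sizeof(__vector128) */
	};

	static int do_load(union operand *u, const void *src, unsigned int nb)
	{
		if (nb > sizeof(*u))	/* the added guard: nb comes from
					 * instruction decode, not from us */
			return -EINVAL;
		memcpy(u, src, nb);	/* now provably within the union */
		return 0;
	}

	int main(void)
	{
		union operand u;
		unsigned char mem[32] = { 1, 2, 3, 4 };

		printf("nb=8:  %d\n", do_load(&u, mem, 8));	/* 0: fits     */
		printf("nb=32: %d\n", do_load(&u, mem, 32));	/* -22: EINVAL */
		return 0;
	}
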
+diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
+index be229290a6a77..3438ab72c346b 100644
+--- a/arch/powerpc/mm/book3s64/pgtable.c
++++ b/arch/powerpc/mm/book3s64/pgtable.c
+@@ -542,6 +542,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
+ 	set_pte_at(vma->vm_mm, addr, ptep, pte);
+ }
+ 
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ /*
+  * For hash translation mode, we use the deposited table to store hash slot
+  * information and they are stored at PTRS_PER_PMD offset from related pmd
+@@ -563,6 +564,7 @@ int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
+ 
+ 	return true;
+ }
++#endif
+ 
+ /*
+  * Does the CPU support tlbie?
+diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
+index 119ef491f7976..d3a7726ecf512 100644
+--- a/arch/powerpc/mm/init-common.c
++++ b/arch/powerpc/mm/init-common.c
+@@ -126,7 +126,7 @@ void pgtable_cache_add(unsigned int shift)
+ 	 * as to leave enough 0 bits in the address to contain it. */
+ 	unsigned long minalign = max(MAX_PGTABLE_INDEX_SIZE + 1,
+ 				     HUGEPD_SHIFT_MASK + 1);
+-	struct kmem_cache *new;
++	struct kmem_cache *new = NULL;
+ 
+ 	/* It would be nice if this was a BUILD_BUG_ON(), but at the
+ 	 * moment, gcc doesn't seem to recognize is_power_of_2 as a
+@@ -139,7 +139,8 @@ void pgtable_cache_add(unsigned int shift)
+ 
+ 	align = max_t(unsigned long, align, minalign);
+ 	name = kasprintf(GFP_KERNEL, "pgtable-2^%d", shift);
+-	new = kmem_cache_create(name, table_size, align, 0, ctor(shift));
++	if (name)
++		new = kmem_cache_create(name, table_size, align, 0, ctor(shift));
+ 	if (!new)
+ 		panic("Could not allocate pgtable cache for order %d", shift);
+ 
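
[Reviewer aside, not part of the patch above.] The init-common.c change treats a failed kasprintf() like any other allocation failure: new is initialized to NULL and the cache is only created once the name exists, so the existing "if (!new) panic(...)" covers both paths. A userspace sketch of the same shape, with asprintf standing in for kasprintf and an invented helper:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <stdlib.h>

	static void *cache_create(const char *name)
	{
		return name ? malloc(64) : NULL;	/* kmem_cache_create() stand-in */
	}

	int main(void)
	{
		char *name = NULL;
		void *new = NULL;	/* NULL-init: failure is the default state */

		if (asprintf(&name, "pgtable-2^%d", 12) < 0)
			name = NULL;	/* asprintf leaves name undefined on error */
		if (name)		/* create the cache only once the name exists */
			new = cache_create(name);
		if (!new) {
			fprintf(stderr, "Could not allocate pgtable cache\n");
			return 1;
		}
		printf("created %s\n", name);
		free(name);
		free(new);
		return 0;
	}
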
+diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
+index 7f9ff0640124a..72341b9fb5521 100644
+--- a/arch/powerpc/mm/mmu_decl.h
++++ b/arch/powerpc/mm/mmu_decl.h
+@@ -181,3 +181,8 @@ static inline bool debug_pagealloc_enabled_or_kfence(void)
+ {
+ 	return IS_ENABLED(CONFIG_KFENCE) || debug_pagealloc_enabled();
+ }
++
++#ifdef CONFIG_MEMORY_HOTPLUG
++int create_section_mapping(unsigned long start, unsigned long end,
++			   int nid, pgprot_t prot);
++#endif
+diff --git a/arch/riscv/include/asm/irq_work.h b/arch/riscv/include/asm/irq_work.h
+index b53891964ae03..b27a4d64fc6a0 100644
+--- a/arch/riscv/include/asm/irq_work.h
++++ b/arch/riscv/include/asm/irq_work.h
+@@ -6,5 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
+ {
+ 	return IS_ENABLED(CONFIG_SMP);
+ }
+-extern void arch_irq_work_raise(void);
++
+ #endif /* _ASM_RISCV_IRQ_WORK_H */
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index 76ace1e0b46f6..663881785b2b4 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -89,6 +89,7 @@ relocate_enable_mmu:
+ 	/* Compute satp for kernel page tables, but don't load it yet */
+ 	srl a2, a0, PAGE_SHIFT
+ 	la a1, satp_mode
++	XIP_FIXUP_OFFSET a1
+ 	REG_L a1, 0(a1)
+ 	or a2, a2, a1
+ 
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 2e011cbddf3af..ad77ed410d4dd 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -174,6 +174,9 @@ void __init mem_init(void)
+ 
+ /* Limit the memory size via mem. */
+ static phys_addr_t memory_limit;
++#ifdef CONFIG_XIP_KERNEL
++#define memory_limit	(*(phys_addr_t *)XIP_FIXUP(&memory_limit))
++#endif /* CONFIG_XIP_KERNEL */
+ 
+ static int __init early_mem(char *p)
+ {
+@@ -952,7 +955,7 @@ static void __init create_fdt_early_page_table(uintptr_t fix_fdt_va,
+ 	 * setup_vm_final installs the linear mapping. For 32-bit kernel, as the
+ 	 * kernel is mapped in the linear mapping, that makes no difference.
+ 	 */
+-	dtb_early_va = kernel_mapping_pa_to_va(XIP_FIXUP(dtb_pa));
++	dtb_early_va = kernel_mapping_pa_to_va(dtb_pa);
+ #endif
+ 
+ 	dtb_early_pa = dtb_pa;
+@@ -1055,9 +1058,13 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ #endif
+ 
+ 	kernel_map.virt_addr = KERNEL_LINK_ADDR + kernel_map.virt_offset;
+-	kernel_map.page_offset = _AC(CONFIG_PAGE_OFFSET, UL);
+ 
+ #ifdef CONFIG_XIP_KERNEL
++#ifdef CONFIG_64BIT
++	kernel_map.page_offset = PAGE_OFFSET_L3;
++#else
++	kernel_map.page_offset = _AC(CONFIG_PAGE_OFFSET, UL);
++#endif
+ 	kernel_map.xiprom = (uintptr_t)CONFIG_XIP_PHYS_ADDR;
+ 	kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom);
+ 
+@@ -1067,6 +1074,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ 
+ 	kernel_map.va_kernel_xip_pa_offset = kernel_map.virt_addr - kernel_map.xiprom;
+ #else
++	kernel_map.page_offset = _AC(CONFIG_PAGE_OFFSET, UL);
+ 	kernel_map.phys_addr = (uintptr_t)(&_start);
+ 	kernel_map.size = (uintptr_t)(&_end) - kernel_map.phys_addr;
+ #endif
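
[Reviewer aside, not part of the patch above.] The memory_limit trick in the riscv hunk relies on the preprocessor not re-expanding a macro inside its own expansion: after the #define, every plain use of memory_limit is rerouted through XIP_FIXUP(), so existing code keeps working when the kernel executes in place and its writable data lives at a different address. A toy model of the redirection; the backing array and the translation are invented:

	#include <stdio.h>

	/* two slots stand in for the "flash copy" and the relocated RAM copy */
	static unsigned long ml_backing[2];

	/* stand-in for XIP_FIXUP: translate a flash address to its RAM twin */
	#define XIP_FIXUP(p) ((void *)((char *)(p) + sizeof(unsigned long)))

	/* every plain use of memory_limit below is silently rerouted */
	#define memory_limit (*(unsigned long *)XIP_FIXUP(&ml_backing[0]))

	int main(void)
	{
		memory_limit = 0x1000;	/* the write lands in ml_backing[1] */
		printf("flash copy: %lu, RAM copy: %lu\n",
		       ml_backing[0], memory_limit);
		return 0;
	}
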
+diff --git a/arch/s390/boot/ipl_parm.c b/arch/s390/boot/ipl_parm.c
+index 2ab4872fbee1c..b24de9aabf7d8 100644
+--- a/arch/s390/boot/ipl_parm.c
++++ b/arch/s390/boot/ipl_parm.c
+@@ -274,7 +274,7 @@ void parse_boot_command_line(void)
+ 			memory_limit = round_down(memparse(val, NULL), PAGE_SIZE);
+ 
+ 		if (!strcmp(param, "vmalloc") && val) {
+-			vmalloc_size = round_up(memparse(val, NULL), PAGE_SIZE);
++			vmalloc_size = round_up(memparse(val, NULL), _SEGMENT_SIZE);
+ 			vmalloc_size_set = 1;
+ 		}
+ 
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index 8104e0e3d188d..9cc76e631759b 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -255,7 +255,8 @@ static unsigned long setup_kernel_memory_layout(void)
+ 	VMALLOC_END = MODULES_VADDR;
+ 
+ 	/* allow vmalloc area to occupy up to about 1/2 of the rest virtual space left */
+-	vmalloc_size = min(vmalloc_size, round_down(VMALLOC_END / 2, _REGION3_SIZE));
++	vsize = round_down(VMALLOC_END / 2, _SEGMENT_SIZE);
++	vmalloc_size = min(vmalloc_size, vsize);
+ 	VMALLOC_START = VMALLOC_END - vmalloc_size;
+ 
+ 	/* split remaining virtual space between 1:1 mapping & vmemmap array */
+diff --git a/arch/s390/include/asm/irq_work.h b/arch/s390/include/asm/irq_work.h
+index 603783766d0ab..f00c9f610d5a8 100644
+--- a/arch/s390/include/asm/irq_work.h
++++ b/arch/s390/include/asm/irq_work.h
+@@ -7,6 +7,4 @@ static inline bool arch_irq_work_has_interrupt(void)
+ 	return true;
+ }
+ 
+-void arch_irq_work_raise(void);
+-
+ #endif /* _ASM_S390_IRQ_WORK_H */
+diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c
+index 046403471c5d7..c7ed302a6b59e 100644
+--- a/arch/s390/kernel/ptrace.c
++++ b/arch/s390/kernel/ptrace.c
+@@ -392,6 +392,7 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
+ 		/*
+ 		 * floating point control reg. is in the thread structure
+ 		 */
++		save_fpu_regs();
+ 		if ((unsigned int) data != 0 ||
+ 		    test_fp_ctl(data >> (BITS_PER_LONG - 32)))
+ 			return -EINVAL;
+@@ -748,6 +749,7 @@ static int __poke_user_compat(struct task_struct *child,
+ 		/*
+ 		 * floating point control reg. is in the thread structure
+ 		 */
++		save_fpu_regs();
+ 		if (test_fp_ctl(tmp))
+ 			return -EINVAL;
+ 		child->thread.fpu.fpc = data;
+@@ -911,9 +913,7 @@ static int s390_fpregs_set(struct task_struct *target,
+ 	int rc = 0;
+ 	freg_t fprs[__NUM_FPRS];
+ 
+-	if (target == current)
+-		save_fpu_regs();
+-
++	save_fpu_regs();
+ 	if (MACHINE_HAS_VX)
+ 		convert_vx_to_fp(fprs, target->thread.fpu.vxrs);
+ 	else
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 7aa0e668488f0..16e32174807f7 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4316,10 +4316,6 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+ 
+ 	vcpu_load(vcpu);
+ 
+-	if (test_fp_ctl(fpu->fpc)) {
+-		ret = -EINVAL;
+-		goto out;
+-	}
+ 	vcpu->run->s.regs.fpc = fpu->fpc;
+ 	if (MACHINE_HAS_VX)
+ 		convert_fp_to_vx((__vector128 *) vcpu->run->s.regs.vrs,
+@@ -4327,7 +4323,6 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+ 	else
+ 		memcpy(vcpu->run->s.regs.fprs, &fpu->fprs, sizeof(fpu->fprs));
+ 
+-out:
+ 	vcpu_put(vcpu);
+ 	return ret;
+ }
+diff --git a/arch/sparc/kernel/asm-offsets.c b/arch/sparc/kernel/asm-offsets.c
+index 5784f2df489a4..3d9b9855dce91 100644
+--- a/arch/sparc/kernel/asm-offsets.c
++++ b/arch/sparc/kernel/asm-offsets.c
+@@ -19,14 +19,14 @@
+ #include <asm/hibernate.h>
+ 
+ #ifdef CONFIG_SPARC32
+-int sparc32_foo(void)
++static int __used sparc32_foo(void)
+ {
+ 	DEFINE(AOFF_thread_fork_kpsr,
+ 			offsetof(struct thread_struct, fork_kpsr));
+ 	return 0;
+ }
+ #else
+-int sparc64_foo(void)
++static int __used sparc64_foo(void)
+ {
+ #ifdef CONFIG_HIBERNATION
+ 	BLANK();
+@@ -45,7 +45,7 @@ int sparc64_foo(void)
+ }
+ #endif
+ 
+-int foo(void)
++static int __used foo(void)
+ {
+ 	BLANK();
+ 	DEFINE(AOFF_task_thread, offsetof(struct task_struct, thread));
+diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
+index 3d7836c465070..cabcc501b448a 100644
+--- a/arch/um/drivers/net_kern.c
++++ b/arch/um/drivers/net_kern.c
+@@ -204,7 +204,7 @@ static int uml_net_close(struct net_device *dev)
+ 	return 0;
+ }
+ 
+-static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct uml_net_private *lp = netdev_priv(dev);
+ 	unsigned long flags;
+diff --git a/arch/um/include/shared/kern_util.h b/arch/um/include/shared/kern_util.h
+index d8b8b4f07e429..444bae755b16a 100644
+--- a/arch/um/include/shared/kern_util.h
++++ b/arch/um/include/shared/kern_util.h
+@@ -50,7 +50,7 @@ extern void do_uml_exitcalls(void);
+  * Are we disallowed to sleep? Used to choose between GFP_KERNEL and
+  * GFP_ATOMIC.
+  */
+-extern int __cant_sleep(void);
++extern int __uml_cant_sleep(void);
+ extern int get_current_pid(void);
+ extern int copy_from_user_proc(void *to, void *from, int size);
+ extern char *uml_strdup(const char *string);
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 106b7da2f8d6f..6daffb9d8a8d7 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -220,7 +220,7 @@ void arch_cpu_idle(void)
+ 	um_idle_sleep();
+ }
+ 
+-int __cant_sleep(void) {
++int __uml_cant_sleep(void) {
+ 	return in_atomic() || irqs_disabled() || in_interrupt();
+ 	/* Is in_interrupt() really needed? */
+ }
+diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
+index fddd1dec27e6d..3e270da6b6f67 100644
+--- a/arch/um/kernel/time.c
++++ b/arch/um/kernel/time.c
+@@ -432,9 +432,29 @@ static void time_travel_update_time(unsigned long long next, bool idle)
+ 	time_travel_del_event(&ne);
+ }
+ 
++static void time_travel_update_time_rel(unsigned long long offs)
++{
++	unsigned long flags;
++
++	/*
++	 * Disable interrupts before calculating the new time so
++	 * that a real timer interrupt (signal) can't happen at
++	 * a bad time e.g. after we read time_travel_time but
++	 * before we've completed updating the time.
++	 */
++	local_irq_save(flags);
++	time_travel_update_time(time_travel_time + offs, false);
++	local_irq_restore(flags);
++}
++
+ void time_travel_ndelay(unsigned long nsec)
+ {
+-	time_travel_update_time(time_travel_time + nsec, false);
++	/*
++	 * Not strictly needed to use _rel() version since this is
++	 * only used in INFCPU/EXT modes, but it doesn't hurt and
++	 * is more readable too.
++	 */
++	time_travel_update_time_rel(nsec);
+ }
+ EXPORT_SYMBOL(time_travel_ndelay);
+ 
+@@ -568,7 +588,11 @@ static void time_travel_set_start(void)
+ #define time_travel_time 0
+ #define time_travel_ext_waiting 0
+ 
+-static inline void time_travel_update_time(unsigned long long ns, bool retearly)
++static inline void time_travel_update_time(unsigned long long ns, bool idle)
++{
++}
++
++static inline void time_travel_update_time_rel(unsigned long long offs)
+ {
+ }
+ 
+@@ -720,9 +744,7 @@ static u64 timer_read(struct clocksource *cs)
+ 		 */
+ 		if (!irqs_disabled() && !in_interrupt() && !in_softirq() &&
+ 		    !time_travel_ext_waiting)
+-			time_travel_update_time(time_travel_time +
+-						TIMER_MULTIPLIER,
+-						false);
++			time_travel_update_time_rel(TIMER_MULTIPLIER);
+ 		return time_travel_time / TIMER_MULTIPLIER;
+ 	}
+ 
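
[Reviewer aside, not part of the patch above.] time_travel_update_time_rel() exists because UML takes its "interrupts" as host signals: a timer signal landing between reading time_travel_time and writing the updated value would act on a stale snapshot, hence the local_irq_save()/local_irq_restore() bracket. The same read-modify-write protection in plain POSIX terms; names are invented:

	#include <signal.h>
	#include <stdio.h>

	static volatile unsigned long long fake_clock;

	static void clock_advance(unsigned long long offs)
	{
		sigset_t all, old;

		/* block signals across the read-modify-write: the userspace
		 * analogue of local_irq_save()/local_irq_restore() */
		sigfillset(&all);
		sigprocmask(SIG_SETMASK, &all, &old);

		fake_clock = fake_clock + offs;	/* snapshot + update, now atomic
						 * with respect to signal handlers */

		sigprocmask(SIG_SETMASK, &old, NULL);
	}

	int main(void)
	{
		clock_advance(1000);
		printf("clock=%llu\n", fake_clock);
		return 0;
	}
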
+diff --git a/arch/um/os-Linux/helper.c b/arch/um/os-Linux/helper.c
+index b459745f52e24..3cb8ac63be6ed 100644
+--- a/arch/um/os-Linux/helper.c
++++ b/arch/um/os-Linux/helper.c
+@@ -46,7 +46,7 @@ int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
+ 	unsigned long stack, sp;
+ 	int pid, fds[2], ret, n;
+ 
+-	stack = alloc_stack(0, __cant_sleep());
++	stack = alloc_stack(0, __uml_cant_sleep());
+ 	if (stack == 0)
+ 		return -ENOMEM;
+ 
+@@ -70,7 +70,7 @@ int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
+ 	data.pre_data = pre_data;
+ 	data.argv = argv;
+ 	data.fd = fds[1];
+-	data.buf = __cant_sleep() ? uml_kmalloc(PATH_MAX, UM_GFP_ATOMIC) :
++	data.buf = __uml_cant_sleep() ? uml_kmalloc(PATH_MAX, UM_GFP_ATOMIC) :
+ 					uml_kmalloc(PATH_MAX, UM_GFP_KERNEL);
+ 	pid = clone(helper_child, (void *) sp, CLONE_VM, &data);
+ 	if (pid < 0) {
+@@ -121,7 +121,7 @@ int run_helper_thread(int (*proc)(void *), void *arg, unsigned int flags,
+ 	unsigned long stack, sp;
+ 	int pid, status, err;
+ 
+-	stack = alloc_stack(0, __cant_sleep());
++	stack = alloc_stack(0, __uml_cant_sleep());
+ 	if (stack == 0)
+ 		return -ENOMEM;
+ 
+diff --git a/arch/um/os-Linux/util.c b/arch/um/os-Linux/util.c
+index fc0f2a9dee5af..1dca4ffbd572f 100644
+--- a/arch/um/os-Linux/util.c
++++ b/arch/um/os-Linux/util.c
+@@ -173,23 +173,38 @@ __uml_setup("quiet", quiet_cmd_param,
+ "quiet\n"
+ "    Turns off information messages during boot.\n\n");
+ 
++/*
++ * The os_info/os_warn functions will be called by helper threads. These
++ * have a very limited stack size and using the libc formatting functions
++ * may overflow the stack.
++ * So pull in the kernel vscnprintf and use that instead with a fixed
++ * on-stack buffer.
++ */
++int vscnprintf(char *buf, size_t size, const char *fmt, va_list args);
++
+ void os_info(const char *fmt, ...)
+ {
++	char buf[256];
+ 	va_list list;
++	int len;
+ 
+ 	if (quiet_info)
+ 		return;
+ 
+ 	va_start(list, fmt);
+-	vfprintf(stderr, fmt, list);
++	len = vscnprintf(buf, sizeof(buf), fmt, list);
++	fwrite(buf, len, 1, stderr);
+ 	va_end(list);
+ }
+ 
+ void os_warn(const char *fmt, ...)
+ {
++	char buf[256];
+ 	va_list list;
++	int len;
+ 
+ 	va_start(list, fmt);
+-	vfprintf(stderr, fmt, list);
++	len = vscnprintf(buf, sizeof(buf), fmt, list);
++	fwrite(buf, len, 1, stderr);
+ 	va_end(list);
+ }
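
[Reviewer aside, not part of the patch above.] The util.c hunk swaps vfprintf() for the kernel's vscnprintf() into a fixed 256-byte buffer because helper threads run on tiny stacks that glibc's formatting machinery can overflow. The equivalent bounded-buffer shape with plain libc; note that vsnprintf(), unlike vscnprintf(), returns the untruncated length, hence the clamp, and the function name is invented:

	#include <stdarg.h>
	#include <stdio.h>

	static void bounded_log(const char *fmt, ...)
	{
		char buf[256];		/* fixed, stack-friendly bound */
		va_list ap;
		int len;

		va_start(ap, fmt);
		len = vsnprintf(buf, sizeof(buf), fmt, ap);
		va_end(ap);

		if (len < 0)
			return;
		if (len > (int)sizeof(buf) - 1)
			len = sizeof(buf) - 1;	/* vsnprintf reports the would-be
						 * length; clamp to what was kept */
		fwrite(buf, 1, len, stderr);
	}

	int main(void)
	{
		bounded_log("booting %s #%d\n", "uml", 1);
		return 0;
	}
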
+diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
+index 473ba59b82a81..d040080d7edbd 100644
+--- a/arch/x86/boot/compressed/ident_map_64.c
++++ b/arch/x86/boot/compressed/ident_map_64.c
+@@ -386,3 +386,8 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
+ 	 */
+ 	kernel_add_identity_map(address, end);
+ }
++
++void do_boot_nmi_trap(struct pt_regs *regs, unsigned long error_code)
++{
++	/* Empty handler to ignore NMI during early boot */
++}
+diff --git a/arch/x86/boot/compressed/idt_64.c b/arch/x86/boot/compressed/idt_64.c
+index 3cdf94b414567..d100284bbef47 100644
+--- a/arch/x86/boot/compressed/idt_64.c
++++ b/arch/x86/boot/compressed/idt_64.c
+@@ -61,6 +61,7 @@ void load_stage2_idt(void)
+ 	boot_idt_desc.address = (unsigned long)boot_idt;
+ 
+ 	set_idt_entry(X86_TRAP_PF, boot_page_fault);
++	set_idt_entry(X86_TRAP_NMI, boot_nmi_trap);
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 	/*
+diff --git a/arch/x86/boot/compressed/idt_handlers_64.S b/arch/x86/boot/compressed/idt_handlers_64.S
+index 22890e199f5b4..4d03c8562f637 100644
+--- a/arch/x86/boot/compressed/idt_handlers_64.S
++++ b/arch/x86/boot/compressed/idt_handlers_64.S
+@@ -70,6 +70,7 @@ SYM_FUNC_END(\name)
+ 	.code64
+ 
+ EXCEPTION_HANDLER	boot_page_fault do_boot_page_fault error_code=1
++EXCEPTION_HANDLER	boot_nmi_trap do_boot_nmi_trap error_code=0
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ EXCEPTION_HANDLER	boot_stage1_vc do_vc_no_ghcb		error_code=1
+diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
+index c0d502bd87161..bc2f0f17fb90e 100644
+--- a/arch/x86/boot/compressed/misc.h
++++ b/arch/x86/boot/compressed/misc.h
+@@ -196,6 +196,7 @@ static inline void cleanup_exception_handling(void) { }
+ 
+ /* IDT Entry Points */
+ void boot_page_fault(void);
++void boot_nmi_trap(void);
+ void boot_stage1_vc(void);
+ void boot_stage2_vc(void);
+ 
+diff --git a/arch/x86/include/asm/irq_work.h b/arch/x86/include/asm/irq_work.h
+index 800ffce0db29e..6b4d36c951655 100644
+--- a/arch/x86/include/asm/irq_work.h
++++ b/arch/x86/include/asm/irq_work.h
+@@ -9,7 +9,6 @@ static inline bool arch_irq_work_has_interrupt(void)
+ {
+ 	return boot_cpu_has(X86_FEATURE_APIC);
+ }
+-extern void arch_irq_work_raise(void);
+ #else
+ static inline bool arch_irq_work_has_interrupt(void)
+ {
+diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h
+index 8fa6ac0e2d766..d91b37f5b4bb4 100644
+--- a/arch/x86/include/asm/kmsan.h
++++ b/arch/x86/include/asm/kmsan.h
+@@ -64,6 +64,7 @@ static inline bool kmsan_virt_addr_valid(void *addr)
+ {
+ 	unsigned long x = (unsigned long)addr;
+ 	unsigned long y = x - __START_KERNEL_map;
++	bool ret;
+ 
+ 	/* use the carry flag to determine if x was < __START_KERNEL_map */
+ 	if (unlikely(x > y)) {
+@@ -79,7 +80,21 @@ static inline bool kmsan_virt_addr_valid(void *addr)
+ 			return false;
+ 	}
+ 
+-	return pfn_valid(x >> PAGE_SHIFT);
++	/*
++	 * pfn_valid() relies on RCU, and may call into the scheduler on exiting
++	 * the critical section. However, this would result in recursion with
++	 * KMSAN. Therefore, disable preemption here, and re-enable preemption
++	 * below while suppressing reschedules to avoid recursion.
++	 *
++	 * Note, this sacrifices occasionally breaking scheduling guarantees.
++	 * Although, a kernel compiled with KMSAN has already given up on any
++	 * performance guarantees due to being heavily instrumented.
++	 */
++	preempt_disable();
++	ret = pfn_valid(x >> PAGE_SHIFT);
++	preempt_enable_no_resched();
++
++	return ret;
+ }
+ 
+ #endif /* !MODULE */
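
[Reviewer aside, not part of the patch above.] The kmsan.h comment describes a recursion hazard: pfn_valid() can leave its RCU read section via a reschedule, the scheduler is itself instrumented, and KMSAN would re-enter itself, so the fix keeps the window preemption-free and exits through preempt_enable_no_resched(). A self-contained model of "never re-enter the instrumentation from inside the checker"; all names are invented, with a thread-local flag playing the role of the disabled-preemption window:

	#include <stdbool.h>
	#include <stdio.h>

	static _Thread_local bool in_checker;

	/* stands in for instrumented code (scheduler, sanitizer hooks)
	 * that the checker must never re-enter */
	static void instrumented_hook(const char *what)
	{
		if (in_checker)
			return;		/* suppressed, like skipping the resched */
		printf("hook: %s\n", what);
	}

	static bool checked_lookup(unsigned long pfn)
	{
		bool ret;

		in_checker = true;	/* preempt_disable() analogue */
		ret = pfn < 0x100000UL;
		instrumented_hook("inside lookup");	/* silently suppressed */
		in_checker = false;	/* preempt_enable_no_resched() analogue */
		return ret;
	}

	int main(void)
	{
		printf("valid=%d\n", checked_lookup(42));
		instrumented_hook("outside lookup");	/* fires normally */
		return 0;
	}
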
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 7b397370b4d64..df8d25e744d10 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -44,6 +44,7 @@
+ #include <linux/sync_core.h>
+ #include <linux/task_work.h>
+ #include <linux/hardirq.h>
++#include <linux/kexec.h>
+ 
+ #include <asm/intel-family.h>
+ #include <asm/processor.h>
+@@ -233,6 +234,7 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
+ 	struct llist_node *pending;
+ 	struct mce_evt_llist *l;
+ 	int apei_err = 0;
++	struct page *p;
+ 
+ 	/*
+ 	 * Allow instrumentation around external facilities usage. Not that it
+@@ -286,6 +288,20 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
+ 	if (!fake_panic) {
+ 		if (panic_timeout == 0)
+ 			panic_timeout = mca_cfg.panic_timeout;
++
++		/*
++		 * Kdump skips the poisoned page in order to avoid
++		 * touching the error bits again. Poison the page even
++		 * if the error is fatal and the machine is about to
++		 * panic.
++		 */
++		if (kexec_crash_loaded()) {
++			if (final && (final->status & MCI_STATUS_ADDRV)) {
++				p = pfn_to_online_page(final->addr >> PAGE_SHIFT);
++				if (p)
++					SetPageHWPoison(p);
++			}
++		}
+ 		panic(msg);
+ 	} else
+ 		pr_emerg(HW_ERR "Fake kernel panic: %s\n", msg);
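
[Reviewer aside, not part of the patch above.] The pre-panic poisoning added to mce_panic() only acts when every precondition holds: a crash kernel is loaded, the final record carries a valid address (MCI_STATUS_ADDRV), and the pfn still maps to an online page, so kdump can later skip the bad page instead of re-touching the error. The same guard chain reduced to a standalone program; the types and helpers here are faked so it builds on its own:

	#include <stdbool.h>
	#include <stdio.h>

	#define MCI_STATUS_ADDRV (1ULL << 58)	/* "address valid" status bit */
	#define PAGE_SHIFT 12

	struct mce { unsigned long long status, addr; };
	struct page { bool hwpoison; };

	static bool crash_kernel_loaded = true;	/* kexec_crash_loaded() stand-in */
	static struct page fake_page;

	static struct page *pfn_to_online_page(unsigned long pfn)
	{
		return pfn == 0x1234 ? &fake_page : NULL;	/* toy lookup */
	}

	static void poison_before_panic(const struct mce *final)
	{
		struct page *p;

		if (!crash_kernel_loaded)
			return;
		if (final && (final->status & MCI_STATUS_ADDRV)) {
			p = pfn_to_online_page(final->addr >> PAGE_SHIFT);
			if (p)
				p->hwpoison = true;	/* SetPageHWPoison(p) */
		}
	}

	int main(void)
	{
		struct mce m = { .status = MCI_STATUS_ADDRV, .addr = 0x1234000 };

		poison_before_panic(&m);
		printf("page poisoned for kdump to skip: %d\n", fake_page.hwpoison);
		return 0;
	}
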
+diff --git a/block/bio.c b/block/bio.c
+index 5eba53ca953b4..270f6b99926ea 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -944,7 +944,7 @@ bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
+ 
+ 	if ((addr1 | mask) != (addr2 | mask))
+ 		return false;
+-	if (bv->bv_len + len > queue_max_segment_size(q))
++	if (len > queue_max_segment_size(q) - bv->bv_len)
+ 		return false;
+ 	return bvec_try_merge_page(bv, page, len, offset, same_page);
+ }
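
[Reviewer aside, not part of the patch above.] The bio.c rewrite is the standard defense against unsigned wraparound: bv->bv_len + len can overflow and compare as small, while len > limit - bv_len cannot, given bv_len <= limit, which holds here because bv is an already-accepted segment. A demonstration with invented values:

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int max_seg = 65536;		/* queue_max_segment_size() */
		unsigned int bv_len  = 4096;		/* current segment length   */
		unsigned int len     = UINT_MAX - 2048;	/* pathological add-on      */

		/* old check: the sum wraps to 2047, so the oversized length
		 * is wrongly accepted */
		printf("old: %s\n", bv_len + len > max_seg ? "reject" : "accept");

		/* new check: no addition, so no wrap; correctly rejected */
		printf("new: %s\n", len > max_seg - bv_len ? "reject" : "accept");
		return 0;
	}
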
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 7e743ac58c31d..a71974a5e57cd 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1858,6 +1858,22 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
+ 	__add_wait_queue(wq, wait);
+ 
++	/*
++	 * Add one explicit barrier since blk_mq_get_driver_tag() may
++	 * not imply barrier in case of failure.
++	 *
++	 * Order adding us to wait queue and allocating driver tag.
++	 *
++	 * The pair is the one implied in sbitmap_queue_wake_up() which
++	 * orders clearing sbitmap tag bits and waitqueue_active() in
++	 * __sbitmap_queue_wake_up(), since waitqueue_active() is lockless
++	 *
++	 * Otherwise, re-order of adding wait queue and getting driver tag
++	 * may cause __sbitmap_queue_wake_up() to wake up nothing because
++	 * the waitqueue_active() may not observe us in wait queue.
++	 */
++	smp_mb();
++
+ 	/*
+ 	 * It's possible that a tag was freed in the window between the
+ 	 * allocation failure and adding the hardware queue to the wait
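
[Reviewer aside, not part of the patch above.] The new smp_mb() closes a classic store/load race: the submitter must publish its waitqueue membership before re-checking for a free tag, and the waker must publish the freed tag before checking for waiters. With both barriers at least one side observes the other; without them both loads can miss and the I/O hangs. A C11 model of the pairing; names are invented, and relaxed accesses plus explicit fences mirror the kernel's plain accesses plus smp_mb():

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_bool tag_free;		/* models the sbitmap tag bit  */
	static atomic_bool on_waitqueue;	/* models waitqueue membership */

	static bool submitter_gets_tag(void)
	{
		atomic_store_explicit(&on_waitqueue, true, memory_order_relaxed);
		/* the added smp_mb(): join the wait queue before retrying */
		atomic_thread_fence(memory_order_seq_cst);
		return atomic_load_explicit(&tag_free, memory_order_relaxed);
	}

	static bool waker_sees_waiter(void)
	{
		atomic_store_explicit(&tag_free, true, memory_order_relaxed);
		/* the pairing barrier implied in __sbitmap_queue_wake_up() */
		atomic_thread_fence(memory_order_seq_cst);
		return atomic_load_explicit(&on_waitqueue, memory_order_relaxed);
	}

	int main(void)
	{
		/* single-threaded smoke test: once the tag is freed, a
		 * submitter that queued itself must observe it */
		waker_sees_waiter();
		printf("submitter sees freed tag: %d\n", submitter_gets_tag());
		return 0;
	}
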
+diff --git a/drivers/accel/habanalabs/common/device.c b/drivers/accel/habanalabs/common/device.c
+index 9711e8fc979d9..9290d43745519 100644
+--- a/drivers/accel/habanalabs/common/device.c
++++ b/drivers/accel/habanalabs/common/device.c
+@@ -853,6 +853,10 @@ static int device_early_init(struct hl_device *hdev)
+ 		gaudi2_set_asic_funcs(hdev);
+ 		strscpy(hdev->asic_name, "GAUDI2B", sizeof(hdev->asic_name));
+ 		break;
++	case ASIC_GAUDI2C:
++		gaudi2_set_asic_funcs(hdev);
++		strscpy(hdev->asic_name, "GAUDI2C", sizeof(hdev->asic_name));
++		break;
+ 	default:
+ 		dev_err(hdev->dev, "Unrecognized ASIC type %d\n",
+ 				hdev->asic_type);
+@@ -1041,18 +1044,19 @@ static bool is_pci_link_healthy(struct hl_device *hdev)
+ 	return (vendor_id == PCI_VENDOR_ID_HABANALABS);
+ }
+ 
+-static void hl_device_eq_heartbeat(struct hl_device *hdev)
++static int hl_device_eq_heartbeat_check(struct hl_device *hdev)
+ {
+-	u64 event_mask = HL_NOTIFIER_EVENT_DEVICE_RESET | HL_NOTIFIER_EVENT_DEVICE_UNAVAILABLE;
+ 	struct asic_fixed_properties *prop = &hdev->asic_prop;
+ 
+ 	if (!prop->cpucp_info.eq_health_check_supported)
+-		return;
++		return 0;
+ 
+ 	if (hdev->eq_heartbeat_received)
+ 		hdev->eq_heartbeat_received = false;
+ 	else
+-		hl_device_cond_reset(hdev, HL_DRV_RESET_HARD, event_mask);
++		return -EIO;
++
++	return 0;
+ }
+ 
+ static void hl_device_heartbeat(struct work_struct *work)
+@@ -1069,10 +1073,9 @@ static void hl_device_heartbeat(struct work_struct *work)
+ 	/*
+ 	 * For EQ health check need to check if driver received the heartbeat eq event
+ 	 * in order to validate the eq is working.
++	 * Only if both the EQ is healthy and we managed to send the next heartbeat reschedule.
+ 	 */
+-	hl_device_eq_heartbeat(hdev);
+-
+-	if (!hdev->asic_funcs->send_heartbeat(hdev))
++	if ((!hl_device_eq_heartbeat_check(hdev)) && (!hdev->asic_funcs->send_heartbeat(hdev)))
+ 		goto reschedule;
+ 
+ 	if (hl_device_operational(hdev, NULL))
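
[Reviewer aside, not part of the patch above.] The heartbeat refactor converts a checker that performed its own reset into one that just returns status, so the caller can express "reschedule only if both the EQ check and the heartbeat send succeed" with one short-circuited condition and funnel every failure into a single reset path. A toy version of that call shape; names are invented:

	#include <errno.h>
	#include <stdio.h>

	static int eq_heartbeat_check(int received)
	{
		return received ? 0 : -EIO;	/* report; don't reset from here */
	}

	static int send_heartbeat(int link_ok)
	{
		return link_ok ? 0 : -EIO;
	}

	int main(void)
	{
		/* mirrors the new call site: send_heartbeat() only runs when
		 * the EQ check passed, and either failure takes the reset path */
		if (!eq_heartbeat_check(1) && !send_heartbeat(1))
			printf("healthy: reschedule\n");
		else
			printf("unhealthy: reset\n");
		return 0;
	}
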
+diff --git a/drivers/accel/habanalabs/common/habanalabs.h b/drivers/accel/habanalabs/common/habanalabs.h
+index 1655c101c7052..d0fd77bb6a749 100644
+--- a/drivers/accel/habanalabs/common/habanalabs.h
++++ b/drivers/accel/habanalabs/common/habanalabs.h
+@@ -1262,6 +1262,7 @@ struct hl_dec {
+  * @ASIC_GAUDI_SEC: Gaudi secured device (HL-2000).
+  * @ASIC_GAUDI2: Gaudi2 device.
+  * @ASIC_GAUDI2B: Gaudi2B device.
++ * @ASIC_GAUDI2C: Gaudi2C device.
+  */
+ enum hl_asic_type {
+ 	ASIC_INVALID,
+@@ -1270,6 +1271,7 @@ enum hl_asic_type {
+ 	ASIC_GAUDI_SEC,
+ 	ASIC_GAUDI2,
+ 	ASIC_GAUDI2B,
++	ASIC_GAUDI2C,
+ };
+ 
+ struct hl_cs_parser;
+diff --git a/drivers/accel/habanalabs/common/habanalabs_drv.c b/drivers/accel/habanalabs/common/habanalabs_drv.c
+index 306a5bc9bf894..51fb04bbe376f 100644
+--- a/drivers/accel/habanalabs/common/habanalabs_drv.c
++++ b/drivers/accel/habanalabs/common/habanalabs_drv.c
+@@ -141,6 +141,9 @@ static enum hl_asic_type get_asic_type(struct hl_device *hdev)
+ 		case REV_ID_B:
+ 			asic_type = ASIC_GAUDI2B;
+ 			break;
++		case REV_ID_C:
++			asic_type = ASIC_GAUDI2C;
++			break;
+ 		default:
+ 			break;
+ 		}
+diff --git a/drivers/accel/habanalabs/common/mmu/mmu.c b/drivers/accel/habanalabs/common/mmu/mmu.c
+index b2145716c6053..b654302a68fc0 100644
+--- a/drivers/accel/habanalabs/common/mmu/mmu.c
++++ b/drivers/accel/habanalabs/common/mmu/mmu.c
+@@ -596,6 +596,7 @@ int hl_mmu_if_set_funcs(struct hl_device *hdev)
+ 		break;
+ 	case ASIC_GAUDI2:
+ 	case ASIC_GAUDI2B:
++	case ASIC_GAUDI2C:
+ 		/* MMUs in Gaudi2 are always host resident */
+ 		hl_mmu_v2_hr_set_funcs(hdev, &hdev->mmu_func[MMU_HR_PGT]);
+ 		break;
+diff --git a/drivers/accel/habanalabs/common/sysfs.c b/drivers/accel/habanalabs/common/sysfs.c
+index 01f89f029355e..2786063730555 100644
+--- a/drivers/accel/habanalabs/common/sysfs.c
++++ b/drivers/accel/habanalabs/common/sysfs.c
+@@ -251,6 +251,9 @@ static ssize_t device_type_show(struct device *dev,
+ 	case ASIC_GAUDI2B:
+ 		str = "GAUDI2B";
+ 		break;
++	case ASIC_GAUDI2C:
++		str = "GAUDI2C";
++		break;
+ 	default:
+ 		dev_err(hdev->dev, "Unrecognized ASIC type %d\n",
+ 				hdev->asic_type);
+diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2.c b/drivers/accel/habanalabs/gaudi2/gaudi2.c
+index 819660c684cfc..bc6e338ef2fd6 100644
+--- a/drivers/accel/habanalabs/gaudi2/gaudi2.c
++++ b/drivers/accel/habanalabs/gaudi2/gaudi2.c
+@@ -7929,21 +7929,19 @@ static int gaudi2_handle_qman_err_generic(struct hl_device *hdev, u16 event_type
+ 				error_count++;
+ 			}
+ 
+-		if (i == QMAN_STREAMS && error_count) {
+-			/* check for undefined opcode */
+-			if (glbl_sts_val & PDMA0_QM_GLBL_ERR_STS_CP_UNDEF_CMD_ERR_MASK &&
+-					hdev->captured_err_info.undef_opcode.write_enable) {
++		/* check for undefined opcode */
++		if (glbl_sts_val & PDMA0_QM_GLBL_ERR_STS_CP_UNDEF_CMD_ERR_MASK) {
++			*event_mask |= HL_NOTIFIER_EVENT_UNDEFINED_OPCODE;
++			if (hdev->captured_err_info.undef_opcode.write_enable) {
+ 				memset(&hdev->captured_err_info.undef_opcode, 0,
+ 						sizeof(hdev->captured_err_info.undef_opcode));
+-
+-				hdev->captured_err_info.undef_opcode.write_enable = false;
+ 				hdev->captured_err_info.undef_opcode.timestamp = ktime_get();
+ 				hdev->captured_err_info.undef_opcode.engine_id =
+ 							gaudi2_queue_id_to_engine_id[qid_base];
+-				*event_mask |= HL_NOTIFIER_EVENT_UNDEFINED_OPCODE;
+ 			}
+ 
+-			handle_lower_qman_data_on_err(hdev, qman_base, *event_mask);
++			if (i == QMAN_STREAMS)
++				handle_lower_qman_data_on_err(hdev, qman_base, *event_mask);
+ 		}
+ 	}
+ 
+diff --git a/drivers/accel/habanalabs/include/hw_ip/pci/pci_general.h b/drivers/accel/habanalabs/include/hw_ip/pci/pci_general.h
+index f5d497dc9bdc1..4f951cada0776 100644
+--- a/drivers/accel/habanalabs/include/hw_ip/pci/pci_general.h
++++ b/drivers/accel/habanalabs/include/hw_ip/pci/pci_general.h
+@@ -25,6 +25,7 @@ enum hl_revision_id {
+ 	REV_ID_INVALID				= 0x00,
+ 	REV_ID_A				= 0x01,
+ 	REV_ID_B				= 0x02,
++	REV_ID_C				= 0x03
+ };
+ 
+ #endif /* INCLUDE_PCI_GENERAL_H_ */
+diff --git a/drivers/acpi/acpi_extlog.c b/drivers/acpi/acpi_extlog.c
+index 71e8d4e7a36cc..ca87a09391359 100644
+--- a/drivers/acpi/acpi_extlog.c
++++ b/drivers/acpi/acpi_extlog.c
+@@ -308,9 +308,10 @@ err:
+ static void __exit extlog_exit(void)
+ {
+ 	mce_unregister_decode_chain(&extlog_mce_dec);
+-	((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
+-	if (extlog_l1_addr)
++	if (extlog_l1_addr) {
++		((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
+ 		acpi_os_unmap_iomem(extlog_l1_addr, l1_size);
++	}
+ 	if (elog_addr)
+ 		acpi_os_unmap_iomem(elog_addr, elog_size);
+ 	release_mem_region(elog_base, elog_size);
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index 375010e575d0e..33ddb447747ea 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -500,6 +500,15 @@ static const struct dmi_system_id video_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3350"),
+ 		},
+ 	},
++	{
++	 .callback = video_set_report_key_events,
++	 .driver_data = (void *)((uintptr_t)REPORT_BRIGHTNESS_KEY_EVENTS),
++	 .ident = "COLORFUL X15 AT 23",
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "COLORFUL"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "X15 AT 23"),
++		},
++	},
+ 	/*
+ 	 * Some machines change the brightness themselves when a brightness
+ 	 * hotkey gets pressed, despite us telling them not to. In this case
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 63ad0541db381..ab2a82cb1b0b4 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -101,6 +101,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
+ 	return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
+ }
+ 
++/*
++ * A platform may describe one error source for handling synchronous
++ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
++ * or External Interrupt). On x86, the HEST notifications are always
++ * asynchronous, so only SEA on ARM is delivered as a synchronous
++ * notification.
++ */
++static inline bool is_hest_sync_notify(struct ghes *ghes)
++{
++	u8 notify_type = ghes->generic->notify.type;
++
++	return notify_type == ACPI_HEST_NOTIFY_SEA;
++}
++
+ /*
+  * This driver isn't really modular, however for the time being,
+  * continuing to use module_param is the easiest way to remain
+@@ -489,7 +503,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
+ }
+ 
+ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+-				       int sev)
++				       int sev, bool sync)
+ {
+ 	int flags = -1;
+ 	int sec_sev = ghes_severity(gdata->error_severity);
+@@ -503,7 +517,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+ 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
+ 		flags = MF_SOFT_OFFLINE;
+ 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
+-		flags = 0;
++		flags = sync ? MF_ACTION_REQUIRED : 0;
+ 
+ 	if (flags != -1)
+ 		return ghes_do_memory_failure(mem_err->physical_addr, flags);
+@@ -511,9 +525,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+ 	return false;
+ }
+ 
+-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
++static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
++				       int sev, bool sync)
+ {
+ 	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
++	int flags = sync ? MF_ACTION_REQUIRED : 0;
+ 	bool queued = false;
+ 	int sec_sev, i;
+ 	char *p;
+@@ -538,7 +554,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s
+ 		 * and don't filter out 'corrected' error here.
+ 		 */
+ 		if (is_cache && has_pa) {
+-			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
++			queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
+ 			p += err_info->length;
+ 			continue;
+ 		}
+@@ -666,6 +682,7 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 	const guid_t *fru_id = &guid_null;
+ 	char *fru_text = "";
+ 	bool queued = false;
++	bool sync = is_hest_sync_notify(ghes);
+ 
+ 	sev = ghes_severity(estatus->error_severity);
+ 	apei_estatus_for_each_section(estatus, gdata) {
+@@ -683,13 +700,13 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 			atomic_notifier_call_chain(&ghes_report_chain, sev, mem_err);
+ 
+ 			arch_apei_report_mem_error(sev, mem_err);
+-			queued = ghes_handle_memory_failure(gdata, sev);
++			queued = ghes_handle_memory_failure(gdata, sev, sync);
+ 		}
+ 		else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
+ 			ghes_handle_aer(gdata);
+ 		}
+ 		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
+-			queued = ghes_handle_arm_hw_error(gdata, sev);
++			queued = ghes_handle_arm_hw_error(gdata, sev, sync);
+ 		} else {
+ 			void *err = acpi_hest_get_payload(gdata);
+ 
+diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
+index 12f330b0eac01..b57de78fbf14f 100644
+--- a/drivers/acpi/numa/srat.c
++++ b/drivers/acpi/numa/srat.c
+@@ -183,7 +183,7 @@ static int __init slit_valid(struct acpi_table_slit *slit)
+ 	int i, j;
+ 	int d = slit->locality_count;
+ 	for (i = 0; i < d; i++) {
+-		for (j = 0; j < d; j++)  {
++		for (j = 0; j < d; j++) {
+ 			u8 val = slit->entry[d*i + j];
+ 			if (i == j) {
+ 				if (val != LOCAL_DISTANCE)
+@@ -532,7 +532,7 @@ int __init acpi_numa_init(void)
+ 	 */
+ 
+ 	/* fake_pxm is the next unused PXM value after SRAT parsing */
+-	for (i = 0, fake_pxm = -1; i < MAX_NUMNODES - 1; i++) {
++	for (i = 0, fake_pxm = -1; i < MAX_NUMNODES; i++) {
+ 		if (node_to_pxm_map[i] > fake_pxm)
+ 			fake_pxm = node_to_pxm_map[i];
+ 	}
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index c3536c236be99..03b52d31a3e3f 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -461,6 +461,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "B1502CBA"),
+ 		},
+ 	},
++	{
++		/* Asus ExpertBook B1502CGA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "B1502CGA"),
++		},
++	},
+ 	{
+ 		/* Asus ExpertBook B2402CBA */
+ 		.matches = {
+@@ -482,6 +489,20 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "B2502CBA"),
+ 		},
+ 	},
++	{
++		/* Asus Vivobook E1504GA */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "E1504GA"),
++		},
++	},
++	{
++		/* Asus Vivobook E1504GAB */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_BOARD_NAME, "E1504GAB"),
++		},
++	},
+ 	{
+ 		/* LG Electronics 17U70P */
+ 		.matches = {
+diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
+index eaa31e567d1ec..5b59d133b6af4 100644
+--- a/drivers/base/arch_numa.c
++++ b/drivers/base/arch_numa.c
+@@ -144,7 +144,7 @@ void __init early_map_cpu_to_node(unsigned int cpu, int nid)
+ unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
+ EXPORT_SYMBOL(__per_cpu_offset);
+ 
+-static int __init early_cpu_to_node(int cpu)
++int __init early_cpu_to_node(int cpu)
+ {
+ 	return cpu_to_node_map[cpu];
+ }
+diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
+index 65de51f3dfd9a..ab78eab97d982 100644
+--- a/drivers/block/rnbd/rnbd-srv.c
++++ b/drivers/block/rnbd/rnbd-srv.c
+@@ -585,6 +585,7 @@ static char *rnbd_srv_get_full_path(struct rnbd_srv_session *srv_sess,
+ {
+ 	char *full_path;
+ 	char *a, *b;
++	int len;
+ 
+ 	full_path = kmalloc(PATH_MAX, GFP_KERNEL);
+ 	if (!full_path)
+@@ -596,19 +597,19 @@ static char *rnbd_srv_get_full_path(struct rnbd_srv_session *srv_sess,
+ 	 */
+ 	a = strnstr(dev_search_path, "%SESSNAME%", sizeof(dev_search_path));
+ 	if (a) {
+-		int len = a - dev_search_path;
++		len = a - dev_search_path;
+ 
+ 		len = snprintf(full_path, PATH_MAX, "%.*s/%s/%s", len,
+ 			       dev_search_path, srv_sess->sessname, dev_name);
+-		if (len >= PATH_MAX) {
+-			pr_err("Too long path: %s, %s, %s\n",
+-			       dev_search_path, srv_sess->sessname, dev_name);
+-			kfree(full_path);
+-			return ERR_PTR(-EINVAL);
+-		}
+ 	} else {
+-		snprintf(full_path, PATH_MAX, "%s/%s",
+-			 dev_search_path, dev_name);
++		len = snprintf(full_path, PATH_MAX, "%s/%s",
++			       dev_search_path, dev_name);
++	}
++	if (len >= PATH_MAX) {
++		pr_err("Too long path: %s, %s, %s\n",
++		       dev_search_path, srv_sess->sessname, dev_name);
++		kfree(full_path);
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	/* eliminitate duplicated slashes */
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 067e248e35993..35f74f209d1fc 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2039,6 +2039,7 @@ static const struct qca_device_data qca_soc_data_wcn3998 __maybe_unused = {
+ static const struct qca_device_data qca_soc_data_qca2066 __maybe_unused = {
+ 	.soc_type = QCA_QCA2066,
+ 	.num_vregs = 0,
++	.capabilities = QCA_CAP_WIDEBAND_SPEECH | QCA_CAP_VALID_LE_STATES,
+ };
+ 
+ static const struct qca_device_data qca_soc_data_qca6390 __maybe_unused = {
+diff --git a/drivers/char/hw_random/jh7110-trng.c b/drivers/char/hw_random/jh7110-trng.c
+index 38474d48a25e1..b1f94e3c0c6a4 100644
+--- a/drivers/char/hw_random/jh7110-trng.c
++++ b/drivers/char/hw_random/jh7110-trng.c
+@@ -300,7 +300,7 @@ static int starfive_trng_probe(struct platform_device *pdev)
+ 	ret = devm_request_irq(&pdev->dev, irq, starfive_trng_irq, 0, pdev->name,
+ 			       (void *)trng);
+ 	if (ret)
+-		return dev_err_probe(&pdev->dev, irq,
++		return dev_err_probe(&pdev->dev, ret,
+ 				     "Failed to register interrupt handler\n");
+ 
+ 	trng->hclk = devm_clk_get(&pdev->dev, "hclk");
+diff --git a/drivers/clk/hisilicon/clk-hi3620.c b/drivers/clk/hisilicon/clk-hi3620.c
+index 2d7186905abdc..5d0226530fdb2 100644
+--- a/drivers/clk/hisilicon/clk-hi3620.c
++++ b/drivers/clk/hisilicon/clk-hi3620.c
+@@ -466,8 +466,10 @@ static void __init hi3620_mmc_clk_init(struct device_node *node)
+ 		return;
+ 
+ 	clk_data->clks = kcalloc(num, sizeof(*clk_data->clks), GFP_KERNEL);
+-	if (!clk_data->clks)
++	if (!clk_data->clks) {
++		kfree(clk_data);
+ 		return;
++	}
+ 
+ 	for (i = 0; i < num; i++) {
+ 		struct hisi_mmc_clock *mmc_clk = &hi3620_mmc_clks[i];
+diff --git a/drivers/clk/imx/clk-imx8qxp.c b/drivers/clk/imx/clk-imx8qxp.c
+index 41f0a45aa162e..7d8883916cacd 100644
+--- a/drivers/clk/imx/clk-imx8qxp.c
++++ b/drivers/clk/imx/clk-imx8qxp.c
+@@ -66,6 +66,22 @@ static const char * const lcd_pxl_sels[] = {
+ 	"lcd_pxl_bypass_div_clk",
+ };
+ 
++static const char *const lvds0_sels[] = {
++	"clk_dummy",
++	"clk_dummy",
++	"clk_dummy",
++	"clk_dummy",
++	"mipi0_lvds_bypass_clk",
++};
++
++static const char *const lvds1_sels[] = {
++	"clk_dummy",
++	"clk_dummy",
++	"clk_dummy",
++	"clk_dummy",
++	"mipi1_lvds_bypass_clk",
++};
++
+ static const char * const mipi_sels[] = {
+ 	"clk_dummy",
+ 	"clk_dummy",
+@@ -207,9 +223,9 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
+ 	/* MIPI-LVDS SS */
+ 	imx_clk_scu("mipi0_bypass_clk", IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_BYPASS);
+ 	imx_clk_scu("mipi0_pixel_clk", IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_PER);
+-	imx_clk_scu("mipi0_lvds_pixel_clk", IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC2);
+ 	imx_clk_scu("mipi0_lvds_bypass_clk", IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_BYPASS);
+-	imx_clk_scu("mipi0_lvds_phy_clk", IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC3);
++	imx_clk_scu2("mipi0_lvds_pixel_clk", lvds0_sels, ARRAY_SIZE(lvds0_sels), IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC2);
++	imx_clk_scu2("mipi0_lvds_phy_clk", lvds0_sels, ARRAY_SIZE(lvds0_sels), IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC3);
+ 	imx_clk_scu2("mipi0_dsi_tx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_MST_BUS);
+ 	imx_clk_scu2("mipi0_dsi_rx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_SLV_BUS);
+ 	imx_clk_scu2("mipi0_dsi_phy_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_PHY);
+@@ -219,9 +235,9 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
+ 
+ 	imx_clk_scu("mipi1_bypass_clk", IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_BYPASS);
+ 	imx_clk_scu("mipi1_pixel_clk", IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_PER);
+-	imx_clk_scu("mipi1_lvds_pixel_clk", IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC2);
+ 	imx_clk_scu("mipi1_lvds_bypass_clk", IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_BYPASS);
+-	imx_clk_scu("mipi1_lvds_phy_clk", IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC3);
++	imx_clk_scu2("mipi1_lvds_pixel_clk", lvds1_sels, ARRAY_SIZE(lvds1_sels), IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC2);
++	imx_clk_scu2("mipi1_lvds_phy_clk", lvds1_sels, ARRAY_SIZE(lvds1_sels), IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC3);
+ 
+ 	imx_clk_scu2("mipi1_dsi_tx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_MST_BUS);
+ 	imx_clk_scu2("mipi1_dsi_rx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_SLV_BUS);
+diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
+index be89180dd19c1..e48a904c00133 100644
+--- a/drivers/clk/imx/clk-scu.c
++++ b/drivers/clk/imx/clk-scu.c
+@@ -886,8 +886,10 @@ struct clk_hw *__imx_clk_gpr_scu(const char *name, const char * const *parent_na
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if (!imx_clk_is_resource_owned(rsrc_id))
++	if (!imx_clk_is_resource_owned(rsrc_id)) {
++		kfree(clk_node);
+ 		return NULL;
++	}
+ 
+ 	clk = kzalloc(sizeof(*clk), GFP_KERNEL);
+ 	if (!clk) {
+diff --git a/drivers/clk/mmp/clk-of-pxa168.c b/drivers/clk/mmp/clk-of-pxa168.c
+index fb0df64cf053c..c5a7ba1deaa3a 100644
+--- a/drivers/clk/mmp/clk-of-pxa168.c
++++ b/drivers/clk/mmp/clk-of-pxa168.c
+@@ -308,18 +308,21 @@ static void __init pxa168_clk_init(struct device_node *np)
+ 	pxa_unit->mpmu_base = of_iomap(np, 0);
+ 	if (!pxa_unit->mpmu_base) {
+ 		pr_err("failed to map mpmu registers\n");
++		kfree(pxa_unit);
+ 		return;
+ 	}
+ 
+ 	pxa_unit->apmu_base = of_iomap(np, 1);
+ 	if (!pxa_unit->apmu_base) {
+ 		pr_err("failed to map apmu registers\n");
++		kfree(pxa_unit);
+ 		return;
+ 	}
+ 
+ 	pxa_unit->apbc_base = of_iomap(np, 2);
+ 	if (!pxa_unit->apbc_base) {
+ 		pr_err("failed to map apbc registers\n");
++		kfree(pxa_unit);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index a148ff1f0872c..a4f6884416a04 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -4545,6 +4545,7 @@ struct caam_hash_alg {
+ 	struct list_head entry;
+ 	struct device *dev;
+ 	int alg_type;
++	bool is_hmac;
+ 	struct ahash_alg ahash_alg;
+ };
+ 
+@@ -4571,7 +4572,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
+ 
+ 	ctx->dev = caam_hash->dev;
+ 
+-	if (alg->setkey) {
++	if (caam_hash->is_hmac) {
+ 		ctx->adata.key_dma = dma_map_single_attrs(ctx->dev, ctx->key,
+ 							  ARRAY_SIZE(ctx->key),
+ 							  DMA_TO_DEVICE,
+@@ -4611,7 +4612,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
+ 	 * For keyed hash algorithms shared descriptors
+ 	 * will be created later in setkey() callback
+ 	 */
+-	return alg->setkey ? 0 : ahash_set_sh_desc(ahash);
++	return caam_hash->is_hmac ? 0 : ahash_set_sh_desc(ahash);
+ }
+ 
+ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
+@@ -4646,12 +4647,14 @@ static struct caam_hash_alg *caam_hash_alloc(struct device *dev,
+ 			 template->hmac_name);
+ 		snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ 			 template->hmac_driver_name);
++		t_alg->is_hmac = true;
+ 	} else {
+ 		snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s",
+ 			 template->name);
+ 		snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ 			 template->driver_name);
+ 		t_alg->ahash_alg.setkey = NULL;
++		t_alg->is_hmac = false;
+ 	}
+ 	alg->cra_module = THIS_MODULE;
+ 	alg->cra_init = caam_hash_cra_init;
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index 290c8500c247f..fdd724228c2fa 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -1753,6 +1753,7 @@ static struct caam_hash_template driver_hash[] = {
+ struct caam_hash_alg {
+ 	struct list_head entry;
+ 	int alg_type;
++	bool is_hmac;
+ 	struct ahash_engine_alg ahash_alg;
+ };
+ 
+@@ -1804,7 +1805,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
+ 	} else {
+ 		if (priv->era >= 6) {
+ 			ctx->dir = DMA_BIDIRECTIONAL;
+-			ctx->key_dir = alg->setkey ? DMA_TO_DEVICE : DMA_NONE;
++			ctx->key_dir = caam_hash->is_hmac ? DMA_TO_DEVICE : DMA_NONE;
+ 		} else {
+ 			ctx->dir = DMA_TO_DEVICE;
+ 			ctx->key_dir = DMA_NONE;
+@@ -1862,7 +1863,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
+ 	 * For keyed hash algorithms shared descriptors
+ 	 * will be created later in setkey() callback
+ 	 */
+-	return alg->setkey ? 0 : ahash_set_sh_desc(ahash);
++	return caam_hash->is_hmac ? 0 : ahash_set_sh_desc(ahash);
+ }
+ 
+ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
+@@ -1915,12 +1916,14 @@ caam_hash_alloc(struct caam_hash_template *template,
+ 			 template->hmac_name);
+ 		snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ 			 template->hmac_driver_name);
++		t_alg->is_hmac = true;
+ 	} else {
+ 		snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s",
+ 			 template->name);
+ 		snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ 			 template->driver_name);
+ 		halg->setkey = NULL;
++		t_alg->is_hmac = false;
+ 	}
+ 	alg->cra_module = THIS_MODULE;
+ 	alg->cra_init = caam_hash_cra_init;
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptlf.c b/drivers/crypto/marvell/octeontx2/otx2_cptlf.c
+index 6edd27ff8c4e3..e4bd3f030ceca 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptlf.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptlf.c
+@@ -419,8 +419,8 @@ int otx2_cptlf_init(struct otx2_cptlfs_info *lfs, u8 eng_grp_mask, int pri,
+ 	return 0;
+ 
+ free_iq:
+-	otx2_cpt_free_instruction_queues(lfs);
+ 	cptlf_hw_cleanup(lfs);
++	otx2_cpt_free_instruction_queues(lfs);
+ detach_rsrcs:
+ 	otx2_cpt_detach_rsrcs_msg(lfs);
+ clear_lfs_num:
+@@ -431,11 +431,13 @@ EXPORT_SYMBOL_NS_GPL(otx2_cptlf_init, CRYPTO_DEV_OCTEONTX2_CPT);
+ 
+ void otx2_cptlf_shutdown(struct otx2_cptlfs_info *lfs)
+ {
+-	lfs->lfs_num = 0;
+ 	/* Cleanup LFs hardware side */
+ 	cptlf_hw_cleanup(lfs);
++	/* Free instruction queues */
++	otx2_cpt_free_instruction_queues(lfs);
+ 	/* Send request to detach LFs */
+ 	otx2_cpt_detach_rsrcs_msg(lfs);
++	lfs->lfs_num = 0;
+ }
+ EXPORT_SYMBOL_NS_GPL(otx2_cptlf_shutdown, CRYPTO_DEV_OCTEONTX2_CPT);
+ 
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c
+index bac729c885f96..215a1b17b6ce0 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c
+@@ -249,8 +249,11 @@ static void cptvf_lf_shutdown(struct otx2_cptlfs_info *lfs)
+ 	otx2_cptlf_unregister_interrupts(lfs);
+ 	/* Cleanup LFs software side */
+ 	lf_sw_cleanup(lfs);
++	/* Free instruction queues */
++	otx2_cpt_free_instruction_queues(lfs);
+ 	/* Send request to detach LFs */
+ 	otx2_cpt_detach_rsrcs_msg(lfs);
++	lfs->lfs_num = 0;
+ }
+ 
+ static int cptvf_lf_init(struct otx2_cptvf_dev *cptvf)
+diff --git a/drivers/crypto/starfive/jh7110-cryp.c b/drivers/crypto/starfive/jh7110-cryp.c
+index 3a67ddc4d9367..4f5b6818208dc 100644
+--- a/drivers/crypto/starfive/jh7110-cryp.c
++++ b/drivers/crypto/starfive/jh7110-cryp.c
+@@ -168,7 +168,7 @@ static int starfive_cryp_probe(struct platform_device *pdev)
+ 	ret = devm_request_irq(&pdev->dev, irq, starfive_cryp_irq, 0, pdev->name,
+ 			       (void *)cryp);
+ 	if (ret)
+-		return dev_err_probe(&pdev->dev, irq,
++		return dev_err_probe(&pdev->dev, ret,
+ 				     "Failed to register interrupt handler\n");
+ 
+ 	clk_prepare_enable(cryp->hclk);
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index b2d5c8921ab36..b0cf6d2fd352f 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -104,7 +104,7 @@ static struct stm32_crc *stm32_crc_get_next_crc(void)
+ 	struct stm32_crc *crc;
+ 
+ 	spin_lock_bh(&crc_list.lock);
+-	crc = list_first_entry(&crc_list.dev_list, struct stm32_crc, list);
++	crc = list_first_entry_or_null(&crc_list.dev_list, struct stm32_crc, list);
+ 	if (crc)
+ 		list_move_tail(&crc->list, &crc_list.dev_list);
+ 	spin_unlock_bh(&crc_list.lock);
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 907f50ab70ed9..7162d2bad4465 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -461,10 +461,14 @@ static void devfreq_monitor(struct work_struct *work)
+ 	if (err)
+ 		dev_err(&devfreq->dev, "dvfs failed with (%d) error\n", err);
+ 
++	if (devfreq->stop_polling)
++		goto out;
++
+ 	queue_delayed_work(devfreq_wq, &devfreq->work,
+ 				msecs_to_jiffies(devfreq->profile->polling_ms));
+-	mutex_unlock(&devfreq->lock);
+ 
++out:
++	mutex_unlock(&devfreq->lock);
+ 	trace_devfreq_monitor(devfreq);
+ }
+ 
+@@ -483,6 +487,10 @@ void devfreq_monitor_start(struct devfreq *devfreq)
+ 	if (IS_SUPPORTED_FLAG(devfreq->governor->flags, IRQ_DRIVEN))
+ 		return;
+ 
++	mutex_lock(&devfreq->lock);
++	if (delayed_work_pending(&devfreq->work))
++		goto out;
++
+ 	switch (devfreq->profile->timer) {
+ 	case DEVFREQ_TIMER_DEFERRABLE:
+ 		INIT_DEFERRABLE_WORK(&devfreq->work, devfreq_monitor);
+@@ -491,12 +499,16 @@ void devfreq_monitor_start(struct devfreq *devfreq)
+ 		INIT_DELAYED_WORK(&devfreq->work, devfreq_monitor);
+ 		break;
+ 	default:
+-		return;
++		goto out;
+ 	}
+ 
+ 	if (devfreq->profile->polling_ms)
+ 		queue_delayed_work(devfreq_wq, &devfreq->work,
+ 			msecs_to_jiffies(devfreq->profile->polling_ms));
++
++out:
++	devfreq->stop_polling = false;
++	mutex_unlock(&devfreq->lock);
+ }
+ EXPORT_SYMBOL(devfreq_monitor_start);
+ 
+@@ -513,6 +525,14 @@ void devfreq_monitor_stop(struct devfreq *devfreq)
+ 	if (IS_SUPPORTED_FLAG(devfreq->governor->flags, IRQ_DRIVEN))
+ 		return;
+ 
++	mutex_lock(&devfreq->lock);
++	if (devfreq->stop_polling) {
++		mutex_unlock(&devfreq->lock);
++		return;
++	}
++
++	devfreq->stop_polling = true;
++	mutex_unlock(&devfreq->lock);
+ 	cancel_delayed_work_sync(&devfreq->work);
+ }
+ EXPORT_SYMBOL(devfreq_monitor_stop);
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index 6f7a60d2ed916..e7f55c021e562 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -1280,8 +1280,6 @@ int extcon_dev_register(struct extcon_dev *edev)
+ 
+ 	edev->id = ret;
+ 
+-	dev_set_name(&edev->dev, "extcon%d", edev->id);
+-
+ 	ret = extcon_alloc_cables(edev);
+ 	if (ret < 0)
+ 		goto err_alloc_cables;
+@@ -1310,6 +1308,7 @@ int extcon_dev_register(struct extcon_dev *edev)
+ 	RAW_INIT_NOTIFIER_HEAD(&edev->nh_all);
+ 
+ 	dev_set_drvdata(&edev->dev, edev);
++	dev_set_name(&edev->dev, "extcon%d", edev->id);
+ 	edev->state = 0;
+ 
+ 	ret = device_register(&edev->dev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+index 02f4c6f9d4f68..576067d66bb9a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
++++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+@@ -330,6 +330,7 @@ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
+ {
+ 	struct list_head *reset_device_list = reset_context->reset_device_list;
+ 	struct amdgpu_device *tmp_adev = NULL;
++	struct amdgpu_ras *con;
+ 	int r;
+ 
+ 	if (reset_device_list == NULL)
+@@ -355,7 +356,30 @@ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
+ 		 */
+ 		amdgpu_register_gpu_instance(tmp_adev);
+ 
+-		/* Resume RAS */
++		/* Resume RAS, ecc_irq */
++		con = amdgpu_ras_get_context(tmp_adev);
++		if (!amdgpu_sriov_vf(tmp_adev) && con) {
++			if (tmp_adev->sdma.ras &&
++				tmp_adev->sdma.ras->ras_block.ras_late_init) {
++				r = tmp_adev->sdma.ras->ras_block.ras_late_init(tmp_adev,
++						&tmp_adev->sdma.ras->ras_block.ras_comm);
++				if (r) {
++					dev_err(tmp_adev->dev, "SDMA failed to execute ras_late_init! ret:%d\n", r);
++					goto end;
++				}
++			}
++
++			if (tmp_adev->gfx.ras &&
++				tmp_adev->gfx.ras->ras_block.ras_late_init) {
++				r = tmp_adev->gfx.ras->ras_block.ras_late_init(tmp_adev,
++						&tmp_adev->gfx.ras->ras_block.ras_comm);
++				if (r) {
++					dev_err(tmp_adev->dev, "GFX failed to execute ras_late_init! ret:%d\n", r);
++					goto end;
++				}
++			}
++		}
++
+ 		amdgpu_ras_resume(tmp_adev);
+ 
+ 		/* Update PSP FW topology after reset */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
+index 469785d337911..1ef758ac5076e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
+@@ -90,7 +90,7 @@ struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
+ 		return NULL;
+ 
+ 	fence = container_of(f, struct amdgpu_amdkfd_fence, base);
+-	if (fence && f->ops == &amdkfd_fence_ops)
++	if (f->ops == &amdkfd_fence_ops)
+ 		return fence;
+ 
+ 	return NULL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 3677d644183b8..9257c9af3fee6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -1485,6 +1485,7 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
+ 				return true;
+ 
+ 			fw_ver = *((uint32_t *)adev->pm.fw->data + 69);
++			release_firmware(adev->pm.fw);
+ 			if (fw_ver < 0x00160e00)
+ 				return true;
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index d2f273d77e595..55784a9f26c4c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -1045,21 +1045,28 @@ int amdgpu_gmc_vram_checking(struct amdgpu_device *adev)
+ 	 * seconds, so here, we just pick up three parts for emulation.
+ 	 */
+ 	ret = memcmp(vram_ptr, cptr, 10);
+-	if (ret)
+-		return ret;
++	if (ret) {
++		ret = -EIO;
++		goto release_buffer;
++	}
+ 
+ 	ret = memcmp(vram_ptr + (size / 2), cptr, 10);
+-	if (ret)
+-		return ret;
++	if (ret) {
++		ret = -EIO;
++		goto release_buffer;
++	}
+ 
+ 	ret = memcmp(vram_ptr + size - 10, cptr, 10);
+-	if (ret)
+-		return ret;
++	if (ret) {
++		ret = -EIO;
++		goto release_buffer;
++	}
+ 
++release_buffer:
+ 	amdgpu_bo_free_kernel(&vram_bo, &vram_gpu,
+ 			&vram_ptr);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static ssize_t current_memory_partition_show(
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mca.c
+index cf33eb219e257..061d88f4480d7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mca.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mca.c
+@@ -351,6 +351,9 @@ int amdgpu_mca_smu_get_mca_entry(struct amdgpu_device *adev, enum amdgpu_mca_err
+ 	const struct amdgpu_mca_smu_funcs *mca_funcs = adev->mca.mca_funcs;
+ 	int count;
+ 
++	if (!mca_funcs || !mca_funcs->mca_get_mca_entry)
++		return -EOPNOTSUPP;
++
+ 	switch (type) {
+ 	case AMDGPU_MCA_ERROR_TYPE_UE:
+ 		count = mca_funcs->max_ue_count;
+@@ -365,10 +368,7 @@ int amdgpu_mca_smu_get_mca_entry(struct amdgpu_device *adev, enum amdgpu_mca_err
+ 	if (idx >= count)
+ 		return -EINVAL;
+ 
+-	if (mca_funcs && mca_funcs->mca_get_mca_entry)
+-		return mca_funcs->mca_get_mca_entry(adev, type, idx, entry);
+-
+-	return -EOPNOTSUPP;
++	return mca_funcs->mca_get_mca_entry(adev, type, idx, entry);
+ }
+ 
+ #if defined(CONFIG_DEBUG_FS)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 9ddbf1494326a..30c0108366587 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -886,6 +886,11 @@ int amdgpu_mes_set_shader_debugger(struct amdgpu_device *adev,
+ 	op_input.op = MES_MISC_OP_SET_SHADER_DEBUGGER;
+ 	op_input.set_shader_debugger.process_context_addr = process_context_addr;
+ 	op_input.set_shader_debugger.flags.u32all = flags;
++
++	/* use amdgpu mes_flush_shader_debugger instead */
++	if (op_input.set_shader_debugger.flags.process_ctx_flush)
++		return -EINVAL;
++
+ 	op_input.set_shader_debugger.spi_gdbg_per_vmid_cntl = spi_gdbg_per_vmid_cntl;
+ 	memcpy(op_input.set_shader_debugger.tcp_watch_cntl, tcp_watch_cntl,
+ 			sizeof(op_input.set_shader_debugger.tcp_watch_cntl));
+@@ -905,6 +910,32 @@ int amdgpu_mes_set_shader_debugger(struct amdgpu_device *adev,
+ 	return r;
+ }
+ 
++int amdgpu_mes_flush_shader_debugger(struct amdgpu_device *adev,
++				     uint64_t process_context_addr)
++{
++	struct mes_misc_op_input op_input = {0};
++	int r;
++
++	if (!adev->mes.funcs->misc_op) {
++		DRM_ERROR("mes flush shader debugger is not supported!\n");
++		return -EINVAL;
++	}
++
++	op_input.op = MES_MISC_OP_SET_SHADER_DEBUGGER;
++	op_input.set_shader_debugger.process_context_addr = process_context_addr;
++	op_input.set_shader_debugger.flags.process_ctx_flush = true;
++
++	amdgpu_mes_lock(&adev->mes);
++
++	r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++	if (r)
++		DRM_ERROR("failed to set_shader_debugger\n");
++
++	amdgpu_mes_unlock(&adev->mes);
++
++	return r;
++}
++
+ static void
+ amdgpu_mes_ring_to_queue_props(struct amdgpu_device *adev,
+ 			       struct amdgpu_ring *ring,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+index a27b424ffe005..c2c88b772361d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+@@ -291,9 +291,10 @@ struct mes_misc_op_input {
+ 			uint64_t process_context_addr;
+ 			union {
+ 				struct {
+-					uint64_t single_memop : 1;
+-					uint64_t single_alu_op : 1;
+-					uint64_t reserved: 30;
++					uint32_t single_memop : 1;
++					uint32_t single_alu_op : 1;
++					uint32_t reserved: 29;
++					uint32_t process_ctx_flush: 1;
+ 				};
+ 				uint32_t u32all;
+ 			} flags;
+@@ -369,7 +370,8 @@ int amdgpu_mes_set_shader_debugger(struct amdgpu_device *adev,
+ 				const uint32_t *tcp_watch_cntl,
+ 				uint32_t flags,
+ 				bool trap_en);
+-
++int amdgpu_mes_flush_shader_debugger(struct amdgpu_device *adev,
++				uint64_t process_context_addr);
+ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
+ 			int queue_type, int idx,
+ 			struct amdgpu_mes_ctx_data *ctx_data,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 5ad03f2afdb45..425cebcc5cbff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1245,19 +1245,15 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
+  * amdgpu_bo_move_notify - notification about a memory move
+  * @bo: pointer to a buffer object
+  * @evict: if this move is evicting the buffer from the graphics address space
+- * @new_mem: new information of the bufer object
+  *
+  * Marks the corresponding &amdgpu_bo buffer object as invalid, also performs
+  * bookkeeping.
+  * TTM driver callback which is called when ttm moves a buffer.
+  */
+-void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
+-			   bool evict,
+-			   struct ttm_resource *new_mem)
++void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
+ 	struct amdgpu_bo *abo;
+-	struct ttm_resource *old_mem = bo->resource;
+ 
+ 	if (!amdgpu_bo_is_amdgpu_bo(bo))
+ 		return;
+@@ -1274,13 +1270,6 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
+ 	/* remember the eviction */
+ 	if (evict)
+ 		atomic64_inc(&adev->num_evictions);
+-
+-	/* update statistics */
+-	if (!new_mem)
+-		return;
+-
+-	/* move_notify is called before move happens */
+-	trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
+ }
+ 
+ void amdgpu_bo_get_memory(struct amdgpu_bo *bo,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+index d28e21baef16e..a3ea8a82db23a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+@@ -344,9 +344,7 @@ int amdgpu_bo_set_metadata (struct amdgpu_bo *bo, void *metadata,
+ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
+ 			   size_t buffer_size, uint32_t *metadata_size,
+ 			   uint64_t *flags);
+-void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
+-			   bool evict,
+-			   struct ttm_resource *new_mem);
++void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict);
+ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo);
+ vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
+ void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 63fb4cd85e53b..4a3726bb6da1b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1174,6 +1174,9 @@ static int amdgpu_ras_query_error_status_helper(struct amdgpu_device *adev,
+ 	enum amdgpu_ras_block blk = info ? info->head.block : AMDGPU_RAS_BLOCK_COUNT;
+ 	struct amdgpu_ras_block_object *block_obj = NULL;
+ 
++	if (blk == AMDGPU_RAS_BLOCK_COUNT)
++		return -EINVAL;
++
+ 	if (error_query_mode == AMDGPU_RAS_INVALID_ERROR_QUERY)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+index dcd8c066bc1f5..1b013a44ca99a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+@@ -191,7 +191,8 @@ static bool amdgpu_sync_test_fence(struct amdgpu_device *adev,
+ 
+ 	/* Never sync to VM updates either. */
+ 	if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
+-	    owner != AMDGPU_FENCE_OWNER_UNDEFINED)
++	    owner != AMDGPU_FENCE_OWNER_UNDEFINED &&
++	    owner != AMDGPU_FENCE_OWNER_KFD)
+ 		return false;
+ 
+ 	/* Ignore fences depending on the sync mode */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index ab4a762aed5bd..75c9fd2c6c2a1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -545,10 +545,11 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
+ 			return r;
+ 	}
+ 
++	trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
+ out:
+ 	/* update statistics */
+ 	atomic64_add(bo->base.size, &adev->num_bytes_moved);
+-	amdgpu_bo_move_notify(bo, evict, new_mem);
++	amdgpu_bo_move_notify(bo, evict);
+ 	return 0;
+ }
+ 
+@@ -1553,7 +1554,7 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
+ static void
+ amdgpu_bo_delete_mem_notify(struct ttm_buffer_object *bo)
+ {
+-	amdgpu_bo_move_notify(bo, false, NULL);
++	amdgpu_bo_move_notify(bo, false);
+ }
+ 
+ static struct ttm_device_funcs amdgpu_bo_driver = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+index b14127429f303..0efb2568cb65f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+@@ -1397,9 +1397,13 @@ int amdgpu_ucode_request(struct amdgpu_device *adev, const struct firmware **fw,
+ 
+ 	if (err)
+ 		return -ENODEV;
++
+ 	err = amdgpu_ucode_validate(*fw);
+-	if (err)
++	if (err) {
+ 		dev_dbg(adev->dev, "\"%s\" failed to validate\n", fw_name);
++		release_firmware(*fw);
++		*fw = NULL;
++	}
+ 
+ 	return err;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+index 53a2ba5fcf4ba..22175da0e16af 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+@@ -102,7 +102,9 @@ static void gfxhub_v1_0_init_system_aperture_regs(struct amdgpu_device *adev)
+ 		WREG32_SOC15_RLC(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
+ 			min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
+ 
+-		if (adev->apu_flags & AMD_APU_IS_RAVEN2)
++		if (adev->apu_flags & (AMD_APU_IS_RAVEN2 |
++				       AMD_APU_IS_RENOIR |
++				       AMD_APU_IS_GREEN_SARDINE))
+ 		       /*
+ 			* Raven2 has a HW issue that it is unable to use the
+ 			* vram which is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR.
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c
+index 55423ff1bb492..95d06da544e2a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_2.c
+@@ -139,7 +139,9 @@ gfxhub_v1_2_xcc_init_system_aperture_regs(struct amdgpu_device *adev,
+ 			WREG32_SOC15_RLC(GC, GET_INST(GC, i), regMC_VM_SYSTEM_APERTURE_LOW_ADDR,
+ 				min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
+ 
+-			if (adev->apu_flags & AMD_APU_IS_RAVEN2)
++			if (adev->apu_flags & (AMD_APU_IS_RAVEN2 |
++					       AMD_APU_IS_RENOIR |
++					       AMD_APU_IS_GREEN_SARDINE))
+ 			       /*
+ 				* Raven2 has a HW issue that it is unable to use the
+ 				* vram which is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR.
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index a5a05c16c10d7..6c51856088546 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -1041,6 +1041,10 @@ static int gmc_v10_0_hw_fini(void *handle)
+ 
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+ 
++	if (adev->gmc.ecc_irq.funcs &&
++		amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC))
++		amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+index 23d7b548d13f4..c9c653cfc765b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+@@ -941,6 +941,11 @@ static int gmc_v11_0_hw_fini(void *handle)
+ 	}
+ 
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
++
++	if (adev->gmc.ecc_irq.funcs &&
++		amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC))
++		amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
++
+ 	gmc_v11_0_gart_disable(adev);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+index 42e103d7077d5..59d9215e55562 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
+@@ -915,8 +915,8 @@ static int gmc_v6_0_hw_init(void *handle)
+ 
+ 	if (amdgpu_emu_mode == 1)
+ 		return amdgpu_gmc_vram_checking(adev);
+-	else
+-		return r;
++
++	return 0;
+ }
+ 
+ static int gmc_v6_0_hw_fini(void *handle)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+index efc16e580f1e2..45a2f8e031a2c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+@@ -1099,8 +1099,8 @@ static int gmc_v7_0_hw_init(void *handle)
+ 
+ 	if (amdgpu_emu_mode == 1)
+ 		return amdgpu_gmc_vram_checking(adev);
+-	else
+-		return r;
++
++	return 0;
+ }
+ 
+ static int gmc_v7_0_hw_fini(void *handle)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+index ff4ae73d27ecd..4422b27a3cc2f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+@@ -1219,8 +1219,8 @@ static int gmc_v8_0_hw_init(void *handle)
+ 
+ 	if (amdgpu_emu_mode == 1)
+ 		return amdgpu_gmc_vram_checking(adev);
+-	else
+-		return r;
++
++	return 0;
+ }
+ 
+ static int gmc_v8_0_hw_fini(void *handle)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 77e625f24cd07..ad91329f227df 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -2341,8 +2341,8 @@ static int gmc_v9_0_hw_init(void *handle)
+ 
+ 	if (amdgpu_emu_mode == 1)
+ 		return amdgpu_gmc_vram_checking(adev);
+-	else
+-		return r;
++
++	return 0;
+ }
+ 
+ /**
+@@ -2381,6 +2381,10 @@ static int gmc_v9_0_hw_fini(void *handle)
+ 
+ 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+ 
++	if (adev->gmc.ecc_irq.funcs &&
++		amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC))
++		amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+index 843219a917360..e3ddd22aa1728 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+@@ -96,7 +96,9 @@ static void mmhub_v1_0_init_system_aperture_regs(struct amdgpu_device *adev)
+ 	WREG32_SOC15(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
+ 		     min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
+ 
+-	if (adev->apu_flags & AMD_APU_IS_RAVEN2)
++	if (adev->apu_flags & (AMD_APU_IS_RAVEN2 |
++			       AMD_APU_IS_RENOIR |
++			       AMD_APU_IS_GREEN_SARDINE))
+ 		/*
+ 		 * Raven2 has a HW issue that it is unable to use the vram which
+ 		 * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 77f493262e058..43eff221eae58 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -87,6 +87,8 @@ void kfd_process_dequeue_from_device(struct kfd_process_device *pdd)
+ 		return;
+ 
+ 	dev->dqm->ops.process_termination(dev->dqm, &pdd->qpd);
++	if (dev->kfd->shared_resources.enable_mes)
++		amdgpu_mes_flush_shader_debugger(dev->adev, pdd->proc_ctx_gpu_addr);
+ 	pdd->already_dequeued = true;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index a15bfb5223e8f..9af1d094385af 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -400,14 +400,9 @@ static void svm_range_bo_release(struct kref *kref)
+ 		spin_lock(&svm_bo->list_lock);
+ 	}
+ 	spin_unlock(&svm_bo->list_lock);
+-	if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base)) {
+-		/* We're not in the eviction worker.
+-		 * Signal the fence and synchronize with any
+-		 * pending eviction work.
+-		 */
++	if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base))
++		/* We're not in the eviction worker. Signal the fence. */
+ 		dma_fence_signal(&svm_bo->eviction_fence->base);
+-		cancel_work_sync(&svm_bo->eviction_work);
+-	}
+ 	dma_fence_put(&svm_bo->eviction_fence->base);
+ 	amdgpu_bo_unref(&svm_bo->bo);
+ 	kfree(svm_bo);
+@@ -2371,8 +2366,10 @@ retry:
+ 		mutex_unlock(&svms->lock);
+ 		mmap_write_unlock(mm);
+ 
+-		/* Pairs with mmget in svm_range_add_list_work */
+-		mmput(mm);
++		/* Pairs with mmget in svm_range_add_list_work. If dropping the
++		 * last mm refcount, schedule release work to avoid circular locking
++		 */
++		mmput_async(mm);
+ 
+ 		spin_lock(&svms->deferred_list_lock);
+ 	}
+@@ -2683,6 +2680,7 @@ svm_range_get_range_boundaries(struct kfd_process *p, int64_t addr,
+ {
+ 	struct vm_area_struct *vma;
+ 	struct interval_tree_node *node;
++	struct rb_node *rb_node;
+ 	unsigned long start_limit, end_limit;
+ 
+ 	vma = vma_lookup(p->mm, addr << PAGE_SHIFT);
+@@ -2702,16 +2700,15 @@ svm_range_get_range_boundaries(struct kfd_process *p, int64_t addr,
+ 	if (node) {
+ 		end_limit = min(end_limit, node->start);
+ 		/* Last range that ends before the fault address */
+-		node = container_of(rb_prev(&node->rb),
+-				    struct interval_tree_node, rb);
++		rb_node = rb_prev(&node->rb);
+ 	} else {
+ 		/* Last range must end before addr because
+ 		 * there was no range after addr
+ 		 */
+-		node = container_of(rb_last(&p->svms.objects.rb_root),
+-				    struct interval_tree_node, rb);
++		rb_node = rb_last(&p->svms.objects.rb_root);
+ 	}
+-	if (node) {
++	if (rb_node) {
++		node = container_of(rb_node, struct interval_tree_node, rb);
+ 		if (node->last >= addr) {
+ 			WARN(1, "Overlap with prev node and page fault addr\n");
+ 			return -EFAULT;
+@@ -3447,13 +3444,14 @@ svm_range_trigger_migration(struct mm_struct *mm, struct svm_range *prange,
+ 
+ int svm_range_schedule_evict_svm_bo(struct amdgpu_amdkfd_fence *fence)
+ {
+-	if (!fence)
+-		return -EINVAL;
+-
+-	if (dma_fence_is_signaled(&fence->base))
+-		return 0;
+-
+-	if (fence->svm_bo) {
++	/* Dereferencing fence->svm_bo is safe here because the fence hasn't
++	 * signaled yet and we're under the protection of the fence->lock.
++	 * After the fence is signaled in svm_range_bo_release, we cannot get
++	 * here any more.
++	 *
++	 * Reference is dropped in svm_range_evict_svm_bo_worker.
++	 */
++	if (svm_bo_ref_unless_zero(fence->svm_bo)) {
+ 		WRITE_ONCE(fence->svm_bo->evicting, 1);
+ 		schedule_work(&fence->svm_bo->eviction_work);
+ 	}
+@@ -3468,8 +3466,6 @@ static void svm_range_evict_svm_bo_worker(struct work_struct *work)
+ 	int r = 0;
+ 
+ 	svm_bo = container_of(work, struct svm_range_bo, eviction_work);
+-	if (!svm_bo_ref_unless_zero(svm_bo))
+-		return; /* svm_bo was freed while eviction was pending */
+ 
+ 	if (mmget_not_zero(svm_bo->eviction_fence->mm)) {
+ 		mm = svm_bo->eviction_fence->mm;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 58d775a0668db..e5f7c92eebcbb 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -1452,17 +1452,19 @@ static int kfd_add_peer_prop(struct kfd_topology_device *kdev,
+ 		/* CPU->CPU  link*/
+ 		cpu_dev = kfd_topology_device_by_proximity_domain(iolink1->node_to);
+ 		if (cpu_dev) {
+-			list_for_each_entry(iolink3, &cpu_dev->io_link_props, list)
+-				if (iolink3->node_to == iolink2->node_to)
+-					break;
+-
+-			props->weight += iolink3->weight;
+-			props->min_latency += iolink3->min_latency;
+-			props->max_latency += iolink3->max_latency;
+-			props->min_bandwidth = min(props->min_bandwidth,
+-							iolink3->min_bandwidth);
+-			props->max_bandwidth = min(props->max_bandwidth,
+-							iolink3->max_bandwidth);
++			list_for_each_entry(iolink3, &cpu_dev->io_link_props, list) {
++				if (iolink3->node_to != iolink2->node_to)
++					continue;
++
++				props->weight += iolink3->weight;
++				props->min_latency += iolink3->min_latency;
++				props->max_latency += iolink3->max_latency;
++				props->min_bandwidth = min(props->min_bandwidth,
++							   iolink3->min_bandwidth);
++				props->max_bandwidth = min(props->max_bandwidth,
++							   iolink3->max_bandwidth);
++				break;
++			}
+ 		} else {
+ 			WARN(1, "CPU node not found");
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 9dbbaeb8c6cf1..affc628004ffb 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -65,7 +65,6 @@
+ #include "amdgpu_dm_debugfs.h"
+ #endif
+ #include "amdgpu_dm_psr.h"
+-#include "amdgpu_dm_replay.h"
+ 
+ #include "ivsrcid/ivsrcid_vislands30.h"
+ 
+@@ -1258,7 +1257,9 @@ static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_
+ 	/* AGP aperture is disabled */
+ 	if (agp_bot > agp_top) {
+ 		logical_addr_low = adev->gmc.fb_start >> 18;
+-		if (adev->apu_flags & AMD_APU_IS_RAVEN2)
++		if (adev->apu_flags & (AMD_APU_IS_RAVEN2 |
++				       AMD_APU_IS_RENOIR |
++				       AMD_APU_IS_GREEN_SARDINE))
+ 			/*
+ 			 * Raven2 has a HW issue that it is unable to use the vram which
+ 			 * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
+@@ -1270,7 +1271,9 @@ static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_
+ 			logical_addr_high = adev->gmc.fb_end >> 18;
+ 	} else {
+ 		logical_addr_low = min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18;
+-		if (adev->apu_flags & AMD_APU_IS_RAVEN2)
++		if (adev->apu_flags & (AMD_APU_IS_RAVEN2 |
++				       AMD_APU_IS_RENOIR |
++				       AMD_APU_IS_GREEN_SARDINE))
+ 			/*
+ 			 * Raven2 has a HW issue that it is unable to use the vram which
+ 			 * is out of MC_VM_SYSTEM_APERTURE_HIGH_ADDR. So here is the
+@@ -4348,7 +4351,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 	enum dc_connection_type new_connection_type = dc_connection_none;
+ 	const struct dc_plane_cap *plane;
+ 	bool psr_feature_enabled = false;
+-	bool replay_feature_enabled = false;
+ 	int max_overlay = dm->dc->caps.max_slave_planes;
+ 
+ 	dm->display_indexes_num = dm->dc->caps.max_streams;
+@@ -4460,20 +4462,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 		}
+ 	}
+ 
+-	if (!(amdgpu_dc_debug_mask & DC_DISABLE_REPLAY)) {
+-		switch (adev->ip_versions[DCE_HWIP][0]) {
+-		case IP_VERSION(3, 1, 4):
+-		case IP_VERSION(3, 1, 5):
+-		case IP_VERSION(3, 1, 6):
+-		case IP_VERSION(3, 2, 0):
+-		case IP_VERSION(3, 2, 1):
+-			replay_feature_enabled = true;
+-			break;
+-		default:
+-			replay_feature_enabled = amdgpu_dc_feature_mask & DC_REPLAY_MASK;
+-			break;
+-		}
+-	}
+ 	/* loops over all connectors on the board */
+ 	for (i = 0; i < link_cnt; i++) {
+ 		struct dc_link *link = NULL;
+@@ -4522,12 +4510,6 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 				amdgpu_dm_update_connector_after_detect(aconnector);
+ 				setup_backlight_device(dm, aconnector);
+ 
+-				/*
+-				 * Disable psr if replay can be enabled
+-				 */
+-				if (replay_feature_enabled && amdgpu_dm_setup_replay(link, aconnector))
+-					psr_feature_enabled = false;
+-
+ 				if (psr_feature_enabled)
+ 					amdgpu_dm_set_psr_caps(link);
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index cb0b48bb2a7dc..d2834ad85a544 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -29,7 +29,6 @@
+ #include "dc.h"
+ #include "amdgpu.h"
+ #include "amdgpu_dm_psr.h"
+-#include "amdgpu_dm_replay.h"
+ #include "amdgpu_dm_crtc.h"
+ #include "amdgpu_dm_plane.h"
+ #include "amdgpu_dm_trace.h"
+@@ -124,12 +123,7 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
+ 	 * fill_dc_dirty_rects().
+ 	 */
+ 	if (vblank_work->stream && vblank_work->stream->link) {
+-		/*
+-		 * Prioritize replay, instead of psr
+-		 */
+-		if (vblank_work->stream->link->replay_settings.replay_feature_enabled)
+-			amdgpu_dm_replay_enable(vblank_work->stream, false);
+-		else if (vblank_work->enable) {
++		if (vblank_work->enable) {
+ 			if (vblank_work->stream->link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 &&
+ 			    vblank_work->stream->link->psr_settings.psr_allow_active)
+ 				amdgpu_dm_psr_disable(vblank_work->stream);
+@@ -138,7 +132,6 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
+ #ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+ 			   !amdgpu_dm_crc_window_is_activated(&vblank_work->acrtc->base) &&
+ #endif
+-			   vblank_work->stream->link->panel_config.psr.disallow_replay &&
+ 			   vblank_work->acrtc->dm_irq_params.allow_psr_entry) {
+ 			amdgpu_dm_psr_enable(vblank_work->stream);
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+index 59c2a3545db34..a84f1e376dee4 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+@@ -87,6 +87,20 @@ static const struct IP_BASE CLK_BASE = { { { { 0x00016C00, 0x02401800, 0, 0, 0,
+ #define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK	0x0000F000L
+ #define CLK1_CLK_PLL_REQ__FbMult_frac_MASK	0xFFFF0000L
+ 
++#define regCLK1_CLK2_BYPASS_CNTL			0x029c
++#define regCLK1_CLK2_BYPASS_CNTL_BASE_IDX	0
++
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL__SHIFT	0x0
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV__SHIFT	0x10
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK		0x00000007L
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV_MASK		0x000F0000L
++
++#define regCLK6_0_CLK6_spll_field_8				0x464b
++#define regCLK6_0_CLK6_spll_field_8_BASE_IDX	0
++
++#define CLK6_0_CLK6_spll_field_8__spll_ssc_en__SHIFT	0xd
++#define CLK6_0_CLK6_spll_field_8__spll_ssc_en_MASK		0x00002000L
++
+ #define REG(reg_name) \
+ 	(CLK_BASE.instance[0].segment[reg ## reg_name ## _BASE_IDX] + reg ## reg_name)
+ 
+@@ -157,6 +171,37 @@ static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state
+ 	}
+ }
+ 
++bool dcn314_is_spll_ssc_enabled(struct clk_mgr *clk_mgr_base)
++{
++	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
++	uint32_t ssc_enable;
++
++	REG_GET(CLK6_0_CLK6_spll_field_8, spll_ssc_en, &ssc_enable);
++
++	return ssc_enable == 1;
++}
++
++void dcn314_init_clocks(struct clk_mgr *clk_mgr)
++{
++	struct clk_mgr_internal *clk_mgr_int = TO_CLK_MGR_INTERNAL(clk_mgr);
++	uint32_t ref_dtbclk = clk_mgr->clks.ref_dtbclk_khz;
++
++	memset(&(clk_mgr->clks), 0, sizeof(struct dc_clocks));
++	// Assumption is that boot state always supports pstate
++	clk_mgr->clks.ref_dtbclk_khz = ref_dtbclk;	// restore ref_dtbclk
++	clk_mgr->clks.p_state_change_support = true;
++	clk_mgr->clks.prev_p_state_change_support = true;
++	clk_mgr->clks.pwr_state = DCN_PWR_STATE_UNKNOWN;
++	clk_mgr->clks.zstate_support = DCN_ZSTATE_SUPPORT_UNKNOWN;
++
++	// adjust dp_dto reference clock if ssc is enabled, otherwise apply dprefclk
++	if (dcn314_is_spll_ssc_enabled(clk_mgr))
++		clk_mgr->dp_dto_source_clock_in_khz =
++			dce_adjust_dp_ref_freq_for_ss(clk_mgr_int, clk_mgr->dprefclk_khz);
++	else
++		clk_mgr->dp_dto_source_clock_in_khz = clk_mgr->dprefclk_khz;
++}
++
+ void dcn314_update_clocks(struct clk_mgr *clk_mgr_base,
+ 			struct dc_state *context,
+ 			bool safe_to_lower)
+@@ -433,6 +478,11 @@ static DpmClocks314_t dummy_clocks;
+ 
+ static struct dcn314_watermarks dummy_wms = { 0 };
+ 
++static struct dcn314_ss_info_table ss_info_table = {
++	.ss_divider = 1000,
++	.ss_percentage = {0, 0, 375, 375, 375}
++};
++
+ static void dcn314_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn314_watermarks *table)
+ {
+ 	int i, num_valid_sets;
+@@ -705,13 +755,31 @@ static struct clk_mgr_funcs dcn314_funcs = {
+ 	.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+ 	.get_dtb_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
+ 	.update_clocks = dcn314_update_clocks,
+-	.init_clocks = dcn31_init_clocks,
++	.init_clocks = dcn314_init_clocks,
+ 	.enable_pme_wa = dcn314_enable_pme_wa,
+ 	.are_clock_states_equal = dcn314_are_clock_states_equal,
+ 	.notify_wm_ranges = dcn314_notify_wm_ranges
+ };
+ extern struct clk_mgr_funcs dcn3_fpga_funcs;
+ 
++static void dcn314_read_ss_info_from_lut(struct clk_mgr_internal *clk_mgr)
++{
++	uint32_t clock_source;
++	//uint32_t ssc_enable;
++
++	REG_GET(CLK1_CLK2_BYPASS_CNTL, CLK2_BYPASS_SEL, &clock_source);
++	//REG_GET(CLK6_0_CLK6_spll_field_8, spll_ssc_en, &ssc_enable);
++
++	if (dcn314_is_spll_ssc_enabled(&clk_mgr->base) && (clock_source < ARRAY_SIZE(ss_info_table.ss_percentage))) {
++		clk_mgr->dprefclk_ss_percentage = ss_info_table.ss_percentage[clock_source];
++
++		if (clk_mgr->dprefclk_ss_percentage != 0) {
++			clk_mgr->ss_on_dprefclk = true;
++			clk_mgr->dprefclk_ss_divider = ss_info_table.ss_divider;
++		}
++	}
++}
++
+ void dcn314_clk_mgr_construct(
+ 		struct dc_context *ctx,
+ 		struct clk_mgr_dcn314 *clk_mgr,
+@@ -779,6 +847,7 @@ void dcn314_clk_mgr_construct(
+ 	clk_mgr->base.base.dprefclk_khz = 600000;
+ 	clk_mgr->base.base.clks.ref_dtbclk_khz = 600000;
+ 	dce_clock_read_ss_info(&clk_mgr->base);
++	dcn314_read_ss_info_from_lut(&clk_mgr->base);
+ 	/*if bios enabled SS, driver needs to adjust dtb clock, only enable with correct bios*/
+ 
+ 	clk_mgr->base.base.bw_params = &dcn314_bw_params;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.h
+index 171f84340eb2f..002c28e807208 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.h
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.h
+@@ -28,6 +28,8 @@
+ #define __DCN314_CLK_MGR_H__
+ #include "clk_mgr_internal.h"
+ 
++#define DCN314_NUM_CLOCK_SOURCES   5
++
+ struct dcn314_watermarks;
+ 
+ struct dcn314_smu_watermark_set {
+@@ -40,9 +42,18 @@ struct clk_mgr_dcn314 {
+ 	struct dcn314_smu_watermark_set smu_wm_set;
+ };
+ 
++struct dcn314_ss_info_table {
++	uint32_t ss_divider;
++	uint32_t ss_percentage[DCN314_NUM_CLOCK_SOURCES];
++};
++
+ bool dcn314_are_clock_states_equal(struct dc_clocks *a,
+ 		struct dc_clocks *b);
+ 
++bool dcn314_is_spll_ssc_enabled(struct clk_mgr *clk_mgr_base);
++
++void dcn314_init_clocks(struct clk_mgr *clk_mgr);
++
+ void dcn314_update_clocks(struct clk_mgr *clk_mgr_base,
+ 			struct dc_state *context,
+ 			bool safe_to_lower);
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 45ede6440a79f..4ef90a3add1c9 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -126,21 +126,13 @@ static void dcn35_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *
+ 			continue;
+ 		if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal) ||
+ 				     !pipe->stream->link_enc)) {
+-			struct stream_encoder *stream_enc = pipe->stream_res.stream_enc;
+-
+ 			if (disable) {
+-				if (stream_enc && stream_enc->funcs->disable_fifo)
+-					pipe->stream_res.stream_enc->funcs->disable_fifo(stream_enc);
+-
+ 				if (pipe->stream_res.tg && pipe->stream_res.tg->funcs->immediate_disable_crtc)
+ 					pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
+ 
+ 				reset_sync_context_for_pipe(dc, context, i);
+ 			} else {
+ 				pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
+-
+-				if (stream_enc && stream_enc->funcs->enable_fifo)
+-					pipe->stream_res.stream_enc->funcs->enable_fifo(stream_enc);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index bc098098345cf..bbdeda489768b 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1964,6 +1964,10 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ 		wait_for_no_pipes_pending(dc, context);
+ 		/* pplib is notified if disp_num changed */
+ 		dc->hwss.optimize_bandwidth(dc, context);
++		/* Need to do OTG sync again, as the OTG could be out of sync due to
++		 * the OTG workaround applied during the clock update
++		 */
++		dc_trigger_sync(dc, context);
+ 	}
+ 
+ 	if (dc->hwss.update_dsc_pg)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+index e2a3aa8812df4..811474f4419bd 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+@@ -244,7 +244,7 @@ enum pixel_format {
+ #define DC_MAX_DIRTY_RECTS 3
+ struct dc_flip_addrs {
+ 	struct dc_plane_address address;
+-	unsigned int flip_timestamp_in_us;
++	unsigned long long flip_timestamp_in_us;
+ 	bool flip_immediate;
+ 	/* TODO: add flip duration for FreeSync */
+ 	bool triplebuffer_flips;
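/*
 * Rough arithmetic behind widening flip_timestamp_in_us above: a 32-bit
 * microsecond counter wraps after 2^32 us, i.e. roughly every 71.6 minutes,
 * which a long-running session easily exceeds. This rationale is an
 * inference from the type change; the patch itself doesn't state it.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t wrap_us = 1ull << 32; /* 4294967296 us */

	printf("wrap after %llu us = %.1f s = %.1f min\n",
	       (unsigned long long)wrap_us,
	       wrap_us / 1e6, wrap_us / 1e6 / 60.0); /* ~4295.0 s, ~71.6 min */
	return 0;
}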
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c
+index 140598f18bbdd..f0458b8f00af8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c
+@@ -782,7 +782,7 @@ static void get_azalia_clock_info_dp(
+ 	/*audio_dto_module = dpDtoSourceClockInkhz * 10,000;
+ 	 *  [khz] ->[100Hz] */
+ 	azalia_clock_info->audio_dto_module =
+-		pll_info->dp_dto_source_clock_in_khz * 10;
++		pll_info->audio_dto_source_clock_in_khz * 10;
+ }
+ 
+ void dce_aud_wall_dto_setup(
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 5d3f6fa1011e8..970644b695cd4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -975,6 +975,9 @@ static bool dcn31_program_pix_clk(
+ 			look_up_in_video_optimized_rate_tlb(pix_clk_params->requested_pix_clk_100hz / 10);
+ 	struct bp_pixel_clock_parameters bp_pc_params = {0};
+ 	enum transmitter_color_depth bp_pc_colour_depth = TRANSMITTER_COLOR_DEPTH_24;
++
++	if (clock_source->ctx->dc->clk_mgr->dp_dto_source_clock_in_khz != 0)
++		dp_dto_ref_khz = clock_source->ctx->dc->clk_mgr->dp_dto_source_clock_in_khz;
+ 	// For these signal types Driver to program DP_DTO without calling VBIOS Command table
+ 	if (dc_is_dp_signal(pix_clk_params->signal_type) || dc_is_virtual_signal(pix_clk_params->signal_type)) {
+ 		if (e) {
+@@ -1088,6 +1091,10 @@ static bool get_pixel_clk_frequency_100hz(
+ 	struct dce110_clk_src *clk_src = TO_DCE110_CLK_SRC(clock_source);
+ 	unsigned int clock_hz = 0;
+ 	unsigned int modulo_hz = 0;
++	unsigned int dp_dto_ref_khz = clock_source->ctx->dc->clk_mgr->dprefclk_khz;
++
++	if (clock_source->ctx->dc->clk_mgr->dp_dto_source_clock_in_khz != 0)
++		dp_dto_ref_khz = clock_source->ctx->dc->clk_mgr->dp_dto_source_clock_in_khz;
+ 
+ 	if (clock_source->id == CLOCK_SOURCE_ID_DP_DTO) {
+ 		clock_hz = REG_READ(PHASE[inst]);
+@@ -1100,7 +1107,7 @@ static bool get_pixel_clk_frequency_100hz(
+ 			modulo_hz = REG_READ(MODULO[inst]);
+ 			if (modulo_hz)
+ 				*pixel_clk_khz = div_u64((uint64_t)clock_hz*
+-					clock_source->ctx->dc->clk_mgr->dprefclk_khz*10,
++					dp_dto_ref_khz*10,
+ 					modulo_hz);
+ 			else
+ 				*pixel_clk_khz = 0;
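/*
 * A sketch of the DTO ratio math in get_pixel_clk_frequency_100hz above:
 * the output clock is ref_clk scaled by phase/modulo, with the intermediate
 * product kept in 64 bits (the driver's div_u64() call) to avoid overflow.
 * The sample register values are illustrative, not from hardware, and the
 * x10 factor simply mirrors the driver's unit bookkeeping.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t pixel_clk_khz(uint32_t phase, uint32_t modulo, uint32_t ref_khz)
{
	if (!modulo)
		return 0; /* mirror the driver's divide-by-zero guard */
	return (uint32_t)(((uint64_t)phase * ref_khz * 10) / modulo);
}

int main(void)
{
	/* illustrative values only: ~148.5 MHz derived from a 600 MHz DP ref */
	printf("%u kHz\n", pixel_clk_khz(24750, 1000000, 600000));
	return 0;
}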
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c
+index f91e088952755..da94e5309fbaf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c
+@@ -256,6 +256,10 @@ void dcn35_link_encoder_construct(
+ 		enc10->base.features.flags.bits.IS_UHBR10_CAPABLE = bp_cap_info.DP_UHBR10_EN;
+ 		enc10->base.features.flags.bits.IS_UHBR13_5_CAPABLE = bp_cap_info.DP_UHBR13_5_EN;
+ 		enc10->base.features.flags.bits.IS_UHBR20_CAPABLE = bp_cap_info.DP_UHBR20_EN;
++		if (bp_cap_info.DP_IS_USB_C) {
++			/* BIOS has not switched to using CONNECTOR_ID_USBC = 24 yet */
++			enc10->base.features.flags.bits.DP_IS_USB_C = 1;
++		}
+ 
+ 	} else {
+ 		DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
+@@ -264,4 +268,5 @@ void dcn35_link_encoder_construct(
+ 	}
+ 	if (enc10->base.ctx->dc->debug.hdmi20_disable)
+ 		enc10->base.features.flags.bits.HDMI_6GB_EN = 0;
++
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
+index cbdfb762c10c5..6c84b0fa40f44 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
+@@ -813,6 +813,8 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
+ 					(v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ ||
+ 						v->DCFCLKPerState[mode_lib->vba.VoltageLevel] <= DCFCLK_FREQ_EXTRA_PREFETCH_REQ_MHZ) ?
+ 							mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
++					mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] > 0 || mode_lib->vba.DRAMClockChangeRequirementFinal == false,
++
+ 					/* Output */
+ 					&v->DSTXAfterScaler[k],
+ 					&v->DSTYAfterScaler[k],
+@@ -3317,6 +3319,7 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 							v->SwathHeightCThisState[k], v->TWait,
+ 							(v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ || v->DCFCLKState[i][j] <= DCFCLK_FREQ_EXTRA_PREFETCH_REQ_MHZ) ?
+ 									mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
++							mode_lib->vba.PrefetchModePerState[i][j] > 0 || mode_lib->vba.DRAMClockChangeRequirementFinal == false,
+ 
+ 							/* Output */
+ 							&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.DSTXAfterScaler[k],
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
+index d940dfa5ae43e..80fccd4999a58 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
+@@ -3423,6 +3423,7 @@ bool dml32_CalculatePrefetchSchedule(
+ 		unsigned int SwathHeightC,
+ 		double TWait,
+ 		double TPreReq,
++		bool ExtendPrefetchIfPossible,
+ 		/* Output */
+ 		double   *DSTXAfterScaler,
+ 		double   *DSTYAfterScaler,
+@@ -3892,12 +3893,32 @@ bool dml32_CalculatePrefetchSchedule(
+ 			/* Clamp to oto for bandwidth calculation */
+ 			LinesForPrefetchBandwidth = dst_y_prefetch_oto;
+ 		} else {
+-			*DestinationLinesForPrefetch = dst_y_prefetch_equ;
+-			TimeForFetchingMetaPTE = Tvm_equ;
+-			TimeForFetchingRowInVBlank = Tr0_equ;
+-			*PrefetchBandwidth = prefetch_bw_equ;
+-			/* Clamp to equ for bandwidth calculation */
+-			LinesForPrefetchBandwidth = dst_y_prefetch_equ;
++			/* For mode programming we want to extend the prefetch as much as possible
++			 * (up to oto, or as long as we can for equ) if we're not already applying
++			 * the 60us prefetch requirement. This is to avoid intermittent underflow
++			 * issues during prefetch.
++			 *
++			 * The prefetch extension is applied under the following scenarios:
++			 * 1. We're in prefetch mode > 0 (i.e. we don't support MCLK switch in blank)
++			 * 2. We're using subvp or drr methods of p-state switch, in which case
++			 *    we don't care if prefetch takes up more of the blanking time
++			 *
++			 * Mode programming typically chooses the smallest prefetch time possible
++			 * (i.e. highest bandwidth during prefetch) presumably to create margin between
++			 * p-states / c-states that happen in vblank and prefetch. Therefore we only
++			 * apply this prefetch extension when p-state in vblank is not required (UCLK
++			 * p-states take up the most vblank time).
++			 */
++			if (ExtendPrefetchIfPossible && TPreReq == 0 && VStartup < MaxVStartup) {
++				MyError = true;
++			} else {
++				*DestinationLinesForPrefetch = dst_y_prefetch_equ;
++				TimeForFetchingMetaPTE = Tvm_equ;
++				TimeForFetchingRowInVBlank = Tr0_equ;
++				*PrefetchBandwidth = prefetch_bw_equ;
++				/* Clamp to equ for bandwidth calculation */
++				LinesForPrefetchBandwidth = dst_y_prefetch_equ;
++			}
+ 		}
+ 
+ 		*DestinationLinesToRequestVMInVBlank = dml_ceil(4.0 * TimeForFetchingMetaPTE / LineTime, 1.0) / 4.0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
+index 592d174df6c62..5d34735df83db 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
+@@ -747,6 +747,7 @@ bool dml32_CalculatePrefetchSchedule(
+ 		unsigned int SwathHeightC,
+ 		double TWait,
+ 		double TPreReq,
++		bool ExtendPrefetchIfPossible,
+ 		/* Output */
+ 		double   *DSTXAfterScaler,
+ 		double   *DSTYAfterScaler,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+index 62ce95bac8f2b..9be5ebf3a8c0b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+@@ -6229,7 +6229,7 @@ static void set_calculate_prefetch_schedule_params(struct display_mode_lib_st *m
+ 				CalculatePrefetchSchedule_params->GPUVMEnable = mode_lib->ms.cache_display_cfg.plane.GPUVMEnable;
+ 				CalculatePrefetchSchedule_params->HostVMEnable = mode_lib->ms.cache_display_cfg.plane.HostVMEnable;
+ 				CalculatePrefetchSchedule_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
+-				CalculatePrefetchSchedule_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
++				CalculatePrefetchSchedule_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
+ 				CalculatePrefetchSchedule_params->DynamicMetadataEnable = mode_lib->ms.cache_display_cfg.plane.DynamicMetadataEnable[k];
+ 				CalculatePrefetchSchedule_params->DynamicMetadataVMEnabled = mode_lib->ms.ip.dynamic_metadata_vm_enabled;
+ 				CalculatePrefetchSchedule_params->DynamicMetadataLinesBeforeActiveRequired = mode_lib->ms.cache_display_cfg.plane.DynamicMetadataLinesBeforeActiveRequired[k];
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c
+index 2498b8341199b..d6a68484153c7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c
+@@ -157,6 +157,14 @@ bool is_dp2p0_output_encoder(const struct pipe_ctx *pipe_ctx)
+ {
+ 	/* If this assert is hit then we have a link encoder dynamic management issue */
+ 	ASSERT(pipe_ctx->stream_res.hpo_dp_stream_enc ? pipe_ctx->link_res.hpo_dp_link_enc != NULL : true);
++	/* Count MST hubs once by treating only 1st remote sink in topology as an encoder */
++	if (pipe_ctx->stream->link && pipe_ctx->stream->link->remote_sinks[0]) {
++		return (pipe_ctx->stream_res.hpo_dp_stream_enc &&
++			pipe_ctx->link_res.hpo_dp_link_enc &&
++			dc_is_dp_signal(pipe_ctx->stream->signal) &&
++			(pipe_ctx->stream->link->remote_sinks[0]->sink_id == pipe_ctx->stream->sink->sink_id));
++	}
++
+ 	return (pipe_ctx->stream_res.hpo_dp_stream_enc &&
+ 		pipe_ctx->link_res.hpo_dp_link_enc &&
+ 		dc_is_dp_signal(pipe_ctx->stream->signal));
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 9b8299d97e400..9fedf99475695 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -1353,7 +1353,7 @@ static void build_audio_output(
+ 	if (state->clk_mgr &&
+ 		(pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT ||
+ 			pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)) {
+-		audio_output->pll_info.dp_dto_source_clock_in_khz =
++		audio_output->pll_info.audio_dto_source_clock_in_khz =
+ 				state->clk_mgr->funcs->get_dp_ref_clk_frequency(
+ 						state->clk_mgr);
+ 	}
+@@ -2124,7 +2124,8 @@ static void dce110_reset_hw_ctx_wrap(
+ 				BREAK_TO_DEBUGGER();
+ 			}
+ 			pipe_ctx_old->stream_res.tg->funcs->disable_crtc(pipe_ctx_old->stream_res.tg);
+-			pipe_ctx_old->stream->link->phy_state.symclk_ref_cnts.otg = 0;
++			if (dc_is_hdmi_tmds_signal(pipe_ctx_old->stream->signal))
++				pipe_ctx_old->stream->link->phy_state.symclk_ref_cnts.otg = 0;
+ 			pipe_ctx_old->plane_res.mi->funcs->free_mem_input(
+ 					pipe_ctx_old->plane_res.mi, dc->current_state->stream_count);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index cdb903116eb7c..1fc8436c8130e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1057,7 +1057,8 @@ static void dcn10_reset_back_end_for_pipe(
+ 		if (pipe_ctx->stream_res.tg->funcs->set_drr)
+ 			pipe_ctx->stream_res.tg->funcs->set_drr(
+ 					pipe_ctx->stream_res.tg, NULL);
+-		pipe_ctx->stream->link->phy_state.symclk_ref_cnts.otg = 0;
++		if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal))
++			pipe_ctx->stream->link->phy_state.symclk_ref_cnts.otg = 0;
+ 	}
+ 
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 608221b0dd5dc..da0181fef411f 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1877,6 +1877,8 @@ void dcn20_program_front_end_for_ctx(
+ 	int i;
+ 	struct dce_hwseq *hws = dc->hwseq;
+ 	DC_LOGGER_INIT(dc->ctx->logger);
++	unsigned int prev_hubp_count = 0;
++	unsigned int hubp_count = 0;
+ 
+ 	if (resource_is_pipe_topology_changed(dc->current_state, context))
+ 		resource_log_pipe_topology_update(dc, context);
+@@ -1894,6 +1896,20 @@ void dcn20_program_front_end_for_ctx(
+ 		}
+ 	}
+ 
++	for (i = 0; i < dc->res_pool->pipe_count; i++) {
++		if (dc->current_state->res_ctx.pipe_ctx[i].plane_state)
++			prev_hubp_count++;
++		if (context->res_ctx.pipe_ctx[i].plane_state)
++			hubp_count++;
++	}
++
++	if (prev_hubp_count == 0 && hubp_count > 0) {
++		if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
++			dc->res_pool->hubbub->funcs->force_pstate_change_control(
++					dc->res_pool->hubbub, true, false);
++		udelay(500);
++	}
++
+ 	/* Set pipe update flags and lock pipes */
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+ 		dcn20_detect_pipe_changes(&dc->current_state->res_ctx.pipe_ctx[i],
+@@ -2039,6 +2055,10 @@ void dcn20_post_unlock_program_front_end(
+ 		}
+ 	}
+ 
++	if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
++		dc->res_pool->hubbub->funcs->force_pstate_change_control(
++				dc->res_pool->hubbub, false, false);
++
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+ 		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+ 
+@@ -2590,7 +2610,8 @@ static void dcn20_reset_back_end_for_pipe(
+ 		 * the case where the same symclk is shared across multiple otg
+ 		 * instances
+ 		 */
+-		link->phy_state.symclk_ref_cnts.otg = 0;
++		if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal))
++			link->phy_state.symclk_ref_cnts.otg = 0;
+ 		if (link->phy_state.symclk_state == SYMCLK_ON_TX_OFF) {
+ 			link_hwss->disable_link_output(link,
+ 					&pipe_ctx->link_res, pipe_ctx->stream->signal);
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+index 52656691ae484..3a70a3cbc274c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+@@ -523,7 +523,8 @@ static void dcn31_reset_back_end_for_pipe(
+ 	if (pipe_ctx->stream_res.tg->funcs->set_odm_bypass)
+ 		pipe_ctx->stream_res.tg->funcs->set_odm_bypass(
+ 				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
+-	pipe_ctx->stream->link->phy_state.symclk_ref_cnts.otg = 0;
++	if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal))
++		pipe_ctx->stream->link->phy_state.symclk_ref_cnts.otg = 0;
+ 
+ 	if (pipe_ctx->stream_res.tg->funcs->set_drr)
+ 		pipe_ctx->stream_res.tg->funcs->set_drr(
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+index 5bf9e7c1e052f..cb9d8389329ff 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+@@ -995,9 +995,22 @@ static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream,
+ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ {
+ 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
++	struct dc *dc = pipe_ctx->stream->ctx->dc;
+ 	struct dc_stream_state *stream = pipe_ctx->stream;
+ 	struct pipe_ctx *odm_pipe;
+ 	int opp_cnt = 1;
++	struct dccg *dccg = dc->res_pool->dccg;
++	/* It has been found that when DSCCLK is lower than 16 MHz, DCN register
++	 * accesses hang. When DSCCLK is based on refclk, DSCCLK is always a fixed
++	 * value higher than 16 MHz, so the issue doesn't occur. When DSCCLK is
++	 * generated by a DTO, DSCCLK would be based on 1/3 dispclk. For small
++	 * timings with DSC such as 480p60Hz, the dispclk could be low enough to
++	 * trigger this problem. We are implementing a workaround here to keep
++	 * using DSCCLK based on the fixed-value refclk whenever the timing's
++	 * pixel clock is below 3 x 16 MHz (i.e. 48 MHz), to avoid this problem.
++	 */
++	bool should_use_dto_dscclk = (dccg->funcs->set_dto_dscclk != NULL) &&
++			stream->timing.pix_clk_100hz > 480000;
+ 
+ 	ASSERT(dsc);
+ 	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
+@@ -1020,12 +1033,16 @@ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ 
+ 		dsc->funcs->dsc_set_config(dsc, &dsc_cfg, &dsc_optc_cfg);
+ 		dsc->funcs->dsc_enable(dsc, pipe_ctx->stream_res.opp->inst);
++		if (should_use_dto_dscclk)
++			dccg->funcs->set_dto_dscclk(dccg, dsc->inst);
+ 		for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+ 			struct display_stream_compressor *odm_dsc = odm_pipe->stream_res.dsc;
+ 
+ 			ASSERT(odm_dsc);
+ 			odm_dsc->funcs->dsc_set_config(odm_dsc, &dsc_cfg, &dsc_optc_cfg);
+ 			odm_dsc->funcs->dsc_enable(odm_dsc, odm_pipe->stream_res.opp->inst);
++			if (should_use_dto_dscclk)
++				dccg->funcs->set_dto_dscclk(dccg, odm_dsc->inst);
+ 		}
+ 		dsc_cfg.dc_dsc_cfg.num_slices_h *= opp_cnt;
+ 		dsc_cfg.pic_width *= opp_cnt;
+@@ -1045,9 +1062,13 @@ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ 				OPTC_DSC_DISABLED, 0, 0);
+ 
+ 		/* disable DSC block */
++		if (dccg->funcs->set_ref_dscclk)
++			dccg->funcs->set_ref_dscclk(dccg, pipe_ctx->stream_res.dsc->inst);
+ 		dsc->funcs->dsc_disable(pipe_ctx->stream_res.dsc);
+ 		for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+ 			ASSERT(odm_pipe->stream_res.dsc);
++			if (dccg->funcs->set_ref_dscclk)
++				dccg->funcs->set_ref_dscclk(dccg, odm_pipe->stream_res.dsc->inst);
+ 			odm_pipe->stream_res.dsc->funcs->dsc_disable(odm_pipe->stream_res.dsc);
+ 		}
+ 	}
+@@ -1130,6 +1151,10 @@ void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
+ 		if (!pipe_ctx->next_odm_pipe && current_pipe_ctx->next_odm_pipe &&
+ 				current_pipe_ctx->next_odm_pipe->stream_res.dsc) {
+ 			struct display_stream_compressor *dsc = current_pipe_ctx->next_odm_pipe->stream_res.dsc;
++			struct dccg *dccg = dc->res_pool->dccg;
++
++			if (dccg->funcs->set_ref_dscclk)
++				dccg->funcs->set_ref_dscclk(dccg, dsc->inst);
+ 			/* disconnect DSC block from stream */
+ 			dsc->funcs->dsc_disconnect(dsc);
+ 		}
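/*
 * The threshold in the hunks above, worked through: pix_clk_100hz is in
 * units of 100 Hz, so 480000 corresponds to 48 MHz, i.e. 3 x 16 MHz -- per
 * the workaround comment, the point below which a DTO-generated DSCCLK
 * (1/3 of dispclk) could dip under the 16 MHz floor that hangs register
 * access. A sketch, with the set_dto_dscclk hook availability modeled as a
 * plain flag:
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool should_use_dto_dscclk(bool has_dto_hook, uint32_t pix_clk_100hz)
{
	return has_dto_hook && pix_clk_100hz > 480000; /* > 48 MHz pixel clock */
}

int main(void)
{
	/* 640x480@60 has a ~25.2 MHz pixel clock -> stays on refclk-based DSCCLK */
	printf("480p60:  %d\n", should_use_dto_dscclk(true, 252000));
	/* 1080p60 has a ~148.5 MHz pixel clock -> safe to use the DTO */
	printf("1080p60: %d\n", should_use_dto_dscclk(true, 1485000));
	return 0;
}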
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+index fa9614bcb1605..55ded5fb8a38f 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+@@ -314,6 +314,7 @@ struct clk_mgr {
+ 	bool force_smu_not_present;
+ 	bool dc_mode_softmax_enabled;
+ 	int dprefclk_khz; // Used by program pixel clock in clock source funcs, need to figureout where this goes
++	int dp_dto_source_clock_in_khz; // Used to program DP DTO with ss adjustment on DCN314
+ 	int dentist_vco_freq_khz;
+ 	struct clk_state_registers_and_bypass boot_snapshot;
+ 	struct clk_bw_params *bw_params;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+index ce2f0c0e82bd6..6b44557fcb1a8 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+@@ -201,6 +201,10 @@ struct dccg_funcs {
+ 			struct dccg *dccg,
+ 			enum streamclk_source src,
+ 			uint32_t otg_inst);
++	void (*set_dto_dscclk)(
++			struct dccg *dccg,
++			uint32_t dsc_inst);
++	void (*set_ref_dscclk)(struct dccg *dccg, uint32_t dsc_inst);
+ };
+ 
+ #endif //__DAL_DCCG_H__
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+index 04d1ecd6e5934..a08ae59c1ea9f 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+@@ -776,10 +776,26 @@ static bool dp_set_dsc_on_rx(struct pipe_ctx *pipe_ctx, bool enable)
+  */
+ void link_set_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ {
++	/* TODO: Move this to HWSS, as this is a hardware programming sequence,
++	 * not a link layer sequence
++	 */
+ 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
++	struct dc *dc = pipe_ctx->stream->ctx->dc;
+ 	struct dc_stream_state *stream = pipe_ctx->stream;
+ 	struct pipe_ctx *odm_pipe;
+ 	int opp_cnt = 1;
++	struct dccg *dccg = dc->res_pool->dccg;
++	/* It has been found that when DSCCLK is lower than 16 MHz, DCN register
++	 * accesses hang. When DSCCLK is based on refclk, DSCCLK is always a fixed
++	 * value higher than 16 MHz, so the issue doesn't occur. When DSCCLK is
++	 * generated by a DTO, DSCCLK would be based on 1/3 dispclk. For small
++	 * timings with DSC such as 480p60Hz, the dispclk could be low enough to
++	 * trigger this problem. We are implementing a workaround here to keep
++	 * using DSCCLK based on the fixed-value refclk whenever the timing's
++	 * pixel clock is below 3 x 16 MHz (i.e. 48 MHz), to avoid this problem.
++	 */
++	bool should_use_dto_dscclk = (dccg->funcs->set_dto_dscclk != NULL) &&
++			stream->timing.pix_clk_100hz > 480000;
+ 	DC_LOGGER_INIT(dsc->ctx->logger);
+ 
+ 	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
+@@ -802,11 +818,15 @@ void link_set_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ 
+ 		dsc->funcs->dsc_set_config(dsc, &dsc_cfg, &dsc_optc_cfg);
+ 		dsc->funcs->dsc_enable(dsc, pipe_ctx->stream_res.opp->inst);
++		if (should_use_dto_dscclk)
++			dccg->funcs->set_dto_dscclk(dccg, dsc->inst);
+ 		for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+ 			struct display_stream_compressor *odm_dsc = odm_pipe->stream_res.dsc;
+ 
+ 			odm_dsc->funcs->dsc_set_config(odm_dsc, &dsc_cfg, &dsc_optc_cfg);
+ 			odm_dsc->funcs->dsc_enable(odm_dsc, odm_pipe->stream_res.opp->inst);
++			if (should_use_dto_dscclk)
++				dccg->funcs->set_dto_dscclk(dccg, odm_dsc->inst);
+ 		}
+ 		dsc_cfg.dc_dsc_cfg.num_slices_h *= opp_cnt;
+ 		dsc_cfg.pic_width *= opp_cnt;
+@@ -856,9 +876,14 @@ void link_set_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ 		}
+ 
+ 		/* disable DSC block */
++		if (dccg->funcs->set_ref_dscclk)
++			dccg->funcs->set_ref_dscclk(dccg, pipe_ctx->stream_res.dsc->inst);
+ 		pipe_ctx->stream_res.dsc->funcs->dsc_disable(pipe_ctx->stream_res.dsc);
+-		for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
++		for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
++			if (dccg->funcs->set_ref_dscclk)
++				dccg->funcs->set_ref_dscclk(dccg, odm_pipe->stream_res.dsc->inst);
+ 			odm_pipe->stream_res.dsc->funcs->dsc_disable(odm_pipe->stream_res.dsc);
++		}
+ 	}
+ }
+ 
+@@ -1061,18 +1086,21 @@ static struct fixed31_32 get_pbn_from_bw_in_kbps(uint64_t kbps)
+ 	uint32_t denominator = 1;
+ 
+ 	/*
+-	 * margin 5300ppm + 300ppm ~ 0.6% as per spec, factor is 1.006
++	 * The 1.006 factor (margin 5300ppm + 300ppm ~ 0.6% as per spec) is not
++	 * required when determining PBN/time slot utilization on the link between
++	 * us and the branch, since that overhead is already accounted for in
++	 * the get_pbn_per_slot function.
++	 *
+ 	 * The unit of 54/64Mbytes/sec is an arbitrary unit chosen based on
+ 	 * common multiplier to render an integer PBN for all link rate/lane
+ 	 * counts combinations
+ 	 * calculate
+-	 * peak_kbps *= (1006/1000)
+ 	 * peak_kbps *= (64/54)
+-	 * peak_kbps *= 8    convert to bytes
++	 * peak_kbps /= (8 * 1000) convert to bytes
+ 	 */
+ 
+-	numerator = 64 * PEAK_FACTOR_X1000;
+-	denominator = 54 * 8 * 1000 * 1000;
++	numerator = 64;
++	denominator = 54 * 8 * 1000;
+ 	kbps *= numerator;
+ 	peak_kbps = dc_fixpt_from_fraction(kbps, denominator);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+index 7581023daa478..d6e1f969bfd52 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+@@ -50,6 +50,7 @@ static bool get_bw_alloc_proceed_flag(struct dc_link *tmp)
+ 			&& tmp->hpd_status
+ 			&& tmp->dpia_bw_alloc_config.bw_alloc_enabled);
+ }
++
+ static void reset_bw_alloc_struct(struct dc_link *link)
+ {
+ 	link->dpia_bw_alloc_config.bw_alloc_enabled = false;
+@@ -59,6 +60,11 @@ static void reset_bw_alloc_struct(struct dc_link *link)
+ 	link->dpia_bw_alloc_config.bw_granularity = 0;
+ 	link->dpia_bw_alloc_config.response_ready = false;
+ }
++
++#define BW_GRANULARITY_0 4 // 0.25 Gbps
++#define BW_GRANULARITY_1 2 // 0.5 Gbps
++#define BW_GRANULARITY_2 1 // 1 Gbps
++
+ static uint8_t get_bw_granularity(struct dc_link *link)
+ {
+ 	uint8_t bw_granularity = 0;
+@@ -71,16 +77,20 @@ static uint8_t get_bw_granularity(struct dc_link *link)
+ 
+ 	switch (bw_granularity & 0x3) {
+ 	case 0:
+-		bw_granularity = 4;
++		bw_granularity = BW_GRANULARITY_0;
+ 		break;
+ 	case 1:
++		bw_granularity = BW_GRANULARITY_1;
++		break;
++	case 2:
+ 	default:
+-		bw_granularity = 2;
++		bw_granularity = BW_GRANULARITY_2;
+ 		break;
+ 	}
+ 
+ 	return bw_granularity;
+ }
++
+ static int get_estimated_bw(struct dc_link *link)
+ {
+ 	uint8_t bw_estimated_bw = 0;
+@@ -93,31 +103,7 @@ static int get_estimated_bw(struct dc_link *link)
+ 
+ 	return bw_estimated_bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
+ }
+-static bool allocate_usb4_bw(int *stream_allocated_bw, int bw_needed, struct dc_link *link)
+-{
+-	if (bw_needed > 0)
+-		*stream_allocated_bw += bw_needed;
+-
+-	return true;
+-}
+-static bool deallocate_usb4_bw(int *stream_allocated_bw, int bw_to_dealloc, struct dc_link *link)
+-{
+-	bool ret = false;
+-
+-	if (*stream_allocated_bw > 0) {
+-		*stream_allocated_bw -= bw_to_dealloc;
+-		ret = true;
+-	} else {
+-		//Do nothing for now
+-		ret = true;
+-	}
+ 
+-	// Unplug so reset values
+-	if (!link->hpd_status)
+-		reset_bw_alloc_struct(link);
+-
+-	return ret;
+-}
+ /*
+  * Read all New BW alloc configuration ex: estimated_bw, allocated_bw,
+  * granuality, Driver_ID, CM_Group, & populate the BW allocation structs
+@@ -128,7 +114,12 @@ static void init_usb4_bw_struct(struct dc_link *link)
+ 	// Init the known values
+ 	link->dpia_bw_alloc_config.bw_granularity = get_bw_granularity(link);
+ 	link->dpia_bw_alloc_config.estimated_bw = get_estimated_bw(link);
++
++	DC_LOG_DEBUG("%s: bw_granularity(%d), estimated_bw(%d)\n",
++		__func__, link->dpia_bw_alloc_config.bw_granularity,
++		link->dpia_bw_alloc_config.estimated_bw);
+ }
++
+ static uint8_t get_lowest_dpia_index(struct dc_link *link)
+ {
+ 	const struct dc *dc_struct = link->dc;
+@@ -141,12 +132,15 @@ static uint8_t get_lowest_dpia_index(struct dc_link *link)
+ 				dc_struct->links[i]->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
+ 			continue;
+ 
+-		if (idx > dc_struct->links[i]->link_index)
++		if (idx > dc_struct->links[i]->link_index) {
+ 			idx = dc_struct->links[i]->link_index;
++			break;
++		}
+ 	}
+ 
+ 	return idx;
+ }
++
+ /*
+  * Get the Max Available BW or Max Estimated BW for each Host Router
+  *
+@@ -186,6 +180,7 @@ static int get_host_router_total_bw(struct dc_link *link, uint8_t type)
+ 
+ 	return total_bw;
+ }
++
+ /*
+  * Cleanup function for when the dpia is unplugged to reset struct
+  * and perform any required clean up
+@@ -194,42 +189,50 @@ static int get_host_router_total_bw(struct dc_link *link, uint8_t type)
+  *
+  * return: none
+  */
+-static bool dpia_bw_alloc_unplug(struct dc_link *link)
++static void dpia_bw_alloc_unplug(struct dc_link *link)
+ {
+-	if (!link)
+-		return true;
+-
+-	return deallocate_usb4_bw(&link->dpia_bw_alloc_config.sink_allocated_bw,
+-			link->dpia_bw_alloc_config.sink_allocated_bw, link);
++	if (link) {
++		DC_LOG_DEBUG("%s: resetting bw alloc config for link(%d)\n",
++			__func__, link->link_index);
++		link->dpia_bw_alloc_config.sink_allocated_bw = 0;
++		reset_bw_alloc_struct(link);
++	}
+ }
++
+ static void set_usb4_req_bw_req(struct dc_link *link, int req_bw)
+ {
+ 	uint8_t requested_bw;
+ 	uint32_t temp;
+ 
+-	// 1. Add check for this corner case #1
+-	if (req_bw > link->dpia_bw_alloc_config.estimated_bw)
++	/* Error check whether the requested bw is greater than the estimated bw */
++	if (req_bw > link->dpia_bw_alloc_config.estimated_bw) {
++		DC_LOG_ERROR("%s: Request bw greater than estimated bw for link(%d)\n",
++			__func__, link->link_index);
+ 		req_bw = link->dpia_bw_alloc_config.estimated_bw;
++	}
+ 
+ 	temp = req_bw * link->dpia_bw_alloc_config.bw_granularity;
+ 	requested_bw = temp / Kbps_TO_Gbps;
+ 
+-	// Always make sure to add more to account for floating points
++	/* Always round up to account for any fractional remainder */
+ 	if (temp % Kbps_TO_Gbps)
+ 		++requested_bw;
+ 
+-	// 2. Add check for this corner case #2
++	/* Error check whether requested and allocated are equal */
+ 	req_bw = requested_bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
+-	if (req_bw == link->dpia_bw_alloc_config.sink_allocated_bw)
+-		return;
++	if (req_bw == link->dpia_bw_alloc_config.sink_allocated_bw) {
++		DC_LOG_ERROR("%s: Request bw equals to allocated bw for link(%d)\n",
++			__func__, link->link_index);
++	}
+ 
+-	if (core_link_write_dpcd(
++	link->dpia_bw_alloc_config.response_ready = false; // Reset flag
++	core_link_write_dpcd(
+ 		link,
+ 		REQUESTED_BW,
+ 		&requested_bw,
+-		sizeof(uint8_t)) == DC_OK)
+-		link->dpia_bw_alloc_config.response_ready = false; // Reset flag
++		sizeof(uint8_t));
+ }
++
+ /*
+  * Return the response_ready flag from dc_link struct
+  *
+@@ -241,6 +244,7 @@ static bool get_cm_response_ready_flag(struct dc_link *link)
+ {
+ 	return link->dpia_bw_alloc_config.response_ready;
+ }
++
+ // ------------------------------------------------------------------
+ //					PUBLIC FUNCTIONS
+ // ------------------------------------------------------------------
+@@ -277,27 +281,27 @@ bool link_dp_dpia_set_dptx_usb4_bw_alloc_support(struct dc_link *link)
+ 				DPTX_BW_ALLOCATION_MODE_CONTROL,
+ 				&response,
+ 				sizeof(uint8_t)) != DC_OK) {
+-			DC_LOG_DEBUG("%s: **** FAILURE Enabling DPtx BW Allocation Mode Support ***\n",
+-					__func__);
++			DC_LOG_DEBUG("%s: FAILURE Enabling DPtx BW Allocation Mode Support for link(%d)\n",
++				__func__, link->link_index);
+ 		} else {
+ 			// SUCCESS Enabled DPtx BW Allocation Mode Support
+-			link->dpia_bw_alloc_config.bw_alloc_enabled = true;
+-			DC_LOG_DEBUG("%s: **** SUCCESS Enabling DPtx BW Allocation Mode Support ***\n",
+-					__func__);
++			DC_LOG_DEBUG("%s: SUCCESS Enabling DPtx BW Allocation Mode Support for link(%d)\n",
++				__func__, link->link_index);
+ 
+ 			ret = true;
+ 			init_usb4_bw_struct(link);
++			link->dpia_bw_alloc_config.bw_alloc_enabled = true;
+ 		}
+ 	}
+ 
+ out:
+ 	return ret;
+ }
++
+ void dpia_handle_bw_alloc_response(struct dc_link *link, uint8_t bw, uint8_t result)
+ {
+ 	int bw_needed = 0;
+ 	int estimated = 0;
+-	int host_router_total_estimated_bw = 0;
+ 
+ 	if (!get_bw_alloc_proceed_flag((link)))
+ 		return;
+@@ -306,14 +310,22 @@ void dpia_handle_bw_alloc_response(struct dc_link *link, uint8_t bw, uint8_t res
+ 
+ 	case DPIA_BW_REQ_FAILED:
+ 
+-		DC_LOG_DEBUG("%s: *** *** BW REQ FAILURE for DP-TX Request *** ***\n", __func__);
++		/*
++		 * Ideally, we shouldn't run into this case as we always validate available
++		 * bandwidth and request within that limit
++		 */
++		estimated = bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
+ 
+-		// Update the new Estimated BW value updated by CM
+-		link->dpia_bw_alloc_config.estimated_bw =
+-				bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
++		DC_LOG_ERROR("%s: BW REQ FAILURE for DP-TX Request for link(%d)\n",
++			__func__, link->link_index);
++		DC_LOG_ERROR("%s: current estimated_bw(%d), new estimated_bw(%d)\n",
++			__func__, link->dpia_bw_alloc_config.estimated_bw, estimated);
+ 
++		/* Update the estimated BW to the new value reported by CM */
++		link->dpia_bw_alloc_config.estimated_bw = estimated;
++
++		/* Allocate the previously requested bandwidth */
+ 		set_usb4_req_bw_req(link, link->dpia_bw_alloc_config.estimated_bw);
+-		link->dpia_bw_alloc_config.response_ready = false;
+ 
+ 		/*
+ 		 * If FAIL then it is either:
+@@ -326,68 +338,34 @@ void dpia_handle_bw_alloc_response(struct dc_link *link, uint8_t bw, uint8_t res
+ 
+ 	case DPIA_BW_REQ_SUCCESS:
+ 
+-		DC_LOG_DEBUG("%s: *** BW REQ SUCCESS for DP-TX Request ***\n", __func__);
+-
+-		// 1. SUCCESS 1st time before any Pruning is done
+-		// 2. SUCCESS after prev. FAIL before any Pruning is done
+-		// 3. SUCCESS after Pruning is done but before enabling link
+-
+ 		bw_needed = bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
+ 
+-		// 1.
+-		if (!link->dpia_bw_alloc_config.sink_allocated_bw) {
+-
+-			allocate_usb4_bw(&link->dpia_bw_alloc_config.sink_allocated_bw, bw_needed, link);
+-			link->dpia_bw_alloc_config.sink_verified_bw =
+-					link->dpia_bw_alloc_config.sink_allocated_bw;
++		DC_LOG_DEBUG("%s: BW REQ SUCCESS for DP-TX Request for link(%d)\n",
++			__func__, link->link_index);
++		DC_LOG_DEBUG("%s: current allocated_bw(%d), new allocated_bw(%d)\n",
++			__func__, link->dpia_bw_alloc_config.sink_allocated_bw, bw_needed);
+ 
+-			// SUCCESS from first attempt
+-			if (link->dpia_bw_alloc_config.sink_allocated_bw >
+-			link->dpia_bw_alloc_config.sink_max_bw)
+-				link->dpia_bw_alloc_config.sink_verified_bw =
+-						link->dpia_bw_alloc_config.sink_max_bw;
+-		}
+-		// 3.
+-		else if (link->dpia_bw_alloc_config.sink_allocated_bw) {
+-
+-			// Find out how much do we need to de-alloc
+-			if (link->dpia_bw_alloc_config.sink_allocated_bw > bw_needed)
+-				deallocate_usb4_bw(&link->dpia_bw_alloc_config.sink_allocated_bw,
+-						link->dpia_bw_alloc_config.sink_allocated_bw - bw_needed, link);
+-			else
+-				allocate_usb4_bw(&link->dpia_bw_alloc_config.sink_allocated_bw,
+-						bw_needed - link->dpia_bw_alloc_config.sink_allocated_bw, link);
+-		}
+-
+-		// 4. If this is the 2nd sink then any unused bw will be reallocated to master DPIA
+-		// => check if estimated_bw changed
++		link->dpia_bw_alloc_config.sink_allocated_bw = bw_needed;
+ 
+ 		link->dpia_bw_alloc_config.response_ready = true;
+ 		break;
+ 
+ 	case DPIA_EST_BW_CHANGED:
+ 
+-		DC_LOG_DEBUG("%s: *** ESTIMATED BW CHANGED for DP-TX Request ***\n", __func__);
+-
+ 		estimated = bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
+-		host_router_total_estimated_bw = get_host_router_total_bw(link, HOST_ROUTER_BW_ESTIMATED);
+ 
+-		// 1. If due to unplug of other sink
+-		if (estimated == host_router_total_estimated_bw) {
+-			// First update the estimated & max_bw fields
+-			if (link->dpia_bw_alloc_config.estimated_bw < estimated)
+-				link->dpia_bw_alloc_config.estimated_bw = estimated;
+-		}
+-		// 2. If due to realloc bw btw 2 dpia due to plug OR realloc unused Bw
+-		else {
+-			// We lost estimated bw usually due to plug event of other dpia
+-			link->dpia_bw_alloc_config.estimated_bw = estimated;
+-		}
++		DC_LOG_DEBUG("%s: ESTIMATED BW CHANGED for link(%d)\n",
++			__func__, link->link_index);
++		DC_LOG_DEBUG("%s: current estimated_bw(%d), new estimated_bw(%d)\n",
++			__func__, link->dpia_bw_alloc_config.estimated_bw, estimated);
++
++		link->dpia_bw_alloc_config.estimated_bw = estimated;
+ 		break;
+ 
+ 	case DPIA_BW_ALLOC_CAPS_CHANGED:
+ 
+-		DC_LOG_DEBUG("%s: *** BW ALLOC CAPABILITY CHANGED for DP-TX Request ***\n", __func__);
++		DC_LOG_ERROR("%s: BW ALLOC CAPABILITY CHANGED to Disabled for link(%d)\n",
++			__func__, link->link_index);
+ 		link->dpia_bw_alloc_config.bw_alloc_enabled = false;
+ 		break;
+ 	}
+@@ -409,11 +387,11 @@ int dpia_handle_usb4_bandwidth_allocation_for_link(struct dc_link *link, int pea
+ 		set_usb4_req_bw_req(link, link->dpia_bw_alloc_config.sink_max_bw);
+ 
+ 		do {
+-			if (!(timeout > 0))
++			if (timeout > 0)
+ 				timeout--;
+ 			else
+ 				break;
+-			fsleep(10 * 1000);
++			msleep(10);
+ 		} while (!get_cm_response_ready_flag(link));
+ 
+ 		if (!timeout)
+@@ -428,37 +406,36 @@ int dpia_handle_usb4_bandwidth_allocation_for_link(struct dc_link *link, int pea
+ out:
+ 	return ret;
+ }
+-int link_dp_dpia_allocate_usb4_bandwidth_for_stream(struct dc_link *link, int req_bw)
++
++bool link_dp_dpia_allocate_usb4_bandwidth_for_stream(struct dc_link *link, int req_bw)
+ {
+-	int ret = 0;
++	bool ret = false;
+ 	uint8_t timeout = 10;
+ 
++	DC_LOG_DEBUG("%s: ENTER: link(%d), hpd_status(%d), current allocated_bw(%d), req_bw(%d)\n",
++		__func__, link->link_index, link->hpd_status,
++		link->dpia_bw_alloc_config.sink_allocated_bw, req_bw);
++
+ 	if (!get_bw_alloc_proceed_flag(link))
+ 		goto out;
+ 
+-	/*
+-	 * Sometimes stream uses same timing parameters as the already
+-	 * allocated max sink bw so no need to re-alloc
+-	 */
+-	if (req_bw != link->dpia_bw_alloc_config.sink_allocated_bw) {
+-		set_usb4_req_bw_req(link, req_bw);
+-		do {
+-			if (!(timeout > 0))
+-				timeout--;
+-			else
+-				break;
+-			udelay(10 * 1000);
+-		} while (!get_cm_response_ready_flag(link));
++	set_usb4_req_bw_req(link, req_bw);
++	do {
++		if (timeout > 0)
++			timeout--;
++		else
++			break;
++		msleep(10);
++	} while (!get_cm_response_ready_flag(link));
+ 
+-		if (!timeout)
+-			ret = 0;// ERROR TIMEOUT waiting for response for allocating bw
+-		else if (link->dpia_bw_alloc_config.sink_allocated_bw > 0)
+-			ret = get_host_router_total_bw(link, HOST_ROUTER_BW_ALLOCATED);
+-	}
++	if (timeout)
++		ret = true;
+ 
+ out:
++	DC_LOG_DEBUG("%s: EXIT: timeout(%d), ret(%d)\n", __func__, timeout, ret);
+ 	return ret;
+ }
++
+ bool dpia_validate_usb4_bw(struct dc_link **link, int *bw_needed_per_dpia, const unsigned int num_dpias)
+ {
+ 	bool ret = true;
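/*
 * A sketch of the unit conversion set_usb4_req_bw_req performs above: a
 * request in kbps is converted to the DPCD's granularity units
 * (0.25/0.5/1 Gbps steps per the BW_GRANULARITY defines), rounding up so
 * the grant is never smaller than the request. Kbps_TO_Gbps is assumed to
 * be 1000000 here.
 */
#include <stdint.h>
#include <stdio.h>

#define KBPS_PER_GBPS 1000000u

/* granularity: 4 = 0.25 Gbps units, 2 = 0.5 Gbps, 1 = 1 Gbps */
static uint8_t req_bw_to_dpcd_units(uint32_t req_bw_kbps, uint32_t granularity)
{
	uint64_t temp = (uint64_t)req_bw_kbps * granularity;
	uint8_t units = (uint8_t)(temp / KBPS_PER_GBPS);

	if (temp % KBPS_PER_GBPS) /* round up any fractional remainder */
		units++;
	return units;
}

int main(void)
{
	/* e.g. 5.4 Gbps at 0.25 Gbps granularity -> 22 units (5.5 Gbps granted) */
	printf("%u units\n", req_bw_to_dpcd_units(5400000u, 4u));
	return 0;
}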
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h
+index 7292690383ae1..981bc4eb6120e 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h
+@@ -59,9 +59,9 @@ bool link_dp_dpia_set_dptx_usb4_bw_alloc_support(struct dc_link *link);
+  * @link: pointer to the dc_link struct instance
+  * @req_bw: Bw requested by the stream
+  *
+- * return: allocated bw else return 0
++ * return: true if allocated successfully
+  */
+-int link_dp_dpia_allocate_usb4_bandwidth_for_stream(struct dc_link *link, int req_bw);
++bool link_dp_dpia_allocate_usb4_bandwidth_for_stream(struct dc_link *link, int req_bw);
+ 
+ /*
+  * Handle the USB4 BW Allocation related functionality here:
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
+index 0c00e94e90b1d..9eadc2c7f221e 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
+@@ -190,9 +190,6 @@ static void handle_hpd_irq_replay_sink(struct dc_link *link)
+ 	/*AMD Replay version reuse DP_PSR_ERROR_STATUS for REPLAY_ERROR status.*/
+ 	union psr_error_status replay_error_status;
+ 
+-	if (link->replay_settings.config.force_disable_desync_error_check)
+-		return;
+-
+ 	if (!link->replay_settings.replay_feature_enabled)
+ 		return;
+ 
+@@ -210,9 +207,6 @@ static void handle_hpd_irq_replay_sink(struct dc_link *link)
+ 		&replay_error_status.raw,
+ 		sizeof(replay_error_status.raw));
+ 
+-	if (replay_configuration.bits.DESYNC_ERROR_STATUS)
+-		link->replay_settings.config.received_desync_error_hpd = 1;
+-
+ 	link->replay_settings.config.replay_error_status.bits.LINK_CRC_ERROR =
+ 		replay_error_status.bits.LINK_CRC_ERROR;
+ 	link->replay_settings.config.replay_error_status.bits.DESYNC_ERROR =
+@@ -225,6 +219,12 @@ static void handle_hpd_irq_replay_sink(struct dc_link *link)
+ 		link->replay_settings.config.replay_error_status.bits.STATE_TRANSITION_ERROR) {
+ 		bool allow_active;
+ 
++		if (link->replay_settings.config.replay_error_status.bits.DESYNC_ERROR)
++			link->replay_settings.config.received_desync_error_hpd = 1;
++
++		if (link->replay_settings.config.force_disable_desync_error_check)
++			return;
++
+ 		/* Acknowledge and clear configuration bits */
+ 		dm_helpers_dp_write_dpcd(
+ 			link->ctx,
+diff --git a/drivers/gpu/drm/amd/display/include/audio_types.h b/drivers/gpu/drm/amd/display/include/audio_types.h
+index 66a54da0641ce..915a031a43cb2 100644
+--- a/drivers/gpu/drm/amd/display/include/audio_types.h
++++ b/drivers/gpu/drm/amd/display/include/audio_types.h
+@@ -64,7 +64,7 @@ enum audio_dto_source {
+ /* PLL information required for AZALIA DTO calculation */
+ 
+ struct audio_pll_info {
+-	uint32_t dp_dto_source_clock_in_khz;
++	uint32_t audio_dto_source_clock_in_khz;
+ 	uint32_t feed_back_divider;
+ 	enum audio_dto_source dto_source;
+ 	bool ss_enabled;
+diff --git a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+index 7806056ca1dd1..1675314a3ff20 100644
+--- a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
++++ b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+@@ -841,8 +841,6 @@ bool is_psr_su_specific_panel(struct dc_link *link)
+ 				isPSRSUSupported = false;
+ 			else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
+ 				isPSRSUSupported = false;
+-			else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
+-				isPSRSUSupported = false;
+ 			else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
+ 				isPSRSUSupported = true;
+ 		}
+diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
+index 7f98394338c26..579977f6ad524 100644
+--- a/drivers/gpu/drm/amd/include/amd_shared.h
++++ b/drivers/gpu/drm/amd/include/amd_shared.h
+@@ -244,7 +244,6 @@ enum DC_FEATURE_MASK {
+ 	DC_DISABLE_LTTPR_DP2_0 = (1 << 6), //0x40, disabled by default
+ 	DC_PSR_ALLOW_SMU_OPT = (1 << 7), //0x80, disabled by default
+ 	DC_PSR_ALLOW_MULTI_DISP_OPT = (1 << 8), //0x100, disabled by default
+-	DC_REPLAY_MASK = (1 << 9), //0x200, disabled by default for dcn < 3.1.4
+ };
+ 
+ enum DC_DEBUG_MASK {
+@@ -255,7 +254,6 @@ enum DC_DEBUG_MASK {
+ 	DC_DISABLE_PSR = 0x10,
+ 	DC_FORCE_SUBVP_MCLK_SWITCH = 0x20,
+ 	DC_DISABLE_MPO = 0x40,
+-	DC_DISABLE_REPLAY = 0x50,
+ 	DC_ENABLE_DPIA_TRACE = 0x80,
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/include/mes_v11_api_def.h b/drivers/gpu/drm/amd/include/mes_v11_api_def.h
+index b1db2b1901874..e07e93167a82c 100644
+--- a/drivers/gpu/drm/amd/include/mes_v11_api_def.h
++++ b/drivers/gpu/drm/amd/include/mes_v11_api_def.h
+@@ -571,7 +571,8 @@ struct SET_SHADER_DEBUGGER {
+ 		struct {
+ 			uint32_t single_memop : 1;  /* SQ_DEBUG.single_memop */
+ 			uint32_t single_alu_op : 1; /* SQ_DEBUG.single_alu_op */
+-			uint32_t reserved : 30;
++			uint32_t reserved : 29;
++			uint32_t process_ctx_flush : 1;
+ 		};
+ 		uint32_t u32all;
+ 	} flags;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c
+index f2a55c1413f59..17882f8dfdd34 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/process_pptables_v1_0.c
+@@ -200,7 +200,7 @@ static int get_platform_power_management_table(
+ 		struct pp_hwmgr *hwmgr,
+ 		ATOM_Tonga_PPM_Table *atom_ppm_table)
+ {
+-	struct phm_ppm_table *ptr = kzalloc(sizeof(ATOM_Tonga_PPM_Table), GFP_KERNEL);
++	struct phm_ppm_table *ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
+ 	struct phm_ppt_v1_information *pp_table_information =
+ 		(struct phm_ppt_v1_information *)(hwmgr->pptable);
+ 
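/*
 * The hunk above switches to the sizeof(*ptr) idiom so the allocation size
 * always tracks the pointer's actual type; spelling out a named type risks
 * the size silently going stale (here it was a different struct entirely).
 * A userspace sketch with calloc standing in for kzalloc:
 */
#include <stdlib.h>

struct small { int a; };
struct big   { int a[64]; };

int main(void)
{
	struct small *p;

	p = calloc(1, sizeof(*p)); /* right size, always follows p's type */
	/* p = calloc(1, sizeof(struct big)); would compile fine but allocate
	 * the wrong size if the types diverge -- the bug class fixed above */
	free(p);
	return 0;
}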
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+index b1a8799e2dee3..aa91730e4eaff 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3999,6 +3999,7 @@ static int smu7_read_sensor(struct pp_hwmgr *hwmgr, int idx,
+ 	uint32_t sclk, mclk, activity_percent;
+ 	uint32_t offset, val_vid;
+ 	struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
++	struct amdgpu_device *adev = hwmgr->adev;
+ 
+ 	/* size must be at least 4 bytes for all sensors */
+ 	if (*size < 4)
+@@ -4042,7 +4043,21 @@ static int smu7_read_sensor(struct pp_hwmgr *hwmgr, int idx,
+ 		*size = 4;
+ 		return 0;
+ 	case AMDGPU_PP_SENSOR_GPU_INPUT_POWER:
+-		return smu7_get_gpu_power(hwmgr, (uint32_t *)value);
++		if ((adev->asic_type != CHIP_HAWAII) &&
++		    (adev->asic_type != CHIP_BONAIRE) &&
++		    (adev->asic_type != CHIP_FIJI) &&
++		    (adev->asic_type != CHIP_TONGA))
++			return smu7_get_gpu_power(hwmgr, (uint32_t *)value);
++		else
++			return -EOPNOTSUPP;
++	case AMDGPU_PP_SENSOR_GPU_AVG_POWER:
++		if ((adev->asic_type != CHIP_HAWAII) &&
++		    (adev->asic_type != CHIP_BONAIRE) &&
++		    (adev->asic_type != CHIP_FIJI) &&
++		    (adev->asic_type != CHIP_TONGA))
++			return -EOPNOTSUPP;
++		else
++			return smu7_get_gpu_power(hwmgr, (uint32_t *)value);
+ 	case AMDGPU_PP_SENSOR_VDDGFX:
+ 		if ((data->vr_config & VRCONF_VDDGFX_MASK) ==
+ 		    (VR_SVI2_PLANE_2 << VRCONF_VDDGFX_SHIFT))
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 5168628f11cff..29d91493b101a 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1298,10 +1298,32 @@ static void anx7625_config(struct anx7625_data *ctx)
+ 			  XTAL_FRQ_SEL, XTAL_FRQ_27M);
+ }
+ 
++static int anx7625_hpd_timer_config(struct anx7625_data *ctx)
++{
++	int ret;
++
++	/* Set irq detect window to 2ms */
++	ret = anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
++				HPD_DET_TIMER_BIT0_7, HPD_TIME & 0xFF);
++	ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
++				 HPD_DET_TIMER_BIT8_15,
++				 (HPD_TIME >> 8) & 0xFF);
++	ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
++				 HPD_DET_TIMER_BIT16_23,
++				 (HPD_TIME >> 16) & 0xFF);
++
++	return ret;
++}
++
++static int anx7625_read_hpd_gpio_config_status(struct anx7625_data *ctx)
++{
++	return anx7625_reg_read(ctx, ctx->i2c.rx_p0_client, GPIO_CTRL_2);
++}
++
+ static void anx7625_disable_pd_protocol(struct anx7625_data *ctx)
+ {
+ 	struct device *dev = ctx->dev;
+-	int ret;
++	int ret, val;
+ 
+ 	/* Reset main ocm */
+ 	ret = anx7625_reg_write(ctx, ctx->i2c.rx_p0_client, 0x88, 0x40);
+@@ -1315,6 +1337,19 @@ static void anx7625_disable_pd_protocol(struct anx7625_data *ctx)
+ 		DRM_DEV_DEBUG_DRIVER(dev, "disable PD feature fail.\n");
+ 	else
+ 		DRM_DEV_DEBUG_DRIVER(dev, "disable PD feature succeeded.\n");
++
++	/*
++	 * Make sure the HPD GPIO has already been configured after OCM release
++	 * before setting the HPD detect window register. Poll the status register
++	 * for at most 40 ms, then configure the HPD IRQ detect window register
++	 */
++	readx_poll_timeout(anx7625_read_hpd_gpio_config_status,
++			   ctx, val,
++			   ((val & HPD_SOURCE) || (val < 0)),
++			   2000, 2000 * 20);
++
++	/* Set HPD irq detect window to 2ms */
++	anx7625_hpd_timer_config(ctx);
+ }
+ 
+ static int anx7625_ocm_loading_check(struct anx7625_data *ctx)
+@@ -1437,20 +1472,6 @@ static void anx7625_start_dp_work(struct anx7625_data *ctx)
+ 
+ static int anx7625_read_hpd_status_p0(struct anx7625_data *ctx)
+ {
+-	int ret;
+-
+-	/* Set irq detect window to 2ms */
+-	ret = anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
+-				HPD_DET_TIMER_BIT0_7, HPD_TIME & 0xFF);
+-	ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
+-				 HPD_DET_TIMER_BIT8_15,
+-				 (HPD_TIME >> 8) & 0xFF);
+-	ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
+-				 HPD_DET_TIMER_BIT16_23,
+-				 (HPD_TIME >> 16) & 0xFF);
+-	if (ret < 0)
+-		return ret;
+-
+ 	return anx7625_reg_read(ctx, ctx->i2c.rx_p0_client, SYSTEM_STSTUS);
+ }
+ 
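/*
 * The timer-config helper added above spreads a 24-bit count across three
 * 8-bit registers, one byte per write. A sketch of the byte slicing, with
 * printf standing in for the I2C register write; the register addresses
 * and HPD_TIME value here are placeholders (the real ones live in
 * anx7625.h):
 */
#include <stdint.h>
#include <stdio.h>

#define HPD_TIME 54000u /* illustrative count only */

static int reg_write(uint8_t reg, uint8_t val) /* stand-in for anx7625_reg_write */
{
	printf("reg 0x%02x <- 0x%02x\n", reg, val);
	return 0;
}

int main(void)
{
	uint32_t t = HPD_TIME;
	int ret;

	ret  = reg_write(0x00, t & 0xFF);         /* ...TIMER_BIT0_7 */
	ret |= reg_write(0x01, (t >> 8) & 0xFF);  /* ...TIMER_BIT8_15 */
	ret |= reg_write(0x02, (t >> 16) & 0xFF); /* ...TIMER_BIT16_23 */
	return ret;
}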
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.h b/drivers/gpu/drm/bridge/analogix/anx7625.h
+index 80d3fb4e985f1..39ed35d338363 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.h
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.h
+@@ -259,6 +259,10 @@
+ #define AP_MIPI_RX_EN BIT(5) /* 1: MIPI RX input in  0: no RX in */
+ #define AP_DISABLE_PD BIT(6)
+ #define AP_DISABLE_DISPLAY BIT(7)
++
++#define GPIO_CTRL_2   0x49
++#define HPD_SOURCE    BIT(6)
++
+ /***************************************************************/
+ /* Register definition of device address 0x84 */
+ #define  MIPI_PHY_CONTROL_3            0x03
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 446458aca8e99..54a7103c1c0f5 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -958,7 +958,7 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
+ {
+ 	struct drm_gem_object *obj;
+ 	struct drm_memory_stats status = {};
+-	enum drm_gem_object_status supported_status;
++	enum drm_gem_object_status supported_status = 0;
+ 	int id;
+ 
+ 	spin_lock(&file->table_lock);
+diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c
+index d3ba0698b84b4..36c5629af7d8e 100644
+--- a/drivers/gpu/drm/drm_framebuffer.c
++++ b/drivers/gpu/drm/drm_framebuffer.c
+@@ -552,7 +552,7 @@ int drm_mode_getfb2_ioctl(struct drm_device *dev,
+ 	struct drm_mode_fb_cmd2 *r = data;
+ 	struct drm_framebuffer *fb;
+ 	unsigned int i;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 14201f73aab13..843a6dbda93a0 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -347,7 +347,8 @@ static int mipi_dsi_remove_device_fn(struct device *dev, void *priv)
+ {
+ 	struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
+ 
+-	mipi_dsi_detach(dsi);
++	if (dsi->attached)
++		mipi_dsi_detach(dsi);
+ 	mipi_dsi_device_unregister(dsi);
+ 
+ 	return 0;
+@@ -370,11 +371,18 @@ EXPORT_SYMBOL(mipi_dsi_host_unregister);
+ int mipi_dsi_attach(struct mipi_dsi_device *dsi)
+ {
+ 	const struct mipi_dsi_host_ops *ops = dsi->host->ops;
++	int ret;
+ 
+ 	if (!ops || !ops->attach)
+ 		return -ENOSYS;
+ 
+-	return ops->attach(dsi->host, dsi);
++	ret = ops->attach(dsi->host, dsi);
++	if (ret)
++		return ret;
++
++	dsi->attached = true;
++
++	return 0;
+ }
+ EXPORT_SYMBOL(mipi_dsi_attach);
+ 
+@@ -386,9 +394,14 @@ int mipi_dsi_detach(struct mipi_dsi_device *dsi)
+ {
+ 	const struct mipi_dsi_host_ops *ops = dsi->host->ops;
+ 
++	if (WARN_ON(!dsi->attached))
++		return -EINVAL;
++
+ 	if (!ops || !ops->detach)
+ 		return -ENOSYS;
+ 
++	dsi->attached = false;
++
+ 	return ops->detach(dsi->host, dsi);
+ }
+ EXPORT_SYMBOL(mipi_dsi_detach);
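/*
 * The mipi_dsi change above is the classic paired-state guard: record a
 * successful attach in a flag, and reject (and warn on) a detach that has
 * no matching attach. A generic userspace sketch of the pattern:
 */
#include <errno.h>
#include <stdbool.h>

struct dev { bool attached; };

static int do_attach(struct dev *d)
{
	/* ...real attach work would go here and may fail... */
	d->attached = true;  /* only set after the operation succeeds */
	return 0;
}

static int do_detach(struct dev *d)
{
	if (!d->attached)    /* mirrors the WARN_ON(!dsi->attached) guard */
		return -EINVAL;
	d->attached = false; /* clear before tearing down */
	return 0;
}

int main(void)
{
	struct dev d = { false };

	if (do_detach(&d) != -EINVAL) /* detach without attach must be rejected */
		return 1;
	if (do_attach(&d) != 0 || do_detach(&d) != 0)
		return 1;
	return 0;
}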
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c
+index 8399256cb5c9d..5380fb6c55ae1 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_drv.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c
+@@ -300,6 +300,7 @@ err_mode_config_cleanup:
+ 	drm_mode_config_cleanup(drm);
+ 	exynos_drm_cleanup_dma(drm);
+ 	kfree(private);
++	dev_set_drvdata(dev, NULL);
+ err_free_drm:
+ 	drm_dev_put(drm);
+ 
+@@ -313,6 +314,7 @@ static void exynos_drm_unbind(struct device *dev)
+ 	drm_dev_unregister(drm);
+ 
+ 	drm_kms_helper_poll_fini(drm);
++	drm_atomic_helper_shutdown(drm);
+ 
+ 	component_unbind_all(drm->dev, drm);
+ 	drm_mode_config_cleanup(drm);
+@@ -350,9 +352,18 @@ static int exynos_drm_platform_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void exynos_drm_platform_shutdown(struct platform_device *pdev)
++{
++	struct drm_device *drm = platform_get_drvdata(pdev);
++
++	if (drm)
++		drm_atomic_helper_shutdown(drm);
++}
++
+ static struct platform_driver exynos_drm_platform_driver = {
+ 	.probe	= exynos_drm_platform_probe,
+ 	.remove	= exynos_drm_platform_remove,
++	.shutdown = exynos_drm_platform_shutdown,
+ 	.driver	= {
+ 		.name	= "exynos-drm",
+ 		.pm	= &exynos_drm_pm_ops,
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 7a0220d29a235..500ed2d183fcc 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1312,6 +1312,7 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
+ 
+ 	if (adreno_is_a650(adreno_gpu) ||
+ 	    adreno_is_a660(adreno_gpu) ||
++	    adreno_is_a690(adreno_gpu) ||
+ 	    adreno_is_a730(adreno_gpu) ||
+ 	    adreno_is_a740_family(adreno_gpu)) {
+ 		/* TODO: get ddr type from bootloader and use 2 for LPDDR4 */
+@@ -1321,13 +1322,6 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
+ 		uavflagprd_inv = 2;
+ 	}
+ 
+-	if (adreno_is_a690(adreno_gpu)) {
+-		hbb_lo = 2;
+-		amsbc = 1;
+-		rgb565_predicator = 1;
+-		uavflagprd_inv = 2;
+-	}
+-
+ 	if (adreno_is_7c3(adreno_gpu)) {
+ 		hbb_lo = 1;
+ 		amsbc = 1;
+@@ -1741,7 +1735,9 @@ static int hw_init(struct msm_gpu *gpu)
+ 	/* Setting the primFifo thresholds default values,
+ 	 * and vccCacheSkipDis=1 bit (0x200) for A640 and newer
+ 	*/
+-	if (adreno_is_a650(adreno_gpu) || adreno_is_a660(adreno_gpu) || adreno_is_a690(adreno_gpu))
++	if (adreno_is_a690(adreno_gpu))
++		gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00800200);
++	else if (adreno_is_a650(adreno_gpu) || adreno_is_a660(adreno_gpu))
+ 		gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00300200);
+ 	else if (adreno_is_a640_family(adreno_gpu) || adreno_is_7c3(adreno_gpu))
+ 		gpu_write(gpu, REG_A6XX_PC_DBG_ECO_CNTL, 0x00200200);
+@@ -1775,6 +1771,8 @@ static int hw_init(struct msm_gpu *gpu)
+ 	if (adreno_is_a730(adreno_gpu) ||
+ 	    adreno_is_a740_family(adreno_gpu))
+ 		gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0xcfffff);
++	else if (adreno_is_a690(adreno_gpu))
++		gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0x4fffff);
+ 	else if (adreno_is_a619(adreno_gpu))
+ 		gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL, (1 << 30) | 0x3fffff);
+ 	else if (adreno_is_a610(adreno_gpu))
+@@ -1808,12 +1806,17 @@ static int hw_init(struct msm_gpu *gpu)
+ 	a6xx_set_cp_protect(gpu);
+ 
+ 	if (adreno_is_a660_family(adreno_gpu)) {
+-		gpu_write(gpu, REG_A6XX_CP_CHICKEN_DBG, 0x1);
++		if (adreno_is_a690(adreno_gpu))
++			gpu_write(gpu, REG_A6XX_CP_CHICKEN_DBG, 0x00028801);
++		else
++			gpu_write(gpu, REG_A6XX_CP_CHICKEN_DBG, 0x1);
+ 		gpu_write(gpu, REG_A6XX_RBBM_GBIF_CLIENT_QOS_CNTL, 0x0);
+ 	}
+ 
++	if (adreno_is_a690(adreno_gpu))
++		gpu_write(gpu, REG_A6XX_UCHE_CMDQ_CONFIG, 0x90);
+ 	/* Set dualQ + disable afull for A660 GPU */
+-	if (adreno_is_a660(adreno_gpu))
++	else if (adreno_is_a660(adreno_gpu))
+ 		gpu_write(gpu, REG_A6XX_UCHE_CMDQ_CONFIG, 0x66906);
+ 	else if (adreno_is_a7xx(adreno_gpu))
+ 		gpu_write(gpu, REG_A6XX_UCHE_CMDQ_CONFIG,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+index 1709ba57f3840..022b0408c24d5 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+@@ -31,6 +31,7 @@ static const struct dpu_mdp_cfg sm8350_mdp = {
+ 		[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 8 },
++		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x2bc, .bit_off = 16 },
+ 		[DPU_CLK_CTRL_REG_DMA] = { .reg_off = 0x2bc, .bit_off = 20 },
+ 	},
+ };
+@@ -298,6 +299,21 @@ static const struct dpu_dsc_cfg sm8350_dsc[] = {
+ 	},
+ };
+ 
++static const struct dpu_wb_cfg sm8350_wb[] = {
++	{
++		.name = "wb_2", .id = WB_2,
++		.base = 0x65000, .len = 0x2c8,
++		.features = WB_SM8250_MASK,
++		.format_list = wb2_formats,
++		.num_formats = ARRAY_SIZE(wb2_formats),
++		.clk_ctrl = DPU_CLK_CTRL_WB2,
++		.xin_id = 6,
++		.vbif_idx = VBIF_RT,
++		.maxlinewidth = 4096,
++		.intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4),
++	},
++};
++
+ static const struct dpu_intf_cfg sm8350_intf[] = {
+ 	{
+ 		.name = "intf_0", .id = INTF_0,
+@@ -393,6 +409,8 @@ const struct dpu_mdss_cfg dpu_sm8350_cfg = {
+ 	.dsc = sm8350_dsc,
+ 	.merge_3d_count = ARRAY_SIZE(sm8350_merge_3d),
+ 	.merge_3d = sm8350_merge_3d,
++	.wb_count = ARRAY_SIZE(sm8350_wb),
++	.wb = sm8350_wb,
+ 	.intf_count = ARRAY_SIZE(sm8350_intf),
+ 	.intf = sm8350_intf,
+ 	.vbif_count = ARRAY_SIZE(sdm845_vbif),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+index 72b0f547242fe..7adc42257e1ea 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+@@ -32,6 +32,7 @@ static const struct dpu_mdp_cfg sm8450_mdp = {
+ 		[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2bc, .bit_off = 8 },
+ 		[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 8 },
++		[DPU_CLK_CTRL_WB2] = { .reg_off = 0x2bc, .bit_off = 16 },
+ 		[DPU_CLK_CTRL_REG_DMA] = { .reg_off = 0x2bc, .bit_off = 20 },
+ 	},
+ };
+@@ -316,6 +317,21 @@ static const struct dpu_dsc_cfg sm8450_dsc[] = {
+ 	},
+ };
+ 
++static const struct dpu_wb_cfg sm8450_wb[] = {
++	{
++		.name = "wb_2", .id = WB_2,
++		.base = 0x65000, .len = 0x2c8,
++		.features = WB_SM8250_MASK,
++		.format_list = wb2_formats,
++		.num_formats = ARRAY_SIZE(wb2_formats),
++		.clk_ctrl = DPU_CLK_CTRL_WB2,
++		.xin_id = 6,
++		.vbif_idx = VBIF_RT,
++		.maxlinewidth = 4096,
++		.intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4),
++	},
++};
++
+ static const struct dpu_intf_cfg sm8450_intf[] = {
+ 	{
+ 		.name = "intf_0", .id = INTF_0,
+@@ -411,6 +427,8 @@ const struct dpu_mdss_cfg dpu_sm8450_cfg = {
+ 	.dsc = sm8450_dsc,
+ 	.merge_3d_count = ARRAY_SIZE(sm8450_merge_3d),
+ 	.merge_3d = sm8450_merge_3d,
++	.wb_count = ARRAY_SIZE(sm8450_wb),
++	.wb = sm8450_wb,
+ 	.intf_count = ARRAY_SIZE(sm8450_intf),
+ 	.intf = sm8450_intf,
+ 	.vbif_count = ARRAY_SIZE(sdm845_vbif),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 5dbb5d27bbea0..b9f0093389a82 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -39,6 +39,9 @@
+ #define DPU_ERROR_ENC(e, fmt, ...) DPU_ERROR("enc%d " fmt,\
+ 		(e) ? (e)->base.base.id : -1, ##__VA_ARGS__)
+ 
++#define DPU_ERROR_ENC_RATELIMITED(e, fmt, ...) DPU_ERROR_RATELIMITED("enc%d " fmt,\
++		(e) ? (e)->base.base.id : -1, ##__VA_ARGS__)
++
+ /*
+  * Two to anticipate panels that can do cmd/vid dynamic switching
+  * plan is to create all possible physical encoder types, and switch between
+@@ -2339,7 +2342,7 @@ static void dpu_encoder_frame_done_timeout(struct timer_list *t)
+ 		return;
+ 	}
+ 
+-	DPU_ERROR_ENC(dpu_enc, "frame done timeout\n");
++	DPU_ERROR_ENC_RATELIMITED(dpu_enc, "frame done timeout\n");
+ 
+ 	event = DPU_ENCODER_FRAME_EVENT_ERROR;
+ 	trace_dpu_enc_frame_done_timeout(DRMID(drm_enc), event);
+@@ -2497,7 +2500,6 @@ void dpu_encoder_phys_init(struct dpu_encoder_phys *phys_enc,
+ 	phys_enc->enc_spinlock = p->enc_spinlock;
+ 	phys_enc->enable_state = DPU_ENC_DISABLED;
+ 
+-	atomic_set(&phys_enc->vblank_refcount, 0);
+ 	atomic_set(&phys_enc->pending_kickoff_cnt, 0);
+ 	atomic_set(&phys_enc->pending_ctlstart_cnt, 0);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
+index 6f04c3d56e77c..96bda57b69596 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h
+@@ -155,6 +155,7 @@ enum dpu_intr_idx {
+  * @hw_wb:		Hardware interface to the wb registers
+  * @dpu_kms:		Pointer to the dpu_kms top level
+  * @cached_mode:	DRM mode cached at mode_set time, acted on in enable
++ * @vblank_ctl_lock:	Vblank ctl mutex lock to protect vblank_refcount
+  * @enabled:		Whether the encoder has enabled and running a mode
+  * @split_role:		Role to play in a split-panel configuration
+  * @intf_mode:		Interface mode
+@@ -183,11 +184,12 @@ struct dpu_encoder_phys {
+ 	struct dpu_hw_wb *hw_wb;
+ 	struct dpu_kms *dpu_kms;
+ 	struct drm_display_mode cached_mode;
++	struct mutex vblank_ctl_lock;
+ 	enum dpu_enc_split_role split_role;
+ 	enum dpu_intf_mode intf_mode;
+ 	spinlock_t *enc_spinlock;
+ 	enum dpu_enc_enable_state enable_state;
+-	atomic_t vblank_refcount;
++	int vblank_refcount;
+ 	atomic_t vsync_cnt;
+ 	atomic_t underrun_cnt;
+ 	atomic_t pending_ctlstart_cnt;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+index be185fe69793b..2d788c5e26a83 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+@@ -244,7 +244,8 @@ static int dpu_encoder_phys_cmd_control_vblank_irq(
+ 		return -EINVAL;
+ 	}
+ 
+-	refcount = atomic_read(&phys_enc->vblank_refcount);
++	mutex_lock(&phys_enc->vblank_ctl_lock);
++	refcount = phys_enc->vblank_refcount;
+ 
+ 	/* Slave encoders don't report vblank */
+ 	if (!dpu_encoder_phys_cmd_is_master(phys_enc))
+@@ -260,16 +261,24 @@ static int dpu_encoder_phys_cmd_control_vblank_irq(
+ 		      phys_enc->hw_pp->idx - PINGPONG_0,
+ 		      enable ? "true" : "false", refcount);
+ 
+-	if (enable && atomic_inc_return(&phys_enc->vblank_refcount) == 1)
+-		ret = dpu_core_irq_register_callback(phys_enc->dpu_kms,
+-				phys_enc->irq[INTR_IDX_RDPTR],
+-				dpu_encoder_phys_cmd_te_rd_ptr_irq,
+-				phys_enc);
+-	else if (!enable && atomic_dec_return(&phys_enc->vblank_refcount) == 0)
+-		ret = dpu_core_irq_unregister_callback(phys_enc->dpu_kms,
+-				phys_enc->irq[INTR_IDX_RDPTR]);
++	if (enable) {
++		if (phys_enc->vblank_refcount == 0)
++			ret = dpu_core_irq_register_callback(phys_enc->dpu_kms,
++					phys_enc->irq[INTR_IDX_RDPTR],
++					dpu_encoder_phys_cmd_te_rd_ptr_irq,
++					phys_enc);
++		if (!ret)
++			phys_enc->vblank_refcount++;
++	} else if (!enable) {
++		if (phys_enc->vblank_refcount == 1)
++			ret = dpu_core_irq_unregister_callback(phys_enc->dpu_kms,
++					phys_enc->irq[INTR_IDX_RDPTR]);
++		if (!ret)
++			phys_enc->vblank_refcount--;
++	}
+ 
+ end:
++	mutex_unlock(&phys_enc->vblank_ctl_lock);
+ 	if (ret) {
+ 		DRM_ERROR("vblank irq err id:%u pp:%d ret:%d, enable %s/%d\n",
+ 			  DRMID(phys_enc->parent),
+@@ -285,7 +294,7 @@ static void dpu_encoder_phys_cmd_irq_control(struct dpu_encoder_phys *phys_enc,
+ {
+ 	trace_dpu_enc_phys_cmd_irq_ctrl(DRMID(phys_enc->parent),
+ 			phys_enc->hw_pp->idx - PINGPONG_0,
+-			enable, atomic_read(&phys_enc->vblank_refcount));
++			enable, phys_enc->vblank_refcount);
+ 
+ 	if (enable) {
+ 		dpu_core_irq_register_callback(phys_enc->dpu_kms,
+@@ -763,6 +772,9 @@ struct dpu_encoder_phys *dpu_encoder_phys_cmd_init(
+ 
+ 	dpu_encoder_phys_init(phys_enc, p);
+ 
++	mutex_init(&phys_enc->vblank_ctl_lock);
++	phys_enc->vblank_refcount = 0;
++
+ 	dpu_encoder_phys_cmd_init_ops(&phys_enc->ops);
+ 	phys_enc->intf_mode = INTF_MODE_CMD;
+ 	cmd_enc->stream_sel = 0;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+index a01fda7118835..eeb0acf9665e8 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+@@ -364,7 +364,8 @@ static int dpu_encoder_phys_vid_control_vblank_irq(
+ 	int ret = 0;
+ 	int refcount;
+ 
+-	refcount = atomic_read(&phys_enc->vblank_refcount);
++	mutex_lock(&phys_enc->vblank_ctl_lock);
++	refcount = phys_enc->vblank_refcount;
+ 
+ 	/* Slave encoders don't report vblank */
+ 	if (!dpu_encoder_phys_vid_is_master(phys_enc))
+@@ -377,18 +378,26 @@ static int dpu_encoder_phys_vid_control_vblank_irq(
+ 	}
+ 
+ 	DRM_DEBUG_VBL("id:%u enable=%d/%d\n", DRMID(phys_enc->parent), enable,
+-		      atomic_read(&phys_enc->vblank_refcount));
++		      refcount);
+ 
+-	if (enable && atomic_inc_return(&phys_enc->vblank_refcount) == 1)
+-		ret = dpu_core_irq_register_callback(phys_enc->dpu_kms,
+-				phys_enc->irq[INTR_IDX_VSYNC],
+-				dpu_encoder_phys_vid_vblank_irq,
+-				phys_enc);
+-	else if (!enable && atomic_dec_return(&phys_enc->vblank_refcount) == 0)
+-		ret = dpu_core_irq_unregister_callback(phys_enc->dpu_kms,
+-				phys_enc->irq[INTR_IDX_VSYNC]);
++	if (enable) {
++		if (phys_enc->vblank_refcount == 0)
++			ret = dpu_core_irq_register_callback(phys_enc->dpu_kms,
++					phys_enc->irq[INTR_IDX_VSYNC],
++					dpu_encoder_phys_vid_vblank_irq,
++					phys_enc);
++		if (!ret)
++			phys_enc->vblank_refcount++;
++	} else if (!enable) {
++		if (phys_enc->vblank_refcount == 1)
++			ret = dpu_core_irq_unregister_callback(phys_enc->dpu_kms,
++					phys_enc->irq[INTR_IDX_VSYNC]);
++		if (!ret)
++			phys_enc->vblank_refcount--;
++	}
+ 
+ end:
++	mutex_unlock(&phys_enc->vblank_ctl_lock);
+ 	if (ret) {
+ 		DRM_ERROR("failed: id:%u intf:%d ret:%d enable:%d refcnt:%d\n",
+ 			  DRMID(phys_enc->parent),
+@@ -618,7 +627,7 @@ static void dpu_encoder_phys_vid_irq_control(struct dpu_encoder_phys *phys_enc,
+ 	trace_dpu_enc_phys_vid_irq_ctrl(DRMID(phys_enc->parent),
+ 			    phys_enc->hw_intf->idx - INTF_0,
+ 			    enable,
+-			    atomic_read(&phys_enc->vblank_refcount));
++			    phys_enc->vblank_refcount);
+ 
+ 	if (enable) {
+ 		ret = dpu_encoder_phys_vid_control_vblank_irq(phys_enc, true);
+@@ -713,6 +722,8 @@ struct dpu_encoder_phys *dpu_encoder_phys_vid_init(
+ 	DPU_DEBUG_VIDENC(phys_enc, "\n");
+ 
+ 	dpu_encoder_phys_init(phys_enc, p);
++	mutex_init(&phys_enc->vblank_ctl_lock);
++	phys_enc->vblank_refcount = 0;
+ 
+ 	dpu_encoder_phys_vid_init_ops(&phys_enc->ops);
+ 	phys_enc->intf_mode = INTF_MODE_VIDEO;
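
Both the cmd and vid variants of this fix follow the same pattern: the
atomic_t counter becomes a plain int guarded by vblank_ctl_lock, so the
0 -> 1 and 1 -> 0 transitions commit together with the IRQ registration
call, and the count only moves when that call succeeded. A minimal
standalone sketch of the pattern in userspace C, with pthread stand-ins
and stub hook functions (none of these names come from the driver):

        #include <pthread.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static int refcount;

        /* Stand-ins for dpu_core_irq_{register,unregister}_callback(). */
        static int hook_irq(void)   { return 0; }
        static int unhook_irq(void) { return 0; }

        /*
         * Register on the 0 -> 1 edge, unregister on the 1 -> 0 edge;
         * the counter only moves when the hook call succeeded, so it
         * cannot drift out of step with the registration state.
         */
        static int vblank_irq_ctl(int enable)
        {
                int ret = 0;

                pthread_mutex_lock(&lock);
                if (enable) {
                        if (refcount == 0)
                                ret = hook_irq();
                        if (!ret)
                                refcount++;
                } else if (refcount > 0) {
                        if (refcount == 1)
                                ret = unhook_irq();
                        if (!ret)
                                refcount--;
                }
                pthread_mutex_unlock(&lock);
                return ret;
        }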
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
+index 9668fb97c0478..d49b3ef7689eb 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
+@@ -87,6 +87,9 @@ static void dpu_hw_wb_setup_format(struct dpu_hw_wb *ctx,
+ 			dst_format |= BIT(14); /* DST_ALPHA_X */
+ 	}
+ 
++	if (DPU_FORMAT_IS_YUV(fmt))
++		dst_format |= BIT(15);
++
+ 	pattern = (fmt->element[3] << 24) |
+ 		(fmt->element[2] << 16) |
+ 		(fmt->element[1] << 8)  |
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+index b6f53ca6e9628..f5473d4dea92f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+@@ -51,6 +51,7 @@
+ 	} while (0)
+ 
+ #define DPU_ERROR(fmt, ...) pr_err("[dpu error]" fmt, ##__VA_ARGS__)
++#define DPU_ERROR_RATELIMITED(fmt, ...) pr_err_ratelimited("[dpu error]" fmt, ##__VA_ARGS__)
+ 
+ /**
+  * ktime_compare_safe - compare two ktime structures
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 1b88fb52726f2..4f89c99395017 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -170,6 +170,11 @@ static const struct msm_dp_desc sm8350_dp_descs[] = {
+ 	{}
+ };
+ 
++static const struct msm_dp_desc sm8650_dp_descs[] = {
++	{ .io_start = 0x0af54000, .id = MSM_DP_CONTROLLER_0, .connector_type = DRM_MODE_CONNECTOR_DisplayPort },
++	{}
++};
++
+ static const struct of_device_id dp_dt_match[] = {
+ 	{ .compatible = "qcom,sc7180-dp", .data = &sc7180_dp_descs },
+ 	{ .compatible = "qcom,sc7280-dp", .data = &sc7280_dp_descs },
+@@ -180,6 +185,7 @@ static const struct of_device_id dp_dt_match[] = {
+ 	{ .compatible = "qcom,sc8280xp-edp", .data = &sc8280xp_edp_descs },
+ 	{ .compatible = "qcom,sdm845-dp", .data = &sc7180_dp_descs },
+ 	{ .compatible = "qcom,sm8350-dp", .data = &sm8350_dp_descs },
++	{ .compatible = "qcom,sm8650-dp", .data = &sm8650_dp_descs },
+ 	{}
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+index b6314bb66d2fd..e49ebd9f6326f 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
+@@ -691,6 +691,10 @@ static int dsi_phy_driver_probe(struct platform_device *pdev)
+ 		return dev_err_probe(dev, PTR_ERR(phy->ahb_clk),
+ 				     "Unable to get ahb clk\n");
+ 
++	ret = devm_pm_runtime_enable(&pdev->dev);
++	if (ret)
++		return ret;
++
+ 	/* PLL init will call into clk_register which requires
+ 	 * register access, so we need to enable power and ahb clock.
+ 	 */
+diff --git a/drivers/gpu/drm/msm/msm_mdss.c b/drivers/gpu/drm/msm/msm_mdss.c
+index 6865db1e3ce89..29bb38f0bb2ce 100644
+--- a/drivers/gpu/drm/msm/msm_mdss.c
++++ b/drivers/gpu/drm/msm/msm_mdss.c
+@@ -545,7 +545,7 @@ static const struct msm_mdss_data sc8280xp_data = {
+ 	.ubwc_dec_version = UBWC_4_0,
+ 	.ubwc_swizzle = 6,
+ 	.ubwc_static = 1,
+-	.highest_bank_bit = 2,
++	.highest_bank_bit = 3,
+ 	.macrotile_mode = 1,
+ };
+ 
+diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
+index 7dc6fb7308ce3..cba5a93e60822 100644
+--- a/drivers/gpu/drm/panel/panel-edp.c
++++ b/drivers/gpu/drm/panel/panel-edp.c
+@@ -203,6 +203,9 @@ struct edp_panel_entry {
+ 
+ 	/** @name: Name of this panel (for printing to logs). */
+ 	const char *name;
++
++	/** @override_edid_mode: Override the mode obtained by edid. */
++	const struct drm_display_mode *override_edid_mode;
+ };
+ 
+ struct panel_edp {
+@@ -301,6 +304,24 @@ static unsigned int panel_edp_get_display_modes(struct panel_edp *panel,
+ 	return num;
+ }
+ 
++static int panel_edp_override_edid_mode(struct panel_edp *panel,
++					struct drm_connector *connector,
++					const struct drm_display_mode *override_mode)
++{
++	struct drm_display_mode *mode;
++
++	mode = drm_mode_duplicate(connector->dev, override_mode);
++	if (!mode) {
++		dev_err(panel->base.dev, "failed to add additional mode\n");
++		return 0;
++	}
++
++	mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
++	drm_mode_set_name(mode);
++	drm_mode_probed_add(connector, mode);
++	return 1;
++}
++
+ static int panel_edp_get_non_edid_modes(struct panel_edp *panel,
+ 					struct drm_connector *connector)
+ {
+@@ -568,6 +589,9 @@ static int panel_edp_get_modes(struct drm_panel *panel,
+ {
+ 	struct panel_edp *p = to_panel_edp(panel);
+ 	int num = 0;
++	bool has_override_edid_mode = p->detected_panel &&
++				      p->detected_panel != ERR_PTR(-EINVAL) &&
++				      p->detected_panel->override_edid_mode;
+ 
+ 	/* probe EDID if a DDC bus is available */
+ 	if (p->ddc) {
+@@ -575,9 +599,18 @@ static int panel_edp_get_modes(struct drm_panel *panel,
+ 
+ 		if (!p->edid)
+ 			p->edid = drm_get_edid(connector, p->ddc);
+-
+-		if (p->edid)
+-			num += drm_add_edid_modes(connector, p->edid);
++		if (p->edid) {
++			if (has_override_edid_mode) {
++				/*
++				 * An override_edid_mode is specified. Use it
++				 * instead of the modes read from the EDID.
++				 */
++				num += panel_edp_override_edid_mode(p, connector,
++						p->detected_panel->override_edid_mode);
++			} else {
++				num += drm_add_edid_modes(connector, p->edid);
++			}
++		}
+ 
+ 		pm_runtime_mark_last_busy(panel->dev);
+ 		pm_runtime_put_autosuspend(panel->dev);
+@@ -1830,6 +1863,15 @@ static const struct panel_delay delay_200_500_e200 = {
+ 	.delay = _delay \
+ }
+ 
++#define EDP_PANEL_ENTRY2(vend_chr_0, vend_chr_1, vend_chr_2, product_id, _delay, _name, _mode) \
++{ \
++	.name = _name, \
++	.panel_id = drm_edid_encode_panel_id(vend_chr_0, vend_chr_1, vend_chr_2, \
++					     product_id), \
++	.delay = _delay, \
++	.override_edid_mode = _mode \
++}
++
+ /*
+  * This table is used to figure out power sequencing delays for panels that
+  * are detected by EDID. Entries here may point to entries in the
+diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c
+index 13c8dd8cd3506..2bc762d31ac70 100644
+--- a/drivers/hid/hidraw.c
++++ b/drivers/hid/hidraw.c
+@@ -357,8 +357,11 @@ static int hidraw_release(struct inode * inode, struct file * file)
+ 	down_write(&minors_rwsem);
+ 
+ 	spin_lock_irqsave(&hidraw_table[minor]->list_lock, flags);
+-	for (int i = list->tail; i < list->head; i++)
+-		kfree(list->buffer[i].value);
++	while (list->tail != list->head) {
++		kfree(list->buffer[list->tail].value);
++		list->buffer[list->tail].value = NULL;
++		list->tail = (list->tail + 1) & (HIDRAW_BUFFER_SIZE - 1);
++	}
+ 	list_del(&list->node);
+ 	spin_unlock_irqrestore(&hidraw_table[minor]->list_lock, flags);
+ 	kfree(list);
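
The old loop walked a plain index range (for (i = list->tail;
i < list->head; i++)), which skips every queued entry once head has
wrapped around below tail; the replacement drains until tail catches
head, masking with HIDRAW_BUFFER_SIZE - 1, which is only valid because
the buffer size is a power of two. A standalone C sketch of the same
drain, with illustrative names:

        #include <stdlib.h>

        #define RING_SIZE 64            /* must be a power of two */

        struct ring {
                void *slot[RING_SIZE];
                unsigned int head;      /* next write index */
                unsigned int tail;      /* next read index */
        };

        /* Free every queued entry; head == tail means the ring is empty. */
        static void ring_drain(struct ring *r)
        {
                while (r->tail != r->head) {
                        free(r->slot[r->tail]);
                        r->slot[r->tail] = NULL;
                        r->tail = (r->tail + 1) & (RING_SIZE - 1);
                }
        }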
+diff --git a/drivers/hwmon/hp-wmi-sensors.c b/drivers/hwmon/hp-wmi-sensors.c
+index 17ae62f88bbf4..b5325d0e72b9c 100644
+--- a/drivers/hwmon/hp-wmi-sensors.c
++++ b/drivers/hwmon/hp-wmi-sensors.c
+@@ -17,6 +17,8 @@
+  *     Available: https://github.com/linuxhw/ACPI
+  * [4] P. Rohár, "bmfdec - Decompile binary MOF file (BMF) from WMI buffer",
+  *     2017. [Online]. Available: https://github.com/pali/bmfdec
++ * [5] Microsoft Corporation, "Driver-Defined WMI Data Items", 2017. [Online].
++ *     Available: https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/driver-defined-wmi-data-items
+  */
+ 
+ #include <linux/acpi.h>
+@@ -24,6 +26,7 @@
+ #include <linux/hwmon.h>
+ #include <linux/jiffies.h>
+ #include <linux/mutex.h>
++#include <linux/nls.h>
+ #include <linux/units.h>
+ #include <linux/wmi.h>
+ 
+@@ -395,6 +398,50 @@ struct hp_wmi_sensors {
+ 	struct mutex lock;	/* Lock polling WMI and driver state changes. */
+ };
+ 
++static bool is_raw_wmi_string(const u8 *pointer, u32 length)
++{
++	const u16 *ptr;
++	u16 len;
++
++	/* WMI strings are length-prefixed UTF-16 [5]. */
++	if (length <= sizeof(*ptr))
++		return false;
++
++	length -= sizeof(*ptr);
++	ptr = (const u16 *)pointer;
++	len = *ptr;
++
++	return len <= length && !(len & 1);
++}
++
++static char *convert_raw_wmi_string(const u8 *buf)
++{
++	const wchar_t *src;
++	unsigned int cps;
++	unsigned int len;
++	char *dst;
++	int i;
++
++	src = (const wchar_t *)buf;
++
++	/* Count UTF-16 code points. Exclude trailing null padding. */
++	cps = *src / sizeof(*src);
++	while (cps && !src[cps])
++		cps--;
++
++	/* Each code point becomes up to 3 UTF-8 characters. */
++	len = min(cps * 3, HP_WMI_MAX_STR_SIZE - 1);
++
++	dst = kmalloc((len + 1) * sizeof(*dst), GFP_KERNEL);
++	if (!dst)
++		return NULL;
++
++	i = utf16s_to_utf8s(++src, cps, UTF16_LITTLE_ENDIAN, dst, len);
++	dst[i] = '\0';
++
++	return dst;
++}
++
+ /* hp_wmi_strdup - devm_kstrdup, but length-limited */
+ static char *hp_wmi_strdup(struct device *dev, const char *src)
+ {
+@@ -412,6 +459,23 @@ static char *hp_wmi_strdup(struct device *dev, const char *src)
+ 	return dst;
+ }
+ 
++/* hp_wmi_wstrdup - hp_wmi_strdup, but for a raw WMI string */
++static char *hp_wmi_wstrdup(struct device *dev, const u8 *buf)
++{
++	char *src;
++	char *dst;
++
++	src = convert_raw_wmi_string(buf);
++	if (!src)
++		return NULL;
++
++	dst = hp_wmi_strdup(dev, strim(src));	/* Note: Copy is trimmed. */
++
++	kfree(src);
++
++	return dst;
++}
++
+ /*
+  * hp_wmi_get_wobj - poll WMI for a WMI object instance
+  * @guid: WMI object GUID
+@@ -462,8 +526,14 @@ static int check_wobj(const union acpi_object *wobj,
+ 	for (prop = 0; prop <= last_prop; prop++) {
+ 		type = elements[prop].type;
+ 		valid_type = property_map[prop];
+-		if (type != valid_type)
++		if (type != valid_type) {
++			if (type == ACPI_TYPE_BUFFER &&
++			    valid_type == ACPI_TYPE_STRING &&
++			    is_raw_wmi_string(elements[prop].buffer.pointer,
++					      elements[prop].buffer.length))
++				continue;
+ 			return -EINVAL;
++		}
+ 	}
+ 
+ 	return 0;
+@@ -480,7 +550,9 @@ static int extract_acpi_value(struct device *dev,
+ 		break;
+ 
+ 	case ACPI_TYPE_STRING:
+-		*out_string = hp_wmi_strdup(dev, strim(element->string.pointer));
++		*out_string = element->type == ACPI_TYPE_BUFFER ?
++			hp_wmi_wstrdup(dev, element->buffer.pointer) :
++			hp_wmi_strdup(dev, strim(element->string.pointer));
+ 		if (!*out_string)
+ 			return -ENOMEM;
+ 		break;
+@@ -861,7 +933,9 @@ update_numeric_sensor_from_wobj(struct device *dev,
+ {
+ 	const union acpi_object *elements;
+ 	const union acpi_object *element;
+-	const char *string;
++	const char *new_string;
++	char *trimmed;
++	char *string;
+ 	bool is_new;
+ 	int offset;
+ 	u8 size;
+@@ -885,11 +959,21 @@ update_numeric_sensor_from_wobj(struct device *dev,
+ 	offset = is_new ? size - 1 : -2;
+ 
+ 	element = &elements[HP_WMI_PROPERTY_CURRENT_STATE + offset];
+-	string = strim(element->string.pointer);
+-
+-	if (strcmp(string, nsensor->current_state)) {
+-		devm_kfree(dev, nsensor->current_state);
+-		nsensor->current_state = hp_wmi_strdup(dev, string);
++	string = element->type == ACPI_TYPE_BUFFER ?
++		convert_raw_wmi_string(element->buffer.pointer) :
++		element->string.pointer;
++
++	if (string) {
++		trimmed = strim(string);
++		if (strcmp(trimmed, nsensor->current_state)) {
++			new_string = hp_wmi_strdup(dev, trimmed);
++			if (new_string) {
++				devm_kfree(dev, nsensor->current_state);
++				nsensor->current_state = new_string;
++			}
++		}
++		if (element->type == ACPI_TYPE_BUFFER)
++			kfree(string);
+ 	}
+ 
+ 	/* Old variant: -2 (not -1) because it lacks the Size property. */
+@@ -996,11 +1080,15 @@ static int check_event_wobj(const union acpi_object *wobj)
+ 			  HP_WMI_EVENT_PROPERTY_STATUS);
+ }
+ 
+-static int populate_event_from_wobj(struct hp_wmi_event *event,
++static int populate_event_from_wobj(struct device *dev,
++				    struct hp_wmi_event *event,
+ 				    union acpi_object *wobj)
+ {
+ 	int prop = HP_WMI_EVENT_PROPERTY_NAME;
+ 	union acpi_object *element;
++	acpi_object_type type;
++	char *string;
++	u32 value;
+ 	int err;
+ 
+ 	err = check_event_wobj(wobj);
+@@ -1009,20 +1097,24 @@ static int populate_event_from_wobj(struct hp_wmi_event *event,
+ 
+ 	element = wobj->package.elements;
+ 
+-	/* Extracted strings are NOT device-managed copies. */
+-
+ 	for (; prop <= HP_WMI_EVENT_PROPERTY_CATEGORY; prop++, element++) {
++		type = hp_wmi_event_property_map[prop];
++
++		err = extract_acpi_value(dev, element, type, &value, &string);
++		if (err)
++			return err;
++
+ 		switch (prop) {
+ 		case HP_WMI_EVENT_PROPERTY_NAME:
+-			event->name = strim(element->string.pointer);
++			event->name = string;
+ 			break;
+ 
+ 		case HP_WMI_EVENT_PROPERTY_DESCRIPTION:
+-			event->description = strim(element->string.pointer);
++			event->description = string;
+ 			break;
+ 
+ 		case HP_WMI_EVENT_PROPERTY_CATEGORY:
+-			event->category = element->integer.value;
++			event->category = value;
+ 			break;
+ 
+ 		default:
+@@ -1511,8 +1603,8 @@ static void hp_wmi_notify(u32 value, void *context)
+ 	struct acpi_buffer out = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	struct hp_wmi_sensors *state = context;
+ 	struct device *dev = &state->wdev->dev;
++	struct hp_wmi_event event = {};
+ 	struct hp_wmi_info *fan_info;
+-	struct hp_wmi_event event;
+ 	union acpi_object *wobj;
+ 	acpi_status err;
+ 	int event_type;
+@@ -1546,7 +1638,7 @@ static void hp_wmi_notify(u32 value, void *context)
+ 
+ 	wobj = out.pointer;
+ 
+-	err = populate_event_from_wobj(&event, wobj);
++	err = populate_event_from_wobj(dev, &event, wobj);
+ 	if (err) {
+ 		dev_warn(dev, "Bad event data (ACPI type %d)\n", wobj->type);
+ 		goto out_free_wobj;
+@@ -1577,6 +1669,9 @@ static void hp_wmi_notify(u32 value, void *context)
+ out_free_wobj:
+ 	kfree(wobj);
+ 
++	devm_kfree(dev, event.name);
++	devm_kfree(dev, event.description);
++
+ out_unlock:
+ 	mutex_unlock(&state->lock);
+ }
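
The key helper here is is_raw_wmi_string(): per [5], a raw WMI string
is a little-endian 16-bit byte count followed by UTF-16LE code units,
so a buffer only qualifies if the declared payload fits inside the
buffer and is an even number of bytes. A standalone sketch of that
check (the function name is illustrative):

        #include <stdbool.h>
        #include <stdint.h>

        /*
         * A raw WMI string is a little-endian 16-bit byte count
         * followed by UTF-16LE code units [5]. The payload must fit in
         * the buffer and have an even length.
         */
        static bool wmi_string_ok(const uint8_t *buf, uint32_t buflen)
        {
                uint16_t len;

                if (buflen <= sizeof(len))
                        return false;

                len = (uint16_t)(buf[0] | (buf[1] << 8));

                return len <= buflen - sizeof(len) && !(len & 1);
        }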
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index d928eb8ae5a37..92a49fafe2c02 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -2553,6 +2553,13 @@ store_pwm(struct device *dev, struct device_attribute *attr, const char *buf,
+ 	int err;
+ 	u16 reg;
+ 
++	/*
++	 * The fan control mode must be set to manual before the fan speed can
++	 * be adjusted; otherwise the write would not take effect, so reject it.
++	 */
++	if (index == 0 && data->pwm_enable[nr] > manual)
++		return -EBUSY;
++
+ 	err = kstrtoul(buf, 10, &val);
+ 	if (err < 0)
+ 		return err;
+diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
+index 4362db7c57892..086fdf262e7b6 100644
+--- a/drivers/i2c/busses/i2c-rk3x.c
++++ b/drivers/i2c/busses/i2c-rk3x.c
+@@ -1295,8 +1295,12 @@ static int rk3x_i2c_probe(struct platform_device *pdev)
+ 			return -EINVAL;
+ 		}
+ 
+-		/* 27+i: write mask, 11+i: value */
+-		value = BIT(27 + bus_nr) | BIT(11 + bus_nr);
++		/* rv1126 i2c2 uses non-sequential bits: write mask bit 20, value bit 4 */
++		if (i2c->soc_data == &rv1126_soc_data && bus_nr == 2)
++			value = BIT(20) | BIT(4);
++		else
++			/* 27+i: write mask, 11+i: value */
++			value = BIT(27 + bus_nr) | BIT(11 + bus_nr);
+ 
+ 		ret = regmap_write(grf, i2c->soc_data->grf_offset, value);
+ 		if (ret != 0) {
+diff --git a/drivers/i3c/master/i3c-master-cdns.c b/drivers/i3c/master/i3c-master-cdns.c
+index bcbe8f914149b..c1627f3552ce3 100644
+--- a/drivers/i3c/master/i3c-master-cdns.c
++++ b/drivers/i3c/master/i3c-master-cdns.c
+@@ -76,7 +76,8 @@
+ #define PRESCL_CTRL0			0x14
+ #define PRESCL_CTRL0_I2C(x)		((x) << 16)
+ #define PRESCL_CTRL0_I3C(x)		(x)
+-#define PRESCL_CTRL0_MAX		GENMASK(9, 0)
++#define PRESCL_CTRL0_I3C_MAX		GENMASK(9, 0)
++#define PRESCL_CTRL0_I2C_MAX		GENMASK(15, 0)
+ 
+ #define PRESCL_CTRL1			0x18
+ #define PRESCL_CTRL1_PP_LOW_MASK	GENMASK(15, 8)
+@@ -1233,7 +1234,7 @@ static int cdns_i3c_master_bus_init(struct i3c_master_controller *m)
+ 		return -EINVAL;
+ 
+ 	pres = DIV_ROUND_UP(sysclk_rate, (bus->scl_rate.i3c * 4)) - 1;
+-	if (pres > PRESCL_CTRL0_MAX)
++	if (pres > PRESCL_CTRL0_I3C_MAX)
+ 		return -ERANGE;
+ 
+ 	bus->scl_rate.i3c = sysclk_rate / ((pres + 1) * 4);
+@@ -1246,7 +1247,7 @@ static int cdns_i3c_master_bus_init(struct i3c_master_controller *m)
+ 	max_i2cfreq = bus->scl_rate.i2c;
+ 
+ 	pres = (sysclk_rate / (max_i2cfreq * 5)) - 1;
+-	if (pres > PRESCL_CTRL0_MAX)
++	if (pres > PRESCL_CTRL0_I2C_MAX)
+ 		return -ERANGE;
+ 
+ 	bus->scl_rate.i2c = sysclk_rate / ((pres + 1) * 5);
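
The split matters because the I2C prescaler field is 16 bits wide
while the I3C field is only 10; clamping both to GENMASK(9, 0)
rejected legal slow I2C rates. The divider arithmetic itself is
unchanged. A small sketch of the I3C branch with made-up example
clocks (the 100 MHz system clock and 12.5 MHz target SCL are
assumptions, not values from the driver):

        #include <stdio.h>

        /* Ceiling division, as the kernel's DIV_ROUND_UP() does. */
        #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

        int main(void)
        {
                unsigned long sysclk = 100000000;       /* assumed 100 MHz */
                unsigned long scl = 12500000;           /* assumed 12.5 MHz */
                unsigned long pres;

                /* SCL = sysclk / ((pres + 1) * 4), solved for pres. */
                pres = DIV_ROUND_UP(sysclk, scl * 4) - 1;
                printf("pres=%lu -> actual SCL %lu Hz\n",
                       pres, sysclk / ((pres + 1) * 4));
                return 0;
        }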
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index 5b3154503bf49..319d4288eddde 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -531,21 +531,18 @@ static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
+ 		if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
+ 			rec.join_state = SENDONLY_FULLMEMBER_JOIN;
+ 	}
+-	spin_unlock_irq(&priv->lock);
+ 
+ 	multicast = ib_sa_join_multicast(&ipoib_sa_client, priv->ca, priv->port,
+-					 &rec, comp_mask, GFP_KERNEL,
++					 &rec, comp_mask, GFP_ATOMIC,
+ 					 ipoib_mcast_join_complete, mcast);
+-	spin_lock_irq(&priv->lock);
+ 	if (IS_ERR(multicast)) {
+ 		ret = PTR_ERR(multicast);
+ 		ipoib_warn(priv, "ib_sa_join_multicast failed, status %d\n", ret);
+ 		/* Requeue this join task with a backoff delay */
+ 		__ipoib_mcast_schedule_join_thread(priv, mcast, 1);
+ 		clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+-		spin_unlock_irq(&priv->lock);
+ 		complete(&mcast->done);
+-		spin_lock_irq(&priv->lock);
++		return ret;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/leds/trigger/ledtrig-panic.c b/drivers/leds/trigger/ledtrig-panic.c
+index 64abf2e91608a..5a6b21bfeb9af 100644
+--- a/drivers/leds/trigger/ledtrig-panic.c
++++ b/drivers/leds/trigger/ledtrig-panic.c
+@@ -64,10 +64,13 @@ static long led_panic_blink(int state)
+ 
+ static int __init ledtrig_panic_init(void)
+ {
++	led_trigger_register_simple("panic", &trigger);
++	if (!trigger)
++		return -ENOMEM;
++
+ 	atomic_notifier_chain_register(&panic_notifier_list,
+ 				       &led_trigger_panic_nb);
+ 
+-	led_trigger_register_simple("panic", &trigger);
+ 	panic_blink = led_panic_blink;
+ 	return 0;
+ }
+diff --git a/drivers/mailbox/arm_mhuv2.c b/drivers/mailbox/arm_mhuv2.c
+index c6d4957c4da83..0ec21dcdbde72 100644
+--- a/drivers/mailbox/arm_mhuv2.c
++++ b/drivers/mailbox/arm_mhuv2.c
+@@ -553,7 +553,8 @@ static irqreturn_t mhuv2_sender_interrupt(int irq, void *data)
+ 	priv = chan->con_priv;
+ 
+ 	if (!IS_PROTOCOL_DOORBELL(priv)) {
+-		writel_relaxed(1, &mhu->send->ch_wn[priv->ch_wn_idx + priv->windows - 1].int_clr);
++		for (i = 0; i < priv->windows; i++)
++			writel_relaxed(1, &mhu->send->ch_wn[priv->ch_wn_idx + i].int_clr);
+ 
+ 		if (chan->cl) {
+ 			mbox_chan_txdone(chan, 0);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 22b54788bceab..21b04607d53a9 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1219,6 +1219,7 @@ struct super_type  {
+ 					  struct md_rdev *refdev,
+ 					  int minor_version);
+ 	int		    (*validate_super)(struct mddev *mddev,
++					      struct md_rdev *freshest,
+ 					      struct md_rdev *rdev);
+ 	void		    (*sync_super)(struct mddev *mddev,
+ 					  struct md_rdev *rdev);
+@@ -1356,8 +1357,9 @@ static int super_90_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor
+ 
+ /*
+  * validate_super for 0.90.0
++ * note: "freshest" is not used for 0.90 superblocks
+  */
+-static int super_90_validate(struct mddev *mddev, struct md_rdev *rdev)
++static int super_90_validate(struct mddev *mddev, struct md_rdev *freshest, struct md_rdev *rdev)
+ {
+ 	mdp_disk_t *desc;
+ 	mdp_super_t *sb = page_address(rdev->sb_page);
+@@ -1869,7 +1871,7 @@ static int super_1_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor_
+ 	return ret;
+ }
+ 
+-static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
++static int super_1_validate(struct mddev *mddev, struct md_rdev *freshest, struct md_rdev *rdev)
+ {
+ 	struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
+ 	__u64 ev1 = le64_to_cpu(sb->events);
+@@ -1965,13 +1967,15 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
+ 		}
+ 	} else if (mddev->pers == NULL) {
+ 		/* Insist on a good event counter while assembling, except for
+-		 * spares (which don't need an event count) */
+-		++ev1;
++		 * spares (which don't need an event count).
++		 * Similar to mdadm, we allow event counter difference of 1
++		 * from the freshest device.
++		 */
+ 		if (rdev->desc_nr >= 0 &&
+ 		    rdev->desc_nr < le32_to_cpu(sb->max_dev) &&
+ 		    (le16_to_cpu(sb->dev_roles[rdev->desc_nr]) < MD_DISK_ROLE_MAX ||
+ 		     le16_to_cpu(sb->dev_roles[rdev->desc_nr]) == MD_DISK_ROLE_JOURNAL))
+-			if (ev1 < mddev->events)
++			if (ev1 + 1 < mddev->events)
+ 				return -EINVAL;
+ 	} else if (mddev->bitmap) {
+ 		/* If adding to array with a bitmap, then we can accept an
+@@ -1992,8 +1996,38 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev)
+ 		    rdev->desc_nr >= le32_to_cpu(sb->max_dev)) {
+ 			role = MD_DISK_ROLE_SPARE;
+ 			rdev->desc_nr = -1;
+-		} else
++		} else if (mddev->pers == NULL && freshest && ev1 < mddev->events) {
++			/*
++			 * If we are assembling, and our event counter is smaller than the
++			 * highest event counter, we cannot trust our superblock about the role.
++			 * It could happen that our rdev was marked as Faulty, and all other
++			 * superblocks were updated with +1 event counter.
++			 * Then, before the next superblock update, which typically happens when
++			 * remove_and_add_spares() removes the device from the array, there was
++			 * a crash or reboot.
++			 * If we allow the current rdev without consulting the freshest superblock,
++			 * we could cause data corruption.
++			 * Note that in this case our event counter is smaller by 1 than the
++			 * highest, otherwise, this rdev would not be allowed into array;
++			 * both kernel and mdadm allow event counter difference of 1.
++			 */
++			struct mdp_superblock_1 *freshest_sb = page_address(freshest->sb_page);
++			u32 freshest_max_dev = le32_to_cpu(freshest_sb->max_dev);
++
++			if (rdev->desc_nr >= freshest_max_dev) {
++				/* this is unexpected, better not proceed */
++				pr_warn("md: %s: rdev[%pg]: desc_nr(%d) >= freshest(%pg)->sb->max_dev(%u)\n",
++						mdname(mddev), rdev->bdev, rdev->desc_nr,
++						freshest->bdev, freshest_max_dev);
++				return -EUCLEAN;
++			}
++
++			role = le16_to_cpu(freshest_sb->dev_roles[rdev->desc_nr]);
++			pr_debug("md: %s: rdev[%pg]: role=%d(0x%x) according to freshest %pg\n",
++				     mdname(mddev), rdev->bdev, role, role, freshest->bdev);
++		} else {
+ 			role = le16_to_cpu(sb->dev_roles[rdev->desc_nr]);
++		}
+ 		switch(role) {
+ 		case MD_DISK_ROLE_SPARE: /* spare */
+ 			break;
+@@ -2900,7 +2934,7 @@ static int add_bound_rdev(struct md_rdev *rdev)
+ 		 * and should be added immediately.
+ 		 */
+ 		super_types[mddev->major_version].
+-			validate_super(mddev, rdev);
++			validate_super(mddev, NULL/*freshest*/, rdev);
+ 		err = mddev->pers->hot_add_disk(mddev, rdev);
+ 		if (err) {
+ 			md_kick_rdev_from_array(rdev);
+@@ -3837,7 +3871,7 @@ static int analyze_sbs(struct mddev *mddev)
+ 	}
+ 
+ 	super_types[mddev->major_version].
+-		validate_super(mddev, freshest);
++		validate_super(mddev, NULL/*freshest*/, freshest);
+ 
+ 	i = 0;
+ 	rdev_for_each_safe(rdev, tmp, mddev) {
+@@ -3852,7 +3886,7 @@ static int analyze_sbs(struct mddev *mddev)
+ 		}
+ 		if (rdev != freshest) {
+ 			if (super_types[mddev->major_version].
+-			    validate_super(mddev, rdev)) {
++			    validate_super(mddev, freshest, rdev)) {
+ 				pr_warn("md: kicking non-fresh %pg from array!\n",
+ 					rdev->bdev);
+ 				md_kick_rdev_from_array(rdev);
+@@ -6843,7 +6877,7 @@ int md_add_new_disk(struct mddev *mddev, struct mdu_disk_info_s *info)
+ 			rdev->saved_raid_disk = rdev->raid_disk;
+ 		} else
+ 			super_types[mddev->major_version].
+-				validate_super(mddev, rdev);
++				validate_super(mddev, NULL/*freshest*/, rdev);
+ 		if ((info->state & (1<<MD_DISK_SYNC)) &&
+ 		     rdev->raid_disk != info->raid_disk) {
+ 			/* This was a hot-add request, but events doesn't
+diff --git a/drivers/media/i2c/imx335.c b/drivers/media/i2c/imx335.c
+index ec729126274b2..964a81bec5a4c 100644
+--- a/drivers/media/i2c/imx335.c
++++ b/drivers/media/i2c/imx335.c
+@@ -962,8 +962,8 @@ static int imx335_init_controls(struct imx335 *imx335)
+ 	imx335->hblank_ctrl = v4l2_ctrl_new_std(ctrl_hdlr,
+ 						&imx335_ctrl_ops,
+ 						V4L2_CID_HBLANK,
+-						IMX335_REG_MIN,
+-						IMX335_REG_MAX,
++						mode->hblank,
++						mode->hblank,
+ 						1, mode->hblank);
+ 	if (imx335->hblank_ctrl)
+ 		imx335->hblank_ctrl->flags |= V4L2_CTRL_FLAG_READ_ONLY;
+diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c
+index 24e468485fbf0..6be22586c3d2e 100644
+--- a/drivers/media/i2c/ov2740.c
++++ b/drivers/media/i2c/ov2740.c
+@@ -310,7 +310,7 @@ static const struct ov2740_mode supported_modes[] = {
+ 	{
+ 		.width = 1932,
+ 		.height = 1092,
+-		.hts = 1080,
++		.hts = 2160,
+ 		.vts_def = OV2740_VTS_DEF,
+ 		.vts_min = OV2740_VTS_MIN,
+ 		.reg_list = {
+@@ -357,15 +357,6 @@ static u64 to_pixel_rate(u32 f_index)
+ 	return pixel_rate;
+ }
+ 
+-static u64 to_pixels_per_line(u32 hts, u32 f_index)
+-{
+-	u64 ppl = hts * to_pixel_rate(f_index);
+-
+-	do_div(ppl, OV2740_SCLK);
+-
+-	return ppl;
+-}
+-
+ static int ov2740_read_reg(struct ov2740 *ov2740, u16 reg, u16 len, u32 *val)
+ {
+ 	struct i2c_client *client = v4l2_get_subdevdata(&ov2740->sd);
+@@ -598,8 +589,7 @@ static int ov2740_init_controls(struct ov2740 *ov2740)
+ 					   V4L2_CID_VBLANK, vblank_min,
+ 					   vblank_max, 1, vblank_default);
+ 
+-	h_blank = to_pixels_per_line(cur_mode->hts, cur_mode->link_freq_index);
+-	h_blank -= cur_mode->width;
++	h_blank = cur_mode->hts - cur_mode->width;
+ 	ov2740->hblank = v4l2_ctrl_new_std(ctrl_hdlr, &ov2740_ctrl_ops,
+ 					   V4L2_CID_HBLANK, h_blank, h_blank, 1,
+ 					   h_blank);
+@@ -842,8 +832,7 @@ static int ov2740_set_format(struct v4l2_subdev *sd,
+ 				 mode->vts_min - mode->height,
+ 				 OV2740_VTS_MAX - mode->height, 1, vblank_def);
+ 	__v4l2_ctrl_s_ctrl(ov2740->vblank, vblank_def);
+-	h_blank = to_pixels_per_line(mode->hts, mode->link_freq_index) -
+-		mode->width;
++	h_blank = mode->hts - mode->width;
+ 	__v4l2_ctrl_modify_range(ov2740->hblank, h_blank, h_blank, 1, h_blank);
+ 
+ 	return 0;
+diff --git a/drivers/media/pci/ddbridge/ddbridge-main.c b/drivers/media/pci/ddbridge/ddbridge-main.c
+index 91733ab9f58c3..363badab7cf07 100644
+--- a/drivers/media/pci/ddbridge/ddbridge-main.c
++++ b/drivers/media/pci/ddbridge/ddbridge-main.c
+@@ -238,7 +238,7 @@ fail:
+ 	ddb_unmap(dev);
+ 	pci_set_drvdata(pdev, NULL);
+ 	pci_disable_device(pdev);
+-	return -1;
++	return stat;
+ }
+ 
+ /****************************************************************************/
+diff --git a/drivers/media/platform/amphion/vpu.h b/drivers/media/platform/amphion/vpu.h
+index 5a701f64289ef..0246cf0ac3a8b 100644
+--- a/drivers/media/platform/amphion/vpu.h
++++ b/drivers/media/platform/amphion/vpu.h
+@@ -154,7 +154,6 @@ struct vpu_core {
+ 	struct vpu_mbox tx_type;
+ 	struct vpu_mbox tx_data;
+ 	struct vpu_mbox rx;
+-	unsigned long cmd_seq;
+ 
+ 	wait_queue_head_t ack_wq;
+ 	struct completion cmp;
+@@ -253,6 +252,8 @@ struct vpu_inst {
+ 
+ 	struct list_head cmd_q;
+ 	void *pending;
++	unsigned long cmd_seq;
++	atomic_long_t last_response_cmd;
+ 
+ 	struct vpu_inst_ops *ops;
+ 	const struct vpu_format *formats;
+diff --git a/drivers/media/platform/amphion/vpu_cmds.c b/drivers/media/platform/amphion/vpu_cmds.c
+index c2337812573ef..5695f5c1cb3e8 100644
+--- a/drivers/media/platform/amphion/vpu_cmds.c
++++ b/drivers/media/platform/amphion/vpu_cmds.c
+@@ -32,6 +32,7 @@ struct vpu_cmd_t {
+ 	struct vpu_cmd_request *request;
+ 	struct vpu_rpc_event *pkt;
+ 	unsigned long key;
++	atomic_long_t *last_response_cmd;
+ };
+ 
+ static struct vpu_cmd_request vpu_cmd_requests[] = {
+@@ -115,6 +116,8 @@ static void vpu_free_cmd(struct vpu_cmd_t *cmd)
+ {
+ 	if (!cmd)
+ 		return;
++	if (cmd->last_response_cmd)
++		atomic_long_set(cmd->last_response_cmd, cmd->key);
+ 	vfree(cmd->pkt);
+ 	vfree(cmd);
+ }
+@@ -172,7 +175,8 @@ static int vpu_request_cmd(struct vpu_inst *inst, u32 id, void *data,
+ 		return -ENOMEM;
+ 
+ 	mutex_lock(&core->cmd_lock);
+-	cmd->key = core->cmd_seq++;
++	cmd->key = ++inst->cmd_seq;
++	cmd->last_response_cmd = &inst->last_response_cmd;
+ 	if (key)
+ 		*key = cmd->key;
+ 	if (sync)
+@@ -246,26 +250,12 @@ void vpu_clear_request(struct vpu_inst *inst)
+ 
+ static bool check_is_responsed(struct vpu_inst *inst, unsigned long key)
+ {
+-	struct vpu_core *core = inst->core;
+-	struct vpu_cmd_t *cmd;
+-	bool flag = true;
++	unsigned long last_response = atomic_long_read(&inst->last_response_cmd);
+ 
+-	mutex_lock(&core->cmd_lock);
+-	cmd = inst->pending;
+-	if (cmd && key == cmd->key) {
+-		flag = false;
+-		goto exit;
+-	}
+-	list_for_each_entry(cmd, &inst->cmd_q, list) {
+-		if (key == cmd->key) {
+-			flag = false;
+-			break;
+-		}
+-	}
+-exit:
+-	mutex_unlock(&core->cmd_lock);
++	if (key <= last_response && (last_response - key) < (ULONG_MAX >> 1))
++		return true;
+ 
+-	return flag;
++	return false;
+ }
+ 
+ static int sync_session_response(struct vpu_inst *inst, unsigned long key, long timeout, int try)
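
With a per-instance sequence number and a recorded last response,
check_is_responsed() no longer needs to walk the command list under
the lock: a command is answered once its key is at or below the last
responded key, and the half-range bound keeps absurd gaps (which could
only arise around counter wraparound) from passing. A standalone
sketch of just that comparison:

        #include <limits.h>
        #include <stdbool.h>

        static bool cmd_answered(unsigned long key, unsigned long last_response)
        {
                /*
                 * Answered once key is at or below the last responded
                 * sequence number; gaps of half the counter range or
                 * more are rejected as wraparound artifacts.
                 */
                return key <= last_response &&
                       (last_response - key) < (ULONG_MAX >> 1);
        }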
+diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
+index 0f6e4c666440e..d7e0de49b3dce 100644
+--- a/drivers/media/platform/amphion/vpu_v4l2.c
++++ b/drivers/media/platform/amphion/vpu_v4l2.c
+@@ -716,6 +716,7 @@ int vpu_v4l2_open(struct file *file, struct vpu_inst *inst)
+ 		func = &vpu->decoder;
+ 
+ 	atomic_set(&inst->ref_count, 0);
++	atomic_long_set(&inst->last_response_cmd, 0);
+ 	vpu_inst_get(inst);
+ 	inst->vpu = vpu;
+ 	inst->core = vpu_request_core(vpu, inst->type);
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index f1c532a5802ac..25f5b5eebf13f 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -184,25 +184,16 @@ static int rga_setup_ctrls(struct rga_ctx *ctx)
+ static struct rga_fmt formats[] = {
+ 	{
+ 		.fourcc = V4L2_PIX_FMT_ARGB32,
+-		.color_swap = RGA_COLOR_RB_SWAP,
++		.color_swap = RGA_COLOR_ALPHA_SWAP,
+ 		.hw_format = RGA_COLOR_FMT_ABGR8888,
+ 		.depth = 32,
+ 		.uv_factor = 1,
+ 		.y_div = 1,
+ 		.x_div = 1,
+ 	},
+-	{
+-		.fourcc = V4L2_PIX_FMT_XRGB32,
+-		.color_swap = RGA_COLOR_RB_SWAP,
+-		.hw_format = RGA_COLOR_FMT_XBGR8888,
+-		.depth = 32,
+-		.uv_factor = 1,
+-		.y_div = 1,
+-		.x_div = 1,
+-	},
+ 	{
+ 		.fourcc = V4L2_PIX_FMT_ABGR32,
+-		.color_swap = RGA_COLOR_ALPHA_SWAP,
++		.color_swap = RGA_COLOR_RB_SWAP,
+ 		.hw_format = RGA_COLOR_FMT_ABGR8888,
+ 		.depth = 32,
+ 		.uv_factor = 1,
+@@ -211,7 +202,7 @@ static struct rga_fmt formats[] = {
+ 	},
+ 	{
+ 		.fourcc = V4L2_PIX_FMT_XBGR32,
+-		.color_swap = RGA_COLOR_ALPHA_SWAP,
++		.color_swap = RGA_COLOR_RB_SWAP,
+ 		.hw_format = RGA_COLOR_FMT_XBGR8888,
+ 		.depth = 32,
+ 		.uv_factor = 1,
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h b/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
+index 1e7cea1bea5ea..2d7f06281c390 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
+@@ -61,6 +61,14 @@ struct dentry;
+ 						 RKISP1_CIF_ISP_EXP_END |	\
+ 						 RKISP1_CIF_ISP_HIST_MEASURE_RDY)
+ 
++/* IRQ lines */
++enum rkisp1_irq_line {
++	RKISP1_IRQ_ISP = 0,
++	RKISP1_IRQ_MI,
++	RKISP1_IRQ_MIPI,
++	RKISP1_NUM_IRQS,
++};
++
+ /* enum for the resizer pads */
+ enum rkisp1_rsz_pad {
+ 	RKISP1_RSZ_PAD_SINK,
+@@ -423,7 +431,6 @@ struct rkisp1_debug {
+  * struct rkisp1_device - ISP platform device
+  *
+  * @base_addr:	   base register address
+- * @irq:	   the irq number
+  * @dev:	   a pointer to the struct device
+  * @clk_size:	   number of clocks
+  * @clks:	   array of clocks
+@@ -441,6 +448,7 @@ struct rkisp1_debug {
+  * @stream_lock:   serializes {start/stop}_streaming callbacks between the capture devices.
+  * @debug:	   debug params to be exposed on debugfs
+  * @info:	   version-specific ISP information
++ * @irqs:          IRQ line numbers
+  */
+ struct rkisp1_device {
+ 	void __iomem *base_addr;
+@@ -461,6 +469,7 @@ struct rkisp1_device {
+ 	struct mutex stream_lock; /* serialize {start/stop}_streaming cb between capture devices */
+ 	struct rkisp1_debug debug;
+ 	const struct rkisp1_info *info;
++	int irqs[RKISP1_NUM_IRQS];
+ };
+ 
+ /*
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c
+index 6e17b2817e610..702adee83322b 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c
+@@ -125,8 +125,20 @@ static void rkisp1_csi_disable(struct rkisp1_csi *csi)
+ 	struct rkisp1_device *rkisp1 = csi->rkisp1;
+ 	u32 val;
+ 
+-	/* Mask and clear interrupts. */
++	/* Mask MIPI interrupts. */
+ 	rkisp1_write(rkisp1, RKISP1_CIF_MIPI_IMSC, 0);
++
++	/* Flush posted writes */
++	rkisp1_read(rkisp1, RKISP1_CIF_MIPI_IMSC);
++
++	/*
++	 * Wait until the IRQ handler has ended. The IRQ handler may get called
++	 * even after this, but it will return immediately as the MIPI
++	 * interrupts have been masked.
++	 */
++	synchronize_irq(rkisp1->irqs[RKISP1_IRQ_MIPI]);
++
++	/* Clear MIPI interrupt status */
+ 	rkisp1_write(rkisp1, RKISP1_CIF_MIPI_ICR, ~0);
+ 
+ 	val = rkisp1_read(rkisp1, RKISP1_CIF_MIPI_CTRL);
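
The ordering here is the whole point: mask first, read the mask
register back to flush the posted write, wait for any in-flight
handler, and only then clear status, so a late handler can neither
observe unmasked state nor race the clear. The same sequence is
applied to the ISP and MI lines later in this patch. A sketch of the
order with stand-in register helpers (the synchronize step is stubbed,
since only the kernel can really wait on an IRQ line):

        static volatile unsigned int imsc, icr; /* stand-in registers */

        static void reg_write(volatile unsigned int *r, unsigned int v) { *r = v; }
        static unsigned int reg_read(volatile unsigned int *r) { return *r; }
        static void wait_irq_handlers(void) { /* synchronize_irq() in the kernel */ }

        static void quiesce_irqs(void)
        {
                reg_write(&imsc, 0);    /* 1. mask every interrupt source */
                (void)reg_read(&imsc);  /* 2. read back: flush posted write */
                wait_irq_handlers();    /* 3. running handler finishes; later
                                         *    invocations see the masked state */
                reg_write(&icr, ~0u);   /* 4. only now clear stale status */
        }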
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+index 894d5afaff4e1..f96f821a7b50d 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+@@ -114,6 +114,7 @@
+ struct rkisp1_isr_data {
+ 	const char *name;
+ 	irqreturn_t (*isr)(int irq, void *ctx);
++	u32 line_mask;
+ };
+ 
+ /* ----------------------------------------------------------------------------
+@@ -442,17 +443,25 @@ error:
+ 
+ static irqreturn_t rkisp1_isr(int irq, void *ctx)
+ {
++	irqreturn_t ret = IRQ_NONE;
++
+ 	/*
+ 	 * Call rkisp1_capture_isr() first to handle the frame that
+ 	 * potentially completed using the current frame_sequence number before
+ 	 * it is potentially incremented by rkisp1_isp_isr() in the vertical
+ 	 * sync.
+ 	 */
+-	rkisp1_capture_isr(irq, ctx);
+-	rkisp1_isp_isr(irq, ctx);
+-	rkisp1_csi_isr(irq, ctx);
+ 
+-	return IRQ_HANDLED;
++	if (rkisp1_capture_isr(irq, ctx) == IRQ_HANDLED)
++		ret = IRQ_HANDLED;
++
++	if (rkisp1_isp_isr(irq, ctx) == IRQ_HANDLED)
++		ret = IRQ_HANDLED;
++
++	if (rkisp1_csi_isr(irq, ctx) == IRQ_HANDLED)
++		ret = IRQ_HANDLED;
++
++	return ret;
+ }
+ 
+ static const char * const px30_isp_clks[] = {
+@@ -463,9 +472,9 @@ static const char * const px30_isp_clks[] = {
+ };
+ 
+ static const struct rkisp1_isr_data px30_isp_isrs[] = {
+-	{ "isp", rkisp1_isp_isr },
+-	{ "mi", rkisp1_capture_isr },
+-	{ "mipi", rkisp1_csi_isr },
++	{ "isp", rkisp1_isp_isr, BIT(RKISP1_IRQ_ISP) },
++	{ "mi", rkisp1_capture_isr, BIT(RKISP1_IRQ_MI) },
++	{ "mipi", rkisp1_csi_isr, BIT(RKISP1_IRQ_MIPI) },
+ };
+ 
+ static const struct rkisp1_info px30_isp_info = {
+@@ -484,7 +493,7 @@ static const char * const rk3399_isp_clks[] = {
+ };
+ 
+ static const struct rkisp1_isr_data rk3399_isp_isrs[] = {
+-	{ NULL, rkisp1_isr },
++	{ NULL, rkisp1_isr, BIT(RKISP1_IRQ_ISP) | BIT(RKISP1_IRQ_MI) | BIT(RKISP1_IRQ_MIPI) },
+ };
+ 
+ static const struct rkisp1_info rk3399_isp_info = {
+@@ -535,6 +544,9 @@ static int rkisp1_probe(struct platform_device *pdev)
+ 	if (IS_ERR(rkisp1->base_addr))
+ 		return PTR_ERR(rkisp1->base_addr);
+ 
++	for (unsigned int il = 0; il < ARRAY_SIZE(rkisp1->irqs); ++il)
++		rkisp1->irqs[il] = -1;
++
+ 	for (i = 0; i < info->isr_size; i++) {
+ 		irq = info->isrs[i].name
+ 		    ? platform_get_irq_byname(pdev, info->isrs[i].name)
+@@ -542,7 +554,12 @@ static int rkisp1_probe(struct platform_device *pdev)
+ 		if (irq < 0)
+ 			return irq;
+ 
+-		ret = devm_request_irq(dev, irq, info->isrs[i].isr, IRQF_SHARED,
++		for (unsigned int il = 0; il < ARRAY_SIZE(rkisp1->irqs); ++il) {
++			if (info->isrs[i].line_mask & BIT(il))
++				rkisp1->irqs[il] = irq;
++		}
++
++		ret = devm_request_irq(dev, irq, info->isrs[i].isr, 0,
+ 				       dev_driver_string(dev), dev);
+ 		if (ret) {
+ 			dev_err(dev, "request irq failed: %d\n", ret);
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
+index 45d1ab96fc6e7..5fbc47bda6831 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
+@@ -254,11 +254,25 @@ static void rkisp1_isp_stop(struct rkisp1_isp *isp)
+ 	 * ISP(mi) stop in mi frame end -> Stop ISP(mipi) ->
+ 	 * Stop ISP(isp) ->wait for ISP isp off
+ 	 */
+-	/* stop and clear MI and ISP interrupts */
+-	rkisp1_write(rkisp1, RKISP1_CIF_ISP_IMSC, 0);
+-	rkisp1_write(rkisp1, RKISP1_CIF_ISP_ICR, ~0);
+ 
++	/* Mask MI and ISP interrupts */
++	rkisp1_write(rkisp1, RKISP1_CIF_ISP_IMSC, 0);
+ 	rkisp1_write(rkisp1, RKISP1_CIF_MI_IMSC, 0);
++
++	/* Flush posted writes */
++	rkisp1_read(rkisp1, RKISP1_CIF_MI_IMSC);
++
++	/*
++	 * Wait until the IRQ handler has ended. The IRQ handler may get called
++	 * even after this, but it will return immediately as the MI and ISP
++	 * interrupts have been masked.
++	 */
++	synchronize_irq(rkisp1->irqs[RKISP1_IRQ_ISP]);
++	if (rkisp1->irqs[RKISP1_IRQ_ISP] != rkisp1->irqs[RKISP1_IRQ_MI])
++		synchronize_irq(rkisp1->irqs[RKISP1_IRQ_MI]);
++
++	/* Clear MI and ISP interrupt status */
++	rkisp1_write(rkisp1, RKISP1_CIF_ISP_ICR, ~0);
+ 	rkisp1_write(rkisp1, RKISP1_CIF_MI_ICR, ~0);
+ 
+ 	/* stop ISP */
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
+index 28ecc7347d543..6297870ee9e9e 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
+@@ -335,12 +335,8 @@ static int rkisp1_rsz_enum_mbus_code(struct v4l2_subdev *sd,
+ {
+ 	struct rkisp1_resizer *rsz =
+ 		container_of(sd, struct rkisp1_resizer, sd);
+-	struct v4l2_subdev_pad_config dummy_cfg;
+-	struct v4l2_subdev_state pad_state = {
+-		.pads = &dummy_cfg
+-	};
+-	u32 pad = code->pad;
+-	int ret;
++	unsigned int index = code->index;
++	unsigned int i;
+ 
+ 	if (code->pad == RKISP1_RSZ_PAD_SRC) {
+ 		/* supported mbus codes on the src are the same as in the capture */
+@@ -360,15 +356,29 @@ static int rkisp1_rsz_enum_mbus_code(struct v4l2_subdev *sd,
+ 		return 0;
+ 	}
+ 
+-	/* supported mbus codes on the sink pad are the same as isp src pad */
+-	code->pad = RKISP1_ISP_PAD_SOURCE_VIDEO;
+-	ret = v4l2_subdev_call(&rsz->rkisp1->isp.sd, pad, enum_mbus_code,
+-			       &pad_state, code);
++	/*
++	 * Supported mbus codes on the sink pad are the same as on the ISP
++	 * source pad.
++	 */
++	for (i = 0; ; i++) {
++		const struct rkisp1_mbus_info *fmt =
++			rkisp1_mbus_info_get_by_index(i);
+ 
+-	/* restore pad */
+-	code->pad = pad;
+-	code->flags = 0;
+-	return ret;
++		if (!fmt)
++			break;
++
++		if (!(fmt->direction & RKISP1_ISP_SD_SRC))
++			continue;
++
++		if (!index) {
++			code->code = fmt->mbus_code;
++			return 0;
++		}
++
++		index--;
++	}
++
++	return -EINVAL;
+ }
+ 
+ static int rkisp1_rsz_init_config(struct v4l2_subdev *sd,
+diff --git a/drivers/media/usb/stk1160/stk1160-video.c b/drivers/media/usb/stk1160/stk1160-video.c
+index 4e966f6bf608d..366f0e4a5dc0d 100644
+--- a/drivers/media/usb/stk1160/stk1160-video.c
++++ b/drivers/media/usb/stk1160/stk1160-video.c
+@@ -107,8 +107,7 @@ void stk1160_copy_video(struct stk1160 *dev, u8 *src, int len)
+ 
+ 	/*
+ 	 * TODO: These stk1160_dbg are very spammy!
+-	 * We should 1) check why we are getting them
+-	 * and 2) add ratelimit.
++	 * We should check why we are getting them.
+ 	 *
+ 	 * UPDATE: One of the reasons (the only one?) for getting these
+ 	 * is incorrect standard (mismatch between expected and configured).
+@@ -151,7 +150,7 @@ void stk1160_copy_video(struct stk1160 *dev, u8 *src, int len)
+ 
+ 	/* Let the bug hunt begin! sanity checks! */
+ 	if (lencopy < 0) {
+-		stk1160_dbg("copy skipped: negative lencopy\n");
++		printk_ratelimited(KERN_DEBUG "copy skipped: negative lencopy\n");
+ 		return;
+ 	}
+ 
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 08fcd2ffa727b..bbd90123a4e76 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2592,6 +2592,15 @@ static const struct usb_device_id uvc_ids[] = {
+ 	  .bInterfaceSubClass	= 1,
+ 	  .bInterfaceProtocol	= 0,
+ 	  .driver_info		= (kernel_ulong_t)&uvc_ctrl_power_line_limited },
++	/* Chicony Electronics Co., Ltd Integrated Camera */
++	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
++				| USB_DEVICE_ID_MATCH_INT_INFO,
++	  .idVendor		= 0x04f2,
++	  .idProduct		= 0xb67c,
++	  .bInterfaceClass	= USB_CLASS_VIDEO,
++	  .bInterfaceSubClass	= 1,
++	  .bInterfaceProtocol	= UVC_PC_PROTOCOL_15,
++	  .driver_info		= (kernel_ulong_t)&uvc_ctrl_power_line_uvc11 },
+ 	/* Chicony EasyCamera */
+ 	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
+ 				| USB_DEVICE_ID_MATCH_INT_INFO,
+@@ -2994,6 +3003,15 @@ static const struct usb_device_id uvc_ids[] = {
+ 	  .bInterfaceSubClass	= 1,
+ 	  .bInterfaceProtocol	= 0,
+ 	  .driver_info		= UVC_INFO_QUIRK(UVC_QUIRK_FORCE_BPP) },
++	/* SunplusIT Inc HD Camera */
++	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
++				| USB_DEVICE_ID_MATCH_INT_INFO,
++	  .idVendor		= 0x2b7e,
++	  .idProduct		= 0xb752,
++	  .bInterfaceClass	= USB_CLASS_VIDEO,
++	  .bInterfaceSubClass	= 1,
++	  .bInterfaceProtocol	= UVC_PC_PROTOCOL_15,
++	  .driver_info		= (kernel_ulong_t)&uvc_ctrl_power_line_uvc11 },
+ 	/* Lenovo Integrated Camera */
+ 	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
+ 				| USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 90ce58fd629e5..68d71b4b55bd3 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -1483,6 +1483,7 @@ config MFD_SYSCON
+ 
+ config MFD_TI_AM335X_TSCADC
+ 	tristate "TI ADC / Touch Screen chip support"
++	depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST
+ 	select MFD_CORE
+ 	select REGMAP
+ 	select REGMAP_MMIO
+diff --git a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
+index 3882e97e96a70..c6eb27d46cb06 100644
+--- a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
++++ b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
+@@ -150,6 +150,7 @@ static int lis3lv02d_i2c_probe(struct i2c_client *client)
+ 	lis3_dev.init	  = lis3_i2c_init;
+ 	lis3_dev.read	  = lis3_i2c_read;
+ 	lis3_dev.write	  = lis3_i2c_write;
++	lis3_dev.reg_ctrl = lis3_reg_ctrl;
+ 	lis3_dev.irq	  = client->irq;
+ 	lis3_dev.ac	  = lis3lv02d_axis_map;
+ 	lis3_dev.pm_dev	  = &client->dev;
+diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
+index dc2c7b9796563..7edf0fd58c346 100644
+--- a/drivers/net/bonding/bond_alb.c
++++ b/drivers/net/bonding/bond_alb.c
+@@ -985,7 +985,8 @@ static int alb_upper_dev_walk(struct net_device *upper,
+ 	if (netif_is_macvlan(upper) && !strict_match) {
+ 		tags = bond_verify_device_path(bond->dev, upper, 0);
+ 		if (IS_ERR_OR_NULL(tags))
+-			BUG();
++			return -ENOMEM;
++
+ 		alb_send_lp_vid(slave, upper->dev_addr,
+ 				tags[0].vlan_proto, tags[0].vlan_id);
+ 		kfree(tags);
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index d27c6b70a2f67..2333f6383b542 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2838,8 +2838,7 @@ static void mt753x_phylink_mac_link_up(struct dsa_switch *ds, int port,
+ 	/* MT753x MAC works in 1G full duplex mode for all up-clocked
+ 	 * variants.
+ 	 */
+-	if (interface == PHY_INTERFACE_MODE_INTERNAL ||
+-	    interface == PHY_INTERFACE_MODE_TRGMII ||
++	if (interface == PHY_INTERFACE_MODE_TRGMII ||
+ 	    (phy_interface_mode_is_8023z(interface))) {
+ 		speed = SPEED_1000;
+ 		duplex = DUPLEX_FULL;
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index 44383a03ef2ff..c54d305a1d831 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -601,8 +601,8 @@ struct mv88e6xxx_ops {
+ 	int (*serdes_get_sset_count)(struct mv88e6xxx_chip *chip, int port);
+ 	int (*serdes_get_strings)(struct mv88e6xxx_chip *chip,  int port,
+ 				  uint8_t *data);
+-	int (*serdes_get_stats)(struct mv88e6xxx_chip *chip,  int port,
+-				uint64_t *data);
++	size_t (*serdes_get_stats)(struct mv88e6xxx_chip *chip, int port,
++				   uint64_t *data);
+ 
+ 	/* SERDES registers for ethtool */
+ 	int (*serdes_get_regs_len)(struct mv88e6xxx_chip *chip,  int port);
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
+index 3b4b42651fa3d..01ea53940786d 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.c
++++ b/drivers/net/dsa/mv88e6xxx/serdes.c
+@@ -177,8 +177,8 @@ static uint64_t mv88e6352_serdes_get_stat(struct mv88e6xxx_chip *chip,
+ 	return val;
+ }
+ 
+-int mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data)
++size_t mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data)
+ {
+ 	struct mv88e6xxx_port *mv88e6xxx_port = &chip->ports[port];
+ 	struct mv88e6352_serdes_hw_stat *stat;
+@@ -187,7 +187,7 @@ int mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+ 
+ 	err = mv88e6352_g2_scratch_port_has_serdes(chip, port);
+ 	if (err <= 0)
+-		return err;
++		return 0;
+ 
+ 	BUILD_BUG_ON(ARRAY_SIZE(mv88e6352_serdes_hw_stats) >
+ 		     ARRAY_SIZE(mv88e6xxx_port->serdes_stats));
+@@ -429,8 +429,8 @@ static uint64_t mv88e6390_serdes_get_stat(struct mv88e6xxx_chip *chip, int lane,
+ 	return reg[0] | ((u64)reg[1] << 16) | ((u64)reg[2] << 32);
+ }
+ 
+-int mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data)
++size_t mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data)
+ {
+ 	struct mv88e6390_serdes_hw_stat *stat;
+ 	int lane;
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.h b/drivers/net/dsa/mv88e6xxx/serdes.h
+index aac95cab46e3d..ff5c3ab31e155 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.h
++++ b/drivers/net/dsa/mv88e6xxx/serdes.h
+@@ -127,13 +127,13 @@ unsigned int mv88e6390_serdes_irq_mapping(struct mv88e6xxx_chip *chip,
+ int mv88e6352_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port);
+ int mv88e6352_serdes_get_strings(struct mv88e6xxx_chip *chip,
+ 				 int port, uint8_t *data);
+-int mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data);
++size_t mv88e6352_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data);
+ int mv88e6390_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port);
+ int mv88e6390_serdes_get_strings(struct mv88e6xxx_chip *chip,
+ 				 int port, uint8_t *data);
+-int mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+-			       uint64_t *data);
++size_t mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
++				  uint64_t *data);
+ 
+ int mv88e6352_serdes_get_regs_len(struct mv88e6xxx_chip *chip, int port);
+ void mv88e6352_serdes_get_regs(struct mv88e6xxx_chip *chip, int port, void *_p);
+diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
+index ec57d9d52072f..6f2a7aee7e5ad 100644
+--- a/drivers/net/dsa/qca/qca8k-8xxx.c
++++ b/drivers/net/dsa/qca/qca8k-8xxx.c
+@@ -949,10 +949,15 @@ qca8k_mdio_register(struct qca8k_priv *priv)
+ 	struct dsa_switch *ds = priv->ds;
+ 	struct device_node *mdio;
+ 	struct mii_bus *bus;
++	int err;
++
++	mdio = of_get_child_by_name(priv->dev->of_node, "mdio");
+ 
+ 	bus = devm_mdiobus_alloc(ds->dev);
+-	if (!bus)
+-		return -ENOMEM;
++	if (!bus) {
++		err = -ENOMEM;
++		goto out_put_node;
++	}
+ 
+ 	bus->priv = (void *)priv;
+ 	snprintf(bus->id, MII_BUS_ID_SIZE, "qca8k-%d.%d",
+@@ -962,12 +967,12 @@ qca8k_mdio_register(struct qca8k_priv *priv)
+ 	ds->user_mii_bus = bus;
+ 
+ 	/* Check if the devicetree declares the port:phy mapping */
+-	mdio = of_get_child_by_name(priv->dev->of_node, "mdio");
+ 	if (of_device_is_available(mdio)) {
+ 		bus->name = "qca8k user mii";
+ 		bus->read = qca8k_internal_mdio_read;
+ 		bus->write = qca8k_internal_mdio_write;
+-		return devm_of_mdiobus_register(priv->dev, bus, mdio);
++		err = devm_of_mdiobus_register(priv->dev, bus, mdio);
++		goto out_put_node;
+ 	}
+ 
+ 	/* If a mapping can't be found the legacy mapping is used,
+@@ -976,7 +981,13 @@ qca8k_mdio_register(struct qca8k_priv *priv)
+ 	bus->name = "qca8k-legacy user mii";
+ 	bus->read = qca8k_legacy_mdio_read;
+ 	bus->write = qca8k_legacy_mdio_write;
+-	return devm_mdiobus_register(priv->dev, bus);
++
++	err = devm_mdiobus_register(priv->dev, bus);
++
++out_put_node:
++	of_node_put(mdio);
++
++	return err;
+ }
+ 
+ static int
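
The qca8k hunks above fix an OF node reference leak in qca8k_mdio_register(): the "mdio" child node is now looked up once at the top, and every exit path, including the early -ENOMEM return, funnels through a single out_put_node label that drops the reference. A userspace model of that single-exit cleanup idiom, with node_get()/node_put() as illustrative stand-ins for of_get_child_by_name()/of_node_put():

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct node { int refs; };

static void node_get(struct node *n) { n->refs++; }
static void node_put(struct node *n) { if (n) n->refs--; }

static int register_bus(struct node *mdio, int alloc_fails)
{
	void *bus;
	int err;

	node_get(mdio);			/* take the reference up front */

	bus = alloc_fails ? NULL : malloc(16);
	if (!bus) {
		err = -ENOMEM;		/* was: return -ENOMEM (leaked) */
		goto out_put_node;
	}

	/* ... register the bus ... */
	free(bus);
	err = 0;

out_put_node:
	node_put(mdio);			/* runs on every path */
	return err;
}

int main(void)
{
	struct node mdio = { .refs = 0 };

	printf("err=%d refs=%d\n", register_bus(&mdio, 1), mdio.refs);
	return 0;
}

With the early lookup, success and failure paths are symmetric and the reference count ends balanced either way.
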
+@@ -2038,12 +2049,11 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
+ 	priv->info = of_device_get_match_data(priv->dev);
+ 
+ 	priv->reset_gpio = devm_gpiod_get_optional(priv->dev, "reset",
+-						   GPIOD_ASIS);
++						   GPIOD_OUT_HIGH);
+ 	if (IS_ERR(priv->reset_gpio))
+ 		return PTR_ERR(priv->reset_gpio);
+ 
+ 	if (priv->reset_gpio) {
+-		gpiod_set_value_cansleep(priv->reset_gpio, 1);
+ 		/* The active low duration must be greater than 10 ms
+ 		 * and checkpatch.pl wants 20 ms.
+ 		 */
+diff --git a/drivers/net/ethernet/amd/pds_core/adminq.c b/drivers/net/ethernet/amd/pds_core/adminq.c
+index 5beadabc21361..ea773cfa0af67 100644
+--- a/drivers/net/ethernet/amd/pds_core/adminq.c
++++ b/drivers/net/ethernet/amd/pds_core/adminq.c
+@@ -63,6 +63,15 @@ static int pdsc_process_notifyq(struct pdsc_qcq *qcq)
+ 	return nq_work;
+ }
+ 
++static bool pdsc_adminq_inc_if_up(struct pdsc *pdsc)
++{
++	if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER) ||
++	    pdsc->state & BIT_ULL(PDSC_S_FW_DEAD))
++		return false;
++
++	return refcount_inc_not_zero(&pdsc->adminq_refcnt);
++}
++
+ void pdsc_process_adminq(struct pdsc_qcq *qcq)
+ {
+ 	union pds_core_adminq_comp *comp;
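
pdsc_adminq_inc_if_up() is the take-a-reference side of the new adminq lifetime scheme: a user may only proceed if the device is neither stopping nor FW-dead and it can still take a reference while the count is non-zero. refcount_inc_not_zero() is the key primitive, because it can never resurrect an object whose count has already reached zero. A userspace model with C11 atomics (the kernel's refcount_t additionally saturates on overflow and underflow):

#include <stdatomic.h>
#include <stdio.h>

/* Model of refcount_inc_not_zero(): increment only if non-zero. */
static int refcount_inc_not_zero(atomic_int *r)
{
	int old = atomic_load(r);

	while (old != 0) {
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return 1;	/* reference taken */
	}
	return 0;			/* object already dead */
}

int main(void)
{
	atomic_int live = 1;		/* adminq up: base ref held */
	atomic_int dead = 0;		/* adminq torn down */

	printf("live adminq: %d\n", refcount_inc_not_zero(&live));
	printf("dead adminq: %d\n", refcount_inc_not_zero(&dead));
	return 0;
}
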
+@@ -75,9 +84,9 @@ void pdsc_process_adminq(struct pdsc_qcq *qcq)
+ 	int aq_work = 0;
+ 	int credits;
+ 
+-	/* Don't process AdminQ when shutting down */
+-	if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {
+-		dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",
++	/* Don't process AdminQ when it's not up */
++	if (!pdsc_adminq_inc_if_up(pdsc)) {
++		dev_err(pdsc->dev, "%s: called while adminq is unavailable\n",
+ 			__func__);
+ 		return;
+ 	}
+@@ -124,6 +133,7 @@ credits:
+ 		pds_core_intr_credits(&pdsc->intr_ctrl[qcq->intx],
+ 				      credits,
+ 				      PDS_CORE_INTR_CRED_REARM);
++	refcount_dec(&pdsc->adminq_refcnt);
+ }
+ 
+ void pdsc_work_thread(struct work_struct *work)
+@@ -135,18 +145,20 @@ void pdsc_work_thread(struct work_struct *work)
+ 
+ irqreturn_t pdsc_adminq_isr(int irq, void *data)
+ {
+-	struct pdsc_qcq *qcq = data;
+-	struct pdsc *pdsc = qcq->pdsc;
++	struct pdsc *pdsc = data;
++	struct pdsc_qcq *qcq;
+ 
+-	/* Don't process AdminQ when shutting down */
+-	if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {
+-		dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",
++	/* Don't process AdminQ when it's not up */
++	if (!pdsc_adminq_inc_if_up(pdsc)) {
++		dev_err(pdsc->dev, "%s: called while adminq is unavailable\n",
+ 			__func__);
+ 		return IRQ_HANDLED;
+ 	}
+ 
++	qcq = &pdsc->adminqcq;
+ 	queue_work(pdsc->wq, &qcq->work);
+ 	pds_core_intr_mask(&pdsc->intr_ctrl[qcq->intx], PDS_CORE_INTR_MASK_CLEAR);
++	refcount_dec(&pdsc->adminq_refcnt);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -179,10 +191,16 @@ static int __pdsc_adminq_post(struct pdsc *pdsc,
+ 
+ 	/* Check that the FW is running */
+ 	if (!pdsc_is_fw_running(pdsc)) {
+-		u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
+-
+-		dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n",
+-			 __func__, fw_status);
++		if (pdsc->info_regs) {
++			u8 fw_status =
++				ioread8(&pdsc->info_regs->fw_status);
++
++			dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n",
++				 __func__, fw_status);
++		} else {
++			dev_info(pdsc->dev, "%s: post failed - BARs not setup\n",
++				 __func__);
++		}
+ 		ret = -ENXIO;
+ 
+ 		goto err_out_unlock;
+@@ -230,6 +248,12 @@ int pdsc_adminq_post(struct pdsc *pdsc,
+ 	int err = 0;
+ 	int index;
+ 
++	if (!pdsc_adminq_inc_if_up(pdsc)) {
++		dev_dbg(pdsc->dev, "%s: preventing adminq cmd %u\n",
++			__func__, cmd->opcode);
++		return -ENXIO;
++	}
++
+ 	wc.qcq = &pdsc->adminqcq;
+ 	index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc);
+ 	if (index < 0) {
+@@ -248,10 +272,16 @@ int pdsc_adminq_post(struct pdsc *pdsc,
+ 			break;
+ 
+ 		if (!pdsc_is_fw_running(pdsc)) {
+-			u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
+-
+-			dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n",
+-				__func__, fw_status);
++			if (pdsc->info_regs) {
++				u8 fw_status =
++					ioread8(&pdsc->info_regs->fw_status);
++
++				dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n",
++					__func__, fw_status);
++			} else {
++				dev_dbg(pdsc->dev, "%s: post wait failed - BARs not setup\n",
++					__func__);
++			}
+ 			err = -ENXIO;
+ 			break;
+ 		}
+@@ -285,6 +315,8 @@ err_out:
+ 			queue_work(pdsc->wq, &pdsc->health_work);
+ 	}
+ 
++	refcount_dec(&pdsc->adminq_refcnt);
++
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(pdsc_adminq_post);
+diff --git a/drivers/net/ethernet/amd/pds_core/core.c b/drivers/net/ethernet/amd/pds_core/core.c
+index 0d2091e9eb283..7658a72867675 100644
+--- a/drivers/net/ethernet/amd/pds_core/core.c
++++ b/drivers/net/ethernet/amd/pds_core/core.c
+@@ -125,7 +125,7 @@ static int pdsc_qcq_intr_alloc(struct pdsc *pdsc, struct pdsc_qcq *qcq)
+ 
+ 	snprintf(name, sizeof(name), "%s-%d-%s",
+ 		 PDS_CORE_DRV_NAME, pdsc->pdev->bus->number, qcq->q.name);
+-	index = pdsc_intr_alloc(pdsc, name, pdsc_adminq_isr, qcq);
++	index = pdsc_intr_alloc(pdsc, name, pdsc_adminq_isr, pdsc);
+ 	if (index < 0)
+ 		return index;
+ 	qcq->intx = index;
+@@ -404,10 +404,7 @@ int pdsc_setup(struct pdsc *pdsc, bool init)
+ 	int numdescs;
+ 	int err;
+ 
+-	if (init)
+-		err = pdsc_dev_init(pdsc);
+-	else
+-		err = pdsc_dev_reinit(pdsc);
++	err = pdsc_dev_init(pdsc);
+ 	if (err)
+ 		return err;
+ 
+@@ -450,6 +447,7 @@ int pdsc_setup(struct pdsc *pdsc, bool init)
+ 		pdsc_debugfs_add_viftype(pdsc);
+ 	}
+ 
++	refcount_set(&pdsc->adminq_refcnt, 1);
+ 	clear_bit(PDSC_S_FW_DEAD, &pdsc->state);
+ 	return 0;
+ 
+@@ -464,6 +462,8 @@ void pdsc_teardown(struct pdsc *pdsc, bool removing)
+ 
+ 	if (!pdsc->pdev->is_virtfn)
+ 		pdsc_devcmd_reset(pdsc);
++	if (pdsc->adminqcq.work.func)
++		cancel_work_sync(&pdsc->adminqcq.work);
+ 	pdsc_qcq_free(pdsc, &pdsc->notifyqcq);
+ 	pdsc_qcq_free(pdsc, &pdsc->adminqcq);
+ 
+@@ -476,10 +476,9 @@ void pdsc_teardown(struct pdsc *pdsc, bool removing)
+ 		for (i = 0; i < pdsc->nintrs; i++)
+ 			pdsc_intr_free(pdsc, i);
+ 
+-		if (removing) {
+-			kfree(pdsc->intr_info);
+-			pdsc->intr_info = NULL;
+-		}
++		kfree(pdsc->intr_info);
++		pdsc->intr_info = NULL;
++		pdsc->nintrs = 0;
+ 	}
+ 
+ 	if (pdsc->kern_dbpage) {
+@@ -487,6 +486,7 @@ void pdsc_teardown(struct pdsc *pdsc, bool removing)
+ 		pdsc->kern_dbpage = NULL;
+ 	}
+ 
++	pci_free_irq_vectors(pdsc->pdev);
+ 	set_bit(PDSC_S_FW_DEAD, &pdsc->state);
+ }
+ 
+@@ -512,6 +512,24 @@ void pdsc_stop(struct pdsc *pdsc)
+ 					   PDS_CORE_INTR_MASK_SET);
+ }
+ 
++static void pdsc_adminq_wait_and_dec_once_unused(struct pdsc *pdsc)
++{
++	/* The driver initializes the adminq_refcnt to 1 when the adminq is
++	 * allocated and ready for use. Other users/requesters will increment
++	 * the refcnt while in use. If the refcnt is down to 1 then the adminq
++	 * is not in use and the refcnt can be cleared and adminq freed. Before
++	 * calling this function the driver will set PDSC_S_FW_DEAD, which
++	 * causes subsequent attempts to use the adminq and increment the
++	 * refcnt to fail. This guarantees that this function will eventually
++	 * exit.
++	 */
++	while (!refcount_dec_if_one(&pdsc->adminq_refcnt)) {
++		dev_dbg_ratelimited(pdsc->dev, "%s: adminq in use\n",
++				    __func__);
++		cpu_relax();
++	}
++}
++
+ void pdsc_fw_down(struct pdsc *pdsc)
+ {
+ 	union pds_core_notifyq_comp reset_event = {
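
pdsc_adminq_wait_and_dec_once_unused() is the teardown side of the scheme: it spins until it can atomically move the count from its base value of 1 to 0, that is, until the last in-flight user has dropped its reference. Because PDSC_S_FW_DEAD is set first, no new references can be taken, so the loop is guaranteed to terminate. A userspace model of refcount_dec_if_one():

#include <stdatomic.h>
#include <stdio.h>

/* Model of refcount_dec_if_one(): 1 -> 0 in a single atomic step,
 * failing if anyone else still holds a reference. */
static int refcount_dec_if_one(atomic_int *r)
{
	int one = 1;

	return atomic_compare_exchange_strong(r, &one, 0);
}

int main(void)
{
	atomic_int refs = 2;		/* base ref + one active user */

	printf("while busy: %d\n", refcount_dec_if_one(&refs));	/* 0 */
	atomic_fetch_sub(&refs, 1);	/* the user finishes */
	printf("when idle:  %d\n", refcount_dec_if_one(&refs));	/* 1 */
	return 0;
}
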
+@@ -527,6 +545,8 @@ void pdsc_fw_down(struct pdsc *pdsc)
+ 	if (pdsc->pdev->is_virtfn)
+ 		return;
+ 
++	pdsc_adminq_wait_and_dec_once_unused(pdsc);
++
+ 	/* Notify clients of fw_down */
+ 	if (pdsc->fw_reporter)
+ 		devlink_health_report(pdsc->fw_reporter, "FW down reported", pdsc);
+@@ -577,7 +597,13 @@ err_out:
+ 
+ static void pdsc_check_pci_health(struct pdsc *pdsc)
+ {
+-	u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
++	u8 fw_status;
++
++	/* some sort of teardown already in progress */
++	if (!pdsc->info_regs)
++		return;
++
++	fw_status = ioread8(&pdsc->info_regs->fw_status);
+ 
+ 	/* is PCI broken? */
+ 	if (fw_status != PDS_RC_BAD_PCI)
+diff --git a/drivers/net/ethernet/amd/pds_core/core.h b/drivers/net/ethernet/amd/pds_core/core.h
+index e35d3e7006bfc..110c4b826b22d 100644
+--- a/drivers/net/ethernet/amd/pds_core/core.h
++++ b/drivers/net/ethernet/amd/pds_core/core.h
+@@ -184,6 +184,7 @@ struct pdsc {
+ 	struct mutex devcmd_lock;	/* lock for dev_cmd operations */
+ 	struct mutex config_lock;	/* lock for configuration operations */
+ 	spinlock_t adminq_lock;		/* lock for adminq operations */
++	refcount_t adminq_refcnt;
+ 	struct pds_core_dev_info_regs __iomem *info_regs;
+ 	struct pds_core_dev_cmd_regs __iomem *cmd_regs;
+ 	struct pds_core_intr __iomem *intr_ctrl;
+@@ -280,7 +281,6 @@ int pdsc_devcmd_locked(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
+ 		       union pds_core_dev_comp *comp, int max_seconds);
+ int pdsc_devcmd_init(struct pdsc *pdsc);
+ int pdsc_devcmd_reset(struct pdsc *pdsc);
+-int pdsc_dev_reinit(struct pdsc *pdsc);
+ int pdsc_dev_init(struct pdsc *pdsc);
+ 
+ void pdsc_reset_prepare(struct pci_dev *pdev);
+diff --git a/drivers/net/ethernet/amd/pds_core/debugfs.c b/drivers/net/ethernet/amd/pds_core/debugfs.c
+index 8ec392299b7dc..4e8579ca1c8c7 100644
+--- a/drivers/net/ethernet/amd/pds_core/debugfs.c
++++ b/drivers/net/ethernet/amd/pds_core/debugfs.c
+@@ -64,6 +64,10 @@ DEFINE_SHOW_ATTRIBUTE(identity);
+ 
+ void pdsc_debugfs_add_ident(struct pdsc *pdsc)
+ {
++	/* This file will already exist in the reset flow */
++	if (debugfs_lookup("identity", pdsc->dentry))
++		return;
++
+ 	debugfs_create_file("identity", 0400, pdsc->dentry,
+ 			    pdsc, &identity_fops);
+ }
+diff --git a/drivers/net/ethernet/amd/pds_core/dev.c b/drivers/net/ethernet/amd/pds_core/dev.c
+index 31940b857e0e5..e65a1632df505 100644
+--- a/drivers/net/ethernet/amd/pds_core/dev.c
++++ b/drivers/net/ethernet/amd/pds_core/dev.c
+@@ -57,6 +57,9 @@ int pdsc_err_to_errno(enum pds_core_status_code code)
+ 
+ bool pdsc_is_fw_running(struct pdsc *pdsc)
+ {
++	if (!pdsc->info_regs)
++		return false;
++
+ 	pdsc->fw_status = ioread8(&pdsc->info_regs->fw_status);
+ 	pdsc->last_fw_time = jiffies;
+ 	pdsc->last_hb = ioread32(&pdsc->info_regs->fw_heartbeat);
+@@ -182,13 +185,17 @@ int pdsc_devcmd_locked(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
+ {
+ 	int err;
+ 
++	if (!pdsc->cmd_regs)
++		return -ENXIO;
++
+ 	memcpy_toio(&pdsc->cmd_regs->cmd, cmd, sizeof(*cmd));
+ 	pdsc_devcmd_dbell(pdsc);
+ 	err = pdsc_devcmd_wait(pdsc, cmd->opcode, max_seconds);
+-	memcpy_fromio(comp, &pdsc->cmd_regs->comp, sizeof(*comp));
+ 
+ 	if ((err == -ENXIO || err == -ETIMEDOUT) && pdsc->wq)
+ 		queue_work(pdsc->wq, &pdsc->health_work);
++	else
++		memcpy_fromio(comp, &pdsc->cmd_regs->comp, sizeof(*comp));
+ 
+ 	return err;
+ }
+@@ -309,13 +316,6 @@ static int pdsc_identify(struct pdsc *pdsc)
+ 	return 0;
+ }
+ 
+-int pdsc_dev_reinit(struct pdsc *pdsc)
+-{
+-	pdsc_init_devinfo(pdsc);
+-
+-	return pdsc_identify(pdsc);
+-}
+-
+ int pdsc_dev_init(struct pdsc *pdsc)
+ {
+ 	unsigned int nintrs;
+diff --git a/drivers/net/ethernet/amd/pds_core/devlink.c b/drivers/net/ethernet/amd/pds_core/devlink.c
+index e9948ea5bbcdb..54864f27c87a9 100644
+--- a/drivers/net/ethernet/amd/pds_core/devlink.c
++++ b/drivers/net/ethernet/amd/pds_core/devlink.c
+@@ -111,7 +111,8 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ 
+ 	mutex_lock(&pdsc->devcmd_lock);
+ 	err = pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout * 2);
+-	memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));
++	if (!err)
++		memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));
+ 	mutex_unlock(&pdsc->devcmd_lock);
+ 	if (err && err != -EIO)
+ 		return err;
+diff --git a/drivers/net/ethernet/amd/pds_core/fw.c b/drivers/net/ethernet/amd/pds_core/fw.c
+index 90a811f3878ae..fa626719e68d1 100644
+--- a/drivers/net/ethernet/amd/pds_core/fw.c
++++ b/drivers/net/ethernet/amd/pds_core/fw.c
+@@ -107,6 +107,9 @@ int pdsc_firmware_update(struct pdsc *pdsc, const struct firmware *fw,
+ 
+ 	dev_info(pdsc->dev, "Installing firmware\n");
+ 
++	if (!pdsc->cmd_regs)
++		return -ENXIO;
++
+ 	dl = priv_to_devlink(pdsc);
+ 	devlink_flash_update_status_notify(dl, "Preparing to flash",
+ 					   NULL, 0, 0);
+diff --git a/drivers/net/ethernet/amd/pds_core/main.c b/drivers/net/ethernet/amd/pds_core/main.c
+index 3080898d7b95b..cdbf053b5376c 100644
+--- a/drivers/net/ethernet/amd/pds_core/main.c
++++ b/drivers/net/ethernet/amd/pds_core/main.c
+@@ -37,6 +37,11 @@ static void pdsc_unmap_bars(struct pdsc *pdsc)
+ 	struct pdsc_dev_bar *bars = pdsc->bars;
+ 	unsigned int i;
+ 
++	pdsc->info_regs = NULL;
++	pdsc->cmd_regs = NULL;
++	pdsc->intr_status = NULL;
++	pdsc->intr_ctrl = NULL;
++
+ 	for (i = 0; i < PDS_CORE_BARS_MAX; i++) {
+ 		if (bars[i].vaddr)
+ 			pci_iounmap(pdsc->pdev, bars[i].vaddr);
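
Clearing info_regs/cmd_regs/intr_status/intr_ctrl before unmapping the BARs is what backs the new NULL checks sprinkled through this patch (pdsc_is_fw_running(), pdsc_devcmd_locked(), pdsc_check_pci_health(), pdsc_firmware_update()): a caller that observes the cleared pointer during teardown bails out, typically with -ENXIO, instead of dereferencing a stale __iomem mapping. A userspace model of the pattern, with malloc/free standing in for the iomap:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct dev { unsigned char *regs; };

static void unmap_bars(struct dev *d)
{
	unsigned char *regs = d->regs;

	d->regs = NULL;			/* readers now see "not mapped" */
	free(regs);
}

static int read_fw_status(struct dev *d)
{
	if (!d->regs)			/* teardown in progress */
		return -ENXIO;
	return d->regs[0];
}

int main(void)
{
	struct dev d = { .regs = calloc(1, 16) };

	printf("mapped:   %d\n", read_fw_status(&d));
	unmap_bars(&d);
	printf("unmapped: %d\n", read_fw_status(&d));
	return 0;
}
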
+@@ -293,7 +298,7 @@ err_out_stop:
+ err_out_teardown:
+ 	pdsc_teardown(pdsc, PDSC_TEARDOWN_REMOVING);
+ err_out_unmap_bars:
+-	del_timer_sync(&pdsc->wdtimer);
++	timer_shutdown_sync(&pdsc->wdtimer);
+ 	if (pdsc->wq)
+ 		destroy_workqueue(pdsc->wq);
+ 	mutex_destroy(&pdsc->config_lock);
+@@ -420,7 +425,7 @@ static void pdsc_remove(struct pci_dev *pdev)
+ 		 */
+ 		pdsc_sriov_configure(pdev, 0);
+ 
+-		del_timer_sync(&pdsc->wdtimer);
++		timer_shutdown_sync(&pdsc->wdtimer);
+ 		if (pdsc->wq)
+ 			destroy_workqueue(pdsc->wq);
+ 
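
Both the error and remove paths switch from del_timer_sync() to timer_shutdown_sync(). The difference matters for a self-rearming watchdog: del_timer_sync() only waits out the current expiry, while timer_shutdown_sync() additionally marks the timer dead so any later mod_timer() becomes a no-op; that in turn lets the new pdsc_restart_health_thread() later in this patch bring the timer back with timer_setup(). A userspace sketch of the semantic difference, with boolean flags in place of the kernel's timer bookkeeping:

#include <stdbool.h>
#include <stdio.h>

struct timer { bool pending; bool dead; };

static bool mod_timer(struct timer *t)
{
	if (t->dead)			/* shutdown already ran */
		return false;
	t->pending = true;
	return true;
}

static void del_timer_sync(struct timer *t)
{
	t->pending = false;		/* can still be rearmed later */
}

static void timer_shutdown_sync(struct timer *t)
{
	t->pending = false;
	t->dead = true;			/* rearming is now a no-op */
}

int main(void)
{
	struct timer wd = { false, false };

	mod_timer(&wd);
	del_timer_sync(&wd);
	printf("rearm after del:      %d\n", mod_timer(&wd));	/* 1 */
	timer_shutdown_sync(&wd);
	printf("rearm after shutdown: %d\n", mod_timer(&wd));	/* 0 */
	return 0;
}
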
+@@ -433,7 +438,6 @@ static void pdsc_remove(struct pci_dev *pdev)
+ 		mutex_destroy(&pdsc->config_lock);
+ 		mutex_destroy(&pdsc->devcmd_lock);
+ 
+-		pci_free_irq_vectors(pdev);
+ 		pdsc_unmap_bars(pdsc);
+ 		pci_release_regions(pdev);
+ 	}
+@@ -445,13 +449,26 @@ static void pdsc_remove(struct pci_dev *pdev)
+ 	devlink_free(dl);
+ }
+ 
++static void pdsc_stop_health_thread(struct pdsc *pdsc)
++{
++	timer_shutdown_sync(&pdsc->wdtimer);
++	if (pdsc->health_work.func)
++		cancel_work_sync(&pdsc->health_work);
++}
++
++static void pdsc_restart_health_thread(struct pdsc *pdsc)
++{
++	timer_setup(&pdsc->wdtimer, pdsc_wdtimer_cb, 0);
++	mod_timer(&pdsc->wdtimer, jiffies + 1);
++}
++
+ void pdsc_reset_prepare(struct pci_dev *pdev)
+ {
+ 	struct pdsc *pdsc = pci_get_drvdata(pdev);
+ 
++	pdsc_stop_health_thread(pdsc);
+ 	pdsc_fw_down(pdsc);
+ 
+-	pci_free_irq_vectors(pdev);
+ 	pdsc_unmap_bars(pdsc);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+@@ -486,6 +503,7 @@ void pdsc_reset_done(struct pci_dev *pdev)
+ 	}
+ 
+ 	pdsc_fw_up(pdsc);
++	pdsc_restart_health_thread(pdsc);
+ }
+ 
+ static const struct pci_error_handlers pdsc_err_handler = {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c b/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c
+index 28c9b6f1a54f1..abd4832e4ed21 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c
+@@ -953,8 +953,6 @@ int aq_ptp_ring_alloc(struct aq_nic_s *aq_nic)
+ {
+ 	struct aq_ptp_s *aq_ptp = aq_nic->aq_ptp;
+ 	unsigned int tx_ring_idx, rx_ring_idx;
+-	struct aq_ring_s *hwts;
+-	struct aq_ring_s *ring;
+ 	int err;
+ 
+ 	if (!aq_ptp)
+@@ -962,29 +960,23 @@ int aq_ptp_ring_alloc(struct aq_nic_s *aq_nic)
+ 
+ 	tx_ring_idx = aq_ptp_ring_idx(aq_nic->aq_nic_cfg.tc_mode);
+ 
+-	ring = aq_ring_tx_alloc(&aq_ptp->ptp_tx, aq_nic,
+-				tx_ring_idx, &aq_nic->aq_nic_cfg);
+-	if (!ring) {
+-		err = -ENOMEM;
++	err = aq_ring_tx_alloc(&aq_ptp->ptp_tx, aq_nic,
++			       tx_ring_idx, &aq_nic->aq_nic_cfg);
++	if (err)
+ 		goto err_exit;
+-	}
+ 
+ 	rx_ring_idx = aq_ptp_ring_idx(aq_nic->aq_nic_cfg.tc_mode);
+ 
+-	ring = aq_ring_rx_alloc(&aq_ptp->ptp_rx, aq_nic,
+-				rx_ring_idx, &aq_nic->aq_nic_cfg);
+-	if (!ring) {
+-		err = -ENOMEM;
++	err = aq_ring_rx_alloc(&aq_ptp->ptp_rx, aq_nic,
++			       rx_ring_idx, &aq_nic->aq_nic_cfg);
++	if (err)
+ 		goto err_exit_ptp_tx;
+-	}
+ 
+-	hwts = aq_ring_hwts_rx_alloc(&aq_ptp->hwts_rx, aq_nic, PTP_HWST_RING_IDX,
+-				     aq_nic->aq_nic_cfg.rxds,
+-				     aq_nic->aq_nic_cfg.aq_hw_caps->rxd_size);
+-	if (!hwts) {
+-		err = -ENOMEM;
++	err = aq_ring_hwts_rx_alloc(&aq_ptp->hwts_rx, aq_nic, PTP_HWST_RING_IDX,
++				    aq_nic->aq_nic_cfg.rxds,
++				    aq_nic->aq_nic_cfg.aq_hw_caps->rxd_size);
++	if (err)
+ 		goto err_exit_ptp_rx;
+-	}
+ 
+ 	err = aq_ptp_skb_ring_init(&aq_ptp->skb_ring, aq_nic->aq_nic_cfg.rxds);
+ 	if (err != 0) {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index e1885c1eb100a..cda8597b4e146 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -132,8 +132,8 @@ static int aq_get_rxpages(struct aq_ring_s *self, struct aq_ring_buff_s *rxbuf)
+ 	return 0;
+ }
+ 
+-static struct aq_ring_s *aq_ring_alloc(struct aq_ring_s *self,
+-				       struct aq_nic_s *aq_nic)
++static int aq_ring_alloc(struct aq_ring_s *self,
++			 struct aq_nic_s *aq_nic)
+ {
+ 	int err = 0;
+ 
+@@ -156,46 +156,29 @@ static struct aq_ring_s *aq_ring_alloc(struct aq_ring_s *self,
+ err_exit:
+ 	if (err < 0) {
+ 		aq_ring_free(self);
+-		self = NULL;
+ 	}
+ 
+-	return self;
++	return err;
+ }
+ 
+-struct aq_ring_s *aq_ring_tx_alloc(struct aq_ring_s *self,
+-				   struct aq_nic_s *aq_nic,
+-				   unsigned int idx,
+-				   struct aq_nic_cfg_s *aq_nic_cfg)
++int aq_ring_tx_alloc(struct aq_ring_s *self,
++		     struct aq_nic_s *aq_nic,
++		     unsigned int idx,
++		     struct aq_nic_cfg_s *aq_nic_cfg)
+ {
+-	int err = 0;
+-
+ 	self->aq_nic = aq_nic;
+ 	self->idx = idx;
+ 	self->size = aq_nic_cfg->txds;
+ 	self->dx_size = aq_nic_cfg->aq_hw_caps->txd_size;
+ 
+-	self = aq_ring_alloc(self, aq_nic);
+-	if (!self) {
+-		err = -ENOMEM;
+-		goto err_exit;
+-	}
+-
+-err_exit:
+-	if (err < 0) {
+-		aq_ring_free(self);
+-		self = NULL;
+-	}
+-
+-	return self;
++	return aq_ring_alloc(self, aq_nic);
+ }
+ 
+-struct aq_ring_s *aq_ring_rx_alloc(struct aq_ring_s *self,
+-				   struct aq_nic_s *aq_nic,
+-				   unsigned int idx,
+-				   struct aq_nic_cfg_s *aq_nic_cfg)
++int aq_ring_rx_alloc(struct aq_ring_s *self,
++		     struct aq_nic_s *aq_nic,
++		     unsigned int idx,
++		     struct aq_nic_cfg_s *aq_nic_cfg)
+ {
+-	int err = 0;
+-
+ 	self->aq_nic = aq_nic;
+ 	self->idx = idx;
+ 	self->size = aq_nic_cfg->rxds;
+@@ -217,22 +200,10 @@ struct aq_ring_s *aq_ring_rx_alloc(struct aq_ring_s *self,
+ 		self->tail_size = 0;
+ 	}
+ 
+-	self = aq_ring_alloc(self, aq_nic);
+-	if (!self) {
+-		err = -ENOMEM;
+-		goto err_exit;
+-	}
+-
+-err_exit:
+-	if (err < 0) {
+-		aq_ring_free(self);
+-		self = NULL;
+-	}
+-
+-	return self;
++	return aq_ring_alloc(self, aq_nic);
+ }
+ 
+-struct aq_ring_s *
++int
+ aq_ring_hwts_rx_alloc(struct aq_ring_s *self, struct aq_nic_s *aq_nic,
+ 		      unsigned int idx, unsigned int size, unsigned int dx_size)
+ {
+@@ -250,10 +221,10 @@ aq_ring_hwts_rx_alloc(struct aq_ring_s *self, struct aq_nic_s *aq_nic,
+ 					   GFP_KERNEL);
+ 	if (!self->dx_ring) {
+ 		aq_ring_free(self);
+-		return NULL;
++		return -ENOMEM;
+ 	}
+ 
+-	return self;
++	return 0;
+ }
+ 
+ int aq_ring_init(struct aq_ring_s *self, const enum atl_ring_type ring_type)
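
The aq_ring refactor changes the *_alloc() helpers from returning the caller-provided ring pointer (or NULL) to returning an errno. Because the ring storage is embedded in the caller's structure, the pointer return never carried information beyond success or failure, it forced every caller to invent -ENOMEM itself, and it left dead err_exit blocks behind. A before/after sketch of the two conventions, with hypothetical names:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct ring { void *descs; };

/* old convention: caller's pointer in, same pointer or NULL out */
static struct ring *ring_alloc_old(struct ring *self)
{
	self->descs = malloc(64);
	return self->descs ? self : NULL;
}

/* new convention: plain error code */
static int ring_alloc_new(struct ring *self)
{
	self->descs = malloc(64);
	return self->descs ? 0 : -ENOMEM;
}

int main(void)
{
	struct ring r1, r2;
	int err;

	if (!ring_alloc_old(&r1))	/* caller must invent -ENOMEM */
		return 1;

	err = ring_alloc_new(&r2);	/* error code comes for free */
	if (err)
		return 1;

	free(r1.descs);
	free(r2.descs);
	return 0;
}
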
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.h b/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
+index 0a6c34438c1d0..52847310740a2 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
+@@ -183,14 +183,14 @@ static inline unsigned int aq_ring_avail_dx(struct aq_ring_s *self)
+ 		self->sw_head - self->sw_tail - 1);
+ }
+ 
+-struct aq_ring_s *aq_ring_tx_alloc(struct aq_ring_s *self,
+-				   struct aq_nic_s *aq_nic,
+-				   unsigned int idx,
+-				   struct aq_nic_cfg_s *aq_nic_cfg);
+-struct aq_ring_s *aq_ring_rx_alloc(struct aq_ring_s *self,
+-				   struct aq_nic_s *aq_nic,
+-				   unsigned int idx,
+-				   struct aq_nic_cfg_s *aq_nic_cfg);
++int aq_ring_tx_alloc(struct aq_ring_s *self,
++		     struct aq_nic_s *aq_nic,
++		     unsigned int idx,
++		     struct aq_nic_cfg_s *aq_nic_cfg);
++int aq_ring_rx_alloc(struct aq_ring_s *self,
++		     struct aq_nic_s *aq_nic,
++		     unsigned int idx,
++		     struct aq_nic_cfg_s *aq_nic_cfg);
+ 
+ int aq_ring_init(struct aq_ring_s *self, const enum atl_ring_type ring_type);
+ void aq_ring_rx_deinit(struct aq_ring_s *self);
+@@ -207,9 +207,9 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 		     int budget);
+ int aq_ring_rx_fill(struct aq_ring_s *self);
+ 
+-struct aq_ring_s *aq_ring_hwts_rx_alloc(struct aq_ring_s *self,
+-		struct aq_nic_s *aq_nic, unsigned int idx,
+-		unsigned int size, unsigned int dx_size);
++int aq_ring_hwts_rx_alloc(struct aq_ring_s *self,
++			  struct aq_nic_s *aq_nic, unsigned int idx,
++			  unsigned int size, unsigned int dx_size);
+ void aq_ring_hwts_rx_clean(struct aq_ring_s *self, struct aq_nic_s *aq_nic);
+ 
+ unsigned int aq_ring_fill_stats_data(struct aq_ring_s *self, u64 *data);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+index f5db1c44e9b91..9769ab4f9bef0 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+@@ -136,35 +136,32 @@ int aq_vec_ring_alloc(struct aq_vec_s *self, struct aq_nic_s *aq_nic,
+ 		const unsigned int idx_ring = AQ_NIC_CFG_TCVEC2RING(aq_nic_cfg,
+ 								    i, idx);
+ 
+-		ring = aq_ring_tx_alloc(&self->ring[i][AQ_VEC_TX_ID], aq_nic,
+-					idx_ring, aq_nic_cfg);
+-		if (!ring) {
+-			err = -ENOMEM;
++		ring = &self->ring[i][AQ_VEC_TX_ID];
++		err = aq_ring_tx_alloc(ring, aq_nic, idx_ring, aq_nic_cfg);
++		if (err)
+ 			goto err_exit;
+-		}
+ 
+ 		++self->tx_rings;
+ 
+ 		aq_nic_set_tx_ring(aq_nic, idx_ring, ring);
+ 
+-		if (xdp_rxq_info_reg(&self->ring[i][AQ_VEC_RX_ID].xdp_rxq,
++		ring = &self->ring[i][AQ_VEC_RX_ID];
++		if (xdp_rxq_info_reg(&ring->xdp_rxq,
+ 				     aq_nic->ndev, idx,
+ 				     self->napi.napi_id) < 0) {
+ 			err = -ENOMEM;
+ 			goto err_exit;
+ 		}
+-		if (xdp_rxq_info_reg_mem_model(&self->ring[i][AQ_VEC_RX_ID].xdp_rxq,
++		if (xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+ 					       MEM_TYPE_PAGE_SHARED, NULL) < 0) {
+-			xdp_rxq_info_unreg(&self->ring[i][AQ_VEC_RX_ID].xdp_rxq);
++			xdp_rxq_info_unreg(&ring->xdp_rxq);
+ 			err = -ENOMEM;
+ 			goto err_exit;
+ 		}
+ 
+-		ring = aq_ring_rx_alloc(&self->ring[i][AQ_VEC_RX_ID], aq_nic,
+-					idx_ring, aq_nic_cfg);
+-		if (!ring) {
+-			xdp_rxq_info_unreg(&self->ring[i][AQ_VEC_RX_ID].xdp_rxq);
+-			err = -ENOMEM;
++		err = aq_ring_rx_alloc(ring, aq_nic, idx_ring, aq_nic_cfg);
++		if (err) {
++			xdp_rxq_info_unreg(&ring->xdp_rxq);
+ 			goto err_exit;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
+index 73655347902d2..93ff7c8ec9051 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx.c
++++ b/drivers/net/ethernet/google/gve/gve_rx.c
+@@ -362,7 +362,7 @@ static enum pkt_hash_types gve_rss_type(__be16 pkt_flags)
+ 
+ static struct sk_buff *gve_rx_add_frags(struct napi_struct *napi,
+ 					struct gve_rx_slot_page_info *page_info,
+-					u16 packet_buffer_size, u16 len,
++					unsigned int truesize, u16 len,
+ 					struct gve_rx_ctx *ctx)
+ {
+ 	u32 offset = page_info->page_offset + page_info->pad;
+@@ -395,10 +395,10 @@ static struct sk_buff *gve_rx_add_frags(struct napi_struct *napi,
+ 	if (skb != ctx->skb_head) {
+ 		ctx->skb_head->len += len;
+ 		ctx->skb_head->data_len += len;
+-		ctx->skb_head->truesize += packet_buffer_size;
++		ctx->skb_head->truesize += truesize;
+ 	}
+ 	skb_add_rx_frag(skb, num_frags, page_info->page,
+-			offset, len, packet_buffer_size);
++			offset, len, truesize);
+ 
+ 	return ctx->skb_head;
+ }
+@@ -492,7 +492,7 @@ static struct sk_buff *gve_rx_copy_to_pool(struct gve_rx_ring *rx,
+ 
+ 		memcpy(alloc_page_info.page_address, src, page_info->pad + len);
+ 		skb = gve_rx_add_frags(napi, &alloc_page_info,
+-				       rx->packet_buffer_size,
++				       PAGE_SIZE,
+ 				       len, ctx);
+ 
+ 		u64_stats_update_begin(&rx->statss);
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_hw.c b/drivers/net/ethernet/intel/e1000/e1000_hw.c
+index 4542e2bc28e8d..4576511c99f56 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_hw.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_hw.c
+@@ -5,6 +5,7 @@
+  * Shared functions for accessing and configuring the MAC
+  */
+ 
++#include <linux/bitfield.h>
+ #include "e1000.h"
+ 
+ static s32 e1000_check_downshift(struct e1000_hw *hw);
+diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
+index a187582d22994..ba9c19e6994c9 100644
+--- a/drivers/net/ethernet/intel/e1000e/e1000.h
++++ b/drivers/net/ethernet/intel/e1000e/e1000.h
+@@ -360,23 +360,43 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca);
+  * As a result, a shift of INCVALUE_SHIFT_n is used to fit a value of
+  * INCVALUE_n into the TIMINCA register allowing 32+8+(24-INCVALUE_SHIFT_n)
+  * bits to count nanoseconds leaving the rest for fractional nanoseconds.
++ *
++ * Any given INCVALUE also has an associated maximum adjustment value. This
++ * maximum adjustment value is the largest increase (or decrease) which can be
++ * safely applied without overflowing the INCVALUE. Since INCVALUE has
++ * a maximum range of 24 bits, its largest value is 0xFFFFFF.
++ *
++ * To understand where the maximum value comes from, consider the following
++ * equation:
++ *
++ *   new_incval = base_incval + (base_incval * adjustment) / 1billion
++ *
++ * To avoid overflow that means:
++ *   max_incval = base_incval + (base_incval * max_adj) / 1billion
++ *
++ * Re-arranging:
++ *   max_adj = floor(((max_incval - base_incval) * 1billion) / base_incval)
+  */
+ #define INCVALUE_96MHZ		125
+ #define INCVALUE_SHIFT_96MHZ	17
+ #define INCPERIOD_SHIFT_96MHZ	2
+ #define INCPERIOD_96MHZ		(12 >> INCPERIOD_SHIFT_96MHZ)
++#define MAX_PPB_96MHZ		23999900 /* 23,999,900 ppb */
+ 
+ #define INCVALUE_25MHZ		40
+ #define INCVALUE_SHIFT_25MHZ	18
+ #define INCPERIOD_25MHZ		1
++#define MAX_PPB_25MHZ		599999900 /* 599,999,900 ppb */
+ 
+ #define INCVALUE_24MHZ		125
+ #define INCVALUE_SHIFT_24MHZ	14
+ #define INCPERIOD_24MHZ		3
++#define MAX_PPB_24MHZ		999999999 /* 999,999,999 ppb */
+ 
+ #define INCVALUE_38400KHZ	26
+ #define INCVALUE_SHIFT_38400KHZ	19
+ #define INCPERIOD_38400KHZ	1
++#define MAX_PPB_38400KHZ	230769100 /* 230,769,100 ppb */
+ 
+ /* Another drawback of scaling the incvalue by a large factor is the
+  * 64-bit SYSTIM register overflows more quickly.  This is dealt with
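
Checking the 96 MHz case against the corrected formula: base_incval = INCVALUE_96MHZ << INCVALUE_SHIFT_96MHZ = 125 << 17 = 16,384,000 and max_incval = 0xFFFFFF = 16,777,215, so max_adj = floor((16,777,215 - 16,384,000) * 1e9 / 16,384,000) = 23,999,938 ppb; the MAX_PPB_96MHZ value of 23,999,900 presumably rounds that down to leave a small safety margin. A quick check of the arithmetic:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t base_incval = 125ULL << 17;	/* INCVALUE_96MHZ << shift */
	uint64_t max_incval = 0xFFFFFF;		/* 24-bit INCVALUE limit */
	uint64_t max_adj;

	max_adj = (max_incval - base_incval) * 1000000000ULL / base_incval;
	printf("max_adj = %" PRIu64 " ppb\n", max_adj);	/* 23999938 */
	return 0;
}
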
+diff --git a/drivers/net/ethernet/intel/e1000e/ptp.c b/drivers/net/ethernet/intel/e1000e/ptp.c
+index 02d871bc112a7..bbcfd529399b0 100644
+--- a/drivers/net/ethernet/intel/e1000e/ptp.c
++++ b/drivers/net/ethernet/intel/e1000e/ptp.c
+@@ -280,8 +280,17 @@ void e1000e_ptp_init(struct e1000_adapter *adapter)
+ 
+ 	switch (hw->mac.type) {
+ 	case e1000_pch2lan:
++		adapter->ptp_clock_info.max_adj = MAX_PPB_96MHZ;
++		break;
+ 	case e1000_pch_lpt:
++		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)
++			adapter->ptp_clock_info.max_adj = MAX_PPB_96MHZ;
++		else
++			adapter->ptp_clock_info.max_adj = MAX_PPB_25MHZ;
++		break;
+ 	case e1000_pch_spt:
++		adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;
++		break;
+ 	case e1000_pch_cnp:
+ 	case e1000_pch_tgp:
+ 	case e1000_pch_adp:
+@@ -289,15 +298,14 @@ void e1000e_ptp_init(struct e1000_adapter *adapter)
+ 	case e1000_pch_lnp:
+ 	case e1000_pch_ptp:
+ 	case e1000_pch_nvp:
+-		if ((hw->mac.type < e1000_pch_lpt) ||
+-		    (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)) {
+-			adapter->ptp_clock_info.max_adj = 24000000 - 1;
+-			break;
+-		}
+-		fallthrough;
++		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)
++			adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;
++		else
++			adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;
++		break;
+ 	case e1000_82574:
+ 	case e1000_82583:
+-		adapter->ptp_clock_info.max_adj = 600000000 - 1;
++		adapter->ptp_clock_info.max_adj = MAX_PPB_25MHZ;
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
+index af1b0cde36703..ae700a1807c65 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
++++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2013 - 2019 Intel Corporation. */
+ 
++#include <linux/bitfield.h>
+ #include "fm10k_pf.h"
+ #include "fm10k_vf.h"
+ 
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
+index dc8ccd378ec92..c50928ec14fff 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
++++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2013 - 2019 Intel Corporation. */
+ 
++#include <linux/bitfield.h>
+ #include "fm10k_vf.h"
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index d7e24d661724a..3eb6564c1cbce 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -2,6 +2,7 @@
+ /* Copyright(c) 2013 - 2021 Intel Corporation. */
+ 
+ #include <linux/avf/virtchnl.h>
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/etherdevice.h>
+ #include <linux/pci.h>
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c
+index 68602fc375f62..d57dd30b024fa 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c
+@@ -1,6 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2013 - 2021 Intel Corporation. */
+ 
++#include <linux/bitfield.h>
++#include "i40e_adminq.h"
+ #include "i40e_alloc.h"
+ #include "i40e_dcb.h"
+ #include "i40e_prototype.h"
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
+index 77cdbfc19d477..e5aec09d58e27 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2013 - 2018 Intel Corporation. */
+ 
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include "i40e_alloc.h"
+ #include "i40e_prototype.h"
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index de5ec4e6bedfb..7db89b2945107 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -2607,6 +2607,14 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 	int aq_ret = 0;
+ 	int i;
+ 
++	if (vf->is_disabled_from_host) {
++		aq_ret = -EPERM;
++		dev_info(&pf->pdev->dev,
++			 "Admin has disabled VF %d, will not enable queues\n",
++			 vf->vf_id);
++		goto error_param;
++	}
++
+ 	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+ 		aq_ret = -EINVAL;
+ 		goto error_param;
+@@ -4734,9 +4742,12 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+ 	struct i40e_link_status *ls = &pf->hw.phy.link_info;
+ 	struct virtchnl_pf_event pfe;
+ 	struct i40e_hw *hw = &pf->hw;
++	struct i40e_vsi *vsi;
++	unsigned long q_map;
+ 	struct i40e_vf *vf;
+ 	int abs_vf_id;
+ 	int ret = 0;
++	int tmp;
+ 
+ 	if (test_and_set_bit(__I40E_VIRTCHNL_OP_PENDING, pf->state)) {
+ 		dev_warn(&pf->pdev->dev, "Unable to configure VFs, other operation is pending.\n");
+@@ -4759,17 +4770,38 @@ int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+ 	switch (link) {
+ 	case IFLA_VF_LINK_STATE_AUTO:
+ 		vf->link_forced = false;
++		vf->is_disabled_from_host = false;
++		/* reset needed to reinit VF resources */
++		i40e_vc_reset_vf(vf, true);
+ 		i40e_set_vf_link_state(vf, &pfe, ls);
+ 		break;
+ 	case IFLA_VF_LINK_STATE_ENABLE:
+ 		vf->link_forced = true;
+ 		vf->link_up = true;
++		vf->is_disabled_from_host = false;
++		/* reset needed to reinit VF resources */
++		i40e_vc_reset_vf(vf, true);
+ 		i40e_set_vf_link_state(vf, &pfe, ls);
+ 		break;
+ 	case IFLA_VF_LINK_STATE_DISABLE:
+ 		vf->link_forced = true;
+ 		vf->link_up = false;
+ 		i40e_set_vf_link_state(vf, &pfe, ls);
++
++		vsi = pf->vsi[vf->lan_vsi_idx];
++		q_map = BIT(vsi->num_queue_pairs) - 1;
++
++		vf->is_disabled_from_host = true;
++
++		/* Try to stop both Tx and Rx rings even if one of the calls
++		 * fails, so that the rings are stopped even in case of errors.
++		 * If either of them returns an error, the first error that
++		 * occurred will be returned.
++		 */
++		tmp = i40e_ctrl_vf_tx_rings(vsi, q_map, false);
++		ret = i40e_ctrl_vf_rx_rings(vsi, q_map, false);
++
++		ret = tmp ? tmp : ret;
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
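
The IFLA_VF_LINK_STATE_DISABLE branch intentionally avoids short-circuiting: both ring-stop calls always run, and "ret = tmp ? tmp : ret;" folds the two results so the first failure is what gets reported. The shape of that pattern in isolation; stop_tx()/stop_rx() are hypothetical stand-ins for i40e_ctrl_vf_tx_rings()/i40e_ctrl_vf_rx_rings():

#include <errno.h>
#include <stdio.h>

static int stop_tx(void) { return -EIO; }	/* pretend Tx stop fails */
static int stop_rx(void) { return 0; }		/* Rx stop succeeds */

int main(void)
{
	int tmp, ret;

	tmp = stop_tx();		/* always attempted */
	ret = stop_rx();		/* always attempted too */
	ret = tmp ? tmp : ret;		/* first error wins */

	printf("ret = %d\n", ret);	/* -EIO */
	return 0;
}
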
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 5fd607c0de0a6..66f95e2f3146a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -100,6 +100,7 @@ struct i40e_vf {
+ 	bool link_forced;
+ 	bool link_up;		/* only valid if VF link is forced */
+ 	bool spoofchk;
++	bool is_disabled_from_host; /* PF ctrl of VF enable/disable */
+ 	u16 num_vlan;
+ 
+ 	/* ADq related variables */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c
+index 8091e6feca014..6a10c0ecf2b5f 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_common.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_common.c
+@@ -1,10 +1,11 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2013 - 2018 Intel Corporation. */
+ 
++#include <linux/avf/virtchnl.h>
++#include <linux/bitfield.h>
+ #include "iavf_type.h"
+ #include "iavf_adminq.h"
+ #include "iavf_prototype.h"
+-#include <linux/avf/virtchnl.h>
+ 
+ /**
+  * iavf_aq_str - convert AQ err code to a string
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index dc499fe7734ec..25ba5653ac6bf 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -1,11 +1,12 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2013 - 2018 Intel Corporation. */
+ 
++#include <linux/bitfield.h>
++#include <linux/uaccess.h>
++
+ /* ethtool support for iavf */
+ #include "iavf.h"
+ 
+-#include <linux/uaccess.h>
+-
+ /* ethtool statistics helpers */
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_fdir.c b/drivers/net/ethernet/intel/iavf/iavf_fdir.c
+index 03e774bd2a5b4..65ddcd81c993e 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_fdir.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_fdir.c
+@@ -3,6 +3,7 @@
+ 
+ /* flow director ethtool support for iavf */
+ 
++#include <linux/bitfield.h>
+ #include "iavf.h"
+ 
+ #define GTPU_PORT	2152
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+index d64c4997136b1..fb7edba9c2f8b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2013 - 2018 Intel Corporation. */
+ 
++#include <linux/bitfield.h>
+ #include <linux/prefetch.h>
+ 
+ #include "iavf.h"
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index fbd5d92182d37..b8437c36ff382 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -421,10 +421,10 @@ struct ice_aqc_vsi_props {
+ #define ICE_AQ_VSI_INNER_VLAN_INSERT_PVID	BIT(2)
+ #define ICE_AQ_VSI_INNER_VLAN_EMODE_S		3
+ #define ICE_AQ_VSI_INNER_VLAN_EMODE_M		(0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+-#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH	(0x0 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+-#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP	(0x1 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+-#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR		(0x2 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
+-#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING	(0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S)
++#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH	0x0U
++#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP	0x1U
++#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR		0x2U
++#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING	0x3U
+ 	u8 inner_vlan_reserved2[3];
+ 	/* ingress egress up sections */
+ 	__le32 ingress_table; /* bitmap, 3 bits per up */
+@@ -490,11 +490,11 @@ struct ice_aqc_vsi_props {
+ #define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S		2
+ #define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M		(0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S)
+ #define ICE_AQ_VSI_Q_OPT_RSS_HASH_S		6
+-#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M		(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+-#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ		(0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+-#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ		(0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+-#define ICE_AQ_VSI_Q_OPT_RSS_XOR		(0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
+-#define ICE_AQ_VSI_Q_OPT_RSS_JHASH		(0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S)
++#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M		GENMASK(7, 6)
++#define ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ		0x0U
++#define ICE_AQ_VSI_Q_OPT_RSS_HASH_SYM_TPLZ	0x1U
++#define ICE_AQ_VSI_Q_OPT_RSS_HASH_XOR		0x2U
++#define ICE_AQ_VSI_Q_OPT_RSS_HASH_JHASH		0x3U
+ 	u8 q_opt_tc;
+ #define ICE_AQ_VSI_Q_OPT_TC_OVR_S		0
+ #define ICE_AQ_VSI_Q_OPT_TC_OVR_M		(0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S)
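
These define changes set up the FIELD_PREP() conversions in the later ice hunks (ice_lib.c, ice_virtchnl.c, ice_vsi_vlan_lib.c): the field values become plain 0..3 constants and the shifting moves into the mask, expressed with GENMASK(). A userspace model of what the two macros do; this is simplified (the kernel versions add compile-time type checks) and assumes a 64-bit unsigned long:

#include <stdio.h>

/* Simplified GENMASK(h, l): set bits h..l inclusive. */
#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (63 - (h))))
/* Simplified FIELD_PREP(): shift a field value under its mask. */
#define FIELD_PREP(mask, val) \
	(((unsigned long)(val) << __builtin_ctzl(mask)) & (mask))

#define Q_OPT_RSS_HASH_M	GENMASK(7, 6)	/* was (0x3 << 6) */
#define Q_OPT_RSS_HASH_XOR	0x2UL		/* was (0x2 << 6) */

int main(void)
{
	unsigned long reg = FIELD_PREP(Q_OPT_RSS_HASH_M, Q_OPT_RSS_HASH_XOR);

	printf("mask=%#lx reg=%#lx\n", Q_OPT_RSS_HASH_M, reg); /* 0xc0 0x80 */
	return 0;
}
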
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 1bad6e17f9bef..c01950de44ced 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -979,7 +979,8 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt)
+ 	 */
+ 	if (ice_is_dvm_ena(hw)) {
+ 		ctxt->info.inner_vlan_flags |=
+-			ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING;
++			FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M,
++				   ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING);
+ 		ctxt->info.outer_vlan_flags =
+ 			(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL <<
+ 			 ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) &
+@@ -1186,12 +1187,12 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
+ 	case ICE_VSI_PF:
+ 		/* PF VSI will inherit RSS instance of PF */
+ 		lut_type = ICE_AQ_VSI_Q_OPT_RSS_LUT_PF;
+-		hash_type = ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
++		hash_type = ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ;
+ 		break;
+ 	case ICE_VSI_VF:
+ 		/* VF VSI will get a small RSS table which is a VSI LUT type */
+ 		lut_type = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI;
+-		hash_type = ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
++		hash_type = ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ;
+ 		break;
+ 	default:
+ 		dev_dbg(dev, "Unsupported VSI type %s\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 1c7b4ded948b6..8872f7a4f4320 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -823,8 +823,8 @@ static int ice_vc_handle_rss_cfg(struct ice_vf *vf, u8 *msg, bool add)
+ 		int status;
+ 
+ 		lut_type = ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI;
+-		hash_type = add ? ICE_AQ_VSI_Q_OPT_RSS_XOR :
+-				ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
++		hash_type = add ? ICE_AQ_VSI_Q_OPT_RSS_HASH_XOR :
++				ICE_AQ_VSI_Q_OPT_RSS_HASH_TPLZ;
+ 
+ 		ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ 		if (!ctx) {
+@@ -832,11 +832,9 @@ static int ice_vc_handle_rss_cfg(struct ice_vf *vf, u8 *msg, bool add)
+ 			goto error_param;
+ 		}
+ 
+-		ctx->info.q_opt_rss = ((lut_type <<
+-					ICE_AQ_VSI_Q_OPT_RSS_LUT_S) &
+-				       ICE_AQ_VSI_Q_OPT_RSS_LUT_M) |
+-				       (hash_type &
+-					ICE_AQ_VSI_Q_OPT_RSS_HASH_M);
++		ctx->info.q_opt_rss =
++			FIELD_PREP(ICE_AQ_VSI_Q_OPT_RSS_LUT_M, lut_type) |
++			FIELD_PREP(ICE_AQ_VSI_Q_OPT_RSS_HASH_M, hash_type);
+ 
+ 		/* Preserve existing queueing option setting */
+ 		ctx->info.q_opt_rss |= (vsi->info.q_opt_rss &
+diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
+index 76266e709a392..8307902115ff2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c
+@@ -131,6 +131,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ {
+ 	struct ice_hw *hw = &vsi->back->hw;
+ 	struct ice_vsi_ctx *ctxt;
++	u8 *ivf;
+ 	int err;
+ 
+ 	/* do not allow modifying VLAN stripping when a port VLAN is configured
+@@ -143,19 +144,24 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ 	if (!ctxt)
+ 		return -ENOMEM;
+ 
++	ivf = &ctxt->info.inner_vlan_flags;
++
+ 	/* Here we are configuring what the VSI should do with the VLAN tag in
+ 	 * the Rx packet. We can either leave the tag in the packet or put it in
+ 	 * the Rx descriptor.
+ 	 */
+-	if (ena)
++	if (ena) {
+ 		/* Strip VLAN tag from Rx packet and put it in the desc */
+-		ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH;
+-	else
++		*ivf = FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M,
++				  ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH);
++	} else {
+ 		/* Disable stripping. Leave tag in packet */
+-		ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING;
++		*ivf = FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M,
++				  ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING);
++	}
+ 
+ 	/* Allow all packets untagged/tagged */
+-	ctxt->info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL;
++	*ivf |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL;
+ 
+ 	ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 
+diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h
+index 8dc837889723c..4a3c4454d25ab 100644
+--- a/drivers/net/ethernet/intel/idpf/virtchnl2.h
++++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h
+@@ -978,7 +978,7 @@ struct virtchnl2_ptype {
+ 	u8 proto_id_count;
+ 	__le16 pad;
+ 	__le16 proto_id[];
+-};
++} __packed __aligned(2);
+ VIRTCHNL2_CHECK_STRUCT_LEN(6, virtchnl2_ptype);
+ 
+ /**
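
Adding __packed __aligned(2) pins the structure's layout to exactly the six bytes the wire format requires, independent of ABI: on typical ABIs the natural size is already 6, but an ABI that rounds structure sizes up to a full word (the upstream motivation was reportedly an ARM apcs-gnu build) would silently break the VIRTCHNL2_CHECK_STRUCT_LEN(6, ...) contract. A compile-time check of the idea; the struct below mirrors only the shape, and the member names are illustrative rather than the full upstream definition:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct wire_ptype {			/* same shape as the tail above */
	uint16_t id;
	uint8_t  subtype;
	uint8_t  proto_id_count;
	uint16_t pad;
	uint16_t proto_id[];		/* flexible array, not in sizeof */
} __attribute__((packed, aligned(2)));

static_assert(sizeof(struct wire_ptype) == 6, "wire size must be 6");

int main(void)
{
	printf("sizeof=%zu alignof=%zu\n",
	       sizeof(struct wire_ptype), _Alignof(struct wire_ptype));
	return 0;
}
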
+diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c
+index b9b9d35494d27..53b396fd194a3 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_i210.c
++++ b/drivers/net/ethernet/intel/igb/e1000_i210.c
+@@ -5,9 +5,9 @@
+  * e1000_i211
+  */
+ 
+-#include <linux/types.h>
++#include <linux/bitfield.h>
+ #include <linux/if_ether.h>
+-
++#include <linux/types.h>
+ #include "e1000_hw.h"
+ #include "e1000_i210.h"
+ 
+diff --git a/drivers/net/ethernet/intel/igb/e1000_nvm.c b/drivers/net/ethernet/intel/igb/e1000_nvm.c
+index fa136e6e93285..0da57e89593a0 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_nvm.c
++++ b/drivers/net/ethernet/intel/igb/e1000_nvm.c
+@@ -1,9 +1,9 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2007 - 2018 Intel Corporation. */
+ 
+-#include <linux/if_ether.h>
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+-
++#include <linux/if_ether.h>
+ #include "e1000_mac.h"
+ #include "e1000_nvm.h"
+ 
+diff --git a/drivers/net/ethernet/intel/igb/e1000_phy.c b/drivers/net/ethernet/intel/igb/e1000_phy.c
+index a018000f7db92..3c1b562a3271c 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_phy.c
++++ b/drivers/net/ethernet/intel/igb/e1000_phy.c
+@@ -1,9 +1,9 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright(c) 2007 - 2018 Intel Corporation. */
+ 
+-#include <linux/if_ether.h>
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+-
++#include <linux/if_ether.h>
+ #include "e1000_mac.h"
+ #include "e1000_phy.h"
+ 
+diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
+index fd712585af27f..e6c1fbee049ef 100644
+--- a/drivers/net/ethernet/intel/igbvf/netdev.c
++++ b/drivers/net/ethernet/intel/igbvf/netdev.c
+@@ -3,25 +3,25 @@
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+-#include <linux/module.h>
+-#include <linux/types.h>
+-#include <linux/init.h>
+-#include <linux/pci.h>
+-#include <linux/vmalloc.h>
+-#include <linux/pagemap.h>
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+-#include <linux/netdevice.h>
+-#include <linux/tcp.h>
+-#include <linux/ipv6.h>
+-#include <linux/slab.h>
+-#include <net/checksum.h>
+-#include <net/ip6_checksum.h>
+-#include <linux/mii.h>
+ #include <linux/ethtool.h>
+ #include <linux/if_vlan.h>
++#include <linux/init.h>
++#include <linux/ipv6.h>
++#include <linux/mii.h>
++#include <linux/module.h>
++#include <linux/netdevice.h>
++#include <linux/pagemap.h>
++#include <linux/pci.h>
+ #include <linux/prefetch.h>
+ #include <linux/sctp.h>
+-
++#include <linux/slab.h>
++#include <linux/tcp.h>
++#include <linux/types.h>
++#include <linux/vmalloc.h>
++#include <net/checksum.h>
++#include <net/ip6_checksum.h>
+ #include "igbvf.h"
+ 
+ char igbvf_driver_name[] = "igbvf";
+diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
+index 17546a035ab19..d2562c8e8015e 100644
+--- a/drivers/net/ethernet/intel/igc/igc_i225.c
++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright (c)  2018 Intel Corporation */
+ 
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ 
+ #include "igc_hw.h"
+diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c
+index 53b77c969c857..d0d9e7170154c 100644
+--- a/drivers/net/ethernet/intel/igc/igc_phy.c
++++ b/drivers/net/ethernet/intel/igc/igc_phy.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Copyright (c)  2018 Intel Corporation */
+ 
++#include <linux/bitfield.h>
+ #include "igc_phy.h"
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
+index 100388968e4db..3d56481e16bc9 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
+@@ -123,14 +123,14 @@ static s32 ixgbe_init_phy_ops_82598(struct ixgbe_hw *hw)
+ 		if (ret_val)
+ 			return ret_val;
+ 		if (hw->phy.sfp_type == ixgbe_sfp_type_unknown)
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 
+ 		/* Check to see if SFP+ module is supported */
+ 		ret_val = ixgbe_get_sfp_init_sequence_offsets(hw,
+ 							    &list_offset,
+ 							    &data_offset);
+ 		if (ret_val)
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 		break;
+ 	default:
+ 		break;
+@@ -213,7 +213,7 @@ static s32 ixgbe_get_link_capabilities_82598(struct ixgbe_hw *hw,
+ 		break;
+ 
+ 	default:
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -283,7 +283,7 @@ static s32 ixgbe_fc_enable_82598(struct ixgbe_hw *hw)
+ 
+ 	/* Validate the water mark configuration */
+ 	if (!hw->fc.pause_time)
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 
+ 	/* Low water mark of zero causes XOFF floods */
+ 	for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+@@ -292,7 +292,7 @@ static s32 ixgbe_fc_enable_82598(struct ixgbe_hw *hw)
+ 			if (!hw->fc.low_water[i] ||
+ 			    hw->fc.low_water[i] >= hw->fc.high_water[i]) {
+ 				hw_dbg(hw, "Invalid water mark configuration\n");
+-				return IXGBE_ERR_INVALID_LINK_SETTINGS;
++				return -EINVAL;
+ 			}
+ 		}
+ 	}
+@@ -369,7 +369,7 @@ static s32 ixgbe_fc_enable_82598(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_dbg(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* Set 802.3x based flow control settings. */
+@@ -438,7 +438,7 @@ static s32 ixgbe_start_mac_link_82598(struct ixgbe_hw *hw,
+ 				msleep(100);
+ 			}
+ 			if (!(links_reg & IXGBE_LINKS_KX_AN_COMP)) {
+-				status = IXGBE_ERR_AUTONEG_NOT_COMPLETE;
++				status = -EIO;
+ 				hw_dbg(hw, "Autonegotiation did not complete.\n");
+ 			}
+ 		}
+@@ -478,7 +478,7 @@ static s32 ixgbe_validate_link_ready(struct ixgbe_hw *hw)
+ 
+ 	if (timeout == IXGBE_VALIDATE_LINK_READY_TIMEOUT) {
+ 		hw_dbg(hw, "Link was indicated but link is down\n");
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -594,7 +594,7 @@ static s32 ixgbe_setup_mac_link_82598(struct ixgbe_hw *hw,
+ 	speed &= link_capabilities;
+ 
+ 	if (speed == IXGBE_LINK_SPEED_UNKNOWN)
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 
+ 	/* Set KX4/KX support according to speed requested */
+ 	else if (link_mode == IXGBE_AUTOC_LMS_KX4_AN ||
+@@ -701,9 +701,9 @@ static s32 ixgbe_reset_hw_82598(struct ixgbe_hw *hw)
+ 
+ 		/* Init PHY and function pointers, perform SFP setup */
+ 		phy_status = hw->phy.ops.init(hw);
+-		if (phy_status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++		if (phy_status == -EOPNOTSUPP)
+ 			return phy_status;
+-		if (phy_status == IXGBE_ERR_SFP_NOT_PRESENT)
++		if (phy_status == -ENOENT)
+ 			goto mac_reset_top;
+ 
+ 		hw->phy.ops.reset(hw);
+@@ -727,7 +727,7 @@ mac_reset_top:
+ 		udelay(1);
+ 	}
+ 	if (ctrl & IXGBE_CTRL_RST) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 
+@@ -789,7 +789,7 @@ static s32 ixgbe_set_vmdq_82598(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(rar));
+@@ -814,7 +814,7 @@ static s32 ixgbe_clear_vmdq_82598(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	rar_high = IXGBE_READ_REG(hw, IXGBE_RAH(rar));
+@@ -845,7 +845,7 @@ static s32 ixgbe_set_vfta_82598(struct ixgbe_hw *hw, u32 vlan, u32 vind,
+ 	u32 vftabyte;
+ 
+ 	if (vlan > 4095)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* Determine 32-bit word position in array */
+ 	regindex = (vlan >> 5) & 0x7F;   /* upper seven bits */
+@@ -964,7 +964,7 @@ static s32 ixgbe_read_i2c_phy_82598(struct ixgbe_hw *hw, u8 dev_addr,
+ 		gssr = IXGBE_GSSR_PHY0_SM;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, gssr) != 0)
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	if (hw->phy.type == ixgbe_phy_nl) {
+ 		/*
+@@ -993,7 +993,7 @@ static s32 ixgbe_read_i2c_phy_82598(struct ixgbe_hw *hw, u8 dev_addr,
+ 
+ 		if (sfp_stat != IXGBE_I2C_EEPROM_STATUS_PASS) {
+ 			hw_dbg(hw, "EEPROM read did not pass.\n");
+-			status = IXGBE_ERR_SFP_NOT_PRESENT;
++			status = -ENOENT;
+ 			goto out;
+ 		}
+ 
+@@ -1003,7 +1003,7 @@ static s32 ixgbe_read_i2c_phy_82598(struct ixgbe_hw *hw, u8 dev_addr,
+ 
+ 		*eeprom_data = (u8)(sfp_data >> 8);
+ 	} else {
+-		status = IXGBE_ERR_PHY;
++		status = -EIO;
+ 	}
+ 
+ out:
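
From here on, the ixgbe changes are a mechanical retirement of the driver-private IXGBE_ERR_* space in favor of standard negative errnos (-EIO, -EINVAL, -EBUSY, -ENOENT, -EOPNOTSUPP, -EACCES), so values can be propagated to common kernel code and to userspace without translation. The flavor of the conversion in isolation; identify_sfp() is a hypothetical stand-in, not the driver's function:

#include <errno.h>
#include <stdio.h>
#include <string.h>

static int identify_sfp(int present, int supported)
{
	if (!present)
		return -ENOENT;		/* was IXGBE_ERR_SFP_NOT_PRESENT */
	if (!supported)
		return -EOPNOTSUPP;	/* was IXGBE_ERR_SFP_NOT_SUPPORTED */
	return 0;
}

int main(void)
{
	int err = identify_sfp(1, 0);

	/* standard errnos are self-describing */
	printf("err=%d (%s)\n", err, strerror(-err));
	return 0;
}
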
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
+index 58ea959a44822..339e106a5732d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
+@@ -117,7 +117,7 @@ static s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw)
+ 		ret_val = hw->mac.ops.acquire_swfw_sync(hw,
+ 							IXGBE_GSSR_MAC_CSR_SM);
+ 		if (ret_val)
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		if (hw->eeprom.ops.read(hw, ++data_offset, &data_value))
+ 			goto setup_sfp_err;
+@@ -144,7 +144,7 @@ static s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw)
+ 
+ 		if (ret_val) {
+ 			hw_dbg(hw, " sfp module setup not complete\n");
+-			return IXGBE_ERR_SFP_SETUP_NOT_COMPLETE;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -159,7 +159,7 @@ setup_sfp_err:
+ 	usleep_range(hw->eeprom.semaphore_delay * 1000,
+ 		     hw->eeprom.semaphore_delay * 2000);
+ 	hw_err(hw, "eeprom read at offset %d failed\n", data_offset);
+-	return IXGBE_ERR_SFP_SETUP_NOT_COMPLETE;
++	return -EIO;
+ }
+ 
+ /**
+@@ -184,7 +184,7 @@ static s32 prot_autoc_read_82599(struct ixgbe_hw *hw, bool *locked,
+ 		ret_val = hw->mac.ops.acquire_swfw_sync(hw,
+ 					IXGBE_GSSR_MAC_CSR_SM);
+ 		if (ret_val)
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		*locked = true;
+ 	}
+@@ -219,7 +219,7 @@ static s32 prot_autoc_write_82599(struct ixgbe_hw *hw, u32 autoc, bool locked)
+ 		ret_val = hw->mac.ops.acquire_swfw_sync(hw,
+ 					IXGBE_GSSR_MAC_CSR_SM);
+ 		if (ret_val)
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		locked = true;
+ 	}
+@@ -400,7 +400,7 @@ static s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw,
+ 		break;
+ 
+ 	default:
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 	}
+ 
+ 	if (hw->phy.multispeed_fiber) {
+@@ -541,7 +541,7 @@ static s32 ixgbe_start_mac_link_82599(struct ixgbe_hw *hw,
+ 				msleep(100);
+ 			}
+ 			if (!(links_reg & IXGBE_LINKS_KX_AN_COMP)) {
+-				status = IXGBE_ERR_AUTONEG_NOT_COMPLETE;
++				status = -EIO;
+ 				hw_dbg(hw, "Autoneg did not complete.\n");
+ 			}
+ 		}
+@@ -794,7 +794,7 @@ static s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw,
+ 	speed &= link_capabilities;
+ 
+ 	if (speed == IXGBE_LINK_SPEED_UNKNOWN)
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 
+ 	/* Use stored value (EEPROM defaults) of AUTOC to find KR/KX4 support*/
+ 	if (hw->mac.orig_link_settings_stored)
+@@ -861,8 +861,7 @@ static s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw,
+ 					msleep(100);
+ 				}
+ 				if (!(links_reg & IXGBE_LINKS_KX_AN_COMP)) {
+-					status =
+-						IXGBE_ERR_AUTONEG_NOT_COMPLETE;
++					status = -EIO;
+ 					hw_dbg(hw, "Autoneg did not complete.\n");
+ 				}
+ 			}
+@@ -927,7 +926,7 @@ static s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw)
+ 	/* Identify PHY and related function pointers */
+ 	status = hw->phy.ops.init(hw);
+ 
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (status == -EOPNOTSUPP)
+ 		return status;
+ 
+ 	/* Setup SFP module if there is one present. */
+@@ -936,7 +935,7 @@ static s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw)
+ 		hw->phy.sfp_setup_needed = false;
+ 	}
+ 
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (status == -EOPNOTSUPP)
+ 		return status;
+ 
+ 	/* Reset PHY */
+@@ -974,7 +973,7 @@ mac_reset_top:
+ 	}
+ 
+ 	if (ctrl & IXGBE_CTRL_RST_MASK) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 
+@@ -1093,7 +1092,7 @@ static s32 ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, u32 *fdircmd)
+ 		udelay(10);
+ 	}
+ 
+-	return IXGBE_ERR_FDIR_CMD_INCOMPLETE;
++	return -EIO;
+ }
+ 
+ /**
+@@ -1155,7 +1154,7 @@ s32 ixgbe_reinit_fdir_tables_82599(struct ixgbe_hw *hw)
+ 	}
+ 	if (i >= IXGBE_FDIR_INIT_DONE_POLL) {
+ 		hw_dbg(hw, "Flow Director Signature poll time exceeded!\n");
+-		return IXGBE_ERR_FDIR_REINIT_FAILED;
++		return -EIO;
+ 	}
+ 
+ 	/* Clear FDIR statistics registers (read to clear) */
+@@ -1387,7 +1386,7 @@ s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on flow type input\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* configure FDIRCMD register */
+@@ -1546,7 +1545,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on vm pool mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch (input_mask->formatted.flow_type & IXGBE_ATR_L4TYPE_MASK) {
+@@ -1555,14 +1554,14 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		if (input_mask->formatted.dst_port ||
+ 		    input_mask->formatted.src_port) {
+ 			hw_dbg(hw, " Error on src/dst port mask\n");
+-			return IXGBE_ERR_CONFIG;
++			return -EIO;
+ 		}
+ 		break;
+ 	case IXGBE_ATR_L4TYPE_MASK:
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on flow type mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch (ntohs(input_mask->formatted.vlan_id) & 0xEFFF) {
+@@ -1583,7 +1582,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on VLAN mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch ((__force u16)input_mask->formatted.flex_bytes & 0xFFFF) {
+@@ -1595,7 +1594,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
+ 		break;
+ 	default:
+ 		hw_dbg(hw, " Error on flexible byte mask\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* Now mask VM pool and destination IPv6 - bits 5 and 2 */
+@@ -1824,7 +1823,7 @@ static s32 ixgbe_identify_phy_82599(struct ixgbe_hw *hw)
+ 
+ 	/* Return error if SFP module has been detected but is not supported */
+ 	if (hw->phy.type == ixgbe_phy_sfp_unsupported)
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 
+ 	return status;
+ }
+@@ -1863,13 +1862,13 @@ static s32 ixgbe_enable_rx_dma_82599(struct ixgbe_hw *hw, u32 regval)
+  *  Verifies that installed the firmware version is 0.6 or higher
+  *  for SFI devices. All 82599 SFI devices should have version 0.6 or higher.
+  *
+- *  Returns IXGBE_ERR_EEPROM_VERSION if the FW is not present or
+- *  if the FW version is not supported.
++ *  Return: -EACCES if the FW is not present or if the FW version is
++ *  not supported.
+  **/
+ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ {
+-	s32 status = IXGBE_ERR_EEPROM_VERSION;
+ 	u16 fw_offset, fw_ptp_cfg_offset;
++	s32 status = -EACCES;
+ 	u16 offset;
+ 	u16 fw_version = 0;
+ 
+@@ -1883,7 +1882,7 @@ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ 		goto fw_version_err;
+ 
+ 	if (fw_offset == 0 || fw_offset == 0xFFFF)
+-		return IXGBE_ERR_EEPROM_VERSION;
++		return -EACCES;
+ 
+ 	/* get the offset to the Pass Through Patch Configuration block */
+ 	offset = fw_offset + IXGBE_FW_PASSTHROUGH_PATCH_CONFIG_PTR;
+@@ -1891,7 +1890,7 @@ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ 		goto fw_version_err;
+ 
+ 	if (fw_ptp_cfg_offset == 0 || fw_ptp_cfg_offset == 0xFFFF)
+-		return IXGBE_ERR_EEPROM_VERSION;
++		return -EACCES;
+ 
+ 	/* get the firmware version */
+ 	offset = fw_ptp_cfg_offset + IXGBE_FW_PATCH_VERSION_4;
+@@ -1905,7 +1904,7 @@ static s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw)
+ 
+ fw_version_err:
+ 	hw_err(hw, "eeprom read at offset %d failed\n", offset);
+-	return IXGBE_ERR_EEPROM_VERSION;
++	return -EACCES;
+ }
+ 
+ /**
+@@ -2038,7 +2037,7 @@ static s32 ixgbe_reset_pipeline_82599(struct ixgbe_hw *hw)
+ 
+ 	if (!(anlp1_reg & IXGBE_ANLP1_AN_STATE_MASK)) {
+ 		hw_dbg(hw, "auto negotiation not completed\n");
+-		ret_val = IXGBE_ERR_RESET_FAILED;
++		ret_val = -EIO;
+ 		goto reset_pipeline_out;
+ 	}
+ 
+@@ -2087,7 +2086,7 @@ static s32 ixgbe_read_i2c_byte_82599(struct ixgbe_hw *hw, u8 byte_offset,
+ 
+ 		if (!timeout) {
+ 			hw_dbg(hw, "Driver can't access resource, acquiring I2C bus timeout.\n");
+-			status = IXGBE_ERR_I2C;
++			status = -EIO;
+ 			goto release_i2c_access;
+ 		}
+ 	}
+@@ -2141,7 +2140,7 @@ static s32 ixgbe_write_i2c_byte_82599(struct ixgbe_hw *hw, u8 byte_offset,
+ 
+ 		if (!timeout) {
+ 			hw_dbg(hw, "Driver can't access resource, acquiring I2C bus timeout.\n");
+-			status = IXGBE_ERR_I2C;
++			status = -EIO;
+ 			goto release_i2c_access;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+index 878dd8dff5285..b2a0f2aaa05be 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+@@ -124,7 +124,7 @@ s32 ixgbe_setup_fc_generic(struct ixgbe_hw *hw)
+ 	 */
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_dbg(hw, "ixgbe_fc_rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	/*
+@@ -215,7 +215,7 @@ s32 ixgbe_setup_fc_generic(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_dbg(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	if (hw->mac.type != ixgbe_mac_X540) {
+@@ -500,7 +500,7 @@ s32 ixgbe_read_pba_string_generic(struct ixgbe_hw *hw, u8 *pba_num,
+ 
+ 	if (pba_num == NULL) {
+ 		hw_dbg(hw, "PBA string buffer was null\n");
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	ret_val = hw->eeprom.ops.read(hw, IXGBE_PBANUM0_PTR, &data);
+@@ -526,7 +526,7 @@ s32 ixgbe_read_pba_string_generic(struct ixgbe_hw *hw, u8 *pba_num,
+ 		/* we will need 11 characters to store the PBA */
+ 		if (pba_num_size < 11) {
+ 			hw_dbg(hw, "PBA string buffer too small\n");
+-			return IXGBE_ERR_NO_SPACE;
++			return -ENOSPC;
+ 		}
+ 
+ 		/* extract hex string from data and pba_ptr */
+@@ -563,13 +563,13 @@ s32 ixgbe_read_pba_string_generic(struct ixgbe_hw *hw, u8 *pba_num,
+ 
+ 	if (length == 0xFFFF || length == 0) {
+ 		hw_dbg(hw, "NVM PBA number section invalid length\n");
+-		return IXGBE_ERR_PBA_SECTION;
++		return -EIO;
+ 	}
+ 
+ 	/* check if pba_num buffer is big enough */
+ 	if (pba_num_size  < (((u32)length * 2) - 1)) {
+ 		hw_dbg(hw, "PBA string buffer too small\n");
+-		return IXGBE_ERR_NO_SPACE;
++		return -ENOSPC;
+ 	}
+ 
+ 	/* trim pba length from start of string */
+@@ -805,7 +805,7 @@ s32 ixgbe_led_on_generic(struct ixgbe_hw *hw, u32 index)
+ 	u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn on the LED, set mode to ON. */
+ 	led_reg &= ~IXGBE_LED_MODE_MASK(index);
+@@ -826,7 +826,7 @@ s32 ixgbe_led_off_generic(struct ixgbe_hw *hw, u32 index)
+ 	u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn off the LED, set mode to OFF. */
+ 	led_reg &= ~IXGBE_LED_MODE_MASK(index);
+@@ -904,11 +904,8 @@ s32 ixgbe_write_eeprom_buffer_bit_bang_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset + words > hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || (offset + words > hw->eeprom.word_size))
++		return -EINVAL;
+ 
+ 	/*
+ 	 * The EEPROM page size cannot be queried from the chip. We do lazy
+@@ -962,7 +959,7 @@ static s32 ixgbe_write_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	if (ixgbe_ready_eeprom(hw) != 0) {
+ 		ixgbe_release_eeprom(hw);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	for (i = 0; i < words; i++) {
+@@ -1028,7 +1025,7 @@ s32 ixgbe_write_eeprom_generic(struct ixgbe_hw *hw, u16 offset, u16 data)
+ 	hw->eeprom.ops.init_params(hw);
+ 
+ 	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++		return -EINVAL;
+ 
+ 	return ixgbe_write_eeprom_buffer_bit_bang(hw, offset, 1, &data);
+ }
+@@ -1050,11 +1047,8 @@ s32 ixgbe_read_eeprom_buffer_bit_bang_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset + words > hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || (offset + words > hw->eeprom.word_size))
++		return -EINVAL;
+ 
+ 	/*
+ 	 * We cannot hold synchronization semaphores for too long
+@@ -1099,7 +1093,7 @@ static s32 ixgbe_read_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	if (ixgbe_ready_eeprom(hw) != 0) {
+ 		ixgbe_release_eeprom(hw);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	for (i = 0; i < words; i++) {
+@@ -1142,7 +1136,7 @@ s32 ixgbe_read_eeprom_bit_bang_generic(struct ixgbe_hw *hw, u16 offset,
+ 	hw->eeprom.ops.init_params(hw);
+ 
+ 	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++		return -EINVAL;
+ 
+ 	return ixgbe_read_eeprom_buffer_bit_bang(hw, offset, 1, data);
+ }
+@@ -1165,11 +1159,8 @@ s32 ixgbe_read_eerd_buffer_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || offset >= hw->eeprom.word_size)
++		return -EINVAL;
+ 
+ 	for (i = 0; i < words; i++) {
+ 		eerd = ((offset + i) << IXGBE_EEPROM_RW_ADDR_SHIFT) |
+@@ -1262,11 +1253,8 @@ s32 ixgbe_write_eewr_buffer_generic(struct ixgbe_hw *hw, u16 offset,
+ 
+ 	hw->eeprom.ops.init_params(hw);
+ 
+-	if (words == 0)
+-		return IXGBE_ERR_INVALID_ARGUMENT;
+-
+-	if (offset >= hw->eeprom.word_size)
+-		return IXGBE_ERR_EEPROM;
++	if (words == 0 || offset >= hw->eeprom.word_size)
++		return -EINVAL;
+ 
+ 	for (i = 0; i < words; i++) {
+ 		eewr = ((offset + i) << IXGBE_EEPROM_RW_ADDR_SHIFT) |
+@@ -1328,7 +1316,7 @@ static s32 ixgbe_poll_eerd_eewr_done(struct ixgbe_hw *hw, u32 ee_reg)
+ 		}
+ 		udelay(5);
+ 	}
+-	return IXGBE_ERR_EEPROM;
++	return -EIO;
+ }
+ 
+ /**
+@@ -1344,7 +1332,7 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw)
+ 	u32 i;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) != 0)
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw));
+ 
+@@ -1366,7 +1354,7 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw)
+ 		hw_dbg(hw, "Could not acquire EEPROM grant\n");
+ 
+ 		hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	/* Setup EEPROM for Read/Write */
+@@ -1419,7 +1407,7 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw)
+ 		swsm = IXGBE_READ_REG(hw, IXGBE_SWSM(hw));
+ 		if (swsm & IXGBE_SWSM_SMBI) {
+ 			hw_dbg(hw, "Software semaphore SMBI between device drivers not granted.\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -1447,7 +1435,7 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw)
+ 	if (i >= timeout) {
+ 		hw_dbg(hw, "SWESMBI Software EEPROM semaphore not granted.\n");
+ 		ixgbe_release_eeprom_semaphore(hw);
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -1503,7 +1491,7 @@ static s32 ixgbe_ready_eeprom(struct ixgbe_hw *hw)
+ 	 */
+ 	if (i >= IXGBE_EEPROM_MAX_RETRY_SPI) {
+ 		hw_dbg(hw, "SPI EEPROM Status error\n");
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -1715,7 +1703,7 @@ s32 ixgbe_calc_eeprom_checksum_generic(struct ixgbe_hw *hw)
+ 	for (i = IXGBE_PCIE_ANALOG_PTR; i < IXGBE_FW_PTR; i++) {
+ 		if (hw->eeprom.ops.read(hw, i, &pointer)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 
+ 		/* If the pointer seems invalid */
+@@ -1724,7 +1712,7 @@ s32 ixgbe_calc_eeprom_checksum_generic(struct ixgbe_hw *hw)
+ 
+ 		if (hw->eeprom.ops.read(hw, pointer, &length)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 
+ 		if (length == 0xFFFF || length == 0)
+@@ -1733,7 +1721,7 @@ s32 ixgbe_calc_eeprom_checksum_generic(struct ixgbe_hw *hw)
+ 		for (j = pointer + 1; j <= pointer + length; j++) {
+ 			if (hw->eeprom.ops.read(hw, j, &word)) {
+ 				hw_dbg(hw, "EEPROM read failed\n");
+-				return IXGBE_ERR_EEPROM;
++				return -EIO;
+ 			}
+ 			checksum += word;
+ 		}
+@@ -1786,7 +1774,7 @@ s32 ixgbe_validate_eeprom_checksum_generic(struct ixgbe_hw *hw,
+ 	 * calculated checksum
+ 	 */
+ 	if (read_checksum != checksum)
+-		status = IXGBE_ERR_EEPROM_CHECKSUM;
++		status = -EIO;
+ 
+ 	/* If the user cares, return the calculated checksum */
+ 	if (checksum_val)
+@@ -1845,7 +1833,7 @@ s32 ixgbe_set_rar_generic(struct ixgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+ 	/* Make sure we are using a valid rar index range */
+ 	if (index >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", index);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	/* setup VMDq pool selection before this RAR gets enabled */
+@@ -1897,7 +1885,7 @@ s32 ixgbe_clear_rar_generic(struct ixgbe_hw *hw, u32 index)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (index >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", index);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	/*
+@@ -2146,7 +2134,7 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw)
+ 
+ 	/* Validate the water mark configuration. */
+ 	if (!hw->fc.pause_time)
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 
+ 	/* Low water mark of zero causes XOFF floods */
+ 	for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
+@@ -2155,7 +2143,7 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw)
+ 			if (!hw->fc.low_water[i] ||
+ 			    hw->fc.low_water[i] >= hw->fc.high_water[i]) {
+ 				hw_dbg(hw, "Invalid water mark configuration\n");
+-				return IXGBE_ERR_INVALID_LINK_SETTINGS;
++				return -EINVAL;
+ 			}
+ 		}
+ 	}
+@@ -2212,7 +2200,7 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_dbg(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	/* Set 802.3x based flow control settings. */
+@@ -2269,7 +2257,7 @@ s32 ixgbe_negotiate_fc(struct ixgbe_hw *hw, u32 adv_reg, u32 lp_reg,
+ 		       u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm)
+ {
+ 	if ((!(adv_reg)) ||  (!(lp_reg)))
+-		return IXGBE_ERR_FC_NOT_NEGOTIATED;
++		return -EINVAL;
+ 
+ 	if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) {
+ 		/*
+@@ -2321,7 +2309,7 @@ static s32 ixgbe_fc_autoneg_fiber(struct ixgbe_hw *hw)
+ 	linkstat = IXGBE_READ_REG(hw, IXGBE_PCS1GLSTA);
+ 	if ((!!(linkstat & IXGBE_PCS1GLSTA_AN_COMPLETE) == 0) ||
+ 	    (!!(linkstat & IXGBE_PCS1GLSTA_AN_TIMED_OUT) == 1))
+-		return IXGBE_ERR_FC_NOT_NEGOTIATED;
++		return -EIO;
+ 
+ 	pcs_anadv_reg = IXGBE_READ_REG(hw, IXGBE_PCS1GANA);
+ 	pcs_lpab_reg = IXGBE_READ_REG(hw, IXGBE_PCS1GANLP);
+@@ -2353,12 +2341,12 @@ static s32 ixgbe_fc_autoneg_backplane(struct ixgbe_hw *hw)
+ 	 */
+ 	links = IXGBE_READ_REG(hw, IXGBE_LINKS);
+ 	if ((links & IXGBE_LINKS_KX_AN_COMP) == 0)
+-		return IXGBE_ERR_FC_NOT_NEGOTIATED;
++		return -EIO;
+ 
+ 	if (hw->mac.type == ixgbe_mac_82599EB) {
+ 		links2 = IXGBE_READ_REG(hw, IXGBE_LINKS2);
+ 		if ((links2 & IXGBE_LINKS2_AN_SUPPORTED) == 0)
+-			return IXGBE_ERR_FC_NOT_NEGOTIATED;
++			return -EIO;
+ 	}
+ 	/*
+ 	 * Read the 10g AN autoc and LP ability registers and resolve
+@@ -2407,8 +2395,8 @@ static s32 ixgbe_fc_autoneg_copper(struct ixgbe_hw *hw)
+  **/
+ void ixgbe_fc_autoneg(struct ixgbe_hw *hw)
+ {
+-	s32 ret_val = IXGBE_ERR_FC_NOT_NEGOTIATED;
+ 	ixgbe_link_speed speed;
++	s32 ret_val = -EIO;
+ 	bool link_up;
+ 
+ 	/*
+@@ -2510,7 +2498,7 @@ static u32 ixgbe_pcie_timeout_poll(struct ixgbe_hw *hw)
+  *  @hw: pointer to hardware structure
+  *
+  *  Disables PCI-Express primary access and verifies there are no pending
+- *  requests. IXGBE_ERR_PRIMARY_REQUESTS_PENDING is returned if primary disable
++ *  requests. -EALREADY is returned if primary disable
+  *  bit hasn't caused the primary requests to be disabled, else 0
+  *  is returned signifying primary requests disabled.
+  **/
+@@ -2575,7 +2563,7 @@ gio_disable_fail:
+ 	}
+ 
+ 	hw_dbg(hw, "PCIe transaction pending bit also did not clear.\n");
+-	return IXGBE_ERR_PRIMARY_REQUESTS_PENDING;
++	return -EALREADY;
+ }
+ 
+ /**
+@@ -2600,7 +2588,7 @@ s32 ixgbe_acquire_swfw_sync(struct ixgbe_hw *hw, u32 mask)
+ 		 * SW_FW_SYNC bits (not just NVM)
+ 		 */
+ 		if (ixgbe_get_eeprom_semaphore(hw))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		gssr = IXGBE_READ_REG(hw, IXGBE_GSSR);
+ 		if (!(gssr & (fwmask | swmask))) {
+@@ -2620,7 +2608,7 @@ s32 ixgbe_acquire_swfw_sync(struct ixgbe_hw *hw, u32 mask)
+ 		ixgbe_release_swfw_sync(hw, gssr & (fwmask | swmask));
+ 
+ 	usleep_range(5000, 10000);
+-	return IXGBE_ERR_SWFW_SYNC;
++	return -EBUSY;
+ }
+ 
+ /**
+@@ -2757,7 +2745,7 @@ s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index)
+ 	s32 ret_val;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/*
+ 	 * Link must be up to auto-blink the LEDs;
+@@ -2803,7 +2791,7 @@ s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index)
+ 	s32 ret_val;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	ret_val = hw->mac.ops.prot_autoc_read(hw, &locked, &autoc_reg);
+ 	if (ret_val)
+@@ -2963,7 +2951,7 @@ s32 ixgbe_clear_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	mpsar_lo = IXGBE_READ_REG(hw, IXGBE_MPSAR_LO(rar));
+@@ -3014,7 +3002,7 @@ s32 ixgbe_set_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
+ 	/* Make sure we are using a valid rar index range */
+ 	if (rar >= rar_entries) {
+ 		hw_dbg(hw, "RAR index %d is out of range.\n", rar);
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	if (vmdq < 32) {
+@@ -3091,7 +3079,7 @@ static s32 ixgbe_find_vlvf_slot(struct ixgbe_hw *hw, u32 vlan, bool vlvf_bypass)
+ 	 * will simply bypass the VLVF if there are no entries present in the
+ 	 * VLVF that contain our VLAN
+ 	 */
+-	first_empty_slot = vlvf_bypass ? IXGBE_ERR_NO_SPACE : 0;
++	first_empty_slot = vlvf_bypass ? -ENOSPC : 0;
+ 
+ 	/* add VLAN enable bit for comparison */
+ 	vlan |= IXGBE_VLVF_VIEN;
+@@ -3115,7 +3103,7 @@ static s32 ixgbe_find_vlvf_slot(struct ixgbe_hw *hw, u32 vlan, bool vlvf_bypass)
+ 	if (!first_empty_slot)
+ 		hw_dbg(hw, "No space in VLVF.\n");
+ 
+-	return first_empty_slot ? : IXGBE_ERR_NO_SPACE;
++	return first_empty_slot ? : -ENOSPC;
+ }
+ 
+ /**
+@@ -3135,7 +3123,7 @@ s32 ixgbe_set_vfta_generic(struct ixgbe_hw *hw, u32 vlan, u32 vind,
+ 	s32 vlvf_index;
+ 
+ 	if ((vlan > 4095) || (vind > 63))
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/*
+ 	 * this is a 2 part operation - first the VFTA, then the
+@@ -3611,7 +3599,8 @@ u8 ixgbe_calculate_checksum(u8 *buffer, u32 length)
+  *
+  *  Communicates with the manageability block. On success return 0
+  *  else returns semaphore error when encountering an error acquiring
+- *  semaphore or IXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
++ *  semaphore, -EINVAL when incorrect parameters passed or -EIO when
++ *  command fails.
+  *
+  *  This function assumes that the IXGBE_GSSR_SW_MNG_SM semaphore is held
+  *  by the caller.
+@@ -3624,7 +3613,7 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+ 
+ 	if (!length || length > IXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+ 		hw_dbg(hw, "Buffer length failure buffersize-%d.\n", length);
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EINVAL;
+ 	}
+ 
+ 	/* Set bit 9 of FWSTS clearing FW reset indication */
+@@ -3635,13 +3624,13 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+ 	hicr = IXGBE_READ_REG(hw, IXGBE_HICR);
+ 	if (!(hicr & IXGBE_HICR_EN)) {
+ 		hw_dbg(hw, "IXGBE_HOST_EN bit disabled.\n");
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EIO;
+ 	}
+ 
+ 	/* Calculate length in DWORDs. We must be DWORD aligned */
+ 	if (length % sizeof(u32)) {
+ 		hw_dbg(hw, "Buffer length failure, not aligned to dword");
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 	}
+ 
+ 	dword_len = length >> 2;
+@@ -3666,7 +3655,7 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+ 	/* Check command successful completion. */
+ 	if ((timeout && i == timeout) ||
+ 	    !(IXGBE_READ_REG(hw, IXGBE_HICR) & IXGBE_HICR_SV))
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EIO;
+ 
+ 	return 0;
+ }
+@@ -3686,7 +3675,7 @@ s32 ixgbe_hic_unlocked(struct ixgbe_hw *hw, u32 *buffer, u32 length,
+  *  in these cases.
+  *
+  *  Communicates with the manageability block.  On success return 0
+- *  else return IXGBE_ERR_HOST_INTERFACE_COMMAND.
++ *  else return -EIO or -EINVAL.
+  **/
+ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
+ 				 u32 length, u32 timeout,
+@@ -3701,7 +3690,7 @@ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
+ 
+ 	if (!length || length > IXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+ 		hw_dbg(hw, "Buffer length failure buffersize-%d.\n", length);
+-		return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		return -EINVAL;
+ 	}
+ 	/* Take management host interface semaphore */
+ 	status = hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);
+@@ -3731,7 +3720,7 @@ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
+ 
+ 	if (length < round_up(buf_len, 4) + hdr_size) {
+ 		hw_dbg(hw, "Buffer not large enough for reply message.\n");
+-		status = IXGBE_ERR_HOST_INTERFACE_COMMAND;
++		status = -EIO;
+ 		goto rel_out;
+ 	}
+ 
+@@ -3762,8 +3751,8 @@ rel_out:
+  *
+  *  Sends driver version number to firmware through the manageability
+  *  block.  On success return 0
+- *  else returns IXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+- *  semaphore or IXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
++ *  else returns -EBUSY when encountering an error acquiring
++ *  semaphore or -EIO when command fails.
+  **/
+ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 				 u8 build, u8 sub, __always_unused u16 len,
+@@ -3799,7 +3788,7 @@ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 		    FW_CEM_RESP_STATUS_SUCCESS)
+ 			ret_val = 0;
+ 		else
+-			ret_val = IXGBE_ERR_HOST_INTERFACE_COMMAND;
++			ret_val = -EIO;
+ 
+ 		break;
+ 	}
+@@ -3897,14 +3886,14 @@ static s32 ixgbe_get_ets_data(struct ixgbe_hw *hw, u16 *ets_cfg,
+ 		return status;
+ 
+ 	if ((*ets_offset == 0x0000) || (*ets_offset == 0xFFFF))
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	status = hw->eeprom.ops.read(hw, *ets_offset, ets_cfg);
+ 	if (status)
+ 		return status;
+ 
+ 	if ((*ets_cfg & IXGBE_ETS_TYPE_MASK) != IXGBE_ETS_TYPE_EMC_SHIFTED)
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	return 0;
+ }
+@@ -3927,7 +3916,7 @@ s32 ixgbe_get_thermal_sensor_data_generic(struct ixgbe_hw *hw)
+ 
+ 	/* Only support thermal sensors attached to physical port 0 */
+ 	if ((IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_LAN_ID_1))
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	status = ixgbe_get_ets_data(hw, &ets_cfg, &ets_offset);
+ 	if (status)
+@@ -3987,7 +3976,7 @@ s32 ixgbe_init_thermal_sensor_thresh_generic(struct ixgbe_hw *hw)
+ 
+ 	/* Only support thermal sensors attached to physical port 0 */
+ 	if ((IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_LAN_ID_1))
+-		return IXGBE_NOT_IMPLEMENTED;
++		return -EOPNOTSUPP;
+ 
+ 	status = ixgbe_get_ets_data(hw, &ets_cfg, &ets_offset);
+ 	if (status)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index 4dd897806fa5a..e47461f3eaef0 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -3370,7 +3370,7 @@ static int ixgbe_get_module_eeprom(struct net_device *dev,
+ {
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_hw *hw = &adapter->hw;
+-	s32 status = IXGBE_ERR_PHY_ADDR_INVALID;
++	s32 status = -EFAULT;
+ 	u8 databyte = 0xFF;
+ 	int i = 0;
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 94bde2cad0f47..6a3f633406c4b 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -2756,7 +2756,6 @@ static void ixgbe_check_overtemp_subtask(struct ixgbe_adapter *adapter)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
+ 	u32 eicr = adapter->interrupt_event;
+-	s32 rc;
+ 
+ 	if (test_bit(__IXGBE_DOWN, &adapter->state))
+ 		return;
+@@ -2790,14 +2789,13 @@ static void ixgbe_check_overtemp_subtask(struct ixgbe_adapter *adapter)
+ 		}
+ 
+ 		/* Check if this is not due to overtemp */
+-		if (hw->phy.ops.check_overtemp(hw) != IXGBE_ERR_OVERTEMP)
++		if (!hw->phy.ops.check_overtemp(hw))
+ 			return;
+ 
+ 		break;
+ 	case IXGBE_DEV_ID_X550EM_A_1G_T:
+ 	case IXGBE_DEV_ID_X550EM_A_1G_T_L:
+-		rc = hw->phy.ops.check_overtemp(hw);
+-		if (rc != IXGBE_ERR_OVERTEMP)
++		if (!hw->phy.ops.check_overtemp(hw))
+ 			return;
+ 		break;
+ 	default:
+@@ -5512,7 +5510,7 @@ static int ixgbe_non_sfp_link_config(struct ixgbe_hw *hw)
+ {
+ 	u32 speed;
+ 	bool autoneg, link_up = false;
+-	int ret = IXGBE_ERR_LINK_SETUP;
++	int ret = -EIO;
+ 
+ 	if (hw->mac.ops.check_link)
+ 		ret = hw->mac.ops.check_link(hw, &speed, &link_up, false);
+@@ -5983,13 +5981,13 @@ void ixgbe_reset(struct ixgbe_adapter *adapter)
+ 	err = hw->mac.ops.init_hw(hw);
+ 	switch (err) {
+ 	case 0:
+-	case IXGBE_ERR_SFP_NOT_PRESENT:
+-	case IXGBE_ERR_SFP_NOT_SUPPORTED:
++	case -ENOENT:
++	case -EOPNOTSUPP:
+ 		break;
+-	case IXGBE_ERR_PRIMARY_REQUESTS_PENDING:
++	case -EALREADY:
+ 		e_dev_err("primary disable timed out\n");
+ 		break;
+-	case IXGBE_ERR_EEPROM_VERSION:
++	case -EACCES:
+ 		/* We are running on a pre-production device, log a warning */
+ 		e_dev_warn("This device is a pre-production adapter/LOM. "
+ 			   "Please be aware there may be issues associated with "
+@@ -7829,10 +7827,10 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
+ 	adapter->sfp_poll_time = jiffies + IXGBE_SFP_POLL_JIFFIES - 1;
+ 
+ 	err = hw->phy.ops.identify_sfp(hw);
+-	if (err == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (err == -EOPNOTSUPP)
+ 		goto sfp_out;
+ 
+-	if (err == IXGBE_ERR_SFP_NOT_PRESENT) {
++	if (err == -ENOENT) {
+ 		/* If no cable is present, then we need to reset
+ 		 * the next time we find a good cable. */
+ 		adapter->flags2 |= IXGBE_FLAG2_SFP_NEEDS_RESET;
+@@ -7858,7 +7856,7 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
+ 	else
+ 		err = hw->mac.ops.setup_sfp(hw);
+ 
+-	if (err == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (err == -EOPNOTSUPP)
+ 		goto sfp_out;
+ 
+ 	adapter->flags |= IXGBE_FLAG_NEED_LINK_CONFIG;
+@@ -7867,8 +7865,8 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
+ sfp_out:
+ 	clear_bit(__IXGBE_IN_SFP_INIT, &adapter->state);
+ 
+-	if ((err == IXGBE_ERR_SFP_NOT_SUPPORTED) &&
+-	    (adapter->netdev->reg_state == NETREG_REGISTERED)) {
++	if (err == -EOPNOTSUPP &&
++	    adapter->netdev->reg_state == NETREG_REGISTERED) {
+ 		e_dev_err("failed to initialize because an unsupported "
+ 			  "SFP+ module type was detected.\n");
+ 		e_dev_err("Reload the driver after installing a "
+@@ -7938,7 +7936,7 @@ static void ixgbe_service_timer(struct timer_list *t)
+ static void ixgbe_phy_interrupt_subtask(struct ixgbe_adapter *adapter)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
+-	u32 status;
++	bool overtemp;
+ 
+ 	if (!(adapter->flags2 & IXGBE_FLAG2_PHY_INTERRUPT))
+ 		return;
+@@ -7948,11 +7946,9 @@ static void ixgbe_phy_interrupt_subtask(struct ixgbe_adapter *adapter)
+ 	if (!hw->phy.ops.handle_lasi)
+ 		return;
+ 
+-	status = hw->phy.ops.handle_lasi(&adapter->hw);
+-	if (status != IXGBE_ERR_OVERTEMP)
+-		return;
+-
+-	e_crit(drv, "%s\n", ixgbe_overheat_msg);
++	hw->phy.ops.handle_lasi(&adapter->hw, &overtemp);
++	if (overtemp)
++		e_crit(drv, "%s\n", ixgbe_overheat_msg);
+ }
+ 
+ static void ixgbe_reset_subtask(struct ixgbe_adapter *adapter)
+@@ -10922,9 +10918,9 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	err = hw->mac.ops.reset_hw(hw);
+ 	hw->phy.reset_if_overtemp = false;
+ 	ixgbe_set_eee_capable(adapter);
+-	if (err == IXGBE_ERR_SFP_NOT_PRESENT) {
++	if (err == -ENOENT) {
+ 		err = 0;
+-	} else if (err == IXGBE_ERR_SFP_NOT_SUPPORTED) {
++	} else if (err == -EOPNOTSUPP) {
+ 		e_dev_err("failed to load because an unsupported SFP+ or QSFP module type was detected.\n");
+ 		e_dev_err("Reload the driver after installing a supported module.\n");
+ 		goto err_sw_init;
+@@ -11143,7 +11139,7 @@ skip_sriov:
+ 
+ 	/* reset the hardware with the new settings */
+ 	err = hw->mac.ops.start_hw(hw);
+-	if (err == IXGBE_ERR_EEPROM_VERSION) {
++	if (err == -EACCES) {
+ 		/* We are running on a pre-production device, log a warning */
+ 		e_dev_warn("This device is a pre-production adapter/LOM. "
+ 			   "Please be aware there may be issues associated "
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
+index 5679293e53f7a..fe7ef5773369a 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
+@@ -24,7 +24,7 @@ s32 ixgbe_read_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+ 		size = mbx->size;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->read(hw, msg, size, mbx_id);
+ }
+@@ -43,10 +43,10 @@ s32 ixgbe_write_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (size > mbx->size)
+-		return IXGBE_ERR_MBX;
++		return -EINVAL;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->write(hw, msg, size, mbx_id);
+ }
+@@ -63,7 +63,7 @@ s32 ixgbe_check_for_msg(struct ixgbe_hw *hw, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->check_for_msg(hw, mbx_id);
+ }
+@@ -80,7 +80,7 @@ s32 ixgbe_check_for_ack(struct ixgbe_hw *hw, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->check_for_ack(hw, mbx_id);
+ }
+@@ -97,7 +97,7 @@ s32 ixgbe_check_for_rst(struct ixgbe_hw *hw, u16 mbx_id)
+ 	struct ixgbe_mbx_info *mbx = &hw->mbx;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	return mbx->ops->check_for_rst(hw, mbx_id);
+ }
+@@ -115,12 +115,12 @@ static s32 ixgbe_poll_for_msg(struct ixgbe_hw *hw, u16 mbx_id)
+ 	int countdown = mbx->timeout;
+ 
+ 	if (!countdown || !mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	while (mbx->ops->check_for_msg(hw, mbx_id)) {
+ 		countdown--;
+ 		if (!countdown)
+-			return IXGBE_ERR_MBX;
++			return -EIO;
+ 		udelay(mbx->usec_delay);
+ 	}
+ 
+@@ -140,12 +140,12 @@ static s32 ixgbe_poll_for_ack(struct ixgbe_hw *hw, u16 mbx_id)
+ 	int countdown = mbx->timeout;
+ 
+ 	if (!countdown || !mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	while (mbx->ops->check_for_ack(hw, mbx_id)) {
+ 		countdown--;
+ 		if (!countdown)
+-			return IXGBE_ERR_MBX;
++			return -EIO;
+ 		udelay(mbx->usec_delay);
+ 	}
+ 
+@@ -169,7 +169,7 @@ static s32 ixgbe_read_posted_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size,
+ 	s32 ret_val;
+ 
+ 	if (!mbx->ops)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	ret_val = ixgbe_poll_for_msg(hw, mbx_id);
+ 	if (ret_val)
+@@ -197,7 +197,7 @@ static s32 ixgbe_write_posted_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size,
+ 
+ 	/* exit if either we can't write or there isn't a defined timeout */
+ 	if (!mbx->ops || !mbx->timeout)
+-		return IXGBE_ERR_MBX;
++		return -EIO;
+ 
+ 	/* send msg */
+ 	ret_val = mbx->ops->write(hw, msg, size, mbx_id);
+@@ -217,7 +217,7 @@ static s32 ixgbe_check_for_bit_pf(struct ixgbe_hw *hw, u32 mask, s32 index)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -238,7 +238,7 @@ static s32 ixgbe_check_for_msg_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -259,7 +259,7 @@ static s32 ixgbe_check_for_ack_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -295,7 +295,7 @@ static s32 ixgbe_check_for_rst_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 		return 0;
+ 	}
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+@@ -317,7 +317,7 @@ static s32 ixgbe_obtain_mbx_lock_pf(struct ixgbe_hw *hw, u16 vf_number)
+ 	if (p2v_mailbox & IXGBE_PFMAILBOX_PFU)
+ 		return 0;
+ 
+-	return IXGBE_ERR_MBX;
++	return -EIO;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
+index 8f4316b19278c..6434c190e7a4c 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
+@@ -7,7 +7,6 @@
+ #include "ixgbe_type.h"
+ 
+ #define IXGBE_VFMAILBOX_SIZE        16 /* 16 32 bit words - 64 bytes */
+-#define IXGBE_ERR_MBX               -100
+ 
+ #define IXGBE_VFMAILBOX             0x002FC
+ #define IXGBE_VFMBMEM               0x00200
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+index 689470c1e8ad5..930dc50719364 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+@@ -102,7 +102,7 @@ s32 ixgbe_read_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
+ 	csum = ~csum;
+ 	do {
+ 		if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 		ixgbe_i2c_start(hw);
+ 		/* Device Address and write indication */
+ 		if (ixgbe_out_i2c_byte_ack(hw, addr))
+@@ -150,7 +150,7 @@ fail:
+ 			hw_dbg(hw, "I2C byte read combined error.\n");
+ 	} while (retry < max_retry);
+ 
+-	return IXGBE_ERR_I2C;
++	return -EIO;
+ }
+ 
+ /**
+@@ -179,7 +179,7 @@ s32 ixgbe_write_i2c_combined_generic_int(struct ixgbe_hw *hw, u8 addr,
+ 	csum = ~csum;
+ 	do {
+ 		if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 		ixgbe_i2c_start(hw);
+ 		/* Device Address and write indication */
+ 		if (ixgbe_out_i2c_byte_ack(hw, addr))
+@@ -215,7 +215,7 @@ fail:
+ 			hw_dbg(hw, "I2C byte write combined error.\n");
+ 	} while (retry < max_retry);
+ 
+-	return IXGBE_ERR_I2C;
++	return -EIO;
+ }
+ 
+ /**
+@@ -262,8 +262,8 @@ static bool ixgbe_probe_phy(struct ixgbe_hw *hw, u16 phy_addr)
+  **/
+ s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw)
+ {
++	u32 status = -EFAULT;
+ 	u32 phy_addr;
+-	u32 status = IXGBE_ERR_PHY_ADDR_INVALID;
+ 
+ 	if (!hw->phy.phy_semaphore_mask) {
+ 		if (hw->bus.lan_id)
+@@ -282,7 +282,7 @@ s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw)
+ 		if (ixgbe_probe_phy(hw, phy_addr))
+ 			return 0;
+ 		else
+-			return IXGBE_ERR_PHY_ADDR_INVALID;
++			return -EFAULT;
+ 	}
+ 
+ 	for (phy_addr = 0; phy_addr < IXGBE_MAX_PHY_ADDR; phy_addr++) {
+@@ -408,8 +408,7 @@ s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw)
+ 		return status;
+ 
+ 	/* Don't reset PHY if it's shut down due to overtemp. */
+-	if (!hw->phy.reset_if_overtemp &&
+-	    (IXGBE_ERR_OVERTEMP == hw->phy.ops.check_overtemp(hw)))
++	if (!hw->phy.reset_if_overtemp && hw->phy.ops.check_overtemp(hw))
+ 		return 0;
+ 
+ 	/* Blocked by MNG FW so bail */
+@@ -457,7 +456,7 @@ s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw)
+ 
+ 	if (ctrl & MDIO_CTRL1_RESET) {
+ 		hw_dbg(hw, "PHY reset polling failed to complete.\n");
+-		return IXGBE_ERR_RESET_FAILED;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -500,7 +499,7 @@ s32 ixgbe_read_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY address command did not complete.\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/* Address cycle complete, setup and write the read
+@@ -527,7 +526,7 @@ s32 ixgbe_read_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY read command didn't complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/* Read operation is complete.  Get the data
+@@ -559,7 +558,7 @@ s32 ixgbe_read_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr,
+ 						phy_data);
+ 		hw->mac.ops.release_swfw_sync(hw, gssr);
+ 	} else {
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	return status;
+@@ -604,7 +603,7 @@ s32 ixgbe_write_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY address cmd didn't complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/*
+@@ -632,7 +631,7 @@ s32 ixgbe_write_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr,
+ 
+ 	if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) {
+ 		hw_dbg(hw, "PHY write cmd didn't complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -657,7 +656,7 @@ s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr,
+ 						 phy_data);
+ 		hw->mac.ops.release_swfw_sync(hw, gssr);
+ 	} else {
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	return status;
+@@ -1430,7 +1429,7 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw)
+ 
+ 	if ((phy_data & MDIO_CTRL1_RESET) != 0) {
+ 		hw_dbg(hw, "PHY reset did not complete.\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	/* Get init offsets */
+@@ -1487,12 +1486,12 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw)
+ 				hw_dbg(hw, "SOL\n");
+ 			} else {
+ 				hw_dbg(hw, "Bad control value\n");
+-				return IXGBE_ERR_PHY;
++				return -EIO;
+ 			}
+ 			break;
+ 		default:
+ 			hw_dbg(hw, "Bad control type\n");
+-			return IXGBE_ERR_PHY;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -1500,7 +1499,7 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw)
+ 
+ err_eeprom:
+ 	hw_err(hw, "eeprom read at offset %d failed\n", data_offset);
+-	return IXGBE_ERR_PHY;
++	return -EIO;
+ }
+ 
+ /**
+@@ -1518,10 +1517,10 @@ s32 ixgbe_identify_module_generic(struct ixgbe_hw *hw)
+ 		return ixgbe_identify_qsfp_module_generic(hw);
+ 	default:
+ 		hw->phy.sfp_type = ixgbe_sfp_type_not_present;
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	}
+ 
+-	return IXGBE_ERR_SFP_NOT_PRESENT;
++	return -ENOENT;
+ }
+ 
+ /**
+@@ -1546,7 +1545,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_fiber) {
+ 		hw->phy.sfp_type = ixgbe_sfp_type_not_present;
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	}
+ 
+ 	/* LAN ID is needed for sfp_type determination */
+@@ -1561,7 +1560,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (identifier != IXGBE_SFF_IDENTIFIER_SFP) {
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 	status = hw->phy.ops.read_i2c_eeprom(hw,
+ 					     IXGBE_SFF_1GBE_COMP_CODES,
+@@ -1752,7 +1751,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 	      hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 ||
+ 	      hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1)) {
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	/* Anything else 82598-based is supported */
+@@ -1776,7 +1775,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw)
+ 		}
+ 		hw_dbg(hw, "SFP+ module not supported\n");
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 	return 0;
+ 
+@@ -1786,7 +1785,7 @@ err_read_i2c_eeprom:
+ 		hw->phy.id = 0;
+ 		hw->phy.type = ixgbe_phy_unknown;
+ 	}
+-	return IXGBE_ERR_SFP_NOT_PRESENT;
++	return -ENOENT;
+ }
+ 
+ /**
+@@ -1813,7 +1812,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_fiber_qsfp) {
+ 		hw->phy.sfp_type = ixgbe_sfp_type_not_present;
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	}
+ 
+ 	/* LAN ID is needed for sfp_type determination */
+@@ -1827,7 +1826,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 
+ 	if (identifier != IXGBE_SFF_IDENTIFIER_QSFP_PLUS) {
+ 		hw->phy.type = ixgbe_phy_sfp_unsupported;
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	hw->phy.id = identifier;
+@@ -1895,7 +1894,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 		} else {
+ 			/* unsupported module type */
+ 			hw->phy.type = ixgbe_phy_sfp_unsupported;
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 		}
+ 	}
+ 
+@@ -1955,7 +1954,7 @@ static s32 ixgbe_identify_qsfp_module_generic(struct ixgbe_hw *hw)
+ 			}
+ 			hw_dbg(hw, "QSFP module not supported\n");
+ 			hw->phy.type = ixgbe_phy_sfp_unsupported;
+-			return IXGBE_ERR_SFP_NOT_SUPPORTED;
++			return -EOPNOTSUPP;
+ 		}
+ 		return 0;
+ 	}
+@@ -1966,7 +1965,7 @@ err_read_i2c_eeprom:
+ 	hw->phy.id = 0;
+ 	hw->phy.type = ixgbe_phy_unknown;
+ 
+-	return IXGBE_ERR_SFP_NOT_PRESENT;
++	return -ENOENT;
+ }
+ 
+ /**
+@@ -1986,14 +1985,14 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 	u16 sfp_type = hw->phy.sfp_type;
+ 
+ 	if (hw->phy.sfp_type == ixgbe_sfp_type_unknown)
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 
+ 	if (hw->phy.sfp_type == ixgbe_sfp_type_not_present)
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 
+ 	if ((hw->device_id == IXGBE_DEV_ID_82598_SR_DUAL_PORT_EM) &&
+ 	    (hw->phy.sfp_type == ixgbe_sfp_type_da_cu))
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 
+ 	/*
+ 	 * Limiting active cables and 1G Phys must be initialized as
+@@ -2014,11 +2013,11 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 	if (hw->eeprom.ops.read(hw, IXGBE_PHY_INIT_OFFSET_NL, list_offset)) {
+ 		hw_err(hw, "eeprom read at %d failed\n",
+ 		       IXGBE_PHY_INIT_OFFSET_NL);
+-		return IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT;
++		return -EIO;
+ 	}
+ 
+ 	if ((!*list_offset) || (*list_offset == 0xFFFF))
+-		return IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT;
++		return -EIO;
+ 
+ 	/* Shift offset to first ID word */
+ 	(*list_offset)++;
+@@ -2037,7 +2036,7 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 				goto err_phy;
+ 			if ((!*data_offset) || (*data_offset == 0xFFFF)) {
+ 				hw_dbg(hw, "SFP+ module not supported\n");
+-				return IXGBE_ERR_SFP_NOT_SUPPORTED;
++				return -EOPNOTSUPP;
+ 			} else {
+ 				break;
+ 			}
+@@ -2050,14 +2049,14 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 
+ 	if (sfp_id == IXGBE_PHY_INIT_END_NL) {
+ 		hw_dbg(hw, "No matching SFP+ module found\n");
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	return 0;
+ 
+ err_phy:
+ 	hw_err(hw, "eeprom read at offset %d failed\n", *list_offset);
+-	return IXGBE_ERR_PHY;
++	return -EIO;
+ }
+ 
+ /**
+@@ -2152,7 +2151,7 @@ static s32 ixgbe_read_i2c_byte_generic_int(struct ixgbe_hw *hw, u8 byte_offset,
+ 
+ 	do {
+ 		if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		ixgbe_i2c_start(hw);
+ 
+@@ -2268,7 +2267,7 @@ static s32 ixgbe_write_i2c_byte_generic_int(struct ixgbe_hw *hw, u8 byte_offset,
+ 	u32 swfw_mask = hw->phy.phy_semaphore_mask;
+ 
+ 	if (lock && hw->mac.ops.acquire_swfw_sync(hw, swfw_mask))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	do {
+ 		ixgbe_i2c_start(hw);
+@@ -2510,7 +2509,7 @@ static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw)
+ 
+ 	if (ack == 1) {
+ 		hw_dbg(hw, "I2C ack was not received.\n");
+-		status = IXGBE_ERR_I2C;
++		status = -EIO;
+ 	}
+ 
+ 	ixgbe_lower_i2c_clk(hw, &i2cctl);
+@@ -2582,7 +2581,7 @@ static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data)
+ 		udelay(IXGBE_I2C_T_LOW);
+ 	} else {
+ 		hw_dbg(hw, "I2C data was not set to %X\n", data);
+-		return IXGBE_ERR_I2C;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -2678,7 +2677,7 @@ static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data)
+ 	*i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL(hw));
+ 	if (data != ixgbe_get_i2c_data(hw, i2cctl)) {
+ 		hw_dbg(hw, "Error - I2C data was not set to %X.\n", data);
+-		return IXGBE_ERR_I2C;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -2748,22 +2747,24 @@ static void ixgbe_i2c_bus_clear(struct ixgbe_hw *hw)
+  *  @hw: pointer to hardware structure
+  *
+  *  Checks if the LASI temp alarm status was triggered due to overtemp
++ *
++ *  Return true when an overtemp event detected, otherwise false.
+  **/
+-s32 ixgbe_tn_check_overtemp(struct ixgbe_hw *hw)
++bool ixgbe_tn_check_overtemp(struct ixgbe_hw *hw)
+ {
+ 	u16 phy_data = 0;
++	u32 status;
+ 
+ 	if (hw->device_id != IXGBE_DEV_ID_82599_T3_LOM)
+-		return 0;
++		return false;
+ 
+ 	/* Check that the LASI temp alarm status was triggered */
+-	hw->phy.ops.read_reg(hw, IXGBE_TN_LASI_STATUS_REG,
+-			     MDIO_MMD_PMAPMD, &phy_data);
+-
+-	if (!(phy_data & IXGBE_TN_LASI_STATUS_TEMP_ALARM))
+-		return 0;
++	status = hw->phy.ops.read_reg(hw, IXGBE_TN_LASI_STATUS_REG,
++				      MDIO_MMD_PMAPMD, &phy_data);
++	if (status)
++		return false;
+ 
+-	return IXGBE_ERR_OVERTEMP;
++	return !!(phy_data & IXGBE_TN_LASI_STATUS_TEMP_ALARM);
+ }
+ 
+ /** ixgbe_set_copper_phy_power - Control power for copper phy
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+index 6544c4539c0de..ef72729d7c933 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+@@ -155,7 +155,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw);
+ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw,
+ 					u16 *list_offset,
+ 					u16 *data_offset);
+-s32 ixgbe_tn_check_overtemp(struct ixgbe_hw *hw);
++bool ixgbe_tn_check_overtemp(struct ixgbe_hw *hw);
+ s32 ixgbe_read_i2c_byte_generic(struct ixgbe_hw *hw, u8 byte_offset,
+ 				u8 dev_addr, u8 *data);
+ s32 ixgbe_read_i2c_byte_generic_unlocked(struct ixgbe_hw *hw, u8 byte_offset,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 9cfdfa8a4355c..9379069c55c82 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -1327,7 +1327,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
+ 		break;
+ 	default:
+ 		e_err(drv, "Unhandled Msg %8.8x\n", msgbuf[0]);
+-		retval = IXGBE_ERR_MBX;
++		retval = -EIO;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+index 2b00db92b08f5..61b9774b3d31e 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+@@ -3509,10 +3509,10 @@ struct ixgbe_phy_operations {
+ 	s32 (*read_i2c_sff8472)(struct ixgbe_hw *, u8 , u8 *);
+ 	s32 (*read_i2c_eeprom)(struct ixgbe_hw *, u8 , u8 *);
+ 	s32 (*write_i2c_eeprom)(struct ixgbe_hw *, u8, u8);
+-	s32 (*check_overtemp)(struct ixgbe_hw *);
++	bool (*check_overtemp)(struct ixgbe_hw *);
+ 	s32 (*set_phy_power)(struct ixgbe_hw *, bool on);
+ 	s32 (*enter_lplu)(struct ixgbe_hw *);
+-	s32 (*handle_lasi)(struct ixgbe_hw *hw);
++	s32 (*handle_lasi)(struct ixgbe_hw *hw, bool *);
+ 	s32 (*read_i2c_byte_unlocked)(struct ixgbe_hw *, u8 offset, u8 addr,
+ 				      u8 *value);
+ 	s32 (*write_i2c_byte_unlocked)(struct ixgbe_hw *, u8 offset, u8 addr,
+@@ -3665,45 +3665,6 @@ struct ixgbe_info {
+ 	const u32			*mvals;
+ };
+ 
+-
+-/* Error Codes */
+-#define IXGBE_ERR_EEPROM                        -1
+-#define IXGBE_ERR_EEPROM_CHECKSUM               -2
+-#define IXGBE_ERR_PHY                           -3
+-#define IXGBE_ERR_CONFIG                        -4
+-#define IXGBE_ERR_PARAM                         -5
+-#define IXGBE_ERR_MAC_TYPE                      -6
+-#define IXGBE_ERR_UNKNOWN_PHY                   -7
+-#define IXGBE_ERR_LINK_SETUP                    -8
+-#define IXGBE_ERR_ADAPTER_STOPPED               -9
+-#define IXGBE_ERR_INVALID_MAC_ADDR              -10
+-#define IXGBE_ERR_DEVICE_NOT_SUPPORTED          -11
+-#define IXGBE_ERR_PRIMARY_REQUESTS_PENDING      -12
+-#define IXGBE_ERR_INVALID_LINK_SETTINGS         -13
+-#define IXGBE_ERR_AUTONEG_NOT_COMPLETE          -14
+-#define IXGBE_ERR_RESET_FAILED                  -15
+-#define IXGBE_ERR_SWFW_SYNC                     -16
+-#define IXGBE_ERR_PHY_ADDR_INVALID              -17
+-#define IXGBE_ERR_I2C                           -18
+-#define IXGBE_ERR_SFP_NOT_SUPPORTED             -19
+-#define IXGBE_ERR_SFP_NOT_PRESENT               -20
+-#define IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT       -21
+-#define IXGBE_ERR_NO_SAN_ADDR_PTR               -22
+-#define IXGBE_ERR_FDIR_REINIT_FAILED            -23
+-#define IXGBE_ERR_EEPROM_VERSION                -24
+-#define IXGBE_ERR_NO_SPACE                      -25
+-#define IXGBE_ERR_OVERTEMP                      -26
+-#define IXGBE_ERR_FC_NOT_NEGOTIATED             -27
+-#define IXGBE_ERR_FC_NOT_SUPPORTED              -28
+-#define IXGBE_ERR_SFP_SETUP_NOT_COMPLETE        -30
+-#define IXGBE_ERR_PBA_SECTION                   -31
+-#define IXGBE_ERR_INVALID_ARGUMENT              -32
+-#define IXGBE_ERR_HOST_INTERFACE_COMMAND        -33
+-#define IXGBE_ERR_FDIR_CMD_INCOMPLETE		-38
+-#define IXGBE_ERR_FW_RESP_INVALID		-39
+-#define IXGBE_ERR_TOKEN_RETRY			-40
+-#define IXGBE_NOT_IMPLEMENTED                   0x7FFFFFFF
+-
+ #define IXGBE_FUSES0_GROUP(_i)		(0x11158 + ((_i) * 4))
+ #define IXGBE_FUSES0_300MHZ		BIT(5)
+ #define IXGBE_FUSES0_REV_MASK		(3u << 6)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
+index d5cfb51ff648d..15325c549d9b5 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
+@@ -84,7 +84,7 @@ mac_reset_top:
+ 	status = hw->mac.ops.acquire_swfw_sync(hw, swfw_mask);
+ 	if (status) {
+ 		hw_dbg(hw, "semaphore failed with %d", status);
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	ctrl = IXGBE_CTRL_RST;
+@@ -103,7 +103,7 @@ mac_reset_top:
+ 	}
+ 
+ 	if (ctrl & IXGBE_CTRL_RST_MASK) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 	msleep(100);
+@@ -220,7 +220,7 @@ static s32 ixgbe_read_eerd_X540(struct ixgbe_hw *hw, u16 offset, u16 *data)
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_read_eerd_generic(hw, offset, data);
+ 
+@@ -243,7 +243,7 @@ static s32 ixgbe_read_eerd_buffer_X540(struct ixgbe_hw *hw,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_read_eerd_buffer_generic(hw, offset, words, data);
+ 
+@@ -264,7 +264,7 @@ static s32 ixgbe_write_eewr_X540(struct ixgbe_hw *hw, u16 offset, u16 data)
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_write_eewr_generic(hw, offset, data);
+ 
+@@ -287,7 +287,7 @@ static s32 ixgbe_write_eewr_buffer_X540(struct ixgbe_hw *hw,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_write_eewr_buffer_generic(hw, offset, words, data);
+ 
+@@ -324,7 +324,7 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 	for (i = 0; i < checksum_last_word; i++) {
+ 		if (ixgbe_read_eerd_generic(hw, i, &word)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 		checksum += word;
+ 	}
+@@ -349,7 +349,7 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 
+ 		if (ixgbe_read_eerd_generic(hw, pointer, &length)) {
+ 			hw_dbg(hw, "EEPROM read failed\n");
+-			return IXGBE_ERR_EEPROM;
++			return -EIO;
+ 		}
+ 
+ 		/* Skip pointer section if length is invalid. */
+@@ -360,7 +360,7 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 		for (j = pointer + 1; j <= pointer + length; j++) {
+ 			if (ixgbe_read_eerd_generic(hw, j, &word)) {
+ 				hw_dbg(hw, "EEPROM read failed\n");
+-				return IXGBE_ERR_EEPROM;
++				return -EIO;
+ 			}
+ 			checksum += word;
+ 		}
+@@ -397,7 +397,7 @@ static s32 ixgbe_validate_eeprom_checksum_X540(struct ixgbe_hw *hw,
+ 	}
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = hw->eeprom.ops.calc_checksum(hw);
+ 	if (status < 0)
+@@ -418,7 +418,7 @@ static s32 ixgbe_validate_eeprom_checksum_X540(struct ixgbe_hw *hw,
+ 	 */
+ 	if (read_checksum != checksum) {
+ 		hw_dbg(hw, "Invalid EEPROM checksum");
+-		status = IXGBE_ERR_EEPROM_CHECKSUM;
++		status = -EIO;
+ 	}
+ 
+ 	/* If the user cares, return the calculated checksum */
+@@ -455,7 +455,7 @@ static s32 ixgbe_update_eeprom_checksum_X540(struct ixgbe_hw *hw)
+ 	}
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM))
+-		return  IXGBE_ERR_SWFW_SYNC;
++		return  -EBUSY;
+ 
+ 	status = hw->eeprom.ops.calc_checksum(hw);
+ 	if (status < 0)
+@@ -490,7 +490,7 @@ static s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw)
+ 	s32 status;
+ 
+ 	status = ixgbe_poll_flash_update_done_X540(hw);
+-	if (status == IXGBE_ERR_EEPROM) {
++	if (status == -EIO) {
+ 		hw_dbg(hw, "Flash update time out\n");
+ 		return status;
+ 	}
+@@ -540,7 +540,7 @@ static s32 ixgbe_poll_flash_update_done_X540(struct ixgbe_hw *hw)
+ 			return 0;
+ 		udelay(5);
+ 	}
+-	return IXGBE_ERR_EEPROM;
++	return -EIO;
+ }
+ 
+ /**
+@@ -575,7 +575,7 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
+ 		 * SW_FW_SYNC bits (not just NVM)
+ 		 */
+ 		if (ixgbe_get_swfw_sync_semaphore(hw))
+-			return IXGBE_ERR_SWFW_SYNC;
++			return -EBUSY;
+ 
+ 		swfw_sync = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC(hw));
+ 		if (!(swfw_sync & (fwmask | swmask | hwmask))) {
+@@ -599,7 +599,7 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
+ 	 * bits in the SW_FW_SYNC register.
+ 	 */
+ 	if (ixgbe_get_swfw_sync_semaphore(hw))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	swfw_sync = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC(hw));
+ 	if (swfw_sync & (fwmask | hwmask)) {
+ 		swfw_sync |= swmask;
+@@ -622,11 +622,11 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
+ 			rmask |= IXGBE_GSSR_I2C_MASK;
+ 		ixgbe_release_swfw_sync_X540(hw, rmask);
+ 		ixgbe_release_swfw_sync_semaphore(hw);
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 	ixgbe_release_swfw_sync_semaphore(hw);
+ 
+-	return IXGBE_ERR_SWFW_SYNC;
++	return -EBUSY;
+ }
+ 
+ /**
+@@ -680,7 +680,7 @@ static s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw)
+ 	if (i == timeout) {
+ 		hw_dbg(hw,
+ 		       "Software semaphore SMBI between device drivers not granted.\n");
+-		return IXGBE_ERR_EEPROM;
++		return -EIO;
+ 	}
+ 
+ 	/* Now get the semaphore between SW/FW through the REGSMP bit */
+@@ -697,7 +697,7 @@ static s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw)
+ 	 */
+ 	hw_dbg(hw, "REGSMP Software NVM semaphore not granted\n");
+ 	ixgbe_release_swfw_sync_semaphore(hw);
+-	return IXGBE_ERR_EEPROM;
++	return -EIO;
+ }
+ 
+ /**
+@@ -768,7 +768,7 @@ s32 ixgbe_blink_led_start_X540(struct ixgbe_hw *hw, u32 index)
+ 	bool link_up;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* Link should be up in order for the blink bit in the LED control
+ 	 * register to work. Force link and speed in the MAC if link is down.
+@@ -804,7 +804,7 @@ s32 ixgbe_blink_led_stop_X540(struct ixgbe_hw *hw, u32 index)
+ 	u32 ledctl_reg;
+ 
+ 	if (index > 3)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* Restore the LED to its default value. */
+ 	ledctl_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+index aa4bf6c9a2f7c..cdc912bba8089 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+@@ -206,13 +206,13 @@ static s32 ixgbe_reset_cs4227(struct ixgbe_hw *hw)
+ 	}
+ 	if (retry == IXGBE_CS4227_RETRIES) {
+ 		hw_err(hw, "CS4227 reset did not complete\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	status = ixgbe_read_cs4227(hw, IXGBE_CS4227_EEPROM_STATUS, &value);
+ 	if (status || !(value & IXGBE_CS4227_EEPROM_LOAD_OK)) {
+ 		hw_err(hw, "CS4227 EEPROM did not load successfully\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -350,13 +350,13 @@ static s32 ixgbe_identify_phy_x550em(struct ixgbe_hw *hw)
+ static s32 ixgbe_read_phy_reg_x550em(struct ixgbe_hw *hw, u32 reg_addr,
+ 				     u32 device_type, u16 *phy_data)
+ {
+-	return IXGBE_NOT_IMPLEMENTED;
++	return -EOPNOTSUPP;
+ }
+ 
+ static s32 ixgbe_write_phy_reg_x550em(struct ixgbe_hw *hw, u32 reg_addr,
+ 				      u32 device_type, u16 phy_data)
+ {
+-	return IXGBE_NOT_IMPLEMENTED;
++	return -EOPNOTSUPP;
+ }
+ 
+ /**
+@@ -463,7 +463,7 @@ s32 ixgbe_fw_phy_activity(struct ixgbe_hw *hw, u16 activity,
+ 		--retries;
+ 	} while (retries > 0);
+ 
+-	return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++	return -EIO;
+ }
+ 
+ static const struct {
+@@ -511,7 +511,7 @@ static s32 ixgbe_get_phy_id_fw(struct ixgbe_hw *hw)
+ 	hw->phy.id |= phy_id_lo & IXGBE_PHY_REVISION_MASK;
+ 	hw->phy.revision = phy_id_lo & ~IXGBE_PHY_REVISION_MASK;
+ 	if (!hw->phy.id || hw->phy.id == IXGBE_PHY_REVISION_MASK)
+-		return IXGBE_ERR_PHY_ADDR_INVALID;
++		return -EFAULT;
+ 
+ 	hw->phy.autoneg_advertised = hw->phy.speeds_supported;
+ 	hw->phy.eee_speeds_supported = IXGBE_LINK_SPEED_100_FULL |
+@@ -568,7 +568,7 @@ static s32 ixgbe_setup_fw_link(struct ixgbe_hw *hw)
+ 
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_err(hw, "rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	switch (hw->fc.requested_mode) {
+@@ -600,8 +600,10 @@ static s32 ixgbe_setup_fw_link(struct ixgbe_hw *hw)
+ 	rc = ixgbe_fw_phy_activity(hw, FW_PHY_ACT_SETUP_LINK, &setup);
+ 	if (rc)
+ 		return rc;
++
+ 	if (setup[0] == FW_PHY_ACT_SETUP_LINK_RSP_DOWN)
+-		return IXGBE_ERR_OVERTEMP;
++		return -EIO;
++
+ 	return 0;
+ }
+ 
+@@ -675,7 +677,7 @@ static s32 ixgbe_iosf_wait(struct ixgbe_hw *hw, u32 *ctrl)
+ 		*ctrl = command;
+ 	if (i == IXGBE_MDIO_COMMAND_TIMEOUT) {
+ 		hw_dbg(hw, "IOSF wait timed out\n");
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ 	return 0;
+@@ -715,7 +717,8 @@ static s32 ixgbe_read_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr,
+ 		error = (command & IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK) >>
+ 			 IXGBE_SB_IOSF_CTRL_CMPL_ERR_SHIFT;
+ 		hw_dbg(hw, "Failed to read, error %x\n", error);
+-		return IXGBE_ERR_PHY;
++		ret = -EIO;
++		goto out;
+ 	}
+ 
+ 	if (!ret)
+@@ -750,9 +753,9 @@ static s32 ixgbe_get_phy_token(struct ixgbe_hw *hw)
+ 	if (token_cmd.hdr.cmd_or_resp.ret_status == FW_PHY_TOKEN_OK)
+ 		return 0;
+ 	if (token_cmd.hdr.cmd_or_resp.ret_status != FW_PHY_TOKEN_RETRY)
+-		return IXGBE_ERR_FW_RESP_INVALID;
++		return -EIO;
+ 
+-	return IXGBE_ERR_TOKEN_RETRY;
++	return -EAGAIN;
+ }
+ 
+ /**
+@@ -778,7 +781,7 @@ static s32 ixgbe_put_phy_token(struct ixgbe_hw *hw)
+ 		return status;
+ 	if (token_cmd.hdr.cmd_or_resp.ret_status == FW_PHY_TOKEN_OK)
+ 		return 0;
+-	return IXGBE_ERR_FW_RESP_INVALID;
++	return -EIO;
+ }
+ 
+ /**
+@@ -942,7 +945,7 @@ static s32 ixgbe_checksum_ptr_x550(struct ixgbe_hw *hw, u16 ptr,
+ 		local_buffer = buf;
+ 	} else {
+ 		if (buffer_size < ptr)
+-			return  IXGBE_ERR_PARAM;
++			return  -EINVAL;
+ 		local_buffer = &buffer[ptr];
+ 	}
+ 
+@@ -960,7 +963,7 @@ static s32 ixgbe_checksum_ptr_x550(struct ixgbe_hw *hw, u16 ptr,
+ 	}
+ 
+ 	if (buffer && ((u32)start + (u32)length > buffer_size))
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	for (i = start; length; i++, length--) {
+ 		if (i == bufsz && !buffer) {
+@@ -1012,7 +1015,7 @@ static s32 ixgbe_calc_checksum_X550(struct ixgbe_hw *hw, u16 *buffer,
+ 		local_buffer = eeprom_ptrs;
+ 	} else {
+ 		if (buffer_size < IXGBE_EEPROM_LAST_WORD)
+-			return IXGBE_ERR_PARAM;
++			return -EINVAL;
+ 		local_buffer = buffer;
+ 	}
+ 
+@@ -1148,7 +1151,7 @@ static s32 ixgbe_validate_eeprom_checksum_X550(struct ixgbe_hw *hw,
+ 	 * calculated checksum
+ 	 */
+ 	if (read_checksum != checksum) {
+-		status = IXGBE_ERR_EEPROM_CHECKSUM;
++		status = -EIO;
+ 		hw_dbg(hw, "Invalid EEPROM checksum");
+ 	}
+ 
+@@ -1203,7 +1206,7 @@ static s32 ixgbe_write_ee_hostif_X550(struct ixgbe_hw *hw, u16 offset, u16 data)
+ 		hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM);
+ 	} else {
+ 		hw_dbg(hw, "write ee hostif failed to get semaphore");
+-		status = IXGBE_ERR_SWFW_SYNC;
++		status = -EBUSY;
+ 	}
+ 
+ 	return status;
+@@ -1415,7 +1418,7 @@ static s32 ixgbe_write_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr,
+ 		error = (command & IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK) >>
+ 			 IXGBE_SB_IOSF_CTRL_CMPL_ERR_SHIFT;
+ 		hw_dbg(hw, "Failed to write, error %x\n", error);
+-		return IXGBE_ERR_PHY;
++		return -EIO;
+ 	}
+ 
+ out:
+@@ -1558,7 +1561,7 @@ static s32 ixgbe_setup_ixfi_x550em(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
+ 
+ 	/* iXFI is only supported with X552 */
+ 	if (mac->type != ixgbe_mac_X550EM_x)
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EIO;
+ 
+ 	/* Disable AN and force speed to 10G Serial. */
+ 	status = ixgbe_read_iosf_sb_reg_x550(hw,
+@@ -1580,7 +1583,7 @@ static s32 ixgbe_setup_ixfi_x550em(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
+ 		break;
+ 	default:
+ 		/* Other link speeds are not supported by internal KR PHY. */
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 	}
+ 
+ 	status = ixgbe_write_iosf_sb_reg_x550(hw,
+@@ -1611,7 +1614,7 @@ static s32 ixgbe_supported_sfp_modules_X550em(struct ixgbe_hw *hw, bool *linear)
+ {
+ 	switch (hw->phy.sfp_type) {
+ 	case ixgbe_sfp_type_not_present:
+-		return IXGBE_ERR_SFP_NOT_PRESENT;
++		return -ENOENT;
+ 	case ixgbe_sfp_type_da_cu_core0:
+ 	case ixgbe_sfp_type_da_cu_core1:
+ 		*linear = true;
+@@ -1630,7 +1633,7 @@ static s32 ixgbe_supported_sfp_modules_X550em(struct ixgbe_hw *hw, bool *linear)
+ 	case ixgbe_sfp_type_1g_cu_core0:
+ 	case ixgbe_sfp_type_1g_cu_core1:
+ 	default:
+-		return IXGBE_ERR_SFP_NOT_SUPPORTED;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	return 0;
+@@ -1660,7 +1663,7 @@ ixgbe_setup_mac_link_sfp_x550em(struct ixgbe_hw *hw,
+ 	 * there is no reason to configure CS4227 and SFP not present error is
+ 	 * not accepted in the setup MAC link flow.
+ 	 */
+-	if (status == IXGBE_ERR_SFP_NOT_PRESENT)
++	if (status == -ENOENT)
+ 		return 0;
+ 
+ 	if (status)
+@@ -1718,7 +1721,7 @@ static s32 ixgbe_setup_sfi_x550a(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
+ 		break;
+ 	default:
+ 		/* Other link speeds are not supported by internal PHY. */
+-		return IXGBE_ERR_LINK_SETUP;
++		return -EINVAL;
+ 	}
+ 
+ 	(void)mac->ops.write_iosf_sb_reg(hw,
+@@ -1803,7 +1806,7 @@ ixgbe_setup_mac_link_sfp_n(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ 	/* If no SFP module present, then return success. Return success since
+ 	 * SFP not present error is not excepted in the setup MAC link flow.
+ 	 */
+-	if (ret_val == IXGBE_ERR_SFP_NOT_PRESENT)
++	if (ret_val == -ENOENT)
+ 		return 0;
+ 
+ 	if (ret_val)
+@@ -1853,7 +1856,7 @@ ixgbe_setup_mac_link_sfp_x550a(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ 	/* If no SFP module present, then return success. Return success since
+ 	 * SFP not present error is not excepted in the setup MAC link flow.
+ 	 */
+-	if (ret_val == IXGBE_ERR_SFP_NOT_PRESENT)
++	if (ret_val == -ENOENT)
+ 		return 0;
+ 
+ 	if (ret_val)
+@@ -1863,7 +1866,7 @@ ixgbe_setup_mac_link_sfp_x550a(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ 	ixgbe_setup_kr_speed_x550em(hw, speed);
+ 
+ 	if (hw->phy.mdio.prtad == MDIO_PRTAD_NONE)
+-		return IXGBE_ERR_PHY_ADDR_INVALID;
++		return -EFAULT;
+ 
+ 	/* Get external PHY SKU id */
+ 	ret_val = hw->phy.ops.read_reg(hw, IXGBE_CS4227_EFUSE_PDF_SKU,
+@@ -1962,7 +1965,7 @@ static s32 ixgbe_check_link_t_X550em(struct ixgbe_hw *hw,
+ 	u16 i, autoneg_status;
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_copper)
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 
+ 	status = ixgbe_check_mac_link_generic(hw, speed, link_up,
+ 					      link_up_wait_to_complete);
+@@ -2145,9 +2148,9 @@ static s32 ixgbe_setup_sgmii_fw(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+  */
+ static void ixgbe_fc_autoneg_sgmii_x550em_a(struct ixgbe_hw *hw)
+ {
+-	s32 status = IXGBE_ERR_FC_NOT_NEGOTIATED;
+ 	u32 info[FW_PHY_ACT_DATA_COUNT] = { 0 };
+ 	ixgbe_link_speed speed;
++	s32 status = -EIO;
+ 	bool link_up;
+ 
+ 	/* AN should have completed when the cable was plugged in.
+@@ -2165,7 +2168,7 @@ static void ixgbe_fc_autoneg_sgmii_x550em_a(struct ixgbe_hw *hw)
+ 	/* Check if auto-negotiation has completed */
+ 	status = ixgbe_fw_phy_activity(hw, FW_PHY_ACT_GET_LINK_INFO, &info);
+ 	if (status || !(info[0] & FW_PHY_ACT_GET_LINK_INFO_AN_COMPLETE)) {
+-		status = IXGBE_ERR_FC_NOT_NEGOTIATED;
++		status = -EIO;
+ 		goto out;
+ 	}
+ 
+@@ -2369,18 +2372,18 @@ static s32 ixgbe_get_link_capabilities_X550em(struct ixgbe_hw *hw,
+  * @hw: pointer to hardware structure
+  * @lsc: pointer to boolean flag which indicates whether external Base T
+  *	 PHY interrupt is lsc
++ * @is_overtemp: indicate whether an overtemp event was encountered
+  *
+  * Determime if external Base T PHY interrupt cause is high temperature
+  * failure alarm or link status change.
+- *
+- * Return IXGBE_ERR_OVERTEMP if interrupt is high temperature
+- * failure alarm, else return PHY access status.
+  **/
+-static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
++static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc,
++				       bool *is_overtemp)
+ {
+ 	u32 status;
+ 	u16 reg;
+ 
++	*is_overtemp = false;
+ 	*lsc = false;
+ 
+ 	/* Vendor alarm triggered */
+@@ -2412,7 +2415,8 @@ static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
+ 	if (reg & IXGBE_MDIO_GLOBAL_ALM_1_HI_TMP_FAIL) {
+ 		/* power down the PHY in case the PHY FW didn't already */
+ 		ixgbe_set_copper_phy_power(hw, false);
+-		return IXGBE_ERR_OVERTEMP;
++		*is_overtemp = true;
++		return -EIO;
+ 	}
+ 	if (reg & IXGBE_MDIO_GLOBAL_ALM_1_DEV_FAULT) {
+ 		/*  device fault alarm triggered */
+@@ -2426,7 +2430,8 @@ static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
+ 		if (reg == IXGBE_MDIO_GLOBAL_FAULT_MSG_HI_TMP) {
+ 			/* power down the PHY in case the PHY FW didn't */
+ 			ixgbe_set_copper_phy_power(hw, false);
+-			return IXGBE_ERR_OVERTEMP;
++			*is_overtemp = true;
++			return -EIO;
+ 		}
+ 	}
+ 
+@@ -2462,12 +2467,12 @@ static s32 ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc)
+  **/
+ static s32 ixgbe_enable_lasi_ext_t_x550em(struct ixgbe_hw *hw)
+ {
++	bool lsc, overtemp;
+ 	u32 status;
+ 	u16 reg;
+-	bool lsc;
+ 
+ 	/* Clear interrupt flags */
+-	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc);
++	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc, &overtemp);
+ 
+ 	/* Enable link status change alarm */
+ 
+@@ -2546,21 +2551,20 @@ static s32 ixgbe_enable_lasi_ext_t_x550em(struct ixgbe_hw *hw)
+ /**
+  * ixgbe_handle_lasi_ext_t_x550em - Handle external Base T PHY interrupt
+  * @hw: pointer to hardware structure
++ * @is_overtemp: indicate whether an overtemp event was encountered
+  *
+  * Handle external Base T PHY interrupt. If high temperature
+  * failure alarm then return error, else if link status change
+  * then setup internal/external PHY link
+- *
+- * Return IXGBE_ERR_OVERTEMP if interrupt is high temperature
+- * failure alarm, else return PHY access status.
+  **/
+-static s32 ixgbe_handle_lasi_ext_t_x550em(struct ixgbe_hw *hw)
++static s32 ixgbe_handle_lasi_ext_t_x550em(struct ixgbe_hw *hw,
++					  bool *is_overtemp)
+ {
+ 	struct ixgbe_phy_info *phy = &hw->phy;
+ 	bool lsc;
+ 	u32 status;
+ 
+-	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc);
++	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc, is_overtemp);
+ 	if (status)
+ 		return status;
+ 
+@@ -2692,7 +2696,7 @@ static s32 ixgbe_setup_internal_phy_t_x550em(struct ixgbe_hw *hw)
+ 	u16 speed;
+ 
+ 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_copper)
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 
+ 	if (!(hw->mac.type == ixgbe_mac_X550EM_x &&
+ 	      !(hw->phy.nw_mng_if_sel & IXGBE_NW_MNG_IF_SEL_INT_PHY_MODE))) {
+@@ -2735,7 +2739,7 @@ static s32 ixgbe_setup_internal_phy_t_x550em(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		/* Internal PHY does not support anything else */
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	return ixgbe_setup_ixfi_x550em(hw, &force_speed);
+@@ -2767,7 +2771,7 @@ static s32 ixgbe_led_on_t_x550em(struct ixgbe_hw *hw, u32 led_idx)
+ 	u16 phy_data;
+ 
+ 	if (led_idx >= IXGBE_X557_MAX_LED_INDEX)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn on the LED, set mode to ON. */
+ 	hw->phy.ops.read_reg(hw, IXGBE_X557_LED_PROVISIONING + led_idx,
+@@ -2789,7 +2793,7 @@ static s32 ixgbe_led_off_t_x550em(struct ixgbe_hw *hw, u32 led_idx)
+ 	u16 phy_data;
+ 
+ 	if (led_idx >= IXGBE_X557_MAX_LED_INDEX)
+-		return IXGBE_ERR_PARAM;
++		return -EINVAL;
+ 
+ 	/* To turn on the LED, set mode to ON. */
+ 	hw->phy.ops.read_reg(hw, IXGBE_X557_LED_PROVISIONING + led_idx,
+@@ -2813,8 +2817,9 @@ static s32 ixgbe_led_off_t_x550em(struct ixgbe_hw *hw, u32 led_idx)
+  *
+  *  Sends driver version number to firmware through the manageability
+  *  block.  On success return 0
+- *  else returns IXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+- *  semaphore or IXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
++ *  else returns -EBUSY when encountering an error acquiring the
++ *  semaphore, -EIO when the command fails, or -EINVAL when incorrect
++ *  params are passed.
+  **/
+ static s32 ixgbe_set_fw_drv_ver_x550(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 				     u8 build, u8 sub, u16 len,
+@@ -2825,7 +2830,7 @@ static s32 ixgbe_set_fw_drv_ver_x550(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 	int i;
+ 
+ 	if (!len || !driver_ver || (len > sizeof(fw_cmd.driver_string)))
+-		return IXGBE_ERR_INVALID_ARGUMENT;
++		return -EINVAL;
+ 
+ 	fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO;
+ 	fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN + len;
+@@ -2850,7 +2855,7 @@ static s32 ixgbe_set_fw_drv_ver_x550(struct ixgbe_hw *hw, u8 maj, u8 min,
+ 
+ 		if (fw_cmd.hdr.cmd_or_resp.ret_status !=
+ 		    FW_CEM_RESP_STATUS_SUCCESS)
+-			return IXGBE_ERR_HOST_INTERFACE_COMMAND;
++			return -EIO;
+ 		return 0;
+ 	}
+ 
+@@ -2907,7 +2912,7 @@ static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *hw)
+ 	/* Validate the requested mode */
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_err(hw, "ixgbe_fc_rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	/* 10gig parts do not have a word in the EEPROM to determine the
+@@ -2942,7 +2947,7 @@ static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_err(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	switch (hw->device_id) {
+@@ -2986,8 +2991,8 @@ static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *hw)
+ static void ixgbe_fc_autoneg_backplane_x550em_a(struct ixgbe_hw *hw)
+ {
+ 	u32 link_s1, lp_an_page_low, an_cntl_1;
+-	s32 status = IXGBE_ERR_FC_NOT_NEGOTIATED;
+ 	ixgbe_link_speed speed;
++	s32 status = -EIO;
+ 	bool link_up;
+ 
+ 	/* AN should have completed when the cable was plugged in.
+@@ -3013,7 +3018,7 @@ static void ixgbe_fc_autoneg_backplane_x550em_a(struct ixgbe_hw *hw)
+ 
+ 	if (status || (link_s1 & IXGBE_KRM_LINK_S1_MAC_AN_COMPLETE) == 0) {
+ 		hw_dbg(hw, "Auto-Negotiation did not complete\n");
+-		status = IXGBE_ERR_FC_NOT_NEGOTIATED;
++		status = -EIO;
+ 		goto out;
+ 	}
+ 
+@@ -3187,21 +3192,23 @@ static s32 ixgbe_reset_phy_fw(struct ixgbe_hw *hw)
+ /**
+  * ixgbe_check_overtemp_fw - Check firmware-controlled PHYs for overtemp
+  * @hw: pointer to hardware structure
++ *
++ * Return true when an overtemp event is detected, otherwise false.
+  */
+-static s32 ixgbe_check_overtemp_fw(struct ixgbe_hw *hw)
++static bool ixgbe_check_overtemp_fw(struct ixgbe_hw *hw)
+ {
+ 	u32 store[FW_PHY_ACT_DATA_COUNT] = { 0 };
+ 	s32 rc;
+ 
+ 	rc = ixgbe_fw_phy_activity(hw, FW_PHY_ACT_GET_LINK_INFO, &store);
+ 	if (rc)
+-		return rc;
++		return false;
+ 
+ 	if (store[0] & FW_PHY_ACT_GET_LINK_INFO_TEMP) {
+ 		ixgbe_shutdown_fw_phy(hw);
+-		return IXGBE_ERR_OVERTEMP;
++		return true;
+ 	}
+-	return 0;
++	return false;
+ }
+ 
+ /**
+@@ -3251,8 +3258,7 @@ static s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw)
+ 
+ 	/* Identify the PHY or SFP module */
+ 	ret_val = phy->ops.identify(hw);
+-	if (ret_val == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+-	    ret_val == IXGBE_ERR_PHY_ADDR_INVALID)
++	if (ret_val == -EOPNOTSUPP || ret_val == -EFAULT)
+ 		return ret_val;
+ 
+ 	/* Setup function pointers based on detected hardware */
+@@ -3460,8 +3466,7 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 
+ 	/* PHY ops must be identified and initialized prior to reset */
+ 	status = hw->phy.ops.init(hw);
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+-	    status == IXGBE_ERR_PHY_ADDR_INVALID)
++	if (status == -EOPNOTSUPP || status == -EFAULT)
+ 		return status;
+ 
+ 	/* start the external PHY */
+@@ -3477,7 +3482,7 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 		hw->phy.sfp_setup_needed = false;
+ 	}
+ 
+-	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
++	if (status == -EOPNOTSUPP)
+ 		return status;
+ 
+ 	/* Reset PHY */
+@@ -3501,7 +3506,7 @@ mac_reset_top:
+ 	status = hw->mac.ops.acquire_swfw_sync(hw, swfw_mask);
+ 	if (status) {
+ 		hw_dbg(hw, "semaphore failed with %d", status);
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 	}
+ 
+ 	ctrl |= IXGBE_READ_REG(hw, IXGBE_CTRL);
+@@ -3519,7 +3524,7 @@ mac_reset_top:
+ 	}
+ 
+ 	if (ctrl & IXGBE_CTRL_RST_MASK) {
+-		status = IXGBE_ERR_RESET_FAILED;
++		status = -EIO;
+ 		hw_dbg(hw, "Reset polling failed to complete.\n");
+ 	}
+ 
+@@ -3615,7 +3620,7 @@ static s32 ixgbe_setup_fc_backplane_x550em_a(struct ixgbe_hw *hw)
+ 	/* Validate the requested mode */
+ 	if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ 		hw_err(hw, "ixgbe_fc_rx_pause not valid in strict IEEE mode\n");
+-		return IXGBE_ERR_INVALID_LINK_SETTINGS;
++		return -EINVAL;
+ 	}
+ 
+ 	if (hw->fc.requested_mode == ixgbe_fc_default)
+@@ -3672,7 +3677,7 @@ static s32 ixgbe_setup_fc_backplane_x550em_a(struct ixgbe_hw *hw)
+ 		break;
+ 	default:
+ 		hw_err(hw, "Flow control param set incorrectly\n");
+-		return IXGBE_ERR_CONFIG;
++		return -EIO;
+ 	}
+ 
+ 	status = hw->mac.ops.write_iosf_sb_reg(hw,
+@@ -3768,7 +3773,7 @@ static s32 ixgbe_acquire_swfw_sync_x550em_a(struct ixgbe_hw *hw, u32 mask)
+ 			return 0;
+ 		if (hmask)
+ 			ixgbe_release_swfw_sync_X540(hw, hmask);
+-		if (status != IXGBE_ERR_TOKEN_RETRY)
++		if (status != -EAGAIN)
+ 			return status;
+ 		msleep(FW_PHY_TOKEN_DELAY);
+ 	}
+@@ -3812,7 +3817,7 @@ static s32 ixgbe_read_phy_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, mask))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = hw->phy.ops.read_reg_mdi(hw, reg_addr, device_type, phy_data);
+ 
+@@ -3838,7 +3843,7 @@ static s32 ixgbe_write_phy_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ 	s32 status;
+ 
+ 	if (hw->mac.ops.acquire_swfw_sync(hw, mask))
+-		return IXGBE_ERR_SWFW_SYNC;
++		return -EBUSY;
+ 
+ 	status = ixgbe_write_phy_reg_mdi(hw, reg_addr, device_type, phy_data);
+ 	hw->mac.ops.release_swfw_sync(hw, mask);
+diff --git a/drivers/net/ethernet/marvell/mvmdio.c b/drivers/net/ethernet/marvell/mvmdio.c
+index 89f26402f8fb8..5f66f779e56fc 100644
+--- a/drivers/net/ethernet/marvell/mvmdio.c
++++ b/drivers/net/ethernet/marvell/mvmdio.c
+@@ -23,6 +23,7 @@
+ #include <linux/delay.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/kernel.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+@@ -58,11 +59,6 @@
+  * - Armada 370       (Globalscale Mirabox):   41us to 43us (Polled)
+  */
+ #define MVMDIO_SMI_TIMEOUT		1000 /* 1000us = 1ms */
+-#define MVMDIO_SMI_POLL_INTERVAL_MIN	45
+-#define MVMDIO_SMI_POLL_INTERVAL_MAX	55
+-
+-#define MVMDIO_XSMI_POLL_INTERVAL_MIN	150
+-#define MVMDIO_XSMI_POLL_INTERVAL_MAX	160
+ 
+ struct orion_mdio_dev {
+ 	void __iomem *regs;
+@@ -84,8 +80,6 @@ enum orion_mdio_bus_type {
+ 
+ struct orion_mdio_ops {
+ 	int (*is_done)(struct orion_mdio_dev *);
+-	unsigned int poll_interval_min;
+-	unsigned int poll_interval_max;
+ };
+ 
+ /* Wait for the SMI unit to be ready for another operation
+@@ -94,34 +88,23 @@ static int orion_mdio_wait_ready(const struct orion_mdio_ops *ops,
+ 				 struct mii_bus *bus)
+ {
+ 	struct orion_mdio_dev *dev = bus->priv;
+-	unsigned long timeout = usecs_to_jiffies(MVMDIO_SMI_TIMEOUT);
+-	unsigned long end = jiffies + timeout;
+-	int timedout = 0;
++	unsigned long timeout;
++	int done;
+ 
+-	while (1) {
+-	        if (ops->is_done(dev))
++	if (dev->err_interrupt <= 0) {
++		if (!read_poll_timeout_atomic(ops->is_done, done, done, 2,
++					      MVMDIO_SMI_TIMEOUT, false, dev))
++			return 0;
++	} else {
++		/* wait_event_timeout does not guarantee a delay of at
++		 * least one whole jiffie, so timeout must be no less
++		 * than two.
++		 */
++		timeout = max(usecs_to_jiffies(MVMDIO_SMI_TIMEOUT), 2);
++
++		if (wait_event_timeout(dev->smi_busy_wait,
++				       ops->is_done(dev), timeout))
+ 			return 0;
+-	        else if (timedout)
+-			break;
+-
+-	        if (dev->err_interrupt <= 0) {
+-			usleep_range(ops->poll_interval_min,
+-				     ops->poll_interval_max);
+-
+-			if (time_is_before_jiffies(end))
+-				++timedout;
+-	        } else {
+-			/* wait_event_timeout does not guarantee a delay of at
+-			 * least one whole jiffie, so timeout must be no less
+-			 * than two.
+-			 */
+-			if (timeout < 2)
+-				timeout = 2;
+-			wait_event_timeout(dev->smi_busy_wait,
+-				           ops->is_done(dev), timeout);
+-
+-			++timedout;
+-	        }
+ 	}
+ 
+ 	dev_err(bus->parent, "Timeout: SMI busy for too long\n");
+@@ -135,8 +118,6 @@ static int orion_mdio_smi_is_done(struct orion_mdio_dev *dev)
+ 
+ static const struct orion_mdio_ops orion_mdio_smi_ops = {
+ 	.is_done = orion_mdio_smi_is_done,
+-	.poll_interval_min = MVMDIO_SMI_POLL_INTERVAL_MIN,
+-	.poll_interval_max = MVMDIO_SMI_POLL_INTERVAL_MAX,
+ };
+ 
+ static int orion_mdio_smi_read(struct mii_bus *bus, int mii_id,
+@@ -194,8 +175,6 @@ static int orion_mdio_xsmi_is_done(struct orion_mdio_dev *dev)
+ 
+ static const struct orion_mdio_ops orion_mdio_xsmi_ops = {
+ 	.is_done = orion_mdio_xsmi_is_done,
+-	.poll_interval_min = MVMDIO_XSMI_POLL_INTERVAL_MIN,
+-	.poll_interval_max = MVMDIO_XSMI_POLL_INTERVAL_MAX,
+ };
+ 
+ static int orion_mdio_xsmi_read_c45(struct mii_bus *bus, int mii_id,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 0bcf3e5592806..3784347b6fd88 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -2678,18 +2678,17 @@ int rvu_mbox_handler_npc_mcam_alloc_entry(struct rvu *rvu,
+ 	rsp->entry = NPC_MCAM_ENTRY_INVALID;
+ 	rsp->free_count = 0;
+ 
+-	/* Check if ref_entry is within range */
+-	if (req->priority && req->ref_entry >= mcam->bmap_entries) {
+-		dev_err(rvu->dev, "%s: reference entry %d is out of range\n",
+-			__func__, req->ref_entry);
+-		return NPC_MCAM_INVALID_REQ;
+-	}
++	/* Check if ref_entry is greater than the range,
++	 * then set it to the max value.
++	 */
++	if (req->ref_entry > mcam->bmap_entries)
++		req->ref_entry = mcam->bmap_entries;
+ 
+ 	/* ref_entry can't be '0' if requested priority is high.
+ 	 * Can't be last entry if requested priority is low.
+ 	 */
+ 	if ((!req->ref_entry && req->priority == NPC_MCAM_HIGHER_PRIO) ||
+-	    ((req->ref_entry == (mcam->bmap_entries - 1)) &&
++	    ((req->ref_entry == mcam->bmap_entries) &&
+ 	     req->priority == NPC_MCAM_LOWER_PRIO))
+ 		return NPC_MCAM_INVALID_REQ;
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index 53f6258a973c2..8b7fc0af91ced 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -314,7 +314,6 @@ static int otx2_set_channels(struct net_device *dev,
+ 	pfvf->hw.tx_queues = channel->tx_count;
+ 	if (pfvf->xdp_prog)
+ 		pfvf->hw.xdp_queues = channel->rx_count;
+-	pfvf->hw.non_qos_queues =  pfvf->hw.tx_queues + pfvf->hw.xdp_queues;
+ 
+ 	if (if_up)
+ 		err = dev->netdev_ops->ndo_open(dev);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index a57455aebff6f..e5fe67e738655 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1744,6 +1744,7 @@ int otx2_open(struct net_device *netdev)
+ 	/* RQ and SQs are mapped to different CQs,
+ 	 * so find out max CQ IRQs (i.e CINTs) needed.
+ 	 */
++	pf->hw.non_qos_queues =  pf->hw.tx_queues + pf->hw.xdp_queues;
+ 	pf->hw.cint_cnt = max3(pf->hw.rx_queues, pf->hw.tx_queues,
+ 			       pf->hw.tc_tx_queues);
+ 
+@@ -2643,8 +2644,6 @@ static int otx2_xdp_setup(struct otx2_nic *pf, struct bpf_prog *prog)
+ 		xdp_features_clear_redirect_target(dev);
+ 	}
+ 
+-	pf->hw.non_qos_queues += pf->hw.xdp_queues;
+-
+ 	if (if_up)
+ 		otx2_open(pf->netdev);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+index 4d519ea833b2c..f828d32737af0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+@@ -1403,7 +1403,7 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
+ 				     struct otx2_cq_queue *cq,
+ 				     bool *need_xdp_flush)
+ {
+-	unsigned char *hard_start, *data;
++	unsigned char *hard_start;
+ 	int qidx = cq->cq_idx;
+ 	struct xdp_buff xdp;
+ 	struct page *page;
+@@ -1417,9 +1417,8 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
+ 
+ 	xdp_init_buff(&xdp, pfvf->rbsize, &cq->xdp_rxq);
+ 
+-	data = (unsigned char *)phys_to_virt(pa);
+-	hard_start = page_address(page);
+-	xdp_prepare_buff(&xdp, hard_start, data - hard_start,
++	hard_start = (unsigned char *)phys_to_virt(pa);
++	xdp_prepare_buff(&xdp, hard_start, OTX2_HEAD_ROOM,
+ 			 cqe->sg.seg_size, false);
+ 
+ 	act = bpf_prog_run_xdp(prog, &xdp);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 3cf6589cfdacf..d2c039f830195 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -4758,7 +4758,10 @@ static int mtk_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA)) {
+-		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(36));
++		err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(36));
++		if (!err)
++			err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
++
+ 		if (err) {
+ 			dev_err(&pdev->dev, "Wrong DMA config\n");
+ 			return -EINVAL;
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_port.c b/drivers/net/ethernet/microchip/lan966x/lan966x_port.c
+index 92108d354051c..2e83bbb9477e0 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_port.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_port.c
+@@ -168,9 +168,10 @@ static void lan966x_port_link_up(struct lan966x_port *port)
+ 	lan966x_taprio_speed_set(port, config->speed);
+ 
+ 	/* Also the GIGA_MODE_ENA(1) needs to be set regardless of the
+-	 * port speed for QSGMII ports.
++	 * port speed for QSGMII or SGMII ports.
+ 	 */
+-	if (phy_interface_num_ports(config->portmode) == 4)
++	if (phy_interface_num_ports(config->portmode) == 4 ||
++	    config->portmode == PHY_INTERFACE_MODE_SGMII)
+ 		mode = DEV_MAC_MODE_CFG_GIGA_MODE_ENA_SET(1);
+ 
+ 	lan_wr(config->duplex | mode,
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+index d6ce113a4210b..fa4237c27e061 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+@@ -392,6 +392,10 @@ static void ionic_remove(struct pci_dev *pdev)
+ 	del_timer_sync(&ionic->watchdog_timer);
+ 
+ 	if (ionic->lif) {
++		/* prevent adminq cmds if already known as down */
++		if (test_and_clear_bit(IONIC_LIF_F_FW_RESET, ionic->lif->state))
++			set_bit(IONIC_LIF_F_FW_STOPPING, ionic->lif->state);
++
+ 		ionic_lif_unregister(ionic->lif);
+ 		ionic_devlink_unregister(ionic);
+ 		ionic_lif_deinit(ionic->lif);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+index c06576f439161..22ab0a44fa8c7 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+@@ -321,6 +321,7 @@ void ionic_dev_cmd_comp(struct ionic_dev *idev, union ionic_dev_cmd_comp *comp)
+ 
+ void ionic_dev_cmd_go(struct ionic_dev *idev, union ionic_dev_cmd *cmd)
+ {
++	idev->opcode = cmd->cmd.opcode;
+ 	memcpy_toio(&idev->dev_cmd_regs->cmd, cmd, sizeof(*cmd));
+ 	iowrite32(0, &idev->dev_cmd_regs->done);
+ 	iowrite32(1, &idev->dev_cmd_regs->doorbell);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+index 9b54630400752..fd112bee4dcf6 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+@@ -153,6 +153,7 @@ struct ionic_dev {
+ 	bool fw_hb_ready;
+ 	bool fw_status_ready;
+ 	u8 fw_generation;
++	u8 opcode;
+ 
+ 	u64 __iomem *db_pages;
+ 	dma_addr_t phy_db_pages;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index bad919343180e..075e0e3fc2ea4 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -3238,6 +3238,9 @@ static void ionic_lif_reset(struct ionic_lif *lif)
+ {
+ 	struct ionic_dev *idev = &lif->ionic->idev;
+ 
++	if (!ionic_is_fw_running(idev))
++		return;
++
+ 	mutex_lock(&lif->ionic->dev_cmd_lock);
+ 	ionic_dev_cmd_lif_reset(idev, lif->index);
+ 	ionic_dev_cmd_wait(lif->ionic, DEVCMD_TIMEOUT);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index 8355773921789..83c413a10f79a 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -410,22 +410,28 @@ int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
+ 				      do_msg);
+ }
+ 
+-int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
++static int __ionic_adminq_post_wait(struct ionic_lif *lif,
++				    struct ionic_admin_ctx *ctx,
++				    const bool do_msg)
+ {
+ 	int err;
+ 
++	if (!ionic_is_fw_running(&lif->ionic->idev))
++		return 0;
++
+ 	err = ionic_adminq_post(lif, ctx);
+ 
+-	return ionic_adminq_wait(lif, ctx, err, true);
++	return ionic_adminq_wait(lif, ctx, err, do_msg);
+ }
+ 
+-int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
++int ionic_adminq_post_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
+ {
+-	int err;
+-
+-	err = ionic_adminq_post(lif, ctx);
++	return __ionic_adminq_post_wait(lif, ctx, true);
++}
+ 
+-	return ionic_adminq_wait(lif, ctx, err, false);
++int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *ctx)
++{
++	return __ionic_adminq_post_wait(lif, ctx, false);
+ }
+ 
+ static void ionic_dev_cmd_clean(struct ionic *ionic)
+@@ -465,7 +471,7 @@ static int __ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds,
+ 	 */
+ 	max_wait = jiffies + (max_seconds * HZ);
+ try_again:
+-	opcode = readb(&idev->dev_cmd_regs->cmd.cmd.opcode);
++	opcode = idev->opcode;
+ 	start_time = jiffies;
+ 	for (fw_up = ionic_is_fw_running(idev);
+ 	     !done && fw_up && time_before(jiffies, max_wait);
+diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c
+index 37fb033e1c29e..ef203b0807e58 100644
+--- a/drivers/net/phy/at803x.c
++++ b/drivers/net/phy/at803x.c
+@@ -2104,7 +2104,7 @@ static struct phy_driver at803x_driver[] = {
+ 	.write_page		= at803x_write_page,
+ 	.get_features		= at803x_get_features,
+ 	.read_status		= at803x_read_status,
+-	.config_intr		= &at803x_config_intr,
++	.config_intr		= at803x_config_intr,
+ 	.handle_interrupt	= at803x_handle_interrupt,
+ 	.get_tunable		= at803x_get_tunable,
+ 	.set_tunable		= at803x_set_tunable,
+@@ -2134,7 +2134,7 @@ static struct phy_driver at803x_driver[] = {
+ 	.resume			= at803x_resume,
+ 	.flags			= PHY_POLL_CABLE_TEST,
+ 	/* PHY_BASIC_FEATURES */
+-	.config_intr		= &at803x_config_intr,
++	.config_intr		= at803x_config_intr,
+ 	.handle_interrupt	= at803x_handle_interrupt,
+ 	.cable_test_start	= at803x_cable_test_start,
+ 	.cable_test_get_status	= at803x_cable_test_get_status,
+@@ -2150,7 +2150,7 @@ static struct phy_driver at803x_driver[] = {
+ 	.resume			= at803x_resume,
+ 	.flags			= PHY_POLL_CABLE_TEST,
+ 	/* PHY_BASIC_FEATURES */
+-	.config_intr		= &at803x_config_intr,
++	.config_intr		= at803x_config_intr,
+ 	.handle_interrupt	= at803x_handle_interrupt,
+ 	.cable_test_start	= at803x_cable_test_start,
+ 	.cable_test_get_status	= at803x_cable_test_get_status,
+diff --git a/drivers/net/phy/mediatek-ge-soc.c b/drivers/net/phy/mediatek-ge-soc.c
+index 8a20d9889f105..0f3a1538a8b8e 100644
+--- a/drivers/net/phy/mediatek-ge-soc.c
++++ b/drivers/net/phy/mediatek-ge-soc.c
+@@ -489,7 +489,7 @@ static int tx_r50_fill_result(struct phy_device *phydev, u16 tx_r50_cal_val,
+ 	u16 reg, val;
+ 
+ 	if (phydev->drv->phy_id == MTK_GPHY_ID_MT7988)
+-		bias = -2;
++		bias = -1;
+ 
+ 	val = clamp_val(bias + tx_r50_cal_val, 0, 63);
+ 
+@@ -705,6 +705,11 @@ restore:
+ static void mt798x_phy_common_finetune(struct phy_device *phydev)
+ {
+ 	phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5);
++	/* SlvDSPreadyTime = 24, MasDSPreadyTime = 24 */
++	__phy_write(phydev, 0x11, 0xc71);
++	__phy_write(phydev, 0x12, 0xc);
++	__phy_write(phydev, 0x10, 0x8fae);
++
+ 	/* EnabRandUpdTrig = 1 */
+ 	__phy_write(phydev, 0x11, 0x2f00);
+ 	__phy_write(phydev, 0x12, 0xe);
+@@ -715,15 +720,56 @@ static void mt798x_phy_common_finetune(struct phy_device *phydev)
+ 	__phy_write(phydev, 0x12, 0x0);
+ 	__phy_write(phydev, 0x10, 0x83aa);
+ 
+-	/* TrFreeze = 0 */
++	/* FfeUpdGainForce = 1(Enable), FfeUpdGainForceVal = 4 */
++	__phy_write(phydev, 0x11, 0x240);
++	__phy_write(phydev, 0x12, 0x0);
++	__phy_write(phydev, 0x10, 0x9680);
++
++	/* TrFreeze = 0 (mt7988 default) */
+ 	__phy_write(phydev, 0x11, 0x0);
+ 	__phy_write(phydev, 0x12, 0x0);
+ 	__phy_write(phydev, 0x10, 0x9686);
+ 
++	/* SSTrKp100 = 5 */
++	/* SSTrKf100 = 6 */
++	/* SSTrKp1000Mas = 5 */
++	/* SSTrKf1000Mas = 6 */
+ 	/* SSTrKp1000Slv = 5 */
++	/* SSTrKf1000Slv = 6 */
+ 	__phy_write(phydev, 0x11, 0xbaef);
+ 	__phy_write(phydev, 0x12, 0x2e);
+ 	__phy_write(phydev, 0x10, 0x968c);
++	phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0);
++}
++
++static void mt7981_phy_finetune(struct phy_device *phydev)
++{
++	u16 val[8] = { 0x01ce, 0x01c1,
++		       0x020f, 0x0202,
++		       0x03d0, 0x03c0,
++		       0x0013, 0x0005 };
++	int i, k;
++
++	/* 100M eye finetune:
++	 * Keep middle level of TX MLT3 shaper as default.
++	 * Only change TX MLT3 overshoot level here.
++	 */
++	for (k = 0, i = 1; i < 12; i++) {
++		if (i % 3 == 0)
++			continue;
++		phy_write_mmd(phydev, MDIO_MMD_VEND1, i, val[k++]);
++	}
++
++	phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5);
++	/* ResetSyncOffset = 6 */
++	__phy_write(phydev, 0x11, 0x600);
++	__phy_write(phydev, 0x12, 0x0);
++	__phy_write(phydev, 0x10, 0x8fc0);
++
++	/* VgaDecRate = 1 */
++	__phy_write(phydev, 0x11, 0x4c2a);
++	__phy_write(phydev, 0x12, 0x3e);
++	__phy_write(phydev, 0x10, 0x8fa4);
+ 
+ 	/* MrvlTrFix100Kp = 3, MrvlTrFix100Kf = 2,
+ 	 * MrvlTrFix1000Kp = 3, MrvlTrFix1000Kf = 2
+@@ -738,7 +784,7 @@ static void mt798x_phy_common_finetune(struct phy_device *phydev)
+ 	__phy_write(phydev, 0x10, 0x8ec0);
+ 	phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0);
+ 
+-	/* TR_OPEN_LOOP_EN = 1, lpf_x_average = 9*/
++	/* TR_OPEN_LOOP_EN = 1, lpf_x_average = 9 */
+ 	phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG234,
+ 		       MTK_PHY_TR_OPEN_LOOP_EN_MASK | MTK_PHY_LPF_X_AVERAGE_MASK,
+ 		       BIT(0) | FIELD_PREP(MTK_PHY_LPF_X_AVERAGE_MASK, 0x9));
+@@ -771,48 +817,6 @@ static void mt798x_phy_common_finetune(struct phy_device *phydev)
+ 	phy_write_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_LDO_OUTPUT_V, 0x2222);
+ }
+ 
+-static void mt7981_phy_finetune(struct phy_device *phydev)
+-{
+-	u16 val[8] = { 0x01ce, 0x01c1,
+-		       0x020f, 0x0202,
+-		       0x03d0, 0x03c0,
+-		       0x0013, 0x0005 };
+-	int i, k;
+-
+-	/* 100M eye finetune:
+-	 * Keep middle level of TX MLT3 shapper as default.
+-	 * Only change TX MLT3 overshoot level here.
+-	 */
+-	for (k = 0, i = 1; i < 12; i++) {
+-		if (i % 3 == 0)
+-			continue;
+-		phy_write_mmd(phydev, MDIO_MMD_VEND1, i, val[k++]);
+-	}
+-
+-	phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5);
+-	/* SlvDSPreadyTime = 24, MasDSPreadyTime = 24 */
+-	__phy_write(phydev, 0x11, 0xc71);
+-	__phy_write(phydev, 0x12, 0xc);
+-	__phy_write(phydev, 0x10, 0x8fae);
+-
+-	/* ResetSyncOffset = 6 */
+-	__phy_write(phydev, 0x11, 0x600);
+-	__phy_write(phydev, 0x12, 0x0);
+-	__phy_write(phydev, 0x10, 0x8fc0);
+-
+-	/* VgaDecRate = 1 */
+-	__phy_write(phydev, 0x11, 0x4c2a);
+-	__phy_write(phydev, 0x12, 0x3e);
+-	__phy_write(phydev, 0x10, 0x8fa4);
+-
+-	/* FfeUpdGainForce = 4 */
+-	__phy_write(phydev, 0x11, 0x240);
+-	__phy_write(phydev, 0x12, 0x0);
+-	__phy_write(phydev, 0x10, 0x9680);
+-
+-	phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0);
+-}
+-
+ static void mt7988_phy_finetune(struct phy_device *phydev)
+ {
+ 	u16 val[12] = { 0x0187, 0x01cd, 0x01c8, 0x0182,
+@@ -827,17 +831,7 @@ static void mt7988_phy_finetune(struct phy_device *phydev)
+ 	/* TCT finetune */
+ 	phy_write_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_TX_FILTER, 0x5);
+ 
+-	/* Disable TX power saving */
+-	phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RXADC_CTRL_RG7,
+-		       MTK_PHY_DA_AD_BUF_BIAS_LP_MASK, 0x3 << 8);
+-
+ 	phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5);
+-
+-	/* SlvDSPreadyTime = 24, MasDSPreadyTime = 12 */
+-	__phy_write(phydev, 0x11, 0x671);
+-	__phy_write(phydev, 0x12, 0xc);
+-	__phy_write(phydev, 0x10, 0x8fae);
+-
+ 	/* ResetSyncOffset = 5 */
+ 	__phy_write(phydev, 0x11, 0x500);
+ 	__phy_write(phydev, 0x12, 0x0);
+@@ -845,13 +839,27 @@ static void mt7988_phy_finetune(struct phy_device *phydev)
+ 
+ 	/* VgaDecRate is 1 at default on mt7988 */
+ 
+-	phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0);
++	/* MrvlTrFix100Kp = 6, MrvlTrFix100Kf = 7,
++	 * MrvlTrFix1000Kp = 6, MrvlTrFix1000Kf = 7
++	 */
++	__phy_write(phydev, 0x11, 0xb90a);
++	__phy_write(phydev, 0x12, 0x6f);
++	__phy_write(phydev, 0x10, 0x8f82);
++
++	/* RemAckCntLimitCtrl = 1 */
++	__phy_write(phydev, 0x11, 0xfbba);
++	__phy_write(phydev, 0x12, 0xc3);
++	__phy_write(phydev, 0x10, 0x87f8);
+ 
+-	phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_2A30);
+-	/* TxClkOffset = 2 */
+-	__phy_modify(phydev, MTK_PHY_ANARG_RG, MTK_PHY_TCLKOFFSET_MASK,
+-		     FIELD_PREP(MTK_PHY_TCLKOFFSET_MASK, 0x2));
+ 	phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0);
++
++	/* TR_OPEN_LOOP_EN = 1, lpf_x_average = 10 */
++	phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG234,
++		       MTK_PHY_TR_OPEN_LOOP_EN_MASK | MTK_PHY_LPF_X_AVERAGE_MASK,
++		       BIT(0) | FIELD_PREP(MTK_PHY_LPF_X_AVERAGE_MASK, 0xa));
++
++	/* rg_tr_lpf_cnt_val = 1023 */
++	phy_write_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_LPF_CNT_VAL, 0x3ff);
+ }
+ 
+ static void mt798x_phy_eee(struct phy_device *phydev)
+@@ -884,11 +892,11 @@ static void mt798x_phy_eee(struct phy_device *phydev)
+ 		       MTK_PHY_LPI_SLV_SEND_TX_EN,
+ 		       FIELD_PREP(MTK_PHY_LPI_SLV_SEND_TX_TIMER_MASK, 0x120));
+ 
+-	phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG239,
+-		       MTK_PHY_LPI_SEND_LOC_TIMER_MASK |
+-		       MTK_PHY_LPI_TXPCS_LOC_RCV,
+-		       FIELD_PREP(MTK_PHY_LPI_SEND_LOC_TIMER_MASK, 0x117));
++	/* Keep MTK_PHY_LPI_SEND_LOC_TIMER as 375 */
++	phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG239,
++			   MTK_PHY_LPI_TXPCS_LOC_RCV);
+ 
++	/* This also fixes some IoT issues, such as CH340 */
+ 	phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG2C7,
+ 		       MTK_PHY_MAX_GAIN_MASK | MTK_PHY_MIN_GAIN_MASK,
+ 		       FIELD_PREP(MTK_PHY_MAX_GAIN_MASK, 0x8) |
+@@ -922,7 +930,7 @@ static void mt798x_phy_eee(struct phy_device *phydev)
+ 	__phy_write(phydev, 0x12, 0x0);
+ 	__phy_write(phydev, 0x10, 0x9690);
+ 
+-	/* REG_EEE_st2TrKf1000 = 3 */
++	/* REG_EEE_st2TrKf1000 = 2 */
+ 	__phy_write(phydev, 0x11, 0x114f);
+ 	__phy_write(phydev, 0x12, 0x2);
+ 	__phy_write(phydev, 0x10, 0x969a);
+@@ -947,7 +955,7 @@ static void mt798x_phy_eee(struct phy_device *phydev)
+ 	__phy_write(phydev, 0x12, 0x0);
+ 	__phy_write(phydev, 0x10, 0x96b8);
+ 
+-	/* REGEEE_wake_slv_tr_wait_dfesigdet_en = 1 */
++	/* REGEEE_wake_slv_tr_wait_dfesigdet_en = 0 */
+ 	__phy_write(phydev, 0x11, 0x1463);
+ 	__phy_write(phydev, 0x12, 0x0);
+ 	__phy_write(phydev, 0x10, 0x96ca);
+@@ -1459,6 +1467,13 @@ static int mt7988_phy_probe(struct phy_device *phydev)
+ 	if (err)
+ 		return err;
+ 
++	/* Disable TX power saving at probing to:
++	 * 1. Meet common mode compliance test criteria
++	 * 2. Make sure that TX-VCM calibration works fine
++	 */
++	phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RXADC_CTRL_RG7,
++		       MTK_PHY_DA_AD_BUF_BIAS_LP_MASK, 0x3 << 8);
++
+ 	return mt798x_phy_calibration(phydev);
+ }
+ 
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 858175ca58cd8..ca2db4adcb35c 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -3650,12 +3650,8 @@ static int lan8841_ts_info(struct mii_timestamper *mii_ts,
+ 
+ 	info->phc_index = ptp_priv->ptp_clock ?
+ 				ptp_clock_index(ptp_priv->ptp_clock) : -1;
+-	if (info->phc_index == -1) {
+-		info->so_timestamping |= SOF_TIMESTAMPING_TX_SOFTWARE |
+-					 SOF_TIMESTAMPING_RX_SOFTWARE |
+-					 SOF_TIMESTAMPING_SOFTWARE;
++	if (info->phc_index == -1)
+ 		return 0;
+-	}
+ 
+ 	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ 				SOF_TIMESTAMPING_RX_HARDWARE |
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 5a1bf42ce1566..d837c18874161 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1315,8 +1315,6 @@ static int ax88179_bind(struct usbnet *dev, struct usb_interface *intf)
+ 
+ 	netif_set_tso_max_size(dev->net, 16384);
+ 
+-	ax88179_reset(dev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 51b1868d2f220..1caf21fd5032c 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -4096,10 +4096,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
+ {
+ 	vq_callback_t **callbacks;
+ 	struct virtqueue **vqs;
+-	int ret = -ENOMEM;
+-	int i, total_vqs;
+ 	const char **names;
++	int ret = -ENOMEM;
++	int total_vqs;
+ 	bool *ctx;
++	u16 i;
+ 
+ 	/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
+ 	 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
+@@ -4136,8 +4137,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
+ 	for (i = 0; i < vi->max_queue_pairs; i++) {
+ 		callbacks[rxq2vq(i)] = skb_recv_done;
+ 		callbacks[txq2vq(i)] = skb_xmit_done;
+-		sprintf(vi->rq[i].name, "input.%d", i);
+-		sprintf(vi->sq[i].name, "output.%d", i);
++		sprintf(vi->rq[i].name, "input.%u", i);
++		sprintf(vi->sq[i].name, "output.%u", i);
+ 		names[rxq2vq(i)] = vi->rq[i].name;
+ 		names[txq2vq(i)] = vi->sq[i].name;
+ 		if (ctx)
+diff --git a/drivers/net/wireless/ath/ath11k/pcic.c b/drivers/net/wireless/ath/ath11k/pcic.c
+index 16d1e332193f0..e602d4130105f 100644
+--- a/drivers/net/wireless/ath/ath11k/pcic.c
++++ b/drivers/net/wireless/ath/ath11k/pcic.c
+@@ -460,8 +460,6 @@ void ath11k_pcic_ext_irq_enable(struct ath11k_base *ab)
+ {
+ 	int i;
+ 
+-	set_bit(ATH11K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
+-
+ 	for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ 		struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+ 
+@@ -471,6 +469,8 @@ void ath11k_pcic_ext_irq_enable(struct ath11k_base *ab)
+ 		}
+ 		ath11k_pcic_ext_grp_enable(irq_grp);
+ 	}
++
++	set_bit(ATH11K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
+ }
+ EXPORT_SYMBOL(ath11k_pcic_ext_irq_enable);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
+index eca86fc25a608..b896dfe66dad3 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.c
++++ b/drivers/net/wireless/ath/ath12k/hal.c
+@@ -889,8 +889,8 @@ static u8 *ath12k_hw_wcn7850_rx_desc_mpdu_start_addr2(struct hal_rx_desc *desc)
+ 
+ static bool ath12k_hw_wcn7850_rx_desc_is_da_mcbc(struct hal_rx_desc *desc)
+ {
+-	return __le16_to_cpu(desc->u.wcn7850.msdu_end.info5) &
+-	       RX_MSDU_END_INFO5_DA_IS_MCBC;
++	return __le32_to_cpu(desc->u.wcn7850.msdu_end.info13) &
++	       RX_MSDU_END_INFO13_MCAST_BCAST;
+ }
+ 
+ static void ath12k_hw_wcn7850_rx_desc_get_dot11_hdr(struct hal_rx_desc *desc,
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index 2245fb510ba2c..b55cf33e37bd8 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -949,7 +949,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
+ 		.rx_mac_buf_ring = true,
+ 		.vdev_start_delay = true,
+ 
+-		.interface_modes = BIT(NL80211_IFTYPE_STATION),
++		.interface_modes = BIT(NL80211_IFTYPE_STATION) |
++				   BIT(NL80211_IFTYPE_AP),
+ 		.supports_monitor = false,
+ 
+ 		.idle_ps = true,
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index fc0d14ea328e6..b698e55a5b7bf 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -6380,8 +6380,8 @@ ath12k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 	}
+ 
+ 	if (ab->hw_params->vdev_start_delay &&
+-	    (arvif->vdev_type == WMI_VDEV_TYPE_AP ||
+-	    arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)) {
++	    arvif->vdev_type != WMI_VDEV_TYPE_AP &&
++	    arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
+ 		param.vdev_id = arvif->vdev_id;
+ 		param.peer_type = WMI_PEER_TYPE_DEFAULT;
+ 		param.peer_addr = ar->mac_addr;
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 800177021baff..efcaeccb055aa 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -652,9 +652,10 @@ void ath9k_htc_txstatus(struct ath9k_htc_priv *priv, void *wmi_event)
+ 	struct ath9k_htc_tx_event *tx_pend;
+ 	int i;
+ 
+-	for (i = 0; i < txs->cnt; i++) {
+-		WARN_ON(txs->cnt > HTC_MAX_TX_STATUS);
++	if (WARN_ON_ONCE(txs->cnt > HTC_MAX_TX_STATUS))
++		return;
+ 
++	for (i = 0; i < txs->cnt; i++) {
+ 		__txs = &txs->txstatus[i];
+ 
+ 		skb = ath9k_htc_tx_get_packet(priv, __txs);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index ae6bf3c968dfb..b475555097ff2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1359,7 +1359,7 @@ u8 mt76_connac_get_phy_mode_ext(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	sband = phy->hw->wiphy->bands[band];
+ 	eht_cap = ieee80211_get_eht_iftype_cap(sband, vif->type);
+ 
+-	if (!eht_cap || !eht_cap->has_eht)
++	if (!eht_cap || !eht_cap->has_eht || !vif->bss_conf.eht_support)
+ 		return mode;
+ 
+ 	switch (band) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/pci.c b/drivers/net/wireless/mediatek/mt76/mt7996/pci.c
+index c5301050ff8b3..67c015896243f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/pci.c
+@@ -17,11 +17,13 @@ static u32 hif_idx;
+ 
+ static const struct pci_device_id mt7996_pci_device_table[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7990) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7992) },
+ 	{ },
+ };
+ 
+ static const struct pci_device_id mt7996_hif_device_table[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7991) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x799a) },
+ 	{ },
+ };
+ 
+@@ -60,7 +62,9 @@ static void mt7996_put_hif2(struct mt7996_hif *hif)
+ static struct mt7996_hif *mt7996_pci_init_hif2(struct pci_dev *pdev)
+ {
+ 	hif_idx++;
+-	if (!pci_get_device(PCI_VENDOR_ID_MEDIATEK, 0x7991, NULL))
++
++	if (!pci_get_device(PCI_VENDOR_ID_MEDIATEK, 0x7991, NULL) &&
++	    !pci_get_device(PCI_VENDOR_ID_MEDIATEK, 0x799a, NULL))
+ 		return NULL;
+ 
+ 	writel(hif_idx | MT_PCIE_RECOG_ID_SEM,
+@@ -113,7 +117,7 @@ static int mt7996_pci_probe(struct pci_dev *pdev,
+ 
+ 	mt76_pci_disable_aspm(pdev);
+ 
+-	if (id->device == 0x7991)
++	if (id->device == 0x7991 || id->device == 0x799a)
+ 		return mt7996_pci_hif2_probe(pdev);
+ 
+ 	dev = mt7996_mmio_probe(&pdev->dev, pcim_iomap_table(pdev)[0],
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+index ee880f749b3c6..1926ffdffb4f0 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+@@ -8659,7 +8659,7 @@ static void rt2800_rxdcoc_calibration(struct rt2x00_dev *rt2x00dev)
+ 	rt2800_rfcsr_write_bank(rt2x00dev, 5, 4, saverfb5r4);
+ 	rt2800_rfcsr_write_bank(rt2x00dev, 7, 4, saverfb7r4);
+ 
+-	rt2800_bbp_write(rt2x00dev, 158, 141);
++	rt2800_bbp_write(rt2x00dev, 158, 140);
+ 	bbpreg = rt2800_bbp_read(rt2x00dev, 159);
+ 	bbpreg = bbpreg & (~0x40);
+ 	rt2800_bbp_write(rt2x00dev, 159, bbpreg);
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+index c88ce446e1175..9e7d9dbe954cf 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+@@ -101,6 +101,7 @@ void rt2x00lib_disable_radio(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00link_stop_tuner(rt2x00dev);
+ 	rt2x00queue_stop_queues(rt2x00dev);
+ 	rt2x00queue_flush_queues(rt2x00dev, true);
++	rt2x00queue_stop_queue(rt2x00dev->bcn);
+ 
+ 	/*
+ 	 * Disable radio.
+@@ -1286,6 +1287,7 @@ int rt2x00lib_start(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00dev->intf_ap_count = 0;
+ 	rt2x00dev->intf_sta_count = 0;
+ 	rt2x00dev->intf_associated = 0;
++	rt2x00dev->intf_beaconing = 0;
+ 
+ 	/* Enable the radio */
+ 	retval = rt2x00lib_enable_radio(rt2x00dev);
+@@ -1312,6 +1314,7 @@ void rt2x00lib_stop(struct rt2x00_dev *rt2x00dev)
+ 	rt2x00dev->intf_ap_count = 0;
+ 	rt2x00dev->intf_sta_count = 0;
+ 	rt2x00dev->intf_associated = 0;
++	rt2x00dev->intf_beaconing = 0;
+ }
+ 
+ static inline void rt2x00lib_set_if_combinations(struct rt2x00_dev *rt2x00dev)
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+index 4202c65177839..75fda72c14ca9 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+@@ -598,6 +598,17 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw,
+ 	 */
+ 	if (changes & BSS_CHANGED_BEACON_ENABLED) {
+ 		mutex_lock(&intf->beacon_skb_mutex);
++
++		/*
++		 * Clear the 'enable_beacon' flag and clear beacon because
++		 * the beacon queue has been stopped after hardware reset.
++		 */
++		if (test_bit(DEVICE_STATE_RESET, &rt2x00dev->flags) &&
++		    intf->enable_beacon) {
++			intf->enable_beacon = false;
++			rt2x00queue_clear_beacon(rt2x00dev, vif);
++		}
++
+ 		if (!bss_conf->enable_beacon && intf->enable_beacon) {
+ 			rt2x00dev->intf_beaconing--;
+ 			intf->enable_beacon = false;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 43ee7592bc6e1..180907319e8cd 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -7961,6 +7961,18 @@ static const struct usb_device_id dev_table[] = {
+ 	.driver_info = (unsigned long)&rtl8192eu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x818c, 0xff, 0xff, 0xff),
+ 	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* D-Link DWA-131 rev C1 */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x3312, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* TP-Link TL-WN8200ND V2 */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2357, 0x0126, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* Mercusys MW300UM */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x0100, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
++/* Mercusys MW300UH */
++{USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x0104, 0xff, 0xff, 0xff),
++	.driver_info = (unsigned long)&rtl8192eu_fops},
+ #endif
+ { }
+ };
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c
+index fe9b407dc2aff..71e29b103da5a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c
+@@ -49,7 +49,7 @@ u32 rtl8723e_phy_query_rf_reg(struct ieee80211_hw *hw,
+ 							    rfpath, regaddr);
+ 	}
+ 
+-	bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -80,7 +80,7 @@ void rtl8723e_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 			original_value = rtl8723_phy_rf_serial_read(hw,
+ 								    rfpath,
+ 								    regaddr);
+-			bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+@@ -89,7 +89,7 @@ void rtl8723e_phy_set_rf_reg(struct ieee80211_hw *hw,
+ 		rtl8723_phy_rf_serial_write(hw, rfpath, regaddr, data);
+ 	} else {
+ 		if (bitmask != RFREG_OFFSET_MASK) {
+-			bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data =
+ 			    ((original_value & (~bitmask)) |
+ 			     (data << bitshift));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c
+index 2b9313cb93dbd..094cb36153f5a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c
+@@ -41,7 +41,7 @@ u32 rtl8723be_phy_query_rf_reg(struct ieee80211_hw *hw, enum radio_path rfpath,
+ 	spin_lock(&rtlpriv->locks.rf_lock);
+ 
+ 	original_value = rtl8723_phy_rf_serial_read(hw, rfpath, regaddr);
+-	bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++	bitshift = calculate_bit_shift(bitmask);
+ 	readback_value = (original_value & bitmask) >> bitshift;
+ 
+ 	spin_unlock(&rtlpriv->locks.rf_lock);
+@@ -68,7 +68,7 @@ void rtl8723be_phy_set_rf_reg(struct ieee80211_hw *hw, enum radio_path path,
+ 	if (bitmask != RFREG_OFFSET_MASK) {
+ 			original_value = rtl8723_phy_rf_serial_read(hw, path,
+ 								    regaddr);
+-			bitshift = rtl8723_phy_calculate_bit_shift(bitmask);
++			bitshift = calculate_bit_shift(bitmask);
+ 			data = ((original_value & (~bitmask)) |
+ 				(data << bitshift));
+ 		}
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
+index bdcc172639e4e..ace7bbf2cf943 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.c
++++ b/drivers/net/wireless/realtek/rtw89/coex.c
+@@ -131,7 +131,7 @@ static const struct rtw89_btc_ver rtw89_btc_ver_defs[] = {
+ 	 .fcxbtcrpt = 105, .fcxtdma = 3,    .fcxslots = 1, .fcxcysta = 5,
+ 	 .fcxstep = 3,   .fcxnullsta = 2, .fcxmreg = 2,  .fcxgpiodbg = 1,
+ 	 .fcxbtver = 1,  .fcxbtscan = 2,  .fcxbtafh = 2, .fcxbtdevinfo = 1,
+-	 .fwlrole = 1,   .frptmap = 3,    .fcxctrl = 1,
++	 .fwlrole = 2,   .frptmap = 3,    .fcxctrl = 1,
+ 	 .info_buf = 1800, .max_role_num = 6,
+ 	},
+ 	{RTL8852C, RTW89_FW_VER_CODE(0, 27, 57, 0),
+@@ -159,7 +159,7 @@ static const struct rtw89_btc_ver rtw89_btc_ver_defs[] = {
+ 	 .fcxbtcrpt = 105, .fcxtdma = 3,  .fcxslots = 1, .fcxcysta = 5,
+ 	 .fcxstep = 3,   .fcxnullsta = 2, .fcxmreg = 2,  .fcxgpiodbg = 1,
+ 	 .fcxbtver = 1,  .fcxbtscan = 2,  .fcxbtafh = 2, .fcxbtdevinfo = 1,
+-	 .fwlrole = 1,   .frptmap = 3,    .fcxctrl = 1,
++	 .fwlrole = 2,   .frptmap = 3,    .fcxctrl = 1,
+ 	 .info_buf = 1800, .max_role_num = 6,
+ 	},
+ 	{RTL8852B, RTW89_FW_VER_CODE(0, 29, 14, 0),
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 3d75165e48bea..a3624ebf01b78 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -2886,7 +2886,7 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ 
+ 	if (hw->conf.flags & IEEE80211_CONF_IDLE)
+ 		ieee80211_queue_delayed_work(hw, &roc->roc_work,
+-					     RTW89_ROC_IDLE_TIMEOUT);
++					     msecs_to_jiffies(RTW89_ROC_IDLE_TIMEOUT));
+ }
+ 
+ void rtw89_roc_work(struct work_struct *work)
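
The core.c fix above wraps RTW89_ROC_IDLE_TIMEOUT in msecs_to_jiffies(): ieee80211_queue_delayed_work() expects a delay in jiffies, so passing a raw millisecond count stretches or shrinks the timeout by a factor of 1000/HZ. A minimal stand-alone sketch of the unit bug (the HZ value and timeout are invented for illustration; the real conversion lives in the kernel's jiffies.h):

    #include <stdio.h>

    #define HZ 250                     /* ticks per second (assumed) */
    #define ROC_IDLE_TIMEOUT_MS 500    /* hypothetical timeout */

    /* rough msecs_to_jiffies() stand-in: round up to a full tick */
    static unsigned long msecs_to_ticks(unsigned long ms)
    {
        return (ms * HZ + 999) / 1000;
    }

    int main(void)
    {
        /* Passing milliseconds where ticks are expected: 500 "ticks"
         * at HZ=250 actually lasts 2 seconds, not 500 ms. */
        printf("raw value as ticks lasts %d ms\n",
               ROC_IDLE_TIMEOUT_MS * 1000 / HZ);
        printf("converted properly: %lu ticks\n",
               msecs_to_ticks(ROC_IDLE_TIMEOUT_MS));
        return 0;
    }

At HZ=250 the unconverted constant would quadruple the intended idle timeout, which is the class of drift the one-line fix removes.
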
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index 91e4d4e79eeaf..8f59461e58f23 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -2294,12 +2294,6 @@ struct rtw89_btc_fbtc_fddt_cell_status {
+ 	u8 state_phase; /* [0:3] train state, [4:7] train phase */
+ } __packed;
+ 
+-struct rtw89_btc_fbtc_fddt_cell_status_v5 {
+-	s8 wl_tx_pwr;
+-	s8 bt_tx_pwr;
+-	s8 bt_rx_gain;
+-} __packed;
+-
+ struct rtw89_btc_fbtc_cysta_v3 { /* statistics for cycles */
+ 	u8 fver;
+ 	u8 rsvd;
+@@ -2363,9 +2357,9 @@ struct rtw89_btc_fbtc_cysta_v5 { /* statistics for cycles */
+ 	struct rtw89_btc_fbtc_cycle_a2dp_empty_info a2dp_ept;
+ 	struct rtw89_btc_fbtc_a2dp_trx_stat_v4 a2dp_trx[BTC_CYCLE_SLOT_MAX];
+ 	struct rtw89_btc_fbtc_cycle_fddt_info_v5 fddt_trx[BTC_CYCLE_SLOT_MAX];
+-	struct rtw89_btc_fbtc_fddt_cell_status_v5 fddt_cells[FDD_TRAIN_WL_DIRECTION]
+-							    [FDD_TRAIN_WL_RSSI_LEVEL]
+-							    [FDD_TRAIN_BT_RSSI_LEVEL];
++	struct rtw89_btc_fbtc_fddt_cell_status fddt_cells[FDD_TRAIN_WL_DIRECTION]
++							 [FDD_TRAIN_WL_RSSI_LEVEL]
++							 [FDD_TRAIN_BT_RSSI_LEVEL];
+ 	__le32 except_map;
+ } __packed;
+ 
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index a732c22a2d549..313ed4c45464b 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -4043,6 +4043,7 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ 	rtw89_core_scan_complete(rtwdev, vif, true);
+ 	ieee80211_scan_completed(rtwdev->hw, &info);
+ 	ieee80211_wake_queues(rtwdev->hw);
++	rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true);
+ 
+ 	rtw89_release_pkt_list(rtwdev);
+ 	rtwvif = (struct rtw89_vif *)vif->drv_priv;
+@@ -4060,6 +4061,19 @@ void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+ 	rtw89_hw_scan_complete(rtwdev, vif, true);
+ }
+ 
++static bool rtw89_is_any_vif_connected_or_connecting(struct rtw89_dev *rtwdev)
++{
++	struct rtw89_vif *rtwvif;
++
++	rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++		/* This variable implies connected or during attempt to connect */
++		if (!is_zero_ether_addr(rtwvif->bssid))
++			return true;
++	}
++
++	return false;
++}
++
+ int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ 			  bool enable)
+ {
+@@ -4072,8 +4086,7 @@ int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ 	if (!rtwvif)
+ 		return -EINVAL;
+ 
+-	/* This variable implies connected or during attempt to connect */
+-	connected = !is_zero_ether_addr(rtwvif->bssid);
++	connected = rtw89_is_any_vif_connected_or_connecting(rtwdev);
+ 	opt.enable = enable;
+ 	opt.target_ch_mode = connected;
+ 	if (enable) {
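
The fw.c change above stops deciding scan-offload behaviour from the requesting interface alone and instead asks whether any interface has a non-zero BSSID. A small user-space model of that any-of predicate (the struct layout and helper names here are made up for the sketch, not the driver's types):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct vif { unsigned char bssid[6]; };

    static bool is_zero_ether_addr(const unsigned char *a)
    {
        static const unsigned char zero[6];
        return memcmp(a, zero, 6) == 0;
    }

    /* connected or connecting if any interface carries a BSSID */
    static bool any_vif_connected(const struct vif *v, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (!is_zero_ether_addr(v[i].bssid))
                return true;
        return false;
    }

    int main(void)
    {
        struct vif vifs[2] = {0};

        printf("%d\n", any_vif_connected(vifs, 2));  /* 0: all idle */
        vifs[1].bssid[0] = 0xaa;                     /* second vif associates */
        printf("%d\n", any_vif_connected(vifs, 2));  /* 1 */
        return 0;
    }
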
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index 0c5768f41d55d..d0c7de4e80dcf 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -3747,6 +3747,50 @@ static const struct rtw89_port_reg rtw89_port_base_ax = {
+ 		    R_AX_PORT_HGQ_WINDOW_CFG + 3},
+ };
+ 
++static void rtw89_mac_check_packet_ctrl(struct rtw89_dev *rtwdev,
++					struct rtw89_vif *rtwvif, u8 type)
++{
++	u8 mask = B_AX_PTCL_DBG_INFO_MASK_BY_PORT(rtwvif->port);
++	u32 reg_info, reg_ctrl;
++	u32 val;
++	int ret;
++
++	reg_info = rtw89_mac_reg_by_idx(rtwdev, R_AX_PTCL_DBG_INFO, rtwvif->mac_idx);
++	reg_ctrl = rtw89_mac_reg_by_idx(rtwdev, R_AX_PTCL_DBG, rtwvif->mac_idx);
++
++	rtw89_write32_mask(rtwdev, reg_ctrl, B_AX_PTCL_DBG_SEL_MASK, type);
++	rtw89_write32_set(rtwdev, reg_ctrl, B_AX_PTCL_DBG_EN);
++	fsleep(100);
++
++	ret = read_poll_timeout(rtw89_read32_mask, val, val == 0, 1000, 100000,
++				true, rtwdev, reg_info, mask);
++	if (ret)
++		rtw89_warn(rtwdev, "Polling beacon packet empty fail\n");
++}
++
++static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++{
++	const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
++	const struct rtw89_port_reg *p = mac->port_base;
++
++	rtw89_write32_set(rtwdev, R_AX_BCN_DROP_ALL0, BIT(rtwvif->port));
++	rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK, 1);
++	rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area, B_AX_BCN_MSK_AREA_MASK, 0);
++	rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 0);
++	rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK, 2);
++	rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK, 1);
++	rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK, 1);
++	rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
++
++	rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM0);
++	if (rtwvif->port == RTW89_PORT_0)
++		rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM1);
++
++	rtw89_write32_clr(rtwdev, R_AX_BCN_DROP_ALL0, BIT(rtwvif->port));
++	rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TBTT_PROHIB_EN);
++	fsleep(2);
++}
++
+ #define BCN_INTERVAL 100
+ #define BCN_ERLY_DEF 160
+ #define BCN_SETUP_DEF 2
+@@ -3762,21 +3806,36 @@ static void rtw89_mac_port_cfg_func_sw(struct rtw89_dev *rtwdev,
+ 	const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ 	const struct rtw89_port_reg *p = mac->port_base;
+ 	struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++	const struct rtw89_chip_info *chip = rtwdev->chip;
++	bool need_backup = false;
++	u32 backup_val;
+ 
+ 	if (!rtw89_read32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN))
+ 		return;
+ 
+-	rtw89_write32_port_clr(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK);
+-	rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 1);
+-	rtw89_write16_port_clr(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK);
+-	rtw89_write16_port_clr(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK);
++	if (chip->chip_id == RTL8852A && rtwvif->port != RTW89_PORT_0) {
++		need_backup = true;
++		backup_val = rtw89_read32_port(rtwdev, rtwvif, p->tbtt_prohib);
++	}
+ 
+-	msleep(vif->bss_conf.beacon_int + 1);
++	if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++		rtw89_mac_bcn_drop(rtwdev, rtwvif);
+ 
++	if (chip->chip_id == RTL8852A) {
++		rtw89_write32_port_clr(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK);
++		rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 1);
++		rtw89_write16_port_clr(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK);
++		rtw89_write16_port_clr(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK);
++	}
++
++	msleep(vif->bss_conf.beacon_int + 1);
+ 	rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN |
+ 							    B_AX_BRK_SETUP);
+ 	rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TSFTR_RST);
+ 	rtw89_write32_port(rtwdev, rtwvif, p->bcn_cnt_tmr, 0);
++
++	if (need_backup)
++		rtw89_write32_port(rtwdev, rtwvif, p->tbtt_prohib, backup_val);
+ }
+ 
+ static void rtw89_mac_port_cfg_tx_rpt(struct rtw89_dev *rtwdev,
+@@ -3857,12 +3916,10 @@ static void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
+ }
+ 
+ static void rtw89_mac_port_cfg_tx_sw(struct rtw89_dev *rtwdev,
+-				     struct rtw89_vif *rtwvif)
++				     struct rtw89_vif *rtwvif, bool en)
+ {
+ 	const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ 	const struct rtw89_port_reg *p = mac->port_base;
+-	bool en = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ||
+-		  rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
+ 
+ 	if (en)
+ 		rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
+@@ -3870,6 +3927,24 @@ static void rtw89_mac_port_cfg_tx_sw(struct rtw89_dev *rtwdev,
+ 		rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
+ }
+ 
++static void rtw89_mac_port_cfg_tx_sw_by_nettype(struct rtw89_dev *rtwdev,
++						struct rtw89_vif *rtwvif)
++{
++	bool en = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ||
++		  rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
++
++	rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en);
++}
++
++void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en)
++{
++	struct rtw89_vif *rtwvif;
++
++	rtw89_for_each_rtwvif(rtwdev, rtwvif)
++		if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++			rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en);
++}
++
+ static void rtw89_mac_port_cfg_bcn_intv(struct rtw89_dev *rtwdev,
+ 					struct rtw89_vif *rtwvif)
+ {
+@@ -4176,7 +4251,7 @@ int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ 	rtw89_mac_port_cfg_bcn_prct(rtwdev, rtwvif);
+ 	rtw89_mac_port_cfg_rx_sw(rtwdev, rtwvif);
+ 	rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif);
+-	rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif);
++	rtw89_mac_port_cfg_tx_sw_by_nettype(rtwdev, rtwvif);
+ 	rtw89_mac_port_cfg_bcn_intv(rtwdev, rtwvif);
+ 	rtw89_mac_port_cfg_hiq_win(rtwdev, rtwvif);
+ 	rtw89_mac_port_cfg_hiq_dtim(rtwdev, rtwvif);
+@@ -4261,7 +4336,7 @@ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+ 
+ void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ {
+-	rtw89_mac_port_cfg_func_en(rtwdev, rtwvif, false);
++	rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif);
+ }
+ 
+ int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+@@ -4338,8 +4413,10 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *c2h,
+ 
+ 	switch (reason) {
+ 	case RTW89_SCAN_LEAVE_CH_NOTIFY:
+-		if (rtw89_is_op_chan(rtwdev, band, chan))
++		if (rtw89_is_op_chan(rtwdev, band, chan)) {
++			rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, false);
+ 			ieee80211_stop_queues(rtwdev->hw);
++		}
+ 		return;
+ 	case RTW89_SCAN_END_SCAN_NOTIFY:
+ 		if (rtwvif && rtwvif->scan_req &&
+@@ -4357,6 +4434,7 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *c2h,
+ 		if (rtw89_is_op_chan(rtwdev, band, chan)) {
+ 			rtw89_assign_entity_chan(rtwdev, rtwvif->sub_entity_idx,
+ 						 &rtwdev->scan_info.op_chan);
++			rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true);
+ 			ieee80211_wake_queues(rtwdev->hw);
+ 		} else {
+ 			rtw89_chan_create(&new, chan, chan, band,
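
rtw89_mac_check_packet_ctrl() above leans on read_poll_timeout(), which re-reads a register until a condition holds or a deadline expires. A user-space rendering of the same pattern, with a plain variable standing in for the hardware register and the 1 ms / 100 ms poll parameters taken from the hunk (the queue-draining behaviour is simulated):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static volatile unsigned int fake_reg = 3;  /* pretend beacon-queue count */

    static unsigned int read_reg(void) { return fake_reg; }

    static int poll_until_zero(unsigned long sleep_us, unsigned long timeout_us)
    {
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;) {
            if (read_reg() == 0)
                return 0;                        /* condition met */
            clock_gettime(CLOCK_MONOTONIC, &now);
            long elapsed_us = (now.tv_sec - start.tv_sec) * 1000000L +
                              (now.tv_nsec - start.tv_nsec) / 1000L;
            if (elapsed_us >= (long)timeout_us)
                return -1;                       /* -ETIMEDOUT equivalent */
            usleep(sleep_us);
            fake_reg--;                          /* simulate hardware draining */
        }
    }

    int main(void)
    {
        /* mirrors read_poll_timeout(..., 1000, 100000, ...):
         * poll every 1 ms, give up after 100 ms */
        printf("poll result: %d\n", poll_until_zero(1000, 100000));
        return 0;
    }
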
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
+index c11c904f87fe2..f9fef678f314d 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.h
++++ b/drivers/net/wireless/realtek/rtw89/mac.h
+@@ -992,6 +992,7 @@ int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+ 					struct ieee80211_vif *vif);
+ void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en);
+ int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
+ int rtw89_mac_enable_bb_rf(struct rtw89_dev *rtwdev);
+ int rtw89_mac_disable_bb_rf(struct rtw89_dev *rtwdev);
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 31d1f78916751..b7ceaf5595ebe 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -477,6 +477,9 @@ static int rtw89_ops_start_ap(struct ieee80211_hw *hw,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (rtwdev->scanning)
++		rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
++
+ 	ether_addr_copy(rtwvif->bssid, vif->bss_conf.bssid);
+ 	rtw89_cam_bssid_changed(rtwdev, rtwvif);
+ 	rtw89_mac_port_update(rtwdev, rtwvif);
+diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
+index ccd5481e8a3dc..672010b9e026b 100644
+--- a/drivers/net/wireless/realtek/rtw89/reg.h
++++ b/drivers/net/wireless/realtek/rtw89/reg.h
+@@ -2375,6 +2375,14 @@
+ #define R_AX_TSFTR_HIGH_P4 0xC53C
+ #define B_AX_TSFTR_HIGH_MASK GENMASK(31, 0)
+ 
++#define R_AX_BCN_DROP_ALL0 0xC560
++#define R_AX_BCN_DROP_ALL0_C1 0xE560
++#define B_AX_BCN_DROP_ALL_P4 BIT(4)
++#define B_AX_BCN_DROP_ALL_P3 BIT(3)
++#define B_AX_BCN_DROP_ALL_P2 BIT(2)
++#define B_AX_BCN_DROP_ALL_P1 BIT(1)
++#define B_AX_BCN_DROP_ALL_P0 BIT(0)
++
+ #define R_AX_MBSSID_CTRL 0xC568
+ #define R_AX_MBSSID_CTRL_C1 0xE568
+ #define B_AX_P0MB_ALL_MASK GENMASK(23, 1)
+@@ -2554,11 +2562,20 @@
+ 
+ #define R_AX_PTCL_DBG_INFO 0xC6F0
+ #define R_AX_PTCL_DBG_INFO_C1 0xE6F0
++#define B_AX_PTCL_DBG_INFO_MASK_BY_PORT(port) \
++({\
++	typeof(port) _port = (port); \
++	GENMASK((_port) * 2 + 1, (_port) * 2); \
++})
++
+ #define B_AX_PTCL_DBG_INFO_MASK GENMASK(31, 0)
+ #define R_AX_PTCL_DBG 0xC6F4
+ #define R_AX_PTCL_DBG_C1 0xE6F4
+ #define B_AX_PTCL_DBG_EN BIT(8)
+ #define B_AX_PTCL_DBG_SEL_MASK GENMASK(7, 0)
++#define AX_PTCL_DBG_BCNQ_NUM0 8
++#define AX_PTCL_DBG_BCNQ_NUM1 9
++
+ 
+ #define R_AX_DLE_CTRL 0xC800
+ #define R_AX_DLE_CTRL_C1 0xE800
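
The B_AX_PTCL_DBG_INFO_MASK_BY_PORT() macro above computes a 2-bit mask whose position depends on the port index, using a GNU C statement expression so the argument is evaluated only once. A self-contained rendering of the arithmetic (GENMASK reproduced here for a user-space build; the kernel's version differs in detail):

    #include <stdio.h>

    #define GENMASK(h, l) \
        (((~0UL) << (l)) & (~0UL >> (sizeof(unsigned long) * 8 - 1 - (h))))

    #define DBG_INFO_MASK_BY_PORT(port)        \
    ({                                         \
        unsigned int _port = (port);           \
        GENMASK(_port * 2 + 1, _port * 2);     \
    })

    int main(void)
    {
        for (unsigned int port = 0; port < 5; port++)
            printf("port %u -> mask 0x%03lx\n",
                   port, DBG_INFO_MASK_BY_PORT(port));
        /* port 0 -> 0x003, port 1 -> 0x00c, port 2 -> 0x030, ... */
        return 0;
    }
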
+diff --git a/drivers/net/wireless/silabs/wfx/sta.c b/drivers/net/wireless/silabs/wfx/sta.c
+index 1b6c158457b42..537caf9d914a7 100644
+--- a/drivers/net/wireless/silabs/wfx/sta.c
++++ b/drivers/net/wireless/silabs/wfx/sta.c
+@@ -336,29 +336,38 @@ static int wfx_upload_ap_templates(struct wfx_vif *wvif)
+ 	return 0;
+ }
+ 
+-static void wfx_set_mfp_ap(struct wfx_vif *wvif)
++static int wfx_set_mfp_ap(struct wfx_vif *wvif)
+ {
+ 	struct ieee80211_vif *vif = wvif_to_vif(wvif);
+ 	struct sk_buff *skb = ieee80211_beacon_get(wvif->wdev->hw, vif, 0);
+ 	const int ieoffset = offsetof(struct ieee80211_mgmt, u.beacon.variable);
+-	const u16 *ptr = (u16 *)cfg80211_find_ie(WLAN_EID_RSN, skb->data + ieoffset,
+-						 skb->len - ieoffset);
+ 	const int pairwise_cipher_suite_count_offset = 8 / sizeof(u16);
+ 	const int pairwise_cipher_suite_size = 4 / sizeof(u16);
+ 	const int akm_suite_size = 4 / sizeof(u16);
++	const u16 *ptr;
+ 
+-	if (ptr) {
+-		ptr += pairwise_cipher_suite_count_offset;
+-		if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
+-			return;
+-		ptr += 1 + pairwise_cipher_suite_size * *ptr;
+-		if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
+-			return;
+-		ptr += 1 + akm_suite_size * *ptr;
+-		if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
+-			return;
+-		wfx_hif_set_mfp(wvif, *ptr & BIT(7), *ptr & BIT(6));
+-	}
++	if (unlikely(!skb))
++		return -ENOMEM;
++
++	ptr = (u16 *)cfg80211_find_ie(WLAN_EID_RSN, skb->data + ieoffset,
++				      skb->len - ieoffset);
++	if (unlikely(!ptr))
++		return -EINVAL;
++
++	ptr += pairwise_cipher_suite_count_offset;
++	if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
++		return -EINVAL;
++
++	ptr += 1 + pairwise_cipher_suite_size * *ptr;
++	if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
++		return -EINVAL;
++
++	ptr += 1 + akm_suite_size * *ptr;
++	if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
++		return -EINVAL;
++
++	wfx_hif_set_mfp(wvif, *ptr & BIT(7), *ptr & BIT(6));
++	return 0;
+ }
+ 
+ int wfx_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+@@ -376,8 +385,7 @@ int wfx_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	ret = wfx_hif_start(wvif, &vif->bss_conf, wvif->channel);
+ 	if (ret > 0)
+ 		return -EIO;
+-	wfx_set_mfp_ap(wvif);
+-	return ret;
++	return wfx_set_mfp_ap(wvif);
+ }
+ 
+ void wfx_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
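
The wfx rework above replaces a void parser full of silent WARN_ON() exits with early error returns, and NULL-checks the beacon skb before use. Underneath it is a walk over two counted u16 lists inside the RSN IE, each stride bounds-checked before the next dereference. A rough stand-alone model of that walk (the payload below is fabricated to match the driver's offsets, not a real frame):

    #include <stdint.h>
    #include <stdio.h>

    static int parse_rsn(const uint16_t *ptr, const uint16_t *end)
    {
        ptr += 8 / sizeof(uint16_t);               /* skip to cipher count */
        if (ptr >= end)
            return -1;
        ptr += 1 + (4 / sizeof(uint16_t)) * *ptr;  /* skip cipher list */
        if (ptr >= end)
            return -1;
        ptr += 1 + (4 / sizeof(uint16_t)) * *ptr;  /* skip AKM list */
        if (ptr >= end)
            return -1;
        printf("RSN capabilities: 0x%04x\n", *ptr);
        return 0;
    }

    int main(void)
    {
        uint16_t ie[11] = {0};   /* ie[0..3]: EID/len/version/group cipher */

        ie[4] = 1;               /* one pairwise cipher suite */
        ie[7] = 1;               /* one AKM suite */
        ie[10] = 0x00c0;         /* capability word with bits 6 and 7 set */
        return parse_rsn(ie, ie + 11);
    }

Returning -1 at each overrun point is the sketch's equivalent of the driver's new -EINVAL paths.
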
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index b2971dd953358..f1e54a3a15c71 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -832,9 +832,14 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_find_level_ceil);
+  * use.
+  */
+ struct dev_pm_opp *dev_pm_opp_find_level_floor(struct device *dev,
+-					       unsigned long *level)
++					       unsigned int *level)
+ {
+-	return _find_key_floor(dev, level, 0, true, _read_level, NULL);
++	unsigned long temp = *level;
++	struct dev_pm_opp *opp;
++
++	opp = _find_key_floor(dev, &temp, 0, true, _read_level, NULL);
++	*level = temp;
++	return opp;
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_find_level_floor);
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index f43873049d52c..c6283ba781979 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -272,7 +272,7 @@ void pci_bus_put(struct pci_bus *bus);
+ 
+ /* PCIe speed to Mb/s reduced by encoding overhead */
+ #define PCIE_SPEED2MBS_ENC(speed) \
+-	((speed) == PCIE_SPEED_64_0GT ? 64000*128/130 : \
++	((speed) == PCIE_SPEED_64_0GT ? 64000*1/1 : \
+ 	 (speed) == PCIE_SPEED_32_0GT ? 32000*128/130 : \
+ 	 (speed) == PCIE_SPEED_16_0GT ? 16000*128/130 : \
+ 	 (speed) == PCIE_SPEED_8_0GT  ?  8000*128/130 : \
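
The pci.h correction above encodes that 64 GT/s (PCIe 6.0) no longer pays the 128b/130b line-coding tax of the 8-32 GT/s generations; with PAM4 signalling and flit mode the macro treats it as 1b/1b, leaving the factor at 1. A quick arithmetic check of the effective rates the macro now yields:

    #include <stdio.h>

    int main(void)
    {
        printf("64 GT/s : %d Mb/s\n", 64000 * 1 / 1);      /* 64000 */
        printf("32 GT/s : %d Mb/s\n", 32000 * 128 / 130);  /* 31507 */
        printf("16 GT/s : %d Mb/s\n", 16000 * 128 / 130);  /* 15753 */
        printf(" 8 GT/s : %d Mb/s\n",  8000 * 128 / 130);  /*  7876 */
        return 0;
    }
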
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index 42a3bd35a3e11..38e3346772cc7 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -740,7 +740,7 @@ static void aer_print_port_info(struct pci_dev *dev, struct aer_err_info *info)
+ 	u8 bus = info->id >> 8;
+ 	u8 devfn = info->id & 0xff;
+ 
+-	pci_info(dev, "%s%s error received: %04x:%02x:%02x.%d\n",
++	pci_info(dev, "%s%s error message received from %04x:%02x:%02x.%d\n",
+ 		 info->multi_error_valid ? "Multiple " : "",
+ 		 aer_error_severity_string[info->severity],
+ 		 pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn),
+@@ -929,7 +929,12 @@ static bool find_source_device(struct pci_dev *parent,
+ 		pci_walk_bus(parent->subordinate, find_device_iter, e_info);
+ 
+ 	if (!e_info->error_dev_num) {
+-		pci_info(parent, "can't find device of ID%04x\n", e_info->id);
++		u8 bus = e_info->id >> 8;
++		u8 devfn = e_info->id & 0xff;
++
++		pci_info(parent, "found no error details for %04x:%02x:%02x.%d\n",
++			 pci_domain_nr(parent->bus), bus, PCI_SLOT(devfn),
++			 PCI_FUNC(devfn));
+ 		return false;
+ 	}
+ 	return true;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d55a3ffae4b8b..a2bf6de11462f 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -702,10 +702,13 @@ static void quirk_amd_dwc_class(struct pci_dev *pdev)
+ {
+ 	u32 class = pdev->class;
+ 
+-	/* Use "USB Device (not host controller)" class */
+-	pdev->class = PCI_CLASS_SERIAL_USB_DEVICE;
+-	pci_info(pdev, "PCI class overridden (%#08x -> %#08x) so dwc3 driver can claim this instead of xhci\n",
+-		 class, pdev->class);
++	if (class != PCI_CLASS_SERIAL_USB_DEVICE) {
++		/* Use "USB Device (not host controller)" class */
++		pdev->class = PCI_CLASS_SERIAL_USB_DEVICE;
++		pci_info(pdev,
++			"PCI class overridden (%#08x -> %#08x) so dwc3 driver can claim this instead of xhci\n",
++			class, pdev->class);
++	}
+ }
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB,
+ 		quirk_amd_dwc_class);
+@@ -3786,6 +3789,19 @@ static void quirk_no_pm_reset(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
+ 			       PCI_CLASS_DISPLAY_VGA, 8, quirk_no_pm_reset);
+ 
++/*
++ * Spectrum-{1,2,3,4} devices report that a D3hot->D0 transition causes a reset
++ * (i.e., they advertise NoSoftRst-). However, this transition does not have
++ * any effect on the device: It continues to be operational and network ports
++ * remain up. Advertising this support makes it seem as if a PM reset is viable
++ * for these devices. Mark it as unavailable to skip it when testing reset
++ * methods.
++ */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcb84, quirk_no_pm_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcf6c, quirk_no_pm_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcf70, quirk_no_pm_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcf80, quirk_no_pm_reset);
++
+ /*
+  * Thunderbolt controllers with broken MSI hotplug signaling:
+  * Entire 1st generation (Light Ridge, Eagle Ridge, Light Peak) and part
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index 5b921387eca61..1804794d0e686 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -1308,13 +1308,6 @@ static void stdev_release(struct device *dev)
+ {
+ 	struct switchtec_dev *stdev = to_stdev(dev);
+ 
+-	if (stdev->dma_mrpc) {
+-		iowrite32(0, &stdev->mmio_mrpc->dma_en);
+-		flush_wc_buf(stdev);
+-		writeq(0, &stdev->mmio_mrpc->dma_addr);
+-		dma_free_coherent(&stdev->pdev->dev, sizeof(*stdev->dma_mrpc),
+-				stdev->dma_mrpc, stdev->dma_mrpc_dma_addr);
+-	}
+ 	kfree(stdev);
+ }
+ 
+@@ -1358,7 +1351,7 @@ static struct switchtec_dev *stdev_create(struct pci_dev *pdev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	stdev->alive = true;
+-	stdev->pdev = pdev;
++	stdev->pdev = pci_dev_get(pdev);
+ 	INIT_LIST_HEAD(&stdev->mrpc_queue);
+ 	mutex_init(&stdev->mrpc_mutex);
+ 	stdev->mrpc_busy = 0;
+@@ -1391,6 +1384,7 @@ static struct switchtec_dev *stdev_create(struct pci_dev *pdev)
+ 	return stdev;
+ 
+ err_put:
++	pci_dev_put(stdev->pdev);
+ 	put_device(&stdev->dev);
+ 	return ERR_PTR(rc);
+ }
+@@ -1644,6 +1638,18 @@ static int switchtec_init_pci(struct switchtec_dev *stdev,
+ 	return 0;
+ }
+ 
++static void switchtec_exit_pci(struct switchtec_dev *stdev)
++{
++	if (stdev->dma_mrpc) {
++		iowrite32(0, &stdev->mmio_mrpc->dma_en);
++		flush_wc_buf(stdev);
++		writeq(0, &stdev->mmio_mrpc->dma_addr);
++		dma_free_coherent(&stdev->pdev->dev, sizeof(*stdev->dma_mrpc),
++				  stdev->dma_mrpc, stdev->dma_mrpc_dma_addr);
++		stdev->dma_mrpc = NULL;
++	}
++}
++
+ static int switchtec_pci_probe(struct pci_dev *pdev,
+ 			       const struct pci_device_id *id)
+ {
+@@ -1703,6 +1709,9 @@ static void switchtec_pci_remove(struct pci_dev *pdev)
+ 	ida_free(&switchtec_minor_ida, MINOR(stdev->dev.devt));
+ 	dev_info(&stdev->dev, "unregistered.\n");
+ 	stdev_kill(stdev);
++	switchtec_exit_pci(stdev);
++	pci_dev_put(stdev->pdev);
++	stdev->pdev = NULL;
+ 	put_device(&stdev->dev);
+ }
+ 
+diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
+index 6ca7be05229c1..0e80fdc9f9cad 100644
+--- a/drivers/perf/arm_pmuv3.c
++++ b/drivers/perf/arm_pmuv3.c
+@@ -169,7 +169,11 @@ armv8pmu_events_sysfs_show(struct device *dev,
+ 	PMU_EVENT_ATTR_ID(name, armv8pmu_events_sysfs_show, config)
+ 
+ static struct attribute *armv8_pmuv3_event_attrs[] = {
+-	ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR),
++	/*
++	 * Don't expose the sw_incr event in /sys. It's not usable as writes to
++	 * PMSWINC_EL0 will trap as PMUSERENR.{SW,EN}=={0,0} and event rotation
++	 * means we don't have a fixed event<->counter relationship regardless.
++	 */
+ 	ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL),
+ 	ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL),
+ 	ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL),
+diff --git a/drivers/pinctrl/intel/pinctrl-baytrail.c b/drivers/pinctrl/intel/pinctrl-baytrail.c
+index 3cd0798ee6313..f1af21dbd5fb2 100644
+--- a/drivers/pinctrl/intel/pinctrl-baytrail.c
++++ b/drivers/pinctrl/intel/pinctrl-baytrail.c
+@@ -918,13 +918,14 @@ static int byt_pin_config_set(struct pinctrl_dev *pctl_dev,
+ 			      unsigned int num_configs)
+ {
+ 	struct intel_pinctrl *vg = pinctrl_dev_get_drvdata(pctl_dev);
+-	unsigned int param, arg;
+ 	void __iomem *conf_reg = byt_gpio_reg(vg, offset, BYT_CONF0_REG);
+ 	void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG);
+ 	void __iomem *db_reg = byt_gpio_reg(vg, offset, BYT_DEBOUNCE_REG);
+ 	u32 conf, val, db_pulse, debounce;
++	enum pin_config_param param;
+ 	unsigned long flags;
+ 	int i, ret = 0;
++	u32 arg;
+ 
+ 	raw_spin_lock_irqsave(&byt_lock, flags);
+ 
+diff --git a/drivers/pnp/pnpacpi/rsparser.c b/drivers/pnp/pnpacpi/rsparser.c
+index 4f05f610391b0..c02ce0834c2cd 100644
+--- a/drivers/pnp/pnpacpi/rsparser.c
++++ b/drivers/pnp/pnpacpi/rsparser.c
+@@ -151,13 +151,13 @@ static int vendor_resource_matches(struct pnp_dev *dev,
+ static void pnpacpi_parse_allocated_vendor(struct pnp_dev *dev,
+ 				    struct acpi_resource_vendor_typed *vendor)
+ {
+-	if (vendor_resource_matches(dev, vendor, &hp_ccsr_uuid, 16)) {
+-		u64 start, length;
++	struct { u64 start, length; } range;
+ 
+-		memcpy(&start, vendor->byte_data, sizeof(start));
+-		memcpy(&length, vendor->byte_data + 8, sizeof(length));
+-
+-		pnp_add_mem_resource(dev, start, start + length - 1, 0);
++	if (vendor_resource_matches(dev, vendor, &hp_ccsr_uuid,
++				    sizeof(range))) {
++		memcpy(&range, vendor->byte_data, sizeof(range));
++		pnp_add_mem_resource(dev, range.start, range.start +
++				     range.length - 1, 0);
+ 	}
+ }
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 3137e40fcd3e0..a7b3e548ea5ac 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2918,7 +2918,8 @@ static int _regulator_enable(struct regulator *regulator)
+ 		/* Fallthrough on positive return values - already enabled */
+ 	}
+ 
+-	rdev->use_count++;
++	if (regulator->enable_count == 1)
++		rdev->use_count++;
+ 
+ 	return 0;
+ 
+@@ -2993,37 +2994,40 @@ static int _regulator_disable(struct regulator *regulator)
+ 
+ 	lockdep_assert_held_once(&rdev->mutex.base);
+ 
+-	if (WARN(rdev->use_count <= 0,
++	if (WARN(regulator->enable_count == 0,
+ 		 "unbalanced disables for %s\n", rdev_get_name(rdev)))
+ 		return -EIO;
+ 
+-	/* are we the last user and permitted to disable ? */
+-	if (rdev->use_count == 1 &&
+-	    (rdev->constraints && !rdev->constraints->always_on)) {
+-
+-		/* we are last user */
+-		if (regulator_ops_is_valid(rdev, REGULATOR_CHANGE_STATUS)) {
+-			ret = _notifier_call_chain(rdev,
+-						   REGULATOR_EVENT_PRE_DISABLE,
+-						   NULL);
+-			if (ret & NOTIFY_STOP_MASK)
+-				return -EINVAL;
+-
+-			ret = _regulator_do_disable(rdev);
+-			if (ret < 0) {
+-				rdev_err(rdev, "failed to disable: %pe\n", ERR_PTR(ret));
+-				_notifier_call_chain(rdev,
+-						REGULATOR_EVENT_ABORT_DISABLE,
++	if (regulator->enable_count == 1) {
++	/* disabling last enable_count from this regulator */
++		/* are we the last user and permitted to disable ? */
++		if (rdev->use_count == 1 &&
++		    (rdev->constraints && !rdev->constraints->always_on)) {
++
++			/* we are last user */
++			if (regulator_ops_is_valid(rdev, REGULATOR_CHANGE_STATUS)) {
++				ret = _notifier_call_chain(rdev,
++							   REGULATOR_EVENT_PRE_DISABLE,
++							   NULL);
++				if (ret & NOTIFY_STOP_MASK)
++					return -EINVAL;
++
++				ret = _regulator_do_disable(rdev);
++				if (ret < 0) {
++					rdev_err(rdev, "failed to disable: %pe\n", ERR_PTR(ret));
++					_notifier_call_chain(rdev,
++							REGULATOR_EVENT_ABORT_DISABLE,
++							NULL);
++					return ret;
++				}
++				_notifier_call_chain(rdev, REGULATOR_EVENT_DISABLE,
+ 						NULL);
+-				return ret;
+ 			}
+-			_notifier_call_chain(rdev, REGULATOR_EVENT_DISABLE,
+-					NULL);
+-		}
+ 
+-		rdev->use_count = 0;
+-	} else if (rdev->use_count > 1) {
+-		rdev->use_count--;
++			rdev->use_count = 0;
++		} else if (rdev->use_count > 1) {
++			rdev->use_count--;
++		}
+ 	}
+ 
+ 	if (ret == 0)
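
The regulator fix above separates the per-consumer enable_count from the device-wide use_count: use_count now only moves on a consumer's first enable and last disable, so a consumer calling enable() twice can no longer inflate the shared count and pin the supply on. A compact model of the corrected accounting (structures trimmed down to the two counters):

    #include <stdio.h>

    struct rdev { int use_count; };
    struct consumer { struct rdev *rdev; int enable_count; };

    static void consumer_enable(struct consumer *c)
    {
        c->enable_count++;
        if (c->enable_count == 1)
            c->rdev->use_count++;   /* first enable by this consumer */
    }

    static void consumer_disable(struct consumer *c)
    {
        if (c->enable_count == 0)
            return;                 /* unbalanced disable */
        if (c->enable_count == 1 && --c->rdev->use_count == 0)
            printf("regulator physically disabled\n");
        c->enable_count--;
    }

    int main(void)
    {
        struct rdev r = { 0 };
        struct consumer a = { &r, 0 }, b = { &r, 0 };

        consumer_enable(&a);
        consumer_enable(&a);    /* nested enable: use_count stays at 1 */
        consumer_enable(&b);
        consumer_disable(&a);
        consumer_disable(&a);
        consumer_disable(&b);   /* last user: the disable message prints */
        printf("use_count=%d\n", r.use_count);
        return 0;
    }
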
+diff --git a/drivers/regulator/ti-abb-regulator.c b/drivers/regulator/ti-abb-regulator.c
+index f48214e2c3b46..04133510e5af7 100644
+--- a/drivers/regulator/ti-abb-regulator.c
++++ b/drivers/regulator/ti-abb-regulator.c
+@@ -726,9 +726,25 @@ static int ti_abb_probe(struct platform_device *pdev)
+ 			return PTR_ERR(abb->setup_reg);
+ 	}
+ 
+-	abb->int_base = devm_platform_ioremap_resource_byname(pdev, "int-address");
+-	if (IS_ERR(abb->int_base))
+-		return PTR_ERR(abb->int_base);
++	pname = "int-address";
++	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, pname);
++	if (!res) {
++		dev_err(dev, "Missing '%s' IO resource\n", pname);
++		return -ENODEV;
++	}
++	/*
++	 * The MPU interrupt status register (PRM_IRQSTATUS_MPU) is
++	 * shared between regulator-abb-{ivahd,dspeve,gpu} driver
++	 * instances. Therefore use devm_ioremap() rather than
++	 * devm_platform_ioremap_resource_byname() to avoid busy
++	 * resource region conflicts.
++	 */
++	abb->int_base = devm_ioremap(dev, res->start,
++					     resource_size(res));
++	if (!abb->int_base) {
++		dev_err(dev, "Unable to map '%s'\n", pname);
++		return -ENOMEM;
++	}
+ 
+ 	/* Map Optional resources */
+ 	pname = "efuse-address";
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index 88f41f95cc94d..d6ea2fd4c2a02 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -2044,6 +2044,7 @@ static ssize_t status_show(struct device *dev,
+ {
+ 	ssize_t nchars = 0;
+ 	struct vfio_ap_queue *q;
++	unsigned long apid, apqi;
+ 	struct ap_matrix_mdev *matrix_mdev;
+ 	struct ap_device *apdev = to_ap_dev(dev);
+ 
+@@ -2051,8 +2052,21 @@ static ssize_t status_show(struct device *dev,
+ 	q = dev_get_drvdata(&apdev->device);
+ 	matrix_mdev = vfio_ap_mdev_for_queue(q);
+ 
++	/* If the queue is assigned to the matrix mediated device, then
++	 * determine whether it is passed through to a guest; otherwise,
++	 * indicate that it is unassigned.
++	 */
+ 	if (matrix_mdev) {
+-		if (matrix_mdev->kvm)
++		apid = AP_QID_CARD(q->apqn);
++		apqi = AP_QID_QUEUE(q->apqn);
++		/*
++		 * If the queue is passed through to the guest, then indicate
++		 * that it is in use; otherwise, indicate that it is
++		 * merely assigned to a matrix mediated device.
++		 */
++		if (matrix_mdev->kvm &&
++		    test_bit_inv(apid, matrix_mdev->shadow_apcb.apm) &&
++		    test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm))
+ 			nchars = scnprintf(buf, PAGE_SIZE, "%s\n",
+ 					   AP_QUEUE_IN_USE);
+ 		else
+diff --git a/drivers/scsi/arcmsr/arcmsr.h b/drivers/scsi/arcmsr/arcmsr.h
+index ed8d9319862a5..3819d559ebbb4 100644
+--- a/drivers/scsi/arcmsr/arcmsr.h
++++ b/drivers/scsi/arcmsr/arcmsr.h
+@@ -78,9 +78,13 @@ struct device_attribute;
+ #ifndef PCI_DEVICE_ID_ARECA_1203
+ #define PCI_DEVICE_ID_ARECA_1203	0x1203
+ #endif
++#ifndef PCI_DEVICE_ID_ARECA_1883
++#define PCI_DEVICE_ID_ARECA_1883	0x1883
++#endif
+ #ifndef PCI_DEVICE_ID_ARECA_1884
+ #define PCI_DEVICE_ID_ARECA_1884	0x1884
+ #endif
++#define PCI_DEVICE_ID_ARECA_1886_0	0x1886
+ #define PCI_DEVICE_ID_ARECA_1886	0x188A
+ #define	ARCMSR_HOURS			(1000 * 60 * 60 * 4)
+ #define	ARCMSR_MINUTES			(1000 * 60 * 60)
+diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
+index a66221c3b72f8..01fb1396e1a92 100644
+--- a/drivers/scsi/arcmsr/arcmsr_hba.c
++++ b/drivers/scsi/arcmsr/arcmsr_hba.c
+@@ -214,8 +214,12 @@ static struct pci_device_id arcmsr_device_id_table[] = {
+ 		.driver_data = ACB_ADAPTER_TYPE_A},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1880),
+ 		.driver_data = ACB_ADAPTER_TYPE_C},
++	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1883),
++		.driver_data = ACB_ADAPTER_TYPE_C},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1884),
+ 		.driver_data = ACB_ADAPTER_TYPE_E},
++	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1886_0),
++		.driver_data = ACB_ADAPTER_TYPE_F},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_ARECA, PCI_DEVICE_ID_ARECA_1886),
+ 		.driver_data = ACB_ADAPTER_TYPE_F},
+ 	{0, 0}, /* Terminating entry */
+@@ -4706,9 +4710,11 @@ static const char *arcmsr_info(struct Scsi_Host *host)
+ 	case PCI_DEVICE_ID_ARECA_1680:
+ 	case PCI_DEVICE_ID_ARECA_1681:
+ 	case PCI_DEVICE_ID_ARECA_1880:
++	case PCI_DEVICE_ID_ARECA_1883:
+ 	case PCI_DEVICE_ID_ARECA_1884:
+ 		type = "SAS/SATA";
+ 		break;
++	case PCI_DEVICE_ID_ARECA_1886_0:
+ 	case PCI_DEVICE_ID_ARECA_1886:
+ 		type = "NVMe/SAS/SATA";
+ 		break;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 6d8577423d32b..b56fbc61a15ae 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -1605,6 +1605,11 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
+ 	}
+ 
+ 	phy->port_id = port_id;
++	spin_lock(&phy->lock);
++	/* Delete timer and set phy_attached atomically */
++	del_timer(&phy->timer);
++	phy->phy_attached = 1;
++	spin_unlock(&phy->lock);
+ 
+ 	/*
+ 	 * Call pm_runtime_get_noresume() which pairs with
+@@ -1618,11 +1623,6 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
+ 
+ 	res = IRQ_HANDLED;
+ 
+-	spin_lock(&phy->lock);
+-	/* Delete timer and set phy_attached atomically */
+-	del_timer(&phy->timer);
+-	phy->phy_attached = 1;
+-	spin_unlock(&phy->lock);
+ end:
+ 	if (phy->reset_completion)
+ 		complete(phy->reset_completion);
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index a7b3243b471d5..7162a5029b37a 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -3390,7 +3390,7 @@ static enum sci_status isci_io_request_build(struct isci_host *ihost,
+ 		return SCI_FAILURE;
+ 	}
+ 
+-	return SCI_SUCCESS;
++	return status;
+ }
+ 
+ static struct isci_request *isci_request_from_tag(struct isci_host *ihost, u16 tag)
+diff --git a/drivers/scsi/libfc/fc_fcp.c b/drivers/scsi/libfc/fc_fcp.c
+index 945adca5e72fd..05be0810b5e31 100644
+--- a/drivers/scsi/libfc/fc_fcp.c
++++ b/drivers/scsi/libfc/fc_fcp.c
+@@ -265,6 +265,11 @@ static int fc_fcp_send_abort(struct fc_fcp_pkt *fsp)
+ 	if (!fsp->seq_ptr)
+ 		return -EINVAL;
+ 
++	if (fsp->state & FC_SRB_ABORT_PENDING) {
++		FC_FCP_DBG(fsp, "abort already pending\n");
++		return -EBUSY;
++	}
++
+ 	this_cpu_inc(fsp->lp->stats->FcpPktAborts);
+ 
+ 	fsp->state |= FC_SRB_ABORT_PENDING;
+@@ -1671,7 +1676,7 @@ static void fc_fcp_rec_error(struct fc_fcp_pkt *fsp, struct fc_frame *fp)
+ 		if (fsp->recov_retry++ < FC_MAX_RECOV_RETRY)
+ 			fc_fcp_rec(fsp);
+ 		else
+-			fc_fcp_recovery(fsp, FC_ERROR);
++			fc_fcp_recovery(fsp, FC_TIMED_OUT);
+ 		break;
+ 	}
+ 	fc_fcp_unlock_pkt(fsp);
+@@ -1690,11 +1695,12 @@ static void fc_fcp_recovery(struct fc_fcp_pkt *fsp, u8 code)
+ 	fsp->status_code = code;
+ 	fsp->cdb_status = 0;
+ 	fsp->io_status = 0;
+-	/*
+-	 * if this fails then we let the scsi command timer fire and
+-	 * scsi-ml escalate.
+-	 */
+-	fc_fcp_send_abort(fsp);
++	if (!fsp->cmd)
++		/*
++		 * Only abort non-scsi commands; otherwise let the
++		 * scsi command timer fire and scsi-ml escalate.
++		 */
++		fc_fcp_send_abort(fsp);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index af15f7a22d258..04d608ea91060 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -33,6 +33,7 @@
+ struct lpfc_sli2_slim;
+ 
+ #define ELX_MODEL_NAME_SIZE	80
++#define ELX_FW_NAME_SIZE	84
+ 
+ #define LPFC_PCI_DEV_LP		0x1
+ #define LPFC_PCI_DEV_OC		0x2
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 0829fe6ddff88..385e1636f139b 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1130,12 +1130,12 @@ stop_rr_fcf_flogi:
+ 			 vport->port_state, vport->fc_flag,
+ 			 sp->cmn.priority_tagging, kref_read(&ndlp->kref));
+ 
+-	if (sp->cmn.priority_tagging)
+-		vport->phba->pport->vmid_flag |= (LPFC_VMID_ISSUE_QFPA |
+-						  LPFC_VMID_TYPE_PRIO);
+ 	/* reinitialize the VMID datastructure before returning */
+ 	if (lpfc_is_vmid_enabled(phba))
+ 		lpfc_reinit_vmid(vport);
++	if (sp->cmn.priority_tagging)
++		vport->phba->pport->vmid_flag |= (LPFC_VMID_ISSUE_QFPA |
++						  LPFC_VMID_TYPE_PRIO);
+ 
+ 	/*
+ 	 * Address a timing race with dev_loss.  If dev_loss is active on
+@@ -11130,6 +11130,14 @@ mbox_err_exit:
+ 	lpfc_nlp_put(ndlp);
+ 
+ 	mempool_free(pmb, phba->mbox_mem_pool);
++
++	/* reinitialize the VMID datastructure before returning.
++	 * this is specifically for vport
++	 */
++	if (lpfc_is_vmid_enabled(phba))
++		lpfc_reinit_vmid(vport);
++	vport->vmid_flag = vport->phba->pport->vmid_flag;
++
+ 	return;
+ }
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index e7c47ee185a49..70bcee64bc8c6 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -14721,7 +14721,7 @@ out:
+ int
+ lpfc_sli4_request_firmware_update(struct lpfc_hba *phba, uint8_t fw_upgrade)
+ {
+-	uint8_t file_name[ELX_MODEL_NAME_SIZE];
++	char file_name[ELX_FW_NAME_SIZE] = {0};
+ 	int ret;
+ 	const struct firmware *fw;
+ 
+@@ -14730,7 +14730,7 @@ lpfc_sli4_request_firmware_update(struct lpfc_hba *phba, uint8_t fw_upgrade)
+ 	    LPFC_SLI_INTF_IF_TYPE_2)
+ 		return -EPERM;
+ 
+-	snprintf(file_name, ELX_MODEL_NAME_SIZE, "%s.grp", phba->ModelName);
++	scnprintf(file_name, sizeof(file_name), "%s.grp", phba->ModelName);
+ 
+ 	if (fw_upgrade == INT_FW_UPGRADE) {
+ 		ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_UEVENT,
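
The lpfc change above sizes the firmware file name as the model name plus ".grp" (hence the new 84-byte ELX_FW_NAME_SIZE) and switches to scnprintf(), which returns the bytes actually stored rather than the length it would have needed. A user-space demonstration of the difference, with a clamped vsnprintf() standing in for scnprintf(), a deliberately undersized buffer, and an illustrative model string:

    #include <stdarg.h>
    #include <stdio.h>

    /* scnprintf() equivalent for user space: clamp the return value */
    static int scn_printf(char *buf, size_t size, const char *fmt, ...)
    {
        va_list args;
        int i;

        va_start(args, fmt);
        i = vsnprintf(buf, size, fmt, args);
        va_end(args);
        if (i >= (int)size)
            i = size ? (int)size - 1 : 0;
        return i;
    }

    int main(void)
    {
        char small[8];

        /* snprintf reports the length it *wanted* to write... */
        printf("snprintf  -> %d\n",
               snprintf(small, sizeof(small), "%s.grp", "LPe36000"));
        /* ...scnprintf reports what actually landed in the buffer */
        printf("scnprintf -> %d\n",
               scn_printf(small, sizeof(small), "%s.grp", "LPe36000"));
        return 0;
    }
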
+diff --git a/drivers/scsi/lpfc/lpfc_vmid.c b/drivers/scsi/lpfc/lpfc_vmid.c
+index cf8ba840d0eab..773e02ae20c37 100644
+--- a/drivers/scsi/lpfc/lpfc_vmid.c
++++ b/drivers/scsi/lpfc/lpfc_vmid.c
+@@ -321,5 +321,6 @@ lpfc_reinit_vmid(struct lpfc_vport *vport)
+ 	if (!hash_empty(vport->hash_table))
+ 		hash_for_each_safe(vport->hash_table, bucket, tmp, cur, hnode)
+ 			hash_del(&cur->hnode);
++	vport->vmid_flag = 0;
+ 	write_unlock(&vport->vmid_lock);
+ }
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index f039f1d986477..0d148c39ebcc9 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -1892,7 +1892,8 @@ static int mpi3mr_create_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)
+ 
+ 	reply_qid = qidx + 1;
+ 	op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
+-	if (!mrioc->pdev->revision)
++	if ((mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) &&
++		!mrioc->pdev->revision)
+ 		op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
+ 	op_reply_q->ci = 0;
+ 	op_reply_q->ephase = 1;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index e2f2c205df71a..872d4b809d083 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -5112,7 +5112,10 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		mpi3mr_init_drv_cmd(&mrioc->evtack_cmds[i],
+ 				    MPI3MR_HOSTTAG_EVTACKCMD_MIN + i);
+ 
+-	if (pdev->revision)
++	if ((pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) &&
++		!pdev->revision)
++		mrioc->enable_segqueue = false;
++	else
+ 		mrioc->enable_segqueue = true;
+ 
+ 	init_waitqueue_head(&mrioc->reset_waitq);
+@@ -5441,6 +5444,14 @@ static const struct pci_device_id mpi3mr_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(MPI3_MFGPAGE_VENDORID_BROADCOM,
+ 		    MPI3_MFGPAGE_DEVID_SAS4116, PCI_ANY_ID, PCI_ANY_ID)
+ 	},
++	{
++		PCI_DEVICE_SUB(MPI3_MFGPAGE_VENDORID_BROADCOM,
++		    MPI3_MFGPAGE_DEVID_SAS5116_MPI, PCI_ANY_ID, PCI_ANY_ID)
++	},
++	{
++		PCI_DEVICE_SUB(MPI3_MFGPAGE_VENDORID_BROADCOM,
++		    MPI3_MFGPAGE_DEVID_SAS5116_MPI_MGMT, PCI_ANY_ID, PCI_ANY_ID)
++	},
+ 	{ 0 }
+ };
+ MODULE_DEVICE_TABLE(pci, mpi3mr_pci_id_table);
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index d983f4a0e9f14..3328b175a8326 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -61,11 +61,11 @@ static int scsi_eh_try_stu(struct scsi_cmnd *scmd);
+ static enum scsi_disposition scsi_try_to_abort_cmd(const struct scsi_host_template *,
+ 						   struct scsi_cmnd *);
+ 
+-void scsi_eh_wakeup(struct Scsi_Host *shost)
++void scsi_eh_wakeup(struct Scsi_Host *shost, unsigned int busy)
+ {
+ 	lockdep_assert_held(shost->host_lock);
+ 
+-	if (scsi_host_busy(shost) == shost->host_failed) {
++	if (busy == shost->host_failed) {
+ 		trace_scsi_eh_wakeup(shost);
+ 		wake_up_process(shost->ehandler);
+ 		SCSI_LOG_ERROR_RECOVERY(5, shost_printk(KERN_INFO, shost,
+@@ -88,7 +88,7 @@ void scsi_schedule_eh(struct Scsi_Host *shost)
+ 	if (scsi_host_set_state(shost, SHOST_RECOVERY) == 0 ||
+ 	    scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY) == 0) {
+ 		shost->host_eh_scheduled++;
+-		scsi_eh_wakeup(shost);
++		scsi_eh_wakeup(shost, scsi_host_busy(shost));
+ 	}
+ 
+ 	spin_unlock_irqrestore(shost->host_lock, flags);
+@@ -286,7 +286,7 @@ static void scsi_eh_inc_host_failed(struct rcu_head *head)
+ 
+ 	spin_lock_irqsave(shost->host_lock, flags);
+ 	shost->host_failed++;
+-	scsi_eh_wakeup(shost);
++	scsi_eh_wakeup(shost, scsi_host_busy(shost));
+ 	spin_unlock_irqrestore(shost->host_lock, flags);
+ }
+ 
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index cf3864f720930..1fb80eae9a63a 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -280,7 +280,7 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
+ 	if (unlikely(scsi_host_in_recovery(shost))) {
+ 		spin_lock_irqsave(shost->host_lock, flags);
+ 		if (shost->host_failed || shost->host_eh_scheduled)
+-			scsi_eh_wakeup(shost);
++			scsi_eh_wakeup(shost, scsi_host_busy(shost));
+ 		spin_unlock_irqrestore(shost->host_lock, flags);
+ 	}
+ 	rcu_read_unlock();
+diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
+index 3f0dfb97db6bd..1fbfe1b52c9f1 100644
+--- a/drivers/scsi/scsi_priv.h
++++ b/drivers/scsi/scsi_priv.h
+@@ -92,7 +92,7 @@ extern void scmd_eh_abort_handler(struct work_struct *work);
+ extern enum blk_eh_timer_return scsi_timeout(struct request *req);
+ extern int scsi_error_handler(void *host);
+ extern enum scsi_disposition scsi_decide_disposition(struct scsi_cmnd *cmd);
+-extern void scsi_eh_wakeup(struct Scsi_Host *shost);
++extern void scsi_eh_wakeup(struct Scsi_Host *shost, unsigned int busy);
+ extern void scsi_eh_scmd_add(struct scsi_cmnd *);
+ void scsi_eh_ready_devs(struct Scsi_Host *shost,
+ 			struct list_head *work_q,
+diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
+index 86a048a10a13f..042553abe1bf8 100644
+--- a/drivers/soc/xilinx/xlnx_event_manager.c
++++ b/drivers/soc/xilinx/xlnx_event_manager.c
+@@ -477,7 +477,7 @@ static void xlnx_call_notify_cb_handler(const u32 *payload)
+ 		}
+ 	}
+ 	if (!is_callback_found)
+-		pr_warn("Didn't find any registered callback for 0x%x 0x%x\n",
++		pr_warn("Unhandled SGI node 0x%x event 0x%x. Expected with Xen hypervisor\n",
+ 			payload[1], payload[2]);
+ }
+ 
+@@ -555,7 +555,7 @@ static void xlnx_disable_percpu_irq(void *data)
+ static int xlnx_event_init_sgi(struct platform_device *pdev)
+ {
+ 	int ret = 0;
+-	int cpu = smp_processor_id();
++	int cpu;
+ 	/*
+ 	 * IRQ related structures are used for the following:
+ 	 * for each SGI interrupt ensure its mapped by GIC IRQ domain
+@@ -592,9 +592,12 @@ static int xlnx_event_init_sgi(struct platform_device *pdev)
+ 	sgi_fwspec.param[0] = sgi_num;
+ 	virq_sgi = irq_create_fwspec_mapping(&sgi_fwspec);
+ 
++	cpu = get_cpu();
+ 	per_cpu(cpu_number1, cpu) = cpu;
+ 	ret = request_percpu_irq(virq_sgi, xlnx_event_handler, "xlnx_event_mgmt",
+ 				 &cpu_number1);
++	put_cpu();
++
+ 	WARN_ON(ret);
+ 	if (ret) {
+ 		irq_dispose_mapping(virq_sgi);
+diff --git a/drivers/spmi/spmi-mtk-pmif.c b/drivers/spmi/spmi-mtk-pmif.c
+index 54c35f5535cbe..1261f381cae6c 100644
+--- a/drivers/spmi/spmi-mtk-pmif.c
++++ b/drivers/spmi/spmi-mtk-pmif.c
+@@ -475,7 +475,7 @@ static int mtk_spmi_probe(struct platform_device *pdev)
+ 	for (i = 0; i < arb->nclks; i++)
+ 		arb->clks[i].id = pmif_clock_names[i];
+ 
+-	err = devm_clk_bulk_get(&pdev->dev, arb->nclks, arb->clks);
++	err = clk_bulk_get(&pdev->dev, arb->nclks, arb->clks);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to get clocks: %d\n", err);
+ 		goto err_put_ctrl;
+@@ -484,7 +484,7 @@ static int mtk_spmi_probe(struct platform_device *pdev)
+ 	err = clk_bulk_prepare_enable(arb->nclks, arb->clks);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to enable clocks: %d\n", err);
+-		goto err_put_ctrl;
++		goto err_put_clks;
+ 	}
+ 
+ 	ctrl->cmd = pmif_arb_cmd;
+@@ -510,6 +510,8 @@ static int mtk_spmi_probe(struct platform_device *pdev)
+ 
+ err_domain_remove:
+ 	clk_bulk_disable_unprepare(arb->nclks, arb->clks);
++err_put_clks:
++	clk_bulk_put(arb->nclks, arb->clks);
+ err_put_ctrl:
+ 	spmi_controller_put(ctrl);
+ 	return err;
+@@ -521,6 +523,7 @@ static void mtk_spmi_remove(struct platform_device *pdev)
+ 	struct pmif *arb = spmi_controller_get_drvdata(ctrl);
+ 
+ 	clk_bulk_disable_unprepare(arb->nclks, arb->clks);
++	clk_bulk_put(arb->nclks, arb->clks);
+ 	spmi_controller_remove(ctrl);
+ 	spmi_controller_put(ctrl);
+ }
+diff --git a/drivers/staging/vme_user/vme.c b/drivers/staging/vme_user/vme.c
+index 5c416c31ec576..9bc2d35405af6 100644
+--- a/drivers/staging/vme_user/vme.c
++++ b/drivers/staging/vme_user/vme.c
+@@ -341,7 +341,7 @@ int vme_slave_set(struct vme_resource *resource, int enabled,
+ 
+ 	if (!bridge->slave_set) {
+ 		dev_err(bridge->parent, "Function not supported\n");
+-		return -ENOSYS;
++		return -EINVAL;
+ 	}
+ 
+ 	if (!(((image->address_attr & aspace) == aspace) &&
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 1bc7ba4594063..45f51727410cc 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -37,8 +37,6 @@ static LIST_HEAD(thermal_governor_list);
+ static DEFINE_MUTEX(thermal_list_lock);
+ static DEFINE_MUTEX(thermal_governor_lock);
+ 
+-static atomic_t in_suspend;
+-
+ static struct thermal_governor *def_governor;
+ 
+ /*
+@@ -405,7 +403,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ {
+ 	const struct thermal_trip *trip;
+ 
+-	if (atomic_read(&in_suspend))
++	if (tz->suspended)
+ 		return;
+ 
+ 	if (WARN_ONCE(!tz->ops->get_temp,
+@@ -1514,17 +1512,35 @@ static int thermal_pm_notify(struct notifier_block *nb,
+ 	case PM_HIBERNATION_PREPARE:
+ 	case PM_RESTORE_PREPARE:
+ 	case PM_SUSPEND_PREPARE:
+-		atomic_set(&in_suspend, 1);
++		mutex_lock(&thermal_list_lock);
++
++		list_for_each_entry(tz, &thermal_tz_list, node) {
++			mutex_lock(&tz->lock);
++
++			tz->suspended = true;
++
++			mutex_unlock(&tz->lock);
++		}
++
++		mutex_unlock(&thermal_list_lock);
+ 		break;
+ 	case PM_POST_HIBERNATION:
+ 	case PM_POST_RESTORE:
+ 	case PM_POST_SUSPEND:
+-		atomic_set(&in_suspend, 0);
++		mutex_lock(&thermal_list_lock);
++
+ 		list_for_each_entry(tz, &thermal_tz_list, node) {
++			mutex_lock(&tz->lock);
++
++			tz->suspended = false;
++
+ 			thermal_zone_device_init(tz);
+-			thermal_zone_device_update(tz,
+-						   THERMAL_EVENT_UNSPECIFIED);
++			__thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++
++			mutex_unlock(&tz->lock);
+ 		}
++
++		mutex_unlock(&thermal_list_lock);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 614be0f13a313..8ccf691935b75 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -19,6 +19,7 @@
+ #include <linux/serial_core.h>
+ #include <linux/8250_pci.h>
+ #include <linux/bitops.h>
++#include <linux/bitfield.h>
+ 
+ #include <asm/byteorder.h>
+ #include <asm/io.h>
+@@ -1970,6 +1971,20 @@ pci_sunix_setup(struct serial_private *priv,
+ 
+ #define MOXA_GPIO_PIN2	BIT(2)
+ 
++#define MOXA_RS232	0x00
++#define MOXA_RS422	0x01
++#define MOXA_RS485_4W	0x0B
++#define MOXA_RS485_2W	0x0F
++#define MOXA_UIR_OFFSET	0x04
++#define MOXA_EVEN_RS_MASK	GENMASK(3, 0)
++#define MOXA_ODD_RS_MASK	GENMASK(7, 4)
++
++enum {
++	MOXA_SUPP_RS232 = BIT(0),
++	MOXA_SUPP_RS422 = BIT(1),
++	MOXA_SUPP_RS485 = BIT(2),
++};
++
+ static bool pci_moxa_is_mini_pcie(unsigned short device)
+ {
+ 	if (device == PCI_DEVICE_ID_MOXA_CP102N	||
+@@ -1983,13 +1998,54 @@ static bool pci_moxa_is_mini_pcie(unsigned short device)
+ 	return false;
+ }
+ 
++static unsigned int pci_moxa_supported_rs(struct pci_dev *dev)
++{
++	switch (dev->device & 0x0F00) {
++	case 0x0000:
++	case 0x0600:
++		return MOXA_SUPP_RS232;
++	case 0x0100:
++		return MOXA_SUPP_RS232 | MOXA_SUPP_RS422 | MOXA_SUPP_RS485;
++	case 0x0300:
++		return MOXA_SUPP_RS422 | MOXA_SUPP_RS485;
++	}
++	return 0;
++}
++
++static int pci_moxa_set_interface(const struct pci_dev *dev,
++				  unsigned int port_idx,
++				  u8 mode)
++{
++	resource_size_t iobar_addr = pci_resource_start(dev, 2);
++	resource_size_t UIR_addr = iobar_addr + MOXA_UIR_OFFSET + port_idx / 2;
++	u8 val;
++
++	val = inb(UIR_addr);
++
++	if (port_idx % 2) {
++		val &= ~MOXA_ODD_RS_MASK;
++		val |= FIELD_PREP(MOXA_ODD_RS_MASK, mode);
++	} else {
++		val &= ~MOXA_EVEN_RS_MASK;
++		val |= FIELD_PREP(MOXA_EVEN_RS_MASK, mode);
++	}
++	outb(val, UIR_addr);
++
++	return 0;
++}
++
+ static int pci_moxa_init(struct pci_dev *dev)
+ {
+ 	unsigned short device = dev->device;
+ 	resource_size_t iobar_addr = pci_resource_start(dev, 2);
+-	unsigned int num_ports = (device & 0x00F0) >> 4;
++	unsigned int num_ports = (device & 0x00F0) >> 4, i;
+ 	u8 val;
+ 
++	if (!(pci_moxa_supported_rs(dev) & MOXA_SUPP_RS232)) {
++		for (i = 0; i < num_ports; ++i)
++			pci_moxa_set_interface(dev, i, MOXA_RS422);
++	}
++
+ 	/*
+ 	 * Enable hardware buffer to prevent break signal output when system boots up.
+ 	 * This hardware buffer is only supported on Mini PCIe series.
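
The Moxa init code above programs two ports' interface modes into a single UIR byte, low nibble for the even port and high nibble for the odd one, via a read-modify-write with FIELD_PREP(). A user-space rendition (GENMASK and FIELD_PREP below are simplified stand-ins for the kernel macros):

    #include <stdio.h>

    #define GENMASK(h, l)  (((~0U) << (l)) & (~0U >> (31 - (h))))
    #define FIELD_PREP(mask, val) (((val) << __builtin_ctz(mask)) & (mask))

    #define EVEN_RS_MASK  GENMASK(3, 0)
    #define ODD_RS_MASK   GENMASK(7, 4)
    #define RS422         0x01
    #define RS485_2W      0x0F

    int main(void)
    {
        unsigned int uir = 0x00;  /* shared register, both ports RS-232 */

        /* port 0 (even) -> RS-422 */
        uir = (uir & ~EVEN_RS_MASK) | FIELD_PREP(EVEN_RS_MASK, RS422);
        /* port 1 (odd) -> RS-485 2-wire; the low nibble stays untouched */
        uir = (uir & ~ODD_RS_MASK) | FIELD_PREP(ODD_RS_MASK, RS485_2W);
        printf("UIR = 0x%02x\n", uir);   /* 0xf1 */
        return 0;
    }

Masking out only the changed nibble is what lets two ports share one register without clobbering each other, which is why the driver branches on port_idx % 2.
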
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index 4b499301a3db1..85de90eebc7bb 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -844,7 +844,7 @@ int tty_mode_ioctl(struct tty_struct *tty, unsigned int cmd, unsigned long arg)
+ 			ret = -EFAULT;
+ 		return ret;
+ 	case TIOCSLCKTRMIOS:
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!checkpoint_restore_ns_capable(&init_user_ns))
+ 			return -EPERM;
+ 		copy_termios_locked(real_tty, &kterm);
+ 		if (user_termios_to_kernel_termios(&kterm,
+@@ -861,7 +861,7 @@ int tty_mode_ioctl(struct tty_struct *tty, unsigned int cmd, unsigned long arg)
+ 			ret = -EFAULT;
+ 		return ret;
+ 	case TIOCSLCKTRMIOS:
+-		if (!capable(CAP_SYS_ADMIN))
++		if (!checkpoint_restore_ns_capable(&init_user_ns))
+ 			return -EPERM;
+ 		copy_termios_locked(real_tty, &kterm);
+ 		if (user_termios_to_kernel_termios_1(&kterm,
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 87480a6e6d934..ef8d9bda94ac1 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -47,12 +47,18 @@
+ #define USB_VENDOR_TEXAS_INSTRUMENTS		0x0451
+ #define USB_PRODUCT_TUSB8041_USB3		0x8140
+ #define USB_PRODUCT_TUSB8041_USB2		0x8142
+-#define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
+-#define HUB_QUIRK_DISABLE_AUTOSUSPEND		0x02
++#define USB_VENDOR_MICROCHIP			0x0424
++#define USB_PRODUCT_USB4913			0x4913
++#define USB_PRODUCT_USB4914			0x4914
++#define USB_PRODUCT_USB4915			0x4915
++#define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	BIT(0)
++#define HUB_QUIRK_DISABLE_AUTOSUSPEND		BIT(1)
++#define HUB_QUIRK_REDUCE_FRAME_INTR_BINTERVAL	BIT(2)
+ 
+ #define USB_TP_TRANSMISSION_DELAY	40	/* ns */
+ #define USB_TP_TRANSMISSION_DELAY_MAX	65535	/* ns */
+ #define USB_PING_RESPONSE_TIME		400	/* ns */
++#define USB_REDUCE_FRAME_INTR_BINTERVAL	9
+ 
+ /* Protect struct usb_device->state and ->children members
+  * Note: Both are also protected by ->dev.sem, except that ->state can
+@@ -1904,6 +1910,14 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 		usb_autopm_get_interface_no_resume(intf);
+ 	}
+ 
++	if ((id->driver_info & HUB_QUIRK_REDUCE_FRAME_INTR_BINTERVAL) &&
++	    desc->endpoint[0].desc.bInterval > USB_REDUCE_FRAME_INTR_BINTERVAL) {
++		desc->endpoint[0].desc.bInterval =
++			USB_REDUCE_FRAME_INTR_BINTERVAL;
++		/* Tell the HCD about the interrupt ep's new bInterval */
++		usb_set_interface(hdev, 0, 0);
++	}
++
+ 	if (hub_configure(hub, &desc->endpoint[0].desc) >= 0) {
+ 		onboard_hub_create_pdevs(hdev, &hub->onboard_hub_devs);
+ 
+@@ -5895,6 +5909,21 @@ static const struct usb_device_id hub_id_table[] = {
+       .idVendor = USB_VENDOR_TEXAS_INSTRUMENTS,
+       .idProduct = USB_PRODUCT_TUSB8041_USB3,
+       .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
++	{ .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++			| USB_DEVICE_ID_MATCH_PRODUCT,
++	  .idVendor = USB_VENDOR_MICROCHIP,
++	  .idProduct = USB_PRODUCT_USB4913,
++	  .driver_info = HUB_QUIRK_REDUCE_FRAME_INTR_BINTERVAL},
++	{ .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++			| USB_DEVICE_ID_MATCH_PRODUCT,
++	  .idVendor = USB_VENDOR_MICROCHIP,
++	  .idProduct = USB_PRODUCT_USB4914,
++	  .driver_info = HUB_QUIRK_REDUCE_FRAME_INTR_BINTERVAL},
++	{ .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++			| USB_DEVICE_ID_MATCH_PRODUCT,
++	  .idVendor = USB_VENDOR_MICROCHIP,
++	  .idProduct = USB_PRODUCT_USB4915,
++	  .driver_info = HUB_QUIRK_REDUCE_FRAME_INTR_BINTERVAL},
+     { .match_flags = USB_DEVICE_ID_MATCH_DEV_CLASS,
+       .bDeviceClass = USB_CLASS_HUB},
+     { .match_flags = USB_DEVICE_ID_MATCH_INT_CLASS,
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 732cdeb739202..f0853c4478f57 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -433,7 +433,7 @@ void xhci_plat_remove(struct platform_device *dev)
+ }
+ EXPORT_SYMBOL_GPL(xhci_plat_remove);
+ 
+-static int __maybe_unused xhci_plat_suspend(struct device *dev)
++static int xhci_plat_suspend(struct device *dev)
+ {
+ 	struct usb_hcd	*hcd = dev_get_drvdata(dev);
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+@@ -461,7 +461,7 @@ static int __maybe_unused xhci_plat_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int __maybe_unused xhci_plat_resume(struct device *dev)
++static int xhci_plat_resume_common(struct device *dev, struct pm_message pmsg)
+ {
+ 	struct usb_hcd	*hcd = dev_get_drvdata(dev);
+ 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
+@@ -483,7 +483,7 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
+ 	if (ret)
+ 		goto disable_clks;
+ 
+-	ret = xhci_resume(xhci, PMSG_RESUME);
++	ret = xhci_resume(xhci, pmsg);
+ 	if (ret)
+ 		goto disable_clks;
+ 
+@@ -502,6 +502,16 @@ disable_clks:
+ 	return ret;
+ }
+ 
++static int xhci_plat_resume(struct device *dev)
++{
++	return xhci_plat_resume_common(dev, PMSG_RESUME);
++}
++
++static int xhci_plat_restore(struct device *dev)
++{
++	return xhci_plat_resume_common(dev, PMSG_RESTORE);
++}
++
+ static int __maybe_unused xhci_plat_runtime_suspend(struct device *dev)
+ {
+ 	struct usb_hcd  *hcd = dev_get_drvdata(dev);
+@@ -524,7 +534,12 @@ static int __maybe_unused xhci_plat_runtime_resume(struct device *dev)
+ }
+ 
+ const struct dev_pm_ops xhci_plat_pm_ops = {
+-	SET_SYSTEM_SLEEP_PM_OPS(xhci_plat_suspend, xhci_plat_resume)
++	.suspend = pm_sleep_ptr(xhci_plat_suspend),
++	.resume = pm_sleep_ptr(xhci_plat_resume),
++	.freeze = pm_sleep_ptr(xhci_plat_suspend),
++	.thaw = pm_sleep_ptr(xhci_plat_resume),
++	.poweroff = pm_sleep_ptr(xhci_plat_suspend),
++	.restore = pm_sleep_ptr(xhci_plat_restore),
+ 
+ 	SET_RUNTIME_PM_OPS(xhci_plat_runtime_suspend,
+ 			   xhci_plat_runtime_resume,
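
The xhci-plat rework above routes resume and hibernation-restore through one helper that differs only in the pm_message handed to xhci_resume(), and spells out every sleep-phase callback instead of relying on SET_SYSTEM_SLEEP_PM_OPS(). The shape of that split, modelled with a plain ops table (the enum and struct are stand-ins, not the kernel types):

    #include <stdio.h>

    enum pm_msg { PMSG_RESUME, PMSG_RESTORE };

    static int resume_common(enum pm_msg msg)
    {
        /* hibernation restore must assume the controller lost all
         * state; an ordinary resume may find it retained */
        printf("resuming: %s\n",
               msg == PMSG_RESTORE ? "reinitialize from scratch"
                                   : "state may be retained");
        return 0;
    }

    static int op_resume(void)  { return resume_common(PMSG_RESUME); }
    static int op_restore(void) { return resume_common(PMSG_RESTORE); }

    struct dev_pm_ops_model {
        int (*resume)(void);
        int (*thaw)(void);
        int (*restore)(void);
    };

    static const struct dev_pm_ops_model pm_ops = {
        .resume  = op_resume,
        .thaw    = op_resume,   /* thaw behaves like resume */
        .restore = op_restore,  /* restore carries its own message */
    };

    int main(void)
    {
        pm_ops.resume();
        pm_ops.restore();
        return 0;
    }
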
+diff --git a/drivers/watchdog/it87_wdt.c b/drivers/watchdog/it87_wdt.c
+index e888b1bdd1f2f..8c1ee072f48b8 100644
+--- a/drivers/watchdog/it87_wdt.c
++++ b/drivers/watchdog/it87_wdt.c
+@@ -256,6 +256,7 @@ static struct watchdog_device wdt_dev = {
+ static int __init it87_wdt_init(void)
+ {
+ 	u8  chip_rev;
++	u8 ctrl;
+ 	int rc;
+ 
+ 	rc = superio_enter();
+@@ -315,7 +316,18 @@ static int __init it87_wdt_init(void)
+ 
+ 	superio_select(GPIO);
+ 	superio_outb(WDT_TOV1, WDTCFG);
+-	superio_outb(0x00, WDTCTRL);
++
++	switch (chip_type) {
++	case IT8784_ID:
++	case IT8786_ID:
++		ctrl = superio_inb(WDTCTRL);
++		ctrl &= 0x08;
++		superio_outb(ctrl, WDTCTRL);
++		break;
++	default:
++		superio_outb(0x00, WDTCTRL);
++	}
++
+ 	superio_exit();
+ 
+ 	if (timeout < 1 || timeout > max_units * 60) {
+diff --git a/drivers/watchdog/starfive-wdt.c b/drivers/watchdog/starfive-wdt.c
+index 5f501b41faf9d..49b38ecc092dd 100644
+--- a/drivers/watchdog/starfive-wdt.c
++++ b/drivers/watchdog/starfive-wdt.c
+@@ -202,12 +202,14 @@ static u32 starfive_wdt_ticks_to_sec(struct starfive_wdt *wdt, u32 ticks)
+ 
+ /* Write unlock-key to unlock. Write other value to lock. */
+ static void starfive_wdt_unlock(struct starfive_wdt *wdt)
++	__acquires(&wdt->lock)
+ {
+ 	spin_lock(&wdt->lock);
+ 	writel(wdt->variant->unlock_key, wdt->base + wdt->variant->unlock);
+ }
+ 
+ static void starfive_wdt_lock(struct starfive_wdt *wdt)
++	__releases(&wdt->lock)
+ {
+ 	writel(~wdt->variant->unlock_key, wdt->base + wdt->variant->unlock);
+ 	spin_unlock(&wdt->lock);
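
The starfive-wdt hunk adds only sparse annotations: __acquires()/__releases() tell the static checker that the lock taken in starfive_wdt_unlock() is legitimately dropped in a different function, starfive_wdt_lock(), silencing context-imbalance warnings. They expand to nothing in a normal build, as this compilable sketch shows (pthread mutex in place of the spinlock):

    #include <pthread.h>

    #ifdef __CHECKER__
    # define __acquires(x)  __attribute__((context(x, 0, 1)))
    # define __releases(x)  __attribute__((context(x, 1, 0)))
    #else
    # define __acquires(x)
    # define __releases(x)
    #endif

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void wdt_unlock_hw(void) __acquires(&lock)  /* takes the lock */
    {
        pthread_mutex_lock(&lock);
        /* ... write the unlock key to the device ... */
    }

    static void wdt_lock_hw(void) __releases(&lock)    /* drops it */
    {
        /* ... write a lock value to the device ... */
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        wdt_unlock_hw();
        wdt_lock_hw();
        return 0;
    }
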
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index 4440e626b7975..42adc2c1e06b3 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -11,6 +11,7 @@
+ #include <linux/kernel.h>
+ #include <linux/errno.h>
+ #include <linux/dma-buf.h>
++#include <linux/dma-direct.h>
+ #include <linux/slab.h>
+ #include <linux/types.h>
+ #include <linux/uaccess.h>
+@@ -50,7 +51,7 @@ struct gntdev_dmabuf {
+ 
+ 	/* Number of pages this buffer has. */
+ 	int nr_pages;
+-	/* Pages of this buffer. */
++	/* Pages of this buffer (only for dma-buf export). */
+ 	struct page **pages;
+ };
+ 
+@@ -484,7 +485,7 @@ out:
+ /* DMA buffer import support. */
+ 
+ static int
+-dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
++dmabuf_imp_grant_foreign_access(unsigned long *gfns, u32 *refs,
+ 				int count, int domid)
+ {
+ 	grant_ref_t priv_gref_head;
+@@ -507,7 +508,7 @@ dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
+ 		}
+ 
+ 		gnttab_grant_foreign_access_ref(cur_ref, domid,
+-						xen_page_to_gfn(pages[i]), 0);
++						gfns[i], 0);
+ 		refs[i] = cur_ref;
+ 	}
+ 
+@@ -529,7 +530,6 @@ static void dmabuf_imp_end_foreign_access(u32 *refs, int count)
+ 
+ static void dmabuf_imp_free_storage(struct gntdev_dmabuf *gntdev_dmabuf)
+ {
+-	kfree(gntdev_dmabuf->pages);
+ 	kfree(gntdev_dmabuf->u.imp.refs);
+ 	kfree(gntdev_dmabuf);
+ }
+@@ -549,12 +549,6 @@ static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
+ 	if (!gntdev_dmabuf->u.imp.refs)
+ 		goto fail;
+ 
+-	gntdev_dmabuf->pages = kcalloc(count,
+-				       sizeof(gntdev_dmabuf->pages[0]),
+-				       GFP_KERNEL);
+-	if (!gntdev_dmabuf->pages)
+-		goto fail;
+-
+ 	gntdev_dmabuf->nr_pages = count;
+ 
+ 	for (i = 0; i < count; i++)
+@@ -576,7 +570,8 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
+ 	struct dma_buf *dma_buf;
+ 	struct dma_buf_attachment *attach;
+ 	struct sg_table *sgt;
+-	struct sg_page_iter sg_iter;
++	struct sg_dma_page_iter sg_iter;
++	unsigned long *gfns;
+ 	int i;
+ 
+ 	dma_buf = dma_buf_get(fd);
+@@ -624,26 +619,31 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
+ 
+ 	gntdev_dmabuf->u.imp.sgt = sgt;
+ 
+-	/* Now convert sgt to array of pages and check for page validity. */
++	gfns = kcalloc(count, sizeof(*gfns), GFP_KERNEL);
++	if (!gfns) {
++		ret = ERR_PTR(-ENOMEM);
++		goto fail_unmap;
++	}
++
++	/*
++	 * Now convert sgt to array of gfns without accessing underlying pages.
++	 * It is not allowed to access the underlying struct page of an sg table
++	 * exported by DMA-buf, but since we deal with a special Xen DMA device here
++	 * (not a normal physical one), look at the DMA addresses in the sg table
++	 * and then calculate gfns directly from them.
++	 */
+ 	i = 0;
+-	for_each_sgtable_page(sgt, &sg_iter, 0) {
+-		struct page *page = sg_page_iter_page(&sg_iter);
+-		/*
+-		 * Check if page is valid: this can happen if we are given
+-		 * a page from VRAM or other resources which are not backed
+-		 * by a struct page.
+-		 */
+-		if (!pfn_valid(page_to_pfn(page))) {
+-			ret = ERR_PTR(-EINVAL);
+-			goto fail_unmap;
+-		}
++	for_each_sgtable_dma_page(sgt, &sg_iter, 0) {
++		dma_addr_t addr = sg_page_iter_dma_address(&sg_iter);
++		unsigned long pfn = bfn_to_pfn(XEN_PFN_DOWN(dma_to_phys(dev, addr)));
+ 
+-		gntdev_dmabuf->pages[i++] = page;
++		gfns[i++] = pfn_to_gfn(pfn);
+ 	}
+ 
+-	ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gntdev_dmabuf->pages,
++	ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gfns,
+ 						      gntdev_dmabuf->u.imp.refs,
+ 						      count, domid));
++	kfree(gfns);
+ 	if (IS_ERR(ret))
+ 		goto fail_end_access;
+ 
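
Instead of dereferencing struct page (which is not allowed for DMA-buf-exported sg tables), the import path now computes guest frame numbers from the DMA addresses. With the Xen helpers stripped away, and assuming an identity bfn/pfn/gfn mapping, the arithmetic reduces to a page shift; a sketch with a 4 KiB page:

#include <stdio.h>
#include <stdint.h>

#define XEN_PAGE_SHIFT  12                      /* 4 KiB Xen page */
#define XEN_PFN_DOWN(x) ((x) >> XEN_PAGE_SHIFT)

int main(void)
{
        uint64_t dma_addr = 0x12345678000ULL;   /* from sg_page_iter_dma_address() */
        /* On a 1:1 mapped domain bfn_to_pfn()/pfn_to_gfn() are identity,
         * so the gfn is just the address divided by the page size. */
        uint64_t gfn = XEN_PFN_DOWN(dma_addr);

        printf("gfn = %#llx\n", (unsigned long long)gfn);
        return 0;
}
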
+diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h
+index 731e3d14b67d3..0e8418066a482 100644
+--- a/fs/9p/v9fs_vfs.h
++++ b/fs/9p/v9fs_vfs.h
+@@ -42,6 +42,7 @@ struct inode *v9fs_alloc_inode(struct super_block *sb);
+ void v9fs_free_inode(struct inode *inode);
+ struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode,
+ 			     dev_t rdev);
++void v9fs_set_netfs_context(struct inode *inode);
+ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+ 		    struct inode *inode, umode_t mode, dev_t rdev);
+ void v9fs_evict_inode(struct inode *inode);
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index b845ee18a80be..90dc5ef755160 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -246,7 +246,7 @@ void v9fs_free_inode(struct inode *inode)
+ /*
+  * Set parameters for the netfs library
+  */
+-static void v9fs_set_netfs_context(struct inode *inode)
++void v9fs_set_netfs_context(struct inode *inode)
+ {
+ 	struct v9fs_inode *v9inode = V9FS_I(inode);
+ 	netfs_inode_init(&v9inode->netfs, &v9fs_req_ops);
+@@ -326,8 +326,6 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+ 		err = -EINVAL;
+ 		goto error;
+ 	}
+-
+-	v9fs_set_netfs_context(inode);
+ error:
+ 	return err;
+ 
+@@ -359,6 +357,7 @@ struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode, dev_t rdev)
+ 		iput(inode);
+ 		return ERR_PTR(err);
+ 	}
++	v9fs_set_netfs_context(inode);
+ 	return inode;
+ }
+ 
+@@ -464,6 +463,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
+ 		goto error;
+ 
+ 	v9fs_stat2inode(st, inode, sb, 0);
++	v9fs_set_netfs_context(inode);
+ 	v9fs_cache_inode_get_cookie(inode);
+ 	unlock_new_inode(inode);
+ 	return inode;
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index c7319af2f4711..d0636b99f05b9 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -128,6 +128,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
+ 		goto error;
+ 
+ 	v9fs_stat2inode_dotl(st, inode, 0);
++	v9fs_set_netfs_context(inode);
+ 	v9fs_cache_inode_get_cookie(inode);
+ 	retval = v9fs_get_acl(inode, fid);
+ 	if (retval)
+diff --git a/fs/afs/callback.c b/fs/afs/callback.c
+index a484fa6428081..90f9b2a46ff48 100644
+--- a/fs/afs/callback.c
++++ b/fs/afs/callback.c
+@@ -110,13 +110,14 @@ static struct afs_volume *afs_lookup_volume_rcu(struct afs_cell *cell,
+ {
+ 	struct afs_volume *volume = NULL;
+ 	struct rb_node *p;
+-	int seq = 0;
++	int seq = 1;
+ 
+ 	do {
+ 		/* Unfortunately, rbtree walking doesn't give reliable results
+ 		 * under just the RCU read lock, so we have to check for
+ 		 * changes.
+ 		 */
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&cell->volume_lock, &seq);
+ 
+ 		p = rcu_dereference_raw(cell->volumes.rb_node);
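
Starting seq at 1 and incrementing before each pass makes the first read_seqbegin_or_lock() call see an even value (lockless) and any retry see an odd one (take the lock); the old seq = 0 stayed even across retries, so the walk never escalated to the lock. A toy retry loop with the same even/odd convention (the race is simulated):

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
        bool locked = false;
        int seq = 1;            /* was 0; 0 stayed even, never locked */

        do {
                seq++;          /* 2 on the first pass, odd afterwards */
                if (seq & 1) {
                        locked = true;          /* read_seqlock_excl() */
                        printf("pass: locked\n");
                } else {
                        printf("pass: lockless\n");
                }
                /* ... walk the rbtree ... */
        } while (!locked);      /* pretend the lockless pass raced */
        printf("done under lock\n");
        return 0;
}
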
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 2c0b8dc3dd0d8..9c02f328c966c 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -4887,13 +4887,15 @@ int ceph_encode_dentry_release(void **p, struct dentry *dentry,
+ 			       struct inode *dir,
+ 			       int mds, int drop, int unless)
+ {
+-	struct dentry *parent = NULL;
+ 	struct ceph_mds_request_release *rel = *p;
+ 	struct ceph_dentry_info *di = ceph_dentry(dentry);
+ 	struct ceph_client *cl;
+ 	int force = 0;
+ 	int ret;
+ 
++	/* This shouldn't happen */
++	BUG_ON(!dir);
++
+ 	/*
+ 	 * force a record for the directory caps if we have a dentry lease.
+ 	 * this is racy (can't take i_ceph_lock and d_lock together), but it
+@@ -4903,14 +4905,9 @@ int ceph_encode_dentry_release(void **p, struct dentry *dentry,
+ 	spin_lock(&dentry->d_lock);
+ 	if (di->lease_session && di->lease_session->s_mds == mds)
+ 		force = 1;
+-	if (!dir) {
+-		parent = dget(dentry->d_parent);
+-		dir = d_inode(parent);
+-	}
+ 	spin_unlock(&dentry->d_lock);
+ 
+ 	ret = ceph_encode_inode_release(p, dir, mds, drop, unless, force);
+-	dput(parent);
+ 
+ 	cl = ceph_inode_to_client(dir);
+ 	spin_lock(&dentry->d_lock);
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index d95eb525519a6..558c3af44449c 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4128,12 +4128,12 @@ static void handle_session(struct ceph_mds_session *session,
+ 			pr_info_client(cl, "mds%d reconnect success\n",
+ 				       session->s_mds);
+ 
++		session->s_features = features;
+ 		if (session->s_state == CEPH_MDS_SESSION_OPEN) {
+ 			pr_notice_client(cl, "mds%d is already opened\n",
+ 					 session->s_mds);
+ 		} else {
+ 			session->s_state = CEPH_MDS_SESSION_OPEN;
+-			session->s_features = features;
+ 			renewed_caps(mdsc, session, 0);
+ 			if (test_bit(CEPHFS_FEATURE_METRIC_COLLECT,
+ 				     &session->s_features))
+diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c
+index 9d36c3532de14..06ee397e0c3a6 100644
+--- a/fs/ceph/quota.c
++++ b/fs/ceph/quota.c
+@@ -197,10 +197,10 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
+ }
+ 
+ /*
+- * This function walks through the snaprealm for an inode and returns the
+- * ceph_snap_realm for the first snaprealm that has quotas set (max_files,
++ * This function walks through the snaprealm for an inode and sets *realmp
++ * to the first snaprealm that has quotas set (max_files,
+  * max_bytes, or any, depending on the 'which_quota' argument).  If the root is
+- * reached, return the root ceph_snap_realm instead.
++ * reached, sets *realmp to the root ceph_snap_realm instead.
+  *
+  * Note that the caller is responsible for calling ceph_put_snap_realm() on the
+  * returned realm.
+@@ -211,10 +211,9 @@ void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc)
+  * this function will return -EAGAIN; otherwise, the snaprealms walk-through
+  * will be restarted.
+  */
+-static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
+-					       struct inode *inode,
+-					       enum quota_get_realm which_quota,
+-					       bool retry)
++static int get_quota_realm(struct ceph_mds_client *mdsc, struct inode *inode,
++			   enum quota_get_realm which_quota,
++			   struct ceph_snap_realm **realmp, bool retry)
+ {
+ 	struct ceph_client *cl = mdsc->fsc->client;
+ 	struct ceph_inode_info *ci = NULL;
+@@ -222,8 +221,10 @@ static struct ceph_snap_realm *get_quota_realm(struct ceph_mds_client *mdsc,
+ 	struct inode *in;
+ 	bool has_quota;
+ 
++	if (realmp)
++		*realmp = NULL;
+ 	if (ceph_snap(inode) != CEPH_NOSNAP)
+-		return NULL;
++		return 0;
+ 
+ restart:
+ 	realm = ceph_inode(inode)->i_snap_realm;
+@@ -250,7 +251,7 @@ restart:
+ 				break;
+ 			ceph_put_snap_realm(mdsc, realm);
+ 			if (!retry)
+-				return ERR_PTR(-EAGAIN);
++				return -EAGAIN;
+ 			goto restart;
+ 		}
+ 
+@@ -259,8 +260,11 @@ restart:
+ 		iput(in);
+ 
+ 		next = realm->parent;
+-		if (has_quota || !next)
+-		       return realm;
++		if (has_quota || !next) {
++			if (realmp)
++				*realmp = realm;
++			return 0;
++		}
+ 
+ 		ceph_get_snap_realm(mdsc, next);
+ 		ceph_put_snap_realm(mdsc, realm);
+@@ -269,7 +273,7 @@ restart:
+ 	if (realm)
+ 		ceph_put_snap_realm(mdsc, realm);
+ 
+-	return NULL;
++	return 0;
+ }
+ 
+ bool ceph_quota_is_same_realm(struct inode *old, struct inode *new)
+@@ -277,6 +281,7 @@ bool ceph_quota_is_same_realm(struct inode *old, struct inode *new)
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(old->i_sb);
+ 	struct ceph_snap_realm *old_realm, *new_realm;
+ 	bool is_same;
++	int ret;
+ 
+ restart:
+ 	/*
+@@ -286,9 +291,9 @@ restart:
+ 	 * dropped and we can then restart the whole operation.
+ 	 */
+ 	down_read(&mdsc->snap_rwsem);
+-	old_realm = get_quota_realm(mdsc, old, QUOTA_GET_ANY, true);
+-	new_realm = get_quota_realm(mdsc, new, QUOTA_GET_ANY, false);
+-	if (PTR_ERR(new_realm) == -EAGAIN) {
++	get_quota_realm(mdsc, old, QUOTA_GET_ANY, &old_realm, true);
++	ret = get_quota_realm(mdsc, new, QUOTA_GET_ANY, &new_realm, false);
++	if (ret == -EAGAIN) {
+ 		up_read(&mdsc->snap_rwsem);
+ 		if (old_realm)
+ 			ceph_put_snap_realm(mdsc, old_realm);
+@@ -492,8 +497,8 @@ bool ceph_quota_update_statfs(struct ceph_fs_client *fsc, struct kstatfs *buf)
+ 	bool is_updated = false;
+ 
+ 	down_read(&mdsc->snap_rwsem);
+-	realm = get_quota_realm(mdsc, d_inode(fsc->sb->s_root),
+-				QUOTA_GET_MAX_BYTES, true);
++	get_quota_realm(mdsc, d_inode(fsc->sb->s_root), QUOTA_GET_MAX_BYTES,
++			&realm, true);
+ 	up_read(&mdsc->snap_rwsem);
+ 	if (!realm)
+ 		return false;
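
get_quota_realm() now reports errors through its return value and the realm through an output parameter, so callers can tell "no realm found" apart from a failure without ERR_PTR() juggling. The convention in a self-contained sketch (names hypothetical):

#include <stdio.h>
#include <errno.h>
#include <stddef.h>

struct realm { int id; };

/* rc < 0: error; rc == 0 with *realmp == NULL: nothing found */
static int get_realm(int key, struct realm **realmp)
{
        static struct realm root = { .id = 1 };

        if (realmp)
                *realmp = NULL;
        if (key < 0)
                return -EAGAIN;         /* caller may retry */
        if (key == 0)
                return 0;               /* no quota realm: not an error */
        if (realmp)
                *realmp = &root;
        return 0;
}

int main(void)
{
        struct realm *r;
        int rc = get_realm(1, &r);

        if (rc)
                printf("error %d\n", rc);
        else if (!r)
                printf("no realm\n");
        else
                printf("realm %d\n", r->id);
        return 0;
}
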
+diff --git a/fs/dcache.c b/fs/dcache.c
+index c82ae731df9af..d1ab857a69cae 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -787,12 +787,12 @@ static inline bool fast_dput(struct dentry *dentry)
+ 	 */
+ 	if (unlikely(ret < 0)) {
+ 		spin_lock(&dentry->d_lock);
+-		if (dentry->d_lockref.count > 1) {
+-			dentry->d_lockref.count--;
++		if (WARN_ON_ONCE(dentry->d_lockref.count <= 0)) {
+ 			spin_unlock(&dentry->d_lock);
+ 			return true;
+ 		}
+-		return false;
++		dentry->d_lockref.count--;
++		goto locked;
+ 	}
+ 
+ 	/*
+@@ -850,6 +850,7 @@ static inline bool fast_dput(struct dentry *dentry)
+ 	 * else could have killed it and marked it dead. Either way, we
+ 	 * don't need to do anything else.
+ 	 */
++locked:
+ 	if (dentry->d_lockref.count) {
+ 		spin_unlock(&dentry->d_lock);
+ 		return true;
+diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
+index b0e8774c435a4..d7193687b9b4c 100644
+--- a/fs/ecryptfs/inode.c
++++ b/fs/ecryptfs/inode.c
+@@ -78,6 +78,14 @@ static struct inode *__ecryptfs_get_inode(struct inode *lower_inode,
+ 
+ 	if (lower_inode->i_sb != ecryptfs_superblock_to_lower(sb))
+ 		return ERR_PTR(-EXDEV);
++
++	/* Reject dealing with casefold directories. */
++	if (IS_CASEFOLDED(lower_inode)) {
++		pr_err_ratelimited("%s: Can't handle casefolded directory.\n",
++				   __func__);
++		return ERR_PTR(-EREMOTE);
++	}
++
+ 	if (!igrab(lower_inode))
+ 		return ERR_PTR(-ESTALE);
+ 	inode = iget5_locked(sb, (unsigned long)lower_inode,
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index a33cd6757f984..1c0e6167d8e73 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -815,7 +815,6 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ 
+ 	if (ztailpacking) {
+ 		pcl->obj.index = 0;	/* which indicates ztailpacking */
+-		pcl->pageofs_in = erofs_blkoff(fe->inode->i_sb, map->m_pa);
+ 		pcl->tailpacking_size = map->m_plen;
+ 	} else {
+ 		pcl->obj.index = map->m_pa >> PAGE_SHIFT;
+@@ -893,6 +892,7 @@ static int z_erofs_pcluster_begin(struct z_erofs_decompress_frontend *fe)
+ 		}
+ 		get_page(map->buf.page);
+ 		WRITE_ONCE(fe->pcl->compressed_bvecs[0].page, map->buf.page);
++		fe->pcl->pageofs_in = map->m_pa & ~PAGE_MASK;
+ 		fe->mode = Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE;
+ 	}
+ 	/* file-backed inplace I/O pages are traversed in reverse order */
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 7a1a24ae4a2d8..e313c936351d5 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -82,29 +82,26 @@ static int z_erofs_load_full_lcluster(struct z_erofs_maprecorder *m,
+ }
+ 
+ static unsigned int decode_compactedbits(unsigned int lobits,
+-					 unsigned int lomask,
+ 					 u8 *in, unsigned int pos, u8 *type)
+ {
+ 	const unsigned int v = get_unaligned_le32(in + pos / 8) >> (pos & 7);
+-	const unsigned int lo = v & lomask;
++	const unsigned int lo = v & ((1 << lobits) - 1);
+ 
+ 	*type = (v >> lobits) & 3;
+ 	return lo;
+ }
+ 
+-static int get_compacted_la_distance(unsigned int lclusterbits,
++static int get_compacted_la_distance(unsigned int lobits,
+ 				     unsigned int encodebits,
+ 				     unsigned int vcnt, u8 *in, int i)
+ {
+-	const unsigned int lomask = (1 << lclusterbits) - 1;
+ 	unsigned int lo, d1 = 0;
+ 	u8 type;
+ 
+ 	DBG_BUGON(i >= vcnt);
+ 
+ 	do {
+-		lo = decode_compactedbits(lclusterbits, lomask,
+-					  in, encodebits * i, &type);
++		lo = decode_compactedbits(lobits, in, encodebits * i, &type);
+ 
+ 		if (type != Z_EROFS_LCLUSTER_TYPE_NONHEAD)
+ 			return d1;
+@@ -123,15 +120,14 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ {
+ 	struct erofs_inode *const vi = EROFS_I(m->inode);
+ 	const unsigned int lclusterbits = vi->z_logical_clusterbits;
+-	const unsigned int lomask = (1 << lclusterbits) - 1;
+-	unsigned int vcnt, base, lo, encodebits, nblk, eofs;
++	unsigned int vcnt, base, lo, lobits, encodebits, nblk, eofs;
+ 	int i;
+ 	u8 *in, type;
+ 	bool big_pcluster;
+ 
+ 	if (1 << amortizedshift == 4 && lclusterbits <= 14)
+ 		vcnt = 2;
+-	else if (1 << amortizedshift == 2 && lclusterbits == 12)
++	else if (1 << amortizedshift == 2 && lclusterbits <= 12)
+ 		vcnt = 16;
+ 	else
+ 		return -EOPNOTSUPP;
+@@ -140,6 +136,7 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 	m->nextpackoff = round_down(pos, vcnt << amortizedshift) +
+ 			 (vcnt << amortizedshift);
+ 	big_pcluster = vi->z_advise & Z_EROFS_ADVISE_BIG_PCLUSTER_1;
++	lobits = max(lclusterbits, ilog2(Z_EROFS_LI_D0_CBLKCNT) + 1U);
+ 	encodebits = ((vcnt << amortizedshift) - sizeof(__le32)) * 8 / vcnt;
+ 	eofs = erofs_blkoff(m->inode->i_sb, pos);
+ 	base = round_down(eofs, vcnt << amortizedshift);
+@@ -147,15 +144,14 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 
+ 	i = (eofs - base) >> amortizedshift;
+ 
+-	lo = decode_compactedbits(lclusterbits, lomask,
+-				  in, encodebits * i, &type);
++	lo = decode_compactedbits(lobits, in, encodebits * i, &type);
+ 	m->type = type;
+ 	if (type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) {
+ 		m->clusterofs = 1 << lclusterbits;
+ 
+ 		/* figure out lookahead_distance: delta[1] if needed */
+ 		if (lookahead)
+-			m->delta[1] = get_compacted_la_distance(lclusterbits,
++			m->delta[1] = get_compacted_la_distance(lobits,
+ 						encodebits, vcnt, in, i);
+ 		if (lo & Z_EROFS_LI_D0_CBLKCNT) {
+ 			if (!big_pcluster) {
+@@ -174,8 +170,8 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 		 * of which lo saves delta[1] rather than delta[0].
+ 		 * Hence, get delta[0] by the previous lcluster indirectly.
+ 		 */
+-		lo = decode_compactedbits(lclusterbits, lomask,
+-					  in, encodebits * (i - 1), &type);
++		lo = decode_compactedbits(lobits, in,
++					  encodebits * (i - 1), &type);
+ 		if (type != Z_EROFS_LCLUSTER_TYPE_NONHEAD)
+ 			lo = 0;
+ 		else if (lo & Z_EROFS_LI_D0_CBLKCNT)
+@@ -190,8 +186,8 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 		nblk = 1;
+ 		while (i > 0) {
+ 			--i;
+-			lo = decode_compactedbits(lclusterbits, lomask,
+-						  in, encodebits * i, &type);
++			lo = decode_compactedbits(lobits, in,
++						  encodebits * i, &type);
+ 			if (type == Z_EROFS_LCLUSTER_TYPE_NONHEAD)
+ 				i -= lo;
+ 
+@@ -202,8 +198,8 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 		nblk = 0;
+ 		while (i > 0) {
+ 			--i;
+-			lo = decode_compactedbits(lclusterbits, lomask,
+-						  in, encodebits * i, &type);
++			lo = decode_compactedbits(lobits, in,
++						  encodebits * i, &type);
+ 			if (type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) {
+ 				if (lo & Z_EROFS_LI_D0_CBLKCNT) {
+ 					--i;
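
decode_compactedbits() derives its low-bits mask from lobits inline, and lobits is widened to at least ilog2(Z_EROFS_LI_D0_CBLKCNT) + 1 so the CBLKCNT marker bit is never masked off for small lclusterbits. The extraction itself, as a standalone decoder over made-up packed data:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static unsigned int get_unaligned_le32(const uint8_t *p)
{
        uint32_t v;

        memcpy(&v, p, sizeof(v));       /* assume little-endian host */
        return v;
}

/* each field is (lobits + 2) wide: a "lo" value plus a 2-bit type */
static unsigned int decode(unsigned int lobits, const uint8_t *in,
                           unsigned int pos, uint8_t *type)
{
        const unsigned int v = get_unaligned_le32(in + pos / 8) >> (pos & 7);
        const unsigned int lo = v & ((1 << lobits) - 1);

        *type = (v >> lobits) & 3;
        return lo;
}

int main(void)
{
        /* one 14-bit record: lo = 0xabc (12 bits), type = 2 */
        uint32_t packed = 0xabc | (2u << 12);
        uint8_t buf[8] = { 0 };
        uint8_t type;

        memcpy(buf, &packed, sizeof(packed));
        printf("lo=%#x type=%u\n", decode(12, buf, 0, &type), type);
        return 0;
}
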
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index d5efe076d3d3f..01299b55a567a 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4523,7 +4523,8 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ 	 * Round up offset. This is not fallocate, we need to zero out
+ 	 * blocks, so convert interior block aligned part of the range to
+ 	 * unwritten and possibly manually zero out unaligned parts of the
+-	 * range.
++	 * range. Here, start and partial_begin are inclusive, end and
++	 * partial_end are exclusive.
+ 	 */
+ 	start = round_up(offset, 1 << blkbits);
+ 	end = round_down((offset + len), 1 << blkbits);
+@@ -4609,7 +4610,8 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ 		 * disk in case of crash before zeroing trans is committed.
+ 		 */
+ 		if (ext4_should_journal_data(inode)) {
+-			ret = filemap_write_and_wait_range(mapping, start, end);
++			ret = filemap_write_and_wait_range(mapping, start,
++							   end - 1);
+ 			if (ret) {
+ 				filemap_invalidate_unlock(mapping);
+ 				goto out_mutex;
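
The end - 1 matters because start/end in ext4_zero_range() form a half-open byte range while filemap_write_and_wait_range() takes an inclusive last byte. The two conventions side by side (helper names invented for the demo):

#include <stdio.h>

/* half-open range: [start, end) */
static long count_half_open(long start, long end)
{
        return end - start;
}

/* closed range: [start, last], as filemap_write_and_wait_range() takes */
static long count_closed(long start, long last)
{
        return last - start + 1;
}

int main(void)
{
        long start = 4096, end = 8192;  /* one 4 KiB block */

        /* converting [start, end) to a closed range means passing end - 1 */
        printf("%ld == %ld\n", count_half_open(start, end),
               count_closed(start, end - 1));
        return 0;
}
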
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index ab023d709f72b..8408318e1d32e 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6758,13 +6758,15 @@ static int ext4_try_to_trim_range(struct super_block *sb,
+ __acquires(ext4_group_lock_ptr(sb, e4b->bd_group))
+ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ {
+-	ext4_grpblk_t next, count, free_count;
++	ext4_grpblk_t next, count, free_count, last, origin_start;
+ 	bool set_trimmed = false;
+ 	void *bitmap;
+ 
++	last = ext4_last_grp_cluster(sb, e4b->bd_group);
+ 	bitmap = e4b->bd_bitmap;
+-	if (start == 0 && max >= ext4_last_grp_cluster(sb, e4b->bd_group))
++	if (start == 0 && max >= last)
+ 		set_trimmed = true;
++	origin_start = start;
+ 	start = max(e4b->bd_info->bb_first_free, start);
+ 	count = 0;
+ 	free_count = 0;
+@@ -6773,7 +6775,10 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ 		start = mb_find_next_zero_bit(bitmap, max + 1, start);
+ 		if (start > max)
+ 			break;
+-		next = mb_find_next_bit(bitmap, max + 1, start);
++
++		next = mb_find_next_bit(bitmap, last + 1, start);
++		if (origin_start == 0 && next >= last)
++			set_trimmed = true;
+ 
+ 		if ((next - start) >= minblocks) {
+ 			int ret = ext4_trim_extent(sb, start, next - start, e4b);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 4fe061edefdde..e168a9f596001 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -218,17 +218,24 @@ struct ext4_new_flex_group_data {
+ 						   in the flex group */
+ 	__u16 *bg_flags;			/* block group flags of groups
+ 						   in @groups */
++	ext4_group_t resize_bg;			/* number of allocated
++						   new_group_data */
+ 	ext4_group_t count;			/* number of groups in @groups
+ 						 */
+ };
+ 
++/*
++ * Avoid memory allocation failures caused by adding too many groups at a time.
++ */
++#define MAX_RESIZE_BG				16384
++
+ /*
+  * alloc_flex_gd() allocates a ext4_new_flex_group_data with size of
+  * @flexbg_size.
+  *
+  * Returns NULL on failure otherwise address of the allocated structure.
+  */
+-static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned long flexbg_size)
++static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned int flexbg_size)
+ {
+ 	struct ext4_new_flex_group_data *flex_gd;
+ 
+@@ -236,17 +243,18 @@ static struct ext4_new_flex_group_data *alloc_flex_gd(unsigned long flexbg_size)
+ 	if (flex_gd == NULL)
+ 		goto out3;
+ 
+-	if (flexbg_size >= UINT_MAX / sizeof(struct ext4_new_group_data))
+-		goto out2;
+-	flex_gd->count = flexbg_size;
++	if (unlikely(flexbg_size > MAX_RESIZE_BG))
++		flex_gd->resize_bg = MAX_RESIZE_BG;
++	else
++		flex_gd->resize_bg = flexbg_size;
+ 
+-	flex_gd->groups = kmalloc_array(flexbg_size,
++	flex_gd->groups = kmalloc_array(flex_gd->resize_bg,
+ 					sizeof(struct ext4_new_group_data),
+ 					GFP_NOFS);
+ 	if (flex_gd->groups == NULL)
+ 		goto out2;
+ 
+-	flex_gd->bg_flags = kmalloc_array(flexbg_size, sizeof(__u16),
++	flex_gd->bg_flags = kmalloc_array(flex_gd->resize_bg, sizeof(__u16),
+ 					  GFP_NOFS);
+ 	if (flex_gd->bg_flags == NULL)
+ 		goto out1;
+@@ -283,7 +291,7 @@ static void free_flex_gd(struct ext4_new_flex_group_data *flex_gd)
+  */
+ static int ext4_alloc_group_tables(struct super_block *sb,
+ 				struct ext4_new_flex_group_data *flex_gd,
+-				int flexbg_size)
++				unsigned int flexbg_size)
+ {
+ 	struct ext4_new_group_data *group_data = flex_gd->groups;
+ 	ext4_fsblk_t start_blk;
+@@ -384,12 +392,12 @@ next_group:
+ 		group = group_data[0].group;
+ 
+ 		printk(KERN_DEBUG "EXT4-fs: adding a flex group with "
+-		       "%d groups, flexbg size is %d:\n", flex_gd->count,
++		       "%u groups, flexbg size is %u:\n", flex_gd->count,
+ 		       flexbg_size);
+ 
+ 		for (i = 0; i < flex_gd->count; i++) {
+ 			ext4_debug(
+-			       "adding %s group %u: %u blocks (%d free, %d mdata blocks)\n",
++			       "adding %s group %u: %u blocks (%u free, %u mdata blocks)\n",
+ 			       ext4_bg_has_super(sb, group + i) ? "normal" :
+ 			       "no-super", group + i,
+ 			       group_data[i].blocks_count,
+@@ -1605,8 +1613,7 @@ exit:
+ 
+ static int ext4_setup_next_flex_gd(struct super_block *sb,
+ 				    struct ext4_new_flex_group_data *flex_gd,
+-				    ext4_fsblk_t n_blocks_count,
+-				    unsigned long flexbg_size)
++				    ext4_fsblk_t n_blocks_count)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	struct ext4_super_block *es = sbi->s_es;
+@@ -1630,7 +1637,7 @@ static int ext4_setup_next_flex_gd(struct super_block *sb,
+ 	BUG_ON(last);
+ 	ext4_get_group_no_and_offset(sb, n_blocks_count - 1, &n_group, &last);
+ 
+-	last_group = group | (flexbg_size - 1);
++	last_group = group | (flex_gd->resize_bg - 1);
+ 	if (last_group > n_group)
+ 		last_group = n_group;
+ 
+@@ -1990,8 +1997,9 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
+ 	ext4_fsblk_t o_blocks_count;
+ 	ext4_fsblk_t n_blocks_count_retry = 0;
+ 	unsigned long last_update_time = 0;
+-	int err = 0, flexbg_size = 1 << sbi->s_log_groups_per_flex;
++	int err = 0;
+ 	int meta_bg;
++	unsigned int flexbg_size = ext4_flex_bg_size(sbi);
+ 
+ 	/* See if the device is actually as big as what was requested */
+ 	bh = ext4_sb_bread(sb, n_blocks_count - 1, 0);
+@@ -2132,8 +2140,7 @@ retry:
+ 	/* Add flex groups. Note that a regular group is a
+ 	 * flex group with 1 group.
+ 	 */
+-	while (ext4_setup_next_flex_gd(sb, flex_gd, n_blocks_count,
+-					      flexbg_size)) {
++	while (ext4_setup_next_flex_gd(sb, flex_gd, n_blocks_count)) {
+ 		if (time_is_before_jiffies(last_update_time + HZ * 10)) {
+ 			if (last_update_time)
+ 				ext4_msg(sb, KERN_INFO,
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 36e5dab6baaee..62119f3f7206d 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1036,8 +1036,10 @@ static void set_cluster_dirty(struct compress_ctx *cc)
+ 	int i;
+ 
+ 	for (i = 0; i < cc->cluster_size; i++)
+-		if (cc->rpages[i])
++		if (cc->rpages[i]) {
+ 			set_page_dirty(cc->rpages[i]);
++			set_page_private_gcing(cc->rpages[i]);
++		}
+ }
+ 
+ static int prepare_compress_overwrite(struct compress_ctx *cc,
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 8912511980ae2..a05781e708d66 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1317,6 +1317,7 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
+ 			}
+ 			memcpy_page(pdst, 0, psrc, 0, PAGE_SIZE);
+ 			set_page_dirty(pdst);
++			set_page_private_gcing(pdst);
+ 			f2fs_put_page(pdst, 1);
+ 			f2fs_put_page(psrc, 1);
+ 
+@@ -4059,6 +4060,7 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
+ 		f2fs_bug_on(F2FS_I_SB(inode), !page);
+ 
+ 		set_page_dirty(page);
++		set_page_private_gcing(page);
+ 		f2fs_put_page(page, 1);
+ 		f2fs_put_page(page, 0);
+ 	}
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index b56d0f1078a76..d0f24ccbd1ac6 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -712,7 +712,16 @@ retry_dn:
+ 		 */
+ 		if (dest == NEW_ADDR) {
+ 			f2fs_truncate_data_blocks_range(&dn, 1);
+-			f2fs_reserve_new_block(&dn);
++			do {
++				err = f2fs_reserve_new_block(&dn);
++				if (err == -ENOSPC) {
++					f2fs_bug_on(sbi, 1);
++					break;
++				}
++			} while (err &&
++				IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
++			if (err)
++				goto err;
+ 			continue;
+ 		}
+ 
+@@ -720,12 +729,14 @@ retry_dn:
+ 		if (f2fs_is_valid_blkaddr(sbi, dest, META_POR)) {
+ 
+ 			if (src == NULL_ADDR) {
+-				err = f2fs_reserve_new_block(&dn);
+-				while (err &&
+-				       IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION))
++				do {
+ 					err = f2fs_reserve_new_block(&dn);
+-				/* We should not get -ENOSPC */
+-				f2fs_bug_on(sbi, err);
++					if (err == -ENOSPC) {
++						f2fs_bug_on(sbi, 1);
++						break;
++					}
++				} while (err &&
++					IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
+ 				if (err)
+ 					goto err;
+ 			}
+@@ -906,6 +917,8 @@ skip:
+ 	if (!err && fix_curseg_write_pointer && !f2fs_readonly(sbi->sb) &&
+ 			f2fs_sb_has_blkzoned(sbi)) {
+ 		err = f2fs_fix_curseg_write_pointer(sbi);
++		if (!err)
++			err = f2fs_check_write_pointer(sbi);
+ 		ret = err;
+ 	}
+ 
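
The reserve loop above now retries only while fault injection is compiled in, and a real -ENOSPC trips f2fs_bug_on() and breaks out instead of spinning forever. The same control flow in a userspace model with a simulated fault source:

#include <stdio.h>
#include <errno.h>

#define FAULT_INJECTION 1       /* stand-in for CONFIG_F2FS_FAULT_INJECTION */

static int calls;

static int reserve_new_block(void)
{
        /* simulate two injected faults, then success */
        return ++calls < 3 ? -EINVAL : 0;
}

int main(void)
{
        int err;

        do {
                err = reserve_new_block();
                if (err == -ENOSPC) {
                        fprintf(stderr, "BUG: unexpected ENOSPC\n");
                        break;          /* f2fs_bug_on(); do not spin */
                }
        } while (err && FAULT_INJECTION);

        printf("done after %d calls, err=%d\n", calls, err);
        return 0;
}
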
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 11c77757ead9e..cb3cda1390adb 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -63,10 +63,10 @@
+  */
+ static void dbAllocBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 			int nblocks);
+-static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval);
+-static int dbBackSplit(dmtree_t * tp, int leafno);
+-static int dbJoin(dmtree_t * tp, int leafno, int newval);
+-static void dbAdjTree(dmtree_t * tp, int leafno, int newval);
++static void dbSplit(dmtree_t *tp, int leafno, int splitsz, int newval, bool is_ctl);
++static int dbBackSplit(dmtree_t *tp, int leafno, bool is_ctl);
++static int dbJoin(dmtree_t *tp, int leafno, int newval, bool is_ctl);
++static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl);
+ static int dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc,
+ 		    int level);
+ static int dbAllocAny(struct bmap * bmp, s64 nblocks, int l2nb, s64 * results);
+@@ -2103,7 +2103,7 @@ static int dbFreeDmap(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 		 * system.
+ 		 */
+ 		if (dp->tree.stree[word] == NOFREE)
+-			dbBackSplit((dmtree_t *) & dp->tree, word);
++			dbBackSplit((dmtree_t *)&dp->tree, word, false);
+ 
+ 		dbAllocBits(bmp, dp, blkno, nblocks);
+ 	}
+@@ -2189,7 +2189,7 @@ static void dbAllocBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 			 * the binary system of the leaves if need be.
+ 			 */
+ 			dbSplit(tp, word, BUDMIN,
+-				dbMaxBud((u8 *) & dp->wmap[word]));
++				dbMaxBud((u8 *)&dp->wmap[word]), false);
+ 
+ 			word += 1;
+ 		} else {
+@@ -2229,7 +2229,7 @@ static void dbAllocBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 				 * system of the leaves to reflect the current
+ 				 * allocation (size).
+ 				 */
+-				dbSplit(tp, word, size, NOFREE);
++				dbSplit(tp, word, size, NOFREE, false);
+ 
+ 				/* get the number of dmap words handled */
+ 				nw = BUDSIZE(size, BUDMIN);
+@@ -2336,7 +2336,7 @@ static int dbFreeBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 			/* update the leaf for this dmap word.
+ 			 */
+ 			rc = dbJoin(tp, word,
+-				    dbMaxBud((u8 *) & dp->wmap[word]));
++				    dbMaxBud((u8 *)&dp->wmap[word]), false);
+ 			if (rc)
+ 				return rc;
+ 
+@@ -2369,7 +2369,7 @@ static int dbFreeBits(struct bmap * bmp, struct dmap * dp, s64 blkno,
+ 
+ 				/* update the leaf.
+ 				 */
+-				rc = dbJoin(tp, word, size);
++				rc = dbJoin(tp, word, size, false);
+ 				if (rc)
+ 					return rc;
+ 
+@@ -2521,16 +2521,16 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+ 		 * that it is at the front of a binary buddy system.
+ 		 */
+ 		if (oldval == NOFREE) {
+-			rc = dbBackSplit((dmtree_t *) dcp, leafno);
++			rc = dbBackSplit((dmtree_t *)dcp, leafno, true);
+ 			if (rc) {
+ 				release_metapage(mp);
+ 				return rc;
+ 			}
+ 			oldval = dcp->stree[ti];
+ 		}
+-		dbSplit((dmtree_t *) dcp, leafno, dcp->budmin, newval);
++		dbSplit((dmtree_t *) dcp, leafno, dcp->budmin, newval, true);
+ 	} else {
+-		rc = dbJoin((dmtree_t *) dcp, leafno, newval);
++		rc = dbJoin((dmtree_t *) dcp, leafno, newval, true);
+ 		if (rc) {
+ 			release_metapage(mp);
+ 			return rc;
+@@ -2561,7 +2561,7 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+ 				 */
+ 				if (alloc) {
+ 					dbJoin((dmtree_t *) dcp, leafno,
+-					       oldval);
++					       oldval, true);
+ 				} else {
+ 					/* the dbJoin() above might have
+ 					 * caused a larger binary buddy system
+@@ -2571,9 +2571,9 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+ 					 */
+ 					if (dcp->stree[ti] == NOFREE)
+ 						dbBackSplit((dmtree_t *)
+-							    dcp, leafno);
++							    dcp, leafno, true);
+ 					dbSplit((dmtree_t *) dcp, leafno,
+-						dcp->budmin, oldval);
++						dcp->budmin, oldval, true);
+ 				}
+ 
+ 				/* release the buffer and return the error.
+@@ -2621,7 +2621,7 @@ dbAdjCtl(struct bmap * bmp, s64 blkno, int newval, int alloc, int level)
+  *
+  * serialization: IREAD_LOCK(ipbmap) or IWRITE_LOCK(ipbmap) held on entry/exit;
+  */
+-static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
++static void dbSplit(dmtree_t *tp, int leafno, int splitsz, int newval, bool is_ctl)
+ {
+ 	int budsz;
+ 	int cursz;
+@@ -2643,7 +2643,7 @@ static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
+ 		while (cursz >= splitsz) {
+ 			/* update the buddy's leaf with its new value.
+ 			 */
+-			dbAdjTree(tp, leafno ^ budsz, cursz);
++			dbAdjTree(tp, leafno ^ budsz, cursz, is_ctl);
+ 
+ 			/* on to the next size and buddy.
+ 			 */
+@@ -2655,7 +2655,7 @@ static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
+ 	/* adjust the dmap tree to reflect the specified leaf's new
+ 	 * value.
+ 	 */
+-	dbAdjTree(tp, leafno, newval);
++	dbAdjTree(tp, leafno, newval, is_ctl);
+ }
+ 
+ 
+@@ -2686,7 +2686,7 @@ static void dbSplit(dmtree_t * tp, int leafno, int splitsz, int newval)
+  *
+  * serialization: IREAD_LOCK(ipbmap) or IWRITE_LOCK(ipbmap) held on entry/exit;
+  */
+-static int dbBackSplit(dmtree_t * tp, int leafno)
++static int dbBackSplit(dmtree_t *tp, int leafno, bool is_ctl)
+ {
+ 	int budsz, bud, w, bsz, size;
+ 	int cursz;
+@@ -2737,7 +2737,7 @@ static int dbBackSplit(dmtree_t * tp, int leafno)
+ 				 * system in two.
+ 				 */
+ 				cursz = leaf[bud] - 1;
+-				dbSplit(tp, bud, cursz, cursz);
++				dbSplit(tp, bud, cursz, cursz, is_ctl);
+ 				break;
+ 			}
+ 		}
+@@ -2765,7 +2765,7 @@ static int dbBackSplit(dmtree_t * tp, int leafno)
+  *
+  * RETURN VALUES: none
+  */
+-static int dbJoin(dmtree_t * tp, int leafno, int newval)
++static int dbJoin(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ {
+ 	int budsz, buddy;
+ 	s8 *leaf;
+@@ -2820,12 +2820,12 @@ static int dbJoin(dmtree_t * tp, int leafno, int newval)
+ 			if (leafno < buddy) {
+ 				/* leafno is the left buddy.
+ 				 */
+-				dbAdjTree(tp, buddy, NOFREE);
++				dbAdjTree(tp, buddy, NOFREE, is_ctl);
+ 			} else {
+ 				/* buddy is the left buddy and becomes
+ 				 * leafno.
+ 				 */
+-				dbAdjTree(tp, leafno, NOFREE);
++				dbAdjTree(tp, leafno, NOFREE, is_ctl);
+ 				leafno = buddy;
+ 			}
+ 
+@@ -2838,7 +2838,7 @@ static int dbJoin(dmtree_t * tp, int leafno, int newval)
+ 
+ 	/* update the leaf value.
+ 	 */
+-	dbAdjTree(tp, leafno, newval);
++	dbAdjTree(tp, leafno, newval, is_ctl);
+ 
+ 	return 0;
+ }
+@@ -2859,15 +2859,20 @@ static int dbJoin(dmtree_t * tp, int leafno, int newval)
+  *
+  * RETURN VALUES: none
+  */
+-static void dbAdjTree(dmtree_t * tp, int leafno, int newval)
++static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ {
+ 	int lp, pp, k;
+-	int max;
++	int max, size;
++
++	size = is_ctl ? CTLTREESIZE : TREESIZE;
+ 
+ 	/* pick up the index of the leaf for this leafno.
+ 	 */
+ 	lp = leafno + le32_to_cpu(tp->dmt_leafidx);
+ 
++	if (WARN_ON_ONCE(lp >= size || lp < 0))
++		return;
++
+ 	/* is the current value the same as the old value ?  if so,
+ 	 * there is nothing to do.
+ 	 */
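
dbAdjTree() now validates the leaf index against the size of the tree it is walking; dmap trees and control-page trees differ in size, which is why is_ctl has to be threaded through dbSplit()/dbBackSplit()/dbJoin(). The guard in isolation (the sizes below are placeholders, not the real TREESIZE/CTLTREESIZE values):

#include <stdio.h>

#define TREESIZE    341         /* illustrative, not the real constants */
#define CTLTREESIZE 1365

static int adj_tree(int *tree, int lp, int newval, int is_ctl)
{
        int size = is_ctl ? CTLTREESIZE : TREESIZE;

        /* a corrupted on-disk index must not become an OOB write */
        if (lp < 0 || lp >= size) {
                fprintf(stderr, "WARN: leaf index %d out of range\n", lp);
                return -1;
        }
        tree[lp] = newval;
        return 0;
}

int main(void)
{
        static int tree[CTLTREESIZE];

        adj_tree(tree, 500, 7, 1);      /* ok for a control tree */
        adj_tree(tree, 500, 7, 0);      /* rejected for a dmap tree */
        return 0;
}
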
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 92b7c533407c1..031d8f570f581 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -633,6 +633,11 @@ int dtSearch(struct inode *ip, struct component_name * key, ino_t * data,
+ 		for (base = 0, lim = p->header.nextindex; lim; lim >>= 1) {
+ 			index = base + (lim >> 1);
+ 
++			if (stbl[index] < 0) {
++				rc = -EIO;
++				goto out;
++			}
++
+ 			if (p->header.flag & BT_LEAF) {
+ 				/* uppercase leaf name to compare */
+ 				cmp =
+@@ -1970,7 +1975,7 @@ static int dtSplitRoot(tid_t tid,
+ 		do {
+ 			f = &rp->slot[fsi];
+ 			fsi = f->next;
+-		} while (fsi != -1);
++		} while (fsi >= 0);
+ 
+ 		f->next = n;
+ 	}
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index a037ee59e3985..2ec35889ad24e 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -2179,6 +2179,9 @@ static int diNewExt(struct inomap * imap, struct iag * iagp, int extno)
+ 	/* get the ag and iag numbers for this iag.
+ 	 */
+ 	agno = BLKTOAG(le64_to_cpu(iagp->agstart), sbi);
++	if (agno >= MAXAG || agno < 0)
++		return -EIO;
++
+ 	iagno = le32_to_cpu(iagp->iagnum);
+ 
+ 	/* check if this is the last free extent within the
+diff --git a/fs/jfs/jfs_mount.c b/fs/jfs/jfs_mount.c
+index 415eb65a36ffc..9b5c6a20b30c8 100644
+--- a/fs/jfs/jfs_mount.c
++++ b/fs/jfs/jfs_mount.c
+@@ -172,15 +172,15 @@ int jfs_mount(struct super_block *sb)
+ 	}
+ 	jfs_info("jfs_mount: ipimap:0x%p", ipimap);
+ 
+-	/* map further access of per fileset inodes by the fileset inode */
+-	sbi->ipimap = ipimap;
+-
+ 	/* initialize fileset inode allocation map */
+ 	if ((rc = diMount(ipimap))) {
+ 		jfs_err("jfs_mount: diMount failed w/rc = %d", rc);
+ 		goto err_ipimap;
+ 	}
+ 
++	/* map further access of per fileset inodes by the fileset inode */
++	sbi->ipimap = ipimap;
++
+ 	return rc;
+ 
+ 	/*
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 8b2bd65d70e72..62d39ecf0a466 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -676,6 +676,18 @@ struct kernfs_node *kernfs_new_node(struct kernfs_node *parent,
+ {
+ 	struct kernfs_node *kn;
+ 
++	if (parent->mode & S_ISGID) {
++		/*
++		 * This block imitates inode_init_owner() for kernfs.
++		 */
++
++		if (parent->iattr)
++			gid = parent->iattr->ia_gid;
++
++		if (flags & KERNFS_DIR)
++			mode |= S_ISGID;
++	}
++
+ 	kn = __kernfs_new_node(kernfs_root(parent), parent,
+ 			       name, mode, uid, gid, flags);
+ 	if (kn) {
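
The new block applies inode_init_owner()'s setgid-directory rule to kernfs: children of an S_ISGID directory inherit its group, and subdirectories inherit the S_ISGID bit too. That rule by itself, in a simplified sketch:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

struct node { mode_t mode; gid_t gid; };

static void init_owner(struct node *child, const struct node *parent,
                       gid_t requested_gid, int is_dir)
{
        child->gid = requested_gid;
        if (parent->mode & S_ISGID) {
                child->gid = parent->gid;       /* inherit the group */
                if (is_dir)
                        child->mode |= S_ISGID; /* propagate to subdirs */
        }
}

int main(void)
{
        struct node parent = { .mode = S_IFDIR | S_ISGID | 0775, .gid = 100 };
        struct node dir = { .mode = S_IFDIR | 0755 };

        init_owner(&dir, &parent, 0, 1);
        printf("gid=%d sgid=%d\n", (int)dir.gid, !!(dir.mode & S_ISGID));
        return 0;
}
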
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 814733ba2f4ba..9221a33f917b8 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -1336,7 +1336,7 @@ static int ocfs2_rename(struct mnt_idmap *idmap,
+ 		goto bail;
+ 	}
+ 
+-	if (S_ISDIR(old_inode->i_mode)) {
++	if (S_ISDIR(old_inode->i_mode) && new_dir != old_dir) {
+ 		u64 old_inode_parent;
+ 
+ 		update_dot_dot = 1;
+@@ -1353,8 +1353,7 @@ static int ocfs2_rename(struct mnt_idmap *idmap,
+ 			goto bail;
+ 		}
+ 
+-		if (!new_inode && new_dir != old_dir &&
+-		    new_dir->i_nlink >= ocfs2_link_max(osb)) {
++		if (!new_inode && new_dir->i_nlink >= ocfs2_link_max(osb)) {
+ 			status = -EMLINK;
+ 			goto bail;
+ 		}
+@@ -1601,6 +1600,9 @@ static int ocfs2_rename(struct mnt_idmap *idmap,
+ 			mlog_errno(status);
+ 			goto bail;
+ 		}
++	}
++
++	if (S_ISDIR(old_inode->i_mode)) {
+ 		drop_nlink(old_dir);
+ 		if (new_inode) {
+ 			drop_nlink(new_inode);
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 8064ea76f80b5..84abf98340a04 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -44,7 +44,7 @@ static struct ctl_table sysctl_mount_point[] = {
+  */
+ struct ctl_table_header *register_sysctl_mount_point(const char *path)
+ {
+-	return register_sysctl_sz(path, sysctl_mount_point, 0);
++	return register_sysctl(path, sysctl_mount_point);
+ }
+ EXPORT_SYMBOL(register_sysctl_mount_point);
+ 
+@@ -233,7 +233,8 @@ static int insert_header(struct ctl_dir *dir, struct ctl_table_header *header)
+ 		return -EROFS;
+ 
+ 	/* Am I creating a permanently empty directory? */
+-	if (sysctl_is_perm_empty_ctl_table(header->ctl_table)) {
++	if (header->ctl_table_size > 0 &&
++	    sysctl_is_perm_empty_ctl_table(header->ctl_table)) {
+ 		if (!RB_EMPTY_ROOT(&dir->root))
+ 			return -EINVAL;
+ 		sysctl_set_perm_empty_ctl_header(dir_h);
+@@ -1213,6 +1214,10 @@ static bool get_links(struct ctl_dir *dir,
+ 	struct ctl_table_header *tmp_head;
+ 	struct ctl_table *entry, *link;
+ 
++	if (header->ctl_table_size == 0 ||
++	    sysctl_is_perm_empty_ctl_table(header->ctl_table))
++		return true;
++
+ 	/* Are there links available for every entry in table? */
+ 	list_for_each_table_entry(entry, header) {
+ 		const char *procname = entry->procname;
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index d36702c7ab3c4..88b34fdbf7592 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -529,6 +529,7 @@ static int ramoops_init_przs(const char *name,
+ 	}
+ 
+ 	zone_sz = mem_sz / *cnt;
++	zone_sz = ALIGN_DOWN(zone_sz, 2);
+ 	if (!zone_sz) {
+ 		dev_err(dev, "%s zone size == 0\n", name);
+ 		goto fail;
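
ALIGN_DOWN(zone_sz, 2) rounds the computed zone size down to an even value so the zones tile the region without an odd remainder. For a power-of-two alignment the operation reduces to a mask, as this standalone example shows (equivalent to the kernel macro for these arguments):

#include <stdio.h>
#include <stddef.h>

/* power-of-two round-down */
#define ALIGN_DOWN(x, a) ((x) & ~((size_t)(a) - 1))

int main(void)
{
        size_t mem_sz = 4096 + 7, cnt = 3;
        size_t zone_sz = mem_sz / cnt;          /* 1367, odd */

        zone_sz = ALIGN_DOWN(zone_sz, 2);       /* 1366 */
        printf("zone_sz = %zu\n", zone_sz);
        return 0;
}
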
+diff --git a/fs/reiserfs/namei.c b/fs/reiserfs/namei.c
+index 994d6e6995ab1..5996197ba40ca 100644
+--- a/fs/reiserfs/namei.c
++++ b/fs/reiserfs/namei.c
+@@ -1324,8 +1324,8 @@ static int reiserfs_rename(struct mnt_idmap *idmap,
+ 	struct inode *old_inode, *new_dentry_inode;
+ 	struct reiserfs_transaction_handle th;
+ 	int jbegin_count;
+-	umode_t old_inode_mode;
+ 	unsigned long savelink = 1;
++	bool update_dir_parent = false;
+ 
+ 	if (flags & ~RENAME_NOREPLACE)
+ 		return -EINVAL;
+@@ -1375,8 +1375,7 @@ static int reiserfs_rename(struct mnt_idmap *idmap,
+ 		return -ENOENT;
+ 	}
+ 
+-	old_inode_mode = old_inode->i_mode;
+-	if (S_ISDIR(old_inode_mode)) {
++	if (S_ISDIR(old_inode->i_mode)) {
+ 		/*
+ 		 * make sure that directory being renamed has correct ".."
+ 		 * and that its new parent directory has not too many links
+@@ -1389,24 +1388,28 @@ static int reiserfs_rename(struct mnt_idmap *idmap,
+ 			}
+ 		}
+ 
+-		/*
+-		 * directory is renamed, its parent directory will be changed,
+-		 * so find ".." entry
+-		 */
+-		dot_dot_de.de_gen_number_bit_string = NULL;
+-		retval =
+-		    reiserfs_find_entry(old_inode, "..", 2, &dot_dot_entry_path,
++		if (old_dir != new_dir) {
++			/*
++			 * directory is renamed, its parent directory will be
++			 * changed, so find ".." entry
++			 */
++			dot_dot_de.de_gen_number_bit_string = NULL;
++			retval =
++			    reiserfs_find_entry(old_inode, "..", 2,
++					&dot_dot_entry_path,
+ 					&dot_dot_de);
+-		pathrelse(&dot_dot_entry_path);
+-		if (retval != NAME_FOUND) {
+-			reiserfs_write_unlock(old_dir->i_sb);
+-			return -EIO;
+-		}
++			pathrelse(&dot_dot_entry_path);
++			if (retval != NAME_FOUND) {
++				reiserfs_write_unlock(old_dir->i_sb);
++				return -EIO;
++			}
+ 
+-		/* inode number of .. must equal old_dir->i_ino */
+-		if (dot_dot_de.de_objectid != old_dir->i_ino) {
+-			reiserfs_write_unlock(old_dir->i_sb);
+-			return -EIO;
++			/* inode number of .. must equal old_dir->i_ino */
++			if (dot_dot_de.de_objectid != old_dir->i_ino) {
++				reiserfs_write_unlock(old_dir->i_sb);
++				return -EIO;
++			}
++			update_dir_parent = true;
+ 		}
+ 	}
+ 
+@@ -1486,7 +1489,7 @@ static int reiserfs_rename(struct mnt_idmap *idmap,
+ 
+ 		reiserfs_prepare_for_journal(old_inode->i_sb, new_de.de_bh, 1);
+ 
+-		if (S_ISDIR(old_inode->i_mode)) {
++		if (update_dir_parent) {
+ 			if ((retval =
+ 			     search_by_entry_key(new_dir->i_sb,
+ 						 &dot_dot_de.de_entry_key,
+@@ -1534,14 +1537,14 @@ static int reiserfs_rename(struct mnt_idmap *idmap,
+ 							 new_de.de_bh);
+ 			reiserfs_restore_prepared_buffer(old_inode->i_sb,
+ 							 old_de.de_bh);
+-			if (S_ISDIR(old_inode_mode))
++			if (update_dir_parent)
+ 				reiserfs_restore_prepared_buffer(old_inode->
+ 								 i_sb,
+ 								 dot_dot_de.
+ 								 de_bh);
+ 			continue;
+ 		}
+-		if (S_ISDIR(old_inode_mode)) {
++		if (update_dir_parent) {
+ 			if (item_moved(&dot_dot_ih, &dot_dot_entry_path) ||
+ 			    !entry_points_to_object("..", 2, &dot_dot_de,
+ 						    old_dir)) {
+@@ -1559,7 +1562,7 @@ static int reiserfs_rename(struct mnt_idmap *idmap,
+ 			}
+ 		}
+ 
+-		RFALSE(S_ISDIR(old_inode_mode) &&
++		RFALSE(update_dir_parent &&
+ 		       !buffer_journal_prepared(dot_dot_de.de_bh), "");
+ 
+ 		break;
+@@ -1592,11 +1595,12 @@ static int reiserfs_rename(struct mnt_idmap *idmap,
+ 		savelink = new_dentry_inode->i_nlink;
+ 	}
+ 
+-	if (S_ISDIR(old_inode_mode)) {
++	if (update_dir_parent) {
+ 		/* adjust ".." of renamed directory */
+ 		set_ino_in_dir_entry(&dot_dot_de, INODE_PKEY(new_dir));
+ 		journal_mark_dirty(&th, dot_dot_de.de_bh);
+-
++	}
++	if (S_ISDIR(old_inode->i_mode)) {
+ 		/*
+ 		 * there (in new_dir) was no directory, so it got new link
+ 		 * (".."  of renamed directory)
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 5e32c79f03a74..942e6ece56b1a 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -205,9 +205,18 @@ struct cifs_open_info_data {
+ 	};
+ };
+ 
+-#define cifs_open_data_reparse(d) \
+-	((d)->reparse_point || \
+-	 (le32_to_cpu((d)->fi.Attributes) & ATTR_REPARSE))
++static inline bool cifs_open_data_reparse(struct cifs_open_info_data *data)
++{
++	struct smb2_file_all_info *fi = &data->fi;
++	u32 attrs = le32_to_cpu(fi->Attributes);
++	bool ret;
++
++	ret = data->reparse_point || (attrs & ATTR_REPARSE);
++	if (ret)
++		attrs |= ATTR_REPARSE;
++	fi->Attributes = cpu_to_le32(attrs);
++	return ret;
++}
+ 
+ static inline void cifs_free_open_info(struct cifs_open_info_data *data)
+ {
+@@ -390,12 +399,17 @@ struct smb_version_operations {
+ 	int (*rename_pending_delete)(const char *, struct dentry *,
+ 				     const unsigned int);
+ 	/* send rename request */
+-	int (*rename)(const unsigned int, struct cifs_tcon *, const char *,
+-		      const char *, struct cifs_sb_info *);
++	int (*rename)(const unsigned int xid,
++		      struct cifs_tcon *tcon,
++		      struct dentry *source_dentry,
++		      const char *from_name, const char *to_name,
++		      struct cifs_sb_info *cifs_sb);
+ 	/* send create hardlink request */
+-	int (*create_hardlink)(const unsigned int, struct cifs_tcon *,
+-			       const char *, const char *,
+-			       struct cifs_sb_info *);
++	int (*create_hardlink)(const unsigned int xid,
++			       struct cifs_tcon *tcon,
++			       struct dentry *source_dentry,
++			       const char *from_name, const char *to_name,
++			       struct cifs_sb_info *cifs_sb);
+ 	/* query symlink target */
+ 	int (*query_symlink)(const unsigned int xid,
+ 			     struct cifs_tcon *tcon,
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 46feaa0880bdf..9516f57323246 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -435,16 +435,19 @@ extern int CIFSPOSIXDelFile(const unsigned int xid, struct cifs_tcon *tcon,
+ 			int remap_special_chars);
+ extern int CIFSSMBDelFile(const unsigned int xid, struct cifs_tcon *tcon,
+ 			  const char *name, struct cifs_sb_info *cifs_sb);
+-extern int CIFSSMBRename(const unsigned int xid, struct cifs_tcon *tcon,
+-			 const char *from_name, const char *to_name,
+-			 struct cifs_sb_info *cifs_sb);
++int CIFSSMBRename(const unsigned int xid, struct cifs_tcon *tcon,
++		  struct dentry *source_dentry,
++		  const char *from_name, const char *to_name,
++		  struct cifs_sb_info *cifs_sb);
+ extern int CIFSSMBRenameOpenFile(const unsigned int xid, struct cifs_tcon *tcon,
+ 				 int netfid, const char *target_name,
+ 				 const struct nls_table *nls_codepage,
+ 				 int remap_special_chars);
+-extern int CIFSCreateHardLink(const unsigned int xid, struct cifs_tcon *tcon,
+-			      const char *from_name, const char *to_name,
+-			      struct cifs_sb_info *cifs_sb);
++int CIFSCreateHardLink(const unsigned int xid,
++		       struct cifs_tcon *tcon,
++		       struct dentry *source_dentry,
++		       const char *from_name, const char *to_name,
++		       struct cifs_sb_info *cifs_sb);
+ extern int CIFSUnixCreateHardLink(const unsigned int xid,
+ 			struct cifs_tcon *tcon,
+ 			const char *fromName, const char *toName,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 9ee348e6d1069..e9e33b0b3ac47 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -2149,10 +2149,10 @@ CIFSSMBFlush(const unsigned int xid, struct cifs_tcon *tcon, int smb_file_id)
+ 	return rc;
+ }
+ 
+-int
+-CIFSSMBRename(const unsigned int xid, struct cifs_tcon *tcon,
+-	      const char *from_name, const char *to_name,
+-	      struct cifs_sb_info *cifs_sb)
++int CIFSSMBRename(const unsigned int xid, struct cifs_tcon *tcon,
++		  struct dentry *source_dentry,
++		  const char *from_name, const char *to_name,
++		  struct cifs_sb_info *cifs_sb)
+ {
+ 	int rc = 0;
+ 	RENAME_REQ *pSMB = NULL;
+@@ -2530,10 +2530,11 @@ createHardLinkRetry:
+ 	return rc;
+ }
+ 
+-int
+-CIFSCreateHardLink(const unsigned int xid, struct cifs_tcon *tcon,
+-		   const char *from_name, const char *to_name,
+-		   struct cifs_sb_info *cifs_sb)
++int CIFSCreateHardLink(const unsigned int xid,
++		       struct cifs_tcon *tcon,
++		       struct dentry *source_dentry,
++		       const char *from_name, const char *to_name,
++		       struct cifs_sb_info *cifs_sb)
+ {
+ 	int rc = 0;
+ 	NT_RENAME_REQ *pSMB = NULL;
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 09c5c0f5c96e2..eb54e48937771 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -2219,7 +2219,8 @@ cifs_do_rename(const unsigned int xid, struct dentry *from_dentry,
+ 		return -ENOSYS;
+ 
+ 	/* try path-based rename first */
+-	rc = server->ops->rename(xid, tcon, from_path, to_path, cifs_sb);
++	rc = server->ops->rename(xid, tcon, from_dentry,
++				 from_path, to_path, cifs_sb);
+ 
+ 	/*
+ 	 * Don't bother with rename by filehandle unless file is busy and
+diff --git a/fs/smb/client/link.c b/fs/smb/client/link.c
+index a1da50e66fbb6..3ef34218a790d 100644
+--- a/fs/smb/client/link.c
++++ b/fs/smb/client/link.c
+@@ -510,8 +510,8 @@ cifs_hardlink(struct dentry *old_file, struct inode *inode,
+ 			rc = -ENOSYS;
+ 			goto cifs_hl_exit;
+ 		}
+-		rc = server->ops->create_hardlink(xid, tcon, from_name, to_name,
+-						  cifs_sb);
++		rc = server->ops->create_hardlink(xid, tcon, old_file,
++						  from_name, to_name, cifs_sb);
+ 		if ((rc == -EIO) || (rc == -EINVAL))
+ 			rc = -EOPNOTSUPP;
+ 	}
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index 2d3b332a79a16..a16e175731eb3 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -440,8 +440,14 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	}
+ 
+ 	if (!iface) {
+-		cifs_dbg(FYI, "unable to get the interface matching: %pIS\n",
+-			 &ss);
++		if (!chan_index)
++			cifs_dbg(FYI, "unable to get the interface matching: %pIS\n",
++				 &ss);
++		else {
++			cifs_dbg(FYI, "unable to find another interface to replace: %pIS\n",
++				 &old_iface->sockaddr);
++		}
++
+ 		spin_unlock(&ses->iface_lock);
+ 		return 0;
+ 	}
+@@ -459,10 +465,6 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 		iface->weight_fulfilled++;
+ 
+ 		kref_put(&old_iface->refcount, release_iface);
+-	} else if (old_iface) {
+-		/* if a new candidate is not found, keep things as is */
+-		cifs_dbg(FYI, "could not replace iface: %pIS\n",
+-			 &old_iface->sockaddr);
+ 	} else if (!chan_index) {
+ 		/* special case: update interface for primary channel */
+ 		if (iface) {
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index c94940af5d4b8..6cac0b107a2d0 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -35,6 +35,18 @@ free_set_inf_compound(struct smb_rqst *rqst)
+ 		SMB2_close_free(&rqst[2]);
+ }
+ 
++static inline __u32 file_create_options(struct dentry *dentry)
++{
++	struct cifsInodeInfo *ci;
++
++	if (dentry) {
++		ci = CIFS_I(d_inode(dentry));
++		if (ci->cifsAttrs & ATTR_REPARSE)
++			return OPEN_REPARSE_POINT;
++	}
++	return 0;
++}
++
+ /*
+  * note: If cfile is passed, the reference to it is dropped here.
+  * So make sure that you do not reuse cfile after return from this func.
+@@ -781,11 +793,11 @@ smb2_unlink(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
+ 				ACL_NO_MODE, NULL, SMB2_OP_DELETE, NULL, NULL, NULL, NULL, NULL);
+ }
+ 
+-static int
+-smb2_set_path_attr(const unsigned int xid, struct cifs_tcon *tcon,
+-		   const char *from_name, const char *to_name,
+-		   struct cifs_sb_info *cifs_sb, __u32 access, int command,
+-		   struct cifsFileInfo *cfile)
++static int smb2_set_path_attr(const unsigned int xid, struct cifs_tcon *tcon,
++			      const char *from_name, const char *to_name,
++			      struct cifs_sb_info *cifs_sb,
++			      __u32 create_options, __u32 access,
++			      int command, struct cifsFileInfo *cfile)
+ {
+ 	__le16 *smb2_to_name = NULL;
+ 	int rc;
+@@ -796,35 +808,40 @@ smb2_set_path_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 		goto smb2_rename_path;
+ 	}
+ 	rc = smb2_compound_op(xid, tcon, cifs_sb, from_name, access,
+-			      FILE_OPEN, 0, ACL_NO_MODE, smb2_to_name,
++			      FILE_OPEN, create_options, ACL_NO_MODE, smb2_to_name,
+ 			      command, cfile, NULL, NULL, NULL, NULL);
+ smb2_rename_path:
+ 	kfree(smb2_to_name);
+ 	return rc;
+ }
+ 
+-int
+-smb2_rename_path(const unsigned int xid, struct cifs_tcon *tcon,
+-		 const char *from_name, const char *to_name,
+-		 struct cifs_sb_info *cifs_sb)
++int smb2_rename_path(const unsigned int xid,
++		     struct cifs_tcon *tcon,
++		     struct dentry *source_dentry,
++		     const char *from_name, const char *to_name,
++		     struct cifs_sb_info *cifs_sb)
+ {
+ 	struct cifsFileInfo *cfile;
++	__u32 co = file_create_options(source_dentry);
+ 
+ 	drop_cached_dir_by_name(xid, tcon, from_name, cifs_sb);
+ 	cifs_get_writable_path(tcon, from_name, FIND_WR_WITH_DELETE, &cfile);
+ 
+-	return smb2_set_path_attr(xid, tcon, from_name, to_name,
+-				  cifs_sb, DELETE, SMB2_OP_RENAME, cfile);
++	return smb2_set_path_attr(xid, tcon, from_name, to_name, cifs_sb,
++				  co, DELETE, SMB2_OP_RENAME, cfile);
+ }
+ 
+-int
+-smb2_create_hardlink(const unsigned int xid, struct cifs_tcon *tcon,
+-		     const char *from_name, const char *to_name,
+-		     struct cifs_sb_info *cifs_sb)
++int smb2_create_hardlink(const unsigned int xid,
++			 struct cifs_tcon *tcon,
++			 struct dentry *source_dentry,
++			 const char *from_name, const char *to_name,
++			 struct cifs_sb_info *cifs_sb)
+ {
+-	return smb2_set_path_attr(xid, tcon, from_name, to_name, cifs_sb,
+-				  FILE_READ_ATTRIBUTES, SMB2_OP_HARDLINK,
+-				  NULL);
++	__u32 co = file_create_options(source_dentry);
++
++	return smb2_set_path_attr(xid, tcon, from_name, to_name,
++				  cifs_sb, co, FILE_READ_ATTRIBUTES,
++				  SMB2_OP_HARDLINK, NULL);
+ }
+ 
+ int
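
file_create_options() turns the cached ATTR_REPARSE attribute into OPEN_REPARSE_POINT so rename and hardlink operate on the reparse point itself rather than its target. A sketch of that mapping with the relevant SMB constants (the struct layout is invented for the example):

#include <stdio.h>
#include <stdint.h>

#define ATTR_REPARSE       0x0400       /* FILE_ATTRIBUTE_REPARSE_POINT */
#define OPEN_REPARSE_POINT 0x00200000   /* FILE_OPEN_REPARSE_POINT */

struct cached_inode { uint32_t attrs; };

static uint32_t file_create_options(const struct cached_inode *ci)
{
        /* open the reparse point itself, not what it points at */
        if (ci && (ci->attrs & ATTR_REPARSE))
                return OPEN_REPARSE_POINT;
        return 0;
}

int main(void)
{
        struct cached_inode symlink = { .attrs = ATTR_REPARSE };
        struct cached_inode plain = { 0 };

        printf("%#x %#x\n", file_create_options(&symlink),
               file_create_options(&plain));
        return 0;
}
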
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 0e371f7e2854b..a8084ce7fcbd2 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -80,12 +80,16 @@ extern int smb2_rmdir(const unsigned int xid, struct cifs_tcon *tcon,
+ 		      const char *name, struct cifs_sb_info *cifs_sb);
+ extern int smb2_unlink(const unsigned int xid, struct cifs_tcon *tcon,
+ 		       const char *name, struct cifs_sb_info *cifs_sb);
+-extern int smb2_rename_path(const unsigned int xid, struct cifs_tcon *tcon,
+-			    const char *from_name, const char *to_name,
+-			    struct cifs_sb_info *cifs_sb);
+-extern int smb2_create_hardlink(const unsigned int xid, struct cifs_tcon *tcon,
+-				const char *from_name, const char *to_name,
+-				struct cifs_sb_info *cifs_sb);
++int smb2_rename_path(const unsigned int xid,
++		     struct cifs_tcon *tcon,
++		     struct dentry *source_dentry,
++		     const char *from_name, const char *to_name,
++		     struct cifs_sb_info *cifs_sb);
++int smb2_create_hardlink(const unsigned int xid,
++			 struct cifs_tcon *tcon,
++			 struct dentry *source_dentry,
++			 const char *from_name, const char *to_name,
++			 struct cifs_sb_info *cifs_sb);
+ extern int smb3_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 			struct cifs_sb_info *cifs_sb, const unsigned char *path,
+ 			char *pbuf, unsigned int *pbytes_written);
+diff --git a/fs/tracefs/event_inode.c b/fs/tracefs/event_inode.c
+index f0677ea0ec24e..f2d7ad155c00f 100644
+--- a/fs/tracefs/event_inode.c
++++ b/fs/tracefs/event_inode.c
+@@ -45,6 +45,7 @@ enum {
+ 	EVENTFS_SAVE_MODE	= BIT(16),
+ 	EVENTFS_SAVE_UID	= BIT(17),
+ 	EVENTFS_SAVE_GID	= BIT(18),
++	EVENTFS_TOPLEVEL	= BIT(19),
+ };
+ 
+ #define EVENTFS_MODE_MASK	(EVENTFS_SAVE_MODE - 1)
+@@ -117,10 +118,17 @@ static int eventfs_set_attr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 		 * The events directory dentry is never freed, unless it's
+ 		 * part of an instance that is deleted. Its attr is the
+ 		 * default for its child files and directories.
+-		 * Do not update it. It's not used for its own mode or ownership
++		 * Do not update it. It's not used for its own mode or ownership.
+ 		 */
+-		if (!ei->is_events)
++		if (ei->is_events) {
++			/* But it still needs to know if it was modified */
++			if (iattr->ia_valid & ATTR_UID)
++				ei->attr.mode |= EVENTFS_SAVE_UID;
++			if (iattr->ia_valid & ATTR_GID)
++				ei->attr.mode |= EVENTFS_SAVE_GID;
++		} else {
+ 			update_attr(&ei->attr, iattr);
++		}
+ 
+ 	} else {
+ 		name = dentry->d_name.name;
+@@ -138,9 +146,66 @@ static int eventfs_set_attr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	return ret;
+ }
+ 
++static void update_top_events_attr(struct eventfs_inode *ei, struct dentry *dentry)
++{
++	struct inode *inode;
++
++	/* Only update if the "events" was on the top level */
++	if (!ei || !(ei->attr.mode & EVENTFS_TOPLEVEL))
++		return;
++
++	/* Get the tracefs root inode. */
++	inode = d_inode(dentry->d_sb->s_root);
++	ei->attr.uid = inode->i_uid;
++	ei->attr.gid = inode->i_gid;
++}
++
++static void set_top_events_ownership(struct inode *inode)
++{
++	struct tracefs_inode *ti = get_tracefs(inode);
++	struct eventfs_inode *ei = ti->private;
++	struct dentry *dentry;
++
++	/* The top events directory doesn't get automatically updated */
++	if (!ei || !ei->is_events || !(ei->attr.mode & EVENTFS_TOPLEVEL))
++		return;
++
++	dentry = ei->dentry;
++
++	update_top_events_attr(ei, dentry);
++
++	if (!(ei->attr.mode & EVENTFS_SAVE_UID))
++		inode->i_uid = ei->attr.uid;
++
++	if (!(ei->attr.mode & EVENTFS_SAVE_GID))
++		inode->i_gid = ei->attr.gid;
++}
++
++static int eventfs_get_attr(struct mnt_idmap *idmap,
++			    const struct path *path, struct kstat *stat,
++			    u32 request_mask, unsigned int flags)
++{
++	struct dentry *dentry = path->dentry;
++	struct inode *inode = d_backing_inode(dentry);
++
++	set_top_events_ownership(inode);
++
++	generic_fillattr(idmap, request_mask, inode, stat);
++	return 0;
++}
++
++static int eventfs_permission(struct mnt_idmap *idmap,
++			      struct inode *inode, int mask)
++{
++	set_top_events_ownership(inode);
++	return generic_permission(idmap, inode, mask);
++}
++
+ static const struct inode_operations eventfs_root_dir_inode_operations = {
+ 	.lookup		= eventfs_root_lookup,
+ 	.setattr	= eventfs_set_attr,
++	.getattr	= eventfs_get_attr,
++	.permission	= eventfs_permission,
+ };
+ 
+ static const struct inode_operations eventfs_file_inode_operations = {
+@@ -178,6 +243,8 @@ static struct eventfs_inode *eventfs_find_events(struct dentry *dentry)
+ 	} while (!ei->is_events);
+ 	mutex_unlock(&eventfs_mutex);
+ 
++	update_top_events_attr(ei, dentry);
++
+ 	return ei;
+ }
+ 
+@@ -206,44 +273,6 @@ static void update_inode_attr(struct dentry *dentry, struct inode *inode,
+ 		inode->i_gid = attr->gid;
+ }
+ 
+-static void update_gid(struct eventfs_inode *ei, kgid_t gid, int level)
+-{
+-	struct eventfs_inode *ei_child;
+-
+-	/* at most we have events/system/event */
+-	if (WARN_ON_ONCE(level > 3))
+-		return;
+-
+-	ei->attr.gid = gid;
+-
+-	if (ei->entry_attrs) {
+-		for (int i = 0; i < ei->nr_entries; i++) {
+-			ei->entry_attrs[i].gid = gid;
+-		}
+-	}
+-
+-	/*
+-	 * Only eventfs_inode with dentries are updated, make sure
+-	 * all eventfs_inodes are updated. If one of the children
+-	 * do not have a dentry, this function must traverse it.
+-	 */
+-	list_for_each_entry_srcu(ei_child, &ei->children, list,
+-				 srcu_read_lock_held(&eventfs_srcu)) {
+-		if (!ei_child->dentry)
+-			update_gid(ei_child, gid, level + 1);
+-	}
+-}
+-
+-void eventfs_update_gid(struct dentry *dentry, kgid_t gid)
+-{
+-	struct eventfs_inode *ei = dentry->d_fsdata;
+-	int idx;
+-
+-	idx = srcu_read_lock(&eventfs_srcu);
+-	update_gid(ei, gid, 0);
+-	srcu_read_unlock(&eventfs_srcu, idx);
+-}
+-
+ /**
+  * create_file - create a file in the tracefs filesystem
+  * @name: the name of the file to create.
+@@ -968,6 +997,14 @@ struct eventfs_inode *eventfs_create_events_dir(const char *name, struct dentry
+ 	uid = d_inode(dentry->d_parent)->i_uid;
+ 	gid = d_inode(dentry->d_parent)->i_gid;
+ 
++	/*
++	 * If the events directory is of the top instance, then parent
++	 * is NULL. Set the attr.mode to reflect this and its permissions will
++	 * default to the tracefs root dentry.
++	 */
++	if (!parent)
++		ei->attr.mode = EVENTFS_TOPLEVEL;
++
+ 	/* This is used as the default ownership of the files and directories */
+ 	ei->attr.uid = uid;
+ 	ei->attr.gid = gid;
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index bc86ffdb103bc..e1b172c0e091a 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -91,6 +91,7 @@ static int tracefs_syscall_mkdir(struct mnt_idmap *idmap,
+ 				 struct inode *inode, struct dentry *dentry,
+ 				 umode_t mode)
+ {
++	struct tracefs_inode *ti;
+ 	char *name;
+ 	int ret;
+ 
+@@ -98,6 +99,15 @@ static int tracefs_syscall_mkdir(struct mnt_idmap *idmap,
+ 	if (!name)
+ 		return -ENOMEM;
+ 
++	/*
++	 * This is a new directory that does not take the default of
++	 * the rootfs. It becomes the default permissions for all the
++	 * files and directories underneath it.
++	 */
++	ti = get_tracefs(inode);
++	ti->flags |= TRACEFS_INSTANCE_INODE;
++	ti->private = inode;
++
+ 	/*
+ 	 * The mkdir call can call the generic functions that create
+ 	 * the files within the tracefs system. It is up to the individual
+@@ -141,10 +151,76 @@ static int tracefs_syscall_rmdir(struct inode *inode, struct dentry *dentry)
+ 	return ret;
+ }
+ 
+-static const struct inode_operations tracefs_dir_inode_operations = {
++static void set_tracefs_inode_owner(struct inode *inode)
++{
++	struct tracefs_inode *ti = get_tracefs(inode);
++	struct inode *root_inode = ti->private;
++
++	/*
++	 * If this inode has never been referenced, then update
++	 * the permissions to the superblock.
++	 */
++	if (!(ti->flags & TRACEFS_UID_PERM_SET))
++		inode->i_uid = root_inode->i_uid;
++
++	if (!(ti->flags & TRACEFS_GID_PERM_SET))
++		inode->i_gid = root_inode->i_gid;
++}
++
++static int tracefs_permission(struct mnt_idmap *idmap,
++			      struct inode *inode, int mask)
++{
++	set_tracefs_inode_owner(inode);
++	return generic_permission(idmap, inode, mask);
++}
++
++static int tracefs_getattr(struct mnt_idmap *idmap,
++			   const struct path *path, struct kstat *stat,
++			   u32 request_mask, unsigned int flags)
++{
++	struct inode *inode = d_backing_inode(path->dentry);
++
++	set_tracefs_inode_owner(inode);
++	generic_fillattr(idmap, request_mask, inode, stat);
++	return 0;
++}
++
++static int tracefs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
++			   struct iattr *attr)
++{
++	unsigned int ia_valid = attr->ia_valid;
++	struct inode *inode = d_inode(dentry);
++	struct tracefs_inode *ti = get_tracefs(inode);
++
++	if (ia_valid & ATTR_UID)
++		ti->flags |= TRACEFS_UID_PERM_SET;
++
++	if (ia_valid & ATTR_GID)
++		ti->flags |= TRACEFS_GID_PERM_SET;
++
++	return simple_setattr(idmap, dentry, attr);
++}
++
++static const struct inode_operations tracefs_instance_dir_inode_operations = {
+ 	.lookup		= simple_lookup,
+ 	.mkdir		= tracefs_syscall_mkdir,
+ 	.rmdir		= tracefs_syscall_rmdir,
++	.permission	= tracefs_permission,
++	.getattr	= tracefs_getattr,
++	.setattr	= tracefs_setattr,
++};
++
++static const struct inode_operations tracefs_dir_inode_operations = {
++	.lookup		= simple_lookup,
++	.permission	= tracefs_permission,
++	.getattr	= tracefs_getattr,
++	.setattr	= tracefs_setattr,
++};
++
++static const struct inode_operations tracefs_file_inode_operations = {
++	.permission	= tracefs_permission,
++	.getattr	= tracefs_getattr,
++	.setattr	= tracefs_setattr,
+ };
+ 
+ struct inode *tracefs_get_inode(struct super_block *sb)
+@@ -183,87 +259,6 @@ struct tracefs_fs_info {
+ 	struct tracefs_mount_opts mount_opts;
+ };
+ 
+-static void change_gid(struct dentry *dentry, kgid_t gid)
+-{
+-	if (!dentry->d_inode)
+-		return;
+-	dentry->d_inode->i_gid = gid;
+-}
+-
+-/*
+- * Taken from d_walk, but without the need for handling renames.
+- * Nothing can be renamed while walking the list, as tracefs
+- * does not support renames. This is only called when mounting
+- * or remounting the file system, to set all the files to
+- * the given gid.
+- */
+-static void set_gid(struct dentry *parent, kgid_t gid)
+-{
+-	struct dentry *this_parent;
+-	struct list_head *next;
+-
+-	this_parent = parent;
+-	spin_lock(&this_parent->d_lock);
+-
+-	change_gid(this_parent, gid);
+-repeat:
+-	next = this_parent->d_subdirs.next;
+-resume:
+-	while (next != &this_parent->d_subdirs) {
+-		struct tracefs_inode *ti;
+-		struct list_head *tmp = next;
+-		struct dentry *dentry = list_entry(tmp, struct dentry, d_child);
+-		next = tmp->next;
+-
+-		/* Note, getdents() can add a cursor dentry with no inode */
+-		if (!dentry->d_inode)
+-			continue;
+-
+-		spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
+-
+-		change_gid(dentry, gid);
+-
+-		/* If this is the events directory, update that too */
+-		ti = get_tracefs(dentry->d_inode);
+-		if (ti && (ti->flags & TRACEFS_EVENT_INODE))
+-			eventfs_update_gid(dentry, gid);
+-
+-		if (!list_empty(&dentry->d_subdirs)) {
+-			spin_unlock(&this_parent->d_lock);
+-			spin_release(&dentry->d_lock.dep_map, _RET_IP_);
+-			this_parent = dentry;
+-			spin_acquire(&this_parent->d_lock.dep_map, 0, 1, _RET_IP_);
+-			goto repeat;
+-		}
+-		spin_unlock(&dentry->d_lock);
+-	}
+-	/*
+-	 * All done at this level ... ascend and resume the search.
+-	 */
+-	rcu_read_lock();
+-ascend:
+-	if (this_parent != parent) {
+-		struct dentry *child = this_parent;
+-		this_parent = child->d_parent;
+-
+-		spin_unlock(&child->d_lock);
+-		spin_lock(&this_parent->d_lock);
+-
+-		/* go into the first sibling still alive */
+-		do {
+-			next = child->d_child.next;
+-			if (next == &this_parent->d_subdirs)
+-				goto ascend;
+-			child = list_entry(next, struct dentry, d_child);
+-		} while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED));
+-		rcu_read_unlock();
+-		goto resume;
+-	}
+-	rcu_read_unlock();
+-	spin_unlock(&this_parent->d_lock);
+-	return;
+-}
+-
+ static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts)
+ {
+ 	substring_t args[MAX_OPT_ARGS];
+@@ -336,10 +331,8 @@ static int tracefs_apply_options(struct super_block *sb, bool remount)
+ 	if (!remount || opts->opts & BIT(Opt_uid))
+ 		inode->i_uid = opts->uid;
+ 
+-	if (!remount || opts->opts & BIT(Opt_gid)) {
+-		/* Set all the group ids to the mount option */
+-		set_gid(sb->s_root, opts->gid);
+-	}
++	if (!remount || opts->opts & BIT(Opt_gid))
++		inode->i_gid = opts->gid;
+ 
+ 	return 0;
+ }
+@@ -573,6 +566,26 @@ struct dentry *eventfs_end_creating(struct dentry *dentry)
+ 	return dentry;
+ }
+ 
++/* Find the inode that this will use for default */
++static struct inode *instance_inode(struct dentry *parent, struct inode *inode)
++{
++	struct tracefs_inode *ti;
++
++	/* If parent is NULL then use root inode */
++	if (!parent)
++		return d_inode(inode->i_sb->s_root);
++
++	/* Find the inode that is flagged as an instance or the root inode */
++	while (!IS_ROOT(parent)) {
++		ti = get_tracefs(d_inode(parent));
++		if (ti->flags & TRACEFS_INSTANCE_INODE)
++			break;
++		parent = parent->d_parent;
++	}
++
++	return d_inode(parent);
++}
++
+ /**
+  * tracefs_create_file - create a file in the tracefs filesystem
+  * @name: a pointer to a string containing the name of the file to create.
+@@ -603,6 +616,7 @@ struct dentry *tracefs_create_file(const char *name, umode_t mode,
+ 				   struct dentry *parent, void *data,
+ 				   const struct file_operations *fops)
+ {
++	struct tracefs_inode *ti;
+ 	struct dentry *dentry;
+ 	struct inode *inode;
+ 
+@@ -621,7 +635,11 @@ struct dentry *tracefs_create_file(const char *name, umode_t mode,
+ 	if (unlikely(!inode))
+ 		return tracefs_failed_creating(dentry);
+ 
++	ti = get_tracefs(inode);
++	ti->private = instance_inode(parent, inode);
++
+ 	inode->i_mode = mode;
++	inode->i_op = &tracefs_file_inode_operations;
+ 	inode->i_fop = fops ? fops : &tracefs_file_operations;
+ 	inode->i_private = data;
+ 	inode->i_uid = d_inode(dentry->d_parent)->i_uid;
+@@ -634,6 +652,7 @@ struct dentry *tracefs_create_file(const char *name, umode_t mode,
+ static struct dentry *__create_dir(const char *name, struct dentry *parent,
+ 				   const struct inode_operations *ops)
+ {
++	struct tracefs_inode *ti;
+ 	struct dentry *dentry = tracefs_start_creating(name, parent);
+ 	struct inode *inode;
+ 
+@@ -651,6 +670,9 @@ static struct dentry *__create_dir(const char *name, struct dentry *parent,
+ 	inode->i_uid = d_inode(dentry->d_parent)->i_uid;
+ 	inode->i_gid = d_inode(dentry->d_parent)->i_gid;
+ 
++	ti = get_tracefs(inode);
++	ti->private = instance_inode(parent, inode);
++
+ 	/* directory inodes start off with i_nlink == 2 (for "." entry) */
+ 	inc_nlink(inode);
+ 	d_instantiate(dentry, inode);
+@@ -681,7 +703,7 @@ struct dentry *tracefs_create_dir(const char *name, struct dentry *parent)
+ 	if (security_locked_down(LOCKDOWN_TRACEFS))
+ 		return NULL;
+ 
+-	return __create_dir(name, parent, &simple_dir_inode_operations);
++	return __create_dir(name, parent, &tracefs_dir_inode_operations);
+ }
+ 
+ /**
+@@ -712,7 +734,7 @@ __init struct dentry *tracefs_create_instance_dir(const char *name,
+ 	if (WARN_ON(tracefs_ops.mkdir || tracefs_ops.rmdir))
+ 		return NULL;
+ 
+-	dentry = __create_dir(name, parent, &tracefs_dir_inode_operations);
++	dentry = __create_dir(name, parent, &tracefs_instance_dir_inode_operations);
+ 	if (!dentry)
+ 		return NULL;
+ 
+diff --git a/fs/tracefs/internal.h b/fs/tracefs/internal.h
+index 42bdeb471a072..af3b43fa75cdb 100644
+--- a/fs/tracefs/internal.h
++++ b/fs/tracefs/internal.h
+@@ -5,6 +5,9 @@
+ enum {
+ 	TRACEFS_EVENT_INODE		= BIT(1),
+ 	TRACEFS_EVENT_TOP_INODE		= BIT(2),
++	TRACEFS_GID_PERM_SET		= BIT(3),
++	TRACEFS_UID_PERM_SET		= BIT(4),
++	TRACEFS_INSTANCE_INODE		= BIT(5),
+ };
+ 
+ struct tracefs_inode {
+@@ -78,7 +81,6 @@ struct inode *tracefs_get_inode(struct super_block *sb);
+ struct dentry *eventfs_start_creating(const char *name, struct dentry *parent);
+ struct dentry *eventfs_failed_creating(struct dentry *dentry);
+ struct dentry *eventfs_end_creating(struct dentry *dentry);
+-void eventfs_update_gid(struct dentry *dentry, kgid_t gid);
+ void eventfs_set_ei_status_free(struct tracefs_inode *ti, struct dentry *dentry);
+ 
+ #endif /* _TRACEFS_INTERNAL_H */
+diff --git a/include/asm-generic/numa.h b/include/asm-generic/numa.h
+index 1a3ad6d298330..c32e0cf23c909 100644
+--- a/include/asm-generic/numa.h
++++ b/include/asm-generic/numa.h
+@@ -35,6 +35,7 @@ int __init numa_add_memblk(int nodeid, u64 start, u64 end);
+ void __init numa_set_distance(int from, int to, int distance);
+ void __init numa_free_distance(void);
+ void __init early_map_cpu_to_node(unsigned int cpu, int nid);
++int __init early_cpu_to_node(int cpu);
+ void numa_store_cpu_info(unsigned int cpu);
+ void numa_add_cpu(unsigned int cpu);
+ void numa_remove_cpu(unsigned int cpu);
+@@ -46,6 +47,7 @@ static inline void numa_add_cpu(unsigned int cpu) { }
+ static inline void numa_remove_cpu(unsigned int cpu) { }
+ static inline void arch_numa_init(void) { }
+ static inline void early_map_cpu_to_node(unsigned int cpu, int nid) { }
++static inline int early_cpu_to_node(int cpu) { return 0; }
+ 
+ #endif	/* CONFIG_NUMA */
+ 
+diff --git a/include/asm-generic/unaligned.h b/include/asm-generic/unaligned.h
+index 699650f819706..a84c64e5f11ec 100644
+--- a/include/asm-generic/unaligned.h
++++ b/include/asm-generic/unaligned.h
+@@ -104,9 +104,9 @@ static inline u32 get_unaligned_le24(const void *p)
+ 
+ static inline void __put_unaligned_be24(const u32 val, u8 *p)
+ {
+-	*p++ = val >> 16;
+-	*p++ = val >> 8;
+-	*p++ = val;
++	*p++ = (val >> 16) & 0xff;
++	*p++ = (val >> 8) & 0xff;
++	*p++ = val & 0xff;
+ }
+ 
+ static inline void put_unaligned_be24(const u32 val, void *p)
+@@ -116,9 +116,9 @@ static inline void put_unaligned_be24(const u32 val, void *p)
+ 
+ static inline void __put_unaligned_le24(const u32 val, u8 *p)
+ {
+-	*p++ = val;
+-	*p++ = val >> 8;
+-	*p++ = val >> 16;
++	*p++ = val & 0xff;
++	*p++ = (val >> 8) & 0xff;
++	*p++ = (val >> 16) & 0xff;
+ }
+ 
+ static inline void put_unaligned_le24(const u32 val, void *p)
+@@ -128,12 +128,12 @@ static inline void put_unaligned_le24(const u32 val, void *p)
+ 
+ static inline void __put_unaligned_be48(const u64 val, u8 *p)
+ {
+-	*p++ = val >> 40;
+-	*p++ = val >> 32;
+-	*p++ = val >> 24;
+-	*p++ = val >> 16;
+-	*p++ = val >> 8;
+-	*p++ = val;
++	*p++ = (val >> 40) & 0xff;
++	*p++ = (val >> 32) & 0xff;
++	*p++ = (val >> 24) & 0xff;
++	*p++ = (val >> 16) & 0xff;
++	*p++ = (val >> 8) & 0xff;
++	*p++ = val & 0xff;
+ }
+ 
+ static inline void put_unaligned_be48(const u64 val, void *p)
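
The `& 0xff` masks above do not change behavior — a store through a u8 pointer already truncates to the low byte — they only make the intended truncation explicit. A standalone re-implementation (hypothetical harness, not part of the patch) showing the byte layout:

#include <stdio.h>
#include <stdint.h>

/* Userspace copy of the patched helper (the kernel's lives in
 * include/asm-generic/unaligned.h); the masks are no-ops on a u8
 * store, so the wire format is unchanged. */
static void put_unaligned_be24(uint32_t val, uint8_t *p)
{
	*p++ = (val >> 16) & 0xff;	/* most significant byte first */
	*p++ = (val >> 8) & 0xff;
	*p++ = val & 0xff;
}

int main(void)
{
	uint8_t buf[3];

	put_unaligned_be24(0x123456, buf);
	printf("%02x %02x %02x\n", buf[0], buf[1], buf[2]);	/* 12 34 56 */
	return 0;
}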
+diff --git a/include/drm/drm_color_mgmt.h b/include/drm/drm_color_mgmt.h
+index 81c298488b0c8..6b5eec10c3db3 100644
+--- a/include/drm/drm_color_mgmt.h
++++ b/include/drm/drm_color_mgmt.h
+@@ -24,6 +24,7 @@
+ #define __DRM_COLOR_MGMT_H__
+ 
+ #include <linux/ctype.h>
++#include <linux/math64.h>
+ #include <drm/drm_property.h>
+ 
+ struct drm_crtc;
+diff --git a/include/drm/drm_mipi_dsi.h b/include/drm/drm_mipi_dsi.h
+index c9df0407980c9..c0aec0d4d664e 100644
+--- a/include/drm/drm_mipi_dsi.h
++++ b/include/drm/drm_mipi_dsi.h
+@@ -168,6 +168,7 @@ struct mipi_dsi_device_info {
+  * struct mipi_dsi_device - DSI peripheral device
+  * @host: DSI host for this peripheral
+  * @dev: driver model device node for this peripheral
++ * @attached: the DSI device has been successfully attached
+  * @name: DSI peripheral chip type
+  * @channel: virtual channel assigned to the peripheral
+  * @format: pixel format for video mode
+@@ -184,6 +185,7 @@ struct mipi_dsi_device_info {
+ struct mipi_dsi_device {
+ 	struct mipi_dsi_host *host;
+ 	struct device dev;
++	bool attached;
+ 
+ 	char name[DSI_DEV_NAME_SIZE];
+ 	unsigned int channel;
+diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
+index 6b3acf15be5c2..99ae7960a8d13 100644
+--- a/include/linux/avf/virtchnl.h
++++ b/include/linux/avf/virtchnl.h
+@@ -5,6 +5,7 @@
+ #define _VIRTCHNL_H_
+ 
+ #include <linux/bitops.h>
++#include <linux/bits.h>
+ #include <linux/overflow.h>
+ #include <uapi/linux/if_ether.h>
+ 
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 7a7859a5cce42..cfc6d2f98058d 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1447,7 +1447,7 @@ struct bpf_prog_aux {
+ 	int cgroup_atype; /* enum cgroup_bpf_attach_type */
+ 	struct bpf_map *cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE];
+ 	char name[BPF_OBJ_NAME_LEN];
+-	unsigned int (*bpf_exception_cb)(u64 cookie, u64 sp, u64 bp);
++	u64 (*bpf_exception_cb)(u64 cookie, u64 sp, u64 bp, u64, u64);
+ #ifdef CONFIG_SECURITY
+ 	void *security;
+ #endif
+diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
+index 8cd11a2232605..136f2980cba30 100644
+--- a/include/linux/irq_work.h
++++ b/include/linux/irq_work.h
+@@ -66,6 +66,9 @@ void irq_work_sync(struct irq_work *work);
+ void irq_work_run(void);
+ bool irq_work_needs_cpu(void);
+ void irq_work_single(void *arg);
++
++void arch_irq_work_raise(void);
++
+ #else
+ static inline bool irq_work_needs_cpu(void) { return false; }
+ static inline void irq_work_run(void) { }
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 957ce38768b2a..950df415d7de9 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -600,6 +600,9 @@ struct vma_numab_state {
+ 	 */
+ 	unsigned long pids_active[2];
+ 
++	/* MM scan sequence ID when scan first started after VMA creation */
++	int start_scan_seq;
++
+ 	/*
+ 	 * MM scan sequence ID when the VMA was last completely scanned.
+ 	 * A VMA is not eligible for scanning if prev_scan_seq == numa_scan_seq
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index a7c32324c2639..2c34e9325c9b9 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -2025,9 +2025,9 @@ static inline int pfn_valid(unsigned long pfn)
+ 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
+ 		return 0;
+ 	ms = __pfn_to_section(pfn);
+-	rcu_read_lock();
++	rcu_read_lock_sched();
+ 	if (!valid_section(ms)) {
+-		rcu_read_unlock();
++		rcu_read_unlock_sched();
+ 		return 0;
+ 	}
+ 	/*
+@@ -2035,7 +2035,7 @@ static inline int pfn_valid(unsigned long pfn)
+ 	 * the entire section-sized span.
+ 	 */
+ 	ret = early_section(ms) || pfn_section_valid(ms, pfn);
+-	rcu_read_unlock();
++	rcu_read_unlock_sched();
+ 
+ 	return ret;
+ }
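
Switching pfn_valid() to the sched flavor means the read side now relies on disabled preemption rather than a plain RCU read-side critical section; since the RCU flavors were consolidated, synchronize_rcu() on the updater side still waits for such readers. A kernel-context sketch of the pairing, not compilable standalone — clear_section() and free_section() are hypothetical stand-ins for the memory-hotplug teardown:

/* reader side, as patched above */
rcu_read_lock_sched();			/* also disables preemption */
if (valid_section(ms))
	ret = early_section(ms) || pfn_section_valid(ms, pfn);
rcu_read_unlock_sched();

/* updater side (assumed shape) */
clear_section(ms);			/* make the section invalid */
synchronize_rcu();			/* waits for sched readers too,
					 * since the flavors were merged */
free_section(ms);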
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 275799b5f535c..97cc0baad0f4b 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3065,6 +3065,7 @@
+ #define PCI_DEVICE_ID_INTEL_82443GX_0	0x71a0
+ #define PCI_DEVICE_ID_INTEL_82443GX_2	0x71a2
+ #define PCI_DEVICE_ID_INTEL_82372FB_1	0x7601
++#define PCI_DEVICE_ID_INTEL_HDA_ARL	0x7728
+ #define PCI_DEVICE_ID_INTEL_HDA_RPL_S	0x7a50
+ #define PCI_DEVICE_ID_INTEL_HDA_ADL_S	0x7ad0
+ #define PCI_DEVICE_ID_INTEL_HDA_MTL	0x7e28
+diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h
+index ccd97bcef2694..18a102174c4f2 100644
+--- a/include/linux/pm_opp.h
++++ b/include/linux/pm_opp.h
+@@ -157,7 +157,7 @@ struct dev_pm_opp *dev_pm_opp_find_level_ceil(struct device *dev,
+ 					      unsigned int *level);
+ 
+ struct dev_pm_opp *dev_pm_opp_find_level_floor(struct device *dev,
+-					       unsigned long *level);
++					       unsigned int *level);
+ 
+ struct dev_pm_opp *dev_pm_opp_find_bw_ceil(struct device *dev,
+ 					   unsigned int *bw, int index);
+@@ -324,7 +324,7 @@ static inline struct dev_pm_opp *dev_pm_opp_find_level_ceil(struct device *dev,
+ }
+ 
+ static inline struct dev_pm_opp *dev_pm_opp_find_level_floor(struct device *dev,
+-							     unsigned long *level)
++							     unsigned int *level)
+ {
+ 	return ERR_PTR(-EOPNOTSUPP);
+ }
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index cee814d5d1acc..1da1739d75d98 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -149,6 +149,7 @@ struct thermal_cooling_device {
+  * @node:	node in thermal_tz_list (in thermal_core.c)
+  * @poll_queue:	delayed work for polling
+  * @notify_event: Last notification event
++ * @suspended: thermal zone suspend indicator
+  */
+ struct thermal_zone_device {
+ 	int id;
+@@ -181,6 +182,7 @@ struct thermal_zone_device {
+ 	struct list_head node;
+ 	struct delayed_work poll_queue;
+ 	enum thermal_notify_event notify_event;
++	bool suspended;
+ };
+ 
+ /**
+diff --git a/include/net/af_unix.h b/include/net/af_unix.h
+index 49c4640027d8a..afd40dce40f3d 100644
+--- a/include/net/af_unix.h
++++ b/include/net/af_unix.h
+@@ -46,12 +46,6 @@ struct scm_stat {
+ 
+ #define UNIXCB(skb)	(*(struct unix_skb_parms *)&((skb)->cb))
+ 
+-#define unix_state_lock(s)	spin_lock(&unix_sk(s)->lock)
+-#define unix_state_unlock(s)	spin_unlock(&unix_sk(s)->lock)
+-#define unix_state_lock_nested(s) \
+-				spin_lock_nested(&unix_sk(s)->lock, \
+-				SINGLE_DEPTH_NESTING)
+-
+ /* The AF_UNIX socket */
+ struct unix_sock {
+ 	/* WARNING: sk has to be the first member */
+@@ -77,6 +71,20 @@ struct unix_sock {
+ #define unix_sk(ptr) container_of_const(ptr, struct unix_sock, sk)
+ #define unix_peer(sk) (unix_sk(sk)->peer)
+ 
++#define unix_state_lock(s)	spin_lock(&unix_sk(s)->lock)
++#define unix_state_unlock(s)	spin_unlock(&unix_sk(s)->lock)
++enum unix_socket_lock_class {
++	U_LOCK_NORMAL,
++	U_LOCK_SECOND,	/* for double locking, see unix_state_double_lock(). */
++	U_LOCK_DIAG, /* used while dumping icons, see sk_diag_dump_icons(). */
++};
++
++static inline void unix_state_lock_nested(struct sock *sk,
++				   enum unix_socket_lock_class subclass)
++{
++	spin_lock_nested(&unix_sk(sk)->lock, subclass);
++}
++
+ #define peer_wait peer_wq.wait
+ 
+ long unix_inq_len(struct sock *sk);
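
Replacing the single SINGLE_DEPTH_NESTING subclass with an enum lets lockdep tell the diag-dump nesting apart from double-lock nesting. A sketch of how a double lock would presumably use the new helper — ordering by pointer value mirrors what unix_state_double_lock() is expected to do; this is not the actual function:

/* Sketch: lock two AF_UNIX socks while telling lockdep the second
 * acquisition is a distinct subclass rather than a deadlock. */
static void double_lock(struct sock *sk1, struct sock *sk2)
{
	if (sk1 > sk2)
		swap(sk1, sk2);		/* stable order avoids ABBA */

	unix_state_lock(sk1);				/* U_LOCK_NORMAL */
	unix_state_lock_nested(sk2, U_LOCK_SECOND);	/* nested class */
}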
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 1fc4c8d69e333..6390dd08da8cd 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -761,7 +761,7 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev);
+  *	Functions provided by ip_sockglue.c
+  */
+ 
+-void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb);
++void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb, bool drop_dst);
+ void ip_cmsg_recv_offset(struct msghdr *msg, struct sock *sk,
+ 			 struct sk_buff *skb, int tlen, int offset);
+ int ip_cmsg_send(struct sock *sk, struct msghdr *msg,
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b157c5cafd14c..4e8ecabc5f25c 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1322,6 +1322,7 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+  *	@type: stateful object numeric type
+  *	@owner: module owner
+  *	@maxattr: maximum netlink attribute
++ *	@family: address family for AF-specific object types
+  *	@policy: netlink attribute policy
+  */
+ struct nft_object_type {
+@@ -1331,6 +1332,7 @@ struct nft_object_type {
+ 	struct list_head		list;
+ 	u32				type;
+ 	unsigned int                    maxattr;
++	u8				family;
+ 	struct module			*owner;
+ 	const struct nla_policy		*policy;
+ };
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 16205dd29843b..9c8e5f732c4c7 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -487,15 +487,19 @@ static void auditd_conn_free(struct rcu_head *rcu)
+  * @pid: auditd PID
+  * @portid: auditd netlink portid
+  * @net: auditd network namespace pointer
++ * @skb: the netlink command from the audit daemon
++ * @ack: netlink ack flag, cleared if ack'd here
+  *
+  * Description:
+  * This function will obtain and drop network namespace references as
+  * necessary.  Returns zero on success, negative values on failure.
+  */
+-static int auditd_set(struct pid *pid, u32 portid, struct net *net)
++static int auditd_set(struct pid *pid, u32 portid, struct net *net,
++		      struct sk_buff *skb, bool *ack)
+ {
+ 	unsigned long flags;
+ 	struct auditd_connection *ac_old, *ac_new;
++	struct nlmsghdr *nlh;
+ 
+ 	if (!pid || !net)
+ 		return -EINVAL;
+@@ -507,6 +511,13 @@ static int auditd_set(struct pid *pid, u32 portid, struct net *net)
+ 	ac_new->portid = portid;
+ 	ac_new->net = get_net(net);
+ 
++	/* send the ack now to avoid a race with the queue backlog */
++	if (*ack) {
++		nlh = nlmsg_hdr(skb);
++		netlink_ack(skb, nlh, 0, NULL);
++		*ack = false;
++	}
++
+ 	spin_lock_irqsave(&auditd_conn_lock, flags);
+ 	ac_old = rcu_dereference_protected(auditd_conn,
+ 					   lockdep_is_held(&auditd_conn_lock));
+@@ -1200,7 +1211,8 @@ static int audit_replace(struct pid *pid)
+ 	return auditd_send_unicast_skb(skb);
+ }
+ 
+-static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
++static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
++			     bool *ack)
+ {
+ 	u32			seq;
+ 	void			*data;
+@@ -1293,7 +1305,8 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 				/* register a new auditd connection */
+ 				err = auditd_set(req_pid,
+ 						 NETLINK_CB(skb).portid,
+-						 sock_net(NETLINK_CB(skb).sk));
++						 sock_net(NETLINK_CB(skb).sk),
++						 skb, ack);
+ 				if (audit_enabled != AUDIT_OFF)
+ 					audit_log_config_change("audit_pid",
+ 								new_pid,
+@@ -1538,9 +1551,10 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+  * Parse the provided skb and deal with any messages that may be present,
+  * malformed skbs are discarded.
+  */
+-static void audit_receive(struct sk_buff  *skb)
++static void audit_receive(struct sk_buff *skb)
+ {
+ 	struct nlmsghdr *nlh;
++	bool ack;
+ 	/*
+ 	 * len MUST be signed for nlmsg_next to be able to dec it below 0
+ 	 * if the nlmsg_len was not aligned
+@@ -1553,9 +1567,12 @@ static void audit_receive(struct sk_buff  *skb)
+ 
+ 	audit_ctl_lock();
+ 	while (nlmsg_ok(nlh, len)) {
+-		err = audit_receive_msg(skb, nlh);
+-		/* if err or if this message says it wants a response */
+-		if (err || (nlh->nlmsg_flags & NLM_F_ACK))
++		ack = nlh->nlmsg_flags & NLM_F_ACK;
++		err = audit_receive_msg(skb, nlh, &ack);
++
++		/* send an ack if the user asked for one and audit_receive_msg
++		 * didn't already do it, or if there was an error. */
++		if (ack || err)
+ 			netlink_ack(skb, nlh, err, NULL);
+ 
+ 		nlh = nlmsg_next(nlh, &len);
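
The race closed above: once the new auditd connection is published, kauditd may start unicasting queued records, so the NLM_F_ACK for the registration message must be queued first. A condensed sketch of the resulting ack-once protocol between audit_receive() and a handler — the names mirror the patch, the body is illustrative:

/* The handler acks early, before publishing state that lets kauditd
 * unicast queued records, and clears the flag so the caller does not
 * ack a second time. */
static int handle_set_pid(struct sk_buff *skb, struct nlmsghdr *nlh, bool *ack)
{
	if (*ack) {				/* user set NLM_F_ACK */
		netlink_ack(skb, nlh, 0, NULL);
		*ack = false;			/* caller must not re-ack */
	}
	/* ... only now publish the new auditd connection ... */
	return 0;
}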
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 9bfad7e969131..c9843dde69081 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -871,7 +871,7 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
+ 	return 0;
+ }
+ 
+-static long fd_array_map_delete_elem(struct bpf_map *map, void *key)
++static long __fd_array_map_delete_elem(struct bpf_map *map, void *key, bool need_defer)
+ {
+ 	struct bpf_array *array = container_of(map, struct bpf_array, map);
+ 	void *old_ptr;
+@@ -890,13 +890,18 @@ static long fd_array_map_delete_elem(struct bpf_map *map, void *key)
+ 	}
+ 
+ 	if (old_ptr) {
+-		map->ops->map_fd_put_ptr(map, old_ptr, true);
++		map->ops->map_fd_put_ptr(map, old_ptr, need_defer);
+ 		return 0;
+ 	} else {
+ 		return -ENOENT;
+ 	}
+ }
+ 
++static long fd_array_map_delete_elem(struct bpf_map *map, void *key)
++{
++	return __fd_array_map_delete_elem(map, key, true);
++}
++
+ static void *prog_fd_array_get_ptr(struct bpf_map *map,
+ 				   struct file *map_file, int fd)
+ {
+@@ -925,13 +930,13 @@ static u32 prog_fd_array_sys_lookup_elem(void *ptr)
+ }
+ 
+ /* decrement refcnt of all bpf_progs that are stored in this map */
+-static void bpf_fd_array_map_clear(struct bpf_map *map)
++static void bpf_fd_array_map_clear(struct bpf_map *map, bool need_defer)
+ {
+ 	struct bpf_array *array = container_of(map, struct bpf_array, map);
+ 	int i;
+ 
+ 	for (i = 0; i < array->map.max_entries; i++)
+-		fd_array_map_delete_elem(map, &i);
++		__fd_array_map_delete_elem(map, &i, need_defer);
+ }
+ 
+ static void prog_array_map_seq_show_elem(struct bpf_map *map, void *key,
+@@ -1072,7 +1077,7 @@ static void prog_array_map_clear_deferred(struct work_struct *work)
+ {
+ 	struct bpf_map *map = container_of(work, struct bpf_array_aux,
+ 					   work)->map;
+-	bpf_fd_array_map_clear(map);
++	bpf_fd_array_map_clear(map, true);
+ 	bpf_map_put(map);
+ }
+ 
+@@ -1222,7 +1227,7 @@ static void perf_event_fd_array_release(struct bpf_map *map,
+ 	for (i = 0; i < array->map.max_entries; i++) {
+ 		ee = READ_ONCE(array->ptrs[i]);
+ 		if (ee && ee->map_file == map_file)
+-			fd_array_map_delete_elem(map, &i);
++			__fd_array_map_delete_elem(map, &i, true);
+ 	}
+ 	rcu_read_unlock();
+ }
+@@ -1230,7 +1235,7 @@ static void perf_event_fd_array_release(struct bpf_map *map,
+ static void perf_event_fd_array_map_free(struct bpf_map *map)
+ {
+ 	if (map->map_flags & BPF_F_PRESERVE_ELEMS)
+-		bpf_fd_array_map_clear(map);
++		bpf_fd_array_map_clear(map, false);
+ 	fd_array_map_free(map);
+ }
+ 
+@@ -1266,7 +1271,7 @@ static void cgroup_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_de
+ 
+ static void cgroup_fd_array_free(struct bpf_map *map)
+ {
+-	bpf_fd_array_map_clear(map);
++	bpf_fd_array_map_clear(map, false);
+ 	fd_array_map_free(map);
+ }
+ 
+@@ -1311,7 +1316,7 @@ static void array_of_map_free(struct bpf_map *map)
+ 	 * is protected by fdget/fdput.
+ 	 */
+ 	bpf_map_meta_free(map->inner_map_meta);
+-	bpf_fd_array_map_clear(map);
++	bpf_fd_array_map_clear(map, false);
+ 	fd_array_map_free(map);
+ }
+ 
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 6950f04616349..b3053af6427d2 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -32,12 +32,13 @@
+  *
+  * Different map implementations will rely on rcu in map methods
+  * lookup/update/delete, therefore eBPF programs must run under rcu lock
+- * if program is allowed to access maps, so check rcu_read_lock_held in
+- * all three functions.
++ * if program is allowed to access maps, so check rcu_read_lock_held() or
++ * rcu_read_lock_trace_held() in all three functions.
+  */
+ BPF_CALL_2(bpf_map_lookup_elem, struct bpf_map *, map, void *, key)
+ {
+-	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_bh_held());
++	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held() &&
++		     !rcu_read_lock_bh_held());
+ 	return (unsigned long) map->ops->map_lookup_elem(map, key);
+ }
+ 
+@@ -53,7 +54,8 @@ const struct bpf_func_proto bpf_map_lookup_elem_proto = {
+ BPF_CALL_4(bpf_map_update_elem, struct bpf_map *, map, void *, key,
+ 	   void *, value, u64, flags)
+ {
+-	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_bh_held());
++	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held() &&
++		     !rcu_read_lock_bh_held());
+ 	return map->ops->map_update_elem(map, key, value, flags);
+ }
+ 
+@@ -70,7 +72,8 @@ const struct bpf_func_proto bpf_map_update_elem_proto = {
+ 
+ BPF_CALL_2(bpf_map_delete_elem, struct bpf_map *, map, void *, key)
+ {
+-	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_bh_held());
++	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held() &&
++		     !rcu_read_lock_bh_held());
+ 	return map->ops->map_delete_elem(map, key);
+ }
+ 
+@@ -2506,7 +2509,7 @@ __bpf_kfunc void bpf_throw(u64 cookie)
+ 	 * which skips compiler generated instrumentation to do the same.
+ 	 */
+ 	kasan_unpoison_task_stack_below((void *)(long)ctx.sp);
+-	ctx.aux->bpf_exception_cb(cookie, ctx.sp, ctx.bp);
++	ctx.aux->bpf_exception_cb(cookie, ctx.sp, ctx.bp, 0, 0);
+ 	WARN(1, "A call to BPF exception callback should never return\n");
+ }
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 58e34ff811971..349d735b4e1d6 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1702,6 +1702,9 @@ int generic_map_delete_batch(struct bpf_map *map,
+ 	if (!max_count)
+ 		return 0;
+ 
++	if (put_user(0, &uattr->batch.count))
++		return -EFAULT;
++
+ 	key = kvmalloc(map->key_size, GFP_USER | __GFP_NOWARN);
+ 	if (!key)
+ 		return -ENOMEM;
+@@ -1759,6 +1762,9 @@ int generic_map_update_batch(struct bpf_map *map, struct file *map_file,
+ 	if (!max_count)
+ 		return 0;
+ 
++	if (put_user(0, &uattr->batch.count))
++		return -EFAULT;
++
+ 	key = kvmalloc(map->key_size, GFP_USER | __GFP_NOWARN);
+ 	if (!key)
+ 		return -ENOMEM;
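
Zeroing uattr->batch.count up front guarantees userspace never reads a stale count when the kernel bails out before touching any element. A hypothetical userspace caller relying on that contract — map_delete_batch() here is an illustrative wrapper, not a libbpf API:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Delete up to n keys in one batch and report how many the kernel
 * processed. With the fix above, attr.batch.count is reliable even
 * when the syscall fails early. */
static uint32_t map_delete_batch(int map_fd, void *keys, uint32_t n)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.batch.map_fd = map_fd;
	attr.batch.keys = (uint64_t)(unsigned long)keys;
	attr.batch.count = n;

	if (syscall(__NR_bpf, BPF_MAP_DELETE_BATCH, &attr, sizeof(attr)) < 0)
		fprintf(stderr, "deleted %u of %u keys before error\n",
			attr.batch.count, n);

	return attr.batch.count;	/* 0 if the kernel bailed out early */
}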
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 9efd0d7775e7c..7c03305797184 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11425,9 +11425,30 @@ static DEVICE_ATTR_RW(perf_event_mux_interval_ms);
+ static struct attribute *pmu_dev_attrs[] = {
+ 	&dev_attr_type.attr,
+ 	&dev_attr_perf_event_mux_interval_ms.attr,
++	&dev_attr_nr_addr_filters.attr,
++	NULL,
++};
++
++static umode_t pmu_dev_is_visible(struct kobject *kobj, struct attribute *a, int n)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct pmu *pmu = dev_get_drvdata(dev);
++
++	if (n == 2 && !pmu->nr_addr_filters)
++		return 0;
++
++	return a->mode;
++}
++
++static struct attribute_group pmu_dev_attr_group = {
++	.is_visible = pmu_dev_is_visible,
++	.attrs = pmu_dev_attrs,
++};
++
++static const struct attribute_group *pmu_dev_groups[] = {
++	&pmu_dev_attr_group,
+ 	NULL,
+ };
+-ATTRIBUTE_GROUPS(pmu_dev);
+ 
+ static int pmu_bus_running;
+ static struct bus_type pmu_bus = {
+@@ -11464,18 +11485,11 @@ static int pmu_dev_alloc(struct pmu *pmu)
+ 	if (ret)
+ 		goto free_dev;
+ 
+-	/* For PMUs with address filters, throw in an extra attribute: */
+-	if (pmu->nr_addr_filters)
+-		ret = device_create_file(pmu->dev, &dev_attr_nr_addr_filters);
+-
+-	if (ret)
+-		goto del_dev;
+-
+-	if (pmu->attr_update)
++	if (pmu->attr_update) {
+ 		ret = sysfs_update_groups(&pmu->dev->kobj, pmu->attr_update);
+-
+-	if (ret)
+-		goto del_dev;
++		if (ret)
++			goto del_dev;
++	}
+ 
+ out:
+ 	return ret;
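
The perf change above folds the conditional nr_addr_filters file into a static attribute group whose .is_visible callback hides it at registration time, replacing the separate device_create_file(). A generic sketch of the pattern with hypothetical names (my_dev, dev_attr_always, dev_attr_optional); matching the attribute pointer is somewhat more robust than the positional index check the patch uses:

/* Conditional sysfs attributes via .is_visible; the callback signature
 * is the standard driver-core one. */
static struct attribute *my_attrs[] = {
	&dev_attr_always.attr,
	&dev_attr_optional.attr,
	NULL,
};

static umode_t my_is_visible(struct kobject *kobj, struct attribute *a, int n)
{
	struct device *dev = kobj_to_dev(kobj);
	struct my_dev *md = dev_get_drvdata(dev);

	if (a == &dev_attr_optional.attr && !md->has_feature)
		return 0;		/* hide the file entirely */

	return a->mode;
}

static const struct attribute_group my_group = {
	.is_visible	= my_is_visible,
	.attrs		= my_attrs,
};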
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 4182fb118ce95..7ac9f4b1d955c 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3164,7 +3164,7 @@ static bool vma_is_accessed(struct mm_struct *mm, struct vm_area_struct *vma)
+ 	 * This is also done to avoid any side effect of task scanning
+ 	 * amplifying the unfairness of disjoint set of VMAs' access.
+ 	 */
+-	if (READ_ONCE(current->mm->numa_scan_seq) < 2)
++	if ((READ_ONCE(current->mm->numa_scan_seq) - vma->numab_state->start_scan_seq) < 2)
+ 		return true;
+ 
+ 	pids = vma->numab_state->pids_active[0] | vma->numab_state->pids_active[1];
+@@ -3307,6 +3307,8 @@ retry_pids:
+ 			if (!vma->numab_state)
+ 				continue;
+ 
++			vma->numab_state->start_scan_seq = mm->numa_scan_seq;
++
+ 			vma->numab_state->next_scan = now +
+ 				msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
+ 
+@@ -4096,6 +4098,10 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
+ 	if (cfs_rq->tg == &root_task_group)
+ 		return;
+ 
++	/* rq has been offline and doesn't contribute to the share anymore: */
++	if (!cpu_active(cpu_of(rq_of(cfs_rq))))
++		return;
++
+ 	/*
+ 	 * For migration heavy workloads, access to tg->load_avg can be
+ 	 * unbound. Limit the update rate to at most once per ms.
+@@ -4112,6 +4118,49 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
+ 	}
+ }
+ 
++static inline void clear_tg_load_avg(struct cfs_rq *cfs_rq)
++{
++	long delta;
++	u64 now;
++
++	/*
++	 * No need to update load_avg for root_task_group, as it is not used.
++	 */
++	if (cfs_rq->tg == &root_task_group)
++		return;
++
++	now = sched_clock_cpu(cpu_of(rq_of(cfs_rq)));
++	delta = 0 - cfs_rq->tg_load_avg_contrib;
++	atomic_long_add(delta, &cfs_rq->tg->load_avg);
++	cfs_rq->tg_load_avg_contrib = 0;
++	cfs_rq->last_update_tg_load_avg = now;
++}
++
++/* CPU offline callback: */
++static void __maybe_unused clear_tg_offline_cfs_rqs(struct rq *rq)
++{
++	struct task_group *tg;
++
++	lockdep_assert_rq_held(rq);
++
++	/*
++	 * The rq clock has already been updated in
++	 * set_rq_offline(), so we should skip updating
++	 * the rq clock again in unthrottle_cfs_rq().
++	 */
++	rq_clock_start_loop_update(rq);
++
++	rcu_read_lock();
++	list_for_each_entry_rcu(tg, &task_groups, list) {
++		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
++
++		clear_tg_load_avg(cfs_rq);
++	}
++	rcu_read_unlock();
++
++	rq_clock_stop_loop_update(rq);
++}
++
+ /*
+  * Called within set_task_rq() right before setting a task's CPU. The
+  * caller only guarantees p->pi_lock is held; no other assumptions,
+@@ -4408,6 +4457,8 @@ static inline bool skip_blocked_update(struct sched_entity *se)
+ 
+ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq) {}
+ 
++static inline void clear_tg_offline_cfs_rqs(struct rq *rq) {}
++
+ static inline int propagate_entity_load_avg(struct sched_entity *se)
+ {
+ 	return 0;
+@@ -12413,6 +12464,9 @@ static void rq_offline_fair(struct rq *rq)
+ 
+ 	/* Ensure any throttled groups are reachable by pick_next_task */
+ 	unthrottle_offline_cfs_rqs(rq);
++
++	/* Ensure that we remove rq contribution to group share: */
++	clear_tg_offline_cfs_rqs(rq);
+ }
+ 
+ #endif /* CONFIG_SMP */
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 2a8e9d63fbe38..fb12a9bacd2fa 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -620,9 +620,8 @@ static void debug_objects_fill_pool(void)
+ static void
+ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
+ {
+-	enum debug_obj_state state;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+-	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+ 	debug_objects_fill_pool();
+@@ -643,24 +642,18 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
+ 	case ODEBUG_STATE_INIT:
+ 	case ODEBUG_STATE_INACTIVE:
+ 		obj->state = ODEBUG_STATE_INIT;
+-		break;
+-
+-	case ODEBUG_STATE_ACTIVE:
+-		state = obj->state;
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "init");
+-		debug_object_fixup(descr->fixup_init, addr, state);
+-		return;
+-
+-	case ODEBUG_STATE_DESTROYED:
+ 		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "init");
+ 		return;
+ 	default:
+ 		break;
+ 	}
+ 
++	o = *obj;
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
++	debug_print_object(&o, "init");
++
++	if (o.state == ODEBUG_STATE_ACTIVE)
++		debug_object_fixup(descr->fixup_init, addr, o.state);
+ }
+ 
+ /**
+@@ -701,11 +694,9 @@ EXPORT_SYMBOL_GPL(debug_object_init_on_stack);
+ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ {
+ 	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+-	enum debug_obj_state state;
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+-	int ret;
+ 
+ 	if (!debug_objects_enabled)
+ 		return 0;
+@@ -717,49 +708,38 @@ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+ 	obj = lookup_object_or_alloc(addr, db, descr, false, true);
+-	if (likely(!IS_ERR_OR_NULL(obj))) {
+-		bool print_object = false;
+-
++	if (unlikely(!obj)) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		debug_objects_oom();
++		return 0;
++	} else if (likely(!IS_ERR(obj))) {
+ 		switch (obj->state) {
+-		case ODEBUG_STATE_INIT:
+-		case ODEBUG_STATE_INACTIVE:
+-			obj->state = ODEBUG_STATE_ACTIVE;
+-			ret = 0;
+-			break;
+-
+ 		case ODEBUG_STATE_ACTIVE:
+-			state = obj->state;
+-			raw_spin_unlock_irqrestore(&db->lock, flags);
+-			debug_print_object(obj, "activate");
+-			ret = debug_object_fixup(descr->fixup_activate, addr, state);
+-			return ret ? 0 : -EINVAL;
+-
+ 		case ODEBUG_STATE_DESTROYED:
+-			print_object = true;
+-			ret = -EINVAL;
++			o = *obj;
+ 			break;
++		case ODEBUG_STATE_INIT:
++		case ODEBUG_STATE_INACTIVE:
++			obj->state = ODEBUG_STATE_ACTIVE;
++			fallthrough;
+ 		default:
+-			ret = 0;
+-			break;
++			raw_spin_unlock_irqrestore(&db->lock, flags);
++			return 0;
+ 		}
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		if (print_object)
+-			debug_print_object(obj, "activate");
+-		return ret;
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
++	debug_print_object(&o, "activate");
+ 
+-	/* If NULL the allocation has hit OOM */
+-	if (!obj) {
+-		debug_objects_oom();
+-		return 0;
++	switch (o.state) {
++	case ODEBUG_STATE_ACTIVE:
++	case ODEBUG_STATE_NOTAVAILABLE:
++		if (debug_object_fixup(descr->fixup_activate, addr, o.state))
++			return 0;
++		fallthrough;
++	default:
++		return -EINVAL;
+ 	}
+-
+-	/* Object is neither static nor tracked. It's not initialized */
+-	debug_print_object(&o, "activate");
+-	ret = debug_object_fixup(descr->fixup_activate, addr, ODEBUG_STATE_NOTAVAILABLE);
+-	return ret ? 0 : -EINVAL;
+ }
+ EXPORT_SYMBOL_GPL(debug_object_activate);
+ 
+@@ -770,10 +750,10 @@ EXPORT_SYMBOL_GPL(debug_object_activate);
+  */
+ void debug_object_deactivate(void *addr, const struct debug_obj_descr *descr)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+-	bool print_object = false;
+ 
+ 	if (!debug_objects_enabled)
+ 		return;
+@@ -785,33 +765,24 @@ void debug_object_deactivate(void *addr, const struct debug_obj_descr *descr)
+ 	obj = lookup_object(addr, db);
+ 	if (obj) {
+ 		switch (obj->state) {
++		case ODEBUG_STATE_DESTROYED:
++			break;
+ 		case ODEBUG_STATE_INIT:
+ 		case ODEBUG_STATE_INACTIVE:
+ 		case ODEBUG_STATE_ACTIVE:
+-			if (!obj->astate)
+-				obj->state = ODEBUG_STATE_INACTIVE;
+-			else
+-				print_object = true;
+-			break;
+-
+-		case ODEBUG_STATE_DESTROYED:
+-			print_object = true;
+-			break;
++			if (obj->astate)
++				break;
++			obj->state = ODEBUG_STATE_INACTIVE;
++			fallthrough;
+ 		default:
+-			break;
++			raw_spin_unlock_irqrestore(&db->lock, flags);
++			return;
+ 		}
++		o = *obj;
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (!obj) {
+-		struct debug_obj o = { .object = addr,
+-				       .state = ODEBUG_STATE_NOTAVAILABLE,
+-				       .descr = descr };
+-
+-		debug_print_object(&o, "deactivate");
+-	} else if (print_object) {
+-		debug_print_object(obj, "deactivate");
+-	}
++	debug_print_object(&o, "deactivate");
+ }
+ EXPORT_SYMBOL_GPL(debug_object_deactivate);
+ 
+@@ -822,11 +793,9 @@ EXPORT_SYMBOL_GPL(debug_object_deactivate);
+  */
+ void debug_object_destroy(void *addr, const struct debug_obj_descr *descr)
+ {
+-	enum debug_obj_state state;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+-	struct debug_obj *obj;
+ 	unsigned long flags;
+-	bool print_object = false;
+ 
+ 	if (!debug_objects_enabled)
+ 		return;
+@@ -836,32 +805,31 @@ void debug_object_destroy(void *addr, const struct debug_obj_descr *descr)
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+ 	obj = lookup_object(addr, db);
+-	if (!obj)
+-		goto out_unlock;
++	if (!obj) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		return;
++	}
+ 
+ 	switch (obj->state) {
++	case ODEBUG_STATE_ACTIVE:
++	case ODEBUG_STATE_DESTROYED:
++		break;
+ 	case ODEBUG_STATE_NONE:
+ 	case ODEBUG_STATE_INIT:
+ 	case ODEBUG_STATE_INACTIVE:
+ 		obj->state = ODEBUG_STATE_DESTROYED;
+-		break;
+-	case ODEBUG_STATE_ACTIVE:
+-		state = obj->state;
++		fallthrough;
++	default:
+ 		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "destroy");
+-		debug_object_fixup(descr->fixup_destroy, addr, state);
+ 		return;
+-
+-	case ODEBUG_STATE_DESTROYED:
+-		print_object = true;
+-		break;
+-	default:
+-		break;
+ 	}
+-out_unlock:
++
++	o = *obj;
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (print_object)
+-		debug_print_object(obj, "destroy");
++	debug_print_object(&o, "destroy");
++
++	if (o.state == ODEBUG_STATE_ACTIVE)
++		debug_object_fixup(descr->fixup_destroy, addr, o.state);
+ }
+ EXPORT_SYMBOL_GPL(debug_object_destroy);
+ 
+@@ -872,9 +840,8 @@ EXPORT_SYMBOL_GPL(debug_object_destroy);
+  */
+ void debug_object_free(void *addr, const struct debug_obj_descr *descr)
+ {
+-	enum debug_obj_state state;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+-	struct debug_obj *obj;
+ 	unsigned long flags;
+ 
+ 	if (!debug_objects_enabled)
+@@ -885,24 +852,26 @@ void debug_object_free(void *addr, const struct debug_obj_descr *descr)
+ 	raw_spin_lock_irqsave(&db->lock, flags);
+ 
+ 	obj = lookup_object(addr, db);
+-	if (!obj)
+-		goto out_unlock;
++	if (!obj) {
++		raw_spin_unlock_irqrestore(&db->lock, flags);
++		return;
++	}
+ 
+ 	switch (obj->state) {
+ 	case ODEBUG_STATE_ACTIVE:
+-		state = obj->state;
+-		raw_spin_unlock_irqrestore(&db->lock, flags);
+-		debug_print_object(obj, "free");
+-		debug_object_fixup(descr->fixup_free, addr, state);
+-		return;
++		break;
+ 	default:
+ 		hlist_del(&obj->node);
+ 		raw_spin_unlock_irqrestore(&db->lock, flags);
+ 		free_object(obj);
+ 		return;
+ 	}
+-out_unlock:
++
++	o = *obj;
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
++	debug_print_object(&o, "free");
++
++	debug_object_fixup(descr->fixup_free, addr, o.state);
+ }
+ EXPORT_SYMBOL_GPL(debug_object_free);
+ 
+@@ -954,10 +923,10 @@ void
+ debug_object_active_state(void *addr, const struct debug_obj_descr *descr,
+ 			  unsigned int expect, unsigned int next)
+ {
++	struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr };
+ 	struct debug_bucket *db;
+ 	struct debug_obj *obj;
+ 	unsigned long flags;
+-	bool print_object = false;
+ 
+ 	if (!debug_objects_enabled)
+ 		return;
+@@ -970,28 +939,19 @@ debug_object_active_state(void *addr, const struct debug_obj_descr *descr,
+ 	if (obj) {
+ 		switch (obj->state) {
+ 		case ODEBUG_STATE_ACTIVE:
+-			if (obj->astate == expect)
+-				obj->astate = next;
+-			else
+-				print_object = true;
+-			break;
+-
++			if (obj->astate != expect)
++				break;
++			obj->astate = next;
++			raw_spin_unlock_irqrestore(&db->lock, flags);
++			return;
+ 		default:
+-			print_object = true;
+ 			break;
+ 		}
++		o = *obj;
+ 	}
+ 
+ 	raw_spin_unlock_irqrestore(&db->lock, flags);
+-	if (!obj) {
+-		struct debug_obj o = { .object = addr,
+-				       .state = ODEBUG_STATE_NOTAVAILABLE,
+-				       .descr = descr };
+-
+-		debug_print_object(&o, "active_state");
+-	} else if (print_object) {
+-		debug_print_object(obj, "active_state");
+-	}
++	debug_print_object(&o, "active_state");
+ }
+ EXPORT_SYMBOL_GPL(debug_object_active_state);
+ 
+@@ -999,12 +959,10 @@ EXPORT_SYMBOL_GPL(debug_object_active_state);
+ static void __debug_check_no_obj_freed(const void *address, unsigned long size)
+ {
+ 	unsigned long flags, oaddr, saddr, eaddr, paddr, chunks;
+-	const struct debug_obj_descr *descr;
+-	enum debug_obj_state state;
++	int cnt, objs_checked = 0;
++	struct debug_obj *obj, o;
+ 	struct debug_bucket *db;
+ 	struct hlist_node *tmp;
+-	struct debug_obj *obj;
+-	int cnt, objs_checked = 0;
+ 
+ 	saddr = (unsigned long) address;
+ 	eaddr = saddr + size;
+@@ -1026,12 +984,10 @@ repeat:
+ 
+ 			switch (obj->state) {
+ 			case ODEBUG_STATE_ACTIVE:
+-				descr = obj->descr;
+-				state = obj->state;
++				o = *obj;
+ 				raw_spin_unlock_irqrestore(&db->lock, flags);
+-				debug_print_object(obj, "free");
+-				debug_object_fixup(descr->fixup_free,
+-						   (void *) oaddr, state);
++				debug_print_object(&o, "free");
++				debug_object_fixup(o.descr->fixup_free, (void *)oaddr, o.state);
+ 				goto repeat;
+ 			default:
+ 				hlist_del(&obj->node);
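
The common thread of the debugobjects refactor above: snapshot the tracked object (o = *obj) while the bucket lock is held, drop the lock, then print or run fixups only from the snapshot, so the report can never dereference an object another CPU has meanwhile freed. A condensed sketch of the pattern with illustrative types, not the real debugobjects ones:

struct snapshot { enum debug_obj_state state; const void *addr; };

static void report(struct bucket *db, const void *addr)
{
	struct tracked *obj;
	struct snapshot o;
	unsigned long flags;

	raw_spin_lock_irqsave(&db->lock, flags);
	obj = lookup(db, addr);
	if (!obj) {
		raw_spin_unlock_irqrestore(&db->lock, flags);
		return;
	}
	o.state = obj->state;			/* copy while protected */
	o.addr = addr;
	raw_spin_unlock_irqrestore(&db->lock, flags);

	/* obj may be freed from here on; use only the snapshot */
	print_report(&o);
}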
+diff --git a/lib/fw_table.c b/lib/fw_table.c
+index 294df54e33b6f..c49a09ee3853c 100644
+--- a/lib/fw_table.c
++++ b/lib/fw_table.c
+@@ -85,11 +85,6 @@ acpi_get_subtable_type(char *id)
+ 	return ACPI_SUBTABLE_COMMON;
+ }
+ 
+-static __init_or_acpilib bool has_handler(struct acpi_subtable_proc *proc)
+-{
+-	return proc->handler || proc->handler_arg;
+-}
+-
+ static __init_or_acpilib int call_handler(struct acpi_subtable_proc *proc,
+ 					  union acpi_subtable_headers *hdr,
+ 					  unsigned long end)
+@@ -133,7 +128,6 @@ acpi_parse_entries_array(char *id, unsigned long table_size,
+ 	unsigned long table_end, subtable_len, entry_len;
+ 	struct acpi_subtable_entry entry;
+ 	int count = 0;
+-	int errs = 0;
+ 	int i;
+ 
+ 	table_end = (unsigned long)table_header + table_header->length;
+@@ -145,25 +139,19 @@ acpi_parse_entries_array(char *id, unsigned long table_size,
+ 	    ((unsigned long)table_header + table_size);
+ 	subtable_len = acpi_get_subtable_header_length(&entry);
+ 
+-	while (((unsigned long)entry.hdr) + subtable_len  < table_end) {
+-		if (max_entries && count >= max_entries)
+-			break;
+-
++	while (((unsigned long)entry.hdr) + subtable_len < table_end) {
+ 		for (i = 0; i < proc_num; i++) {
+ 			if (acpi_get_entry_type(&entry) != proc[i].id)
+ 				continue;
+-			if (!has_handler(&proc[i]) ||
+-			    (!errs &&
+-			     call_handler(&proc[i], entry.hdr, table_end))) {
+-				errs++;
+-				continue;
+-			}
++
++			if (!max_entries || count < max_entries)
++				if (call_handler(&proc[i], entry.hdr, table_end))
++					return -EINVAL;
+ 
+ 			proc[i].count++;
++			count++;
+ 			break;
+ 		}
+-		if (i != proc_num)
+-			count++;
+ 
+ 		/*
+ 		 * If entry->length is 0, break from this loop to avoid
+@@ -180,9 +168,9 @@ acpi_parse_entries_array(char *id, unsigned long table_size,
+ 	}
+ 
+ 	if (max_entries && count > max_entries) {
+-		pr_warn("[%4.4s:0x%02x] found the maximum %i entries\n",
+-			id, proc->id, count);
++		pr_warn("[%4.4s:0x%02x] ignored %i entries of %i found\n",
++			id, proc->id, count - max_entries, count);
+ 	}
+ 
+-	return errs ? -EINVAL : count;
++	return count;
+ }
+diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
+index 1236b3cd2fbb2..51013feba58b1 100644
+--- a/lib/kunit/executor.c
++++ b/lib/kunit/executor.c
+@@ -144,6 +144,10 @@ void kunit_free_suite_set(struct kunit_suite_set suite_set)
+ 	kfree(suite_set.start);
+ }
+ 
++/*
++ * Filter and reallocate test suites. Must return the filtered test suites set
++ * allocated at a valid virtual address or NULL in case of error.
++ */
+ struct kunit_suite_set
+ kunit_filter_suites(const struct kunit_suite_set *suite_set,
+ 		    const char *filter_glob,
+diff --git a/lib/kunit/test.c b/lib/kunit/test.c
+index 7aceb07a1af9f..3dc9d66eca494 100644
+--- a/lib/kunit/test.c
++++ b/lib/kunit/test.c
+@@ -16,6 +16,7 @@
+ #include <linux/panic.h>
+ #include <linux/sched/debug.h>
+ #include <linux/sched.h>
++#include <linux/mm.h>
+ 
+ #include "debugfs.h"
+ #include "hooks-impl.h"
+@@ -660,6 +661,7 @@ int kunit_run_tests(struct kunit_suite *suite)
+ 				test.param_index++;
+ 				test.status = KUNIT_SUCCESS;
+ 				test.status_comment[0] = '\0';
++				test.priv = NULL;
+ 			}
+ 		}
+ 
+@@ -775,12 +777,19 @@ static void kunit_module_exit(struct module *mod)
+ 	};
+ 	const char *action = kunit_action();
+ 
++	/*
++	 * Check if the start address is a valid virtual address to detect
++	 * if the module load sequence has failed and the suite set has not
++	 * been initialized and filtered.
++	 */
++	if (!suite_set.start || !virt_addr_valid(suite_set.start))
++		return;
++
+ 	if (!action)
+ 		__kunit_test_suites_exit(mod->kunit_suites,
+ 					 mod->num_kunit_suites);
+ 
+-	if (suite_set.start)
+-		kunit_free_suite_set(suite_set);
++	kunit_free_suite_set(suite_set);
+ }
+ 
+ static int kunit_module_notify(struct notifier_block *nb, unsigned long val,
+@@ -790,12 +799,12 @@ static int kunit_module_notify(struct notifier_block *nb, unsigned long val,
+ 
+ 	switch (val) {
+ 	case MODULE_STATE_LIVE:
++		kunit_module_init(mod);
+ 		break;
+ 	case MODULE_STATE_GOING:
+ 		kunit_module_exit(mod);
+ 		break;
+ 	case MODULE_STATE_COMING:
+-		kunit_module_init(mod);
+ 		break;
+ 	case MODULE_STATE_UNFORMED:
+ 		break;
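
Moving kunit_module_init() from MODULE_STATE_COMING to MODULE_STATE_LIVE means suites are only set up once the module's init has succeeded, and the virt_addr_valid() guard makes the GOING callback a no-op for loads that failed in between. A minimal module-notifier skeleton — the mechanism kunit_module_notify() plugs into; the handler body is illustrative:

#include <linux/module.h>
#include <linux/notifier.h>

static int my_module_notify(struct notifier_block *nb, unsigned long val,
			    void *data)
{
	struct module *mod = data;

	switch (val) {
	case MODULE_STATE_LIVE:		/* init succeeded, safe to hook */
		pr_info("%s is live\n", mod->name);
		break;
	case MODULE_STATE_GOING:	/* also fires after a failed init */
		pr_info("%s is going away\n", mod->name);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block my_nb = { .notifier_call = my_module_notify };

static int __init my_init(void)
{
	return register_module_notifier(&my_nb);
}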
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index d85a7091a1169..97284d9b2a2e3 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -3800,12 +3800,14 @@ static int hci_set_event_mask_sync(struct hci_dev *hdev)
+ 	if (lmp_bredr_capable(hdev)) {
+ 		events[4] |= 0x01; /* Flow Specification Complete */
+ 
+-		/* Don't set Disconnect Complete when suspended as that
+-		 * would wakeup the host when disconnecting due to
+-		 * suspend.
++		/* Don't set Disconnect Complete and mode change when
++		 * suspended as that would wakeup the host when disconnecting
++		 * due to suspend.
+ 		 */
+-		if (hdev->suspended)
++		if (hdev->suspended) {
+ 			events[0] &= 0xef;
++			events[2] &= 0xf7;
++		}
+ 	} else {
+ 		/* Use a different default for LE-only devices */
+ 		memset(events, 0, sizeof(events));
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 07b80e97aead5..fd81289fd3e57 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -54,6 +54,7 @@ static void iso_sock_kill(struct sock *sk);
+ enum {
+ 	BT_SK_BIG_SYNC,
+ 	BT_SK_PA_SYNC,
++	BT_SK_PA_SYNC_TERM,
+ };
+ 
+ struct iso_pinfo {
+@@ -82,6 +83,11 @@ static bool iso_match_sid(struct sock *sk, void *data);
+ static bool iso_match_sync_handle(struct sock *sk, void *data);
+ static void iso_sock_disconn(struct sock *sk);
+ 
++typedef bool (*iso_sock_match_t)(struct sock *sk, void *data);
++
++static struct sock *iso_get_sock_listen(bdaddr_t *src, bdaddr_t *dst,
++					iso_sock_match_t match, void *data);
++
+ /* ---- ISO timers ---- */
+ #define ISO_CONN_TIMEOUT	(HZ * 40)
+ #define ISO_DISCONN_TIMEOUT	(HZ * 2)
+@@ -190,10 +196,21 @@ static void iso_chan_del(struct sock *sk, int err)
+ 	sock_set_flag(sk, SOCK_ZAPPED);
+ }
+ 
++static bool iso_match_conn_sync_handle(struct sock *sk, void *data)
++{
++	struct hci_conn *hcon = data;
++
++	if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags))
++		return false;
++
++	return hcon->sync_handle == iso_pi(sk)->sync_handle;
++}
++
+ static void iso_conn_del(struct hci_conn *hcon, int err)
+ {
+ 	struct iso_conn *conn = hcon->iso_data;
+ 	struct sock *sk;
++	struct sock *parent;
+ 
+ 	if (!conn)
+ 		return;
+@@ -209,6 +226,25 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+ 
+ 	if (sk) {
+ 		lock_sock(sk);
++
++		/* While a PA sync hcon is in the process of closing,
++		 * mark parent socket with a flag, so that any residual
++		 * BIGInfo adv reports that arrive before PA sync is
++		 * terminated are not processed anymore.
++		 */
++		if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
++			parent = iso_get_sock_listen(&hcon->src,
++						     &hcon->dst,
++						     iso_match_conn_sync_handle,
++						     hcon);
++
++			if (parent) {
++				set_bit(BT_SK_PA_SYNC_TERM,
++					&iso_pi(parent)->flags);
++				sock_put(parent);
++			}
++		}
++
+ 		iso_sock_clear_timer(sk);
+ 		iso_chan_del(sk, err);
+ 		release_sock(sk);
+@@ -545,8 +581,6 @@ static struct sock *__iso_get_sock_listen_by_sid(bdaddr_t *ba, bdaddr_t *bc,
+ 	return NULL;
+ }
+ 
+-typedef bool (*iso_sock_match_t)(struct sock *sk, void *data);
+-
+ /* Find socket listening:
+  * source bdaddr (Unicast)
+  * destination bdaddr (Broadcast only)
+@@ -1759,9 +1793,20 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ 		/* Try to get PA sync listening socket, if it exists */
+ 		sk = iso_get_sock_listen(&hdev->bdaddr, bdaddr,
+ 						iso_match_pa_sync_flag, NULL);
+-		if (!sk)
++
++		if (!sk) {
+ 			sk = iso_get_sock_listen(&hdev->bdaddr, bdaddr,
+ 						 iso_match_sync_handle, ev2);
++
++			/* If PA Sync is in the process of terminating,
++			 * do not handle any more BIGInfo adv reports.
++			 */
++
++			if (sk && test_bit(BT_SK_PA_SYNC_TERM,
++					   &iso_pi(sk)->flags))
++				return lm;
++		}
++
+ 		if (sk) {
+ 			int err;
+ 
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index baeebee41cd9e..60298975d5c45 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -6526,7 +6526,8 @@ static inline void l2cap_sig_channel(struct l2cap_conn *conn,
+ 		if (len > skb->len || !cmd->ident) {
+ 			BT_DBG("corrupted command");
+ 			l2cap_sig_send_rej(conn, cmd->ident);
+-			break;
++			skb_pull(skb, len > skb->len ? skb->len : len);
++			continue;
+ 		}
+ 
+ 		err = l2cap_bredr_sig_cmd(conn, cmd, len, skb->data);
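
Instead of abandoning the whole signalling PDU at the first corrupted command, the fix consumes the bad command's claimed payload — clamped to what is actually left in the skb — and keeps parsing. A standalone sketch of the same clamped-skip idea over a hypothetical TLV buffer:

#include <stdio.h>
#include <stddef.h>

/* A record whose declared length overruns the buffer is skipped by
 * clamping, so later records are still parsed. */
static void walk(const unsigned char *buf, size_t len)
{
	size_t off = 0;

	while (off + 2 <= len) {
		unsigned int type = buf[off];
		size_t rec_len = buf[off + 1];

		off += 2;
		if (rec_len > len - off) {
			rec_len = len - off;	/* clamp, like the skb_pull */
			fprintf(stderr, "corrupted record type %u\n", type);
		} else {
			printf("record type %u, %zu bytes\n", type, rec_len);
		}
		off += rec_len;			/* continue with the next */
	}
}

int main(void)
{
	const unsigned char buf[] = { 1, 2, 0xaa, 0xbb, 2, 200, 0xcc };

	walk(buf, sizeof(buf));
	return 0;
}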
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index c9fdcc5cdce10..711cf5d59816b 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -542,7 +542,7 @@ struct bpf_fentry_test_t {
+ 
+ int noinline bpf_fentry_test7(struct bpf_fentry_test_t *arg)
+ {
+-	asm volatile ("");
++	asm volatile ("": "+r"(arg));
+ 	return (long)arg;
+ }
+ 
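
The empty asm gains a "+r"(arg) constraint so the compiler must assume arg is consumed and possibly rewritten at that point; otherwise it can prove bpf_fentry_test7() is an identity function and fold the call away, defeating the fentry attachment test. A standalone illustration of the constraint acting as an optimization barrier:

#include <stdio.h>

/* "+r" makes `arg` both input and output of an opaque point, so the
 * compiler cannot assume the function returns its argument unchanged. */
static long __attribute__((noinline)) test7(void *arg)
{
	asm volatile ("" : "+r"(arg));
	return (long)arg;
}

int main(void)
{
	printf("%ld\n", test7((void *)42));
	return 0;
}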
+diff --git a/net/bridge/br_cfm_netlink.c b/net/bridge/br_cfm_netlink.c
+index 5c4c369f8536e..2faab44652e7c 100644
+--- a/net/bridge/br_cfm_netlink.c
++++ b/net/bridge/br_cfm_netlink.c
+@@ -362,7 +362,7 @@ static int br_cc_ccm_tx_parse(struct net_bridge *br, struct nlattr *attr,
+ 
+ 	memset(&tx_info, 0, sizeof(tx_info));
+ 
+-	instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_RDI_INSTANCE]);
++	instance = nla_get_u32(tb[IFLA_BRIDGE_CFM_CC_CCM_TX_INSTANCE]);
+ 	nla_memcpy(&tx_info.dmac.addr,
+ 		   tb[IFLA_BRIDGE_CFM_CC_CCM_TX_DMAC],
+ 		   sizeof(tx_info.dmac.addr));
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index d7d021af10298..2d7b732429588 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1762,6 +1762,10 @@ static void br_ip6_multicast_querier_expired(struct timer_list *t)
+ }
+ #endif
+ 
++static void br_multicast_query_delay_expired(struct timer_list *t)
++{
++}
++
+ static void br_multicast_select_own_querier(struct net_bridge_mcast *brmctx,
+ 					    struct br_ip *ip,
+ 					    struct sk_buff *skb)
+@@ -3198,7 +3202,7 @@ br_multicast_update_query_timer(struct net_bridge_mcast *brmctx,
+ 				unsigned long max_delay)
+ {
+ 	if (!timer_pending(&query->timer))
+-		query->delay_time = jiffies + max_delay;
++		mod_timer(&query->delay_timer, jiffies + max_delay);
+ 
+ 	mod_timer(&query->timer, jiffies + brmctx->multicast_querier_interval);
+ }
+@@ -4041,13 +4045,11 @@ void br_multicast_ctx_init(struct net_bridge *br,
+ 	brmctx->multicast_querier_interval = 255 * HZ;
+ 	brmctx->multicast_membership_interval = 260 * HZ;
+ 
+-	brmctx->ip4_other_query.delay_time = 0;
+ 	brmctx->ip4_querier.port_ifidx = 0;
+ 	seqcount_spinlock_init(&brmctx->ip4_querier.seq, &br->multicast_lock);
+ 	brmctx->multicast_igmp_version = 2;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	brmctx->multicast_mld_version = 1;
+-	brmctx->ip6_other_query.delay_time = 0;
+ 	brmctx->ip6_querier.port_ifidx = 0;
+ 	seqcount_spinlock_init(&brmctx->ip6_querier.seq, &br->multicast_lock);
+ #endif
+@@ -4056,6 +4058,8 @@ void br_multicast_ctx_init(struct net_bridge *br,
+ 		    br_ip4_multicast_local_router_expired, 0);
+ 	timer_setup(&brmctx->ip4_other_query.timer,
+ 		    br_ip4_multicast_querier_expired, 0);
++	timer_setup(&brmctx->ip4_other_query.delay_timer,
++		    br_multicast_query_delay_expired, 0);
+ 	timer_setup(&brmctx->ip4_own_query.timer,
+ 		    br_ip4_multicast_query_expired, 0);
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -4063,6 +4067,8 @@ void br_multicast_ctx_init(struct net_bridge *br,
+ 		    br_ip6_multicast_local_router_expired, 0);
+ 	timer_setup(&brmctx->ip6_other_query.timer,
+ 		    br_ip6_multicast_querier_expired, 0);
++	timer_setup(&brmctx->ip6_other_query.delay_timer,
++		    br_multicast_query_delay_expired, 0);
+ 	timer_setup(&brmctx->ip6_own_query.timer,
+ 		    br_ip6_multicast_query_expired, 0);
+ #endif
+@@ -4197,10 +4203,12 @@ static void __br_multicast_stop(struct net_bridge_mcast *brmctx)
+ {
+ 	del_timer_sync(&brmctx->ip4_mc_router_timer);
+ 	del_timer_sync(&brmctx->ip4_other_query.timer);
++	del_timer_sync(&brmctx->ip4_other_query.delay_timer);
+ 	del_timer_sync(&brmctx->ip4_own_query.timer);
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	del_timer_sync(&brmctx->ip6_mc_router_timer);
+ 	del_timer_sync(&brmctx->ip6_other_query.timer);
++	del_timer_sync(&brmctx->ip6_other_query.delay_timer);
+ 	del_timer_sync(&brmctx->ip6_own_query.timer);
+ #endif
+ }
+@@ -4643,13 +4651,15 @@ int br_multicast_set_querier(struct net_bridge_mcast *brmctx, unsigned long val)
+ 	max_delay = brmctx->multicast_query_response_interval;
+ 
+ 	if (!timer_pending(&brmctx->ip4_other_query.timer))
+-		brmctx->ip4_other_query.delay_time = jiffies + max_delay;
++		mod_timer(&brmctx->ip4_other_query.delay_timer,
++			  jiffies + max_delay);
+ 
+ 	br_multicast_start_querier(brmctx, &brmctx->ip4_own_query);
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	if (!timer_pending(&brmctx->ip6_other_query.timer))
+-		brmctx->ip6_other_query.delay_time = jiffies + max_delay;
++		mod_timer(&brmctx->ip6_other_query.delay_timer,
++			  jiffies + max_delay);
+ 
+ 	br_multicast_start_querier(brmctx, &brmctx->ip6_own_query);
+ #endif
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 6b7f36769d032..f317d8295bf41 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -78,7 +78,7 @@ struct bridge_mcast_own_query {
+ /* other querier */
+ struct bridge_mcast_other_query {
+ 	struct timer_list		timer;
+-	unsigned long			delay_time;
++	struct timer_list		delay_timer;
+ };
+ 
+ /* selected querier */
+@@ -1155,7 +1155,7 @@ __br_multicast_querier_exists(struct net_bridge_mcast *brmctx,
+ 		own_querier_enabled = false;
+ 	}
+ 
+-	return time_is_before_jiffies(querier->delay_time) &&
++	return !timer_pending(&querier->delay_timer) &&
+ 	       (own_querier_enabled || timer_pending(&querier->timer));
+ }
+ 
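
[The bridge series above replaces the stored delay_time jiffies deadline with an actual delay_timer, so "is the other-querier delay window still open" becomes timer_pending() rather than a timestamp comparison that had to be cleared by hand. A rough userspace analogue using a POSIX timer; this is not the kernel API, and older glibc needs -lrt:]

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static int delay_pending(timer_t t)
{
	struct itimerspec cur;

	timer_gettime(t, &cur);
	/* "armed" plays the role of timer_pending() */
	return cur.it_value.tv_sec || cur.it_value.tv_nsec;
}

int main(void)
{
	struct sigevent sev = { .sigev_notify = SIGEV_NONE };
	struct itimerspec win = { .it_value = { .tv_sec = 1 } };
	timer_t t;

	timer_create(CLOCK_MONOTONIC, &sev, &t);
	timer_settime(t, 0, &win, NULL);	/* open a 1s delay window */

	printf("pending now: %d\n", delay_pending(t));
	sleep(2);
	printf("pending after 2s: %d\n", delay_pending(t));

	timer_delete(t);
	return 0;
}
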
+diff --git a/net/devlink/port.c b/net/devlink/port.c
+index 7634f187fa505..841a3eafa328e 100644
+--- a/net/devlink/port.c
++++ b/net/devlink/port.c
+@@ -672,7 +672,7 @@ static int devlink_port_function_validate(struct devlink_port *devlink_port,
+ 		return -EOPNOTSUPP;
+ 	}
+ 	if (tb[DEVLINK_PORT_FN_ATTR_STATE] && !ops->port_fn_state_set) {
+-		NL_SET_ERR_MSG_ATTR(extack, tb[DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR],
++		NL_SET_ERR_MSG_ATTR(extack, tb[DEVLINK_PORT_FN_ATTR_STATE],
+ 				    "Function does not support state setting");
+ 		return -EOPNOTSUPP;
+ 	}
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index b06f678b03a19..41537d18eecfd 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1287,6 +1287,12 @@ static int ip_setup_cork(struct sock *sk, struct inet_cork *cork,
+ 	if (unlikely(!rt))
+ 		return -EFAULT;
+ 
++	cork->fragsize = ip_sk_use_pmtu(sk) ?
++			 dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu);
++
++	if (!inetdev_valid_mtu(cork->fragsize))
++		return -ENETUNREACH;
++
+ 	/*
+ 	 * setup for corking.
+ 	 */
+@@ -1303,12 +1309,6 @@ static int ip_setup_cork(struct sock *sk, struct inet_cork *cork,
+ 		cork->addr = ipc->addr;
+ 	}
+ 
+-	cork->fragsize = ip_sk_use_pmtu(sk) ?
+-			 dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu);
+-
+-	if (!inetdev_valid_mtu(cork->fragsize))
+-		return -ENETUNREACH;
+-
+ 	cork->gso_size = ipc->gso_size;
+ 
+ 	cork->dst = &rt->dst;
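
[The ip_output.c hunk moves the fragsize computation and MTU validation ahead of the statements that populate the cork, so an -ENETUNREACH exit can no longer leave a half-initialized cork behind. A toy illustration of that validate-before-commit ordering, with invented fields:]

#include <errno.h>
#include <stdio.h>

struct cork { unsigned int fragsize; unsigned int flags; void *dst; };

static int valid_mtu(unsigned int mtu) { return mtu >= 68; }

static int setup_cork(struct cork *cork, unsigned int mtu, void *dst)
{
	/* all failure checks up front... */
	if (!valid_mtu(mtu))
		return -ENETUNREACH;

	/* ...then commit: no error path below this point */
	cork->fragsize = mtu;
	cork->flags = 1;
	cork->dst = dst;
	return 0;
}

int main(void)
{
	struct cork c = { 0 };

	printf("bad mtu -> %d, cork untouched (flags=%u)\n",
	       setup_cork(&c, 40, NULL), c.flags);
	printf("good mtu -> %d\n", setup_cork(&c, 1500, NULL));
	return 0;
}
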
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 2efc53526a382..8a88e705d8271 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -1366,12 +1366,13 @@ e_inval:
+  * ipv4_pktinfo_prepare - transfer some info from rtable to skb
+  * @sk: socket
+  * @skb: buffer
++ * @drop_dst: if true, drops skb dst
+  *
+  * To support IP_CMSG_PKTINFO option, we store rt_iif and specific
+  * destination in skb->cb[] before dst drop.
+  * This way, receiver doesn't make cache line misses to read rtable.
+  */
+-void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb)
++void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb, bool drop_dst)
+ {
+ 	struct in_pktinfo *pktinfo = PKTINFO_SKB_CB(skb);
+ 	bool prepare = inet_test_bit(PKTINFO, sk) ||
+@@ -1400,7 +1401,8 @@ void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb)
+ 		pktinfo->ipi_ifindex = 0;
+ 		pktinfo->ipi_spec_dst.s_addr = 0;
+ 	}
+-	skb_dst_drop(skb);
++	if (drop_dst)
++		skb_dst_drop(skb);
+ }
+ 
+ int ip_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 0063a237253bf..e49242706b5f5 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -1073,7 +1073,7 @@ static int ipmr_cache_report(const struct mr_table *mrt,
+ 		msg = (struct igmpmsg *)skb_network_header(skb);
+ 		msg->im_vif = vifi;
+ 		msg->im_vif_hi = vifi >> 8;
+-		ipv4_pktinfo_prepare(mroute_sk, pkt);
++		ipv4_pktinfo_prepare(mroute_sk, pkt, false);
+ 		memcpy(skb->cb, pkt->cb, sizeof(skb->cb));
+ 		/* Add our header */
+ 		igmp = skb_put(skb, sizeof(struct igmphdr));
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 27da9d7294c0b..aea89326c6979 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -292,7 +292,7 @@ static int raw_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 
+ 	/* Charge it to the socket. */
+ 
+-	ipv4_pktinfo_prepare(sk, skb);
++	ipv4_pktinfo_prepare(sk, skb, true);
+ 	if (sock_queue_rcv_skb_reason(sk, skb, &reason) < 0) {
+ 		kfree_skb_reason(skb, reason);
+ 		return NET_RX_DROP;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 7bce79beca2b9..b30ef770a6cca 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1786,7 +1786,17 @@ static skb_frag_t *skb_advance_to_frag(struct sk_buff *skb, u32 offset_skb,
+ 
+ static bool can_map_frag(const skb_frag_t *frag)
+ {
+-	return skb_frag_size(frag) == PAGE_SIZE && !skb_frag_off(frag);
++	struct page *page;
++
++	if (skb_frag_size(frag) != PAGE_SIZE || skb_frag_off(frag))
++		return false;
++
++	page = skb_frag_page(frag);
++
++	if (PageCompound(page) || page->mapping)
++		return false;
++
++	return true;
+ }
+ 
+ static int find_next_mappable_frag(const skb_frag_t *frag,
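
[The can_map_frag() change above converts a one-line boolean into guard clauses so the two new rejection conditions (compound pages and page-cache pages) slot in readably. A style-only sketch with stand-in fields, not the kernel's page flags:]

#include <stdbool.h>
#include <stdio.h>

struct frag { unsigned int size, off; bool compound; void *mapping; };

#define PAGE_SZ 4096u

static bool can_map_v1(const struct frag *f)
{
	return f->size == PAGE_SZ && !f->off;
}

static bool can_map_v2(const struct frag *f)
{
	if (f->size != PAGE_SZ || f->off)
		return false;
	if (f->compound || f->mapping)	/* new conditions slot in cleanly */
		return false;
	return true;
}

int main(void)
{
	struct frag f = { .size = PAGE_SZ, .off = 0, .compound = true };

	printf("v1=%d v2=%d\n", can_map_v1(&f), can_map_v2(&f));
	return 0;
}
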
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 148ffb007969f..f631b0a21af4c 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2169,7 +2169,7 @@ static int udp_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
+ 
+ 	udp_csum_pull_header(skb);
+ 
+-	ipv4_pktinfo_prepare(sk, skb);
++	ipv4_pktinfo_prepare(sk, skb, true);
+ 	return __udp_queue_rcv_skb(sk, skb);
+ 
+ csum_error:
+diff --git a/net/ipv6/addrconf_core.c b/net/ipv6/addrconf_core.c
+index 507a8353a6bdb..c008d21925d7f 100644
+--- a/net/ipv6/addrconf_core.c
++++ b/net/ipv6/addrconf_core.c
+@@ -220,19 +220,26 @@ const struct ipv6_stub *ipv6_stub __read_mostly = &(struct ipv6_stub) {
+ EXPORT_SYMBOL_GPL(ipv6_stub);
+ 
+ /* IPv6 Wildcard Address and Loopback Address defined by RFC2553 */
+-const struct in6_addr in6addr_loopback = IN6ADDR_LOOPBACK_INIT;
++const struct in6_addr in6addr_loopback __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_LOOPBACK_INIT;
+ EXPORT_SYMBOL(in6addr_loopback);
+-const struct in6_addr in6addr_any = IN6ADDR_ANY_INIT;
++const struct in6_addr in6addr_any __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_ANY_INIT;
+ EXPORT_SYMBOL(in6addr_any);
+-const struct in6_addr in6addr_linklocal_allnodes = IN6ADDR_LINKLOCAL_ALLNODES_INIT;
++const struct in6_addr in6addr_linklocal_allnodes __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_LINKLOCAL_ALLNODES_INIT;
+ EXPORT_SYMBOL(in6addr_linklocal_allnodes);
+-const struct in6_addr in6addr_linklocal_allrouters = IN6ADDR_LINKLOCAL_ALLROUTERS_INIT;
++const struct in6_addr in6addr_linklocal_allrouters __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_LINKLOCAL_ALLROUTERS_INIT;
+ EXPORT_SYMBOL(in6addr_linklocal_allrouters);
+-const struct in6_addr in6addr_interfacelocal_allnodes = IN6ADDR_INTERFACELOCAL_ALLNODES_INIT;
++const struct in6_addr in6addr_interfacelocal_allnodes __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_INTERFACELOCAL_ALLNODES_INIT;
+ EXPORT_SYMBOL(in6addr_interfacelocal_allnodes);
+-const struct in6_addr in6addr_interfacelocal_allrouters = IN6ADDR_INTERFACELOCAL_ALLROUTERS_INIT;
++const struct in6_addr in6addr_interfacelocal_allrouters __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_INTERFACELOCAL_ALLROUTERS_INIT;
+ EXPORT_SYMBOL(in6addr_interfacelocal_allrouters);
+-const struct in6_addr in6addr_sitelocal_allrouters = IN6ADDR_SITELOCAL_ALLROUTERS_INIT;
++const struct in6_addr in6addr_sitelocal_allrouters __aligned(BITS_PER_LONG/8)
++	= IN6ADDR_SITELOCAL_ALLROUTERS_INIT;
+ EXPORT_SYMBOL(in6addr_sitelocal_allrouters);
+ 
+ static void snmp6_free_dev(struct inet6_dev *idev)
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 46c19bd489901..9bbabf750a21e 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -796,8 +796,8 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+ 						struct sk_buff *skb),
+ 			 bool log_ecn_err)
+ {
+-	const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+-	int err;
++	const struct ipv6hdr *ipv6h;
++	int nh, err;
+ 
+ 	if ((!(tpi->flags & TUNNEL_CSUM) &&
+ 	     (tunnel->parms.i_flags & TUNNEL_CSUM)) ||
+@@ -829,7 +829,6 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+ 			goto drop;
+ 		}
+ 
+-		ipv6h = ipv6_hdr(skb);
+ 		skb->protocol = eth_type_trans(skb, tunnel->dev);
+ 		skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
+ 	} else {
+@@ -837,7 +836,23 @@ static int __ip6_tnl_rcv(struct ip6_tnl *tunnel, struct sk_buff *skb,
+ 		skb_reset_mac_header(skb);
+ 	}
+ 
++	/* Save offset of outer header relative to skb->head,
++	 * because we are going to reset the network header to the inner header
++	 * and might change skb->head.
++	 */
++	nh = skb_network_header(skb) - skb->head;
++
+ 	skb_reset_network_header(skb);
++
++	if (!pskb_inet_may_pull(skb)) {
++		DEV_STATS_INC(tunnel->dev, rx_length_errors);
++		DEV_STATS_INC(tunnel->dev, rx_errors);
++		goto drop;
++	}
++
++	/* Get the outer header. */
++	ipv6h = (struct ipv6hdr *)(skb->head + nh);
++
+ 	memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
+ 
+ 	__skb_tunnel_rx(skb, tunnel->dev, tunnel->net);
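
[The ip6_tunnel.c fix records the outer header as an offset from skb->head before pskb_inet_may_pull(), which may reallocate the header, and recomputes the pointer afterwards. The same discipline in plain C with realloc(), where any saved raw pointer would dangle:]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *buf = malloc(32);
	char *hdr;
	size_t nh;

	strcpy(buf, "prefix:outer-header");
	hdr = buf + 7;			/* points at "outer-header" */

	nh = hdr - buf;			/* save an offset, not the pointer */
	buf = realloc(buf, 1 << 20);	/* may move; 'hdr' is now dangling */
	hdr = buf + nh;			/* recompute from the new base */

	printf("%s\n", hdr);
	free(buf);
	return 0;
}
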
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 65d1f6755f98f..1184d40167b86 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -634,7 +634,7 @@ retry:
+ 
+ 		msize = 0;
+ 		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
+-			msize += skb_shinfo(skb)->frags[i].bv_len;
++			msize += skb_frag_size(&skb_shinfo(skb)->frags[i]);
+ 
+ 		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE,
+ 			      skb_shinfo(skb)->frags, skb_shinfo(skb)->nr_frags,
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 20551cfb7da6d..fde1140d899ef 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -226,6 +226,8 @@ static int llc_ui_release(struct socket *sock)
+ 	}
+ 	netdev_put(llc->dev, &llc->dev_tracker);
+ 	sock_put(sk);
++	sock_orphan(sk);
++	sock->sk = NULL;
+ 	llc_sk_free(sk);
+ out:
+ 	return 0;
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index e573be5afde7a..ae493599a3ef0 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -457,7 +457,8 @@ static void tcp_init_sender(struct ip_ct_tcp_state *sender,
+ 			    const struct sk_buff *skb,
+ 			    unsigned int dataoff,
+ 			    const struct tcphdr *tcph,
+-			    u32 end, u32 win)
++			    u32 end, u32 win,
++			    enum ip_conntrack_dir dir)
+ {
+ 	/* SYN-ACK in reply to a SYN
+ 	 * or SYN from reply direction in simultaneous open.
+@@ -471,7 +472,8 @@ static void tcp_init_sender(struct ip_ct_tcp_state *sender,
+ 	 * Both sides must send the Window Scale option
+ 	 * to enable window scaling in either direction.
+ 	 */
+-	if (!(sender->flags & IP_CT_TCP_FLAG_WINDOW_SCALE &&
++	if (dir == IP_CT_DIR_REPLY &&
++	    !(sender->flags & IP_CT_TCP_FLAG_WINDOW_SCALE &&
+ 	      receiver->flags & IP_CT_TCP_FLAG_WINDOW_SCALE)) {
+ 		sender->td_scale = 0;
+ 		receiver->td_scale = 0;
+@@ -542,7 +544,7 @@ tcp_in_window(struct nf_conn *ct, enum ip_conntrack_dir dir,
+ 		if (tcph->syn) {
+ 			tcp_init_sender(sender, receiver,
+ 					skb, dataoff, tcph,
+-					end, win);
++					end, win, dir);
+ 			if (!tcph->ack)
+ 				/* Simultaneous open */
+ 				return NFCT_TCP_ACCEPT;
+@@ -585,7 +587,7 @@ tcp_in_window(struct nf_conn *ct, enum ip_conntrack_dir dir,
+ 		 */
+ 		tcp_init_sender(sender, receiver,
+ 				skb, dataoff, tcph,
+-				end, win);
++				end, win, dir);
+ 
+ 		if (dir == IP_CT_DIR_REPLY && !tcph->ack)
+ 			return NFCT_TCP_ACCEPT;
+diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
+index 8cc52d2bd31be..e16f158388bbe 100644
+--- a/net/netfilter/nf_log.c
++++ b/net/netfilter/nf_log.c
+@@ -193,11 +193,12 @@ void nf_logger_put(int pf, enum nf_log_type type)
+ 		return;
+ 	}
+ 
+-	BUG_ON(loggers[pf][type] == NULL);
+-
+ 	rcu_read_lock();
+ 	logger = rcu_dereference(loggers[pf][type]);
+-	module_put(logger->me);
++	if (!logger)
++		WARN_ON_ONCE(1);
++	else
++		module_put(logger->me);
+ 	rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(nf_logger_put);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 6a987b36d0bb0..0e07f110a539b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -7468,11 +7468,15 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
+-static const struct nft_object_type *__nft_obj_type_get(u32 objtype)
++static const struct nft_object_type *__nft_obj_type_get(u32 objtype, u8 family)
+ {
+ 	const struct nft_object_type *type;
+ 
+ 	list_for_each_entry(type, &nf_tables_objects, list) {
++		if (type->family != NFPROTO_UNSPEC &&
++		    type->family != family)
++			continue;
++
+ 		if (objtype == type->type)
+ 			return type;
+ 	}
+@@ -7480,11 +7484,11 @@ static const struct nft_object_type *__nft_obj_type_get(u32 objtype)
+ }
+ 
+ static const struct nft_object_type *
+-nft_obj_type_get(struct net *net, u32 objtype)
++nft_obj_type_get(struct net *net, u32 objtype, u8 family)
+ {
+ 	const struct nft_object_type *type;
+ 
+-	type = __nft_obj_type_get(objtype);
++	type = __nft_obj_type_get(objtype, family);
+ 	if (type != NULL && try_module_get(type->owner))
+ 		return type;
+ 
+@@ -7577,7 +7581,7 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
+ 		if (info->nlh->nlmsg_flags & NLM_F_REPLACE)
+ 			return -EOPNOTSUPP;
+ 
+-		type = __nft_obj_type_get(objtype);
++		type = __nft_obj_type_get(objtype, family);
+ 		if (WARN_ON_ONCE(!type))
+ 			return -ENOENT;
+ 
+@@ -7591,7 +7595,7 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
+ 	if (!nft_use_inc(&table->use))
+ 		return -EMFILE;
+ 
+-	type = nft_obj_type_get(net, objtype);
++	type = nft_obj_type_get(net, objtype, family);
+ 	if (IS_ERR(type)) {
+ 		err = PTR_ERR(type);
+ 		goto err_type;
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 86bb9d7797d9e..aac98a3c966e9 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -1250,7 +1250,31 @@ static int nft_ct_expect_obj_init(const struct nft_ctx *ctx,
+ 	if (tb[NFTA_CT_EXPECT_L3PROTO])
+ 		priv->l3num = ntohs(nla_get_be16(tb[NFTA_CT_EXPECT_L3PROTO]));
+ 
++	switch (priv->l3num) {
++	case NFPROTO_IPV4:
++	case NFPROTO_IPV6:
++		if (priv->l3num != ctx->family)
++			return -EINVAL;
++
++		fallthrough;
++	case NFPROTO_INET:
++		break;
++	default:
++		return -EOPNOTSUPP;
++	}
++
+ 	priv->l4proto = nla_get_u8(tb[NFTA_CT_EXPECT_L4PROTO]);
++	switch (priv->l4proto) {
++	case IPPROTO_TCP:
++	case IPPROTO_UDP:
++	case IPPROTO_UDPLITE:
++	case IPPROTO_DCCP:
++	case IPPROTO_SCTP:
++		break;
++	default:
++		return -EOPNOTSUPP;
++	}
++
+ 	priv->dport = nla_get_be16(tb[NFTA_CT_EXPECT_DPORT]);
+ 	priv->timeout = nla_get_u32(tb[NFTA_CT_EXPECT_TIMEOUT]);
+ 	priv->size = nla_get_u8(tb[NFTA_CT_EXPECT_SIZE]);
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 9f21953c7433f..f735d79d8be57 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -713,6 +713,7 @@ static const struct nft_object_ops nft_tunnel_obj_ops = {
+ 
+ static struct nft_object_type nft_tunnel_obj_type __read_mostly = {
+ 	.type		= NFT_OBJECT_TUNNEL,
++	.family		= NFPROTO_NETDEV,
+ 	.ops		= &nft_tunnel_obj_ops,
+ 	.maxattr	= NFTA_TUNNEL_KEY_MAX,
+ 	.policy		= nft_tunnel_key_policy,
+diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
+index 89ac05a711a42..39c908a3ca6e8 100644
+--- a/net/rxrpc/conn_service.c
++++ b/net/rxrpc/conn_service.c
+@@ -25,7 +25,7 @@ struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
+ 	struct rxrpc_conn_proto k;
+ 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ 	struct rb_node *p;
+-	unsigned int seq = 0;
++	unsigned int seq = 1;
+ 
+ 	k.epoch	= sp->hdr.epoch;
+ 	k.cid	= sp->hdr.cid & RXRPC_CIDMASK;
+@@ -35,6 +35,7 @@ struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
+ 		 * under just the RCU read lock, so we have to check for
+ 		 * changes.
+ 		 */
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ 		read_seqbegin_or_lock(&peer->service_conn_lock, &seq);
+ 
+ 		p = rcu_dereference_raw(peer->service_conns.rb_node);
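
[The rxrpc hunk makes the read_seqbegin_or_lock() loop actually escalate: the sequence variable starts at 1 and is bumped at the top of each pass, so the first pass is even (lockless) and a retry leaves it odd, taking the lock. A single-threaded toy that only mimics that even/odd control flow; it is not the kernel seqlock API:]

#include <stdio.h>

int main(void)
{
	unsigned int seq = 1;
	int pass = 0;

	do {
		seq++;			/* 2 on the 1st pass, odd on retry */
		if (seq & 1) {
			printf("pass %d: odd seq %u -> locked read\n", pass, seq);
			break;		/* the locked pass cannot race */
		}
		printf("pass %d: even seq %u -> lockless read\n", pass, seq);
		seq = 42;		/* read_seqbegin() hands back an even value */
	} while (pass++ < 1);		/* pretend the lockless read raced once */

	return 0;
}
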
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index 72f4d81a3f41f..1489a8421d786 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -155,10 +155,12 @@ static int smc_clc_ueid_remove(char *ueid)
+ 			rc = 0;
+ 		}
+ 	}
++#if IS_ENABLED(CONFIG_S390)
+ 	if (!rc && !smc_clc_eid_table.ueid_cnt) {
+ 		smc_clc_eid_table.seid_enabled = 1;
+ 		rc = -EAGAIN;	/* indicate success and enabling of seid */
+ 	}
++#endif
+ 	write_unlock(&smc_clc_eid_table.lock);
+ 	return rc;
+ }
+@@ -273,22 +275,30 @@ err:
+ 
+ int smc_nl_enable_seid(struct sk_buff *skb, struct genl_info *info)
+ {
++#if IS_ENABLED(CONFIG_S390)
+ 	write_lock(&smc_clc_eid_table.lock);
+ 	smc_clc_eid_table.seid_enabled = 1;
+ 	write_unlock(&smc_clc_eid_table.lock);
+ 	return 0;
++#else
++	return -EOPNOTSUPP;
++#endif
+ }
+ 
+ int smc_nl_disable_seid(struct sk_buff *skb, struct genl_info *info)
+ {
+ 	int rc = 0;
+ 
++#if IS_ENABLED(CONFIG_S390)
+ 	write_lock(&smc_clc_eid_table.lock);
+ 	if (!smc_clc_eid_table.ueid_cnt)
+ 		rc = -ENOENT;
+ 	else
+ 		smc_clc_eid_table.seid_enabled = 0;
+ 	write_unlock(&smc_clc_eid_table.lock);
++#else
++	rc = -EOPNOTSUPP;
++#endif
+ 	return rc;
+ }
+ 
+@@ -1269,7 +1279,11 @@ void __init smc_clc_init(void)
+ 	INIT_LIST_HEAD(&smc_clc_eid_table.list);
+ 	rwlock_init(&smc_clc_eid_table.lock);
+ 	smc_clc_eid_table.ueid_cnt = 0;
++#if IS_ENABLED(CONFIG_S390)
+ 	smc_clc_eid_table.seid_enabled = 1;
++#else
++	smc_clc_eid_table.seid_enabled = 0;
++#endif
+ }
+ 
+ void smc_clc_exit(void)
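
[The smc_clc.c changes gate SEID support on IS_ENABLED(CONFIG_S390): the default flips to disabled and the netlink toggles return -EOPNOTSUPP on other architectures. A compact sketch of that compile-time gating pattern; FEATURE_SEID stands in for the Kconfig symbol:]

#include <errno.h>
#include <stdio.h>

#define FEATURE_SEID 0		/* flip to 1 to emulate CONFIG_S390=y */

static int seid_enabled =
#if FEATURE_SEID
	1;
#else
	0;
#endif

static int enable_seid(void)
{
#if FEATURE_SEID
	seid_enabled = 1;
	return 0;
#else
	return -EOPNOTSUPP;	/* feature compiled out */
#endif
}

int main(void)
{
	printf("default=%d enable()=%d\n", seid_enabled, enable_seid());
	return 0;
}
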
+diff --git a/net/sunrpc/xprtmultipath.c b/net/sunrpc/xprtmultipath.c
+index 74ee2271251e3..720d3ba742ec0 100644
+--- a/net/sunrpc/xprtmultipath.c
++++ b/net/sunrpc/xprtmultipath.c
+@@ -336,8 +336,9 @@ struct rpc_xprt *xprt_iter_current_entry_offline(struct rpc_xprt_iter *xpi)
+ 			xprt_switch_find_current_entry_offline);
+ }
+ 
+-bool rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
+-			      const struct sockaddr *sap)
++static
++bool __rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
++				const struct sockaddr *sap)
+ {
+ 	struct list_head *head;
+ 	struct rpc_xprt *pos;
+@@ -356,6 +357,18 @@ bool rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
+ 	return false;
+ }
+ 
++bool rpc_xprt_switch_has_addr(struct rpc_xprt_switch *xps,
++			      const struct sockaddr *sap)
++{
++	bool res;
++
++	rcu_read_lock();
++	res = __rpc_xprt_switch_has_addr(xps, sap);
++	rcu_read_unlock();
++
++	return res;
++}
++
+ static
+ struct rpc_xprt *xprt_switch_find_next_entry(struct list_head *head,
+ 		const struct rpc_xprt *cur, bool check_active)
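
[The sunrpc hunk splits rpc_xprt_switch_has_addr() into a lock-free __ helper plus a public wrapper that takes rcu_read_lock(), so internal callers already inside a read section do not nest while external callers stay safe. The same shape with a pthread rwlock standing in for RCU; names and data are illustrative, compile with -pthread:]

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static int addrs[] = { 10, 20, 30 };

/* caller must hold the read lock */
static bool __has_addr(int addr)
{
	for (unsigned i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++)
		if (addrs[i] == addr)
			return true;
	return false;
}

/* safe for callers that hold no lock */
static bool has_addr(int addr)
{
	bool res;

	pthread_rwlock_rdlock(&lock);
	res = __has_addr(addr);
	pthread_rwlock_unlock(&lock);
	return res;
}

int main(void)
{
	printf("%d %d\n", has_addr(20), has_addr(99));
	return 0;
}
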
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index ac1f2bc18fc96..30b178ebba60a 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1344,13 +1344,11 @@ static void unix_state_double_lock(struct sock *sk1, struct sock *sk2)
+ 		unix_state_lock(sk1);
+ 		return;
+ 	}
+-	if (sk1 < sk2) {
+-		unix_state_lock(sk1);
+-		unix_state_lock_nested(sk2);
+-	} else {
+-		unix_state_lock(sk2);
+-		unix_state_lock_nested(sk1);
+-	}
++	if (sk1 > sk2)
++		swap(sk1, sk2);
++
++	unix_state_lock(sk1);
++	unix_state_lock_nested(sk2, U_LOCK_SECOND);
+ }
+ 
+ static void unix_state_double_unlock(struct sock *sk1, struct sock *sk2)
+@@ -1591,7 +1589,7 @@ restart:
+ 		goto out_unlock;
+ 	}
+ 
+-	unix_state_lock_nested(sk);
++	unix_state_lock_nested(sk, U_LOCK_SECOND);
+ 
+ 	if (sk->sk_state != st) {
+ 		unix_state_unlock(sk);
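
[The af_unix change normalizes the lock pair with swap() so the lower-addressed socket is always locked first, with a lockdep subclass for the second; this address ordering is what prevents ABBA deadlocks between two threads locking the same pair in opposite orders. A userspace sketch with pthread mutexes, compile with -pthread:]

#include <pthread.h>
#include <stdio.h>

static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (a == b) {
		pthread_mutex_lock(a);
		return;
	}
	if (a > b) {			/* normalize the order */
		pthread_mutex_t *tmp = a;
		a = b;
		b = tmp;
	}
	pthread_mutex_lock(a);
	pthread_mutex_lock(b);		/* second lock, fixed order */
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (a != b)
		pthread_mutex_unlock(b);
	pthread_mutex_unlock(a);
}

int main(void)
{
	pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
	pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

	lock_pair(&m2, &m1);		/* internally taken in one fixed order */
	unlock_pair(&m2, &m1);
	puts("no deadlock");
	return 0;
}
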
+diff --git a/net/unix/diag.c b/net/unix/diag.c
+index bec09a3a1d44c..be19827eca36d 100644
+--- a/net/unix/diag.c
++++ b/net/unix/diag.c
+@@ -84,7 +84,7 @@ static int sk_diag_dump_icons(struct sock *sk, struct sk_buff *nlskb)
+ 			 * queue lock. With the other's queue locked it's
+ 			 * OK to lock the state.
+ 			 */
+-			unix_state_lock_nested(req);
++			unix_state_lock_nested(req, U_LOCK_DIAG);
+ 			peer = unix_sk(req)->peer;
+ 			buf[i++] = (peer ? sock_i_ino(peer) : 0);
+ 			unix_state_unlock(req);
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 0d6c3fc1238ab..b9da6f5152cbc 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1864,8 +1864,12 @@ __cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 				list_add(&new->hidden_list,
+ 					 &hidden->hidden_list);
+ 				hidden->refcount++;
++
++				ies = (void *)rcu_access_pointer(new->pub.beacon_ies);
+ 				rcu_assign_pointer(new->pub.beacon_ies,
+ 						   hidden->pub.beacon_ies);
++				if (ies)
++					kfree_rcu(ies, rcu_head);
+ 			}
+ 		} else {
+ 			/*
+diff --git a/sound/hda/hdac_stream.c b/sound/hda/hdac_stream.c
+index 6ce24e248f8e0..610ea7a33cd85 100644
+--- a/sound/hda/hdac_stream.c
++++ b/sound/hda/hdac_stream.c
+@@ -671,17 +671,15 @@ void snd_hdac_stream_timecounter_init(struct hdac_stream *azx_dev,
+ 	struct hdac_stream *s;
+ 	bool inited = false;
+ 	u64 cycle_last = 0;
+-	int i = 0;
+ 
+ 	list_for_each_entry(s, &bus->stream_list, list) {
+-		if (streams & (1 << i)) {
++		if ((streams & (1 << s->index))) {
+ 			azx_timecounter_init(s, inited, cycle_last);
+ 			if (!inited) {
+ 				inited = true;
+ 				cycle_last = s->tc.cycle_last;
+ 			}
+ 		}
+-		i++;
+ 	}
+ 
+ 	snd_pcm_gettime(runtime, &runtime->trigger_tstamp);
+@@ -726,14 +724,13 @@ void snd_hdac_stream_sync(struct hdac_stream *azx_dev, bool start,
+ 			  unsigned int streams)
+ {
+ 	struct hdac_bus *bus = azx_dev->bus;
+-	int i, nwait, timeout;
++	int nwait, timeout;
+ 	struct hdac_stream *s;
+ 
+ 	for (timeout = 5000; timeout; timeout--) {
+ 		nwait = 0;
+-		i = 0;
+ 		list_for_each_entry(s, &bus->stream_list, list) {
+-			if (!(streams & (1 << i++)))
++			if (!(streams & (1 << s->index)))
+ 				continue;
+ 
+ 			if (start) {
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 756fa0aa69bba..6a384b922e4fa 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -521,6 +521,16 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ 		.device = PCI_DEVICE_ID_INTEL_HDA_MTL,
+ 	},
++	/* ArrowLake-S */
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = PCI_DEVICE_ID_INTEL_HDA_ARL_S,
++	},
++	/* ArrowLake */
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++		.device = PCI_DEVICE_ID_INTEL_HDA_ARL,
++	},
+ #endif
+ 
+ /* Lunar Lake */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 2d1df3654424c..2276adc844784 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2504,6 +2504,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, HDA_LNL_P, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE) },
+ 	/* Arrow Lake-S */
+ 	{ PCI_DEVICE_DATA(INTEL, HDA_ARL_S, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE) },
++	/* Arrow Lake */
++	{ PCI_DEVICE_DATA(INTEL, HDA_ARL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE) },
+ 	/* Apollolake (Broxton-P) */
+ 	{ PCI_DEVICE_DATA(INTEL, HDA_APL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON) },
+ 	/* Gemini-Lake */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index a889cccdd607c..e8819e8a98763 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -21,6 +21,12 @@
+ #include "hda_jack.h"
+ #include "hda_generic.h"
+ 
++enum {
++	CX_HEADSET_NOPRESENT = 0,
++	CX_HEADSET_PARTPRESENT,
++	CX_HEADSET_ALLPRESENT,
++};
++
+ struct conexant_spec {
+ 	struct hda_gen_spec gen;
+ 
+@@ -42,7 +48,8 @@ struct conexant_spec {
+ 	unsigned int gpio_led;
+ 	unsigned int gpio_mute_led_mask;
+ 	unsigned int gpio_mic_led_mask;
+-
++	unsigned int headset_present_flag;
++	bool is_cx8070_sn6140;
+ };
+ 
+ 
+@@ -164,6 +171,27 @@ static void cxt_init_gpio_led(struct hda_codec *codec)
+ 	}
+ }
+ 
++static void cx_fixup_headset_recog(struct hda_codec *codec)
++{
++	unsigned int mic_present;
++
++	/* fix headset type recognition failures with some headsets, e.g. EDIFIER */
++	/* set micbiasd output current comparator threshold from 66% to 55%. */
++	snd_hda_codec_write(codec, 0x1c, 0, 0x320, 0x010);
++	/* set OFF voltage for DFET from -1.2V to -0.8V, set headset micbias resistor
++	 * value adjustment trim from 2.2K ohms to 2.0K ohms.
++	 */
++	snd_hda_codec_write(codec, 0x1c, 0, 0x3b0, 0xe10);
++	/* fix headset type recognition failures after reboot */
++	mic_present = snd_hda_codec_read(codec, 0x19, 0, AC_VERB_GET_PIN_SENSE, 0x0);
++	if (mic_present & AC_PINSENSE_PRESENCE)
++		/* enable headset mic VREF */
++		snd_hda_codec_write(codec, 0x19, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24);
++	else
++		/* disable headset mic VREF */
++		snd_hda_codec_write(codec, 0x19, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x20);
++}
++
+ static int cx_auto_init(struct hda_codec *codec)
+ {
+ 	struct conexant_spec *spec = codec->spec;
+@@ -174,6 +202,9 @@ static int cx_auto_init(struct hda_codec *codec)
+ 	cxt_init_gpio_led(codec);
+ 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT);
+ 
++	if (spec->is_cx8070_sn6140)
++		cx_fixup_headset_recog(codec);
++
+ 	return 0;
+ }
+ 
+@@ -192,6 +223,77 @@ static void cx_auto_free(struct hda_codec *codec)
+ 	snd_hda_gen_free(codec);
+ }
+ 
++static void cx_process_headset_plugin(struct hda_codec *codec)
++{
++	unsigned int val;
++	unsigned int count = 0;
++
++	/* Wait for headset type detection to complete. */
++	do {
++		val = snd_hda_codec_read(codec, 0x1c, 0, 0xca0, 0x0);
++		if (val & 0x080) {
++			codec_dbg(codec, "headset type detect done!\n");
++			break;
++		}
++		msleep(20);
++		count++;
++	} while (count < 3);
++	val = snd_hda_codec_read(codec, 0x1c, 0, 0xcb0, 0x0);
++	if (val & 0x800) {
++		codec_dbg(codec, "headset plugin, type is CTIA\n");
++		snd_hda_codec_write(codec, 0x19, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24);
++	} else if (val & 0x400) {
++		codec_dbg(codec, "headset plugin, type is OMTP\n");
++		snd_hda_codec_write(codec, 0x19, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24);
++	} else {
++		codec_dbg(codec, "headphone plugin\n");
++	}
++}
++
++static void cx_update_headset_mic_vref(struct hda_codec *codec, unsigned int res)
++{
++	unsigned int phone_present, mic_present, phone_tag, mic_tag;
++	struct conexant_spec *spec = codec->spec;
++
++	/* In cx8070 and sn6140, node 0x16 can only be configured as headphone or
++	 * disabled, and node 0x19 can only be configured as microphone or disabled.
++	 * Check the hp & mic tags to handle headset plugin & plugout.
++	 */
++	phone_tag = snd_hda_codec_read(codec, 0x16, 0, AC_VERB_GET_UNSOLICITED_RESPONSE, 0x0);
++	mic_tag = snd_hda_codec_read(codec, 0x19, 0, AC_VERB_GET_UNSOLICITED_RESPONSE, 0x0);
++	if ((phone_tag & (res >> AC_UNSOL_RES_TAG_SHIFT)) ||
++	    (mic_tag & (res >> AC_UNSOL_RES_TAG_SHIFT))) {
++		phone_present = snd_hda_codec_read(codec, 0x16, 0, AC_VERB_GET_PIN_SENSE, 0x0);
++		if (!(phone_present & AC_PINSENSE_PRESENCE)) {/* headphone plugout */
++			spec->headset_present_flag = CX_HEADSET_NOPRESENT;
++			snd_hda_codec_write(codec, 0x19, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x20);
++			return;
++		}
++		if (spec->headset_present_flag == CX_HEADSET_NOPRESENT) {
++			spec->headset_present_flag = CX_HEADSET_PARTPRESENT;
++		} else if (spec->headset_present_flag == CX_HEADSET_PARTPRESENT) {
++			mic_present = snd_hda_codec_read(codec, 0x19, 0,
++							 AC_VERB_GET_PIN_SENSE, 0x0);
++			/* headset is present */
++			if ((phone_present & AC_PINSENSE_PRESENCE) &&
++			    (mic_present & AC_PINSENSE_PRESENCE)) {
++				cx_process_headset_plugin(codec);
++				spec->headset_present_flag = CX_HEADSET_ALLPRESENT;
++			}
++		}
++	}
++}
++
++static void cx_jack_unsol_event(struct hda_codec *codec, unsigned int res)
++{
++	struct conexant_spec *spec = codec->spec;
++
++	if (spec->is_cx8070_sn6140)
++		cx_update_headset_mic_vref(codec, res);
++
++	snd_hda_jack_unsol_event(codec, res);
++}
++
+ #ifdef CONFIG_PM
+ static int cx_auto_suspend(struct hda_codec *codec)
+ {
+@@ -205,7 +307,7 @@ static const struct hda_codec_ops cx_auto_patch_ops = {
+ 	.build_pcms = snd_hda_gen_build_pcms,
+ 	.init = cx_auto_init,
+ 	.free = cx_auto_free,
+-	.unsol_event = snd_hda_jack_unsol_event,
++	.unsol_event = cx_jack_unsol_event,
+ #ifdef CONFIG_PM
+ 	.suspend = cx_auto_suspend,
+ 	.check_power_status = snd_hda_gen_check_power_status,
+@@ -1042,6 +1144,15 @@ static int patch_conexant_auto(struct hda_codec *codec)
+ 	codec->spec = spec;
+ 	codec->patch_ops = cx_auto_patch_ops;
+ 
++	/* init cx8070/sn6140 flag and reset headset_present_flag */
++	switch (codec->core.vendor_id) {
++	case 0x14f11f86:
++	case 0x14f11f87:
++		spec->is_cx8070_sn6140 = true;
++		spec->headset_present_flag = CX_HEADSET_NOPRESENT;
++		break;
++	}
++
+ 	cx_auto_parse_eapd(codec);
+ 	spec->gen.own_eapd_ctl = 1;
+ 
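
[The patch_conexant.c additions track headset presence through three states because the headphone and mic pins report separate unsolicited events; only when both pins are present is the plug treated as a full headset and the mic VREF configured. A toy version of that state progression, with invented event values:]

#include <stdbool.h>
#include <stdio.h>

enum { NOPRESENT, PARTPRESENT, ALLPRESENT };

static int state = NOPRESENT;

static void jack_event(bool phone_present, bool mic_present)
{
	if (!phone_present) {		/* headphone unplugged: reset */
		state = NOPRESENT;
		return;
	}
	if (state == NOPRESENT)
		state = PARTPRESENT;
	else if (state == PARTPRESENT && mic_present)
		state = ALLPRESENT;	/* both pins present: real headset */
	printf("state=%d\n", state);
}

int main(void)
{
	jack_event(true, false);	/* headphone pin first */
	jack_event(true, true);		/* then mic: full headset */
	jack_event(false, false);	/* plugout */
	printf("final=%d\n", state);
	return 0;
}
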
+diff --git a/sound/soc/amd/acp-config.c b/sound/soc/amd/acp-config.c
+index 3bc4b2e41650e..dea6d367b9e82 100644
+--- a/sound/soc/amd/acp-config.c
++++ b/sound/soc/amd/acp-config.c
+@@ -3,7 +3,7 @@
+ // This file is provided under a dual BSD/GPLv2 license. When using or
+ // redistributing this file, you may do so under either license.
+ //
+-// Copyright(c) 2021 Advanced Micro Devices, Inc.
++// Copyright(c) 2021, 2023 Advanced Micro Devices, Inc.
+ //
+ // Authors: Ajit Kumar Pandey <AjitKumar.Pandey@amd.com>
+ //
+@@ -47,6 +47,19 @@ static const struct config_entry config_table[] = {
+ 			{}
+ 		},
+ 	},
++	{
++		.flags = FLAG_AMD_LEGACY,
++		.device = ACP_PCI_DEV_ID,
++		.dmi_table = (const struct dmi_system_id []) {
++			{
++				.matches = {
++					DMI_MATCH(DMI_SYS_VENDOR, "Valve"),
++					DMI_MATCH(DMI_PRODUCT_NAME, "Jupiter"),
++				},
++			},
++			{}
++		},
++	},
+ 	{
+ 		.flags = FLAG_AMD_SOF,
+ 		.device = ACP_PCI_DEV_ID,
+diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c
+index 7e21cec3c2fb9..6ce309980cd10 100644
+--- a/sound/soc/codecs/lpass-wsa-macro.c
++++ b/sound/soc/codecs/lpass-wsa-macro.c
+@@ -1584,7 +1584,6 @@ static int wsa_macro_enable_interpolator(struct snd_soc_dapm_widget *w,
+ 	u16 gain_reg;
+ 	u16 reg;
+ 	int val;
+-	int offset_val = 0;
+ 	struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
+ 
+ 	if (w->shift == WSA_MACRO_COMP1) {
+@@ -1623,10 +1622,8 @@ static int wsa_macro_enable_interpolator(struct snd_soc_dapm_widget *w,
+ 					CDC_WSA_RX1_RX_PATH_MIX_SEC0,
+ 					CDC_WSA_RX_PGA_HALF_DB_MASK,
+ 					CDC_WSA_RX_PGA_HALF_DB_ENABLE);
+-			offset_val = -2;
+ 		}
+ 		val = snd_soc_component_read(component, gain_reg);
+-		val += offset_val;
+ 		snd_soc_component_write(component, gain_reg, val);
+ 		wsa_macro_config_ear_spkr_gain(component, wsa,
+ 						event, gain_reg);
+@@ -1654,10 +1651,6 @@ static int wsa_macro_enable_interpolator(struct snd_soc_dapm_widget *w,
+ 					CDC_WSA_RX1_RX_PATH_MIX_SEC0,
+ 					CDC_WSA_RX_PGA_HALF_DB_MASK,
+ 					CDC_WSA_RX_PGA_HALF_DB_DISABLE);
+-			offset_val = 2;
+-			val = snd_soc_component_read(component, gain_reg);
+-			val += offset_val;
+-			snd_soc_component_write(component, gain_reg, val);
+ 		}
+ 		wsa_macro_config_ear_spkr_gain(component, wsa,
+ 						event, gain_reg);
+diff --git a/sound/soc/codecs/rtq9128.c b/sound/soc/codecs/rtq9128.c
+index c22b047115cc4..aa3eadecd9746 100644
+--- a/sound/soc/codecs/rtq9128.c
++++ b/sound/soc/codecs/rtq9128.c
+@@ -59,6 +59,7 @@
+ 
+ struct rtq9128_data {
+ 	struct gpio_desc *enable;
++	unsigned int daifmt;
+ 	int tdm_slots;
+ 	int tdm_slot_width;
+ 	bool tdm_input_data2_select;
+@@ -391,7 +392,11 @@ static int rtq9128_component_probe(struct snd_soc_component *comp)
+ 	unsigned int val;
+ 	int i, ret;
+ 
+-	pm_runtime_resume_and_get(comp->dev);
++	ret = pm_runtime_resume_and_get(comp->dev);
++	if (ret < 0) {
++		dev_err(comp->dev, "Failed to resume device (%d)\n", ret);
++		return ret;
++	}
+ 
+ 	val = snd_soc_component_read(comp, RTQ9128_REG_EFUSE_DATA);
+ 
+@@ -437,10 +442,7 @@ static const struct snd_soc_component_driver rtq9128_comp_driver = {
+ static int rtq9128_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ {
+ 	struct rtq9128_data *data = snd_soc_dai_get_drvdata(dai);
+-	struct snd_soc_component *comp = dai->component;
+ 	struct device *dev = dai->dev;
+-	unsigned int audfmt, fmtval;
+-	int ret;
+ 
+ 	dev_dbg(dev, "%s: fmt 0x%8x\n", __func__, fmt);
+ 
+@@ -450,35 +452,10 @@ static int rtq9128_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		return -EINVAL;
+ 	}
+ 
+-	fmtval = fmt & SND_SOC_DAIFMT_FORMAT_MASK;
+-	if (data->tdm_slots && fmtval != SND_SOC_DAIFMT_DSP_A && fmtval != SND_SOC_DAIFMT_DSP_B) {
+-		dev_err(dev, "TDM is used, format only support DSP_A or DSP_B\n");
+-		return -EINVAL;
+-	}
++	/* Store the format here; it is applied later in hw_params at runtime */
++	data->daifmt = fmt;
+ 
+-	switch (fmtval) {
+-	case SND_SOC_DAIFMT_I2S:
+-		audfmt = 8;
+-		break;
+-	case SND_SOC_DAIFMT_LEFT_J:
+-		audfmt = 9;
+-		break;
+-	case SND_SOC_DAIFMT_RIGHT_J:
+-		audfmt = 10;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_A:
+-		audfmt = data->tdm_slots ? 12 : 11;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B:
+-		audfmt = data->tdm_slots ? 4 : 3;
+-		break;
+-	default:
+-		dev_err(dev, "Unsupported format 0x%8x\n", fmt);
+-		return -EINVAL;
+-	}
+-
+-	ret = snd_soc_component_write_field(comp, RTQ9128_REG_I2S_OPT, RTQ9128_AUDFMT_MASK, audfmt);
+-	return ret < 0 ? ret : 0;
++	return 0;
+ }
+ 
+ static int rtq9128_dai_set_tdm_slot(struct snd_soc_dai *dai, unsigned int tx_mask,
+@@ -554,10 +531,38 @@ static int rtq9128_dai_hw_params(struct snd_pcm_substream *stream, struct snd_pc
+ 	unsigned int width, slot_width, bitrate, audbit, dolen;
+ 	struct snd_soc_component *comp = dai->component;
+ 	struct device *dev = dai->dev;
++	unsigned int fmtval, audfmt;
+ 	int ret;
+ 
+ 	dev_dbg(dev, "%s: width %d\n", __func__, params_width(param));
+ 
++	fmtval = FIELD_GET(SND_SOC_DAIFMT_FORMAT_MASK, data->daifmt);
++	if (data->tdm_slots && fmtval != SND_SOC_DAIFMT_DSP_A && fmtval != SND_SOC_DAIFMT_DSP_B) {
++		dev_err(dev, "TDM is used, format only supports DSP_A or DSP_B\n");
++		return -EINVAL;
++	}
++
++	switch (fmtval) {
++	case SND_SOC_DAIFMT_I2S:
++		audfmt = 8;
++		break;
++	case SND_SOC_DAIFMT_LEFT_J:
++		audfmt = 9;
++		break;
++	case SND_SOC_DAIFMT_RIGHT_J:
++		audfmt = 10;
++		break;
++	case SND_SOC_DAIFMT_DSP_A:
++		audfmt = data->tdm_slots ? 12 : 11;
++		break;
++	case SND_SOC_DAIFMT_DSP_B:
++		audfmt = data->tdm_slots ? 4 : 3;
++		break;
++	default:
++		dev_err(dev, "Unsupported format 0x%8x\n", fmtval);
++		return -EINVAL;
++	}
++
+ 	switch (width = params_width(param)) {
+ 	case 16:
+ 		audbit = 0;
+@@ -611,6 +616,10 @@ static int rtq9128_dai_hw_params(struct snd_pcm_substream *stream, struct snd_pc
+ 		return -EINVAL;
+ 	}
+ 
++	ret = snd_soc_component_write_field(comp, RTQ9128_REG_I2S_OPT, RTQ9128_AUDFMT_MASK, audfmt);
++	if (ret < 0)
++		return ret;
++
+ 	ret = snd_soc_component_write_field(comp, RTQ9128_REG_I2S_OPT, RTQ9128_AUDBIT_MASK, audbit);
+ 	if (ret < 0)
+ 		return ret;
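
[The rtq9128.c rework caches the DAI format in set_fmt() and defers validation and register programming to hw_params(), where the TDM slot configuration is finally known. A stripped-down sketch of that cache-then-apply split; the format encoding and register are invented:]

#include <errno.h>
#include <stdio.h>

struct codec {
	unsigned int daifmt;	/* cached, not yet in hardware */
	int tdm_slots;
	unsigned int reg_fmt;	/* pretend hardware register */
};

static int set_fmt(struct codec *c, unsigned int fmt)
{
	c->daifmt = fmt;	/* defer: no register write here */
	return 0;
}

static int hw_params(struct codec *c)
{
	/* toy encoding: 1 = DSP_A, 2 = I2S */
	if (c->tdm_slots && c->daifmt != 1)
		return -EINVAL;	/* TDM needs DSP_A here */

	c->reg_fmt = c->daifmt;	/* program hardware only now */
	return 0;
}

int main(void)
{
	struct codec c = { .tdm_slots = 4 };

	set_fmt(&c, 2);
	printf("i2s with tdm -> %d\n", hw_params(&c));
	set_fmt(&c, 1);
	printf("dsp_a with tdm -> %d, reg=%u\n", hw_params(&c), c.reg_fmt);
	return 0;
}
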
+diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
+index faf8d3f9b3c5d..98055dd39b782 100644
+--- a/sound/soc/codecs/wcd938x.c
++++ b/sound/soc/codecs/wcd938x.c
+@@ -210,7 +210,7 @@ struct wcd938x_priv {
+ };
+ 
+ static const SNDRV_CTL_TLVD_DECLARE_DB_MINMAX(ear_pa_gain, 600, -1800);
+-static const DECLARE_TLV_DB_SCALE(line_gain, -3000, 150, -3000);
++static const DECLARE_TLV_DB_SCALE(line_gain, -3000, 150, 0);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_MINMAX(analog_gain, 0, 3000);
+ 
+ struct wcd938x_mbhc_zdet_param {
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index cb83c569e18d6..a2e86ef7d18f5 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -1098,7 +1098,11 @@ static int wsa_dev_mode_put(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static const DECLARE_TLV_DB_SCALE(pa_gain, -300, 150, -300);
++static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(pa_gain,
++	0, 14, TLV_DB_SCALE_ITEM(-300, 0, 0),
++	15, 29, TLV_DB_SCALE_ITEM(-300, 150, 0),
++	30, 31, TLV_DB_SCALE_ITEM(1800, 0, 0),
++);
+ 
+ static int wsa883x_get_swr_port(struct snd_kcontrol *kcontrol,
+ 				struct snd_ctl_elem_value *ucontrol)
+diff --git a/sound/soc/qcom/sc8280xp.c b/sound/soc/qcom/sc8280xp.c
+index 39cb0b889aff9..e6efc2d10bbef 100644
+--- a/sound/soc/qcom/sc8280xp.c
++++ b/sound/soc/qcom/sc8280xp.c
+@@ -34,12 +34,14 @@ static int sc8280xp_snd_init(struct snd_soc_pcm_runtime *rtd)
+ 	case WSA_CODEC_DMA_RX_0:
+ 	case WSA_CODEC_DMA_RX_1:
+ 		/*
+-		 * set limit of 0dB on Digital Volume for Speakers,
+-		 * this can prevent damage of speakers to some extent without
+-		 * active speaker protection
++		 * Set limit of -3 dB on Digital Volume and 0 dB on PA Volume
++		 * to reduce the risk of speaker damage until we have active
++		 * speaker protection in place.
+ 		 */
+-		snd_soc_limit_volume(card, "WSA_RX0 Digital Volume", 84);
+-		snd_soc_limit_volume(card, "WSA_RX1 Digital Volume", 84);
++		snd_soc_limit_volume(card, "WSA_RX0 Digital Volume", 81);
++		snd_soc_limit_volume(card, "WSA_RX1 Digital Volume", 81);
++		snd_soc_limit_volume(card, "SpkrLeft PA Volume", 17);
++		snd_soc_limit_volume(card, "SpkrRight PA Volume", 17);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/tools/build/feature/test-libopencsd.c b/tools/build/feature/test-libopencsd.c
+index eb6303ff446ed..4cfcef9da3e43 100644
+--- a/tools/build/feature/test-libopencsd.c
++++ b/tools/build/feature/test-libopencsd.c
+@@ -4,9 +4,9 @@
+ /*
+  * Check OpenCSD library version is sufficient to provide required features
+  */
+-#define OCSD_MIN_VER ((1 << 16) | (1 << 8) | (1))
++#define OCSD_MIN_VER ((1 << 16) | (2 << 8) | (1))
+ #if !defined(OCSD_VER_NUM) || (OCSD_VER_NUM < OCSD_MIN_VER)
+-#error "OpenCSD >= 1.1.1 is required"
++#error "OpenCSD >= 1.2.1 is required"
+ #endif
+ 
+ int main(void)
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index e067be95da3c0..df1b550f7460a 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4344,6 +4344,8 @@ bpf_object__collect_prog_relos(struct bpf_object *obj, Elf64_Shdr *shdr, Elf_Dat
+ 
+ 	scn = elf_sec_by_idx(obj, sec_idx);
+ 	scn_data = elf_sec_data(obj, scn);
++	if (!scn_data)
++		return -LIBBPF_ERRNO__FORMAT;
+ 
+ 	relo_sec_name = elf_sec_str(obj, shdr->sh_name);
+ 	sec_name = elf_sec_name(obj, scn);
+diff --git a/tools/lib/bpf/libbpf_common.h b/tools/lib/bpf/libbpf_common.h
+index b7060f2544861..8fe248e14eb63 100644
+--- a/tools/lib/bpf/libbpf_common.h
++++ b/tools/lib/bpf/libbpf_common.h
+@@ -79,11 +79,14 @@
+  */
+ #define LIBBPF_OPTS_RESET(NAME, ...)					    \
+ 	do {								    \
+-		memset(&NAME, 0, sizeof(NAME));				    \
+-		NAME = (typeof(NAME)) {					    \
+-			.sz = sizeof(NAME),				    \
+-			__VA_ARGS__					    \
+-		};							    \
++		typeof(NAME) ___##NAME = ({ 				    \
++			memset(&___##NAME, 0, sizeof(NAME));		    \
++			(typeof(NAME)) {				    \
++				.sz = sizeof(NAME),			    \
++				__VA_ARGS__				    \
++			};						    \
++		});							    \
++		memcpy(&NAME, &___##NAME, sizeof(NAME));		    \
+ 	} while (0)
+ 
+ #endif /* __LIBBPF_LIBBPF_COMMON_H */
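
[The libbpf_common.h fix matters when a LIBBPF_OPTS_RESET() initializer reads the variable being reset: the old macro zeroed it first, so such initializers saw zeros, while the new one evaluates them into a temporary and memcpy()s the result over (the real macro also memsets the temporary to clear padding). A simplified, compilable demo of both behaviors; needs GCC/clang typeof:]

#include <stdio.h>
#include <string.h>

struct opts { int sz; int a; int b; };

#define RESET_OLD(NAME, ...)					\
	do {							\
		memset(&NAME, 0, sizeof(NAME));			\
		NAME = (typeof(NAME)) {				\
			.sz = sizeof(NAME),			\
			__VA_ARGS__				\
		};						\
	} while (0)

#define RESET_NEW(NAME, ...)					\
	do {							\
		typeof(NAME) ___tmp = (typeof(NAME)) {		\
			.sz = sizeof(NAME),			\
			__VA_ARGS__				\
		};						\
		memcpy(&NAME, &___tmp, sizeof(NAME));		\
	} while (0)

int main(void)
{
	struct opts o = { .a = 0, .b = 42 };

	RESET_OLD(o, .a = o.b);		/* o.b already zeroed: a == 0 */
	printf("old macro: a=%d\n", o.a);

	o.b = 42;
	RESET_NEW(o, .a = o.b);		/* temp built first: a == 42 */
	printf("new macro: a=%d\n", o.a);
	return 0;
}
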
+diff --git a/tools/lib/subcmd/help.c b/tools/lib/subcmd/help.c
+index adfbae27dc369..8561b0f01a247 100644
+--- a/tools/lib/subcmd/help.c
++++ b/tools/lib/subcmd/help.c
+@@ -52,11 +52,21 @@ void uniq(struct cmdnames *cmds)
+ 	if (!cmds->cnt)
+ 		return;
+ 
+-	for (i = j = 1; i < cmds->cnt; i++)
+-		if (strcmp(cmds->names[i]->name, cmds->names[i-1]->name))
+-			cmds->names[j++] = cmds->names[i];
+-
++	for (i = 1; i < cmds->cnt; i++) {
++		if (!strcmp(cmds->names[i]->name, cmds->names[i-1]->name))
++			zfree(&cmds->names[i - 1]);
++	}
++	for (i = 0, j = 0; i < cmds->cnt; i++) {
++		if (cmds->names[i]) {
++			if (i == j)
++				j++;
++			else
++				cmds->names[j++] = cmds->names[i];
++		}
++	}
+ 	cmds->cnt = j;
++	while (j < i)
++		cmds->names[j++] = NULL;
+ }
+ 
+ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
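
[The reworked uniq() above fixes a leak: duplicates in the sorted name array are now freed (zfree() is free-then-NULL), survivors are compacted to the front, and the vacated tail slots are cleared. A userspace rendition of the same three steps:]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void uniq(char **names, size_t *cnt)
{
	size_t i, j;

	if (!*cnt)
		return;

	/* pass 1: free the earlier entry of each equal pair */
	for (i = 1; i < *cnt; i++)
		if (!strcmp(names[i], names[i - 1])) {
			free(names[i - 1]);
			names[i - 1] = NULL;
		}
	/* pass 2: compact the survivors to the front */
	for (i = 0, j = 0; i < *cnt; i++)
		if (names[i])
			names[j++] = names[i];
	/* pass 3: clear the now-unused tail slots */
	for (i = j; i < *cnt; i++)
		names[i] = NULL;
	*cnt = j;
}

int main(void)
{
	char *names[] = { strdup("add"), strdup("add"), strdup("log"),
			  strdup("log"), strdup("pull") };
	size_t cnt = 5;

	uniq(names, &cnt);
	for (size_t i = 0; i < cnt; i++)
		printf("%s\n", names[i]);
	while (cnt)
		free(names[--cnt]);
	return 0;
}
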
+diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
+index 79d8832c862aa..ce34be15c929e 100644
+--- a/tools/testing/kunit/kunit_parser.py
++++ b/tools/testing/kunit/kunit_parser.py
+@@ -450,7 +450,7 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
+ 	Log of diagnostic lines
+ 	"""
+ 	log = []  # type: List[str]
+-	non_diagnostic_lines = [TEST_RESULT, TEST_HEADER, KTAP_START, TAP_START]
++	non_diagnostic_lines = [TEST_RESULT, TEST_HEADER, KTAP_START, TAP_START, TEST_PLAN]
+ 	while lines and not any(re.match(lines.peek())
+ 			for re in non_diagnostic_lines):
+ 		log.append(lines.pop())
+@@ -726,6 +726,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
+ 		# test plan
+ 		test.name = "main"
+ 		ktap_line = parse_ktap_header(lines, test)
++		test.log.extend(parse_diagnostic(lines))
+ 		parse_test_plan(lines, test)
+ 		parent_test = True
+ 	else:
+@@ -737,6 +738,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
+ 		if parent_test:
+ 			# If KTAP version line and/or subtest header is found, attempt
+ 			# to parse test plan and print test header
++			test.log.extend(parse_diagnostic(lines))
+ 			parse_test_plan(lines, test)
+ 			print_test_header(test)
+ 	expected_count = test.expected_count
+diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
+index 5b1da2a32ea72..10b5f42e65e78 100644
+--- a/tools/testing/selftests/bpf/cgroup_helpers.c
++++ b/tools/testing/selftests/bpf/cgroup_helpers.c
+@@ -523,10 +523,20 @@ int setup_classid_environment(void)
+ 		return 1;
+ 	}
+ 
+-	if (mount("net_cls", NETCLS_MOUNT_PATH, "cgroup", 0, "net_cls") &&
+-	    errno != EBUSY) {
+-		log_err("mount cgroup net_cls");
+-		return 1;
++	if (mount("net_cls", NETCLS_MOUNT_PATH, "cgroup", 0, "net_cls")) {
++		if (errno != EBUSY) {
++			log_err("mount cgroup net_cls");
++			return 1;
++		}
++
++		if (rmdir(NETCLS_MOUNT_PATH)) {
++			log_err("rmdir cgroup net_cls");
++			return 1;
++		}
++		if (umount(CGROUP_MOUNT_DFLT)) {
++			log_err("umount cgroup base");
++			return 1;
++		}
+ 	}
+ 
+ 	cleanup_classid_environment();
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
+index 92d51f377fe59..816145bcb6476 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf.c
+@@ -4630,11 +4630,6 @@ static int test_btf_id(unsigned int test_num)
+ 	/* The map holds the last ref to BTF and its btf_id */
+ 	close(map_fd);
+ 	map_fd = -1;
+-	btf_fd[0] = bpf_btf_get_fd_by_id(map_info.btf_id);
+-	if (CHECK(btf_fd[0] >= 0, "BTF lingers")) {
+-		err = -1;
+-		goto done;
+-	}
+ 
+ 	fprintf(stderr, "OK");
+ 
+@@ -5265,6 +5260,7 @@ static size_t get_pprint_mapv_size(enum pprint_mapv_kind_t mapv_kind)
+ #endif
+ 
+ 	assert(0);
++	return 0;
+ }
+ 
+ static void set_pprint_mapv(enum pprint_mapv_kind_t mapv_kind,
+diff --git a/tools/testing/selftests/bpf/prog_tests/tc_opts.c b/tools/testing/selftests/bpf/prog_tests/tc_opts.c
+index 51883ccb80206..196abf2234656 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tc_opts.c
++++ b/tools/testing/selftests/bpf/prog_tests/tc_opts.c
+@@ -2387,12 +2387,9 @@ static int generate_dummy_prog(void)
+ 	const size_t prog_insn_cnt = sizeof(prog_insns) / sizeof(struct bpf_insn);
+ 	LIBBPF_OPTS(bpf_prog_load_opts, opts);
+ 	const size_t log_buf_sz = 256;
+-	char *log_buf;
++	char log_buf[log_buf_sz];
+ 	int fd = -1;
+ 
+-	log_buf = malloc(log_buf_sz);
+-	if (!ASSERT_OK_PTR(log_buf, "log_buf_alloc"))
+-		return fd;
+ 	opts.log_buf = log_buf;
+ 	opts.log_size = log_buf_sz;
+ 
+@@ -2402,7 +2399,6 @@ static int generate_dummy_prog(void)
+ 			   prog_insns, prog_insn_cnt, &opts);
+ 	ASSERT_STREQ(log_buf, "", "log_0");
+ 	ASSERT_GE(fd, 0, "prog_fd");
+-	free(log_buf);
+ 	return fd;
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/progs/pyperf180.c b/tools/testing/selftests/bpf/progs/pyperf180.c
+index c39f559d3100e..42c4a8b62e360 100644
+--- a/tools/testing/selftests/bpf/progs/pyperf180.c
++++ b/tools/testing/selftests/bpf/progs/pyperf180.c
+@@ -1,4 +1,26 @@
+ // SPDX-License-Identifier: GPL-2.0
+ // Copyright (c) 2019 Facebook
+ #define STACK_MAX_LEN 180
++
++/* An llvm upstream commit in clang18
++ *   https://github.com/llvm/llvm-project/commit/1a2e77cf9e11dbf56b5720c607313a566eebb16e
++ * changed inlining behavior and caused a compilation failure: some branch
++ * target distances exceeded the 16-bit representation, which is the maximum
++ * for cpu v1/v2/v3. The macro __BPF_CPU_VERSION__ was later implemented in
++ * clang18 to specify which cpu version is targeted, so a smaller
++ * unroll count can be set when __BPF_CPU_VERSION__ is less than 4, which
++ * reduces some branch target distances and resolves the compilation failure.
++ *
++ * To cover the case where a developer/CI uses clang18 but the corresponding
++ * repo checkpoint does not have __BPF_CPU_VERSION__, a smaller unroll count
++ * is set as well to prevent potential compilation failures.
++ */
++#ifdef __BPF_CPU_VERSION__
++#if __BPF_CPU_VERSION__ < 4
++#define UNROLL_COUNT 90
++#endif
++#elif __clang_major__ == 18
++#define UNROLL_COUNT 90
++#endif
++
+ #include "pyperf.h"
+diff --git a/tools/testing/selftests/bpf/progs/test_global_func17.c b/tools/testing/selftests/bpf/progs/test_global_func17.c
+index a32e11c7d933e..5de44b09e8ec1 100644
+--- a/tools/testing/selftests/bpf/progs/test_global_func17.c
++++ b/tools/testing/selftests/bpf/progs/test_global_func17.c
+@@ -5,6 +5,7 @@
+ 
+ __noinline int foo(int *p)
+ {
++	barrier_var(p);
+ 	return p ? (*p = 42) : 0;
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
+index 655095810d4a1..0ad98b6a8e6ef 100644
+--- a/tools/testing/selftests/bpf/veristat.c
++++ b/tools/testing/selftests/bpf/veristat.c
+@@ -1214,7 +1214,7 @@ static int cmp_join_stat(const struct verif_stats_join *s1,
+ 			 enum stat_id id, enum stat_variant var, bool asc)
+ {
+ 	const char *str1 = NULL, *str2 = NULL;
+-	double v1, v2;
++	double v1 = 0.0, v2 = 0.0;
+ 	int cmp = 0;
+ 
+ 	fetch_join_stat_value(s1, id, var, &str1, &v1);
+diff --git a/tools/testing/selftests/bpf/xdp_hw_metadata.c b/tools/testing/selftests/bpf/xdp_hw_metadata.c
+index c3ba40d0b9de4..c5e7937d7f631 100644
+--- a/tools/testing/selftests/bpf/xdp_hw_metadata.c
++++ b/tools/testing/selftests/bpf/xdp_hw_metadata.c
+@@ -70,7 +70,7 @@ static int open_xsk(int ifindex, struct xsk *xsk, __u32 queue_id)
+ 		.frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
+ 		.flags = XDP_UMEM_UNALIGNED_CHUNK_FLAG,
+ 	};
+-	__u32 idx;
++	__u32 idx = 0;
+ 	u64 addr;
+ 	int ret;
+ 	int i;
+diff --git a/tools/testing/selftests/drivers/net/bonding/lag_lib.sh b/tools/testing/selftests/drivers/net/bonding/lag_lib.sh
+index 2a268b17b61f5..dbdd736a41d39 100644
+--- a/tools/testing/selftests/drivers/net/bonding/lag_lib.sh
++++ b/tools/testing/selftests/drivers/net/bonding/lag_lib.sh
+@@ -48,6 +48,17 @@ test_LAG_cleanup()
+ 	ip link add mv0 link "$name" up address "$ucaddr" type macvlan
+ 	# Used to test dev->mc handling
+ 	ip address add "$addr6" dev "$name"
++
++	# Check that addresses were added as expected
++	(grep_bridge_fdb "$ucaddr" bridge fdb show dev dummy1 ||
++		grep_bridge_fdb "$ucaddr" bridge fdb show dev dummy2) >/dev/null
++	check_err $? "macvlan unicast address not found on a slave"
++
++	# mcaddr is added asynchronously by addrconf_dad_work(), use busywait
++	(busywait 10000 grep_bridge_fdb "$mcaddr" bridge fdb show dev dummy1 ||
++		grep_bridge_fdb "$mcaddr" bridge fdb show dev dummy2) >/dev/null
++	check_err $? "IPv6 solicited-node multicast mac address not found on a slave"
++
+ 	ip link set dev "$name" down
+ 	ip link del "$name"
+ 
+diff --git a/tools/testing/selftests/drivers/net/team/config b/tools/testing/selftests/drivers/net/team/config
+index 265b6882cc21e..b5e3a3aad4bfb 100644
+--- a/tools/testing/selftests/drivers/net/team/config
++++ b/tools/testing/selftests/drivers/net/team/config
+@@ -1,3 +1,5 @@
++CONFIG_DUMMY=y
++CONFIG_IPV6=y
++CONFIG_MACVLAN=y
+ CONFIG_NET_TEAM=y
+ CONFIG_NET_TEAM_MODE_LOADBALANCE=y
+-CONFIG_MACVLAN=y
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 9e5bf59a20bff..c1ae90c785654 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -84,6 +84,7 @@ TEST_PROGS += sctp_vrf.sh
+ TEST_GEN_FILES += sctp_hello
+ TEST_GEN_FILES += csum
+ TEST_GEN_FILES += nat6to4.o
++TEST_GEN_FILES += xdp_dummy.o
+ TEST_GEN_FILES += ip_local_port_range
+ TEST_GEN_FILES += bind_wildcard
+ TEST_PROGS += test_vxlan_mdb.sh
+@@ -103,7 +104,7 @@ $(OUTPUT)/tcp_inq: LDLIBS += -lpthread
+ $(OUTPUT)/bind_bhash: LDLIBS += -lpthread
+ $(OUTPUT)/io_uring_zerocopy_tx: CFLAGS += -I../../../include/
+ 
+-# Rules to generate bpf obj nat6to4.o
++# Rules to generate bpf objs
+ CLANG ?= clang
+ SCRATCH_DIR := $(OUTPUT)/tools
+ BUILD_DIR := $(SCRATCH_DIR)/build
+@@ -138,7 +139,7 @@ endif
+ 
+ CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH))
+ 
+-$(OUTPUT)/nat6to4.o: nat6to4.c $(BPFOBJ) | $(MAKE_DIRS)
++$(OUTPUT)/nat6to4.o $(OUTPUT)/xdp_dummy.o: $(OUTPUT)/%.o : %.c $(BPFOBJ) | $(MAKE_DIRS)
+ 	$(CLANG) -O2 --target=bpf -c $< $(CCINCLUDE) $(CLANG_SYS_INCLUDES) -o $@
+ 
+ $(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile)		       \
+diff --git a/tools/testing/selftests/net/config b/tools/testing/selftests/net/config
+index 19ff750516609..3b749addd3640 100644
+--- a/tools/testing/selftests/net/config
++++ b/tools/testing/selftests/net/config
+@@ -19,8 +19,11 @@ CONFIG_BRIDGE_VLAN_FILTERING=y
+ CONFIG_BRIDGE=y
+ CONFIG_CRYPTO_CHACHA20POLY1305=m
+ CONFIG_VLAN_8021Q=y
++CONFIG_GENEVE=m
+ CONFIG_IFB=y
+ CONFIG_INET_DIAG=y
++CONFIG_INET_ESP=y
++CONFIG_INET_ESP_OFFLOAD=y
+ CONFIG_IP_GRE=m
+ CONFIG_NETFILTER=y
+ CONFIG_NETFILTER_ADVANCED=y
+@@ -29,7 +32,10 @@ CONFIG_NF_NAT=m
+ CONFIG_IP6_NF_IPTABLES=m
+ CONFIG_IP_NF_IPTABLES=m
+ CONFIG_IP6_NF_NAT=m
++CONFIG_IP6_NF_RAW=m
+ CONFIG_IP_NF_NAT=m
++CONFIG_IP_NF_RAW=m
++CONFIG_IP_NF_TARGET_TTL=m
+ CONFIG_IPV6_GRE=m
+ CONFIG_IPV6_SEG6_LWTUNNEL=y
+ CONFIG_L2TP_ETH=m
+@@ -45,8 +51,14 @@ CONFIG_NF_TABLES=m
+ CONFIG_NF_TABLES_IPV6=y
+ CONFIG_NF_TABLES_IPV4=y
+ CONFIG_NFT_NAT=m
++CONFIG_NETFILTER_XT_MATCH_LENGTH=m
++CONFIG_NET_ACT_CSUM=m
++CONFIG_NET_ACT_CT=m
+ CONFIG_NET_ACT_GACT=m
++CONFIG_NET_ACT_PEDIT=m
+ CONFIG_NET_CLS_BASIC=m
++CONFIG_NET_CLS_BPF=m
++CONFIG_NET_CLS_MATCHALL=m
+ CONFIG_NET_CLS_U32=m
+ CONFIG_NET_IPGRE_DEMUX=m
+ CONFIG_NET_IPGRE=m
+@@ -55,6 +67,9 @@ CONFIG_NET_SCH_HTB=m
+ CONFIG_NET_SCH_FQ=m
+ CONFIG_NET_SCH_ETF=m
+ CONFIG_NET_SCH_NETEM=y
++CONFIG_NET_SCH_PRIO=m
++CONFIG_NFT_COMPAT=m
++CONFIG_NF_FLOW_TABLE=m
+ CONFIG_PSAMPLE=m
+ CONFIG_TCP_MD5SIG=y
+ CONFIG_TEST_BLACKHOLE_DEV=m
+@@ -80,3 +95,4 @@ CONFIG_IP_SCTP=m
+ CONFIG_NETFILTER_XT_MATCH_POLICY=m
+ CONFIG_CRYPTO_ARIA=y
+ CONFIG_XFRM_INTERFACE=m
++CONFIG_XFRM_USER=m
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index b3b2dc5a630cf..4a5f031be2324 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -714,23 +714,23 @@ setup_xfrm6() {
+ }
+ 
+ setup_xfrm4udp() {
+-	setup_xfrm 4 ${veth4_a_addr} ${veth4_b_addr} "encap espinudp 4500 4500 0.0.0.0"
+-	setup_nettest_xfrm 4 4500
++	setup_xfrm 4 ${veth4_a_addr} ${veth4_b_addr} "encap espinudp 4500 4500 0.0.0.0" && \
++		setup_nettest_xfrm 4 4500
+ }
+ 
+ setup_xfrm6udp() {
+-	setup_xfrm 6 ${veth6_a_addr} ${veth6_b_addr} "encap espinudp 4500 4500 0.0.0.0"
+-	setup_nettest_xfrm 6 4500
++	setup_xfrm 6 ${veth6_a_addr} ${veth6_b_addr} "encap espinudp 4500 4500 0.0.0.0" && \
++		setup_nettest_xfrm 6 4500
+ }
+ 
+ setup_xfrm4udprouted() {
+-	setup_xfrm 4 ${prefix4}.${a_r1}.1 ${prefix4}.${b_r1}.1 "encap espinudp 4500 4500 0.0.0.0"
+-	setup_nettest_xfrm 4 4500
++	setup_xfrm 4 ${prefix4}.${a_r1}.1 ${prefix4}.${b_r1}.1 "encap espinudp 4500 4500 0.0.0.0" && \
++		setup_nettest_xfrm 4 4500
+ }
+ 
+ setup_xfrm6udprouted() {
+-	setup_xfrm 6 ${prefix6}:${a_r1}::1 ${prefix6}:${b_r1}::1 "encap espinudp 4500 4500 0.0.0.0"
+-	setup_nettest_xfrm 6 4500
++	setup_xfrm 6 ${prefix6}:${a_r1}::1 ${prefix6}:${b_r1}::1 "encap espinudp 4500 4500 0.0.0.0" && \
++		setup_nettest_xfrm 6 4500
+ }
+ 
+ setup_routing_old() {
+@@ -1348,7 +1348,7 @@ test_pmtu_ipvX_over_bridged_vxlanY_or_geneveY_exception() {
+ 
+ 		sleep 1
+ 
+-		dd if=/dev/zero of=/dev/stdout status=none bs=1M count=1 | ${target} socat -T 3 -u STDIN $TCPDST,connect-timeout=3
++		dd if=/dev/zero status=none bs=1M count=1 | ${target} socat -T 3 -u STDIN $TCPDST,connect-timeout=3
+ 
+ 		size=$(du -sb $tmpoutfile)
+ 		size=${size%%/tmp/*}
+diff --git a/tools/testing/selftests/net/setup_veth.sh b/tools/testing/selftests/net/setup_veth.sh
+index 1003ddf7b3b26..227fd1076f213 100644
+--- a/tools/testing/selftests/net/setup_veth.sh
++++ b/tools/testing/selftests/net/setup_veth.sh
+@@ -8,7 +8,7 @@ setup_veth_ns() {
+ 	local -r ns_mac="$4"
+ 
+ 	[[ -e /var/run/netns/"${ns_name}" ]] || ip netns add "${ns_name}"
+-	echo 100000 > "/sys/class/net/${ns_dev}/gro_flush_timeout"
++	echo 1000000 > "/sys/class/net/${ns_dev}/gro_flush_timeout"
+ 	ip link set dev "${ns_dev}" netns "${ns_name}" mtu 65535
+ 	ip -netns "${ns_name}" link set dev "${ns_dev}" up
+ 
+diff --git a/tools/testing/selftests/net/udpgro.sh b/tools/testing/selftests/net/udpgro.sh
+index 0c743752669af..3f09ac78f4452 100755
+--- a/tools/testing/selftests/net/udpgro.sh
++++ b/tools/testing/selftests/net/udpgro.sh
+@@ -5,7 +5,7 @@
+ 
+ readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)"
+ 
+-BPF_FILE="../bpf/xdp_dummy.bpf.o"
++BPF_FILE="xdp_dummy.o"
+ 
+ # set global exit status, but never reset nonzero one.
+ check_err()
+@@ -198,7 +198,7 @@ run_all() {
+ }
+ 
+ if [ ! -f ${BPF_FILE} ]; then
+-	echo "Missing ${BPF_FILE}. Build bpf selftest first"
++	echo "Missing ${BPF_FILE}. Run 'make' first"
+ 	exit -1
+ fi
+ 
+diff --git a/tools/testing/selftests/net/udpgro_bench.sh b/tools/testing/selftests/net/udpgro_bench.sh
+index 894972877e8b0..65ff1d4240086 100755
+--- a/tools/testing/selftests/net/udpgro_bench.sh
++++ b/tools/testing/selftests/net/udpgro_bench.sh
+@@ -5,7 +5,7 @@
+ 
+ readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)"
+ 
+-BPF_FILE="../bpf/xdp_dummy.bpf.o"
++BPF_FILE="xdp_dummy.o"
+ 
+ cleanup() {
+ 	local -r jobs="$(jobs -p)"
+@@ -83,7 +83,7 @@ run_all() {
+ }
+ 
+ if [ ! -f ${BPF_FILE} ]; then
+-	echo "Missing ${BPF_FILE}. Build bpf selftest first"
++	echo "Missing ${BPF_FILE}. Run 'make' first"
+ 	exit -1
+ fi
+ 
+diff --git a/tools/testing/selftests/net/udpgro_frglist.sh b/tools/testing/selftests/net/udpgro_frglist.sh
+index 0a6359bed0b92..bd51d386b52eb 100755
+--- a/tools/testing/selftests/net/udpgro_frglist.sh
++++ b/tools/testing/selftests/net/udpgro_frglist.sh
+@@ -5,7 +5,7 @@
+ 
+ readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)"
+ 
+-BPF_FILE="../bpf/xdp_dummy.bpf.o"
++BPF_FILE="xdp_dummy.o"
+ 
+ cleanup() {
+ 	local -r jobs="$(jobs -p)"
+@@ -84,12 +84,12 @@ run_all() {
+ }
+ 
+ if [ ! -f ${BPF_FILE} ]; then
+-	echo "Missing ${BPF_FILE}. Build bpf selftest first"
++	echo "Missing ${BPF_FILE}. Run 'make' first"
+ 	exit -1
+ fi
+ 
+ if [ ! -f nat6to4.o ]; then
+-	echo "Missing nat6to4 helper. Build bpf nat6to4.o selftest first"
++	echo "Missing nat6to4 helper. Run 'make' first"
+ 	exit -1
+ fi
+ 
+diff --git a/tools/testing/selftests/net/udpgro_fwd.sh b/tools/testing/selftests/net/udpgro_fwd.sh
+index c079565add392..d6b9c759043ca 100755
+--- a/tools/testing/selftests/net/udpgro_fwd.sh
++++ b/tools/testing/selftests/net/udpgro_fwd.sh
+@@ -1,7 +1,9 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-BPF_FILE="../bpf/xdp_dummy.bpf.o"
++source net_helper.sh
++
++BPF_FILE="xdp_dummy.o"
+ readonly BASE="ns-$(mktemp -u XXXXXX)"
+ readonly SRC=2
+ readonly DST=1
+@@ -119,7 +121,7 @@ run_test() {
+ 	ip netns exec $NS_DST $ipt -A INPUT -p udp --dport 8000
+ 	ip netns exec $NS_DST ./udpgso_bench_rx -C 1000 -R 10 -n 10 -l 1300 $rx_args &
+ 	local spid=$!
+-	sleep 0.1
++	wait_local_port_listen "$NS_DST" 8000 udp
+ 	ip netns exec $NS_SRC ./udpgso_bench_tx $family -M 1 -s 13000 -S 1300 -D $dst
+ 	local retc=$?
+ 	wait $spid
+@@ -168,7 +170,7 @@ run_bench() {
+ 	ip netns exec $NS_DST bash -c "echo 2 > /sys/class/net/veth$DST/queues/rx-0/rps_cpus"
+ 	ip netns exec $NS_DST taskset 0x2 ./udpgso_bench_rx -C 1000 -R 10  &
+ 	local spid=$!
+-	sleep 0.1
++	wait_local_port_listen "$NS_DST" 8000 udp
+ 	ip netns exec $NS_SRC taskset 0x1 ./udpgso_bench_tx $family -l 3 -S 1300 -D $dst
+ 	local retc=$?
+ 	wait $spid
+diff --git a/tools/testing/selftests/net/veth.sh b/tools/testing/selftests/net/veth.sh
+index 2d073595c6202..27574bbf2d638 100755
+--- a/tools/testing/selftests/net/veth.sh
++++ b/tools/testing/selftests/net/veth.sh
+@@ -1,7 +1,7 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-BPF_FILE="../bpf/xdp_dummy.bpf.o"
++BPF_FILE="xdp_dummy.o"
+ readonly STATS="$(mktemp -p /tmp ns-XXXXXX)"
+ readonly BASE=`basename $STATS`
+ readonly SRC=2
+@@ -218,7 +218,7 @@ while getopts "hs:" option; do
+ done
+ 
+ if [ ! -f ${BPF_FILE} ]; then
+-	echo "Missing ${BPF_FILE}. Build bpf selftest first"
++	echo "Missing ${BPF_FILE}. Run 'make' first"
+ 	exit 1
+ fi
+ 
+diff --git a/tools/testing/selftests/net/xdp_dummy.c b/tools/testing/selftests/net/xdp_dummy.c
+new file mode 100644
+index 0000000000000..d988b2e0cee84
+--- /dev/null
++++ b/tools/testing/selftests/net/xdp_dummy.c
+@@ -0,0 +1,13 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#define KBUILD_MODNAME "xdp_dummy"
++#include <linux/bpf.h>
++#include <bpf/bpf_helpers.h>
++
++SEC("xdp")
++int xdp_dummy_prog(struct xdp_md *ctx)
++{
++	return XDP_PASS;
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/nolibc/nolibc-test.c b/tools/testing/selftests/nolibc/nolibc-test.c
+index 2f10541e6f381..e173014f6b667 100644
+--- a/tools/testing/selftests/nolibc/nolibc-test.c
++++ b/tools/testing/selftests/nolibc/nolibc-test.c
+@@ -150,11 +150,11 @@ static void result(int llen, enum RESULT r)
+ 	const char *msg;
+ 
+ 	if (r == OK)
+-		msg = " [OK]";
++		msg = "  [OK]";
+ 	else if (r == SKIPPED)
+ 		msg = "[SKIPPED]";
+ 	else
+-		msg = "[FAIL]";
++		msg = " [FAIL]";
+ 
+ 	if (llen < 64)
+ 		putcharn(' ', 64 - llen);
+diff --git a/tools/testing/selftests/sgx/test_encl.lds b/tools/testing/selftests/sgx/test_encl.lds
+index a1ec64f7d91fc..108bc11d1d8c5 100644
+--- a/tools/testing/selftests/sgx/test_encl.lds
++++ b/tools/testing/selftests/sgx/test_encl.lds
+@@ -34,8 +34,4 @@ SECTIONS
+ 	}
+ }
+ 
+-ASSERT(!DEFINED(.altinstructions), "ALTERNATIVES are not supported in enclaves")
+-ASSERT(!DEFINED(.altinstr_replacement), "ALTERNATIVES are not supported in enclaves")
+-ASSERT(!DEFINED(.discard.retpoline_safe), "RETPOLINE ALTERNATIVES are not supported in enclaves")
+-ASSERT(!DEFINED(.discard.nospec), "RETPOLINE ALTERNATIVES are not supported in enclaves")
+-ASSERT(!DEFINED(.got.plt), "Libcalls are not supported in enclaves")
++ASSERT(!DEFINED(_GLOBAL_OFFSET_TABLE_), "Libcalls through GOT are not supported in enclaves")


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-05 21:13 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-05 21:13 UTC (permalink / raw
  To: gentoo-commits

commit:     c96d6b02badf50e7f2cc1c705a1bce1184a14006
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb  5 21:12:44 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb  5 21:12:44 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c96d6b02

Temporarily removing non-applying BMQ patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                 |     4 -
 5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch | 11471 --------------------------
 2 files changed, 11475 deletions(-)

diff --git a/0000_README b/0000_README
index 9c8bcf02..1753553d 100644
--- a/0000_README
+++ b/0000_README
@@ -118,7 +118,3 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
-
-Patch:  5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
-From:   https://gitlab.com/alfredchen/projectc
-Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch b/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
deleted file mode 100644
index 4e71c3ef..00000000
--- a/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
+++ /dev/null
@@ -1,11471 +0,0 @@
-diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
-index 6584a1f9bfe3..226c79dd34cc 100644
---- a/Documentation/admin-guide/sysctl/kernel.rst
-+++ b/Documentation/admin-guide/sysctl/kernel.rst
-@@ -1646,3 +1646,13 @@ is 10 seconds.
- 
- The softlockup threshold is (``2 * watchdog_thresh``). Setting this
- tunable to zero will disable lockup detection altogether.
-+
-+yield_type:
-+===========
-+
-+BMQ/PDS CPU scheduler only. This determines what type of yield a call
-+to sched_yield() will perform.
-+
-+  0 - No yield.
-+  1 - Requeue task. (default)
-+  2 - Set run queue skip task. Same as CFS.
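-+
-+For illustration only (the actual handler lives in kernel/sched/alt_core.c
-+and is not reproduced here), the three values could be dispatched roughly as
-+follows; requeue_task(), task_sched_prio_idx() and rq->skip are taken from
-+the patch, while the dispatch helper itself is a hypothetical sketch:
-+
-+  switch (sched_yield_type) {
-+  case 0:                 /* no yield at all */
-+          break;
-+  case 1:                 /* requeue the task at the tail of its queue */
-+          requeue_task(p, rq, task_sched_prio_idx(p, rq));
-+          break;
-+  case 2:                 /* let the next pick skip this task once */
-+          rq->skip = p;
-+          break;
-+  }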
-diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
-new file mode 100644
-index 000000000000..05c84eec0f31
---- /dev/null
-+++ b/Documentation/scheduler/sched-BMQ.txt
-@@ -0,0 +1,110 @@
-+                         BitMap queue CPU Scheduler
-+                         --------------------------
-+
-+CONTENT
-+========
-+
-+ Background
-+ Design
-+   Overview
-+   Task policy
-+   Priority management
-+   BitMap Queue
-+   CPU Assignment and Migration
-+
-+
-+Background
-+==========
-+
-+The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
-+evolution of the earlier Priority and Deadline based Skiplist multiple queue
-+scheduler (PDS), and is inspired by the Zircon scheduler. Its goal is to keep
-+the scheduler code simple while remaining efficient and scalable for
-+interactive workloads such as desktop use, movie playback and gaming.
-+
-+Design
-+======
-+
-+Overview
-+--------
-+
-+BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
-+queue and is responsible for scheduling the tasks placed into that queue.
-+
-+The run queue is a set of priority queues. Structurally, these queues are
-+FIFO queues for non-rt tasks and priority queues for rt tasks; see BitMap
-+Queue below for details. BMQ is optimized for non-rt tasks, reflecting the
-+fact that most applications are non-rt. Whether a queue is FIFO or priority,
-+it holds an ordered list of runnable tasks awaiting execution, and the data
-+structures are the same. When it is time for a new task to run, the
-+scheduler simply looks up the lowest numbered queue that contains a task and
-+runs the first task from the head of that queue; a simplified sketch of this
-+lookup is shown at the end of this section. The per-CPU idle task is also
-+kept in the run queue, so the scheduler can always find a task to run.
-+
-+Each task is assigned the same timeslice (default 4 ms) when it is picked to
-+start running. A task is reinserted at the end of the appropriate priority
-+queue when it uses up its whole timeslice. When the scheduler selects a new
-+task from the priority queue, it sets the CPU's preemption timer for the
-+remainder of the previous timeslice. When that timer fires, the scheduler
-+stops execution of that task, selects another task and starts over again.
-+
-+If a task blocks waiting for a shared resource, it is taken out of its
-+priority queue and placed in a wait queue for the shared resource. When it
-+is unblocked it will be reinserted in the appropriate priority queue of an
-+eligible CPU.
-+
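-+For illustration only (a simplified sketch, not the scheduler source), the
-+queue lookup described above reduces to a first-set-bit scan over the run
-+queue bitmap; because the idle task is always queued, the scan always finds
-+a set bit. SCHED_QUEUE_BITS, SCHED_LEVELS and sq_node are taken from the
-+patch below, while pick_first() itself is hypothetical:
-+
-+	struct sched_queue {
-+		DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
-+		struct list_head heads[SCHED_LEVELS];
-+	};
-+
-+	static struct task_struct *pick_first(struct sched_queue *q)
-+	{
-+		unsigned long idx = find_first_bit(q->bitmap, SCHED_QUEUE_BITS);
-+
-+		/* the idle task guarantees a set bit, so idx is always valid */
-+		return list_first_entry(&q->heads[idx],
-+					struct task_struct, sq_node);
-+	}
-+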
-+Task policy
-+-----------
-+
-+BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
-+like the mainline CFS scheduler, but it is heavily optimized for non-rt
-+tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
-+details for each policy.
-+
-+DEADLINE
-+	It is squashed into a priority 0 FIFO task.
-+
-+FIFO/RR
-+	All RT tasks share a single priority queue in the BMQ run queue design.
-+The complexity of the insert operation is O(n). BMQ is not designed for
-+systems that run mostly rt policy tasks.
-+
-+NORMAL/BATCH/IDLE
-+	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
-+with NORMAL policy tasks, but they simply never boost. To control the
-+priority of NORMAL/BATCH/IDLE tasks, simply use the nice level.
-+
-+ISO
-+	The ISO policy is not supported in BMQ. Please use a nice level -20
-+NORMAL policy task instead.
-+
-+Priority management
-+-------------------
-+
-+RT tasks have priorities from 0 to 99. For non-rt tasks, several factors are
-+used to determine the effective priority of a task; the effective priority
-+is what determines which queue the task will be placed in.
-+
-+The first factor is simply the task's static priority, which is assigned
-+from the task's nice level: within [-20, 19] from userland's point of view
-+and [0, 39] internally.
-+
-+The second factor is the priority boost. This is a value bounded to
-+[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] that is used to offset the base
-+priority; it is modified in the following cases (a condensed sketch of both
-+rules closes this section):
-+
-+* When a thread has used up its entire timeslice, always deboost it by
-+increasing its boost value by one.
-+* When a thread gives up CPU control (voluntarily or not) in order to
-+reschedule, and its switch-in time (the time since it last switched in and
-+ran) is below the threshold derived from its priority boost, boost it by
-+decreasing its boost value by one, capped at 0 (it won't go negative).
-+
-+The intent of this system is to ensure that interactive threads are serviced
-+quickly. These are usually the threads that interact directly with the user
-+and cause user-perceivable latency. Such threads usually do little work and
-+spend most of their time blocked awaiting another user event, so they get
-+the priority boost from unblocking, while background threads that do most of
-+the processing receive the priority penalty for using their entire timeslice.
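-+
-+For illustration only, a minimal sketch of the two boost rules above,
-+assuming the boost_prio field added to task_struct by this patch (the
-+actual update logic lives in kernel/sched/bmq.h and is not shown here; the
-+helper names are assumptions made for this sketch):
-+
-+	static void deboost_on_timeslice_expiry(struct task_struct *p)
-+	{
-+		if (p->boost_prio < MAX_PRIORITY_ADJ)
-+			p->boost_prio++;	/* larger value = lower priority */
-+	}
-+
-+	static void boost_on_quick_reschedule(struct task_struct *p)
-+	{
-+		if (p->boost_prio > 0)
-+			p->boost_prio--;	/* capped at 0, never negative */
-+	}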
-diff --git a/fs/proc/base.c b/fs/proc/base.c
-index dd31e3b6bf77..12d1248cb4df 100644
---- a/fs/proc/base.c
-+++ b/fs/proc/base.c
-@@ -480,7 +480,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
- 		seq_puts(m, "0 0 0\n");
- 	else
- 		seq_printf(m, "%llu %llu %lu\n",
--		   (unsigned long long)task->se.sum_exec_runtime,
-+		   (unsigned long long)tsk_seruntime(task),
- 		   (unsigned long long)task->sched_info.run_delay,
- 		   task->sched_info.pcount);
- 
-diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
-index 8874f681b056..59eb72bf7d5f 100644
---- a/include/asm-generic/resource.h
-+++ b/include/asm-generic/resource.h
-@@ -23,7 +23,7 @@
- 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
- 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
--	[RLIMIT_NICE]		= { 0, 0 },				\
-+	[RLIMIT_NICE]		= { 30, 30 },				\
- 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
- 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
- }
-diff --git a/include/linux/sched.h b/include/linux/sched.h
-index 292c31697248..f5b026795dc6 100644
---- a/include/linux/sched.h
-+++ b/include/linux/sched.h
-@@ -769,8 +769,14 @@ struct task_struct {
- 	unsigned int			ptrace;
- 
- #ifdef CONFIG_SMP
--	int				on_cpu;
- 	struct __call_single_node	wake_entry;
-+#endif
-+#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
-+	int				on_cpu;
-+#endif
-+
-+#ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 	unsigned int			wakee_flips;
- 	unsigned long			wakee_flip_decay_ts;
- 	struct task_struct		*last_wakee;
-@@ -784,6 +790,7 @@ struct task_struct {
- 	 */
- 	int				recent_used_cpu;
- 	int				wake_cpu;
-+#endif /* !CONFIG_SCHED_ALT */
- #endif
- 	int				on_rq;
- 
-@@ -792,6 +799,20 @@ struct task_struct {
- 	int				normal_prio;
- 	unsigned int			rt_priority;
- 
-+#ifdef CONFIG_SCHED_ALT
-+	u64				last_ran;
-+	s64				time_slice;
-+	int				sq_idx;
-+	struct list_head		sq_node;
-+#ifdef CONFIG_SCHED_BMQ
-+	int				boost_prio;
-+#endif /* CONFIG_SCHED_BMQ */
-+#ifdef CONFIG_SCHED_PDS
-+	u64				deadline;
-+#endif /* CONFIG_SCHED_PDS */
-+	/* sched_clock time spent running */
-+	u64				sched_time;
-+#else /* !CONFIG_SCHED_ALT */
- 	struct sched_entity		se;
- 	struct sched_rt_entity		rt;
- 	struct sched_dl_entity		dl;
-@@ -802,6 +823,7 @@ struct task_struct {
- 	unsigned long			core_cookie;
- 	unsigned int			core_occupation;
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_CGROUP_SCHED
- 	struct task_group		*sched_task_group;
-@@ -1561,6 +1583,15 @@ struct task_struct {
- 	 */
- };
- 
-+#ifdef CONFIG_SCHED_ALT
-+#define tsk_seruntime(t)		((t)->sched_time)
-+/* replace the uncertain rt_timeout with 0UL */
-+#define tsk_rttimeout(t)		(0UL)
-+#else /* CFS */
-+#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
-+#define tsk_rttimeout(t)	((t)->rt.timeout)
-+#endif /* !CONFIG_SCHED_ALT */
-+
- static inline struct pid *task_pid(struct task_struct *task)
- {
- 	return task->thread_pid;
-diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
-index df3aca89d4f5..1df1f7635188 100644
---- a/include/linux/sched/deadline.h
-+++ b/include/linux/sched/deadline.h
-@@ -2,6 +2,25 @@
- #ifndef _LINUX_SCHED_DEADLINE_H
- #define _LINUX_SCHED_DEADLINE_H
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+static inline int dl_task(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#define __tsk_deadline(p)	(0UL)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
-+#endif
-+
-+#else
-+
-+#define __tsk_deadline(p)	((p)->dl.deadline)
-+
- /*
-  * SCHED_DEADLINE tasks has negative priorities, reflecting
-  * the fact that any of them has higher prio than RT and
-@@ -23,6 +42,7 @@ static inline int dl_task(struct task_struct *p)
- {
- 	return dl_prio(p->prio);
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- static inline bool dl_time_before(u64 a, u64 b)
- {
-diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
-index ab83d85e1183..a9a1dfa99140 100644
---- a/include/linux/sched/prio.h
-+++ b/include/linux/sched/prio.h
-@@ -18,6 +18,32 @@
- #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
- #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
- 
-+#ifdef CONFIG_SCHED_ALT
-+
-+/* Undefine MAX_PRIO and DEFAULT_PRIO */
-+#undef MAX_PRIO
-+#undef DEFAULT_PRIO
-+
-+/* +/- priority levels from the base priority */
-+#ifdef CONFIG_SCHED_BMQ
-+#define MAX_PRIORITY_ADJ	(12)
-+
-+#define MIN_NORMAL_PRIO		(MAX_RT_PRIO)
-+#define MAX_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH)
-+#define DEFAULT_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH / 2)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define MAX_PRIORITY_ADJ	(0)
-+
-+#define MIN_NORMAL_PRIO		(128)
-+#define NORMAL_PRIO_NUM		(64)
-+#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
-+#define DEFAULT_PRIO		(MAX_PRIO - NICE_WIDTH / 2)
-+#endif
-+
-+#endif /* CONFIG_SCHED_ALT */
-+
- /*
-  * Convert user-nice values [ -20 ... 0 ... 19 ]
-  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
-diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
-index b2b9e6eb9683..09bd4d8758b2 100644
---- a/include/linux/sched/rt.h
-+++ b/include/linux/sched/rt.h
-@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
- 
- 	if (policy == SCHED_FIFO || policy == SCHED_RR)
- 		return true;
-+#ifndef CONFIG_SCHED_ALT
- 	if (policy == SCHED_DEADLINE)
- 		return true;
-+#endif
- 	return false;
- }
- 
-diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
-index de545ba85218..941bb18ff72c 100644
---- a/include/linux/sched/topology.h
-+++ b/include/linux/sched/topology.h
-@@ -238,7 +238,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
- 
- #endif	/* !CONFIG_SMP */
- 
--#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
-+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
-+	!defined(CONFIG_SCHED_ALT)
- extern void rebuild_sched_domains_energy(void);
- #else
- static inline void rebuild_sched_domains_energy(void)
-diff --git a/init/Kconfig b/init/Kconfig
-index 9ffb103fc927..8f0b7eeff77e 100644
---- a/init/Kconfig
-+++ b/init/Kconfig
-@@ -629,6 +629,7 @@ config TASK_IO_ACCOUNTING
- 
- config PSI
- 	bool "Pressure stall information tracking"
-+	depends on !SCHED_ALT
- 	select KERNFS
- 	help
- 	  Collect metrics that indicate how overcommitted the CPU, memory,
-@@ -794,6 +795,7 @@ menu "Scheduler features"
- config UCLAMP_TASK
- 	bool "Enable utilization clamping for RT/FAIR tasks"
- 	depends on CPU_FREQ_GOV_SCHEDUTIL
-+	depends on !SCHED_ALT
- 	help
- 	  This feature enables the scheduler to track the clamped utilization
- 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
-@@ -840,6 +842,35 @@ config UCLAMP_BUCKETS_COUNT
- 
- 	  If in doubt, use the default value.
- 
-+menuconfig SCHED_ALT
-+	bool "Alternative CPU Schedulers"
-+	default y
-+	help
-+	  This feature enables the alternative CPU schedulers.
-+
-+if SCHED_ALT
-+
-+choice
-+	prompt "Alternative CPU Scheduler"
-+	default SCHED_BMQ
-+
-+config SCHED_BMQ
-+	bool "BMQ CPU scheduler"
-+	help
-+	  The BitMap Queue CPU scheduler for excellent interactivity and
-+	  responsiveness on the desktop and solid scalability on normal
-+	  hardware and commodity servers.
-+
-+config SCHED_PDS
-+	bool "PDS CPU scheduler"
-+	help
-+	  The Priority and Deadline based Skip list multiple queue CPU
-+	  Scheduler.
-+
-+endchoice
-+
-+endif
-+
- endmenu
- 
- #
-@@ -893,6 +924,7 @@ config NUMA_BALANCING
- 	depends on ARCH_SUPPORTS_NUMA_BALANCING
- 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
- 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
-+	depends on !SCHED_ALT
- 	help
- 	  This option adds support for automatic NUMA aware memory/task placement.
- 	  The mechanism is quite primitive and is based on migrating memory when
-@@ -990,6 +1022,7 @@ config FAIR_GROUP_SCHED
- 	depends on CGROUP_SCHED
- 	default CGROUP_SCHED
- 
-+if !SCHED_ALT
- config CFS_BANDWIDTH
- 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
- 	depends on FAIR_GROUP_SCHED
-@@ -1012,6 +1045,7 @@ config RT_GROUP_SCHED
- 	  realtime bandwidth for them.
- 	  See Documentation/scheduler/sched-rt-group.rst for more information.
- 
-+endif #!SCHED_ALT
- endif #CGROUP_SCHED
- 
- config SCHED_MM_CID
-@@ -1260,6 +1294,7 @@ config CHECKPOINT_RESTORE
- 
- config SCHED_AUTOGROUP
- 	bool "Automatic process group scheduling"
-+	depends on !SCHED_ALT
- 	select CGROUPS
- 	select CGROUP_SCHED
- 	select FAIR_GROUP_SCHED
-diff --git a/init/init_task.c b/init/init_task.c
-index 5727d42149c3..e2e2622d50d5 100644
---- a/init/init_task.c
-+++ b/init/init_task.c
-@@ -75,9 +75,15 @@ struct task_struct init_task
- 	.stack		= init_stack,
- 	.usage		= REFCOUNT_INIT(2),
- 	.flags		= PF_KTHREAD,
-+#ifdef CONFIG_SCHED_ALT
-+	.prio		= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
-+	.static_prio	= DEFAULT_PRIO,
-+	.normal_prio	= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
-+#else
- 	.prio		= MAX_PRIO - 20,
- 	.static_prio	= MAX_PRIO - 20,
- 	.normal_prio	= MAX_PRIO - 20,
-+#endif
- 	.policy		= SCHED_NORMAL,
- 	.cpus_ptr	= &init_task.cpus_mask,
- 	.user_cpus_ptr	= NULL,
-@@ -89,6 +95,17 @@ struct task_struct init_task
- 	.restart_block	= {
- 		.fn = do_no_restart_syscall,
- 	},
-+#ifdef CONFIG_SCHED_ALT
-+	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
-+#ifdef CONFIG_SCHED_BMQ
-+	.boost_prio	= 0,
-+	.sq_idx		= 15,
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+	.deadline	= 0,
-+#endif
-+	.time_slice	= HZ,
-+#else
- 	.se		= {
- 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
- 	},
-@@ -96,6 +113,7 @@ struct task_struct init_task
- 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
- 		.time_slice	= RR_TIMESLICE,
- 	},
-+#endif
- 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
- #ifdef CONFIG_SMP
- 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
-diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
-index c2f1fd95a821..41654679b1b2 100644
---- a/kernel/Kconfig.preempt
-+++ b/kernel/Kconfig.preempt
-@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
- 
- config SCHED_CORE
- 	bool "Core Scheduling for SMT"
--	depends on SCHED_SMT
-+	depends on SCHED_SMT && !SCHED_ALT
- 	help
- 	  This option permits Core Scheduling, a means of coordinated task
- 	  selection across SMT siblings. When enabled -- see
-diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
-index 615daaf87f1f..16fb54ec732c 100644
---- a/kernel/cgroup/cpuset.c
-+++ b/kernel/cgroup/cpuset.c
-@@ -848,7 +848,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
- 	return ret;
- }
- 
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
- /*
-  * Helper routine for generate_sched_domains().
-  * Do cpusets a, b have overlapping effective cpus_allowed masks?
-@@ -1247,7 +1247,7 @@ static void rebuild_sched_domains_locked(void)
- 	/* Have scheduler rebuild the domains */
- 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
- }
--#else /* !CONFIG_SMP */
-+#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
- static void rebuild_sched_domains_locked(void)
- {
- }
-@@ -3206,12 +3206,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
- 				goto out_unlock;
- 		}
- 
-+#ifndef CONFIG_SCHED_ALT
- 		if (dl_task(task)) {
- 			cs->nr_migrate_dl_tasks++;
- 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
- 		}
-+#endif
- 	}
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (!cs->nr_migrate_dl_tasks)
- 		goto out_success;
- 
-@@ -3232,6 +3235,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
- 	}
- 
- out_success:
-+#endif
- 	/*
- 	 * Mark attach is in progress.  This makes validate_change() fail
- 	 * changes which zero cpus/mems_allowed.
-@@ -3255,12 +3259,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
- 	if (!cs->attach_in_progress)
- 		wake_up(&cpuset_attach_wq);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (cs->nr_migrate_dl_tasks) {
- 		int cpu = cpumask_any(cs->effective_cpus);
- 
- 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
- 		reset_migrate_dl_data(cs);
- 	}
-+#endif
- 
- 	mutex_unlock(&cpuset_mutex);
- }
-diff --git a/kernel/delayacct.c b/kernel/delayacct.c
-index 6f0c358e73d8..8111481ce8b1 100644
---- a/kernel/delayacct.c
-+++ b/kernel/delayacct.c
-@@ -150,7 +150,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
- 	 */
- 	t1 = tsk->sched_info.pcount;
- 	t2 = tsk->sched_info.run_delay;
--	t3 = tsk->se.sum_exec_runtime;
-+	t3 = tsk_seruntime(tsk);
- 
- 	d->cpu_count += t1;
- 
-diff --git a/kernel/exit.c b/kernel/exit.c
-index aedc0832c9f4..ff8bf6cddc34 100644
---- a/kernel/exit.c
-+++ b/kernel/exit.c
-@@ -174,7 +174,7 @@ static void __exit_signal(struct task_struct *tsk)
- 			sig->curr_target = next_thread(tsk);
- 	}
- 
--	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
-+	add_device_randomness((const void*) &tsk_seruntime(tsk),
- 			      sizeof(unsigned long long));
- 
- 	/*
-@@ -195,7 +195,7 @@ static void __exit_signal(struct task_struct *tsk)
- 	sig->inblock += task_io_get_inblock(tsk);
- 	sig->oublock += task_io_get_oublock(tsk);
- 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
--	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
-+	sig->sum_sched_runtime += tsk_seruntime(tsk);
- 	sig->nr_threads--;
- 	__unhash_process(tsk, group_dead);
- 	write_sequnlock(&sig->stats_lock);
-diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
-index 4a10e8c16fd2..cfbbdd64b851 100644
---- a/kernel/locking/rtmutex.c
-+++ b/kernel/locking/rtmutex.c
-@@ -362,7 +362,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
- 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
- 
- 	waiter->tree.prio = __waiter_prio(task);
--	waiter->tree.deadline = task->dl.deadline;
-+	waiter->tree.deadline = __tsk_deadline(task);
- }
- 
- /*
-@@ -383,16 +383,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
-  * Only use with rt_waiter_node_{less,equal}()
-  */
- #define task_to_waiter_node(p)	\
--	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
-+	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
- #define task_to_waiter(p)	\
- 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
- 
- static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
- 					       struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline < right->deadline);
-+#else
- 	if (left->prio < right->prio)
- 		return 1;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -401,16 +405,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
- 	 */
- 	if (dl_prio(left->prio))
- 		return dl_time_before(left->deadline, right->deadline);
-+#endif
- 
- 	return 0;
-+#endif
- }
- 
- static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
- 						 struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+	return (left->deadline == right->deadline);
-+#else
- 	if (left->prio != right->prio)
- 		return 0;
- 
-+#ifndef CONFIG_SCHED_BMQ
- 	/*
- 	 * If both waiters have dl_prio(), we check the deadlines of the
- 	 * associated tasks.
-@@ -419,8 +429,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
- 	 */
- 	if (dl_prio(left->prio))
- 		return left->deadline == right->deadline;
-+#endif
- 
- 	return 1;
-+#endif
- }
- 
- static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
-diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
-index 976092b7bd45..31d587c16ec1 100644
---- a/kernel/sched/Makefile
-+++ b/kernel/sched/Makefile
-@@ -28,7 +28,12 @@ endif
- # These compilation units have roughly the same size and complexity - so their
- # build parallelizes well and finishes roughly at once:
- #
-+ifdef CONFIG_SCHED_ALT
-+obj-y += alt_core.o
-+obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
-+else
- obj-y += core.o
- obj-y += fair.o
-+endif
- obj-y += build_policy.o
- obj-y += build_utility.o
-diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
-new file mode 100644
-index 000000000000..5b6bdff6e630
---- /dev/null
-+++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,8944 @@
-+/*
-+ *  kernel/sched/alt_core.c
-+ *
-+ *  Core alternative kernel scheduler code and related syscalls
-+ *
-+ *  Copyright (C) 1991-2002  Linus Torvalds
-+ *
-+ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
-+ *		a whole lot of those previous things.
-+ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
-+ *		scheduler by Alfred Chen.
-+ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
-+ */
-+#include <linux/sched/clock.h>
-+#include <linux/sched/cputime.h>
-+#include <linux/sched/debug.h>
-+#include <linux/sched/isolation.h>
-+#include <linux/sched/loadavg.h>
-+#include <linux/sched/mm.h>
-+#include <linux/sched/nohz.h>
-+#include <linux/sched/stat.h>
-+#include <linux/sched/wake_q.h>
-+
-+#include <linux/blkdev.h>
-+#include <linux/context_tracking.h>
-+#include <linux/cpuset.h>
-+#include <linux/delayacct.h>
-+#include <linux/init_task.h>
-+#include <linux/kcov.h>
-+#include <linux/kprobes.h>
-+#include <linux/nmi.h>
-+#include <linux/scs.h>
-+
-+#include <uapi/linux/sched/types.h>
-+
-+#include <asm/irq_regs.h>
-+#include <asm/switch_to.h>
-+
-+#define CREATE_TRACE_POINTS
-+#include <trace/events/sched.h>
-+#include <trace/events/ipi.h>
-+#undef CREATE_TRACE_POINTS
-+
-+#include "sched.h"
-+
-+#include "pelt.h"
-+
-+#include "../../io_uring/io-wq.h"
-+#include "../smpboot.h"
-+
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
-+
-+/*
-+ * Export tracepoints that act as a bare tracehook (ie: have no trace event
-+ * associated with them) to allow external modules to probe them.
-+ */
-+EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+#define sched_feat(x)	(1)
-+/*
-+ * Print a warning if need_resched is set for the given duration (if
-+ * LATENCY_WARN is enabled).
-+ *
-+ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
-+ * per boot.
-+ */
-+__read_mostly int sysctl_resched_latency_warn_ms = 100;
-+__read_mostly int sysctl_resched_latency_warn_once = 1;
-+#else
-+#define sched_feat(x)	(0)
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+#define ALT_SCHED_VERSION "v6.7-r2"
-+
-+/*
-+ * Compile time debug macro
-+ * #define ALT_SCHED_DEBUG
-+ */
-+
-+/* rt_prio(prio) defined in include/linux/sched/rt.h */
-+#define rt_task(p)		rt_prio((p)->prio)
-+#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
-+#define task_has_rt_policy(p)	(rt_policy((p)->policy))
-+
-+#define STOP_PRIO		(MAX_RT_PRIO - 1)
-+
-+/*
-+ * Time slice
-+ * (default: 4 msec, units: nanoseconds)
-+ */
-+unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
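-+/*
-+ * For reference (illustrative note): (4 << 20) ns = 4,194,304 ns, i.e.
-+ * roughly 4.2 ms; the "4 msec" above is a cheap power-of-two approximation
-+ * of 4 * NSEC_PER_MSEC, using a shift instead of a multiply.
-+ */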
-+
-+static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx);
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#include "bmq.h"
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+#include "pds.h"
-+#endif
-+
-+struct affinity_context {
-+	const struct cpumask *new_mask;
-+	struct cpumask *user_mask;
-+	unsigned int flags;
-+};
-+
-+/* Reschedule if less than this much time is left (in ns, ~100 us) */
-+#define RESCHED_NS		(100 << 10)
-+
-+/**
-+ * sched_yield_type - Type of yield that sched_yield() will perform.
-+ * 0: No yield.
-+ * 1: Requeue task. (default)
-+ * 2: Set rq skip task. (Same as mainline)
-+ */
-+int sched_yield_type __read_mostly = 1;
-+
-+#ifdef CONFIG_SMP
-+static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
-+
-+DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
-+
-+#ifdef CONFIG_SCHED_SMT
-+DEFINE_STATIC_KEY_FALSE(sched_smt_present);
-+EXPORT_SYMBOL_GPL(sched_smt_present);
-+#endif
-+
-+/*
-+ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
-+ * the domain), this allows us to quickly tell if two cpus are in the same cache
-+ * domain, see cpus_share_cache().
-+ */
-+DEFINE_PER_CPU(int, sd_llc_id);
-+#endif /* CONFIG_SMP */
-+
-+static DEFINE_MUTEX(sched_hotcpu_mutex);
-+
-+DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
-+#endif
-+static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS] ____cacheline_aligned_in_smp;
-+static cpumask_t *const sched_idle_mask = &sched_preempt_mask[0];
-+
-+/* task function */
-+static inline const struct cpumask *task_user_cpus(struct task_struct *p)
-+{
-+	if (!p->user_cpus_ptr)
-+		return cpu_possible_mask; /* &init_task.cpus_mask */
-+	return p->user_cpus_ptr;
-+}
-+
-+/* sched_queue related functions */
-+static inline void sched_queue_init(struct sched_queue *q)
-+{
-+	int i;
-+
-+	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
-+	for(i = 0; i < SCHED_LEVELS; i++)
-+		INIT_LIST_HEAD(&q->heads[i]);
-+}
-+
-+/*
-+ * Initialize the idle task and put it into the queue structure of the rq.
-+ * IMPORTANT: may be called multiple times for a single cpu.
-+ */
-+static inline void sched_queue_init_idle(struct sched_queue *q,
-+					 struct task_struct *idle)
-+{
-+	idle->sq_idx = IDLE_TASK_SCHED_PRIO;
-+	INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
-+	list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
-+}
-+
-+static inline void
-+clear_recorded_preempt_mask(int pr, int low, int high, int cpu)
-+{
-+	if (low < pr && pr <= high)
-+		cpumask_clear_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
-+}
-+
-+static inline void
-+set_recorded_preempt_mask(int pr, int low, int high, int cpu)
-+{
-+	if (low < pr && pr <= high)
-+		cpumask_set_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
-+}
-+
-+static atomic_t sched_prio_record = ATOMIC_INIT(0);
-+
-+/* watermark related functions */
-+static inline void update_sched_preempt_mask(struct rq *rq)
-+{
-+	unsigned long prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
-+	unsigned long last_prio = rq->prio;
-+	int cpu, pr;
-+
-+	if (prio == last_prio)
-+		return;
-+
-+	rq->prio = prio;
-+	cpu = cpu_of(rq);
-+	pr = atomic_read(&sched_prio_record);
-+
-+	if (prio < last_prio) {
-+		if (IDLE_TASK_SCHED_PRIO == last_prio) {
-+#ifdef CONFIG_SCHED_SMT
-+			if (static_branch_likely(&sched_smt_present))
-+				cpumask_andnot(&sched_sg_idle_mask,
-+					       &sched_sg_idle_mask, cpu_smt_mask(cpu));
-+#endif
-+			cpumask_clear_cpu(cpu, sched_idle_mask);
-+			last_prio -= 2;
-+		}
-+		clear_recorded_preempt_mask(pr, prio, last_prio, cpu);
-+
-+		return;
-+	}
-+	/* last_prio < prio */
-+	if (IDLE_TASK_SCHED_PRIO == prio) {
-+#ifdef CONFIG_SCHED_SMT
-+		if (static_branch_likely(&sched_smt_present) &&
-+		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
-+			cpumask_or(&sched_sg_idle_mask,
-+				   &sched_sg_idle_mask, cpu_smt_mask(cpu));
-+#endif
-+		cpumask_set_cpu(cpu, sched_idle_mask);
-+		prio -= 2;
-+	}
-+	set_recorded_preempt_mask(pr, last_prio, prio, cpu);
-+}
-+
-+/*
-+ * This routine assumes that the idle task is always in the queue
-+ */
-+static inline struct task_struct *sched_rq_first_task(struct rq *rq)
-+{
-+	const struct list_head *head = &rq->queue.heads[sched_prio2idx(rq->prio, rq)];
-+
-+	return list_first_entry(head, struct task_struct, sq_node);
-+}
-+
-+static inline struct task_struct *
-+sched_rq_next_task(struct task_struct *p, struct rq *rq)
-+{
-+	unsigned long idx = p->sq_idx;
-+	struct list_head *head = &rq->queue.heads[idx];
-+
-+	if (list_is_last(&p->sq_node, head)) {
-+		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
-+				    sched_idx2prio(idx, rq) + 1);
-+		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
-+
-+		return list_first_entry(head, struct task_struct, sq_node);
-+	}
-+
-+	return list_next_entry(p, sq_node);
-+}
-+
-+static inline struct task_struct *rq_runnable_task(struct rq *rq)
-+{
-+	struct task_struct *next = sched_rq_first_task(rq);
-+
-+	if (unlikely(next == rq->skip))
-+		next = sched_rq_next_task(next, rq);
-+
-+	return next;
-+}
-+
-+/*
-+ * Serialization rules:
-+ *
-+ * Lock order:
-+ *
-+ *   p->pi_lock
-+ *     rq->lock
-+ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
-+ *
-+ *  rq1->lock
-+ *    rq2->lock  where: rq1 < rq2
-+ *
-+ * Regular state:
-+ *
-+ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
-+ * local CPU's rq->lock, it optionally removes the task from the runqueue and
-+ * always looks at the local rq data structures to find the most eligible task
-+ * to run next.
-+ *
-+ * Task enqueue is also under rq->lock, possibly taken from another CPU.
-+ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
-+ * the local CPU to avoid bouncing the runqueue state around [ see
-+ * ttwu_queue_wakelist() ]
-+ *
-+ * Task wakeup, specifically wakeups that involve migration, are horribly
-+ * complicated to avoid having to take two rq->locks.
-+ *
-+ * Special state:
-+ *
-+ * System-calls and anything external will use task_rq_lock() which acquires
-+ * both p->pi_lock and rq->lock. As a consequence the state they change is
-+ * stable while holding either lock:
-+ *
-+ *  - sched_setaffinity()/
-+ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
-+ *  - set_user_nice():		p->se.load, p->*prio
-+ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
-+ *				p->se.load, p->rt_priority,
-+ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
-+ *  - sched_setnuma():		p->numa_preferred_nid
-+ *  - sched_move_task():        p->sched_task_group
-+ *  - uclamp_update_active()	p->uclamp*
-+ *
-+ * p->state <- TASK_*:
-+ *
-+ *   is changed locklessly using set_current_state(), __set_current_state() or
-+ *   set_special_state(), see their respective comments, or by
-+ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
-+ *   concurrent self.
-+ *
-+ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
-+ *
-+ *   is set by activate_task() and cleared by deactivate_task(), under
-+ *   rq->lock. Non-zero indicates the task is runnable, the special
-+ *   ON_RQ_MIGRATING state is used for migration without holding both
-+ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
-+ *
-+ * p->on_cpu <- { 0, 1 }:
-+ *
-+ *   is set by prepare_task() and cleared by finish_task() such that it will be
-+ *   set before p is scheduled-in and cleared after p is scheduled-out, both
-+ *   under rq->lock. Non-zero indicates the task is running on its CPU.
-+ *
-+ *   [ The astute reader will observe that it is possible for two tasks on one
-+ *     CPU to have ->on_cpu = 1 at the same time. ]
-+ *
-+ * task_cpu(p): is changed by set_task_cpu(), the rules are:
-+ *
-+ *  - Don't call set_task_cpu() on a blocked task:
-+ *
-+ *    We don't care what CPU we're not running on, this simplifies hotplug,
-+ *    the CPU assignment of blocked tasks isn't required to be valid.
-+ *
-+ *  - for try_to_wake_up(), called under p->pi_lock:
-+ *
-+ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
-+ *
-+ *  - for migration called under rq->lock:
-+ *    [ see task_on_rq_migrating() in task_rq_lock() ]
-+ *
-+ *    o move_queued_task()
-+ *    o detach_task()
-+ *
-+ *  - for migration called under double_rq_lock():
-+ *
-+ *    o __migrate_swap_task()
-+ *    o push_rt_task() / pull_rt_task()
-+ *    o push_dl_task() / pull_dl_task()
-+ *    o dl_task_offline_migration()
-+ *
-+ */
-+
-+/*
-+ * Context: p->pi_lock
-+ */
-+static inline struct rq
-+*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock(&rq->lock);
-+			if (likely((p->on_cpu || task_on_rq_queued(p))
-+				   && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock(&rq->lock);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			*plock = NULL;
-+			return rq;
-+		}
-+	}
-+}
-+
-+static inline void
-+__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
-+{
-+	if (NULL != lock)
-+		raw_spin_unlock(lock);
-+}
-+
-+static inline struct rq
-+*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
-+			  unsigned long *flags)
-+{
-+	struct rq *rq;
-+	for (;;) {
-+		rq = task_rq(p);
-+		if (p->on_cpu || task_on_rq_queued(p)) {
-+			raw_spin_lock_irqsave(&rq->lock, *flags);
-+			if (likely((p->on_cpu || task_on_rq_queued(p))
-+				   && rq == task_rq(p))) {
-+				*plock = &rq->lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&rq->lock, *flags);
-+		} else if (task_on_rq_migrating(p)) {
-+			do {
-+				cpu_relax();
-+			} while (unlikely(task_on_rq_migrating(p)));
-+		} else {
-+			raw_spin_lock_irqsave(&p->pi_lock, *flags);
-+			if (likely(!p->on_cpu && !p->on_rq &&
-+				   rq == task_rq(p))) {
-+				*plock = &p->pi_lock;
-+				return rq;
-+			}
-+			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
-+		}
-+	}
-+}
-+
-+static inline void
-+task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
-+			      unsigned long *flags)
-+{
-+	raw_spin_unlock_irqrestore(lock, *flags);
-+}
-+
-+/*
-+ * __task_rq_lock - lock the rq @p resides on.
-+ */
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	lockdep_assert_held(&p->pi_lock);
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-+			return rq;
-+		raw_spin_unlock(&rq->lock);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+/*
-+ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
-+ */
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	for (;;) {
-+		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
-+		rq = task_rq(p);
-+		raw_spin_lock(&rq->lock);
-+		/*
-+		 *	move_queued_task()		task_rq_lock()
-+		 *
-+		 *	ACQUIRE (rq->lock)
-+		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
-+		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
-+		 *	[S] ->cpu = new_cpu		[L] task_rq()
-+		 *					[L] ->on_rq
-+		 *	RELEASE (rq->lock)
-+		 *
-+		 * If we observe the old CPU in task_rq_lock(), the acquire of
-+		 * the old rq->lock will fully serialize against the stores.
-+		 *
-+		 * If we observe the new CPU in task_rq_lock(), the address
-+		 * dependency headed by '[L] rq = task_rq()' and the acquire
-+		 * will pair with the WMB to ensure we then also see migrating.
-+		 */
-+		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
-+			return rq;
-+		}
-+		raw_spin_unlock(&rq->lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+
-+		while (unlikely(task_on_rq_migrating(p)))
-+			cpu_relax();
-+	}
-+}
-+
-+static inline void
-+rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock_irqsave(&rq->lock, rf->flags);
-+}
-+
-+static inline void
-+rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-+}
-+
-+DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
-+		    rq_lock_irqsave(_T->lock, &_T->rf),
-+		    rq_unlock_irqrestore(_T->lock, &_T->rf),
-+		    struct rq_flags rf)
-+
-+void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
-+{
-+	raw_spinlock_t *lock;
-+
-+	/* Matches synchronize_rcu() in __sched_core_enable() */
-+	preempt_disable();
-+
-+	for (;;) {
-+		lock = __rq_lockp(rq);
-+		raw_spin_lock_nested(lock, subclass);
-+		if (likely(lock == __rq_lockp(rq))) {
-+			/* preempt_count *MUST* be > 1 */
-+			preempt_enable_no_resched();
-+			return;
-+		}
-+		raw_spin_unlock(lock);
-+	}
-+}
-+
-+void raw_spin_rq_unlock(struct rq *rq)
-+{
-+	raw_spin_unlock(rq_lockp(rq));
-+}
-+
-+/*
-+ * RQ-clock updating methods:
-+ */
-+
-+static void update_rq_clock_task(struct rq *rq, s64 delta)
-+{
-+/*
-+ * In theory, the compiler should just see 0 here, and optimize out the call
-+ * to sched_rt_avg_update. But I don't trust it...
-+ */
-+	s64 __maybe_unused steal = 0, irq_delta = 0;
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
-+
-+	/*
-+	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
-+	 * this case when a previous update_rq_clock() happened inside a
-+	 * {soft,}irq region.
-+	 *
-+	 * When this happens, we stop ->clock_task and only update the
-+	 * prev_irq_time stamp to account for the part that fit, so that a next
-+	 * update will consume the rest. This ensures ->clock_task is
-+	 * monotonic.
-+	 *
-+	 * It does however cause some slight miss-attribution of {soft,}irq
-+	 * time, a more accurate solution would be to update the irq_time using
-+	 * the current rq->clock timestamp, except that would require using
-+	 * atomic ops.
-+	 */
-+	if (irq_delta > delta)
-+		irq_delta = delta;
-+
-+	rq->prev_irq_time += irq_delta;
-+	delta -= irq_delta;
-+	delayacct_irq(rq->curr, irq_delta);
-+#endif
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	if (static_key_false((&paravirt_steal_rq_enabled))) {
-+		steal = paravirt_steal_clock(cpu_of(rq));
-+		steal -= rq->prev_steal_time_rq;
-+
-+		if (unlikely(steal > delta))
-+			steal = delta;
-+
-+		rq->prev_steal_time_rq += steal;
-+		delta -= steal;
-+	}
-+#endif
-+
-+	rq->clock_task += delta;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	if ((irq_delta + steal))
-+		update_irq_load_avg(rq, irq_delta + steal);
-+#endif
-+}
-+
-+static inline void update_rq_clock(struct rq *rq)
-+{
-+	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
-+
-+	if (unlikely(delta <= 0))
-+		return;
-+	rq->clock += delta;
-+	sched_update_rq_clock(rq);
-+	update_rq_clock_task(rq, delta);
-+}
-+
-+/*
-+ * RQ Load update routine
-+ */
-+#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
-+#define RQ_UTIL_SHIFT			(8)
-+#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
-+
-+#define LOAD_BLOCK(t)		((t) >> 17)
-+#define LOAD_HALF_BLOCK(t)	((t) >> 16)
-+#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
-+#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
-+#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
-+
-+static inline void rq_load_update(struct rq *rq)
-+{
-+	u64 time = rq->clock;
-+	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp),
-+			RQ_LOAD_HISTORY_BITS - 1);
-+	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
-+	u64 curr = !!rq->nr_running;
-+
-+	if (delta) {
-+		rq->load_history = rq->load_history >> delta;
-+
-+		if (delta < RQ_UTIL_SHIFT) {
-+			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
-+			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
-+				rq->load_history ^= LOAD_BLOCK_BIT(delta);
-+		}
-+
-+		rq->load_block = BLOCK_MASK(time) * prev;
-+	} else {
-+		rq->load_block += (time - rq->load_stamp) * prev;
-+	}
-+	if (prev ^ curr)
-+		rq->load_history ^= CURRENT_LOAD_BIT;
-+	rq->load_stamp = time;
-+}
-+
-+unsigned long rq_load_util(struct rq *rq, unsigned long max)
-+{
-+	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
-+}
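-+
-+/*
-+ * Illustrative helper (an assumption, not part of the original patch):
-+ * rq_load_util() scales the 8-bit history-derived utilization by
-+ * (max >> RQ_UTIL_SHIFT), so passing max = 100 << RQ_UTIL_SHIFT and
-+ * shifting back yields an approximate busy percentage in 0..99.
-+ */
-+static inline unsigned long rq_load_percent(struct rq *rq)
-+{
-+	return rq_load_util(rq, 100UL << RQ_UTIL_SHIFT) >> RQ_UTIL_SHIFT;
-+}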
-+
-+#ifdef CONFIG_SMP
-+unsigned long sched_cpu_util(int cpu)
-+{
-+	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
-+}
-+#endif /* CONFIG_SMP */
-+
-+#ifdef CONFIG_CPU_FREQ
-+/**
-+ * cpufreq_update_util - Take a note about CPU utilization changes.
-+ * @rq: Runqueue to carry out the update for.
-+ * @flags: Update reason flags.
-+ *
-+ * This function is called by the scheduler on the CPU whose utilization is
-+ * being updated.
-+ *
-+ * It can only be called from RCU-sched read-side critical sections.
-+ *
-+ * The way cpufreq is currently arranged requires it to evaluate the CPU
-+ * performance state (frequency/voltage) on a regular basis to prevent it from
-+ * being stuck in a completely inadequate performance level for too long.
-+ * That is not guaranteed to happen if the updates are only triggered from CFS
-+ * and DL, though, because they may not be coming in if only RT tasks are
-+ * active all the time (or there are RT tasks only).
-+ *
-+ * As a workaround for that issue, this function is called periodically by the
-+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
-+ * but that really is a band-aid.  Going forward it should be replaced with
-+ * solutions targeted more specifically at RT tasks.
-+ */
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+	struct update_util_data *data;
-+
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
-+						  cpu_of(rq)));
-+	if (data)
-+		data->func(data, rq_clock(rq), flags);
-+}
-+#else
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+#ifdef CONFIG_SMP
-+	rq_load_update(rq);
-+#endif
-+}
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+/*
-+ * Tick may be needed by tasks in the runqueue depending on their policy and
-+ * requirements. If the tick is needed, let's send the target an IPI to kick it out
-+ * of nohz mode if necessary.
-+ */
-+static inline void sched_update_tick_dependency(struct rq *rq)
-+{
-+	int cpu = cpu_of(rq);
-+
-+	if (!tick_nohz_full_cpu(cpu))
-+		return;
-+
-+	if (rq->nr_running < 2)
-+		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
-+	else
-+		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
-+}
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_update_tick_dependency(struct rq *rq) { }
-+#endif
-+
-+bool sched_task_on_rq(struct task_struct *p)
-+{
-+	return task_on_rq_queued(p);
-+}
-+
-+unsigned long get_wchan(struct task_struct *p)
-+{
-+	unsigned long ip = 0;
-+	unsigned int state;
-+
-+	if (!p || p == current)
-+		return 0;
-+
-+	/* Only get wchan if task is blocked and we can keep it that way. */
-+	raw_spin_lock_irq(&p->pi_lock);
-+	state = READ_ONCE(p->__state);
-+	smp_rmb(); /* see try_to_wake_up() */
-+	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
-+		ip = __get_wchan(p);
-+	raw_spin_unlock_irq(&p->pi_lock);
-+
-+	return ip;
-+}
-+
-+/*
-+ * Add/Remove/Requeue task to/from the runqueue routines
-+ * Context: rq->lock
-+ */
-+#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)				\
-+	sched_info_dequeue(rq, p);						\
-+										\
-+	list_del(&p->sq_node);							\
-+	if (list_empty(&rq->queue.heads[p->sq_idx])) { 				\
-+		clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);	\
-+		func;								\
-+	}
-+
-+#define __SCHED_ENQUEUE_TASK(p, rq, flags)				\
-+	sched_info_enqueue(rq, p);					\
-+									\
-+	p->sq_idx = task_sched_prio_idx(p, rq);				\
-+	list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]);	\
-+	set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
-+
-+static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+#endif
-+
-+	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
-+	--rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (1 == rq->nr_running)
-+		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
-+		  task_cpu(p), cpu_of(rq));
-+#endif
-+
-+	__SCHED_ENQUEUE_TASK(p, rq, flags);
-+	update_sched_preempt_mask(rq);
-+	++rq->nr_running;
-+#ifdef CONFIG_SMP
-+	if (2 == rq->nr_running)
-+		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+	sched_update_tick_dependency(rq);
-+}
-+
-+static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+	lockdep_assert_held(&rq->lock);
-+	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
-+		  cpu_of(rq), task_cpu(p));
-+#endif
-+
-+	list_del(&p->sq_node);
-+	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
-+	if (idx != p->sq_idx) {
-+		if (list_empty(&rq->queue.heads[p->sq_idx]))
-+			clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
-+		p->sq_idx = idx;
-+		set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
-+		update_sched_preempt_mask(rq);
-+	}
-+}
-+
-+/*
-+ * cmpxchg based fetch_or, macro so it works for different integer types
-+ */
-+#define fetch_or(ptr, mask)						\
-+	({								\
-+		typeof(ptr) _ptr = (ptr);				\
-+		typeof(mask) _mask = (mask);				\
-+		typeof(*_ptr) _val = *_ptr;				\
-+									\
-+		do {							\
-+		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
-+	_val;								\
-+})
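-+
-+/*
-+ * Example use (illustrative): prev = fetch_or(&ti->flags, _TIF_NEED_RESCHED);
-+ * like atomic_fetch_or(), it returns the pre-update value, but it works on
-+ * plain integer types such as thread_info::flags.
-+ */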
-+
-+#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
-+/*
-+ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
-+ * this avoids any races wrt polling state changes and thereby avoids
-+ * spurious IPIs.
-+ */
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
-+}
-+
-+/*
-+ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
-+ *
-+ * If this returns true, then the idle task promises to call
-+ * sched_ttwu_pending() and reschedule soon.
-+ */
-+static bool set_nr_if_polling(struct task_struct *p)
-+{
-+	struct thread_info *ti = task_thread_info(p);
-+	typeof(ti->flags) val = READ_ONCE(ti->flags);
-+
-+	do {
-+		if (!(val & _TIF_POLLING_NRFLAG))
-+			return false;
-+		if (val & _TIF_NEED_RESCHED)
-+			return true;
-+	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
-+
-+	return true;
-+}
-+
-+#else
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+	set_tsk_need_resched(p);
-+	return true;
-+}
-+
-+#ifdef CONFIG_SMP
-+static inline bool set_nr_if_polling(struct task_struct *p)
-+{
-+	return false;
-+}
-+#endif
-+#endif
-+
-+static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	struct wake_q_node *node = &task->wake_q;
-+
-+	/*
-+	 * Atomically grab the task, if ->wake_q is !nil already it means
-+	 * it's already queued (either by us or someone else) and will get the
-+	 * wakeup due to that.
-+	 *
-+	 * In order to ensure that a pending wakeup will observe our pending
-+	 * state, even in the failed case, an explicit smp_mb() must be used.
-+	 */
-+	smp_mb__before_atomic();
-+	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
-+		return false;
-+
-+	/*
-+	 * The head is context local, there can be no concurrency.
-+	 */
-+	*head->lastp = node;
-+	head->lastp = &node->next;
-+	return true;
-+}
-+
-+/**
-+ * wake_q_add() - queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ */
-+void wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (__wake_q_add(head, task))
-+		get_task_struct(task);
-+}
-+
-+/**
-+ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ *
-+ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
-+ * that already hold reference to @task can call the 'safe' version and trust
-+ * wake_q to do the right thing depending on whether or not the @task is already
-+ * queued for wakeup.
-+ */
-+void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
-+{
-+	if (!__wake_q_add(head, task))
-+		put_task_struct(task);
-+}
-+
-+void wake_up_q(struct wake_q_head *head)
-+{
-+	struct wake_q_node *node = head->first;
-+
-+	while (node != WAKE_Q_TAIL) {
-+		struct task_struct *task;
-+
-+		task = container_of(node, struct task_struct, wake_q);
-+		/* task can safely be re-inserted now: */
-+		node = node->next;
-+		task->wake_q.next = NULL;
-+
-+		/*
-+		 * wake_up_process() executes a full barrier, which pairs with
-+		 * the queueing in wake_q_add() so as not to miss wakeups.
-+		 */
-+		wake_up_process(task);
-+		put_task_struct(task);
-+	}
-+}
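
Taken together, wake_q_add()/wake_up_q() let a lock holder collect wakeups under a spinlock and issue them after the unlock, keeping the critical section short. A minimal usage sketch; struct my_lock and struct my_waiter are hypothetical, the wake_q calls are the API defined above:

	#include <linux/sched/wake_q.h>
	#include <linux/spinlock.h>
	#include <linux/list.h>

	struct my_waiter {			/* hypothetical */
		struct list_head node;
		struct task_struct *task;
	};

	struct my_lock {			/* hypothetical */
		spinlock_t lock;
		struct list_head waiters;
	};

	static void release_all(struct my_lock *l)
	{
		DEFINE_WAKE_Q(wake_q);
		struct my_waiter *w;

		spin_lock(&l->lock);
		list_for_each_entry(w, &l->waiters, node)
			wake_q_add(&wake_q, w->task);	/* takes a task reference */
		spin_unlock(&l->lock);

		wake_up_q(&wake_q);	/* actual wakeups run with no lock held */
	}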
-+
-+/*
-+ * resched_curr - mark rq's current task 'to be rescheduled now'.
-+ *
-+ * On UP this means the setting of the need_resched flag, on SMP it
-+ * might also involve a cross-CPU call to trigger the scheduler on
-+ * the target CPU.
-+ */
-+void resched_curr(struct rq *rq)
-+{
-+	struct task_struct *curr = rq->curr;
-+	int cpu;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	if (test_tsk_need_resched(curr))
-+		return;
-+
-+	cpu = cpu_of(rq);
-+	if (cpu == smp_processor_id()) {
-+		set_tsk_need_resched(curr);
-+		set_preempt_need_resched();
-+		return;
-+	}
-+
-+	if (set_nr_and_not_polling(curr))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+void resched_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (cpu_online(cpu) || cpu == smp_processor_id())
-+		resched_curr(cpu_rq(cpu));
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+}
-+
-+#ifdef CONFIG_SMP
-+#ifdef CONFIG_NO_HZ_COMMON
-+void nohz_balance_enter_idle(int cpu) {}
-+
-+void select_nohz_load_balancer(int stop_tick) {}
-+
-+void set_cpu_sd_state_idle(void) {}
-+
-+/*
-+ * In the semi idle case, use the nearest busy CPU for migrating timers
-+ * from an idle CPU.  This is good for power-savings.
-+ *
-+ * We don't do a similar optimization for a completely idle system, as
-+ * selecting an idle CPU would add more delay to the timers than intended
-+ * (that CPU's timer base may not be up to date wrt jiffies etc.).
-+ */
-+int get_nohz_timer_target(void)
-+{
-+	int i, cpu = smp_processor_id(), default_cpu = -1;
-+	struct cpumask *mask;
-+	const struct cpumask *hk_mask;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
-+		if (!idle_cpu(cpu))
-+			return cpu;
-+		default_cpu = cpu;
-+	}
-+
-+	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
-+
-+	for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
-+	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
-+		for_each_cpu_and(i, mask, hk_mask)
-+			if (!idle_cpu(i))
-+				return i;
-+
-+	if (default_cpu == -1)
-+		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
-+	cpu = default_cpu;
-+
-+	return cpu;
-+}
-+
-+/*
-+ * When add_timer_on() enqueues a timer into the timer wheel of an
-+ * idle CPU then this timer might expire before the next timer event
-+ * which is scheduled to wake up that CPU. In case of a completely
-+ * idle system the next event might even be infinite time into the
-+ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
-+ * leaves the inner idle loop so the newly added timer is taken into
-+ * account when the CPU goes back to idle and evaluates the timer
-+ * wheel for the next timer event.
-+ */
-+static inline void wake_up_idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (cpu == smp_processor_id())
-+		return;
-+
-+	if (set_nr_and_not_polling(rq->idle))
-+		smp_send_reschedule(cpu);
-+	else
-+		trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+static inline bool wake_up_full_nohz_cpu(int cpu)
-+{
-+	/*
-+	 * We just need the target to call irq_exit() and re-evaluate
-+	 * the next tick. The nohz full kick at least implies that.
-+	 * If needed we can still optimize that later with an
-+	 * empty IRQ.
-+	 */
-+	if (cpu_is_offline(cpu))
-+		return true;  /* Don't try to wake offline CPUs. */
-+	if (tick_nohz_full_cpu(cpu)) {
-+		if (cpu != smp_processor_id() ||
-+		    tick_nohz_tick_stopped())
-+			tick_nohz_full_kick_cpu(cpu);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
-+void wake_up_nohz_cpu(int cpu)
-+{
-+	if (!wake_up_full_nohz_cpu(cpu))
-+		wake_up_idle_cpu(cpu);
-+}
-+
-+static void nohz_csd_func(void *info)
-+{
-+	struct rq *rq = info;
-+	int cpu = cpu_of(rq);
-+	unsigned int flags;
-+
-+	/*
-+	 * Release the rq::nohz_csd.
-+	 */
-+	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
-+	WARN_ON(!(flags & NOHZ_KICK_MASK));
-+
-+	rq->idle_balance = idle_cpu(cpu);
-+	if (rq->idle_balance && !need_resched()) {
-+		rq->nohz_idle_balance = flags;
-+		raise_softirq_irqoff(SCHED_SOFTIRQ);
-+	}
-+}
-+
-+#endif /* CONFIG_NO_HZ_COMMON */
-+#endif /* CONFIG_SMP */
-+
-+static inline void wakeup_preempt(struct rq *rq)
-+{
-+	if (sched_rq_first_task(rq) != rq->curr)
-+		resched_curr(rq);
-+}
-+
-+static __always_inline
-+int __task_state_match(struct task_struct *p, unsigned int state)
-+{
-+	if (READ_ONCE(p->__state) & state)
-+		return 1;
-+
-+	if (READ_ONCE(p->saved_state) & state)
-+		return -1;
-+
-+	return 0;
-+}
-+
-+static __always_inline
-+int task_state_match(struct task_struct *p, unsigned int state)
-+{
-+	/*
-+	 * Serialize against current_save_and_set_rtlock_wait_state(),
-+	 * current_restore_rtlock_saved_state(), and __refrigerator().
-+	 */
-+	guard(raw_spinlock_irq)(&p->pi_lock);
-+
-+	return __task_state_match(p, state);
-+}
-+
-+/*
-+ * wait_task_inactive - wait for a thread to unschedule.
-+ *
-+ * Wait for the thread to block in any of the states set in @match_state.
-+ * If it changes, i.e. @p might have woken up, then return zero.  When we
-+ * succeed in waiting for @p to be off its CPU, we return a positive number
-+ * (its total switch count).  If a second call a short while later returns the
-+ * same number, the caller can be sure that @p has remained unscheduled the
-+ * whole time.
-+ *
-+ * The caller must ensure that the task *will* unschedule sometime soon,
-+ * else this function might spin for a *long* time. This function can't
-+ * be called with interrupts off, or it may introduce deadlock with
-+ * smp_call_function() if an IPI is sent by the same process we are
-+ * waiting to become inactive.
-+ */
-+unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
-+{
-+	unsigned long flags;
-+	int running, queued, match;
-+	unsigned long ncsw;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	for (;;) {
-+		rq = task_rq(p);
-+
-+		/*
-+		 * If the task is actively running on another CPU
-+		 * still, just relax and busy-wait without holding
-+		 * any locks.
-+		 *
-+		 * NOTE! Since we don't hold any locks, it's not
-+		 * even certain that "rq" stays as the right runqueue!
-+		 * But we don't care, since this will return false
-+		 * if the runqueue has changed and p is actually now
-+		 * running somewhere else!
-+		 */
-+		while (task_on_cpu(p)) {
-+			if (!task_state_match(p, match_state))
-+				return 0;
-+			cpu_relax();
-+		}
-+
-+		/*
-+		 * Ok, time to look more closely! We need the rq
-+		 * lock now, to be *sure*. If we're wrong, we'll
-+		 * just go back and repeat.
-+		 */
-+		task_access_lock_irqsave(p, &lock, &flags);
-+		trace_sched_wait_task(p);
-+		running = task_on_cpu(p);
-+		queued = p->on_rq;
-+		ncsw = 0;
-+		if ((match = __task_state_match(p, match_state))) {
-+			/*
-+			 * When matching on p->saved_state, consider this task
-+			 * still queued so it will wait.
-+			 */
-+			if (match < 0)
-+				queued = 1;
-+			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
-+		}
-+		task_access_unlock_irqrestore(p, lock, &flags);
-+
-+		/*
-+		 * If it changed from the expected state, bail out now.
-+		 */
-+		if (unlikely(!ncsw))
-+			break;
-+
-+		/*
-+		 * Was it really running after all now that we
-+		 * checked with the proper locks actually held?
-+		 *
-+		 * Oops. Go back and try again..
-+		 */
-+		if (unlikely(running)) {
-+			cpu_relax();
-+			continue;
-+		}
-+
-+		/*
-+		 * It's not enough that it's not actively running,
-+		 * it must be off the runqueue _entirely_, and not
-+		 * preempted!
-+		 *
-+		 * So if it was still runnable (but just not actively
-+		 * running right now), it's preempted, and we should
-+		 * yield - it could be a while.
-+		 */
-+		if (unlikely(queued)) {
-+			ktime_t to = NSEC_PER_SEC / HZ;
-+
-+			set_current_state(TASK_UNINTERRUPTIBLE);
-+			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
-+			continue;
-+		}
-+
-+		/*
-+		 * Ahh, all good. It wasn't running, and it wasn't
-+		 * runnable, which means that it will never become
-+		 * running in the future either. We're all done!
-+		 */
-+		break;
-+	}
-+
-+	return ncsw;
-+}
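
The positive return value enables the documented "call twice" idiom: if a second call returns the same switch count, @p never ran in between. A hedged sketch of such a caller; the function and its error handling are hypothetical, only the wait_task_inactive() contract is taken from above:

	/* Hypothetical caller: make sure a traced child is really off-CPU. */
	static long inspect_stopped_child(struct task_struct *child)
	{
		unsigned long ncsw;

		ncsw = wait_task_inactive(child, TASK_TRACED);
		if (!ncsw)
			return -ESRCH;	/* state changed: it may have woken up */

		/* ... examine the stopped child here ... */

		if (wait_task_inactive(child, TASK_TRACED) != ncsw)
			return -EAGAIN;	/* it ran in between: caller retries */

		return 0;
	}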
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+/*
-+ * Use HR-timers to deliver accurate preemption points.
-+ */
-+
-+static void hrtick_clear(struct rq *rq)
-+{
-+	if (hrtimer_active(&rq->hrtick_timer))
-+		hrtimer_cancel(&rq->hrtick_timer);
-+}
-+
-+/*
-+ * High-resolution timer tick.
-+ * Runs from hardirq context with interrupts disabled.
-+ */
-+static enum hrtimer_restart hrtick(struct hrtimer *timer)
-+{
-+	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
-+
-+	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
-+
-+	raw_spin_lock(&rq->lock);
-+	resched_curr(rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	return HRTIMER_NORESTART;
-+}
-+
-+/*
-+ * Use hrtick when:
-+ *  - enabled by features
-+ *  - hrtimer is actually high res
-+ */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	/**
-+	 * Alt schedule FW doesn't support sched_feat yet
-+	if (!sched_feat(HRTICK))
-+		return 0;
-+	*/
-+	if (!cpu_active(cpu_of(rq)))
-+		return 0;
-+	return hrtimer_is_hres_active(&rq->hrtick_timer);
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void __hrtick_restart(struct rq *rq)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	ktime_t time = rq->hrtick_time;
-+
-+	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
-+}
-+
-+/*
-+ * called from hardirq (IPI) context
-+ */
-+static void __hrtick_start(void *arg)
-+{
-+	struct rq *rq = arg;
-+
-+	raw_spin_lock(&rq->lock);
-+	__hrtick_restart(rq);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	struct hrtimer *timer = &rq->hrtick_timer;
-+	s64 delta;
-+
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense and can cause timer DoS.
-+	 */
-+	delta = max_t(s64, delay, 10000LL);
-+
-+	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
-+
-+	if (rq == this_rq())
-+		__hrtick_restart(rq);
-+	else
-+		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
-+}
-+
-+#else
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and irqs disabled
-+ */
-+void hrtick_start(struct rq *rq, u64 delay)
-+{
-+	/*
-+	 * Don't schedule slices shorter than 10000ns, that just
-+	 * doesn't make sense. Rely on vruntime for fairness.
-+	 */
-+	delay = max_t(u64, delay, 10000LL);
-+	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
-+		      HRTIMER_MODE_REL_PINNED_HARD);
-+}
-+#endif /* CONFIG_SMP */
-+
-+static void hrtick_rq_init(struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
-+#endif
-+
-+	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
-+	rq->hrtick_timer.function = hrtick;
-+}
-+#else	/* CONFIG_SCHED_HRTICK */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+	return 0;
-+}
-+
-+static inline void hrtick_clear(struct rq *rq)
-+{
-+}
-+
-+static inline void hrtick_rq_init(struct rq *rq)
-+{
-+}
-+#endif	/* CONFIG_SCHED_HRTICK */
-+
-+static inline int __normal_prio(int policy, int rt_prio, int static_prio)
-+{
-+	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) :
-+		static_prio + MAX_PRIORITY_ADJ;
-+}
-+
-+/*
-+ * Calculate the expected normal priority: i.e. priority
-+ * without taking RT-inheritance into account. Might be
-+ * boosted by interactivity modifiers. Changes upon fork,
-+ * setprio syscalls, and whenever the interactivity
-+ * estimator recalculates.
-+ */
-+static inline int normal_prio(struct task_struct *p)
-+{
-+	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
-+}
-+
-+/*
-+ * Calculate the current priority, i.e. the priority
-+ * taken into account by the scheduler. This value might
-+ * be boosted by RT tasks as it will be RT if the task got
-+ * RT-boosted. If not then it returns p->normal_prio.
-+ */
-+static int effective_prio(struct task_struct *p)
-+{
-+	p->normal_prio = normal_prio(p);
-+	/*
-+	 * If we are RT tasks or we were boosted to RT priority,
-+	 * keep the priority unchanged. Otherwise, update priority
-+	 * to the normal priority:
-+	 */
-+	if (!rt_prio(p->prio))
-+		return p->normal_prio;
-+	return p->prio;
-+}
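
Worked example for __normal_prio(): with mainline's MAX_RT_PRIO of 100, a SCHED_FIFO task with rt_priority 50 maps to prio 100 - 1 - 50 = 49 (numerically lower means higher priority), while a nice-0 SCHED_NORMAL task (static_prio 120) maps to 120 + MAX_PRIORITY_ADJ, where MAX_PRIORITY_ADJ is the BMQ-specific boost offset defined elsewhere in this patch.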
-+
-+/*
-+ * activate_task - move a task to the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static void activate_task(struct task_struct *p, struct rq *rq)
-+{
-+	enqueue_task(p, rq, ENQUEUE_WAKEUP);
-+	p->on_rq = TASK_ON_RQ_QUEUED;
-+
-+	/*
-+	 * If in_iowait is set, the code below may not trigger any cpufreq
-+	 * utilization updates, so do it here explicitly with the IOWAIT flag
-+	 * passed.
-+	 */
-+	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
-+}
-+
-+/*
-+ * deactivate_task - remove a task from the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static inline void deactivate_task(struct task_struct *p, struct rq *rq)
-+{
-+	dequeue_task(p, rq, DEQUEUE_SLEEP);
-+	p->on_rq = 0;
-+	cpufreq_update_util(rq, 0);
-+}
-+
-+static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
-+	 * successfully executed on another CPU. We must ensure that updates of
-+	 * per-task data have been completed by this moment.
-+	 */
-+	smp_wmb();
-+
-+	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
-+#endif
-+}
-+
-+static inline bool is_migration_disabled(struct task_struct *p)
-+{
-+#ifdef CONFIG_SMP
-+	return p->migration_disabled;
-+#else
-+	return false;
-+#endif
-+}
-+
-+#define SCA_CHECK		0x01
-+#define SCA_USER		0x08
-+
-+#ifdef CONFIG_SMP
-+
-+void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
-+{
-+#ifdef CONFIG_SCHED_DEBUG
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/*
-+	 * We should never call set_task_cpu() on a blocked task,
-+	 * ttwu() will sort out the placement.
-+	 */
-+	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
-+
-+#ifdef CONFIG_LOCKDEP
-+	/*
-+	 * The caller should hold either p->pi_lock or rq->lock, when changing
-+	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
-+	 *
-+	 * sched_move_task() holds both and thus holding either pins the cgroup,
-+	 * see task_group().
-+	 */
-+	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
-+				      lockdep_is_held(&task_rq(p)->lock)));
-+#endif
-+	/*
-+	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
-+	 */
-+	WARN_ON_ONCE(!cpu_online(new_cpu));
-+
-+	WARN_ON_ONCE(is_migration_disabled(p));
-+#endif
-+	trace_sched_migrate_task(p, new_cpu);
-+
-+	if (task_cpu(p) != new_cpu)
-+	{
-+		rseq_migrate(p);
-+		perf_event_task_migrate(p);
-+	}
-+
-+	__set_task_cpu(p, new_cpu);
-+}
-+
-+#define MDF_FORCE_ENABLED	0x80
-+
-+static void
-+__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	/*
-+	 * This here violates the locking rules for affinity, since we're only
-+	 * supposed to change these variables while holding both rq->lock and
-+	 * p->pi_lock.
-+	 *
-+	 * HOWEVER, it magically works, because ttwu() is the only code that
-+	 * accesses these variables under p->pi_lock and only does so after
-+	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
-+	 * before finish_task().
-+	 *
-+	 * XXX do further audits, this smells like something putrid.
-+	 */
-+	SCHED_WARN_ON(!p->on_cpu);
-+	p->cpus_ptr = new_mask;
-+}
-+
-+void migrate_disable(void)
-+{
-+	struct task_struct *p = current;
-+	int cpu;
-+
-+	if (p->migration_disabled) {
-+		p->migration_disabled++;
-+		return;
-+	}
-+
-+	guard(preempt)();
-+	cpu = smp_processor_id();
-+	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
-+		cpu_rq(cpu)->nr_pinned++;
-+		p->migration_disabled = 1;
-+		p->migration_flags &= ~MDF_FORCE_ENABLED;
-+
-+		/*
-+		 * Violates locking rules! see comment in __do_set_cpus_ptr().
-+		 */
-+		if (p->cpus_ptr == &p->cpus_mask)
-+			__do_set_cpus_ptr(p, cpumask_of(cpu));
-+	}
-+}
-+EXPORT_SYMBOL_GPL(migrate_disable);
-+
-+void migrate_enable(void)
-+{
-+	struct task_struct *p = current;
-+
-+	if (0 == p->migration_disabled)
-+		return;
-+
-+	if (p->migration_disabled > 1) {
-+		p->migration_disabled--;
-+		return;
-+	}
-+
-+	if (WARN_ON_ONCE(!p->migration_disabled))
-+		return;
-+
-+	/*
-+	 * Ensure stop_task runs either before or after this, and that
-+	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
-+	 */
-+	guard(preempt)();
-+	/*
-+	 * Assumption: current should be running on an allowed CPU.
-+	 */
-+	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
-+	if (p->cpus_ptr != &p->cpus_mask)
-+		__do_set_cpus_ptr(p, &p->cpus_mask);
-+	/*
-+	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
-+	 * regular cpus_mask, otherwise things that race (eg.
-+	 * select_fallback_rq) get confused.
-+	 */
-+	barrier();
-+	p->migration_disabled = 0;
-+	this_rq()->nr_pinned--;
-+}
-+EXPORT_SYMBOL_GPL(migrate_enable);
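
The typical use is a migrate_disable()/migrate_enable() bracket around a section that must stay on one CPU while remaining preemptible, which is the key difference from preempt_disable() and the reason PREEMPT_RT relies on it. A minimal sketch; my_counter is a hypothetical per-CPU variable:

	migrate_disable();
	/*
	 * smp_processor_id() is stable here and this CPU's per-CPU data
	 * stays ours, yet the section may still be preempted.
	 */
	this_cpu_inc(my_counter);
	migrate_enable();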
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return rq->nr_pinned;
-+}
-+
-+/*
-+ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
-+ * __set_cpus_allowed_ptr() and select_fallback_rq().
-+ */
-+static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-+{
-+	/* When not in the task's cpumask, no point in looking further. */
-+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+		return false;
-+
-+	/* migrate_disabled() must be allowed to finish. */
-+	if (is_migration_disabled(p))
-+		return cpu_online(cpu);
-+
-+	/* Non kernel threads are not allowed during either online or offline. */
-+	if (!(p->flags & PF_KTHREAD))
-+		return cpu_active(cpu) && task_cpu_possible(cpu, p);
-+
-+	/* KTHREAD_IS_PER_CPU is always allowed. */
-+	if (kthread_is_per_cpu(p))
-+		return cpu_online(cpu);
-+
-+	/* Regular kernel threads don't get to stay during offline. */
-+	if (cpu_dying(cpu))
-+		return false;
-+
-+	/* But are allowed during online. */
-+	return cpu_online(cpu);
-+}
-+
-+/*
-+ * This is how migration works:
-+ *
-+ * 1) we invoke migration_cpu_stop() on the target CPU using
-+ *    stop_one_cpu().
-+ * 2) stopper starts to run (implicitly forcing the migrated thread
-+ *    off the CPU)
-+ * 3) it checks whether the migrated task is still in the wrong runqueue.
-+ * 4) if it's in the wrong runqueue then the migration thread removes
-+ *    it and puts it into the right queue.
-+ * 5) stopper completes and stop_one_cpu() returns and the migration
-+ *    is done.
-+ */
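
In code, steps 1-5 collapse into a single blocking call; affine_move_task() below does exactly this. As a compact sketch, with p and dest_cpu from context:

	struct migration_arg arg = { .task = p, .dest_cpu = dest_cpu };

	/* Step 1: run migration_cpu_stop() on p's CPU; returns after step 5. */
	stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);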
-+
-+/*
-+ * move_queued_task - move a queued task to new rq.
-+ *
-+ * Returns (locked) new rq. Old rq's lock is released.
-+ */
-+static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int
-+				   new_cpu)
-+{
-+	int src_cpu;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	src_cpu = cpu_of(rq);
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
-+	dequeue_task(p, rq, 0);
-+	set_task_cpu(p, new_cpu);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rq = cpu_rq(new_cpu);
-+
-+	raw_spin_lock(&rq->lock);
-+	WARN_ON_ONCE(task_cpu(p) != new_cpu);
-+
-+	sched_mm_cid_migrate_to(rq, p, src_cpu);
-+
-+	sched_task_sanity_check(p, rq);
-+	enqueue_task(p, rq, 0);
-+	p->on_rq = TASK_ON_RQ_QUEUED;
-+	wakeup_preempt(rq);
-+
-+	return rq;
-+}
-+
-+struct migration_arg {
-+	struct task_struct *task;
-+	int dest_cpu;
-+};
-+
-+/*
-+ * Move (not current) task off this CPU, onto the destination CPU. We're doing
-+ * this because either it can't run here any more (set_cpus_allowed()
-+ * away from this CPU, or CPU going down), or because we're
-+ * attempting to rebalance this task on exec (sched_exec).
-+ *
-+ * So we race with normal scheduler movements, but that's OK, as long
-+ * as the task is no longer on this CPU.
-+ */
-+static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int
-+				 dest_cpu)
-+{
-+	/* Affinity changed (again). */
-+	if (!is_cpu_allowed(p, dest_cpu))
-+		return rq;
-+
-+	return move_queued_task(rq, p, dest_cpu);
-+}
-+
-+/*
-+ * migration_cpu_stop - this will be executed by a highprio stopper thread
-+ * and performs thread migration by bumping thread off CPU then
-+ * 'pushing' onto another runqueue.
-+ */
-+static int migration_cpu_stop(void *data)
-+{
-+	struct migration_arg *arg = data;
-+	struct task_struct *p = arg->task;
-+	struct rq *rq = this_rq();
-+	unsigned long flags;
-+
-+	/*
-+	 * The original target CPU might have gone down and we might
-+	 * be on another CPU but it doesn't matter.
-+	 */
-+	local_irq_save(flags);
-+	/*
-+	 * We need to explicitly wake pending tasks before running
-+	 * __migrate_task() such that we will not miss enforcing cpus_ptr
-+	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
-+	 */
-+	flush_smp_call_function_queue();
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+	/*
-+	 * If task_rq(p) != rq, it cannot be migrated here, because we're
-+	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
-+	 * we're holding p->pi_lock.
-+	 */
-+	if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		rq = __migrate_task(rq, p, arg->dest_cpu);
-+	}
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	return 0;
-+}
-+
-+static inline void
-+set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	cpumask_copy(&p->cpus_mask, ctx->new_mask);
-+	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
-+
-+	/*
-+	 * Swap in a new user_cpus_ptr if SCA_USER flag set
-+	 */
-+	if (ctx->flags & SCA_USER)
-+		swap(p->user_cpus_ptr, ctx->user_mask);
-+}
-+
-+static void
-+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	lockdep_assert_held(&p->pi_lock);
-+	set_cpus_allowed_common(p, ctx);
-+}
-+
-+/*
-+ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
-+ * affinity (if any) should be destroyed too.
-+ */
-+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.user_mask = NULL,
-+		.flags     = SCA_USER,	/* clear the user requested mask */
-+	};
-+	union cpumask_rcuhead {
-+		cpumask_t cpumask;
-+		struct rcu_head rcu;
-+	};
-+
-+	__do_set_cpus_allowed(p, &ac);
-+
-+	/*
-+	 * Because this is called with p->pi_lock held, it is not possible
-+	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
-+	 * kfree_rcu().
-+	 */
-+	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
-+}
-+
-+static cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+	/*
-+	 * See do_set_cpus_allowed() above for the rcu_head usage.
-+	 */
-+	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
-+
-+	return kmalloc_node(size, GFP_KERNEL, node);
-+}
-+
-+int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
-+		      int node)
-+{
-+	cpumask_t *user_mask;
-+	unsigned long flags;
-+
-+	/*
-+	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
-+	 * may differ by now due to racing.
-+	 */
-+	dst->user_cpus_ptr = NULL;
-+
-+	/*
-+	 * This check is racy and losing the race is a valid situation.
-+	 * It is not worth the extra overhead of taking the pi_lock on
-+	 * every fork/clone.
-+	 */
-+	if (data_race(!src->user_cpus_ptr))
-+		return 0;
-+
-+	user_mask = alloc_user_cpus_ptr(node);
-+	if (!user_mask)
-+		return -ENOMEM;
-+
-+	/*
-+	 * Use pi_lock to protect content of user_cpus_ptr
-+	 *
-+	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
-+	 * do_set_cpus_allowed().
-+	 */
-+	raw_spin_lock_irqsave(&src->pi_lock, flags);
-+	if (src->user_cpus_ptr) {
-+		swap(dst->user_cpus_ptr, user_mask);
-+		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
-+	}
-+	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
-+
-+	if (unlikely(user_mask))
-+		kfree(user_mask);
-+
-+	return 0;
-+}
-+
-+static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
-+{
-+	struct cpumask *user_mask = NULL;
-+
-+	swap(p->user_cpus_ptr, user_mask);
-+
-+	return user_mask;
-+}
-+
-+void release_user_cpus_ptr(struct task_struct *p)
-+{
-+	kfree(clear_user_cpus_ptr(p));
-+}
-+
-+#endif
-+
-+/**
-+ * task_curr - is this task currently executing on a CPU?
-+ * @p: the task in question.
-+ *
-+ * Return: 1 if the task is currently executing. 0 otherwise.
-+ */
-+inline int task_curr(const struct task_struct *p)
-+{
-+	return cpu_curr(task_cpu(p)) == p;
-+}
-+
-+#ifdef CONFIG_SMP
-+/***
-+ * kick_process - kick a running thread to enter/exit the kernel
-+ * @p: the to-be-kicked thread
-+ *
-+ * Cause a process which is running on another CPU to enter
-+ * kernel mode without delay (e.g. to get signals handled).
-+ *
-+ * NOTE: this function doesn't have to take the runqueue lock,
-+ * because all it wants to ensure is that the remote task enters
-+ * the kernel. If the IPI races and the task has been migrated
-+ * to another CPU then no harm is done and the purpose has been
-+ * achieved as well.
-+ */
-+void kick_process(struct task_struct *p)
-+{
-+	guard(preempt)();
-+	int cpu = task_cpu(p);
-+
-+	if ((cpu != smp_processor_id()) && task_curr(p))
-+		smp_send_reschedule(cpu);
-+}
-+EXPORT_SYMBOL_GPL(kick_process);
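
The classic caller is signal delivery: after setting TIF_SIGPENDING, the kernel kicks a task that is running remotely so it re-enters the kernel and notices the flag. Roughly, following mainline kernel/signal.c (simplified):

	set_tsk_thread_flag(t, TIF_SIGPENDING);
	if (!wake_up_state(t, state | TASK_INTERRUPTIBLE))
		kick_process(t);	/* already running: force a kernel entry */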
-+
-+/*
-+ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
-+ *
-+ * A few notes on cpu_active vs cpu_online:
-+ *
-+ *  - cpu_active must be a subset of cpu_online
-+ *
-+ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
-+ *    see __set_cpus_allowed_ptr(). At this point the newly online
-+ *    CPU isn't yet part of the sched domains, and balancing will not
-+ *    see it.
-+ *
-+ *  - on cpu-down we clear cpu_active() to mask the sched domains and
-+ *    avoid the load balancer to place new tasks on the to be removed
-+ *    CPU. Existing tasks will remain running there and will be taken
-+ *    off.
-+ *
-+ * This means that fallback selection must not select !active CPUs,
-+ * and can assume that any active CPU must be online. Conversely
-+ * select_task_rq() below may allow selection of !active CPUs in order
-+ * to satisfy the above rules.
-+ */
-+static int select_fallback_rq(int cpu, struct task_struct *p)
-+{
-+	int nid = cpu_to_node(cpu);
-+	const struct cpumask *nodemask = NULL;
-+	enum { cpuset, possible, fail } state = cpuset;
-+	int dest_cpu;
-+
-+	/*
-+	 * If the node that the CPU is on has been offlined, cpu_to_node()
-+	 * will return -1. There is no CPU on the node, and we should
-+	 * select a CPU on another node.
-+	 */
-+	if (nid != -1) {
-+		nodemask = cpumask_of_node(nid);
-+
-+		/* Look for allowed, online CPU in same node. */
-+		for_each_cpu(dest_cpu, nodemask) {
-+			if (is_cpu_allowed(p, dest_cpu))
-+				return dest_cpu;
-+		}
-+	}
-+
-+	for (;;) {
-+		/* Any allowed, online CPU? */
-+		for_each_cpu(dest_cpu, p->cpus_ptr) {
-+			if (!is_cpu_allowed(p, dest_cpu))
-+				continue;
-+			goto out;
-+		}
-+
-+		/* No more Mr. Nice Guy. */
-+		switch (state) {
-+		case cpuset:
-+			if (cpuset_cpus_allowed_fallback(p)) {
-+				state = possible;
-+				break;
-+			}
-+			fallthrough;
-+		case possible:
-+			/*
-+			 * XXX When called from select_task_rq() we only
-+			 * hold p->pi_lock and again violate locking order.
-+			 *
-+			 * More yuck to audit.
-+			 */
-+			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
-+			state = fail;
-+			break;
-+
-+		case fail:
-+			BUG();
-+			break;
-+		}
-+	}
-+
-+out:
-+	if (state != cpuset) {
-+		/*
-+		 * Don't tell them about moving exiting tasks or
-+		 * kernel threads (both mm NULL), since they never
-+		 * leave the kernel.
-+		 */
-+		if (p->mm && printk_ratelimit()) {
-+			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
-+					task_pid_nr(p), p->comm, cpu);
-+		}
-+	}
-+
-+	return dest_cpu;
-+}
-+
-+static inline void
-+sched_preempt_mask_flush(cpumask_t *mask, int prio)
-+{
-+	int cpu;
-+
-+	cpumask_copy(mask, sched_idle_mask);
-+
-+	for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
-+		if (prio < cpu_rq(cpu)->prio)
-+			cpumask_set_cpu(cpu, mask);
-+	}
-+}
-+
-+static inline int
-+preempt_mask_check(struct task_struct *p, cpumask_t *allow_mask, cpumask_t *preempt_mask)
-+{
-+	int task_prio = task_sched_prio(p);
-+	cpumask_t *mask = sched_preempt_mask + SCHED_QUEUE_BITS - 1 - task_prio;
-+	int pr = atomic_read(&sched_prio_record);
-+
-+	if (pr != task_prio) {
-+		sched_preempt_mask_flush(mask, task_prio);
-+		atomic_set(&sched_prio_record, task_prio);
-+	}
-+
-+	return cpumask_and(preempt_mask, allow_mask, mask);
-+}
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	cpumask_t allow_mask, mask;
-+
-+	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
-+		return select_fallback_rq(task_cpu(p), p);
-+
-+	if (
-+#ifdef CONFIG_SCHED_SMT
-+	    cpumask_and(&mask, &allow_mask, &sched_sg_idle_mask) ||
-+#endif
-+	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
-+	    preempt_mask_check(p, &allow_mask, &mask))
-+		return best_mask_cpu(task_cpu(p), &mask);
-+
-+	return best_mask_cpu(task_cpu(p), &allow_mask);
-+}
-+
-+void sched_set_stop_task(int cpu, struct task_struct *stop)
-+{
-+	static struct lock_class_key stop_pi_lock;
-+	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
-+	struct sched_param start_param = { .sched_priority = 0 };
-+	struct task_struct *old_stop = cpu_rq(cpu)->stop;
-+
-+	if (stop) {
-+		/*
-+		 * Make it appear like a SCHED_FIFO task; it's something
-+		 * userspace knows about and won't get confused about.
-+		 *
-+		 * Also, it will make PI more or less work without too
-+		 * much confusion -- but then, stop work should not
-+		 * rely on PI working anyway.
-+		 */
-+		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
-+
-+		/*
-+		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
-+		 * adjust the effective priority of a task. As a result,
-+		 * rt_mutex_setprio() can trigger (RT) balancing operations,
-+		 * which can then trigger wakeups of the stop thread to push
-+		 * around the current task.
-+		 *
-+		 * The stop task itself will never be part of the PI-chain, it
-+		 * never blocks, therefore that ->pi_lock recursion is safe.
-+		 * Tell lockdep about this by placing the stop->pi_lock in its
-+		 * own class.
-+		 */
-+		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
-+	}
-+
-+	cpu_rq(cpu)->stop = stop;
-+
-+	if (old_stop) {
-+		/*
-+		 * Reset it back to a normal scheduling policy so that
-+		 * it can die in pieces.
-+		 */
-+		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
-+	}
-+}
-+
-+static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
-+			    raw_spinlock_t *lock, unsigned long irq_flags)
-+	__releases(rq->lock)
-+	__releases(p->pi_lock)
-+{
-+	/* Can the task run on the task's current CPU? If so, we're done */
-+	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
-+		if (p->migration_disabled) {
-+			if (likely(p->cpus_ptr != &p->cpus_mask))
-+				__do_set_cpus_ptr(p, &p->cpus_mask);
-+			p->migration_disabled = 0;
-+			p->migration_flags |= MDF_FORCE_ENABLED;
-+			/* When p is migrate_disabled, rq->lock should be held */
-+			rq->nr_pinned--;
-+		}
-+
-+		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
-+			struct migration_arg arg = { p, dest_cpu };
-+
-+			/* Need help from migration thread: drop lock and wait. */
-+			__task_access_unlock(p, lock);
-+			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
-+			return 0;
-+		}
-+		if (task_on_rq_queued(p)) {
-+			/*
-+			 * OK, since we're going to drop the lock immediately
-+			 * afterwards anyway.
-+			 */
-+			update_rq_clock(rq);
-+			rq = move_queued_task(rq, p, dest_cpu);
-+			lock = &rq->lock;
-+		}
-+	}
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+	return 0;
-+}
-+
-+static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
-+					 struct affinity_context *ctx,
-+					 struct rq *rq,
-+					 raw_spinlock_t *lock,
-+					 unsigned long irq_flags)
-+{
-+	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
-+	const struct cpumask *cpu_valid_mask = cpu_active_mask;
-+	bool kthread = p->flags & PF_KTHREAD;
-+	int dest_cpu;
-+	int ret = 0;
-+
-+	if (kthread || is_migration_disabled(p)) {
-+		/*
-+		 * Kernel threads are allowed on online && !active CPUs,
-+		 * however, during cpu-hot-unplug, even these might get pushed
-+		 * away if not KTHREAD_IS_PER_CPU.
-+		 *
-+		 * Specifically, migration_disabled() tasks must not fail the
-+		 * cpumask_any_and_distribute() pick below, esp. so on
-+		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
-+		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
-+		 */
-+		cpu_valid_mask = cpu_online_mask;
-+	}
-+
-+	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	/*
-+	 * Must re-check here, to close a race against __kthread_bind(),
-+	 * sched_setaffinity() is not guaranteed to observe the flag.
-+	 */
-+	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
-+		goto out;
-+
-+	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
-+	if (dest_cpu >= nr_cpu_ids) {
-+		ret = -EINVAL;
-+		goto out;
-+	}
-+
-+	__do_set_cpus_allowed(p, ctx);
-+
-+	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
-+
-+out:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+
-+	return ret;
-+}
-+
-+/*
-+ * Change a given task's CPU affinity. Migrate the thread to a
-+ * proper CPU and schedule it away if the CPU it's executing on
-+ * is removed from the allowed bitmask.
-+ *
-+ * NOTE: the caller must have a valid reference to the task, the
-+ * task must not exit() & deallocate itself prematurely. The
-+ * call is not atomic; no spinlocks may be held.
-+ */
-+static int __set_cpus_allowed_ptr(struct task_struct *p,
-+				  struct affinity_context *ctx)
-+{
-+	unsigned long irq_flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+	rq = __task_access_lock(p, &lock);
-+	/*
-+	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
-+	 * flags are set.
-+	 */
-+	if (p->user_cpus_ptr &&
-+	    !(ctx->flags & SCA_USER) &&
-+	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
-+		ctx->new_mask = rq->scratch_mask;
-+
-+
-+	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
-+}
-+
-+int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.flags     = 0,
-+	};
-+
-+	return __set_cpus_allowed_ptr(p, &ac);
-+}
-+EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
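
A common external use of set_cpus_allowed_ptr() is pinning a kthread to one CPU. A hedged sketch; worker_fn and cpu are hypothetical (kthread_bind() wraps a similar operation for not-yet-running kthreads):

	struct task_struct *t;

	t = kthread_create(worker_fn, NULL, "my_worker/%d", cpu);
	if (!IS_ERR(t)) {
		set_cpus_allowed_ptr(t, cpumask_of(cpu));
		wake_up_process(t);
	}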
-+
-+/*
-+ * Change a given task's CPU affinity to the intersection of its current
-+ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
-+ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
-+ * affinity or use cpu_online_mask instead.
-+ *
-+ * If the resulting mask is empty, leave the affinity unchanged and return
-+ * -EINVAL.
-+ */
-+static int restrict_cpus_allowed_ptr(struct task_struct *p,
-+				     struct cpumask *new_mask,
-+				     const struct cpumask *subset_mask)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = new_mask,
-+		.flags     = 0,
-+	};
-+	unsigned long irq_flags;
-+	raw_spinlock_t *lock;
-+	struct rq *rq;
-+	int err;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
-+		err = -EINVAL;
-+		goto err_unlock;
-+	}
-+
-+	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
-+
-+err_unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+	return err;
-+}
-+
-+/*
-+ * Restrict the CPU affinity of task @p so that it is a subset of
-+ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
-+ * old affinity mask. If the resulting mask is empty, we warn and walk
-+ * up the cpuset hierarchy until we find a suitable mask.
-+ */
-+void force_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+	cpumask_var_t new_mask;
-+	const struct cpumask *override_mask = task_cpu_possible_mask(p);
-+
-+	alloc_cpumask_var(&new_mask, GFP_KERNEL);
-+
-+	/*
-+	 * __migrate_task() can fail silently in the face of concurrent
-+	 * offlining of the chosen destination CPU, so take the hotplug
-+	 * lock to ensure that the migration succeeds.
-+	 */
-+	cpus_read_lock();
-+	if (!cpumask_available(new_mask))
-+		goto out_set_mask;
-+
-+	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
-+		goto out_free_mask;
-+
-+	/*
-+	 * We failed to find a valid subset of the affinity mask for the
-+	 * task, so override it based on its cpuset hierarchy.
-+	 */
-+	cpuset_cpus_allowed(p, new_mask);
-+	override_mask = new_mask;
-+
-+out_set_mask:
-+	if (printk_ratelimit()) {
-+		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
-+				task_pid_nr(p), p->comm,
-+				cpumask_pr_args(override_mask));
-+	}
-+
-+	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
-+out_free_mask:
-+	cpus_read_unlock();
-+	free_cpumask_var(new_mask);
-+}
-+
-+static int
-+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
-+
-+/*
-+ * Restore the affinity of a task @p which was previously restricted by a
-+ * call to force_compatible_cpus_allowed_ptr().
-+ *
-+ * It is the caller's responsibility to serialise this with any calls to
-+ * force_compatible_cpus_allowed_ptr(@p).
-+ */
-+void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+	struct affinity_context ac = {
-+		.new_mask  = task_user_cpus(p),
-+		.flags     = 0,
-+	};
-+	int ret;
-+
-+	/*
-+	 * Try to restore the old affinity mask with __sched_setaffinity().
-+	 * Cpuset masking will be done there too.
-+	 */
-+	ret = __sched_setaffinity(p, &ac);
-+	WARN_ON_ONCE(ret);
-+}
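
These two functions are designed to be used as a pair: arm64, for instance, restricts a 32-bit task to 32-bit-capable cores on asymmetric systems at exec time and lifts the restriction again for a 64-bit exec. Roughly, following mainline arm64 usage (simplified):

	if (is_compat_task())
		force_compatible_cpus_allowed_ptr(current);
	else
		relax_compatible_cpus_allowed_ptr(current);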
-+
-+#else /* CONFIG_SMP */
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+	return 0;
-+}
-+
-+static inline int
-+__set_cpus_allowed_ptr(struct task_struct *p,
-+		       struct affinity_context *ctx)
-+{
-+	return set_cpus_allowed_ptr(p, ctx->new_mask);
-+}
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+	return false;
-+}
-+
-+static inline cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+	return NULL;
-+}
-+
-+#endif /* !CONFIG_SMP */
-+
-+static void
-+ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq;
-+
-+	if (!schedstat_enabled())
-+		return;
-+
-+	rq = this_rq();
-+
-+#ifdef CONFIG_SMP
-+	if (cpu == rq->cpu) {
-+		__schedstat_inc(rq->ttwu_local);
-+		__schedstat_inc(p->stats.nr_wakeups_local);
-+	} else {
-+		/** Alt schedule FW ToDo:
-+		 * How to do ttwu_wake_remote
-+		 */
-+	}
-+#endif /* CONFIG_SMP */
-+
-+	__schedstat_inc(rq->ttwu_count);
-+	__schedstat_inc(p->stats.nr_wakeups);
-+}
-+
-+/*
-+ * Mark the task runnable.
-+ */
-+static inline void ttwu_do_wakeup(struct task_struct *p)
-+{
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	trace_sched_wakeup(p);
-+}
-+
-+static inline void
-+ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
-+{
-+	if (p->sched_contributes_to_load)
-+		rq->nr_uninterruptible--;
-+
-+	if (
-+#ifdef CONFIG_SMP
-+	    !(wake_flags & WF_MIGRATED) &&
-+#endif
-+	    p->in_iowait) {
-+		delayacct_blkio_end(p);
-+		atomic_dec(&task_rq(p)->nr_iowait);
-+	}
-+
-+	activate_task(p, rq);
-+	wakeup_preempt(rq);
-+
-+	ttwu_do_wakeup(p);
-+}
-+
-+/*
-+ * Consider @p being inside a wait loop:
-+ *
-+ *   for (;;) {
-+ *      set_current_state(TASK_UNINTERRUPTIBLE);
-+ *
-+ *      if (CONDITION)
-+ *         break;
-+ *
-+ *      schedule();
-+ *   }
-+ *   __set_current_state(TASK_RUNNING);
-+ *
-+ * A wakeup can arrive between set_current_state() and schedule(). In this
-+ * case @p is still runnable, so all that needs doing is to change p->state
-+ * back to TASK_RUNNING in an atomic manner.
-+ *
-+ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
-+ * then schedule() must still happen and p->state can be changed to
-+ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
-+ * need to do a full wakeup with enqueue.
-+ *
-+ * Returns: %true when the wakeup is done,
-+ *          %false otherwise.
-+ */
-+static int ttwu_runnable(struct task_struct *p, int wake_flags)
-+{
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	int ret = 0;
-+
-+	rq = __task_access_lock(p, &lock);
-+	if (task_on_rq_queued(p)) {
-+		if (!task_on_cpu(p)) {
-+			/*
-+			 * When on_rq && !on_cpu the task is preempted, see if
-+			 * it should preempt the task that is current now.
-+			 */
-+			update_rq_clock(rq);
-+			wakeup_preempt(rq);
-+		}
-+		ttwu_do_wakeup(p);
-+		ret = 1;
-+	}
-+	__task_access_unlock(p, lock);
-+
-+	return ret;
-+}
-+
-+#ifdef CONFIG_SMP
-+void sched_ttwu_pending(void *arg)
-+{
-+	struct llist_node *llist = arg;
-+	struct rq *rq = this_rq();
-+	struct task_struct *p, *t;
-+	struct rq_flags rf;
-+
-+	if (!llist)
-+		return;
-+
-+	rq_lock_irqsave(rq, &rf);
-+	update_rq_clock(rq);
-+
-+	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
-+		if (WARN_ON_ONCE(p->on_cpu))
-+			smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
-+			set_task_cpu(p, cpu_of(rq));
-+
-+		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
-+	}
-+
-+	/*
-+	 * Must be after enqueueing at least one task such that
-+	 * idle_cpu() does not observe a false-negative -- if it does,
-+	 * it is possible for select_idle_siblings() to stack a number
-+	 * of tasks on this CPU during that window.
-+	 *
-+	 * It is OK to clear ttwu_pending while another task is pending:
-+	 * we will receive an IPI after local IRQs are enabled and then
-+	 * enqueue it. Since nr_running > 0 by now, idle_cpu() will always
-+	 * get the correct result.
-+	 */
-+	WRITE_ONCE(rq->ttwu_pending, 0);
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Prepare the scene for sending an IPI for a remote smp_call
-+ *
-+ * Returns true if the caller can proceed with sending the IPI.
-+ * Returns false otherwise.
-+ */
-+bool call_function_single_prep_ipi(int cpu)
-+{
-+	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
-+		trace_sched_wake_idle_without_ipi(cpu);
-+		return false;
-+	}
-+
-+	return true;
-+}
-+
-+/*
-+ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
-+ * necessary. The wakee CPU on receipt of the IPI will queue the task
-+ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
-+ * of the wakeup instead of the waker.
-+ */
-+static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
-+
-+	WRITE_ONCE(rq->ttwu_pending, 1);
-+	__smp_call_single_queue(cpu, &p->wake_entry.llist);
-+}
-+
-+static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
-+{
-+	/*
-+	 * Do not complicate things with the async wake_list while the CPU is
-+	 * in hotplug state.
-+	 */
-+	if (!cpu_active(cpu))
-+		return false;
-+
-+	/* Ensure the task will still be allowed to run on the CPU. */
-+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+		return false;
-+
-+	/*
-+	 * If the CPU does not share cache, then queue the task on the
-+	 * remote rqs wakelist to avoid accessing remote data.
-+	 */
-+	if (!cpus_share_cache(smp_processor_id(), cpu))
-+		return true;
-+
-+	if (cpu == smp_processor_id())
-+		return false;
-+
-+	/*
-+	 * If the wakee cpu is idle, or the task is descheduling and the
-+	 * only running task on the CPU, then use the wakelist to offload
-+	 * the task activation to the idle (or soon-to-be-idle) CPU as
-+	 * the current CPU is likely busy. nr_running is checked to
-+	 * avoid unnecessary task stacking.
-+	 *
-+	 * Note that we can only get here with (wakee) p->on_rq=0,
-+	 * p->on_cpu can be whatever, we've done the dequeue, so
-+	 * the wakee has been accounted out of ->nr_running.
-+	 */
-+	if (!cpu_rq(cpu)->nr_running)
-+		return true;
-+
-+	return false;
-+}
-+
-+static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
-+		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
-+		__ttwu_queue_wakelist(p, cpu, wake_flags);
-+		return true;
-+	}
-+
-+	return false;
-+}
-+
-+void wake_up_if_idle(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	guard(rcu)();
-+	if (is_idle_task(rcu_dereference(rq->curr))) {
-+		guard(raw_spinlock_irqsave)(&rq->lock);
-+		if (is_idle_task(rq->curr))
-+			resched_curr(rq);
-+	}
-+}
-+
-+bool cpus_share_cache(int this_cpu, int that_cpu)
-+{
-+	if (this_cpu == that_cpu)
-+		return true;
-+
-+	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-+}
-+#else /* !CONFIG_SMP */
-+
-+static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	return false;
-+}
-+
-+#endif /* CONFIG_SMP */
-+
-+static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (ttwu_queue_wakelist(p, cpu, wake_flags))
-+		return;
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+	ttwu_do_activate(rq, p, wake_flags);
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Invoked from try_to_wake_up() to check whether the task can be woken up.
-+ *
-+ * The caller holds p::pi_lock if p != current or has preemption
-+ * disabled when p == current.
-+ *
-+ * The rules of saved_state:
-+ *
-+ *   The related locking code always holds p::pi_lock when updating
-+ *   p::saved_state, which means the code is fully serialized in both cases.
-+ *
-+ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
-+ *  No other bits set. This allows us to distinguish all wakeup scenarios.
-+ *
-+ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
-+ *  allows us to prevent early wakeup of tasks before they can be run on
-+ *  asymmetric ISA architectures (eg ARMv9).
-+ */
-+static __always_inline
-+bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
-+{
-+	int match;
-+
-+	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
-+			     state != TASK_RTLOCK_WAIT);
-+	}
-+
-+	*success = !!(match = __task_state_match(p, state));
-+
-+	/*
-+	 * Saved state preserves the task state across blocking on
-+	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
-+	 * set p::saved_state to TASK_RUNNING, but do not wake the task
-+	 * because it waits for a lock wakeup or __thaw_task(). Also
-+	 * indicate success because from the regular waker's point of
-+	 * view this has succeeded.
-+	 *
-+	 * After acquiring the lock the task will restore p::__state
-+	 * from p::saved_state which ensures that the regular
-+	 * wakeup is not lost. The restore will also set
-+	 * p::saved_state to TASK_RUNNING so any further tests will
-+	 * not result in false positives vs. @success
-+	 */
-+	if (match < 0)
-+		p->saved_state = TASK_RUNNING;
-+
-+	return match > 0;
-+}
-+
-+/*
-+ * Notes on Program-Order guarantees on SMP systems.
-+ *
-+ *  MIGRATION
-+ *
-+ * The basic program-order guarantee on SMP systems is that when a task [t]
-+ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
-+ * execution on its new CPU [c1].
-+ *
-+ * For migration (of runnable tasks) this is provided by the following means:
-+ *
-+ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
-+ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
-+ *     rq(c1)->lock (if not at the same time, then in that order).
-+ *  C) LOCK of the rq(c1)->lock scheduling in task
-+ *
-+ * Transitivity guarantees that B happens after A and C after B.
-+ * Note: we only require RCpc transitivity.
-+ * Note: the CPU doing B need not be c0 or c1
-+ *
-+ * Example:
-+ *
-+ *   CPU0            CPU1            CPU2
-+ *
-+ *   LOCK rq(0)->lock
-+ *   sched-out X
-+ *   sched-in Y
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(0)->lock // orders against CPU0
-+ *                                   dequeue X
-+ *                                   UNLOCK rq(0)->lock
-+ *
-+ *                                   LOCK rq(1)->lock
-+ *                                   enqueue X
-+ *                                   UNLOCK rq(1)->lock
-+ *
-+ *                   LOCK rq(1)->lock // orders against CPU2
-+ *                   sched-out Z
-+ *                   sched-in X
-+ *                   UNLOCK rq(1)->lock
-+ *
-+ *
-+ *  BLOCKING -- aka. SLEEP + WAKEUP
-+ *
-+ * For blocking we (obviously) need to provide the same guarantee as for
-+ * migration. However the means are completely different as there is no lock
-+ * chain to provide order. Instead we do:
-+ *
-+ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
-+ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
-+ *
-+ * Example:
-+ *
-+ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
-+ *
-+ *   LOCK rq(0)->lock LOCK X->pi_lock
-+ *   dequeue X
-+ *   sched-out X
-+ *   smp_store_release(X->on_cpu, 0);
-+ *
-+ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
-+ *                    X->state = WAKING
-+ *                    set_task_cpu(X,2)
-+ *
-+ *                    LOCK rq(2)->lock
-+ *                    enqueue X
-+ *                    X->state = RUNNING
-+ *                    UNLOCK rq(2)->lock
-+ *
-+ *                                          LOCK rq(2)->lock // orders against CPU1
-+ *                                          sched-out Z
-+ *                                          sched-in X
-+ *                                          UNLOCK rq(2)->lock
-+ *
-+ *                    UNLOCK X->pi_lock
-+ *   UNLOCK rq(0)->lock
-+ *
-+ *
-+ * However; for wakeups there is a second guarantee we must provide, namely we
-+ * must observe the state that led to our wakeup. That is, not only must our
-+ * task observe its own prior state, it must also observe the stores prior to
-+ * its wakeup.
-+ *
-+ * This means that any means of doing remote wakeups must order the CPU doing
-+ * the wakeup against the CPU the task is going to end up running on. This,
-+ * however, is already required for the regular Program-Order guarantee above,
-+ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
-+ *
-+ */
-+
-+/**
-+ * try_to_wake_up - wake up a thread
-+ * @p: the thread to be awakened
-+ * @state: the mask of task states that can be woken
-+ * @wake_flags: wake modifier flags (WF_*)
-+ *
-+ * Conceptually does:
-+ *
-+ *   If (@state & @p->state) @p->state = TASK_RUNNING.
-+ *
-+ * If the task was not queued/runnable, also place it back on a runqueue.
-+ *
-+ * This function is atomic against schedule() which would dequeue the task.
-+ *
-+ * It issues a full memory barrier before accessing @p->state, see the comment
-+ * with set_current_state().
-+ *
-+ * Uses p->pi_lock to serialize against concurrent wake-ups.
-+ *
-+ * Relies on p->pi_lock stabilizing:
-+ *  - p->sched_class
-+ *  - p->cpus_ptr
-+ *  - p->sched_task_group
-+ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
-+ *
-+ * Tries really hard to only take one task_rq(p)->lock for performance.
-+ * Takes rq->lock in:
-+ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
-+ *  - ttwu_queue()       -- new rq, for enqueue of the task;
-+ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
-+ *
-+ * As a consequence we race really badly with just about everything. See the
-+ * many memory barriers and their comments for details.
-+ *
-+ * Return: %true if @p->state changes (an actual wakeup was done),
-+ *	   %false otherwise.
-+ */
-+int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
-+{
-+	guard(preempt)();
-+	int cpu, success = 0;
-+
-+	if (p == current) {
-+		/*
-+		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
-+		 * == smp_processor_id()'. Together this means we can special
-+		 * case the whole 'p->on_rq && ttwu_runnable()' case below
-+		 * without taking any locks.
-+		 *
-+		 * In particular:
-+		 *  - we rely on Program-Order guarantees for all the ordering,
-+		 *  - we're serialized against set_special_state() by virtue of
-+		 *    it disabling IRQs (this allows not taking ->pi_lock).
-+		 */
-+		if (!ttwu_state_match(p, state, &success))
-+			goto out;
-+
-+		trace_sched_waking(p);
-+		ttwu_do_wakeup(p);
-+		goto out;
-+	}
-+
-+	/*
-+	 * If we are going to wake up a thread waiting for CONDITION we
-+	 * need to ensure that CONDITION=1 done by the caller can not be
-+	 * reordered with p->state check below. This pairs with smp_store_mb()
-+	 * in set_current_state() that the waiting thread does.
-+	 */
-+	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
-+		smp_mb__after_spinlock();
-+		if (!ttwu_state_match(p, state, &success))
-+			break;
-+
-+		trace_sched_waking(p);
-+
-+		/*
-+		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
-+		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
-+		 * in smp_cond_load_acquire() below.
-+		 *
-+		 * sched_ttwu_pending()			try_to_wake_up()
-+		 *   STORE p->on_rq = 1			  LOAD p->state
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * __schedule() (switch to task 'p')
-+		 *   LOCK rq->lock			  smp_rmb();
-+		 *   smp_mb__after_spinlock();
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * [task p]
-+		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
-+		 *
-+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+		 * __schedule().  See the comment for smp_mb__after_spinlock().
-+		 *
-+		 * A similar smp_rmb() lives in __task_needs_rq_lock().
-+		 */
-+		smp_rmb();
-+		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
-+			break;
-+
-+#ifdef CONFIG_SMP
-+		/*
-+		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
-+		 * possible to, falsely, observe p->on_cpu == 0.
-+		 *
-+		 * One must be running (->on_cpu == 1) in order to remove oneself
-+		 * from the runqueue.
-+		 *
-+		 * __schedule() (switch to task 'p')	try_to_wake_up()
-+		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
-+		 *   UNLOCK rq->lock
-+		 *
-+		 * __schedule() (put 'p' to sleep)
-+		 *   LOCK rq->lock			  smp_rmb();
-+		 *   smp_mb__after_spinlock();
-+		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
-+		 *
-+		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+		 * __schedule().  See the comment for smp_mb__after_spinlock().
-+		 *
-+		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
-+		 * schedule()'s deactivate_task() has 'happened' and p will no longer
-+		 * care about its own p->state. See the comment in __schedule().
-+		 */
-+		smp_acquire__after_ctrl_dep();
-+
-+		/*
-+		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
-+		 * == 0), which means we need to do an enqueue, change p->state to
-+		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
-+		 * enqueue, such as ttwu_queue_wakelist().
-+		 */
-+		WRITE_ONCE(p->__state, TASK_WAKING);
-+
-+		/*
-+		 * If the owning (remote) CPU is still in the middle of schedule() with
-+		 * this task as prev, consider queueing p on the remote CPU's
-+		 * wake_list, which potentially sends an IPI instead of spinning
-+		 * on p->on_cpu, to let the waker make forward progress. This is
-+		 * safe because IRQs are disabled and the IPI will deliver after
-+		 * on_cpu is cleared.
-+		 *
-+		 * Ensure we load task_cpu(p) after p->on_cpu:
-+		 *
-+		 * set_task_cpu(p, cpu);
-+		 *   STORE p->cpu = @cpu
-+		 * __schedule() (switch to task 'p')
-+		 *   LOCK rq->lock
-+		 *   smp_mb__after_spinlock()           smp_cond_load_acquire(&p->on_cpu)
-+		 *   STORE p->on_cpu = 1                LOAD p->cpu
-+		 *
-+		 * to ensure we observe the correct CPU on which the task is currently
-+		 * scheduling.
-+		 */
-+		if (smp_load_acquire(&p->on_cpu) &&
-+		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
-+			break;
-+
-+		/*
-+		 * If the owning (remote) CPU is still in the middle of schedule() with
-+		 * this task as prev, wait until it's done referencing the task.
-+		 *
-+		 * Pairs with the smp_store_release() in finish_task().
-+		 *
-+		 * This ensures that tasks getting woken will be fully ordered against
-+		 * their previous state and preserve Program Order.
-+		 */
-+		smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+		sched_task_ttwu(p);
-+
-+		if ((wake_flags & WF_CURRENT_CPU) &&
-+		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
-+			cpu = smp_processor_id();
-+		else
-+			cpu = select_task_rq(p);
-+
-+		if (cpu != task_cpu(p)) {
-+			if (p->in_iowait) {
-+				delayacct_blkio_end(p);
-+				atomic_dec(&task_rq(p)->nr_iowait);
-+			}
-+
-+			wake_flags |= WF_MIGRATED;
-+			set_task_cpu(p, cpu);
-+		}
-+#else
-+		sched_task_ttwu(p);
-+
-+		cpu = task_cpu(p);
-+#endif /* CONFIG_SMP */
-+
-+		ttwu_queue(p, cpu, wake_flags);
-+	}
-+out:
-+	if (success)
-+		ttwu_stat(p, task_cpu(p), wake_flags);
-+
-+	return success;
-+}
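-+/*
-+ * For reference, a minimal sketch of the waiter side that the ordering
-+ * above pairs with (the condition name is illustrative only): the
-+ * smp_store_mb() in set_current_state() pairs with the
-+ * smp_mb__after_spinlock() in try_to_wake_up().
-+ *
-+ *	for (;;) {
-+ *		set_current_state(TASK_UNINTERRUPTIBLE);
-+ *		if (READ_ONCE(condition))
-+ *			break;
-+ *		schedule();
-+ *	}
-+ *	__set_current_state(TASK_RUNNING);
-+ */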
-+
-+static bool __task_needs_rq_lock(struct task_struct *p)
-+{
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/*
-+	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
-+	 * the task is blocked. Make sure to check @state since ttwu() can drop
-+	 * locks at the end, see ttwu_queue_wakelist().
-+	 */
-+	if (state == TASK_RUNNING || state == TASK_WAKING)
-+		return true;
-+
-+	/*
-+	 * Ensure we load p->on_rq after p->__state, otherwise it would be
-+	 * possible to, falsely, observe p->on_rq == 0.
-+	 *
-+	 * See try_to_wake_up() for a longer comment.
-+	 */
-+	smp_rmb();
-+	if (p->on_rq)
-+		return true;
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * Ensure the task has finished __schedule() and will not be referenced
-+	 * anymore. Again, see try_to_wake_up() for a longer comment.
-+	 */
-+	smp_rmb();
-+	smp_cond_load_acquire(&p->on_cpu, !VAL);
-+#endif
-+
-+	return false;
-+}
-+
-+/**
-+ * task_call_func - Invoke a function on task in fixed state
-+ * @p: Process for which the function is to be invoked, can be @current.
-+ * @func: Function to invoke.
-+ * @arg: Argument to function.
-+ *
-+ * Fix the task in its current state by avoiding wakeups and/or rq operations
-+ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
-+ * to work out what the state is, if required.  Given that @func can be invoked
-+ * with a runqueue lock held, it had better be quite lightweight.
-+ *
-+ * Returns:
-+ *   Whatever @func returns
-+ */
-+int task_call_func(struct task_struct *p, task_call_f func, void *arg)
-+{
-+	struct rq *rq = NULL;
-+	struct rq_flags rf;
-+	int ret;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
-+
-+	if (__task_needs_rq_lock(p))
-+		rq = __task_rq_lock(p, &rf);
-+
-+	/*
-+	 * At this point the task is pinned; either:
-+	 *  - blocked and we're holding off wakeups      (pi->lock)
-+	 *  - woken, and we're holding off enqueue       (rq->lock)
-+	 *  - queued, and we're holding off schedule     (rq->lock)
-+	 *  - running, and we're holding off de-schedule (rq->lock)
-+	 *
-+	 * The called function (@func) can use: task_curr(), p->on_rq and
-+	 * p->__state to differentiate between these states.
-+	 */
-+	ret = func(p, arg);
-+
-+	if (rq)
-+		__task_rq_unlock(rq, &rf);
-+
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
-+	return ret;
-+}
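-+/*
-+ * Example use of task_call_func(), with a made-up callback name; the
-+ * callback runs with the task pinned, so reading p->__state is stable:
-+ *
-+ *	static int get_task_state(struct task_struct *p, void *arg)
-+ *	{
-+ *		return READ_ONCE(p->__state);
-+ *	}
-+ *
-+ *	state = task_call_func(p, get_task_state, NULL);
-+ */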
-+
-+/**
-+ * cpu_curr_snapshot - Return a snapshot of the currently running task
-+ * @cpu: The CPU on which to snapshot the task.
-+ *
-+ * Returns the task_struct pointer of the task "currently" running on
-+ * the specified CPU.  If the same task is running on that CPU throughout,
-+ * the return value will be a pointer to that task's task_struct structure.
-+ * If the CPU did any context switches even vaguely concurrently with the
-+ * execution of this function, the return value will be a pointer to the
-+ * task_struct structure of a randomly chosen task that was running on
-+ * that CPU somewhere around the time that this function was executing.
-+ *
-+ * If the specified CPU was offline, the return value is whatever it
-+ * is, perhaps a pointer to the task_struct structure of that CPU's idle
-+ * task, but there is no guarantee.  Callers wishing a useful return
-+ * value must take some action to ensure that the specified CPU remains
-+ * online throughout.
-+ *
-+ * This function executes full memory barriers before and after fetching
-+ * the pointer, which permits the caller to confine this function's fetch
-+ * with respect to the caller's accesses to other shared variables.
-+ */
-+struct task_struct *cpu_curr_snapshot(int cpu)
-+{
-+	struct task_struct *t;
-+
-+	smp_mb(); /* Pairing determined by caller's synchronization design. */
-+	t = rcu_dereference(cpu_curr(cpu));
-+	smp_mb(); /* Pairing determined by caller's synchronization design. */
-+	return t;
-+}
-+
-+/**
-+ * wake_up_process - Wake up a specific process
-+ * @p: The process to be woken up.
-+ *
-+ * Attempt to wake up the nominated process and move it to the set of runnable
-+ * processes.
-+ *
-+ * Return: 1 if the process was woken up, 0 if it was already running.
-+ *
-+ * This function executes a full memory barrier before accessing the task state.
-+ */
-+int wake_up_process(struct task_struct *p)
-+{
-+	return try_to_wake_up(p, TASK_NORMAL, 0);
-+}
-+EXPORT_SYMBOL(wake_up_process);
-+
-+int wake_up_state(struct task_struct *p, unsigned int state)
-+{
-+	return try_to_wake_up(p, state, 0);
-+}
-+
-+/*
-+ * Perform scheduler related setup for a newly forked process p.
-+ * p is forked by current.
-+ *
-+ * __sched_fork() is basic setup used by init_idle() too:
-+ */
-+static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	p->on_rq			= 0;
-+	p->on_cpu			= 0;
-+	p->utime			= 0;
-+	p->stime			= 0;
-+	p->sched_time			= 0;
-+
-+#ifdef CONFIG_SCHEDSTATS
-+	/* Even if schedstat is disabled, there should not be garbage */
-+	memset(&p->stats, 0, sizeof(p->stats));
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+	INIT_HLIST_HEAD(&p->preempt_notifiers);
-+#endif
-+
-+#ifdef CONFIG_COMPACTION
-+	p->capture_control = NULL;
-+#endif
-+#ifdef CONFIG_SMP
-+	p->wake_entry.u_flags = CSD_TYPE_TTWU;
-+#endif
-+	init_sched_mm_cid(p);
-+}
-+
-+/*
-+ * fork()/clone()-time setup:
-+ */
-+int sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+	__sched_fork(clone_flags, p);
-+	/*
-+	 * We mark the process as NEW here. This guarantees that
-+	 * nobody will actually run it, and a signal or other external
-+	 * event cannot wake it up and insert it on the runqueue either.
-+	 */
-+	p->__state = TASK_NEW;
-+
-+	/*
-+	 * Make sure we do not leak PI boosting priority to the child.
-+	 */
-+	p->prio = current->normal_prio;
-+
-+	/*
-+	 * Revert to default priority/policy on fork if requested.
-+	 */
-+	if (unlikely(p->sched_reset_on_fork)) {
-+		if (task_has_rt_policy(p)) {
-+			p->policy = SCHED_NORMAL;
-+			p->static_prio = NICE_TO_PRIO(0);
-+			p->rt_priority = 0;
-+		} else if (PRIO_TO_NICE(p->static_prio) < 0)
-+			p->static_prio = NICE_TO_PRIO(0);
-+
-+		p->prio = p->normal_prio = p->static_prio;
-+
-+		/*
-+		 * We don't need the reset flag anymore after the fork. It has
-+		 * fulfilled its duty:
-+		 */
-+		p->sched_reset_on_fork = 0;
-+	}
-+
-+#ifdef CONFIG_SCHED_INFO
-+	if (unlikely(sched_info_on()))
-+		memset(&p->sched_info, 0, sizeof(p->sched_info));
-+#endif
-+	init_task_preempt_count(p);
-+
-+	return 0;
-+}
-+
-+void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	/*
-+	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
-+	 * required yet, but lockdep gets upset if rules are violated.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	/*
-+	 * Share the timeslice between parent and child, so that the
-+	 * total amount of pending timeslices in the system doesn't change,
-+	 * resulting in more scheduling fairness.
-+	 */
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	rq->curr->time_slice /= 2;
-+	p->time_slice = rq->curr->time_slice;
-+#ifdef CONFIG_SCHED_HRTICK
-+	hrtick_start(rq, rq->curr->time_slice);
-+#endif
-+
-+	if (p->time_slice < RESCHED_NS) {
-+		p->time_slice = sysctl_sched_base_slice;
-+		resched_curr(rq);
-+	}
-+	sched_task_fork(p, rq);
-+	raw_spin_unlock(&rq->lock);
-+
-+	rseq_migrate(p);
-+	/*
-+	 * We're setting the CPU for the first time, we don't migrate,
-+	 * so use __set_task_cpu().
-+	 */
-+	__set_task_cpu(p, smp_processor_id());
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
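-+/*
-+ * Worked example of the split above: if the parent has 4ms of time
-+ * slice left at fork time, parent and child each continue with 2ms.
-+ * Should the halved slice fall below RESCHED_NS, the child's slice is
-+ * refilled from sysctl_sched_base_slice and the current task is marked
-+ * for rescheduling.
-+ */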
-+
-+void sched_post_fork(struct task_struct *p)
-+{
-+}
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+DEFINE_STATIC_KEY_FALSE(sched_schedstats);
-+
-+static void set_schedstats(bool enabled)
-+{
-+	if (enabled)
-+		static_branch_enable(&sched_schedstats);
-+	else
-+		static_branch_disable(&sched_schedstats);
-+}
-+
-+void force_schedstat_enabled(void)
-+{
-+	if (!schedstat_enabled()) {
-+		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
-+		static_branch_enable(&sched_schedstats);
-+	}
-+}
-+
-+static int __init setup_schedstats(char *str)
-+{
-+	int ret = 0;
-+	if (!str)
-+		goto out;
-+
-+	if (!strcmp(str, "enable")) {
-+		set_schedstats(true);
-+		ret = 1;
-+	} else if (!strcmp(str, "disable")) {
-+		set_schedstats(false);
-+		ret = 1;
-+	}
-+out:
-+	if (!ret)
-+		pr_warn("Unable to parse schedstats=\n");
-+
-+	return ret;
-+}
-+__setup("schedstats=", setup_schedstats);
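-+/*
-+ * E.g. booting with "schedstats=enable" turns the static branch on from
-+ * the kernel command line; any other value trips the pr_warn() above.
-+ */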
-+
-+#ifdef CONFIG_PROC_SYSCTL
-+static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
-+		size_t *lenp, loff_t *ppos)
-+{
-+	struct ctl_table t;
-+	int err;
-+	int state = static_branch_likely(&sched_schedstats);
-+
-+	if (write && !capable(CAP_SYS_ADMIN))
-+		return -EPERM;
-+
-+	t = *table;
-+	t.data = &state;
-+	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
-+	if (err < 0)
-+		return err;
-+	if (write)
-+		set_schedstats(state);
-+	return err;
-+}
-+
-+static struct ctl_table sched_core_sysctls[] = {
-+	{
-+		.procname       = "sched_schedstats",
-+		.data           = NULL,
-+		.maxlen         = sizeof(unsigned int),
-+		.mode           = 0644,
-+		.proc_handler   = sysctl_schedstats,
-+		.extra1         = SYSCTL_ZERO,
-+		.extra2         = SYSCTL_ONE,
-+	},
-+	{}
-+};
-+static int __init sched_core_sysctl_init(void)
-+{
-+	register_sysctl_init("kernel", sched_core_sysctls);
-+	return 0;
-+}
-+late_initcall(sched_core_sysctl_init);
-+#endif /* CONFIG_PROC_SYSCTL */
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+/*
-+ * wake_up_new_task - wake up a newly created task for the first time.
-+ *
-+ * This function will do some initial scheduler statistics housekeeping
-+ * that must be done for every newly created context, then puts the task
-+ * on the runqueue and wakes it.
-+ */
-+void wake_up_new_task(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	WRITE_ONCE(p->__state, TASK_RUNNING);
-+	rq = cpu_rq(select_task_rq(p));
-+#ifdef CONFIG_SMP
-+	rseq_migrate(p);
-+	/*
-+	 * Fork balancing, do it here and not earlier because:
-+	 * - cpus_ptr can change in the fork path
-+	 * - any previously selected CPU might disappear through hotplug
-+	 *
-+	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
-+	 * as we're not fully set-up yet.
-+	 */
-+	__set_task_cpu(p, cpu_of(rq));
-+#endif
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	activate_task(p, rq);
-+	trace_sched_wakeup_new(p);
-+	wakeup_preempt(rq);
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+
-+static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
-+
-+void preempt_notifier_inc(void)
-+{
-+	static_branch_inc(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_inc);
-+
-+void preempt_notifier_dec(void)
-+{
-+	static_branch_dec(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_dec);
-+
-+/**
-+ * preempt_notifier_register - tell me when current is being preempted & rescheduled
-+ * @notifier: notifier struct to register
-+ */
-+void preempt_notifier_register(struct preempt_notifier *notifier)
-+{
-+	if (!static_branch_unlikely(&preempt_notifier_key))
-+		WARN(1, "registering preempt_notifier while notifiers disabled\n");
-+
-+	hlist_add_head(&notifier->link, &current->preempt_notifiers);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_register);
-+
-+/**
-+ * preempt_notifier_unregister - no longer interested in preemption notifications
-+ * @notifier: notifier struct to unregister
-+ *
-+ * This is *not* safe to call from within a preemption notifier.
-+ */
-+void preempt_notifier_unregister(struct preempt_notifier *notifier)
-+{
-+	hlist_del(&notifier->link);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
-+
-+static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_in(notifier, raw_smp_processor_id());
-+}
-+
-+static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_in_preempt_notifiers(curr);
-+}
-+
-+static void
-+__fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				   struct task_struct *next)
-+{
-+	struct preempt_notifier *notifier;
-+
-+	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+		notifier->ops->sched_out(notifier, next);
-+}
-+
-+static __always_inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+	if (static_branch_unlikely(&preempt_notifier_key))
-+		__fire_sched_out_preempt_notifiers(curr, next);
-+}
-+
-+#else /* !CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+}
-+
-+static inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+				 struct task_struct *next)
-+{
-+}
-+
-+#endif /* CONFIG_PREEMPT_NOTIFIERS */
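-+/*
-+ * A minimal, hypothetical consumer of the notifier API above (ops and
-+ * callback names are illustrative; preempt_notifier_init() comes from
-+ * <linux/preempt.h>):
-+ *
-+ *	static void my_sched_in(struct preempt_notifier *n, int cpu) { }
-+ *	static void my_sched_out(struct preempt_notifier *n,
-+ *				 struct task_struct *next) { }
-+ *
-+ *	static struct preempt_notifier_ops my_ops = {
-+ *		.sched_in	= my_sched_in,
-+ *		.sched_out	= my_sched_out,
-+ *	};
-+ *
-+ *	preempt_notifier_inc();
-+ *	preempt_notifier_init(&my_notifier, &my_ops);
-+ *	preempt_notifier_register(&my_notifier);
-+ */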
-+
-+static inline void prepare_task(struct task_struct *next)
-+{
-+	/*
-+	 * Claim the task as running, we do this before switching to it
-+	 * such that any running task will have this set.
-+	 *
-+	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
-+	 * its ordering comment.
-+	 */
-+	WRITE_ONCE(next->on_cpu, 1);
-+}
-+
-+static inline void finish_task(struct task_struct *prev)
-+{
-+#ifdef CONFIG_SMP
-+	/*
-+	 * This must be the very last reference to @prev from this CPU. After
-+	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
-+	 * must ensure this doesn't happen until the switch is completely
-+	 * finished.
-+	 *
-+	 * In particular, the load of prev->state in finish_task_switch() must
-+	 * happen before this.
-+	 *
-+	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
-+	 */
-+	smp_store_release(&prev->on_cpu, 0);
-+#else
-+	prev->on_cpu = 0;
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+	void (*func)(struct rq *rq);
-+	struct balance_callback *next;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	while (head) {
-+		func = (void (*)(struct rq *))head->func;
-+		next = head->next;
-+		head->next = NULL;
-+		head = next;
-+
-+		func(rq);
-+	}
-+}
-+
-+static void balance_push(struct rq *rq);
-+
-+/*
-+ * balance_push_callback is a right abuse of the callback interface and plays
-+ * by significantly different rules.
-+ *
-+ * Where the normal balance_callback's purpose is to be run in the same context
-+ * that queued it (only later, when it's safe to drop rq->lock again),
-+ * balance_push_callback is specifically targeted at __schedule().
-+ *
-+ * This abuse is tolerated because it places all the unlikely/odd cases behind
-+ * a single test, namely: rq->balance_callback == NULL.
-+ */
-+struct balance_callback balance_push_callback = {
-+	.next = NULL,
-+	.func = balance_push,
-+};
-+
-+static inline struct balance_callback *
-+__splice_balance_callbacks(struct rq *rq, bool split)
-+{
-+	struct balance_callback *head = rq->balance_callback;
-+
-+	if (likely(!head))
-+		return NULL;
-+
-+	lockdep_assert_rq_held(rq);
-+	/*
-+	 * Must not take balance_push_callback off the list when
-+	 * splice_balance_callbacks() and balance_callbacks() are not
-+	 * in the same rq->lock section.
-+	 *
-+	 * In that case it would be possible for __schedule() to interleave
-+	 * and observe the list empty.
-+	 */
-+	if (split && head == &balance_push_callback)
-+		head = NULL;
-+	else
-+		rq->balance_callback = NULL;
-+
-+	return head;
-+}
-+
-+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+	return __splice_balance_callbacks(rq, true);
-+}
-+
-+static void __balance_callbacks(struct rq *rq)
-+{
-+	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+	unsigned long flags;
-+
-+	if (unlikely(head)) {
-+		raw_spin_lock_irqsave(&rq->lock, flags);
-+		do_balance_callbacks(rq, head);
-+		raw_spin_unlock_irqrestore(&rq->lock, flags);
-+	}
-+}
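-+/*
-+ * Typical usage of the pair above, sketched (unlock details vary by
-+ * call site): splice under rq->lock, run the callbacks once the lock
-+ * has been dropped:
-+ *
-+ *	head = splice_balance_callbacks(rq);	// rq->lock held
-+ *	raw_spin_unlock(&rq->lock);
-+ *	balance_callbacks(rq, head);		// re-takes rq->lock if needed
-+ */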
-+
-+#else
-+
-+static inline void __balance_callbacks(struct rq *rq)
-+{
-+}
-+
-+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+	return NULL;
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+}
-+
-+#endif
-+
-+static inline void
-+prepare_lock_switch(struct rq *rq, struct task_struct *next)
-+{
-+	/*
-+	 * The runqueue lock will be released by the next
-+	 * task (which is an invalid locking op but in the case
-+	 * of the scheduler it's an obvious special-case), so we
-+	 * do an early lockdep release here:
-+	 */
-+	spin_release(&rq->lock.dep_map, _THIS_IP_);
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+	/* this is a valid case when another task releases the spinlock */
-+	rq->lock.owner = next;
-+#endif
-+}
-+
-+static inline void finish_lock_switch(struct rq *rq)
-+{
-+	/*
-+	 * If we are tracking spinlock dependencies then we have to
-+	 * fix up the runqueue lock - which gets 'carried over' from
-+	 * prev into current:
-+	 */
-+	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
-+	__balance_callbacks(rq);
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+/*
-+ * NOP if the arch has not defined these:
-+ */
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next)	do { } while (0)
-+#endif
-+
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch()	do { } while (0)
-+#endif
-+
-+static inline void kmap_local_sched_out(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_out();
-+#endif
-+}
-+
-+static inline void kmap_local_sched_in(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+	if (unlikely(current->kmap_ctrl.idx))
-+		__kmap_local_sched_in();
-+#endif
-+}
-+
-+/**
-+ * prepare_task_switch - prepare to switch tasks
-+ * @rq: the runqueue preparing to switch
-+ * @next: the task we are going to switch to.
-+ *
-+ * This is called with the rq lock held and interrupts off. It must
-+ * be paired with a subsequent finish_task_switch after the context
-+ * switch.
-+ *
-+ * prepare_task_switch sets up locking and calls architecture specific
-+ * hooks.
-+ */
-+static inline void
-+prepare_task_switch(struct rq *rq, struct task_struct *prev,
-+		    struct task_struct *next)
-+{
-+	kcov_prepare_switch(prev);
-+	sched_info_switch(rq, prev, next);
-+	perf_event_task_sched_out(prev, next);
-+	rseq_preempt(prev);
-+	fire_sched_out_preempt_notifiers(prev, next);
-+	kmap_local_sched_out();
-+	prepare_task(next);
-+	prepare_arch_switch(next);
-+}
-+
-+/**
-+ * finish_task_switch - clean up after a task-switch
-+ * @rq: runqueue associated with task-switch
-+ * @prev: the thread we just switched away from.
-+ *
-+ * finish_task_switch must be called after the context switch, paired
-+ * with a prepare_task_switch call before the context switch.
-+ * finish_task_switch will reconcile locking set up by prepare_task_switch,
-+ * and do any other architecture-specific cleanup actions.
-+ *
-+ * Note that we may have delayed dropping an mm in context_switch(). If
-+ * so, we finish that here outside of the runqueue lock.  (Doing it
-+ * with the lock held can cause deadlocks; see schedule() for
-+ * details.)
-+ *
-+ * The context switch has flipped the stack from under us and restored the
-+ * local variables which were saved when this task called schedule() in the
-+ * past. prev == current is still correct but we need to recalculate this_rq
-+ * because prev may have moved to another CPU.
-+ */
-+static struct rq *finish_task_switch(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	struct rq *rq = this_rq();
-+	struct mm_struct *mm = rq->prev_mm;
-+	unsigned int prev_state;
-+
-+	/*
-+	 * The previous task will have left us with a preempt_count of 2
-+	 * because it left us after:
-+	 *
-+	 *	schedule()
-+	 *	  preempt_disable();			// 1
-+	 *	  __schedule()
-+	 *	    raw_spin_lock_irq(&rq->lock)	// 2
-+	 *
-+	 * Also, see FORK_PREEMPT_COUNT.
-+	 */
-+	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
-+		      "corrupted preempt_count: %s/%d/0x%x\n",
-+		      current->comm, current->pid, preempt_count()))
-+		preempt_count_set(FORK_PREEMPT_COUNT);
-+
-+	rq->prev_mm = NULL;
-+
-+	/*
-+	 * A task struct has one reference for the use as "current".
-+	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
-+	 * schedule one last time. The schedule call will never return, and
-+	 * the scheduled task must drop that reference.
-+	 *
-+	 * We must observe prev->state before clearing prev->on_cpu (in
-+	 * finish_task), otherwise a concurrent wakeup can get prev
-+	 * running on another CPU and we could race with its RUNNING -> DEAD
-+	 * transition, resulting in a double drop.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	vtime_task_switch(prev);
-+	perf_event_task_sched_in(prev, current);
-+	finish_task(prev);
-+	tick_nohz_task_switch();
-+	finish_lock_switch(rq);
-+	finish_arch_post_lock_switch();
-+	kcov_finish_switch(current);
-+	/*
-+	 * kmap_local_sched_out() is invoked with rq::lock held and
-+	 * interrupts disabled. There is no requirement for that, but the
-+	 * sched out code does not have an interrupt enabled section.
-+	 * Restoring the maps on sched in does not require interrupts being
-+	 * disabled either.
-+	 */
-+	kmap_local_sched_in();
-+
-+	fire_sched_in_preempt_notifiers(current);
-+	/*
-+	 * When switching through a kernel thread, the loop in
-+	 * membarrier_{private,global}_expedited() may have observed that
-+	 * kernel thread and not issued an IPI. It is therefore possible to
-+	 * schedule between user->kernel->user threads without passing though
-+	 * switch_mm(). Membarrier requires a barrier after storing to
-+	 * rq->curr, before returning to userspace, so provide them here:
-+	 *
-+	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
-+	 *   provided by mmdrop(),
-+	 * - a sync_core for SYNC_CORE.
-+	 */
-+	if (mm) {
-+		membarrier_mm_sync_core_before_usermode(mm);
-+		mmdrop_sched(mm);
-+	}
-+	if (unlikely(prev_state == TASK_DEAD)) {
-+		/* Task is done with its stack. */
-+		put_task_stack(prev);
-+
-+		put_task_struct_rcu_user(prev);
-+	}
-+
-+	return rq;
-+}
-+
-+/**
-+ * schedule_tail - first thing a freshly forked thread must call.
-+ * @prev: the thread we just switched away from.
-+ */
-+asmlinkage __visible void schedule_tail(struct task_struct *prev)
-+	__releases(rq->lock)
-+{
-+	/*
-+	 * New tasks start with FORK_PREEMPT_COUNT, see there and
-+	 * finish_task_switch() for details.
-+	 *
-+	 * finish_task_switch() will drop rq->lock() and lower preempt_count
-+	 * and the preempt_enable() will end up enabling preemption (on
-+	 * PREEMPT_COUNT kernels).
-+	 */
-+
-+	finish_task_switch(prev);
-+	preempt_enable();
-+
-+	if (current->set_child_tid)
-+		put_user(task_pid_vnr(current), current->set_child_tid);
-+
-+	calculate_sigpending();
-+}
-+
-+/*
-+ * context_switch - switch to the new MM and the new thread's register state.
-+ */
-+static __always_inline struct rq *
-+context_switch(struct rq *rq, struct task_struct *prev,
-+	       struct task_struct *next)
-+{
-+	prepare_task_switch(rq, prev, next);
-+
-+	/*
-+	 * For paravirt, this is coupled with an exit in switch_to to
-+	 * combine the page table reload and the switch backend into
-+	 * one hypercall.
-+	 */
-+	arch_start_context_switch(prev);
-+
-+	/*
-+	 * kernel -> kernel   lazy + transfer active
-+	 *   user -> kernel   lazy + mmgrab() active
-+	 *
-+	 * kernel ->   user   switch + mmdrop() active
-+	 *   user ->   user   switch
-+	 *
-+	 * switch_mm_cid() needs to be updated if the barriers provided
-+	 * by context_switch() are modified.
-+	 */
-+	if (!next->mm) {                                // to kernel
-+		enter_lazy_tlb(prev->active_mm, next);
-+
-+		next->active_mm = prev->active_mm;
-+		if (prev->mm)                           // from user
-+			mmgrab(prev->active_mm);
-+		else
-+			prev->active_mm = NULL;
-+	} else {                                        // to user
-+		membarrier_switch_mm(rq, prev->active_mm, next->mm);
-+		/*
-+		 * sys_membarrier() requires an smp_mb() between setting
-+		 * rq->curr / membarrier_switch_mm() and returning to userspace.
-+		 *
-+		 * The below provides this either through switch_mm(), or in
-+		 * case 'prev->active_mm == next->mm' through
-+		 * finish_task_switch()'s mmdrop().
-+		 */
-+		switch_mm_irqs_off(prev->active_mm, next->mm, next);
-+		lru_gen_use_mm(next->mm);
-+
-+		if (!prev->mm) {                        // from kernel
-+			/* will mmdrop() in finish_task_switch(). */
-+			rq->prev_mm = prev->active_mm;
-+			prev->active_mm = NULL;
-+		}
-+	}
-+
-+	/* switch_mm_cid() requires the memory barriers above. */
-+	switch_mm_cid(rq, prev, next);
-+
-+	prepare_lock_switch(rq, next);
-+
-+	/* Here we just switch the register state and the stack. */
-+	switch_to(prev, next, prev);
-+	barrier();
-+
-+	return finish_task_switch(prev);
-+}
-+
-+/*
-+ * nr_running, nr_uninterruptible and nr_context_switches:
-+ *
-+ * externally visible scheduler statistics: current number of runnable
-+ * threads, total number of context switches performed since bootup.
-+ */
-+unsigned int nr_running(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_online_cpu(i)
-+		sum += cpu_rq(i)->nr_running;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Check if only the current task is running on the CPU.
-+ *
-+ * Caution: this function does not check that the caller has disabled
-+ * preemption, thus the result might have a time-of-check-to-time-of-use
-+ * race.  The caller is responsible for using it correctly, for example:
-+ *
-+ * - from a non-preemptible section (of course)
-+ *
-+ * - from a thread that is bound to a single CPU
-+ *
-+ * - in a loop with very short iterations (e.g. a polling loop)
-+ */
-+bool single_task_running(void)
-+{
-+	return raw_rq()->nr_running == 1;
-+}
-+EXPORT_SYMBOL(single_task_running);
-+
-+unsigned long long nr_context_switches_cpu(int cpu)
-+{
-+	return cpu_rq(cpu)->nr_switches;
-+}
-+
-+unsigned long long nr_context_switches(void)
-+{
-+	int i;
-+	unsigned long long sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += cpu_rq(i)->nr_switches;
-+
-+	return sum;
-+}
-+
-+/*
-+ * Consumers of these two interfaces, like for example the cpuidle menu
-+ * governor, are using nonsensical data: they prefer shallow idle state
-+ * selection for a CPU that has IO-wait, even though that CPU might not
-+ * even end up running the task when it does become runnable.
-+ */
-+
-+unsigned int nr_iowait_cpu(int cpu)
-+{
-+	return atomic_read(&cpu_rq(cpu)->nr_iowait);
-+}
-+
-+/*
-+ * IO-wait accounting, and how it's mostly bollocks (on SMP).
-+ *
-+ * The idea behind IO-wait accounting is to account the idle time that we could
-+ * have spent running if it were not for IO. That is, if we were to improve the
-+ * storage performance, we'd have a proportional reduction in IO-wait time.
-+ *
-+ * This all works nicely on UP, where, when a task blocks on IO, we account
-+ * idle time as IO-wait, because if the storage were faster, it could've been
-+ * running and we'd not be idle.
-+ *
-+ * This has been extended to SMP, by doing the same for each CPU. This however
-+ * is broken.
-+ *
-+ * Imagine for instance the case where two tasks block on one CPU; only that
-+ * CPU will have IO-wait accounted, while the other has regular idle. Even
-+ * though, if the storage were faster, both could have run at the same time,
-+ * utilising both CPUs.
-+ *
-+ * This means that, when looking globally, the current IO-wait accounting on
-+ * SMP is a lower bound, by reason of under-accounting.
-+ *
-+ * Worse, since the numbers are provided per CPU, they are sometimes
-+ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
-+ * associated with any one particular CPU; it can wake on a different CPU than
-+ * the one it blocked on. This means the per-CPU IO-wait number is meaningless.
-+ *
-+ * Task CPU affinities can make all that even more 'interesting'.
-+ */
-+
-+unsigned int nr_iowait(void)
-+{
-+	unsigned int i, sum = 0;
-+
-+	for_each_possible_cpu(i)
-+		sum += nr_iowait_cpu(i);
-+
-+	return sum;
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+/*
-+ * sched_exec - execve() is a valuable balancing opportunity, because at
-+ * this point the task has the smallest effective memory and cache
-+ * footprint.
-+ */
-+void sched_exec(void)
-+{
-+}
-+
-+#endif
-+
-+DEFINE_PER_CPU(struct kernel_stat, kstat);
-+DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
-+
-+EXPORT_PER_CPU_SYMBOL(kstat);
-+EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
-+
-+static inline void update_curr(struct rq *rq, struct task_struct *p)
-+{
-+	s64 ns = rq->clock_task - p->last_ran;
-+
-+	p->sched_time += ns;
-+	cgroup_account_cputime(p, ns);
-+	account_group_exec_runtime(p, ns);
-+
-+	p->time_slice -= ns;
-+	p->last_ran = rq->clock_task;
-+}
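-+/*
-+ * Example of the accounting above: if rq->clock_task has advanced by
-+ * 1,000,000ns since p->last_ran, update_curr() charges 1ms of runtime
-+ * to p and shrinks p->time_slice by the same amount; once the slice
-+ * drops below RESCHED_NS the task gets rescheduled (see
-+ * scheduler_task_tick() and check_curr()).
-+ */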
-+
-+/*
-+ * Return accounted runtime for the task.
-+ * If the task is currently running, this also includes its pending
-+ * runtime, which has not been accounted yet.
-+ */
-+unsigned long long task_sched_runtime(struct task_struct *p)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+	u64 ns;
-+
-+#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
-+	/*
-+	 * 64-bit doesn't need locks to atomically read a 64-bit value.
-+	 * So we have an optimization opportunity when the task's delta_exec is 0.
-+	 * Reading ->on_cpu is racy, but this is ok.
-+	 *
-+	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
-+	 * If we race with it entering CPU, unaccounted time is 0. This is
-+	 * indistinguishable from the read occurring a few cycles earlier.
-+	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
-+	 * been accounted, so we're correct here as well.
-+	 */
-+	if (!p->on_cpu || !task_on_rq_queued(p))
-+		return tsk_seruntime(p);
-+#endif
-+
-+	rq = task_access_lock_irqsave(p, &lock, &flags);
-+	/*
-+	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
-+	 * project cycles that may never be accounted to this
-+	 * thread, breaking clock_gettime().
-+	 */
-+	if (p == rq->curr && task_on_rq_queued(p)) {
-+		update_rq_clock(rq);
-+		update_curr(rq, p);
-+	}
-+	ns = tsk_seruntime(p);
-+	task_access_unlock_irqrestore(p, lock, &flags);
-+
-+	return ns;
-+}
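-+/*
-+ * This is, among other things, what backs
-+ * clock_gettime(CLOCK_THREAD_CPUTIME_ID) via the POSIX CPU timer code.
-+ * A user-space sketch of observing it:
-+ *
-+ *	struct timespec ts;
-+ *	clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
-+ */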
-+
-+/* This manages tasks that have run out of timeslice during a scheduler_tick */
-+static inline void scheduler_task_tick(struct rq *rq)
-+{
-+	struct task_struct *p = rq->curr;
-+
-+	if (is_idle_task(p))
-+		return;
-+
-+	update_curr(rq, p);
-+	cpufreq_update_util(rq, 0);
-+
-+	/*
-+	 * Tasks with less than RESCHED_NS of time slice left will be
-+	 * rescheduled.
-+	 */
-+	if (p->time_slice >= RESCHED_NS)
-+		return;
-+	set_tsk_need_resched(p);
-+	set_preempt_need_resched();
-+}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+static u64 cpu_resched_latency(struct rq *rq)
-+{
-+	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
-+	u64 resched_latency, now = rq_clock(rq);
-+	static bool warned_once;
-+
-+	if (sysctl_resched_latency_warn_once && warned_once)
-+		return 0;
-+
-+	if (!need_resched() || !latency_warn_ms)
-+		return 0;
-+
-+	if (system_state == SYSTEM_BOOTING)
-+		return 0;
-+
-+	if (!rq->last_seen_need_resched_ns) {
-+		rq->last_seen_need_resched_ns = now;
-+		rq->ticks_without_resched = 0;
-+		return 0;
-+	}
-+
-+	rq->ticks_without_resched++;
-+	resched_latency = now - rq->last_seen_need_resched_ns;
-+	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
-+		return 0;
-+
-+	warned_once = true;
-+
-+	return resched_latency;
-+}
-+
-+static int __init setup_resched_latency_warn_ms(char *str)
-+{
-+	long val;
-+
-+	if ((kstrtol(str, 0, &val))) {
-+		pr_warn("Unable to set resched_latency_warn_ms\n");
-+		return 1;
-+	}
-+
-+	sysctl_resched_latency_warn_ms = val;
-+	return 1;
-+}
-+__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
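-+/*
-+ * E.g. booting with "resched_latency_warn_ms=100" sets the warning
-+ * threshold to 100ms (the value is parsed by kstrtol() above).
-+ */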
-+#else
-+static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+/*
-+ * This function gets called by the timer code, with HZ frequency.
-+ * We call it with interrupts disabled.
-+ */
-+void scheduler_tick(void)
-+{
-+	int cpu __maybe_unused = smp_processor_id();
-+	struct rq *rq = cpu_rq(cpu);
-+	struct task_struct *curr = rq->curr;
-+	u64 resched_latency;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		arch_scale_freq_tick();
-+
-+	sched_clock_tick();
-+
-+	raw_spin_lock(&rq->lock);
-+	update_rq_clock(rq);
-+
-+	scheduler_task_tick(rq);
-+	if (sched_feat(LATENCY_WARN))
-+		resched_latency = cpu_resched_latency(rq);
-+	calc_global_load_tick(rq);
-+
-+	task_tick_mm_cid(rq, rq->curr);
-+
-+	raw_spin_unlock(&rq->lock);
-+
-+	if (sched_feat(LATENCY_WARN) && resched_latency)
-+		resched_latency_warn(cpu, resched_latency);
-+
-+	perf_event_task_tick();
-+
-+	if (curr->flags & PF_WQ_WORKER)
-+		wq_worker_tick(curr);
-+}
-+
-+#ifdef CONFIG_SCHED_SMT
-+static inline int sg_balance_cpu_stop(void *data)
-+{
-+	struct rq *rq = this_rq();
-+	struct task_struct *p = data;
-+	cpumask_t tmp;
-+	unsigned long flags;
-+
-+	local_irq_save(flags);
-+
-+	raw_spin_lock(&p->pi_lock);
-+	raw_spin_lock(&rq->lock);
-+
-+	rq->active_balance = 0;
-+	/* _something_ may have changed the task, double check again */
-+	if (task_on_rq_queued(p) && task_rq(p) == rq &&
-+	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
-+	    !is_migration_disabled(p)) {
-+		int cpu = cpu_of(rq);
-+		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
-+		rq = move_queued_task(rq, p, dcpu);
-+	}
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock(&p->pi_lock);
-+
-+	local_irq_restore(flags);
-+
-+	return 0;
-+}
-+
-+/* sg_balance_trigger - trigger sibling group balance for @cpu */
-+static inline int sg_balance_trigger(const int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+	struct task_struct *curr;
-+	int res;
-+
-+	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-+		return 0;
-+	curr = rq->curr;
-+	res = (!is_idle_task(curr)) && (1 == rq->nr_running) &&
-+	      cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&
-+	      !is_migration_disabled(curr) && (!rq->active_balance);
-+
-+	if (res)
-+		rq->active_balance = 1;
-+
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	if (res)
-+		stop_one_cpu_nowait(cpu, sg_balance_cpu_stop, curr,
-+				    &rq->active_balance_work);
-+	return res;
-+}
-+
-+/*
-+ * sg_balance - sibling group balance check for run queue @rq
-+ */
-+static inline void sg_balance(struct rq *rq, int cpu)
-+{
-+	cpumask_t chk;
-+
-+	/* exit when cpu is offline */
-+	if (unlikely(!rq->online))
-+		return;
-+
-+	/*
-+	 * Only a CPU in the sibling idle group does the checking, and then
-+	 * finds potential CPUs to which the currently running task can migrate
-+	 */
-+	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
-+	    cpumask_andnot(&chk, cpu_online_mask, sched_idle_mask) &&
-+	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
-+		int i;
-+
-+		for_each_cpu_wrap(i, &chk, cpu) {
-+			if (!cpumask_intersects(cpu_smt_mask(i), sched_idle_mask) &&
-+			    sg_balance_trigger(i))
-+				return;
-+		}
-+	}
-+}
-+#endif /* CONFIG_SCHED_SMT */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+
-+struct tick_work {
-+	int			cpu;
-+	atomic_t		state;
-+	struct delayed_work	work;
-+};
-+/* Values for ->state, see diagram below. */
-+#define TICK_SCHED_REMOTE_OFFLINE	0
-+#define TICK_SCHED_REMOTE_OFFLINING	1
-+#define TICK_SCHED_REMOTE_RUNNING	2
-+
-+/*
-+ * State diagram for ->state:
-+ *
-+ *
-+ *          TICK_SCHED_REMOTE_OFFLINE
-+ *                    |   ^
-+ *                    |   |
-+ *                    |   | sched_tick_remote()
-+ *                    |   |
-+ *                    |   |
-+ *                    +--TICK_SCHED_REMOTE_OFFLINING
-+ *                    |   ^
-+ *                    |   |
-+ * sched_tick_start() |   | sched_tick_stop()
-+ *                    |   |
-+ *                    V   |
-+ *          TICK_SCHED_REMOTE_RUNNING
-+ *
-+ *
-+ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
-+ * and sched_tick_start() are happy to leave the state in RUNNING.
-+ */
-+
-+static struct tick_work __percpu *tick_work_cpu;
-+
-+static void sched_tick_remote(struct work_struct *work)
-+{
-+	struct delayed_work *dwork = to_delayed_work(work);
-+	struct tick_work *twork = container_of(dwork, struct tick_work, work);
-+	int cpu = twork->cpu;
-+	struct rq *rq = cpu_rq(cpu);
-+	int os;
-+
-+	/*
-+	 * Handle the tick only if it appears the remote CPU is running in full
-+	 * dynticks mode. The check is racy by nature, but missing a tick or
-+	 * having one too many is no big deal because the scheduler tick updates
-+	 * statistics and checks timeslices in a time-independent way, regardless
-+	 * of when exactly it is running.
-+	 */
-+	if (tick_nohz_tick_stopped_cpu(cpu)) {
-+		guard(raw_spinlock_irqsave)(&rq->lock);
-+		struct task_struct *curr = rq->curr;
-+
-+		if (cpu_online(cpu)) {
-+			update_rq_clock(rq);
-+
-+			if (!is_idle_task(curr)) {
-+				/*
-+				 * Make sure the next tick runs within a
-+				 * reasonable amount of time.
-+				 */
-+				u64 delta = rq_clock_task(rq) - curr->last_ran;
-+				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
-+			}
-+			scheduler_task_tick(rq);
-+
-+			calc_load_nohz_remote(rq);
-+		}
-+	}
-+
-+	/*
-+	 * Run the remote tick once per second (1Hz). This arbitrary
-+	 * frequency is large enough to avoid overload but short enough
-+	 * to keep scheduler internal stats reasonably up to date.  But
-+	 * first update state to reflect hotplug activity if required.
-+	 */
-+	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
-+	if (os == TICK_SCHED_REMOTE_RUNNING)
-+		queue_delayed_work(system_unbound_wq, dwork, HZ);
-+}
-+
-+static void sched_tick_start(int cpu)
-+{
-+	int os;
-+	struct tick_work *twork;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
-+	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
-+	if (os == TICK_SCHED_REMOTE_OFFLINE) {
-+		twork->cpu = cpu;
-+		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
-+		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
-+	}
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+static void sched_tick_stop(int cpu)
-+{
-+	struct tick_work *twork;
-+	int os;
-+
-+	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+		return;
-+
-+	WARN_ON_ONCE(!tick_work_cpu);
-+
-+	twork = per_cpu_ptr(tick_work_cpu, cpu);
-+	/* There cannot be competing actions, but don't rely on stop-machine. */
-+	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
-+	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
-+	/* Don't cancel, as this would mess up the state machine. */
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+int __init sched_tick_offload_init(void)
-+{
-+	tick_work_cpu = alloc_percpu(struct tick_work);
-+	BUG_ON(!tick_work_cpu);
-+	return 0;
-+}
-+
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_tick_start(int cpu) { }
-+static inline void sched_tick_stop(int cpu) { }
-+#endif
-+
-+#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
-+				defined(CONFIG_PREEMPT_TRACER))
-+/*
-+ * If the value passed in is equal to the current preempt count
-+ * then we just disabled preemption. Start timing the latency.
-+ */
-+static inline void preempt_latency_start(int val)
-+{
-+	if (preempt_count() == val) {
-+		unsigned long ip = get_lock_parent_ip();
-+#ifdef CONFIG_DEBUG_PREEMPT
-+		current->preempt_disable_ip = ip;
-+#endif
-+		trace_preempt_off(CALLER_ADDR0, ip);
-+	}
-+}
-+
-+void preempt_count_add(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
-+		return;
-+#endif
-+	__preempt_count_add(val);
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Spinlock count overflowing soon?
-+	 */
-+	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
-+				PREEMPT_MASK - 10);
-+#endif
-+	preempt_latency_start(val);
-+}
-+EXPORT_SYMBOL(preempt_count_add);
-+NOKPROBE_SYMBOL(preempt_count_add);
-+
-+/*
-+ * If the value passed in equals the current preempt count
-+ * then we just enabled preemption. Stop timing the latency.
-+ */
-+static inline void preempt_latency_stop(int val)
-+{
-+	if (preempt_count() == val)
-+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
-+}
-+
-+void preempt_count_sub(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	/*
-+	 * Underflow?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
-+		return;
-+	/*
-+	 * Is the spinlock portion underflowing?
-+	 */
-+	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
-+			!(preempt_count() & PREEMPT_MASK)))
-+		return;
-+#endif
-+
-+	preempt_latency_stop(val);
-+	__preempt_count_sub(val);
-+}
-+EXPORT_SYMBOL(preempt_count_sub);
-+NOKPROBE_SYMBOL(preempt_count_sub);
-+
-+#else
-+static inline void preempt_latency_start(int val) { }
-+static inline void preempt_latency_stop(int val) { }
-+#endif
-+
-+static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+	return p->preempt_disable_ip;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+/*
-+ * Print scheduling while atomic bug:
-+ */
-+static noinline void __schedule_bug(struct task_struct *prev)
-+{
-+	/* Save this before calling printk(), since that will clobber it */
-+	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	if (oops_in_progress)
-+		return;
-+
-+	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
-+		prev->comm, prev->pid, preempt_count());
-+
-+	debug_show_held_locks(prev);
-+	print_modules();
-+	if (irqs_disabled())
-+		print_irqtrace_events(prev);
-+	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+		pr_err("Preemption disabled at:");
-+		print_ip_sym(KERN_ERR, preempt_disable_ip);
-+	}
-+	check_panic_on_warn("scheduling while atomic");
-+
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+
-+/*
-+ * Various schedule()-time debugging checks and statistics:
-+ */
-+static inline void schedule_debug(struct task_struct *prev, bool preempt)
-+{
-+#ifdef CONFIG_SCHED_STACK_END_CHECK
-+	if (task_stack_end_corrupted(prev))
-+		panic("corrupted stack end detected inside scheduler\n");
-+
-+	if (task_scs_end_corrupted(prev))
-+		panic("corrupted shadow stack detected inside scheduler\n");
-+#endif
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
-+		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
-+			prev->comm, prev->pid, prev->non_block_count);
-+		dump_stack();
-+		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+	}
-+#endif
-+
-+	if (unlikely(in_atomic_preempt_off())) {
-+		__schedule_bug(prev);
-+		preempt_count_set(PREEMPT_DISABLED);
-+	}
-+	rcu_sleep_check();
-+	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
-+
-+	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
-+
-+	schedstat_inc(this_rq()->sched_count);
-+}
-+
-+#ifdef ALT_SCHED_DEBUG
-+void alt_sched_debug(void)
-+{
-+	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
-+	       sched_rq_pending_mask.bits[0],
-+	       sched_idle_mask->bits[0],
-+	       sched_sg_idle_mask.bits[0]);
-+}
-+#else
-+inline void alt_sched_debug(void) {}
-+#endif
-+
-+#ifdef	CONFIG_SMP
-+
-+#ifdef CONFIG_PREEMPT_RT
-+#define SCHED_NR_MIGRATE_BREAK 8
-+#else
-+#define SCHED_NR_MIGRATE_BREAK 32
-+#endif
-+
-+const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
-+
-+/*
-+ * Migrate pending tasks in @rq to @dest_cpu
-+ */
-+static inline int
-+migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
-+{
-+	struct task_struct *p, *skip = rq->curr;
-+	int nr_migrated = 0;
-+	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
-+
-+	/* Workaround to check that rq->curr is still on the rq */
-+	if (!task_on_rq_queued(skip))
-+		return 0;
-+
-+	while (skip != rq->idle && nr_tries &&
-+	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
-+		skip = sched_rq_next_task(p, rq);
-+		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
-+			__SCHED_DEQUEUE_TASK(p, rq, 0, );
-+			set_task_cpu(p, dest_cpu);
-+			sched_task_sanity_check(p, dest_rq);
-+			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
-+			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
-+			nr_migrated++;
-+		}
-+		nr_tries--;
-+	}
-+
-+	return nr_migrated;
-+}
-+
-+static inline int take_other_rq_tasks(struct rq *rq, int cpu)
-+{
-+	struct cpumask *topo_mask, *end_mask;
-+
-+	if (unlikely(!rq->online))
-+		return 0;
-+
-+	if (cpumask_empty(&sched_rq_pending_mask))
-+		return 0;
-+
-+	topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
-+	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
-+	do {
-+		int i;
-+		for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
-+			int nr_migrated;
-+			struct rq *src_rq;
-+
-+			src_rq = cpu_rq(i);
-+			if (!do_raw_spin_trylock(&src_rq->lock))
-+				continue;
-+			spin_acquire(&src_rq->lock.dep_map,
-+				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
-+
-+			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
-+				src_rq->nr_running -= nr_migrated;
-+				if (src_rq->nr_running < 2)
-+					cpumask_clear_cpu(i, &sched_rq_pending_mask);
-+
-+				spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+				do_raw_spin_unlock(&src_rq->lock);
-+
-+				rq->nr_running += nr_migrated;
-+				if (rq->nr_running > 1)
-+					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
-+
-+				update_sched_preempt_mask(rq);
-+				cpufreq_update_util(rq, 0);
-+
-+				return 1;
-+			}
-+
-+			spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+			do_raw_spin_unlock(&src_rq->lock);
-+		}
-+	} while (++topo_mask < end_mask);
-+
-+	return 0;
-+}
-+#endif
-+
-+static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sysctl_sched_base_slice;
-+
-+	sched_task_renew(p, rq);
-+
-+	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
-+		requeue_task(p, rq, task_sched_prio_idx(p, rq));
-+}
-+
-+/*
-+ * Timeslices below RESCHED_NS are considered as good as expired as there's no
-+ * point rescheduling when there's so little time left.
-+ */
-+static inline void check_curr(struct task_struct *p, struct rq *rq)
-+{
-+	if (unlikely(rq->idle == p))
-+		return;
-+
-+	update_curr(rq, p);
-+
-+	if (p->time_slice < RESCHED_NS)
-+		time_slice_expired(p, rq);
-+}
-+
-+static inline struct task_struct *
-+choose_next_task(struct rq *rq, int cpu)
-+{
-+	struct task_struct *next;
-+
-+	if (unlikely(rq->skip)) {
-+		next = rq_runnable_task(rq);
-+		if (next == rq->idle) {
-+#ifdef	CONFIG_SMP
-+			if (!take_other_rq_tasks(rq, cpu)) {
-+#endif
-+				rq->skip = NULL;
-+				schedstat_inc(rq->sched_goidle);
-+				return next;
-+#ifdef	CONFIG_SMP
-+			}
-+			next = rq_runnable_task(rq);
-+#endif
-+		}
-+		rq->skip = NULL;
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+		hrtick_start(rq, next->time_slice);
-+#endif
-+		return next;
-+	}
-+
-+	next = sched_rq_first_task(rq);
-+	if (next == rq->idle) {
-+#ifdef	CONFIG_SMP
-+		if (!take_other_rq_tasks(rq, cpu)) {
-+#endif
-+			schedstat_inc(rq->sched_goidle);
-+			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
-+			return next;
-+#ifdef	CONFIG_SMP
-+		}
-+		next = sched_rq_first_task(rq);
-+#endif
-+	}
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+	hrtick_start(rq, next->time_slice);
-+#endif
-+	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
-+	return next;
-+}
-+
-+/*
-+ * Constants for the sched_mode argument of __schedule().
-+ *
-+ * The mode argument allows RT enabled kernels to differentiate a
-+ * preemption from blocking on a 'sleeping' spin/rwlock. Note that
-+ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
-+ * optimize the AND operation out and just check for zero.
-+ */
-+#define SM_NONE			0x0
-+#define SM_PREEMPT		0x1
-+#define SM_RTLOCK_WAIT		0x2
-+
-+#ifndef CONFIG_PREEMPT_RT
-+# define SM_MASK_PREEMPT	(~0U)
-+#else
-+# define SM_MASK_PREEMPT	SM_PREEMPT
-+#endif
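-+/*
-+ * Illustration of the optimization noted above: with SM_MASK_PREEMPT
-+ * == ~0U (!PREEMPT_RT), a test such as
-+ *
-+ *	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state)
-+ *
-+ * reduces to a plain zero check on sched_mode; on PREEMPT_RT only the
-+ * SM_PREEMPT bit is tested, so SM_RTLOCK_WAIT blocking is treated like
-+ * a voluntary sleep rather than a preemption.
-+ */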
-+
-+/*
-+ * schedule() is the main scheduler function.
-+ *
-+ * The main means of driving the scheduler and thus entering this function are:
-+ *
-+ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
-+ *
-+ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
-+ *      paths. For example, see arch/x86/entry_64.S.
-+ *
-+ *      To drive preemption between tasks, the scheduler sets the flag in timer
-+ *      interrupt handler scheduler_tick().
-+ *
-+ *   3. Wakeups don't really cause entry into schedule(). They add a
-+ *      task to the run-queue and that's it.
-+ *
-+ *      Now, if the new task added to the run-queue preempts the current
-+ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
-+ *      called on the nearest possible occasion:
-+ *
-+ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
-+ *
-+ *         - in syscall or exception context, at the next outermost
-+ *           preempt_enable(). (this might be as soon as the wake_up()'s
-+ *           spin_unlock()!)
-+ *
-+ *         - in IRQ context, return from interrupt-handler to
-+ *           preemptible context
-+ *
-+ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
-+ *         then at the next:
-+ *
-+ *          - cond_resched() call
-+ *          - explicit schedule() call
-+ *          - return from syscall or exception to user-space
-+ *          - return from interrupt-handler to user-space
-+ *
-+ * WARNING: must be called with preemption disabled!
-+ */
-+static void __sched notrace __schedule(unsigned int sched_mode)
-+{
-+	struct task_struct *prev, *next;
-+	unsigned long *switch_count;
-+	unsigned long prev_state;
-+	struct rq *rq;
-+	int cpu;
-+
-+	cpu = smp_processor_id();
-+	rq = cpu_rq(cpu);
-+	prev = rq->curr;
-+
-+	schedule_debug(prev, !!sched_mode);
-+
-+	/* Bypass sched_feat(HRTICK) checking, which Alt schedule FW doesn't support */
-+	hrtick_clear(rq);
-+
-+	local_irq_disable();
-+	rcu_note_context_switch(!!sched_mode);
-+
-+	/*
-+	 * Make sure that signal_pending_state()->signal_pending() below
-+	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
-+	 * done by the caller to avoid the race with signal_wake_up():
-+	 *
-+	 * __set_current_state(@state)		signal_wake_up()
-+	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
-+	 *					  wake_up_state(p, state)
-+	 *   LOCK rq->lock			    LOCK p->pi_state
-+	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
-+	 *     if (signal_pending_state())	    if (p->state & @state)
-+	 *
-+	 * Also, the membarrier system call requires a full memory barrier
-+	 * after coming from user-space, before storing to rq->curr.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+	smp_mb__after_spinlock();
-+
-+	update_rq_clock(rq);
-+
-+	switch_count = &prev->nivcsw;
-+	/*
-+	 * We must load prev->state once (task_struct::state is volatile), such
-+	 * that we form a control dependency vs deactivate_task() below.
-+	 */
-+	prev_state = READ_ONCE(prev->__state);
-+	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
-+		if (signal_pending_state(prev_state, prev)) {
-+			WRITE_ONCE(prev->__state, TASK_RUNNING);
-+		} else {
-+			prev->sched_contributes_to_load =
-+				(prev_state & TASK_UNINTERRUPTIBLE) &&
-+				!(prev_state & TASK_NOLOAD) &&
-+				!(prev_state & TASK_FROZEN);
-+
-+			if (prev->sched_contributes_to_load)
-+				rq->nr_uninterruptible++;
-+
-+			/*
-+			 * __schedule()			ttwu()
-+			 *   prev_state = prev->state;    if (p->on_rq && ...)
-+			 *   if (prev_state)		    goto out;
-+			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
-+			 *				  p->state = TASK_WAKING
-+			 *
-+			 * Where __schedule() and ttwu() have matching control dependencies.
-+			 *
-+			 * After this, schedule() must not care about p->state any more.
-+			 */
-+			sched_task_deactivate(prev, rq);
-+			deactivate_task(prev, rq);
-+
-+			if (prev->in_iowait) {
-+				atomic_inc(&rq->nr_iowait);
-+				delayacct_blkio_start();
-+			}
-+		}
-+		switch_count = &prev->nvcsw;
-+	}
-+
-+	check_curr(prev, rq);
-+
-+	next = choose_next_task(rq, cpu);
-+	clear_tsk_need_resched(prev);
-+	clear_preempt_need_resched();
-+#ifdef CONFIG_SCHED_DEBUG
-+	rq->last_seen_need_resched_ns = 0;
-+#endif
-+
-+	if (likely(prev != next)) {
-+#ifdef CONFIG_SCHED_BMQ
-+		rq->last_ts_switch = rq->clock;
-+#endif
-+		next->last_ran = rq->clock_task;
-+
-+		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
-+		rq->nr_switches++;
-+		/*
-+		 * RCU users of rcu_dereference(rq->curr) may not see
-+		 * changes to task_struct made by pick_next_task().
-+		 */
-+		RCU_INIT_POINTER(rq->curr, next);
-+		/*
-+		 * The membarrier system call requires each architecture
-+		 * to have a full memory barrier after updating
-+		 * rq->curr, before returning to user-space.
-+		 *
-+		 * Here are the schemes providing that barrier on the
-+		 * various architectures:
-+		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
-+		 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
-+		 * - finish_lock_switch() for weakly-ordered
-+		 *   architectures where spin_unlock is a full barrier,
-+		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
-+		 *   is a RELEASE barrier),
-+		 */
-+		++*switch_count;
-+
-+		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
-+
-+		/* Also unlocks the rq: */
-+		rq = context_switch(rq, prev, next);
-+
-+		cpu = cpu_of(rq);
-+	} else {
-+		__balance_callbacks(rq);
-+		raw_spin_unlock_irq(&rq->lock);
-+	}
-+
-+#ifdef CONFIG_SCHED_SMT
-+	sg_balance(rq, cpu);
-+#endif
-+}
-+
-+void __noreturn do_task_dead(void)
-+{
-+	/* Causes final put_task_struct in finish_task_switch(): */
-+	set_special_state(TASK_DEAD);
-+
-+	/* Tell freezer to ignore us: */
-+	current->flags |= PF_NOFREEZE;
-+
-+	__schedule(SM_NONE);
-+	BUG();
-+
-+	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
-+	for (;;)
-+		cpu_relax();
-+}
-+
-+static inline void sched_submit_work(struct task_struct *tsk)
-+{
-+	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
-+	unsigned int task_flags;
-+
-+	/*
-+	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
-+	 * will use a blocking primitive -- which would lead to recursion.
-+	 */
-+	lock_map_acquire_try(&sched_map);
-+
-+	task_flags = tsk->flags;
-+	/*
-+	 * If a worker goes to sleep, notify and ask workqueue whether it
-+	 * wants to wake up a task to maintain concurrency.
-+	 */
-+	if (task_flags & PF_WQ_WORKER)
-+		wq_worker_sleeping(tsk);
-+	else if (task_flags & PF_IO_WORKER)
-+		io_wq_worker_sleeping(tsk);
-+
-+	/*
-+	 * spinlock and rwlock must not flush block requests.  This will
-+	 * deadlock if the callback attempts to acquire a lock which is
-+	 * already acquired.
-+	 */
-+	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
-+
-+	/*
-+	 * If we are going to sleep and we have plugged IO queued,
-+	 * make sure to submit it to avoid deadlocks.
-+	 */
-+	blk_flush_plug(tsk->plug, true);
-+
-+	lock_map_release(&sched_map);
-+}
-+
-+static void sched_update_worker(struct task_struct *tsk)
-+{
-+	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
-+		if (tsk->flags & PF_WQ_WORKER)
-+			wq_worker_running(tsk);
-+		else
-+			io_wq_worker_running(tsk);
-+	}
-+}
-+
-+static __always_inline void __schedule_loop(unsigned int sched_mode)
-+{
-+	do {
-+		preempt_disable();
-+		__schedule(sched_mode);
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+}
-+
-+asmlinkage __visible void __sched schedule(void)
-+{
-+	struct task_struct *tsk = current;
-+
-+#ifdef CONFIG_RT_MUTEXES
-+	lockdep_assert(!tsk->sched_rt_mutex);
-+#endif
-+
-+	if (!task_is_running(tsk))
-+		sched_submit_work(tsk);
-+	__schedule_loop(SM_NONE);
-+	sched_update_worker(tsk);
-+}
-+EXPORT_SYMBOL(schedule);
-+
-+/*
-+ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
-+ * state (have scheduled out non-voluntarily) by making sure that all
-+ * tasks have either left the run queue or have gone into user space.
-+ * As idle tasks do not do either, they must not ever be preempted
-+ * (schedule out non-voluntarily).
-+ *
-+ * schedule_idle() is similar to schedule_preempt_disabled() except that it
-+ * never enables preemption because it does not call sched_submit_work().
-+ */
-+void __sched schedule_idle(void)
-+{
-+	/*
-+	 * As this skips calling sched_submit_work(), which the idle task does
-+	 * regardless because that function is a nop when the task is in a
-+	 * TASK_RUNNING state, make sure this isn't used someplace that the
-+	 * current task can be in any other state. Note, idle is always in the
-+	 * TASK_RUNNING state.
-+	 */
-+	WARN_ON_ONCE(current->__state);
-+	do {
-+		__schedule(SM_NONE);
-+	} while (need_resched());
-+}
-+
-+#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
-+asmlinkage __visible void __sched schedule_user(void)
-+{
-+	/*
-+	 * If we come here after a random call to set_need_resched(),
-+	 * or we have been woken up remotely but the IPI has not yet arrived,
-+	 * we haven't yet exited the RCU idle mode. Do it here manually until
-+	 * we find a better solution.
-+	 *
-+	 * NB: There are buggy callers of this function.  Ideally we
-+	 * should warn if prev_state != CONTEXT_USER, but that will trigger
-+	 * too frequently to make sense yet.
-+	 */
-+	enum ctx_state prev_state = exception_enter();
-+	schedule();
-+	exception_exit(prev_state);
-+}
-+#endif
-+
-+/**
-+ * schedule_preempt_disabled - called with preemption disabled
-+ *
-+ * Returns with preemption disabled. Note: preempt_count must be 1
-+ */
-+void __sched schedule_preempt_disabled(void)
-+{
-+	sched_preempt_enable_no_resched();
-+	schedule();
-+	preempt_disable();
-+}
-+
-+#ifdef CONFIG_PREEMPT_RT
-+void __sched notrace schedule_rtlock(void)
-+{
-+	__schedule_loop(SM_RTLOCK_WAIT);
-+}
-+NOKPROBE_SYMBOL(schedule_rtlock);
-+#endif
-+
-+static void __sched notrace preempt_schedule_common(void)
-+{
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		__schedule(SM_PREEMPT);
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+
-+		/*
-+		 * Check again in case we missed a preemption opportunity
-+		 * between schedule and now.
-+		 */
-+	} while (need_resched());
-+}
-+
-+#ifdef CONFIG_PREEMPTION
-+/*
-+ * This is the entry point to schedule() from in-kernel preemption
-+ * off of preempt_enable.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule(void)
-+{
-+	/*
-+	 * If there is a non-zero preempt_count or interrupts are disabled,
-+	 * we do not want to preempt the current task. Just return..
-+	 */
-+	if (likely(!preemptible()))
-+		return;
-+
-+	preempt_schedule_common();
-+}
-+NOKPROBE_SYMBOL(preempt_schedule);
-+EXPORT_SYMBOL(preempt_schedule);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_dynamic_enabled
-+#define preempt_schedule_dynamic_enabled	preempt_schedule
-+#define preempt_schedule_dynamic_disabled	NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
-+void __sched notrace dynamic_preempt_schedule(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
-+		return;
-+	preempt_schedule();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule);
-+EXPORT_SYMBOL(dynamic_preempt_schedule);
-+#endif
-+#endif
-+
-+/**
-+ * preempt_schedule_notrace - preempt_schedule called by tracing
-+ *
-+ * The tracing infrastructure uses preempt_enable_notrace to prevent
-+ * recursion and tracing preempt enabling caused by the tracing
-+ * infrastructure itself. But as tracing can happen in areas coming
-+ * from userspace or just about to enter userspace, a preempt enable
-+ * can occur before user_exit() is called. This will cause the scheduler
-+ * to be called when the system is still in usermode.
-+ *
-+ * To prevent this, the preempt_enable_notrace will use this function
-+ * instead of preempt_schedule() to exit user context if needed before
-+ * calling the scheduler.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
-+{
-+	enum ctx_state prev_ctx;
-+
-+	if (likely(!preemptible()))
-+		return;
-+
-+	do {
-+		/*
-+		 * Because the function tracer can trace preempt_count_sub()
-+		 * and it also uses preempt_enable/disable_notrace(), if
-+		 * NEED_RESCHED is set, the preempt_enable_notrace() called
-+		 * by the function tracer will call this function again and
-+		 * cause infinite recursion.
-+		 *
-+		 * Preemption must be disabled here before the function
-+		 * tracer can trace. Break up preempt_disable() into two
-+		 * calls. One to disable preemption without fear of being
-+		 * traced. The other to still record the preemption latency,
-+		 * which can also be traced by the function tracer.
-+		 */
-+		preempt_disable_notrace();
-+		preempt_latency_start(1);
-+		/*
-+		 * Needs preempt disabled in case user_exit() is traced
-+		 * and the tracer calls preempt_enable_notrace() causing
-+		 * an infinite recursion.
-+		 */
-+		prev_ctx = exception_enter();
-+		__schedule(SM_PREEMPT);
-+		exception_exit(prev_ctx);
-+
-+		preempt_latency_stop(1);
-+		preempt_enable_no_resched_notrace();
-+	} while (need_resched());
-+}
-+EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_notrace_dynamic_enabled
-+#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
-+#define preempt_schedule_notrace_dynamic_disabled	NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
-+void __sched notrace dynamic_preempt_schedule_notrace(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
-+		return;
-+	preempt_schedule_notrace();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
-+EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
-+#endif
-+#endif
-+
-+#endif /* CONFIG_PREEMPTION */
-+
-+/*
-+ * This is the entry point to schedule() from kernel preemption
-+ * off of irq context.
-+ * Note, that this is called and return with irqs disabled. This will
-+ * protect us against recursive calling from irq.
-+ */
-+asmlinkage __visible void __sched preempt_schedule_irq(void)
-+{
-+	enum ctx_state prev_state;
-+
-+	/* Catch callers which need to be fixed */
-+	BUG_ON(preempt_count() || !irqs_disabled());
-+
-+	prev_state = exception_enter();
-+
-+	do {
-+		preempt_disable();
-+		local_irq_enable();
-+		__schedule(SM_PREEMPT);
-+		local_irq_disable();
-+		sched_preempt_enable_no_resched();
-+	} while (need_resched());
-+
-+	exception_exit(prev_state);
-+}
-+
-+int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
-+			  void *key)
-+{
-+	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
-+	return try_to_wake_up(curr->private, mode, wake_flags);
-+}
-+EXPORT_SYMBOL(default_wake_function);
-+
-+static inline void check_task_changed(struct task_struct *p, struct rq *rq)
-+{
-+	/* Trigger resched if task sched_prio has been modified. */
-+	if (task_on_rq_queued(p)) {
-+		int idx;
-+
-+		update_rq_clock(rq);
-+		idx = task_sched_prio_idx(p, rq);
-+		if (idx != p->sq_idx) {
-+			requeue_task(p, rq, idx);
-+			wakeup_preempt(rq);
-+		}
-+	}
-+}
-+
-+static void __setscheduler_prio(struct task_struct *p, int prio)
-+{
-+	p->prio = prio;
-+}
-+
-+#ifdef CONFIG_RT_MUTEXES
-+
-+/*
-+ * Would be more useful with typeof()/auto_type but they don't mix with
-+ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
-+ * name such that if someone were to implement this function we get to compare
-+ * notes.
-+ */
-+#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
-+
-+void rt_mutex_pre_schedule(void)
-+{
-+	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
-+	sched_submit_work(current);
-+}
-+
-+void rt_mutex_schedule(void)
-+{
-+	lockdep_assert(current->sched_rt_mutex);
-+	__schedule_loop(SM_NONE);
-+}
-+
-+void rt_mutex_post_schedule(void)
-+{
-+	sched_update_worker(current);
-+	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
-+}
-+
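-+/*
-+ * Usage sketch (illustrative only; blocks_on_lock() is a made-up stand-in
-+ * for the rtmutex slow path): the three hooks above must bracket a blocking
-+ * lock acquisition so that the workqueue/IO-plug callbacks in
-+ * sched_submit_work()/sched_update_worker() run exactly once per sleep:
-+ *
-+ *	rt_mutex_pre_schedule();
-+ *	ret = blocks_on_lock(lock);	// may call rt_mutex_schedule()
-+ *	rt_mutex_post_schedule();
-+ */
-+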
-+static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
-+{
-+	if (pi_task)
-+		prio = min(prio, pi_task->prio);
-+
-+	return prio;
-+}
-+
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	struct task_struct *pi_task = rt_mutex_get_top_task(p);
-+
-+	return __rt_effective_prio(pi_task, prio);
-+}
-+
-+/*
-+ * rt_mutex_setprio - set the current priority of a task
-+ * @p: task to boost
-+ * @pi_task: donor task
-+ *
-+ * This function changes the 'effective' priority of a task. It does
-+ * not touch ->normal_prio like __setscheduler().
-+ *
-+ * Used by the rt_mutex code to implement priority inheritance
-+ * logic. Call site only calls if the priority of the task changed.
-+ */
-+void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
-+{
-+	int prio;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	/* XXX used to be waiter->prio, not waiter->task->prio */
-+	prio = __rt_effective_prio(pi_task, p->normal_prio);
-+
-+	/*
-+	 * If nothing changed; bail early.
-+	 */
-+	if (p->pi_top_task == pi_task && prio == p->prio)
-+		return;
-+
-+	rq = __task_access_lock(p, &lock);
-+	/*
-+	 * Set under pi_lock && rq->lock, such that the value can be used under
-+	 * either lock.
-+	 *
-+	 * Note that there is a lot of trickiness in making this pointer cache
-+	 * work right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
-+	 * ensure a task is de-boosted (pi_task is set to NULL) before the
-+	 * task is allowed to run again (and can exit). This ensures the pointer
-+	 * points to a blocked task -- which guarantees the task is present.
-+	 */
-+	p->pi_top_task = pi_task;
-+
-+	/*
-+	 * For FIFO/RR we only need to set prio, if that matches we're done.
-+	 */
-+	if (prio == p->prio)
-+		goto out_unlock;
-+
-+	/*
-+	 * Idle task boosting is a no-no in general. There is one
-+	 * exception, when PREEMPT_RT and NOHZ is active:
-+	 *
-+	 * The idle task calls get_next_timer_interrupt() and holds
-+	 * the timer wheel base->lock on the CPU and another CPU wants
-+	 * to access the timer (probably to cancel it). We can safely
-+	 * ignore the boosting request, as the idle CPU runs this code
-+	 * with interrupts disabled and will complete the lock
-+	 * protected section without being interrupted. So there is no
-+	 * real need to boost.
-+	 */
-+	if (unlikely(p == rq->idle)) {
-+		WARN_ON(p != rq->curr);
-+		WARN_ON(p->pi_blocked_on);
-+		goto out_unlock;
-+	}
-+
-+	trace_sched_pi_setprio(p, pi_task);
-+
-+	__setscheduler_prio(p, prio);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+
-+	__balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+
-+	preempt_enable();
-+}
-+#else
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+	return prio;
-+}
-+#endif
-+
-+void set_user_nice(struct task_struct *p, long nice)
-+{
-+	unsigned long flags;
-+	struct rq *rq;
-+	raw_spinlock_t *lock;
-+
-+	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
-+		return;
-+	/*
-+	 * We have to be careful, if called from sys_setpriority(),
-+	 * the task might be in the middle of scheduling on another CPU.
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	rq = __task_access_lock(p, &lock);
-+
-+	p->static_prio = NICE_TO_PRIO(nice);
-+	/*
-+	 * The RT priorities are set via sched_setscheduler(), but we still
-+	 * allow the 'normal' nice value to be set - but as expected
-+	 * it won't have any effect on scheduling while the task is
-+	 * not SCHED_NORMAL/SCHED_BATCH:
-+	 */
-+	if (task_has_rt_policy(p))
-+		goto out_unlock;
-+
-+	p->prio = effective_prio(p);
-+
-+	check_task_changed(p, rq);
-+out_unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+EXPORT_SYMBOL(set_user_nice);
-+
-+/*
-+ * is_nice_reduction - check if nice value is an actual reduction
-+ *
-+ * Similar to can_nice() but does not perform a capability check.
-+ *
-+ * @p: task
-+ * @nice: nice value
-+ */
-+static bool is_nice_reduction(const struct task_struct *p, const int nice)
-+{
-+	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
-+	int nice_rlim = nice_to_rlimit(nice);
-+
-+	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
-+}
-+
-+/*
-+ * can_nice - check if a task can reduce its nice value
-+ * @p: task
-+ * @nice: nice value
-+ */
-+int can_nice(const struct task_struct *p, const int nice)
-+{
-+	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
-+}
-+
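-+/*
-+ * Worked example (illustrative): nice_to_rlimit() maps the nice range
-+ * [19, -20] onto the rlimit-style range [1, 40] as 20 - nice. A task
-+ * requesting nice -5 therefore needs nice_to_rlimit(-5) == 25, i.e. it may
-+ * lower its nice to -5 only if RLIMIT_NICE >= 25 or it holds CAP_SYS_NICE.
-+ */
-+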
-+#ifdef __ARCH_WANT_SYS_NICE
-+
-+/*
-+ * sys_nice - change the priority of the current process.
-+ * @increment: priority increment
-+ *
-+ * sys_setpriority is a more generic, but much slower function that
-+ * does similar things.
-+ */
-+SYSCALL_DEFINE1(nice, int, increment)
-+{
-+	long nice, retval;
-+
-+	/*
-+	 * Setpriority might change our priority at the same moment.
-+	 * We don't have to worry. Conceptually one call occurs first
-+	 * and we have a single winner.
-+	 */
-+
-+	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
-+	nice = task_nice(current) + increment;
-+
-+	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
-+	if (increment < 0 && !can_nice(current, nice))
-+		return -EPERM;
-+
-+	retval = security_task_setnice(current, nice);
-+	if (retval)
-+		return retval;
-+
-+	set_user_nice(current, nice);
-+	return 0;
-+}
-+
-+#endif
-+
-+/**
-+ * task_prio - return the priority value of a given task.
-+ * @p: the task in question.
-+ *
-+ * Return: The priority value as seen by users in /proc.
-+ *
-+ * sched policy              return value    kernel prio     user prio/nice
-+ *
-+ * (BMQ)normal, batch, idle  [0 ... 53]      [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
-+ * (PDS)normal, batch, idle  [0 ... 39]      100             0/[-20 ... 19]
-+ * fifo, rr                  [-1 ... -100]   [99 ... 0]      [0 ... 99]
-+ */
-+int task_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
-+		task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+/**
-+ * idle_cpu - is a given CPU idle currently?
-+ * @cpu: the processor in question.
-+ *
-+ * Return: 1 if the CPU is currently idle. 0 otherwise.
-+ */
-+int idle_cpu(int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	if (rq->curr != rq->idle)
-+		return 0;
-+
-+	if (rq->nr_running)
-+		return 0;
-+
-+#ifdef CONFIG_SMP
-+	if (rq->ttwu_pending)
-+		return 0;
-+#endif
-+
-+	return 1;
-+}
-+
-+/**
-+ * idle_task - return the idle task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * Return: The idle task for the cpu @cpu.
-+ */
-+struct task_struct *idle_task(int cpu)
-+{
-+	return cpu_rq(cpu)->idle;
-+}
-+
-+/**
-+ * find_process_by_pid - find a process with a matching PID value.
-+ * @pid: the pid in question.
-+ *
-+ * The task of @pid, if found. %NULL otherwise.
-+ */
-+static inline struct task_struct *find_process_by_pid(pid_t pid)
-+{
-+	return pid ? find_task_by_vpid(pid) : current;
-+}
-+
-+static struct task_struct *find_get_task(pid_t pid)
-+{
-+	struct task_struct *p;
-+	guard(rcu)();
-+
-+	p = find_process_by_pid(pid);
-+	if (likely(p))
-+		get_task_struct(p);
-+
-+	return p;
-+}
-+
-+DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
-+	     find_get_task(pid), pid_t pid)
-+
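-+/*
-+ * Example (illustrative; do_something() is any made-up consumer): the class
-+ * defined above pairs find_get_task() with put_task_struct() automatically,
-+ * so callers may early-return on any path without leaking the reference:
-+ *
-+ *	CLASS(find_get_task, p)(pid);
-+ *	if (!p)
-+ *		return -ESRCH;
-+ *	return do_something(p);	// reference dropped on every return path
-+ */
-+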
-+/*
-+ * sched_setparam() passes in -1 for its policy, to let the functions
-+ * it calls know not to change it.
-+ */
-+#define SETPARAM_POLICY -1
-+
-+static void __setscheduler_params(struct task_struct *p,
-+		const struct sched_attr *attr)
-+{
-+	int policy = attr->sched_policy;
-+
-+	if (policy == SETPARAM_POLICY)
-+		policy = p->policy;
-+
-+	p->policy = policy;
-+
-+	/*
-+	 * Allow the normal nice value to be set, but it will not have any
-+	 * effect on scheduling while the task is not SCHED_NORMAL/
-+	 * SCHED_BATCH.
-+	 */
-+	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
-+
-+	/*
-+	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
-+	 * !rt_policy. Always setting this ensures that things like
-+	 * getparam()/getattr() don't report silly values for !rt tasks.
-+	 */
-+	p->rt_priority = attr->sched_priority;
-+	p->normal_prio = normal_prio(p);
-+}
-+
-+/*
-+ * check the target process has a UID that matches the current process's
-+ */
-+static bool check_same_owner(struct task_struct *p)
-+{
-+	const struct cred *cred = current_cred(), *pcred;
-+	guard(rcu)();
-+
-+	pcred = __task_cred(p);
-+	return (uid_eq(cred->euid, pcred->euid) ||
-+	        uid_eq(cred->euid, pcred->uid));
-+}
-+
-+/*
-+ * Allow unprivileged RT tasks to decrease priority.
-+ * Only issue a capable test if needed and only once to avoid an audit
-+ * event on permitted non-privileged operations:
-+ */
-+static int user_check_sched_setscheduler(struct task_struct *p,
-+					 const struct sched_attr *attr,
-+					 int policy, int reset_on_fork)
-+{
-+	if (rt_policy(policy)) {
-+		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
-+
-+		/* Can't set/change the rt policy: */
-+		if (policy != p->policy && !rlim_rtprio)
-+			goto req_priv;
-+
-+		/* Can't increase priority: */
-+		if (attr->sched_priority > p->rt_priority &&
-+		    attr->sched_priority > rlim_rtprio)
-+			goto req_priv;
-+	}
-+
-+	/* Can't change other user's priorities: */
-+	if (!check_same_owner(p))
-+		goto req_priv;
-+
-+	/* Normal users shall not reset the sched_reset_on_fork flag: */
-+	if (p->sched_reset_on_fork && !reset_on_fork)
-+		goto req_priv;
-+
-+	return 0;
-+
-+req_priv:
-+	if (!capable(CAP_SYS_NICE))
-+		return -EPERM;
-+
-+	return 0;
-+}
-+
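-+/*
-+ * Example (illustrative numbers): with RLIMIT_RTPRIO == 10, an unprivileged
-+ * SCHED_OTHER task may switch itself to SCHED_FIFO at priority <= 10, and an
-+ * unprivileged RT task may always lower its priority; raising it above
-+ * max(current priority, 10) fails with -EPERM unless CAP_SYS_NICE is held.
-+ */
-+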
-+static int __sched_setscheduler(struct task_struct *p,
-+				const struct sched_attr *attr,
-+				bool user, bool pi)
-+{
-+	const struct sched_attr dl_squash_attr = {
-+		.size		= sizeof(struct sched_attr),
-+		.sched_policy	= SCHED_FIFO,
-+		.sched_nice	= 0,
-+		.sched_priority = 99,
-+	};
-+	int oldpolicy = -1, policy = attr->sched_policy;
-+	int retval, newprio;
-+	struct balance_callback *head;
-+	unsigned long flags;
-+	struct rq *rq;
-+	int reset_on_fork;
-+	raw_spinlock_t *lock;
-+
-+	/* The pi code expects interrupts enabled */
-+	BUG_ON(pi && in_interrupt());
-+
-+	/*
-+	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into
-+	 * SCHED_FIFO (see dl_squash_attr above).
-+	 */
-+	if (unlikely(SCHED_DEADLINE == policy)) {
-+		attr = &dl_squash_attr;
-+		policy = attr->sched_policy;
-+	}
-+recheck:
-+	/* Double check policy once rq lock held */
-+	if (policy < 0) {
-+		reset_on_fork = p->sched_reset_on_fork;
-+		policy = oldpolicy = p->policy;
-+	} else {
-+		reset_on_fork = !!(attr->sched_flags & SCHED_FLAG_RESET_ON_FORK);
-+
-+		if (policy > SCHED_IDLE)
-+			return -EINVAL;
-+	}
-+
-+	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
-+		return -EINVAL;
-+
-+	/*
-+	 * Valid priorities for SCHED_FIFO and SCHED_RR are
-+	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
-+	 * SCHED_BATCH and SCHED_IDLE is 0.
-+	 */
-+	if (attr->sched_priority < 0 ||
-+	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
-+	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
-+		return -EINVAL;
-+	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
-+	    (attr->sched_priority != 0))
-+		return -EINVAL;
-+
-+	if (user) {
-+		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
-+		if (retval)
-+			return retval;
-+
-+		retval = security_task_setscheduler(p);
-+		if (retval)
-+			return retval;
-+	}
-+
-+	/*
-+	 * Make sure no PI-waiters arrive (or leave) while we are
-+	 * changing the priority of the task:
-+	 */
-+	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+
-+	/*
-+	 * To be able to change p->policy safely, task_access_lock()
-+	 * must be called.
-+	 * If task_access_lock() is used here:
-+	 * For a task p which is not running, reading rq->stop is
-+	 * racy but acceptable, as ->stop doesn't change much.
-+	 * An enhancement could be made to read rq->stop safely.
-+	 */
-+	rq = __task_access_lock(p, &lock);
-+
-+	/*
-+	 * Changing the policy of the stop threads is a very bad idea.
-+	 */
-+	if (p == rq->stop) {
-+		retval = -EINVAL;
-+		goto unlock;
-+	}
-+
-+	/*
-+	 * If not changing anything there's no need to proceed further:
-+	 */
-+	if (unlikely(policy == p->policy)) {
-+		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
-+			goto change;
-+		if (!rt_policy(policy) &&
-+		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
-+			goto change;
-+
-+		p->sched_reset_on_fork = reset_on_fork;
-+		retval = 0;
-+		goto unlock;
-+	}
-+change:
-+
-+	/* Re-check policy now with rq lock held */
-+	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
-+		policy = oldpolicy = -1;
-+		__task_access_unlock(p, lock);
-+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+		goto recheck;
-+	}
-+
-+	p->sched_reset_on_fork = reset_on_fork;
-+
-+	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
-+	if (pi) {
-+		/*
-+		 * Take priority boosted tasks into account. If the new
-+		 * effective priority is unchanged, we just store the new
-+		 * normal parameters and do not touch the scheduler class and
-+		 * the runqueue. This will be done when the task deboosts
-+		 * itself.
-+		 */
-+		newprio = rt_effective_prio(p, newprio);
-+	}
-+
-+	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
-+		__setscheduler_params(p, attr);
-+		__setscheduler_prio(p, newprio);
-+	}
-+
-+	check_task_changed(p, rq);
-+
-+	/* Avoid rq from going away on us: */
-+	preempt_disable();
-+	head = splice_balance_callbacks(rq);
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+	if (pi)
-+		rt_mutex_adjust_pi(p);
-+
-+	/* Run balance callbacks after we've adjusted the PI chain: */
-+	balance_callbacks(rq, head);
-+	preempt_enable();
-+
-+	return 0;
-+
-+unlock:
-+	__task_access_unlock(p, lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+	return retval;
-+}
-+
-+static int _sched_setscheduler(struct task_struct *p, int policy,
-+			       const struct sched_param *param, bool check)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy   = policy,
-+		.sched_priority = param->sched_priority,
-+		.sched_nice     = PRIO_TO_NICE(p->static_prio),
-+	};
-+
-+	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
-+	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
-+		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+		policy &= ~SCHED_RESET_ON_FORK;
-+		attr.sched_policy = policy;
-+	}
-+
-+	return __sched_setscheduler(p, &attr, check, true);
-+}
-+
-+/**
-+ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Use sched_set_fifo(), read its comment.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ *
-+ * NOTE that the task may already be dead.
-+ */
-+int sched_setscheduler(struct task_struct *p, int policy,
-+		       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, true);
-+}
-+
-+int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, true, true);
-+}
-+
-+int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
-+{
-+	return __sched_setscheduler(p, attr, false, true);
-+}
-+EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
-+
-+/**
-+ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
-+ * @p: the task in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Just like sched_setscheduler, only don't bother checking if the
-+ * current context has permission.  For example, this is needed in
-+ * stop_machine(): we create temporary high priority worker threads,
-+ * but our caller might not have that capability.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+int sched_setscheduler_nocheck(struct task_struct *p, int policy,
-+			       const struct sched_param *param)
-+{
-+	return _sched_setscheduler(p, policy, param, false);
-+}
-+
-+/*
-+ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
-+ * incapable of resource management, which is the one thing an OS really should
-+ * be doing.
-+ *
-+ * This is of course the reason it is limited to privileged users only.
-+ *
-+ * Worse still; it is fundamentally impossible to compose static priority
-+ * workloads. You cannot take two correctly working static prio workloads
-+ * and smash them together and still expect them to work.
-+ *
-+ * For this reason 'all' FIFO tasks the kernel creates are basically at:
-+ *
-+ *   MAX_RT_PRIO / 2
-+ *
-+ * The administrator _MUST_ configure the system, the kernel simply doesn't
-+ * know enough information to make a sensible choice.
-+ */
-+void sched_set_fifo(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo);
-+
-+/*
-+ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
-+ */
-+void sched_set_fifo_low(struct task_struct *p)
-+{
-+	struct sched_param sp = { .sched_priority = 1 };
-+	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_fifo_low);
-+
-+void sched_set_normal(struct task_struct *p, int nice)
-+{
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+		.sched_nice = nice,
-+	};
-+	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
-+}
-+EXPORT_SYMBOL_GPL(sched_set_normal);
-+
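-+/*
-+ * Usage sketch (hypothetical driver code, not part of this patch; poll_fn,
-+ * dev and "mydrv/poll" are made-up names): kernel threads that need RT
-+ * scheduling should use the helpers above rather than picking a priority by
-+ * hand:
-+ *
-+ *	struct task_struct *t = kthread_run(poll_fn, dev, "mydrv/poll");
-+ *
-+ *	if (!IS_ERR(t))
-+ *		sched_set_fifo(t);	// mid-range FIFO, no magic numbers
-+ */
-+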
-+static int
-+do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
-+{
-+	struct sched_param lparam;
-+
-+	if (!param || pid < 0)
-+		return -EINVAL;
-+	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
-+		return -EFAULT;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	return sched_setscheduler(p, policy, &lparam);
-+}
-+
-+/*
-+ * Mimics kernel/events/core.c perf_copy_attr().
-+ */
-+static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
-+{
-+	u32 size;
-+	int ret;
-+
-+	/* Zero the full structure, so that a short copy will be nice: */
-+	memset(attr, 0, sizeof(*attr));
-+
-+	ret = get_user(size, &uattr->size);
-+	if (ret)
-+		return ret;
-+
-+	/* ABI compatibility quirk: */
-+	if (!size)
-+		size = SCHED_ATTR_SIZE_VER0;
-+
-+	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
-+		goto err_size;
-+
-+	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
-+	if (ret) {
-+		if (ret == -E2BIG)
-+			goto err_size;
-+		return ret;
-+	}
-+
-+	/*
-+	 * XXX: Do we want to be lenient like existing syscalls; or do we want
-+	 * to be strict and return an error on out-of-bounds values?
-+	 */
-+	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
-+
-+	/* sched/core.c uses zero here but we already know ret is zero */
-+	return 0;
-+
-+err_size:
-+	put_user(sizeof(*attr), &uattr->size);
-+	return -E2BIG;
-+}
-+
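-+/*
-+ * Example (sketch of the ABI handshake above): an old binary that passes
-+ * uattr->size == SCHED_ATTR_SIZE_VER0 (48 bytes) has the newer kernel-side
-+ * fields left zeroed; a newer binary with a larger struct succeeds only if
-+ * every byte the kernel does not know about is zero, otherwise it gets
-+ * -E2BIG back with uattr->size rewritten to the kernel's sizeof(*attr).
-+ */
-+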
-+/**
-+ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
-+ * @pid: the pid in question.
-+ * @policy: new policy.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
-+{
-+	if (policy < 0)
-+		return -EINVAL;
-+
-+	return do_sched_setscheduler(pid, policy, param);
-+}
-+
-+/**
-+ * sys_sched_setparam - set/change the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the new RT priority.
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
-+}
-+
-+static void get_params(struct task_struct *p, struct sched_attr *attr)
-+{
-+	if (task_has_rt_policy(p))
-+		attr->sched_priority = p->rt_priority;
-+	else
-+		attr->sched_nice = task_nice(p);
-+}
-+
-+/**
-+ * sys_sched_setattr - same as above, but with extended sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
-+			       unsigned int, flags)
-+{
-+	struct sched_attr attr;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || flags)
-+		return -EINVAL;
-+
-+	retval = sched_copy_attr(uattr, &attr);
-+	if (retval)
-+		return retval;
-+
-+	if ((int)attr.sched_policy < 0)
-+		return -EINVAL;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
-+		get_params(p, &attr);
-+
-+	return sched_setattr(p, &attr);
-+}
-+
-+/**
-+ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
-+ * @pid: the pid in question.
-+ *
-+ * Return: On success, the policy of the thread. Otherwise, a negative error
-+ * code.
-+ */
-+SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
-+{
-+	struct task_struct *p;
-+	int retval = -EINVAL;
-+
-+	if (pid < 0)
-+		return -EINVAL;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	retval = security_task_getscheduler(p);
-+	if (!retval)
-+		retval = p->policy;
-+
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getparam - get the RT priority of a thread
-+ * @pid: the pid in question.
-+ * @param: structure containing the RT priority.
-+ *
-+ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
-+ * code.
-+ */
-+SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
-+{
-+	struct sched_param lp = { .sched_priority = 0 };
-+	struct task_struct *p;
-+
-+	if (!param || pid < 0)
-+		return -EINVAL;
-+
-+	scoped_guard (rcu) {
-+		int retval;
-+
-+		p = find_process_by_pid(pid);
-+		if (!p)
-+			return -ESRCH;
-+
-+		retval = security_task_getscheduler(p);
-+		if (retval)
-+			return retval;
-+
-+		if (task_has_rt_policy(p))
-+			lp.sched_priority = p->rt_priority;
-+	}
-+
-+	/*
-+	 * This one might sleep, we cannot do it with a spinlock held ...
-+	 */
-+	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
-+}
-+
-+/*
-+ * Copy the kernel size attribute structure (which might be larger
-+ * than what user-space knows about) to user-space.
-+ *
-+ * Note that all cases are valid: user-space buffer can be larger or
-+ * smaller than the kernel-space buffer. The usual case is that both
-+ * have the same size.
-+ */
-+static int
-+sched_attr_copy_to_user(struct sched_attr __user *uattr,
-+			struct sched_attr *kattr,
-+			unsigned int usize)
-+{
-+	unsigned int ksize = sizeof(*kattr);
-+
-+	if (!access_ok(uattr, usize))
-+		return -EFAULT;
-+
-+	/*
-+	 * sched_getattr() ABI forwards and backwards compatibility:
-+	 *
-+	 * If usize == ksize then we just copy everything to user-space and all is good.
-+	 *
-+	 * If usize < ksize then we only copy as much as user-space has space for,
-+	 * this keeps ABI compatibility as well. We skip the rest.
-+	 *
-+	 * If usize > ksize then user-space is using a newer version of the ABI,
-+	 * which part the kernel doesn't know about. Just ignore it - tooling can
-+	 * detect the kernel's knowledge of attributes from the attr->size value
-+	 * which is set to ksize in this case.
-+	 */
-+	kattr->size = min(usize, ksize);
-+
-+	if (copy_to_user(uattr, kattr, kattr->size))
-+		return -EFAULT;
-+
-+	return 0;
-+}
-+
-+/**
-+ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
-+ * @pid: the pid in question.
-+ * @uattr: structure containing the extended parameters.
-+ * @usize: sizeof(attr) for fwd/bwd comp.
-+ * @flags: for future extension.
-+ */
-+SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
-+		unsigned int, usize, unsigned int, flags)
-+{
-+	struct sched_attr kattr = { };
-+	struct task_struct *p;
-+	int retval;
-+
-+	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
-+	    usize < SCHED_ATTR_SIZE_VER0 || flags)
-+		return -EINVAL;
-+
-+	scoped_guard (rcu) {
-+		p = find_process_by_pid(pid);
-+		if (!p)
-+			return -ESRCH;
-+
-+		retval = security_task_getscheduler(p);
-+		if (retval)
-+			return retval;
-+
-+		kattr.sched_policy = p->policy;
-+		if (p->sched_reset_on_fork)
-+			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
-+		get_params(p, &kattr);
-+		kattr.sched_flags &= SCHED_FLAG_ALL;
-+
-+#ifdef CONFIG_UCLAMP_TASK
-+		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
-+		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
-+#endif
-+	}
-+
-+	return sched_attr_copy_to_user(uattr, &kattr, usize);
-+}
-+
-+#ifdef CONFIG_SMP
-+int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
-+{
-+	return 0;
-+}
-+#endif
-+
-+static int
-+__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
-+{
-+	int retval;
-+	cpumask_var_t cpus_allowed, new_mask;
-+
-+	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
-+		retval = -ENOMEM;
-+		goto out_free_cpus_allowed;
-+	}
-+
-+	cpuset_cpus_allowed(p, cpus_allowed);
-+	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
-+
-+	ctx->new_mask = new_mask;
-+	ctx->flags |= SCA_CHECK;
-+
-+	retval = __set_cpus_allowed_ptr(p, ctx);
-+	if (retval)
-+		goto out_free_new_mask;
-+
-+	cpuset_cpus_allowed(p, cpus_allowed);
-+	if (!cpumask_subset(new_mask, cpus_allowed)) {
-+		/*
-+		 * We must have raced with a concurrent cpuset
-+		 * update. Just reset the cpus_allowed to the
-+		 * cpuset's cpus_allowed
-+		 */
-+		cpumask_copy(new_mask, cpus_allowed);
-+
-+		/*
-+		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
-+		 * will restore the previous user_cpus_ptr value.
-+		 *
-+		 * In the unlikely event a previous user_cpus_ptr exists,
-+		 * we need to further restrict the mask to what is allowed
-+		 * by that old user_cpus_ptr.
-+		 */
-+		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
-+			bool empty = !cpumask_and(new_mask, new_mask,
-+						  ctx->user_mask);
-+
-+			if (WARN_ON_ONCE(empty))
-+				cpumask_copy(new_mask, cpus_allowed);
-+		}
-+		__set_cpus_allowed_ptr(p, ctx);
-+		retval = -EINVAL;
-+	}
-+
-+out_free_new_mask:
-+	free_cpumask_var(new_mask);
-+out_free_cpus_allowed:
-+	free_cpumask_var(cpus_allowed);
-+	return retval;
-+}
-+
-+long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
-+{
-+	struct affinity_context ac;
-+	struct cpumask *user_mask;
-+	int retval;
-+
-+	CLASS(find_get_task, p)(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	if (p->flags & PF_NO_SETAFFINITY)
-+		return -EINVAL;
-+
-+	if (!check_same_owner(p)) {
-+		guard(rcu)();
-+		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
-+			return -EPERM;
-+	}
-+
-+	retval = security_task_setscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	/*
-+	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
-+	 * alloc_user_cpus_ptr() returns NULL.
-+	 */
-+	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
-+	if (user_mask) {
-+		cpumask_copy(user_mask, in_mask);
-+	} else if (IS_ENABLED(CONFIG_SMP)) {
-+		return -ENOMEM;
-+	}
-+
-+	ac = (struct affinity_context){
-+		.new_mask  = in_mask,
-+		.user_mask = user_mask,
-+		.flags     = SCA_USER,
-+	};
-+
-+	retval = __sched_setaffinity(p, &ac);
-+	kfree(ac.user_mask);
-+
-+	return retval;
-+}
-+
-+static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
-+			     struct cpumask *new_mask)
-+{
-+	if (len < cpumask_size())
-+		cpumask_clear(new_mask);
-+	else if (len > cpumask_size())
-+		len = cpumask_size();
-+
-+	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
-+}
-+
-+/**
-+ * sys_sched_setaffinity - set the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to the new CPU mask
-+ *
-+ * Return: 0 on success. An error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	cpumask_var_t new_mask;
-+	int retval;
-+
-+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
-+	if (retval == 0)
-+		retval = sched_setaffinity(pid, new_mask);
-+	free_cpumask_var(new_mask);
-+	return retval;
-+}
-+
-+long sched_getaffinity(pid_t pid, cpumask_t *mask)
-+{
-+	struct task_struct *p;
-+	int retval;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	guard(raw_spinlock_irqsave)(&p->pi_lock);
-+	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
-+
-+	return retval;
-+}
-+
-+/**
-+ * sys_sched_getaffinity - get the CPU affinity of a process
-+ * @pid: pid of the process
-+ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
-+ * @user_mask_ptr: user-space pointer to hold the current CPU mask
-+ *
-+ * Return: size of CPU mask copied to user_mask_ptr on success. An
-+ * error code otherwise.
-+ */
-+SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
-+		unsigned long __user *, user_mask_ptr)
-+{
-+	int ret;
-+	cpumask_var_t mask;
-+
-+	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
-+		return -EINVAL;
-+	if (len & (sizeof(unsigned long)-1))
-+		return -EINVAL;
-+
-+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
-+		return -ENOMEM;
-+
-+	ret = sched_getaffinity(pid, mask);
-+	if (ret == 0) {
-+		unsigned int retlen = min(len, cpumask_size());
-+
-+		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
-+			ret = -EFAULT;
-+		else
-+			ret = retlen;
-+	}
-+	free_cpumask_var(mask);
-+
-+	return ret;
-+}
-+
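-+/*
-+ * Example (illustrative): unlike the glibc wrapper, the raw syscall above
-+ * returns the number of mask bytes copied rather than 0 on success:
-+ *
-+ *	unsigned long bits[64];	// room for 4096 CPUs
-+ *	long ret = syscall(SYS_sched_getaffinity, 0, sizeof(bits), bits);
-+ *	// ret > 0: the first ret bytes of bits[] are valid
-+ */
-+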
-+static void do_sched_yield(void)
-+{
-+	struct rq *rq;
-+	struct rq_flags rf;
-+	struct task_struct *p;
-+
-+	if (!sched_yield_type)
-+		return;
-+
-+	rq = this_rq_lock_irq(&rf);
-+
-+	schedstat_inc(rq->yld_count);
-+
-+	p = current;
-+	if (rt_task(p)) {
-+		if (task_on_rq_queued(p))
-+			requeue_task(p, rq, task_sched_prio_idx(p, rq));
-+	} else if (rq->nr_running > 1) {
-+		if (1 == sched_yield_type) {
-+			do_sched_yield_type_1(p, rq);
-+			if (task_on_rq_queued(p))
-+				requeue_task(p, rq, task_sched_prio_idx(p, rq));
-+		} else if (2 == sched_yield_type) {
-+			rq->skip = p;
-+		}
-+	}
-+
-+	preempt_disable();
-+	raw_spin_unlock_irq(&rq->lock);
-+	sched_preempt_enable_no_resched();
-+
-+	schedule();
-+}
-+
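-+/*
-+ * Example: the behaviour of do_sched_yield() above is picked at runtime via
-+ * the yield_type sysctl, e.g.
-+ *
-+ *	sysctl kernel.yield_type=2
-+ *
-+ * leaves the caller queued and only marks it as the run queue's skip task,
-+ * while 0 turns sched_yield() into a no-op and 1 deboosts and requeues the
-+ * caller.
-+ */
-+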
-+/**
-+ * sys_sched_yield - yield the current processor to other threads.
-+ *
-+ * This function yields the current CPU to other tasks. If there are no
-+ * other threads running on this CPU then this function will return.
-+ *
-+ * Return: 0.
-+ */
-+SYSCALL_DEFINE0(sched_yield)
-+{
-+	do_sched_yield();
-+	return 0;
-+}
-+
-+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
-+int __sched __cond_resched(void)
-+{
-+	if (should_resched(0)) {
-+		preempt_schedule_common();
-+		return 1;
-+	}
-+	/*
-+	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
-+	 * whether the current CPU is in an RCU read-side critical section,
-+	 * so the tick can report quiescent states even for CPUs looping
-+	 * in kernel context.  In contrast, in non-preemptible kernels,
-+	 * RCU readers leave no in-memory hints, which means that CPU-bound
-+	 * processes executing in kernel context might never report an
-+	 * RCU quiescent state.  Therefore, the following code causes
-+	 * cond_resched() to report a quiescent state, but only when RCU
-+	 * is in urgent need of one.
-+	 */
-+#ifndef CONFIG_PREEMPT_RCU
-+	rcu_all_qs();
-+#endif
-+	return 0;
-+}
-+EXPORT_SYMBOL(__cond_resched);
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define cond_resched_dynamic_enabled	__cond_resched
-+#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(cond_resched);
-+
-+#define might_resched_dynamic_enabled	__cond_resched
-+#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(might_resched);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
-+int __sched dynamic_cond_resched(void)
-+{
-+	klp_sched_try_switch();
-+	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
-+		return 0;
-+	return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_cond_resched);
-+
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
-+int __sched dynamic_might_resched(void)
-+{
-+	if (!static_branch_unlikely(&sk_dynamic_might_resched))
-+		return 0;
-+	return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_might_resched);
-+#endif
-+#endif
-+
-+/*
-+ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
-+ * call schedule, and on return reacquire the lock.
-+ *
-+ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
-+ * operations here to prevent schedule() from being called twice (once via
-+ * spin_unlock(), once by hand).
-+ */
-+int __cond_resched_lock(spinlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held(lock);
-+
-+	if (spin_needbreak(lock) || resched) {
-+		spin_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		spin_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_lock);
-+
-+int __cond_resched_rwlock_read(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_read(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		read_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		read_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_read);
-+
-+int __cond_resched_rwlock_write(rwlock_t *lock)
-+{
-+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+	int ret = 0;
-+
-+	lockdep_assert_held_write(lock);
-+
-+	if (rwlock_needbreak(lock) || resched) {
-+		write_unlock(lock);
-+		if (!_cond_resched())
-+			cpu_relax();
-+		ret = 1;
-+		write_lock(lock);
-+	}
-+	return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_write);
-+
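-+/*
-+ * Usage sketch (tbl, tbl_lock and scan_bucket() are made-up names): the
-+ * lock-break helpers above let a long scan stay preemption-friendly without
-+ * open-coding the unlock/relock dance:
-+ *
-+ *	spin_lock(&tbl_lock);
-+ *	for (i = 0; i < nr_buckets; i++) {
-+ *		scan_bucket(&tbl[i]);
-+ *		cond_resched_lock(&tbl_lock);	// may drop and retake the lock
-+ *	}
-+ *	spin_unlock(&tbl_lock);
-+ *
-+ * Index-based iteration is used because the lock may be dropped mid-loop.
-+ */
-+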
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+
-+#ifdef CONFIG_GENERIC_ENTRY
-+#include <linux/entry-common.h>
-+#endif
-+
-+/*
-+ * SC:cond_resched
-+ * SC:might_resched
-+ * SC:preempt_schedule
-+ * SC:preempt_schedule_notrace
-+ * SC:irqentry_exit_cond_resched
-+ *
-+ *
-+ * NONE:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * VOLUNTARY:
-+ *   cond_resched               <- __cond_resched
-+ *   might_resched              <- __cond_resched
-+ *   preempt_schedule           <- NOP
-+ *   preempt_schedule_notrace   <- NOP
-+ *   irqentry_exit_cond_resched <- NOP
-+ *
-+ * FULL:
-+ *   cond_resched               <- RET0
-+ *   might_resched              <- RET0
-+ *   preempt_schedule           <- preempt_schedule
-+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
-+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
-+ */
-+
-+enum {
-+	preempt_dynamic_undefined = -1,
-+	preempt_dynamic_none,
-+	preempt_dynamic_voluntary,
-+	preempt_dynamic_full,
-+};
-+
-+int preempt_dynamic_mode = preempt_dynamic_undefined;
-+
-+int sched_dynamic_mode(const char *str)
-+{
-+	if (!strcmp(str, "none"))
-+		return preempt_dynamic_none;
-+
-+	if (!strcmp(str, "voluntary"))
-+		return preempt_dynamic_voluntary;
-+
-+	if (!strcmp(str, "full"))
-+		return preempt_dynamic_full;
-+
-+	return -EINVAL;
-+}
-+
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
-+#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
-+#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
-+#else
-+#error "Unsupported PREEMPT_DYNAMIC mechanism"
-+#endif
-+
-+static DEFINE_MUTEX(sched_dynamic_mutex);
-+static bool klp_override;
-+
-+static void __sched_dynamic_update(int mode)
-+{
-+	/*
-+	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
-+	 * the ZERO state, which is invalid.
-+	 */
-+	if (!klp_override)
-+		preempt_dynamic_enable(cond_resched);
-+	preempt_dynamic_enable(might_resched);
-+	preempt_dynamic_enable(preempt_schedule);
-+	preempt_dynamic_enable(preempt_schedule_notrace);
-+	preempt_dynamic_enable(irqentry_exit_cond_resched);
-+
-+	switch (mode) {
-+	case preempt_dynamic_none:
-+		if (!klp_override)
-+			preempt_dynamic_enable(cond_resched);
-+		preempt_dynamic_disable(might_resched);
-+		preempt_dynamic_disable(preempt_schedule);
-+		preempt_dynamic_disable(preempt_schedule_notrace);
-+		preempt_dynamic_disable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: none\n");
-+		break;
-+
-+	case preempt_dynamic_voluntary:
-+		if (!klp_override)
-+			preempt_dynamic_enable(cond_resched);
-+		preempt_dynamic_enable(might_resched);
-+		preempt_dynamic_disable(preempt_schedule);
-+		preempt_dynamic_disable(preempt_schedule_notrace);
-+		preempt_dynamic_disable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: voluntary\n");
-+		break;
-+
-+	case preempt_dynamic_full:
-+		if (!klp_override)
-+			preempt_dynamic_disable(cond_resched);
-+		preempt_dynamic_disable(might_resched);
-+		preempt_dynamic_enable(preempt_schedule);
-+		preempt_dynamic_enable(preempt_schedule_notrace);
-+		preempt_dynamic_enable(irqentry_exit_cond_resched);
-+		if (mode != preempt_dynamic_mode)
-+			pr_info("Dynamic Preempt: full\n");
-+		break;
-+	}
-+
-+	preempt_dynamic_mode = mode;
-+}
-+
-+void sched_dynamic_update(int mode)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+	__sched_dynamic_update(mode);
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
-+
-+static int klp_cond_resched(void)
-+{
-+	__klp_sched_try_switch();
-+	return __cond_resched();
-+}
-+
-+void sched_dynamic_klp_enable(void)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+
-+	klp_override = true;
-+	static_call_update(cond_resched, klp_cond_resched);
-+
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+void sched_dynamic_klp_disable(void)
-+{
-+	mutex_lock(&sched_dynamic_mutex);
-+
-+	klp_override = false;
-+	__sched_dynamic_update(preempt_dynamic_mode);
-+
-+	mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
-+
-+static int __init setup_preempt_mode(char *str)
-+{
-+	int mode = sched_dynamic_mode(str);
-+	if (mode < 0) {
-+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
-+		return 0;
-+	}
-+
-+	sched_dynamic_update(mode);
-+	return 1;
-+}
-+__setup("preempt=", setup_preempt_mode);
-+
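-+/*
-+ * Example: booting with "preempt=voluntary" runs setup_preempt_mode() ->
-+ * sched_dynamic_update(preempt_dynamic_voluntary), repointing the static
-+ * calls/keys above without rebuilding the kernel.
-+ */
-+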
-+static void __init preempt_dynamic_init(void)
-+{
-+	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
-+		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
-+			sched_dynamic_update(preempt_dynamic_none);
-+		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
-+			sched_dynamic_update(preempt_dynamic_voluntary);
-+		} else {
-+			/* Default static call setting, nothing to do */
-+			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
-+			preempt_dynamic_mode = preempt_dynamic_full;
-+			pr_info("Dynamic Preempt: full\n");
-+		}
-+	}
-+}
-+
-+#define PREEMPT_MODEL_ACCESSOR(mode) \
-+	bool preempt_model_##mode(void)						 \
-+	{									 \
-+		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
-+		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
-+	}									 \
-+	EXPORT_SYMBOL_GPL(preempt_model_##mode)
-+
-+PREEMPT_MODEL_ACCESSOR(none);
-+PREEMPT_MODEL_ACCESSOR(voluntary);
-+PREEMPT_MODEL_ACCESSOR(full);
-+
-+#else /* !CONFIG_PREEMPT_DYNAMIC */
-+
-+static inline void preempt_dynamic_init(void) { }
-+
-+#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
-+
-+/**
-+ * yield - yield the current processor to other threads.
-+ *
-+ * Do not ever use this function, there's a 99% chance you're doing it wrong.
-+ *
-+ * The scheduler is at all times free to pick the calling task as the most
-+ * eligible task to run, if removing the yield() call from your code breaks
-+ * it, it's already broken.
-+ *
-+ * Typical broken usage is:
-+ *
-+ * while (!event)
-+ * 	yield();
-+ *
-+ * where one assumes that yield() will let 'the other' process run that will
-+ * make event true. If the current task is a SCHED_FIFO task that will never
-+ * happen. Never use yield() as a progress guarantee!!
-+ *
-+ * If you want to use yield() to wait for something, use wait_event().
-+ * If you want to use yield() to be 'nice' for others, use cond_resched().
-+ * If you still want to use yield(), do not!
-+ */
-+void __sched yield(void)
-+{
-+	set_current_state(TASK_RUNNING);
-+	do_sched_yield();
-+}
-+EXPORT_SYMBOL(yield);
-+
-+/**
-+ * yield_to - yield the current processor to another thread in
-+ * your thread group, or accelerate that thread toward the
-+ * processor it's on.
-+ * @p: target task
-+ * @preempt: whether task preemption is allowed or not
-+ *
-+ * It's the caller's job to ensure that the target task struct
-+ * can't go away on us before we can do any checks.
-+ *
-+ * In Alt schedule FW, yield_to is not supported.
-+ *
-+ * Return:
-+ *	true (>0) if we indeed boosted the target task.
-+ *	false (0) if we failed to boost the target.
-+ *	-ESRCH if there's no task to yield to.
-+ */
-+int __sched yield_to(struct task_struct *p, bool preempt)
-+{
-+	return 0;
-+}
-+EXPORT_SYMBOL_GPL(yield_to);
-+
-+int io_schedule_prepare(void)
-+{
-+	int old_iowait = current->in_iowait;
-+
-+	current->in_iowait = 1;
-+	blk_flush_plug(current->plug, true);
-+	return old_iowait;
-+}
-+
-+void io_schedule_finish(int token)
-+{
-+	current->in_iowait = token;
-+}
-+
-+/*
-+ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
-+ * that process accounting knows that this is a task in IO wait state.
-+ *
-+ * But don't do that if it is a deliberate, throttling IO wait (this task
-+ * has set its backing_dev_info: the queue against which it should throttle)
-+ */
-+
-+long __sched io_schedule_timeout(long timeout)
-+{
-+	int token;
-+	long ret;
-+
-+	token = io_schedule_prepare();
-+	ret = schedule_timeout(timeout);
-+	io_schedule_finish(token);
-+
-+	return ret;
-+}
-+EXPORT_SYMBOL(io_schedule_timeout);
-+
-+void __sched io_schedule(void)
-+{
-+	int token;
-+
-+	token = io_schedule_prepare();
-+	schedule();
-+	io_schedule_finish(token);
-+}
-+EXPORT_SYMBOL(io_schedule);
-+
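-+/*
-+ * Usage sketch (dev->done is a hypothetical completion): the prepare/finish
-+ * pair above is how io-flavoured waits are composed when neither
-+ * io_schedule() nor io_schedule_timeout() fits directly:
-+ *
-+ *	int tok = io_schedule_prepare();
-+ *	wait_for_completion(&dev->done);
-+ *	io_schedule_finish(tok);
-+ */
-+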
-+/**
-+ * sys_sched_get_priority_max - return maximum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the maximum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = MAX_RT_PRIO - 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
-+
-+/**
-+ * sys_sched_get_priority_min - return minimum RT priority.
-+ * @policy: scheduling class.
-+ *
-+ * Return: On success, this syscall returns the minimum
-+ * rt_priority that can be used by a given scheduling class.
-+ * On failure, a negative error code is returned.
-+ */
-+SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
-+{
-+	int ret = -EINVAL;
-+
-+	switch (policy) {
-+	case SCHED_FIFO:
-+	case SCHED_RR:
-+		ret = 1;
-+		break;
-+	case SCHED_NORMAL:
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		ret = 0;
-+		break;
-+	}
-+	return ret;
-+}
-+
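-+/*
-+ * Example: with MAX_RT_PRIO == 100, user space sizing its RT range sees
-+ * sched_get_priority_min(SCHED_FIFO) == 1 and
-+ * sched_get_priority_max(SCHED_FIFO) == 99, while every non-RT policy
-+ * reports 0 for both bounds.
-+ */
-+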
-+static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
-+{
-+	struct task_struct *p;
-+	int retval;
-+
-+	alt_sched_debug();
-+
-+	if (pid < 0)
-+		return -EINVAL;
-+
-+	guard(rcu)();
-+	p = find_process_by_pid(pid);
-+	if (!p)
-+		return -ESRCH;
-+
-+	retval = security_task_getscheduler(p);
-+	if (retval)
-+		return retval;
-+
-+	*t = ns_to_timespec64(sysctl_sched_base_slice);
-+	return 0;
-+}
-+
-+/**
-+ * sys_sched_rr_get_interval - return the default timeslice of a process.
-+ * @pid: pid of the process.
-+ * @interval: userspace pointer to the timeslice value.
-+ *
-+ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
-+ * an error code.
-+ */
-+SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
-+		struct __kernel_timespec __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_timespec64(&t, interval);
-+
-+	return retval;
-+}
-+
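-+/*
-+ * Example: under this scheduler the reported slice is the uniform
-+ * sysctl_sched_base_slice regardless of policy, so
-+ *
-+ *	struct timespec ts;
-+ *	sched_rr_get_interval(0, &ts);
-+ *
-+ * yields the same value for SCHED_RR and SCHED_NORMAL callers alike.
-+ */
-+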
-+#ifdef CONFIG_COMPAT_32BIT_TIME
-+SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
-+		struct old_timespec32 __user *, interval)
-+{
-+	struct timespec64 t;
-+	int retval = sched_rr_get_interval(pid, &t);
-+
-+	if (retval == 0)
-+		retval = put_old_timespec32(&t, interval);
-+	return retval;
-+}
-+#endif
-+
-+void sched_show_task(struct task_struct *p)
-+{
-+	unsigned long free = 0;
-+	int ppid;
-+
-+	if (!try_get_task_stack(p))
-+		return;
-+
-+	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
-+
-+	if (task_is_running(p))
-+		pr_cont("  running task    ");
-+#ifdef CONFIG_DEBUG_STACK_USAGE
-+	free = stack_not_used(p);
-+#endif
-+	ppid = 0;
-+	rcu_read_lock();
-+	if (pid_alive(p))
-+		ppid = task_pid_nr(rcu_dereference(p->real_parent));
-+	rcu_read_unlock();
-+	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
-+		free, task_pid_nr(p), task_tgid_nr(p),
-+		ppid, read_task_thread_flags(p));
-+
-+	print_worker_info(KERN_INFO, p);
-+	print_stop_info(KERN_INFO, p);
-+	show_stack(p, NULL, KERN_INFO);
-+	put_task_stack(p);
-+}
-+EXPORT_SYMBOL_GPL(sched_show_task);
-+
-+static inline bool
-+state_filter_match(unsigned long state_filter, struct task_struct *p)
-+{
-+	unsigned int state = READ_ONCE(p->__state);
-+
-+	/* no filter, everything matches */
-+	if (!state_filter)
-+		return true;
-+
-+	/* filter, but doesn't match */
-+	if (!(state & state_filter))
-+		return false;
-+
-+	/*
-+	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
-+	 * TASK_KILLABLE).
-+	 */
-+	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
-+		return false;
-+
-+	return true;
-+}
-+
-+void show_state_filter(unsigned int state_filter)
-+{
-+	struct task_struct *g, *p;
-+
-+	rcu_read_lock();
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * reset the NMI-timeout, listing all files on a slow
-+		 * console might take a lot of time:
-+		 * Also, reset softlockup watchdogs on all CPUs, because
-+		 * another CPU might be blocked waiting for us to process
-+		 * an IPI.
-+		 */
-+		touch_nmi_watchdog();
-+		touch_all_softlockup_watchdogs();
-+		if (state_filter_match(state_filter, p))
-+			sched_show_task(p);
-+	}
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	/* TODO: Alt schedule FW should support this
-+	if (!state_filter)
-+		sysrq_sched_debug_show();
-+	*/
-+#endif
-+	rcu_read_unlock();
-+	/*
-+	 * Only show locks if all tasks are dumped:
-+	 */
-+	if (!state_filter)
-+		debug_show_all_locks();
-+}
-+
-+void dump_cpu_task(int cpu)
-+{
-+	if (cpu == smp_processor_id() && in_hardirq()) {
-+		struct pt_regs *regs;
-+
-+		regs = get_irq_regs();
-+		if (regs) {
-+			show_regs(regs);
-+			return;
-+		}
-+	}
-+
-+	if (trigger_single_cpu_backtrace(cpu))
-+		return;
-+
-+	pr_info("Task dump for CPU %d:\n", cpu);
-+	sched_show_task(cpu_curr(cpu));
-+}
-+
-+/**
-+ * init_idle - set up an idle thread for a given CPU
-+ * @idle: task in question
-+ * @cpu: CPU the idle task belongs to
-+ *
-+ * NOTE: this function does not set the idle thread's NEED_RESCHED
-+ * flag, to make booting more robust.
-+ */
-+void __init init_idle(struct task_struct *idle, int cpu)
-+{
-+#ifdef CONFIG_SMP
-+	struct affinity_context ac = (struct affinity_context) {
-+		.new_mask  = cpumask_of(cpu),
-+		.flags     = 0,
-+	};
-+#endif
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	__sched_fork(0, idle);
-+
-+	raw_spin_lock_irqsave(&idle->pi_lock, flags);
-+	raw_spin_lock(&rq->lock);
-+
-+	idle->last_ran = rq->clock_task;
-+	idle->__state = TASK_RUNNING;
-+	/*
-+	 * PF_KTHREAD should already be set at this point; regardless, make it
-+	 * look like a proper per-CPU kthread.
-+	 */
-+	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
-+	kthread_set_per_cpu(idle, cpu);
-+
-+	sched_queue_init_idle(&rq->queue, idle);
-+
-+#ifdef CONFIG_SMP
-+	/*
-+	 * It's possible that init_idle() gets called multiple times on a task;
-+	 * in that case do_set_cpus_allowed() will not do the right thing.
-+	 *
-+	 * And since this is boot we can forgo the serialisation.
-+	 */
-+	set_cpus_allowed_common(idle, &ac);
-+#endif
-+
-+	/* Silence PROVE_RCU */
-+	rcu_read_lock();
-+	__set_task_cpu(idle, cpu);
-+	rcu_read_unlock();
-+
-+	rq->idle = idle;
-+	rcu_assign_pointer(rq->curr, idle);
-+	idle->on_cpu = 1;
-+
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
-+
-+	/* Set the preempt count _outside_ the spinlocks! */
-+	init_idle_preempt_count(idle, cpu);
-+
-+	ftrace_graph_init_idle_task(idle, cpu);
-+	vtime_init_idle(idle, cpu);
-+#ifdef CONFIG_SMP
-+	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
-+			      const struct cpumask __maybe_unused *trial)
-+{
-+	return 1;
-+}
-+
-+int task_can_attach(struct task_struct *p)
-+{
-+	int ret = 0;
-+
-+	/*
-+	 * Kthreads which disallow setaffinity shouldn't be moved
-+	 * to a new cpuset; we don't want to change their CPU
-+	 * affinity and isolating such threads by their set of
-+	 * allowed nodes is unnecessary.  Thus, cpusets are not
-+	 * applicable for such threads.  This prevents checking for
-+	 * success of set_cpus_allowed_ptr() on all attached tasks
-+	 * before cpus_mask may be changed.
-+	 */
-+	if (p->flags & PF_NO_SETAFFINITY)
-+		ret = -EINVAL;
-+
-+	return ret;
-+}
-+
-+bool sched_smp_initialized __read_mostly;
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+/*
-+ * Ensures that the idle task is using init_mm right before its CPU goes
-+ * offline.
-+ */
-+void idle_task_exit(void)
-+{
-+	struct mm_struct *mm = current->active_mm;
-+
-+	BUG_ON(current != this_rq()->idle);
-+
-+	if (mm != &init_mm) {
-+		switch_mm(mm, &init_mm, current);
-+		finish_arch_post_lock_switch();
-+	}
-+
-+	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
-+}
-+
-+static int __balance_push_cpu_stop(void *arg)
-+{
-+	struct task_struct *p = arg;
-+	struct rq *rq = this_rq();
-+	struct rq_flags rf;
-+	int cpu;
-+
-+	raw_spin_lock_irq(&p->pi_lock);
-+	rq_lock(rq, &rf);
-+
-+	update_rq_clock(rq);
-+
-+	if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+		cpu = select_fallback_rq(rq->cpu, p);
-+		rq = __migrate_task(rq, p, cpu);
-+	}
-+
-+	rq_unlock(rq, &rf);
-+	raw_spin_unlock_irq(&p->pi_lock);
-+
-+	put_task_struct(p);
-+
-+	return 0;
-+}
-+
-+static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
-+
-+/*
-+ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but it
-+ * only takes effect while the hotplug direction is down.
-+ */
-+static void balance_push(struct rq *rq)
-+{
-+	struct task_struct *push_task = rq->curr;
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	/*
-+	 * Ensure the thing is persistent until balance_push_set(.on = false);
-+	 */
-+	rq->balance_callback = &balance_push_callback;
-+
-+	/*
-+	 * Only active while going offline and when invoked on the outgoing
-+	 * CPU.
-+	 */
-+	if (!cpu_dying(rq->cpu) || rq != this_rq())
-+		return;
-+
-+	/*
-+	 * Both the cpu-hotplug and stop task are in this case and are
-+	 * required to complete the hotplug process.
-+	 */
-+	if (kthread_is_per_cpu(push_task) ||
-+	    is_migration_disabled(push_task)) {
-+
-+		/*
-+		 * If this is the idle task on the outgoing CPU try to wake
-+		 * up the hotplug control thread which might wait for the
-+		 * last task to vanish. The rcuwait_active() check is
-+		 * accurate here because the waiter is pinned on this CPU
-+		 * and can't obviously be running in parallel.
-+		 *
-+		 * On RT kernels this also has to check whether there are
-+		 * pinned and scheduled out tasks on the runqueue. They
-+		 * need to leave the migrate disabled section first.
-+		 */
-+		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
-+		    rcuwait_active(&rq->hotplug_wait)) {
-+			raw_spin_unlock(&rq->lock);
-+			rcuwait_wake_up(&rq->hotplug_wait);
-+			raw_spin_lock(&rq->lock);
-+		}
-+		return;
-+	}
-+
-+	get_task_struct(push_task);
-+	/*
-+	 * Temporarily drop rq->lock such that we can wake-up the stop task.
-+	 * Both preemption and IRQs are still disabled.
-+	 */
-+	preempt_disable();
-+	raw_spin_unlock(&rq->lock);
-+	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
-+			    this_cpu_ptr(&push_work));
-+	preempt_enable();
-+	/*
-+	 * At this point need_resched() is true and we'll take the loop in
-+	 * schedule(). The next pick is obviously going to be the stop task
-+	 * which kthread_is_per_cpu() and will push this task away.
-+	 */
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct rq_flags rf;
-+
-+	rq_lock_irqsave(rq, &rf);
-+	if (on) {
-+		WARN_ON_ONCE(rq->balance_callback);
-+		rq->balance_callback = &balance_push_callback;
-+	} else if (rq->balance_callback == &balance_push_callback) {
-+		rq->balance_callback = NULL;
-+	}
-+	rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Invoked from a CPU's hotplug control thread after the CPU has been marked
-+ * inactive. All tasks which are not per CPU kernel threads are either
-+ * pushed off this CPU now via balance_push() or placed on a different CPU
-+ * during wakeup. Wait until the CPU is quiescent.
-+ */
-+static void balance_hotplug_wait(void)
-+{
-+	struct rq *rq = this_rq();
-+
-+	rcuwait_wait_event(&rq->hotplug_wait,
-+			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
-+			   TASK_UNINTERRUPTIBLE);
-+}
-+
-+#else
-+
-+static void balance_push(struct rq *rq)
-+{
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+}
-+
-+static inline void balance_hotplug_wait(void)
-+{
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+static void set_rq_offline(struct rq *rq)
-+{
-+	if (rq->online) {
-+		update_rq_clock(rq);
-+		rq->online = false;
-+	}
-+}
-+
-+static void set_rq_online(struct rq *rq)
-+{
-+	if (!rq->online)
-+		rq->online = true;
-+}
-+
-+/*
-+ * used to mark begin/end of suspend/resume:
-+ */
-+static int num_cpus_frozen;
-+
-+/*
-+ * Update cpusets according to cpu_active mask.  If cpusets are
-+ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
-+ * around partition_sched_domains().
-+ *
-+ * If we come here as part of a suspend/resume, don't touch cpusets because we
-+ * want to restore it back to its original state upon resume anyway.
-+ */
-+static void cpuset_cpu_active(void)
-+{
-+	if (cpuhp_tasks_frozen) {
-+		/*
-+		 * num_cpus_frozen tracks how many CPUs are involved in the
-+		 * suspend/resume sequence. As long as this is not the last online
-+		 * operation in the resume sequence, just build a single sched
-+		 * domain, ignoring cpusets.
-+		 */
-+		partition_sched_domains(1, NULL, NULL);
-+		if (--num_cpus_frozen)
-+			return;
-+		/*
-+		 * This is the last CPU online operation. So fall through and
-+		 * restore the original sched domains by considering the
-+		 * cpuset configurations.
-+		 */
-+		cpuset_force_rebuild();
-+	}
-+
-+	cpuset_update_active_cpus();
-+}
-+
-+static int cpuset_cpu_inactive(unsigned int cpu)
-+{
-+	if (!cpuhp_tasks_frozen) {
-+		cpuset_update_active_cpus();
-+	} else {
-+		num_cpus_frozen++;
-+		partition_sched_domains(1, NULL, NULL);
-+	}
-+	return 0;
-+}
-+
-+int sched_cpu_activate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/*
-+	 * Clear the balance_push callback and prepare to schedule
-+	 * regular tasks.
-+	 */
-+	balance_push_set(cpu, false);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going up, increment the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
-+		static_branch_inc_cpuslocked(&sched_smt_present);
-+#endif
-+	set_cpu_active(cpu, true);
-+
-+	if (sched_smp_initialized)
-+		cpuset_cpu_active();
-+
-+	/*
-+	 * Put the rq online, if not already. This happens:
-+	 *
-+	 * 1) In the early boot process, because we build the real domains
-+	 *    after all cpus have been brought up.
-+	 *
-+	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
-+	 *    domains.
-+	 */
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	set_rq_online(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	return 0;
-+}
-+
-+int sched_cpu_deactivate(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+	int ret;
-+
-+	set_cpu_active(cpu, false);
-+
-+	/*
-+	 * From this point forward, this CPU will refuse to run any task that
-+	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
-+	 * push those tasks away until this gets cleared, see
-+	 * sched_cpu_dying().
-+	 */
-+	balance_push_set(cpu, true);
-+
-+	/*
-+	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
-+	 * users of this state to go away such that all new such users will
-+	 * observe it.
-+	 *
-+	 * Specifically, we rely on ttwu to no longer target this CPU, see
-+	 * ttwu_queue_cond() and is_cpu_allowed().
-+	 *
-+	 * Do the sync before parking smpboot threads to take care of the RCU boost case.
-+	 */
-+	synchronize_rcu();
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	set_rq_offline(rq);
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+#ifdef CONFIG_SCHED_SMT
-+	/*
-+	 * When going down, decrement the number of cores with SMT present.
-+	 */
-+	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
-+		static_branch_dec_cpuslocked(&sched_smt_present);
-+		if (!static_branch_likely(&sched_smt_present))
-+			cpumask_clear(&sched_sg_idle_mask);
-+	}
-+#endif
-+
-+	if (!sched_smp_initialized)
-+		return 0;
-+
-+	ret = cpuset_cpu_inactive(cpu);
-+	if (ret) {
-+		balance_push_set(cpu, false);
-+		set_cpu_active(cpu, true);
-+		return ret;
-+	}
-+
-+	return 0;
-+}
-+
-+static void sched_rq_cpu_starting(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+
-+	rq->calc_load_update = calc_load_update;
-+}
-+
-+int sched_cpu_starting(unsigned int cpu)
-+{
-+	sched_rq_cpu_starting(cpu);
-+	sched_tick_start(cpu);
-+	return 0;
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+
-+/*
-+ * Invoked immediately before the stopper thread is invoked to bring the
-+ * CPU down completely. At this point all per CPU kthreads except the
-+ * hotplug thread (current) and the stopper thread (inactive) have been
-+ * either parked or have been unbound from the outgoing CPU. Ensure that
-+ * any of those which might be on the way out are gone.
-+ *
-+ * If after this point a bound task is being woken on this CPU then the
-+ * responsible hotplug callback has failed to do its job.
-+ * sched_cpu_dying() will catch it with the appropriate fireworks.
-+ */
-+int sched_cpu_wait_empty(unsigned int cpu)
-+{
-+	balance_hotplug_wait();
-+	return 0;
-+}
-+
-+/*
-+ * Since this CPU is going 'away' for a while, fold any nr_active delta we
-+ * might have. Called from the CPU stopper task after ensuring that the
-+ * stopper is the last running task on the CPU, so nr_active count is
-+ * stable. We need to take the teardown thread which is calling this into
-+ * account, so we hand in adjust = 1 to the load calculation.
-+ *
-+ * Also see the comment "Global load-average calculations".
-+ */
-+static void calc_load_migrate(struct rq *rq)
-+{
-+	long delta = calc_load_fold_active(rq, 1);
-+
-+	if (delta)
-+		atomic_long_add(delta, &calc_load_tasks);
-+}
-+
-+static void dump_rq_tasks(struct rq *rq, const char *loglvl)
-+{
-+	struct task_struct *g, *p;
-+	int cpu = cpu_of(rq);
-+
-+	lockdep_assert_held(&rq->lock);
-+
-+	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
-+	for_each_process_thread(g, p) {
-+		if (task_cpu(p) != cpu)
-+			continue;
-+
-+		if (!task_on_rq_queued(p))
-+			continue;
-+
-+		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
-+	}
-+}
-+
-+int sched_cpu_dying(unsigned int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	unsigned long flags;
-+
-+	/* Handle pending wakeups and then migrate everything off */
-+	sched_tick_stop(cpu);
-+
-+	raw_spin_lock_irqsave(&rq->lock, flags);
-+	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
-+		WARN(true, "Dying CPU not properly vacated!");
-+		dump_rq_tasks(rq, KERN_WARNING);
-+	}
-+	raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+	calc_load_migrate(rq);
-+	hrtick_clear(rq);
-+	return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_SMP
-+static void sched_init_topology_cpumask_early(void)
-+{
-+	int cpu;
-+	cpumask_t *tmp;
-+
-+	for_each_possible_cpu(cpu) {
-+		/* init topo masks */
-+		tmp = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+		cpumask_copy(tmp, cpumask_of(cpu));
-+		tmp++;
-+		cpumask_copy(tmp, cpu_possible_mask);
-+		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
-+		/*per_cpu(sd_llc_id, cpu) = cpu;*/
-+	}
-+}
-+
-+#define TOPOLOGY_CPUMASK(name, mask, last)\
-+	if (cpumask_and(topo, topo, mask)) {					\
-+		cpumask_copy(topo, mask);					\
-+		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
-+		       cpu, (topo++)->bits[0]);					\
-+	}									\
-+	if (!last)								\
-+		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
-+				  nr_cpumask_bits);
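-+
-+/*
-+ * Illustrative expansion of the macro above (a sketch, not part of the
-+ * original submission): on a hypothetical 4-CPU box with two SMT threads
-+ * per core and a single LLC, cpu#0's per-cpu mask array ends up as
-+ *
-+ *   topo[0] = { 0 }           (cpumask_of(cpu), filled in early init)
-+ *   topo[1] = { 0, 1 }        (SMT siblings)
-+ *   topo[2] = { 0, 1, 2, 3 }  (coregroup / LLC)
-+ *
-+ * Each TOPOLOGY_CPUMASK() step intersects the complement of the previous
-+ * level with the next topology mask and records a new entry only when
-+ * that level actually adds CPUs.
-+ */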
-+
-+static void sched_init_topology_cpumask(void)
-+{
-+	int cpu;
-+	cpumask_t *topo;
-+
-+	for_each_online_cpu(cpu) {
-+		/* take the chance to reset the time slice for idle tasks */
-+		cpu_rq(cpu)->idle->time_slice = sysctl_sched_base_slice;
-+
-+		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
-+
-+		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
-+				  nr_cpumask_bits);
-+#ifdef CONFIG_SCHED_SMT
-+		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
-+#endif
-+		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
-+		per_cpu(sched_cpu_llc_mask, cpu) = topo;
-+		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
-+
-+		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
-+
-+		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
-+		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
-+		       cpu, per_cpu(sd_llc_id, cpu),
-+		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
-+			      per_cpu(sched_cpu_topo_masks, cpu)));
-+	}
-+}
-+#endif
-+
-+void __init sched_init_smp(void)
-+{
-+	/* Move init over to a non-isolated CPU */
-+	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
-+		BUG();
-+	current->flags &= ~PF_NO_SETAFFINITY;
-+
-+	sched_init_topology_cpumask();
-+
-+	sched_smp_initialized = true;
-+}
-+
-+static int __init migration_init(void)
-+{
-+	sched_cpu_starting(smp_processor_id());
-+	return 0;
-+}
-+early_initcall(migration_init);
-+
-+#else
-+void __init sched_init_smp(void)
-+{
-+	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
-+}
-+#endif /* CONFIG_SMP */
-+
-+int in_sched_functions(unsigned long addr)
-+{
-+	return in_lock_functions(addr) ||
-+		(addr >= (unsigned long)__sched_text_start
-+		&& addr < (unsigned long)__sched_text_end);
-+}
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/* task group related information */
-+struct task_group {
-+	struct cgroup_subsys_state css;
-+
-+	struct rcu_head rcu;
-+	struct list_head list;
-+
-+	struct task_group *parent;
-+	struct list_head siblings;
-+	struct list_head children;
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	unsigned long		shares;
-+#endif
-+};
-+
-+/*
-+ * Default task group.
-+ * Every task in the system belongs to this group at bootup.
-+ */
-+struct task_group root_task_group;
-+LIST_HEAD(task_groups);
-+
-+/* Cacheline aligned slab cache for task_group */
-+static struct kmem_cache *task_group_cache __ro_after_init;
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+void __init sched_init(void)
-+{
-+	int i;
-+	struct rq *rq;
-+
-+	printk(KERN_INFO "sched/alt: " ALT_SCHED_NAME " CPU Scheduler "
-+	       ALT_SCHED_VERSION " by Alfred Chen.\n");
-+
-+	wait_bit_init();
-+
-+#ifdef CONFIG_SMP
-+	for (i = 0; i < SCHED_QUEUE_BITS; i++)
-+		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
-+#endif
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+	task_group_cache = KMEM_CACHE(task_group, 0);
-+
-+	list_add(&root_task_group.list, &task_groups);
-+	INIT_LIST_HEAD(&root_task_group.children);
-+	INIT_LIST_HEAD(&root_task_group.siblings);
-+#endif /* CONFIG_CGROUP_SCHED */
-+	for_each_possible_cpu(i) {
-+		rq = cpu_rq(i);
-+
-+		sched_queue_init(&rq->queue);
-+		rq->prio = IDLE_TASK_SCHED_PRIO;
-+		rq->skip = NULL;
-+
-+		raw_spin_lock_init(&rq->lock);
-+		rq->nr_running = rq->nr_uninterruptible = 0;
-+		rq->calc_load_active = 0;
-+		rq->calc_load_update = jiffies + LOAD_FREQ;
-+#ifdef CONFIG_SMP
-+		rq->online = false;
-+		rq->cpu = i;
-+
-+#ifdef CONFIG_SCHED_SMT
-+		rq->active_balance = 0;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
-+#endif
-+		rq->balance_callback = &balance_push_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+		rcuwait_init(&rq->hotplug_wait);
-+#endif
-+#endif /* CONFIG_SMP */
-+		rq->nr_switches = 0;
-+
-+		hrtick_rq_init(rq);
-+		atomic_set(&rq->nr_iowait, 0);
-+
-+		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
-+	}
-+#ifdef CONFIG_SMP
-+	/* Set rq->online for cpu 0 */
-+	cpu_rq(0)->online = true;
-+#endif
-+	/*
-+	 * The boot idle thread does lazy MMU switching as well:
-+	 */
-+	mmgrab(&init_mm);
-+	enter_lazy_tlb(&init_mm, current);
-+
-+	/*
-+	 * The idle task doesn't need the kthread struct to function, but it
-+	 * is dressed up as a per-CPU kthread and thus needs to play the part
-+	 * if we want to avoid special-casing it in code that deals with per-CPU
-+	 * kthreads.
-+	 */
-+	WARN_ON(!set_kthread_struct(current));
-+
-+	/*
-+	 * Make us the idle thread. Technically, schedule() should not be
-+	 * called from this thread, however somewhere below it might be,
-+	 * but because we are the idle thread, we just pick up running again
-+	 * when this runqueue becomes "idle".
-+	 */
-+	init_idle(current, smp_processor_id());
-+
-+	calc_load_update = jiffies + LOAD_FREQ;
-+
-+#ifdef CONFIG_SMP
-+	idle_thread_set_boot_cpu();
-+	balance_push_set(smp_processor_id(), false);
-+
-+	sched_init_topology_cpumask_early();
-+#endif /* CONFIG_SMP */
-+
-+	preempt_dynamic_init();
-+}
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+
-+void __might_sleep(const char *file, int line)
-+{
-+	unsigned int state = get_current_state();
-+	/*
-+	 * Blocking primitives will set (and therefore destroy) current->state;
-+	 * since we will exit with TASK_RUNNING, make sure we enter with it,
-+	 * otherwise we will destroy state.
-+	 */
-+	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
-+			"do not call blocking ops when !TASK_RUNNING; "
-+			"state=%x set at [<%p>] %pS\n", state,
-+			(void *)current->task_state_change,
-+			(void *)current->task_state_change);
-+
-+	__might_resched(file, line, 0);
-+}
-+EXPORT_SYMBOL(__might_sleep);
-+
-+static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
-+{
-+	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
-+		return;
-+
-+	if (preempt_count() == preempt_offset)
-+		return;
-+
-+	pr_err("Preemption disabled at:");
-+	print_ip_sym(KERN_ERR, ip);
-+}
-+
-+static inline bool resched_offsets_ok(unsigned int offsets)
-+{
-+	unsigned int nested = preempt_count();
-+
-+	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
-+
-+	return nested == offsets;
-+}
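-+
-+/*
-+ * Example encoding (illustrative, assuming MIGHT_RESCHED_RCU_SHIFT == 8
-+ * as defined elsewhere in core.c): a caller holding one RCU read lock
-+ * and one preempt_disable() would pass offsets == (1 << 8) | 1, which
-+ * matches the 'nested' value computed above.
-+ */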
-+
-+void __might_resched(const char *file, int line, unsigned int offsets)
-+{
-+	/* Ratelimiting timestamp: */
-+	static unsigned long prev_jiffy;
-+
-+	unsigned long preempt_disable_ip;
-+
-+	/* WARN_ON_ONCE() by default, no rate limit required: */
-+	rcu_sleep_check();
-+
-+	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
-+	     !is_idle_task(current) && !current->non_block_count) ||
-+	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
-+	    oops_in_progress)
-+		return;
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	/* Save this before calling printk(), since that will clobber it: */
-+	preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
-+	       file, line);
-+	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
-+	       in_atomic(), irqs_disabled(), current->non_block_count,
-+	       current->pid, current->comm);
-+	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
-+	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
-+
-+	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
-+		pr_err("RCU nest depth: %d, expected: %u\n",
-+		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
-+	}
-+
-+	if (task_stack_end_corrupted(current))
-+		pr_emerg("Thread overran stack, or stack corrupted\n");
-+
-+	debug_show_held_locks(current);
-+	if (irqs_disabled())
-+		print_irqtrace_events(current);
-+
-+	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
-+				 preempt_disable_ip);
-+
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL(__might_resched);
-+
-+void __cant_sleep(const char *file, int line, int preempt_offset)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > preempt_offset)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
-+	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
-+			in_atomic(), irqs_disabled(),
-+			current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_sleep);
-+
-+#ifdef CONFIG_SMP
-+void __cant_migrate(const char *file, int line)
-+{
-+	static unsigned long prev_jiffy;
-+
-+	if (irqs_disabled())
-+		return;
-+
-+	if (is_migration_disabled(current))
-+		return;
-+
-+	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+		return;
-+
-+	if (preempt_count() > 0)
-+		return;
-+
-+	if (current->migration_flags & MDF_FORCE_ENABLED)
-+		return;
-+
-+	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+		return;
-+	prev_jiffy = jiffies;
-+
-+	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
-+	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
-+	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
-+	       current->pid, current->comm);
-+
-+	debug_show_held_locks(current);
-+	dump_stack();
-+	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_migrate);
-+#endif
-+#endif
-+
-+#ifdef CONFIG_MAGIC_SYSRQ
-+void normalize_rt_tasks(void)
-+{
-+	struct task_struct *g, *p;
-+	struct sched_attr attr = {
-+		.sched_policy = SCHED_NORMAL,
-+	};
-+
-+	read_lock(&tasklist_lock);
-+	for_each_process_thread(g, p) {
-+		/*
-+		 * Only normalize user tasks:
-+		 */
-+		if (p->flags & PF_KTHREAD)
-+			continue;
-+
-+		schedstat_set(p->stats.wait_start,  0);
-+		schedstat_set(p->stats.sleep_start, 0);
-+		schedstat_set(p->stats.block_start, 0);
-+
-+		if (!rt_task(p)) {
-+			/*
-+			 * Renice negative nice level userspace
-+			 * tasks back to 0:
-+			 */
-+			if (task_nice(p) < 0)
-+				set_user_nice(p, 0);
-+			continue;
-+		}
-+
-+		__sched_setscheduler(p, &attr, false, false);
-+	}
-+	read_unlock(&tasklist_lock);
-+}
-+#endif /* CONFIG_MAGIC_SYSRQ */
-+
-+#if defined(CONFIG_KGDB_KDB)
-+/*
-+ * These functions are only useful for kdb.
-+ *
-+ * They can only be called when the whole system has been
-+ * stopped - every CPU needs to be quiescent, and no scheduling
-+ * activity can take place. Using them for anything else would
-+ * be a serious bug, and as a result, they aren't even visible
-+ * under any other configuration.
-+ */
-+
-+/**
-+ * curr_task - return the current task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
-+ *
-+ * Return: The current task for @cpu.
-+ */
-+struct task_struct *curr_task(int cpu)
-+{
-+	return cpu_curr(cpu);
-+}
-+
-+#endif /* defined(CONFIG_KGDB_KDB) */
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+static void sched_free_group(struct task_group *tg)
-+{
-+	kmem_cache_free(task_group_cache, tg);
-+}
-+
-+static void sched_free_group_rcu(struct rcu_head *rhp)
-+{
-+	sched_free_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+static void sched_unregister_group(struct task_group *tg)
-+{
-+	/*
-+	 * We have to wait for yet another RCU grace period to expire, as
-+	 * print_cfs_stats() might run concurrently.
-+	 */
-+	call_rcu(&tg->rcu, sched_free_group_rcu);
-+}
-+
-+/* allocate runqueue etc for a new task group */
-+struct task_group *sched_create_group(struct task_group *parent)
-+{
-+	struct task_group *tg;
-+
-+	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
-+	if (!tg)
-+		return ERR_PTR(-ENOMEM);
-+
-+	return tg;
-+}
-+
-+void sched_online_group(struct task_group *tg, struct task_group *parent)
-+{
-+}
-+
-+/* rcu callback to free various structures associated with a task group */
-+static void sched_unregister_group_rcu(struct rcu_head *rhp)
-+{
-+	/* Now it should be safe to free those cfs_rqs: */
-+	sched_unregister_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+void sched_destroy_group(struct task_group *tg)
-+{
-+	/* Wait for possible concurrent references to cfs_rqs to complete: */
-+	call_rcu(&tg->rcu, sched_unregister_group_rcu);
-+}
-+
-+void sched_release_group(struct task_group *tg)
-+{
-+}
-+
-+static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
-+{
-+	return css ? container_of(css, struct task_group, css) : NULL;
-+}
-+
-+static struct cgroup_subsys_state *
-+cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
-+{
-+	struct task_group *parent = css_tg(parent_css);
-+	struct task_group *tg;
-+
-+	if (!parent) {
-+		/* This is early initialization for the top cgroup */
-+		return &root_task_group.css;
-+	}
-+
-+	tg = sched_create_group(parent);
-+	if (IS_ERR(tg))
-+		return ERR_PTR(-ENOMEM);
-+	return &tg->css;
-+}
-+
-+/* Expose task group only after completing cgroup initialization */
-+static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+	struct task_group *parent = css_tg(css->parent);
-+
-+	if (parent)
-+		sched_online_group(tg, parent);
-+	return 0;
-+}
-+
-+static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	sched_release_group(tg);
-+}
-+
-+static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	/*
-+	 * Relies on the RCU grace period between css_released() and this.
-+	 */
-+	sched_unregister_group(tg);
-+}
-+
-+#ifdef CONFIG_RT_GROUP_SCHED
-+static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
-+{
-+	return 0;
-+}
-+#endif
-+
-+static void cpu_cgroup_attach(struct cgroup_taskset *tset)
-+{
-+}
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+static DEFINE_MUTEX(shares_mutex);
-+
-+int sched_group_set_shares(struct task_group *tg, unsigned long shares)
-+{
-+	/*
-+	 * We can't change the weight of the root cgroup.
-+	 */
-+	if (&root_task_group == tg)
-+		return -EINVAL;
-+
-+	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
-+
-+	mutex_lock(&shares_mutex);
-+	if (tg->shares == shares)
-+		goto done;
-+
-+	tg->shares = shares;
-+done:
-+	mutex_unlock(&shares_mutex);
-+	return 0;
-+}
-+
-+static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
-+				struct cftype *cftype, u64 shareval)
-+{
-+	if (shareval > scale_load_down(ULONG_MAX))
-+		shareval = MAX_SHARES;
-+	return sched_group_set_shares(css_tg(css), scale_load(shareval));
-+}
-+
-+static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	struct task_group *tg = css_tg(css);
-+
-+	return (u64) scale_load_down(tg->shares);
-+}
-+#endif
-+
-+static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
-+				  struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
-+				   struct cftype *cftype, s64 cfs_quota_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
-+				   struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
-+				    struct cftype *cftype, u64 cfs_period_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
-+				  struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
-+				   struct cftype *cftype, u64 cfs_burst_us)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
-+				struct cftype *cft, s64 val)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
-+				    struct cftype *cftype, u64 rt_period_us)
-+{
-+	return 0;
-+}
-+
-+static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
-+				   struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
-+				    char *buf, size_t nbytes,
-+				    loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
-+				    char *buf, size_t nbytes,
-+				    loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static struct cftype cpu_legacy_files[] = {
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+	{
-+		.name = "shares",
-+		.read_u64 = cpu_shares_read_u64,
-+		.write_u64 = cpu_shares_write_u64,
-+	},
-+#endif
-+	{
-+		.name = "cfs_quota_us",
-+		.read_s64 = cpu_cfs_quota_read_s64,
-+		.write_s64 = cpu_cfs_quota_write_s64,
-+	},
-+	{
-+		.name = "cfs_period_us",
-+		.read_u64 = cpu_cfs_period_read_u64,
-+		.write_u64 = cpu_cfs_period_write_u64,
-+	},
-+	{
-+		.name = "cfs_burst_us",
-+		.read_u64 = cpu_cfs_burst_read_u64,
-+		.write_u64 = cpu_cfs_burst_write_u64,
-+	},
-+	{
-+		.name = "stat",
-+		.seq_show = cpu_cfs_stat_show,
-+	},
-+	{
-+		.name = "stat.local",
-+		.seq_show = cpu_cfs_local_stat_show,
-+	},
-+	{
-+		.name = "rt_runtime_us",
-+		.read_s64 = cpu_rt_runtime_read,
-+		.write_s64 = cpu_rt_runtime_write,
-+	},
-+	{
-+		.name = "rt_period_us",
-+		.read_u64 = cpu_rt_period_read_uint,
-+		.write_u64 = cpu_rt_period_write_uint,
-+	},
-+	{
-+		.name = "uclamp.min",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_min_show,
-+		.write = cpu_uclamp_min_write,
-+	},
-+	{
-+		.name = "uclamp.max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_max_show,
-+		.write = cpu_uclamp_max_write,
-+	},
-+	{ }	/* Terminate */
-+};
-+
-+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
-+				struct cftype *cft, u64 weight)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
-+				    struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
-+				     struct cftype *cft, s64 nice)
-+{
-+	return 0;
-+}
-+
-+static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
-+			       struct cftype *cft)
-+{
-+	return 0;
-+}
-+
-+static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
-+				struct cftype *cft, s64 idle)
-+{
-+	return 0;
-+}
-+
-+static int cpu_max_show(struct seq_file *sf, void *v)
-+{
-+	return 0;
-+}
-+
-+static ssize_t cpu_max_write(struct kernfs_open_file *of,
-+			     char *buf, size_t nbytes, loff_t off)
-+{
-+	return nbytes;
-+}
-+
-+static struct cftype cpu_files[] = {
-+	{
-+		.name = "weight",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_u64 = cpu_weight_read_u64,
-+		.write_u64 = cpu_weight_write_u64,
-+	},
-+	{
-+		.name = "weight.nice",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_s64 = cpu_weight_nice_read_s64,
-+		.write_s64 = cpu_weight_nice_write_s64,
-+	},
-+	{
-+		.name = "idle",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_s64 = cpu_idle_read_s64,
-+		.write_s64 = cpu_idle_write_s64,
-+	},
-+	{
-+		.name = "max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_max_show,
-+		.write = cpu_max_write,
-+	},
-+	{
-+		.name = "max.burst",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.read_u64 = cpu_cfs_burst_read_u64,
-+		.write_u64 = cpu_cfs_burst_write_u64,
-+	},
-+	{
-+		.name = "uclamp.min",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_min_show,
-+		.write = cpu_uclamp_min_write,
-+	},
-+	{
-+		.name = "uclamp.max",
-+		.flags = CFTYPE_NOT_ON_ROOT,
-+		.seq_show = cpu_uclamp_max_show,
-+		.write = cpu_uclamp_max_write,
-+	},
-+	{ }	/* terminate */
-+};
-+
-+static int cpu_extra_stat_show(struct seq_file *sf,
-+			       struct cgroup_subsys_state *css)
-+{
-+	return 0;
-+}
-+
-+static int cpu_local_stat_show(struct seq_file *sf,
-+			       struct cgroup_subsys_state *css)
-+{
-+	return 0;
-+}
-+
-+struct cgroup_subsys cpu_cgrp_subsys = {
-+	.css_alloc	= cpu_cgroup_css_alloc,
-+	.css_online	= cpu_cgroup_css_online,
-+	.css_released	= cpu_cgroup_css_released,
-+	.css_free	= cpu_cgroup_css_free,
-+	.css_extra_stat_show = cpu_extra_stat_show,
-+	.css_local_stat_show = cpu_local_stat_show,
-+#ifdef CONFIG_RT_GROUP_SCHED
-+	.can_attach	= cpu_cgroup_can_attach,
-+#endif
-+	.attach		= cpu_cgroup_attach,
-+	.legacy_cftypes	= cpu_legacy_files,
-+	.dfl_cftypes	= cpu_files,
-+	.early_init	= true,
-+	.threaded	= true,
-+};
-+#endif	/* CONFIG_CGROUP_SCHED */
-+
-+#undef CREATE_TRACE_POINTS
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+/*
-+ * @cid_lock: Guarantee forward-progress of cid allocation.
-+ *
-+ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
-+ * is only used when contention is detected by the lock-free allocation so
-+ * forward progress can be guaranteed.
-+ */
-+DEFINE_RAW_SPINLOCK(cid_lock);
-+
-+/*
-+ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
-+ *
-+ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
-+ * detected, it is set to 1 to ensure that all newly coming allocations are
-+ * serialized by @cid_lock until the allocation which detected contention
-+ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
-+ * of a cid allocation.
-+ */
-+int use_cid_lock;
-+
-+/*
-+ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
-+ * concurrently with respect to the execution of the source runqueue context
-+ * switch.
-+ *
-+ * There is one basic property we want to guarantee here:
-+ *
-+ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
-+ * used by a task. That would lead to concurrent allocation of the cid and
-+ * userspace corruption.
-+ *
-+ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
-+ * that a pair of loads observe at least one of a pair of stores, which can be
-+ * shown as:
-+ *
-+ *      X = Y = 0
-+ *
-+ *      w[X]=1          w[Y]=1
-+ *      MB              MB
-+ *      r[Y]=y          r[X]=x
-+ *
-+ * Which guarantees that x==0 && y==0 is impossible. But rather than using
-+ * values 0 and 1, this algorithm cares about specific state transitions of the
-+ * runqueue current task (as updated by the scheduler context switch), and the
-+ * per-mm/cpu cid value.
-+ *
-+ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
-+ * task->mm != mm for the rest of the discussion. There are two scheduler state
-+ * transitions on context switch we care about:
-+ *
-+ * (TSA) Store to rq->curr with transition from (N) to (Y)
-+ *
-+ * (TSB) Store to rq->curr with transition from (Y) to (N)
-+ *
-+ * On the remote-clear side, there is one transition we care about:
-+ *
-+ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
-+ *
-+ * There is also a transition to UNSET state which can be performed from all
-+ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
-+ * guarantees that only a single thread will succeed:
-+ *
-+ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
-+ *
-+ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
-+ * when a thread is actively using the cid (property (1)).
-+ *
-+ * Let's look at the relevant combinations of TSA/TSB and TMA transitions.
-+ *
-+ * Scenario A) (TSA)+(TMA) (from next task perspective)
-+ *
-+ * CPU0                                      CPU1
-+ *
-+ * Context switch CS-1                       Remote-clear
-+ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
-+ *                                             (implied barrier after cmpxchg)
-+ *   - switch_mm_cid()
-+ *     - memory barrier (see switch_mm_cid()
-+ *       comment explaining how this barrier
-+ *       is combined with other scheduler
-+ *       barriers)
-+ *     - mm_cid_get (next)
-+ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
-+ *
-+ * This Dekker ensures that either task (Y) is observed by the
-+ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
-+ * observed.
-+ *
-+ * If task (Y) store is observed by rcu_dereference(), it means that there is
-+ * still an active task on the cpu. Remote-clear will therefore not transition
-+ * to UNSET, which fulfills property (1).
-+ *
-+ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
-+ * it will move its state to UNSET, which clears the percpu cid perhaps
-+ * uselessly (which is not an issue for correctness). Because task (Y) is not
-+ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
-+ * state to UNSET is done with a cmpxchg expecting that the old state has the
-+ * LAZY flag set, only one thread will successfully UNSET.
-+ *
-+ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
-+ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
-+ * CPU1 will observe task (Y) and do nothing more, which is fine.
-+ *
-+ * What we are effectively preventing with this Dekker is a scenario where
-+ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
-+ * because this would UNSET a cid which is actively used.
-+ */
-+
-+void sched_mm_cid_migrate_from(struct task_struct *t)
-+{
-+	t->migrate_from_cpu = task_cpu(t);
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
-+					  struct task_struct *t,
-+					  struct mm_cid *src_pcpu_cid)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct task_struct *src_task;
-+	int src_cid, last_mm_cid;
-+
-+	if (!mm)
-+		return -1;
-+
-+	last_mm_cid = t->last_mm_cid;
-+	/*
-+	 * If the migrated task has no last cid, or if the current
-+	 * task on src rq uses the cid, it means the source cid does not need
-+	 * to be moved to the destination cpu.
-+	 */
-+	if (last_mm_cid == -1)
-+		return -1;
-+	src_cid = READ_ONCE(src_pcpu_cid->cid);
-+	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
-+		return -1;
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq, it means we
-+	 * are not the last task to be migrated from this cpu for this mm, so
-+	 * there is no need to move src_cid to the destination cpu.
-+	 */
-+	rcu_read_lock();
-+	src_task = rcu_dereference(src_rq->curr);
-+	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+		rcu_read_unlock();
-+		t->last_mm_cid = -1;
-+		return -1;
-+	}
-+	rcu_read_unlock();
-+
-+	return src_cid;
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
-+					      struct task_struct *t,
-+					      struct mm_cid *src_pcpu_cid,
-+					      int src_cid)
-+{
-+	struct task_struct *src_task;
-+	struct mm_struct *mm = t->mm;
-+	int lazy_cid;
-+
-+	if (src_cid == -1)
-+		return -1;
-+
-+	/*
-+	 * Attempt to clear the source cpu cid to move it to the destination
-+	 * cpu.
-+	 */
-+	lazy_cid = mm_cid_set_lazy_put(src_cid);
-+	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
-+		return -1;
-+
-+	/*
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm matches the scheduler barrier in context_switch()
-+	 * between store to rq->curr and load of prev and next task's
-+	 * per-mm/cpu cid.
-+	 *
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm_cid_active matches the barrier in
-+	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+	 * load of per-mm/cpu cid.
-+	 */
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq after setting
-+	 * the lazy-put flag, this task will be responsible for transitioning
-+	 * from lazy-put flag set to MM_CID_UNSET.
-+	 */
-+	scoped_guard (rcu) {
-+		src_task = rcu_dereference(src_rq->curr);
-+		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+			/*
-+			 * We observed an active task for this mm; there is
-+			 * therefore no point in moving this cid to the
-+			 * destination cpu. The guard releases the RCU read
-+			 * lock on return, so no explicit unlock is needed.
-+			 */
-+			t->last_mm_cid = -1;
-+			return -1;
-+		}
-+	}
-+
-+	/*
-+	 * The src_cid is unused, so it can be unset.
-+	 */
-+	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+		return -1;
-+	return src_cid;
-+}
-+
-+/*
-+ * Migration to dst cpu. Called with dst_rq lock held.
-+ * Interrupts are disabled, which keeps the window of cid ownership without the
-+ * source rq lock held small.
-+ */
-+void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
-+{
-+	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
-+	struct mm_struct *mm = t->mm;
-+	int src_cid, dst_cid;
-+	struct rq *src_rq;
-+
-+	lockdep_assert_rq_held(dst_rq);
-+
-+	if (!mm)
-+		return;
-+	if (src_cpu == -1) {
-+		t->last_mm_cid = -1;
-+		return;
-+	}
-+	/*
-+	 * Move the src cid if the dst cid is unset. This keeps id
-+	 * allocation closest to 0 in cases where few threads migrate around
-+	 * many cpus.
-+	 *
-+	 * If destination cid is already set, we may have to just clear
-+	 * the src cid to ensure compactness in frequent migrations
-+	 * scenarios.
-+	 *
-+	 * It is not useful to clear the src cid when the number of threads is
-+	 * greater than or equal to the number of allowed cpus, because user-space
-+	 * can expect that the number of allowed cids can reach the number of
-+	 * allowed cpus.
-+	 */
-+	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
-+	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
-+	if (!mm_cid_is_unset(dst_cid) &&
-+	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
-+		return;
-+	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
-+	src_rq = cpu_rq(src_cpu);
-+	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
-+	if (src_cid == -1)
-+		return;
-+	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
-+							    src_cid);
-+	if (src_cid == -1)
-+		return;
-+	if (!mm_cid_is_unset(dst_cid)) {
-+		__mm_cid_put(mm, src_cid);
-+		return;
-+	}
-+	/* Move src_cid to dst cpu. */
-+	mm_cid_snapshot_time(dst_rq, mm);
-+	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
-+}
-+
-+static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
-+				      int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct task_struct *t;
-+	int cid, lazy_cid;
-+
-+	cid = READ_ONCE(pcpu_cid->cid);
-+	if (!mm_cid_is_valid(cid))
-+		return;
-+
-+	/*
-+	 * If the cpu cid is set, clear it to keep cid allocation compact.  If
-+	 * there happen to be other tasks left on the source cpu using this
-+	 * mm, the next task using this mm will reallocate its cid on context
-+	 * switch.
-+	 */
-+	lazy_cid = mm_cid_set_lazy_put(cid);
-+	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
-+		return;
-+
-+	/*
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm matches the scheduler barrier in context_switch()
-+	 * between store to rq->curr and load of prev and next task's
-+	 * per-mm/cpu cid.
-+	 *
-+	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+	 * rq->curr->mm_cid_active matches the barrier in
-+	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+	 * load of per-mm/cpu cid.
-+	 */
-+
-+	/*
-+	 * If we observe an active task using the mm on this rq after setting
-+	 * the lazy-put flag, that task will be responsible for transitioning
-+	 * from lazy-put flag set to MM_CID_UNSET.
-+	 */
-+	scoped_guard (rcu) {
-+		t = rcu_dereference(rq->curr);
-+		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
-+			return;
-+	}
-+
-+	/*
-+	 * The cid is unused, so it can be unset.
-+	 * Disable interrupts to keep the window of cid ownership without rq
-+	 * lock small.
-+	 */
-+	scoped_guard (irqsave) {
-+		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+			__mm_cid_put(mm, cid);
-+	}
-+}
-+
-+static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
-+{
-+	struct rq *rq = cpu_rq(cpu);
-+	struct mm_cid *pcpu_cid;
-+	struct task_struct *curr;
-+	u64 rq_clock;
-+
-+	/*
-+	 * rq->clock load is racy on 32-bit but one spurious clear once in a
-+	 * while is irrelevant.
-+	 */
-+	rq_clock = READ_ONCE(rq->clock);
-+	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+
-+	/*
-+	 * In order to take care of infrequently scheduled tasks, bump the time
-+	 * snapshot associated with this cid if an active task using the mm is
-+	 * observed on this rq.
-+	 */
-+	scoped_guard (rcu) {
-+		curr = rcu_dereference(rq->curr);
-+		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
-+			WRITE_ONCE(pcpu_cid->time, rq_clock);
-+			return;
-+		}
-+	}
-+
-+	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
-+		return;
-+	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
-+					     int weight)
-+{
-+	struct mm_cid *pcpu_cid;
-+	int cid;
-+
-+	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+	cid = READ_ONCE(pcpu_cid->cid);
-+	if (!mm_cid_is_valid(cid) || cid < weight)
-+		return;
-+	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void task_mm_cid_work(struct callback_head *work)
-+{
-+	unsigned long now = jiffies, old_scan, next_scan;
-+	struct task_struct *t = current;
-+	struct cpumask *cidmask;
-+	struct mm_struct *mm;
-+	int weight, cpu;
-+
-+	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-+
-+	work->next = work;	/* Prevent double-add */
-+	if (t->flags & PF_EXITING)
-+		return;
-+	mm = t->mm;
-+	if (!mm)
-+		return;
-+	old_scan = READ_ONCE(mm->mm_cid_next_scan);
-+	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+	if (!old_scan) {
-+		unsigned long res;
-+
-+		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-+		if (res != old_scan)
-+			old_scan = res;
-+		else
-+			old_scan = next_scan;
-+	}
-+	if (time_before(now, old_scan))
-+		return;
-+	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-+		return;
-+	cidmask = mm_cidmask(mm);
-+	/* Clear cids that were not recently used. */
-+	for_each_possible_cpu(cpu)
-+		sched_mm_cid_remote_clear_old(mm, cpu);
-+	weight = cpumask_weight(cidmask);
-+	/*
-+	 * Clear cids that are greater than or equal to the cidmask weight to
-+	 * recompact it.
-+	 */
-+	for_each_possible_cpu(cpu)
-+		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-+}
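-+
-+/*
-+ * Numeric illustration (not from the original patch): if the cidmask has
-+ * weight 3 while cids {0, 2, 5} are in use, the second pass above clears
-+ * the per-cpu slot holding cid 5 (since 5 >= 3), so the next allocation
-+ * can reuse a low cid and the set compacts towards {0, 1, 2}.
-+ */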
-+
-+void init_sched_mm_cid(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	int mm_users = 0;
-+
-+	if (mm) {
-+		mm_users = atomic_read(&mm->mm_users);
-+		if (mm_users == 1)
-+			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+	}
-+	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-+	init_task_work(&t->cid_work, task_mm_cid_work);
-+}
-+
-+void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-+{
-+	struct callback_head *work = &curr->cid_work;
-+	unsigned long now = jiffies;
-+
-+	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-+	    work->next != work)
-+		return;
-+	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-+		return;
-+	task_work_add(curr, work, TWA_RESUME);
-+}
-+
-+void sched_mm_cid_exit_signals(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	guard(rq_lock_irqsave)(rq);
-+	preempt_enable_no_resched();	/* holding spinlock */
-+	WRITE_ONCE(t->mm_cid_active, 0);
-+	/*
-+	 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+	 * Matches barrier in sched_mm_cid_remote_clear_old().
-+	 */
-+	smp_mb();
-+	mm_cid_put(mm);
-+	t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_before_execve(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	guard(rq_lock_irqsave)(rq);
-+	preempt_enable_no_resched();	/* holding spinlock */
-+	WRITE_ONCE(t->mm_cid_active, 0);
-+	/*
-+	 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+	 * Matches barrier in sched_mm_cid_remote_clear_old().
-+	 */
-+	smp_mb();
-+	mm_cid_put(mm);
-+	t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_after_execve(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct rq *rq;
-+
-+	if (!mm)
-+		return;
-+
-+	preempt_disable();
-+	rq = this_rq();
-+	scoped_guard (rq_lock_irqsave, rq) {
-+		preempt_enable_no_resched();	/* holding spinlock */
-+		WRITE_ONCE(t->mm_cid_active, 1);
-+		/*
-+		 * Store t->mm_cid_active before loading per-mm/cpu cid.
-+		 * Matches barrier in sched_mm_cid_remote_clear_old().
-+		 */
-+		smp_mb();
-+		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
-+	}
-+	rseq_set_notify_resume(t);
-+}
-+
-+void sched_mm_cid_fork(struct task_struct *t)
-+{
-+	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
-+	t->mm_cid_active = 1;
-+}
-+#endif
-diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
-new file mode 100644
-index 000000000000..1212a031700e
---- /dev/null
-+++ b/kernel/sched/alt_debug.c
-@@ -0,0 +1,31 @@
-+/*
-+ * kernel/sched/alt_debug.c
-+ *
-+ * Print the alt scheduler debugging details
-+ *
-+ * Author: Alfred Chen
-+ * Date  : 2020
-+ */
-+#include "sched.h"
-+
-+/*
-+ * This allows printing both to /proc/sched_debug and
-+ * to the console
-+ */
-+#define SEQ_printf(m, x...)			\
-+ do {						\
-+	if (m)					\
-+		seq_printf(m, x);		\
-+	else					\
-+		pr_cont(x);			\
-+ } while (0)
-+
-+void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
-+			  struct seq_file *m)
-+{
-+	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
-+						get_nr_threads(p));
-+}
-+
-+void proc_sched_set_task(struct task_struct *p)
-+{}
-diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
-new file mode 100644
-index 000000000000..0eff5391092c
---- /dev/null
-+++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,923 @@
-+#ifndef ALT_SCHED_H
-+#define ALT_SCHED_H
-+
-+#include <linux/context_tracking.h>
-+#include <linux/profile.h>
-+#include <linux/stop_machine.h>
-+#include <linux/syscalls.h>
-+#include <linux/tick.h>
-+
-+#include <trace/events/power.h>
-+#include <trace/events/sched.h>
-+
-+#include "../workqueue_internal.h"
-+
-+#include "cpupri.h"
-+
-+#define MIN_SCHED_NORMAL_PRIO	(32)
-+/*
-+ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
-+ *
-+ * -- BMQ --
-+ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
-+ * -- PDS --
-+ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
-+ */
-+#define SCHED_LEVELS		(64 + 1)
-+
-+#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
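-+
-+/*
-+ * Sanity check of the arithmetic above: for BMQ, (12 + 40 + 12) / 2 == 32
-+ * NORMAL levels, which together with the 32 RT/reserved levels below
-+ * MIN_SCHED_NORMAL_PRIO gives 64 runnable levels; the idle task takes the
-+ * 65th (SCHED_LEVELS - 1 == 64).
-+ */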
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
-+extern void resched_latency_warn(int cpu, u64 latency);
-+#else
-+# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
-+static inline void resched_latency_warn(int cpu, u64 latency) {}
-+#endif
-+
-+/*
-+ * Increase resolution of nice-level calculations for 64-bit architectures.
-+ * The extra resolution improves shares distribution and load balancing of
-+ * low-weight task groups (e.g. nice +19 on an autogroup), deeper taskgroup
-+ * hierarchies, especially on larger systems. This is not a user-visible change
-+ * and does not change the user-interface for setting shares/weights.
-+ *
-+ * We increase resolution only if we have enough bits to allow this increased
-+ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
-+ * are pretty high and the returns do not justify the increased costs.
-+ *
-+ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
-+ * increase coverage and consistency always enable it on 64-bit platforms.
-+ */
-+#ifdef CONFIG_64BIT
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load_down(w) \
-+({ \
-+	unsigned long __w = (w); \
-+	if (__w) \
-+		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
-+	__w; \
-+})
-+#else
-+# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w)		(w)
-+# define scale_load_down(w)	(w)
-+#endif
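-+
-+/*
-+ * Worked example (illustrative, assuming SCHED_FIXEDPOINT_SHIFT == 10 as
-+ * in current kernels): on 64-bit, scale_load(1024) == 1024 << 10 ==
-+ * 1048576, and scale_load_down(1048576) == max(2UL, 1048576 >> 10) ==
-+ * 1024, so a nice-0 weight round-trips unchanged while tiny non-zero
-+ * weights are clamped up to 2 to avoid the arithmetic problems noted for
-+ * MIN_SHARES below.
-+ */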
-+
-+#ifdef CONFIG_FAIR_GROUP_SCHED
-+#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
-+
-+/*
-+ * A weight of 0 or 1 can cause arithmetic problems.
-+ * The weight of a cfs_rq is the sum of the weights of the entities
-+ * queued on it, so the weight of an entity should not be too large,
-+ * and neither should the shares value of a task group.
-+ * (The default weight is 1024 - so there's no practical
-+ *  limitation from this.)
-+ */
-+#define MIN_SHARES		(1UL <<  1)
-+#define MAX_SHARES		(1UL << 18)
-+#endif
-+
-+/*
-+ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
-+ */
-+#ifdef CONFIG_SCHED_DEBUG
-+# define const_debug __read_mostly
-+#else
-+# define const_debug const
-+#endif
-+
-+/* task_struct::on_rq states: */
-+#define TASK_ON_RQ_QUEUED	1
-+#define TASK_ON_RQ_MIGRATING	2
-+
-+static inline int task_on_rq_queued(struct task_struct *p)
-+{
-+	return p->on_rq == TASK_ON_RQ_QUEUED;
-+}
-+
-+static inline int task_on_rq_migrating(struct task_struct *p)
-+{
-+	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
-+}
-+
-+/* Wake flags. The first three directly map to some SD flag value */
-+#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
-+#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
-+#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
-+
-+#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
-+#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
-+#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
-+
-+#ifdef CONFIG_SMP
-+static_assert(WF_EXEC == SD_BALANCE_EXEC);
-+static_assert(WF_FORK == SD_BALANCE_FORK);
-+static_assert(WF_TTWU == SD_BALANCE_WAKE);
-+#endif
-+
-+#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
-+
-+struct sched_queue {
-+	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
-+	struct list_head heads[SCHED_LEVELS];
-+};
-+
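
The bitmap/heads pair above is the entire ready queue: a set bit marks a
non-empty list at that priority, so picking the next task is a
find-first-bit plus a list lookup. Note that IDLE_TASK_SCHED_PRIO equals
SCHED_QUEUE_BITS, which is also what find_first_bit() returns for an empty
bitmap, so an idle runqueue appears to fall through to the idle task's list
head with no special case. A sketch of that pick path, modeled on what this
patch adds elsewhere (alt_core.c):

    static inline struct task_struct *sketch_rq_first_task(struct rq *rq)
    {
    	unsigned long prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
    	struct list_head *head = &rq->queue.heads[sched_prio2idx(prio, rq)];

    	return list_first_entry(head, struct task_struct, sq_node);
    }
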
-+struct rq;
-+struct cpuidle_state;
-+
-+struct balance_callback {
-+	struct balance_callback *next;
-+	void (*func)(struct rq *rq);
-+};
-+
-+/*
-+ * This is the main, per-CPU runqueue data structure.
-+ * This data should only be modified by the local cpu.
-+ */
-+struct rq {
-+	/* runqueue lock: */
-+	raw_spinlock_t			lock;
-+
-+	struct task_struct __rcu	*curr;
-+	struct task_struct		*idle;
-+	struct task_struct		*stop;
-+	struct task_struct		*skip;
-+	struct mm_struct		*prev_mm;
-+
-+	struct sched_queue	queue;
-+#ifdef CONFIG_SCHED_PDS
-+	u64			time_edge;
-+#endif
-+	unsigned long		prio;
-+
-+	/* switch count */
-+	u64 nr_switches;
-+
-+	atomic_t nr_iowait;
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+	u64 last_seen_need_resched_ns;
-+	int ticks_without_resched;
-+#endif
-+
-+#ifdef CONFIG_MEMBARRIER
-+	int membarrier_state;
-+#endif
-+
-+#ifdef CONFIG_SMP
-+	int cpu;		/* cpu of this runqueue */
-+	bool online;
-+
-+	unsigned int		ttwu_pending;
-+	unsigned char		nohz_idle_balance;
-+	unsigned char		idle_balance;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+	struct sched_avg	avg_irq;
-+#endif
-+
-+#ifdef CONFIG_SCHED_SMT
-+	int active_balance;
-+	struct cpu_stop_work	active_balance_work;
-+#endif
-+	struct balance_callback	*balance_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+	struct rcuwait		hotplug_wait;
-+#endif
-+	unsigned int		nr_pinned;
-+
-+#endif /* CONFIG_SMP */
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+	u64 prev_irq_time;
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+#ifdef CONFIG_PARAVIRT
-+	u64 prev_steal_time;
-+#endif /* CONFIG_PARAVIRT */
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+	u64 prev_steal_time_rq;
-+#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
-+
-+	/* For general CPU load util */
-+	s32 load_history;
-+	u64 load_block;
-+	u64 load_stamp;
-+
-+	/* calc_load related fields */
-+	unsigned long calc_load_update;
-+	long calc_load_active;
-+
-+	/* Ensure that all clocks are in the same cache line */
-+	u64			clock ____cacheline_aligned;
-+	u64			clock_task;
-+#ifdef CONFIG_SCHED_BMQ
-+	u64			last_ts_switch;
-+#endif
-+
-+	unsigned int  nr_running;
-+	unsigned long nr_uninterruptible;
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+#ifdef CONFIG_SMP
-+	call_single_data_t hrtick_csd;
-+#endif
-+	struct hrtimer		hrtick_timer;
-+	ktime_t			hrtick_time;
-+#endif
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+	/* latency stats */
-+	struct sched_info rq_sched_info;
-+	unsigned long long rq_cpu_time;
-+	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
-+
-+	/* sys_sched_yield() stats */
-+	unsigned int yld_count;
-+
-+	/* schedule() stats */
-+	unsigned int sched_switch;
-+	unsigned int sched_count;
-+	unsigned int sched_goidle;
-+
-+	/* try_to_wake_up() stats */
-+	unsigned int ttwu_count;
-+	unsigned int ttwu_local;
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+#ifdef CONFIG_CPU_IDLE
-+	/* Must be inspected within an RCU lock section */
-+	struct cpuidle_state *idle_state;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#ifdef CONFIG_SMP
-+	call_single_data_t	nohz_csd;
-+#endif
-+	atomic_t		nohz_flags;
-+#endif /* CONFIG_NO_HZ_COMMON */
-+
-+	/* Scratch cpumask to be temporarily used under rq_lock */
-+	cpumask_var_t		scratch_mask;
-+};
-+
-+extern unsigned int sysctl_sched_base_slice;
-+
-+extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
-+
-+extern unsigned long calc_load_update;
-+extern atomic_long_t calc_load_tasks;
-+
-+extern void calc_global_load_tick(struct rq *this_rq);
-+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
-+
-+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
-+#define this_rq()		this_cpu_ptr(&runqueues)
-+#define task_rq(p)		cpu_rq(task_cpu(p))
-+#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
-+#define raw_rq()		raw_cpu_ptr(&runqueues)
-+
-+#ifdef CONFIG_SMP
-+#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
-+void register_sched_domain_sysctl(void);
-+void unregister_sched_domain_sysctl(void);
-+#else
-+static inline void register_sched_domain_sysctl(void)
-+{
-+}
-+static inline void unregister_sched_domain_sysctl(void)
-+{
-+}
-+#endif
-+
-+extern bool sched_smp_initialized;
-+
-+enum {
-+	ITSELF_LEVEL_SPACE_HOLDER,
-+#ifdef CONFIG_SCHED_SMT
-+	SMT_LEVEL_SPACE_HOLDER,
-+#endif
-+	COREGROUP_LEVEL_SPACE_HOLDER,
-+	CORE_LEVEL_SPACE_HOLDER,
-+	OTHER_LEVEL_SPACE_HOLDER,
-+	NR_CPU_AFFINITY_LEVELS
-+};
-+
-+DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+
-+static inline int
-+__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
-+{
-+	int cpu;
-+
-+	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
-+		mask++;
-+
-+	return cpu;
-+}
-+
-+static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
-+{
-+	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
-+}
-+
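
__best_mask_cpu() walks the per-CPU topology masks from the nearest level
(the CPU itself, then SMT siblings) outward and returns the first CPU that
intersects the given mask, so selection degrades gracefully with
topological distance; termination relies on the outermost level covering
all possible CPUs. A hedged usage sketch ('p' is an illustrative task
pointer, not from this hunk):

    /* pick the CPU topologically closest to 'cpu' that p may run on */
    int target = best_mask_cpu(cpu, p->cpus_ptr);
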
-+extern void flush_smp_call_function_queue(void);
-+
-+#else  /* !CONFIG_SMP */
-+static inline void flush_smp_call_function_queue(void) { }
-+#endif
-+
-+#ifndef arch_scale_freq_tick
-+static __always_inline
-+void arch_scale_freq_tick(void)
-+{
-+}
-+#endif
-+
-+#ifndef arch_scale_freq_capacity
-+static __always_inline
-+unsigned long arch_scale_freq_capacity(int cpu)
-+{
-+	return SCHED_CAPACITY_SCALE;
-+}
-+#endif
-+
-+static inline u64 __rq_clock_broken(struct rq *rq)
-+{
-+	return READ_ONCE(rq->clock);
-+}
-+
-+static inline u64 rq_clock(struct rq *rq)
-+{
-+	/*
-+	 * Relax the lockdep_assert_held() check: as in VRQ, callers of
-+	 * sched_info_xxxx() may not hold rq->lock.
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock;
-+}
-+
-+static inline u64 rq_clock_task(struct rq *rq)
-+{
-+	/*
-+	 * Relax the lockdep_assert_held() check: as in VRQ, callers of
-+	 * sched_info_xxxx() may not hold rq->lock.
-+	 * lockdep_assert_held(&rq->lock);
-+	 */
-+	return rq->clock_task;
-+}
-+
-+/*
-+ * {de,en}queue flags:
-+ *
-+ * DEQUEUE_SLEEP  - task is no longer runnable
-+ * ENQUEUE_WAKEUP - task just became runnable
-+ *
-+ */
-+
-+#define DEQUEUE_SLEEP		0x01
-+
-+#define ENQUEUE_WAKEUP		0x01
-+
-+
-+/*
-+ * Below are the scheduler APIs used by other kernel code.
-+ * They use a dummy rq_flags.
-+ * TODO: BMQ needs to support these APIs for compatibility with the
-+ * mainline scheduler code.
-+ */
-+struct rq_flags {
-+	unsigned long flags;
-+};
-+
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(rq->lock);
-+
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+	__acquires(p->pi_lock)
-+	__acquires(rq->lock);
-+
-+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-+	__releases(rq->lock)
-+	__releases(p->pi_lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+}
-+
-+static inline void
-+rq_lock(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	raw_spin_lock_irq(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-+	__releases(rq->lock)
-+{
-+	raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+static inline struct rq *
-+this_rq_lock_irq(struct rq_flags *rf)
-+	__acquires(rq->lock)
-+{
-+	struct rq *rq;
-+
-+	local_irq_disable();
-+	rq = this_rq();
-+	raw_spin_lock(&rq->lock);
-+
-+	return rq;
-+}
-+
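
These wrappers keep mainline's rq_flags calling convention while the flags
stay unused except in task_rq_lock(), which saves the pi_lock IRQ state
into rf->flags for task_rq_unlock() to restore. A minimal pairing sketch:

    struct rq_flags rf;
    struct rq *rq = this_rq_lock_irq(&rf);	/* IRQs off, rq->lock held */

    /* ... operate on the local runqueue ... */

    rq_unlock_irq(rq, &rf);			/* unlock and re-enable IRQs */
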
-+static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
-+{
-+	return &rq->lock;
-+}
-+
-+static inline raw_spinlock_t *rq_lockp(struct rq *rq)
-+{
-+	return __rq_lockp(rq);
-+}
-+
-+static inline void lockdep_assert_rq_held(struct rq *rq)
-+{
-+	lockdep_assert_held(__rq_lockp(rq));
-+}
-+
-+extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
-+extern void raw_spin_rq_unlock(struct rq *rq);
-+
-+static inline void raw_spin_rq_lock(struct rq *rq)
-+{
-+	raw_spin_rq_lock_nested(rq, 0);
-+}
-+
-+static inline void raw_spin_rq_lock_irq(struct rq *rq)
-+{
-+	local_irq_disable();
-+	raw_spin_rq_lock(rq);
-+}
-+
-+static inline void raw_spin_rq_unlock_irq(struct rq *rq)
-+{
-+	raw_spin_rq_unlock(rq);
-+	local_irq_enable();
-+}
-+
-+static inline int task_current(struct rq *rq, struct task_struct *p)
-+{
-+	return rq->curr == p;
-+}
-+
-+static inline bool task_on_cpu(struct task_struct *p)
-+{
-+	return p->on_cpu;
-+}
-+
-+extern int task_running_nice(struct task_struct *p);
-+
-+extern struct static_key_false sched_schedstats;
-+
-+#ifdef CONFIG_CPU_IDLE
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+	rq->idle_state = idle_state;
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	WARN_ON(!rcu_read_lock_held());
-+	return rq->idle_state;
-+}
-+#else
-+static inline void idle_set_state(struct rq *rq,
-+				  struct cpuidle_state *idle_state)
-+{
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+	return NULL;
-+}
-+#endif
-+
-+static inline int cpu_of(const struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+	return rq->cpu;
-+#else
-+	return 0;
-+#endif
-+}
-+
-+#include "stats.h"
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#define NOHZ_BALANCE_KICK_BIT	0
-+#define NOHZ_STATS_KICK_BIT	1
-+
-+#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
-+#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
-+
-+#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
-+
-+#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
-+
-+/* TODO: needed?
-+extern void nohz_balance_exit_idle(struct rq *rq);
-+#else
-+static inline void nohz_balance_exit_idle(struct rq *rq) { }
-+*/
-+#endif
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+struct irqtime {
-+	u64			total;
-+	u64			tick_delta;
-+	u64			irq_start_time;
-+	struct u64_stats_sync	sync;
-+};
-+
-+DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
-+
-+/*
-+ * Returns the irqtime minus the softirq time computed by ksoftirqd.
-+ * Otherwise ksoftirqd's own runtime would be subtracted from its
-+ * sum_exec_runtime, which would then never move forward.
-+ */
-+static inline u64 irq_time_read(int cpu)
-+{
-+	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
-+	unsigned int seq;
-+	u64 total;
-+
-+	do {
-+		seq = __u64_stats_fetch_begin(&irqtime->sync);
-+		total = irqtime->total;
-+	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
-+
-+	return total;
-+}
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+
-+#ifdef CONFIG_CPU_FREQ
-+DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+extern int __init sched_tick_offload_init(void);
-+#else
-+static inline int sched_tick_offload_init(void) { return 0; }
-+#endif
-+
-+#ifdef arch_scale_freq_capacity
-+#ifndef arch_scale_freq_invariant
-+#define arch_scale_freq_invariant()	(true)
-+#endif
-+#else /* arch_scale_freq_capacity */
-+#define arch_scale_freq_invariant()	(false)
-+#endif
-+
-+extern void schedule_idle(void);
-+
-+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-+
-+/*
-+ * !! For sched_setattr_nocheck() (kernel) only !!
-+ *
-+ * This is actually gross. :(
-+ *
-+ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
-+ * tasks, but still be able to sleep. We need this on platforms that cannot
-+ * atomically change clock frequency. Remove once fast switching will be
-+ * available on such platforms.
-+ *
-+ * SUGOV stands for SchedUtil GOVernor.
-+ */
-+#define SCHED_FLAG_SUGOV	0x10000000
-+
-+#ifdef CONFIG_MEMBARRIER
-+/*
-+ * The scheduler provides memory barriers required by membarrier between:
-+ * - prior user-space memory accesses and store to rq->membarrier_state,
-+ * - store to rq->membarrier_state and following user-space memory accesses.
-+ * In the same way it provides those guarantees around store to rq->curr.
-+ */
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+	int membarrier_state;
-+
-+	if (prev_mm == next_mm)
-+		return;
-+
-+	membarrier_state = atomic_read(&next_mm->membarrier_state);
-+	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
-+		return;
-+
-+	WRITE_ONCE(rq->membarrier_state, membarrier_state);
-+}
-+#else
-+static inline void membarrier_switch_mm(struct rq *rq,
-+					struct mm_struct *prev_mm,
-+					struct mm_struct *next_mm)
-+{
-+}
-+#endif
-+
-+#ifdef CONFIG_NUMA
-+extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
-+#else
-+static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return nr_cpu_ids;
-+}
-+#endif
-+
-+extern void swake_up_all_locked(struct swait_queue_head *q);
-+extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
-+
-+extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+extern int preempt_dynamic_mode;
-+extern int sched_dynamic_mode(const char *str);
-+extern void sched_dynamic_update(int mode);
-+#endif
-+
-+static inline void nohz_run_idle_balance(int cpu) { }
-+
-+static inline
-+unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-+				  struct task_struct *p)
-+{
-+	return util;
-+}
-+
-+static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
-+#define MM_CID_SCAN_DELAY	100			/* 100ms */
-+
-+extern raw_spinlock_t cid_lock;
-+extern int use_cid_lock;
-+
-+extern void sched_mm_cid_migrate_from(struct task_struct *t);
-+extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
-+extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-+extern void init_sched_mm_cid(struct task_struct *t);
-+
-+static inline void __mm_cid_put(struct mm_struct *mm, int cid)
-+{
-+	if (cid < 0)
-+		return;
-+	cpumask_clear_cpu(cid, mm_cidmask(mm));
-+}
-+
-+/*
-+ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
-+ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
-+ * be held to transition to other states.
-+ *
-+ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
-+ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
-+ */
-+static inline void mm_cid_put_lazy(struct task_struct *t)
-+{
-+	struct mm_struct *mm = t->mm;
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	int cid;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	if (!mm_cid_is_lazy_put(cid) ||
-+	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+		return;
-+	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
-+{
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	int cid, res;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	for (;;) {
-+		if (mm_cid_is_unset(cid))
-+			return MM_CID_UNSET;
-+		/*
-+		 * Attempt transition from valid or lazy-put to unset.
-+		 */
-+		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
-+		if (res == cid)
-+			break;
-+		cid = res;
-+	}
-+	return cid;
-+}
-+
-+static inline void mm_cid_put(struct mm_struct *mm)
-+{
-+	int cid;
-+
-+	lockdep_assert_irqs_disabled();
-+	cid = mm_cid_pcpu_unset(mm);
-+	if (cid == MM_CID_UNSET)
-+		return;
-+	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int __mm_cid_try_get(struct mm_struct *mm)
-+{
-+	struct cpumask *cpumask;
-+	int cid;
-+
-+	cpumask = mm_cidmask(mm);
-+	/*
-+	 * Retry finding first zero bit if the mask is temporarily
-+	 * filled. This only happens during concurrent remote-clear
-+	 * which owns a cid without holding a rq lock.
-+	 */
-+	for (;;) {
-+		cid = cpumask_first_zero(cpumask);
-+		if (cid < nr_cpu_ids)
-+			break;
-+		cpu_relax();
-+	}
-+	if (cpumask_test_and_set_cpu(cid, cpumask))
-+		return -1;
-+	return cid;
-+}
-+
-+/*
-+ * Save a snapshot of the current runqueue time of this CPU with the
-+ * per-cpu cid value, allowing us to estimate how recently it was used.
-+ */
-+static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
-+{
-+	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
-+
-+	lockdep_assert_rq_held(rq);
-+	WRITE_ONCE(pcpu_cid->time, rq->clock);
-+}
-+
-+static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+	int cid;
-+
-+	/*
-+	 * All allocations (even those using the cid_lock) are lock-free. If
-+	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
-+	 * guarantee forward progress.
-+	 */
-+	if (!READ_ONCE(use_cid_lock)) {
-+		cid = __mm_cid_try_get(mm);
-+		if (cid >= 0)
-+			goto end;
-+		raw_spin_lock(&cid_lock);
-+	} else {
-+		raw_spin_lock(&cid_lock);
-+		cid = __mm_cid_try_get(mm);
-+		if (cid >= 0)
-+			goto unlock;
-+	}
-+
-+	/*
-+	 * A cid was concurrently allocated. Retry while forcing subsequent
-+	 * allocations to use the cid_lock to ensure forward progress.
-+	 */
-+	WRITE_ONCE(use_cid_lock, 1);
-+	/*
-+	 * Set use_cid_lock before allocation. Only care about program order
-+	 * because this is only required for forward progress.
-+	 */
-+	barrier();
-+	/*
-+	 * Retry until it succeeds. It is guaranteed to eventually succeed once
-+	 * all newly arriving allocations observe the use_cid_lock flag set.
-+	 */
-+	do {
-+		cid = __mm_cid_try_get(mm);
-+		cpu_relax();
-+	} while (cid < 0);
-+	/*
-+	 * Allocate before clearing use_cid_lock. Only care about
-+	 * program order because this is for forward progress.
-+	 */
-+	barrier();
-+	WRITE_ONCE(use_cid_lock, 0);
-+unlock:
-+	raw_spin_unlock(&cid_lock);
-+end:
-+	mm_cid_snapshot_time(rq, mm);
-+	return cid;
-+}
-+
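
In short, the allocation above is a lock-free fast path with a locked
fallback for forward progress: the common case is a single lock-free
cpumask_first_zero() plus cpumask_test_and_set_cpu() attempt; on failure
the allocator takes cid_lock and raises use_cid_lock, funnelling every
later allocator through the lock, so the spinning retry is bounded — once
newcomers serialize, concurrent releases eventually free a bit. The
barrier() calls only order the flag writes against the allocation attempts
in program order, which is all the forward-progress argument needs.
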
-+static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+	struct cpumask *cpumask;
-+	int cid;
-+
-+	lockdep_assert_rq_held(rq);
-+	cpumask = mm_cidmask(mm);
-+	cid = __this_cpu_read(pcpu_cid->cid);
-+	if (mm_cid_is_valid(cid)) {
-+		mm_cid_snapshot_time(rq, mm);
-+		return cid;
-+	}
-+	if (mm_cid_is_lazy_put(cid)) {
-+		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+	}
-+	cid = __mm_cid_get(rq, mm);
-+	__this_cpu_write(pcpu_cid->cid, cid);
-+	return cid;
-+}
-+
-+static inline void switch_mm_cid(struct rq *rq,
-+				 struct task_struct *prev,
-+				 struct task_struct *next)
-+{
-+	/*
-+	 * Provide a memory barrier between rq->curr store and load of
-+	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
-+	 *
-+	 * Should be adapted if context_switch() is modified.
-+	 */
-+	if (!next->mm) {                                // to kernel
-+		/*
-+		 * user -> kernel transition does not guarantee a barrier, but
-+		 * we can use the fact that it performs an atomic operation in
-+		 * mmgrab().
-+		 */
-+		if (prev->mm)                           // from user
-+			smp_mb__after_mmgrab();
-+		/*
-+		 * kernel -> kernel transition does not change rq->curr->mm
-+		 * state. It stays NULL.
-+		 */
-+	} else {                                        // to user
-+		/*
-+		 * kernel -> user transition does not provide a barrier
-+		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
-+		 * Provide it here.
-+		 */
-+		if (!prev->mm)                          // from kernel
-+			smp_mb();
-+		/*
-+		 * user -> user transition guarantees a memory barrier through
-+		 * switch_mm() when current->mm changes. If current->mm is
-+		 * unchanged, no barrier is needed.
-+		 */
-+	}
-+	if (prev->mm_cid_active) {
-+		mm_cid_snapshot_time(rq, prev->mm);
-+		mm_cid_put_lazy(prev);
-+		prev->mm_cid = -1;
-+	}
-+	if (next->mm_cid_active)
-+		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
-+}
-+
-+#else
-+static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
-+static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
-+static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
-+static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-+static inline void init_sched_mm_cid(struct task_struct *t) { }
-+#endif
-+
-+#endif /* ALT_SCHED_H */
-diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
-new file mode 100644
-index 000000000000..840009dc1e8d
---- /dev/null
-+++ b/kernel/sched/bmq.h
-@@ -0,0 +1,99 @@
-+#define ALT_SCHED_NAME "BMQ"
-+
-+/*
-+ * BMQ-only routines
-+ */
-+#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
-+#define boost_threshold(p)	(sysctl_sched_base_slice >> ((14 - (p)->boost_prio) / 2))
-+
-+static inline void boost_task(struct task_struct *p)
-+{
-+	int limit;
-+
-+	switch (p->policy) {
-+	case SCHED_NORMAL:
-+		limit = -MAX_PRIORITY_ADJ;
-+		break;
-+	case SCHED_BATCH:
-+	case SCHED_IDLE:
-+		limit = 0;
-+		break;
-+	default:
-+		return;
-+	}
-+
-+	if (p->boost_prio > limit)
-+		p->boost_prio--;
-+}
-+
-+static inline void deboost_task(struct task_struct *p)
-+{
-+	if (p->boost_prio < MAX_PRIORITY_ADJ)
-+		p->boost_prio++;
-+}
-+
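
Worked numbers for the two macros above, assuming MAX_PRIORITY_ADJ is 12
(per the boost-range comment in alt_sched.h) and the default 4 ms base
slice — both assumptions, not values visible in this hunk:

    boost_prio   0:  threshold = 4ms >> 7   ~= 31 us
    boost_prio +12:  threshold = 4ms >> 1    = 2 ms    (fully deboosted)
    boost_prio -12:  threshold = 4ms >> 13  ~= 488 ns  (fully boosted)

Since the deactivate hook below only boosts a task that gives up the CPU
before its threshold elapses, the more boosted a task already is, the
shorter it must have run to be boosted further.
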
-+/*
-+ * Common interfaces
-+ */
-+static inline void sched_timeslice_imp(const int timeslice_ms) {}
-+
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	return p->prio + p->boost_prio - MAX_RT_PRIO;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MAX_RT_PRIO)? (p->prio >> 2) :
-+		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MAX_RT_PRIO) / 2;
-+}
-+
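
Worked numbers for the mapping above, assuming mainline's MAX_RT_PRIO of
100 (so a nice-0 task has p->prio 120) and MAX_PRIORITY_ADJ of 12:

    RT, p->prio 0..99:       p->prio >> 2             -> levels 0..24
    nice 0, boost_prio   0:  32 + (120 +  0 - 100)/2  -> level 42
    nice 0, boost_prio -12:  32 + (120 - 12 - 100)/2  -> level 36
    nice 0, boost_prio +12:  32 + (120 + 12 - 100)/2  -> level 48

The /2 folds two adjacent nice values into one level, leaving headroom for
the boost to move a task up or down without leaving the NORMAL range.
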
-+static inline int
-+task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
-+{
-+	return task_sched_prio(p);
-+}
-+
-+static inline int sched_prio2idx(int prio, struct rq *rq)
-+{
-+	return prio;
-+}
-+
-+static inline int sched_idx2prio(int idx, struct rq *rq)
-+{
-+	return idx;
-+}
-+
-+inline int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq) {}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+	if (rq_switch_time(rq) > sysctl_sched_base_slice)
-+		deboost_task(p);
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
-+static void sched_task_fork(struct task_struct *p, struct rq *rq) {}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	p->boost_prio = MAX_PRIORITY_ADJ;
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p)
-+{
-+	if (this_rq()->clock_task - p->last_ran > sysctl_sched_base_slice)
-+		boost_task(p);
-+}
-+
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
-+{
-+	if (rq_switch_time(rq) < boost_threshold(p))
-+		boost_task(p);
-+}
-diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
-index d9dc9ab3773f..71a25540d65e 100644
---- a/kernel/sched/build_policy.c
-+++ b/kernel/sched/build_policy.c
-@@ -42,13 +42,19 @@
- 
- #include "idle.c"
- 
-+#ifndef CONFIG_SCHED_ALT
- #include "rt.c"
-+#endif
- 
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- # include "cpudeadline.c"
-+#endif
- # include "pelt.c"
- #endif
- 
- #include "cputime.c"
--#include "deadline.c"
- 
-+#ifndef CONFIG_SCHED_ALT
-+#include "deadline.c"
-+#endif
-diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
-index 80a3df49ab47..bc17d5a6fc41 100644
---- a/kernel/sched/build_utility.c
-+++ b/kernel/sched/build_utility.c
-@@ -84,7 +84,9 @@
- 
- #ifdef CONFIG_SMP
- # include "cpupri.c"
-+#ifndef CONFIG_SCHED_ALT
- # include "stop_task.c"
-+#endif
- # include "topology.c"
- #endif
- 
-diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
-index 5888176354e2..6ab2534714f6 100644
---- a/kernel/sched/cpufreq_schedutil.c
-+++ b/kernel/sched/cpufreq_schedutil.c
-@@ -155,12 +155,18 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
- 
- static void sugov_get_util(struct sugov_cpu *sg_cpu)
- {
--	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
- 	struct rq *rq = cpu_rq(sg_cpu->cpu);
- 
-+#ifndef CONFIG_SCHED_ALT
-+	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
-+
- 	sg_cpu->bw_dl = cpu_bw_dl(rq);
- 	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
- 					  FREQUENCY_UTIL, NULL);
-+#else
-+	sg_cpu->bw_dl = 0;
-+	sg_cpu->util = rq_load_util(rq, arch_scale_cpu_capacity(sg_cpu->cpu));
-+#endif /* CONFIG_SCHED_ALT */
- }
- 
- /**
-@@ -306,8 +312,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
-  */
- static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
- {
-+#ifndef CONFIG_SCHED_ALT
- 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
- 		sg_cpu->sg_policy->limits_changed = true;
-+#endif
- }
- 
- static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
-@@ -636,6 +644,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
- 	}
- 
- 	ret = sched_setattr_nocheck(thread, &attr);
-+
- 	if (ret) {
- 		kthread_stop(thread);
- 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
-diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
-index af7952f12e6c..6461cbbb734d 100644
---- a/kernel/sched/cputime.c
-+++ b/kernel/sched/cputime.c
-@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
- 	p->utime += cputime;
- 	account_group_user_time(p, cputime);
- 
--	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
-+	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
- 
- 	/* Add user time to cpustat. */
- 	task_group_account_field(p, index, cputime);
-@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
- 	p->gtime += cputime;
- 
- 	/* Add guest time to cpustat. */
--	if (task_nice(p) > 0) {
-+	if (task_running_nice(p)) {
- 		task_group_account_field(p, CPUTIME_NICE, cputime);
- 		cpustat[CPUTIME_GUEST_NICE] += cputime;
- 	} else {
-@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
- #ifdef CONFIG_64BIT
- static inline u64 read_sum_exec_runtime(struct task_struct *t)
- {
--	return t->se.sum_exec_runtime;
-+	return tsk_seruntime(t);
- }
- #else
- static u64 read_sum_exec_runtime(struct task_struct *t)
-@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
- 	struct rq *rq;
- 
- 	rq = task_rq_lock(t, &rf);
--	ns = t->se.sum_exec_runtime;
-+	ns = tsk_seruntime(t);
- 	task_rq_unlock(rq, t, &rf);
- 
- 	return ns;
-@@ -630,7 +630,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
- void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
- {
- 	struct task_cputime cputime = {
--		.sum_exec_runtime = p->se.sum_exec_runtime,
-+		.sum_exec_runtime = tsk_seruntime(p),
- 	};
- 
- 	if (task_cputime(p, &cputime.utime, &cputime.stime))
-diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
-index 4580a450700e..8c8fd7da4617 100644
---- a/kernel/sched/debug.c
-+++ b/kernel/sched/debug.c
-@@ -7,6 +7,7 @@
-  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
-  */
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * This allows printing both to /sys/kernel/debug/sched/debug and
-  * to the console
-@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
- };
- 
- #endif /* SMP */
-+#endif /* !CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 
-@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
- 
- #endif /* CONFIG_PREEMPT_DYNAMIC */
- 
-+#ifndef CONFIG_SCHED_ALT
- __read_mostly bool sched_debug_verbose;
- 
- #ifdef CONFIG_SMP
-@@ -332,6 +335,7 @@ static const struct file_operations sched_debug_fops = {
- 	.llseek		= seq_lseek,
- 	.release	= seq_release,
- };
-+#endif /* !CONFIG_SCHED_ALT */
- 
- static struct dentry *debugfs_sched;
- 
-@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
- 
- 	debugfs_sched = debugfs_create_dir("sched", NULL);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
- 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- #ifdef CONFIG_PREEMPT_DYNAMIC
- 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
- #endif
- 
- 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
- 
-+#ifndef CONFIG_SCHED_ALT
- 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
- 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
- 
-@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
- #endif
- 
- 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- 
- 	return 0;
- }
- late_initcall(sched_init_debug);
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- 
- static cpumask_var_t		sd_sysctl_cpus;
-@@ -1106,6 +1115,7 @@ void proc_sched_set_task(struct task_struct *p)
- 	memset(&p->stats, 0, sizeof(p->stats));
- #endif
- }
-+#endif /* !CONFIG_SCHED_ALT */
- 
- void resched_latency_warn(int cpu, u64 latency)
- {
-diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
-index 565f8374ddbb..67d51e05a8ac 100644
---- a/kernel/sched/idle.c
-+++ b/kernel/sched/idle.c
-@@ -380,6 +380,7 @@ void cpu_startup_entry(enum cpuhp_state state)
- 		do_idle();
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * idle-task scheduling class.
-  */
-@@ -501,3 +502,4 @@ DEFINE_SCHED_CLASS(idle) = {
- 	.switched_to		= switched_to_idle,
- 	.update_curr		= update_curr_idle,
- };
-+#endif
-diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
-new file mode 100644
-index 000000000000..c35dfb909f23
---- /dev/null
-+++ b/kernel/sched/pds.h
-@@ -0,0 +1,141 @@
-+#define ALT_SCHED_NAME "PDS"
-+
-+static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
-+
-+#define SCHED_NORMAL_PRIO_NUM	(32)
-+#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
-+
-+/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
-+#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
-+
-+/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
-+static __read_mostly int sched_timeslice_shift = 23;
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline void sched_timeslice_imp(const int timeslice_ms)
-+{
-+	if (2 == timeslice_ms)
-+		sched_timeslice_shift = 22;
-+}
-+
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+	s64 delta = p->deadline - rq->time_edge + SCHED_EDGE_DELTA;
-+
-+#ifdef ALT_SCHED_DEBUG
-+	if (WARN_ONCE(delta > SCHED_NORMAL_PRIO_NUM - 1,
-+		      "pds: task_sched_prio_normal() delta %lld\n", delta))
-+		return SCHED_NORMAL_PRIO_NUM - 1;
-+#endif
-+
-+	return max(0LL, delta);
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
-+		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+static inline int
-+task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
-+{
-+	u64 idx;
-+
-+	if (p->prio < MIN_NORMAL_PRIO)
-+		return p->prio >> 2;
-+
-+	idx = max(p->deadline + SCHED_EDGE_DELTA, rq->time_edge);
-+	/*printk(KERN_INFO "sched: task_sched_prio_idx edge:%llu, deadline=%llu idx=%llu\n", rq->time_edge, p->deadline, idx);*/
-+	return MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(idx);
-+}
-+
-+static inline int sched_prio2idx(int sched_prio, struct rq *rq)
-+{
-+	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
-+		sched_prio :
-+		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
-+}
-+
-+static inline int sched_idx2prio(int sched_idx, struct rq *rq)
-+{
-+	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
-+		sched_idx :
-+		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
-+}
-+
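
A worked round-trip of the ring mapping above (SCHED_NORMAL_PRIO_MOD masks
to the low 5 bits, so the +32 base cancels modulo 32). With
rq->time_edge == 5:

    sched_prio2idx(40, rq) == 32 + ((40 + 5) & 31) == 45
    sched_idx2prio(45, rq) == 32 + ((45 - 5) & 31) == 40

Queue slots stay fixed while the ring's origin rotates with time_edge, so
tasks age into higher priority without being requeued.
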
-+int task_running_nice(struct task_struct *p)
-+{
-+	return (p->prio > DEFAULT_PRIO);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq)
-+{
-+	struct list_head head;
-+	u64 old = rq->time_edge;
-+	u64 now = rq->clock >> sched_timeslice_shift;
-+	u64 prio, delta;
-+	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
-+
-+	if (now == old)
-+		return;
-+
-+	rq->time_edge = now;
-+	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
-+	INIT_LIST_HEAD(&head);
-+
-+	prio = MIN_SCHED_NORMAL_PRIO;
-+	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
-+		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
-+				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
-+
-+	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
-+	if (!list_empty(&head)) {
-+		struct task_struct *p;
-+		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
-+
-+		list_for_each_entry(p, &head, sq_node)
-+			p->sq_idx = idx;
-+
-+		list_splice(&head, rq->queue.heads + idx);
-+		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
-+	}
-+	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
-+		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
-+
-+	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
-+		return;
-+
-+	rq->prio = (rq->prio < MIN_SCHED_NORMAL_PRIO + delta) ?
-+		MIN_SCHED_NORMAL_PRIO : rq->prio - delta;
-+}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+	if (p->prio >= MIN_NORMAL_PRIO)
-+		p->deadline = rq->time_edge + (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
-+}
-+
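
Worked deadlines for the renewal above, assuming mainline's MAX_PRIO of 140
and NICE_WIDTH of 40 (so the subtrahend is 100), with SCHED_EDGE_DELTA
evaluating to 12:

    nice -20 (static_prio 100): deadline = time_edge +  0 -> level 32 + (0  + 12) == 44
    nice   0 (static_prio 120): deadline = time_edge + 10 -> level 32 + (10 + 12) == 54
    nice +19 (static_prio 139): deadline = time_edge + 19 -> level 32 + (19 + 12) == 63

As rq->time_edge advances past a deadline, task_sched_prio_normal()'s delta
shrinks (clamped at 0), so a waiting task drifts toward level 32 and
eventually outranks freshly renewed ones.
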
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
-+{
-+	u64 max_dl = rq->time_edge + NICE_WIDTH / 2 - 1;
-+	if (unlikely(p->deadline > max_dl))
-+		p->deadline = max_dl;
-+}
-+
-+static void sched_task_fork(struct task_struct *p, struct rq *rq)
-+{
-+	sched_task_renew(p, rq);
-+}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+	p->time_slice = sysctl_sched_base_slice;
-+	sched_task_renew(p, rq);
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p) {}
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
-diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
-index 63b6cf898220..9ca10ece4d3a 100644
---- a/kernel/sched/pelt.c
-+++ b/kernel/sched/pelt.c
-@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
- 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- /*
-  * sched_entity:
-  *
-@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
- 
- 	return 0;
- }
-+#endif
- 
--#ifdef CONFIG_SCHED_THERMAL_PRESSURE
-+#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- /*
-  * thermal:
-  *
-diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
-index 3a0e0dc28721..e8a7d84aa5a5 100644
---- a/kernel/sched/pelt.h
-+++ b/kernel/sched/pelt.h
-@@ -1,13 +1,15 @@
- #ifdef CONFIG_SMP
- #include "sched-pelt.h"
- 
-+#ifndef CONFIG_SCHED_ALT
- int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
- int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
- int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
- int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
- int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
-+#endif
- 
--#ifdef CONFIG_SCHED_THERMAL_PRESSURE
-+#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
- 
- static inline u64 thermal_load_avg(struct rq *rq)
-@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
- 	return PELT_MIN_DIVIDER + avg->period_contrib;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void cfs_se_util_change(struct sched_avg *avg)
- {
- 	unsigned int enqueued;
-@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
- 	return rq_clock_pelt(rq_of(cfs_rq));
- }
- #endif
-+#endif /* CONFIG_SCHED_ALT */
- 
- #else
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline int
- update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
- {
-@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
- {
- 	return 0;
- }
-+#endif
- 
- static inline int
- update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
-diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
-index 2e5a95486a42..0c86131a2a64 100644
---- a/kernel/sched/sched.h
-+++ b/kernel/sched/sched.h
-@@ -5,6 +5,10 @@
- #ifndef _KERNEL_SCHED_SCHED_H
- #define _KERNEL_SCHED_SCHED_H
- 
-+#ifdef CONFIG_SCHED_ALT
-+#include "alt_sched.h"
-+#else
-+
- #include <linux/sched/affinity.h>
- #include <linux/sched/autogroup.h>
- #include <linux/sched/cpufreq.h>
-@@ -3509,4 +3513,9 @@ static inline void init_sched_mm_cid(struct task_struct *t) { }
- extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
- extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
- 
-+static inline int task_running_nice(struct task_struct *p)
-+{
-+	return (task_nice(p) > 0);
-+}
-+#endif /* !CONFIG_SCHED_ALT */
- #endif /* _KERNEL_SCHED_SCHED_H */
-diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
-index 857f837f52cb..5486c63e4790 100644
---- a/kernel/sched/stats.c
-+++ b/kernel/sched/stats.c
-@@ -125,8 +125,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
- 	} else {
- 		struct rq *rq;
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		struct sched_domain *sd;
- 		int dcount = 0;
-+#endif
- #endif
- 		cpu = (unsigned long)(v - 2);
- 		rq = cpu_rq(cpu);
-@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
- 		seq_printf(seq, "\n");
- 
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- 		/* domain-specific stats */
- 		rcu_read_lock();
- 		for_each_domain(cpu, sd) {
-@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
- 			    sd->ttwu_move_balance);
- 		}
- 		rcu_read_unlock();
-+#endif
- #endif
- 	}
- 	return 0;
-diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
-index 38f3698f5e5b..b9d597394316 100644
---- a/kernel/sched/stats.h
-+++ b/kernel/sched/stats.h
-@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delt
- 
- #endif /* CONFIG_SCHEDSTATS */
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_FAIR_GROUP_SCHED
- struct sched_entity_stats {
- 	struct sched_entity     se;
-@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
- #endif
- 	return &task_of(se)->stats;
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- #ifdef CONFIG_PSI
- void psi_task_change(struct task_struct *task, int clear, int set);
-diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
-index 10d1391e7416..120933a5b206 100644
---- a/kernel/sched/topology.c
-+++ b/kernel/sched/topology.c
-@@ -3,6 +3,7 @@
-  * Scheduler topology setup/handling methods
-  */
- 
-+#ifndef CONFIG_SCHED_ALT
- #include <linux/bsearch.h>
- 
- DEFINE_MUTEX(sched_domains_mutex);
-@@ -1445,8 +1446,10 @@ static void asym_cpu_capacity_scan(void)
-  */
- 
- static int default_relax_domain_level = -1;
-+#endif /* CONFIG_SCHED_ALT */
- int sched_domain_level_max;
- 
-+#ifndef CONFIG_SCHED_ALT
- static int __init setup_relax_domain_level(char *str)
- {
- 	if (kstrtoint(str, 0, &default_relax_domain_level))
-@@ -1680,6 +1683,7 @@ sd_init(struct sched_domain_topology_level *tl,
- 
- 	return sd;
- }
-+#endif /* CONFIG_SCHED_ALT */
- 
- /*
-  * Topology list, bottom-up.
-@@ -1716,6 +1720,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
- 	sched_domain_topology_saved = NULL;
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_NUMA
- 
- static const struct cpumask *sd_numa_mask(int cpu)
-@@ -2793,3 +2798,20 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
- 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
- 	mutex_unlock(&sched_domains_mutex);
- }
-+#else /* CONFIG_SCHED_ALT */
-+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-+			     struct sched_domain_attr *dattr_new)
-+{}
-+
-+#ifdef CONFIG_NUMA
-+int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+	return best_mask_cpu(cpu, cpus);
-+}
-+
-+int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
-+{
-+	return cpumask_nth(cpu, cpus);
-+}
-+#endif /* CONFIG_NUMA */
-+#endif
-diff --git a/kernel/sysctl.c b/kernel/sysctl.c
-index 157f7ce2942d..63083a9a2935 100644
---- a/kernel/sysctl.c
-+++ b/kernel/sysctl.c
-@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
- 
- /* Constants used for minimum and maximum */
- 
-+#ifdef CONFIG_SCHED_ALT
-+extern int sched_yield_type;
-+#endif
-+
- #ifdef CONFIG_PERF_EVENTS
- static const int six_hundred_forty_kb = 640 * 1024;
- #endif
-@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
- 		.proc_handler	= proc_dointvec,
- 	},
- #endif
-+#ifdef CONFIG_SCHED_ALT
-+	{
-+		.procname	= "yield_type",
-+		.data		= &sched_yield_type,
-+		.maxlen		= sizeof (int),
-+		.mode		= 0644,
-+		.proc_handler	= &proc_dointvec_minmax,
-+		.extra1		= SYSCTL_ZERO,
-+		.extra2		= SYSCTL_TWO,
-+	},
-+#endif
- #if defined(CONFIG_S390) && defined(CONFIG_SMP)
- 	{
- 		.procname	= "spin_retry",
-diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
-index 760793998cdd..3198ed8ab40a 100644
---- a/kernel/time/hrtimer.c
-+++ b/kernel/time/hrtimer.c
-@@ -2091,8 +2091,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
- 	int ret = 0;
- 	u64 slack;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	slack = current->timer_slack_ns;
--	if (rt_task(current))
-+	if (dl_task(current) || rt_task(current))
-+#endif
- 		slack = 0;
- 
- 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
-diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
-index e9c6f9d0e42c..43ee0a94abdd 100644
---- a/kernel/time/posix-cpu-timers.c
-+++ b/kernel/time/posix-cpu-timers.c
-@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
- 	u64 stime, utime;
- 
- 	task_cputime(p, &utime, &stime);
--	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
-+	store_samples(samples, stime, utime, tsk_seruntime(p));
- }
- 
- static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
-@@ -867,6 +867,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
- 	}
- }
- 
-+#ifndef CONFIG_SCHED_ALT
- static inline void check_dl_overrun(struct task_struct *tsk)
- {
- 	if (tsk->dl.dl_overrun) {
-@@ -874,6 +875,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
- 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
- 	}
- }
-+#endif
- 
- static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
- {
-@@ -901,8 +903,10 @@ static void check_thread_timers(struct task_struct *tsk,
- 	u64 samples[CPUCLOCK_MAX];
- 	unsigned long soft;
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk))
- 		check_dl_overrun(tsk);
-+#endif
- 
- 	if (expiry_cache_is_inactive(pct))
- 		return;
-@@ -916,7 +920,7 @@ static void check_thread_timers(struct task_struct *tsk,
- 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
- 	if (soft != RLIM_INFINITY) {
- 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
--		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
-+		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
- 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
- 
- 		/* At the hard limit, send SIGKILL. No further action. */
-@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
- 			return true;
- 	}
- 
-+#ifndef CONFIG_SCHED_ALT
- 	if (dl_task(tsk) && tsk->dl.dl_overrun)
- 		return true;
-+#endif
- 
- 	return false;
- }
-diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
-index 529590499b1f..d04bb99b4f0e 100644
---- a/kernel/trace/trace_selftest.c
-+++ b/kernel/trace/trace_selftest.c
-@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void *data)
- {
- 	/* Make this a -deadline thread */
- 	static const struct sched_attr attr = {
-+#ifdef CONFIG_SCHED_ALT
-+		/* No deadline on BMQ/PDS, use RR */
-+		.sched_policy = SCHED_RR,
-+#else
- 		.sched_policy = SCHED_DEADLINE,
- 		.sched_runtime = 100000ULL,
- 		.sched_deadline = 10000000ULL,
- 		.sched_period = 10000000ULL
-+#endif
- 	};
- 	struct wakeup_test_data *x = data;
- 
-diff --git a/kernel/workqueue.c b/kernel/workqueue.c
-index 2989b57e154a..7313d9f5585f 100644
---- a/kernel/workqueue.c
-+++ b/kernel/workqueue.c
-@@ -1114,6 +1114,7 @@ static bool kick_pool(struct worker_pool *pool)
- 
- 	p = worker->task;
- 
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- 	/*
- 	 * Idle @worker is about to execute @work and waking up provides an
-@@ -1139,6 +1140,8 @@ static bool kick_pool(struct worker_pool *pool)
- 		get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++;
- 	}
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
-+
- 	wake_up_process(p);
- 	return true;
- }
-@@ -1263,7 +1266,11 @@ void wq_worker_running(struct task_struct *task)
- 	 * CPU intensive auto-detection cares about how long a work item hogged
- 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
- 	 */
-+#ifdef CONFIG_SCHED_ALT
-+	worker->current_at = worker->task->sched_time;
-+#else
- 	worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
- 
- 	WRITE_ONCE(worker->sleeping, 0);
- }
-@@ -1348,7 +1355,11 @@ void wq_worker_tick(struct task_struct *task)
- 	 * We probably want to make this prettier in the future.
- 	 */
- 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
-+#ifdef CONFIG_SCHED_ALT
-+	    worker->task->sched_time - worker->current_at <
-+#else
- 	    worker->task->se.sum_exec_runtime - worker->current_at <
-+#endif
- 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
- 		return;
- 
-@@ -2559,7 +2570,11 @@ __acquires(&pool->lock)
- 	worker->current_work = work;
- 	worker->current_func = work->func;
- 	worker->current_pwq = pwq;
-+#ifdef CONFIG_SCHED_ALT
-+	worker->current_at = worker->task->sched_time;
-+#else
- 	worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
- 	work_data = *work_data_bits(work);
- 	worker->current_color = get_work_color(work_data);
- 
-diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
-index 120933a5b206..dc717683342e 100644
---- a/kernel/sched/topology.c
-+++ b/kernel/sched/topology.c
-@@ -2813,5 +2813,11 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
- {
- 	return cpumask_nth(cpu, cpus);
- }
-+
-+const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
-+{
-+	return ERR_PTR(-EOPNOTSUPP);
-+}
-+EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
- #endif /* CONFIG_NUMA */
- #endif
--- 
-GitLab
-



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-16 18:55 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-16 18:55 UTC (permalink / raw
  To: gentoo-commits

commit:     9ca25153dea35273a8c132829ba6a228787bbf66
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 16 18:54:53 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 16 18:54:53 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9ca25153

Linux patch 6.7.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1004_linux-6.7.5.patch | 5893 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5897 insertions(+)

diff --git a/0000_README b/0000_README
index 1753553d..db01c0e3 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-6.7.4.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.4
 
+Patch:  1004_linux-6.7.5.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.5
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1004_linux-6.7.5.patch b/1004_linux-6.7.5.patch
new file mode 100644
index 00000000..d4c91ccf
--- /dev/null
+++ b/1004_linux-6.7.5.patch
@@ -0,0 +1,5893 @@
+diff --git a/Makefile b/Makefile
+index 73a208d9d4c48..0f5bb9ddc98fa 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
+index 563af3e75f01f..329c94cd45d8f 100644
+--- a/arch/arc/include/asm/cacheflush.h
++++ b/arch/arc/include/asm/cacheflush.h
+@@ -40,6 +40,7 @@ void dma_cache_wback(phys_addr_t start, unsigned long sz);
+ 
+ /* TBD: optimize this */
+ #define flush_cache_vmap(start, end)		flush_cache_all()
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		flush_cache_all()
+ 
+ #define flush_cache_dup_mm(mm)			/* called on fork (VIVT only) */
+diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
+index f6181f69577fe..1075534b0a2ee 100644
+--- a/arch/arm/include/asm/cacheflush.h
++++ b/arch/arm/include/asm/cacheflush.h
+@@ -340,6 +340,8 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
+ 		dsb(ishst);
+ }
+ 
++#define flush_cache_vmap_early(start, end)	do { } while (0)
++
+ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
+ {
+ 	if (!cache_is_vipt_nonaliasing())
+diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
+index 908d8b0bc4fdc..d011a81575d21 100644
+--- a/arch/csky/abiv1/inc/abi/cacheflush.h
++++ b/arch/csky/abiv1/inc/abi/cacheflush.h
+@@ -43,6 +43,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
+  */
+ extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+ #define flush_cache_vmap(start, end)		cache_wbinv_all()
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		cache_wbinv_all()
+ 
+ #define flush_icache_range(start, end)		cache_wbinv_range(start, end)
+diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
+index 40be16907267d..6513ac5d25788 100644
+--- a/arch/csky/abiv2/inc/abi/cacheflush.h
++++ b/arch/csky/abiv2/inc/abi/cacheflush.h
+@@ -41,6 +41,7 @@ void flush_icache_mm_range(struct mm_struct *mm,
+ void flush_icache_deferred(struct mm_struct *mm);
+ 
+ #define flush_cache_vmap(start, end)		do { } while (0)
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		do { } while (0)
+ 
+ #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
+index ed12358c4783b..9a71b0148461a 100644
+--- a/arch/m68k/include/asm/cacheflush_mm.h
++++ b/arch/m68k/include/asm/cacheflush_mm.h
+@@ -191,6 +191,7 @@ extern void cache_push_v(unsigned long vaddr, int len);
+ #define flush_cache_all() __flush_cache_all()
+ 
+ #define flush_cache_vmap(start, end)		flush_cache_all()
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		flush_cache_all()
+ 
+ static inline void flush_cache_mm(struct mm_struct *mm)
+diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
+index f36c2519ed976..1f14132b3fc98 100644
+--- a/arch/mips/include/asm/cacheflush.h
++++ b/arch/mips/include/asm/cacheflush.h
+@@ -97,6 +97,8 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
+ 		__flush_cache_vmap();
+ }
+ 
++#define flush_cache_vmap_early(start, end)     do { } while (0)
++
+ extern void (*__flush_cache_vunmap)(void);
+ 
+ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
+diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
+index 348cea0977927..81484a776b333 100644
+--- a/arch/nios2/include/asm/cacheflush.h
++++ b/arch/nios2/include/asm/cacheflush.h
+@@ -38,6 +38,7 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+ #define flush_icache_pages flush_icache_pages
+ 
+ #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)
+ 
+ extern void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
+index b4006f2a97052..ba4c05bc24d69 100644
+--- a/arch/parisc/include/asm/cacheflush.h
++++ b/arch/parisc/include/asm/cacheflush.h
+@@ -41,6 +41,7 @@ void flush_kernel_vmap_range(void *vaddr, int size);
+ void invalidate_kernel_vmap_range(void *vaddr, int size);
+ 
+ #define flush_cache_vmap(start, end)		flush_cache_all()
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		flush_cache_all()
+ 
+ void flush_dcache_folio(struct folio *folio);
+diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
+index 3cb53c4df27cf..a129dac4521d3 100644
+--- a/arch/riscv/include/asm/cacheflush.h
++++ b/arch/riscv/include/asm/cacheflush.h
+@@ -37,7 +37,8 @@ static inline void flush_dcache_page(struct page *page)
+ 	flush_icache_mm(vma->vm_mm, 0)
+ 
+ #ifdef CONFIG_64BIT
+-#define flush_cache_vmap(start, end)	flush_tlb_kernel_range(start, end)
++#define flush_cache_vmap(start, end)		flush_tlb_kernel_range(start, end)
++#define flush_cache_vmap_early(start, end)	local_flush_tlb_kernel_range(start, end)
+ #endif
+ 
+ #ifndef CONFIG_SMP
+diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
+index 4c5b0e929890f..20f9c3ba23414 100644
+--- a/arch/riscv/include/asm/hugetlb.h
++++ b/arch/riscv/include/asm/hugetlb.h
+@@ -11,6 +11,9 @@ static inline void arch_clear_hugepage_flags(struct page *page)
+ }
+ #define arch_clear_hugepage_flags arch_clear_hugepage_flags
+ 
++bool arch_hugetlb_migration_supported(struct hstate *h);
++#define arch_hugetlb_migration_supported arch_hugetlb_migration_supported
++
+ #ifdef CONFIG_RISCV_ISA_SVNAPOT
+ #define __HAVE_ARCH_HUGE_PTE_CLEAR
+ void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+diff --git a/arch/riscv/include/asm/stacktrace.h b/arch/riscv/include/asm/stacktrace.h
+index f7e8ef2418b99..b1495a7e06ce6 100644
+--- a/arch/riscv/include/asm/stacktrace.h
++++ b/arch/riscv/include/asm/stacktrace.h
+@@ -21,4 +21,9 @@ static inline bool on_thread_stack(void)
+ 	return !(((unsigned long)(current->stack) ^ current_stack_pointer) & ~(THREAD_SIZE - 1));
+ }
+ 
++
++#ifdef CONFIG_VMAP_STACK
++DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
++#endif /* CONFIG_VMAP_STACK */
++
+ #endif /* _ASM_RISCV_STACKTRACE_H */
+diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h
+index 1eb5682b2af60..50b63b5c15bd8 100644
+--- a/arch/riscv/include/asm/tlb.h
++++ b/arch/riscv/include/asm/tlb.h
+@@ -16,7 +16,7 @@ static void tlb_flush(struct mmu_gather *tlb);
+ static inline void tlb_flush(struct mmu_gather *tlb)
+ {
+ #ifdef CONFIG_MMU
+-	if (tlb->fullmm || tlb->need_flush_all)
++	if (tlb->fullmm || tlb->need_flush_all || tlb->freed_tables)
+ 		flush_tlb_mm(tlb->mm);
+ 	else
+ 		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end,
+diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
+index 8f3418c5f1724..51664ae4852e7 100644
+--- a/arch/riscv/include/asm/tlbflush.h
++++ b/arch/riscv/include/asm/tlbflush.h
+@@ -41,6 +41,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
+ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ 		     unsigned long end);
+ void flush_tlb_kernel_range(unsigned long start, unsigned long end);
++void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
+ void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+@@ -66,6 +67,7 @@ static inline void flush_tlb_kernel_range(unsigned long start,
+ 
+ #define flush_tlb_mm(mm) flush_tlb_all()
+ #define flush_tlb_mm_range(mm, start, end, page_size) flush_tlb_all()
++#define local_flush_tlb_kernel_range(start, end) flush_tlb_all()
+ #endif /* !CONFIG_SMP || !CONFIG_MMU */
+ 
+ #endif /* _ASM_RISCV_TLBFLUSH_H */
+diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
+index b52f0210481fa..e7b69281875b2 100644
+--- a/arch/riscv/mm/hugetlbpage.c
++++ b/arch/riscv/mm/hugetlbpage.c
+@@ -125,6 +125,26 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
+ 	return pte;
+ }
+ 
++unsigned long hugetlb_mask_last_page(struct hstate *h)
++{
++	unsigned long hp_size = huge_page_size(h);
++
++	switch (hp_size) {
++#ifndef __PAGETABLE_PMD_FOLDED
++	case PUD_SIZE:
++		return P4D_SIZE - PUD_SIZE;
++#endif
++	case PMD_SIZE:
++		return PUD_SIZE - PMD_SIZE;
++	case napot_cont_size(NAPOT_CONT64KB_ORDER):
++		return PMD_SIZE - napot_cont_size(NAPOT_CONT64KB_ORDER);
++	default:
++		break;
++	}
++
++	return 0UL;
++}
++
+ static pte_t get_clear_contig(struct mm_struct *mm,
+ 			      unsigned long addr,
+ 			      pte_t *ptep,
+@@ -177,13 +197,36 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
+ 	return entry;
+ }
+ 
++static void clear_flush(struct mm_struct *mm,
++			unsigned long addr,
++			pte_t *ptep,
++			unsigned long pgsize,
++			unsigned long ncontig)
++{
++	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
++	unsigned long i, saddr = addr;
++
++	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
++		ptep_get_and_clear(mm, addr, ptep);
++
++	flush_tlb_range(&vma, saddr, addr);
++}
++
++/*
++ * When dealing with NAPOT mappings, the privileged specification indicates that
++ * "if an update needs to be made, the OS generally should first mark all of the
++ * PTEs invalid, then issue SFENCE.VMA instruction(s) covering all 4 KiB regions
++ * within the range, [...] then update the PTE(s), as described in Section
++ * 4.2.1.". That's the equivalent of the Break-Before-Make approach used by
++ * arm64.
++ */
+ void set_huge_pte_at(struct mm_struct *mm,
+ 		     unsigned long addr,
+ 		     pte_t *ptep,
+ 		     pte_t pte,
+ 		     unsigned long sz)
+ {
+-	unsigned long hugepage_shift;
++	unsigned long hugepage_shift, pgsize;
+ 	int i, pte_num;
+ 
+ 	if (sz >= PGDIR_SIZE)
+@@ -198,7 +241,22 @@ void set_huge_pte_at(struct mm_struct *mm,
+ 		hugepage_shift = PAGE_SHIFT;
+ 
+ 	pte_num = sz >> hugepage_shift;
+-	for (i = 0; i < pte_num; i++, ptep++, addr += (1 << hugepage_shift))
++	pgsize = 1 << hugepage_shift;
++
++	if (!pte_present(pte)) {
++		for (i = 0; i < pte_num; i++, ptep++, addr += pgsize)
++			set_ptes(mm, addr, ptep, pte, 1);
++		return;
++	}
++
++	if (!pte_napot(pte)) {
++		set_ptes(mm, addr, ptep, pte, 1);
++		return;
++	}
++
++	clear_flush(mm, addr, ptep, pgsize, pte_num);
++
++	for (i = 0; i < pte_num; i++, ptep++, addr += pgsize)
+ 		set_pte_at(mm, addr, ptep, pte);
+ }
+ 
+@@ -306,7 +364,7 @@ void huge_pte_clear(struct mm_struct *mm,
+ 		pte_clear(mm, addr, ptep);
+ }
+ 
+-static __init bool is_napot_size(unsigned long size)
++static bool is_napot_size(unsigned long size)
+ {
+ 	unsigned long order;
+ 
+@@ -334,7 +392,7 @@ arch_initcall(napot_hugetlbpages_init);
+ 
+ #else
+ 
+-static __init bool is_napot_size(unsigned long size)
++static bool is_napot_size(unsigned long size)
+ {
+ 	return false;
+ }
+@@ -351,7 +409,7 @@ int pmd_huge(pmd_t pmd)
+ 	return pmd_leaf(pmd);
+ }
+ 
+-bool __init arch_hugetlb_valid_size(unsigned long size)
++static bool __hugetlb_valid_size(unsigned long size)
+ {
+ 	if (size == HPAGE_SIZE)
+ 		return true;
+@@ -363,6 +421,16 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
+ 		return false;
+ }
+ 
++bool __init arch_hugetlb_valid_size(unsigned long size)
++{
++	return __hugetlb_valid_size(size);
++}
++
++bool arch_hugetlb_migration_supported(struct hstate *h)
++{
++	return __hugetlb_valid_size(huge_page_size(h));
++}
++
+ #ifdef CONFIG_CONTIG_ALLOC
+ static __init int gigantic_pages_init(void)
+ {
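
The new hugetlb_mask_last_page() lets the generic hugetlb walker skip ahead once it has handled the last huge page covered by the enclosing page-table entry. A minimal userspace rendering of the arithmetic, assuming 4 KiB base pages with 2 MiB PMD, 1 GiB PUD, and 512 GiB P4D entries; the constants are illustrative stand-ins, not values pulled from the kernel headers:

#include <stdio.h>

/* Assumed geometry (4 KiB base pages). */
#define PMD_SIZE  (1UL << 21)               /* 2 MiB   */
#define PUD_SIZE  (1UL << 30)               /* 1 GiB   */
#define P4D_SIZE  (1UL << 39)               /* 512 GiB */
#define NAPOT_CONT64KB_SIZE (1UL << 16)     /* 64 KiB NAPOT range */

/* Mirrors the patch: the mask spans from one huge page past the current
 * one to the end of the next-level entry, so OR-ing it into an address
 * jumps to the last huge page under that entry. */
static unsigned long mask_last_page(unsigned long hp_size)
{
    switch (hp_size) {
    case PUD_SIZE:
        return P4D_SIZE - PUD_SIZE;
    case PMD_SIZE:
        return PUD_SIZE - PMD_SIZE;
    case NAPOT_CONT64KB_SIZE:
        return PMD_SIZE - NAPOT_CONT64KB_SIZE;
    default:
        return 0UL;
    }
}

int main(void)
{
    printf("PMD huge page: mask=%#lx\n", mask_last_page(PMD_SIZE));
    printf("PUD huge page: mask=%#lx\n", mask_last_page(PUD_SIZE));
    printf("64K NAPOT:     mask=%#lx\n", mask_last_page(NAPOT_CONT64KB_SIZE));
    return 0;
}
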
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index ad77ed410d4dd..ee224fe18d185 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -1385,6 +1385,10 @@ void __init misc_mem_init(void)
+ 	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
+ 	arch_numa_init();
+ 	sparse_init();
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++	/* The entire VMEMMAP region has been populated. Flush TLB for this region */
++	local_flush_tlb_kernel_range(VMEMMAP_START, VMEMMAP_END);
++#endif
+ 	zone_sizes_init();
+ 	arch_reserve_crashkernel();
+ 	memblock_dump_all();
+diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
+index e6659d7368b35..1f90721d22e9a 100644
+--- a/arch/riscv/mm/tlbflush.c
++++ b/arch/riscv/mm/tlbflush.c
+@@ -66,6 +66,12 @@ static inline void local_flush_tlb_range_asid(unsigned long start,
+ 		local_flush_tlb_range_threshold_asid(start, size, stride, asid);
+ }
+ 
++/* Flush a range of kernel pages without broadcasting */
++void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
++{
++	local_flush_tlb_range_asid(start, end - start, PAGE_SIZE, FLUSH_TLB_NO_ASID);
++}
++
+ static void __ipi_flush_tlb_all(void *info)
+ {
+ 	local_flush_tlb_all();
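
local_flush_tlb_kernel_range() deliberately skips the IPI broadcast: flush_cache_vmap_early() runs before the other harts are online, so only the executing CPU needs its TLB cleaned. The helper it wraps appears (from the local_flush_tlb_range_threshold_asid() fallback visible above) to choose between per-page and full flushes. A hedged sketch of that general strategy; the threshold constant and the flush primitives below are stand-ins, not the kernel's:

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define FLUSH_THRESHOLD_PAGES 64UL  /* assumed cutoff, not the kernel's */

static void flush_one(unsigned long addr)
{
    printf("  flush page at %#lx (sfence.vma stand-in)\n", addr);
}

static void flush_all(void)
{
    printf("  flush everything (sfence.vma stand-in)\n");
}

/* Sketch of a ranged local flush: per-page invalidation is cheap for a
 * handful of pages, but past a threshold one full flush beats issuing
 * thousands of individual invalidations. */
static void local_flush_range(unsigned long start, unsigned long size,
                              unsigned long stride)
{
    if (size / stride > FLUSH_THRESHOLD_PAGES) {
        flush_all();
        return;
    }
    for (unsigned long a = start; a < start + size; a += stride)
        flush_one(a);
}

int main(void)
{
    printf("small range:\n");
    local_flush_range(0x10000UL, 4 * PAGE_SIZE, PAGE_SIZE);
    printf("large range:\n");
    local_flush_range(0x10000UL, 1024 * PAGE_SIZE, PAGE_SIZE);
    return 0;
}
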
+diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
+index 878b6b551bd2d..51112f54552b3 100644
+--- a/arch/sh/include/asm/cacheflush.h
++++ b/arch/sh/include/asm/cacheflush.h
+@@ -90,6 +90,7 @@ extern void copy_from_user_page(struct vm_area_struct *vma,
+ 	unsigned long len);
+ 
+ #define flush_cache_vmap(start, end)		local_flush_cache_all(NULL)
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		local_flush_cache_all(NULL)
+ 
+ #define flush_dcache_mmap_lock(mapping)		do { } while (0)
+diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
+index f3b7270bf71b2..9fee0ccfccb8e 100644
+--- a/arch/sparc/include/asm/cacheflush_32.h
++++ b/arch/sparc/include/asm/cacheflush_32.h
+@@ -48,6 +48,7 @@ static inline void flush_dcache_page(struct page *page)
+ #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
+ 
+ #define flush_cache_vmap(start, end)		flush_cache_all()
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		flush_cache_all()
+ 
+ /* When a context switch happens we must flush all user windows so that
+diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
+index 0e879004efff1..2b1261b77ecd1 100644
+--- a/arch/sparc/include/asm/cacheflush_64.h
++++ b/arch/sparc/include/asm/cacheflush_64.h
+@@ -75,6 +75,7 @@ void flush_ptrace_access(struct vm_area_struct *, struct page *,
+ #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
+ 
+ #define flush_cache_vmap(start, end)		do { } while (0)
++#define flush_cache_vmap_early(start, end)	do { } while (0)
+ #define flush_cache_vunmap(start, end)		do { } while (0)
+ 
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
+index b2771710ed989..a1bbedd989e42 100644
+--- a/arch/x86/boot/header.S
++++ b/arch/x86/boot/header.S
+@@ -106,8 +106,7 @@ extra_header_fields:
+ 	.word	0				# MinorSubsystemVersion
+ 	.long	0				# Win32VersionValue
+ 
+-	.long	setup_size + ZO__end + pecompat_vsize
+-						# SizeOfImage
++	.long	setup_size + ZO__end		# SizeOfImage
+ 
+ 	.long	salign				# SizeOfHeaders
+ 	.long	0				# CheckSum
+@@ -143,7 +142,7 @@ section_table:
+ 	.ascii	".setup"
+ 	.byte	0
+ 	.byte	0
+-	.long	setup_size - salign 		# VirtualSize
++	.long	pecompat_fstart - salign 	# VirtualSize
+ 	.long	salign				# VirtualAddress
+ 	.long	pecompat_fstart - salign	# SizeOfRawData
+ 	.long	salign				# PointerToRawData
+@@ -156,8 +155,8 @@ section_table:
+ #ifdef CONFIG_EFI_MIXED
+ 	.asciz	".compat"
+ 
+-	.long	8				# VirtualSize
+-	.long	setup_size + ZO__end		# VirtualAddress
++	.long	pecompat_fsize			# VirtualSize
++	.long	pecompat_fstart			# VirtualAddress
+ 	.long	pecompat_fsize			# SizeOfRawData
+ 	.long	pecompat_fstart			# PointerToRawData
+ 
+@@ -172,17 +171,16 @@ section_table:
+ 	 * modes this image supports.
+ 	 */
+ 	.pushsection ".pecompat", "a", @progbits
+-	.balign	falign
+-	.set	pecompat_vsize, salign
++	.balign	salign
+ 	.globl	pecompat_fstart
+ pecompat_fstart:
+ 	.byte	0x1				# Version
+ 	.byte	8				# Size
+ 	.word	IMAGE_FILE_MACHINE_I386		# PE machine type
+ 	.long	setup_size + ZO_efi32_pe_entry	# Entrypoint
++	.byte	0x0				# Sentinel
+ 	.popsection
+ #else
+-	.set	pecompat_vsize, 0
+ 	.set	pecompat_fstart, setup_size
+ #endif
+ 	.ascii	".text"
+diff --git a/arch/x86/boot/setup.ld b/arch/x86/boot/setup.ld
+index 83bb7efad8ae7..3a2d1360abb01 100644
+--- a/arch/x86/boot/setup.ld
++++ b/arch/x86/boot/setup.ld
+@@ -24,6 +24,9 @@ SECTIONS
+ 	.text		: { *(.text .text.*) }
+ 	.text32		: { *(.text32) }
+ 
++	.pecompat	: { *(.pecompat) }
++	PROVIDE(pecompat_fsize = setup_size - pecompat_fstart);
++
+ 	. = ALIGN(16);
+ 	.rodata		: { *(.rodata*) }
+ 
+@@ -36,9 +39,6 @@ SECTIONS
+ 	. = ALIGN(16);
+ 	.data		: { *(.data*) }
+ 
+-	.pecompat	: { *(.pecompat) }
+-	PROVIDE(pecompat_fsize = setup_size - pecompat_fstart);
+-
+ 	.signature	: {
+ 		setup_sig = .;
+ 		LONG(0x5a5aaa55)
+diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
+index 20ef350a60fbb..10d5ed8b5990f 100644
+--- a/arch/x86/lib/getuser.S
++++ b/arch/x86/lib/getuser.S
+@@ -163,23 +163,23 @@ SYM_CODE_END(__get_user_8_handle_exception)
+ #endif
+ 
+ /* get_user */
+-	_ASM_EXTABLE(1b, __get_user_handle_exception)
+-	_ASM_EXTABLE(2b, __get_user_handle_exception)
+-	_ASM_EXTABLE(3b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(1b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(2b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(3b, __get_user_handle_exception)
+ #ifdef CONFIG_X86_64
+-	_ASM_EXTABLE(4b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(4b, __get_user_handle_exception)
+ #else
+-	_ASM_EXTABLE(4b, __get_user_8_handle_exception)
+-	_ASM_EXTABLE(5b, __get_user_8_handle_exception)
++	_ASM_EXTABLE_UA(4b, __get_user_8_handle_exception)
++	_ASM_EXTABLE_UA(5b, __get_user_8_handle_exception)
+ #endif
+ 
+ /* __get_user */
+-	_ASM_EXTABLE(6b, __get_user_handle_exception)
+-	_ASM_EXTABLE(7b, __get_user_handle_exception)
+-	_ASM_EXTABLE(8b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(6b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(7b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(8b, __get_user_handle_exception)
+ #ifdef CONFIG_X86_64
+-	_ASM_EXTABLE(9b, __get_user_handle_exception)
++	_ASM_EXTABLE_UA(9b, __get_user_handle_exception)
+ #else
+-	_ASM_EXTABLE(9b, __get_user_8_handle_exception)
+-	_ASM_EXTABLE(10b, __get_user_8_handle_exception)
++	_ASM_EXTABLE_UA(9b, __get_user_8_handle_exception)
++	_ASM_EXTABLE_UA(10b, __get_user_8_handle_exception)
+ #endif
+diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
+index 2877f59341775..975c9c18263d2 100644
+--- a/arch/x86/lib/putuser.S
++++ b/arch/x86/lib/putuser.S
+@@ -133,15 +133,15 @@ SYM_CODE_START_LOCAL(__put_user_handle_exception)
+ 	RET
+ SYM_CODE_END(__put_user_handle_exception)
+ 
+-	_ASM_EXTABLE(1b, __put_user_handle_exception)
+-	_ASM_EXTABLE(2b, __put_user_handle_exception)
+-	_ASM_EXTABLE(3b, __put_user_handle_exception)
+-	_ASM_EXTABLE(4b, __put_user_handle_exception)
+-	_ASM_EXTABLE(5b, __put_user_handle_exception)
+-	_ASM_EXTABLE(6b, __put_user_handle_exception)
+-	_ASM_EXTABLE(7b, __put_user_handle_exception)
+-	_ASM_EXTABLE(9b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(1b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(2b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(3b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(4b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(5b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(6b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(7b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(9b, __put_user_handle_exception)
+ #ifdef CONFIG_X86_32
+-	_ASM_EXTABLE(8b, __put_user_handle_exception)
+-	_ASM_EXTABLE(10b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(8b, __put_user_handle_exception)
++	_ASM_EXTABLE_UA(10b, __put_user_handle_exception)
+ #endif
+diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
+index 785a00ce83c11..38bcecb0e457d 100644
+--- a/arch/xtensa/include/asm/cacheflush.h
++++ b/arch/xtensa/include/asm/cacheflush.h
+@@ -116,8 +116,9 @@ void flush_cache_page(struct vm_area_struct*,
+ #define flush_cache_mm(mm)		flush_cache_all()
+ #define flush_cache_dup_mm(mm)		flush_cache_mm(mm)
+ 
+-#define flush_cache_vmap(start,end)	flush_cache_all()
+-#define flush_cache_vunmap(start,end)	flush_cache_all()
++#define flush_cache_vmap(start,end)		flush_cache_all()
++#define flush_cache_vmap_early(start,end)	do { } while (0)
++#define flush_cache_vunmap(start,end)		flush_cache_all()
+ 
+ void flush_dcache_folio(struct folio *folio);
+ #define flush_dcache_folio flush_dcache_folio
+@@ -140,6 +141,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
+ #define flush_cache_dup_mm(mm)				do { } while (0)
+ 
+ #define flush_cache_vmap(start,end)			do { } while (0)
++#define flush_cache_vmap_early(start,end)		do { } while (0)
+ #define flush_cache_vunmap(start,end)			do { } while (0)
+ 
+ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 089fcb9cfce37..7ee8d85c2c68d 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1353,6 +1353,13 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now)
+ 
+ 	lockdep_assert_held(&iocg->waitq.lock);
+ 
++	/*
++	 * If the delay is set by another CPU, we may be in the past. No need to
++	 * change anything if so. This avoids decay calculation underflow.
++	 */
++	if (time_before64(now->now, iocg->delay_at))
++		return false;
++
+ 	/* calculate the current delay in effect - 1/2 every second */
+ 	tdelta = now->now - iocg->delay_at;
+ 	if (iocg->delay)
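
The iocost fix above guards against u64 underflow: another CPU can push delay_at into the future, and "now - delay_at" would then wrap to a huge value and blow up the half-life decay. A standalone demo of the wraparound-safe comparison; time_before64() is reproduced here with the same signed-difference trick the kernel uses in <linux/jiffies.h>:

#include <stdio.h>
#include <stdint.h>

/* Same idea as the kernel's time_after64()/time_before64(): compare via
 * a signed difference so the result stays correct across u64 wraparound. */
#define time_after64(a, b)  ((int64_t)((b) - (a)) < 0)
#define time_before64(a, b) time_after64(b, a)

int main(void)
{
    uint64_t delay_at = 1000;   /* pushed into the future by another CPU */
    uint64_t now = 900;

    if (time_before64(now, delay_at)) {
        printf("now is in the past; skip the decay update\n");
    } else {
        uint64_t tdelta = now - delay_at;
        printf("decay over %llu ns\n", (unsigned long long)tdelta);
    }

    /* Without the guard, the naive subtraction underflows: */
    printf("naive now - delay_at = %llu\n",
           (unsigned long long)(now - delay_at));
    return 0;
}
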
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index e327a0229dc17..e7f713cd70d3f 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -2930,6 +2930,8 @@ open_card_ubr0(struct idt77252_dev *card)
+ 	vc->scq = alloc_scq(card, vc->class);
+ 	if (!vc->scq) {
+ 		printk("%s: can't get SCQ.\n", card->name);
++		kfree(card->vcs[0]);
++		card->vcs[0] = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c b/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
+index 7958ac33e36ce..5a8061a307cda 100644
+--- a/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
++++ b/drivers/dma/fsl-dpaa2-qdma/dpaa2-qdma.c
+@@ -38,15 +38,17 @@ static int dpaa2_qdma_alloc_chan_resources(struct dma_chan *chan)
+ 	if (!dpaa2_chan->fd_pool)
+ 		goto err;
+ 
+-	dpaa2_chan->fl_pool = dma_pool_create("fl_pool", dev,
+-					      sizeof(struct dpaa2_fl_entry),
+-					      sizeof(struct dpaa2_fl_entry), 0);
++	dpaa2_chan->fl_pool =
++		dma_pool_create("fl_pool", dev,
++				 sizeof(struct dpaa2_fl_entry) * 3,
++				 sizeof(struct dpaa2_fl_entry), 0);
++
+ 	if (!dpaa2_chan->fl_pool)
+ 		goto err_fd;
+ 
+ 	dpaa2_chan->sdd_pool =
+ 		dma_pool_create("sdd_pool", dev,
+-				sizeof(struct dpaa2_qdma_sd_d),
++				sizeof(struct dpaa2_qdma_sd_d) * 2,
+ 				sizeof(struct dpaa2_qdma_sd_d), 0);
+ 	if (!dpaa2_chan->sdd_pool)
+ 		goto err_fl;
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index 47cb284680494..3a5595a1d4421 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -514,11 +514,11 @@ static struct fsl_qdma_queue
+ 			queue_temp = queue_head + i + (j * queue_num);
+ 
+ 			queue_temp->cq =
+-			dma_alloc_coherent(&pdev->dev,
+-					   sizeof(struct fsl_qdma_format) *
+-					   queue_size[i],
+-					   &queue_temp->bus_addr,
+-					   GFP_KERNEL);
++			dmam_alloc_coherent(&pdev->dev,
++					    sizeof(struct fsl_qdma_format) *
++					    queue_size[i],
++					    &queue_temp->bus_addr,
++					    GFP_KERNEL);
+ 			if (!queue_temp->cq)
+ 				return NULL;
+ 			queue_temp->block_base = fsl_qdma->block_base +
+@@ -563,11 +563,11 @@ static struct fsl_qdma_queue
+ 	/*
+ 	 * Buffer for queue command
+ 	 */
+-	status_head->cq = dma_alloc_coherent(&pdev->dev,
+-					     sizeof(struct fsl_qdma_format) *
+-					     status_size,
+-					     &status_head->bus_addr,
+-					     GFP_KERNEL);
++	status_head->cq = dmam_alloc_coherent(&pdev->dev,
++					      sizeof(struct fsl_qdma_format) *
++					      status_size,
++					      &status_head->bus_addr,
++					      GFP_KERNEL);
+ 	if (!status_head->cq) {
+ 		devm_kfree(&pdev->dev, status_head);
+ 		return NULL;
+@@ -1268,8 +1268,6 @@ static void fsl_qdma_cleanup_vchan(struct dma_device *dmadev)
+ 
+ static void fsl_qdma_remove(struct platform_device *pdev)
+ {
+-	int i;
+-	struct fsl_qdma_queue *status;
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct fsl_qdma_engine *fsl_qdma = platform_get_drvdata(pdev);
+ 
+@@ -1277,12 +1275,6 @@ static void fsl_qdma_remove(struct platform_device *pdev)
+ 	fsl_qdma_cleanup_vchan(&fsl_qdma->dma_dev);
+ 	of_dma_controller_free(np);
+ 	dma_async_device_unregister(&fsl_qdma->dma_dev);
+-
+-	for (i = 0; i < fsl_qdma->block_number; i++) {
+-		status = fsl_qdma->status[i];
+-		dma_free_coherent(&pdev->dev, sizeof(struct fsl_qdma_format) *
+-				status->n_cq, status->cq, status->bus_addr);
+-	}
+ }
+ 
+ static const struct of_device_id fsl_qdma_dt_ids[] = {
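
Switching to dmam_alloc_coherent(), the device-managed variant, hands the coherent buffers to devres, so they are released automatically when the device is detached and the manual free loop in fsl_qdma_remove() can go away. A toy userspace model of the devres idea under obvious assumptions: real devres tracks resources per struct device and releases them in reverse order on unbind, which this sketch only imitates:

#include <stdio.h>
#include <stdlib.h>

#define MAX_RES 16

struct toy_device {
    void *res[MAX_RES];
    int nr;
};

/* Toy managed allocation: ownership moves to the device, so no explicit
 * free is needed in the driver's remove path. */
static void *toy_managed_alloc(struct toy_device *dev, size_t size)
{
    void *p = calloc(1, size);

    if (!p || dev->nr == MAX_RES) {
        free(p);
        return NULL;
    }
    dev->res[dev->nr++] = p;
    return p;
}

/* Releases everything in reverse order, like devres_release_all(). */
static void toy_detach(struct toy_device *dev)
{
    while (dev->nr > 0)
        free(dev->res[--dev->nr]);
}

int main(void)
{
    struct toy_device dev = { .nr = 0 };

    toy_managed_alloc(&dev, 128);   /* e.g. a queue command buffer */
    toy_managed_alloc(&dev, 256);   /* e.g. a status queue buffer  */

    /* No per-buffer free in the "remove" path: */
    toy_detach(&dev);
    printf("all managed buffers released on detach\n");
    return 0;
}
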
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 30fd2f386f36a..037f1408e7983 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -3968,6 +3968,7 @@ static void udma_desc_pre_callback(struct virt_dma_chan *vc,
+ {
+ 	struct udma_chan *uc = to_udma_chan(&vc->chan);
+ 	struct udma_desc *d;
++	u8 status;
+ 
+ 	if (!vd)
+ 		return;
+@@ -3977,12 +3978,12 @@ static void udma_desc_pre_callback(struct virt_dma_chan *vc,
+ 	if (d->metadata_size)
+ 		udma_fetch_epib(uc, d);
+ 
+-	/* Provide residue information for the client */
+ 	if (result) {
+ 		void *desc_vaddr = udma_curr_cppi5_desc_vaddr(d, d->desc_idx);
+ 
+ 		if (cppi5_desc_get_type(desc_vaddr) ==
+ 		    CPPI5_INFO0_DESC_TYPE_VAL_HOST) {
++			/* Provide residue information for the client */
+ 			result->residue = d->residue -
+ 					  cppi5_hdesc_get_pktlen(desc_vaddr);
+ 			if (result->residue)
+@@ -3991,7 +3992,12 @@ static void udma_desc_pre_callback(struct virt_dma_chan *vc,
+ 				result->result = DMA_TRANS_NOERROR;
+ 		} else {
+ 			result->residue = 0;
+-			result->result = DMA_TRANS_NOERROR;
++			/* Propagate TR Response errors to the client */
++			status = d->hwdesc[0].tr_resp_base->status;
++			if (status)
++				result->result = DMA_TRANS_ABORTED;
++			else
++				result->result = DMA_TRANS_NOERROR;
+ 		}
+ 	}
+ }
+diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
+index 212687c30d79c..c04b82ea40f21 100644
+--- a/drivers/firmware/efi/libstub/efistub.h
++++ b/drivers/firmware/efi/libstub/efistub.h
+@@ -956,7 +956,8 @@ efi_status_t efi_get_random_bytes(unsigned long size, u8 *out);
+ 
+ efi_status_t efi_random_alloc(unsigned long size, unsigned long align,
+ 			      unsigned long *addr, unsigned long random_seed,
+-			      int memory_type, unsigned long alloc_limit);
++			      int memory_type, unsigned long alloc_min,
++			      unsigned long alloc_max);
+ 
+ efi_status_t efi_random_get_seed(void);
+ 
+diff --git a/drivers/firmware/efi/libstub/kaslr.c b/drivers/firmware/efi/libstub/kaslr.c
+index 62d63f7a2645b..1a9808012abd3 100644
+--- a/drivers/firmware/efi/libstub/kaslr.c
++++ b/drivers/firmware/efi/libstub/kaslr.c
+@@ -119,7 +119,7 @@ efi_status_t efi_kaslr_relocate_kernel(unsigned long *image_addr,
+ 		 */
+ 		status = efi_random_alloc(*reserve_size, min_kimg_align,
+ 					  reserve_addr, phys_seed,
+-					  EFI_LOADER_CODE, EFI_ALLOC_LIMIT);
++					  EFI_LOADER_CODE, 0, EFI_ALLOC_LIMIT);
+ 		if (status != EFI_SUCCESS)
+ 			efi_warn("efi_random_alloc() failed: 0x%lx\n", status);
+ 	} else {
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index 674a064b8f7ad..4e96a855fdf47 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -17,7 +17,7 @@
+ static unsigned long get_entry_num_slots(efi_memory_desc_t *md,
+ 					 unsigned long size,
+ 					 unsigned long align_shift,
+-					 u64 alloc_limit)
++					 u64 alloc_min, u64 alloc_max)
+ {
+ 	unsigned long align = 1UL << align_shift;
+ 	u64 first_slot, last_slot, region_end;
+@@ -30,11 +30,11 @@ static unsigned long get_entry_num_slots(efi_memory_desc_t *md,
+ 		return 0;
+ 
+ 	region_end = min(md->phys_addr + md->num_pages * EFI_PAGE_SIZE - 1,
+-			 alloc_limit);
++			 alloc_max);
+ 	if (region_end < size)
+ 		return 0;
+ 
+-	first_slot = round_up(md->phys_addr, align);
++	first_slot = round_up(max(md->phys_addr, alloc_min), align);
+ 	last_slot = round_down(region_end - size + 1, align);
+ 
+ 	if (first_slot > last_slot)
+@@ -56,7 +56,8 @@ efi_status_t efi_random_alloc(unsigned long size,
+ 			      unsigned long *addr,
+ 			      unsigned long random_seed,
+ 			      int memory_type,
+-			      unsigned long alloc_limit)
++			      unsigned long alloc_min,
++			      unsigned long alloc_max)
+ {
+ 	unsigned long total_slots = 0, target_slot;
+ 	unsigned long total_mirrored_slots = 0;
+@@ -78,7 +79,8 @@ efi_status_t efi_random_alloc(unsigned long size,
+ 		efi_memory_desc_t *md = (void *)map->map + map_offset;
+ 		unsigned long slots;
+ 
+-		slots = get_entry_num_slots(md, size, ilog2(align), alloc_limit);
++		slots = get_entry_num_slots(md, size, ilog2(align), alloc_min,
++					    alloc_max);
+ 		MD_NUM_SLOTS(md) = slots;
+ 		total_slots += slots;
+ 		if (md->attribute & EFI_MEMORY_MORE_RELIABLE)
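
With the extra alloc_min bound, the slot math now clamps both ends of each memory descriptor before counting KASLR candidate positions. A standalone rendering of that arithmetic with the EFI types replaced by plain integers; round_up()/round_down()/min()/max() are local stand-ins for the kernel helpers:

#include <stdio.h>
#include <stdint.h>

#define round_up(x, a)   ((((x) + (a) - 1) / (a)) * (a))
#define round_down(x, a) (((x) / (a)) * (a))
#define min(a, b)        ((a) < (b) ? (a) : (b))
#define max(a, b)        ((a) > (b) ? (a) : (b))

/* Mirrors get_entry_num_slots(): count the aligned positions where an
 * allocation of `size` fits inside the region, clipped to the caller's
 * [alloc_min, alloc_max] window. */
static unsigned long num_slots(uint64_t region_start, uint64_t region_len,
                               uint64_t size, uint64_t align,
                               uint64_t alloc_min, uint64_t alloc_max)
{
    uint64_t region_end = min(region_start + region_len - 1, alloc_max);
    uint64_t first, last;

    if (region_end < size)
        return 0;

    first = round_up(max(region_start, alloc_min), align);
    last = round_down(region_end - size + 1, align);

    if (first > last)
        return 0;

    return (last - first) / align + 1;
}

int main(void)
{
    /* 16 MiB region at 1 MiB; place a 2 MiB image, 2 MiB aligned, but no
     * lower than 4 MiB (think LOAD_PHYSICAL_ADDR as the minimum). */
    printf("slots = %lu\n",
           (unsigned long)num_slots(0x100000, 0x1000000, 0x200000,
                                    0x200000, 0x400000, ~0ULL));
    return 0;
}
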
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index 0d510c9a06a45..99429bc4b0c7e 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -223,8 +223,8 @@ static void retrieve_apple_device_properties(struct boot_params *boot_params)
+ 	}
+ }
+ 
+-void efi_adjust_memory_range_protection(unsigned long start,
+-					unsigned long size)
++efi_status_t efi_adjust_memory_range_protection(unsigned long start,
++						unsigned long size)
+ {
+ 	efi_status_t status;
+ 	efi_gcd_memory_space_desc_t desc;
+@@ -236,13 +236,17 @@ void efi_adjust_memory_range_protection(unsigned long start,
+ 	rounded_end = roundup(start + size, EFI_PAGE_SIZE);
+ 
+ 	if (memattr != NULL) {
+-		efi_call_proto(memattr, clear_memory_attributes, rounded_start,
+-			       rounded_end - rounded_start, EFI_MEMORY_XP);
+-		return;
++		status = efi_call_proto(memattr, clear_memory_attributes,
++					rounded_start,
++					rounded_end - rounded_start,
++					EFI_MEMORY_XP);
++		if (status != EFI_SUCCESS)
++			efi_warn("Failed to clear EFI_MEMORY_XP attribute\n");
++		return status;
+ 	}
+ 
+ 	if (efi_dxe_table == NULL)
+-		return;
++		return EFI_SUCCESS;
+ 
+ 	/*
+ 	 * Don't modify memory region attributes, they are
+@@ -255,7 +259,7 @@ void efi_adjust_memory_range_protection(unsigned long start,
+ 		status = efi_dxe_call(get_memory_space_descriptor, start, &desc);
+ 
+ 		if (status != EFI_SUCCESS)
+-			return;
++			break;
+ 
+ 		next = desc.base_address + desc.length;
+ 
+@@ -280,8 +284,10 @@ void efi_adjust_memory_range_protection(unsigned long start,
+ 				 unprotect_start,
+ 				 unprotect_start + unprotect_size,
+ 				 status);
++			break;
+ 		}
+ 	}
++	return EFI_SUCCESS;
+ }
+ 
+ static void setup_unaccepted_memory(void)
+@@ -793,6 +799,7 @@ static efi_status_t efi_decompress_kernel(unsigned long *kernel_entry)
+ 
+ 	status = efi_random_alloc(alloc_size, CONFIG_PHYSICAL_ALIGN, &addr,
+ 				  seed[0], EFI_LOADER_CODE,
++				  LOAD_PHYSICAL_ADDR,
+ 				  EFI_X86_KERNEL_ALLOC_LIMIT);
+ 	if (status != EFI_SUCCESS)
+ 		return status;
+@@ -805,9 +812,7 @@ static efi_status_t efi_decompress_kernel(unsigned long *kernel_entry)
+ 
+ 	*kernel_entry = addr + entry;
+ 
+-	efi_adjust_memory_range_protection(addr, kernel_total_size);
+-
+-	return EFI_SUCCESS;
++	return efi_adjust_memory_range_protection(addr, kernel_total_size);
+ }
+ 
+ static void __noreturn enter_kernel(unsigned long kernel_addr,
+diff --git a/drivers/firmware/efi/libstub/x86-stub.h b/drivers/firmware/efi/libstub/x86-stub.h
+index 37c5a36b9d8cf..1c20e99a64944 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.h
++++ b/drivers/firmware/efi/libstub/x86-stub.h
+@@ -5,8 +5,8 @@
+ extern void trampoline_32bit_src(void *, bool);
+ extern const u16 trampoline_ljmp_imm_offset;
+ 
+-void efi_adjust_memory_range_protection(unsigned long start,
+-					unsigned long size);
++efi_status_t efi_adjust_memory_range_protection(unsigned long start,
++						unsigned long size);
+ 
+ #ifdef CONFIG_X86_64
+ efi_status_t efi_setup_5level_paging(void);
+diff --git a/drivers/firmware/efi/libstub/zboot.c b/drivers/firmware/efi/libstub/zboot.c
+index bdb17eac0cb40..1ceace9567586 100644
+--- a/drivers/firmware/efi/libstub/zboot.c
++++ b/drivers/firmware/efi/libstub/zboot.c
+@@ -119,7 +119,7 @@ efi_zboot_entry(efi_handle_t handle, efi_system_table_t *systab)
+ 		}
+ 
+ 		status = efi_random_alloc(alloc_size, min_kimg_align, &image_base,
+-					  seed, EFI_LOADER_CODE, EFI_ALLOC_LIMIT);
++					  seed, EFI_LOADER_CODE, 0, EFI_ALLOC_LIMIT);
+ 		if (status != EFI_SUCCESS) {
+ 			efi_err("Failed to allocate memory\n");
+ 			goto free_cmdline;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 9257c9af3fee6..5fe1df95dc389 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4051,23 +4051,13 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 				}
+ 			}
+ 		} else {
+-			switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
+-			case IP_VERSION(13, 0, 0):
+-			case IP_VERSION(13, 0, 7):
+-			case IP_VERSION(13, 0, 10):
+-				r = psp_gpu_reset(adev);
+-				break;
+-			default:
+-				tmp = amdgpu_reset_method;
+-				/* It should do a default reset when loading or reloading the driver,
+-				 * regardless of the module parameter reset_method.
+-				 */
+-				amdgpu_reset_method = AMD_RESET_METHOD_NONE;
+-				r = amdgpu_asic_reset(adev);
+-				amdgpu_reset_method = tmp;
+-				break;
+-			}
+-
++			tmp = amdgpu_reset_method;
++			/* It should do a default reset when loading or reloading the driver,
++			 * regardless of the module parameter reset_method.
++			 */
++			amdgpu_reset_method = AMD_RESET_METHOD_NONE;
++			r = amdgpu_asic_reset(adev);
++			amdgpu_reset_method = tmp;
+ 			if (r) {
+ 				dev_err(adev->dev, "asic reset on init failed\n");
+ 				goto failed;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+index f3b75f283aa25..ce04caf35d802 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+@@ -999,7 +999,7 @@ static struct stream_encoder *dcn301_stream_encoder_create(enum engine_id eng_id
+ 	vpg = dcn301_vpg_create(ctx, vpg_inst);
+ 	afmt = dcn301_afmt_create(ctx, afmt_inst);
+ 
+-	if (!enc1 || !vpg || !afmt) {
++	if (!enc1 || !vpg || !afmt || eng_id >= ARRAY_SIZE(stream_enc_regs)) {
+ 		kfree(enc1);
+ 		kfree(vpg);
+ 		kfree(afmt);
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+index 8e88dcaf88f5b..5c7f380a84f91 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+@@ -206,28 +206,32 @@ void dcn21_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx)
+ void dcn21_set_pipe(struct pipe_ctx *pipe_ctx)
+ {
+ 	struct abm *abm = pipe_ctx->stream_res.abm;
+-	uint32_t otg_inst = pipe_ctx->stream_res.tg->inst;
++	struct timing_generator *tg = pipe_ctx->stream_res.tg;
+ 	struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl;
+ 	struct dmcu *dmcu = pipe_ctx->stream->ctx->dc->res_pool->dmcu;
++	uint32_t otg_inst;
++
++	if (!abm && !tg && !panel_cntl)
++		return;
++
++	otg_inst = tg->inst;
+ 
+ 	if (dmcu) {
+ 		dce110_set_pipe(pipe_ctx);
+ 		return;
+ 	}
+ 
+-	if (abm && panel_cntl) {
+-		if (abm->funcs && abm->funcs->set_pipe_ex) {
+-			abm->funcs->set_pipe_ex(abm,
++	if (abm->funcs && abm->funcs->set_pipe_ex) {
++		abm->funcs->set_pipe_ex(abm,
+ 					otg_inst,
+ 					SET_ABM_PIPE_NORMAL,
+ 					panel_cntl->inst,
+ 					panel_cntl->pwrseq_inst);
+-		} else {
+-				dmub_abm_set_pipe(abm, otg_inst,
+-						SET_ABM_PIPE_NORMAL,
+-						panel_cntl->inst,
+-						panel_cntl->pwrseq_inst);
+-		}
++	} else {
++		dmub_abm_set_pipe(abm, otg_inst,
++				  SET_ABM_PIPE_NORMAL,
++				  panel_cntl->inst,
++				  panel_cntl->pwrseq_inst);
+ 	}
+ }
+ 
+@@ -237,34 +241,35 @@ bool dcn21_set_backlight_level(struct pipe_ctx *pipe_ctx,
+ {
+ 	struct dc_context *dc = pipe_ctx->stream->ctx;
+ 	struct abm *abm = pipe_ctx->stream_res.abm;
++	struct timing_generator *tg = pipe_ctx->stream_res.tg;
+ 	struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl;
++	uint32_t otg_inst;
++
++	if (!abm && !tg && !panel_cntl)
++		return false;
++
++	otg_inst = tg->inst;
+ 
+ 	if (dc->dc->res_pool->dmcu) {
+ 		dce110_set_backlight_level(pipe_ctx, backlight_pwm_u16_16, frame_ramp);
+ 		return true;
+ 	}
+ 
+-	if (abm != NULL) {
+-		uint32_t otg_inst = pipe_ctx->stream_res.tg->inst;
+-
+-		if (abm && panel_cntl) {
+-			if (abm->funcs && abm->funcs->set_pipe_ex) {
+-				abm->funcs->set_pipe_ex(abm,
+-						otg_inst,
+-						SET_ABM_PIPE_NORMAL,
+-						panel_cntl->inst,
+-						panel_cntl->pwrseq_inst);
+-			} else {
+-					dmub_abm_set_pipe(abm,
+-							otg_inst,
+-							SET_ABM_PIPE_NORMAL,
+-							panel_cntl->inst,
+-							panel_cntl->pwrseq_inst);
+-			}
+-		}
++	if (abm->funcs && abm->funcs->set_pipe_ex) {
++		abm->funcs->set_pipe_ex(abm,
++					otg_inst,
++					SET_ABM_PIPE_NORMAL,
++					panel_cntl->inst,
++					panel_cntl->pwrseq_inst);
++	} else {
++		dmub_abm_set_pipe(abm,
++				  otg_inst,
++				  SET_ABM_PIPE_NORMAL,
++				  panel_cntl->inst,
++				  panel_cntl->pwrseq_inst);
+ 	}
+ 
+-	if (abm && abm->funcs && abm->funcs->set_backlight_level_pwm)
++	if (abm->funcs && abm->funcs->set_backlight_level_pwm)
+ 		abm->funcs->set_backlight_level_pwm(abm, backlight_pwm_u16_16,
+ 			frame_ramp, 0, panel_cntl->inst);
+ 	else
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index cd252e2b584c6..60dce148b2d7b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -734,7 +734,7 @@ static int smu_early_init(void *handle)
+ 	smu->adev = adev;
+ 	smu->pm_enabled = !!amdgpu_dpm;
+ 	smu->is_apu = false;
+-	smu->smu_baco.state = SMU_BACO_STATE_NONE;
++	smu->smu_baco.state = SMU_BACO_STATE_EXIT;
+ 	smu->smu_baco.platform_support = false;
+ 	smu->user_dpm_profile.fan_mode = -1;
+ 
+@@ -1746,31 +1746,10 @@ static int smu_smc_hw_cleanup(struct smu_context *smu)
+ 	return 0;
+ }
+ 
+-static int smu_reset_mp1_state(struct smu_context *smu)
+-{
+-	struct amdgpu_device *adev = smu->adev;
+-	int ret = 0;
+-
+-	if ((!adev->in_runpm) && (!adev->in_suspend) &&
+-		(!amdgpu_in_reset(adev)))
+-		switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
+-		case IP_VERSION(13, 0, 0):
+-		case IP_VERSION(13, 0, 7):
+-		case IP_VERSION(13, 0, 10):
+-			ret = smu_set_mp1_state(smu, PP_MP1_STATE_UNLOAD);
+-			break;
+-		default:
+-			break;
+-		}
+-
+-	return ret;
+-}
+-
+ static int smu_hw_fini(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 	struct smu_context *smu = adev->powerplay.pp_handle;
+-	int ret;
+ 
+ 	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
+ 		return 0;
+@@ -1788,15 +1767,7 @@ static int smu_hw_fini(void *handle)
+ 
+ 	adev->pm.dpm_enabled = false;
+ 
+-	ret = smu_smc_hw_cleanup(smu);
+-	if (ret)
+-		return ret;
+-
+-	ret = smu_reset_mp1_state(smu);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
++	return smu_smc_hw_cleanup(smu);
+ }
+ 
+ static void smu_late_fini(void *handle)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+index f8b2e6cc25688..e8329d157bfe2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+@@ -419,7 +419,6 @@ enum smu_reset_mode {
+ enum smu_baco_state {
+ 	SMU_BACO_STATE_ENTER = 0,
+ 	SMU_BACO_STATE_EXIT,
+-	SMU_BACO_STATE_NONE,
+ };
+ 
+ struct smu_baco_context {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 2a76064d909fb..5625a6e5702a3 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2773,13 +2773,7 @@ static int smu_v13_0_0_set_mp1_state(struct smu_context *smu,
+ 
+ 	switch (mp1_state) {
+ 	case PP_MP1_STATE_UNLOAD:
+-		ret = smu_cmn_send_smc_msg_with_param(smu,
+-											  SMU_MSG_PrepareMp1ForUnload,
+-											  0x55, NULL);
+-
+-		if (!ret && smu->smu_baco.state == SMU_BACO_STATE_EXIT)
+-			ret = smu_v13_0_disable_pmfw_state(smu);
+-
++		ret = smu_cmn_set_mp1_state(smu, mp1_state);
+ 		break;
+ 	default:
+ 		/* Ignore others */
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index d380a53e8f772..bc5891c3f6487 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2500,13 +2500,7 @@ static int smu_v13_0_7_set_mp1_state(struct smu_context *smu,
+ 
+ 	switch (mp1_state) {
+ 	case PP_MP1_STATE_UNLOAD:
+-		ret = smu_cmn_send_smc_msg_with_param(smu,
+-											  SMU_MSG_PrepareMp1ForUnload,
+-											  0x55, NULL);
+-
+-		if (!ret && smu->smu_baco.state == SMU_BACO_STATE_EXIT)
+-			ret = smu_v13_0_disable_pmfw_state(smu);
+-
++		ret = smu_cmn_set_mp1_state(smu, mp1_state);
+ 		break;
+ 	default:
+ 		/* Ignore others */
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index a9f7fa9b90bda..d30f8814d9b10 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -2850,8 +2850,7 @@ static int handle_mmio(struct intel_gvt_mmio_table_iter *iter, u32 offset,
+ 	for (i = start; i < end; i += 4) {
+ 		p = intel_gvt_find_mmio_info(gvt, i);
+ 		if (p) {
+-			WARN(1, "dup mmio definition offset %x\n",
+-				info->offset);
++			WARN(1, "dup mmio definition offset %x\n", i);
+ 
+ 			/* We return -EEXIST here to make GVT-g load fail.
+ 			 * So duplicated MMIO can be found as soon as
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index b9f0093389a82..cf0d44b6e7a30 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -2075,7 +2075,7 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc)
+ 	}
+ 
+ 	/* reset the merge 3D HW block */
+-	if (phys_enc->hw_pp->merge_3d) {
++	if (phys_enc->hw_pp && phys_enc->hw_pp->merge_3d) {
+ 		phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d,
+ 				BLEND_3D_NONE);
+ 		if (phys_enc->hw_ctl->ops.update_pending_flush_merge_3d)
+@@ -2097,7 +2097,7 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc)
+ 	if (phys_enc->hw_wb)
+ 		intf_cfg.wb = phys_enc->hw_wb->idx;
+ 
+-	if (phys_enc->hw_pp->merge_3d)
++	if (phys_enc->hw_pp && phys_enc->hw_pp->merge_3d)
+ 		intf_cfg.merge_3d = phys_enc->hw_pp->merge_3d->idx;
+ 
+ 	if (ctl->ops.reset_intf_cfg)
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 77a8d9366ed7b..fb588fde298a2 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -135,11 +135,6 @@ static void dp_ctrl_config_ctrl(struct dp_ctrl_private *ctrl)
+ 	tbd = dp_link_get_test_bits_depth(ctrl->link,
+ 			ctrl->panel->dp_mode.bpp);
+ 
+-	if (tbd == DP_TEST_BIT_DEPTH_UNKNOWN) {
+-		pr_debug("BIT_DEPTH not set. Configure default\n");
+-		tbd = DP_TEST_BIT_DEPTH_8;
+-	}
+-
+ 	config |= tbd << DP_CONFIGURATION_CTRL_BPC_SHIFT;
+ 
+ 	/* Num of Lanes */
+diff --git a/drivers/gpu/drm/msm/dp/dp_link.c b/drivers/gpu/drm/msm/dp/dp_link.c
+index 98427d45e9a7e..49dfac1fd1ef2 100644
+--- a/drivers/gpu/drm/msm/dp/dp_link.c
++++ b/drivers/gpu/drm/msm/dp/dp_link.c
+@@ -7,6 +7,7 @@
+ 
+ #include <drm/drm_print.h>
+ 
++#include "dp_reg.h"
+ #include "dp_link.h"
+ #include "dp_panel.h"
+ 
+@@ -1082,7 +1083,7 @@ int dp_link_process_request(struct dp_link *dp_link)
+ 
+ int dp_link_get_colorimetry_config(struct dp_link *dp_link)
+ {
+-	u32 cc;
++	u32 cc = DP_MISC0_COLORIMERY_CFG_LEGACY_RGB;
+ 	struct dp_link_private *link;
+ 
+ 	if (!dp_link) {
+@@ -1096,10 +1097,11 @@ int dp_link_get_colorimetry_config(struct dp_link *dp_link)
+ 	 * Unless a video pattern CTS test is ongoing, use RGB_VESA
+ 	 * Only RGB_VESA and RGB_CEA supported for now
+ 	 */
+-	if (dp_link_is_video_pattern_requested(link))
+-		cc = link->dp_link.test_video.test_dyn_range;
+-	else
+-		cc = DP_TEST_DYNAMIC_RANGE_VESA;
++	if (dp_link_is_video_pattern_requested(link)) {
++		if (link->dp_link.test_video.test_dyn_range &
++					DP_TEST_DYNAMIC_RANGE_CEA)
++			cc = DP_MISC0_COLORIMERY_CFG_CEA_RGB;
++	}
+ 
+ 	return cc;
+ }
+@@ -1179,6 +1181,9 @@ void dp_link_reset_phy_params_vx_px(struct dp_link *dp_link)
+ u32 dp_link_get_test_bits_depth(struct dp_link *dp_link, u32 bpp)
+ {
+ 	u32 tbd;
++	struct dp_link_private *link;
++
++	link = container_of(dp_link, struct dp_link_private, dp_link);
+ 
+ 	/*
+ 	 * Few simplistic rules and assumptions made here:
+@@ -1196,12 +1201,13 @@ u32 dp_link_get_test_bits_depth(struct dp_link *dp_link, u32 bpp)
+ 		tbd = DP_TEST_BIT_DEPTH_10;
+ 		break;
+ 	default:
+-		tbd = DP_TEST_BIT_DEPTH_UNKNOWN;
++		drm_dbg_dp(link->drm_dev, "bpp=%d not supported, use bpc=8\n",
++			   bpp);
++		tbd = DP_TEST_BIT_DEPTH_8;
+ 		break;
+ 	}
+ 
+-	if (tbd != DP_TEST_BIT_DEPTH_UNKNOWN)
+-		tbd = (tbd >> DP_TEST_BIT_DEPTH_SHIFT);
++	tbd = (tbd >> DP_TEST_BIT_DEPTH_SHIFT);
+ 
+ 	return tbd;
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_reg.h b/drivers/gpu/drm/msm/dp/dp_reg.h
+index ea85a691e72b5..78785ed4b40c4 100644
+--- a/drivers/gpu/drm/msm/dp/dp_reg.h
++++ b/drivers/gpu/drm/msm/dp/dp_reg.h
+@@ -143,6 +143,9 @@
+ #define DP_MISC0_COLORIMETRY_CFG_SHIFT		(0x00000001)
+ #define DP_MISC0_TEST_BITS_DEPTH_SHIFT		(0x00000005)
+ 
++#define DP_MISC0_COLORIMERY_CFG_LEGACY_RGB	(0)
++#define DP_MISC0_COLORIMERY_CFG_CEA_RGB		(0x04)
++
+ #define REG_DP_VALID_BOUNDARY			(0x00000030)
+ #define REG_DP_VALID_BOUNDARY_2			(0x00000034)
+ 
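
The two msm/dp fixes above tighten defaults: colorimetry now defaults to legacy RGB unless a CEA-range test pattern is requested, and an unsupported bpp no longer yields DP_TEST_BIT_DEPTH_UNKNOWN but logs and falls back to 8 bpc. A reduced sketch of the bit-depth mapping with the register encodings replaced by plain bpc values; the 18 -> 6 and 24 -> 8 cases are assumed from the usual DisplayPort mapping, since only the 30 -> 10 case and the default are visible in the hunk:

#include <stdio.h>

/* Sketch of the bpp -> bits-per-component mapping after the fix:
 * anything unexpected falls back to 8 bpc instead of "unknown". */
static int test_bits_depth(int bpp)
{
    switch (bpp) {
    case 18:
        return 6;
    case 24:
        return 8;
    case 30:
        return 10;
    default:
        fprintf(stderr, "bpp=%d not supported, use bpc=8\n", bpp);
        return 8;
    }
}

int main(void)
{
    int bpps[] = { 18, 24, 30, 36 };

    for (unsigned i = 0; i < sizeof(bpps) / sizeof(bpps[0]); i++)
        printf("bpp %d -> %d bpc\n", bpps[i], test_bits_depth(bpps[i]));
    return 0;
}
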
+diff --git a/drivers/hwmon/aspeed-pwm-tacho.c b/drivers/hwmon/aspeed-pwm-tacho.c
+index 997df4b405098..b2ae2176f11fe 100644
+--- a/drivers/hwmon/aspeed-pwm-tacho.c
++++ b/drivers/hwmon/aspeed-pwm-tacho.c
+@@ -193,6 +193,8 @@ struct aspeed_pwm_tacho_data {
+ 	u8 fan_tach_ch_source[16];
+ 	struct aspeed_cooling_device *cdev[8];
+ 	const struct attribute_group *groups[3];
++	/* protects access to shared ASPEED_PTCR_RESULT */
++	struct mutex tach_lock;
+ };
+ 
+ enum type { TYPEM, TYPEN, TYPEO };
+@@ -527,6 +529,8 @@ static int aspeed_get_fan_tach_ch_rpm(struct aspeed_pwm_tacho_data *priv,
+ 	u8 fan_tach_ch_source, type, mode, both;
+ 	int ret;
+ 
++	mutex_lock(&priv->tach_lock);
++
+ 	regmap_write(priv->regmap, ASPEED_PTCR_TRIGGER, 0);
+ 	regmap_write(priv->regmap, ASPEED_PTCR_TRIGGER, 0x1 << fan_tach_ch);
+ 
+@@ -544,6 +548,8 @@ static int aspeed_get_fan_tach_ch_rpm(struct aspeed_pwm_tacho_data *priv,
+ 		ASPEED_RPM_STATUS_SLEEP_USEC,
+ 		usec);
+ 
++	mutex_unlock(&priv->tach_lock);
++
+ 	/* return -ETIMEDOUT if we didn't get an answer. */
+ 	if (ret)
+ 		return ret;
+@@ -903,6 +909,7 @@ static int aspeed_pwm_tacho_probe(struct platform_device *pdev)
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
++	mutex_init(&priv->tach_lock);
+ 	priv->regmap = devm_regmap_init(dev, NULL, (__force void *)regs,
+ 			&aspeed_pwm_tacho_regmap_config);
+ 	if (IS_ERR(priv->regmap))
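
The new tach_lock serializes the trigger/poll/read sequence on the shared ASPEED_PTCR_RESULT register, so two concurrent sysfs readers cannot interleave and read each other's measurement. A minimal pthread sketch of the same pattern, with the regmap and the hardware replaced by plain variables:

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the shared trigger/result registers. */
static int trigger_reg;
static int result_reg;
static pthread_mutex_t tach_lock = PTHREAD_MUTEX_INITIALIZER;

/* Trigger a measurement for one channel and read back the shared result.
 * Without the lock, a second reader could retrigger the hardware between
 * our trigger and our read, handing us its channel's value. */
static int read_fan_rpm(int channel)
{
    int rpm;

    pthread_mutex_lock(&tach_lock);
    trigger_reg = 1 << channel;     /* start measurement */
    result_reg = 1000 + channel;    /* pretend the hardware finished */
    rpm = result_reg;
    pthread_mutex_unlock(&tach_lock);

    return rpm;
}

static void *reader(void *arg)
{
    int ch = *(int *)arg;

    printf("fan%d: %d rpm\n", ch, read_fan_rpm(ch));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int c0 = 0, c1 = 1;

    pthread_create(&t1, NULL, reader, &c0);
    pthread_create(&t2, NULL, reader, &c1);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

(Build with -lpthread.)
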
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index ba82d1e79c131..95f4c0b00b2d8 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -419,7 +419,7 @@ static ssize_t show_temp(struct device *dev,
+ }
+ 
+ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
+-			     int attr_no)
++			     int index)
+ {
+ 	int i;
+ 	static ssize_t (*const rd_ptr[TOTAL_ATTRS]) (struct device *dev,
+@@ -431,13 +431,20 @@ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
+ 	};
+ 
+ 	for (i = 0; i < tdata->attr_size; i++) {
++		/*
++		 * We map the attr number to core id of the CPU
++		 * The attr number is always core id + 2
++		 * The Pkgtemp will always show up as temp1_*, if available
++		 */
++		int attr_no = tdata->is_pkg_data ? 1 : tdata->cpu_core_id + 2;
++
+ 		snprintf(tdata->attr_name[i], CORETEMP_NAME_LENGTH,
+ 			 "temp%d_%s", attr_no, suffixes[i]);
+ 		sysfs_attr_init(&tdata->sd_attrs[i].dev_attr.attr);
+ 		tdata->sd_attrs[i].dev_attr.attr.name = tdata->attr_name[i];
+ 		tdata->sd_attrs[i].dev_attr.attr.mode = 0444;
+ 		tdata->sd_attrs[i].dev_attr.show = rd_ptr[i];
+-		tdata->sd_attrs[i].index = attr_no;
++		tdata->sd_attrs[i].index = index;
+ 		tdata->attrs[i] = &tdata->sd_attrs[i].dev_attr.attr;
+ 	}
+ 	tdata->attr_group.attrs = tdata->attrs;
+@@ -495,30 +502,25 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
+ 	struct platform_data *pdata = platform_get_drvdata(pdev);
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+ 	u32 eax, edx;
+-	int err, index, attr_no;
++	int err, index;
+ 
+ 	if (!housekeeping_cpu(cpu, HK_TYPE_MISC))
+ 		return 0;
+ 
+ 	/*
+-	 * Find attr number for sysfs:
+-	 * We map the attr number to core id of the CPU
+-	 * The attr number is always core id + 2
+-	 * The Pkgtemp will always show up as temp1_*, if available
++	 * Get the index of tdata in pdata->core_data[]
++	 * tdata for package: pdata->core_data[1]
++	 * tdata for core: pdata->core_data[2] .. pdata->core_data[NUM_REAL_CORES + 1]
+ 	 */
+ 	if (pkg_flag) {
+-		attr_no = PKG_SYSFS_ATTR_NO;
++		index = PKG_SYSFS_ATTR_NO;
+ 	} else {
+-		index = ida_alloc(&pdata->ida, GFP_KERNEL);
++		index = ida_alloc_max(&pdata->ida, NUM_REAL_CORES - 1, GFP_KERNEL);
+ 		if (index < 0)
+ 			return index;
+-		pdata->cpu_map[index] = topology_core_id(cpu);
+-		attr_no = index + BASE_SYSFS_ATTR_NO;
+-	}
+ 
+-	if (attr_no > MAX_CORE_DATA - 1) {
+-		err = -ERANGE;
+-		goto ida_free;
++		pdata->cpu_map[index] = topology_core_id(cpu);
++		index += BASE_SYSFS_ATTR_NO;
+ 	}
+ 
+ 	tdata = init_temp_data(cpu, pkg_flag);
+@@ -544,20 +546,20 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
+ 		if (get_ttarget(tdata, &pdev->dev) >= 0)
+ 			tdata->attr_size++;
+ 
+-	pdata->core_data[attr_no] = tdata;
++	pdata->core_data[index] = tdata;
+ 
+ 	/* Create sysfs interfaces */
+-	err = create_core_attrs(tdata, pdata->hwmon_dev, attr_no);
++	err = create_core_attrs(tdata, pdata->hwmon_dev, index);
+ 	if (err)
+ 		goto exit_free;
+ 
+ 	return 0;
+ exit_free:
+-	pdata->core_data[attr_no] = NULL;
++	pdata->core_data[index] = NULL;
+ 	kfree(tdata);
+ ida_free:
+ 	if (!pkg_flag)
+-		ida_free(&pdata->ida, index);
++		ida_free(&pdata->ida, index - BASE_SYSFS_ATTR_NO);
+ 	return err;
+ }
+ 
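
The coretemp change makes ida_alloc_max() enforce the NUM_REAL_CORES bound at allocation time, so the separate "attr_no > MAX_CORE_DATA - 1" range check and its error path can go; the sysfs attr number is now derived directly from the core id, while the IDA only tracks the core_data[] slot. A toy bitmap model of the bounded-allocator contract; the real IDA is a radix-tree structure, this only shows the semantics:

#include <stdio.h>

#define NUM_REAL_CORES 4    /* tiny bound for the demo */

static unsigned char used[NUM_REAL_CORES];

/* Model of ida_alloc_max(&ida, max, ...): smallest free ID in [0, max],
 * or a negative error when the range is exhausted. */
static int toy_ida_alloc_max(int max)
{
    for (int id = 0; id <= max && id < NUM_REAL_CORES; id++) {
        if (!used[id]) {
            used[id] = 1;
            return id;
        }
    }
    return -1;  /* the kernel returns -ENOSPC here */
}

static void toy_ida_free(int id)
{
    used[id] = 0;
}

int main(void)
{
    for (int i = 0; i < NUM_REAL_CORES + 1; i++) {
        int id = toy_ida_alloc_max(NUM_REAL_CORES - 1);

        if (id < 0) {
            printf("allocation %d: out of IDs\n", i);
            break;
        }
        printf("allocation %d: id %d\n", i, id);
    }
    toy_ida_free(0);
    printf("after free: id %d\n", toy_ida_alloc_max(NUM_REAL_CORES - 1));
    return 0;
}
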
+diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
+index 13ef6284223da..c229bd6b3f7f2 100644
+--- a/drivers/input/keyboard/atkbd.c
++++ b/drivers/input/keyboard/atkbd.c
+@@ -811,7 +811,6 @@ static int atkbd_probe(struct atkbd *atkbd)
+ {
+ 	struct ps2dev *ps2dev = &atkbd->ps2dev;
+ 	unsigned char param[2];
+-	bool skip_getid;
+ 
+ /*
+  * Some systems, where the bit-twiddling when testing the io-lines of the
+@@ -825,6 +824,11 @@ static int atkbd_probe(struct atkbd *atkbd)
+ 				 "keyboard reset failed on %s\n",
+ 				 ps2dev->serio->phys);
+ 
++	if (atkbd_skip_getid(atkbd)) {
++		atkbd->id = 0xab83;
++		return 0;
++	}
++
+ /*
+  * Then we check the keyboard ID. We should get 0xab83 under normal conditions.
+  * Some keyboards report different values, but the first byte is always 0xab or
+@@ -833,18 +837,17 @@ static int atkbd_probe(struct atkbd *atkbd)
+  */
+ 
+ 	param[0] = param[1] = 0xa5;	/* initialize with invalid values */
+-	skip_getid = atkbd_skip_getid(atkbd);
+-	if (skip_getid || ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
++	if (ps2_command(ps2dev, param, ATKBD_CMD_GETID)) {
+ 
+ /*
+- * If the get ID command was skipped or failed, we check if we can at least set
++ * If the get ID command failed, we check if we can at least set
+  * the LEDs on the keyboard. This should work on every keyboard out there.
+  * It also turns the LEDs off, which we want anyway.
+  */
+ 		param[0] = 0;
+ 		if (ps2_command(ps2dev, param, ATKBD_CMD_SETLEDS))
+ 			return -1;
+-		atkbd->id = skip_getid ? 0xab83 : 0xabba;
++		atkbd->id = 0xabba;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index b585b1dab870e..cd45a65e17f2c 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1208,6 +1208,12 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 					SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
+ 					SERIO_QUIRK_NOPNP)
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "NS5x_7xPU"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"),
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c b/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c
+index abd4832e4ed21..5acb3e16b5677 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ptp.c
+@@ -993,7 +993,7 @@ int aq_ptp_ring_alloc(struct aq_nic_s *aq_nic)
+ 	return 0;
+ 
+ err_exit_hwts_rx:
+-	aq_ring_free(&aq_ptp->hwts_rx);
++	aq_ring_hwts_rx_free(&aq_ptp->hwts_rx);
+ err_exit_ptp_rx:
+ 	aq_ring_free(&aq_ptp->ptp_rx);
+ err_exit_ptp_tx:
+@@ -1011,7 +1011,7 @@ void aq_ptp_ring_free(struct aq_nic_s *aq_nic)
+ 
+ 	aq_ring_free(&aq_ptp->ptp_tx);
+ 	aq_ring_free(&aq_ptp->ptp_rx);
+-	aq_ring_free(&aq_ptp->hwts_rx);
++	aq_ring_hwts_rx_free(&aq_ptp->hwts_rx);
+ 
+ 	aq_ptp_skb_ring_release(&aq_ptp->skb_ring);
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index cda8597b4e146..f7433abd65915 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -919,6 +919,19 @@ void aq_ring_free(struct aq_ring_s *self)
+ 	}
+ }
+ 
++void aq_ring_hwts_rx_free(struct aq_ring_s *self)
++{
++	if (!self)
++		return;
++
++	if (self->dx_ring) {
++		dma_free_coherent(aq_nic_get_dev(self->aq_nic),
++				  self->size * self->dx_size + AQ_CFG_RXDS_DEF,
++				  self->dx_ring, self->dx_ring_pa);
++		self->dx_ring = NULL;
++	}
++}
++
+ unsigned int aq_ring_fill_stats_data(struct aq_ring_s *self, u64 *data)
+ {
+ 	unsigned int count;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.h b/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
+index 52847310740a2..d627ace850ff5 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.h
+@@ -210,6 +210,7 @@ int aq_ring_rx_fill(struct aq_ring_s *self);
+ int aq_ring_hwts_rx_alloc(struct aq_ring_s *self,
+ 			  struct aq_nic_s *aq_nic, unsigned int idx,
+ 			  unsigned int size, unsigned int dx_size);
++void aq_ring_hwts_rx_free(struct aq_ring_s *self);
+ void aq_ring_hwts_rx_clean(struct aq_ring_s *self, struct aq_nic_s *aq_nic);
+ 
+ unsigned int aq_ring_fill_stats_data(struct aq_ring_s *self, u64 *data);
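
The aquantia fix adds a dedicated free helper because the HWTS ring is allocated with AQ_CFG_RXDS_DEF extra bytes; freeing it through the generic aq_ring_free() passed a different size to dma_free_coherent(). A sketch of keeping the alloc and free sizes in one place, with malloc/free standing in for the DMA API and EXTRA_BYTES as a hypothetical name:

#include <stdio.h>
#include <stdlib.h>

#define EXTRA_BYTES 8   /* stands in for AQ_CFG_RXDS_DEF */

struct ring {
    void *descs;
    size_t count;
    size_t desc_size;
};

/* One helper computes the size... */
static size_t ring_bytes(const struct ring *r)
{
    return r->count * r->desc_size + EXTRA_BYTES;
}

static int ring_alloc(struct ring *r, size_t count, size_t desc_size)
{
    r->count = count;
    r->desc_size = desc_size;
    r->descs = malloc(ring_bytes(r));   /* dma_alloc_coherent() stand-in */
    return r->descs ? 0 : -1;
}

/* ...and the matching free uses the same computation, so the two cannot
 * drift apart the way aq_ring_hwts_rx_alloc()/aq_ring_free() did. With
 * real dma_free_coherent() a size mismatch corrupts the accounting. */
static void ring_free(struct ring *r)
{
    if (!r->descs)
        return;
    free(r->descs);     /* dma_free_coherent() stand-in, same size */
    r->descs = NULL;
}

int main(void)
{
    struct ring r = { 0 };

    if (ring_alloc(&r, 64, 16))
        return 1;
    printf("ring of %zu bytes allocated and freed\n", ring_bytes(&r));
    ring_free(&r);
    return 0;
}
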
+diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
+index 9aeff2b37a612..64eadd3207983 100644
+--- a/drivers/net/ethernet/engleder/tsnep_main.c
++++ b/drivers/net/ethernet/engleder/tsnep_main.c
+@@ -719,17 +719,25 @@ static void tsnep_xdp_xmit_flush(struct tsnep_tx *tx)
+ 
+ static bool tsnep_xdp_xmit_back(struct tsnep_adapter *adapter,
+ 				struct xdp_buff *xdp,
+-				struct netdev_queue *tx_nq, struct tsnep_tx *tx)
++				struct netdev_queue *tx_nq, struct tsnep_tx *tx,
++				bool zc)
+ {
+ 	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
+ 	bool xmit;
++	u32 type;
+ 
+ 	if (unlikely(!xdpf))
+ 		return false;
+ 
++	/* no page pool for zero copy, so transmit via the NDO path */
++	if (zc)
++		type = TSNEP_TX_TYPE_XDP_NDO;
++	else
++		type = TSNEP_TX_TYPE_XDP_TX;
++
+ 	__netif_tx_lock(tx_nq, smp_processor_id());
+ 
+-	xmit = tsnep_xdp_xmit_frame_ring(xdpf, tx, TSNEP_TX_TYPE_XDP_TX);
++	xmit = tsnep_xdp_xmit_frame_ring(xdpf, tx, type);
+ 
+ 	/* Avoid transmit queue timeout since we share it with the slow path */
+ 	if (xmit)
+@@ -1273,7 +1281,7 @@ static bool tsnep_xdp_run_prog(struct tsnep_rx *rx, struct bpf_prog *prog,
+ 	case XDP_PASS:
+ 		return false;
+ 	case XDP_TX:
+-		if (!tsnep_xdp_xmit_back(rx->adapter, xdp, tx_nq, tx))
++		if (!tsnep_xdp_xmit_back(rx->adapter, xdp, tx_nq, tx, false))
+ 			goto out_failure;
+ 		*status |= TSNEP_XDP_TX;
+ 		return true;
+@@ -1323,7 +1331,7 @@ static bool tsnep_xdp_run_prog_zc(struct tsnep_rx *rx, struct bpf_prog *prog,
+ 	case XDP_PASS:
+ 		return false;
+ 	case XDP_TX:
+-		if (!tsnep_xdp_xmit_back(rx->adapter, xdp, tx_nq, tx))
++		if (!tsnep_xdp_xmit_back(rx->adapter, xdp, tx_nq, tx, true))
+ 			goto out_failure;
+ 		*status |= TSNEP_XDP_TX;
+ 		return true;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 7ca6941ea0b9b..02d0b707aea5b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -951,8 +951,11 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
+ 	if (pfvf->ptp && qidx < pfvf->hw.tx_queues) {
+ 		err = qmem_alloc(pfvf->dev, &sq->timestamps, qset->sqe_cnt,
+ 				 sizeof(*sq->timestamps));
+-		if (err)
++		if (err) {
++			kfree(sq->sg);
++			sq->sg = NULL;
+ 			return err;
++		}
+ 	}
+ 
+ 	sq->head = 0;
+@@ -968,7 +971,14 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
+ 	sq->stats.bytes = 0;
+ 	sq->stats.pkts = 0;
+ 
+-	return pfvf->hw_ops->sq_aq_init(pfvf, qidx, sqb_aura);
++	err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, sqb_aura);
++	if (err) {
++		kfree(sq->sg);
++		sq->sg = NULL;
++		return err;
++	}
++
++	return 0;
+ 
+ }
+ 
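
Both new error paths in otx2_sq_init() free sq->sg and clear the pointer before returning, so a later teardown cannot double-free or touch stale memory. A generic sketch of the same unwind pattern with goto labels, the idiomatic kernel shape for multi-step init; the struct and helpers here are illustrative, not the driver's:

#include <stdio.h>
#include <stdlib.h>

struct sq {
    void *sg;
    void *timestamps;
};

static int hw_init(struct sq *sq)
{
    return -1;  /* pretend the final hardware step fails */
}

/* Idiomatic multi-step init: each failure unwinds exactly the steps that
 * already succeeded, and freed pointers are cleared so a caller's generic
 * teardown cannot free them again. */
static int sq_init(struct sq *sq)
{
    int err;

    sq->sg = calloc(64, 16);
    if (!sq->sg)
        return -1;

    sq->timestamps = calloc(64, 8);
    if (!sq->timestamps) {
        err = -1;
        goto err_free_sg;
    }

    err = hw_init(sq);
    if (err)
        goto err_free_timestamps;

    return 0;

err_free_timestamps:
    free(sq->timestamps);
    sq->timestamps = NULL;
err_free_sg:
    free(sq->sg);
    sq->sg = NULL;
    return err;
}

int main(void)
{
    struct sq sq = { 0 };

    printf("sq_init: %d (sg=%p after unwind)\n", sq_init(&sq), sq.sg);
    return 0;
}
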
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index e3f650e88f82f..588e44d57f291 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -216,6 +216,7 @@ struct stmmac_safety_stats {
+ 	unsigned long mac_errors[32];
+ 	unsigned long mtl_errors[32];
+ 	unsigned long dma_errors[32];
++	unsigned long dma_dpp_errors[32];
+ };
+ 
+ /* Number of fields in Safety Stats */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+index a4e8b498dea96..17394847476f3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+@@ -319,6 +319,8 @@
+ #define XGMAC_RXCEIE			BIT(4)
+ #define XGMAC_TXCEIE			BIT(0)
+ #define XGMAC_MTL_ECC_INT_STATUS	0x000010cc
++#define XGMAC_MTL_DPP_CONTROL		0x000010e0
++#define XGMAC_DPP_DISABLE		BIT(0)
+ #define XGMAC_MTL_TXQ_OPMODE(x)		(0x00001100 + (0x80 * (x)))
+ #define XGMAC_TQS			GENMASK(25, 16)
+ #define XGMAC_TQS_SHIFT			16
+@@ -401,6 +403,7 @@
+ #define XGMAC_DCEIE			BIT(1)
+ #define XGMAC_TCEIE			BIT(0)
+ #define XGMAC_DMA_ECC_INT_STATUS	0x0000306c
++#define XGMAC_DMA_DPP_INT_STATUS	0x00003074
+ #define XGMAC_DMA_CH_CONTROL(x)		(0x00003100 + (0x80 * (x)))
+ #define XGMAC_SPH			BIT(24)
+ #define XGMAC_PBLx8			BIT(16)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index a74e71db79f94..b5509f244ecd1 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -830,6 +830,44 @@ static const struct dwxgmac3_error_desc dwxgmac3_dma_errors[32]= {
+ 	{ false, "UNKNOWN", "Unknown Error" }, /* 31 */
+ };
+ 
++#define DPP_RX_ERR "Read Rx Descriptor Parity checker Error"
++#define DPP_TX_ERR "Read Tx Descriptor Parity checker Error"
++
++static const struct dwxgmac3_error_desc dwxgmac3_dma_dpp_errors[32] = {
++	{ true, "TDPES0", DPP_TX_ERR },
++	{ true, "TDPES1", DPP_TX_ERR },
++	{ true, "TDPES2", DPP_TX_ERR },
++	{ true, "TDPES3", DPP_TX_ERR },
++	{ true, "TDPES4", DPP_TX_ERR },
++	{ true, "TDPES5", DPP_TX_ERR },
++	{ true, "TDPES6", DPP_TX_ERR },
++	{ true, "TDPES7", DPP_TX_ERR },
++	{ true, "TDPES8", DPP_TX_ERR },
++	{ true, "TDPES9", DPP_TX_ERR },
++	{ true, "TDPES10", DPP_TX_ERR },
++	{ true, "TDPES11", DPP_TX_ERR },
++	{ true, "TDPES12", DPP_TX_ERR },
++	{ true, "TDPES13", DPP_TX_ERR },
++	{ true, "TDPES14", DPP_TX_ERR },
++	{ true, "TDPES15", DPP_TX_ERR },
++	{ true, "RDPES0", DPP_RX_ERR },
++	{ true, "RDPES1", DPP_RX_ERR },
++	{ true, "RDPES2", DPP_RX_ERR },
++	{ true, "RDPES3", DPP_RX_ERR },
++	{ true, "RDPES4", DPP_RX_ERR },
++	{ true, "RDPES5", DPP_RX_ERR },
++	{ true, "RDPES6", DPP_RX_ERR },
++	{ true, "RDPES7", DPP_RX_ERR },
++	{ true, "RDPES8", DPP_RX_ERR },
++	{ true, "RDPES9", DPP_RX_ERR },
++	{ true, "RDPES10", DPP_RX_ERR },
++	{ true, "RDPES11", DPP_RX_ERR },
++	{ true, "RDPES12", DPP_RX_ERR },
++	{ true, "RDPES13", DPP_RX_ERR },
++	{ true, "RDPES14", DPP_RX_ERR },
++	{ true, "RDPES15", DPP_RX_ERR },
++};
++
+ static void dwxgmac3_handle_dma_err(struct net_device *ndev,
+ 				    void __iomem *ioaddr, bool correctable,
+ 				    struct stmmac_safety_stats *stats)
+@@ -841,6 +879,13 @@ static void dwxgmac3_handle_dma_err(struct net_device *ndev,
+ 
+ 	dwxgmac3_log_error(ndev, value, correctable, "DMA",
+ 			   dwxgmac3_dma_errors, STAT_OFF(dma_errors), stats);
++
++	value = readl(ioaddr + XGMAC_DMA_DPP_INT_STATUS);
++	writel(value, ioaddr + XGMAC_DMA_DPP_INT_STATUS);
++
++	dwxgmac3_log_error(ndev, value, false, "DMA_DPP",
++			   dwxgmac3_dma_dpp_errors,
++			   STAT_OFF(dma_dpp_errors), stats);
+ }
+ 
+ static int
+@@ -881,6 +926,12 @@ dwxgmac3_safety_feat_config(void __iomem *ioaddr, unsigned int asp,
+ 	value |= XGMAC_TMOUTEN; /* FSM Timeout Feature */
+ 	writel(value, ioaddr + XGMAC_MAC_FSM_CONTROL);
+ 
++	/* 5. Enable Data Path Parity Protection */
++	value = readl(ioaddr + XGMAC_MTL_DPP_CONTROL);
++	/* already enabled by default, explicitly enable it again */
++	value &= ~XGMAC_DPP_DISABLE;
++	writel(value, ioaddr + XGMAC_MTL_DPP_CONTROL);
++
+ 	return 0;
+ }
+ 
+@@ -914,7 +965,11 @@ static int dwxgmac3_safety_feat_irq_status(struct net_device *ndev,
+ 		ret |= !corr;
+ 	}
+ 
+-	err = dma & (XGMAC_DEUIS | XGMAC_DECIS);
++	/* DMA_DPP_Interrupt_Status is indicated by MCSIS bit in
++	 * DMA_Safety_Interrupt_Status, so we handle DMA Data Path
++	 * Parity Errors here
++	 */
++	err = dma & (XGMAC_DEUIS | XGMAC_DECIS | XGMAC_MCSIS);
+ 	corr = dma & XGMAC_DECIS;
+ 	if (err) {
+ 		dwxgmac3_handle_dma_err(ndev, ioaddr, corr, stats);
+@@ -930,6 +985,7 @@ static const struct dwxgmac3_error {
+ 	{ dwxgmac3_mac_errors },
+ 	{ dwxgmac3_mtl_errors },
+ 	{ dwxgmac3_dma_errors },
++	{ dwxgmac3_dma_dpp_errors },
+ };
+ 
+ static int dwxgmac3_safety_feat_dump(struct stmmac_safety_stats *stats,
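
The new dwxgmac3_dma_dpp_errors[] table is walked the same way as the other safety tables: read the status register, log every set bit by name, then write the value back to clear it (write-1-to-clear). A self-contained sketch of that decode loop over a shortened four-entry table:

#include <stdio.h>
#include <stdint.h>

struct error_desc {
    const char *name;
    const char *desc;
};

static const struct error_desc dpp_errors[4] = {
    { "TDPES0", "Read Tx Descriptor Parity checker Error" },
    { "TDPES1", "Read Tx Descriptor Parity checker Error" },
    { "RDPES0", "Read Rx Descriptor Parity checker Error" },
    { "RDPES1", "Read Rx Descriptor Parity checker Error" },
};

/* Decode loop in the style of dwxgmac3_log_error(): one table entry per
 * status bit; writing the value back would clear the W1C register. */
static void log_errors(uint32_t status)
{
    for (unsigned bit = 0; bit < 4; bit++) {
        if (status & (1U << bit))
            printf("Found %s error: '%s'\n",
                   dpp_errors[bit].name, dpp_errors[bit].desc);
    }
    /* writel(status, ioaddr + XGMAC_DMA_DPP_INT_STATUS) would go here */
}

int main(void)
{
    log_errors(0x5);    /* TDPES0 and RDPES0 set */
    return 0;
}
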
+diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c
+index b4d3b9cde8bd6..92a7a36b93ac0 100644
+--- a/drivers/net/netdevsim/dev.c
++++ b/drivers/net/netdevsim/dev.c
+@@ -835,14 +835,14 @@ static void nsim_dev_trap_report_work(struct work_struct *work)
+ 				      trap_report_dw.work);
+ 	nsim_dev = nsim_trap_data->nsim_dev;
+ 
+-	/* For each running port and enabled packet trap, generate a UDP
+-	 * packet with a random 5-tuple and report it.
+-	 */
+ 	if (!devl_trylock(priv_to_devlink(nsim_dev))) {
+-		schedule_delayed_work(&nsim_dev->trap_data->trap_report_dw, 0);
++		schedule_delayed_work(&nsim_dev->trap_data->trap_report_dw, 1);
+ 		return;
+ 	}
+ 
++	/* For each running port and enabled packet trap, generate a UDP
++	 * packet with a random 5-tuple and report it.
++	 */
+ 	list_for_each_entry(nsim_dev_port, &nsim_dev->port_list, list) {
+ 		if (!netif_running(nsim_dev_port->ns->netdev))
+ 			continue;
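
The netdevsim fix reschedules with a one-jiffy delay when devl_trylock()
fails, instead of a zero delay, so a contended lock no longer turns the
worker into a busy loop. A rough userspace analogue of trylock-with-backoff
(a pthread mutex standing in for the devlink lock; the timing value is
illustrative):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t devlink_lock = PTHREAD_MUTEX_INITIALIZER;

/* On contention, sleep briefly before retrying instead of retrying
 * immediately (the patch moves the requeue delay from 0 to 1 jiffy
 * for the same reason). */
static void trap_report_work(void)
{
	while (pthread_mutex_trylock(&devlink_lock) != 0) {
		struct timespec backoff = { .tv_sec = 0, .tv_nsec = 1000000 };

		nanosleep(&backoff, NULL);	/* ~1ms, standing in for 1 jiffy */
	}
	puts("devlink lock held: reporting traps");
	pthread_mutex_unlock(&devlink_lock);
}

int main(void)
{
	trap_report_work();
	return 0;
}
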
+diff --git a/drivers/net/ppp/ppp_async.c b/drivers/net/ppp/ppp_async.c
+index fbaaa8c102a1b..e94a4b08fd63b 100644
+--- a/drivers/net/ppp/ppp_async.c
++++ b/drivers/net/ppp/ppp_async.c
+@@ -460,6 +460,10 @@ ppp_async_ioctl(struct ppp_channel *chan, unsigned int cmd, unsigned long arg)
+ 	case PPPIOCSMRU:
+ 		if (get_user(val, p))
+ 			break;
++		if (val > U16_MAX) {
++			err = -EINVAL;
++			break;
++		}
+ 		if (val < PPP_MRU)
+ 			val = PPP_MRU;
+ 		ap->mru = val;
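
The PPPIOCSMRU fix bounds the user-supplied MRU before storing it: values
that cannot fit the 16-bit on-wire field are rejected rather than silently
truncated, while values below PPP_MRU are still clamped up. A small
self-contained sketch of the same check (PPP_MRU taken as 1500, its uapi
value):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define PPP_MRU 1500	/* minimum MRU, per the PPP uapi headers */

/* Reject values that overflow the 16-bit field, clamp small ones up. */
static int set_mru(uint32_t val, uint16_t *mru)
{
	if (val > UINT16_MAX)
		return -EINVAL;
	if (val < PPP_MRU)
		val = PPP_MRU;
	*mru = (uint16_t)val;
	return 0;
}

int main(void)
{
	uint16_t mru = 0;

	printf("set_mru(70000) = %d\n", set_mru(70000, &mru));	/* -EINVAL */
	printf("set_mru(296)   = %d, mru = %u\n", set_mru(296, &mru), mru);
	return 0;
}
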
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 667462369a32f..44cea18dd20ed 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -3779,8 +3779,10 @@ static int brcmf_internal_escan_add_info(struct cfg80211_scan_request *req,
+ 		if (req->channels[i] == chan)
+ 			break;
+ 	}
+-	if (i == req->n_channels)
+-		req->channels[req->n_channels++] = chan;
++	if (i == req->n_channels) {
++		req->n_channels++;
++		req->channels[i] = chan;
++	}
+ 
+ 	for (i = 0; i < req->n_ssids; i++) {
+ 		if (req->ssids[i].ssid_len == ssid_len &&
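
The brcmfmac swap matters when channels[] is bounds-checked against
n_channels, as with a __counted_by() annotation: the counter has to grow
before the new slot is written, or the store falls outside the declared
valid range. A toy model of that ordering (plain struct; the annotation
appears only as a comment):

#include <stdio.h>

struct scan_req {
	int n_channels;
	int channels[8];	/* __counted_by(n_channels) in the kernel */
};

static void add_channel(struct scan_req *req, int chan)
{
	int i = req->n_channels;

	req->n_channels++;		/* widen the valid range first... */
	req->channels[i] = chan;	/* ...so this write is in bounds */
}

int main(void)
{
	struct scan_req req = { 0 };

	add_channel(&req, 36);
	printf("n_channels=%d channels[0]=%d\n", req.n_channels, req.channels[0]);
	return 0;
}
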
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h b/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h
+index 7b18e098b125b..8248c3beb18d0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h
+@@ -531,7 +531,7 @@ enum iwl_fw_dbg_config_cmd_type {
+ }; /* LDBG_CFG_CMD_TYPE_API_E_VER_1 */
+ 
+ /* this token disables debug asserts in the firmware */
+-#define IWL_FW_DBG_CONFIG_TOKEN 0x00011301
++#define IWL_FW_DBG_CONFIG_TOKEN 0x00010001
+ 
+ /**
+  * struct iwl_fw_dbg_config_cmd - configure FW debug
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index a64600f0ed9f4..586334eee0566 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1586,7 +1586,8 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
+ 	 */
+ 	if (vif->type == NL80211_IFTYPE_AP ||
+ 	    vif->type == NL80211_IFTYPE_ADHOC) {
+-		iwl_mvm_vif_dbgfs_add_link(mvm, vif);
++		if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
++			iwl_mvm_vif_dbgfs_add_link(mvm, vif);
+ 		ret = 0;
+ 		goto out;
+ 	}
+@@ -1626,7 +1627,8 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
+ 			iwl_mvm_chandef_get_primary_80(&vif->bss_conf.chandef);
+ 	}
+ 
+-	iwl_mvm_vif_dbgfs_add_link(mvm, vif);
++	if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
++		iwl_mvm_vif_dbgfs_add_link(mvm, vif);
+ 
+ 	if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+ 	    vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+index 61170173f917a..893b69fc841b8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+@@ -81,7 +81,8 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
+ 		ieee80211_hw_set(mvm->hw, RX_INCLUDES_FCS);
+ 	}
+ 
+-	iwl_mvm_vif_dbgfs_add_link(mvm, vif);
++	if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
++		iwl_mvm_vif_dbgfs_add_link(mvm, vif);
+ 
+ 	if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+ 	    vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
+@@ -437,6 +438,9 @@ __iwl_mvm_mld_unassign_vif_chanctx(struct iwl_mvm *mvm,
+ 		mvmvif->ap_ibss_active = false;
+ 	}
+ 
++	iwl_mvm_link_changed(mvm, vif, link_conf,
++			     LINK_CONTEXT_MODIFY_ACTIVE, false);
++
+ 	if (iwl_mvm_is_esr_supported(mvm->fwrt.trans) && n_active > 1) {
+ 		int ret = iwl_mvm_esr_mode_inactive(mvm, vif);
+ 
+@@ -448,9 +452,6 @@ __iwl_mvm_mld_unassign_vif_chanctx(struct iwl_mvm *mvm,
+ 	if (vif->type == NL80211_IFTYPE_MONITOR)
+ 		iwl_mvm_mld_rm_snif_sta(mvm, vif);
+ 
+-	iwl_mvm_link_changed(mvm, vif, link_conf,
+-			     LINK_CONTEXT_MODIFY_ACTIVE, false);
+-
+ 	if (switching_chanctx)
+ 		return;
+ 	mvmvif->link[link_id]->phy_ctxt = NULL;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 60f14019f9816..86149275ccb8e 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4107,6 +4107,7 @@ static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl)
+ static void nvme_get_fw_slot_info(struct nvme_ctrl *ctrl)
+ {
+ 	struct nvme_fw_slot_info_log *log;
++	u8 next_fw_slot, cur_fw_slot;
+ 
+ 	log = kmalloc(sizeof(*log), GFP_KERNEL);
+ 	if (!log)
+@@ -4118,13 +4119,15 @@ static void nvme_get_fw_slot_info(struct nvme_ctrl *ctrl)
+ 		goto out_free_log;
+ 	}
+ 
+-	if (log->afi & 0x70 || !(log->afi & 0x7)) {
++	cur_fw_slot = log->afi & 0x7;
++	next_fw_slot = (log->afi & 0x70) >> 4;
++	if (!cur_fw_slot || (next_fw_slot && (cur_fw_slot != next_fw_slot))) {
+ 		dev_info(ctrl->device,
+ 			 "Firmware is activated after next Controller Level Reset\n");
+ 		goto out_free_log;
+ 	}
+ 
+-	memcpy(ctrl->subsys->firmware_rev, &log->frs[(log->afi & 0x7) - 1],
++	memcpy(ctrl->subsys->firmware_rev, &log->frs[cur_fw_slot - 1],
+ 		sizeof(ctrl->subsys->firmware_rev));
+ 
+ out_free_log:
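
The firmware-slot fix decodes the AFI byte explicitly: bits 2:0 hold the
currently active slot, bits 6:4 the slot pending activation (0 when nothing
is pending), and the revision copy is skipped when no slot is active or a
different slot takes over at the next reset. A standalone decode of the same
byte:

#include <stdint.h>
#include <stdio.h>

static void decode_afi(uint8_t afi)
{
	uint8_t cur_fw_slot = afi & 0x7;
	uint8_t next_fw_slot = (afi & 0x70) >> 4;

	if (!cur_fw_slot || (next_fw_slot && cur_fw_slot != next_fw_slot))
		puts("firmware is activated after next Controller Level Reset");
	else
		printf("running firmware from slot %u\n", cur_fw_slot);
}

int main(void)
{
	decode_afi(0x01);	/* slot 1 active, nothing pending */
	decode_afi(0x21);	/* slot 1 active, slot 2 pending  */
	return 0;
}
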
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index 9c2137dae429a..826b5016a1010 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -386,21 +386,8 @@ void pci_bus_add_devices(const struct pci_bus *bus)
+ }
+ EXPORT_SYMBOL(pci_bus_add_devices);
+ 
+-/** pci_walk_bus - walk devices on/under bus, calling callback.
+- *  @top      bus whose devices should be walked
+- *  @cb       callback to be called for each device found
+- *  @userdata arbitrary pointer to be passed to callback.
+- *
+- *  Walk the given bus, including any bridged devices
+- *  on buses under this bus.  Call the provided callback
+- *  on each device found.
+- *
+- *  We check the return of @cb each time. If it returns anything
+- *  other than 0, we break out.
+- *
+- */
+-void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
+-		  void *userdata)
++static void __pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
++			   void *userdata, bool locked)
+ {
+ 	struct pci_dev *dev;
+ 	struct pci_bus *bus;
+@@ -408,7 +395,8 @@ void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
+ 	int retval;
+ 
+ 	bus = top;
+-	down_read(&pci_bus_sem);
++	if (!locked)
++		down_read(&pci_bus_sem);
+ 	next = top->devices.next;
+ 	for (;;) {
+ 		if (next == &bus->devices) {
+@@ -431,10 +419,37 @@ void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
+ 		if (retval)
+ 			break;
+ 	}
+-	up_read(&pci_bus_sem);
++	if (!locked)
++		up_read(&pci_bus_sem);
++}
++
++/**
++ *  pci_walk_bus - walk devices on/under bus, calling callback.
++ *  @top: bus whose devices should be walked
++ *  @cb: callback to be called for each device found
++ *  @userdata: arbitrary pointer to be passed to callback
++ *
++ *  Walk the given bus, including any bridged devices
++ *  on buses under this bus.  Call the provided callback
++ *  on each device found.
++ *
++ *  We check the return of @cb each time. If it returns anything
++ *  other than 0, we break out.
++ */
++void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *), void *userdata)
++{
++	__pci_walk_bus(top, cb, userdata, false);
+ }
+ EXPORT_SYMBOL_GPL(pci_walk_bus);
+ 
++void pci_walk_bus_locked(struct pci_bus *top, int (*cb)(struct pci_dev *, void *), void *userdata)
++{
++	lockdep_assert_held(&pci_bus_sem);
++
++	__pci_walk_bus(top, cb, userdata, true);
++}
++EXPORT_SYMBOL_GPL(pci_walk_bus_locked);
++
+ struct pci_bus *pci_bus_get(struct pci_bus *bus)
+ {
+ 	if (bus)
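
The pci_walk_bus() rework is the classic locked/unlocked split: one static
worker parameterized on lock ownership, plus two thin entry points, where
the _locked variant asserts with lockdep_assert_held() that the caller
already owns pci_bus_sem. A rough userspace analogue with a pthread rwlock
standing in for pci_bus_sem (lockdep has no direct equivalent here, so the
precondition survives only as a comment):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t bus_sem = PTHREAD_RWLOCK_INITIALIZER;

/* One static worker that only takes the lock when the caller doesn't. */
static void __walk_bus(void (*cb)(int), bool locked)
{
	if (!locked)
		pthread_rwlock_rdlock(&bus_sem);
	for (int dev = 0; dev < 3; dev++)
		cb(dev);
	if (!locked)
		pthread_rwlock_unlock(&bus_sem);
}

static void walk_bus(void (*cb)(int))
{
	__walk_bus(cb, false);
}

static void walk_bus_locked(void (*cb)(int))	/* caller holds bus_sem */
{
	__walk_bus(cb, true);
}

static void print_dev(int dev)
{
	printf("dev %d\n", dev);
}

int main(void)
{
	walk_bus(print_dev);

	pthread_rwlock_rdlock(&bus_sem);
	walk_bus_locked(print_dev);	/* no second rdlock attempt */
	pthread_rwlock_unlock(&bus_sem);
	return 0;
}

The qcom hunk below is the consumer: its callback runs under
pci_walk_bus_locked(), so it must use the pci_set_power_state_locked()
variant introduced further down rather than re-acquiring pci_bus_sem.
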
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 11c80555d9754..cbc3f08817708 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -972,7 +972,7 @@ static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata)
+ 	 * Downstream devices need to be in D0 state before enabling PCI PM
+ 	 * substates.
+ 	 */
+-	pci_set_power_state(pdev, PCI_D0);
++	pci_set_power_state_locked(pdev, PCI_D0);
+ 	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
+ 
+ 	return 0;
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index bdbf8a94b4d09..b2000b14faa0d 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1291,6 +1291,7 @@ end:
+ /**
+  * pci_set_full_power_state - Put a PCI device into D0 and update its state
+  * @dev: PCI device to power up
++ * @locked: whether pci_bus_sem is held
+  *
+  * Call pci_power_up() to put @dev into D0, read from its PCI_PM_CTRL register
+  * to confirm the state change, restore its BARs if they might be lost and
+@@ -1300,7 +1301,7 @@ end:
+  * to D0, it is more efficient to use pci_power_up() directly instead of this
+  * function.
+  */
+-static int pci_set_full_power_state(struct pci_dev *dev)
++static int pci_set_full_power_state(struct pci_dev *dev, bool locked)
+ {
+ 	u16 pmcsr;
+ 	int ret;
+@@ -1336,7 +1337,7 @@ static int pci_set_full_power_state(struct pci_dev *dev)
+ 	}
+ 
+ 	if (dev->bus->self)
+-		pcie_aspm_pm_state_change(dev->bus->self);
++		pcie_aspm_pm_state_change(dev->bus->self, locked);
+ 
+ 	return 0;
+ }
+@@ -1365,10 +1366,22 @@ void pci_bus_set_current_state(struct pci_bus *bus, pci_power_t state)
+ 		pci_walk_bus(bus, __pci_dev_set_current_state, &state);
+ }
+ 
++static void __pci_bus_set_current_state(struct pci_bus *bus, pci_power_t state, bool locked)
++{
++	if (!bus)
++		return;
++
++	if (locked)
++		pci_walk_bus_locked(bus, __pci_dev_set_current_state, &state);
++	else
++		pci_walk_bus(bus, __pci_dev_set_current_state, &state);
++}
++
+ /**
+  * pci_set_low_power_state - Put a PCI device into a low-power state.
+  * @dev: PCI device to handle.
+  * @state: PCI power state (D1, D2, D3hot) to put the device into.
++ * @locked: whether pci_bus_sem is held
+  *
+  * Use the device's PCI_PM_CTRL register to put it into a low-power state.
+  *
+@@ -1379,7 +1392,7 @@ void pci_bus_set_current_state(struct pci_bus *bus, pci_power_t state)
+  * 0 if device already is in the requested state.
+  * 0 if device's power state has been successfully changed.
+  */
+-static int pci_set_low_power_state(struct pci_dev *dev, pci_power_t state)
++static int pci_set_low_power_state(struct pci_dev *dev, pci_power_t state, bool locked)
+ {
+ 	u16 pmcsr;
+ 
+@@ -1433,29 +1446,12 @@ static int pci_set_low_power_state(struct pci_dev *dev, pci_power_t state)
+ 				     pci_power_name(state));
+ 
+ 	if (dev->bus->self)
+-		pcie_aspm_pm_state_change(dev->bus->self);
++		pcie_aspm_pm_state_change(dev->bus->self, locked);
+ 
+ 	return 0;
+ }
+ 
+-/**
+- * pci_set_power_state - Set the power state of a PCI device
+- * @dev: PCI device to handle.
+- * @state: PCI power state (D0, D1, D2, D3hot) to put the device into.
+- *
+- * Transition a device to a new power state, using the platform firmware and/or
+- * the device's PCI PM registers.
+- *
+- * RETURN VALUE:
+- * -EINVAL if the requested state is invalid.
+- * -EIO if device does not support PCI PM or its PM capabilities register has a
+- * wrong version, or device doesn't support the requested state.
+- * 0 if the transition is to D1 or D2 but D1 and D2 are not supported.
+- * 0 if device already is in the requested state.
+- * 0 if the transition is to D3 but D3 is not supported.
+- * 0 if device's power state has been successfully changed.
+- */
+-int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
++static int __pci_set_power_state(struct pci_dev *dev, pci_power_t state, bool locked)
+ {
+ 	int error;
+ 
+@@ -1479,7 +1475,7 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+ 		return 0;
+ 
+ 	if (state == PCI_D0)
+-		return pci_set_full_power_state(dev);
++		return pci_set_full_power_state(dev, locked);
+ 
+ 	/*
+ 	 * This device is quirked not to be put into D3, so don't put it in
+@@ -1493,16 +1489,16 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+ 		 * To put the device in D3cold, put it into D3hot in the native
+ 		 * way, then put it into D3cold using platform ops.
+ 		 */
+-		error = pci_set_low_power_state(dev, PCI_D3hot);
++		error = pci_set_low_power_state(dev, PCI_D3hot, locked);
+ 
+ 		if (pci_platform_power_transition(dev, PCI_D3cold))
+ 			return error;
+ 
+ 		/* Powering off a bridge may power off the whole hierarchy */
+ 		if (dev->current_state == PCI_D3cold)
+-			pci_bus_set_current_state(dev->subordinate, PCI_D3cold);
++			__pci_bus_set_current_state(dev->subordinate, PCI_D3cold, locked);
+ 	} else {
+-		error = pci_set_low_power_state(dev, state);
++		error = pci_set_low_power_state(dev, state, locked);
+ 
+ 		if (pci_platform_power_transition(dev, state))
+ 			return error;
+@@ -1510,8 +1506,38 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+ 
+ 	return 0;
+ }
++
++/**
++ * pci_set_power_state - Set the power state of a PCI device
++ * @dev: PCI device to handle.
++ * @state: PCI power state (D0, D1, D2, D3hot) to put the device into.
++ *
++ * Transition a device to a new power state, using the platform firmware and/or
++ * the device's PCI PM registers.
++ *
++ * RETURN VALUE:
++ * -EINVAL if the requested state is invalid.
++ * -EIO if device does not support PCI PM or its PM capabilities register has a
++ * wrong version, or device doesn't support the requested state.
++ * 0 if the transition is to D1 or D2 but D1 and D2 are not supported.
++ * 0 if device already is in the requested state.
++ * 0 if the transition is to D3 but D3 is not supported.
++ * 0 if device's power state has been successfully changed.
++ */
++int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
++{
++	return __pci_set_power_state(dev, state, false);
++}
+ EXPORT_SYMBOL(pci_set_power_state);
+ 
++int pci_set_power_state_locked(struct pci_dev *dev, pci_power_t state)
++{
++	lockdep_assert_held(&pci_bus_sem);
++
++	return __pci_set_power_state(dev, state, true);
++}
++EXPORT_SYMBOL(pci_set_power_state_locked);
++
+ #define PCI_EXP_SAVE_REGS	7
+ 
+ static struct pci_cap_saved_state *_pci_find_saved_cap(struct pci_dev *pci_dev,
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index c6283ba781979..24ae29f0d36d7 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -569,12 +569,12 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt);
+ #ifdef CONFIG_PCIEASPM
+ void pcie_aspm_init_link_state(struct pci_dev *pdev);
+ void pcie_aspm_exit_link_state(struct pci_dev *pdev);
+-void pcie_aspm_pm_state_change(struct pci_dev *pdev);
++void pcie_aspm_pm_state_change(struct pci_dev *pdev, bool locked);
+ void pcie_aspm_powersave_config_link(struct pci_dev *pdev);
+ #else
+ static inline void pcie_aspm_init_link_state(struct pci_dev *pdev) { }
+ static inline void pcie_aspm_exit_link_state(struct pci_dev *pdev) { }
+-static inline void pcie_aspm_pm_state_change(struct pci_dev *pdev) { }
++static inline void pcie_aspm_pm_state_change(struct pci_dev *pdev, bool locked) { }
+ static inline void pcie_aspm_powersave_config_link(struct pci_dev *pdev) { }
+ #endif
+ 
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 060f4b3c8698f..eb2c77d522cbd 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1008,8 +1008,11 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 	up_read(&pci_bus_sem);
+ }
+ 
+-/* @pdev: the root port or switch downstream port */
+-void pcie_aspm_pm_state_change(struct pci_dev *pdev)
++/*
++ * @pdev: the root port or switch downstream port
++ * @locked: whether pci_bus_sem is held
++ */
++void pcie_aspm_pm_state_change(struct pci_dev *pdev, bool locked)
+ {
+ 	struct pcie_link_state *link = pdev->link_state;
+ 
+@@ -1019,12 +1022,14 @@ void pcie_aspm_pm_state_change(struct pci_dev *pdev)
+ 	 * Devices changed PM state, we should recheck if latency
+ 	 * meets all functions' requirement
+ 	 */
+-	down_read(&pci_bus_sem);
++	if (!locked)
++		down_read(&pci_bus_sem);
+ 	mutex_lock(&aspm_lock);
+ 	pcie_update_aspm_capable(link->root);
+ 	pcie_config_aspm_path(link);
+ 	mutex_unlock(&aspm_lock);
+-	up_read(&pci_bus_sem);
++	if (!locked)
++		up_read(&pci_bus_sem);
+ }
+ 
+ void pcie_aspm_powersave_config_link(struct pci_dev *pdev)
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+index 02f156298e77c..a3719719e2e0f 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+@@ -1276,6 +1276,14 @@ static const char * const qmp_phy_vreg_l[] = {
+ 	"vdda-phy", "vdda-pll",
+ };
+ 
++static const struct qmp_usb_offsets qmp_usb_offsets_ipq8074 = {
++	.serdes		= 0,
++	.pcs		= 0x800,
++	.pcs_misc	= 0x600,
++	.tx		= 0x200,
++	.rx		= 0x400,
++};
++
+ static const struct qmp_usb_offsets qmp_usb_offsets_ipq9574 = {
+ 	.serdes		= 0,
+ 	.pcs		= 0x800,
+@@ -1317,10 +1325,28 @@ static const struct qmp_usb_offsets qmp_usb_offsets_v5 = {
+ 	.rx		= 0x1000,
+ };
+ 
++static const struct qmp_phy_cfg ipq6018_usb3phy_cfg = {
++	.lanes			= 1,
++
++	.offsets		= &qmp_usb_offsets_ipq8074,
++
++	.serdes_tbl		= ipq9574_usb3_serdes_tbl,
++	.serdes_tbl_num		= ARRAY_SIZE(ipq9574_usb3_serdes_tbl),
++	.tx_tbl			= msm8996_usb3_tx_tbl,
++	.tx_tbl_num		= ARRAY_SIZE(msm8996_usb3_tx_tbl),
++	.rx_tbl			= ipq8074_usb3_rx_tbl,
++	.rx_tbl_num		= ARRAY_SIZE(ipq8074_usb3_rx_tbl),
++	.pcs_tbl		= ipq8074_usb3_pcs_tbl,
++	.pcs_tbl_num		= ARRAY_SIZE(ipq8074_usb3_pcs_tbl),
++	.vreg_list		= qmp_phy_vreg_l,
++	.num_vregs		= ARRAY_SIZE(qmp_phy_vreg_l),
++	.regs			= qmp_v3_usb3phy_regs_layout,
++};
++
+ static const struct qmp_phy_cfg ipq8074_usb3phy_cfg = {
+ 	.lanes			= 1,
+ 
+-	.offsets		= &qmp_usb_offsets_v3,
++	.offsets		= &qmp_usb_offsets_ipq8074,
+ 
+ 	.serdes_tbl		= ipq8074_usb3_serdes_tbl,
+ 	.serdes_tbl_num		= ARRAY_SIZE(ipq8074_usb3_serdes_tbl),
+@@ -2225,7 +2251,7 @@ err_node_put:
+ static const struct of_device_id qmp_usb_of_match_table[] = {
+ 	{
+ 		.compatible = "qcom,ipq6018-qmp-usb3-phy",
+-		.data = &ipq8074_usb3phy_cfg,
++		.data = &ipq6018_usb3phy_cfg,
+ 	}, {
+ 		.compatible = "qcom,ipq8074-qmp-usb3-phy",
+ 		.data = &ipq8074_usb3phy_cfg,
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index e53eace7c91e3..6387c0d34c551 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -673,8 +673,6 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ 	channel->irq = platform_get_irq_optional(pdev, 0);
+ 	channel->dr_mode = rcar_gen3_get_dr_mode(dev->of_node);
+ 	if (channel->dr_mode != USB_DR_MODE_UNKNOWN) {
+-		int ret;
+-
+ 		channel->is_otg_channel = true;
+ 		channel->uses_otg_pins = !of_property_read_bool(dev->of_node,
+ 							"renesas,no-otg-pins");
+@@ -738,8 +736,6 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ 		ret = PTR_ERR(provider);
+ 		goto error;
+ 	} else if (channel->is_otg_channel) {
+-		int ret;
+-
+ 		ret = device_create_file(dev, &dev_attr_role);
+ 		if (ret < 0)
+ 			goto error;
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index dd2913ac0fa28..78e19b128962a 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -117,7 +117,7 @@ static int omap_usb_set_vbus(struct usb_otg *otg, bool enabled)
+ {
+ 	struct omap_usb *phy = phy_to_omapusb(otg->usb_phy);
+ 
+-	if (!phy->comparator)
++	if (!phy->comparator || !phy->comparator->set_vbus)
+ 		return -ENODEV;
+ 
+ 	return phy->comparator->set_vbus(phy->comparator, enabled);
+@@ -127,7 +127,7 @@ static int omap_usb_start_srp(struct usb_otg *otg)
+ {
+ 	struct omap_usb *phy = phy_to_omapusb(otg->usb_phy);
+ 
+-	if (!phy->comparator)
++	if (!phy->comparator || !phy->comparator->start_srp)
+ 		return -ENODEV;
+ 
+ 	return phy->comparator->start_srp(phy->comparator);
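
The omap_usb checks now guard the individual callback as well as the ops
structure, since a registered comparator may implement only part of the
interface. A minimal sketch of that double check (struct and names are
illustrative):

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative ops table: a provider may leave callbacks unimplemented. */
struct comparator_ops {
	int (*set_vbus)(int enabled);
	int (*start_srp)(void);
};

static int comparator_set_vbus(const struct comparator_ops *ops, int enabled)
{
	if (!ops || !ops->set_vbus)	/* guard the pointer AND the hook */
		return -ENODEV;
	return ops->set_vbus(enabled);
}

int main(void)
{
	static const struct comparator_ops partial_ops = { 0 };

	printf("%d\n", comparator_set_vbus(NULL, 1));		/* no ops  */
	printf("%d\n", comparator_set_vbus(&partial_ops, 1));	/* no hook */
	return 0;
}
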
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 3328b175a8326..43eff1107038a 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -282,11 +282,12 @@ static void scsi_eh_inc_host_failed(struct rcu_head *head)
+ {
+ 	struct scsi_cmnd *scmd = container_of(head, typeof(*scmd), rcu);
+ 	struct Scsi_Host *shost = scmd->device->host;
++	unsigned int busy = scsi_host_busy(shost);
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(shost->host_lock, flags);
+ 	shost->host_failed++;
+-	scsi_eh_wakeup(shost, scsi_host_busy(shost));
++	scsi_eh_wakeup(shost, busy);
+ 	spin_unlock_irqrestore(shost->host_lock, flags);
+ }
+ 
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 1fb80eae9a63a..df5ac03d5d6c2 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -278,9 +278,11 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
+ 	rcu_read_lock();
+ 	__clear_bit(SCMD_STATE_INFLIGHT, &cmd->state);
+ 	if (unlikely(scsi_host_in_recovery(shost))) {
++		unsigned int busy = scsi_host_busy(shost);
++
+ 		spin_lock_irqsave(shost->host_lock, flags);
+ 		if (shost->host_failed || shost->host_eh_scheduled)
+-			scsi_eh_wakeup(shost, scsi_host_busy(shost));
++			scsi_eh_wakeup(shost, busy);
+ 		spin_unlock_irqrestore(shost->host_lock, flags);
+ 	}
+ 	rcu_read_unlock();
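
Both SCSI hunks apply the same fix: scsi_host_busy() walks the tag set and
must not run under the host spinlock, so its result is sampled first and
only the snapshot is used inside the critical section. A compact model of
compute-outside, use-inside (pthread spinlock as a stand-in; the busy value
is faked):

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t host_lock;
static unsigned int host_failed;

/* Stand-in for scsi_host_busy(): the real function iterates the tag set
 * and must not be called with the host spinlock held. */
static unsigned int host_busy(void)
{
	return 42;
}

static void eh_inc_host_failed(void)
{
	unsigned int busy = host_busy();	/* sample before locking */

	pthread_spin_lock(&host_lock);
	host_failed++;
	printf("wakeup: busy=%u failed=%u\n", busy, host_failed);
	pthread_spin_unlock(&host_lock);
}

int main(void)
{
	pthread_spin_init(&host_lock, PTHREAD_PROCESS_PRIVATE);
	eh_inc_host_failed();
	pthread_spin_destroy(&host_lock);
	return 0;
}
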
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 6604845c397cd..39564e17f3b07 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -51,6 +51,8 @@
+ #define PCI_DEVICE_ID_INTEL_MTLP		0x7ec1
+ #define PCI_DEVICE_ID_INTEL_MTLS		0x7f6f
+ #define PCI_DEVICE_ID_INTEL_MTL			0x7e7e
++#define PCI_DEVICE_ID_INTEL_ARLH		0x7ec1
++#define PCI_DEVICE_ID_INTEL_ARLH_PCH		0x777e
+ #define PCI_DEVICE_ID_INTEL_TGL			0x9a15
+ #define PCI_DEVICE_ID_AMD_MR			0x163a
+ 
+@@ -421,6 +423,8 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, MTLP, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, MTL, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, MTLS, &dwc3_pci_intel_swnode) },
++	{ PCI_DEVICE_DATA(INTEL, ARLH, &dwc3_pci_intel_swnode) },
++	{ PCI_DEVICE_DATA(INTEL, ARLH_PCH, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, TGL, &dwc3_pci_intel_swnode) },
+ 
+ 	{ PCI_DEVICE_DATA(AMD, NL_USB, &dwc3_pci_amd_swnode) },
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index 61f57fe5bb783..43230915323c7 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -61,7 +61,7 @@ out:
+ 
+ int dwc3_host_init(struct dwc3 *dwc)
+ {
+-	struct property_entry	props[4];
++	struct property_entry	props[5];
+ 	struct platform_device	*xhci;
+ 	int			ret, irq;
+ 	int			prop_idx = 0;
+@@ -89,6 +89,8 @@ int dwc3_host_init(struct dwc3 *dwc)
+ 
+ 	memset(props, 0, sizeof(struct property_entry) * ARRAY_SIZE(props));
+ 
++	props[prop_idx++] = PROPERTY_ENTRY_BOOL("xhci-sg-trb-cache-size-quirk");
++
+ 	if (dwc->usb3_lpm_capable)
+ 		props[prop_idx++] = PROPERTY_ENTRY_BOOL("usb3-lpm-capable");
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index f0853c4478f57..d68e9abcdc69a 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -250,6 +250,9 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+ 		if (device_property_read_bool(tmpdev, "quirk-broken-port-ped"))
+ 			xhci->quirks |= XHCI_BROKEN_PORT_PED;
+ 
++		if (device_property_read_bool(tmpdev, "xhci-sg-trb-cache-size-quirk"))
++			xhci->quirks |= XHCI_SG_TRB_CACHE_SIZE_QUIRK;
++
+ 		device_property_read_u32(tmpdev, "imod-interval-ns",
+ 					 &xhci->imod_interval);
+ 	}
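
Together with the dwc3 hunk above, this plumbs a quirk through a device
property: the glue driver attaches a boolean property with
PROPERTY_ENTRY_BOOL(), and xhci-plat reads it back with
device_property_read_bool() to set XHCI_SG_TRB_CACHE_SIZE_QUIRK. A toy
version of the read-back side (the property lookup is simulated with a
string list; the quirk bit value is made up):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define XHCI_SG_TRB_CACHE_SIZE_QUIRK	(1u << 0)	/* illustrative bit */

/* Toy stand-in for device_property_read_bool() over the properties the
 * glue driver attached with PROPERTY_ENTRY_BOOL(). */
static bool property_read_bool(const char *const *props, const char *name)
{
	for (; *props; props++)
		if (!strcmp(*props, name))
			return true;
	return false;
}

int main(void)
{
	static const char *const props[] = {
		"xhci-sg-trb-cache-size-quirk",
		NULL,
	};
	unsigned int quirks = 0;

	if (property_read_bool(props, "xhci-sg-trb-cache-size-quirk"))
		quirks |= XHCI_SG_TRB_CACHE_SIZE_QUIRK;

	printf("quirks=%#x\n", quirks);
	return 0;
}
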
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index f3b5e6345858c..9673354d70d59 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2375,6 +2375,9 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 	/* handle completion code */
+ 	switch (trb_comp_code) {
+ 	case COMP_SUCCESS:
++		/* Don't overwrite status if TD had an error, see xHCI 4.9.1 */
++		if (td->error_mid_td)
++			break;
+ 		if (remaining) {
+ 			frame->status = short_framestatus;
+ 			if (xhci->quirks & XHCI_TRUST_TX_LENGTH)
+@@ -2390,9 +2393,13 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 	case COMP_BANDWIDTH_OVERRUN_ERROR:
+ 		frame->status = -ECOMM;
+ 		break;
+-	case COMP_ISOCH_BUFFER_OVERRUN:
+ 	case COMP_BABBLE_DETECTED_ERROR:
++		sum_trbs_for_length = true;
++		fallthrough;
++	case COMP_ISOCH_BUFFER_OVERRUN:
+ 		frame->status = -EOVERFLOW;
++		if (ep_trb != td->last_trb)
++			td->error_mid_td = true;
+ 		break;
+ 	case COMP_INCOMPATIBLE_DEVICE_ERROR:
+ 	case COMP_STALL_ERROR:
+@@ -2400,8 +2407,9 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 		break;
+ 	case COMP_USB_TRANSACTION_ERROR:
+ 		frame->status = -EPROTO;
++		sum_trbs_for_length = true;
+ 		if (ep_trb != td->last_trb)
+-			return 0;
++			td->error_mid_td = true;
+ 		break;
+ 	case COMP_STOPPED:
+ 		sum_trbs_for_length = true;
+@@ -2421,6 +2429,9 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 		break;
+ 	}
+ 
++	if (td->urb_length_set)
++		goto finish_td;
++
+ 	if (sum_trbs_for_length)
+ 		frame->actual_length = sum_trb_lengths(xhci, ep->ring, ep_trb) +
+ 			ep_trb_len - remaining;
+@@ -2429,6 +2440,14 @@ static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
+ 
+ 	td->urb->actual_length += frame->actual_length;
+ 
++finish_td:
++	/* Don't give back TD yet if we encountered an error mid TD */
++	if (td->error_mid_td && ep_trb != td->last_trb) {
++		xhci_dbg(xhci, "Error mid isoc TD, wait for final completion event\n");
++		td->urb_length_set = true;
++		return 0;
++	}
++
+ 	return finish_td(xhci, ep, ep_ring, td, trb_comp_code);
+ }
+ 
+@@ -2807,17 +2826,51 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 		}
+ 
+ 		if (!ep_seg) {
+-			if (!ep->skip ||
+-			    !usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
+-				/* Some host controllers give a spurious
+-				 * successful event after a short transfer.
+-				 * Ignore it.
+-				 */
+-				if ((xhci->quirks & XHCI_SPURIOUS_SUCCESS) &&
+-						ep_ring->last_td_was_short) {
+-					ep_ring->last_td_was_short = false;
+-					goto cleanup;
++
++			if (ep->skip && usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
++				skip_isoc_td(xhci, td, ep, status);
++				goto cleanup;
++			}
++
++			/*
++			 * Some hosts give a spurious success event after a short
++			 * transfer. Ignore it.
++			 */
++			if ((xhci->quirks & XHCI_SPURIOUS_SUCCESS) &&
++			    ep_ring->last_td_was_short) {
++				ep_ring->last_td_was_short = false;
++				goto cleanup;
++			}
++
++			/*
++			 * xhci 4.10.2 states isoc endpoints should continue
++			 * processing the next TD if there was an error mid TD.
++			 * So hosts like NEC don't generate an event for the last
++			 * isoc TRB even if the IOC flag is set.
++			 * xhci 4.9.1 states that if there are errors in multi-TRB
++			 * TDs the xHC should generate an error for that TRB, and if
++			 * the xHC proceeds to the next TD it should generate an event
++			 * for any TRB with the IOC flag on the way. Other hosts follow this.
++			 * So this event might be for the next TD.
++			 */
++			if (td->error_mid_td &&
++			    !list_is_last(&td->td_list, &ep_ring->td_list)) {
++				struct xhci_td *td_next = list_next_entry(td, td_list);
++
++				ep_seg = trb_in_td(xhci, td_next->start_seg, td_next->first_trb,
++						   td_next->last_trb, ep_trb_dma, false);
++				if (ep_seg) {
++					/* give back previous TD, start handling new */
++					xhci_dbg(xhci, "Missing TD completion event after mid TD error\n");
++					ep_ring->dequeue = td->last_trb;
++					ep_ring->deq_seg = td->last_trb_seg;
++					inc_deq(xhci, ep_ring);
++					xhci_td_cleanup(xhci, td, ep_ring, td->status);
++					td = td_next;
+ 				}
++			}
++
++			if (!ep_seg) {
+ 				/* HC is busted, give up! */
+ 				xhci_err(xhci,
+ 					"ERROR Transfer event TRB DMA ptr not "
+@@ -2829,9 +2882,6 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 					  ep_trb_dma, true);
+ 				return -ESHUTDOWN;
+ 			}
+-
+-			skip_isoc_td(xhci, td, ep, status);
+-			goto cleanup;
+ 		}
+ 		if (trb_comp_code == COMP_SHORT_PACKET)
+ 			ep_ring->last_td_was_short = true;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 3ea5c092bba71..6f53a950d9c09 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1547,6 +1547,7 @@ struct xhci_td {
+ 	struct xhci_segment	*bounce_seg;
+ 	/* actual_length of the URB has already been set */
+ 	bool			urb_length_set;
++	bool			error_mid_td;
+ 	unsigned int		num_trbs;
+ };
+ 
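
The new error_mid_td flag defers TD giveback: a failing TRB in the middle of
a multi-TRB isoc TD is only recorded, and the TD is completed once its final
TRB event (or the inferred missing event) arrives. A tiny model of that
bookkeeping, assuming one event per TRB and ignoring ring details:

#include <stdbool.h>
#include <stdio.h>

/* A TD spanning several TRBs: an error on a non-final TRB is recorded,
 * and the TD is only given back on its final TRB's event. */
struct td {
	int num_trbs;
	bool error_mid_td;
};

static void handle_trb_event(struct td *td, int trb, bool error)
{
	bool last = (trb == td->num_trbs - 1);

	if (error && !last)
		td->error_mid_td = true;

	if (!last && td->error_mid_td) {
		printf("TRB %d: error mid TD, wait for final event\n", trb);
		return;
	}
	if (last)
		printf("TRB %d: give back TD, error=%d\n",
		       trb, td->error_mid_td);
}

int main(void)
{
	struct td td = { .num_trbs = 3, .error_mid_td = false };

	handle_trb_event(&td, 0, true);		/* transaction error mid TD */
	handle_trb_event(&td, 1, false);
	handle_trb_event(&td, 2, false);	/* final TRB: give back */
	return 0;
}
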
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 1e61fe0431715..923e0ed85444b 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -146,6 +146,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */
+ 	{ USB_DEVICE(0x10C4, 0x8664) }, /* AC-Services CAN-IF */
+ 	{ USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */
++	{ USB_DEVICE(0x10C4, 0x87ED) }, /* IMST USB-Stick for Smart Meter */
+ 	{ USB_DEVICE(0x10C4, 0x8856) },	/* CEL EM357 ZigBee USB Stick - LR */
+ 	{ USB_DEVICE(0x10C4, 0x8857) },	/* CEL EM357 ZigBee USB Stick */
+ 	{ USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 72390dbf07692..2ae124c49d448 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2269,6 +2269,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) },			/* Fibocom FM160 (MBIM mode) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a3, 0xff) },			/* Fibocom FM101-GL (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff),			/* Fibocom FM101-GL (laptop MBIM) */
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index b1e844bf31f81..703a9c5635573 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -184,6 +184,8 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x413c, 0x81d0)},   /* Dell Wireless 5819 */
+ 	{DEVICE_SWI(0x413c, 0x81d1)},   /* Dell Wireless 5818 */
+ 	{DEVICE_SWI(0x413c, 0x81d2)},   /* Dell Wireless 5818 */
++	{DEVICE_SWI(0x413c, 0x8217)},	/* Dell Wireless DW5826e */
++	{DEVICE_SWI(0x413c, 0x8218)},	/* Dell Wireless DW5826e QDL */
+ 
+ 	/* Huawei devices */
+ 	{DEVICE_HWI(0x03f0, 0x581d)},	/* HP lt4112 LTE/HSPA+ Gobi 4G Modem (Huawei me906e) */
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index bfb6f9481e87f..a1487bc670f15 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4862,8 +4862,7 @@ static void run_state_machine(struct tcpm_port *port)
+ 		break;
+ 	case PORT_RESET:
+ 		tcpm_reset_port(port);
+-		tcpm_set_cc(port, tcpm_default_state(port) == SNK_UNATTACHED ?
+-			    TYPEC_CC_RD : tcpm_rp_cc(port));
++		tcpm_set_cc(port, TYPEC_CC_OPEN);
+ 		tcpm_set_state(port, PORT_RESET_WAIT_OFF,
+ 			       PD_T_ERROR_RECOVERY);
+ 		break;
+diff --git a/fs/bcachefs/clock.c b/fs/bcachefs/clock.c
+index f41889093a2c7..3636444511064 100644
+--- a/fs/bcachefs/clock.c
++++ b/fs/bcachefs/clock.c
+@@ -109,7 +109,7 @@ void bch2_kthread_io_clock_wait(struct io_clock *clock,
+ 	if (cpu_timeout != MAX_SCHEDULE_TIMEOUT)
+ 		mod_timer(&wait.cpu_timer, cpu_timeout + jiffies);
+ 
+-	while (1) {
++	do {
+ 		set_current_state(TASK_INTERRUPTIBLE);
+ 		if (kthread && kthread_should_stop())
+ 			break;
+@@ -119,7 +119,7 @@ void bch2_kthread_io_clock_wait(struct io_clock *clock,
+ 
+ 		schedule();
+ 		try_to_freeze();
+-	}
++	} while (0);
+ 
+ 	__set_current_state(TASK_RUNNING);
+ 	del_timer_sync(&wait.cpu_timer);
+diff --git a/fs/bcachefs/fs-io.c b/fs/bcachefs/fs-io.c
+index b0e8144ec5500..fffa1743dfd22 100644
+--- a/fs/bcachefs/fs-io.c
++++ b/fs/bcachefs/fs-io.c
+@@ -79,7 +79,7 @@ void bch2_inode_flush_nocow_writes_async(struct bch_fs *c,
+ 			continue;
+ 
+ 		bio = container_of(bio_alloc_bioset(ca->disk_sb.bdev, 0,
+-						    REQ_OP_FLUSH,
++						    REQ_OP_WRITE|REQ_PREFLUSH,
+ 						    GFP_KERNEL,
+ 						    &c->nocow_flush_bioset),
+ 				   struct nocow_flush, bio);
+diff --git a/fs/bcachefs/fs-ioctl.c b/fs/bcachefs/fs-ioctl.c
+index 14d5cc6f90d7d..dbc87747eaafd 100644
+--- a/fs/bcachefs/fs-ioctl.c
++++ b/fs/bcachefs/fs-ioctl.c
+@@ -345,11 +345,12 @@ static long __bch2_ioctl_subvolume_create(struct bch_fs *c, struct file *filp,
+ 	if (arg.flags & BCH_SUBVOL_SNAPSHOT_RO)
+ 		create_flags |= BCH_CREATE_SNAPSHOT_RO;
+ 
+-	/* why do we need this lock? */
+-	down_read(&c->vfs_sb->s_umount);
+-
+-	if (arg.flags & BCH_SUBVOL_SNAPSHOT_CREATE)
++	if (arg.flags & BCH_SUBVOL_SNAPSHOT_CREATE) {
++		/* sync_inodes_sb() requires s_umount to be held */
++		down_read(&c->vfs_sb->s_umount);
+ 		sync_inodes_sb(c->vfs_sb);
++		up_read(&c->vfs_sb->s_umount);
++	}
+ retry:
+ 	if (arg.src_ptr) {
+ 		error = user_path_at(arg.dirfd,
+@@ -433,8 +434,6 @@ err2:
+ 		goto retry;
+ 	}
+ err1:
+-	up_read(&c->vfs_sb->s_umount);
+-
+ 	return error;
+ }
+ 
+@@ -451,33 +450,36 @@ static long bch2_ioctl_subvolume_create(struct bch_fs *c, struct file *filp,
+ static long bch2_ioctl_subvolume_destroy(struct bch_fs *c, struct file *filp,
+ 				struct bch_ioctl_subvolume arg)
+ {
++	const char __user *name = (void __user *)(unsigned long)arg.dst_ptr;
+ 	struct path path;
+ 	struct inode *dir;
++	struct dentry *victim;
+ 	int ret = 0;
+ 
+ 	if (arg.flags)
+ 		return -EINVAL;
+ 
+-	ret = user_path_at(arg.dirfd,
+-			(const char __user *)(unsigned long)arg.dst_ptr,
+-			LOOKUP_FOLLOW, &path);
+-	if (ret)
+-		return ret;
++	victim = user_path_locked_at(arg.dirfd, name, &path);
++	if (IS_ERR(victim))
++		return PTR_ERR(victim);
+ 
+-	if (path.dentry->d_sb->s_fs_info != c) {
++	dir = d_inode(path.dentry);
++	if (victim->d_sb->s_fs_info != c) {
+ 		ret = -EXDEV;
+ 		goto err;
+ 	}
+-
+-	dir = path.dentry->d_parent->d_inode;
+-
+-	ret = __bch2_unlink(dir, path.dentry, true);
+-	if (ret)
++	if (!d_is_positive(victim)) {
++		ret = -ENOENT;
+ 		goto err;
+-
+-	fsnotify_rmdir(dir, path.dentry);
+-	d_delete(path.dentry);
++	}
++	ret = __bch2_unlink(dir, victim, true);
++	if (!ret) {
++		fsnotify_rmdir(dir, victim);
++		d_delete(victim);
++	}
+ err:
++	inode_unlock(dir);
++	dput(victim);
+ 	path_put(&path);
+ 	return ret;
+ }
+diff --git a/fs/bcachefs/journal_io.c b/fs/bcachefs/journal_io.c
+index 3eb6c3f62a811..6ab756a485350 100644
+--- a/fs/bcachefs/journal_io.c
++++ b/fs/bcachefs/journal_io.c
+@@ -1948,7 +1948,8 @@ CLOSURE_CALLBACK(bch2_journal_write)
+ 			percpu_ref_get(&ca->io_ref);
+ 
+ 			bio = ca->journal.bio;
+-			bio_reset(bio, ca->disk_sb.bdev, REQ_OP_FLUSH);
++			bio_reset(bio, ca->disk_sb.bdev,
++				  REQ_OP_WRITE|REQ_PREFLUSH);
+ 			bio->bi_end_io		= journal_write_endio;
+ 			bio->bi_private		= ca;
+ 			closure_bio_submit(bio, cl);
+diff --git a/fs/bcachefs/move.c b/fs/bcachefs/move.c
+index 54830ee0ed886..f3dac4511af19 100644
+--- a/fs/bcachefs/move.c
++++ b/fs/bcachefs/move.c
+@@ -152,7 +152,7 @@ void bch2_move_ctxt_wait_for_io(struct moving_context *ctxt)
+ 		atomic_read(&ctxt->write_sectors) != sectors_pending);
+ }
+ 
+-static void bch2_moving_ctxt_flush_all(struct moving_context *ctxt)
++void bch2_moving_ctxt_flush_all(struct moving_context *ctxt)
+ {
+ 	move_ctxt_wait_event(ctxt, list_empty(&ctxt->reads));
+ 	bch2_trans_unlock_long(ctxt->trans);
+diff --git a/fs/bcachefs/move.h b/fs/bcachefs/move.h
+index 0906aa2d1de29..c5a7aed2e1ae1 100644
+--- a/fs/bcachefs/move.h
++++ b/fs/bcachefs/move.h
+@@ -81,6 +81,7 @@ void bch2_moving_ctxt_init(struct moving_context *, struct bch_fs *,
+ 			   struct write_point_specifier, bool);
+ struct moving_io *bch2_moving_ctxt_next_pending_write(struct moving_context *);
+ void bch2_moving_ctxt_do_pending_writes(struct moving_context *);
++void bch2_moving_ctxt_flush_all(struct moving_context *);
+ void bch2_move_ctxt_wait_for_io(struct moving_context *);
+ int bch2_move_ratelimit(struct moving_context *);
+ 
+diff --git a/fs/bcachefs/rebalance.c b/fs/bcachefs/rebalance.c
+index 3319190b8d9c3..dd6fed2581003 100644
+--- a/fs/bcachefs/rebalance.c
++++ b/fs/bcachefs/rebalance.c
+@@ -317,8 +317,16 @@ static int do_rebalance(struct moving_context *ctxt)
+ 			     BTREE_ID_rebalance_work, POS_MIN,
+ 			     BTREE_ITER_ALL_SNAPSHOTS);
+ 
+-	while (!bch2_move_ratelimit(ctxt) &&
+-	       !kthread_wait_freezable(r->enabled)) {
++	while (!bch2_move_ratelimit(ctxt)) {
++		if (!r->enabled) {
++			bch2_moving_ctxt_flush_all(ctxt);
++			kthread_wait_freezable(r->enabled ||
++					       kthread_should_stop());
++		}
++
++		if (kthread_should_stop())
++			break;
++
+ 		bch2_trans_begin(trans);
+ 
+ 		ret = bkey_err(k = next_rebalance_entry(trans, &rebalance_work_iter));
+@@ -348,6 +356,7 @@ static int do_rebalance(struct moving_context *ctxt)
+ 	    !kthread_should_stop() &&
+ 	    !atomic64_read(&r->work_stats.sectors_seen) &&
+ 	    !atomic64_read(&r->scan_stats.sectors_seen)) {
++		bch2_moving_ctxt_flush_all(ctxt);
+ 		bch2_trans_unlock_long(trans);
+ 		rebalance_wait(c);
+ 	}
+diff --git a/fs/bcachefs/replicas.c b/fs/bcachefs/replicas.c
+index 2008fe8bf7060..1c4a8f5c92c67 100644
+--- a/fs/bcachefs/replicas.c
++++ b/fs/bcachefs/replicas.c
+@@ -9,6 +9,12 @@
+ static int bch2_cpu_replicas_to_sb_replicas(struct bch_fs *,
+ 					    struct bch_replicas_cpu *);
+ 
++/* Some (buggy!) compilers don't allow memcmp to be passed as a pointer */
++static int bch2_memcmp(const void *l, const void *r, size_t size)
++{
++	return memcmp(l, r, size);
++}
++
+ /* Replicas tracking - in memory: */
+ 
+ static void verify_replicas_entry(struct bch_replicas_entry *e)
+@@ -33,7 +39,7 @@ void bch2_replicas_entry_sort(struct bch_replicas_entry *e)
+ 
+ static void bch2_cpu_replicas_sort(struct bch_replicas_cpu *r)
+ {
+-	eytzinger0_sort(r->entries, r->nr, r->entry_size, memcmp, NULL);
++	eytzinger0_sort(r->entries, r->nr, r->entry_size, bch2_memcmp, NULL);
+ }
+ 
+ static void bch2_replicas_entry_v0_to_text(struct printbuf *out,
+@@ -833,7 +839,7 @@ static int bch2_cpu_replicas_validate(struct bch_replicas_cpu *cpu_r,
+ 	sort_cmp_size(cpu_r->entries,
+ 		      cpu_r->nr,
+ 		      cpu_r->entry_size,
+-		      memcmp, NULL);
++		      bch2_memcmp, NULL);
+ 
+ 	for (i = 0; i < cpu_r->nr; i++) {
+ 		struct bch_replicas_entry *e =
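
The bch2_memcmp() wrapper works around toolchains that refuse to take
memcmp's address for use as a comparison callback; a named local function
always works. The same trick in plain C with qsort() (fixed-size keys, so
byte order equals sort order):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Wrapping memcmp in a local function sidesteps compilers that object to
 * passing the builtin itself as a function pointer. */
static int my_memcmp(const void *l, const void *r)
{
	return memcmp(l, r, 4);
}

int main(void)
{
	unsigned char keys[3][4] = { "ccc", "aaa", "bbb" };

	qsort(keys, 3, 4, my_memcmp);
	for (int i = 0; i < 3; i++)
		printf("%s\n", keys[i]);
	return 0;
}
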
+diff --git a/fs/bcachefs/snapshot.c b/fs/bcachefs/snapshot.c
+index 5dac038f08519..bf5d6f4e9abcb 100644
+--- a/fs/bcachefs/snapshot.c
++++ b/fs/bcachefs/snapshot.c
+@@ -1709,5 +1709,5 @@ int bch2_snapshots_read(struct bch_fs *c)
+ 
+ void bch2_fs_snapshots_exit(struct bch_fs *c)
+ {
+-	kfree(rcu_dereference_protected(c->snapshots, true));
++	kvfree(rcu_dereference_protected(c->snapshots, true));
+ }
+diff --git a/fs/bcachefs/util.c b/fs/bcachefs/util.c
+index 84b142fcc3dfc..3b7c349f2665a 100644
+--- a/fs/bcachefs/util.c
++++ b/fs/bcachefs/util.c
+@@ -362,14 +362,15 @@ static inline void bch2_time_stats_update_one(struct bch2_time_stats *stats,
+ 		bch2_quantiles_update(&stats->quantiles, duration);
+ 	}
+ 
+-	if (time_after64(end, stats->last_event)) {
++	if (stats->last_event && time_after64(end, stats->last_event)) {
+ 		freq = end - stats->last_event;
+ 		mean_and_variance_update(&stats->freq_stats, freq);
+ 		mean_and_variance_weighted_update(&stats->freq_stats_weighted, freq);
+ 		stats->max_freq = max(stats->max_freq, freq);
+ 		stats->min_freq = min(stats->min_freq, freq);
+-		stats->last_event = end;
+ 	}
++
++	stats->last_event = end;
+ }
+ 
+ static noinline void bch2_time_stats_clear_buffer(struct bch2_time_stats *stats,
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 0679240f06db9..7d41c3e03476b 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -78,6 +78,8 @@ struct inode *ceph_new_inode(struct inode *dir, struct dentry *dentry,
+ 	if (!inode)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	inode->i_blkbits = CEPH_FSCRYPT_BLOCK_SHIFT;
++
+ 	if (!S_ISLNK(*mode)) {
+ 		err = ceph_pre_init_acls(dir, mode, as_ctx);
+ 		if (err < 0)
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 8408318e1d32e..3c5786841c6c8 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1233,6 +1233,24 @@ void ext4_mb_generate_buddy(struct super_block *sb,
+ 	atomic64_add(period, &sbi->s_mb_generation_time);
+ }
+ 
++static void mb_regenerate_buddy(struct ext4_buddy *e4b)
++{
++	int count;
++	int order = 1;
++	void *buddy;
++
++	while ((buddy = mb_find_buddy(e4b, order++, &count)))
++		mb_set_bits(buddy, 0, count);
++
++	e4b->bd_info->bb_fragments = 0;
++	memset(e4b->bd_info->bb_counters, 0,
++		sizeof(*e4b->bd_info->bb_counters) *
++		(e4b->bd_sb->s_blocksize_bits + 2));
++
++	ext4_mb_generate_buddy(e4b->bd_sb, e4b->bd_buddy,
++		e4b->bd_bitmap, e4b->bd_group, e4b->bd_info);
++}
++
+ /* The buddy information is attached to the buddy cache inode
+  * for convenience. The information regarding each group
+  * is loaded via ext4_mb_load_buddy. The information involves
+@@ -1921,6 +1939,8 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 			ext4_mark_group_bitmap_corrupted(
+ 				sb, e4b->bd_group,
+ 				EXT4_GROUP_INFO_BBITMAP_CORRUPT);
++		} else {
++			mb_regenerate_buddy(e4b);
+ 		}
+ 		goto done;
+ 	}
+diff --git a/fs/namei.c b/fs/namei.c
+index 29bafbdb44ca8..c981dec3c582f 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2573,13 +2573,13 @@ static int filename_parentat(int dfd, struct filename *name,
+ }
+ 
+ /* does lookup, returns the object with parent locked */
+-static struct dentry *__kern_path_locked(struct filename *name, struct path *path)
++static struct dentry *__kern_path_locked(int dfd, struct filename *name, struct path *path)
+ {
+ 	struct dentry *d;
+ 	struct qstr last;
+ 	int type, error;
+ 
+-	error = filename_parentat(AT_FDCWD, name, 0, path, &last, &type);
++	error = filename_parentat(dfd, name, 0, path, &last, &type);
+ 	if (error)
+ 		return ERR_PTR(error);
+ 	if (unlikely(type != LAST_NORM)) {
+@@ -2598,12 +2598,22 @@ static struct dentry *__kern_path_locked(struct filename *name, struct path *pat
+ struct dentry *kern_path_locked(const char *name, struct path *path)
+ {
+ 	struct filename *filename = getname_kernel(name);
+-	struct dentry *res = __kern_path_locked(filename, path);
++	struct dentry *res = __kern_path_locked(AT_FDCWD, filename, path);
+ 
+ 	putname(filename);
+ 	return res;
+ }
+ 
++struct dentry *user_path_locked_at(int dfd, const char __user *name, struct path *path)
++{
++	struct filename *filename = getname(name);
++	struct dentry *res = __kern_path_locked(dfd, filename, path);
++
++	putname(filename);
++	return res;
++}
++EXPORT_SYMBOL(user_path_locked_at);
++
+ int kern_path(const char *name, unsigned int flags, struct path *path)
+ {
+ 	struct filename *filename = getname_kernel(name);
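
The new user_path_locked_at() returns the looked-up dentry with its parent
directory inode already locked, which is exactly what the bcachefs
subvolume-destroy conversion above needs. A rough kernel-style usage sketch
mirroring that caller (my_fs_unlink() is a hypothetical per-filesystem
helper; the d_is_positive() and same-superblock checks of the real caller
are elided):

/* Look the name up with the parent locked, operate, then drop
 * everything in reverse order. */
static int remove_by_name(int dfd, const char __user *name)
{
	struct path path;
	struct dentry *victim;
	struct inode *dir;
	int ret;

	victim = user_path_locked_at(dfd, name, &path);
	if (IS_ERR(victim))
		return PTR_ERR(victim);

	dir = d_inode(path.dentry);	/* parent, already locked */
	ret = my_fs_unlink(dir, victim);

	inode_unlock(dir);
	dput(victim);
	path_put(&path);
	return ret;
}
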
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index f6706143d14bc..a46d30b84bf39 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -473,7 +473,7 @@ bool al_delete_le(struct ntfs_inode *ni, enum ATTR_TYPE type, CLST vcn,
+ int al_update(struct ntfs_inode *ni, int sync);
+ static inline size_t al_aligned(size_t size)
+ {
+-	return (size + 1023) & ~(size_t)1023;
++	return size_add(size, 1023) & ~(size_t)1023;
+ }
+ 
+ /* Globals from bitfunc.c */
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index a16e175731eb3..a1b9734564711 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -269,6 +269,8 @@ int cifs_try_adding_channels(struct cifs_ses *ses)
+ 					 &iface->sockaddr,
+ 					 rc);
+ 				kref_put(&iface->refcount, release_iface);
++				/* failure to add chan should increase weight */
++				iface->weight_fulfilled++;
+ 				continue;
+ 			}
+ 
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index f5006aa97f5b3..5d9c87d2e1e01 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -410,7 +410,7 @@ skip_sess_setup:
+ 		rc = SMB3_request_interfaces(xid, tcon, false);
+ 		free_xid(xid);
+ 
+-		if (rc == -EOPNOTSUPP) {
++		if (rc == -EOPNOTSUPP && ses->chan_count > 1) {
+ 			/*
+ 			 * some servers like Azure SMB server do not advertise
+ 			 * that multichannel has been disabled with server
+diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
+index 84ec53ccc4502..7ee8a179d1036 100644
+--- a/include/asm-generic/cacheflush.h
++++ b/include/asm-generic/cacheflush.h
+@@ -91,6 +91,12 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
+ }
+ #endif
+ 
++#ifndef flush_cache_vmap_early
++static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
++{
++}
++#endif
++
+ #ifndef flush_cache_vunmap
+ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
+ {
+diff --git a/include/linux/ceph/messenger.h b/include/linux/ceph/messenger.h
+index 2eaaabbe98cb6..1717cc57cdacd 100644
+--- a/include/linux/ceph/messenger.h
++++ b/include/linux/ceph/messenger.h
+@@ -283,7 +283,7 @@ struct ceph_msg {
+ 	struct kref kref;
+ 	bool more_to_follow;
+ 	bool needs_out_seq;
+-	bool sparse_read;
++	u64 sparse_read_total;
+ 	int front_alloc_len;
+ 
+ 	struct ceph_msgpool *pool;
+diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
+index 3df70d6131c8f..752dbde4cec1f 100644
+--- a/include/linux/dmaengine.h
++++ b/include/linux/dmaengine.h
+@@ -953,7 +953,8 @@ static inline int dmaengine_slave_config(struct dma_chan *chan,
+ 
+ static inline bool is_slave_direction(enum dma_transfer_direction direction)
+ {
+-	return (direction == DMA_MEM_TO_DEV) || (direction == DMA_DEV_TO_MEM);
++	return (direction == DMA_MEM_TO_DEV) || (direction == DMA_DEV_TO_MEM) ||
++	       (direction == DMA_DEV_TO_DEV);
+ }
+ 
+ static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index f2044d5a652b5..254d4a898179c 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -197,6 +197,7 @@ enum  hrtimer_base_type {
+  * @max_hang_time:	Maximum time spent in hrtimer_interrupt
+  * @softirq_expiry_lock: Lock which is taken while softirq based hrtimer are
+  *			 expired
++ * @online:		CPU is online from an hrtimers point of view
+  * @timer_waiters:	A hrtimer_cancel() invocation waits for the timer
+  *			callback to finish.
+  * @expires_next:	absolute time of the next event, is required for remote
+@@ -219,7 +220,8 @@ struct hrtimer_cpu_base {
+ 	unsigned int			hres_active		: 1,
+ 					in_hrtirq		: 1,
+ 					hang_detected		: 1,
+-					softirq_activated       : 1;
++					softirq_activated       : 1,
++					online			: 1;
+ #ifdef CONFIG_HIGH_RES_TIMERS
+ 	unsigned int			nr_events;
+ 	unsigned short			nr_retries;
+diff --git a/include/linux/namei.h b/include/linux/namei.h
+index 3100371b5e321..74e0cc14ebf86 100644
+--- a/include/linux/namei.h
++++ b/include/linux/namei.h
+@@ -66,6 +66,7 @@ extern struct dentry *kern_path_create(int, const char *, struct path *, unsigne
+ extern struct dentry *user_path_create(int, const char __user *, struct path *, unsigned int);
+ extern void done_path_create(struct path *, struct dentry *);
+ extern struct dentry *kern_path_locked(const char *, struct path *);
++extern struct dentry *user_path_locked_at(int , const char __user *, struct path *);
+ int vfs_path_parent_lookup(struct filename *filename, unsigned int flags,
+ 			   struct path *parent, struct qstr *last, int *type,
+ 			   const struct path *root);
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index bc80960fad7c4..675937a5bd7ce 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1417,6 +1417,7 @@ int pci_load_and_free_saved_state(struct pci_dev *dev,
+ 				  struct pci_saved_state **state);
+ int pci_platform_power_transition(struct pci_dev *dev, pci_power_t state);
+ int pci_set_power_state(struct pci_dev *dev, pci_power_t state);
++int pci_set_power_state_locked(struct pci_dev *dev, pci_power_t state);
+ pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state);
+ bool pci_pme_capable(struct pci_dev *dev, pci_power_t state);
+ void pci_pme_active(struct pci_dev *dev, bool enable);
+@@ -1620,6 +1621,8 @@ int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max,
+ 
+ void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
+ 		  void *userdata);
++void pci_walk_bus_locked(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
++			 void *userdata);
+ int pci_cfg_space_size(struct pci_dev *dev);
+ unsigned char pci_bus_max_busnr(struct pci_bus *bus);
+ void pci_setup_bridge(struct pci_bus *bus);
+@@ -2019,6 +2022,8 @@ static inline int pci_save_state(struct pci_dev *dev) { return 0; }
+ static inline void pci_restore_state(struct pci_dev *dev) { }
+ static inline int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+ { return 0; }
++static inline int pci_set_power_state_locked(struct pci_dev *dev, pci_power_t state)
++{ return 0; }
+ static inline int pci_wake_from_d3(struct pci_dev *dev, bool enable)
+ { return 0; }
+ static inline pci_power_t pci_choose_state(struct pci_dev *dev,
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 4ecfb06c413db..8f2c487618334 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -2865,6 +2865,8 @@ struct cfg80211_bss_ies {
+  *	own the beacon_ies, but they're just pointers to the ones from the
+  *	@hidden_beacon_bss struct)
+  * @proberesp_ies: the information elements from the last Probe Response frame
++ * @proberesp_ecsa_stuck: ECSA element is stuck in the Probe Response frame,
++ *	cannot rely on it having valid data
+  * @hidden_beacon_bss: in case this BSS struct represents a probe response from
+  *	a BSS that hides the SSID in its beacon, this points to the BSS struct
+  *	that holds the beacon data. @beacon_ies is still valid, of course, and
+@@ -2900,6 +2902,8 @@ struct cfg80211_bss {
+ 	u8 chains;
+ 	s8 chain_signal[IEEE80211_MAX_CHAINS];
+ 
++	u8 proberesp_ecsa_stuck:1;
++
+ 	u8 bssid_index;
+ 	u8 max_bssid_indicator;
+ 
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 4e8ecabc5f25c..62013d0184115 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -798,10 +798,16 @@ static inline struct nft_set_elem_expr *nft_set_ext_expr(const struct nft_set_ex
+ 	return nft_set_ext(ext, NFT_SET_EXT_EXPRESSIONS);
+ }
+ 
+-static inline bool nft_set_elem_expired(const struct nft_set_ext *ext)
++static inline bool __nft_set_elem_expired(const struct nft_set_ext *ext,
++					  u64 tstamp)
+ {
+ 	return nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION) &&
+-	       time_is_before_eq_jiffies64(*nft_set_ext_expiration(ext));
++	       time_after_eq64(tstamp, *nft_set_ext_expiration(ext));
++}
++
++static inline bool nft_set_elem_expired(const struct nft_set_ext *ext)
++{
++	return __nft_set_elem_expired(ext, get_jiffies_64());
+ }
+ 
+ static inline struct nft_set_ext *nft_set_elem_ext(const struct nft_set *set,
+@@ -1750,6 +1756,7 @@ struct nftables_pernet {
+ 	struct list_head	notify_list;
+ 	struct mutex		commit_mutex;
+ 	u64			table_handle;
++	u64			tstamp;
+ 	unsigned int		base_seq;
+ 	unsigned int		gc_seq;
+ 	u8			validate_state;
+@@ -1762,6 +1769,11 @@ static inline struct nftables_pernet *nft_pernet(const struct net *net)
+ 	return net_generic(net, nf_tables_net_id);
+ }
+ 
++static inline u64 nft_net_tstamp(const struct net *net)
++{
++	return nft_pernet(net)->tstamp;
++}
++
+ #define __NFT_REDUCE_READONLY	1UL
+ #define NFT_REDUCE_READONLY	(void *)__NFT_REDUCE_READONLY
+ 
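
nft_net_tstamp() exists so a whole transaction tests expiry against one
clock sample instead of calling get_jiffies_64() per element, which keeps
expired/alive decisions stable across the run. A small model of
snapshot-based expiry checks (names and values are illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct elem {
	const char *name;
	uint64_t expiration;
};

/* Every element is tested against the same snapshot, so none can flip
 * from alive to expired halfway through the pass. */
static bool elem_expired(const struct elem *e, uint64_t tstamp)
{
	return tstamp >= e->expiration;
}

int main(void)
{
	const struct elem elems[] = { { "a", 100 }, { "b", 205 } };
	uint64_t tstamp = 200;	/* one clock sample for the whole pass */

	for (unsigned int i = 0; i < 2; i++)
		printf("%s expired=%d\n", elems[i].name,
		       (int)elem_expired(&elems[i], tstamp));
	return 0;
}
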
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 4c1ef7b3705c2..87b8de9b6c1c4 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -128,6 +128,7 @@
+ 	EM(rxrpc_skb_eaten_by_unshare_nomem,	"ETN unshar-nm") \
+ 	EM(rxrpc_skb_get_conn_secured,		"GET conn-secd") \
+ 	EM(rxrpc_skb_get_conn_work,		"GET conn-work") \
++	EM(rxrpc_skb_get_last_nack,		"GET last-nack") \
+ 	EM(rxrpc_skb_get_local_work,		"GET locl-work") \
+ 	EM(rxrpc_skb_get_reject_work,		"GET rej-work ") \
+ 	EM(rxrpc_skb_get_to_recvmsg,		"GET to-recv  ") \
+@@ -141,6 +142,7 @@
+ 	EM(rxrpc_skb_put_error_report,		"PUT error-rep") \
+ 	EM(rxrpc_skb_put_input,			"PUT input    ") \
+ 	EM(rxrpc_skb_put_jumbo_subpacket,	"PUT jumbo-sub") \
++	EM(rxrpc_skb_put_last_nack,		"PUT last-nack") \
+ 	EM(rxrpc_skb_put_purge,			"PUT purge    ") \
+ 	EM(rxrpc_skb_put_rotate,		"PUT rotate   ") \
+ 	EM(rxrpc_skb_put_unknown,		"PUT unknown  ") \
+@@ -1552,7 +1554,7 @@ TRACE_EVENT(rxrpc_congest,
+ 		    memcpy(&__entry->sum, summary, sizeof(__entry->sum));
+ 			   ),
+ 
+-	    TP_printk("c=%08x r=%08x %s q=%08x %s cw=%u ss=%u nA=%u,%u+%u r=%u b=%u u=%u d=%u l=%x%s%s%s",
++	    TP_printk("c=%08x r=%08x %s q=%08x %s cw=%u ss=%u nA=%u,%u+%u,%u b=%u u=%u d=%u l=%x%s%s%s",
+ 		      __entry->call,
+ 		      __entry->ack_serial,
+ 		      __print_symbolic(__entry->sum.ack_reason, rxrpc_ack_names),
+@@ -1560,9 +1562,9 @@ TRACE_EVENT(rxrpc_congest,
+ 		      __print_symbolic(__entry->sum.mode, rxrpc_congest_modes),
+ 		      __entry->sum.cwnd,
+ 		      __entry->sum.ssthresh,
+-		      __entry->sum.nr_acks, __entry->sum.saw_nacks,
++		      __entry->sum.nr_acks, __entry->sum.nr_retained_nacks,
+ 		      __entry->sum.nr_new_acks,
+-		      __entry->sum.nr_rot_new_acks,
++		      __entry->sum.nr_new_nacks,
+ 		      __entry->top - __entry->hard_ack,
+ 		      __entry->sum.cumulative_acks,
+ 		      __entry->sum.dup_acks,
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index ca30232b7bc8a..117c6a9b845b1 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -285,9 +285,11 @@ enum nft_rule_attributes {
+ /**
+  * enum nft_rule_compat_flags - nf_tables rule compat flags
+  *
++ * @NFT_RULE_COMPAT_F_UNUSED: unused
+  * @NFT_RULE_COMPAT_F_INV: invert the check result
+  */
+ enum nft_rule_compat_flags {
++	NFT_RULE_COMPAT_F_UNUSED = (1 << 0),
+ 	NFT_RULE_COMPAT_F_INV	= (1 << 1),
+ 	NFT_RULE_COMPAT_F_MASK	= NFT_RULE_COMPAT_F_INV,
+ };
+diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
+index ed84f2737b3a3..c9992cd7f1385 100644
+--- a/io_uring/io_uring.h
++++ b/io_uring/io_uring.h
+@@ -30,6 +30,13 @@ enum {
+ 	IOU_OK			= 0,
+ 	IOU_ISSUE_SKIP_COMPLETE	= -EIOCBQUEUED,
+ 
++	/*
++	 * Requeue the task_work to restart operations on this request. The
++	 * actual value isn't important, should just be not an otherwise
++	 * valid error code, yet less than -MAX_ERRNO and valid internally.
++	 */
++	IOU_REQUEUE		= -3072,
++
+ 	/*
+ 	 * Intended only when both IO_URING_F_MULTISHOT is passed
+ 	 * to indicate to the poll runner that multishot should be
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 75d494dad7e2c..43bc9a5f96f9d 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -60,6 +60,7 @@ struct io_sr_msg {
+ 	unsigned			len;
+ 	unsigned			done_io;
+ 	unsigned			msg_flags;
++	unsigned			nr_multishot_loops;
+ 	u16				flags;
+ 	/* initialised and used only by !msg send variants */
+ 	u16				addr_len;
+@@ -70,6 +71,13 @@ struct io_sr_msg {
+ 	struct io_kiocb 		*notif;
+ };
+ 
++/*
++ * Number of times we'll try and do receives if there's more data. If we
++ * exceed this limit, then add us to the back of the queue and retry from
++ * there. This helps fairness between flooding clients.
++ */
++#define MULTISHOT_MAX_RETRY	32
++
+ static inline bool io_check_multishot(struct io_kiocb *req,
+ 				      unsigned int issue_flags)
+ {
+@@ -611,6 +619,7 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 		sr->msg_flags |= MSG_CMSG_COMPAT;
+ #endif
+ 	sr->done_io = 0;
++	sr->nr_multishot_loops = 0;
+ 	return 0;
+ }
+ 
+@@ -645,23 +654,35 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ 		return true;
+ 	}
+ 
+-	if (!mshot_finished) {
+-		if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
+-					*ret, cflags | IORING_CQE_F_MORE)) {
+-			io_recv_prep_retry(req);
+-			/* Known not-empty or unknown state, retry */
+-			if (cflags & IORING_CQE_F_SOCK_NONEMPTY ||
+-			    msg->msg_inq == -1)
++	if (mshot_finished)
++		goto finish;
++
++	/*
++	 * Fill CQE for this receive and see if we should keep trying to
++	 * receive from this socket.
++	 */
++	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
++				*ret, cflags | IORING_CQE_F_MORE)) {
++		struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
++		int mshot_retry_ret = IOU_ISSUE_SKIP_COMPLETE;
++
++		io_recv_prep_retry(req);
++		/* Known not-empty or unknown state, retry */
++		if (cflags & IORING_CQE_F_SOCK_NONEMPTY || msg->msg_inq == -1) {
++			if (sr->nr_multishot_loops++ < MULTISHOT_MAX_RETRY)
+ 				return false;
+-			if (issue_flags & IO_URING_F_MULTISHOT)
+-				*ret = IOU_ISSUE_SKIP_COMPLETE;
+-			else
+-				*ret = -EAGAIN;
+-			return true;
++			/* mshot retries exceeded, force a requeue */
++			sr->nr_multishot_loops = 0;
++			mshot_retry_ret = IOU_REQUEUE;
+ 		}
+-		/* Otherwise stop multishot but use the current result. */
++		if (issue_flags & IO_URING_F_MULTISHOT)
++			*ret = mshot_retry_ret;
++		else
++			*ret = -EAGAIN;
++		return true;
+ 	}
+-
++	/* Otherwise stop multishot but use the current result. */
++finish:
+ 	io_req_set_res(req, *ret, cflags);
+ 
+ 	if (issue_flags & IO_URING_F_MULTISHOT)
+@@ -902,6 +923,7 @@ retry_multishot:
+ 		if (!buf)
+ 			return -ENOBUFS;
+ 		sr->buf = buf;
++		sr->len = len;
+ 	}
+ 
+ 	ret = import_ubuf(ITER_DEST, sr->buf, len, &msg.msg_iter);
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index d59b74a99d4e4..7513afc7b702e 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -226,8 +226,29 @@ enum {
+ 	IOU_POLL_NO_ACTION = 1,
+ 	IOU_POLL_REMOVE_POLL_USE_RES = 2,
+ 	IOU_POLL_REISSUE = 3,
++	IOU_POLL_REQUEUE = 4,
+ };
+ 
++static void __io_poll_execute(struct io_kiocb *req, int mask)
++{
++	unsigned flags = 0;
++
++	io_req_set_res(req, mask, 0);
++	req->io_task_work.func = io_poll_task_func;
++
++	trace_io_uring_task_add(req, mask);
++
++	if (!(req->flags & REQ_F_POLL_NO_LAZY))
++		flags = IOU_F_TWQ_LAZY_WAKE;
++	__io_req_task_work_add(req, flags);
++}
++
++static inline void io_poll_execute(struct io_kiocb *req, int res)
++{
++	if (io_poll_get_ownership(req))
++		__io_poll_execute(req, res);
++}
++
+ /*
+  * All poll tw should go through this. Checks for poll events, manages
+  * references, does rewait, etc.
+@@ -309,6 +330,8 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
+ 			int ret = io_poll_issue(req, ts);
+ 			if (ret == IOU_STOP_MULTISHOT)
+ 				return IOU_POLL_REMOVE_POLL_USE_RES;
++			else if (ret == IOU_REQUEUE)
++				return IOU_POLL_REQUEUE;
+ 			if (ret < 0)
+ 				return ret;
+ 		}
+@@ -331,8 +354,12 @@ void io_poll_task_func(struct io_kiocb *req, struct io_tw_state *ts)
+ 	int ret;
+ 
+ 	ret = io_poll_check_events(req, ts);
+-	if (ret == IOU_POLL_NO_ACTION)
++	if (ret == IOU_POLL_NO_ACTION) {
++		return;
++	} else if (ret == IOU_POLL_REQUEUE) {
++		__io_poll_execute(req, 0);
+ 		return;
++	}
+ 	io_poll_remove_entries(req);
+ 	io_poll_tw_hash_eject(req, ts);
+ 
+@@ -364,26 +391,6 @@ void io_poll_task_func(struct io_kiocb *req, struct io_tw_state *ts)
+ 	}
+ }
+ 
+-static void __io_poll_execute(struct io_kiocb *req, int mask)
+-{
+-	unsigned flags = 0;
+-
+-	io_req_set_res(req, mask, 0);
+-	req->io_task_work.func = io_poll_task_func;
+-
+-	trace_io_uring_task_add(req, mask);
+-
+-	if (!(req->flags & REQ_F_POLL_NO_LAZY))
+-		flags = IOU_F_TWQ_LAZY_WAKE;
+-	__io_req_task_work_add(req, flags);
+-}
+-
+-static inline void io_poll_execute(struct io_kiocb *req, int res)
+-{
+-	if (io_poll_get_ownership(req))
+-		__io_poll_execute(req, res);
+-}
+-
+ static void io_poll_cancel_req(struct io_kiocb *req)
+ {
+ 	io_poll_mark_cancelled(req);
+diff --git a/io_uring/poll.h b/io_uring/poll.h
+index ff4d5d753387e..1dacae9e816c9 100644
+--- a/io_uring/poll.h
++++ b/io_uring/poll.h
+@@ -24,6 +24,15 @@ struct async_poll {
+ 	struct io_poll		*double_poll;
+ };
+ 
++/*
++ * Must only be called inside issue_flags & IO_URING_F_MULTISHOT, or
++ * potentially other cases where we already "own" this poll request.
++ */
++static inline void io_poll_multishot_retry(struct io_kiocb *req)
++{
++	atomic_inc(&req->poll_refs);
++}
++
+ int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
+ int io_poll_add(struct io_kiocb *req, unsigned int issue_flags);
+ 
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 743732fba7a45..9394bf83e8358 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -18,6 +18,7 @@
+ #include "opdef.h"
+ #include "kbuf.h"
+ #include "rsrc.h"
++#include "poll.h"
+ #include "rw.h"
+ 
+ struct io_rw {
+@@ -956,8 +957,15 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ 		if (io_fill_cqe_req_aux(req,
+ 					issue_flags & IO_URING_F_COMPLETE_DEFER,
+ 					ret, cflags | IORING_CQE_F_MORE)) {
+-			if (issue_flags & IO_URING_F_MULTISHOT)
++			if (issue_flags & IO_URING_F_MULTISHOT) {
++				/*
++				 * Force retry, as we might have more data to
++				 * be read and otherwise it won't get retried
++				 * until (if ever) another poll is triggered.
++				 */
++				io_poll_multishot_retry(req);
+ 				return IOU_ISSUE_SKIP_COMPLETE;
++			}
+ 			return -EAGAIN;
+ 		}
+ 	}
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 760793998cdd7..edb0f821dceaa 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -1085,6 +1085,7 @@ static int enqueue_hrtimer(struct hrtimer *timer,
+ 			   enum hrtimer_mode mode)
+ {
+ 	debug_activate(timer, mode);
++	WARN_ON_ONCE(!base->cpu_base->online);
+ 
+ 	base->cpu_base->active_bases |= 1 << base->index;
+ 
+@@ -2183,6 +2184,7 @@ int hrtimers_prepare_cpu(unsigned int cpu)
+ 	cpu_base->softirq_next_timer = NULL;
+ 	cpu_base->expires_next = KTIME_MAX;
+ 	cpu_base->softirq_expires_next = KTIME_MAX;
++	cpu_base->online = 1;
+ 	hrtimer_cpu_base_init_expiry_lock(cpu_base);
+ 	return 0;
+ }
+@@ -2250,6 +2252,7 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
+ 	smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
+ 
+ 	raw_spin_unlock(&new_base->lock);
++	old_base->online = 0;
+ 	raw_spin_unlock(&old_base->lock);
+ 
+ 	return 0;
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 7b97d31df7676..4e11fc1e6deff 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -3333,13 +3333,7 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t
+ 		if (rc < 0)
+ 			panic("failed to map percpu area, err=%d\n", rc);
+ 
+-		/*
+-		 * FIXME: Archs with virtual cache should flush local
+-		 * cache for the linear mapping here - something
+-		 * equivalent to flush_cache_vmap() on the local cpu.
+-		 * flush_cache_vmap() can't be used as most supporting
+-		 * data structures are not set up yet.
+-		 */
++		flush_cache_vmap_early(unit_addr, unit_addr + ai->unit_size);
+ 
+ 		/* copy static data */
+ 		memcpy((void *)unit_addr, __per_cpu_load, ai->static_size);
+diff --git a/net/ceph/messenger_v1.c b/net/ceph/messenger_v1.c
+index f9a50d7f0d204..0cb61c76b9b87 100644
+--- a/net/ceph/messenger_v1.c
++++ b/net/ceph/messenger_v1.c
+@@ -160,8 +160,9 @@ static size_t sizeof_footer(struct ceph_connection *con)
+ static void prepare_message_data(struct ceph_msg *msg, u32 data_len)
+ {
+ 	/* Initialize data cursor if it's not a sparse read */
+-	if (!msg->sparse_read)
+-		ceph_msg_data_cursor_init(&msg->cursor, msg, data_len);
++	u64 len = msg->sparse_read_total ? : data_len;
++
++	ceph_msg_data_cursor_init(&msg->cursor, msg, len);
+ }
+ 
+ /*
+@@ -991,7 +992,7 @@ static inline int read_partial_message_section(struct ceph_connection *con,
+ 	return read_partial_message_chunk(con, section, sec_len, crc);
+ }
+ 
+-static int read_sparse_msg_extent(struct ceph_connection *con, u32 *crc)
++static int read_partial_sparse_msg_extent(struct ceph_connection *con, u32 *crc)
+ {
+ 	struct ceph_msg_data_cursor *cursor = &con->in_msg->cursor;
+ 	bool do_bounce = ceph_test_opt(from_msgr(con->msgr), RXBOUNCE);
+@@ -1026,7 +1027,7 @@ static int read_sparse_msg_extent(struct ceph_connection *con, u32 *crc)
+ 	return 1;
+ }
+ 
+-static int read_sparse_msg_data(struct ceph_connection *con)
++static int read_partial_sparse_msg_data(struct ceph_connection *con)
+ {
+ 	struct ceph_msg_data_cursor *cursor = &con->in_msg->cursor;
+ 	bool do_datacrc = !ceph_test_opt(from_msgr(con->msgr), NOCRC);
+@@ -1036,31 +1037,31 @@ static int read_sparse_msg_data(struct ceph_connection *con)
+ 	if (do_datacrc)
+ 		crc = con->in_data_crc;
+ 
+-	do {
++	while (cursor->total_resid) {
+ 		if (con->v1.in_sr_kvec.iov_base)
+ 			ret = read_partial_message_chunk(con,
+ 							 &con->v1.in_sr_kvec,
+ 							 con->v1.in_sr_len,
+ 							 &crc);
+ 		else if (cursor->sr_resid > 0)
+-			ret = read_sparse_msg_extent(con, &crc);
+-
+-		if (ret <= 0) {
+-			if (do_datacrc)
+-				con->in_data_crc = crc;
+-			return ret;
+-		}
++			ret = read_partial_sparse_msg_extent(con, &crc);
++		if (ret <= 0)
++			break;
+ 
+ 		memset(&con->v1.in_sr_kvec, 0, sizeof(con->v1.in_sr_kvec));
+ 		ret = con->ops->sparse_read(con, cursor,
+ 				(char **)&con->v1.in_sr_kvec.iov_base);
++		if (ret <= 0) {
++			ret = ret ? ret : 1;  /* must return > 0 to indicate success */
++			break;
++		}
+ 		con->v1.in_sr_len = ret;
+-	} while (ret > 0);
++	}
+ 
+ 	if (do_datacrc)
+ 		con->in_data_crc = crc;
+ 
+-	return ret < 0 ? ret : 1;  /* must return > 0 to indicate success */
++	return ret;
+ }
+ 
+ static int read_partial_msg_data(struct ceph_connection *con)
+@@ -1253,8 +1254,8 @@ static int read_partial_message(struct ceph_connection *con)
+ 		if (!m->num_data_items)
+ 			return -EIO;
+ 
+-		if (m->sparse_read)
+-			ret = read_sparse_msg_data(con);
++		if (m->sparse_read_total)
++			ret = read_partial_sparse_msg_data(con);
+ 		else if (ceph_test_opt(from_msgr(con->msgr), RXBOUNCE))
+ 			ret = read_partial_msg_data_bounce(con);
+ 		else
+diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
+index f8ec60e1aba3a..a0ca5414b333d 100644
+--- a/net/ceph/messenger_v2.c
++++ b/net/ceph/messenger_v2.c
+@@ -1128,7 +1128,7 @@ static int decrypt_tail(struct ceph_connection *con)
+ 	struct sg_table enc_sgt = {};
+ 	struct sg_table sgt = {};
+ 	struct page **pages = NULL;
+-	bool sparse = con->in_msg->sparse_read;
++	bool sparse = !!con->in_msg->sparse_read_total;
+ 	int dpos = 0;
+ 	int tail_len;
+ 	int ret;
+@@ -2060,7 +2060,7 @@ static int prepare_read_tail_plain(struct ceph_connection *con)
+ 	}
+ 
+ 	if (data_len(msg)) {
+-		if (msg->sparse_read)
++		if (msg->sparse_read_total)
+ 			con->v2.in_state = IN_S_PREPARE_SPARSE_DATA;
+ 		else
+ 			con->v2.in_state = IN_S_PREPARE_READ_DATA;
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index d3a759e052c81..8d9760397b887 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -5510,7 +5510,7 @@ static struct ceph_msg *get_reply(struct ceph_connection *con,
+ 	}
+ 
+ 	m = ceph_msg_get(req->r_reply);
+-	m->sparse_read = (bool)srlen;
++	m->sparse_read_total = srlen;
+ 
+ 	dout("get_reply tid %lld %p\n", tid, m);
+ 
+@@ -5777,11 +5777,8 @@ static int prep_next_sparse_read(struct ceph_connection *con,
+ 	}
+ 
+ 	if (o->o_sparse_op_idx < 0) {
+-		u64 srlen = sparse_data_requested(req);
+-
+-		dout("%s: [%d] starting new sparse read req. srlen=0x%llx\n",
+-		     __func__, o->o_osd, srlen);
+-		ceph_msg_data_cursor_init(cursor, con->in_msg, srlen);
++		dout("%s: [%d] starting new sparse read req\n",
++		     __func__, o->o_osd);
+ 	} else {
+ 		u64 end;
+ 
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 103d46fa0eeb3..a8b625abe242c 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -751,7 +751,7 @@ size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
+ 			   size_t len, void *from, void *priv2)
+ {
+ 	__wsum *csum = priv2;
+-	__wsum next = csum_partial_copy_nocheck(from, iter_to, len);
++	__wsum next = csum_partial_copy_nocheck(from + progress, iter_to, len);
+ 
+ 	*csum = csum_block_add(*csum, next, progress);
+ 	return 0;
+diff --git a/net/devlink/core.c b/net/devlink/core.c
+index 6984877e9f10d..cbf8560c93752 100644
+--- a/net/devlink/core.c
++++ b/net/devlink/core.c
+@@ -46,7 +46,7 @@ struct devlink_rel {
+ 		u32 obj_index;
+ 		devlink_rel_notify_cb_t *notify_cb;
+ 		devlink_rel_cleanup_cb_t *cleanup_cb;
+-		struct work_struct notify_work;
++		struct delayed_work notify_work;
+ 	} nested_in;
+ };
+ 
+@@ -70,7 +70,7 @@ static void __devlink_rel_put(struct devlink_rel *rel)
+ static void devlink_rel_nested_in_notify_work(struct work_struct *work)
+ {
+ 	struct devlink_rel *rel = container_of(work, struct devlink_rel,
+-					       nested_in.notify_work);
++					       nested_in.notify_work.work);
+ 	struct devlink *devlink;
+ 
+ 	devlink = devlinks_xa_get(rel->nested_in.devlink_index);
+@@ -96,13 +96,13 @@ rel_put:
+ 	return;
+ 
+ reschedule_work:
+-	schedule_work(&rel->nested_in.notify_work);
++	schedule_delayed_work(&rel->nested_in.notify_work, 1);
+ }
+ 
+ static void devlink_rel_nested_in_notify_work_schedule(struct devlink_rel *rel)
+ {
+ 	__devlink_rel_get(rel);
+-	schedule_work(&rel->nested_in.notify_work);
++	schedule_delayed_work(&rel->nested_in.notify_work, 0);
+ }
+ 
+ static struct devlink_rel *devlink_rel_alloc(void)
+@@ -123,8 +123,8 @@ static struct devlink_rel *devlink_rel_alloc(void)
+ 	}
+ 
+ 	refcount_set(&rel->refcount, 1);
+-	INIT_WORK(&rel->nested_in.notify_work,
+-		  &devlink_rel_nested_in_notify_work);
++	INIT_DELAYED_WORK(&rel->nested_in.notify_work,
++			  &devlink_rel_nested_in_notify_work);
+ 	return rel;
+ }
+ 
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 1c58bd72e1245..e59962f34caa6 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1628,10 +1628,12 @@ EXPORT_SYMBOL(inet_current_timestamp);
+ 
+ int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
+ {
+-	if (sk->sk_family == AF_INET)
++	unsigned int family = READ_ONCE(sk->sk_family);
++
++	if (family == AF_INET)
+ 		return ip_recv_error(sk, msg, len, addr_len);
+ #if IS_ENABLED(CONFIG_IPV6)
+-	if (sk->sk_family == AF_INET6)
++	if (family == AF_INET6)
+ 		return pingv6_ops.ipv6_recv_error(sk, msg, len, addr_len);
+ #endif
+ 	return -EINVAL;
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index 586b1b3e35b80..80ccd6661aa32 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -332,7 +332,7 @@ static int iptunnel_pmtud_build_icmpv6(struct sk_buff *skb, int mtu)
+ 	};
+ 	skb_reset_network_header(skb);
+ 
+-	csum = csum_partial(icmp6h, len, 0);
++	csum = skb_checksum(skb, skb_transport_offset(skb), len, 0);
+ 	icmp6h->icmp6_cksum = csum_ipv6_magic(&nip6h->saddr, &nip6h->daddr, len,
+ 					      IPPROTO_ICMPV6, csum);
+ 
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index eb1d3ef843538..bfb06dea43c2d 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -5,7 +5,7 @@
+  * Copyright 2006-2010	Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2015  Intel Mobile Communications GmbH
+  * Copyright (C) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2022 Intel Corporation
++ * Copyright (C) 2018-2024 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -987,7 +987,8 @@ static int
+ ieee80211_set_unsol_bcast_probe_resp(struct ieee80211_sub_if_data *sdata,
+ 				     struct cfg80211_unsol_bcast_probe_resp *params,
+ 				     struct ieee80211_link_data *link,
+-				     struct ieee80211_bss_conf *link_conf)
++				     struct ieee80211_bss_conf *link_conf,
++				     u64 *changed)
+ {
+ 	struct unsol_bcast_probe_resp_data *new, *old = NULL;
+ 
+@@ -1011,7 +1012,8 @@ ieee80211_set_unsol_bcast_probe_resp(struct ieee80211_sub_if_data *sdata,
+ 		RCU_INIT_POINTER(link->u.ap.unsol_bcast_probe_resp, NULL);
+ 	}
+ 
+-	return BSS_CHANGED_UNSOL_BCAST_PROBE_RESP;
++	*changed |= BSS_CHANGED_UNSOL_BCAST_PROBE_RESP;
++	return 0;
+ }
+ 
+ static int ieee80211_set_ftm_responder_params(
+@@ -1450,10 +1452,9 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
+ 
+ 	err = ieee80211_set_unsol_bcast_probe_resp(sdata,
+ 						   &params->unsol_bcast_probe_resp,
+-						   link, link_conf);
++						   link, link_conf, &changed);
+ 	if (err < 0)
+ 		goto error;
+-	changed |= err;
+ 
+ 	err = drv_start_ap(sdata->local, sdata, link_conf);
+ 	if (err) {
+@@ -1525,10 +1526,9 @@ static int ieee80211_change_beacon(struct wiphy *wiphy, struct net_device *dev,
+ 
+ 	err = ieee80211_set_unsol_bcast_probe_resp(sdata,
+ 						   &params->unsol_bcast_probe_resp,
+-						   link, link_conf);
++						   link, link_conf, &changed);
+ 	if (err < 0)
+ 		return err;
+-	changed |= err;
+ 
+ 	if (beacon->he_bss_color_valid &&
+ 	    beacon->he_bss_color.enabled != link_conf->he_bss_color.enabled) {
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index dcdaab19efbd5..5a03bf1de6bb7 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -7288,6 +7288,75 @@ out_err:
+ 	return err;
+ }
+ 
++static bool ieee80211_mgd_csa_present(struct ieee80211_sub_if_data *sdata,
++				      const struct cfg80211_bss_ies *ies,
++				      u8 cur_channel, bool ignore_ecsa)
++{
++	const struct element *csa_elem, *ecsa_elem;
++	struct ieee80211_channel_sw_ie *csa = NULL;
++	struct ieee80211_ext_chansw_ie *ecsa = NULL;
++
++	if (!ies)
++		return false;
++
++	csa_elem = cfg80211_find_elem(WLAN_EID_CHANNEL_SWITCH,
++				      ies->data, ies->len);
++	if (csa_elem && csa_elem->datalen == sizeof(*csa))
++		csa = (void *)csa_elem->data;
++
++	ecsa_elem = cfg80211_find_elem(WLAN_EID_EXT_CHANSWITCH_ANN,
++				       ies->data, ies->len);
++	if (ecsa_elem && ecsa_elem->datalen == sizeof(*ecsa))
++		ecsa = (void *)ecsa_elem->data;
++
++	if (csa && csa->count == 0)
++		csa = NULL;
++	if (csa && !csa->mode && csa->new_ch_num == cur_channel)
++		csa = NULL;
++
++	if (ecsa && ecsa->count == 0)
++		ecsa = NULL;
++	if (ecsa && !ecsa->mode && ecsa->new_ch_num == cur_channel)
++		ecsa = NULL;
++
++	if (ignore_ecsa && ecsa) {
++		sdata_info(sdata,
++			   "Ignoring ECSA in probe response - was considered stuck!\n");
++		return csa;
++	}
++
++	return csa || ecsa;
++}
++
++static bool ieee80211_mgd_csa_in_process(struct ieee80211_sub_if_data *sdata,
++					 struct cfg80211_bss *bss)
++{
++	u8 cur_channel;
++	bool ret;
++
++	cur_channel = ieee80211_frequency_to_channel(bss->channel->center_freq);
++
++	rcu_read_lock();
++	if (ieee80211_mgd_csa_present(sdata,
++				      rcu_dereference(bss->beacon_ies),
++				      cur_channel, false)) {
++		ret = true;
++		goto out;
++	}
++
++	if (ieee80211_mgd_csa_present(sdata,
++				      rcu_dereference(bss->proberesp_ies),
++				      cur_channel, bss->proberesp_ecsa_stuck)) {
++		ret = true;
++		goto out;
++	}
++
++	ret = false;
++out:
++	rcu_read_unlock();
++	return ret;
++}
++
+ /* config hooks */
+ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
+ 		       struct cfg80211_auth_request *req)
+@@ -7296,7 +7365,6 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ 	struct ieee80211_mgd_auth_data *auth_data;
+ 	struct ieee80211_link_data *link;
+-	const struct element *csa_elem, *ecsa_elem;
+ 	u16 auth_alg;
+ 	int err;
+ 	bool cont_auth;
+@@ -7339,21 +7407,10 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
+ 	if (ifmgd->assoc_data)
+ 		return -EBUSY;
+ 
+-	rcu_read_lock();
+-	csa_elem = ieee80211_bss_get_elem(req->bss, WLAN_EID_CHANNEL_SWITCH);
+-	ecsa_elem = ieee80211_bss_get_elem(req->bss,
+-					   WLAN_EID_EXT_CHANSWITCH_ANN);
+-	if ((csa_elem &&
+-	     csa_elem->datalen == sizeof(struct ieee80211_channel_sw_ie) &&
+-	     ((struct ieee80211_channel_sw_ie *)csa_elem->data)->count != 0) ||
+-	    (ecsa_elem &&
+-	     ecsa_elem->datalen == sizeof(struct ieee80211_ext_chansw_ie) &&
+-	     ((struct ieee80211_ext_chansw_ie *)ecsa_elem->data)->count != 0)) {
+-		rcu_read_unlock();
++	if (ieee80211_mgd_csa_in_process(sdata, req->bss)) {
+ 		sdata_info(sdata, "AP is in CSA process, reject auth\n");
+ 		return -EINVAL;
+ 	}
+-	rcu_read_unlock();
+ 
+ 	auth_data = kzalloc(sizeof(*auth_data) + req->auth_data_len +
+ 			    req->ie_len, GFP_KERNEL);
+@@ -7662,7 +7719,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ 	struct ieee80211_mgd_assoc_data *assoc_data;
+-	const struct element *ssid_elem, *csa_elem, *ecsa_elem;
++	const struct element *ssid_elem;
+ 	struct ieee80211_vif_cfg *vif_cfg = &sdata->vif.cfg;
+ 	ieee80211_conn_flags_t conn_flags = 0;
+ 	struct ieee80211_link_data *link;
+@@ -7685,23 +7742,15 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
+ 
+ 	cbss = req->link_id < 0 ? req->bss : req->links[req->link_id].bss;
+ 
+-	rcu_read_lock();
+-	ssid_elem = ieee80211_bss_get_elem(cbss, WLAN_EID_SSID);
+-	if (!ssid_elem || ssid_elem->datalen > sizeof(assoc_data->ssid)) {
+-		rcu_read_unlock();
++	if (ieee80211_mgd_csa_in_process(sdata, cbss)) {
++		sdata_info(sdata, "AP is in CSA process, reject assoc\n");
+ 		kfree(assoc_data);
+ 		return -EINVAL;
+ 	}
+ 
+-	csa_elem = ieee80211_bss_get_elem(cbss, WLAN_EID_CHANNEL_SWITCH);
+-	ecsa_elem = ieee80211_bss_get_elem(cbss, WLAN_EID_EXT_CHANSWITCH_ANN);
+-	if ((csa_elem &&
+-	     csa_elem->datalen == sizeof(struct ieee80211_channel_sw_ie) &&
+-	     ((struct ieee80211_channel_sw_ie *)csa_elem->data)->count != 0) ||
+-	    (ecsa_elem &&
+-	     ecsa_elem->datalen == sizeof(struct ieee80211_ext_chansw_ie) &&
+-	     ((struct ieee80211_ext_chansw_ie *)ecsa_elem->data)->count != 0)) {
+-		sdata_info(sdata, "AP is in CSA process, reject assoc\n");
++	rcu_read_lock();
++	ssid_elem = ieee80211_bss_get_elem(cbss, WLAN_EID_SSID);
++	if (!ssid_elem || ssid_elem->datalen > sizeof(assoc_data->ssid)) {
+ 		rcu_read_unlock();
+ 		kfree(assoc_data);
+ 		return -EINVAL;
+@@ -7976,8 +8025,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
+ 
+ 		rcu_read_lock();
+ 		beacon_ies = rcu_dereference(req->bss->beacon_ies);
+-
+-		if (beacon_ies) {
++		if (!beacon_ies) {
+ 			/*
+ 			 * Wait up to one beacon interval ...
+ 			 * should this be more if we miss one?
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index ed4fdf655343f..a94feb6b579bb 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3100,10 +3100,11 @@ void ieee80211_check_fast_xmit(struct sta_info *sta)
+ 			/* DA SA BSSID */
+ 			build.da_offs = offsetof(struct ieee80211_hdr, addr1);
+ 			build.sa_offs = offsetof(struct ieee80211_hdr, addr2);
++			rcu_read_lock();
+ 			link = rcu_dereference(sdata->link[tdls_link_id]);
+-			if (WARN_ON_ONCE(!link))
+-				break;
+-			memcpy(hdr->addr3, link->u.mgd.bssid, ETH_ALEN);
++			if (!WARN_ON_ONCE(!link))
++				memcpy(hdr->addr3, link->u.mgd.bssid, ETH_ALEN);
++			rcu_read_unlock();
+ 			build.hdr_len = 24;
+ 			break;
+ 		}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0e07f110a539b..04c5aa4debc74 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -9744,6 +9744,7 @@ dead_elem:
+ struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc)
+ {
+ 	struct nft_set_elem_catchall *catchall, *next;
++	u64 tstamp = nft_net_tstamp(gc->net);
+ 	const struct nft_set *set = gc->set;
+ 	struct nft_elem_priv *elem_priv;
+ 	struct nft_set_ext *ext;
+@@ -9753,7 +9754,7 @@ struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc)
+ 	list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
+ 		ext = nft_set_elem_ext(set, catchall->elem);
+ 
+-		if (!nft_set_elem_expired(ext))
++		if (!__nft_set_elem_expired(ext, tstamp))
+ 			continue;
+ 
+ 		gc = nft_trans_gc_queue_sync(gc, GFP_KERNEL);
+@@ -10539,6 +10540,7 @@ static bool nf_tables_valid_genid(struct net *net, u32 genid)
+ 	bool genid_ok;
+ 
+ 	mutex_lock(&nft_net->commit_mutex);
++	nft_net->tstamp = get_jiffies_64();
+ 
+ 	genid_ok = genid == 0 || nft_net->base_seq == genid;
+ 	if (!genid_ok)
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 171d1f52d3dd0..5cf38fc0a366a 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -232,18 +232,25 @@ static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict)
+ 	if (verdict == NF_ACCEPT ||
+ 	    verdict == NF_REPEAT ||
+ 	    verdict == NF_STOP) {
++		unsigned int ct_verdict = verdict;
++
+ 		rcu_read_lock();
+ 		ct_hook = rcu_dereference(nf_ct_hook);
+ 		if (ct_hook)
+-			verdict = ct_hook->update(entry->state.net, entry->skb);
++			ct_verdict = ct_hook->update(entry->state.net, entry->skb);
+ 		rcu_read_unlock();
+ 
+-		switch (verdict & NF_VERDICT_MASK) {
++		switch (ct_verdict & NF_VERDICT_MASK) {
++		case NF_ACCEPT:
++			/* follow userspace verdict, could be REPEAT */
++			break;
+ 		case NF_STOLEN:
+ 			nf_queue_entry_free(entry);
+ 			return;
++		default:
++			verdict = ct_verdict & NF_VERDICT_MASK;
++			break;
+ 		}
+-
+ 	}
+ 	nf_reinject(entry, verdict);
+ }
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index f0eeda97bfcd9..1f9474fefe849 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -135,7 +135,7 @@ static void nft_target_eval_bridge(const struct nft_expr *expr,
+ 
+ static const struct nla_policy nft_target_policy[NFTA_TARGET_MAX + 1] = {
+ 	[NFTA_TARGET_NAME]	= { .type = NLA_NUL_STRING },
+-	[NFTA_TARGET_REV]	= { .type = NLA_U32 },
++	[NFTA_TARGET_REV]	= NLA_POLICY_MAX(NLA_BE32, 255),
+ 	[NFTA_TARGET_INFO]	= { .type = NLA_BINARY },
+ };
+ 
+@@ -200,6 +200,7 @@ static const struct nla_policy nft_rule_compat_policy[NFTA_RULE_COMPAT_MAX + 1]
+ static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv)
+ {
+ 	struct nlattr *tb[NFTA_RULE_COMPAT_MAX+1];
++	u32 l4proto;
+ 	u32 flags;
+ 	int err;
+ 
+@@ -212,12 +213,18 @@ static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv)
+ 		return -EINVAL;
+ 
+ 	flags = ntohl(nla_get_be32(tb[NFTA_RULE_COMPAT_FLAGS]));
+-	if (flags & ~NFT_RULE_COMPAT_F_MASK)
++	if (flags & NFT_RULE_COMPAT_F_UNUSED ||
++	    flags & ~NFT_RULE_COMPAT_F_MASK)
+ 		return -EINVAL;
+ 	if (flags & NFT_RULE_COMPAT_F_INV)
+ 		*inv = true;
+ 
+-	*proto = ntohl(nla_get_be32(tb[NFTA_RULE_COMPAT_PROTO]));
++	l4proto = ntohl(nla_get_be32(tb[NFTA_RULE_COMPAT_PROTO]));
++	if (l4proto > U16_MAX)
++		return -EINVAL;
++
++	*proto = l4proto;
++
+ 	return 0;
+ }
+ 
+@@ -419,7 +426,7 @@ static void nft_match_eval(const struct nft_expr *expr,
+ 
+ static const struct nla_policy nft_match_policy[NFTA_MATCH_MAX + 1] = {
+ 	[NFTA_MATCH_NAME]	= { .type = NLA_NUL_STRING },
+-	[NFTA_MATCH_REV]	= { .type = NLA_U32 },
++	[NFTA_MATCH_REV]	= NLA_POLICY_MAX(NLA_BE32, 255),
+ 	[NFTA_MATCH_INFO]	= { .type = NLA_BINARY },
+ };
+ 
+@@ -724,7 +731,7 @@ out_put:
+ static const struct nla_policy nfnl_compat_policy_get[NFTA_COMPAT_MAX+1] = {
+ 	[NFTA_COMPAT_NAME]	= { .type = NLA_NUL_STRING,
+ 				    .len = NFT_COMPAT_NAME_MAX-1 },
+-	[NFTA_COMPAT_REV]	= { .type = NLA_U32 },
++	[NFTA_COMPAT_REV]	= NLA_POLICY_MAX(NLA_BE32, 255),
+ 	[NFTA_COMPAT_TYPE]	= { .type = NLA_U32 },
+ };
+ 
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index aac98a3c966e9..bfd3e5a14dab6 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -476,6 +476,9 @@ static int nft_ct_get_init(const struct nft_ctx *ctx,
+ 		break;
+ #endif
+ 	case NFT_CT_ID:
++		if (tb[NFTA_CT_DIRECTION])
++			return -EINVAL;
++
+ 		len = sizeof(u32);
+ 		break;
+ 	default:
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index 6c2061bfdae6c..6968a3b342367 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -36,6 +36,7 @@ struct nft_rhash_cmp_arg {
+ 	const struct nft_set		*set;
+ 	const u32			*key;
+ 	u8				genmask;
++	u64				tstamp;
+ };
+ 
+ static inline u32 nft_rhash_key(const void *data, u32 len, u32 seed)
+@@ -62,7 +63,7 @@ static inline int nft_rhash_cmp(struct rhashtable_compare_arg *arg,
+ 		return 1;
+ 	if (nft_set_elem_is_dead(&he->ext))
+ 		return 1;
+-	if (nft_set_elem_expired(&he->ext))
++	if (__nft_set_elem_expired(&he->ext, x->tstamp))
+ 		return 1;
+ 	if (!nft_set_elem_active(&he->ext, x->genmask))
+ 		return 1;
+@@ -87,6 +88,7 @@ bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+ 		.genmask = nft_genmask_cur(net),
+ 		.set	 = set,
+ 		.key	 = key,
++		.tstamp  = get_jiffies_64(),
+ 	};
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+@@ -106,6 +108,7 @@ nft_rhash_get(const struct net *net, const struct nft_set *set,
+ 		.genmask = nft_genmask_cur(net),
+ 		.set	 = set,
+ 		.key	 = elem->key.val.data,
++		.tstamp  = get_jiffies_64(),
+ 	};
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+@@ -131,6 +134,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+ 		.genmask = NFT_GENMASK_ANY,
+ 		.set	 = set,
+ 		.key	 = key,
++		.tstamp  = get_jiffies_64(),
+ 	};
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+@@ -175,6 +179,7 @@ static int nft_rhash_insert(const struct net *net, const struct nft_set *set,
+ 		.genmask = nft_genmask_next(net),
+ 		.set	 = set,
+ 		.key	 = elem->key.val.data,
++		.tstamp	 = nft_net_tstamp(net),
+ 	};
+ 	struct nft_rhash_elem *prev;
+ 
+@@ -216,6 +221,7 @@ nft_rhash_deactivate(const struct net *net, const struct nft_set *set,
+ 		.genmask = nft_genmask_next(net),
+ 		.set	 = set,
+ 		.key	 = elem->key.val.data,
++		.tstamp	 = nft_net_tstamp(net),
+ 	};
+ 
+ 	rcu_read_lock();
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 7252fcdae3499..3089c4ca8fff3 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -342,9 +342,6 @@
+ #include "nft_set_pipapo_avx2.h"
+ #include "nft_set_pipapo.h"
+ 
+-/* Current working bitmap index, toggled between field matches */
+-static DEFINE_PER_CPU(bool, nft_pipapo_scratch_index);
+-
+ /**
+  * pipapo_refill() - For each set bit, set bits from selected mapping table item
+  * @map:	Bitmap to be scanned for set bits
+@@ -412,6 +409,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 		       const u32 *key, const struct nft_set_ext **ext)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
++	struct nft_pipapo_scratch *scratch;
+ 	unsigned long *res_map, *fill_map;
+ 	u8 genmask = nft_genmask_cur(net);
+ 	const u8 *rp = (const u8 *)key;
+@@ -422,15 +420,17 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 
+ 	local_bh_disable();
+ 
+-	map_index = raw_cpu_read(nft_pipapo_scratch_index);
+-
+ 	m = rcu_dereference(priv->match);
+ 
+ 	if (unlikely(!m || !*raw_cpu_ptr(m->scratch)))
+ 		goto out;
+ 
+-	res_map  = *raw_cpu_ptr(m->scratch) + (map_index ? m->bsize_max : 0);
+-	fill_map = *raw_cpu_ptr(m->scratch) + (map_index ? 0 : m->bsize_max);
++	scratch = *raw_cpu_ptr(m->scratch);
++
++	map_index = scratch->map_index;
++
++	res_map  = scratch->map + (map_index ? m->bsize_max : 0);
++	fill_map = scratch->map + (map_index ? 0 : m->bsize_max);
+ 
+ 	memset(res_map, 0xff, m->bsize_max * sizeof(*res_map));
+ 
+@@ -460,7 +460,7 @@ next_match:
+ 		b = pipapo_refill(res_map, f->bsize, f->rules, fill_map, f->mt,
+ 				  last);
+ 		if (b < 0) {
+-			raw_cpu_write(nft_pipapo_scratch_index, map_index);
++			scratch->map_index = map_index;
+ 			local_bh_enable();
+ 
+ 			return false;
+@@ -477,7 +477,7 @@ next_match:
+ 			 * current inactive bitmap is clean and can be reused as
+ 			 * *next* bitmap (not initial) for the next packet.
+ 			 */
+-			raw_cpu_write(nft_pipapo_scratch_index, map_index);
++			scratch->map_index = map_index;
+ 			local_bh_enable();
+ 
+ 			return true;
+@@ -504,6 +504,7 @@ out:
+  * @set:	nftables API set representation
+  * @data:	Key data to be matched against existing elements
+  * @genmask:	If set, check that element is active in given genmask
++ * @tstamp:	timestamp to check for expired elements
+  *
+  * This is essentially the same as the lookup function, except that it matches
+  * key data against the uncommitted copy and doesn't use preallocated maps for
+@@ -513,7 +514,8 @@ out:
+  */
+ static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+ 					  const struct nft_set *set,
+-					  const u8 *data, u8 genmask)
++					  const u8 *data, u8 genmask,
++					  u64 tstamp)
+ {
+ 	struct nft_pipapo_elem *ret = ERR_PTR(-ENOENT);
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+@@ -566,7 +568,7 @@ next_match:
+ 			goto out;
+ 
+ 		if (last) {
+-			if (nft_set_elem_expired(&f->mt[b].e->ext))
++			if (__nft_set_elem_expired(&f->mt[b].e->ext, tstamp))
+ 				goto next_match;
+ 			if ((genmask &&
+ 			     !nft_set_elem_active(&f->mt[b].e->ext, genmask)))
+@@ -603,10 +605,10 @@ static struct nft_elem_priv *
+ nft_pipapo_get(const struct net *net, const struct nft_set *set,
+ 	       const struct nft_set_elem *elem, unsigned int flags)
+ {
+-	static struct nft_pipapo_elem *e;
++	struct nft_pipapo_elem *e;
+ 
+ 	e = pipapo_get(net, set, (const u8 *)elem->key.val.data,
+-		       nft_genmask_cur(net));
++		       nft_genmask_cur(net), get_jiffies_64());
+ 	if (IS_ERR(e))
+ 		return ERR_CAST(e);
+ 
+@@ -1108,6 +1110,25 @@ static void pipapo_map(struct nft_pipapo_match *m,
+ 		f->mt[map[i].to + j].e = e;
+ }
+ 
++/**
++ * pipapo_free_scratch() - Free per-CPU map at original (not aligned) address
++ * @m:		Matching data
++ * @cpu:	CPU number
++ */
++static void pipapo_free_scratch(const struct nft_pipapo_match *m, unsigned int cpu)
++{
++	struct nft_pipapo_scratch *s;
++	void *mem;
++
++	s = *per_cpu_ptr(m->scratch, cpu);
++	if (!s)
++		return;
++
++	mem = s;
++	mem -= s->align_off;
++	kfree(mem);
++}
++
+ /**
+  * pipapo_realloc_scratch() - Reallocate scratch maps for partial match results
+  * @clone:	Copy of matching data with pending insertions and deletions
+@@ -1121,12 +1142,13 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ 	int i;
+ 
+ 	for_each_possible_cpu(i) {
+-		unsigned long *scratch;
++		struct nft_pipapo_scratch *scratch;
+ #ifdef NFT_PIPAPO_ALIGN
+-		unsigned long *scratch_aligned;
++		void *scratch_aligned;
++		u32 align_off;
+ #endif
+-
+-		scratch = kzalloc_node(bsize_max * sizeof(*scratch) * 2 +
++		scratch = kzalloc_node(struct_size(scratch, map,
++						   bsize_max * 2) +
+ 				       NFT_PIPAPO_ALIGN_HEADROOM,
+ 				       GFP_KERNEL, cpu_to_node(i));
+ 		if (!scratch) {
+@@ -1140,14 +1162,25 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ 			return -ENOMEM;
+ 		}
+ 
+-		kfree(*per_cpu_ptr(clone->scratch, i));
+-
+-		*per_cpu_ptr(clone->scratch, i) = scratch;
++		pipapo_free_scratch(clone, i);
+ 
+ #ifdef NFT_PIPAPO_ALIGN
+-		scratch_aligned = NFT_PIPAPO_LT_ALIGN(scratch);
+-		*per_cpu_ptr(clone->scratch_aligned, i) = scratch_aligned;
++		/* Align &scratch->map (not the struct itself): the extra
++		 * %NFT_PIPAPO_ALIGN_HEADROOM bytes passed to kzalloc_node()
++		 * above guarantee we can waste up to those bytes in order
++		 * to align the map field regardless of its offset within
++		 * the struct.
++		 */
++		BUILD_BUG_ON(offsetof(struct nft_pipapo_scratch, map) > NFT_PIPAPO_ALIGN_HEADROOM);
++
++		scratch_aligned = NFT_PIPAPO_LT_ALIGN(&scratch->map);
++		scratch_aligned -= offsetof(struct nft_pipapo_scratch, map);
++		align_off = scratch_aligned - (void *)scratch;
++
++		scratch = scratch_aligned;
++		scratch->align_off = align_off;
+ #endif
++		*per_cpu_ptr(clone->scratch, i) = scratch;
+ 	}
+ 
+ 	return 0;
+@@ -1173,6 +1206,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	struct nft_pipapo_match *m = priv->clone;
+ 	u8 genmask = nft_genmask_next(net);
+ 	struct nft_pipapo_elem *e, *dup;
++	u64 tstamp = nft_net_tstamp(net);
+ 	struct nft_pipapo_field *f;
+ 	const u8 *start_p, *end_p;
+ 	int i, bsize_max, err = 0;
+@@ -1182,7 +1216,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	else
+ 		end = start;
+ 
+-	dup = pipapo_get(net, set, start, genmask);
++	dup = pipapo_get(net, set, start, genmask, tstamp);
+ 	if (!IS_ERR(dup)) {
+ 		/* Check if we already have the same exact entry */
+ 		const struct nft_data *dup_key, *dup_end;
+@@ -1204,7 +1238,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 
+ 	if (PTR_ERR(dup) == -ENOENT) {
+ 		/* Look for partially overlapping entries */
+-		dup = pipapo_get(net, set, end, nft_genmask_next(net));
++		dup = pipapo_get(net, set, end, nft_genmask_next(net), tstamp);
+ 	}
+ 
+ 	if (PTR_ERR(dup) != -ENOENT) {
+@@ -1301,11 +1335,6 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 	if (!new->scratch)
+ 		goto out_scratch;
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-	new->scratch_aligned = alloc_percpu(*new->scratch_aligned);
+-	if (!new->scratch_aligned)
+-		goto out_scratch;
+-#endif
+ 	for_each_possible_cpu(i)
+ 		*per_cpu_ptr(new->scratch, i) = NULL;
+ 
+@@ -1357,10 +1386,7 @@ out_lt:
+ 	}
+ out_scratch_realloc:
+ 	for_each_possible_cpu(i)
+-		kfree(*per_cpu_ptr(new->scratch, i));
+-#ifdef NFT_PIPAPO_ALIGN
+-	free_percpu(new->scratch_aligned);
+-#endif
++		pipapo_free_scratch(new, i);
+ out_scratch:
+ 	free_percpu(new->scratch);
+ 	kfree(new);
+@@ -1560,6 +1586,7 @@ static void pipapo_gc(struct nft_set *set, struct nft_pipapo_match *m)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct net *net = read_pnet(&set->net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	int rules_f0, first_rule = 0;
+ 	struct nft_pipapo_elem *e;
+ 	struct nft_trans_gc *gc;
+@@ -1594,7 +1621,7 @@ static void pipapo_gc(struct nft_set *set, struct nft_pipapo_match *m)
+ 		/* synchronous gc never fails, there is no need to set on
+ 		 * NFT_SET_ELEM_DEAD_BIT.
+ 		 */
+-		if (nft_set_elem_expired(&e->ext)) {
++		if (__nft_set_elem_expired(&e->ext, tstamp)) {
+ 			priv->dirty = true;
+ 
+ 			gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
+@@ -1640,13 +1667,9 @@ static void pipapo_free_match(struct nft_pipapo_match *m)
+ 	int i;
+ 
+ 	for_each_possible_cpu(i)
+-		kfree(*per_cpu_ptr(m->scratch, i));
++		pipapo_free_scratch(m, i);
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-	free_percpu(m->scratch_aligned);
+-#endif
+ 	free_percpu(m->scratch);
+-
+ 	pipapo_free_fields(m);
+ 
+ 	kfree(m);
+@@ -1769,7 +1792,7 @@ static void *pipapo_deactivate(const struct net *net, const struct nft_set *set,
+ {
+ 	struct nft_pipapo_elem *e;
+ 
+-	e = pipapo_get(net, set, data, nft_genmask_next(net));
++	e = pipapo_get(net, set, data, nft_genmask_next(net), nft_net_tstamp(net));
+ 	if (IS_ERR(e))
+ 		return NULL;
+ 
+@@ -2132,7 +2155,7 @@ static int nft_pipapo_init(const struct nft_set *set,
+ 	m->field_count = field_count;
+ 	m->bsize_max = 0;
+ 
+-	m->scratch = alloc_percpu(unsigned long *);
++	m->scratch = alloc_percpu(struct nft_pipapo_scratch *);
+ 	if (!m->scratch) {
+ 		err = -ENOMEM;
+ 		goto out_scratch;
+@@ -2140,16 +2163,6 @@ static int nft_pipapo_init(const struct nft_set *set,
+ 	for_each_possible_cpu(i)
+ 		*per_cpu_ptr(m->scratch, i) = NULL;
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-	m->scratch_aligned = alloc_percpu(unsigned long *);
+-	if (!m->scratch_aligned) {
+-		err = -ENOMEM;
+-		goto out_free;
+-	}
+-	for_each_possible_cpu(i)
+-		*per_cpu_ptr(m->scratch_aligned, i) = NULL;
+-#endif
+-
+ 	rcu_head_init(&m->rcu);
+ 
+ 	nft_pipapo_for_each_field(f, i, m) {
+@@ -2180,9 +2193,6 @@ static int nft_pipapo_init(const struct nft_set *set,
+ 	return 0;
+ 
+ out_free:
+-#ifdef NFT_PIPAPO_ALIGN
+-	free_percpu(m->scratch_aligned);
+-#endif
+ 	free_percpu(m->scratch);
+ out_scratch:
+ 	kfree(m);
+@@ -2236,11 +2246,8 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 
+ 		nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-		free_percpu(m->scratch_aligned);
+-#endif
+ 		for_each_possible_cpu(cpu)
+-			kfree(*per_cpu_ptr(m->scratch, cpu));
++			pipapo_free_scratch(m, cpu);
+ 		free_percpu(m->scratch);
+ 		pipapo_free_fields(m);
+ 		kfree(m);
+@@ -2253,11 +2260,8 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 		if (priv->dirty)
+ 			nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+-#ifdef NFT_PIPAPO_ALIGN
+-		free_percpu(priv->clone->scratch_aligned);
+-#endif
+ 		for_each_possible_cpu(cpu)
+-			kfree(*per_cpu_ptr(priv->clone->scratch, cpu));
++			pipapo_free_scratch(priv->clone, cpu);
+ 		free_percpu(priv->clone->scratch);
+ 
+ 		pipapo_free_fields(priv->clone);
+diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
+index 1040223da5fa3..f59a0cd811051 100644
+--- a/net/netfilter/nft_set_pipapo.h
++++ b/net/netfilter/nft_set_pipapo.h
+@@ -130,21 +130,29 @@ struct nft_pipapo_field {
+ 	union nft_pipapo_map_bucket *mt;
+ };
+ 
++/**
++ * struct nft_pipapo_scratch - percpu data used for lookup and matching
++ * @map_index:	Current working bitmap index, toggled between field matches
++ * @align_off:	Offset to get the originally allocated address
++ * @map:	store partial matching results during lookup
++ */
++struct nft_pipapo_scratch {
++	u8 map_index;
++	u32 align_off;
++	unsigned long map[];
++};
++
+ /**
+  * struct nft_pipapo_match - Data used for lookup and matching
+  * @field_count		Amount of fields in set
+  * @scratch:		Preallocated per-CPU maps for partial matching results
+- * @scratch_aligned:	Version of @scratch aligned to NFT_PIPAPO_ALIGN bytes
+  * @bsize_max:		Maximum lookup table bucket size of all fields, in longs
+  * @rcu			Matching data is swapped on commits
+  * @f:			Fields, with lookup and mapping tables
+  */
+ struct nft_pipapo_match {
+ 	int field_count;
+-#ifdef NFT_PIPAPO_ALIGN
+-	unsigned long * __percpu *scratch_aligned;
+-#endif
+-	unsigned long * __percpu *scratch;
++	struct nft_pipapo_scratch * __percpu *scratch;
+ 	size_t bsize_max;
+ 	struct rcu_head rcu;
+ 	struct nft_pipapo_field f[] __counted_by(field_count);
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index 52e0d026d30ad..90e275bb3e5d7 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -71,9 +71,6 @@
+ #define NFT_PIPAPO_AVX2_ZERO(reg)					\
+ 	asm volatile("vpxor %ymm" #reg ", %ymm" #reg ", %ymm" #reg)
+ 
+-/* Current working bitmap index, toggled between field matches */
+-static DEFINE_PER_CPU(bool, nft_pipapo_avx2_scratch_index);
+-
+ /**
+  * nft_pipapo_avx2_prepare() - Prepare before main algorithm body
+  *
+@@ -1120,11 +1117,12 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 			    const u32 *key, const struct nft_set_ext **ext)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
+-	unsigned long *res, *fill, *scratch;
++	struct nft_pipapo_scratch *scratch;
+ 	u8 genmask = nft_genmask_cur(net);
+ 	const u8 *rp = (const u8 *)key;
+ 	struct nft_pipapo_match *m;
+ 	struct nft_pipapo_field *f;
++	unsigned long *res, *fill;
+ 	bool map_index;
+ 	int i, ret = 0;
+ 
+@@ -1141,15 +1139,16 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	 */
+ 	kernel_fpu_begin_mask(0);
+ 
+-	scratch = *raw_cpu_ptr(m->scratch_aligned);
++	scratch = *raw_cpu_ptr(m->scratch);
+ 	if (unlikely(!scratch)) {
+ 		kernel_fpu_end();
+ 		return false;
+ 	}
+-	map_index = raw_cpu_read(nft_pipapo_avx2_scratch_index);
+ 
+-	res  = scratch + (map_index ? m->bsize_max : 0);
+-	fill = scratch + (map_index ? 0 : m->bsize_max);
++	map_index = scratch->map_index;
++
++	res  = scratch->map + (map_index ? m->bsize_max : 0);
++	fill = scratch->map + (map_index ? 0 : m->bsize_max);
+ 
+ 	/* Starting map doesn't need to be set for this implementation */
+ 
+@@ -1221,7 +1220,7 @@ next_match:
+ 
+ out:
+ 	if (i % 2)
+-		raw_cpu_write(nft_pipapo_avx2_scratch_index, !map_index);
++		scratch->map_index = !map_index;
+ 	kernel_fpu_end();
+ 
+ 	return ret >= 0;
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index baa3fea4fe65c..9944fe479e536 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -234,7 +234,7 @@ static void nft_rbtree_gc_elem_remove(struct net *net, struct nft_set *set,
+ 
+ static const struct nft_rbtree_elem *
+ nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
+-		   struct nft_rbtree_elem *rbe, u8 genmask)
++		   struct nft_rbtree_elem *rbe)
+ {
+ 	struct nft_set *set = (struct nft_set *)__set;
+ 	struct rb_node *prev = rb_prev(&rbe->node);
+@@ -253,7 +253,7 @@ nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
+ 	while (prev) {
+ 		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
+ 		if (nft_rbtree_interval_end(rbe_prev) &&
+-		    nft_set_elem_active(&rbe_prev->ext, genmask))
++		    nft_set_elem_active(&rbe_prev->ext, NFT_GENMASK_ANY))
+ 			break;
+ 
+ 		prev = rb_prev(prev);
+@@ -313,6 +313,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	u8 cur_genmask = nft_genmask_cur(net);
+ 	u8 genmask = nft_genmask_next(net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	int d;
+ 
+ 	/* Descend the tree to search for an existing element greater than the
+@@ -360,11 +361,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 		/* perform garbage collection to avoid bogus overlap reports
+ 		 * but skip new elements in this transaction.
+ 		 */
+-		if (nft_set_elem_expired(&rbe->ext) &&
++		if (__nft_set_elem_expired(&rbe->ext, tstamp) &&
+ 		    nft_set_elem_active(&rbe->ext, cur_genmask)) {
+ 			const struct nft_rbtree_elem *removed_end;
+ 
+-			removed_end = nft_rbtree_gc_elem(set, priv, rbe, genmask);
++			removed_end = nft_rbtree_gc_elem(set, priv, rbe);
+ 			if (IS_ERR(removed_end))
+ 				return PTR_ERR(removed_end);
+ 
+@@ -551,6 +552,7 @@ nft_rbtree_deactivate(const struct net *net, const struct nft_set *set,
+ 	const struct nft_rbtree *priv = nft_set_priv(set);
+ 	const struct rb_node *parent = priv->root.rb_node;
+ 	u8 genmask = nft_genmask_next(net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	int d;
+ 
+ 	while (parent != NULL) {
+@@ -571,7 +573,7 @@ nft_rbtree_deactivate(const struct net *net, const struct nft_set *set,
+ 				   nft_rbtree_interval_end(this)) {
+ 				parent = parent->rb_right;
+ 				continue;
+-			} else if (nft_set_elem_expired(&rbe->ext)) {
++			} else if (__nft_set_elem_expired(&rbe->ext, tstamp)) {
+ 				break;
+ 			} else if (!nft_set_elem_active(&rbe->ext, genmask)) {
+ 				parent = parent->rb_left;
+@@ -624,9 +626,10 @@ static void nft_rbtree_gc(struct nft_set *set)
+ {
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	struct nft_rbtree_elem *rbe, *rbe_end = NULL;
++	struct net *net = read_pnet(&set->net);
++	u64 tstamp = nft_net_tstamp(net);
+ 	struct rb_node *node, *next;
+ 	struct nft_trans_gc *gc;
+-	struct net *net;
+ 
+ 	set  = nft_set_container_of(priv);
+ 	net  = read_pnet(&set->net);
+@@ -648,7 +651,7 @@ static void nft_rbtree_gc(struct nft_set *set)
+ 			rbe_end = rbe;
+ 			continue;
+ 		}
+-		if (!nft_set_elem_expired(&rbe->ext))
++		if (!__nft_set_elem_expired(&rbe->ext, tstamp))
+ 			continue;
+ 
+ 		gc = nft_trans_gc_queue_sync(gc, GFP_KERNEL);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 5d5b19f20d1eb..027414dafe7f5 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -198,11 +198,19 @@ struct rxrpc_host_header {
+  */
+ struct rxrpc_skb_priv {
+ 	struct rxrpc_connection *conn;	/* Connection referred to (poke packet) */
+-	u16		offset;		/* Offset of data */
+-	u16		len;		/* Length of data */
+-	u8		flags;
++	union {
++		struct {
++			u16		offset;		/* Offset of data */
++			u16		len;		/* Length of data */
++			u8		flags;
+ #define RXRPC_RX_VERIFIED	0x01
+-
++		};
++		struct {
++			rxrpc_seq_t	first_ack;	/* First packet in acks table */
++			u8		nr_acks;	/* Number of acks+nacks */
++			u8		nr_nacks;	/* Number of nacks */
++		};
++	};
+ 	struct rxrpc_host_header hdr;	/* RxRPC packet header from this packet */
+ };
+ 
+@@ -507,7 +515,7 @@ struct rxrpc_connection {
+ 	enum rxrpc_call_completion completion;	/* Completion condition */
+ 	s32			abort_code;	/* Abort code of connection abort */
+ 	int			debug_id;	/* debug ID for printks */
+-	atomic_t		serial;		/* packet serial number counter */
++	rxrpc_serial_t		tx_serial;	/* Outgoing packet serial number counter */
+ 	unsigned int		hi_serial;	/* highest serial number received */
+ 	u32			service_id;	/* Service ID, possibly upgraded */
+ 	u32			security_level;	/* Security level selected */
+@@ -689,11 +697,11 @@ struct rxrpc_call {
+ 	u8			cong_dup_acks;	/* Count of ACKs showing missing packets */
+ 	u8			cong_cumul_acks; /* Cumulative ACK count */
+ 	ktime_t			cong_tstamp;	/* Last time cwnd was changed */
++	struct sk_buff		*cong_last_nack; /* Last ACK with nacks received */
+ 
+ 	/* Receive-phase ACK management (ACKs we send). */
+ 	u8			ackr_reason;	/* reason to ACK */
+ 	u16			ackr_sack_base;	/* Starting slot in SACK table ring */
+-	rxrpc_serial_t		ackr_serial;	/* serial of packet being ACK'd */
+ 	rxrpc_seq_t		ackr_window;	/* Base of SACK window */
+ 	rxrpc_seq_t		ackr_wtop;	/* Base of SACK window */
+ 	unsigned int		ackr_nr_unacked; /* Number of unacked packets */
+@@ -727,7 +735,8 @@ struct rxrpc_call {
+ struct rxrpc_ack_summary {
+ 	u16			nr_acks;		/* Number of ACKs in packet */
+ 	u16			nr_new_acks;		/* Number of new ACKs in packet */
+-	u16			nr_rot_new_acks;	/* Number of rotated new ACKs */
++	u16			nr_new_nacks;		/* Number of new nacks in packet */
++	u16			nr_retained_nacks;	/* Number of nacks retained between ACKs */
+ 	u8			ack_reason;
+ 	bool			saw_nacks;		/* Saw NACKs in packet */
+ 	bool			new_low_nack;		/* T if new low NACK found */
+@@ -819,6 +828,20 @@ static inline bool rxrpc_sending_to_client(const struct rxrpc_txbuf *txb)
+ 
+ #include <trace/events/rxrpc.h>
+ 
++/*
++ * Allocate the next serial number on a connection.  0 must be skipped.
++ */
++static inline rxrpc_serial_t rxrpc_get_next_serial(struct rxrpc_connection *conn)
++{
++	rxrpc_serial_t serial;
++
++	serial = conn->tx_serial;
++	if (serial == 0)
++		serial = 1;
++	conn->tx_serial = serial + 1;
++	return serial;
++}
++
+ /*
+  * af_rxrpc.c
+  */
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index e363f21a20141..0f78544d043be 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -43,8 +43,6 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial,
+ 	unsigned long expiry = rxrpc_soft_ack_delay;
+ 	unsigned long now = jiffies, ack_at;
+ 
+-	call->ackr_serial = serial;
+-
+ 	if (rxrpc_soft_ack_delay < expiry)
+ 		expiry = rxrpc_soft_ack_delay;
+ 	if (call->peer->srtt_us != 0)
+@@ -114,6 +112,7 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call)
+ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb)
+ {
+ 	struct rxrpc_ackpacket *ack = NULL;
++	struct rxrpc_skb_priv *sp;
+ 	struct rxrpc_txbuf *txb;
+ 	unsigned long resend_at;
+ 	rxrpc_seq_t transmitted = READ_ONCE(call->tx_transmitted);
+@@ -141,14 +140,15 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb)
+ 	 * explicitly NAK'd packets.
+ 	 */
+ 	if (ack_skb) {
++		sp = rxrpc_skb(ack_skb);
+ 		ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header);
+ 
+-		for (i = 0; i < ack->nAcks; i++) {
++		for (i = 0; i < sp->nr_acks; i++) {
+ 			rxrpc_seq_t seq;
+ 
+ 			if (ack->acks[i] & 1)
+ 				continue;
+-			seq = ntohl(ack->firstPacket) + i;
++			seq = sp->first_ack + i;
+ 			if (after(txb->seq, transmitted))
+ 				break;
+ 			if (after(txb->seq, seq))
+@@ -373,7 +373,6 @@ static void rxrpc_send_initial_ping(struct rxrpc_call *call)
+ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb)
+ {
+ 	unsigned long now, next, t;
+-	rxrpc_serial_t ackr_serial;
+ 	bool resend = false, expired = false;
+ 	s32 abort_code;
+ 
+@@ -423,8 +422,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb)
+ 	if (time_after_eq(now, t)) {
+ 		trace_rxrpc_timer(call, rxrpc_timer_exp_ack, now);
+ 		cmpxchg(&call->delay_ack_at, t, now + MAX_JIFFY_OFFSET);
+-		ackr_serial = xchg(&call->ackr_serial, 0);
+-		rxrpc_send_ACK(call, RXRPC_ACK_DELAY, ackr_serial,
++		rxrpc_send_ACK(call, RXRPC_ACK_DELAY, 0,
+ 			       rxrpc_propose_ack_ping_for_lost_ack);
+ 	}
+ 
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 0943e54370ba0..9fc9a6c3f6858 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -686,6 +686,7 @@ static void rxrpc_destroy_call(struct work_struct *work)
+ 
+ 	del_timer_sync(&call->timer);
+ 
++	rxrpc_free_skb(call->cong_last_nack, rxrpc_skb_put_last_nack);
+ 	rxrpc_cleanup_ring(call);
+ 	while ((txb = list_first_entry_or_null(&call->tx_sendmsg,
+ 					       struct rxrpc_txbuf, call_link))) {
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 95f4bc206b3dc..1f251d758cb9d 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -95,6 +95,14 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ 
+ 	_enter("%d", conn->debug_id);
+ 
++	if (sp && sp->hdr.type == RXRPC_PACKET_TYPE_ACK) {
++		if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
++				  &pkt.ack, sizeof(pkt.ack)) < 0)
++			return;
++		if (pkt.ack.reason == RXRPC_ACK_PING_RESPONSE)
++			return;
++	}
++
+ 	chan = &conn->channels[channel];
+ 
+ 	/* If the last call got moved on whilst we were waiting to run, just
+@@ -117,7 +125,7 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ 	iov[2].iov_base	= &ack_info;
+ 	iov[2].iov_len	= sizeof(ack_info);
+ 
+-	serial = atomic_inc_return(&conn->serial);
++	serial = rxrpc_get_next_serial(conn);
+ 
+ 	pkt.whdr.epoch		= htonl(conn->proto.epoch);
+ 	pkt.whdr.cid		= htonl(conn->proto.cid | channel);
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 92495e73b8699..9691de00ade75 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -45,11 +45,9 @@ static void rxrpc_congestion_management(struct rxrpc_call *call,
+ 	}
+ 
+ 	cumulative_acks += summary->nr_new_acks;
+-	cumulative_acks += summary->nr_rot_new_acks;
+ 	if (cumulative_acks > 255)
+ 		cumulative_acks = 255;
+ 
+-	summary->mode = call->cong_mode;
+ 	summary->cwnd = call->cong_cwnd;
+ 	summary->ssthresh = call->cong_ssthresh;
+ 	summary->cumulative_acks = cumulative_acks;
+@@ -151,6 +149,7 @@ out_no_clear_ca:
+ 		cwnd = RXRPC_TX_MAX_WINDOW;
+ 	call->cong_cwnd = cwnd;
+ 	call->cong_cumul_acks = cumulative_acks;
++	summary->mode = call->cong_mode;
+ 	trace_rxrpc_congest(call, summary, acked_serial, change);
+ 	if (resend)
+ 		rxrpc_resend(call, skb);
+@@ -213,7 +212,6 @@ static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 	list_for_each_entry_rcu(txb, &call->tx_buffer, call_link, false) {
+ 		if (before_eq(txb->seq, call->acks_hard_ack))
+ 			continue;
+-		summary->nr_rot_new_acks++;
+ 		if (test_bit(RXRPC_TXBUF_LAST, &txb->flags)) {
+ 			set_bit(RXRPC_CALL_TX_LAST, &call->flags);
+ 			rot_last = true;
+@@ -254,6 +252,11 @@ static void rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
+ {
+ 	ASSERT(test_bit(RXRPC_CALL_TX_LAST, &call->flags));
+ 
++	if (unlikely(call->cong_last_nack)) {
++		rxrpc_free_skb(call->cong_last_nack, rxrpc_skb_put_last_nack);
++		call->cong_last_nack = NULL;
++	}
++
+ 	switch (__rxrpc_call_state(call)) {
+ 	case RXRPC_CALL_CLIENT_SEND_REQUEST:
+ 	case RXRPC_CALL_CLIENT_AWAIT_REPLY:
+@@ -702,6 +705,43 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb,
+ 		wake_up(&call->waitq);
+ }
+ 
++/*
++ * Determine how many nacks from the previous ACK have now been satisfied.
++ */
++static rxrpc_seq_t rxrpc_input_check_prev_ack(struct rxrpc_call *call,
++					      struct rxrpc_ack_summary *summary,
++					      rxrpc_seq_t seq)
++{
++	struct sk_buff *skb = call->cong_last_nack;
++	struct rxrpc_ackpacket ack;
++	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
++	unsigned int i, new_acks = 0, retained_nacks = 0;
++	rxrpc_seq_t old_seq = sp->first_ack;
++	u8 *acks = skb->data + sizeof(struct rxrpc_wire_header) + sizeof(ack);
++
++	if (after_eq(seq, old_seq + sp->nr_acks)) {
++		summary->nr_new_acks += sp->nr_nacks;
++		summary->nr_new_acks += seq - (old_seq + sp->nr_acks);
++		summary->nr_retained_nacks = 0;
++	} else if (seq == old_seq) {
++		summary->nr_retained_nacks = sp->nr_nacks;
++	} else {
++		for (i = 0; i < sp->nr_acks; i++) {
++			if (acks[i] == RXRPC_ACK_TYPE_NACK) {
++				if (before(old_seq + i, seq))
++					new_acks++;
++				else
++					retained_nacks++;
++			}
++		}
++
++		summary->nr_new_acks += new_acks;
++		summary->nr_retained_nacks = retained_nacks;
++	}
++
++	return old_seq + sp->nr_acks;
++}
++
+ /*
+  * Process individual soft ACKs.
+  *
+@@ -711,25 +751,51 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb,
+  * the timer on the basis that the peer might just not have processed them at
+  * the time the ACK was sent.
+  */
+-static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks,
+-				  rxrpc_seq_t seq, int nr_acks,
+-				  struct rxrpc_ack_summary *summary)
++static void rxrpc_input_soft_acks(struct rxrpc_call *call,
++				  struct rxrpc_ack_summary *summary,
++				  struct sk_buff *skb,
++				  rxrpc_seq_t seq,
++				  rxrpc_seq_t since)
+ {
+-	unsigned int i;
++	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
++	unsigned int i, old_nacks = 0;
++	rxrpc_seq_t lowest_nak = seq + sp->nr_acks;
++	u8 *acks = skb->data + sizeof(struct rxrpc_wire_header) + sizeof(struct rxrpc_ackpacket);
+ 
+-	for (i = 0; i < nr_acks; i++) {
++	for (i = 0; i < sp->nr_acks; i++) {
+ 		if (acks[i] == RXRPC_ACK_TYPE_ACK) {
+ 			summary->nr_acks++;
+-			summary->nr_new_acks++;
++			if (after_eq(seq, since))
++				summary->nr_new_acks++;
+ 		} else {
+-			if (!summary->saw_nacks &&
+-			    call->acks_lowest_nak != seq + i) {
+-				call->acks_lowest_nak = seq + i;
+-				summary->new_low_nack = true;
+-			}
+ 			summary->saw_nacks = true;
++			if (before(seq, since)) {
++				/* Overlap with previous ACK */
++				old_nacks++;
++			} else {
++				summary->nr_new_nacks++;
++				sp->nr_nacks++;
++			}
++
++			if (before(seq, lowest_nak))
++				lowest_nak = seq;
+ 		}
++		seq++;
++	}
++
++	if (lowest_nak != call->acks_lowest_nak) {
++		call->acks_lowest_nak = lowest_nak;
++		summary->new_low_nack = true;
+ 	}
++
++	/* We *can* have more nacks than we did - the peer is permitted to drop
++	 * packets it has soft-acked and re-request them.  Further, it is
++	 * possible for the nack distribution to change whilst the number of
++	 * nacks stays the same or goes down.
++	 */
++	if (old_nacks < summary->nr_retained_nacks)
++		summary->nr_new_acks += summary->nr_retained_nacks - old_nacks;
++	summary->nr_retained_nacks = old_nacks;
+ }
+ 
+ /*
+@@ -773,7 +839,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ 	struct rxrpc_ackinfo info;
+ 	rxrpc_serial_t ack_serial, acked_serial;
+-	rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt;
++	rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt, since;
+ 	int nr_acks, offset, ioffset;
+ 
+ 	_enter("");
+@@ -789,6 +855,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	prev_pkt = ntohl(ack.previousPacket);
+ 	hard_ack = first_soft_ack - 1;
+ 	nr_acks = ack.nAcks;
++	sp->first_ack = first_soft_ack;
++	sp->nr_acks = nr_acks;
+ 	summary.ack_reason = (ack.reason < RXRPC_ACK__INVALID ?
+ 			      ack.reason : RXRPC_ACK__INVALID);
+ 
+@@ -858,6 +926,16 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	if (nr_acks > 0)
+ 		skb_condense(skb);
+ 
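++	/* Work out how much of the previously cached soft-ACK table this
++	 * ACK supersedes before the cache is replaced below.
++	 */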
++	if (call->cong_last_nack) {
++		since = rxrpc_input_check_prev_ack(call, &summary, first_soft_ack);
++		rxrpc_free_skb(call->cong_last_nack, rxrpc_skb_put_last_nack);
++		call->cong_last_nack = NULL;
++	} else {
++		summary.nr_new_acks = first_soft_ack - call->acks_first_seq;
++		call->acks_lowest_nak = first_soft_ack + nr_acks;
++		since = first_soft_ack;
++	}
++
+ 	call->acks_latest_ts = skb->tstamp;
+ 	call->acks_first_seq = first_soft_ack;
+ 	call->acks_prev_seq = prev_pkt;
+@@ -866,7 +944,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	case RXRPC_ACK_PING:
+ 		break;
+ 	default:
+-		if (after(acked_serial, call->acks_highest_serial))
++		if (acked_serial && after(acked_serial, call->acks_highest_serial))
+ 			call->acks_highest_serial = acked_serial;
+ 		break;
+ 	}
+@@ -905,8 +983,9 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	if (nr_acks > 0) {
+ 		if (offset > (int)skb->len - nr_acks)
+ 			return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_short_sack);
+-		rxrpc_input_soft_acks(call, skb->data + offset, first_soft_ack,
+-				      nr_acks, &summary);
++		rxrpc_input_soft_acks(call, &summary, skb, first_soft_ack, since);
++		rxrpc_get_skb(skb, rxrpc_skb_get_last_nack);
++		call->cong_last_nack = skb;
+ 	}
+ 
+ 	if (test_bit(RXRPC_CALL_TX_LAST, &call->flags) &&
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index a0906145e8293..4a292f860ae37 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -216,7 +216,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
+ 	iov[0].iov_len	= sizeof(txb->wire) + sizeof(txb->ack) + n;
+ 	len = iov[0].iov_len;
+ 
+-	serial = atomic_inc_return(&conn->serial);
++	serial = rxrpc_get_next_serial(conn);
+ 	txb->wire.serial = htonl(serial);
+ 	trace_rxrpc_tx_ack(call->debug_id, serial,
+ 			   ntohl(txb->ack.firstPacket),
+@@ -302,7 +302,7 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call)
+ 	iov[0].iov_base	= &pkt;
+ 	iov[0].iov_len	= sizeof(pkt);
+ 
+-	serial = atomic_inc_return(&conn->serial);
++	serial = rxrpc_get_next_serial(conn);
+ 	pkt.whdr.serial = htonl(serial);
+ 
+ 	iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, sizeof(pkt));
+@@ -334,7 +334,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
+ 	_enter("%x,{%d}", txb->seq, txb->len);
+ 
+ 	/* Each transmission of a Tx packet needs a new serial number */
+-	serial = atomic_inc_return(&conn->serial);
++	serial = rxrpc_get_next_serial(conn);
+ 	txb->wire.serial = htonl(serial);
+ 
+ 	if (test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags) &&
+@@ -558,7 +558,7 @@ void rxrpc_send_conn_abort(struct rxrpc_connection *conn)
+ 
+ 	len = iov[0].iov_len + iov[1].iov_len;
+ 
+-	serial = atomic_inc_return(&conn->serial);
++	serial = rxrpc_get_next_serial(conn);
+ 	whdr.serial = htonl(serial);
+ 
+ 	iov_iter_kvec(&msg.msg_iter, WRITE, iov, 2, len);
+diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
+index 682636d3b060b..208312c244f6b 100644
+--- a/net/rxrpc/proc.c
++++ b/net/rxrpc/proc.c
+@@ -181,7 +181,7 @@ print:
+ 		   atomic_read(&conn->active),
+ 		   state,
+ 		   key_serial(conn->key),
+-		   atomic_read(&conn->serial),
++		   conn->tx_serial,
+ 		   conn->hi_serial,
+ 		   conn->channels[0].call_id,
+ 		   conn->channels[1].call_id,
+diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
+index b52dedcebce0a..6b32d61d4cdc4 100644
+--- a/net/rxrpc/rxkad.c
++++ b/net/rxrpc/rxkad.c
+@@ -664,7 +664,7 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn)
+ 
+ 	len = iov[0].iov_len + iov[1].iov_len;
+ 
+-	serial = atomic_inc_return(&conn->serial);
++	serial = rxrpc_get_next_serial(conn);
+ 	whdr.serial = htonl(serial);
+ 
+ 	ret = kernel_sendmsg(conn->local->socket, &msg, iov, 2, len);
+@@ -721,7 +721,7 @@ static int rxkad_send_response(struct rxrpc_connection *conn,
+ 
+ 	len = iov[0].iov_len + iov[1].iov_len + iov[2].iov_len;
+ 
+-	serial = atomic_inc_return(&conn->serial);
++	serial = rxrpc_get_next_serial(conn);
+ 	whdr.serial = htonl(serial);
+ 
+ 	rxrpc_local_dont_fragment(conn->local, false);
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index 2cde375477e38..878415c435276 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -1086,6 +1086,12 @@ int tipc_nl_bearer_add(struct sk_buff *skb, struct genl_info *info)
+ 
+ #ifdef CONFIG_TIPC_MEDIA_UDP
+ 	if (attrs[TIPC_NLA_BEARER_UDP_OPTS]) {
++		if (b->media->type_id != TIPC_MEDIA_TYPE_UDP) {
++			rtnl_unlock();
++			NL_SET_ERR_MSG(info->extack, "UDP option is unsupported");
++			return -EINVAL;
++		}
++
+ 		err = tipc_udp_nl_bearer_add(b,
+ 					     attrs[TIPC_NLA_BEARER_UDP_OPTS]);
+ 		if (err) {
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 2405f0f9af31c..8f63f0b4bf012 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -314,6 +314,17 @@ void unix_gc(void)
+ 	/* Here we are. Hitlist is filled. Die. */
+ 	__skb_queue_purge(&hitlist);
+ 
++#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
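++	/* Out-of-band skbs still queued on GC candidates may pin file
++	 * references of their own; free them now that the sockets are
++	 * going away.
++	 */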
++	list_for_each_entry_safe(u, next, &gc_candidates, link) {
++		struct sk_buff *skb = u->oob_skb;
++
++		if (skb) {
++			u->oob_skb = NULL;
++			kfree_skb(skb);
++		}
++	}
++#endif
++
+ 	spin_lock(&unix_gc_lock);
+ 
+ 	/* There could be io_uring registered files, just push them back to
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index b9da6f5152cbc..3f49f5c69916f 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1725,6 +1725,61 @@ static void cfg80211_update_hidden_bsses(struct cfg80211_internal_bss *known,
+ 	}
+ }
+ 
++static void cfg80211_check_stuck_ecsa(struct cfg80211_registered_device *rdev,
++				      struct cfg80211_internal_bss *known,
++				      const struct cfg80211_bss_ies *old)
++{
++	const struct ieee80211_ext_chansw_ie *ecsa;
++	const struct element *elem_new, *elem_old;
++	const struct cfg80211_bss_ies *new, *bcn;
++
++	if (known->pub.proberesp_ecsa_stuck)
++		return;
++
++	new = rcu_dereference_protected(known->pub.proberesp_ies,
++					lockdep_is_held(&rdev->bss_lock));
++	if (WARN_ON(!new))
++		return;
++
++	if (new->tsf - old->tsf < USEC_PER_SEC)
++		return;
++
++	elem_old = cfg80211_find_elem(WLAN_EID_EXT_CHANSWITCH_ANN,
++				      old->data, old->len);
++	if (!elem_old)
++		return;
++
++	elem_new = cfg80211_find_elem(WLAN_EID_EXT_CHANSWITCH_ANN,
++				      new->data, new->len);
++	if (!elem_new)
++		return;
++
++	bcn = rcu_dereference_protected(known->pub.beacon_ies,
++					lockdep_is_held(&rdev->bss_lock));
++	if (bcn &&
++	    cfg80211_find_elem(WLAN_EID_EXT_CHANSWITCH_ANN,
++			       bcn->data, bcn->len))
++		return;
++
++	if (elem_new->datalen != elem_old->datalen)
++		return;
++	if (elem_new->datalen < sizeof(struct ieee80211_ext_chansw_ie))
++		return;
++	if (memcmp(elem_new->data, elem_old->data, elem_new->datalen))
++		return;
++
++	ecsa = (void *)elem_new->data;
++
++	if (!ecsa->mode)
++		return;
++
++	if (ecsa->new_ch_num !=
++	    ieee80211_frequency_to_channel(known->pub.channel->center_freq))
++		return;
++
++	known->pub.proberesp_ecsa_stuck = 1;
++}
++
+ static bool
+ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 			  struct cfg80211_internal_bss *known,
+@@ -1744,9 +1799,13 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 		/* Override possible earlier Beacon frame IEs */
+ 		rcu_assign_pointer(known->pub.ies,
+ 				   new->pub.proberesp_ies);
+-		if (old)
++		if (old) {
++			cfg80211_check_stuck_ecsa(rdev, known, old);
+ 			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+-	} else if (rcu_access_pointer(new->pub.beacon_ies)) {
++		}
++	}
++
++	if (rcu_access_pointer(new->pub.beacon_ies)) {
+ 		const struct cfg80211_bss_ies *old;
+ 
+ 		if (known->pub.hidden_beacon_bss &&
+diff --git a/sound/soc/amd/acp-config.c b/sound/soc/amd/acp-config.c
+index dea6d367b9e82..3bc4b2e41650e 100644
+--- a/sound/soc/amd/acp-config.c
++++ b/sound/soc/amd/acp-config.c
+@@ -3,7 +3,7 @@
+ // This file is provided under a dual BSD/GPLv2 license. When using or
+ // redistributing this file, you may do so under either license.
+ //
+-// Copyright(c) 2021, 2023 Advanced Micro Devices, Inc.
++// Copyright(c) 2021 Advanced Micro Devices, Inc.
+ //
+ // Authors: Ajit Kumar Pandey <AjitKumar.Pandey@amd.com>
+ //
+@@ -47,19 +47,6 @@ static const struct config_entry config_table[] = {
+ 			{}
+ 		},
+ 	},
+-	{
+-		.flags = FLAG_AMD_LEGACY,
+-		.device = ACP_PCI_DEV_ID,
+-		.dmi_table = (const struct dmi_system_id []) {
+-			{
+-				.matches = {
+-					DMI_MATCH(DMI_SYS_VENDOR, "Valve"),
+-					DMI_MATCH(DMI_PRODUCT_NAME, "Jupiter"),
+-				},
+-			},
+-			{}
+-		},
+-	},
+ 	{
+ 		.flags = FLAG_AMD_SOF,
+ 		.device = ACP_PCI_DEV_ID,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 07cc6a201579a..09712e61c606e 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2031,10 +2031,14 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++	DEVICE_FLG(0x0499, 0x3108, /* Yamaha YIT-W12TX */
++		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x04d8, 0xfeea, /* Benchmark DAC1 Pre */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x04e8, 0xa051, /* Samsung USBC Headset (AKG) */
+ 		   QUIRK_FLAG_SKIP_CLOCK_SELECTOR | QUIRK_FLAG_CTL_MSG_DELAY_5M),
++	DEVICE_FLG(0x0525, 0xa4ad, /* Hamedal C20 usb camera */
++		   QUIRK_FLAG_IFACE_SKIP_CLOSE),
+ 	DEVICE_FLG(0x054c, 0x0b8c, /* Sony WALKMAN NW-A45 DAC */
+ 		   QUIRK_FLAG_SET_IFACE_FIRST),
+ 	DEVICE_FLG(0x0556, 0x0014, /* Phoenix Audio TMX320VC */
+@@ -2073,14 +2077,22 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x0763, 0x2031, /* M-Audio Fast Track C600 */
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++	DEVICE_FLG(0x07fd, 0x000b, /* MOTU M Series 2nd hardware revision */
++		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x08bb, 0x2702, /* LineX FM Transmitter */
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x0951, 0x16ad, /* Kingston HyperX */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x0b0e, 0x0349, /* Jabra 550a */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
++		   QUIRK_FLAG_FIXED_RATE),
++	DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
++		   QUIRK_FLAG_FIXED_RATE),
+ 	DEVICE_FLG(0x0fd9, 0x0008, /* Hauppauge HVR-950Q */
+ 		   QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
++	DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
++		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x1397, 0x0507, /* Behringer UMC202HD */
+@@ -2113,6 +2125,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ 	DEVICE_FLG(0x1901, 0x0191, /* GE B850V3 CP2114 audio interface */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */
++		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x1bcf, 0x2283, /* NexiGo N930AF FHD Webcam */
++		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x2040, 0x7200, /* Hauppauge HVR-950Q */
+ 		   QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ 	DEVICE_FLG(0x2040, 0x7201, /* Hauppauge HVR-950Q-MXL */
+@@ -2155,6 +2171,12 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */
++		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++	DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */
++		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++	DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */
++		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+@@ -2163,22 +2185,6 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_ALIGN_TRANSFER),
+ 	DEVICE_FLG(0x534d, 0x2109, /* MacroSilicon MS2109 */
+ 		   QUIRK_FLAG_ALIGN_TRANSFER),
+-	DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+-		   QUIRK_FLAG_GET_SAMPLE_RATE),
+-	DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */
+-		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+-	DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */
+-		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+-	DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */
+-		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+-	DEVICE_FLG(0x0525, 0xa4ad, /* Hamedal C20 usb camero */
+-		   QUIRK_FLAG_IFACE_SKIP_CLOSE),
+-	DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+-		   QUIRK_FLAG_FIXED_RATE),
+-	DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
+-		   QUIRK_FLAG_FIXED_RATE),
+-	DEVICE_FLG(0x1bcf, 0x2283, /* NexiGo N930AF FHD Webcam */
+-		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 
+ 	/* Vendor matches */
+ 	VENDOR_FLG(0x045e, /* MS Lifecam */
+diff --git a/tools/perf/tests/shell/script.sh b/tools/perf/tests/shell/script.sh
+new file mode 100755
+index 0000000000000..2973adab445d2
+--- /dev/null
++++ b/tools/perf/tests/shell/script.sh
+@@ -0,0 +1,73 @@
++#!/bin/sh
++# perf script tests
++# SPDX-License-Identifier: GPL-2.0
++
++set -e
++
++temp_dir=$(mktemp -d /tmp/perf-test-script.XXXXXXXXXX)
++
++perfdatafile="${temp_dir}/perf.data"
++db_test="${temp_dir}/db_test.py"
++
++err=0
++
++cleanup()
++{
++	trap - EXIT TERM INT
++	sane=$(echo "${temp_dir}" | cut -b 1-21)
++	if [ "${sane}" = "/tmp/perf-test-script" ] ; then
++		echo "--- Cleaning up ---"
++		rm -f "${temp_dir}/"*
++		rmdir "${temp_dir}"
++	fi
++}
++
++trap_cleanup()
++{
++	cleanup
++	exit 1
++}
++
++trap trap_cleanup EXIT TERM INT
++
++
++test_db()
++{
++	echo "DB test"
++
++	# Check if python script is supported
++	libpython=$(perf version --build-options | grep python | grep -cv OFF)
++	if [ "${libpython}" != "1" ] ; then
++		echo "SKIP: python scripting is not supported"
++		err=2
++		return
++	fi
++
++	cat << "_end_of_file_" > "${db_test}"
++perf_db_export_mode = True
++perf_db_export_calls = False
++perf_db_export_callchains = True
++
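++# In DB export mode, perf script drives the handlers below: sample_table()
++# once per sample and call_path_table() once per exported call path.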
++def sample_table(*args):
++    print(f'sample_table({args})')
++
++def call_path_table(*args):
++    print(f'call_path_table({args})')
++_end_of_file_
++	case $(uname -m)
++	in s390x)
++		cmd_flags="--call-graph dwarf -e cpu-clock";;
++	*)
++		cmd_flags="-g";;
++	esac
++
++	perf record $cmd_flags -o "${perfdatafile}" true
++	perf script -i "${perfdatafile}" -s "${db_test}"
++	echo "DB test [Success]"
++}
++
++test_db
++
++cleanup
++
++exit $err
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index e36da58522efb..b0ed14a6da9d4 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -103,7 +103,14 @@ struct evlist *evlist__new_default(void)
+ 	err = parse_event(evlist, can_profile_kernel ? "cycles:P" : "cycles:Pu");
+ 	if (err) {
+ 		evlist__delete(evlist);
+-		evlist = NULL;
++		return NULL;
++	}
++
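++	/* parse_event() may have opened one cycles event per core PMU (e.g.
++	 * on a hybrid CPU); add sample IDs so their samples can be told apart.
++	 */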
++	if (evlist->core.nr_entries > 1) {
++		struct evsel *evsel;
++
++		evlist__for_each_entry(evlist, evsel)
++			evsel__set_sample_id(evsel, /*can_sample_identifier=*/false);
+ 	}
+ 
+ 	return evlist;
+diff --git a/tools/testing/selftests/core/close_range_test.c b/tools/testing/selftests/core/close_range_test.c
+index 534576f06df1c..c59e4adb905df 100644
+--- a/tools/testing/selftests/core/close_range_test.c
++++ b/tools/testing/selftests/core/close_range_test.c
+@@ -12,6 +12,7 @@
+ #include <syscall.h>
+ #include <unistd.h>
+ #include <sys/resource.h>
++#include <linux/close_range.h>
+ 
+ #include "../kselftest_harness.h"
+ #include "../clone3/clone3_selftests.h"
+diff --git a/tools/testing/selftests/net/big_tcp.sh b/tools/testing/selftests/net/big_tcp.sh
+index cde9a91c47971..2db9d15cd45fe 100755
+--- a/tools/testing/selftests/net/big_tcp.sh
++++ b/tools/testing/selftests/net/big_tcp.sh
+@@ -122,7 +122,9 @@ do_netperf() {
+ 	local netns=$1
+ 
+ 	[ "$NF" = "6" ] && serip=$SERVER_IP6
+-	ip net exec $netns netperf -$NF -t TCP_STREAM -H $serip 2>&1 >/dev/null
++
++	# use large write to be sure to generate big tcp packets
++	ip net exec $netns netperf -$NF -t TCP_STREAM -l 1 -H $serip -- -m 262144 2>&1 >/dev/null
+ }
+ 
+ do_test() {
+diff --git a/tools/testing/selftests/net/cmsg_ipv6.sh b/tools/testing/selftests/net/cmsg_ipv6.sh
+index 330d0b1ceced3..c921750ca118d 100755
+--- a/tools/testing/selftests/net/cmsg_ipv6.sh
++++ b/tools/testing/selftests/net/cmsg_ipv6.sh
+@@ -91,7 +91,7 @@ for ovr in setsock cmsg both diff; do
+ 	check_result $? 0 "TCLASS $prot $ovr - pass"
+ 
+ 	while [ -d /proc/$BG ]; do
+-	    $NSEXE ./cmsg_sender -6 -p u $TGT6 1234
++	    $NSEXE ./cmsg_sender -6 -p $p $m $((TOS2)) $TGT6 1234
+ 	done
+ 
+ 	tcpdump -r $TMPF -v 2>&1 | grep "class $TOS2" >> /dev/null
+@@ -128,7 +128,7 @@ for ovr in setsock cmsg both diff; do
+ 	check_result $? 0 "HOPLIMIT $prot $ovr - pass"
+ 
+ 	while [ -d /proc/$BG ]; do
+-	    $NSEXE ./cmsg_sender -6 -p u $TGT6 1234
++	    $NSEXE ./cmsg_sender -6 -p $p $m $LIM $TGT6 1234
+ 	done
+ 
+ 	tcpdump -r $TMPF -v 2>&1 | grep "hlim $LIM[^0-9]" >> /dev/null
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 4a5f031be2324..d65fdd407d73f 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Check that route PMTU values match expectations, and that initial device MTU
+@@ -198,8 +198,8 @@
+ # - pmtu_ipv6_route_change
+ #	Same as above but with IPv6
+ 
+-# Kselftest framework requirement - SKIP code is 4.
+-ksft_skip=4
++source lib.sh
++source net_helper.sh
+ 
+ PAUSE_ON_FAIL=no
+ VERBOSE=0
+@@ -268,16 +268,6 @@ tests="
+ 	pmtu_ipv4_route_change		ipv4: PMTU exception w/route replace	1
+ 	pmtu_ipv6_route_change		ipv6: PMTU exception w/route replace	1"
+ 
+-NS_A="ns-A"
+-NS_B="ns-B"
+-NS_C="ns-C"
+-NS_R1="ns-R1"
+-NS_R2="ns-R2"
+-ns_a="ip netns exec ${NS_A}"
+-ns_b="ip netns exec ${NS_B}"
+-ns_c="ip netns exec ${NS_C}"
+-ns_r1="ip netns exec ${NS_R1}"
+-ns_r2="ip netns exec ${NS_R2}"
+ # Addressing and routing for tests with routers: four network segments, with
+ # index SEGMENT between 1 and 4, a common prefix (PREFIX4 or PREFIX6) and an
+ # identifier ID, which is 1 for hosts (A and B), 2 for routers (R1 and R2).
+@@ -543,13 +533,17 @@ setup_ip6ip6() {
+ }
+ 
+ setup_namespaces() {
++	setup_ns NS_A NS_B NS_C NS_R1 NS_R2
+ 	for n in ${NS_A} ${NS_B} ${NS_C} ${NS_R1} ${NS_R2}; do
+-		ip netns add ${n} || return 1
+-
+ 		# Disable DAD, so that we don't have to wait to use the
+ 		# configured IPv6 addresses
+ 		ip netns exec ${n} sysctl -q net/ipv6/conf/default/accept_dad=0
+ 	done
++	ns_a="ip netns exec ${NS_A}"
++	ns_b="ip netns exec ${NS_B}"
++	ns_c="ip netns exec ${NS_C}"
++	ns_r1="ip netns exec ${NS_R1}"
++	ns_r2="ip netns exec ${NS_R2}"
+ }
+ 
+ setup_veth() {
+@@ -839,7 +833,7 @@ setup_bridge() {
+ 	run_cmd ${ns_a} ip link set br0 up
+ 
+ 	run_cmd ${ns_c} ip link add veth_C-A type veth peer name veth_A-C
+-	run_cmd ${ns_c} ip link set veth_A-C netns ns-A
++	run_cmd ${ns_c} ip link set veth_A-C netns ${NS_A}
+ 
+ 	run_cmd ${ns_a} ip link set veth_A-C up
+ 	run_cmd ${ns_c} ip link set veth_C-A up
+@@ -944,9 +938,7 @@ cleanup() {
+ 	done
+ 	socat_pids=
+ 
+-	for n in ${NS_A} ${NS_B} ${NS_C} ${NS_R1} ${NS_R2}; do
+-		ip netns del ${n} 2> /dev/null
+-	done
++	cleanup_all_ns
+ 
+ 	ip link del veth_A-C			2>/dev/null
+ 	ip link del veth_A-R1			2>/dev/null
+@@ -1345,13 +1337,15 @@ test_pmtu_ipvX_over_bridged_vxlanY_or_geneveY_exception() {
+ 			TCPDST="TCP:[${dst}]:50000"
+ 		fi
+ 		${ns_b} socat -T 3 -u -6 TCP-LISTEN:50000 STDOUT > $tmpoutfile &
++		local socat_pid=$!
+ 
+-		sleep 1
++		wait_local_port_listen ${NS_B} 50000 tcp
+ 
+ 		dd if=/dev/zero status=none bs=1M count=1 | ${target} socat -T 3 -u STDIN $TCPDST,connect-timeout=3
+ 
+ 		size=$(du -sb $tmpoutfile)
+ 		size=${size%%/tmp/*}
++		wait ${socat_pid}
+ 
+ 		[ $size -ne 1048576 ] && err "File size $size mismatches expected value in locally bridged vxlan test" && return 1
+ 	done
+@@ -1963,6 +1957,13 @@ check_command() {
+ 	return 0
+ }
+ 
++check_running() {
++	pid=${1}
++	cmd=${2}
++
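++
++	# /proc/<pid>/cmdline separates arguments with NUL bytes; strip them
++	# and compare the concatenated result with the expected command.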
++	[ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "${cmd}" ]
++}
++
+ test_cleanup_vxlanX_exception() {
+ 	outer="${1}"
+ 	encap="vxlan"
+@@ -1993,11 +1994,12 @@ test_cleanup_vxlanX_exception() {
+ 
+ 	${ns_a} ip link del dev veth_A-R1 &
+ 	iplink_pid=$!
+-	sleep 1
+-	if [ "$(cat /proc/${iplink_pid}/cmdline 2>/dev/null | tr -d '\0')" = "iplinkdeldevveth_A-R1" ]; then
+-		err "  can't delete veth device in a timely manner, PMTU dst likely leaked"
+-		return 1
+-	fi
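++	# Poll for completion (up to 20 x 0.1s) rather than sleeping a fixed
++	# second and sampling once.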
++	for i in $(seq 1 20); do
++		check_running ${iplink_pid} "iplinkdeldevveth_A-R1" || return 0
++		sleep 0.1
++	done
++	err "  can't delete veth device in a timely manner, PMTU dst likely leaked"
++	return 1
+ }
+ 
+ test_cleanup_ipv6_exception() {
+diff --git a/tools/testing/selftests/net/udpgro_fwd.sh b/tools/testing/selftests/net/udpgro_fwd.sh
+index d6b9c759043ca..9cd5e885e91f7 100755
+--- a/tools/testing/selftests/net/udpgro_fwd.sh
++++ b/tools/testing/selftests/net/udpgro_fwd.sh
+@@ -39,6 +39,10 @@ create_ns() {
+ 	for ns in $NS_SRC $NS_DST; do
+ 		ip netns add $ns
+ 		ip -n $ns link set dev lo up
++
++		# disable router solicitations to decrease 'noise' traffic
++		ip netns exec $ns sysctl -qw net.ipv6.conf.default.router_solicitations=0
++		ip netns exec $ns sysctl -qw net.ipv6.conf.all.router_solicitations=0
+ 	done
+ 
+ 	ip link add name veth$SRC type veth peer name veth$DST
+@@ -80,6 +84,12 @@ create_vxlan_pair() {
+ 		create_vxlan_endpoint $BASE$ns veth$ns $BM_NET_V6$((3 - $ns)) vxlan6$ns 6
+ 		ip -n $BASE$ns addr add dev vxlan6$ns $OL_NET_V6$ns/24 nodad
+ 	done
++
++	# preload neighbour cache, to avoid some noisy traffic
++	local addr_dst=$(ip -j -n $BASE$DST link show dev vxlan6$DST  |jq -r '.[]["address"]')
++	local addr_src=$(ip -j -n $BASE$SRC link show dev vxlan6$SRC  |jq -r '.[]["address"]')
++	ip -n $BASE$DST neigh add dev vxlan6$DST lladdr $addr_src $OL_NET_V6$SRC
++	ip -n $BASE$SRC neigh add dev vxlan6$SRC lladdr $addr_dst $OL_NET_V6$DST
+ }
+ 
+ is_ipv6() {
+@@ -119,7 +129,7 @@ run_test() {
+ 	# not enable GRO
+ 	ip netns exec $NS_DST $ipt -A INPUT -p udp --dport 4789
+ 	ip netns exec $NS_DST $ipt -A INPUT -p udp --dport 8000
+-	ip netns exec $NS_DST ./udpgso_bench_rx -C 1000 -R 10 -n 10 -l 1300 $rx_args &
++	ip netns exec $NS_DST ./udpgso_bench_rx -C 2000 -R 100 -n 10 -l 1300 $rx_args &
+ 	local spid=$!
+ 	wait_local_port_listen "$NS_DST" 8000 udp
+ 	ip netns exec $NS_SRC ./udpgso_bench_tx $family -M 1 -s 13000 -S 1300 -D $dst
+@@ -168,7 +178,7 @@ run_bench() {
+ 	# bind the sender and the receiver to different CPUs to try
+ 	# get reproducible results
+ 	ip netns exec $NS_DST bash -c "echo 2 > /sys/class/net/veth$DST/queues/rx-0/rps_cpus"
+-	ip netns exec $NS_DST taskset 0x2 ./udpgso_bench_rx -C 1000 -R 10  &
++	ip netns exec $NS_DST taskset 0x2 ./udpgso_bench_rx -C 2000 -R 100  &
+ 	local spid=$!
+ 	wait_local_port_listen "$NS_DST" 8000 udp
+ 	ip netns exec $NS_SRC taskset 0x1 ./udpgso_bench_tx $family -l 3 -S 1300 -D $dst
+diff --git a/tools/testing/selftests/net/udpgso_bench_rx.c b/tools/testing/selftests/net/udpgso_bench_rx.c
+index f35a924d4a303..1cbadd267c963 100644
+--- a/tools/testing/selftests/net/udpgso_bench_rx.c
++++ b/tools/testing/selftests/net/udpgso_bench_rx.c
+@@ -375,7 +375,7 @@ static void do_recv(void)
+ 			do_flush_udp(fd);
+ 
+ 		tnow = gettimeofday_ms();
+-		if (tnow > treport) {
++		if (!cfg_expected_pkt_nr && tnow > treport) {
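++			/* When an exact packet count is expected, periodic
++			 * rate reports would only add noise; skip them.
++			 */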
+ 			if (packets)
+ 				fprintf(stderr,
+ 					"%s rx: %6lu MB/s %8lu calls/s\n",
+diff --git a/tools/testing/selftests/net/unicast_extensions.sh b/tools/testing/selftests/net/unicast_extensions.sh
+index 2d10ccac898a7..f52aa5f7da524 100755
+--- a/tools/testing/selftests/net/unicast_extensions.sh
++++ b/tools/testing/selftests/net/unicast_extensions.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # By Seth Schoen (c) 2021, for the IPv4 Unicast Extensions Project
+@@ -28,8 +28,7 @@
+ # These tests provide an easy way to flip the expected result of any
+ # of these behaviors for testing kernel patches that change them.
+ 
+-# Kselftest framework requirement - SKIP code is 4.
+-ksft_skip=4
++source lib.sh
+ 
+ # nettest can be run from PATH or from same directory as this selftest
+ if ! which nettest >/dev/null; then
+@@ -61,20 +60,20 @@ _do_segmenttest(){
+ 	# foo --- bar
+ 	# Arguments: ip_a ip_b prefix_length test_description
+ 	#
+-	# Caller must set up foo-ns and bar-ns namespaces
++	# Caller must set up $foo_ns and $bar_ns namespaces
+ 	# containing linked veth devices foo and bar,
+ 	# respectively.
+ 
+-	ip -n foo-ns address add $1/$3 dev foo || return 1
+-	ip -n foo-ns link set foo up || return 1
+-	ip -n bar-ns address add $2/$3 dev bar || return 1
+-	ip -n bar-ns link set bar up || return 1
++	ip -n $foo_ns address add $1/$3 dev foo || return 1
++	ip -n $foo_ns link set foo up || return 1
++	ip -n $bar_ns address add $2/$3 dev bar || return 1
++	ip -n $bar_ns link set bar up || return 1
+ 
+-	ip netns exec foo-ns timeout 2 ping -c 1 $2 || return 1
+-	ip netns exec bar-ns timeout 2 ping -c 1 $1 || return 1
++	ip netns exec $foo_ns timeout 2 ping -c 1 $2 || return 1
++	ip netns exec $bar_ns timeout 2 ping -c 1 $1 || return 1
+ 
+-	nettest -B -N bar-ns -O foo-ns -r $1 || return 1
+-	nettest -B -N foo-ns -O bar-ns -r $2 || return 1
++	nettest -B -N $bar_ns -O $foo_ns -r $1 || return 1
++	nettest -B -N $foo_ns -O $bar_ns -r $2 || return 1
+ 
+ 	return 0
+ }
+@@ -88,31 +87,31 @@ _do_route_test(){
+ 	# Arguments: foo_ip foo1_ip bar1_ip bar_ip prefix_len test_description
+ 	# Displays test result and returns success or failure.
+ 
+-	# Caller must set up foo-ns, bar-ns, and router-ns
++	# Caller must set up $foo_ns, $bar_ns, and $router_ns
+ 	# containing linked veth devices foo-foo1, bar1-bar
+-	# (foo in foo-ns, foo1 and bar1 in router-ns, and
+-	# bar in bar-ns).
+-
+-	ip -n foo-ns address add $1/$5 dev foo || return 1
+-	ip -n foo-ns link set foo up || return 1
+-	ip -n foo-ns route add default via $2 || return 1
+-	ip -n bar-ns address add $4/$5 dev bar || return 1
+-	ip -n bar-ns link set bar up || return 1
+-	ip -n bar-ns route add default via $3 || return 1
+-	ip -n router-ns address add $2/$5 dev foo1 || return 1
+-	ip -n router-ns link set foo1 up || return 1
+-	ip -n router-ns address add $3/$5 dev bar1 || return 1
+-	ip -n router-ns link set bar1 up || return 1
+-
+-	echo 1 | ip netns exec router-ns tee /proc/sys/net/ipv4/ip_forward
+-
+-	ip netns exec foo-ns timeout 2 ping -c 1 $2 || return 1
+-	ip netns exec foo-ns timeout 2 ping -c 1 $4 || return 1
+-	ip netns exec bar-ns timeout 2 ping -c 1 $3 || return 1
+-	ip netns exec bar-ns timeout 2 ping -c 1 $1 || return 1
+-
+-	nettest -B -N bar-ns -O foo-ns -r $1 || return 1
+-	nettest -B -N foo-ns -O bar-ns -r $4 || return 1
++	# (foo in $foo_ns, foo1 and bar1 in $router_ns, and
++	# bar in $bar_ns).
++
++	ip -n $foo_ns address add $1/$5 dev foo || return 1
++	ip -n $foo_ns link set foo up || return 1
++	ip -n $foo_ns route add default via $2 || return 1
++	ip -n $bar_ns address add $4/$5 dev bar || return 1
++	ip -n $bar_ns link set bar up || return 1
++	ip -n $bar_ns route add default via $3 || return 1
++	ip -n $router_ns address add $2/$5 dev foo1 || return 1
++	ip -n $router_ns link set foo1 up || return 1
++	ip -n $router_ns address add $3/$5 dev bar1 || return 1
++	ip -n $router_ns link set bar1 up || return 1
++
++	echo 1 | ip netns exec $router_ns tee /proc/sys/net/ipv4/ip_forward
++
++	ip netns exec $foo_ns timeout 2 ping -c 1 $2 || return 1
++	ip netns exec $foo_ns timeout 2 ping -c 1 $4 || return 1
++	ip netns exec $bar_ns timeout 2 ping -c 1 $3 || return 1
++	ip netns exec $bar_ns timeout 2 ping -c 1 $1 || return 1
++
++	nettest -B -N $bar_ns -O $foo_ns -r $1 || return 1
++	nettest -B -N $foo_ns -O $bar_ns -r $4 || return 1
+ 
+ 	return 0
+ }
+@@ -121,17 +120,15 @@ segmenttest(){
+ 	# Sets up veth link and tries to connect over it.
+ 	# Arguments: ip_a ip_b prefix_len test_description
+ 	hide_output
+-	ip netns add foo-ns
+-	ip netns add bar-ns
+-	ip link add foo netns foo-ns type veth peer name bar netns bar-ns
++	setup_ns foo_ns bar_ns
++	ip link add foo netns $foo_ns type veth peer name bar netns $bar_ns
+ 
+ 	test_result=0
+ 	_do_segmenttest "$@" || test_result=1
+ 
+-	ip netns pids foo-ns | xargs -r kill -9
+-	ip netns pids bar-ns | xargs -r kill -9
+-	ip netns del foo-ns
+-	ip netns del bar-ns
++	ip netns pids $foo_ns | xargs -r kill -9
++	ip netns pids $bar_ns | xargs -r kill -9
++	cleanup_ns $foo_ns $bar_ns
+ 	show_output
+ 
+ 	# inverted tests will expect failure instead of success
+@@ -147,21 +144,17 @@ route_test(){
+ 	# Returns success or failure.
+ 
+ 	hide_output
+-	ip netns add foo-ns
+-	ip netns add bar-ns
+-	ip netns add router-ns
+-	ip link add foo netns foo-ns type veth peer name foo1 netns router-ns
+-	ip link add bar netns bar-ns type veth peer name bar1 netns router-ns
++	setup_ns foo_ns bar_ns router_ns
++	ip link add foo netns $foo_ns type veth peer name foo1 netns $router_ns
++	ip link add bar netns $bar_ns type veth peer name bar1 netns $router_ns
+ 
+ 	test_result=0
+ 	_do_route_test "$@" || test_result=1
+ 
+-	ip netns pids foo-ns | xargs -r kill -9
+-	ip netns pids bar-ns | xargs -r kill -9
+-	ip netns pids router-ns | xargs -r kill -9
+-	ip netns del foo-ns
+-	ip netns del bar-ns
+-	ip netns del router-ns
++	ip netns pids $foo_ns | xargs -r kill -9
++	ip netns pids $bar_ns | xargs -r kill -9
++	ip netns pids $router_ns | xargs -r kill -9
++	cleanup_ns $foo_ns $bar_ns $router_ns
+ 
+ 	show_output
+ 



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-23 12:33 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-23 12:33 UTC (permalink / raw
  To: gentoo-commits

commit:     b2625a5bc42f1172c105a7321155d3ebfd03a740
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 23 12:33:02 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 23 12:33:02 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2625a5b

Linux patch 6.7.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1005_linux-6.7.6.patch | 14195 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 14199 insertions(+)

diff --git a/0000_README b/0000_README
index db01c0e3..4a76a3a5 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-6.7.5.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.5
 
+Patch:  1005_linux-6.7.6.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.6
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1005_linux-6.7.6.patch b/1005_linux-6.7.6.patch
new file mode 100644
index 00000000..072d4262
--- /dev/null
+++ b/1005_linux-6.7.6.patch
@@ -0,0 +1,14195 @@
+diff --git a/Documentation/ABI/testing/sysfs-class-net-statistics b/Documentation/ABI/testing/sysfs-class-net-statistics
+index 55db27815361b..53e508c6936a5 100644
+--- a/Documentation/ABI/testing/sysfs-class-net-statistics
++++ b/Documentation/ABI/testing/sysfs-class-net-statistics
+@@ -1,4 +1,4 @@
+-What:		/sys/class/<iface>/statistics/collisions
++What:		/sys/class/net/<iface>/statistics/collisions
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -6,7 +6,7 @@ Description:
+ 		Indicates the number of collisions seen by this network device.
+ 		This value might not be relevant with all MAC layers.
+ 
+-What:		/sys/class/<iface>/statistics/multicast
++What:		/sys/class/net/<iface>/statistics/multicast
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -14,7 +14,7 @@ Description:
+ 		Indicates the number of multicast packets received by this
+ 		network device.
+ 
+-What:		/sys/class/<iface>/statistics/rx_bytes
++What:		/sys/class/net/<iface>/statistics/rx_bytes
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -23,7 +23,7 @@ Description:
+ 		See the network driver for the exact meaning of when this
+ 		value is incremented.
+ 
+-What:		/sys/class/<iface>/statistics/rx_compressed
++What:		/sys/class/net/<iface>/statistics/rx_compressed
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -32,7 +32,7 @@ Description:
+ 		network device. This value might only be relevant for interfaces
+ 		that support packet compression (e.g: PPP).
+ 
+-What:		/sys/class/<iface>/statistics/rx_crc_errors
++What:		/sys/class/net/<iface>/statistics/rx_crc_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -41,7 +41,7 @@ Description:
+ 		by this network device. Note that the specific meaning might
+ 		depend on the MAC layer used by the interface.
+ 
+-What:		/sys/class/<iface>/statistics/rx_dropped
++What:		/sys/class/net/<iface>/statistics/rx_dropped
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -51,7 +51,7 @@ Description:
+ 		packet processing. See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_errors
++What:		/sys/class/net/<iface>/statistics/rx_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -59,7 +59,7 @@ Description:
+ 		Indicates the number of receive errors on this network device.
+ 		See the network driver for the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_fifo_errors
++What:		/sys/class/net/<iface>/statistics/rx_fifo_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -68,7 +68,7 @@ Description:
+ 		network device. See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_frame_errors
++What:		/sys/class/net/<iface>/statistics/rx_frame_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -78,7 +78,7 @@ Description:
+ 		on the MAC layer protocol used. See the network driver for
+ 		the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_length_errors
++What:		/sys/class/net/<iface>/statistics/rx_length_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -87,7 +87,7 @@ Description:
+ 		error, oversized or undersized. See the network driver for the
+ 		exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_missed_errors
++What:		/sys/class/net/<iface>/statistics/rx_missed_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -96,7 +96,7 @@ Description:
+ 		due to lack of capacity in the receive side. See the network
+ 		driver for the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_nohandler
++What:		/sys/class/net/<iface>/statistics/rx_nohandler
+ Date:		February 2016
+ KernelVersion:	4.6
+ Contact:	netdev@vger.kernel.org
+@@ -104,7 +104,7 @@ Description:
+ 		Indicates the number of received packets that were dropped on
+ 		an inactive device by the network core.
+ 
+-What:		/sys/class/<iface>/statistics/rx_over_errors
++What:		/sys/class/net/<iface>/statistics/rx_over_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -114,7 +114,7 @@ Description:
+ 		(e.g: larger than MTU). See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/rx_packets
++What:		/sys/class/net/<iface>/statistics/rx_packets
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -122,7 +122,7 @@ Description:
+ 		Indicates the total number of good packets received by this
+ 		network device.
+ 
+-What:		/sys/class/<iface>/statistics/tx_aborted_errors
++What:		/sys/class/net/<iface>/statistics/tx_aborted_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -132,7 +132,7 @@ Description:
+ 		a medium collision). See the network driver for the exact
+ 		meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/tx_bytes
++What:		/sys/class/net/<iface>/statistics/tx_bytes
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -143,7 +143,7 @@ Description:
+ 		transmitted packets or all packets that have been queued for
+ 		transmission.
+ 
+-What:		/sys/class/<iface>/statistics/tx_carrier_errors
++What:		/sys/class/net/<iface>/statistics/tx_carrier_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -152,7 +152,7 @@ Description:
+ 		because of carrier errors (e.g: physical link down). See the
+ 		network driver for the exact meaning of this value.
+ 
+-What:		/sys/class/<iface>/statistics/tx_compressed
++What:		/sys/class/net/<iface>/statistics/tx_compressed
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -161,7 +161,7 @@ Description:
+ 		this might only be relevant for devices that support
+ 		compression (e.g: PPP).
+ 
+-What:		/sys/class/<iface>/statistics/tx_dropped
++What:		/sys/class/net/<iface>/statistics/tx_dropped
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -170,7 +170,7 @@ Description:
+ 		See the driver for the exact reasons as to why the packets were
+ 		dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_errors
++What:		/sys/class/net/<iface>/statistics/tx_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -179,7 +179,7 @@ Description:
+ 		a network device. See the driver for the exact reasons as to
+ 		why the packets were dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_fifo_errors
++What:		/sys/class/net/<iface>/statistics/tx_fifo_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -188,7 +188,7 @@ Description:
+ 		FIFO error. See the driver for the exact reasons as to why the
+ 		packets were dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_heartbeat_errors
++What:		/sys/class/net/<iface>/statistics/tx_heartbeat_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -197,7 +197,7 @@ Description:
+ 		reported as heartbeat errors. See the driver for the exact
+ 		reasons as to why the packets were dropped.
+ 
+-What:		/sys/class/<iface>/statistics/tx_packets
++What:		/sys/class/net/<iface>/statistics/tx_packets
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+@@ -206,7 +206,7 @@ Description:
+ 		device. See the driver for whether this reports the number of all
+ 		attempted or successful transmissions.
+ 
+-What:		/sys/class/<iface>/statistics/tx_window_errors
++What:		/sys/class/net/<iface>/statistics/tx_window_errors
+ Date:		April 2005
+ KernelVersion:	2.6.12
+ Contact:	netdev@vger.kernel.org
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 7acd64c61f50c..29fd5213eeb2b 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -235,3 +235,10 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ASR            | ASR8601         | #8601001        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+++----------------+-----------------+-----------------+-----------------------------+
++| Microsoft      | Azure Cobalt 100| #2139208        | ARM64_ERRATUM_2139208       |
+++----------------+-----------------+-----------------+-----------------------------+
++| Microsoft      | Azure Cobalt 100| #2067961        | ARM64_ERRATUM_2067961       |
+++----------------+-----------------+-----------------+-----------------------------+
++| Microsoft      | Azure Cobalt 100| #2253138        | ARM64_ERRATUM_2253138       |
+++----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/netlink/specs/dpll.yaml b/Documentation/netlink/specs/dpll.yaml
+index cf8abe1c0550f..2b4c4bcd83615 100644
+--- a/Documentation/netlink/specs/dpll.yaml
++++ b/Documentation/netlink/specs/dpll.yaml
+@@ -374,8 +374,6 @@ operations:
+             - type
+ 
+       dump:
+-        pre: dpll-lock-dumpit
+-        post: dpll-unlock-dumpit
+         reply: *dev-attrs
+ 
+     -
+@@ -462,8 +460,6 @@ operations:
+             - phase-adjust
+ 
+       dump:
+-        pre: dpll-lock-dumpit
+-        post: dpll-unlock-dumpit
+         request:
+           attributes:
+             - id
+diff --git a/Documentation/networking/devlink/devlink-port.rst b/Documentation/networking/devlink/devlink-port.rst
+index e33ad2401ad70..562f46b412744 100644
+--- a/Documentation/networking/devlink/devlink-port.rst
++++ b/Documentation/networking/devlink/devlink-port.rst
+@@ -126,7 +126,7 @@ Users may also set the RoCE capability of the function using
+ `devlink port function set roce` command.
+ 
+ Users may also set the function as migratable using
+-'devlink port function set migratable' command.
++`devlink port function set migratable` command.
+ 
+ Users may also set the IPsec crypto capability of the function using
+ `devlink port function set ipsec_crypto` command.
+diff --git a/Documentation/sphinx/kernel_feat.py b/Documentation/sphinx/kernel_feat.py
+index b9df61eb45013..03ace5f01b5c0 100644
+--- a/Documentation/sphinx/kernel_feat.py
++++ b/Documentation/sphinx/kernel_feat.py
+@@ -109,7 +109,7 @@ class KernelFeat(Directive):
+             else:
+                 out_lines += line + "\n"
+ 
+-        nodeList = self.nestedParse(out_lines, fname)
++        nodeList = self.nestedParse(out_lines, self.arguments[0])
+         return nodeList
+ 
+     def nestedParse(self, lines, fname):
+diff --git a/Makefile b/Makefile
+index 0f5bb9ddc98fa..d07a8e0179acf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index f4b210ab06129..a27129d486674 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -681,6 +681,7 @@ config SHADOW_CALL_STACK
+ 	bool "Shadow Call Stack"
+ 	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
+ 	depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
++	depends on MMU
+ 	help
+ 	  This option enables the compiler's Shadow Call Stack, which
+ 	  uses a shadow stack to protect function return addresses from
+diff --git a/arch/arc/include/asm/jump_label.h b/arch/arc/include/asm/jump_label.h
+index 9d96180797396..a339223d9e052 100644
+--- a/arch/arc/include/asm/jump_label.h
++++ b/arch/arc/include/asm/jump_label.h
+@@ -31,7 +31,7 @@
+ static __always_inline bool arch_static_branch(struct static_key *key,
+ 					       bool branch)
+ {
+-	asm_volatile_goto(".balign "__stringify(JUMP_LABEL_NOP_SIZE)"	\n"
++	asm goto(".balign "__stringify(JUMP_LABEL_NOP_SIZE)"		\n"
+ 		 "1:							\n"
+ 		 "nop							\n"
+ 		 ".pushsection __jump_table, \"aw\"			\n"
+@@ -47,7 +47,7 @@ static __always_inline bool arch_static_branch(struct static_key *key,
+ static __always_inline bool arch_static_branch_jump(struct static_key *key,
+ 						    bool branch)
+ {
+-	asm_volatile_goto(".balign "__stringify(JUMP_LABEL_NOP_SIZE)"	\n"
++	asm goto(".balign "__stringify(JUMP_LABEL_NOP_SIZE)"		\n"
+ 		 "1:							\n"
+ 		 "b %l[l_yes]						\n"
+ 		 ".pushsection __jump_table, \"aw\"			\n"
+diff --git a/arch/arm/include/asm/jump_label.h b/arch/arm/include/asm/jump_label.h
+index e12d7d096fc03..e4eb54f6cd9fe 100644
+--- a/arch/arm/include/asm/jump_label.h
++++ b/arch/arm/include/asm/jump_label.h
+@@ -11,7 +11,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 WASM(nop) "\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".word 1b, %l[l_yes], %c0\n\t"
+@@ -25,7 +25,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 WASM(b) " %l[l_yes]\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".word 1b, %l[l_yes], %c0\n\t"
+diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
+index 210bb43cff2c7..d328f549b1a60 100644
+--- a/arch/arm64/include/asm/alternative-macros.h
++++ b/arch/arm64/include/asm/alternative-macros.h
+@@ -229,7 +229,7 @@ alternative_has_cap_likely(const unsigned long cpucap)
+ 	if (!cpucap_is_possible(cpucap))
+ 		return false;
+ 
+-	asm_volatile_goto(
++	asm goto(
+ 	ALTERNATIVE_CB("b	%l[l_no]", %[cpucap], alt_cb_patch_nops)
+ 	:
+ 	: [cpucap] "i" (cpucap)
+@@ -247,7 +247,7 @@ alternative_has_cap_unlikely(const unsigned long cpucap)
+ 	if (!cpucap_is_possible(cpucap))
+ 		return false;
+ 
+-	asm_volatile_goto(
++	asm goto(
+ 	ALTERNATIVE("nop", "b	%l[l_yes]", %[cpucap])
+ 	:
+ 	: [cpucap] "i" (cpucap)
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 7c7493cb571f9..52f076afeb960 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -61,6 +61,7 @@
+ #define ARM_CPU_IMP_HISI		0x48
+ #define ARM_CPU_IMP_APPLE		0x61
+ #define ARM_CPU_IMP_AMPERE		0xC0
++#define ARM_CPU_IMP_MICROSOFT		0x6D
+ 
+ #define ARM_CPU_PART_AEM_V8		0xD0F
+ #define ARM_CPU_PART_FOUNDATION		0xD00
+@@ -135,6 +136,8 @@
+ 
+ #define AMPERE_CPU_PART_AMPERE1		0xAC3
+ 
++#define MICROSOFT_CPU_PART_AZURE_COBALT_100	0xD49 /* Based on r0p0 of ARM Neoverse N2 */
++
+ #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
+ #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
+ #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
+@@ -193,6 +196,7 @@
+ #define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX)
+ #define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX)
+ #define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
++#define MIDR_MICROSOFT_AZURE_COBALT_100 MIDR_CPU_MODEL(ARM_CPU_IMP_MICROSOFT, MICROSOFT_CPU_PART_AZURE_COBALT_100)
+ 
+ /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
+ #define MIDR_FUJITSU_ERRATUM_010001		MIDR_FUJITSU_A64FX
+diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
+index 48ddc0f45d228..6aafbb7899916 100644
+--- a/arch/arm64/include/asm/jump_label.h
++++ b/arch/arm64/include/asm/jump_label.h
+@@ -18,7 +18,7 @@
+ static __always_inline bool arch_static_branch(struct static_key * const key,
+ 					       const bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"1:	nop					\n\t"
+ 		 "	.pushsection	__jump_table, \"aw\"	\n\t"
+ 		 "	.align		3			\n\t"
+@@ -35,7 +35,7 @@ static __always_inline bool arch_static_branch(struct static_key * const key,
+ static __always_inline bool arch_static_branch_jump(struct static_key * const key,
+ 						    const bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"1:	b		%l[l_yes]		\n\t"
+ 		 "	.pushsection	__jump_table, \"aw\"	\n\t"
+ 		 "	.align		3			\n\t"
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 967c7c7a4e7db..76b8dd37092ad 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -374,6 +374,7 @@ static const struct midr_range erratum_1463225[] = {
+ static const struct midr_range trbe_overwrite_fill_mode_cpus[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_2139208
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++	MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100),
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_2119858
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+@@ -387,6 +388,7 @@ static const struct midr_range trbe_overwrite_fill_mode_cpus[] = {
+ static const struct midr_range tsb_flush_fail_cpus[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_2067961
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++	MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100),
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_2054223
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+@@ -399,6 +401,7 @@ static const struct midr_range tsb_flush_fail_cpus[] = {
+ static struct midr_range trbe_write_out_of_range_cpus[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_2253138
+ 	MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++	MIDR_ALL_VERSIONS(MIDR_MICROSOFT_AZURE_COBALT_100),
+ #endif
+ #ifdef CONFIG_ARM64_ERRATUM_2224489
+ 	MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 7363f2eb98e81..f7d8f5d81cfe9 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1628,7 +1628,7 @@ void fpsimd_preserve_current_state(void)
+ void fpsimd_signal_preserve_current_state(void)
+ {
+ 	fpsimd_preserve_current_state();
+-	if (test_thread_flag(TIF_SVE))
++	if (current->thread.fp_type == FP_STATE_SVE)
+ 		sve_to_fpsimd(current);
+ }
+ 
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 0e8beb3349ea2..425b1bc17a3f6 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -242,7 +242,7 @@ static int preserve_sve_context(struct sve_context __user *ctx)
+ 		vl = task_get_sme_vl(current);
+ 		vq = sve_vq_from_vl(vl);
+ 		flags |= SVE_SIG_FLAG_SM;
+-	} else if (test_thread_flag(TIF_SVE)) {
++	} else if (current->thread.fp_type == FP_STATE_SVE) {
+ 		vq = sve_vq_from_vl(vl);
+ 	}
+ 
+@@ -878,7 +878,7 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
+ 	if (system_supports_sve() || system_supports_sme()) {
+ 		unsigned int vq = 0;
+ 
+-		if (add_all || test_thread_flag(TIF_SVE) ||
++		if (add_all || current->thread.fp_type == FP_STATE_SVE ||
+ 		    thread_sm_enabled(&current->thread)) {
+ 			int vl = max(sve_max_vl(), sme_max_vl());
+ 
+diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
+index 8350fb8fee0b9..b7be96a535973 100644
+--- a/arch/arm64/kvm/pkvm.c
++++ b/arch/arm64/kvm/pkvm.c
+@@ -101,6 +101,17 @@ void __init kvm_hyp_reserve(void)
+ 		 hyp_mem_base);
+ }
+ 
++static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm)
++{
++	if (host_kvm->arch.pkvm.handle) {
++		WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_vm,
++					  host_kvm->arch.pkvm.handle));
++	}
++
++	host_kvm->arch.pkvm.handle = 0;
++	free_hyp_memcache(&host_kvm->arch.pkvm.teardown_mc);
++}
++
+ /*
+  * Allocates and donates memory for hypervisor VM structs at EL2.
+  *
+@@ -181,7 +192,7 @@ static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
+ 	return 0;
+ 
+ destroy_vm:
+-	pkvm_destroy_hyp_vm(host_kvm);
++	__pkvm_destroy_hyp_vm(host_kvm);
+ 	return ret;
+ free_vm:
+ 	free_pages_exact(hyp_vm, hyp_vm_sz);
+@@ -194,23 +205,19 @@ int pkvm_create_hyp_vm(struct kvm *host_kvm)
+ {
+ 	int ret = 0;
+ 
+-	mutex_lock(&host_kvm->lock);
++	mutex_lock(&host_kvm->arch.config_lock);
+ 	if (!host_kvm->arch.pkvm.handle)
+ 		ret = __pkvm_create_hyp_vm(host_kvm);
+-	mutex_unlock(&host_kvm->lock);
++	mutex_unlock(&host_kvm->arch.config_lock);
+ 
+ 	return ret;
+ }
+ 
+ void pkvm_destroy_hyp_vm(struct kvm *host_kvm)
+ {
+-	if (host_kvm->arch.pkvm.handle) {
+-		WARN_ON(kvm_call_hyp_nvhe(__pkvm_teardown_vm,
+-					  host_kvm->arch.pkvm.handle));
+-	}
+-
+-	host_kvm->arch.pkvm.handle = 0;
+-	free_hyp_memcache(&host_kvm->arch.pkvm.teardown_mc);
++	mutex_lock(&host_kvm->arch.config_lock);
++	__pkvm_destroy_hyp_vm(host_kvm);
++	mutex_unlock(&host_kvm->arch.config_lock);
+ }
+ 
+ int pkvm_init_host_vm(struct kvm *host_kvm)
+diff --git a/arch/csky/include/asm/jump_label.h b/arch/csky/include/asm/jump_label.h
+index 98a3f4b168bd2..ef2e37a10a0fe 100644
+--- a/arch/csky/include/asm/jump_label.h
++++ b/arch/csky/include/asm/jump_label.h
+@@ -12,7 +12,7 @@
+ static __always_inline bool arch_static_branch(struct static_key *key,
+ 					       bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"1:	nop32					\n"
+ 		"	.pushsection	__jump_table, \"aw\"	\n"
+ 		"	.align		2			\n"
+@@ -29,7 +29,7 @@ static __always_inline bool arch_static_branch(struct static_key *key,
+ static __always_inline bool arch_static_branch_jump(struct static_key *key,
+ 						    bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"1:	bsr32		%l[label]		\n"
+ 		"	.pushsection	__jump_table, \"aw\"	\n"
+ 		"	.align		2			\n"
+diff --git a/arch/loongarch/include/asm/jump_label.h b/arch/loongarch/include/asm/jump_label.h
+index 3cea299a5ef58..29acfe3de3faa 100644
+--- a/arch/loongarch/include/asm/jump_label.h
++++ b/arch/loongarch/include/asm/jump_label.h
+@@ -22,7 +22,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key * const key, const bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"1:	nop			\n\t"
+ 		JUMP_TABLE_ENTRY
+ 		:  :  "i"(&((char *)key)[branch]) :  : l_yes);
+@@ -35,7 +35,7 @@ static __always_inline bool arch_static_branch(struct static_key * const key, co
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key * const key, const bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"1:	b	%l[l_yes]	\n\t"
+ 		JUMP_TABLE_ENTRY
+ 		:  :  "i"(&((char *)key)[branch]) :  : l_yes);
+diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
+index cc3e81fe0186f..c608adc998458 100644
+--- a/arch/loongarch/mm/kasan_init.c
++++ b/arch/loongarch/mm/kasan_init.c
+@@ -44,6 +44,9 @@ void *kasan_mem_to_shadow(const void *addr)
+ 		unsigned long xrange = (maddr >> XRANGE_SHIFT) & 0xffff;
+ 		unsigned long offset = 0;
+ 
++		if (maddr >= FIXADDR_START)
++			return (void *)(kasan_early_shadow_page);
++
+ 		maddr &= XRANGE_SHADOW_MASK;
+ 		switch (xrange) {
+ 		case XKPRANGE_CC_SEG:
+diff --git a/arch/mips/include/asm/checksum.h b/arch/mips/include/asm/checksum.h
+index 4044eaf989ac7..0921ddda11a4b 100644
+--- a/arch/mips/include/asm/checksum.h
++++ b/arch/mips/include/asm/checksum.h
+@@ -241,7 +241,8 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ 	"	.set	pop"
+ 	: "=&r" (sum), "=&r" (tmp)
+ 	: "r" (saddr), "r" (daddr),
+-	  "0" (htonl(len)), "r" (htonl(proto)), "r" (sum));
++	  "0" (htonl(len)), "r" (htonl(proto)), "r" (sum)
++	: "memory");
+ 
+ 	return csum_fold(sum);
+ }
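
The csum_ipv6_magic() asm dereferences the saddr/daddr pointer operands, so without a "memory" clobber the compiler may delay or elide the stores that populate those headers before the asm runs. A tiny generic illustration of the clobber, not the MIPS checksum code; in single-threaded C the printed result is the same either way, the difference matters once the asm itself reads memory:

#include <stdio.h>

/* The empty asm with a "memory" clobber forces pending stores out to
 * memory and invalidates register-cached loads across it. */
static inline void compiler_barrier(void)
{
	asm volatile("" ::: "memory");
}

int main(void)
{
	int hdr = 0;

	hdr = 1;		/* store the compiler might otherwise sink */
	compiler_barrier();	/* the store must be complete here */
	printf("%d\n", hdr);
	return 0;
}
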
+diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h
+index c5c6864e64bc4..405c85173f2c1 100644
+--- a/arch/mips/include/asm/jump_label.h
++++ b/arch/mips/include/asm/jump_label.h
+@@ -36,7 +36,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\t" B_INSN " 2f\n\t"
++	asm goto("1:\t" B_INSN " 2f\n\t"
+ 		"2:\t.insn\n\t"
+ 		".pushsection __jump_table,  \"aw\"\n\t"
+ 		WORD_INSN " 1b, %l[l_yes], %0\n\t"
+@@ -50,7 +50,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\t" J_INSN " %l[l_yes]\n\t"
++	asm goto("1:\t" J_INSN " %l[l_yes]\n\t"
+ 		".pushsection __jump_table,  \"aw\"\n\t"
+ 		WORD_INSN " 1b, %l[l_yes], %0\n\t"
+ 		".popsection\n\t"
+diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
+index daf3cf244ea97..701a233583c2c 100644
+--- a/arch/mips/include/asm/ptrace.h
++++ b/arch/mips/include/asm/ptrace.h
+@@ -154,6 +154,8 @@ static inline long regs_return_value(struct pt_regs *regs)
+ }
+ 
+ #define instruction_pointer(regs) ((regs)->cp0_epc)
++extern unsigned long exception_ip(struct pt_regs *regs);
++#define exception_ip(regs) exception_ip(regs)
+ #define profile_pc(regs) instruction_pointer(regs)
+ 
+ extern asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall);
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index d9df543f7e2c4..59288c13b581b 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -31,6 +31,7 @@
+ #include <linux/seccomp.h>
+ #include <linux/ftrace.h>
+ 
++#include <asm/branch.h>
+ #include <asm/byteorder.h>
+ #include <asm/cpu.h>
+ #include <asm/cpu-info.h>
+@@ -48,6 +49,12 @@
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/syscalls.h>
+ 
++unsigned long exception_ip(struct pt_regs *regs)
++{
++	return exception_epc(regs);
++}
++EXPORT_SYMBOL(exception_ip);
++
+ /*
+  * Called by kernel/ptrace.c when detaching..
+  *
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index d14ccc948a29b..5c845e8d59d92 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -25,7 +25,6 @@ config PARISC
+ 	select RTC_DRV_GENERIC
+ 	select INIT_ALL_POSSIBLE
+ 	select BUG
+-	select BUILDTIME_TABLE_SORT
+ 	select HAVE_KERNEL_UNCOMPRESSED
+ 	select HAVE_PCI
+ 	select HAVE_PERF_EVENTS
+diff --git a/arch/parisc/include/asm/assembly.h b/arch/parisc/include/asm/assembly.h
+index 74d17d7e759da..5937d5edaba1e 100644
+--- a/arch/parisc/include/asm/assembly.h
++++ b/arch/parisc/include/asm/assembly.h
+@@ -576,6 +576,7 @@
+ 	.section __ex_table,"aw"			!	\
+ 	.align 4					!	\
+ 	.word (fault_addr - .), (except_addr - .)	!	\
++	or %r0,%r0,%r0					!	\
+ 	.previous
+ 
+ 
+diff --git a/arch/parisc/include/asm/extable.h b/arch/parisc/include/asm/extable.h
+new file mode 100644
+index 0000000000000..4ea23e3d79dc9
+--- /dev/null
++++ b/arch/parisc/include/asm/extable.h
+@@ -0,0 +1,64 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __PARISC_EXTABLE_H
++#define __PARISC_EXTABLE_H
++
++#include <asm/ptrace.h>
++#include <linux/compiler.h>
++
++/*
++ * The exception table consists of three fields:
++ *
++ * - A relative address to the instruction that is allowed to fault.
++ * - A relative address at which the program should continue (fixup routine)
++ * - An asm statement which specifies which CPU register will
++ *   receive -EFAULT when an exception happens if the lowest bit in
++ *   the fixup address is set.
++ *
++ * Note: The register specified in the err_opcode instruction will be
++ * modified at runtime if a fault happens. Register %r0 will be ignored.
++ *
++ * Since relative addresses are used, 32bit values are sufficient even on
++ * a 64bit kernel.
++ */
++
++struct pt_regs;
++int fixup_exception(struct pt_regs *regs);
++
++#define ARCH_HAS_RELATIVE_EXTABLE
++struct exception_table_entry {
++	int insn;	/* relative address of insn that is allowed to fault. */
++	int fixup;	/* relative address of fixup routine */
++	int err_opcode; /* sample opcode with register which holds error code */
++};
++
++#define ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr, opcode )\
++	".section __ex_table,\"aw\"\n"			   \
++	".align 4\n"					   \
++	".word (" #fault_addr " - .), (" #except_addr " - .)\n" \
++	opcode "\n"					   \
++	".previous\n"
++
++/*
++ * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() creates a special exception table entry
++ * (with lowest bit set) for which the fault handler in fixup_exception() will
++ * load -EFAULT on fault into the register specified by the err_opcode instruction,
++ * and zeroes the target register in case of a read fault in get_user().
++ */
++#define ASM_EXCEPTIONTABLE_VAR(__err_var)		\
++	int __err_var = 0
++#define ASM_EXCEPTIONTABLE_ENTRY_EFAULT( fault_addr, except_addr, register )\
++	ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr + 1, "or %%r0,%%r0," register)
++
++static inline void swap_ex_entry_fixup(struct exception_table_entry *a,
++				       struct exception_table_entry *b,
++				       struct exception_table_entry tmp,
++				       int delta)
++{
++	a->fixup = b->fixup + delta;
++	b->fixup = tmp.fixup - delta;
++	a->err_opcode = b->err_opcode;
++	b->err_opcode = tmp.err_opcode;
++}
++#define swap_ex_entry_fixup swap_ex_entry_fixup
++
++#endif
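
The new err_opcode field holds a sample "or %r0,%r0,%reg" instruction whose low five bits name the register that should receive -EFAULT; the fault handler recovers it with fix->err_opcode & 0x1f (see the arch/parisc/mm/fault.c hunk further down). A small sketch of that extraction, with a made-up opcode word purely for the demo:

#include <stdint.h>
#include <stdio.h>

/* Bits 0..4 of the sample opcode encode the target register number. */
static unsigned int err_reg_from_opcode(uint32_t err_opcode)
{
	return err_opcode & 0x1f;
}

int main(void)
{
	/* hypothetical instruction word whose low bits name %r29 */
	uint32_t sample = 0x08000000u | 29;

	printf("-EFAULT register: %%r%u\n", err_reg_from_opcode(sample));
	return 0;
}
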
+diff --git a/arch/parisc/include/asm/jump_label.h b/arch/parisc/include/asm/jump_label.h
+index 94428798b6aa6..317ebc5edc9fe 100644
+--- a/arch/parisc/include/asm/jump_label.h
++++ b/arch/parisc/include/asm/jump_label.h
+@@ -12,7 +12,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 "nop\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align %1\n\t"
+@@ -29,7 +29,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 "b,n %l[l_yes]\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align %1\n\t"
+diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h
+index c822bd0c0e3c6..51f40eaf77806 100644
+--- a/arch/parisc/include/asm/special_insns.h
++++ b/arch/parisc/include/asm/special_insns.h
+@@ -8,7 +8,8 @@
+ 		"copy %%r0,%0\n"			\
+ 		"8:\tlpa %%r0(%1),%0\n"			\
+ 		"9:\n"					\
+-		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b)	\
++		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b,	\
++				"or %%r0,%%r0,%%r0")	\
+ 		: "=&r" (pa)				\
+ 		: "r" (va)				\
+ 		: "memory"				\
+@@ -22,7 +23,8 @@
+ 		"copy %%r0,%0\n"			\
+ 		"8:\tlpa %%r0(%%sr3,%1),%0\n"		\
+ 		"9:\n"					\
+-		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b)	\
++		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b,	\
++				"or %%r0,%%r0,%%r0")	\
+ 		: "=&r" (pa)				\
+ 		: "r" (va)				\
+ 		: "memory"				\
+diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
+index 4165079898d9e..88d0ae5769dde 100644
+--- a/arch/parisc/include/asm/uaccess.h
++++ b/arch/parisc/include/asm/uaccess.h
+@@ -7,6 +7,7 @@
+  */
+ #include <asm/page.h>
+ #include <asm/cache.h>
++#include <asm/extable.h>
+ 
+ #include <linux/bug.h>
+ #include <linux/string.h>
+@@ -26,37 +27,6 @@
+ #define STD_USER(sr, x, ptr)	__put_user_asm(sr, "std", x, ptr)
+ #endif
+ 
+-/*
+- * The exception table contains two values: the first is the relative offset to
+- * the address of the instruction that is allowed to fault, and the second is
+- * the relative offset to the address of the fixup routine. Since relative
+- * addresses are used, 32bit values are sufficient even on 64bit kernel.
+- */
+-
+-#define ARCH_HAS_RELATIVE_EXTABLE
+-struct exception_table_entry {
+-	int insn;	/* relative address of insn that is allowed to fault. */
+-	int fixup;	/* relative address of fixup routine */
+-};
+-
+-#define ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr )\
+-	".section __ex_table,\"aw\"\n"			   \
+-	".align 4\n"					   \
+-	".word (" #fault_addr " - .), (" #except_addr " - .)\n\t" \
+-	".previous\n"
+-
+-/*
+- * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() creates a special exception table entry
+- * (with lowest bit set) for which the fault handler in fixup_exception() will
+- * load -EFAULT into %r29 for a read or write fault, and zeroes the target
+- * register in case of a read fault in get_user().
+- */
+-#define ASM_EXCEPTIONTABLE_REG	29
+-#define ASM_EXCEPTIONTABLE_VAR(__variable)		\
+-	register long __variable __asm__ ("r29") = 0
+-#define ASM_EXCEPTIONTABLE_ENTRY_EFAULT( fault_addr, except_addr )\
+-	ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr + 1)
+-
+ #define __get_user_internal(sr, val, ptr)		\
+ ({							\
+ 	ASM_EXCEPTIONTABLE_VAR(__gu_err);		\
+@@ -83,7 +53,7 @@ struct exception_table_entry {
+ 							\
+ 	__asm__("1: " ldx " 0(%%sr%2,%3),%0\n"		\
+ 		"9:\n"					\
+-		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b)	\
++		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%1")	\
+ 		: "=r"(__gu_val), "+r"(__gu_err)        \
+ 		: "i"(sr), "r"(ptr));			\
+ 							\
+@@ -115,8 +85,8 @@ struct exception_table_entry {
+ 		"1: ldw 0(%%sr%2,%3),%0\n"		\
+ 		"2: ldw 4(%%sr%2,%3),%R0\n"		\
+ 		"9:\n"					\
+-		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b)	\
+-		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b)	\
++		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%1")	\
++		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b, "%1")	\
+ 		: "=&r"(__gu_tmp.l), "+r"(__gu_err)	\
+ 		: "i"(sr), "r"(ptr));			\
+ 							\
+@@ -174,7 +144,7 @@ struct exception_table_entry {
+ 	__asm__ __volatile__ (					\
+ 		"1: " stx " %1,0(%%sr%2,%3)\n"			\
+ 		"9:\n"						\
+-		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b)		\
++		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%0")	\
+ 		: "+r"(__pu_err)				\
+ 		: "r"(x), "i"(sr), "r"(ptr))
+ 
+@@ -186,15 +156,14 @@ struct exception_table_entry {
+ 		"1: stw %1,0(%%sr%2,%3)\n"			\
+ 		"2: stw %R1,4(%%sr%2,%3)\n"			\
+ 		"9:\n"						\
+-		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b)		\
+-		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b)		\
++		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%0")	\
++		ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b, "%0")	\
+ 		: "+r"(__pu_err)				\
+ 		: "r"(__val), "i"(sr), "r"(ptr));		\
+ } while (0)
+ 
+ #endif /* !defined(CONFIG_64BIT) */
+ 
+-
+ /*
+  * Complex access routines -- external declarations
+  */
+@@ -216,7 +185,4 @@ unsigned long __must_check raw_copy_from_user(void *dst, const void __user *src,
+ #define INLINE_COPY_TO_USER
+ #define INLINE_COPY_FROM_USER
+ 
+-struct pt_regs;
+-int fixup_exception(struct pt_regs *regs);
+-
+ #endif /* __PARISC_UACCESS_H */
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 268d90a9325b4..393822f167270 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -58,7 +58,7 @@ int pa_serialize_tlb_flushes __ro_after_init;
+ 
+ struct pdc_cache_info cache_info __ro_after_init;
+ #ifndef CONFIG_PA20
+-struct pdc_btlb_info btlb_info __ro_after_init;
++struct pdc_btlb_info btlb_info;
+ #endif
+ 
+ DEFINE_STATIC_KEY_TRUE(parisc_has_cache);
+@@ -850,7 +850,7 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
+ #endif
+ 			"   fic,m	%3(%4,%0)\n"
+ 			"2: sync\n"
+-			ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b)
++			ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
+ 			: "+r" (start), "+r" (error)
+ 			: "r" (end), "r" (dcache_stride), "i" (SR_USER));
+ 	}
+@@ -865,7 +865,7 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
+ #endif
+ 			"   fdc,m	%3(%4,%0)\n"
+ 			"2: sync\n"
+-			ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b)
++			ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
+ 			: "+r" (start), "+r" (error)
+ 			: "r" (end), "r" (icache_stride), "i" (SR_USER));
+ 	}
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index 25f9b9e9d6dfb..404ea37705ce6 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -1004,6 +1004,9 @@ static __init int qemu_print_iodc_data(struct device *lin_dev, void *data)
+ 
+ 	pr_info("\n");
+ 
++	/* Prevent hung task messages when printing on serial console */
++	cond_resched();
++
+ 	pr_info("#define HPA_%08lx_DESCRIPTION \"%s\"\n",
+ 		hpa, parisc_hardware_description(&dev->id));
+ 
+diff --git a/arch/parisc/kernel/unaligned.c b/arch/parisc/kernel/unaligned.c
+index ce25acfe4889d..c520e551a1652 100644
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -120,8 +120,8 @@ static int emulate_ldh(struct pt_regs *regs, int toreg)
+ "2:	ldbs	1(%%sr1,%3), %0\n"
+ "	depw	%2, 23, 24, %0\n"
+ "3:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%1")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%1")
+ 	: "+r" (val), "+r" (ret), "=&r" (temp1)
+ 	: "r" (saddr), "r" (regs->isr) );
+ 
+@@ -152,8 +152,8 @@ static int emulate_ldw(struct pt_regs *regs, int toreg, int flop)
+ "	mtctl	%2,11\n"
+ "	vshd	%0,%3,%0\n"
+ "3:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%1")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%1")
+ 	: "+r" (val), "+r" (ret), "=&r" (temp1), "=&r" (temp2)
+ 	: "r" (saddr), "r" (regs->isr) );
+ 
+@@ -189,8 +189,8 @@ static int emulate_ldd(struct pt_regs *regs, int toreg, int flop)
+ "	mtsar	%%r19\n"
+ "	shrpd	%0,%%r20,%%sar,%0\n"
+ "3:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%1")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%1")
+ 	: "=r" (val), "+r" (ret)
+ 	: "0" (val), "r" (saddr), "r" (regs->isr)
+ 	: "r19", "r20" );
+@@ -209,9 +209,9 @@ static int emulate_ldd(struct pt_regs *regs, int toreg, int flop)
+ "	vshd	%0,%R0,%0\n"
+ "	vshd	%R0,%4,%R0\n"
+ "4:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 4b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 4b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 4b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 4b, "%1")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 4b, "%1")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 4b, "%1")
+ 	: "+r" (val), "+r" (ret), "+r" (saddr), "=&r" (shift), "=&r" (temp1)
+ 	: "r" (regs->isr) );
+     }
+@@ -244,8 +244,8 @@ static int emulate_sth(struct pt_regs *regs, int frreg)
+ "1:	stb %1, 0(%%sr1, %3)\n"
+ "2:	stb %2, 1(%%sr1, %3)\n"
+ "3:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%0")
+ 	: "+r" (ret), "=&r" (temp1)
+ 	: "r" (val), "r" (regs->ior), "r" (regs->isr) );
+ 
+@@ -285,8 +285,8 @@ static int emulate_stw(struct pt_regs *regs, int frreg, int flop)
+ "	stw	%%r20,0(%%sr1,%2)\n"
+ "	stw	%%r21,4(%%sr1,%2)\n"
+ "3:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%0")
+ 	: "+r" (ret)
+ 	: "r" (val), "r" (regs->ior), "r" (regs->isr)
+ 	: "r19", "r20", "r21", "r22", "r1" );
+@@ -329,10 +329,10 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ "3:	std	%%r20,0(%%sr1,%2)\n"
+ "4:	std	%%r21,8(%%sr1,%2)\n"
+ "5:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 5b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 5b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 5b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 5b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 5b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 5b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 5b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 5b, "%0")
+ 	: "+r" (ret)
+ 	: "r" (val), "r" (regs->ior), "r" (regs->isr)
+ 	: "r19", "r20", "r21", "r22", "r1" );
+@@ -357,11 +357,11 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
+ "4:	stw	%%r1,4(%%sr1,%2)\n"
+ "5:	stw	%R1,8(%%sr1,%2)\n"
+ "6:	\n"
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 6b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 6b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 6b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 6b)
+-	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(5b, 6b)
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 6b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 6b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 6b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 6b, "%0")
++	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(5b, 6b, "%0")
+ 	: "+r" (ret)
+ 	: "r" (val), "r" (regs->ior), "r" (regs->isr)
+ 	: "r19", "r20", "r21", "r1" );
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index 2fe5b44986e09..c39de84e98b05 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -150,11 +150,16 @@ int fixup_exception(struct pt_regs *regs)
+ 		 * Fix up get_user() and put_user().
+ 		 * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() sets the least-significant
+ 		 * bit in the relative address of the fixup routine to indicate
+-		 * that gr[ASM_EXCEPTIONTABLE_REG] should be loaded with
+-		 * -EFAULT to report a userspace access error.
++		 * that the register encoded in the "or %r0,%r0,register"
++		 * opcode should be loaded with -EFAULT to report a userspace
++		 * access error.
+ 		 */
+ 		if (fix->fixup & 1) {
+-			regs->gr[ASM_EXCEPTIONTABLE_REG] = -EFAULT;
++			int fault_error_reg = fix->err_opcode & 0x1f;
++			if (!WARN_ON(!fault_error_reg))
++				regs->gr[fault_error_reg] = -EFAULT;
++			pr_debug("Unalignment fixup of register %d at %pS\n",
++				fault_error_reg, (void*)regs->iaoq[0]);
+ 
+ 			/* zero target register for get_user() */
+ 			if (parisc_acctyp(0, regs->iir) == VM_READ) {
+diff --git a/arch/powerpc/include/asm/jump_label.h b/arch/powerpc/include/asm/jump_label.h
+index 93ce3ec253877..2f2a86ed2280a 100644
+--- a/arch/powerpc/include/asm/jump_label.h
++++ b/arch/powerpc/include/asm/jump_label.h
+@@ -17,7 +17,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 "nop # arch_static_branch\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".long 1b - ., %l[l_yes] - .\n\t"
+@@ -32,7 +32,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 "b %l[l_yes] # arch_static_branch_jump\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".long 1b - ., %l[l_yes] - .\n\t"
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index 4ae4ab9090a2d..ade5f094dbd22 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -617,6 +617,8 @@
+ #endif
+ #define SPRN_HID2	0x3F8		/* Hardware Implementation Register 2 */
+ #define SPRN_HID2_GEKKO	0x398		/* Gekko HID2 Register */
++#define SPRN_HID2_G2_LE	0x3F3		/* G2_LE HID2 Register */
++#define  HID2_G2_LE_HBE	(1<<18)		/* High BAT Enable (G2_LE) */
+ #define SPRN_IABR	0x3F2	/* Instruction Address Breakpoint Register */
+ #define SPRN_IABR2	0x3FA		/* 83xx */
+ #define SPRN_IBCR	0x135		/* 83xx Insn Breakpoint Control Reg */
+diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
+index bf5dde1a41147..15c5691dd2184 100644
+--- a/arch/powerpc/include/asm/thread_info.h
++++ b/arch/powerpc/include/asm/thread_info.h
+@@ -14,7 +14,7 @@
+ 
+ #ifdef __KERNEL__
+ 
+-#ifdef CONFIG_KASAN
++#if defined(CONFIG_KASAN) && CONFIG_THREAD_SHIFT < 15
+ #define MIN_THREAD_SHIFT	(CONFIG_THREAD_SHIFT + 1)
+ #else
+ #define MIN_THREAD_SHIFT	CONFIG_THREAD_SHIFT
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index f1f9890f50d3e..de10437fd2065 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -74,7 +74,7 @@ __pu_failed:							\
+ /* -mprefixed can generate offsets beyond range, fall back hack */
+ #ifdef CONFIG_PPC_KERNEL_PREFIXED
+ #define __put_user_asm_goto(x, addr, label, op)			\
+-	asm_volatile_goto(					\
++	asm goto(					\
+ 		"1:	" op " %0,0(%1)	# put_user\n"		\
+ 		EX_TABLE(1b, %l2)				\
+ 		:						\
+@@ -83,7 +83,7 @@ __pu_failed:							\
+ 		: label)
+ #else
+ #define __put_user_asm_goto(x, addr, label, op)			\
+-	asm_volatile_goto(					\
++	asm goto(					\
+ 		"1:	" op "%U1%X1 %0,%1	# put_user\n"	\
+ 		EX_TABLE(1b, %l2)				\
+ 		:						\
+@@ -97,7 +97,7 @@ __pu_failed:							\
+ 	__put_user_asm_goto(x, ptr, label, "std")
+ #else /* __powerpc64__ */
+ #define __put_user_asm2_goto(x, addr, label)			\
+-	asm_volatile_goto(					\
++	asm goto(					\
+ 		"1:	stw%X1 %0, %1\n"			\
+ 		"2:	stw%X1 %L0, %L1\n"			\
+ 		EX_TABLE(1b, %l2)				\
+@@ -146,7 +146,7 @@ do {								\
+ /* -mprefixed can generate offsets beyond range, fall back hack */
+ #ifdef CONFIG_PPC_KERNEL_PREFIXED
+ #define __get_user_asm_goto(x, addr, label, op)			\
+-	asm_volatile_goto(					\
++	asm_goto_output(					\
+ 		"1:	"op" %0,0(%1)	# get_user\n"		\
+ 		EX_TABLE(1b, %l2)				\
+ 		: "=r" (x)					\
+@@ -155,7 +155,7 @@ do {								\
+ 		: label)
+ #else
+ #define __get_user_asm_goto(x, addr, label, op)			\
+-	asm_volatile_goto(					\
++	asm_goto_output(					\
+ 		"1:	"op"%U1%X1 %0, %1	# get_user\n"	\
+ 		EX_TABLE(1b, %l2)				\
+ 		: "=r" (x)					\
+@@ -169,7 +169,7 @@ do {								\
+ 	__get_user_asm_goto(x, addr, label, "ld")
+ #else /* __powerpc64__ */
+ #define __get_user_asm2_goto(x, addr, label)			\
+-	asm_volatile_goto(					\
++	asm_goto_output(					\
+ 		"1:	lwz%X1 %0, %1\n"			\
+ 		"2:	lwz%X1 %L0, %L1\n"			\
+ 		EX_TABLE(1b, %l2)				\
+diff --git a/arch/powerpc/kernel/cpu_setup_6xx.S b/arch/powerpc/kernel/cpu_setup_6xx.S
+index f29ce3dd6140f..bfd3f442e5eb9 100644
+--- a/arch/powerpc/kernel/cpu_setup_6xx.S
++++ b/arch/powerpc/kernel/cpu_setup_6xx.S
+@@ -26,6 +26,15 @@ BEGIN_FTR_SECTION
+ 	bl	__init_fpu_registers
+ END_FTR_SECTION_IFCLR(CPU_FTR_FPU_UNAVAILABLE)
+ 	bl	setup_common_caches
++
++	/*
++	 * This assumes that all cores using __setup_cpu_603 with
++	 * MMU_FTR_USE_HIGH_BATS are G2_LE compatible
++	 */
++BEGIN_MMU_FTR_SECTION
++	bl      setup_g2_le_hid2
++END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
++
+ 	mtlr	r5
+ 	blr
+ _GLOBAL(__setup_cpu_604)
+@@ -115,6 +124,16 @@ SYM_FUNC_START_LOCAL(setup_604_hid0)
+ 	blr
+ SYM_FUNC_END(setup_604_hid0)
+ 
++/* Enable high BATs for G2_LE and derivatives like e300cX */
++SYM_FUNC_START_LOCAL(setup_g2_le_hid2)
++	mfspr	r11,SPRN_HID2_G2_LE
++	oris	r11,r11,HID2_G2_LE_HBE@h
++	mtspr	SPRN_HID2_G2_LE,r11
++	sync
++	isync
++	blr
++SYM_FUNC_END(setup_g2_le_hid2)
++
+ /* 7400 <= rev 2.7 and 7410 rev = 1.0 suffer from some
+  * erratas we work around here.
+  * Moto MPC710CE.pdf describes them, those are errata
+@@ -495,4 +514,3 @@ _GLOBAL(__restore_cpu_setup)
+ 	mtcr	r7
+ 	blr
+ _ASM_NOKPROBE_SYMBOL(__restore_cpu_setup)
+-
+diff --git a/arch/powerpc/kernel/cpu_specs_e500mc.h b/arch/powerpc/kernel/cpu_specs_e500mc.h
+index ceb06b109f831..2ae8e9a7b461c 100644
+--- a/arch/powerpc/kernel/cpu_specs_e500mc.h
++++ b/arch/powerpc/kernel/cpu_specs_e500mc.h
+@@ -8,7 +8,8 @@
+ 
+ #ifdef CONFIG_PPC64
+ #define COMMON_USER_BOOKE	(PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU | \
+-				 PPC_FEATURE_HAS_FPU | PPC_FEATURE_64)
++				 PPC_FEATURE_HAS_FPU | PPC_FEATURE_64 | \
++				 PPC_FEATURE_BOOKE)
+ #else
+ #define COMMON_USER_BOOKE	(PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU | \
+ 				 PPC_FEATURE_BOOKE)
+diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
+index bd863702d8121..1ad059a9e2fef 100644
+--- a/arch/powerpc/kernel/interrupt_64.S
++++ b/arch/powerpc/kernel/interrupt_64.S
+@@ -52,7 +52,8 @@ _ASM_NOKPROBE_SYMBOL(system_call_vectored_\name)
+ 	mr	r10,r1
+ 	ld	r1,PACAKSAVE(r13)
+ 	std	r10,0(r1)
+-	std	r11,_NIP(r1)
++	std	r11,_LINK(r1)
++	std	r11,_NIP(r1)	/* Saved LR is also the next instruction */
+ 	std	r12,_MSR(r1)
+ 	std	r0,GPR0(r1)
+ 	std	r10,GPR1(r1)
+@@ -70,7 +71,6 @@ _ASM_NOKPROBE_SYMBOL(system_call_vectored_\name)
+ 	std	r9,GPR13(r1)
+ 	SAVE_NVGPRS(r1)
+ 	std	r11,_XER(r1)
+-	std	r11,_LINK(r1)
+ 	std	r11,_CTR(r1)
+ 
+ 	li	r11,\trapnr
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index ebe259bdd4629..df17b33b89d13 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -1290,8 +1290,10 @@ spapr_tce_platform_iommu_attach_dev(struct iommu_domain *platform_domain,
+ 	int ret = -EINVAL;
+ 
+ 	/* At first attach the ownership is already set */
+-	if (!domain)
++	if (!domain) {
++		iommu_group_put(grp);
+ 		return 0;
++	}
+ 
+ 	if (!grp)
+ 		return -ENODEV;
+diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
+index 938e66829eae6..d5c48d1b0a31e 100644
+--- a/arch/powerpc/kernel/irq_64.c
++++ b/arch/powerpc/kernel/irq_64.c
+@@ -230,7 +230,7 @@ notrace __no_kcsan void arch_local_irq_restore(unsigned long mask)
+ 	 * This allows interrupts to be unmasked without hard disabling, and
+ 	 * also without new hard interrupts coming in ahead of pending ones.
+ 	 */
+-	asm_volatile_goto(
++	asm goto(
+ "1:					\n"
+ "		lbz	9,%0(13)	\n"
+ "		cmpwi	9,0		\n"
+diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
+index a70828a6d9357..aa9aa11927b2f 100644
+--- a/arch/powerpc/mm/kasan/init_32.c
++++ b/arch/powerpc/mm/kasan/init_32.c
+@@ -64,6 +64,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
+ 	if (ret)
+ 		return ret;
+ 
++	k_start = k_start & PAGE_MASK;
+ 	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+ 	if (!block)
+ 		return -ENOMEM;
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 4561667832ed4..4e9916bb03d71 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -662,8 +662,12 @@ u64 pseries_paravirt_steal_clock(int cpu)
+ {
+ 	struct lppaca *lppaca = &lppaca_of(cpu);
+ 
+-	return be64_to_cpu(READ_ONCE(lppaca->enqueue_dispatch_tb)) +
+-		be64_to_cpu(READ_ONCE(lppaca->ready_enqueue_tb));
++	/*
++	 * VPA steal time counters are reported at TB frequency. Hence do a
++	 * conversion to ns before returning
++	 */
++	return tb_to_ns(be64_to_cpu(READ_ONCE(lppaca->enqueue_dispatch_tb)) +
++			be64_to_cpu(READ_ONCE(lppaca->ready_enqueue_tb)));
+ }
+ #endif
+ 
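
paravirt_steal_clock() consumers expect nanoseconds, while the VPA accumulators count timebase ticks, hence the added tb_to_ns() conversion. A rough userspace model of the scaling; the 512 MHz timebase frequency is only an example, and the kernel avoids the division by using a precomputed mult/shift pair:

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC	1000000000ULL

static uint64_t tb_to_ns_sketch(uint64_t tb_ticks, uint64_t tb_hz)
{
	/* fine for modest tick counts; real code scales via mult/shift
	 * with wide intermediates to avoid overflow */
	return tb_ticks * NSEC_PER_SEC / tb_hz;
}

int main(void)
{
	/* one second of steal time at a 512 MHz timebase */
	printf("%llu ns\n", (unsigned long long)
	       tb_to_ns_sketch(512000000ULL, 512000000ULL));
	return 0;
}
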
+diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
+index 224b4dc02b50b..ce47613e38653 100644
+--- a/arch/riscv/include/asm/bitops.h
++++ b/arch/riscv/include/asm/bitops.h
+@@ -39,7 +39,7 @@ static __always_inline unsigned long variable__ffs(unsigned long word)
+ {
+ 	int num;
+ 
+-	asm_volatile_goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
++	asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
+ 				      RISCV_ISA_EXT_ZBB, 1)
+ 			  : : : : legacy);
+ 
+@@ -95,7 +95,7 @@ static __always_inline unsigned long variable__fls(unsigned long word)
+ {
+ 	int num;
+ 
+-	asm_volatile_goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
++	asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
+ 				      RISCV_ISA_EXT_ZBB, 1)
+ 			  : : : : legacy);
+ 
+@@ -154,7 +154,7 @@ static __always_inline int variable_ffs(int x)
+ 	if (!x)
+ 		return 0;
+ 
+-	asm_volatile_goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
++	asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
+ 				      RISCV_ISA_EXT_ZBB, 1)
+ 			  : : : : legacy);
+ 
+@@ -209,7 +209,7 @@ static __always_inline int variable_fls(unsigned int x)
+ 	if (!x)
+ 		return 0;
+ 
+-	asm_volatile_goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
++	asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
+ 				      RISCV_ISA_EXT_ZBB, 1)
+ 			  : : : : legacy);
+ 
+diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
+index a418c3112cd60..aa6548b46a8a5 100644
+--- a/arch/riscv/include/asm/cpufeature.h
++++ b/arch/riscv/include/asm/cpufeature.h
+@@ -78,7 +78,7 @@ riscv_has_extension_likely(const unsigned long ext)
+ 			   "ext must be < RISCV_ISA_EXT_MAX");
+ 
+ 	if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
+-		asm_volatile_goto(
++		asm goto(
+ 		ALTERNATIVE("j	%l[l_no]", "nop", 0, %[ext], 1)
+ 		:
+ 		: [ext] "i" (ext)
+@@ -101,7 +101,7 @@ riscv_has_extension_unlikely(const unsigned long ext)
+ 			   "ext must be < RISCV_ISA_EXT_MAX");
+ 
+ 	if (IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
+-		asm_volatile_goto(
++		asm goto(
+ 		ALTERNATIVE("nop", "j	%l[l_yes]", 0, %[ext], 1)
+ 		:
+ 		: [ext] "i" (ext)
+diff --git a/arch/riscv/include/asm/jump_label.h b/arch/riscv/include/asm/jump_label.h
+index 14a5ea8d8ef0f..4a35d787c0191 100644
+--- a/arch/riscv/include/asm/jump_label.h
++++ b/arch/riscv/include/asm/jump_label.h
+@@ -17,7 +17,7 @@
+ static __always_inline bool arch_static_branch(struct static_key * const key,
+ 					       const bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"	.align		2			\n\t"
+ 		"	.option push				\n\t"
+ 		"	.option norelax				\n\t"
+@@ -39,7 +39,7 @@ static __always_inline bool arch_static_branch(struct static_key * const key,
+ static __always_inline bool arch_static_branch_jump(struct static_key * const key,
+ 						    const bool branch)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		"	.align		2			\n\t"
+ 		"	.option push				\n\t"
+ 		"	.option norelax				\n\t"
+diff --git a/arch/s390/include/asm/jump_label.h b/arch/s390/include/asm/jump_label.h
+index 895f774bbcc55..bf78cf381dfcd 100644
+--- a/arch/s390/include/asm/jump_label.h
++++ b/arch/s390/include/asm/jump_label.h
+@@ -25,7 +25,7 @@
+  */
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("0:	brcl 0,%l[label]\n"
++	asm goto("0:	brcl 0,%l[label]\n"
+ 			  ".pushsection __jump_table,\"aw\"\n"
+ 			  ".balign	8\n"
+ 			  ".long	0b-.,%l[label]-.\n"
+@@ -39,7 +39,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("0:	brcl 15,%l[label]\n"
++	asm goto("0:	brcl 15,%l[label]\n"
+ 			  ".pushsection __jump_table,\"aw\"\n"
+ 			  ".balign	8\n"
+ 			  ".long	0b-.,%l[label]-.\n"
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 8207a892bbe22..db9a180de65f1 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -1220,7 +1220,6 @@ static int acquire_gmap_shadow(struct kvm_vcpu *vcpu,
+ 	gmap = gmap_shadow(vcpu->arch.gmap, asce, edat);
+ 	if (IS_ERR(gmap))
+ 		return PTR_ERR(gmap);
+-	gmap->private = vcpu->kvm;
+ 	vcpu->kvm->stat.gmap_shadow_create++;
+ 	WRITE_ONCE(vsie_page->gmap, gmap);
+ 	return 0;
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 6f96b5a71c638..8da39deb56ca4 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -1691,6 +1691,7 @@ struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
+ 		return ERR_PTR(-ENOMEM);
+ 	new->mm = parent->mm;
+ 	new->parent = gmap_get(parent);
++	new->private = parent->private;
+ 	new->orig_asce = asce;
+ 	new->edat_level = edat_level;
+ 	new->initialized = false;
+diff --git a/arch/sparc/include/asm/jump_label.h b/arch/sparc/include/asm/jump_label.h
+index 94eb529dcb776..2718cbea826a7 100644
+--- a/arch/sparc/include/asm/jump_label.h
++++ b/arch/sparc/include/asm/jump_label.h
+@@ -10,7 +10,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 "nop\n\t"
+ 		 "nop\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+@@ -26,7 +26,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 		 "b %l[l_yes]\n\t"
+ 		 "nop\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index 82f05f2506348..34957dcb88b9c 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -115,7 +115,9 @@ archprepare:
+ 	$(Q)$(MAKE) $(build)=$(HOST_DIR)/um include/generated/user_constants.h
+ 
+ LINK-$(CONFIG_LD_SCRIPT_STATIC) += -static
+-LINK-$(CONFIG_LD_SCRIPT_DYN) += $(call cc-option, -no-pie)
++ifdef CONFIG_LD_SCRIPT_DYN
++LINK-$(call gcc-min-version, 60100)$(CONFIG_CC_IS_CLANG) += -no-pie
++endif
+ LINK-$(CONFIG_LD_SCRIPT_DYN_RPATH) += -Wl,-rpath,/lib
+ 
+ CFLAGS_NO_HARDENING := $(call cc-option, -fno-PIC,) $(call cc-option, -fno-pic,) \
+diff --git a/arch/um/include/asm/cpufeature.h b/arch/um/include/asm/cpufeature.h
+index 4b6d1b526bc12..66fe06db872f0 100644
+--- a/arch/um/include/asm/cpufeature.h
++++ b/arch/um/include/asm/cpufeature.h
+@@ -75,7 +75,7 @@ extern void setup_clear_cpu_cap(unsigned int bit);
+  */
+ static __always_inline bool _static_cpu_has(u16 bit)
+ {
+-	asm_volatile_goto("1: jmp 6f\n"
++	asm goto("1: jmp 6f\n"
+ 		 "2:\n"
+ 		 ".skip -(((5f-4f) - (2b-1b)) > 0) * "
+ 			 "((5f-4f) - (2b-1b)),0x90\n"
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index 00468adf180f1..87396575cfa77 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -375,7 +375,7 @@ config X86_CMOV
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
+ 	default "5" if X86_32 && X86_CMPXCHG64
+ 	default "4"
+ 
+diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
+index 35389b2af88ee..0216f63a366b5 100644
+--- a/arch/x86/include/asm/barrier.h
++++ b/arch/x86/include/asm/barrier.h
+@@ -81,22 +81,4 @@ do {									\
+ 
+ #include <asm-generic/barrier.h>
+ 
+-/*
+- * Make previous memory operations globally visible before
+- * a WRMSR.
+- *
+- * MFENCE makes writes visible, but only affects load/store
+- * instructions.  WRMSR is unfortunately not a load/store
+- * instruction and is unaffected by MFENCE.  The LFENCE ensures
+- * that the WRMSR is not reordered.
+- *
+- * Most WRMSRs are full serializing instructions themselves and
+- * do not require this barrier.  This is only required for the
+- * IA32_TSC_DEADLINE and X2APIC MSRs.
+- */
+-static inline void weak_wrmsr_fence(void)
+-{
+-	asm volatile("mfence; lfence" : : : "memory");
+-}
+-
+ #endif /* _ASM_X86_BARRIER_H */
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index a26bebbdff87e..a1273698fc430 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -168,7 +168,7 @@ extern void clear_cpu_cap(struct cpuinfo_x86 *c, unsigned int bit);
+  */
+ static __always_inline bool _static_cpu_has(u16 bit)
+ {
+-	asm_volatile_goto(
++	asm goto(
+ 		ALTERNATIVE_TERNARY("jmp 6f", %P[feature], "", "jmp %l[t_no]")
+ 		".pushsection .altinstr_aux,\"ax\"\n"
+ 		"6:\n"
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 4af140cf5719e..3e973ff20e235 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -308,10 +308,10 @@
+ #define X86_FEATURE_SMBA		(11*32+21) /* "" Slow Memory Bandwidth Allocation */
+ #define X86_FEATURE_BMEC		(11*32+22) /* "" Bandwidth Monitoring Event Configuration */
+ #define X86_FEATURE_USER_SHSTK		(11*32+23) /* Shadow stack support for user mode applications */
+-
+ #define X86_FEATURE_SRSO		(11*32+24) /* "" AMD BTB untrain RETs */
+ #define X86_FEATURE_SRSO_ALIAS		(11*32+25) /* "" AMD BTB untrain RETs through aliasing */
+ #define X86_FEATURE_IBPB_ON_VMEXIT	(11*32+26) /* "" Issue an IBPB only on VMEXIT */
++#define X86_FEATURE_APIC_MSRS_FENCE	(11*32+27) /* "" IA32_TSC_DEADLINE and X2APIC MSRs need fencing */
+ 
+ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
+ #define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */
+diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
+index 071572e23d3a0..cbbef32517f00 100644
+--- a/arch/x86/include/asm/jump_label.h
++++ b/arch/x86/include/asm/jump_label.h
+@@ -24,7 +24,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:"
++	asm goto("1:"
+ 		"jmp %l[l_yes] # objtool NOPs this \n\t"
+ 		JUMP_TABLE_ENTRY
+ 		: :  "i" (key), "i" (2 | branch) : : l_yes);
+@@ -38,7 +38,7 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
+ 
+ static __always_inline bool arch_static_branch(struct static_key * const key, const bool branch)
+ {
+-	asm_volatile_goto("1:"
++	asm goto("1:"
+ 		".byte " __stringify(BYTES_NOP5) "\n\t"
+ 		JUMP_TABLE_ENTRY
+ 		: :  "i" (key), "i" (branch) : : l_yes);
+@@ -52,7 +52,7 @@ static __always_inline bool arch_static_branch(struct static_key * const key, co
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key * const key, const bool branch)
+ {
+-	asm_volatile_goto("1:"
++	asm goto("1:"
+ 		"jmp %l[l_yes]\n\t"
+ 		JUMP_TABLE_ENTRY
+ 		: :  "i" (key), "i" (branch) : : l_yes);
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index ae81a7191c1c0..26620d7642a9f 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -749,4 +749,22 @@ enum mds_mitigations {
+ 
+ extern bool gds_ucode_mitigated(void);
+ 
++/*
++ * Make previous memory operations globally visible before
++ * a WRMSR.
++ *
++ * MFENCE makes writes visible, but only affects load/store
++ * instructions.  WRMSR is unfortunately not a load/store
++ * instruction and is unaffected by MFENCE.  The LFENCE ensures
++ * that the WRMSR is not reordered.
++ *
++ * Most WRMSRs are full serializing instructions themselves and
++ * do not require this barrier.  This is only required for the
++ * IA32_TSC_DEADLINE and X2APIC MSRs.
++ */
++static inline void weak_wrmsr_fence(void)
++{
++	alternative("mfence; lfence", "", ALT_NOT(X86_FEATURE_APIC_MSRS_FENCE));
++}
++
+ #endif /* _ASM_X86_PROCESSOR_H */
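
weak_wrmsr_fence() now sits behind alternative() keyed on the new X86_FEATURE_APIC_MSRS_FENCE flag, so CPUs that clear the flag (AMD and Hygon, in later hunks of this patch) have the MFENCE;LFENCE pair patched out of the instruction stream at boot. A userspace sketch that models the boot-time patching with a plain runtime flag, assuming x86-64 with GCC or Clang:

#include <stdbool.h>
#include <stdio.h>

static bool apic_msrs_need_fence = true;	/* cleared on AMD/Hygon */

/* alternative() rewrites the code once at boot; a runtime branch is the
 * closest portable approximation of that. */
static inline void weak_wrmsr_fence_sketch(void)
{
	if (apic_msrs_need_fence)
		asm volatile("mfence; lfence" ::: "memory");
}

int main(void)
{
	weak_wrmsr_fence_sketch();	/* fenced, as on Intel */
	apic_msrs_need_fence = false;
	weak_wrmsr_fence_sketch();	/* no-op, as on AMD/Hygon */
	puts("done");
	return 0;
}
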
+diff --git a/arch/x86/include/asm/rmwcc.h b/arch/x86/include/asm/rmwcc.h
+index 4b081e0d3306b..363266cbcadaf 100644
+--- a/arch/x86/include/asm/rmwcc.h
++++ b/arch/x86/include/asm/rmwcc.h
+@@ -13,7 +13,7 @@
+ #define __GEN_RMWcc(fullop, _var, cc, clobbers, ...)			\
+ ({									\
+ 	bool c = false;							\
+-	asm_volatile_goto (fullop "; j" #cc " %l[cc_label]"		\
++	asm goto (fullop "; j" #cc " %l[cc_label]"		\
+ 			: : [var] "m" (_var), ## __VA_ARGS__		\
+ 			: clobbers : cc_label);				\
+ 	if (0) {							\
+diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
+index d6cd9344f6c78..48f8dd47cf688 100644
+--- a/arch/x86/include/asm/special_insns.h
++++ b/arch/x86/include/asm/special_insns.h
+@@ -205,7 +205,7 @@ static inline void clwb(volatile void *__p)
+ #ifdef CONFIG_X86_USER_SHADOW_STACK
+ static inline int write_user_shstk_64(u64 __user *addr, u64 val)
+ {
+-	asm_volatile_goto("1: wrussq %[val], (%[addr])\n"
++	asm goto("1: wrussq %[val], (%[addr])\n"
+ 			  _ASM_EXTABLE(1b, %l[fail])
+ 			  :: [addr] "r" (addr), [val] "r" (val)
+ 			  :: fail);
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 5c367c1290c35..237dc8cdd12b9 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -133,7 +133,7 @@ extern int __get_user_bad(void);
+ 
+ #ifdef CONFIG_X86_32
+ #define __put_user_goto_u64(x, addr, label)			\
+-	asm_volatile_goto("\n"					\
++	asm goto("\n"					\
+ 		     "1:	movl %%eax,0(%1)\n"		\
+ 		     "2:	movl %%edx,4(%1)\n"		\
+ 		     _ASM_EXTABLE_UA(1b, %l2)			\
+@@ -295,7 +295,7 @@ do {									\
+ } while (0)
+ 
+ #define __get_user_asm(x, addr, itype, ltype, label)			\
+-	asm_volatile_goto("\n"						\
++	asm_goto_output("\n"						\
+ 		     "1:	mov"itype" %[umem],%[output]\n"		\
+ 		     _ASM_EXTABLE_UA(1b, %l2)				\
+ 		     : [output] ltype(x)				\
+@@ -375,7 +375,7 @@ do {									\
+ 	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
+ 	__typeof__(*(_ptr)) __old = *_old;				\
+ 	__typeof__(*(_ptr)) __new = (_new);				\
+-	asm_volatile_goto("\n"						\
++	asm_goto_output("\n"						\
+ 		     "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
+ 		     _ASM_EXTABLE_UA(1b, %l[label])			\
+ 		     : CC_OUT(z) (success),				\
+@@ -394,7 +394,7 @@ do {									\
+ 	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
+ 	__typeof__(*(_ptr)) __old = *_old;				\
+ 	__typeof__(*(_ptr)) __new = (_new);				\
+-	asm_volatile_goto("\n"						\
++	asm_goto_output("\n"						\
+ 		     "1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n"		\
+ 		     _ASM_EXTABLE_UA(1b, %l[label])			\
+ 		     : CC_OUT(z) (success),				\
+@@ -477,7 +477,7 @@ struct __large_struct { unsigned long buf[100]; };
+  * aliasing issues.
+  */
+ #define __put_user_goto(x, addr, itype, ltype, label)			\
+-	asm_volatile_goto("\n"						\
++	asm goto("\n"							\
+ 		"1:	mov"itype" %0,%1\n"				\
+ 		_ASM_EXTABLE_UA(1b, %l2)				\
+ 		: : ltype(x), "m" (__m(addr))				\
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index f322ebd053a91..2055fb308f5b9 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1162,6 +1162,9 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) &&
+ 	     cpu_has_amd_erratum(c, amd_erratum_1485))
+ 		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT);
++
++	/* AMD CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes. */
++	clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b14fc8c1c9538..98f7ea6b931cb 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1856,6 +1856,13 @@ static void identify_cpu(struct cpuinfo_x86 *c)
+ 	c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
+ #endif
+ 
++
++	/*
++	 * Set default APIC and TSC_DEADLINE MSR fencing flag. AMD and
++	 * Hygon will clear it in ->c_init() below.
++	 */
++	set_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
++
+ 	/*
+ 	 * Vendor-specific initialization.  In this section we
+ 	 * canonicalize the feature flags, meaning if there are
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index 6f247d66758de..f0cd95502faae 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -354,6 +354,9 @@ static void init_hygon(struct cpuinfo_x86 *c)
+ 		set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
+ 
+ 	check_null_seg_clears_base(c);
++
++	/* Hygon CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes. */
++	clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
+ }
+ 
+ static void cpu_detect_tlb_hygon(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 558076dbde5bf..247f2225aa9f3 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -274,12 +274,13 @@ static int __restore_fpregs_from_user(void __user *buf, u64 ufeatures,
+  * Attempt to restore the FPU registers directly from user memory.
+  * Pagefaults are handled and any errors returned are fatal.
+  */
+-static bool restore_fpregs_from_user(void __user *buf, u64 xrestore,
+-				     bool fx_only, unsigned int size)
++static bool restore_fpregs_from_user(void __user *buf, u64 xrestore, bool fx_only)
+ {
+ 	struct fpu *fpu = &current->thread.fpu;
+ 	int ret;
+ 
++	/* Restore enabled features only. */
++	xrestore &= fpu->fpstate->user_xfeatures;
+ retry:
+ 	fpregs_lock();
+ 	/* Ensure that XFD is up to date */
+@@ -309,7 +310,7 @@ static bool restore_fpregs_from_user(void __user *buf, u64 xrestore,
+ 		if (ret != X86_TRAP_PF)
+ 			return false;
+ 
+-		if (!fault_in_readable(buf, size))
++		if (!fault_in_readable(buf, fpu->fpstate->user_size))
+ 			goto retry;
+ 		return false;
+ 	}
+@@ -339,7 +340,6 @@ static bool __fpu_restore_sig(void __user *buf, void __user *buf_fx,
+ 	struct user_i387_ia32_struct env;
+ 	bool success, fx_only = false;
+ 	union fpregs_state *fpregs;
+-	unsigned int state_size;
+ 	u64 user_xfeatures = 0;
+ 
+ 	if (use_xsave()) {
+@@ -349,17 +349,14 @@ static bool __fpu_restore_sig(void __user *buf, void __user *buf_fx,
+ 			return false;
+ 
+ 		fx_only = !fx_sw_user.magic1;
+-		state_size = fx_sw_user.xstate_size;
+ 		user_xfeatures = fx_sw_user.xfeatures;
+ 	} else {
+ 		user_xfeatures = XFEATURE_MASK_FPSSE;
+-		state_size = fpu->fpstate->user_size;
+ 	}
+ 
+ 	if (likely(!ia32_fxstate)) {
+ 		/* Restore the FPU registers directly from user memory. */
+-		return restore_fpregs_from_user(buf_fx, user_xfeatures, fx_only,
+-						state_size);
++		return restore_fpregs_from_user(buf_fx, user_xfeatures, fx_only);
+ 	}
+ 
+ 	/*
+diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
+index 36c8af87a707a..4e725854c63a1 100644
+--- a/arch/x86/kvm/svm/svm_ops.h
++++ b/arch/x86/kvm/svm/svm_ops.h
+@@ -8,7 +8,7 @@
+ 
+ #define svm_asm(insn, clobber...)				\
+ do {								\
+-	asm_volatile_goto("1: " __stringify(insn) "\n\t"	\
++	asm goto("1: " __stringify(insn) "\n\t"	\
+ 			  _ASM_EXTABLE(1b, %l[fault])		\
+ 			  ::: clobber : fault);			\
+ 	return;							\
+@@ -18,7 +18,7 @@ fault:								\
+ 
+ #define svm_asm1(insn, op1, clobber...)				\
+ do {								\
+-	asm_volatile_goto("1: "  __stringify(insn) " %0\n\t"	\
++	asm goto("1: "  __stringify(insn) " %0\n\t"	\
+ 			  _ASM_EXTABLE(1b, %l[fault])		\
+ 			  :: op1 : clobber : fault);		\
+ 	return;							\
+@@ -28,7 +28,7 @@ fault:								\
+ 
+ #define svm_asm2(insn, op1, op2, clobber...)				\
+ do {									\
+-	asm_volatile_goto("1: "  __stringify(insn) " %1, %0\n\t"	\
++	asm goto("1: "  __stringify(insn) " %1, %0\n\t"	\
+ 			  _ASM_EXTABLE(1b, %l[fault])			\
+ 			  :: op1, op2 : clobber : fault);		\
+ 	return;								\
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 90c1f7f07e53b..1549461fa42b7 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -71,7 +71,7 @@ static int fixed_pmc_events[] = {
+ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
+ {
+ 	struct kvm_pmc *pmc;
+-	u8 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl;
++	u64 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl;
+ 	int i;
+ 
+ 	pmu->fixed_ctr_ctrl = data;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index be20a60047b1f..94082169bbbc2 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -745,7 +745,7 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
+  */
+ static int kvm_cpu_vmxoff(void)
+ {
+-	asm_volatile_goto("1: vmxoff\n\t"
++	asm goto("1: vmxoff\n\t"
+ 			  _ASM_EXTABLE(1b, %l[fault])
+ 			  ::: "cc", "memory" : fault);
+ 
+@@ -2789,7 +2789,7 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
+ 
+ 	cr4_set_bits(X86_CR4_VMXE);
+ 
+-	asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
++	asm goto("1: vmxon %[vmxon_pointer]\n\t"
+ 			  _ASM_EXTABLE(1b, %l[fault])
+ 			  : : [vmxon_pointer] "m"(vmxon_pointer)
+ 			  : : fault);
+diff --git a/arch/x86/kvm/vmx/vmx_ops.h b/arch/x86/kvm/vmx/vmx_ops.h
+index 33af7b4c6eb4a..6a0c6e81f7f3e 100644
+--- a/arch/x86/kvm/vmx/vmx_ops.h
++++ b/arch/x86/kvm/vmx/vmx_ops.h
+@@ -94,7 +94,7 @@ static __always_inline unsigned long __vmcs_readl(unsigned long field)
+ 
+ #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+ 
+-	asm_volatile_goto("1: vmread %[field], %[output]\n\t"
++	asm_goto_output("1: vmread %[field], %[output]\n\t"
+ 			  "jna %l[do_fail]\n\t"
+ 
+ 			  _ASM_EXTABLE(1b, %l[do_exception])
+@@ -188,7 +188,7 @@ static __always_inline unsigned long vmcs_readl(unsigned long field)
+ 
+ #define vmx_asm1(insn, op1, error_args...)				\
+ do {									\
+-	asm_volatile_goto("1: " __stringify(insn) " %0\n\t"		\
++	asm goto("1: " __stringify(insn) " %0\n\t"			\
+ 			  ".byte 0x2e\n\t" /* branch not taken hint */	\
+ 			  "jna %l[error]\n\t"				\
+ 			  _ASM_EXTABLE(1b, %l[fault])			\
+@@ -205,7 +205,7 @@ fault:									\
+ 
+ #define vmx_asm2(insn, op1, op2, error_args...)				\
+ do {									\
+-	asm_volatile_goto("1: "  __stringify(insn) " %1, %0\n\t"	\
++	asm goto("1: "  __stringify(insn) " %1, %0\n\t"			\
+ 			  ".byte 0x2e\n\t" /* branch not taken hint */	\
+ 			  "jna %l[error]\n\t"				\
+ 			  _ASM_EXTABLE(1b, %l[fault])			\
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 1a3aaa7dafae4..468870450b8ba 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5405,7 +5405,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
+ 	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
+ 		vcpu->arch.nmi_pending = 0;
+ 		atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending);
+-		kvm_make_request(KVM_REQ_NMI, vcpu);
++		if (events->nmi.pending)
++			kvm_make_request(KVM_REQ_NMI, vcpu);
+ 	}
+ 	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
+ 
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index 968d7005f4a72..f50cc210a9818 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -26,18 +26,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
+ 	for (; addr < end; addr = next) {
+ 		pud_t *pud = pud_page + pud_index(addr);
+ 		pmd_t *pmd;
++		bool use_gbpage;
+ 
+ 		next = (addr & PUD_MASK) + PUD_SIZE;
+ 		if (next > end)
+ 			next = end;
+ 
+-		if (info->direct_gbpages) {
+-			pud_t pudval;
++		/* if this is already a gbpage, this portion is already mapped */
++		if (pud_large(*pud))
++			continue;
++
++		/* Is using a gbpage allowed? */
++		use_gbpage = info->direct_gbpages;
+ 
+-			if (pud_present(*pud))
+-				continue;
++		/* Don't use gbpage if it maps more than the requested region. */
++		/* at the beginning: */
++		use_gbpage &= ((addr & ~PUD_MASK) == 0);
++		/* ... or at the end: */
++		use_gbpage &= ((next & ~PUD_MASK) == 0);
++
++		/* Never overwrite existing mappings */
++		use_gbpage &= !pud_present(*pud);
++
++		if (use_gbpage) {
++			pud_t pudval;
+ 
+-			addr &= PUD_MASK;
+ 			pudval = __pud((addr - info->offset) | info->page_flag);
+ 			set_pud(pud, pudval);
+ 			continue;
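
The rewritten ident_pud_init() installs a 1 GiB page only when the chunk is gbpage-aligned at both ends and the PUD entry is still empty; the old "addr &= PUD_MASK" could silently widen the mapping beyond the requested region. A self-contained model of the alignment half of that test, assuming the x86-64 4-level paging constants (the in-kernel check additionally refuses to overwrite a present PUD):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PUD_SIZE	(1ULL << 30)		/* 1 GiB */
#define PUD_MASK	(~(PUD_SIZE - 1))

static bool can_use_gbpage(uint64_t addr, uint64_t next, bool direct_gbpages)
{
	bool use_gbpage = direct_gbpages;

	use_gbpage &= ((addr & ~PUD_MASK) == 0);	/* aligned start */
	use_gbpage &= ((next & ~PUD_MASK) == 0);	/* aligned end */
	return use_gbpage;
}

int main(void)
{
	printf("%d\n", can_use_gbpage(0x40000000, 0x80000000, true)); /* 1 */
	printf("%d\n", can_use_gbpage(0x40000000, 0x7fe00000, true)); /* 0 */
	return 0;
}
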
+diff --git a/arch/xtensa/include/asm/jump_label.h b/arch/xtensa/include/asm/jump_label.h
+index c812bf85021c0..46c8596259d2d 100644
+--- a/arch/xtensa/include/asm/jump_label.h
++++ b/arch/xtensa/include/asm/jump_label.h
+@@ -13,7 +13,7 @@
+ static __always_inline bool arch_static_branch(struct static_key *key,
+ 					       bool branch)
+ {
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 			  "_nop\n\t"
+ 			  ".pushsection __jump_table,  \"aw\"\n\t"
+ 			  ".word 1b, %l[l_yes], %c0\n\t"
+@@ -38,7 +38,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key,
+ 	 * make it reachable and wrap both into a no-transform block
+ 	 * to avoid any assembler interference with this.
+ 	 */
+-	asm_volatile_goto("1:\n\t"
++	asm goto("1:\n\t"
+ 			  ".begin no-transform\n\t"
+ 			  "_j %l[l_yes]\n\t"
+ 			  "2:\n\t"
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index a71974a5e57cd..a02d3d922c583 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -772,11 +772,16 @@ static void req_bio_endio(struct request *rq, struct bio *bio,
+ 		/*
+ 		 * Partial zone append completions cannot be supported as the
+ 		 * BIO fragments may end up not being written sequentially.
++		 * In that case, force the completed nbytes to be equal to
++		 * the BIO size so that bio_advance() sets the BIO remaining
++		 * size to 0 and we end up calling bio_endio() before returning.
+ 		 */
+-		if (bio->bi_iter.bi_size != nbytes)
++		if (bio->bi_iter.bi_size != nbytes) {
+ 			bio->bi_status = BLK_STS_IOERR;
+-		else
++			nbytes = bio->bi_iter.bi_size;
++		} else {
+ 			bio->bi_iter.bi_sector = rq->__sector;
++		}
+ 	}
+ 
+ 	bio_advance(bio, nbytes);
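
A zone-append BIO that completes partially cannot simply resume, because the remaining fragments would no longer land sequentially in the zone; the hunk therefore fails the BIO and forces nbytes up to the full remaining size so that bio_advance() reaches zero and bio_endio() runs exactly once. A plain-C model of that accounting, not blk-mq code:

#include <stdbool.h>
#include <stdio.h>

struct bio_model {
	unsigned int remaining;	/* bytes not yet completed */
	bool error;
};

static void zone_append_endio(struct bio_model *bio, unsigned int nbytes)
{
	if (bio->remaining != nbytes) {		/* partial completion */
		bio->error = true;
		nbytes = bio->remaining;	/* consume everything */
	}
	bio->remaining -= nbytes;		/* bio_advance() stand-in */
	if (bio->remaining == 0)
		printf("endio: %s\n", bio->error ? "BLK_STS_IOERR" : "OK");
}

int main(void)
{
	struct bio_model bio = { .remaining = 8192, .error = false };

	zone_append_endio(&bio, 4096);	/* partial: fails and finishes */
	return 0;
}
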
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index 0bb613139becb..f8fda9cf583e1 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -165,9 +165,9 @@ static void wb_timestamp(struct rq_wb *rwb, unsigned long *var)
+  */
+ static bool wb_recent_wait(struct rq_wb *rwb)
+ {
+-	struct bdi_writeback *wb = &rwb->rqos.disk->bdi->wb;
++	struct backing_dev_info *bdi = rwb->rqos.disk->bdi;
+ 
+-	return time_before(jiffies, wb->dirty_sleep + HZ);
++	return time_before(jiffies, bdi->last_bdp_sleep + HZ);
+ }
+ 
+ static inline struct rq_wait *get_rq_wait(struct rq_wb *rwb,
+diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
+index 82c44d4899b96..e24c829d7a015 100644
+--- a/crypto/algif_hash.c
++++ b/crypto/algif_hash.c
+@@ -91,13 +91,13 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		if (!(msg->msg_flags & MSG_MORE)) {
+ 			err = hash_alloc_result(sk, ctx);
+ 			if (err)
+-				goto unlock_free;
++				goto unlock_free_result;
+ 			ahash_request_set_crypt(&ctx->req, NULL,
+ 						ctx->result, 0);
+ 			err = crypto_wait_req(crypto_ahash_final(&ctx->req),
+ 					      &ctx->wait);
+ 			if (err)
+-				goto unlock_free;
++				goto unlock_free_result;
+ 		}
+ 		goto done_more;
+ 	}
+@@ -170,6 +170,7 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg,
+ 
+ unlock_free:
+ 	af_alg_free_sg(&ctx->sgl);
++unlock_free_result:
+ 	hash_free_result(sk, ctx);
+ 	ctx->more = false;
+ 	goto unlock;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 71a40a4c546f5..8460458ebe3d4 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -478,6 +478,16 @@ binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
+ {
+ 	WARN_ON(!list_empty(&thread->waiting_thread_node));
+ 	binder_enqueue_work_ilocked(work, &thread->todo);
++
++	/* (e)poll-based threads require an explicit wakeup signal when
++	 * queuing their own work; they rely on these events to consume
++	 * messages without blocking on I/O. Without it, threads risk waiting
++	 * indefinitely without handling the work.
++	 */
++	if (thread->looper & BINDER_LOOPER_STATE_POLL &&
++	    thread->pid == current->pid && !thread->process_todo)
++		wake_up_interruptible_sync(&thread->wait);
++
+ 	thread->process_todo = true;
+ }
+ 
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 67ba592afc777..f9fb692491103 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -284,10 +284,12 @@ static bool device_is_ancestor(struct device *dev, struct device *target)
+ 	return false;
+ }
+ 
++#define DL_MARKER_FLAGS		(DL_FLAG_INFERRED | \
++				 DL_FLAG_CYCLE | \
++				 DL_FLAG_MANAGED)
+ static inline bool device_link_flag_is_sync_state_only(u32 flags)
+ {
+-	return (flags & ~(DL_FLAG_INFERRED | DL_FLAG_CYCLE)) ==
+-		(DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED);
++	return (flags & ~DL_MARKER_FLAGS) == DL_FLAG_SYNC_STATE_ONLY;
+ }
+ 
+ /**
+@@ -2058,9 +2060,14 @@ static int fw_devlink_create_devlink(struct device *con,
+ 
+ 	/*
+ 	 * SYNC_STATE_ONLY device links don't block probing and supports cycles.
+-	 * So cycle detection isn't necessary and shouldn't be done.
++	 * So, one might expect that cycle detection isn't necessary for them.
++	 * However, if the device link was marked as SYNC_STATE_ONLY because
++	 * it's part of a cycle, then we still need to do cycle detection. This
++	 * is because the consumer and supplier might be part of multiple cycles
++	 * and we need to detect all those cycles.
+ 	 */
+-	if (!(flags & DL_FLAG_SYNC_STATE_ONLY)) {
++	if (!device_link_flag_is_sync_state_only(flags) ||
++	    flags & DL_FLAG_CYCLE) {
+ 		device_links_write_lock();
+ 		if (__fw_devlink_relax_cycles(con, sup_handle)) {
+ 			__fwnode_link_cycle(link);
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index da1777e39eaa5..011948f02b367 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -1111,7 +1111,7 @@ static int __init genpd_power_off_unused(void)
+ 
+ 	return 0;
+ }
+-late_initcall(genpd_power_off_unused);
++late_initcall_sync(genpd_power_off_unused);
+ 
+ #ifdef CONFIG_PM_SLEEP
+ 
+diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
+index 3d5e6d705fc6e..44b19e6961763 100644
+--- a/drivers/connector/cn_proc.c
++++ b/drivers/connector/cn_proc.c
+@@ -108,9 +108,8 @@ static inline void send_msg(struct cn_msg *msg)
+ 		filter_data[1] = 0;
+ 	}
+ 
+-	if (cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
+-			     cn_filter, (void *)filter_data) == -ESRCH)
+-		atomic_set(&proc_event_num_listeners, 0);
++	cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
++			     cn_filter, (void *)filter_data);
+ 
+ 	local_unlock(&local_event.lock);
+ }
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index fcaccd0b5a651..53b217a621042 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -534,10 +534,16 @@ EXPORT_SYMBOL_GPL(sev_platform_init);
+ 
+ static int __sev_platform_shutdown_locked(int *error)
+ {
+-	struct sev_device *sev = psp_master->sev_data;
++	struct psp_device *psp = psp_master;
++	struct sev_device *sev;
+ 	int ret;
+ 
+-	if (!sev || sev->state == SEV_STATE_UNINIT)
++	if (!psp || !psp->sev_data)
++		return 0;
++
++	sev = psp->sev_data;
++
++	if (sev->state == SEV_STATE_UNINIT)
+ 		return 0;
+ 
+ 	ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
+diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
+index 7cc99d627942e..c8c2e836193aa 100644
+--- a/drivers/dpll/dpll_netlink.c
++++ b/drivers/dpll/dpll_netlink.c
+@@ -1171,6 +1171,7 @@ int dpll_nl_pin_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+ 	unsigned long i;
+ 	int ret = 0;
+ 
++	mutex_lock(&dpll_lock);
+ 	xa_for_each_marked_start(&dpll_pin_xa, i, pin, DPLL_REGISTERED,
+ 				 ctx->idx) {
+ 		if (!dpll_pin_available(pin))
+@@ -1190,6 +1191,8 @@ int dpll_nl_pin_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+ 		}
+ 		genlmsg_end(skb, hdr);
+ 	}
++	mutex_unlock(&dpll_lock);
++
+ 	if (ret == -EMSGSIZE) {
+ 		ctx->idx = i;
+ 		return skb->len;
+@@ -1345,6 +1348,7 @@ int dpll_nl_device_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+ 	unsigned long i;
+ 	int ret = 0;
+ 
++	mutex_lock(&dpll_lock);
+ 	xa_for_each_marked_start(&dpll_device_xa, i, dpll, DPLL_REGISTERED,
+ 				 ctx->idx) {
+ 		hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+@@ -1361,6 +1365,8 @@ int dpll_nl_device_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+ 		}
+ 		genlmsg_end(skb, hdr);
+ 	}
++	mutex_unlock(&dpll_lock);
++
+ 	if (ret == -EMSGSIZE) {
+ 		ctx->idx = i;
+ 		return skb->len;
+@@ -1411,20 +1417,6 @@ dpll_unlock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ 	mutex_unlock(&dpll_lock);
+ }
+ 
+-int dpll_lock_dumpit(struct netlink_callback *cb)
+-{
+-	mutex_lock(&dpll_lock);
+-
+-	return 0;
+-}
+-
+-int dpll_unlock_dumpit(struct netlink_callback *cb)
+-{
+-	mutex_unlock(&dpll_lock);
+-
+-	return 0;
+-}
+-
+ int dpll_pin_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ 		      struct genl_info *info)
+ {
+diff --git a/drivers/dpll/dpll_nl.c b/drivers/dpll/dpll_nl.c
+index eaee5be7aa642..1e95f5397cfce 100644
+--- a/drivers/dpll/dpll_nl.c
++++ b/drivers/dpll/dpll_nl.c
+@@ -95,9 +95,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
+ 	},
+ 	{
+ 		.cmd	= DPLL_CMD_DEVICE_GET,
+-		.start	= dpll_lock_dumpit,
+ 		.dumpit	= dpll_nl_device_get_dumpit,
+-		.done	= dpll_unlock_dumpit,
+ 		.flags	= GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
+ 	},
+ 	{
+@@ -129,9 +127,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
+ 	},
+ 	{
+ 		.cmd		= DPLL_CMD_PIN_GET,
+-		.start		= dpll_lock_dumpit,
+ 		.dumpit		= dpll_nl_pin_get_dumpit,
+-		.done		= dpll_unlock_dumpit,
+ 		.policy		= dpll_pin_get_dump_nl_policy,
+ 		.maxattr	= DPLL_A_PIN_ID,
+ 		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
+diff --git a/drivers/dpll/dpll_nl.h b/drivers/dpll/dpll_nl.h
+index 92d4c9c4f788d..f491262bee4f0 100644
+--- a/drivers/dpll/dpll_nl.h
++++ b/drivers/dpll/dpll_nl.h
+@@ -30,8 +30,6 @@ dpll_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ void
+ dpll_pin_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ 		   struct genl_info *info);
+-int dpll_lock_dumpit(struct netlink_callback *cb);
+-int dpll_unlock_dumpit(struct netlink_callback *cb);
+ 
+ int dpll_nl_device_id_get_doit(struct sk_buff *skb, struct genl_info *info);
+ int dpll_nl_device_get_doit(struct sk_buff *skb, struct genl_info *info);
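The three dpll hunks above move locking out of the netlink start/done callbacks and into the dumpit handlers, so dpll_lock is held only while a single dump chunk is being produced, never across a return to user space. A rough user-space analogue of a resumable dump that takes the lock per chunk (all names hypothetical):

	#include <pthread.h>
	#include <stdio.h>

	#define N_ITEMS 10
	#define CHUNK    4

	static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
	static int table[N_ITEMS] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

	/* Copy up to CHUNK entries starting at *idx; returns the number copied. */
	static int dump_chunk(int *idx, int *out)
	{
		int n = 0;

		pthread_mutex_lock(&table_lock);	/* held per chunk, not per dump */
		while (*idx < N_ITEMS && n < CHUNK)
			out[n++] = table[(*idx)++];
		pthread_mutex_unlock(&table_lock);

		return n;
	}

	int main(void)
	{
		int idx = 0, buf[CHUNK], n;

		while ((n = dump_chunk(&idx, buf)) > 0)
			printf("chunk of %d entries, resume at index %d\n", n, idx);
		return 0;
	}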
+diff --git a/drivers/firewire/core-device.c b/drivers/firewire/core-device.c
+index 2828e9573e90b..da8a4c8f28768 100644
+--- a/drivers/firewire/core-device.c
++++ b/drivers/firewire/core-device.c
+@@ -100,10 +100,9 @@ static int textual_leaf_to_string(const u32 *block, char *buf, size_t size)
+  * @buf:	where to put the string
+  * @size:	size of @buf, in bytes
+  *
+- * The string is taken from a minimal ASCII text descriptor leaf after
+- * the immediate entry with @key.  The string is zero-terminated.
+- * An overlong string is silently truncated such that it and the
+- * zero byte fit into @size.
++ * The string is taken from a minimal ASCII text descriptor leaf just after the entry with
++ * @key. The string is zero-terminated. An overlong string is silently truncated such that it
++ * and the zero byte fit into @size.
+  *
+  * Returns strlen(buf) or a negative error code.
+  */
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index 06964a3c130f6..d561d7de46a99 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -28,7 +28,7 @@ cflags-$(CONFIG_ARM)		+= -DEFI_HAVE_STRLEN -DEFI_HAVE_STRNLEN \
+ 				   -DEFI_HAVE_MEMCHR -DEFI_HAVE_STRRCHR \
+ 				   -DEFI_HAVE_STRCMP -fno-builtin -fpic \
+ 				   $(call cc-option,-mno-single-pic-base)
+-cflags-$(CONFIG_RISCV)		+= -fpic -DNO_ALTERNATIVE
++cflags-$(CONFIG_RISCV)		+= -fpic -DNO_ALTERNATIVE -mno-relax
+ cflags-$(CONFIG_LOONGARCH)	+= -fpie
+ 
+ cflags-$(CONFIG_EFI_PARAMS_FROM_FDT)	+= -I$(srctree)/scripts/dtc/libfdt
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 5fe1df95dc389..19bc8d47317bb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4496,7 +4496,6 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
+ 		drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, true);
+ 
+ 	cancel_delayed_work_sync(&adev->delayed_init_work);
+-	flush_delayed_work(&adev->gfx.gfx_off_delay_work);
+ 
+ 	amdgpu_ras_suspend(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index b9674c57c4365..6ddc8e3360e22 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -723,8 +723,15 @@ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
+ 
+ 		if (adev->gfx.gfx_off_req_count == 0 &&
+ 		    !adev->gfx.gfx_off_state) {
+-			schedule_delayed_work(&adev->gfx.gfx_off_delay_work,
++			/* If going to s2idle, no need to wait */
++			if (adev->in_s0ix) {
++				if (!amdgpu_dpm_set_powergating_by_smu(adev,
++						AMD_IP_BLOCK_TYPE_GFX, true))
++					adev->gfx.gfx_off_state = true;
++			} else {
++				schedule_delayed_work(&adev->gfx.gfx_off_delay_work,
+ 					      delay);
++			}
+ 		}
+ 	} else {
+ 		if (adev->gfx.gfx_off_req_count == 0) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/cik_ih.c b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
+index 6f7c031dd197a..f24e34dc33d1d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/cik_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
+@@ -204,6 +204,12 @@ static u32 cik_ih_get_wptr(struct amdgpu_device *adev,
+ 		tmp = RREG32(mmIH_RB_CNTL);
+ 		tmp |= IH_RB_CNTL__WPTR_OVERFLOW_CLEAR_MASK;
+ 		WREG32(mmIH_RB_CNTL, tmp);
++
++		/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++		 * can be detected.
++		 */
++		tmp &= ~IH_RB_CNTL__WPTR_OVERFLOW_CLEAR_MASK;
++		WREG32(mmIH_RB_CNTL, tmp);
+ 	}
+ 	return (wptr & ih->ptr_mask);
+ }
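Every IH fix in this patch repeats one sequence: write WPTR_OVERFLOW_CLEAR to acknowledge the ring overflow, then immediately write it back to zero so the hardware can latch the next one. A simulated-register sketch of that acknowledge-and-re-arm pattern (the register model here is invented; the driver really goes through RREG32/WREG32):

	#include <stdio.h>
	#include <stdint.h>

	#define WPTR_OVERFLOW_CLEAR	(1u << 31)

	static uint32_t ih_rb_cntl;	/* stand-in for the MMIO register */

	static void ack_and_rearm(void)
	{
		uint32_t tmp = ih_rb_cntl;

		tmp |= WPTR_OVERFLOW_CLEAR;	/* acknowledge the overflow */
		ih_rb_cntl = tmp;

		/* Re-arm right away, otherwise later overflows stay invisible. */
		tmp &= ~WPTR_OVERFLOW_CLEAR;
		ih_rb_cntl = tmp;
	}

	int main(void)
	{
		ack_and_rearm();
		printf("CLEAR bit after re-arm: %d\n",
		       !!(ih_rb_cntl & WPTR_OVERFLOW_CLEAR));
		return 0;
	}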
+diff --git a/drivers/gpu/drm/amd/amdgpu/cz_ih.c b/drivers/gpu/drm/amd/amdgpu/cz_ih.c
+index b8c47e0cf37ad..c19681492efa7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/cz_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/cz_ih.c
+@@ -216,6 +216,11 @@ static u32 cz_ih_get_wptr(struct amdgpu_device *adev,
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32(mmIH_RB_CNTL, tmp);
+ 
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32(mmIH_RB_CNTL, tmp);
+ 
+ out:
+ 	return (wptr & ih->ptr_mask);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index ecb622b7f9709..dcdecb18b2306 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4027,8 +4027,6 @@ static int gfx_v10_0_init_microcode(struct amdgpu_device *adev)
+ 		err = 0;
+ 		adev->gfx.mec2_fw = NULL;
+ 	}
+-	amdgpu_gfx_cp_init_microcode(adev, AMDGPU_UCODE_ID_CP_MEC2);
+-	amdgpu_gfx_cp_init_microcode(adev, AMDGPU_UCODE_ID_CP_MEC2_JT);
+ 
+ 	gfx_v10_0_check_fw_write_wait(adev);
+ out:
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index ad91329f227df..ced2e1bdccb7a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1947,14 +1947,6 @@ static int gmc_v9_0_init_mem_ranges(struct amdgpu_device *adev)
+ 
+ static void gmc_v9_4_3_init_vram_info(struct amdgpu_device *adev)
+ {
+-	static const u32 regBIF_BIOS_SCRATCH_4 = 0x50;
+-	u32 vram_info;
+-
+-	/* Only for dGPU, vendor informaton is reliable */
+-	if (!amdgpu_sriov_vf(adev) && !(adev->flags & AMD_IS_APU)) {
+-		vram_info = RREG32(regBIF_BIOS_SCRATCH_4);
+-		adev->gmc.vram_vendor = vram_info & 0xF;
+-	}
+ 	adev->gmc.vram_type = AMDGPU_VRAM_TYPE_HBM;
+ 	adev->gmc.vram_width = 128 * 64;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/iceland_ih.c b/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
+index aecad530b10a6..2c02ae69883d2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
+@@ -215,6 +215,11 @@ static u32 iceland_ih_get_wptr(struct amdgpu_device *adev,
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32(mmIH_RB_CNTL, tmp);
+ 
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32(mmIH_RB_CNTL, tmp);
+ 
+ out:
+ 	return (wptr & ih->ptr_mask);
+diff --git a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
+index d9ed7332d805d..ad4ad39f128f7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/ih_v6_0.c
+@@ -418,6 +418,12 @@ static u32 ih_v6_0_get_wptr(struct amdgpu_device *adev,
+ 	tmp = RREG32_NO_KIQ(ih_regs->ih_rb_cntl);
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
+ out:
+ 	return (wptr & ih->ptr_mask);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/ih_v6_1.c b/drivers/gpu/drm/amd/amdgpu/ih_v6_1.c
+index 8fb05eae340ad..b8da0fc29378c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/ih_v6_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/ih_v6_1.c
+@@ -418,6 +418,13 @@ static u32 ih_v6_1_get_wptr(struct amdgpu_device *adev,
+ 	tmp = RREG32_NO_KIQ(ih_regs->ih_rb_cntl);
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++
+ out:
+ 	return (wptr & ih->ptr_mask);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/navi10_ih.c b/drivers/gpu/drm/amd/amdgpu/navi10_ih.c
+index e64b33115848d..de93614726c9a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/navi10_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/navi10_ih.c
+@@ -442,6 +442,12 @@ static u32 navi10_ih_get_wptr(struct amdgpu_device *adev,
+ 	tmp = RREG32_NO_KIQ(ih_regs->ih_rb_cntl);
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
+ out:
+ 	return (wptr & ih->ptr_mask);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_ih.c b/drivers/gpu/drm/amd/amdgpu/si_ih.c
+index 9a24f17a57502..cada9f300a7f5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_ih.c
+@@ -119,6 +119,12 @@ static u32 si_ih_get_wptr(struct amdgpu_device *adev,
+ 		tmp = RREG32(IH_RB_CNTL);
+ 		tmp |= IH_RB_CNTL__WPTR_OVERFLOW_CLEAR_MASK;
+ 		WREG32(IH_RB_CNTL, tmp);
++
++		/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++		 * can be detected.
++		 */
++		tmp &= ~IH_RB_CNTL__WPTR_OVERFLOW_CLEAR_MASK;
++		WREG32(IH_RB_CNTL, tmp);
+ 	}
+ 	return (wptr & ih->ptr_mask);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index 48c6efcdeac97..4d7188912edfe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -50,13 +50,13 @@ static const struct amd_ip_funcs soc21_common_ip_funcs;
+ /* SOC21 */
+ static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn0[] = {
+ 	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
+-	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
++	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ 	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
+ };
+ 
+ static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn1[] = {
+ 	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
+-	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
++	{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 0)},
+ };
+ 
+ static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn0 = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
+index 917707bba7f36..450b6e8315091 100644
+--- a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
+@@ -219,6 +219,12 @@ static u32 tonga_ih_get_wptr(struct amdgpu_device *adev,
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32(mmIH_RB_CNTL, tmp);
+ 
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32(mmIH_RB_CNTL, tmp);
++
+ out:
+ 	return (wptr & ih->ptr_mask);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
+index d364c6dd152c3..bf68e18e3824b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
+@@ -373,6 +373,12 @@ static u32 vega10_ih_get_wptr(struct amdgpu_device *adev,
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
+ 
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++
+ out:
+ 	return (wptr & ih->ptr_mask);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+index ddfc6941f9d55..db66e6cccaf2a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+@@ -421,6 +421,12 @@ static u32 vega20_ih_get_wptr(struct amdgpu_device *adev,
+ 	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+ 	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
+ 
++	/* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++	 * can be detected.
++	 */
++	tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++	WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++
+ out:
+ 	return (wptr & ih->ptr_mask);
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index affc628004ffb..d3d1ccf66ece0 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -6110,7 +6110,9 @@ create_stream_for_sink(struct drm_connector *connector,
+ 		if (recalculate_timing) {
+ 			freesync_mode = get_highest_refresh_rate_mode(aconnector, false);
+ 			drm_mode_copy(&saved_mode, &mode);
++			saved_mode.picture_aspect_ratio = mode.picture_aspect_ratio;
+ 			drm_mode_copy(&mode, freesync_mode);
++			mode.picture_aspect_ratio = saved_mode.picture_aspect_ratio;
+ 		} else {
+ 			decide_crtc_timing_for_drm_display_mode(
+ 					&mode, preferred_mode, scale);
+@@ -10440,11 +10442,13 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			goto fail;
+ 		}
+ 
+-		ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars);
+-		if (ret) {
+-			DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n");
+-			ret = -EINVAL;
+-			goto fail;
++		if (dc_resource_is_dsc_encoding_supported(dc)) {
++			ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars);
++			if (ret) {
++				DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n");
++				ret = -EINVAL;
++				goto fail;
++			}
+ 		}
+ 
+ 		ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context, vars);
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 4ef90a3add1c9..54df6cac19c8b 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -654,10 +654,13 @@ static void dcn35_clk_mgr_helper_populate_bw_params(struct clk_mgr_internal *clk
+ 	struct clk_limit_table_entry def_max = bw_params->clk_table.entries[bw_params->clk_table.num_entries - 1];
+ 	uint32_t max_fclk = 0, min_pstate = 0, max_dispclk = 0, max_dppclk = 0;
+ 	uint32_t max_pstate = 0, max_dram_speed_mts = 0, min_dram_speed_mts = 0;
++	uint32_t num_memps, num_fclk, num_dcfclk;
+ 	int i;
+ 
+ 	/* Determine min/max p-state values. */
+-	for (i = 0; i < clock_table->NumMemPstatesEnabled; i++) {
++	num_memps = (clock_table->NumMemPstatesEnabled > NUM_MEM_PSTATE_LEVELS) ? NUM_MEM_PSTATE_LEVELS :
++		clock_table->NumMemPstatesEnabled;
++	for (i = 0; i < num_memps; i++) {
+ 		uint32_t dram_speed_mts = calc_dram_speed_mts(&clock_table->MemPstateTable[i]);
+ 
+ 		if (is_valid_clock_value(dram_speed_mts) && dram_speed_mts > max_dram_speed_mts) {
+@@ -669,7 +672,7 @@ static void dcn35_clk_mgr_helper_populate_bw_params(struct clk_mgr_internal *clk
+ 	min_dram_speed_mts = max_dram_speed_mts;
+ 	min_pstate = max_pstate;
+ 
+-	for (i = 0; i < clock_table->NumMemPstatesEnabled; i++) {
++	for (i = 0; i < num_memps; i++) {
+ 		uint32_t dram_speed_mts = calc_dram_speed_mts(&clock_table->MemPstateTable[i]);
+ 
+ 		if (is_valid_clock_value(dram_speed_mts) && dram_speed_mts < min_dram_speed_mts) {
+@@ -698,9 +701,13 @@ static void dcn35_clk_mgr_helper_populate_bw_params(struct clk_mgr_internal *clk
+ 	/* Base the clock table on dcfclk, need at least one entry regardless of pmfw table */
+ 	ASSERT(clock_table->NumDcfClkLevelsEnabled > 0);
+ 
+-	max_fclk = find_max_clk_value(clock_table->FclkClocks_Freq, clock_table->NumFclkLevelsEnabled);
++	num_fclk = (clock_table->NumFclkLevelsEnabled > NUM_FCLK_DPM_LEVELS) ? NUM_FCLK_DPM_LEVELS :
++		clock_table->NumFclkLevelsEnabled;
++	max_fclk = find_max_clk_value(clock_table->FclkClocks_Freq, num_fclk);
+ 
+-	for (i = 0; i < clock_table->NumDcfClkLevelsEnabled; i++) {
++	num_dcfclk = (clock_table->NumFclkLevelsEnabled > NUM_DCFCLK_DPM_LEVELS) ? NUM_DCFCLK_DPM_LEVELS :
++		clock_table->NumDcfClkLevelsEnabled;
++	for (i = 0; i < num_dcfclk; i++) {
+ 		int j;
+ 
+ 		/* First search defaults for the clocks we don't read using closest lower or equal default dcfclk */
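Both clamps above bound loop counts reported by the PMFW clock table before they are used to index fixed-size arrays. A standalone sketch of the clamp-before-iterate pattern (array size and contents invented):

	#include <stdio.h>

	#define NUM_LEVELS 4

	static unsigned int clamp_count(unsigned int reported, unsigned int max)
	{
		return reported > max ? max : reported;
	}

	int main(void)
	{
		unsigned int table[NUM_LEVELS] = { 100, 200, 300, 400 };
		unsigned int reported = 7;	/* firmware claims more than we store */
		unsigned int n = clamp_count(reported, NUM_LEVELS);

		for (unsigned int i = 0; i < n; i++)	/* never runs past the array */
			printf("level %u: %u\n", i, table[i]);
		return 0;
	}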
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile b/drivers/gpu/drm/amd/display/dc/dml/Makefile
+index 6042a5a6a44f8..59ade76ffb18d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile
+@@ -72,11 +72,11 @@ CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_vba.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn10/dcn10_fpu.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/dcn20_fpu.o := $(dml_ccflags)
+-CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20.o := $(dml_ccflags)
++CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20.o := $(dml_ccflags) $(frame_warn_flag)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20.o := $(dml_ccflags)
+-CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20v2.o := $(dml_ccflags)
++CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20v2.o := $(dml_ccflags) $(frame_warn_flag)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20v2.o := $(dml_ccflags)
+-CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_mode_vba_21.o := $(dml_ccflags)
++CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_mode_vba_21.o := $(dml_ccflags) $(frame_warn_flag)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_rq_dlg_calc_21.o := $(dml_ccflags)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_mode_vba_30.o := $(dml_ccflags) $(frame_warn_flag)
+ CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_rq_dlg_calc_30.o := $(dml_ccflags)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index 92e2ddc9ab7e2..fe2b67d745f0d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -1089,7 +1089,7 @@ struct pipe_slice_table {
+ 		struct pipe_ctx *pri_pipe;
+ 		struct dc_plane_state *plane;
+ 		int slice_count;
+-	} mpc_combines[MAX_SURFACES];
++	} mpc_combines[MAX_PLANES];
+ 	int mpc_combine_count;
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index bcc03b184e5ea..2c379be19aa84 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -791,35 +791,28 @@ static void populate_dml_surface_cfg_from_plane_state(enum dml_project_id dml2_p
+ 	}
+ }
+ 
+-/*TODO no support for mpc combine, need rework - should calculate scaling params based on plane+stream*/
+-static struct scaler_data get_scaler_data_for_plane(const struct dc_plane_state *in, const struct dc_state *context)
++static struct scaler_data get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc_state *context)
+ {
+ 	int i;
+-	struct scaler_data data = { 0 };
++	struct pipe_ctx *temp_pipe = &context->res_ctx.temp_pipe;
++
++	memset(temp_pipe, 0, sizeof(struct pipe_ctx));
+ 
+ 	for (i = 0; i < MAX_PIPES; i++)	{
+ 		const struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+ 
+ 		if (pipe->plane_state == in && !pipe->prev_odm_pipe) {
+-			const struct pipe_ctx *next_pipe = pipe->next_odm_pipe;
+-
+-			data = context->res_ctx.pipe_ctx[i].plane_res.scl_data;
+-			while (next_pipe) {
+-				data.h_active += next_pipe->plane_res.scl_data.h_active;
+-				data.recout.width += next_pipe->plane_res.scl_data.recout.width;
+-				if (in->rotation == ROTATION_ANGLE_0 || in->rotation == ROTATION_ANGLE_180) {
+-					data.viewport.width += next_pipe->plane_res.scl_data.viewport.width;
+-				} else {
+-					data.viewport.height += next_pipe->plane_res.scl_data.viewport.height;
+-				}
+-				next_pipe = next_pipe->next_odm_pipe;
+-			}
++			temp_pipe->stream = pipe->stream;
++			temp_pipe->plane_state = pipe->plane_state;
++			temp_pipe->plane_res.scl_data.taps = pipe->plane_res.scl_data.taps;
++
++			resource_build_scaling_params(temp_pipe);
+ 			break;
+ 		}
+ 	}
+ 
+ 	ASSERT(i < MAX_PIPES);
+-	return data;
++	return temp_pipe->plane_res.scl_data;
+ }
+ 
+ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_stream_state *in)
+@@ -864,7 +857,7 @@ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned
+ 	out->ScalerEnabled[location] = false;
+ }
+ 
+-static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_plane_state *in, const struct dc_state *context)
++static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_plane_state *in, struct dc_state *context)
+ {
+ 	const struct scaler_data scaler_data = get_scaler_data_for_plane(in, context);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+index 10397d4dfb075..bc9cda3294a19 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+@@ -462,6 +462,8 @@ struct resource_context {
+ 	unsigned int hpo_dp_link_enc_to_link_idx[MAX_HPO_DP2_LINK_ENCODERS];
+ 	int hpo_dp_link_enc_ref_cnts[MAX_HPO_DP2_LINK_ENCODERS];
+ 	bool is_mpc_3dlut_acquired[MAX_PIPES];
++	/* solely used for building scaler data in dml2 */
++	struct pipe_ctx temp_pipe;
+ };
+ 
+ struct dce_bw_output {
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+index 5a0b045189569..16a62e0187122 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+@@ -517,6 +517,7 @@ enum link_training_result dp_check_link_loss_status(
+ {
+ 	enum link_training_result status = LINK_TRAINING_SUCCESS;
+ 	union lane_status lane_status;
++	union lane_align_status_updated dpcd_lane_status_updated;
+ 	uint8_t dpcd_buf[6] = {0};
+ 	uint32_t lane;
+ 
+@@ -532,10 +533,12 @@ enum link_training_result dp_check_link_loss_status(
+ 		 * check lanes status
+ 		 */
+ 		lane_status.raw = dp_get_nibble_at_index(&dpcd_buf[2], lane);
++		dpcd_lane_status_updated.raw = dpcd_buf[4];
+ 
+ 		if (!lane_status.bits.CHANNEL_EQ_DONE_0 ||
+ 			!lane_status.bits.CR_DONE_0 ||
+-			!lane_status.bits.SYMBOL_LOCKED_0) {
++			!lane_status.bits.SYMBOL_LOCKED_0 ||
++			!dp_is_interlane_aligned(dpcd_lane_status_updated)) {
+ 			/* if one of the channel equalization, clock
+ 			 * recovery or symbol lock is dropped
+ 			 * consider it as (link has been
+diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
+index f57e6d74fb0e0..c1a99bf4dffd1 100644
+--- a/drivers/gpu/drm/drm_buddy.c
++++ b/drivers/gpu/drm/drm_buddy.c
+@@ -539,6 +539,12 @@ static int __alloc_range(struct drm_buddy *mm,
+ 	} while (1);
+ 
+ 	list_splice_tail(&allocated, blocks);
++
++	if (total_allocated < size) {
++		err = -ENOSPC;
++		goto err_free;
++	}
++
+ 	return 0;
+ 
+ err_undo:
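The buddy-allocator fix verifies, after the gather loop, that the blocks collected actually cover the requested size and fails with -ENOSPC instead of returning a short allocation. A toy sketch of that check-the-total-after-a-best-effort-loop shape (nothing here is the DRM buddy API):

	#include <stdio.h>
	#include <errno.h>

	#define CHUNK 16

	/* Pretend each grab hands out at most CHUNK bytes. */
	static unsigned int grab_some(unsigned int avail)
	{
		return avail > CHUNK ? CHUNK : avail;
	}

	static int alloc_range(unsigned int size, unsigned int avail)
	{
		unsigned int total = 0;

		while (total < size && avail) {
			unsigned int got = grab_some(avail);

			total += got;
			avail -= got;
		}

		if (total < size)
			return -ENOSPC;	/* never return a short allocation */
		return 0;
	}

	int main(void)
	{
		printf("full:  %d\n", alloc_range(64, 128));	/* 0 */
		printf("short: %d\n", alloc_range(64, 32));	/* -ENOSPC */
		return 0;
	}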
+diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
+index 834a5e28abbe5..7352bde299d54 100644
+--- a/drivers/gpu/drm/drm_prime.c
++++ b/drivers/gpu/drm/drm_prime.c
+@@ -820,7 +820,7 @@ struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
+ 	if (max_segment == 0)
+ 		max_segment = UINT_MAX;
+ 	err = sg_alloc_table_from_pages_segment(sg, pages, nr_pages, 0,
+-						nr_pages << PAGE_SHIFT,
++						(unsigned long)nr_pages << PAGE_SHIFT,
+ 						max_segment, GFP_KERNEL);
+ 	if (err) {
+ 		kfree(sg);
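The drm_prime one-liner is an integer-promotion fix: nr_pages is an unsigned int, so nr_pages << PAGE_SHIFT is evaluated in 32 bits and wraps once the buffer reaches 4 GiB; casting to unsigned long widens the operand before the shift. A small demonstration (values arbitrary; assumes a 64-bit unsigned long):

	#include <stdio.h>

	#define PAGE_SHIFT 12

	int main(void)
	{
		unsigned int nr_pages = 2u * 1024 * 1024;	/* 8 GiB of 4 KiB pages */

		unsigned long bad  = nr_pages << PAGE_SHIFT;		     /* wraps to 0 */
		unsigned long good = (unsigned long)nr_pages << PAGE_SHIFT; /* 2^33 */

		printf("bad:  %lu\ngood: %lu\n", bad, good);
		return 0;
	}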
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 1710ef24c5117..ef09356bcb44c 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2275,6 +2275,9 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
+ 	limits->min_rate = intel_dp_common_rate(intel_dp, 0);
+ 	limits->max_rate = intel_dp_max_link_rate(intel_dp);
+ 
++	/* FIXME 128b/132b SST support missing */
++	limits->max_rate = min(limits->max_rate, 810000);
++
+ 	limits->min_lane_count = 1;
+ 	limits->max_lane_count = intel_dp_max_lane_count(intel_dp);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_vdsc_regs.h b/drivers/gpu/drm/i915/display/intel_vdsc_regs.h
+index 64f440fdc22b2..8b21dc8e26d52 100644
+--- a/drivers/gpu/drm/i915/display/intel_vdsc_regs.h
++++ b/drivers/gpu/drm/i915/display/intel_vdsc_regs.h
+@@ -51,8 +51,8 @@
+ #define DSCC_PICTURE_PARAMETER_SET_0		_MMIO(0x6BA00)
+ #define _DSCA_PPS_0				0x6B200
+ #define _DSCC_PPS_0				0x6BA00
+-#define DSCA_PPS(pps)				_MMIO(_DSCA_PPS_0 + (pps) * 4)
+-#define DSCC_PPS(pps)				_MMIO(_DSCC_PPS_0 + (pps) * 4)
++#define DSCA_PPS(pps)				_MMIO(_DSCA_PPS_0 + ((pps) < 12 ? (pps) : (pps) + 12) * 4)
++#define DSCC_PPS(pps)				_MMIO(_DSCC_PPS_0 + ((pps) < 12 ? (pps) : (pps) + 12) * 4)
+ #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PB	0x78270
+ #define _ICL_DSC1_PICTURE_PARAMETER_SET_0_PB	0x78370
+ #define _ICL_DSC0_PICTURE_PARAMETER_SET_0_PC	0x78470
+diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
+index 5f68e31a3e4e1..0915f3b68752e 100644
+--- a/drivers/gpu/drm/msm/msm_gem_prime.c
++++ b/drivers/gpu/drm/msm/msm_gem_prime.c
+@@ -26,7 +26,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+ {
+ 	void *vaddr;
+ 
+-	vaddr = msm_gem_get_vaddr(obj);
++	vaddr = msm_gem_get_vaddr_locked(obj);
+ 	if (IS_ERR(vaddr))
+ 		return PTR_ERR(vaddr);
+ 	iosys_map_set_vaddr(map, vaddr);
+@@ -36,7 +36,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+ 
+ void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
+ {
+-	msm_gem_put_vaddr(obj);
++	msm_gem_put_vaddr_locked(obj);
+ }
+ 
+ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
+diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
+index 7f64c66673002..5c10b559a5957 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.c
++++ b/drivers/gpu/drm/msm/msm_gpu.c
+@@ -749,12 +749,14 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 	struct msm_ringbuffer *ring = submit->ring;
+ 	unsigned long flags;
+ 
+-	pm_runtime_get_sync(&gpu->pdev->dev);
++	WARN_ON(!mutex_is_locked(&gpu->lock));
+ 
+-	mutex_lock(&gpu->lock);
++	pm_runtime_get_sync(&gpu->pdev->dev);
+ 
+ 	msm_gpu_hw_init(gpu);
+ 
++	submit->seqno = submit->hw_fence->seqno;
++
+ 	update_sw_cntrs(gpu);
+ 
+ 	/*
+@@ -779,11 +781,8 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ 	gpu->funcs->submit(gpu, submit);
+ 	gpu->cur_ctx_seqno = submit->queue->ctx->seqno;
+ 
+-	hangcheck_timer_reset(gpu);
+-
+-	mutex_unlock(&gpu->lock);
+-
+ 	pm_runtime_put(&gpu->pdev->dev);
++	hangcheck_timer_reset(gpu);
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
+index 5cc8d358cc975..d5512037c38bc 100644
+--- a/drivers/gpu/drm/msm/msm_iommu.c
++++ b/drivers/gpu/drm/msm/msm_iommu.c
+@@ -21,6 +21,8 @@ struct msm_iommu_pagetable {
+ 	struct msm_mmu base;
+ 	struct msm_mmu *parent;
+ 	struct io_pgtable_ops *pgtbl_ops;
++	const struct iommu_flush_ops *tlb;
++	struct device *iommu_dev;
+ 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
+ 	phys_addr_t ttbr;
+ 	u32 asid;
+@@ -201,11 +203,33 @@ static const struct msm_mmu_funcs pagetable_funcs = {
+ 
+ static void msm_iommu_tlb_flush_all(void *cookie)
+ {
++	struct msm_iommu_pagetable *pagetable = cookie;
++	struct adreno_smmu_priv *adreno_smmu;
++
++	if (!pm_runtime_get_if_in_use(pagetable->iommu_dev))
++		return;
++
++	adreno_smmu = dev_get_drvdata(pagetable->parent->dev);
++
++	pagetable->tlb->tlb_flush_all((void *)adreno_smmu->cookie);
++
++	pm_runtime_put_autosuspend(pagetable->iommu_dev);
+ }
+ 
+ static void msm_iommu_tlb_flush_walk(unsigned long iova, size_t size,
+ 		size_t granule, void *cookie)
+ {
++	struct msm_iommu_pagetable *pagetable = cookie;
++	struct adreno_smmu_priv *adreno_smmu;
++
++	if (!pm_runtime_get_if_in_use(pagetable->iommu_dev))
++		return;
++
++	adreno_smmu = dev_get_drvdata(pagetable->parent->dev);
++
++	pagetable->tlb->tlb_flush_walk(iova, size, granule, (void *)adreno_smmu->cookie);
++
++	pm_runtime_put_autosuspend(pagetable->iommu_dev);
+ }
+ 
+ static void msm_iommu_tlb_add_page(struct iommu_iotlb_gather *gather,
+@@ -213,7 +237,7 @@ static void msm_iommu_tlb_add_page(struct iommu_iotlb_gather *gather,
+ {
+ }
+ 
+-static const struct iommu_flush_ops null_tlb_ops = {
++static const struct iommu_flush_ops tlb_ops = {
+ 	.tlb_flush_all = msm_iommu_tlb_flush_all,
+ 	.tlb_flush_walk = msm_iommu_tlb_flush_walk,
+ 	.tlb_add_page = msm_iommu_tlb_add_page,
+@@ -254,10 +278,10 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+ 
+ 	/* The incoming cfg will have the TTBR1 quirk enabled */
+ 	ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
+-	ttbr0_cfg.tlb = &null_tlb_ops;
++	ttbr0_cfg.tlb = &tlb_ops;
+ 
+ 	pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
+-		&ttbr0_cfg, iommu->domain);
++		&ttbr0_cfg, pagetable);
+ 
+ 	if (!pagetable->pgtbl_ops) {
+ 		kfree(pagetable);
+@@ -279,6 +303,8 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+ 
+ 	/* Needed later for TLB flush */
+ 	pagetable->parent = parent;
++	pagetable->tlb = ttbr1_cfg->tlb;
++	pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
+ 	pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap;
+ 	pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr;
+ 
+diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
+index 95257ab0185dc..a7e152f659a2c 100644
+--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
++++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
+@@ -21,8 +21,6 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
+ 
+ 	msm_fence_init(submit->hw_fence, fctx);
+ 
+-	submit->seqno = submit->hw_fence->seqno;
+-
+ 	mutex_lock(&priv->lru.lock);
+ 
+ 	for (i = 0; i < submit->nr_bos; i++) {
+@@ -34,8 +32,13 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
+ 
+ 	mutex_unlock(&priv->lru.lock);
+ 
++	/* TODO move submit path over to using a per-ring lock.. */
++	mutex_lock(&gpu->lock);
++
+ 	msm_gpu_submit(gpu, submit);
+ 
++	mutex_unlock(&gpu->lock);
++
+ 	return dma_fence_get(submit->hw_fence);
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
+index d1437c08645f9..6f5d376d8fcc1 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
+@@ -9,7 +9,7 @@
+ #define GSP_PAGE_SIZE  BIT(GSP_PAGE_SHIFT)
+ 
+ struct nvkm_gsp_mem {
+-	u32 size;
++	size_t size;
+ 	void *data;
+ 	dma_addr_t addr;
+ };
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
+index ca762ea554136..93f08f9479d89 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
+@@ -103,6 +103,7 @@ nouveau_fence_context_kill(struct nouveau_fence_chan *fctx, int error)
+ void
+ nouveau_fence_context_del(struct nouveau_fence_chan *fctx)
+ {
++	cancel_work_sync(&fctx->uevent_work);
+ 	nouveau_fence_context_kill(fctx, 0);
+ 	nvif_event_dtor(&fctx->event);
+ 	fctx->dead = 1;
+@@ -145,12 +146,13 @@ nouveau_fence_update(struct nouveau_channel *chan, struct nouveau_fence_chan *fc
+ 	return drop;
+ }
+ 
+-static int
+-nouveau_fence_wait_uevent_handler(struct nvif_event *event, void *repv, u32 repc)
++static void
++nouveau_fence_uevent_work(struct work_struct *work)
+ {
+-	struct nouveau_fence_chan *fctx = container_of(event, typeof(*fctx), event);
++	struct nouveau_fence_chan *fctx = container_of(work, struct nouveau_fence_chan,
++						       uevent_work);
+ 	unsigned long flags;
+-	int ret = NVIF_EVENT_KEEP;
++	int drop = 0;
+ 
+ 	spin_lock_irqsave(&fctx->lock, flags);
+ 	if (!list_empty(&fctx->pending)) {
+@@ -160,11 +162,20 @@ nouveau_fence_wait_uevent_handler(struct nvif_event *event, void *repv, u32 repc
+ 		fence = list_entry(fctx->pending.next, typeof(*fence), head);
+ 		chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
+ 		if (nouveau_fence_update(chan, fctx))
+-			ret = NVIF_EVENT_DROP;
++			drop = 1;
+ 	}
++	if (drop)
++		nvif_event_block(&fctx->event);
++
+ 	spin_unlock_irqrestore(&fctx->lock, flags);
++}
+ 
+-	return ret;
++static int
++nouveau_fence_wait_uevent_handler(struct nvif_event *event, void *repv, u32 repc)
++{
++	struct nouveau_fence_chan *fctx = container_of(event, typeof(*fctx), event);
++	schedule_work(&fctx->uevent_work);
++	return NVIF_EVENT_KEEP;
+ }
+ 
+ void
+@@ -178,6 +189,7 @@ nouveau_fence_context_new(struct nouveau_channel *chan, struct nouveau_fence_cha
+ 	} args;
+ 	int ret;
+ 
++	INIT_WORK(&fctx->uevent_work, nouveau_fence_uevent_work);
+ 	INIT_LIST_HEAD(&fctx->flip);
+ 	INIT_LIST_HEAD(&fctx->pending);
+ 	spin_lock_init(&fctx->lock);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h b/drivers/gpu/drm/nouveau/nouveau_fence.h
+index 64d33ae7f3561..8bc065acfe358 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
++++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
+@@ -44,6 +44,7 @@ struct nouveau_fence_chan {
+ 	u32 context;
+ 	char name[32];
+ 
++	struct work_struct uevent_work;
+ 	struct nvif_event event;
+ 	int notify_ref, dead, killed;
+ };
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index cc03e0c22ff3f..5e4565c5011a9 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -1011,7 +1011,7 @@ nouveau_svm_fault_buffer_ctor(struct nouveau_svm *svm, s32 oclass, int id)
+ 	if (ret)
+ 		return ret;
+ 
+-	buffer->fault = kvcalloc(sizeof(*buffer->fault), buffer->entries, GFP_KERNEL);
++	buffer->fault = kvcalloc(buffer->entries, sizeof(*buffer->fault), GFP_KERNEL);
+ 	if (!buffer->fault)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index 9ee58e2a0eb2a..6208ddd929645 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -997,6 +997,32 @@ r535_gsp_rpc_get_gsp_static_info(struct nvkm_gsp *gsp)
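The binder hunk wakes (e)poll-based threads explicitly when they queue work to themselves, because nothing else will ever signal the wait queue they sleep on. A loose user-space analogue using epoll and an eventfd; without the write() marked below, epoll_wait() would sleep forever (Linux-only sketch, not binder code):

	#include <stdio.h>
	#include <stdint.h>
	#include <unistd.h>
	#include <sys/epoll.h>
	#include <sys/eventfd.h>

	int main(void)
	{
		int efd = eventfd(0, 0);
		int ep = epoll_create1(0);
		struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
		uint64_t one = 1, val;

		if (efd < 0 || ep < 0 || epoll_ctl(ep, EPOLL_CTL_ADD, efd, &ev))
			return 1;

		/* "Queue work" for ourselves, then signal ourselves; skipping
		 * this write would leave epoll_wait() blocked indefinitely. */
		if (write(efd, &one, sizeof(one)) != sizeof(one))
			return 1;

		epoll_wait(ep, &ev, 1, -1);
		if (read(efd, &val, sizeof(val)) == sizeof(val))
			printf("woken, pending work items: %llu\n",
			       (unsigned long long)val);

		close(ep);
		close(efd);
		return 0;
	}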
+ 	return 0;
+ }
+ 
++static void
++nvkm_gsp_mem_dtor(struct nvkm_gsp *gsp, struct nvkm_gsp_mem *mem)
++{
++	if (mem->data) {
++		/*
++		 * Poison the buffer to catch any unexpected access from
++		 * GSP-RM if the buffer was prematurely freed.
++		 */
++		memset(mem->data, 0xFF, mem->size);
++
++		dma_free_coherent(gsp->subdev.device->dev, mem->size, mem->data, mem->addr);
++		memset(mem, 0, sizeof(*mem));
++	}
++}
++
++static int
++nvkm_gsp_mem_ctor(struct nvkm_gsp *gsp, size_t size, struct nvkm_gsp_mem *mem)
++{
++	mem->size = size;
++	mem->data = dma_alloc_coherent(gsp->subdev.device->dev, size, &mem->addr, GFP_KERNEL);
++	if (WARN_ON(!mem->data))
++		return -ENOMEM;
++
++	return 0;
++}
++
+ static int
+ r535_gsp_postinit(struct nvkm_gsp *gsp)
+ {
+@@ -1024,6 +1050,13 @@ r535_gsp_postinit(struct nvkm_gsp *gsp)
+ 
+ 	nvkm_inth_allow(&gsp->subdev.inth);
+ 	nvkm_wr32(device, 0x110004, 0x00000040);
++
++	/* Release the DMA buffers that were needed only for boot and init */
++	nvkm_gsp_mem_dtor(gsp, &gsp->boot.fw);
++	nvkm_gsp_mem_dtor(gsp, &gsp->libos);
++	nvkm_gsp_mem_dtor(gsp, &gsp->rmargs);
++	nvkm_gsp_mem_dtor(gsp, &gsp->wpr_meta);
++
+ 	return ret;
+ }
+ 
+@@ -1078,7 +1111,6 @@ r535_gsp_rpc_set_registry(struct nvkm_gsp *gsp)
+ 	if (IS_ERR(rpc))
+ 		return PTR_ERR(rpc);
+ 
+-	rpc->size = sizeof(*rpc);
+ 	rpc->numEntries = NV_GSP_REG_NUM_ENTRIES;
+ 
+ 	str_offset = offsetof(typeof(*rpc), entries[NV_GSP_REG_NUM_ENTRIES]);
+@@ -1094,6 +1126,7 @@ r535_gsp_rpc_set_registry(struct nvkm_gsp *gsp)
+ 		strings += name_len;
+ 		str_offset += name_len;
+ 	}
++	rpc->size = str_offset;
+ 
+ 	return nvkm_gsp_rpc_wr(gsp, rpc, false);
+ }
+@@ -1532,27 +1565,6 @@ r535_gsp_msg_run_cpu_sequencer(void *priv, u32 fn, void *repv, u32 repc)
+ 	return 0;
+ }
+ 
+-static void
+-nvkm_gsp_mem_dtor(struct nvkm_gsp *gsp, struct nvkm_gsp_mem *mem)
+-{
+-	if (mem->data) {
+-		dma_free_coherent(gsp->subdev.device->dev, mem->size, mem->data, mem->addr);
+-		mem->data = NULL;
+-	}
+-}
+-
+-static int
+-nvkm_gsp_mem_ctor(struct nvkm_gsp *gsp, u32 size, struct nvkm_gsp_mem *mem)
+-{
+-	mem->size = size;
+-	mem->data = dma_alloc_coherent(gsp->subdev.device->dev, size, &mem->addr, GFP_KERNEL);
+-	if (WARN_ON(!mem->data))
+-		return -ENOMEM;
+-
+-	return 0;
+-}
+-
+-
+ static int
+ r535_gsp_booter_unload(struct nvkm_gsp *gsp, u32 mbox0, u32 mbox1)
+ {
+@@ -2150,6 +2162,11 @@ r535_gsp_dtor(struct nvkm_gsp *gsp)
+ 	mutex_destroy(&gsp->cmdq.mutex);
+ 
+ 	r535_gsp_dtor_fws(gsp);
++
++	nvkm_gsp_mem_dtor(gsp, &gsp->shm.mem);
++	nvkm_gsp_mem_dtor(gsp, &gsp->loginit);
++	nvkm_gsp_mem_dtor(gsp, &gsp->logintr);
++	nvkm_gsp_mem_dtor(gsp, &gsp->logrm);
+ }
+ 
+ int
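The relocated nvkm_gsp_mem_dtor() now fills the DMA buffer with 0xFF before freeing it, so a late access from GSP-RM reads an obviously bogus pattern rather than stale-but-plausible data. A trivial sketch of the poison-then-free idiom:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct buf {
		size_t size;
		unsigned char *data;
	};

	static void buf_dtor(struct buf *b)
	{
		if (b->data) {
			/* Poison first: any late reader sees 0xFF, not old contents. */
			memset(b->data, 0xFF, b->size);
			free(b->data);
			memset(b, 0, sizeof(*b));
		}
	}

	int main(void)
	{
		struct buf b = { .size = 32, .data = malloc(32) };

		buf_dtor(&b);
		printf("data now %p, size %zu\n", (void *)b.data, b.size);
		return 0;
	}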
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
+index f8e9abe647b92..9539aa28937fa 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
+@@ -94,6 +94,7 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
+ 			goto err_free;
+ 	}
+ 
++	dma_set_max_seg_size(dev->dev, dma_max_mapping_size(dev->dev) ?: UINT_MAX);
+ 	ret = virtio_gpu_init(vdev, dev);
+ 	if (ret)
+ 		goto err_free;
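The added virtio-gpu line uses the GNU binary ?: extension: a ?: b evaluates to a unless a is zero, in which case b — here falling back to UINT_MAX when dma_max_mapping_size() reports no limit. A tiny demonstration (needs GCC or Clang, as the kernel does):

	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		unsigned int reported = 0;			/* "no limit known" */
		unsigned int seg = reported ?: UINT_MAX;	/* GNU ?: fallback */

		printf("max segment size: %u\n", seg);
		return 0;
	}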
+diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c
+index d9ef45fcaeab1..7903c8638e817 100644
+--- a/drivers/hid/bpf/hid_bpf_dispatch.c
++++ b/drivers/hid/bpf/hid_bpf_dispatch.c
+@@ -241,6 +241,39 @@ int hid_bpf_reconnect(struct hid_device *hdev)
+ 	return 0;
+ }
+ 
++static int do_hid_bpf_attach_prog(struct hid_device *hdev, int prog_fd, struct bpf_prog *prog,
++				  __u32 flags)
++{
++	int fd, err, prog_type;
++
++	prog_type = hid_bpf_get_prog_attach_type(prog);
++	if (prog_type < 0)
++		return prog_type;
++
++	if (prog_type >= HID_BPF_PROG_TYPE_MAX)
++		return -EINVAL;
++
++	if (prog_type == HID_BPF_PROG_TYPE_DEVICE_EVENT) {
++		err = hid_bpf_allocate_event_data(hdev);
++		if (err)
++			return err;
++	}
++
++	fd = __hid_bpf_attach_prog(hdev, prog_type, prog_fd, prog, flags);
++	if (fd < 0)
++		return fd;
++
++	if (prog_type == HID_BPF_PROG_TYPE_RDESC_FIXUP) {
++		err = hid_bpf_reconnect(hdev);
++		if (err) {
++			close_fd(fd);
++			return err;
++		}
++	}
++
++	return fd;
++}
++
+ /**
+  * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device
+  *
+@@ -257,18 +290,13 @@ noinline int
+ hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags)
+ {
+ 	struct hid_device *hdev;
++	struct bpf_prog *prog;
+ 	struct device *dev;
+-	int fd, err, prog_type = hid_bpf_get_prog_attach_type(prog_fd);
++	int err, fd;
+ 
+ 	if (!hid_bpf_ops)
+ 		return -EINVAL;
+ 
+-	if (prog_type < 0)
+-		return prog_type;
+-
+-	if (prog_type >= HID_BPF_PROG_TYPE_MAX)
+-		return -EINVAL;
+-
+ 	if ((flags & ~HID_BPF_FLAG_MASK))
+ 		return -EINVAL;
+ 
+@@ -278,25 +306,29 @@ hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags)
+ 
+ 	hdev = to_hid_device(dev);
+ 
+-	if (prog_type == HID_BPF_PROG_TYPE_DEVICE_EVENT) {
+-		err = hid_bpf_allocate_event_data(hdev);
+-		if (err)
+-			return err;
++	/*
++	 * take a ref on the prog itself; it will be released
++	 * on errors or when the prog is detached
++	 */
++	prog = bpf_prog_get(prog_fd);
++	if (IS_ERR(prog)) {
++		err = PTR_ERR(prog);
++		goto out_dev_put;
+ 	}
+ 
+-	fd = __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags);
+-	if (fd < 0)
+-		return fd;
+-
+-	if (prog_type == HID_BPF_PROG_TYPE_RDESC_FIXUP) {
+-		err = hid_bpf_reconnect(hdev);
+-		if (err) {
+-			close_fd(fd);
+-			return err;
+-		}
++	fd = do_hid_bpf_attach_prog(hdev, prog_fd, prog, flags);
++	if (fd < 0) {
++		err = fd;
++		goto out_prog_put;
+ 	}
+ 
+ 	return fd;
++
++ out_prog_put:
++	bpf_prog_put(prog);
++ out_dev_put:
++	put_device(dev);
++	return err;
+ }
+ 
+ /**
+@@ -323,8 +355,10 @@ hid_bpf_allocate_context(unsigned int hid_id)
+ 	hdev = to_hid_device(dev);
+ 
+ 	ctx_kern = kzalloc(sizeof(*ctx_kern), GFP_KERNEL);
+-	if (!ctx_kern)
++	if (!ctx_kern) {
++		put_device(dev);
+ 		return NULL;
++	}
+ 
+ 	ctx_kern->ctx.hid = hdev;
+ 
+@@ -341,10 +375,15 @@ noinline void
+ hid_bpf_release_context(struct hid_bpf_ctx *ctx)
+ {
+ 	struct hid_bpf_ctx_kern *ctx_kern;
++	struct hid_device *hid;
+ 
+ 	ctx_kern = container_of(ctx, struct hid_bpf_ctx_kern, ctx);
++	hid = (struct hid_device *)ctx_kern->ctx.hid; /* ignore const */
+ 
+ 	kfree(ctx_kern);
++
++	/* get_device() is called by bus_find_device() */
++	put_device(&hid->dev);
+ }
+ 
+ /**
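The reworked attach path pairs every reference it takes (bpf_prog_get(), plus the get_device() implied by bus_find_device()) with a matching put on each failure exit, keeping both references only while the program remains attached. A compact sketch of balancing get/put across a goto unwind (plain counters stand in for the real refcounts):

	#include <stdio.h>

	static int dev_refs, prog_refs;

	static void get_dev(void)    { dev_refs++; }
	static void put_dev(void)    { dev_refs--; }
	static int  get_prog(int ok) { if (ok) prog_refs++; return ok ? 0 : -1; }
	static void put_prog(void)   { prog_refs--; }

	static int attach(int prog_ok, int attach_ok)
	{
		int err;

		get_dev();
		err = get_prog(prog_ok);
		if (err)
			goto out_dev_put;

		if (!attach_ok) {
			err = -1;
			goto out_prog_put;
		}
		return 0;	/* both refs intentionally held while attached */

	out_prog_put:
		put_prog();
	out_dev_put:
		put_dev();
		return err;
	}

	int main(void)
	{
		attach(0, 0);	/* fails getting the prog */
		attach(1, 0);	/* fails attaching */
		printf("after failures: dev_refs=%d prog_refs=%d\n",
		       dev_refs, prog_refs);	/* both back to 0 */
		return 0;
	}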
+diff --git a/drivers/hid/bpf/hid_bpf_dispatch.h b/drivers/hid/bpf/hid_bpf_dispatch.h
+index 63dfc8605cd21..fbe0639d09f26 100644
+--- a/drivers/hid/bpf/hid_bpf_dispatch.h
++++ b/drivers/hid/bpf/hid_bpf_dispatch.h
+@@ -12,9 +12,9 @@ struct hid_bpf_ctx_kern {
+ 
+ int hid_bpf_preload_skel(void);
+ void hid_bpf_free_links_and_skel(void);
+-int hid_bpf_get_prog_attach_type(int prog_fd);
++int hid_bpf_get_prog_attach_type(struct bpf_prog *prog);
+ int __hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_type, int prog_fd,
+-			  __u32 flags);
++			  struct bpf_prog *prog, __u32 flags);
+ void __hid_bpf_destroy_device(struct hid_device *hdev);
+ int hid_bpf_prog_run(struct hid_device *hdev, enum hid_bpf_prog_type type,
+ 		     struct hid_bpf_ctx_kern *ctx_kern);
+diff --git a/drivers/hid/bpf/hid_bpf_jmp_table.c b/drivers/hid/bpf/hid_bpf_jmp_table.c
+index eca34b7372f95..aa8e1c79cdf55 100644
+--- a/drivers/hid/bpf/hid_bpf_jmp_table.c
++++ b/drivers/hid/bpf/hid_bpf_jmp_table.c
+@@ -196,6 +196,7 @@ static void __hid_bpf_do_release_prog(int map_fd, unsigned int idx)
+ static void hid_bpf_release_progs(struct work_struct *work)
+ {
+ 	int i, j, n, map_fd = -1;
++	bool hdev_destroyed;
+ 
+ 	if (!jmp_table.map)
+ 		return;
+@@ -220,6 +221,12 @@ static void hid_bpf_release_progs(struct work_struct *work)
+ 		if (entry->hdev) {
+ 			hdev = entry->hdev;
+ 			type = entry->type;
++			/*
++			 * hdev is still valid, even if we are called after hid_destroy_device():
++			 * when hid_bpf_attach() gets called, it takes a ref on the dev through
++			 * bus_find_device()
++			 */
++			hdev_destroyed = hdev->bpf.destroyed;
+ 
+ 			hid_bpf_populate_hdev(hdev, type);
+ 
+@@ -232,12 +239,19 @@ static void hid_bpf_release_progs(struct work_struct *work)
+ 				if (test_bit(next->idx, jmp_table.enabled))
+ 					continue;
+ 
+-				if (next->hdev == hdev && next->type == type)
++				if (next->hdev == hdev && next->type == type) {
++					/*
++					 * clear the hdev reference and decrement the device ref
++					 * that was taken during bus_find_device() while calling
++					 * hid_bpf_attach()
++					 */
+ 					next->hdev = NULL;
++					put_device(&hdev->dev);
++				}
+ 			}
+ 
+-			/* if type was rdesc fixup, reconnect device */
+-			if (type == HID_BPF_PROG_TYPE_RDESC_FIXUP)
++			/* if type was rdesc fixup and the device is not gone, reconnect device */
++			if (type == HID_BPF_PROG_TYPE_RDESC_FIXUP && !hdev_destroyed)
+ 				hid_bpf_reconnect(hdev);
+ 		}
+ 	}
+@@ -333,15 +347,10 @@ static int hid_bpf_insert_prog(int prog_fd, struct bpf_prog *prog)
+ 	return err;
+ }
+ 
+-int hid_bpf_get_prog_attach_type(int prog_fd)
++int hid_bpf_get_prog_attach_type(struct bpf_prog *prog)
+ {
+-	struct bpf_prog *prog = NULL;
+-	int i;
+ 	int prog_type = HID_BPF_PROG_TYPE_UNDEF;
+-
+-	prog = bpf_prog_get(prog_fd);
+-	if (IS_ERR(prog))
+-		return PTR_ERR(prog);
++	int i;
+ 
+ 	for (i = 0; i < HID_BPF_PROG_TYPE_MAX; i++) {
+ 		if (hid_bpf_btf_ids[i] == prog->aux->attach_btf_id) {
+@@ -350,8 +359,6 @@ int hid_bpf_get_prog_attach_type(int prog_fd)
+ 		}
+ 	}
+ 
+-	bpf_prog_put(prog);
+-
+ 	return prog_type;
+ }
+ 
+@@ -388,19 +395,13 @@ static const struct bpf_link_ops hid_bpf_link_lops = {
+ /* called from syscall */
+ noinline int
+ __hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_type,
+-		      int prog_fd, __u32 flags)
++		      int prog_fd, struct bpf_prog *prog, __u32 flags)
+ {
+ 	struct bpf_link_primer link_primer;
+ 	struct hid_bpf_link *link;
+-	struct bpf_prog *prog = NULL;
+ 	struct hid_bpf_prog_entry *prog_entry;
+ 	int cnt, err = -EINVAL, prog_table_idx = -1;
+ 
+-	/* take a ref on the prog itself */
+-	prog = bpf_prog_get(prog_fd);
+-	if (IS_ERR(prog))
+-		return PTR_ERR(prog);
+-
+ 	mutex_lock(&hid_bpf_attach_lock);
+ 
+ 	link = kzalloc(sizeof(*link), GFP_USER);
+@@ -467,7 +468,6 @@ __hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_type,
+  err_unlock:
+ 	mutex_unlock(&hid_bpf_attach_lock);
+ 
+-	bpf_prog_put(prog);
+ 	kfree(link);
+ 
+ 	return err;
+diff --git a/drivers/hid/i2c-hid/i2c-hid-of.c b/drivers/hid/i2c-hid/i2c-hid-of.c
+index c4e1fa0273c84..8be4d576da773 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-of.c
++++ b/drivers/hid/i2c-hid/i2c-hid-of.c
+@@ -87,6 +87,7 @@ static int i2c_hid_of_probe(struct i2c_client *client)
+ 	if (!ihid_of)
+ 		return -ENOMEM;
+ 
++	ihid_of->client = client;
+ 	ihid_of->ops.power_up = i2c_hid_of_power_up;
+ 	ihid_of->ops.power_down = i2c_hid_of_power_down;
+ 
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 3f704b8072e8a..7659c98d94292 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2080,7 +2080,7 @@ static int wacom_allocate_inputs(struct wacom *wacom)
+ 	return 0;
+ }
+ 
+-static int wacom_register_inputs(struct wacom *wacom)
++static int wacom_setup_inputs(struct wacom *wacom)
+ {
+ 	struct input_dev *pen_input_dev, *touch_input_dev, *pad_input_dev;
+ 	struct wacom_wac *wacom_wac = &(wacom->wacom_wac);
+@@ -2099,10 +2099,6 @@ static int wacom_register_inputs(struct wacom *wacom)
+ 		input_free_device(pen_input_dev);
+ 		wacom_wac->pen_input = NULL;
+ 		pen_input_dev = NULL;
+-	} else {
+-		error = input_register_device(pen_input_dev);
+-		if (error)
+-			goto fail;
+ 	}
+ 
+ 	error = wacom_setup_touch_input_capabilities(touch_input_dev, wacom_wac);
+@@ -2111,10 +2107,6 @@ static int wacom_register_inputs(struct wacom *wacom)
+ 		input_free_device(touch_input_dev);
+ 		wacom_wac->touch_input = NULL;
+ 		touch_input_dev = NULL;
+-	} else {
+-		error = input_register_device(touch_input_dev);
+-		if (error)
+-			goto fail;
+ 	}
+ 
+ 	error = wacom_setup_pad_input_capabilities(pad_input_dev, wacom_wac);
+@@ -2123,7 +2115,34 @@ static int wacom_register_inputs(struct wacom *wacom)
+ 		input_free_device(pad_input_dev);
+ 		wacom_wac->pad_input = NULL;
+ 		pad_input_dev = NULL;
+-	} else {
++	}
++
++	return 0;
++}
++
++static int wacom_register_inputs(struct wacom *wacom)
++{
++	struct input_dev *pen_input_dev, *touch_input_dev, *pad_input_dev;
++	struct wacom_wac *wacom_wac = &(wacom->wacom_wac);
++	int error = 0;
++
++	pen_input_dev = wacom_wac->pen_input;
++	touch_input_dev = wacom_wac->touch_input;
++	pad_input_dev = wacom_wac->pad_input;
++
++	if (pen_input_dev) {
++		error = input_register_device(pen_input_dev);
++		if (error)
++			goto fail;
++	}
++
++	if (touch_input_dev) {
++		error = input_register_device(touch_input_dev);
++		if (error)
++			goto fail;
++	}
++
++	if (pad_input_dev) {
+ 		error = input_register_device(pad_input_dev);
+ 		if (error)
+ 			goto fail;
+@@ -2376,6 +2395,20 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+ 	if (error)
+ 		goto fail;
+ 
++	error = wacom_setup_inputs(wacom);
++	if (error)
++		goto fail;
++
++	if (features->type == HID_GENERIC)
++		connect_mask |= HID_CONNECT_DRIVER;
++
++	/* Regular HID work starts now */
++	error = hid_hw_start(hdev, connect_mask);
++	if (error) {
++		hid_err(hdev, "hw start failed\n");
++		goto fail;
++	}
++
+ 	error = wacom_register_inputs(wacom);
+ 	if (error)
+ 		goto fail;
+@@ -2390,16 +2423,6 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+ 			goto fail;
+ 	}
+ 
+-	if (features->type == HID_GENERIC)
+-		connect_mask |= HID_CONNECT_DRIVER;
+-
+-	/* Regular HID work starts now */
+-	error = hid_hw_start(hdev, connect_mask);
+-	if (error) {
+-		hid_err(hdev, "hw start failed\n");
+-		goto fail;
+-	}
+-
+ 	if (!wireless) {
+ 		/* Note that if query fails it is not a hard failure */
+ 		wacom_query_tablet_data(wacom);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 8289ce7637044..002cbaa16bd16 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2574,7 +2574,14 @@ static void wacom_wac_pen_report(struct hid_device *hdev,
+ 				wacom_wac->hid_data.tipswitch);
+ 		input_report_key(input, wacom_wac->tool[0], sense);
+ 		if (wacom_wac->serial[0]) {
+-			input_event(input, EV_MSC, MSC_SERIAL, wacom_wac->serial[0]);
++			/*
++			 * xf86-input-wacom does not accept a serial number
++			 * of '0'. Report the low 32 bits if possible, but
++			 * if they are zero, report the upper ones instead.
++			 */
++			__u32 serial_lo = wacom_wac->serial[0] & 0xFFFFFFFFu;
++			__u32 serial_hi = wacom_wac->serial[0] >> 32;
++			input_event(input, EV_MSC, MSC_SERIAL, (int)(serial_lo ? serial_lo : serial_hi));
+ 			input_report_abs(input, ABS_MISC, sense ? id : 0);
+ 		}
+ 
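The wacom hunk never hands xf86-input-wacom a zero serial: it prefers the low 32 bits of the 64-bit serial and falls back to the high bits when the low half is zero. The selection logic, checked standalone:

	#include <stdio.h>
	#include <stdint.h>

	static uint32_t pick_serial(uint64_t serial)
	{
		uint32_t lo = serial & 0xFFFFFFFFu;
		uint32_t hi = serial >> 32;

		return lo ? lo : hi;	/* never report a zero serial */
	}

	int main(void)
	{
		printf("%#x\n", pick_serial(0x00000001deadbeefULL)); /* low bits  */
		printf("%#x\n", pick_serial(0xcafe000000000000ULL)); /* high bits */
		return 0;
	}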
+diff --git a/drivers/i2c/busses/Makefile b/drivers/i2c/busses/Makefile
+index 3757b9391e60a..aa0ee8ecd6f2f 100644
+--- a/drivers/i2c/busses/Makefile
++++ b/drivers/i2c/busses/Makefile
+@@ -90,10 +90,8 @@ obj-$(CONFIG_I2C_NPCM)		+= i2c-npcm7xx.o
+ obj-$(CONFIG_I2C_OCORES)	+= i2c-ocores.o
+ obj-$(CONFIG_I2C_OMAP)		+= i2c-omap.o
+ obj-$(CONFIG_I2C_OWL)		+= i2c-owl.o
+-i2c-pasemi-objs := i2c-pasemi-core.o i2c-pasemi-pci.o
+-obj-$(CONFIG_I2C_PASEMI)	+= i2c-pasemi.o
+-i2c-apple-objs := i2c-pasemi-core.o i2c-pasemi-platform.o
+-obj-$(CONFIG_I2C_APPLE)	+= i2c-apple.o
++obj-$(CONFIG_I2C_PASEMI)	+= i2c-pasemi-core.o i2c-pasemi-pci.o
++obj-$(CONFIG_I2C_APPLE)		+= i2c-pasemi-core.o i2c-pasemi-platform.o
+ obj-$(CONFIG_I2C_PCA_PLATFORM)	+= i2c-pca-platform.o
+ obj-$(CONFIG_I2C_PNX)		+= i2c-pnx.o
+ obj-$(CONFIG_I2C_PXA)		+= i2c-pxa.o
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 070999139c6dc..6a5a93cf4ecc6 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -498,11 +498,10 @@ static int i801_block_transaction_by_block(struct i801_priv *priv,
+ 	/* Set block buffer mode */
+ 	outb_p(inb_p(SMBAUXCTL(priv)) | SMBAUXCTL_E32B, SMBAUXCTL(priv));
+ 
+-	inb_p(SMBHSTCNT(priv)); /* reset the data buffer index */
+-
+ 	if (read_write == I2C_SMBUS_WRITE) {
+ 		len = data->block[0];
+ 		outb_p(len, SMBHSTDAT0(priv));
++		inb_p(SMBHSTCNT(priv));	/* reset the data buffer index */
+ 		for (i = 0; i < len; i++)
+ 			outb_p(data->block[i+1], SMBBLKDAT(priv));
+ 	}
+@@ -520,6 +519,7 @@ static int i801_block_transaction_by_block(struct i801_priv *priv,
+ 		}
+ 
+ 		data->block[0] = len;
++		inb_p(SMBHSTCNT(priv));	/* reset the data buffer index */
+ 		for (i = 0; i < len; i++)
+ 			data->block[i + 1] = inb_p(SMBBLKDAT(priv));
+ 	}
+diff --git a/drivers/i2c/busses/i2c-pasemi-core.c b/drivers/i2c/busses/i2c-pasemi-core.c
+index 7d54a9f34c74b..bd8becbdeeb28 100644
+--- a/drivers/i2c/busses/i2c-pasemi-core.c
++++ b/drivers/i2c/busses/i2c-pasemi-core.c
+@@ -369,6 +369,7 @@ int pasemi_i2c_common_probe(struct pasemi_smbus *smbus)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(pasemi_i2c_common_probe);
+ 
+ irqreturn_t pasemi_irq_handler(int irq, void *dev_id)
+ {
+@@ -378,3 +379,8 @@ irqreturn_t pasemi_irq_handler(int irq, void *dev_id)
+ 	complete(&smbus->irq_completion);
+ 	return IRQ_HANDLED;
+ }
++EXPORT_SYMBOL_GPL(pasemi_irq_handler);
++
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Olof Johansson <olof@lixom.net>");
++MODULE_DESCRIPTION("PA Semi PWRficient SMBus driver");
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 0d2e7171e3a6f..da94df466e83c 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -613,20 +613,20 @@ static int geni_i2c_gpi_xfer(struct geni_i2c_dev *gi2c, struct i2c_msg msgs[], i
+ 
+ 		peripheral.addr = msgs[i].addr;
+ 
++		ret =  geni_i2c_gpi(gi2c, &msgs[i], &config,
++				    &tx_addr, &tx_buf, I2C_WRITE, gi2c->tx_c);
++		if (ret)
++			goto err;
++
+ 		if (msgs[i].flags & I2C_M_RD) {
+ 			ret =  geni_i2c_gpi(gi2c, &msgs[i], &config,
+ 					    &rx_addr, &rx_buf, I2C_READ, gi2c->rx_c);
+ 			if (ret)
+ 				goto err;
+-		}
+-
+-		ret =  geni_i2c_gpi(gi2c, &msgs[i], &config,
+-				    &tx_addr, &tx_buf, I2C_WRITE, gi2c->tx_c);
+-		if (ret)
+-			goto err;
+ 
+-		if (msgs[i].flags & I2C_M_RD)
+ 			dma_async_issue_pending(gi2c->rx_c);
++		}
++
+ 		dma_async_issue_pending(gi2c->tx_c);
+ 
+ 		timeout = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT);
+diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
+index f113dae590483..d2221fe9ad661 100644
+--- a/drivers/iio/accel/Kconfig
++++ b/drivers/iio/accel/Kconfig
+@@ -219,10 +219,12 @@ config BMA400
+ 
+ config BMA400_I2C
+ 	tristate
++	select REGMAP_I2C
+ 	depends on BMA400
+ 
+ config BMA400_SPI
+ 	tristate
++	select REGMAP_SPI
+ 	depends on BMA400
+ 
+ config BMC150_ACCEL
+diff --git a/drivers/iio/adc/ad4130.c b/drivers/iio/adc/ad4130.c
+index feb86fe6c422d..62490424b6aed 100644
+--- a/drivers/iio/adc/ad4130.c
++++ b/drivers/iio/adc/ad4130.c
+@@ -1821,7 +1821,7 @@ static int ad4130_setup_int_clk(struct ad4130_state *st)
+ {
+ 	struct device *dev = &st->spi->dev;
+ 	struct device_node *of_node = dev_of_node(dev);
+-	struct clk_init_data init;
++	struct clk_init_data init = {};
+ 	const char *clk_name;
+ 	int ret;
+ 
+@@ -1891,10 +1891,14 @@ static int ad4130_setup(struct iio_dev *indio_dev)
+ 		return ret;
+ 
+ 	/*
+-	 * Configure all GPIOs for output. If configured, the interrupt function
+-	 * of P2 takes priority over the GPIO out function.
++	 * Configure unused GPIOs for output. If configured, the interrupt
++	 * function of P2 takes priority over the GPIO out function.
+ 	 */
+-	val =  AD4130_IO_CONTROL_GPIO_CTRL_MASK;
++	val = 0;
++	for (i = 0; i < AD4130_MAX_GPIOS; i++)
++		if (st->pins_fn[i + AD4130_AIN2_P1] == AD4130_PIN_FN_NONE)
++			val |= FIELD_PREP(AD4130_IO_CONTROL_GPIO_CTRL_MASK, BIT(i));
++
+ 	val |= FIELD_PREP(AD4130_IO_CONTROL_INT_PIN_SEL_MASK, st->int_pin_sel);
+ 
+ 	ret = regmap_write(st->regmap, AD4130_IO_CONTROL_REG, val);
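
The loop above grants GPIO control only to pins left unassigned, instead of blanket-enabling the whole field. The bit-building reduces to a few lines; a standalone C model (the field position and sizes below are placeholders, not the real register layout):

#include <stdint.h>
#include <stdio.h>

#define MAX_GPIOS       4
#define GPIO_CTRL_SHIFT 8          /* hypothetical field position */
#define PIN_FN_NONE     0

/* Set a control bit only for pins whose function is unassigned. */
static uint32_t gpio_ctrl_bits(const int pin_fn[MAX_GPIOS])
{
	uint32_t val = 0;
	int i;

	for (i = 0; i < MAX_GPIOS; i++)
		if (pin_fn[i] == PIN_FN_NONE)
			val |= (1u << i) << GPIO_CTRL_SHIFT;

	return val;
}

int main(void)
{
	int fn[MAX_GPIOS] = { PIN_FN_NONE, 1, PIN_FN_NONE, 2 };

	printf("0x%08x\n", gpio_ctrl_bits(fn)); /* pins 0 and 2 -> 0x00000500 */
	return 0;
}
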
+diff --git a/drivers/iio/imu/bno055/Kconfig b/drivers/iio/imu/bno055/Kconfig
+index 83e53acfbe880..c7f5866a177d9 100644
+--- a/drivers/iio/imu/bno055/Kconfig
++++ b/drivers/iio/imu/bno055/Kconfig
+@@ -8,6 +8,7 @@ config BOSCH_BNO055
+ config BOSCH_BNO055_SERIAL
+ 	tristate "Bosch BNO055 attached via UART"
+ 	depends on SERIAL_DEV_BUS
++	select REGMAP
+ 	select BOSCH_BNO055
+ 	help
+ 	  Enable this to support Bosch BNO055 IMUs attached via UART.
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index c77745b594bd6..e6d3d07a4c83d 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -1581,10 +1581,13 @@ static int iio_device_register_sysfs(struct iio_dev *indio_dev)
+ 	ret = iio_device_register_sysfs_group(indio_dev,
+ 					      &iio_dev_opaque->chan_attr_group);
+ 	if (ret)
+-		goto error_clear_attrs;
++		goto error_free_chan_attrs;
+ 
+ 	return 0;
+ 
++error_free_chan_attrs:
++	kfree(iio_dev_opaque->chan_attr_group.attrs);
++	iio_dev_opaque->chan_attr_group.attrs = NULL;
+ error_clear_attrs:
+ 	iio_free_chan_devattr_list(&iio_dev_opaque->channel_attr_list);
+ 
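
The new label keeps the usual kernel unwind convention: error labels are ordered so that jumping to one releases exactly what had been allocated before the failing step. Modeled in standalone C:

#include <stdlib.h>
#include <stdio.h>

static int setup_like(int fail_register)
{
	char *attrs = malloc(32);
	char *list = NULL;

	if (!attrs)
		goto err;

	list = malloc(64);
	if (!list)
		goto err_free_attrs;

	if (fail_register)
		goto err_free_list;

	/* success path would hand ownership elsewhere; freed here only
	 * to keep this demo leak-free */
	free(list);
	free(attrs);
	return 0;

err_free_list:
	free(list);
err_free_attrs:
	free(attrs);
err:
	return -1;
}

int main(void)
{
	printf("%d\n", setup_like(1)); /* -1, and nothing leaked */
	return 0;
}
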
+diff --git a/drivers/iio/light/hid-sensor-als.c b/drivers/iio/light/hid-sensor-als.c
+index 5cd27f04b45e6..b6c4bef2a7bb2 100644
+--- a/drivers/iio/light/hid-sensor-als.c
++++ b/drivers/iio/light/hid-sensor-als.c
+@@ -226,6 +226,7 @@ static int als_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 	case HID_USAGE_SENSOR_TIME_TIMESTAMP:
+ 		als_state->timestamp = hid_sensor_convert_timestamp(&als_state->common_attributes,
+ 								    *(s64 *)raw_data);
++		ret = 0;
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/iio/magnetometer/rm3100-core.c b/drivers/iio/magnetometer/rm3100-core.c
+index 69938204456f8..42b70cd42b393 100644
+--- a/drivers/iio/magnetometer/rm3100-core.c
++++ b/drivers/iio/magnetometer/rm3100-core.c
+@@ -530,6 +530,7 @@ int rm3100_common_probe(struct device *dev, struct regmap *regmap, int irq)
+ 	struct rm3100_data *data;
+ 	unsigned int tmp;
+ 	int ret;
++	int samp_rate_index;
+ 
+ 	indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
+ 	if (!indio_dev)
+@@ -586,9 +587,14 @@ int rm3100_common_probe(struct device *dev, struct regmap *regmap, int irq)
+ 	ret = regmap_read(regmap, RM3100_REG_TMRC, &tmp);
+ 	if (ret < 0)
+ 		return ret;
++
++	samp_rate_index = tmp - RM3100_TMRC_OFFSET;
++	if (samp_rate_index < 0 || samp_rate_index >=  RM3100_SAMP_NUM) {
++		dev_err(dev, "The value read from RM3100_REG_TMRC is invalid!\n");
++		return -EINVAL;
++	}
+ 	/* Initializing max wait time, which is double conversion time. */
+-	data->conversion_time = rm3100_samp_rates[tmp - RM3100_TMRC_OFFSET][2]
+-				* 2;
++	data->conversion_time = rm3100_samp_rates[samp_rate_index][2] * 2;
+ 
+ 	/* Cycle count values may not be what we want. */
+ 	if ((tmp - RM3100_TMRC_OFFSET) == 0)
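
The guard added above validates a hardware-supplied index before it is used to walk a fixed table. A standalone C model with placeholder constants (the real offsets and table live in the driver):

#include <stdio.h>

#define TMRC_OFFSET 0x90   /* placeholder, not the real register base */
#define SAMP_NUM    4      /* placeholder table size */

static const int conversion_time_ms[SAMP_NUM] = { 2, 4, 8, 16 };

/* A flaky bus or corrupted register must not index off the end. */
static int max_wait_ms(int tmrc_reg)
{
	int idx = tmrc_reg - TMRC_OFFSET;

	if (idx < 0 || idx >= SAMP_NUM)
		return -1;

	return conversion_time_ms[idx] * 2; /* double the conversion time */
}

int main(void)
{
	printf("%d\n", max_wait_ms(0x91)); /* 8 */
	printf("%d\n", max_wait_ms(0xFF)); /* -1: rejected */
	return 0;
}
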
+diff --git a/drivers/iio/pressure/bmp280-spi.c b/drivers/iio/pressure/bmp280-spi.c
+index 1dff9bb7c4e90..967de99c1bb97 100644
+--- a/drivers/iio/pressure/bmp280-spi.c
++++ b/drivers/iio/pressure/bmp280-spi.c
+@@ -91,6 +91,7 @@ static const struct of_device_id bmp280_of_spi_match[] = {
+ MODULE_DEVICE_TABLE(of, bmp280_of_spi_match);
+ 
+ static const struct spi_device_id bmp280_spi_id[] = {
++	{ "bmp085", (kernel_ulong_t)&bmp180_chip_info },
+ 	{ "bmp180", (kernel_ulong_t)&bmp180_chip_info },
+ 	{ "bmp181", (kernel_ulong_t)&bmp180_chip_info },
+ 	{ "bmp280", (kernel_ulong_t)&bmp280_chip_info },
+diff --git a/drivers/interconnect/qcom/sc8180x.c b/drivers/interconnect/qcom/sc8180x.c
+index 20331e119beb6..03d626776ba17 100644
+--- a/drivers/interconnect/qcom/sc8180x.c
++++ b/drivers/interconnect/qcom/sc8180x.c
+@@ -1372,6 +1372,7 @@ static struct qcom_icc_bcm bcm_mm0 = {
+ 
+ static struct qcom_icc_bcm bcm_co0 = {
+ 	.name = "CO0",
++	.keepalive = true,
+ 	.num_nodes = 1,
+ 	.nodes = { &slv_qns_cdsp_mem_noc }
+ };
+diff --git a/drivers/interconnect/qcom/sm8550.c b/drivers/interconnect/qcom/sm8550.c
+index 629faa4c9aaee..fc22cecf650fc 100644
+--- a/drivers/interconnect/qcom/sm8550.c
++++ b/drivers/interconnect/qcom/sm8550.c
+@@ -2223,6 +2223,7 @@ static struct platform_driver qnoc_driver = {
+ 	.driver = {
+ 		.name = "qnoc-sm8550",
+ 		.of_match_table = qnoc_of_match,
++		.sync_state = icc_sync_state,
+ 	},
+ };
+ 
+diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
+index 5559c943f03f9..2b0b3175cea06 100644
+--- a/drivers/irqchip/irq-brcmstb-l2.c
++++ b/drivers/irqchip/irq-brcmstb-l2.c
+@@ -2,7 +2,7 @@
+ /*
+  * Generic Broadcom Set Top Box Level 2 Interrupt controller driver
+  *
+- * Copyright (C) 2014-2017 Broadcom
++ * Copyright (C) 2014-2024 Broadcom
+  */
+ 
+ #define pr_fmt(fmt)	KBUILD_MODNAME	": " fmt
+@@ -112,6 +112,9 @@ static void brcmstb_l2_intc_irq_handle(struct irq_desc *desc)
+ 		generic_handle_domain_irq(b->domain, irq);
+ 	} while (status);
+ out:
++	/* Don't ack parent before all device writes are done */
++	wmb();
++
+ 	chained_irq_exit(chip, desc);
+ }
+ 
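
The added wmb() ensures the handled device's register writes are posted before the parent controller sees the ack. The same release ordering can be sketched in portable C11, with a fence standing in for wmb() (a loose analogue only; MMIO ordering is stronger than the C11 memory model):

#include <stdatomic.h>
#include <stdio.h>

static int device_reg;          /* stands in for child-device MMIO */
static atomic_int parent_acked; /* stands in for the parent ack */

static void handle_then_ack(void)
{
	device_reg = 42;   /* "device writes" done while handling */

	/* analogue of wmb(): order the writes above before the ack */
	atomic_thread_fence(memory_order_release);

	atomic_store_explicit(&parent_acked, 1, memory_order_relaxed);
}

int main(void)
{
	handle_then_ack();
	if (atomic_load(&parent_acked))
		printf("device_reg=%d\n", device_reg);
	return 0;
}
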
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 9a7a74239eabb..3632c92cd183c 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -207,6 +207,11 @@ static bool require_its_list_vmovp(struct its_vm *vm, struct its_node *its)
+ 	return (gic_rdists->has_rvpeid || vm->vlpi_count[its->list_nr]);
+ }
+ 
++static bool rdists_support_shareable(void)
++{
++	return !(gic_rdists->flags & RDIST_FLAGS_FORCE_NON_SHAREABLE);
++}
++
+ static u16 get_its_list(struct its_vm *vm)
+ {
+ 	struct its_node *its;
+@@ -2710,10 +2715,12 @@ static u64 inherit_vpe_l1_table_from_its(void)
+ 			break;
+ 		}
+ 		val |= FIELD_PREP(GICR_VPROPBASER_4_1_ADDR, addr >> 12);
+-		val |= FIELD_PREP(GICR_VPROPBASER_SHAREABILITY_MASK,
+-				  FIELD_GET(GITS_BASER_SHAREABILITY_MASK, baser));
+-		val |= FIELD_PREP(GICR_VPROPBASER_INNER_CACHEABILITY_MASK,
+-				  FIELD_GET(GITS_BASER_INNER_CACHEABILITY_MASK, baser));
++		if (rdists_support_shareable()) {
++			val |= FIELD_PREP(GICR_VPROPBASER_SHAREABILITY_MASK,
++					  FIELD_GET(GITS_BASER_SHAREABILITY_MASK, baser));
++			val |= FIELD_PREP(GICR_VPROPBASER_INNER_CACHEABILITY_MASK,
++					  FIELD_GET(GITS_BASER_INNER_CACHEABILITY_MASK, baser));
++		}
+ 		val |= FIELD_PREP(GICR_VPROPBASER_4_1_SIZE, GITS_BASER_NR_PAGES(baser) - 1);
+ 
+ 		return val;
+@@ -2936,8 +2943,10 @@ static int allocate_vpe_l1_table(void)
+ 	WARN_ON(!IS_ALIGNED(pa, psz));
+ 
+ 	val |= FIELD_PREP(GICR_VPROPBASER_4_1_ADDR, pa >> 12);
+-	val |= GICR_VPROPBASER_RaWb;
+-	val |= GICR_VPROPBASER_InnerShareable;
++	if (rdists_support_shareable()) {
++		val |= GICR_VPROPBASER_RaWb;
++		val |= GICR_VPROPBASER_InnerShareable;
++	}
+ 	val |= GICR_VPROPBASER_4_1_Z;
+ 	val |= GICR_VPROPBASER_4_1_VALID;
+ 
+@@ -3126,7 +3135,7 @@ static void its_cpu_init_lpis(void)
+ 	gicr_write_propbaser(val, rbase + GICR_PROPBASER);
+ 	tmp = gicr_read_propbaser(rbase + GICR_PROPBASER);
+ 
+-	if (gic_rdists->flags & RDIST_FLAGS_FORCE_NON_SHAREABLE)
++	if (!rdists_support_shareable())
+ 		tmp &= ~GICR_PROPBASER_SHAREABILITY_MASK;
+ 
+ 	if ((tmp ^ val) & GICR_PROPBASER_SHAREABILITY_MASK) {
+@@ -3153,7 +3162,7 @@ static void its_cpu_init_lpis(void)
+ 	gicr_write_pendbaser(val, rbase + GICR_PENDBASER);
+ 	tmp = gicr_read_pendbaser(rbase + GICR_PENDBASER);
+ 
+-	if (gic_rdists->flags & RDIST_FLAGS_FORCE_NON_SHAREABLE)
++	if (!rdists_support_shareable())
+ 		tmp &= ~GICR_PENDBASER_SHAREABILITY_MASK;
+ 
+ 	if (!(tmp & GICR_PENDBASER_SHAREABILITY_MASK)) {
+@@ -3817,8 +3826,9 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ 				bool force)
+ {
+ 	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+-	int from, cpu = cpumask_first(mask_val);
++	struct cpumask common, *table_mask;
+ 	unsigned long flags;
++	int from, cpu;
+ 
+ 	/*
+ 	 * Changing affinity is mega expensive, so let's be as lazy as
+@@ -3834,19 +3844,22 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ 	 * taken on any vLPI handling path that evaluates vpe->col_idx.
+ 	 */
+ 	from = vpe_to_cpuid_lock(vpe, &flags);
+-	if (from == cpu)
+-		goto out;
+-
+-	vpe->col_idx = cpu;
++	table_mask = gic_data_rdist_cpu(from)->vpe_table_mask;
+ 
+ 	/*
+-	 * GICv4.1 allows us to skip VMOVP if moving to a cpu whose RD
+-	 * is sharing its VPE table with the current one.
++	 * If we are offered another CPU in the same GICv4.1 ITS
++	 * affinity, pick this one. Otherwise, any CPU will do.
+ 	 */
+-	if (gic_data_rdist_cpu(cpu)->vpe_table_mask &&
+-	    cpumask_test_cpu(from, gic_data_rdist_cpu(cpu)->vpe_table_mask))
++	if (table_mask && cpumask_and(&common, mask_val, table_mask))
++		cpu = cpumask_test_cpu(from, &common) ? from : cpumask_first(&common);
++	else
++		cpu = cpumask_first(mask_val);
++
++	if (from == cpu)
+ 		goto out;
+ 
++	vpe->col_idx = cpu;
++
+ 	its_send_vmovp(vpe);
+ 	its_vpe_db_proxy_move(vpe, from, cpu);
+ 
+@@ -3880,14 +3893,18 @@ static void its_vpe_schedule(struct its_vpe *vpe)
+ 	val  = virt_to_phys(page_address(vpe->its_vm->vprop_page)) &
+ 		GENMASK_ULL(51, 12);
+ 	val |= (LPI_NRBITS - 1) & GICR_VPROPBASER_IDBITS_MASK;
+-	val |= GICR_VPROPBASER_RaWb;
+-	val |= GICR_VPROPBASER_InnerShareable;
++	if (rdists_support_shareable()) {
++		val |= GICR_VPROPBASER_RaWb;
++		val |= GICR_VPROPBASER_InnerShareable;
++	}
+ 	gicr_write_vpropbaser(val, vlpi_base + GICR_VPROPBASER);
+ 
+ 	val  = virt_to_phys(page_address(vpe->vpt_page)) &
+ 		GENMASK_ULL(51, 16);
+-	val |= GICR_VPENDBASER_RaWaWb;
+-	val |= GICR_VPENDBASER_InnerShareable;
++	if (rdists_support_shareable()) {
++		val |= GICR_VPENDBASER_RaWaWb;
++		val |= GICR_VPENDBASER_InnerShareable;
++	}
+ 	/*
+ 	 * There is no good way of finding out if the pending table is
+ 	 * empty as we can race against the doorbell interrupt very
+@@ -5078,6 +5095,8 @@ static int __init its_probe_one(struct its_node *its)
+ 	u32 ctlr;
+ 	int err;
+ 
++	its_enable_quirks(its);
++
+ 	if (is_v4(its)) {
+ 		if (!(its->typer & GITS_TYPER_VMOVP)) {
+ 			err = its_compute_its_list_map(its);
+@@ -5429,7 +5448,6 @@ static int __init its_of_probe(struct device_node *node)
+ 		if (!its)
+ 			return -ENOMEM;
+ 
+-		its_enable_quirks(its);
+ 		err = its_probe_one(its);
+ 		if (err)  {
+ 			its_node_destroy(its);
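
The reworked affinity code above prefers a CPU that already shares the source's VPE table, so a VMOVP can be skipped whenever the requested mask permits it. The selection collapses to a small pure function over bitmasks; a standalone model using single-word masks in place of struct cpumask:

#include <stdio.h>

static int first_cpu(unsigned long mask)
{
	return mask ? __builtin_ctzl(mask) : -1;
}

/* Stay on 'from' if it is in the intersection of the requested mask
 * and the table-sharing mask; else take the first shared CPU; else
 * fall back to the first requested CPU. */
static int pick_cpu(int from, unsigned long requested, unsigned long table_mask)
{
	unsigned long common = requested & table_mask;

	if (common)
		return (common & (1UL << from)) ? from : first_cpu(common);
	return first_cpu(requested);
}

int main(void)
{
	printf("%d\n", pick_cpu(2, 0x0a, 0x0c)); /* {1,3} vs {2,3} -> 3 */
	printf("%d\n", pick_cpu(2, 0x0c, 0x0c)); /* from is shared -> keep 2 */
	printf("%d\n", pick_cpu(2, 0x01, 0x0c)); /* no overlap -> 0 */
	return 0;
}
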
+diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c
+index 1623cd7791752..b3736bdd4b9f2 100644
+--- a/drivers/irqchip/irq-loongson-eiointc.c
++++ b/drivers/irqchip/irq-loongson-eiointc.c
+@@ -241,7 +241,7 @@ static int eiointc_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ 	int ret;
+ 	unsigned int i, type;
+ 	unsigned long hwirq = 0;
+-	struct eiointc *priv = domain->host_data;
++	struct eiointc_priv *priv = domain->host_data;
+ 
+ 	ret = irq_domain_translate_onecell(domain, arg, &hwirq, &type);
+ 	if (ret)
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index 095b9b49aa825..e6757a30dccad 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -22,6 +22,8 @@
+ #include "dm-ima.h"
+ 
+ #define DM_RESERVED_MAX_IOS		1024
++#define DM_MAX_TARGETS			1048576
++#define DM_MAX_TARGET_PARAMS		1024
+ 
+ struct dm_io;
+ 
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 2ae8560b6a14a..08681f4265205 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -73,10 +73,8 @@ struct dm_crypt_io {
+ 	struct bio *base_bio;
+ 	u8 *integrity_metadata;
+ 	bool integrity_metadata_from_pool:1;
+-	bool in_tasklet:1;
+ 
+ 	struct work_struct work;
+-	struct tasklet_struct tasklet;
+ 
+ 	struct convert_context ctx;
+ 
+@@ -1762,7 +1760,6 @@ static void crypt_io_init(struct dm_crypt_io *io, struct crypt_config *cc,
+ 	io->ctx.r.req = NULL;
+ 	io->integrity_metadata = NULL;
+ 	io->integrity_metadata_from_pool = false;
+-	io->in_tasklet = false;
+ 	atomic_set(&io->io_pending, 0);
+ }
+ 
+@@ -1771,13 +1768,6 @@ static void crypt_inc_pending(struct dm_crypt_io *io)
+ 	atomic_inc(&io->io_pending);
+ }
+ 
+-static void kcryptd_io_bio_endio(struct work_struct *work)
+-{
+-	struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work);
+-
+-	bio_endio(io->base_bio);
+-}
+-
+ /*
+  * One of the bios was finished. Check for completion of
+  * the whole request and correctly clean up the buffer.
+@@ -1801,20 +1791,6 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
+ 
+ 	base_bio->bi_status = error;
+ 
+-	/*
+-	 * If we are running this function from our tasklet,
+-	 * we can't call bio_endio() here, because it will call
+-	 * clone_endio() from dm.c, which in turn will
+-	 * free the current struct dm_crypt_io structure with
+-	 * our tasklet. In this case we need to delay bio_endio()
+-	 * execution to after the tasklet is done and dequeued.
+-	 */
+-	if (io->in_tasklet) {
+-		INIT_WORK(&io->work, kcryptd_io_bio_endio);
+-		queue_work(cc->io_queue, &io->work);
+-		return;
+-	}
+-
+ 	bio_endio(base_bio);
+ }
+ 
+@@ -2246,11 +2222,6 @@ static void kcryptd_crypt(struct work_struct *work)
+ 		kcryptd_crypt_write_convert(io);
+ }
+ 
+-static void kcryptd_crypt_tasklet(unsigned long work)
+-{
+-	kcryptd_crypt((struct work_struct *)work);
+-}
+-
+ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
+ {
+ 	struct crypt_config *cc = io->cc;
+@@ -2262,15 +2233,10 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
+ 		 * irqs_disabled(): the kernel may run some IO completion from the idle thread, but
+ 		 * it is being executed with irqs disabled.
+ 		 */
+-		if (in_hardirq() || irqs_disabled()) {
+-			io->in_tasklet = true;
+-			tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
+-			tasklet_schedule(&io->tasklet);
++		if (!(in_hardirq() || irqs_disabled())) {
++			kcryptd_crypt(&io->work);
+ 			return;
+ 		}
+-
+-		kcryptd_crypt(&io->work);
+-		return;
+ 	}
+ 
+ 	INIT_WORK(&io->work, kcryptd_crypt);
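
With the tasklet path gone, the queueing decision above is simply: run the crypto inline when the calling context may sleep, otherwise defer to the workqueue. A standalone model of that branch (the surrounding no-workqueue flag test is abstracted into a parameter; context probes are stand-ins):

#include <stdbool.h>
#include <stdio.h>

static bool in_hardirq_ctx, irqs_disabled_ctx; /* stand-ins */

static void do_crypt(void)    { puts("crypt inline"); }
static void queue_crypt(void) { puts("crypt deferred to workqueue"); }

static void kcryptd_queue_like(bool skip_workqueue)
{
	if (skip_workqueue && !(in_hardirq_ctx || irqs_disabled_ctx)) {
		do_crypt();   /* safe to run right here */
		return;
	}
	queue_crypt();        /* atomic context: must defer */
}

int main(void)
{
	kcryptd_queue_like(true);  /* inline */
	in_hardirq_ctx = true;
	kcryptd_queue_like(true);  /* deferred */
	return 0;
}
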
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index e65058e0ed06a..3b1ad7127cb84 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1941,7 +1941,8 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
+ 			   minimum_data_size - sizeof(param_kernel->version)))
+ 		return -EFAULT;
+ 
+-	if (param_kernel->data_size < minimum_data_size) {
++	if (unlikely(param_kernel->data_size < minimum_data_size) ||
++	    unlikely(param_kernel->data_size > DM_MAX_TARGETS * DM_MAX_TARGET_PARAMS)) {
+ 		DMERR("Invalid data size in the ioctl structure: %u",
+ 		      param_kernel->data_size);
+ 		return -EINVAL;
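
The second guard above bounds the allocation that follows: the user-supplied size must cover the fixed header and may not exceed the worst-case table payload. A standalone model using the limits defined in the dm-core.h hunk (the minimum is a stand-in for the real header size):

#include <stdint.h>
#include <stdio.h>

#define MIN_DATA_SIZE     312u       /* stand-in for the ioctl header size */
#define MAX_TARGETS       1048576u
#define MAX_TARGET_PARAMS 1024u

static int validate_data_size(uint32_t data_size)
{
	if (data_size < MIN_DATA_SIZE)
		return -1; /* cannot even hold the header */
	if ((uint64_t)data_size > (uint64_t)MAX_TARGETS * MAX_TARGET_PARAMS)
		return -1; /* larger than any legal table */
	return 0;
}

int main(void)
{
	printf("%d\n", validate_data_size(16));          /* too small */
	printf("%d\n", validate_data_size(4096));        /* ok */
	printf("%d\n", validate_data_size(0xFFFFFFFFu)); /* too large */
	return 0;
}
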
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 198d38b53322c..08c9c20f9c663 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -129,7 +129,12 @@ static int alloc_targets(struct dm_table *t, unsigned int num)
+ int dm_table_create(struct dm_table **result, blk_mode_t mode,
+ 		    unsigned int num_targets, struct mapped_device *md)
+ {
+-	struct dm_table *t = kzalloc(sizeof(*t), GFP_KERNEL);
++	struct dm_table *t;
++
++	if (num_targets > DM_MAX_TARGETS)
++		return -EOVERFLOW;
++
++	t = kzalloc(sizeof(*t), GFP_KERNEL);
+ 
+ 	if (!t)
+ 		return -ENOMEM;
+@@ -144,7 +149,7 @@ int dm_table_create(struct dm_table **result, blk_mode_t mode,
+ 
+ 	if (!num_targets) {
+ 		kfree(t);
+-		return -ENOMEM;
++		return -EOVERFLOW;
+ 	}
+ 
+ 	if (alloc_targets(t, num_targets)) {
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 14e58ae705218..82662f5769c4a 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -645,23 +645,6 @@ static void verity_work(struct work_struct *w)
+ 	verity_finish_io(io, errno_to_blk_status(verity_verify_io(io)));
+ }
+ 
+-static void verity_tasklet(unsigned long data)
+-{
+-	struct dm_verity_io *io = (struct dm_verity_io *)data;
+-	int err;
+-
+-	io->in_tasklet = true;
+-	err = verity_verify_io(io);
+-	if (err == -EAGAIN || err == -ENOMEM) {
+-		/* fallback to retrying with work-queue */
+-		INIT_WORK(&io->work, verity_work);
+-		queue_work(io->v->verify_wq, &io->work);
+-		return;
+-	}
+-
+-	verity_finish_io(io, errno_to_blk_status(err));
+-}
+-
+ static void verity_end_io(struct bio *bio)
+ {
+ 	struct dm_verity_io *io = bio->bi_private;
+@@ -674,13 +657,8 @@ static void verity_end_io(struct bio *bio)
+ 		return;
+ 	}
+ 
+-	if (static_branch_unlikely(&use_tasklet_enabled) && io->v->use_tasklet) {
+-		tasklet_init(&io->tasklet, verity_tasklet, (unsigned long)io);
+-		tasklet_schedule(&io->tasklet);
+-	} else {
+-		INIT_WORK(&io->work, verity_work);
+-		queue_work(io->v->verify_wq, &io->work);
+-	}
++	INIT_WORK(&io->work, verity_work);
++	queue_work(io->v->verify_wq, &io->work);
+ }
+ 
+ /*
+diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
+index f9d522c870e61..f3f6070084196 100644
+--- a/drivers/md/dm-verity.h
++++ b/drivers/md/dm-verity.h
+@@ -83,7 +83,6 @@ struct dm_verity_io {
+ 	struct bvec_iter iter;
+ 
+ 	struct work_struct work;
+-	struct tasklet_struct tasklet;
+ 
+ 	/*
+ 	 * Three variably-size fields follow this struct:
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 21b04607d53a9..d243c196c5980 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1037,9 +1037,10 @@ void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
+ 		return;
+ 
+ 	bio = bio_alloc_bioset(rdev->meta_bdev ? rdev->meta_bdev : rdev->bdev,
+-			       1,
+-			       REQ_OP_WRITE | REQ_SYNC | REQ_PREFLUSH | REQ_FUA,
+-			       GFP_NOIO, &mddev->sync_set);
++			      1,
++			      REQ_OP_WRITE | REQ_SYNC | REQ_IDLE | REQ_META
++				  | REQ_PREFLUSH | REQ_FUA,
++			      GFP_NOIO, &mddev->sync_set);
+ 
+ 	atomic_inc(&rdev->nr_pending);
+ 
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+index f96f821a7b50d..acc559652d6eb 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+@@ -559,7 +559,7 @@ static int rkisp1_probe(struct platform_device *pdev)
+ 				rkisp1->irqs[il] = irq;
+ 		}
+ 
+-		ret = devm_request_irq(dev, irq, info->isrs[i].isr, 0,
++		ret = devm_request_irq(dev, irq, info->isrs[i].isr, IRQF_SHARED,
+ 				       dev_driver_string(dev), dev);
+ 		if (ret) {
+ 			dev_err(dev, "request irq failed: %d\n", ret);
+diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
+index fe17c7f98e810..52d82cbe7685f 100644
+--- a/drivers/media/rc/bpf-lirc.c
++++ b/drivers/media/rc/bpf-lirc.c
+@@ -253,7 +253,7 @@ int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	if (attr->attach_flags)
+ 		return -EINVAL;
+ 
+-	rcdev = rc_dev_get_from_fd(attr->target_fd);
++	rcdev = rc_dev_get_from_fd(attr->target_fd, true);
+ 	if (IS_ERR(rcdev))
+ 		return PTR_ERR(rcdev);
+ 
+@@ -278,7 +278,7 @@ int lirc_prog_detach(const union bpf_attr *attr)
+ 	if (IS_ERR(prog))
+ 		return PTR_ERR(prog);
+ 
+-	rcdev = rc_dev_get_from_fd(attr->target_fd);
++	rcdev = rc_dev_get_from_fd(attr->target_fd, true);
+ 	if (IS_ERR(rcdev)) {
+ 		bpf_prog_put(prog);
+ 		return PTR_ERR(rcdev);
+@@ -303,7 +303,7 @@ int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
+ 	if (attr->query.query_flags)
+ 		return -EINVAL;
+ 
+-	rcdev = rc_dev_get_from_fd(attr->query.target_fd);
++	rcdev = rc_dev_get_from_fd(attr->query.target_fd, false);
+ 	if (IS_ERR(rcdev))
+ 		return PTR_ERR(rcdev);
+ 
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 1968067092594..69e630d85262f 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -332,6 +332,7 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 			    sizeof(COMMAND_SMODE_EXIT), STATE_COMMAND_NO_RESP);
+ 	if (err) {
+ 		dev_err(irtoy->dev, "exit sample mode: %d\n", err);
++		kfree(buf);
+ 		return err;
+ 	}
+ 
+@@ -339,6 +340,7 @@ static int irtoy_tx(struct rc_dev *rc, uint *txbuf, uint count)
+ 			    sizeof(COMMAND_SMODE_ENTER), STATE_COMMAND);
+ 	if (err) {
+ 		dev_err(irtoy->dev, "enter sample mode: %d\n", err);
++		kfree(buf);
+ 		return err;
+ 	}
+ 
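
Both hunks plug the same class of leak: an early error return that skips the buffer release at the end of the function. One common way to make such paths leak-proof by construction is a single exit label; a standalone sketch of that alternative shape:

#include <stdlib.h>

static int tx_like(size_t count, int fail_step)
{
	int err = 0;
	unsigned char *buf = malloc(count);

	if (!buf)
		return -1;

	/* ... command sequence: every failure funnels through out ... */
	if (fail_step) {
		err = -2;
		goto out;
	}

out:
	free(buf);	/* exactly one release on every path */
	return err;
}

int main(void)
{
	return tx_like(16, 1) ? 0 : 1;
}
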
+diff --git a/drivers/media/rc/lirc_dev.c b/drivers/media/rc/lirc_dev.c
+index a537734832c50..caad59f76793f 100644
+--- a/drivers/media/rc/lirc_dev.c
++++ b/drivers/media/rc/lirc_dev.c
+@@ -814,7 +814,7 @@ void __exit lirc_dev_exit(void)
+ 	unregister_chrdev_region(lirc_base_dev, RC_DEV_MAX);
+ }
+ 
+-struct rc_dev *rc_dev_get_from_fd(int fd)
++struct rc_dev *rc_dev_get_from_fd(int fd, bool write)
+ {
+ 	struct fd f = fdget(fd);
+ 	struct lirc_fh *fh;
+@@ -828,6 +828,9 @@ struct rc_dev *rc_dev_get_from_fd(int fd)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
++	if (write && !(f.file->f_mode & FMODE_WRITE))
++		return ERR_PTR(-EPERM);
++
+ 	fh = f.file->private_data;
+ 	dev = fh->rc;
+ 
+diff --git a/drivers/media/rc/rc-core-priv.h b/drivers/media/rc/rc-core-priv.h
+index ef1e95e1af7fc..7df949fc65e2b 100644
+--- a/drivers/media/rc/rc-core-priv.h
++++ b/drivers/media/rc/rc-core-priv.h
+@@ -325,7 +325,7 @@ void lirc_raw_event(struct rc_dev *dev, struct ir_raw_event ev);
+ void lirc_scancode_event(struct rc_dev *dev, struct lirc_scancode *lsc);
+ int lirc_register(struct rc_dev *dev);
+ void lirc_unregister(struct rc_dev *dev);
+-struct rc_dev *rc_dev_get_from_fd(int fd);
++struct rc_dev *rc_dev_get_from_fd(int fd, bool write);
+ #else
+ static inline int lirc_dev_init(void) { return 0; }
+ static inline void lirc_dev_exit(void) {}
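
The new bool gates state-changing callers (attach/detach) on a writable fd while leaving queries available read-only, as the bpf-lirc hunks above show. The rule itself is tiny; a standalone model:

#include <stdbool.h>
#include <stdio.h>

struct fdesc { bool writable; };

static int dev_get_like(const struct fdesc *f, bool need_write)
{
	if (need_write && !f->writable)
		return -1; /* -EPERM in the kernel */
	return 0;
}

int main(void)
{
	struct fdesc ro = { .writable = false };

	printf("attach: %d\n", dev_get_like(&ro, true));  /* denied */
	printf("query:  %d\n", dev_get_like(&ro, false)); /* allowed */
	return 0;
}
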
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 1c6c62a7f7f55..03319a1fa97fd 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -2191,7 +2191,7 @@ static int fastrpc_cb_remove(struct platform_device *pdev)
+ 	int i;
+ 
+ 	spin_lock_irqsave(&cctx->lock, flags);
+-	for (i = 1; i < FASTRPC_MAX_SESSIONS; i++) {
++	for (i = 0; i < FASTRPC_MAX_SESSIONS; i++) {
+ 		if (cctx->session[i].sid == sess->sid) {
+ 			cctx->session[i].valid = false;
+ 			cctx->sesscount--;
+diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c
+index 2a2d949a9344e..39f45c2b6de8a 100644
+--- a/drivers/mmc/core/slot-gpio.c
++++ b/drivers/mmc/core/slot-gpio.c
+@@ -75,11 +75,15 @@ EXPORT_SYMBOL(mmc_gpio_set_cd_irq);
+ int mmc_gpio_get_ro(struct mmc_host *host)
+ {
+ 	struct mmc_gpio *ctx = host->slot.handler_priv;
++	int cansleep;
+ 
+ 	if (!ctx || !ctx->ro_gpio)
+ 		return -ENOSYS;
+ 
+-	return gpiod_get_value_cansleep(ctx->ro_gpio);
++	cansleep = gpiod_cansleep(ctx->ro_gpio);
++	return cansleep ?
++		gpiod_get_value_cansleep(ctx->ro_gpio) :
++		gpiod_get_value(ctx->ro_gpio);
+ }
+ EXPORT_SYMBOL(mmc_gpio_get_ro);
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 7bfee28116af1..d4a02184784a3 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -693,6 +693,35 @@ static int sdhci_pci_o2_init_sd_express(struct mmc_host *mmc, struct mmc_ios *io
+ 	return 0;
+ }
+ 
++static void sdhci_pci_o2_set_power(struct sdhci_host *host, unsigned char mode,  unsigned short vdd)
++{
++	struct sdhci_pci_chip *chip;
++	struct sdhci_pci_slot *slot = sdhci_priv(host);
++	u32 scratch_32 = 0;
++	u8 scratch_8 = 0;
++
++	chip = slot->chip;
++
++	if (mode == MMC_POWER_OFF) {
++		/* UnLock WP */
++		pci_read_config_byte(chip->pdev, O2_SD_LOCK_WP, &scratch_8);
++		scratch_8 &= 0x7f;
++		pci_write_config_byte(chip->pdev, O2_SD_LOCK_WP, scratch_8);
++
++		/* Set PCR 0x354[16] to switch Clock Source back to OPE Clock */
++		pci_read_config_dword(chip->pdev, O2_SD_OUTPUT_CLK_SOURCE_SWITCH, &scratch_32);
++		scratch_32 &= ~(O2_SD_SEL_DLL);
++		pci_write_config_dword(chip->pdev, O2_SD_OUTPUT_CLK_SOURCE_SWITCH, scratch_32);
++
++		/* Lock WP */
++		pci_read_config_byte(chip->pdev, O2_SD_LOCK_WP, &scratch_8);
++		scratch_8 |= 0x80;
++		pci_write_config_byte(chip->pdev, O2_SD_LOCK_WP, scratch_8);
++	}
++
++	sdhci_set_power(host, mode, vdd);
++}
++
+ static int sdhci_pci_o2_probe_slot(struct sdhci_pci_slot *slot)
+ {
+ 	struct sdhci_pci_chip *chip;
+@@ -1051,6 +1080,7 @@ static const struct sdhci_ops sdhci_pci_o2_ops = {
+ 	.set_bus_width = sdhci_set_bus_width,
+ 	.reset = sdhci_reset,
+ 	.set_uhs_signaling = sdhci_set_uhs_signaling,
++	.set_power = sdhci_pci_o2_set_power,
+ };
+ 
+ const struct sdhci_pci_fixes sdhci_o2 = {
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 8e6cc0e133b7f..6cf7f364704e8 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1819,6 +1819,8 @@ void bond_xdp_set_features(struct net_device *bond_dev)
+ 	bond_for_each_slave(bond, slave, iter)
+ 		val &= slave->dev->xdp_features;
+ 
++	val &= ~NETDEV_XDP_ACT_XSK_ZEROCOPY;
++
+ 	xdp_set_features_flag(bond_dev, val);
+ }
+ 
+@@ -5934,9 +5936,6 @@ void bond_setup(struct net_device *bond_dev)
+ 	if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP)
+ 		bond_dev->features |= BOND_XFRM_FEATURES;
+ #endif /* CONFIG_XFRM_OFFLOAD */
+-
+-	if (bond_xdp_check(bond))
+-		bond_dev->xdp_features = NETDEV_XDP_ACT_MASK;
+ }
+ 
+ /* Destroy a bonding device.
+diff --git a/drivers/net/can/dev/netlink.c b/drivers/net/can/dev/netlink.c
+index 036d85ef07f5b..dfdc039d92a6c 100644
+--- a/drivers/net/can/dev/netlink.c
++++ b/drivers/net/can/dev/netlink.c
+@@ -346,7 +346,7 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ 			/* Neither of TDC parameters nor TDC flags are
+ 			 * provided: do calculation
+ 			 */
+-			can_calc_tdco(&priv->tdc, priv->tdc_const, &priv->data_bittiming,
++			can_calc_tdco(&priv->tdc, priv->tdc_const, &dbt,
+ 				      &priv->ctrlmode, priv->ctrlmode_supported);
+ 		} /* else: both CAN_CTRLMODE_TDC_{AUTO,MANUAL} are explicitly
+ 		   * turned off. TDC is disabled: do nothing
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 07a22c74fe810..dd2cbffcfe29c 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3545,7 +3545,7 @@ static int mv88e6xxx_mdio_read_c45(struct mii_bus *bus, int phy, int devad,
+ 	int err;
+ 
+ 	if (!chip->info->ops->phy_read_c45)
+-		return -EOPNOTSUPP;
++		return 0xffff;
+ 
+ 	mv88e6xxx_reg_lock(chip);
+ 	err = chip->info->ops->phy_read_c45(chip, bus, phy, devad, reg, &val);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 2bd7b29fb2516..d9716bcec81bb 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5361,7 +5361,7 @@ static int i40e_pf_wait_queues_disabled(struct i40e_pf *pf)
+ {
+ 	int v, ret = 0;
+ 
+-	for (v = 0; v < pf->hw.func_caps.num_vsis; v++) {
++	for (v = 0; v < pf->num_alloc_vsi; v++) {
+ 		if (pf->vsi[v]) {
+ 			ret = i40e_vsi_wait_queues_disabled(pf->vsi[v]);
+ 			if (ret)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 7db89b2945107..3d8a23d3352e2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -2850,6 +2850,24 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)
+ 				      (u8 *)&stats, sizeof(stats));
+ }
+ 
++/**
++ * i40e_can_vf_change_mac
++ * @vf: pointer to the VF info
++ *
++ * Return true if the VF is allowed to change its MAC filters, false otherwise
++ */
++static bool i40e_can_vf_change_mac(struct i40e_vf *vf)
++{
++	/* If the VF MAC address has been set administratively (via the
++	 * ndo_set_vf_mac command), then deny permission to the VF to
++	 * add/delete unicast MAC addresses, unless the VF is trusted
++	 */
++	if (vf->pf_set_mac && !vf->trusted)
++		return false;
++
++	return true;
++}
++
+ #define I40E_MAX_MACVLAN_PER_HW 3072
+ #define I40E_MAX_MACVLAN_PER_PF(num_ports) (I40E_MAX_MACVLAN_PER_HW /	\
+ 	(num_ports))
+@@ -2909,8 +2927,8 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		 * The VF may request to set the MAC address filter already
+ 		 * assigned to it so do not return an error in that case.
+ 		 */
+-		if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) &&
+-		    !is_multicast_ether_addr(addr) && vf->pf_set_mac &&
++		if (!i40e_can_vf_change_mac(vf) &&
++		    !is_multicast_ether_addr(addr) &&
+ 		    !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+ 			dev_err(&pf->pdev->dev,
+ 				"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
+@@ -3116,19 +3134,29 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ 			ret = -EINVAL;
+ 			goto error_param;
+ 		}
+-		if (ether_addr_equal(al->list[i].addr, vf->default_lan_addr.addr))
+-			was_unimac_deleted = true;
+ 	}
+ 	vsi = pf->vsi[vf->lan_vsi_idx];
+ 
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+ 	/* delete addresses from the list */
+-	for (i = 0; i < al->num_elements; i++)
++	for (i = 0; i < al->num_elements; i++) {
++		const u8 *addr = al->list[i].addr;
++
++		/* Allow to delete VF primary MAC only if it was not set
++		 * administratively by PF or if VF is trusted.
++		 */
++		if (ether_addr_equal(addr, vf->default_lan_addr.addr) &&
++		    i40e_can_vf_change_mac(vf))
++			was_unimac_deleted = true;
++		else
++			continue;
++
+ 		if (i40e_del_mac_filter(vsi, al->list[i].addr)) {
+ 			ret = -EINVAL;
+ 			spin_unlock_bh(&vsi->mac_filter_hash_lock);
+ 			goto error_param;
+ 		}
++	}
+ 
+ 	spin_unlock_bh(&vsi->mac_filter_hash_lock);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
+index 2cd81bb32c66b..8ce5c8bcda1c7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
+@@ -374,7 +374,7 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev)
+ 	struct mlx5_dpll *mdpll = auxiliary_get_drvdata(adev);
+ 	struct mlx5_core_dev *mdev = mdpll->mdev;
+ 
+-	cancel_delayed_work(&mdpll->work);
++	cancel_delayed_work_sync(&mdpll->work);
+ 	mlx5_dpll_mdev_netdev_untrack(mdpll, mdev);
+ 	destroy_workqueue(mdpll->wq);
+ 	dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_lag.c b/drivers/net/ethernet/microchip/lan966x/lan966x_lag.c
+index 41fa2523d91d3..5f2cd9a8cf8fb 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_lag.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_lag.c
+@@ -37,19 +37,24 @@ static void lan966x_lag_set_aggr_pgids(struct lan966x *lan966x)
+ 
+ 	/* Now, set PGIDs for each active LAG */
+ 	for (lag = 0; lag < lan966x->num_phys_ports; ++lag) {
+-		struct net_device *bond = lan966x->ports[lag]->bond;
++		struct lan966x_port *port = lan966x->ports[lag];
+ 		int num_active_ports = 0;
++		struct net_device *bond;
+ 		unsigned long bond_mask;
+ 		u8 aggr_idx[16];
+ 
+-		if (!bond || (visited & BIT(lag)))
++		if (!port || !port->bond || (visited & BIT(lag)))
+ 			continue;
+ 
++		bond = port->bond;
+ 		bond_mask = lan966x_lag_get_mask(lan966x, bond);
+ 
+ 		for_each_set_bit(p, &bond_mask, lan966x->num_phys_ports) {
+ 			struct lan966x_port *port = lan966x->ports[p];
+ 
++			if (!port)
++				continue;
++
+ 			lan_wr(ANA_PGID_PGID_SET(bond_mask),
+ 			       lan966x, ANA_PGID(p));
+ 			if (port->lag_tx_active)
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
+index 2967bab725056..15180538b80a1 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
+@@ -1424,10 +1424,30 @@ static void nfp_nft_ct_translate_mangle_action(struct flow_action_entry *mangle_
+ 		mangle_action->mangle.mask = (__force u32)cpu_to_be32(mangle_action->mangle.mask);
+ 		return;
+ 
++	/* Both struct tcphdr and struct udphdr start with
++	 *	__be16 source;
++	 *	__be16 dest;
++	 * so we can use the same code for both.
++	 */
+ 	case FLOW_ACT_MANGLE_HDR_TYPE_TCP:
+ 	case FLOW_ACT_MANGLE_HDR_TYPE_UDP:
+-		mangle_action->mangle.val = (__force u16)cpu_to_be16(mangle_action->mangle.val);
+-		mangle_action->mangle.mask = (__force u16)cpu_to_be16(mangle_action->mangle.mask);
++		if (mangle_action->mangle.offset == offsetof(struct tcphdr, source)) {
++			mangle_action->mangle.val =
++				(__force u32)cpu_to_be32(mangle_action->mangle.val << 16);
++			/* The mask of mangle action is inverse mask,
++			 * so clear the dest tp port with 0xFFFF to
++			 * instead of rotate-left operation.
++			 */
++			mangle_action->mangle.mask =
++				(__force u32)cpu_to_be32(mangle_action->mangle.mask << 16 | 0xFFFF);
++		}
++		if (mangle_action->mangle.offset == offsetof(struct tcphdr, dest)) {
++			mangle_action->mangle.offset = 0;
++			mangle_action->mangle.val =
++				(__force u32)cpu_to_be32(mangle_action->mangle.val);
++			mangle_action->mangle.mask =
++				(__force u32)cpu_to_be32(mangle_action->mangle.mask);
++		}
+ 		return;
+ 
+ 	default:
+@@ -1864,10 +1884,30 @@ int nfp_fl_ct_handle_post_ct(struct nfp_flower_priv *priv,
+ {
+ 	struct flow_rule *rule = flow_cls_offload_flow_rule(flow);
+ 	struct nfp_fl_ct_flow_entry *ct_entry;
++	struct flow_action_entry *ct_goto;
+ 	struct nfp_fl_ct_zone_entry *zt;
++	struct flow_action_entry *act;
+ 	bool wildcarded = false;
+ 	struct flow_match_ct ct;
+-	struct flow_action_entry *ct_goto;
++	int i;
++
++	flow_action_for_each(i, act, &rule->action) {
++		switch (act->id) {
++		case FLOW_ACTION_REDIRECT:
++		case FLOW_ACTION_REDIRECT_INGRESS:
++		case FLOW_ACTION_MIRRED:
++		case FLOW_ACTION_MIRRED_INGRESS:
++			if (act->dev->rtnl_link_ops &&
++			    !strcmp(act->dev->rtnl_link_ops->kind, "openvswitch")) {
++				NL_SET_ERR_MSG_MOD(extack,
++						   "unsupported offload: out port is openvswitch internal port");
++				return -EOPNOTSUPP;
++			}
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	flow_rule_match_ct(rule, &ct);
+ 	if (!ct.mask->ct_zone) {
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index e522845c7c211..0d7d138d6e0d7 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -1084,7 +1084,7 @@ nfp_tunnel_add_shared_mac(struct nfp_app *app, struct net_device *netdev,
+ 	u16 nfp_mac_idx = 0;
+ 
+ 	entry = nfp_tunnel_lookup_offloaded_macs(app, netdev->dev_addr);
+-	if (entry && nfp_tunnel_is_mac_idx_global(entry->index)) {
++	if (entry && (nfp_tunnel_is_mac_idx_global(entry->index) || netif_is_lag_port(netdev))) {
+ 		if (entry->bridge_count ||
+ 		    !nfp_flower_is_supported_bridge(netdev)) {
+ 			nfp_tunnel_offloaded_macs_inc_ref_and_link(entry,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index de0a5d5ded305..f2085340a1cfe 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -2588,6 +2588,7 @@ static void nfp_net_netdev_init(struct nfp_net *nn)
+ 	case NFP_NFD_VER_NFD3:
+ 		netdev->netdev_ops = &nfp_nfd3_netdev_ops;
+ 		netdev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
++		netdev->xdp_features |= NETDEV_XDP_ACT_REDIRECT;
+ 		break;
+ 	case NFP_NFD_VER_NFDK:
+ 		netdev->netdev_ops = &nfp_nfdk_netdev_ops;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
+index 33b4c28563162..3f10c5365c80e 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
+@@ -537,11 +537,13 @@ static int enable_bars(struct nfp6000_pcie *nfp, u16 interface)
+ 	const u32 barcfg_msix_general =
+ 		NFP_PCIE_BAR_PCIE2CPP_MapType(
+ 			NFP_PCIE_BAR_PCIE2CPP_MapType_GENERAL) |
+-		NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT;
++		NFP_PCIE_BAR_PCIE2CPP_LengthSelect(
++			NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT);
+ 	const u32 barcfg_msix_xpb =
+ 		NFP_PCIE_BAR_PCIE2CPP_MapType(
+ 			NFP_PCIE_BAR_PCIE2CPP_MapType_BULK) |
+-		NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT |
++		NFP_PCIE_BAR_PCIE2CPP_LengthSelect(
++			NFP_PCIE_BAR_PCIE2CPP_LengthSelect_32BIT) |
+ 		NFP_PCIE_BAR_PCIE2CPP_Target_BaseAddress(
+ 			NFP_CPP_TARGET_ISLAND_XPB);
+ 	const u32 barcfg_explicit[4] = {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index 588e44d57f291..c75c64a938394 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -59,28 +59,51 @@
+ #undef FRAME_FILTER_DEBUG
+ /* #define FRAME_FILTER_DEBUG */
+ 
++struct stmmac_q_tx_stats {
++	u64_stats_t tx_bytes;
++	u64_stats_t tx_set_ic_bit;
++	u64_stats_t tx_tso_frames;
++	u64_stats_t tx_tso_nfrags;
++};
++
++struct stmmac_napi_tx_stats {
++	u64_stats_t tx_packets;
++	u64_stats_t tx_pkt_n;
++	u64_stats_t poll;
++	u64_stats_t tx_clean;
++	u64_stats_t tx_set_ic_bit;
++};
++
+ struct stmmac_txq_stats {
+-	u64 tx_bytes;
+-	u64 tx_packets;
+-	u64 tx_pkt_n;
+-	u64 tx_normal_irq_n;
+-	u64 napi_poll;
+-	u64 tx_clean;
+-	u64 tx_set_ic_bit;
+-	u64 tx_tso_frames;
+-	u64 tx_tso_nfrags;
+-	struct u64_stats_sync syncp;
++	/* Updates protected by tx queue lock. */
++	struct u64_stats_sync q_syncp;
++	struct stmmac_q_tx_stats q;
++
++	/* Updates protected by NAPI poll logic. */
++	struct u64_stats_sync napi_syncp;
++	struct stmmac_napi_tx_stats napi;
+ } ____cacheline_aligned_in_smp;
+ 
++struct stmmac_napi_rx_stats {
++	u64_stats_t rx_bytes;
++	u64_stats_t rx_packets;
++	u64_stats_t rx_pkt_n;
++	u64_stats_t poll;
++};
++
+ struct stmmac_rxq_stats {
+-	u64 rx_bytes;
+-	u64 rx_packets;
+-	u64 rx_pkt_n;
+-	u64 rx_normal_irq_n;
+-	u64 napi_poll;
+-	struct u64_stats_sync syncp;
++	/* Updates protected by NAPI poll logic. */
++	struct u64_stats_sync napi_syncp;
++	struct stmmac_napi_rx_stats napi;
+ } ____cacheline_aligned_in_smp;
+ 
++/* Updates on each CPU protected by not allowing nested irqs. */
++struct stmmac_pcpu_stats {
++	struct u64_stats_sync syncp;
++	u64_stats_t rx_normal_irq_n[MTL_MAX_TX_QUEUES];
++	u64_stats_t tx_normal_irq_n[MTL_MAX_RX_QUEUES];
++};
++
+ /* Extra statistic and debug information exposed by ethtool */
+ struct stmmac_extra_stats {
+ 	/* Transmit errors */
+@@ -205,6 +228,7 @@ struct stmmac_extra_stats {
+ 	/* per queue statistics */
+ 	struct stmmac_txq_stats txq_stats[MTL_MAX_TX_QUEUES];
+ 	struct stmmac_rxq_stats rxq_stats[MTL_MAX_RX_QUEUES];
++	struct stmmac_pcpu_stats __percpu *pcpu_stats;
+ 	unsigned long rx_dropped;
+ 	unsigned long rx_errors;
+ 	unsigned long tx_dropped;
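
The restructuring above groups counters by the single context that writes them, so each group needs exactly one u64_stats_sync and writers no longer disable IRQs just to bump statistics. The begin/retry protocol readers rely on is a seqcount; a simplified single-file C11 model of that pattern (the kernel primitives additionally handle 32-bit platforms and lockdep):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct stats {
	atomic_uint seq;      /* odd while a write is in progress */
	uint64_t rx_packets;
	uint64_t rx_bytes;
};

static void write_stats(struct stats *s, uint64_t pkts, uint64_t bytes)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_acquire);
	s->rx_packets += pkts;
	s->rx_bytes += bytes;
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);
}

static void read_stats(struct stats *s, uint64_t *pkts, uint64_t *bytes)
{
	unsigned int start;

	do {
		while ((start = atomic_load_explicit(&s->seq,
						     memory_order_acquire)) & 1)
			; /* writer in progress: wait */
		*pkts = s->rx_packets;
		*bytes = s->rx_bytes;
	} while (atomic_load_explicit(&s->seq, memory_order_acquire) != start);
}

int main(void)
{
	struct stats s = { 0 };
	uint64_t p, b;

	write_stats(&s, 3, 1500);
	read_stats(&s, &p, &b);
	printf("%llu pkts, %llu bytes\n",
	       (unsigned long long)p, (unsigned long long)b);
	return 0;
}
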
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 137741b94122e..b21d99faa2d04 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -441,8 +441,7 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
+ 				     struct stmmac_extra_stats *x, u32 chan,
+ 				     u32 dir)
+ {
+-	struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+-	struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++	struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ 	int ret = 0;
+ 	u32 v;
+ 
+@@ -455,9 +454,9 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
+ 
+ 	if (v & EMAC_TX_INT) {
+ 		ret |= handle_tx;
+-		u64_stats_update_begin(&txq_stats->syncp);
+-		txq_stats->tx_normal_irq_n++;
+-		u64_stats_update_end(&txq_stats->syncp);
++		u64_stats_update_begin(&stats->syncp);
++		u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++		u64_stats_update_end(&stats->syncp);
+ 	}
+ 
+ 	if (v & EMAC_TX_DMA_STOP_INT)
+@@ -479,9 +478,9 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
+ 
+ 	if (v & EMAC_RX_INT) {
+ 		ret |= handle_rx;
+-		u64_stats_update_begin(&rxq_stats->syncp);
+-		rxq_stats->rx_normal_irq_n++;
+-		u64_stats_update_end(&rxq_stats->syncp);
++		u64_stats_update_begin(&stats->syncp);
++		u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++		u64_stats_update_end(&stats->syncp);
+ 	}
+ 
+ 	if (v & EMAC_RX_BUF_UA_INT)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+index 9470d3fd2dede..0d185e54eb7e2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+@@ -171,8 +171,7 @@ int dwmac4_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
+ 	const struct dwmac4_addrs *dwmac4_addrs = priv->plat->dwmac4_addrs;
+ 	u32 intr_status = readl(ioaddr + DMA_CHAN_STATUS(dwmac4_addrs, chan));
+ 	u32 intr_en = readl(ioaddr + DMA_CHAN_INTR_ENA(dwmac4_addrs, chan));
+-	struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+-	struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++	struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ 	int ret = 0;
+ 
+ 	if (dir == DMA_DIR_RX)
+@@ -201,15 +200,15 @@ int dwmac4_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
+ 	}
+ 	/* TX/RX NORMAL interrupts */
+ 	if (likely(intr_status & DMA_CHAN_STATUS_RI)) {
+-		u64_stats_update_begin(&rxq_stats->syncp);
+-		rxq_stats->rx_normal_irq_n++;
+-		u64_stats_update_end(&rxq_stats->syncp);
++		u64_stats_update_begin(&stats->syncp);
++		u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++		u64_stats_update_end(&stats->syncp);
+ 		ret |= handle_rx;
+ 	}
+ 	if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
+-		u64_stats_update_begin(&txq_stats->syncp);
+-		txq_stats->tx_normal_irq_n++;
+-		u64_stats_update_end(&txq_stats->syncp);
++		u64_stats_update_begin(&stats->syncp);
++		u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++		u64_stats_update_end(&stats->syncp);
+ 		ret |= handle_tx;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
+index 7907d62d34375..85e18f9a22f92 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
+@@ -162,8 +162,7 @@ static void show_rx_process_state(unsigned int status)
+ int dwmac_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
+ 			struct stmmac_extra_stats *x, u32 chan, u32 dir)
+ {
+-	struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+-	struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++	struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ 	int ret = 0;
+ 	/* read the status register (CSR5) */
+ 	u32 intr_status = readl(ioaddr + DMA_STATUS);
+@@ -215,16 +214,16 @@ int dwmac_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
+ 			u32 value = readl(ioaddr + DMA_INTR_ENA);
+ 			/* to schedule NAPI on real RIE event. */
+ 			if (likely(value & DMA_INTR_ENA_RIE)) {
+-				u64_stats_update_begin(&rxq_stats->syncp);
+-				rxq_stats->rx_normal_irq_n++;
+-				u64_stats_update_end(&rxq_stats->syncp);
++				u64_stats_update_begin(&stats->syncp);
++				u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++				u64_stats_update_end(&stats->syncp);
+ 				ret |= handle_rx;
+ 			}
+ 		}
+ 		if (likely(intr_status & DMA_STATUS_TI)) {
+-			u64_stats_update_begin(&txq_stats->syncp);
+-			txq_stats->tx_normal_irq_n++;
+-			u64_stats_update_end(&txq_stats->syncp);
++			u64_stats_update_begin(&stats->syncp);
++			u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++			u64_stats_update_end(&stats->syncp);
+ 			ret |= handle_tx;
+ 		}
+ 		if (unlikely(intr_status & DMA_STATUS_ERI))
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+index 3cde695fec91b..dd2ab6185c40e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -337,8 +337,7 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
+ 				  struct stmmac_extra_stats *x, u32 chan,
+ 				  u32 dir)
+ {
+-	struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+-	struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++	struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ 	u32 intr_status = readl(ioaddr + XGMAC_DMA_CH_STATUS(chan));
+ 	u32 intr_en = readl(ioaddr + XGMAC_DMA_CH_INT_EN(chan));
+ 	int ret = 0;
+@@ -367,15 +366,15 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
+ 	/* TX/RX NORMAL interrupts */
+ 	if (likely(intr_status & XGMAC_NIS)) {
+ 		if (likely(intr_status & XGMAC_RI)) {
+-			u64_stats_update_begin(&rxq_stats->syncp);
+-			rxq_stats->rx_normal_irq_n++;
+-			u64_stats_update_end(&rxq_stats->syncp);
++			u64_stats_update_begin(&stats->syncp);
++			u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++			u64_stats_update_end(&stats->syncp);
+ 			ret |= handle_rx;
+ 		}
+ 		if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) {
+-			u64_stats_update_begin(&txq_stats->syncp);
+-			txq_stats->tx_normal_irq_n++;
+-			u64_stats_update_end(&txq_stats->syncp);
++			u64_stats_update_begin(&stats->syncp);
++			u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++			u64_stats_update_end(&stats->syncp);
+ 			ret |= handle_tx;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index 67bc98f991351..f9c1f3f4a17a5 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -539,44 +539,79 @@ stmmac_set_pauseparam(struct net_device *netdev,
+ 	}
+ }
+ 
++static u64 stmmac_get_rx_normal_irq_n(struct stmmac_priv *priv, int q)
++{
++	u64 total;
++	int cpu;
++
++	total = 0;
++	for_each_possible_cpu(cpu) {
++		struct stmmac_pcpu_stats *pcpu;
++		unsigned int start;
++		u64 irq_n;
++
++		pcpu = per_cpu_ptr(priv->xstats.pcpu_stats, cpu);
++		do {
++			start = u64_stats_fetch_begin(&pcpu->syncp);
++			irq_n = u64_stats_read(&pcpu->rx_normal_irq_n[q]);
++		} while (u64_stats_fetch_retry(&pcpu->syncp, start));
++		total += irq_n;
++	}
++	return total;
++}
++
++static u64 stmmac_get_tx_normal_irq_n(struct stmmac_priv *priv, int q)
++{
++	u64 total;
++	int cpu;
++
++	total = 0;
++	for_each_possible_cpu(cpu) {
++		struct stmmac_pcpu_stats *pcpu;
++		unsigned int start;
++		u64 irq_n;
++
++		pcpu = per_cpu_ptr(priv->xstats.pcpu_stats, cpu);
++		do {
++			start = u64_stats_fetch_begin(&pcpu->syncp);
++			irq_n = u64_stats_read(&pcpu->tx_normal_irq_n[q]);
++		} while (u64_stats_fetch_retry(&pcpu->syncp, start));
++		total += irq_n;
++	}
++	return total;
++}
++
+ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
+ {
+ 	u32 tx_cnt = priv->plat->tx_queues_to_use;
+ 	u32 rx_cnt = priv->plat->rx_queues_to_use;
+ 	unsigned int start;
+-	int q, stat;
+-	char *p;
++	int q;
+ 
+ 	for (q = 0; q < tx_cnt; q++) {
+ 		struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
+-		struct stmmac_txq_stats snapshot;
++		u64 pkt_n;
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&txq_stats->syncp);
+-			snapshot = *txq_stats;
+-		} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
++			start = u64_stats_fetch_begin(&txq_stats->napi_syncp);
++			pkt_n = u64_stats_read(&txq_stats->napi.tx_pkt_n);
++		} while (u64_stats_fetch_retry(&txq_stats->napi_syncp, start));
+ 
+-		p = (char *)&snapshot + offsetof(struct stmmac_txq_stats, tx_pkt_n);
+-		for (stat = 0; stat < STMMAC_TXQ_STATS; stat++) {
+-			*data++ = (*(u64 *)p);
+-			p += sizeof(u64);
+-		}
++		*data++ = pkt_n;
++		*data++ = stmmac_get_tx_normal_irq_n(priv, q);
+ 	}
+ 
+ 	for (q = 0; q < rx_cnt; q++) {
+ 		struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
+-		struct stmmac_rxq_stats snapshot;
++		u64 pkt_n;
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&rxq_stats->syncp);
+-			snapshot = *rxq_stats;
+-		} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
++			start = u64_stats_fetch_begin(&rxq_stats->napi_syncp);
++			pkt_n = u64_stats_read(&rxq_stats->napi.rx_pkt_n);
++		} while (u64_stats_fetch_retry(&rxq_stats->napi_syncp, start));
+ 
+-		p = (char *)&snapshot + offsetof(struct stmmac_rxq_stats, rx_pkt_n);
+-		for (stat = 0; stat < STMMAC_RXQ_STATS; stat++) {
+-			*data++ = (*(u64 *)p);
+-			p += sizeof(u64);
+-		}
++		*data++ = pkt_n;
++		*data++ = stmmac_get_rx_normal_irq_n(priv, q);
+ 	}
+ }
+ 
+@@ -635,39 +670,49 @@ static void stmmac_get_ethtool_stats(struct net_device *dev,
+ 	pos = j;
+ 	for (i = 0; i < rx_queues_count; i++) {
+ 		struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[i];
+-		struct stmmac_rxq_stats snapshot;
++		struct stmmac_napi_rx_stats snapshot;
++		u64 n_irq;
+ 
+ 		j = pos;
+ 		do {
+-			start = u64_stats_fetch_begin(&rxq_stats->syncp);
+-			snapshot = *rxq_stats;
+-		} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+-
+-		data[j++] += snapshot.rx_pkt_n;
+-		data[j++] += snapshot.rx_normal_irq_n;
+-		normal_irq_n += snapshot.rx_normal_irq_n;
+-		napi_poll += snapshot.napi_poll;
++			start = u64_stats_fetch_begin(&rxq_stats->napi_syncp);
++			snapshot = rxq_stats->napi;
++		} while (u64_stats_fetch_retry(&rxq_stats->napi_syncp, start));
++
++		data[j++] += u64_stats_read(&snapshot.rx_pkt_n);
++		n_irq = stmmac_get_rx_normal_irq_n(priv, i);
++		data[j++] += n_irq;
++		normal_irq_n += n_irq;
++		napi_poll += u64_stats_read(&snapshot.poll);
+ 	}
+ 
+ 	pos = j;
+ 	for (i = 0; i < tx_queues_count; i++) {
+ 		struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[i];
+-		struct stmmac_txq_stats snapshot;
++		struct stmmac_napi_tx_stats napi_snapshot;
++		struct stmmac_q_tx_stats q_snapshot;
++		u64 n_irq;
+ 
+ 		j = pos;
+ 		do {
+-			start = u64_stats_fetch_begin(&txq_stats->syncp);
+-			snapshot = *txq_stats;
+-		} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+-
+-		data[j++] += snapshot.tx_pkt_n;
+-		data[j++] += snapshot.tx_normal_irq_n;
+-		normal_irq_n += snapshot.tx_normal_irq_n;
+-		data[j++] += snapshot.tx_clean;
+-		data[j++] += snapshot.tx_set_ic_bit;
+-		data[j++] += snapshot.tx_tso_frames;
+-		data[j++] += snapshot.tx_tso_nfrags;
+-		napi_poll += snapshot.napi_poll;
++			start = u64_stats_fetch_begin(&txq_stats->q_syncp);
++			q_snapshot = txq_stats->q;
++		} while (u64_stats_fetch_retry(&txq_stats->q_syncp, start));
++		do {
++			start = u64_stats_fetch_begin(&txq_stats->napi_syncp);
++			napi_snapshot = txq_stats->napi;
++		} while (u64_stats_fetch_retry(&txq_stats->napi_syncp, start));
++
++		data[j++] += u64_stats_read(&napi_snapshot.tx_pkt_n);
++		n_irq = stmmac_get_tx_normal_irq_n(priv, i);
++		data[j++] += n_irq;
++		normal_irq_n += n_irq;
++		data[j++] += u64_stats_read(&napi_snapshot.tx_clean);
++		data[j++] += u64_stats_read(&q_snapshot.tx_set_ic_bit) +
++			u64_stats_read(&napi_snapshot.tx_set_ic_bit);
++		data[j++] += u64_stats_read(&q_snapshot.tx_tso_frames);
++		data[j++] += u64_stats_read(&q_snapshot.tx_tso_nfrags);
++		napi_poll += u64_stats_read(&napi_snapshot.poll);
+ 	}
+ 	normal_irq_n += priv->xstats.rx_early_irq;
+ 	data[j++] = normal_irq_n;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index d094c3c1e2ee7..ec34768e054da 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2442,7 +2442,6 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+ 	struct xdp_desc xdp_desc;
+ 	bool work_done = true;
+ 	u32 tx_set_ic_bit = 0;
+-	unsigned long flags;
+ 
+ 	/* Avoids TX time-out as we are sharing with slow path */
+ 	txq_trans_cond_update(nq);
+@@ -2515,9 +2514,9 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+ 		tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size);
+ 		entry = tx_q->cur_tx;
+ 	}
+-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+-	txq_stats->tx_set_ic_bit += tx_set_ic_bit;
+-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++	u64_stats_update_begin(&txq_stats->napi_syncp);
++	u64_stats_add(&txq_stats->napi.tx_set_ic_bit, tx_set_ic_bit);
++	u64_stats_update_end(&txq_stats->napi_syncp);
+ 
+ 	if (tx_desc) {
+ 		stmmac_flush_tx_descriptors(priv, queue);
+@@ -2565,7 +2564,6 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue,
+ 	unsigned int bytes_compl = 0, pkts_compl = 0;
+ 	unsigned int entry, xmits = 0, count = 0;
+ 	u32 tx_packets = 0, tx_errors = 0;
+-	unsigned long flags;
+ 
+ 	__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
+ 
+@@ -2721,11 +2719,11 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue,
+ 	if (tx_q->dirty_tx != tx_q->cur_tx)
+ 		*pending_packets = true;
+ 
+-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+-	txq_stats->tx_packets += tx_packets;
+-	txq_stats->tx_pkt_n += tx_packets;
+-	txq_stats->tx_clean++;
+-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++	u64_stats_update_begin(&txq_stats->napi_syncp);
++	u64_stats_add(&txq_stats->napi.tx_packets, tx_packets);
++	u64_stats_add(&txq_stats->napi.tx_pkt_n, tx_packets);
++	u64_stats_inc(&txq_stats->napi.tx_clean);
++	u64_stats_update_end(&txq_stats->napi_syncp);
+ 
+ 	priv->xstats.tx_errors += tx_errors;
+ 
+@@ -3869,6 +3867,9 @@ static int __stmmac_open(struct net_device *dev,
+ 	priv->rx_copybreak = STMMAC_RX_COPYBREAK;
+ 
+ 	buf_sz = dma_conf->dma_buf_sz;
++	for (int i = 0; i < MTL_MAX_TX_QUEUES; i++)
++		if (priv->dma_conf.tx_queue[i].tbs & STMMAC_TBS_EN)
++			dma_conf->tx_queue[i].tbs = priv->dma_conf.tx_queue[i].tbs;
+ 	memcpy(&priv->dma_conf, dma_conf, sizeof(*dma_conf));
+ 
+ 	stmmac_reset_queues_param(priv);
+@@ -4147,7 +4148,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct stmmac_tx_queue *tx_q;
+ 	bool has_vlan, set_ic;
+ 	u8 proto_hdr_len, hdr;
+-	unsigned long flags;
+ 	u32 pay_len, mss;
+ 	dma_addr_t des;
+ 	int i;
+@@ -4312,13 +4312,13 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
+ 	}
+ 
+-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+-	txq_stats->tx_bytes += skb->len;
+-	txq_stats->tx_tso_frames++;
+-	txq_stats->tx_tso_nfrags += nfrags;
++	u64_stats_update_begin(&txq_stats->q_syncp);
++	u64_stats_add(&txq_stats->q.tx_bytes, skb->len);
++	u64_stats_inc(&txq_stats->q.tx_tso_frames);
++	u64_stats_add(&txq_stats->q.tx_tso_nfrags, nfrags);
+ 	if (set_ic)
+-		txq_stats->tx_set_ic_bit++;
+-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++		u64_stats_inc(&txq_stats->q.tx_set_ic_bit);
++	u64_stats_update_end(&txq_stats->q_syncp);
+ 
+ 	if (priv->sarc_type)
+ 		stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+@@ -4417,7 +4417,6 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct stmmac_tx_queue *tx_q;
+ 	bool has_vlan, set_ic;
+ 	int entry, first_tx;
+-	unsigned long flags;
+ 	dma_addr_t des;
+ 
+ 	tx_q = &priv->dma_conf.tx_queue[queue];
+@@ -4587,11 +4586,11 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
+ 	}
+ 
+-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+-	txq_stats->tx_bytes += skb->len;
++	u64_stats_update_begin(&txq_stats->q_syncp);
++	u64_stats_add(&txq_stats->q.tx_bytes, skb->len);
+ 	if (set_ic)
+-		txq_stats->tx_set_ic_bit++;
+-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++		u64_stats_inc(&txq_stats->q.tx_set_ic_bit);
++	u64_stats_update_end(&txq_stats->q_syncp);
+ 
+ 	if (priv->sarc_type)
+ 		stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+@@ -4855,12 +4854,11 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
+ 		set_ic = false;
+ 
+ 	if (set_ic) {
+-		unsigned long flags;
+ 		tx_q->tx_count_frames = 0;
+ 		stmmac_set_tx_ic(priv, tx_desc);
+-		flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+-		txq_stats->tx_set_ic_bit++;
+-		u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++		u64_stats_update_begin(&txq_stats->q_syncp);
++		u64_stats_inc(&txq_stats->q.tx_set_ic_bit);
++		u64_stats_update_end(&txq_stats->q_syncp);
+ 	}
+ 
+ 	stmmac_enable_dma_transmission(priv, priv->ioaddr);
+@@ -5010,7 +5008,6 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue,
+ 	unsigned int len = xdp->data_end - xdp->data;
+ 	enum pkt_hash_types hash_type;
+ 	int coe = priv->hw->rx_csum;
+-	unsigned long flags;
+ 	struct sk_buff *skb;
+ 	u32 hash;
+ 
+@@ -5035,10 +5032,10 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue,
+ 	skb_record_rx_queue(skb, queue);
+ 	napi_gro_receive(&ch->rxtx_napi, skb);
+ 
+-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+-	rxq_stats->rx_pkt_n++;
+-	rxq_stats->rx_bytes += len;
+-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++	u64_stats_update_begin(&rxq_stats->napi_syncp);
++	u64_stats_inc(&rxq_stats->napi.rx_pkt_n);
++	u64_stats_add(&rxq_stats->napi.rx_bytes, len);
++	u64_stats_update_end(&rxq_stats->napi_syncp);
+ }
+ 
+ static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+@@ -5120,7 +5117,6 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
+ 	unsigned int desc_size;
+ 	struct bpf_prog *prog;
+ 	bool failure = false;
+-	unsigned long flags;
+ 	int xdp_status = 0;
+ 	int status = 0;
+ 
+@@ -5275,9 +5271,9 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
+ 
+ 	stmmac_finalize_xdp_rx(priv, xdp_status);
+ 
+-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+-	rxq_stats->rx_pkt_n += count;
+-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++	u64_stats_update_begin(&rxq_stats->napi_syncp);
++	u64_stats_add(&rxq_stats->napi.rx_pkt_n, count);
++	u64_stats_update_end(&rxq_stats->napi_syncp);
+ 
+ 	priv->xstats.rx_dropped += rx_dropped;
+ 	priv->xstats.rx_errors += rx_errors;
+@@ -5315,7 +5311,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 	unsigned int desc_size;
+ 	struct sk_buff *skb = NULL;
+ 	struct stmmac_xdp_buff ctx;
+-	unsigned long flags;
+ 	int xdp_status = 0;
+ 	int buf_sz;
+ 
+@@ -5568,11 +5563,11 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 
+ 	stmmac_rx_refill(priv, queue);
+ 
+-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+-	rxq_stats->rx_packets += rx_packets;
+-	rxq_stats->rx_bytes += rx_bytes;
+-	rxq_stats->rx_pkt_n += count;
+-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++	u64_stats_update_begin(&rxq_stats->napi_syncp);
++	u64_stats_add(&rxq_stats->napi.rx_packets, rx_packets);
++	u64_stats_add(&rxq_stats->napi.rx_bytes, rx_bytes);
++	u64_stats_add(&rxq_stats->napi.rx_pkt_n, count);
++	u64_stats_update_end(&rxq_stats->napi_syncp);
+ 
+ 	priv->xstats.rx_dropped += rx_dropped;
+ 	priv->xstats.rx_errors += rx_errors;
+@@ -5587,13 +5582,12 @@ static int stmmac_napi_poll_rx(struct napi_struct *napi, int budget)
+ 	struct stmmac_priv *priv = ch->priv_data;
+ 	struct stmmac_rxq_stats *rxq_stats;
+ 	u32 chan = ch->index;
+-	unsigned long flags;
+ 	int work_done;
+ 
+ 	rxq_stats = &priv->xstats.rxq_stats[chan];
+-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+-	rxq_stats->napi_poll++;
+-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++	u64_stats_update_begin(&rxq_stats->napi_syncp);
++	u64_stats_inc(&rxq_stats->napi.poll);
++	u64_stats_update_end(&rxq_stats->napi_syncp);
+ 
+ 	work_done = stmmac_rx(priv, budget, chan);
+ 	if (work_done < budget && napi_complete_done(napi, work_done)) {
+@@ -5615,13 +5609,12 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
+ 	struct stmmac_txq_stats *txq_stats;
+ 	bool pending_packets = false;
+ 	u32 chan = ch->index;
+-	unsigned long flags;
+ 	int work_done;
+ 
+ 	txq_stats = &priv->xstats.txq_stats[chan];
+-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+-	txq_stats->napi_poll++;
+-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++	u64_stats_update_begin(&txq_stats->napi_syncp);
++	u64_stats_inc(&txq_stats->napi.poll);
++	u64_stats_update_end(&txq_stats->napi_syncp);
+ 
+ 	work_done = stmmac_tx_clean(priv, budget, chan, &pending_packets);
+ 	work_done = min(work_done, budget);
+@@ -5651,17 +5644,16 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
+ 	struct stmmac_rxq_stats *rxq_stats;
+ 	struct stmmac_txq_stats *txq_stats;
+ 	u32 chan = ch->index;
+-	unsigned long flags;
+ 
+ 	rxq_stats = &priv->xstats.rxq_stats[chan];
+-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+-	rxq_stats->napi_poll++;
+-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++	u64_stats_update_begin(&rxq_stats->napi_syncp);
++	u64_stats_inc(&rxq_stats->napi.poll);
++	u64_stats_update_end(&rxq_stats->napi_syncp);
+ 
+ 	txq_stats = &priv->xstats.txq_stats[chan];
+-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+-	txq_stats->napi_poll++;
+-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++	u64_stats_update_begin(&txq_stats->napi_syncp);
++	u64_stats_inc(&txq_stats->napi.poll);
++	u64_stats_update_end(&txq_stats->napi_syncp);
+ 
+ 	tx_done = stmmac_tx_clean(priv, budget, chan, &tx_pending_packets);
+ 	tx_done = min(tx_done, budget);
+@@ -6987,10 +6979,13 @@ static void stmmac_get_stats64(struct net_device *dev, struct rtnl_link_stats64
+ 		u64 tx_bytes;
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&txq_stats->syncp);
+-			tx_packets = txq_stats->tx_packets;
+-			tx_bytes   = txq_stats->tx_bytes;
+-		} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
++			start = u64_stats_fetch_begin(&txq_stats->q_syncp);
++			tx_bytes   = u64_stats_read(&txq_stats->q.tx_bytes);
++		} while (u64_stats_fetch_retry(&txq_stats->q_syncp, start));
++		do {
++			start = u64_stats_fetch_begin(&txq_stats->napi_syncp);
++			tx_packets = u64_stats_read(&txq_stats->napi.tx_packets);
++		} while (u64_stats_fetch_retry(&txq_stats->napi_syncp, start));
+ 
+ 		stats->tx_packets += tx_packets;
+ 		stats->tx_bytes += tx_bytes;
+@@ -7002,10 +6997,10 @@ static void stmmac_get_stats64(struct net_device *dev, struct rtnl_link_stats64
+ 		u64 rx_bytes;
+ 
+ 		do {
+-			start = u64_stats_fetch_begin(&rxq_stats->syncp);
+-			rx_packets = rxq_stats->rx_packets;
+-			rx_bytes   = rxq_stats->rx_bytes;
+-		} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
++			start = u64_stats_fetch_begin(&rxq_stats->napi_syncp);
++			rx_packets = u64_stats_read(&rxq_stats->napi.rx_packets);
++			rx_bytes   = u64_stats_read(&rxq_stats->napi.rx_bytes);
++		} while (u64_stats_fetch_retry(&rxq_stats->napi_syncp, start));
+ 
+ 		stats->rx_packets += rx_packets;
+ 		stats->rx_bytes += rx_bytes;
+@@ -7399,9 +7394,16 @@ int stmmac_dvr_probe(struct device *device,
+ 	priv->dev = ndev;
+ 
+ 	for (i = 0; i < MTL_MAX_RX_QUEUES; i++)
+-		u64_stats_init(&priv->xstats.rxq_stats[i].syncp);
+-	for (i = 0; i < MTL_MAX_TX_QUEUES; i++)
+-		u64_stats_init(&priv->xstats.txq_stats[i].syncp);
++		u64_stats_init(&priv->xstats.rxq_stats[i].napi_syncp);
++	for (i = 0; i < MTL_MAX_TX_QUEUES; i++) {
++		u64_stats_init(&priv->xstats.txq_stats[i].q_syncp);
++		u64_stats_init(&priv->xstats.txq_stats[i].napi_syncp);
++	}
++
++	priv->xstats.pcpu_stats =
++		devm_netdev_alloc_pcpu_stats(device, struct stmmac_pcpu_stats);
++	if (!priv->xstats.pcpu_stats)
++		return -ENOMEM;
+ 
+ 	stmmac_set_ethtool_ops(ndev);
+ 	priv->pause = pause;
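
The stmmac hunks above migrate plain u64 counters guarded by u64_stats_update_begin_irqsave() to u64_stats_t fields split across two sync points (q_syncp for xmit-path writers, napi_syncp for NAPI-context writers). Because each syncp is then only ever written from one execution context, the plain begin/end variants suffice and the flags bookkeeping disappears. A minimal sketch of that writer/reader pairing, using hypothetical foo_* names rather than the driver's:

#include <linux/u64_stats_sync.h>

struct foo_stats {
	struct u64_stats_sync syncp;
	u64_stats_t bytes;
};

/* Writer: only ever runs in one context, so plain begin/end suffices. */
static void foo_add_bytes(struct foo_stats *s, unsigned long n)
{
	u64_stats_update_begin(&s->syncp);
	u64_stats_add(&s->bytes, n);
	u64_stats_update_end(&s->syncp);
}

/* Reader: retries the snapshot if a writer was active meanwhile. */
static u64 foo_read_bytes(struct foo_stats *s)
{
	unsigned int start;
	u64 v;

	do {
		start = u64_stats_fetch_begin(&s->syncp);
		v = u64_stats_read(&s->bytes);
	} while (u64_stats_fetch_retry(&s->syncp, start));

	return v;
}

This is also why stmmac_get_stats64() above now fetches tx_bytes and tx_packets in two separate retry loops: the two counters live under different syncps.
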
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index ca4d4548f85e3..2ed165dcdbdcf 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -631,6 +631,8 @@ static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv)
+ 		}
+ 	}
+ 
++	phy->mac_managed_pm = true;
++
+ 	slave->phy = phy;
+ 
+ 	phy_attached_info(slave->phy);
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index 0e4f526b17532..9061dca97fcbf 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -773,6 +773,9 @@ static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv)
+ 			slave->slave_num);
+ 		return;
+ 	}
++
++	phy->mac_managed_pm = true;
++
+ 	slave->phy = phy;
+ 
+ 	phy_attached_info(slave->phy);
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 1dafa44155d0e..a6fcbda64ecc6 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -708,7 +708,10 @@ void netvsc_device_remove(struct hv_device *device)
+ 	/* Disable NAPI and disassociate its context from the device. */
+ 	for (i = 0; i < net_device->num_chn; i++) {
+ 		/* See also vmbus_reset_channel_cb(). */
+-		napi_disable(&net_device->chan_table[i].napi);
++		/* only disable enabled NAPI channels */
++		if (i < ndev->real_num_rx_queues)
++			napi_disable(&net_device->chan_table[i].napi);
++
+ 		netif_napi_del(&net_device->chan_table[i].napi);
+ 	}
+ 
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index cd15d7b380ab5..9d2d66a4aafd5 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -42,6 +42,10 @@
+ #define LINKCHANGE_INT (2 * HZ)
+ #define VF_TAKEOVER_INT (HZ / 10)
+ 
++/* Macros to define the context of vf registration */
++#define VF_REG_IN_PROBE		1
++#define VF_REG_IN_NOTIFIER	2
++
+ static unsigned int ring_size __ro_after_init = 128;
+ module_param(ring_size, uint, 0444);
+ MODULE_PARM_DESC(ring_size, "Ring buffer size (# of 4K pages)");
+@@ -2183,7 +2187,7 @@ static rx_handler_result_t netvsc_vf_handle_frame(struct sk_buff **pskb)
+ }
+ 
+ static int netvsc_vf_join(struct net_device *vf_netdev,
+-			  struct net_device *ndev)
++			  struct net_device *ndev, int context)
+ {
+ 	struct net_device_context *ndev_ctx = netdev_priv(ndev);
+ 	int ret;
+@@ -2206,7 +2210,11 @@ static int netvsc_vf_join(struct net_device *vf_netdev,
+ 		goto upper_link_failed;
+ 	}
+ 
+-	schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);
++	/* If this registration is called from probe context, vf_takeover
++	 * is taken care of later in probe itself.
++	 */
++	if (context == VF_REG_IN_NOTIFIER)
++		schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);
+ 
+ 	call_netdevice_notifiers(NETDEV_JOIN, vf_netdev);
+ 
+@@ -2344,7 +2352,7 @@ static int netvsc_prepare_bonding(struct net_device *vf_netdev)
+ 	return NOTIFY_DONE;
+ }
+ 
+-static int netvsc_register_vf(struct net_device *vf_netdev)
++static int netvsc_register_vf(struct net_device *vf_netdev, int context)
+ {
+ 	struct net_device_context *net_device_ctx;
+ 	struct netvsc_device *netvsc_dev;
+@@ -2384,7 +2392,7 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
+ 
+ 	netdev_info(ndev, "VF registering: %s\n", vf_netdev->name);
+ 
+-	if (netvsc_vf_join(vf_netdev, ndev) != 0)
++	if (netvsc_vf_join(vf_netdev, ndev, context) != 0)
+ 		return NOTIFY_DONE;
+ 
+ 	dev_hold(vf_netdev);
+@@ -2482,10 +2490,31 @@ static int netvsc_unregister_vf(struct net_device *vf_netdev)
+ 	return NOTIFY_OK;
+ }
+ 
++static int check_dev_is_matching_vf(struct net_device *event_ndev)
++{
++	/* Skip NetVSC interfaces */
++	if (event_ndev->netdev_ops == &device_ops)
++		return -ENODEV;
++
++	/* Avoid non-Ethernet type devices */
++	if (event_ndev->type != ARPHRD_ETHER)
++		return -ENODEV;
++
++	/* Avoid Vlan dev with same MAC registering as VF */
++	if (is_vlan_dev(event_ndev))
++		return -ENODEV;
++
++	/* Avoid Bonding master dev with same MAC registering as VF */
++	if (netif_is_bond_master(event_ndev))
++		return -ENODEV;
++
++	return 0;
++}
++
+ static int netvsc_probe(struct hv_device *dev,
+ 			const struct hv_vmbus_device_id *dev_id)
+ {
+-	struct net_device *net = NULL;
++	struct net_device *net = NULL, *vf_netdev;
+ 	struct net_device_context *net_device_ctx;
+ 	struct netvsc_device_info *device_info = NULL;
+ 	struct netvsc_device *nvdev;
+@@ -2597,6 +2626,30 @@ static int netvsc_probe(struct hv_device *dev,
+ 	}
+ 
+ 	list_add(&net_device_ctx->list, &netvsc_dev_list);
++
++	/* When the hv_netvsc driver is unloaded and reloaded, the
++	 * NET_DEVICE_REGISTER for the vf device is replayed before probe
++	 * is complete. This is because register_netdevice_notifier() gets
++	 * registered before vmbus_driver_register() so that callback func
++	 * is set before probe and we don't miss events like NETDEV_POST_INIT.
++	 * So, in this section we try to register the matching vf device that
++	 * is present as a netdevice, knowing that its register call is not
++	 * processed in the netvsc_netdev_notifier (as probing is in progress and
++	 * get_netvsc_byslot fails).
++	 */
++	for_each_netdev(dev_net(net), vf_netdev) {
++		ret = check_dev_is_matching_vf(vf_netdev);
++		if (ret != 0)
++			continue;
++
++		if (net != get_netvsc_byslot(vf_netdev))
++			continue;
++
++		netvsc_prepare_bonding(vf_netdev);
++		netvsc_register_vf(vf_netdev, VF_REG_IN_PROBE);
++		__netvsc_vf_setup(net, vf_netdev);
++		break;
++	}
+ 	rtnl_unlock();
+ 
+ 	netvsc_devinfo_put(device_info);
+@@ -2752,28 +2805,17 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ 			       unsigned long event, void *ptr)
+ {
+ 	struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
++	int ret = 0;
+ 
+-	/* Skip our own events */
+-	if (event_dev->netdev_ops == &device_ops)
+-		return NOTIFY_DONE;
+-
+-	/* Avoid non-Ethernet type devices */
+-	if (event_dev->type != ARPHRD_ETHER)
+-		return NOTIFY_DONE;
+-
+-	/* Avoid Vlan dev with same MAC registering as VF */
+-	if (is_vlan_dev(event_dev))
+-		return NOTIFY_DONE;
+-
+-	/* Avoid Bonding master dev with same MAC registering as VF */
+-	if (netif_is_bond_master(event_dev))
++	ret = check_dev_is_matching_vf(event_dev);
++	if (ret != 0)
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {
+ 	case NETDEV_POST_INIT:
+ 		return netvsc_prepare_bonding(event_dev);
+ 	case NETDEV_REGISTER:
+-		return netvsc_register_vf(event_dev);
++		return netvsc_register_vf(event_dev, VF_REG_IN_NOTIFIER);
+ 	case NETDEV_UNREGISTER:
+ 		return netvsc_unregister_vf(event_dev);
+ 	case NETDEV_UP:
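
The probe-time loop added above replays VF registration for netdevices that appeared while probe was still running and the notifier could not match them. Stripped to its shape, the pattern looks like the sketch below, where foo_dev_is_interesting() and foo_setup() are hypothetical stand-ins for the driver's filter (check_dev_is_matching_vf plus the slot match) and handlers; note that for_each_netdev() requires the caller to hold rtnl_lock(), which netvsc_probe() does around this block:

#include <linux/netdevice.h>

extern bool foo_dev_is_interesting(struct net_device *dev);	/* hypothetical */
extern void foo_setup(struct net_device *dev);			/* hypothetical */

/* Sketch only: replay registration for devices that predate the
 * notifier.  Caller must hold rtnl_lock() for for_each_netdev().
 */
static void foo_replay_existing(struct net *net)
{
	struct net_device *dev;

	for_each_netdev(net, dev) {
		if (!foo_dev_is_interesting(dev))
			continue;
		foo_setup(dev);
	}
}
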
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index b96f30d11644e..dcc4810cb3247 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -618,7 +618,7 @@ int iwl_sar_get_wrds_table(struct iwl_fw_runtime *fwrt)
+ 					 &tbl_rev);
+ 	if (!IS_ERR(wifi_pkg)) {
+ 		if (tbl_rev != 2) {
+-			ret = PTR_ERR(wifi_pkg);
++			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+@@ -634,7 +634,7 @@ int iwl_sar_get_wrds_table(struct iwl_fw_runtime *fwrt)
+ 					 &tbl_rev);
+ 	if (!IS_ERR(wifi_pkg)) {
+ 		if (tbl_rev != 1) {
+-			ret = PTR_ERR(wifi_pkg);
++			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+@@ -650,7 +650,7 @@ int iwl_sar_get_wrds_table(struct iwl_fw_runtime *fwrt)
+ 					 &tbl_rev);
+ 	if (!IS_ERR(wifi_pkg)) {
+ 		if (tbl_rev != 0) {
+-			ret = PTR_ERR(wifi_pkg);
++			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+@@ -707,7 +707,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ 					 &tbl_rev);
+ 	if (!IS_ERR(wifi_pkg)) {
+ 		if (tbl_rev != 2) {
+-			ret = PTR_ERR(wifi_pkg);
++			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+@@ -723,7 +723,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ 					 &tbl_rev);
+ 	if (!IS_ERR(wifi_pkg)) {
+ 		if (tbl_rev != 1) {
+-			ret = PTR_ERR(wifi_pkg);
++			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+@@ -739,7 +739,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ 					 &tbl_rev);
+ 	if (!IS_ERR(wifi_pkg)) {
+ 		if (tbl_rev != 0) {
+-			ret = PTR_ERR(wifi_pkg);
++			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+@@ -1116,6 +1116,9 @@ int iwl_acpi_get_ppag_table(struct iwl_fw_runtime *fwrt)
+ 		goto read_table;
+ 	}
+ 
++	ret = PTR_ERR(wifi_pkg);
++	goto out_free;
++
+ read_table:
+ 	fwrt->ppag_ver = tbl_rev;
+ 	flags = &wifi_pkg->package.elements[1];
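
The repeated one-line fixes above all correct the same mistake: PTR_ERR() only decodes an error when IS_ERR() is true. On these paths wifi_pkg is a valid pointer and only the revision check failed, so the error code has to be supplied explicitly. A sketch of the rule, with hypothetical foo_lookup()/foo_rev() helpers:

#include <linux/err.h>

extern void *foo_lookup(void);		/* hypothetical: may return ERR_PTR */
extern int foo_rev(const void *p);	/* hypothetical revision accessor */

static int foo_check(void)
{
	void *p = foo_lookup();

	if (IS_ERR(p))
		return PTR_ERR(p);	/* valid: p encodes a -ERRNO */

	if (foo_rev(p) != 2)
		return -EINVAL;		/* not PTR_ERR(p): p is a live pointer */

	return 0;
}
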
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index ffe2670720c92..abf8001bdac17 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -128,6 +128,7 @@ static void iwl_dealloc_ucode(struct iwl_drv *drv)
+ 	kfree(drv->fw.ucode_capa.cmd_versions);
+ 	kfree(drv->fw.phy_integration_ver);
+ 	kfree(drv->trans->dbg.pc_data);
++	drv->trans->dbg.pc_data = NULL;
+ 
+ 	for (i = 0; i < IWL_UCODE_TYPE_MAX; i++)
+ 		iwl_free_fw_img(drv, drv->fw.img + i);
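
The one-line addition above pairs the kfree() with a NULL store because trans->dbg.pc_data outlives this teardown path; since kfree(NULL) is a no-op, a later free or reuse then degrades safely instead of double-freeing. The general idiom, sketched with a hypothetical struct foo:

#include <linux/slab.h>

struct foo {
	void *buf;
};

static void foo_reset_buf(struct foo *obj)
{
	kfree(obj->buf);	/* kfree(NULL) is a safe no-op... */
	obj->buf = NULL;	/* ...so clearing makes re-entry harmless */
}
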
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 586334eee0566..350dc6b8a3302 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -3673,6 +3673,9 @@ iwl_mvm_sta_state_notexist_to_none(struct iwl_mvm *mvm,
+ 					   NL80211_TDLS_SETUP);
+ 	}
+ 
++	if (ret)
++		return ret;
++
+ 	for_each_sta_active_link(vif, sta, link_sta, i)
+ 		link_sta->agg.max_rc_amsdu_len = 1;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 886d000985287..af15d470c69bd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -505,6 +505,10 @@ static bool iwl_mvm_is_dup(struct ieee80211_sta *sta, int queue,
+ 		return false;
+ 
+ 	mvm_sta = iwl_mvm_sta_from_mac80211(sta);
++
++	if (WARN_ON_ONCE(!mvm_sta->dup_data))
++		return false;
++
+ 	dup_data = &mvm_sta->dup_data[queue];
+ 
+ 	/*
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 218fdf1ed5304..2e653a417d626 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
+  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+  * Copyright (C) 2017 Intel Deutschland GmbH
+  */
+@@ -972,6 +972,7 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
+ 	if (!le32_to_cpu(notif->status) || !le32_to_cpu(notif->start)) {
+ 		/* End TE, notify mac80211 */
+ 		mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;
++		mvmvif->time_event_data.link_id = -1;
+ 		iwl_mvm_p2p_roc_finished(mvm);
+ 		ieee80211_remain_on_channel_expired(mvm->hw);
+ 	} else if (le32_to_cpu(notif->start)) {
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index d7503aef599f0..fab361a250d60 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -104,13 +104,12 @@ bool provides_xdp_headroom = true;
+ module_param(provides_xdp_headroom, bool, 0644);
+ 
+ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+-			       u8 status);
++			       s8 status);
+ 
+ static void make_tx_response(struct xenvif_queue *queue,
+-			     struct xen_netif_tx_request *txp,
++			     const struct xen_netif_tx_request *txp,
+ 			     unsigned int extra_count,
+-			     s8       st);
+-static void push_tx_responses(struct xenvif_queue *queue);
++			     s8 status);
+ 
+ static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
+ 
+@@ -208,13 +207,9 @@ static void xenvif_tx_err(struct xenvif_queue *queue,
+ 			  unsigned int extra_count, RING_IDX end)
+ {
+ 	RING_IDX cons = queue->tx.req_cons;
+-	unsigned long flags;
+ 
+ 	do {
+-		spin_lock_irqsave(&queue->response_lock, flags);
+ 		make_tx_response(queue, txp, extra_count, XEN_NETIF_RSP_ERROR);
+-		push_tx_responses(queue);
+-		spin_unlock_irqrestore(&queue->response_lock, flags);
+ 		if (cons == end)
+ 			break;
+ 		RING_COPY_REQUEST(&queue->tx, cons++, txp);
+@@ -465,12 +460,7 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 	for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS;
+ 	     nr_slots--) {
+ 		if (unlikely(!txp->size)) {
+-			unsigned long flags;
+-
+-			spin_lock_irqsave(&queue->response_lock, flags);
+ 			make_tx_response(queue, txp, 0, XEN_NETIF_RSP_OKAY);
+-			push_tx_responses(queue);
+-			spin_unlock_irqrestore(&queue->response_lock, flags);
+ 			++txp;
+ 			continue;
+ 		}
+@@ -496,14 +486,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
+ 
+ 		for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; ++txp) {
+ 			if (unlikely(!txp->size)) {
+-				unsigned long flags;
+-
+-				spin_lock_irqsave(&queue->response_lock, flags);
+ 				make_tx_response(queue, txp, 0,
+ 						 XEN_NETIF_RSP_OKAY);
+-				push_tx_responses(queue);
+-				spin_unlock_irqrestore(&queue->response_lock,
+-						       flags);
+ 				continue;
+ 			}
+ 
+@@ -995,7 +979,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 					 (ret == 0) ?
+ 					 XEN_NETIF_RSP_OKAY :
+ 					 XEN_NETIF_RSP_ERROR);
+-			push_tx_responses(queue);
+ 			continue;
+ 		}
+ 
+@@ -1007,7 +990,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
+ 
+ 			make_tx_response(queue, &txreq, extra_count,
+ 					 XEN_NETIF_RSP_OKAY);
+-			push_tx_responses(queue);
+ 			continue;
+ 		}
+ 
+@@ -1433,8 +1415,35 @@ int xenvif_tx_action(struct xenvif_queue *queue, int budget)
+ 	return work_done;
+ }
+ 
++static void _make_tx_response(struct xenvif_queue *queue,
++			     const struct xen_netif_tx_request *txp,
++			     unsigned int extra_count,
++			     s8 status)
++{
++	RING_IDX i = queue->tx.rsp_prod_pvt;
++	struct xen_netif_tx_response *resp;
++
++	resp = RING_GET_RESPONSE(&queue->tx, i);
++	resp->id     = txp->id;
++	resp->status = status;
++
++	while (extra_count-- != 0)
++		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
++
++	queue->tx.rsp_prod_pvt = ++i;
++}
++
++static void push_tx_responses(struct xenvif_queue *queue)
++{
++	int notify;
++
++	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
++	if (notify)
++		notify_remote_via_irq(queue->tx_irq);
++}
++
+ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+-			       u8 status)
++			       s8 status)
+ {
+ 	struct pending_tx_info *pending_tx_info;
+ 	pending_ring_idx_t index;
+@@ -1444,8 +1453,8 @@ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+ 
+ 	spin_lock_irqsave(&queue->response_lock, flags);
+ 
+-	make_tx_response(queue, &pending_tx_info->req,
+-			 pending_tx_info->extra_count, status);
++	_make_tx_response(queue, &pending_tx_info->req,
++			  pending_tx_info->extra_count, status);
+ 
+ 	/* Release the pending index before pushing the Tx response so
+ 	 * it's available before a new Tx request is pushed by the
+@@ -1459,32 +1468,19 @@ static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
+ 	spin_unlock_irqrestore(&queue->response_lock, flags);
+ }
+ 
+-
+ static void make_tx_response(struct xenvif_queue *queue,
+-			     struct xen_netif_tx_request *txp,
++			     const struct xen_netif_tx_request *txp,
+ 			     unsigned int extra_count,
+-			     s8       st)
++			     s8 status)
+ {
+-	RING_IDX i = queue->tx.rsp_prod_pvt;
+-	struct xen_netif_tx_response *resp;
+-
+-	resp = RING_GET_RESPONSE(&queue->tx, i);
+-	resp->id     = txp->id;
+-	resp->status = st;
+-
+-	while (extra_count-- != 0)
+-		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
++	unsigned long flags;
+ 
+-	queue->tx.rsp_prod_pvt = ++i;
+-}
++	spin_lock_irqsave(&queue->response_lock, flags);
+ 
+-static void push_tx_responses(struct xenvif_queue *queue)
+-{
+-	int notify;
++	_make_tx_response(queue, txp, extra_count, status);
++	push_tx_responses(queue);
+ 
+-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
+-	if (notify)
+-		notify_remote_via_irq(queue->tx_irq);
++	spin_unlock_irqrestore(&queue->response_lock, flags);
+ }
+ 
+ static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
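
The xen-netback refactor above moves the raw ring write into an unlocked _make_tx_response() and turns make_tx_response() into a wrapper that takes response_lock and pushes the responses itself, so the call sites stop open-coding the lock/push sequence. The pattern in miniature, with hypothetical names (struct foo, foo_push()):

#include <linux/spinlock.h>

struct foo {
	spinlock_t lock;
	/* ...ring state... */
};

extern void foo_push(struct foo *f);	/* hypothetical notify step */

/* Leading-underscore helper: caller holds f->lock. */
static void _foo_emit(struct foo *f, int status)
{
	/* ...write one response into the ring... */
}

/* Public entry: lock, emit, push - the sequence callers used to repeat. */
static void foo_emit(struct foo *f, int status)
{
	unsigned long flags;

	spin_lock_irqsave(&f->lock, flags);
	_foo_emit(f, status);
	foo_push(f);
	spin_unlock_irqrestore(&f->lock, flags);
}
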
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index afdaefbd03f61..885ee040ee2b3 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -762,7 +762,9 @@ struct device_node *of_graph_get_port_parent(struct device_node *node)
+ 	/* Walk 3 levels up only if there is 'ports' node. */
+ 	for (depth = 3; depth && node; depth--) {
+ 		node = of_get_next_parent(node);
+-		if (depth == 2 && !of_node_name_eq(node, "ports"))
++		if (depth == 2 && !of_node_name_eq(node, "ports") &&
++		    !of_node_name_eq(node, "in-ports") &&
++		    !of_node_name_eq(node, "out-ports"))
+ 			break;
+ 	}
+ 	return node;
+@@ -1062,36 +1064,6 @@ of_fwnode_device_get_match_data(const struct fwnode_handle *fwnode,
+ 	return of_device_get_match_data(dev);
+ }
+ 
+-static struct device_node *of_get_compat_node(struct device_node *np)
+-{
+-	of_node_get(np);
+-
+-	while (np) {
+-		if (!of_device_is_available(np)) {
+-			of_node_put(np);
+-			np = NULL;
+-		}
+-
+-		if (of_property_present(np, "compatible"))
+-			break;
+-
+-		np = of_get_next_parent(np);
+-	}
+-
+-	return np;
+-}
+-
+-static struct device_node *of_get_compat_node_parent(struct device_node *np)
+-{
+-	struct device_node *parent, *node;
+-
+-	parent = of_get_parent(np);
+-	node = of_get_compat_node(parent);
+-	of_node_put(parent);
+-
+-	return node;
+-}
+-
+ static void of_link_to_phandle(struct device_node *con_np,
+ 			      struct device_node *sup_np)
+ {
+@@ -1221,10 +1193,10 @@ static struct device_node *parse_##fname(struct device_node *np,	     \
+  * @parse_prop.prop_name: Name of property holding a phandle value
+  * @parse_prop.index: For properties holding a list of phandles, this is the
+  *		      index into the list
++ * @get_con_dev: If the consumer node containing the property is never converted
++ *		 to a struct device, implement this ops so fw_devlink can use it
++ *		 to a struct device, implement this op so fw_devlink can use it
+  * @optional: Describes whether a supplier is mandatory or not
+- * @node_not_dev: The consumer node containing the property is never converted
+- *		  to a struct device. Instead, parse ancestor nodes for the
+- *		  compatible property to find a node corresponding to a device.
+  *
+  * Returns:
+  * parse_prop() return values are
+@@ -1235,15 +1207,15 @@ static struct device_node *parse_##fname(struct device_node *np,	     \
+ struct supplier_bindings {
+ 	struct device_node *(*parse_prop)(struct device_node *np,
+ 					  const char *prop_name, int index);
++	struct device_node *(*get_con_dev)(struct device_node *np);
+ 	bool optional;
+-	bool node_not_dev;
+ };
+ 
+ DEFINE_SIMPLE_PROP(clocks, "clocks", "#clock-cells")
+ DEFINE_SIMPLE_PROP(interconnects, "interconnects", "#interconnect-cells")
+ DEFINE_SIMPLE_PROP(iommus, "iommus", "#iommu-cells")
+ DEFINE_SIMPLE_PROP(mboxes, "mboxes", "#mbox-cells")
+-DEFINE_SIMPLE_PROP(io_channels, "io-channel", "#io-channel-cells")
++DEFINE_SIMPLE_PROP(io_channels, "io-channels", "#io-channel-cells")
+ DEFINE_SIMPLE_PROP(interrupt_parent, "interrupt-parent", NULL)
+ DEFINE_SIMPLE_PROP(dmas, "dmas", "#dma-cells")
+ DEFINE_SIMPLE_PROP(power_domains, "power-domains", "#power-domain-cells")
+@@ -1261,7 +1233,6 @@ DEFINE_SIMPLE_PROP(pinctrl5, "pinctrl-5", NULL)
+ DEFINE_SIMPLE_PROP(pinctrl6, "pinctrl-6", NULL)
+ DEFINE_SIMPLE_PROP(pinctrl7, "pinctrl-7", NULL)
+ DEFINE_SIMPLE_PROP(pinctrl8, "pinctrl-8", NULL)
+-DEFINE_SIMPLE_PROP(remote_endpoint, "remote-endpoint", NULL)
+ DEFINE_SIMPLE_PROP(pwms, "pwms", "#pwm-cells")
+ DEFINE_SIMPLE_PROP(resets, "resets", "#reset-cells")
+ DEFINE_SIMPLE_PROP(leds, "leds", NULL)
+@@ -1327,6 +1298,17 @@ static struct device_node *parse_interrupts(struct device_node *np,
+ 	return of_irq_parse_one(np, index, &sup_args) ? NULL : sup_args.np;
+ }
+ 
++static struct device_node *parse_remote_endpoint(struct device_node *np,
++						 const char *prop_name,
++						 int index)
++{
++	/* Return NULL for index > 0 to signify end of remote-endpoints. */
++	if (!index || strcmp(prop_name, "remote-endpoint"))
++		return NULL;
++
++	return of_graph_get_remote_port_parent(np);
++}
++
+ static const struct supplier_bindings of_supplier_bindings[] = {
+ 	{ .parse_prop = parse_clocks, },
+ 	{ .parse_prop = parse_interconnects, },
+@@ -1351,7 +1333,10 @@ static const struct supplier_bindings of_supplier_bindings[] = {
+ 	{ .parse_prop = parse_pinctrl6, },
+ 	{ .parse_prop = parse_pinctrl7, },
+ 	{ .parse_prop = parse_pinctrl8, },
+-	{ .parse_prop = parse_remote_endpoint, .node_not_dev = true, },
++	{
++		.parse_prop = parse_remote_endpoint,
++		.get_con_dev = of_graph_get_port_parent,
++	},
+ 	{ .parse_prop = parse_pwms, },
+ 	{ .parse_prop = parse_resets, },
+ 	{ .parse_prop = parse_leds, },
+@@ -1402,8 +1387,8 @@ static int of_link_property(struct device_node *con_np, const char *prop_name)
+ 		while ((phandle = s->parse_prop(con_np, prop_name, i))) {
+ 			struct device_node *con_dev_np;
+ 
+-			con_dev_np = s->node_not_dev
+-					? of_get_compat_node_parent(con_np)
++			con_dev_np = s->get_con_dev
++					? s->get_con_dev(con_np)
+ 					: of_node_get(con_np);
+ 			matched = true;
+ 			i++;
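
The node_not_dev boolean is replaced above by a per-binding get_con_dev() callback, so remote-endpoint can walk up with of_graph_get_port_parent() while any future binding can supply its own rule for locating the real consumer. The dispatch reduces to a sketch like this (hypothetical struct and function names):

#include <linux/of.h>

struct binding {
	struct device_node *(*get_con_dev)(struct device_node *np);
};

/* Fall back to the node itself when the binding has no special rule. */
static struct device_node *resolve_consumer(const struct binding *b,
					    struct device_node *np)
{
	return b->get_con_dev ? b->get_con_dev(np) : of_node_get(np);
}
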
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index cfd60e35a8992..d7593bde2d02f 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -50,6 +50,12 @@ static struct unittest_results {
+ 	failed; \
+ })
+ 
++#ifdef CONFIG_OF_KOBJ
++#define OF_KREF_READ(NODE) kref_read(&(NODE)->kobj.kref)
++#else
++#define OF_KREF_READ(NODE) 1
++#endif
++
+ /*
+  * Expected message may have a message level other than KERN_INFO.
+  * Print the expected message only if the current loglevel will allow
+@@ -570,7 +576,7 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 			pr_err("missing testcase data\n");
+ 			return;
+ 		}
+-		prefs[i] = kref_read(&p[i]->kobj.kref);
++		prefs[i] = OF_KREF_READ(p[i]);
+ 	}
+ 
+ 	rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
+@@ -693,9 +699,9 @@ static void __init of_unittest_parse_phandle_with_args_map(void)
+ 	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(p); ++i) {
+-		unittest(prefs[i] == kref_read(&p[i]->kobj.kref),
++		unittest(prefs[i] == OF_KREF_READ(p[i]),
+ 			 "provider%d: expected:%d got:%d\n",
+-			 i, prefs[i], kref_read(&p[i]->kobj.kref));
++			 i, prefs[i], OF_KREF_READ(p[i]));
+ 		of_node_put(p[i]);
+ 	}
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index bc94d7f395357..c2630db745608 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -6,6 +6,7 @@
+  * Author: Kishon Vijay Abraham I <kishon@ti.com>
+  */
+ 
++#include <linux/align.h>
+ #include <linux/bitfield.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+@@ -615,7 +616,7 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+ 	}
+ 
+ 	aligned_offset = msg_addr & (epc->mem->window.page_size - 1);
+-	msg_addr &= ~aligned_offset;
++	msg_addr = ALIGN_DOWN(msg_addr, epc->mem->window.page_size);
+ 	ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
+ 				  epc->mem->window.page_size);
+ 	if (ret)
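
The masking fix above matters because aligned_offset is a narrower unsigned type than the 64-bit msg_addr: ~aligned_offset zero-extends, which also clears the upper half of msg_addr, whereas ALIGN_DOWN() builds the mask at the operand's own width. A small standalone illustration (plain C, not driver code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t msg_addr = 0x8000000012345ULL;	/* a >32-bit bus address */
	uint32_t page = 0x1000;
	uint32_t off = msg_addr & (page - 1);	/* 0x345 */

	/* Buggy: ~off is 32-bit, so zero-extension also clears the
	 * upper half of msg_addr (prints 0x12000). */
	printf("masked : %#llx\n", (unsigned long long)(msg_addr & ~off));

	/* Correct: build the mask at the operand's width, as
	 * ALIGN_DOWN() does (prints 0x8000000012000). */
	printf("aligned: %#llx\n",
	       (unsigned long long)(msg_addr & ~((uint64_t)page - 1)));
	return 0;
}
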
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index b2000b14faa0d..e62de97304226 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2459,29 +2459,36 @@ static void pci_pme_list_scan(struct work_struct *work)
+ 		if (pdev->pme_poll) {
+ 			struct pci_dev *bridge = pdev->bus->self;
+ 			struct device *dev = &pdev->dev;
+-			int pm_status;
++			struct device *bdev = bridge ? &bridge->dev : NULL;
++			int bref = 0;
+ 
+ 			/*
+-			 * If bridge is in low power state, the
+-			 * configuration space of subordinate devices
+-			 * may be not accessible
++			 * If we have a bridge, it should be in an active/D0
++			 * state or the configuration space of subordinate
++			 * devices may not be accessible or stable over the
++			 * course of the call.
+ 			 */
+-			if (bridge && bridge->current_state != PCI_D0)
+-				continue;
++			if (bdev) {
++				bref = pm_runtime_get_if_active(bdev, true);
++				if (!bref)
++					continue;
++
++				if (bridge->current_state != PCI_D0)
++					goto put_bridge;
++			}
+ 
+ 			/*
+-			 * If the device is in a low power state it
+-			 * should not be polled either.
++			 * The device itself should be suspended but config
++			 * space must be accessible, therefore it cannot be in
++			 * D3cold.
+ 			 */
+-			pm_status = pm_runtime_get_if_active(dev, true);
+-			if (!pm_status)
+-				continue;
+-
+-			if (pdev->current_state != PCI_D3cold)
++			if (pm_runtime_suspended(dev) &&
++			    pdev->current_state != PCI_D3cold)
+ 				pci_pme_wakeup(pdev, NULL);
+ 
+-			if (pm_status > 0)
+-				pm_runtime_put(dev);
++put_bridge:
++			if (bref > 0)
++				pm_runtime_put(bdev);
+ 		} else {
+ 			list_del(&pme_dev->list);
+ 			kfree(pme_dev);
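
The rework above holds the bridge active for the whole config access instead of sampling bridge->current_state once and racing with a runtime suspend. A sketch of the reference pattern (foo naming is hypothetical; pm_runtime_get_if_active() returns a positive value on success, 0 if the device is not active, and a negative value when runtime PM is disabled):

#include <linux/pm_runtime.h>

static void foo_poll_child(struct device *bridge_dev)
{
	int ref = pm_runtime_get_if_active(bridge_dev, true);

	if (!ref)
		return;		/* bridge suspended: skip this pass */

	/* ...child config space is safe to touch here... */

	if (ref > 0)		/* negative means RPM disabled: no ref taken */
		pm_runtime_put(bridge_dev);
}
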
+diff --git a/drivers/perf/cxl_pmu.c b/drivers/perf/cxl_pmu.c
+index 365d964b0f6a6..bc0d414a6aff9 100644
+--- a/drivers/perf/cxl_pmu.c
++++ b/drivers/perf/cxl_pmu.c
+@@ -419,7 +419,7 @@ static struct attribute *cxl_pmu_event_attrs[] = {
+ 	CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmp,			CXL_PMU_GID_S2M_NDR, BIT(0)),
+ 	CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmps,			CXL_PMU_GID_S2M_NDR, BIT(1)),
+ 	CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_cmpe,			CXL_PMU_GID_S2M_NDR, BIT(2)),
+-	CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_biconflictack,		CXL_PMU_GID_S2M_NDR, BIT(3)),
++	CXL_PMU_EVENT_CXL_ATTR(s2m_ndr_biconflictack,		CXL_PMU_GID_S2M_NDR, BIT(4)),
+ 	/* CXL rev 3.0 Table 3-46 S2M DRS opcodes */
+ 	CXL_PMU_EVENT_CXL_ATTR(s2m_drs_memdata,			CXL_PMU_GID_S2M_DRS, BIT(0)),
+ 	CXL_PMU_EVENT_CXL_ATTR(s2m_drs_memdatanxm,		CXL_PMU_GID_S2M_DRS, BIT(1)),
+diff --git a/drivers/pmdomain/mediatek/mtk-pm-domains.c b/drivers/pmdomain/mediatek/mtk-pm-domains.c
+index e26dc17d07ad7..e274e3315fe7a 100644
+--- a/drivers/pmdomain/mediatek/mtk-pm-domains.c
++++ b/drivers/pmdomain/mediatek/mtk-pm-domains.c
+@@ -561,6 +561,11 @@ static int scpsys_add_subdomain(struct scpsys *scpsys, struct device_node *paren
+ 			goto err_put_node;
+ 		}
+ 
++		/* recursive call to add all subdomains */
++		ret = scpsys_add_subdomain(scpsys, child);
++		if (ret)
++			goto err_put_node;
++
+ 		ret = pm_genpd_add_subdomain(parent_pd, child_pd);
+ 		if (ret) {
+ 			dev_err(scpsys->dev, "failed to add %s subdomain to parent %s\n",
+@@ -570,11 +575,6 @@ static int scpsys_add_subdomain(struct scpsys *scpsys, struct device_node *paren
+ 			dev_dbg(scpsys->dev, "%s add subdomain: %s\n", parent_pd->name,
+ 				child_pd->name);
+ 		}
+-
+-		/* recursive call to add all subdomains */
+-		ret = scpsys_add_subdomain(scpsys, child);
+-		if (ret)
+-			goto err_put_node;
+ 	}
+ 
+ 	return 0;
+@@ -588,9 +588,6 @@ static void scpsys_remove_one_domain(struct scpsys_domain *pd)
+ {
+ 	int ret;
+ 
+-	if (scpsys_domain_is_on(pd))
+-		scpsys_power_off(&pd->genpd);
+-
+ 	/*
+ 	 * We're in the error cleanup already, so we only complain,
+ 	 * but won't emit another error on top of the original one.
+@@ -600,6 +597,8 @@ static void scpsys_remove_one_domain(struct scpsys_domain *pd)
+ 		dev_err(pd->scpsys->dev,
+ 			"failed to remove domain '%s' : %d - state may be inconsistent\n",
+ 			pd->genpd.name, ret);
++	if (scpsys_domain_is_on(pd))
++		scpsys_power_off(&pd->genpd);
+ 
+ 	clk_bulk_put(pd->num_clks, pd->clks);
+ 	clk_bulk_put(pd->num_subsys_clks, pd->subsys_clks);
+diff --git a/drivers/pmdomain/renesas/r8a77980-sysc.c b/drivers/pmdomain/renesas/r8a77980-sysc.c
+index 39ca84a67daad..621e411fc9991 100644
+--- a/drivers/pmdomain/renesas/r8a77980-sysc.c
++++ b/drivers/pmdomain/renesas/r8a77980-sysc.c
+@@ -25,7 +25,8 @@ static const struct rcar_sysc_area r8a77980_areas[] __initconst = {
+ 	  PD_CPU_NOCR },
+ 	{ "ca53-cpu3",	0x200, 3, R8A77980_PD_CA53_CPU3, R8A77980_PD_CA53_SCU,
+ 	  PD_CPU_NOCR },
+-	{ "cr7",	0x240, 0, R8A77980_PD_CR7,	R8A77980_PD_ALWAYS_ON },
++	{ "cr7",	0x240, 0, R8A77980_PD_CR7,	R8A77980_PD_ALWAYS_ON,
++	  PD_CPU_NOCR },
+ 	{ "a3ir",	0x180, 0, R8A77980_PD_A3IR,	R8A77980_PD_ALWAYS_ON },
+ 	{ "a2ir0",	0x400, 0, R8A77980_PD_A2IR0,	R8A77980_PD_A3IR },
+ 	{ "a2ir1",	0x400, 1, R8A77980_PD_A2IR1,	R8A77980_PD_A3IR },
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index b92a32b4b1141..04c64ce0a1ca1 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -255,9 +255,10 @@ static void qeth_l3_clear_ip_htable(struct qeth_card *card, int recover)
+ 		if (!recover) {
+ 			hash_del(&addr->hnode);
+ 			kfree(addr);
+-			continue;
++		} else {
++			/* prepare for recovery */
++			addr->disp_flag = QETH_DISP_ADDR_ADD;
+ 		}
+-		addr->disp_flag = QETH_DISP_ADDR_ADD;
+ 	}
+ 
+ 	mutex_unlock(&card->ip_lock);
+@@ -278,9 +279,11 @@ static void qeth_l3_recover_ip(struct qeth_card *card)
+ 		if (addr->disp_flag == QETH_DISP_ADDR_ADD) {
+ 			rc = qeth_l3_register_addr_entry(card, addr);
+ 
+-			if (!rc) {
++			if (!rc || rc == -EADDRINUSE || rc == -ENETDOWN) {
++				/* keep it in the records */
+ 				addr->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
+ 			} else {
++				/* bad address */
+ 				hash_del(&addr->hnode);
+ 				kfree(addr);
+ 			}
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 19eee108db021..5c8d1ba3f8f3c 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -319,17 +319,16 @@ static void fcoe_ctlr_announce(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *sel;
+ 	struct fcoe_fcf *fcf;
+-	unsigned long flags;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_irqsave(&fip->ctlr_lock, flags);
++	spin_lock_bh(&fip->ctlr_lock);
+ 
+ 	kfree_skb(fip->flogi_req);
+ 	fip->flogi_req = NULL;
+ 	list_for_each_entry(fcf, &fip->fcfs, list)
+ 		fcf->flogi_sent = 0;
+ 
+-	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++	spin_unlock_bh(&fip->ctlr_lock);
+ 	sel = fip->sel_fcf;
+ 
+ 	if (sel && ether_addr_equal(sel->fcf_mac, fip->dest_addr))
+@@ -700,7 +699,6 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ {
+ 	struct fc_frame *fp;
+ 	struct fc_frame_header *fh;
+-	unsigned long flags;
+ 	u16 old_xid;
+ 	u8 op;
+ 	u8 mac[ETH_ALEN];
+@@ -734,11 +732,11 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ 		op = FIP_DT_FLOGI;
+ 		if (fip->mode == FIP_MODE_VN2VN)
+ 			break;
+-		spin_lock_irqsave(&fip->ctlr_lock, flags);
++		spin_lock_bh(&fip->ctlr_lock);
+ 		kfree_skb(fip->flogi_req);
+ 		fip->flogi_req = skb;
+ 		fip->flogi_req_send = 1;
+-		spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++		spin_unlock_bh(&fip->ctlr_lock);
+ 		schedule_work(&fip->timer_work);
+ 		return -EINPROGRESS;
+ 	case ELS_FDISC:
+@@ -1707,11 +1705,10 @@ static int fcoe_ctlr_flogi_send_locked(struct fcoe_ctlr *fip)
+ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
+-	unsigned long flags;
+ 	int error;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_irqsave(&fip->ctlr_lock, flags);
++	spin_lock_bh(&fip->ctlr_lock);
+ 	LIBFCOE_FIP_DBG(fip, "re-sending FLOGI - reselect\n");
+ 	fcf = fcoe_ctlr_select(fip);
+ 	if (!fcf || fcf->flogi_sent) {
+@@ -1722,7 +1719,7 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ 		fcoe_ctlr_solicit(fip, NULL);
+ 		error = fcoe_ctlr_flogi_send_locked(fip);
+ 	}
+-	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++	spin_unlock_bh(&fip->ctlr_lock);
+ 	mutex_unlock(&fip->ctlr_mutex);
+ 	return error;
+ }
+@@ -1739,9 +1736,8 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&fip->ctlr_lock, flags);
++	spin_lock_bh(&fip->ctlr_lock);
+ 	fcf = fip->sel_fcf;
+ 	if (!fcf || !fip->flogi_req_send)
+ 		goto unlock;
+@@ -1768,7 +1764,7 @@ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ 	} else /* XXX */
+ 		LIBFCOE_FIP_DBG(fip, "No FCF selected - defer send\n");
+ unlock:
+-	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
++	spin_unlock_bh(&fip->ctlr_lock);
+ }
+ 
+ /**
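
The irqsave-to-bh conversions above work because ctlr_lock is only taken from process context and from timer/workqueue paths, never from a hardirq handler, so spin_lock_bh() is sufficient and drops the flags save/restore. In outline, with a hypothetical lock:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_lock);	/* hypothetical lock */

static void foo_process_path(void)
{
	spin_lock_bh(&foo_lock);	/* masks local softirqs only */
	/* ...state shared with a timer or other BH-level callback... */
	spin_unlock_bh(&foo_lock);
}
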
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index a95936b18f695..7ceb982040a5d 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -330,6 +330,7 @@ enum storvsc_request_type {
+  */
+ 
+ static int storvsc_ringbuffer_size = (128 * 1024);
++static int aligned_ringbuffer_size;
+ static u32 max_outstanding_req_per_channel;
+ static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth);
+ 
+@@ -687,8 +688,8 @@ static void handle_sc_creation(struct vmbus_channel *new_sc)
+ 	new_sc->next_request_id_callback = storvsc_next_request_id;
+ 
+ 	ret = vmbus_open(new_sc,
+-			 storvsc_ringbuffer_size,
+-			 storvsc_ringbuffer_size,
++			 aligned_ringbuffer_size,
++			 aligned_ringbuffer_size,
+ 			 (void *)&props,
+ 			 sizeof(struct vmstorage_channel_properties),
+ 			 storvsc_on_channel_callback, new_sc);
+@@ -1973,7 +1974,7 @@ static int storvsc_probe(struct hv_device *device,
+ 	dma_set_min_align_mask(&device->device, HV_HYP_PAGE_SIZE - 1);
+ 
+ 	stor_device->port_number = host->host_no;
+-	ret = storvsc_connect_to_vsp(device, storvsc_ringbuffer_size, is_fc);
++	ret = storvsc_connect_to_vsp(device, aligned_ringbuffer_size, is_fc);
+ 	if (ret)
+ 		goto err_out1;
+ 
+@@ -2164,7 +2165,7 @@ static int storvsc_resume(struct hv_device *hv_dev)
+ {
+ 	int ret;
+ 
+-	ret = storvsc_connect_to_vsp(hv_dev, storvsc_ringbuffer_size,
++	ret = storvsc_connect_to_vsp(hv_dev, aligned_ringbuffer_size,
+ 				     hv_dev_is_fc(hv_dev));
+ 	return ret;
+ }
+@@ -2198,8 +2199,9 @@ static int __init storvsc_drv_init(void)
+ 	 * the ring buffer indices) by the max request size (which is
+ 	 * vmbus_channel_packet_multipage_buffer + struct vstor_packet + u64)
+ 	 */
++	aligned_ringbuffer_size = VMBUS_RING_SIZE(storvsc_ringbuffer_size);
+ 	max_outstanding_req_per_channel =
+-		((storvsc_ringbuffer_size - PAGE_SIZE) /
++		((aligned_ringbuffer_size - PAGE_SIZE) /
+ 		ALIGN(MAX_MULTIPAGE_BUFFER_PACKET +
+ 		sizeof(struct vstor_packet) + sizeof(u64),
+ 		sizeof(u64)));
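
The storvsc change above computes the VMBUS_RING_SIZE()-aligned ring size once at module init and threads that single value through vmbus_open(), probe, resume, and the outstanding-request budget, so the budget math and the actual ring can no longer disagree. Reduced to its shape (foo_* names hypothetical):

#include <linux/hyperv.h>

static int foo_ring_bytes = 128 * 1024;	/* requested payload size */
static int foo_ring_aligned;		/* header- and page-aligned size */

static int __init foo_init(void)
{
	foo_ring_aligned = VMBUS_RING_SIZE(foo_ring_bytes);
	/* derive per-channel budgets from foo_ring_aligned, and pass the
	 * same value to every vmbus_open() call */
	return 0;
}
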
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 272bc871a848b..e2d3e3ec13789 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -2,6 +2,7 @@
+ // Copyright 2004-2007 Freescale Semiconductor, Inc. All Rights Reserved.
+ // Copyright (C) 2008 Juergen Beisert
+ 
++#include <linux/bits.h>
+ #include <linux/clk.h>
+ #include <linux/completion.h>
+ #include <linux/delay.h>
+@@ -660,15 +661,15 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+ 			<< MX51_ECSPI_CTRL_BL_OFFSET;
+ 	else {
+ 		if (spi_imx->usedma) {
+-			ctrl |= (spi_imx->bits_per_word *
+-				spi_imx_bytes_per_word(spi_imx->bits_per_word) - 1)
++			ctrl |= (spi_imx->bits_per_word - 1)
+ 				<< MX51_ECSPI_CTRL_BL_OFFSET;
+ 		} else {
+ 			if (spi_imx->count >= MX51_ECSPI_CTRL_MAX_BURST)
+-				ctrl |= (MX51_ECSPI_CTRL_MAX_BURST - 1)
++				ctrl |= (MX51_ECSPI_CTRL_MAX_BURST * BITS_PER_BYTE - 1)
+ 						<< MX51_ECSPI_CTRL_BL_OFFSET;
+ 			else
+-				ctrl |= (spi_imx->count * spi_imx->bits_per_word - 1)
++				ctrl |= spi_imx->count / DIV_ROUND_UP(spi_imx->bits_per_word,
++						BITS_PER_BYTE) * spi_imx->bits_per_word
+ 						<< MX51_ECSPI_CTRL_BL_OFFSET;
+ 		}
+ 	}
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index a0c9fea908f55..ddf1c684bcc7d 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -53,8 +53,6 @@
+ 
+ /* per-register bitmasks: */
+ #define OMAP2_MCSPI_IRQSTATUS_EOW	BIT(17)
+-#define OMAP2_MCSPI_IRQSTATUS_TX0_EMPTY    BIT(0)
+-#define OMAP2_MCSPI_IRQSTATUS_RX0_FULL    BIT(2)
+ 
+ #define OMAP2_MCSPI_MODULCTRL_SINGLE	BIT(0)
+ #define OMAP2_MCSPI_MODULCTRL_MS	BIT(2)
+@@ -293,7 +291,7 @@ static void omap2_mcspi_set_mode(struct spi_controller *ctlr)
+ }
+ 
+ static void omap2_mcspi_set_fifo(const struct spi_device *spi,
+-				struct spi_transfer *t, int enable, int dma_enabled)
++				struct spi_transfer *t, int enable)
+ {
+ 	struct spi_controller *ctlr = spi->controller;
+ 	struct omap2_mcspi_cs *cs = spi->controller_state;
+@@ -314,28 +312,20 @@ static void omap2_mcspi_set_fifo(const struct spi_device *spi,
+ 			max_fifo_depth = OMAP2_MCSPI_MAX_FIFODEPTH / 2;
+ 		else
+ 			max_fifo_depth = OMAP2_MCSPI_MAX_FIFODEPTH;
+-		if (dma_enabled)
+-			wcnt = t->len / bytes_per_word;
+-		else
+-			wcnt = 0;
++
++		wcnt = t->len / bytes_per_word;
+ 		if (wcnt > OMAP2_MCSPI_MAX_FIFOWCNT)
+ 			goto disable_fifo;
+ 
+ 		xferlevel = wcnt << 16;
+ 		if (t->rx_buf != NULL) {
+ 			chconf |= OMAP2_MCSPI_CHCONF_FFER;
+-			if (dma_enabled)
+-				xferlevel |= (bytes_per_word - 1) << 8;
+-			else
+-				xferlevel |= (max_fifo_depth - 1) << 8;
++			xferlevel |= (bytes_per_word - 1) << 8;
+ 		}
+ 
+ 		if (t->tx_buf != NULL) {
+ 			chconf |= OMAP2_MCSPI_CHCONF_FFET;
+-			if (dma_enabled)
+-				xferlevel |= bytes_per_word - 1;
+-			else
+-				xferlevel |= (max_fifo_depth - 1);
++			xferlevel |= bytes_per_word - 1;
+ 		}
+ 
+ 		mcspi_write_reg(ctlr, OMAP2_MCSPI_XFERLEVEL, xferlevel);
+@@ -892,113 +882,6 @@ omap2_mcspi_txrx_pio(struct spi_device *spi, struct spi_transfer *xfer)
+ 	return count - c;
+ }
+ 
+-static unsigned
+-omap2_mcspi_txrx_piofifo(struct spi_device *spi, struct spi_transfer *xfer)
+-{
+-	struct omap2_mcspi_cs	*cs = spi->controller_state;
+-	struct omap2_mcspi    *mcspi;
+-	unsigned int		count, c;
+-	unsigned int		iter, cwc;
+-	int last_request;
+-	void __iomem		*base = cs->base;
+-	void __iomem		*tx_reg;
+-	void __iomem		*rx_reg;
+-	void __iomem		*chstat_reg;
+-	void __iomem        *irqstat_reg;
+-	int			word_len, bytes_per_word;
+-	u8		*rx;
+-	const u8	*tx;
+-
+-	mcspi = spi_controller_get_devdata(spi->controller);
+-	count = xfer->len;
+-	c = count;
+-	word_len = cs->word_len;
+-	bytes_per_word = mcspi_bytes_per_word(word_len);
+-
+-	/*
+-	 * We store the pre-calculated register addresses on stack to speed
+-	 * up the transfer loop.
+-	 */
+-	tx_reg		= base + OMAP2_MCSPI_TX0;
+-	rx_reg		= base + OMAP2_MCSPI_RX0;
+-	chstat_reg	= base + OMAP2_MCSPI_CHSTAT0;
+-	irqstat_reg    = base + OMAP2_MCSPI_IRQSTATUS;
+-
+-	if (c < (word_len >> 3))
+-		return 0;
+-
+-	rx = xfer->rx_buf;
+-	tx = xfer->tx_buf;
+-
+-	do {
+-		/* calculate number of words in current iteration */
+-		cwc = min((unsigned int)mcspi->fifo_depth / bytes_per_word,
+-			  c / bytes_per_word);
+-		last_request = cwc != (mcspi->fifo_depth / bytes_per_word);
+-		if (tx) {
+-			if (mcspi_wait_for_reg_bit(irqstat_reg,
+-						   OMAP2_MCSPI_IRQSTATUS_TX0_EMPTY) < 0) {
+-				dev_err(&spi->dev, "TX Empty timed out\n");
+-				goto out;
+-			}
+-			writel_relaxed(OMAP2_MCSPI_IRQSTATUS_TX0_EMPTY, irqstat_reg);
+-
+-			for (iter = 0; iter < cwc; iter++, tx += bytes_per_word) {
+-				if (bytes_per_word == 1)
+-					writel_relaxed(*tx, tx_reg);
+-				else if (bytes_per_word == 2)
+-					writel_relaxed(*((u16 *)tx), tx_reg);
+-				else if (bytes_per_word == 4)
+-					writel_relaxed(*((u32 *)tx), tx_reg);
+-			}
+-		}
+-
+-		if (rx) {
+-			if (!last_request &&
+-			    mcspi_wait_for_reg_bit(irqstat_reg,
+-						   OMAP2_MCSPI_IRQSTATUS_RX0_FULL) < 0) {
+-				dev_err(&spi->dev, "RX_FULL timed out\n");
+-				goto out;
+-			}
+-			writel_relaxed(OMAP2_MCSPI_IRQSTATUS_RX0_FULL, irqstat_reg);
+-
+-			for (iter = 0; iter < cwc; iter++, rx += bytes_per_word) {
+-				if (last_request &&
+-				    mcspi_wait_for_reg_bit(chstat_reg,
+-							   OMAP2_MCSPI_CHSTAT_RXS) < 0) {
+-					dev_err(&spi->dev, "RXS timed out\n");
+-					goto out;
+-				}
+-				if (bytes_per_word == 1)
+-					*rx = readl_relaxed(rx_reg);
+-				else if (bytes_per_word == 2)
+-					*((u16 *)rx) = readl_relaxed(rx_reg);
+-				else if (bytes_per_word == 4)
+-					*((u32 *)rx) = readl_relaxed(rx_reg);
+-			}
+-		}
+-
+-		if (last_request) {
+-			if (mcspi_wait_for_reg_bit(chstat_reg,
+-						   OMAP2_MCSPI_CHSTAT_EOT) < 0) {
+-				dev_err(&spi->dev, "EOT timed out\n");
+-				goto out;
+-			}
+-			if (mcspi_wait_for_reg_bit(chstat_reg,
+-						   OMAP2_MCSPI_CHSTAT_TXFFE) < 0) {
+-				dev_err(&spi->dev, "TXFFE timed out\n");
+-				goto out;
+-			}
+-			omap2_mcspi_set_enable(spi, 0);
+-		}
+-		c -= cwc * bytes_per_word;
+-	} while (c >= bytes_per_word);
+-
+-out:
+-	omap2_mcspi_set_enable(spi, 1);
+-	return count - c;
+-}
+-
+ static u32 omap2_mcspi_calc_divisor(u32 speed_hz, u32 ref_clk_hz)
+ {
+ 	u32 div;
+@@ -1323,9 +1206,7 @@ static int omap2_mcspi_transfer_one(struct spi_controller *ctlr,
+ 		if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) &&
+ 		    ctlr->cur_msg_mapped &&
+ 		    ctlr->can_dma(ctlr, spi, t))
+-			omap2_mcspi_set_fifo(spi, t, 1, 1);
+-		else if (t->len > OMAP2_MCSPI_MAX_FIFODEPTH)
+-			omap2_mcspi_set_fifo(spi, t, 1, 0);
++			omap2_mcspi_set_fifo(spi, t, 1);
+ 
+ 		omap2_mcspi_set_enable(spi, 1);
+ 
+@@ -1338,8 +1219,6 @@ static int omap2_mcspi_transfer_one(struct spi_controller *ctlr,
+ 		    ctlr->cur_msg_mapped &&
+ 		    ctlr->can_dma(ctlr, spi, t))
+ 			count = omap2_mcspi_txrx_dma(spi, t);
+-		else if (mcspi->fifo_depth > 0)
+-			count = omap2_mcspi_txrx_piofifo(spi, t);
+ 		else
+ 			count = omap2_mcspi_txrx_pio(spi, t);
+ 
+@@ -1352,7 +1231,7 @@ static int omap2_mcspi_transfer_one(struct spi_controller *ctlr,
+ 	omap2_mcspi_set_enable(spi, 0);
+ 
+ 	if (mcspi->fifo_depth > 0)
+-		omap2_mcspi_set_fifo(spi, t, 0, 0);
++		omap2_mcspi_set_fifo(spi, t, 0);
+ 
+ out:
+ 	/* Restore defaults if they were overriden */
+@@ -1375,7 +1254,7 @@ static int omap2_mcspi_transfer_one(struct spi_controller *ctlr,
+ 		omap2_mcspi_set_cs(spi, !(spi->mode & SPI_CS_HIGH));
+ 
+ 	if (mcspi->fifo_depth > 0 && t)
+-		omap2_mcspi_set_fifo(spi, t, 0, 0);
++		omap2_mcspi_set_fifo(spi, t, 0);
+ 
+ 	return status;
+ }
+diff --git a/drivers/spi/spi-ppc4xx.c b/drivers/spi/spi-ppc4xx.c
+index 03aab661be9d3..e982d3189fdce 100644
+--- a/drivers/spi/spi-ppc4xx.c
++++ b/drivers/spi/spi-ppc4xx.c
+@@ -166,10 +166,8 @@ static int spi_ppc4xx_setupxfer(struct spi_device *spi, struct spi_transfer *t)
+ 	int scr;
+ 	u8 cdm = 0;
+ 	u32 speed;
+-	u8 bits_per_word;
+ 
+ 	/* Start with the generic configuration for this device. */
+-	bits_per_word = spi->bits_per_word;
+ 	speed = spi->max_speed_hz;
+ 
+ 	/*
+@@ -177,9 +175,6 @@ static int spi_ppc4xx_setupxfer(struct spi_device *spi, struct spi_transfer *t)
+ 	 * the transfer to overwrite the generic configuration with zeros.
+ 	 */
+ 	if (t) {
+-		if (t->bits_per_word)
+-			bits_per_word = t->bits_per_word;
+-
+ 		if (t->speed_hz)
+ 			speed = min(t->speed_hz, spi->max_speed_hz);
+ 	}
+diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c
+index e748a5d04e970..9149d41fe65b7 100644
+--- a/drivers/staging/iio/impedance-analyzer/ad5933.c
++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c
+@@ -608,7 +608,7 @@ static void ad5933_work(struct work_struct *work)
+ 		struct ad5933_state, work.work);
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(st->client);
+ 	__be16 buf[2];
+-	int val[2];
++	u16 val[2];
+ 	unsigned char status;
+ 	int ret;
+ 
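
Changing val[2] from int to u16 above is presumably a layout fix rather than cosmetics: the converted samples are consumed as packed 16-bit scan elements, and an int array doubles the element stride so a byte-wise consumer reads the wrong data. A standalone illustration of the stride mismatch (output described for a little-endian host):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint16_t ok[2]    = { 0x1234, 0x5678 };
	int      wrong[2] = { 0x1234, 0x5678 };
	uint8_t buf[4];

	/* Consumer expects two packed 16-bit samples (4 bytes total). */
	memcpy(buf, ok, sizeof(buf));		/* LE: 34 12 78 56 */
	printf("u16: %02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);

	/* With int elements the same copy grabs one value plus padding. */
	memcpy(buf, wrong, sizeof(buf));	/* LE: 34 12 00 00 */
	printf("int: %02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
	return 0;
}
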
+diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
+index 87e4795275fe6..6f798f6a2b848 100644
+--- a/drivers/thunderbolt/tb_regs.h
++++ b/drivers/thunderbolt/tb_regs.h
+@@ -203,7 +203,7 @@ struct tb_regs_switch_header {
+ #define ROUTER_CS_5_WOP				BIT(1)
+ #define ROUTER_CS_5_WOU				BIT(2)
+ #define ROUTER_CS_5_WOD				BIT(3)
+-#define ROUTER_CS_5_C3S				BIT(23)
++#define ROUTER_CS_5_CNS				BIT(23)
+ #define ROUTER_CS_5_PTO				BIT(24)
+ #define ROUTER_CS_5_UTO				BIT(25)
+ #define ROUTER_CS_5_HCO				BIT(26)
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index f8f0d24ff6e46..1515eff8cc3e2 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -290,7 +290,7 @@ int usb4_switch_setup(struct tb_switch *sw)
+ 	}
+ 
+ 	/* TBT3 supported by the CM */
+-	val |= ROUTER_CS_5_C3S;
++	val &= ~ROUTER_CS_5_CNS;
+ 
+ 	return tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
+ }
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 97e4965b73d40..07f7b48bad13d 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -237,6 +237,14 @@
+ #define MAX310x_REV_MASK		(0xf8)
+ #define MAX310X_WRITE_BIT		0x80
+ 
++/* Port startup definitions */
++#define MAX310X_PORT_STARTUP_WAIT_RETRIES	20 /* Number of retries */
++#define MAX310X_PORT_STARTUP_WAIT_DELAY_MS	10 /* Delay between retries */
++
++/* Crystal-related definitions */
++#define MAX310X_XTAL_WAIT_RETRIES	20 /* Number of retries */
++#define MAX310X_XTAL_WAIT_DELAY_MS	10 /* Delay between retries */
++
+ /* MAX3107 specific */
+ #define MAX3107_REV_ID			(0xa0)
+ 
+@@ -583,7 +591,7 @@ static int max310x_update_best_err(unsigned long f, long *besterr)
+ 	return 1;
+ }
+ 
+-static u32 max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
++static s32 max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
+ 			       unsigned long freq, bool xtal)
+ {
+ 	unsigned int div, clksrc, pllcfg = 0;
+@@ -641,12 +649,20 @@ static u32 max310x_set_ref_clk(struct device *dev, struct max310x_port *s,
+ 
+ 	/* Wait for crystal */
+ 	if (xtal) {
+-		unsigned int val;
+-		msleep(10);
+-		regmap_read(s->regmap, MAX310X_STS_IRQSTS_REG, &val);
+-		if (!(val & MAX310X_STS_CLKREADY_BIT)) {
+-			dev_warn(dev, "clock is not stable yet\n");
+-		}
++		bool stable = false;
++		unsigned int try = 0, val = 0;
++
++		do {
++			msleep(MAX310X_XTAL_WAIT_DELAY_MS);
++			regmap_read(s->regmap, MAX310X_STS_IRQSTS_REG, &val);
++
++			if (val & MAX310X_STS_CLKREADY_BIT)
++				stable = true;
++		} while (!stable && (++try < MAX310X_XTAL_WAIT_RETRIES));
++
++		if (!stable)
++			return dev_err_probe(dev, -EAGAIN,
++					     "clock is not stable\n");
+ 	}
+ 
+ 	return bestfreq;
+@@ -1271,7 +1287,7 @@ static int max310x_probe(struct device *dev, const struct max310x_devtype *devty
+ {
+ 	int i, ret, fmin, fmax, freq;
+ 	struct max310x_port *s;
+-	u32 uartclk = 0;
++	s32 uartclk = 0;
+ 	bool xtal;
+ 
+ 	for (i = 0; i < devtype->nr; i++)
+@@ -1334,6 +1350,9 @@ static int max310x_probe(struct device *dev, const struct max310x_devtype *devty
+ 		goto out_clk;
+ 
+ 	for (i = 0; i < devtype->nr; i++) {
++		bool started = false;
++		unsigned int try = 0, val = 0;
++
+ 		/* Reset port */
+ 		regmap_write(regmaps[i], MAX310X_MODE2_REG,
+ 			     MAX310X_MODE2_RST_BIT);
+@@ -1342,13 +1361,27 @@ static int max310x_probe(struct device *dev, const struct max310x_devtype *devty
+ 
+ 		/* Wait for port startup */
+ 		do {
+-			regmap_read(regmaps[i], MAX310X_BRGDIVLSB_REG, &ret);
+-		} while (ret != 0x01);
++			msleep(MAX310X_PORT_STARTUP_WAIT_DELAY_MS);
++			regmap_read(regmaps[i], MAX310X_BRGDIVLSB_REG, &val);
++
++			if (val == 0x01)
++				started = true;
++		} while (!started && (++try < MAX310X_PORT_STARTUP_WAIT_RETRIES));
++
++		if (!started) {
++			ret = dev_err_probe(dev, -EAGAIN, "port reset failed\n");
++			goto out_uart;
++		}
+ 
+ 		regmap_write(regmaps[i], MAX310X_MODE1_REG, devtype->mode1);
+ 	}
+ 
+ 	uartclk = max310x_set_ref_clk(dev, s, freq, xtal);
++	if (uartclk < 0) {
++		ret = uartclk;
++		goto out_uart;
++	}
++
+ 	dev_dbg(dev, "Reference clock set to %i Hz\n", uartclk);
+ 
+ 	for (i = 0; i < devtype->nr; i++) {
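
Both max310x waits above are converted to the same bounded-poll shape: sleep, read back, and give up with an error after a fixed retry budget, rather than spinning forever (the old port-startup loop) or warning once and carrying on (the old crystal wait). The skeleton, with hypothetical foo_* names; the kernel's read_poll_timeout() helper expresses the same idea more compactly:

#include <linux/bits.h>
#include <linux/delay.h>
#include <linux/errno.h>

#define FOO_WAIT_RETRIES	20	/* hypothetical retry budget */
#define FOO_WAIT_DELAY_MS	10	/* hypothetical delay per retry */
#define FOO_READY_BIT		BIT(0)	/* hypothetical ready flag */

struct foo;
extern void foo_read_status(struct foo *f, unsigned int *val);	/* hypothetical */

static int foo_wait_ready(struct foo *f)
{
	unsigned int try = 0, val = 0;

	do {
		msleep(FOO_WAIT_DELAY_MS);
		foo_read_status(f, &val);
		if (val & FOO_READY_BIT)
			return 0;
	} while (++try < FOO_WAIT_RETRIES);

	return -EAGAIN;
}
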
+diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
+index 8eeecf8ad3596..380a8b0590e34 100644
+--- a/drivers/tty/serial/mxs-auart.c
++++ b/drivers/tty/serial/mxs-auart.c
+@@ -605,13 +605,16 @@ static void mxs_auart_tx_chars(struct mxs_auart_port *s)
+ 		return;
+ 	}
+ 
+-	pending = uart_port_tx(&s->port, ch,
++	pending = uart_port_tx_flags(&s->port, ch, UART_TX_NOSTOP,
+ 		!(mxs_read(s, REG_STAT) & AUART_STAT_TXFF),
+ 		mxs_write(ch, s, REG_DATA));
+ 	if (pending)
+ 		mxs_set(AUART_INTR_TXIEN, s, REG_INTR);
+ 	else
+ 		mxs_clr(AUART_INTR_TXIEN, s, REG_INTR);
++
++	if (uart_tx_stopped(&s->port))
++		mxs_auart_stop_tx(&s->port);
+ }
+ 
+ static void mxs_auart_rx_char(struct mxs_auart_port *s)
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 93e4e1693601f..39fa9ae5099bf 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1085,8 +1085,8 @@ static int uart_tiocmget(struct tty_struct *tty)
+ 		goto out;
+ 
+ 	if (!tty_io_error(tty)) {
+-		result = uport->mctrl;
+ 		uart_port_lock_irq(uport);
++		result = uport->mctrl;
+ 		result |= uport->ops->get_mctrl(uport);
+ 		uart_port_unlock_irq(uport);
+ 	}
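The serial_core hunk is a small race fix: the cached uport->mctrl value is written by other paths under the port lock, so the reader must sample it inside the same critical section rather than just before taking it. A userspace analogue with a pthread mutex (struct and field names are illustrative):

#include <pthread.h>

struct port {
	pthread_mutex_t lock;
	unsigned int mctrl;	/* cached modem-control state, written under lock */
};

static unsigned int port_get_mctrl(struct port *p)
{
	unsigned int result;

	pthread_mutex_lock(&p->lock);
	result = p->mctrl;	/* sampled inside the critical section */
	/* ... OR in live hardware state here, still under the lock ... */
	pthread_mutex_unlock(&p->lock);

	return result;
}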
+diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
+index d9bb3d3f026e6..2a38e1eb65466 100644
+--- a/drivers/usb/chipidea/ci.h
++++ b/drivers/usb/chipidea/ci.h
+@@ -176,6 +176,7 @@ struct hw_bank {
+  * @enabled_otg_timer_bits: bits of enabled otg timers
+  * @next_otg_timer: next nearest enabled timer to be expired
+  * @work: work for role changing
++ * @power_lost_work: work for power lost handling
+  * @wq: workqueue thread
+  * @qh_pool: allocation pool for queue heads
+  * @td_pool: allocation pool for transfer descriptors
+@@ -226,6 +227,7 @@ struct ci_hdrc {
+ 	enum otg_fsm_timer		next_otg_timer;
+ 	struct usb_role_switch		*role_switch;
+ 	struct work_struct		work;
++	struct work_struct		power_lost_work;
+ 	struct workqueue_struct		*wq;
+ 
+ 	struct dma_pool			*qh_pool;
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 85e9c3ab66e94..ca71df4f32e4c 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -856,6 +856,27 @@ static int ci_extcon_register(struct ci_hdrc *ci)
+ 	return 0;
+ }
+ 
++static void ci_power_lost_work(struct work_struct *work)
++{
++	struct ci_hdrc *ci = container_of(work, struct ci_hdrc, power_lost_work);
++	enum ci_role role;
++
++	disable_irq_nosync(ci->irq);
++	pm_runtime_get_sync(ci->dev);
++	if (!ci_otg_is_fsm_mode(ci)) {
++		role = ci_get_role(ci);
++
++		if (ci->role != role) {
++			ci_handle_id_switch(ci);
++		} else if (role == CI_ROLE_GADGET) {
++			if (ci->is_otg && hw_read_otgsc(ci, OTGSC_BSV))
++				usb_gadget_vbus_connect(&ci->gadget);
++		}
++	}
++	pm_runtime_put_sync(ci->dev);
++	enable_irq(ci->irq);
++}
++
+ static DEFINE_IDA(ci_ida);
+ 
+ struct platform_device *ci_hdrc_add_device(struct device *dev,
+@@ -1045,6 +1066,8 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 
+ 	spin_lock_init(&ci->lock);
+ 	mutex_init(&ci->mutex);
++	INIT_WORK(&ci->power_lost_work, ci_power_lost_work);
++
+ 	ci->dev = dev;
+ 	ci->platdata = dev_get_platdata(dev);
+ 	ci->imx28_write_fix = !!(ci->platdata->flags &
+@@ -1396,25 +1419,6 @@ static int ci_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static void ci_handle_power_lost(struct ci_hdrc *ci)
+-{
+-	enum ci_role role;
+-
+-	disable_irq_nosync(ci->irq);
+-	if (!ci_otg_is_fsm_mode(ci)) {
+-		role = ci_get_role(ci);
+-
+-		if (ci->role != role) {
+-			ci_handle_id_switch(ci);
+-		} else if (role == CI_ROLE_GADGET) {
+-			if (ci->is_otg && hw_read_otgsc(ci, OTGSC_BSV))
+-				usb_gadget_vbus_connect(&ci->gadget);
+-		}
+-	}
+-
+-	enable_irq(ci->irq);
+-}
+-
+ static int ci_resume(struct device *dev)
+ {
+ 	struct ci_hdrc *ci = dev_get_drvdata(dev);
+@@ -1446,7 +1450,7 @@ static int ci_resume(struct device *dev)
+ 		ci_role(ci)->resume(ci, power_lost);
+ 
+ 	if (power_lost)
+-		ci_handle_power_lost(ci);
++		queue_work(system_freezable_wq, &ci->power_lost_work);
+ 
+ 	if (ci->supports_runtime_pm) {
+ 		pm_runtime_disable(dev);
+diff --git a/drivers/usb/common/ulpi.c b/drivers/usb/common/ulpi.c
+index 84d91b1c1eed5..0886b19d2e1c8 100644
+--- a/drivers/usb/common/ulpi.c
++++ b/drivers/usb/common/ulpi.c
+@@ -301,7 +301,7 @@ static int ulpi_register(struct device *dev, struct ulpi *ulpi)
+ 		return ret;
+ 	}
+ 
+-	root = debugfs_create_dir(dev_name(dev), ulpi_root);
++	root = debugfs_create_dir(dev_name(&ulpi->dev), ulpi_root);
+ 	debugfs_create_file("regs", 0444, root, ulpi, &ulpi_regs_fops);
+ 
+ 	dev_dbg(&ulpi->dev, "registered ULPI PHY: vendor %04x, product %04x\n",
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index ef8d9bda94ac1..4854d883e601d 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2047,9 +2047,19 @@ static void update_port_device_state(struct usb_device *udev)
+ 
+ 	if (udev->parent) {
+ 		hub = usb_hub_to_struct_hub(udev->parent);
+-		port_dev = hub->ports[udev->portnum - 1];
+-		WRITE_ONCE(port_dev->state, udev->state);
+-		sysfs_notify_dirent(port_dev->state_kn);
++
++		/*
++		 * The Link Layer Validation System Driver (lvstest)
++		 * has a test step to unbind the hub before running the
++		 * rest of the procedure. This triggers hub_disconnect
++		 * which will set the hub's maxchild to 0, further
++		 * resulting in usb_hub_to_struct_hub returning NULL.
++		 */
++		if (hub) {
++			port_dev = hub->ports[udev->portnum - 1];
++			WRITE_ONCE(port_dev->state, udev->state);
++			sysfs_notify_dirent(port_dev->state_kn);
++		}
+ 	}
+ }
+ 
+@@ -2382,17 +2392,25 @@ static int usb_enumerate_device_otg(struct usb_device *udev)
+ 			}
+ 		} else if (desc->bLength == sizeof
+ 				(struct usb_otg_descriptor)) {
+-			/* Set a_alt_hnp_support for legacy otg device */
+-			err = usb_control_msg(udev,
+-				usb_sndctrlpipe(udev, 0),
+-				USB_REQ_SET_FEATURE, 0,
+-				USB_DEVICE_A_ALT_HNP_SUPPORT,
+-				0, NULL, 0,
+-				USB_CTRL_SET_TIMEOUT);
+-			if (err < 0)
+-				dev_err(&udev->dev,
+-					"set a_alt_hnp_support failed: %d\n",
+-					err);
++			/*
++			 * We are operating on a legacy OTG device.
++			 * It should be told that it is operating
++			 * on the wrong port if we have another port that
++			 * does support HNP.
++			 */
++			if (bus->otg_port != 0) {
++				/* Set a_alt_hnp_support for legacy otg device */
++				err = usb_control_msg(udev,
++					usb_sndctrlpipe(udev, 0),
++					USB_REQ_SET_FEATURE, 0,
++					USB_DEVICE_A_ALT_HNP_SUPPORT,
++					0, NULL, 0,
++					USB_CTRL_SET_TIMEOUT);
++				if (err < 0)
++					dev_err(&udev->dev,
++						"set a_alt_hnp_support failed: %d\n",
++						err);
++			}
+ 		}
+ 	}
+ #endif
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 89de363ecf8bb..4c8dd67246788 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -4703,15 +4703,13 @@ int dwc3_gadget_suspend(struct dwc3 *dwc)
+ 	unsigned long flags;
+ 	int ret;
+ 
+-	if (!dwc->gadget_driver)
+-		return 0;
+-
+ 	ret = dwc3_gadget_soft_disconnect(dwc);
+ 	if (ret)
+ 		goto err;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
+-	dwc3_disconnect_gadget(dwc);
++	if (dwc->gadget_driver)
++		dwc3_disconnect_gadget(dwc);
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
+ 	return 0;
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index 722a3ab2b3379..c265a1f62fc14 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -545,21 +545,37 @@ static int start_transfer(struct fsg_dev *fsg, struct usb_ep *ep,
+ 
+ static bool start_in_transfer(struct fsg_common *common, struct fsg_buffhd *bh)
+ {
++	int rc;
++
+ 	if (!fsg_is_set(common))
+ 		return false;
+ 	bh->state = BUF_STATE_SENDING;
+-	if (start_transfer(common->fsg, common->fsg->bulk_in, bh->inreq))
++	rc = start_transfer(common->fsg, common->fsg->bulk_in, bh->inreq);
++	if (rc) {
+ 		bh->state = BUF_STATE_EMPTY;
++		if (rc == -ESHUTDOWN) {
++			common->running = 0;
++			return false;
++		}
++	}
+ 	return true;
+ }
+ 
+ static bool start_out_transfer(struct fsg_common *common, struct fsg_buffhd *bh)
+ {
++	int rc;
++
+ 	if (!fsg_is_set(common))
+ 		return false;
+ 	bh->state = BUF_STATE_RECEIVING;
+-	if (start_transfer(common->fsg, common->fsg->bulk_out, bh->outreq))
++	rc = start_transfer(common->fsg, common->fsg->bulk_out, bh->outreq);
++	if (rc) {
+ 		bh->state = BUF_STATE_FULL;
++		if (rc == -ESHUTDOWN) {
++			common->running = 0;
++			return false;
++		}
++	}
+ 	return true;
+ }
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index a1487bc670f15..bfb6f9481e87f 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4862,7 +4862,8 @@ static void run_state_machine(struct tcpm_port *port)
+ 		break;
+ 	case PORT_RESET:
+ 		tcpm_reset_port(port);
+-		tcpm_set_cc(port, TYPEC_CC_OPEN);
++		tcpm_set_cc(port, tcpm_default_state(port) == SNK_UNATTACHED ?
++			    TYPEC_CC_RD : tcpm_rp_cc(port));
+ 		tcpm_set_state(port, PORT_RESET_WAIT_OFF,
+ 			       PD_T_ERROR_RECOVERY);
+ 		break;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 61b64558f96c5..8f9dff993b3da 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -935,7 +935,9 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ 
+ 	clear_bit(EVENT_PENDING, &con->ucsi->flags);
+ 
++	mutex_lock(&ucsi->ppm_lock);
+ 	ret = ucsi_acknowledge_connector_change(ucsi);
++	mutex_unlock(&ucsi->ppm_lock);
+ 	if (ret)
+ 		dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret);
+ 
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 6bbf490ac4010..fa222080887d5 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -73,9 +73,13 @@ static int ucsi_acpi_sync_write(struct ucsi *ucsi, unsigned int offset,
+ 				const void *val, size_t val_len)
+ {
+ 	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
++	bool ack = UCSI_COMMAND(*(u64 *)val) == UCSI_ACK_CC_CI;
+ 	int ret;
+ 
+-	set_bit(COMMAND_PENDING, &ua->flags);
++	if (ack)
++		set_bit(ACK_PENDING, &ua->flags);
++	else
++		set_bit(COMMAND_PENDING, &ua->flags);
+ 
+ 	ret = ucsi_acpi_async_write(ucsi, offset, val, val_len);
+ 	if (ret)
+@@ -85,7 +89,10 @@ static int ucsi_acpi_sync_write(struct ucsi *ucsi, unsigned int offset,
+ 		ret = -ETIMEDOUT;
+ 
+ out_clear_bit:
+-	clear_bit(COMMAND_PENDING, &ua->flags);
++	if (ack)
++		clear_bit(ACK_PENDING, &ua->flags);
++	else
++		clear_bit(COMMAND_PENDING, &ua->flags);
+ 
+ 	return ret;
+ }
+@@ -142,8 +149,10 @@ static void ucsi_acpi_notify(acpi_handle handle, u32 event, void *data)
+ 	if (UCSI_CCI_CONNECTOR(cci))
+ 		ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci));
+ 
+-	if (test_bit(COMMAND_PENDING, &ua->flags) &&
+-	    cci & (UCSI_CCI_ACK_COMPLETE | UCSI_CCI_COMMAND_COMPLETE))
++	if (cci & UCSI_CCI_ACK_COMPLETE && test_bit(ACK_PENDING, &ua->flags))
++		complete(&ua->complete);
++	if (cci & UCSI_CCI_COMMAND_COMPLETE &&
++	    test_bit(COMMAND_PENDING, &ua->flags))
+ 		complete(&ua->complete);
+ }
+ 
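The ucsi_acpi change splits the single COMMAND_PENDING bit into separate ACK and command bits, so a completion event only wakes the waiter it actually belongs to; previously an ACK completion could satisfy a waiter that was still waiting for its command to finish. A compact sketch of the matching logic (flag and event bit names below are illustrative, not the real UCSI ABI):

#include <stdatomic.h>
#include <stdbool.h>

#define EV_CMD_COMPLETE	(1u << 0)
#define EV_ACK_COMPLETE	(1u << 1)

#define F_CMD_PENDING	(1u << 0)
#define F_ACK_PENDING	(1u << 1)

/* Wake a waiter only if the event type matches what it is waiting for. */
static bool should_complete(atomic_uint *flags, unsigned int event)
{
	unsigned int f = atomic_load(flags);

	if ((event & EV_ACK_COMPLETE) && (f & F_ACK_PENDING))
		return true;
	if ((event & EV_CMD_COMPLETE) && (f & F_CMD_PENDING))
		return true;

	return false;
}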
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index b8cfea7812d6b..3b9f080109d7e 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -923,8 +923,8 @@ static void shutdown_pirq(struct irq_data *data)
+ 		return;
+ 
+ 	do_mask(info, EVT_MASK_REASON_EXPLICIT);
+-	xen_evtchn_close(evtchn);
+ 	xen_irq_info_cleanup(info);
++	xen_evtchn_close(evtchn);
+ }
+ 
+ static void enable_pirq(struct irq_data *data)
+@@ -956,6 +956,7 @@ EXPORT_SYMBOL_GPL(xen_irq_from_gsi);
+ static void __unbind_from_irq(struct irq_info *info, unsigned int irq)
+ {
+ 	evtchn_port_t evtchn;
++	bool close_evtchn = false;
+ 
+ 	if (!info) {
+ 		xen_irq_free_desc(irq);
+@@ -975,7 +976,7 @@ static void __unbind_from_irq(struct irq_info *info, unsigned int irq)
+ 		struct xenbus_device *dev;
+ 
+ 		if (!info->is_static)
+-			xen_evtchn_close(evtchn);
++			close_evtchn = true;
+ 
+ 		switch (info->type) {
+ 		case IRQT_VIRQ:
+@@ -995,6 +996,9 @@ static void __unbind_from_irq(struct irq_info *info, unsigned int irq)
+ 		}
+ 
+ 		xen_irq_info_cleanup(info);
++
++		if (close_evtchn)
++			xen_evtchn_close(evtchn);
+ 	}
+ 
+ 	xen_free_irq(info);
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 6e5dc68ff661f..aca24186d66bc 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1467,6 +1467,7 @@ static bool clean_pinned_extents(struct btrfs_trans_handle *trans,
+  */
+ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ {
++	LIST_HEAD(retry_list);
+ 	struct btrfs_block_group *block_group;
+ 	struct btrfs_space_info *space_info;
+ 	struct btrfs_trans_handle *trans;
+@@ -1488,6 +1489,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 
+ 	spin_lock(&fs_info->unused_bgs_lock);
+ 	while (!list_empty(&fs_info->unused_bgs)) {
++		u64 used;
+ 		int trimming;
+ 
+ 		block_group = list_first_entry(&fs_info->unused_bgs,
+@@ -1523,9 +1525,9 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 			goto next;
+ 		}
+ 
++		spin_lock(&space_info->lock);
+ 		spin_lock(&block_group->lock);
+-		if (block_group->reserved || block_group->pinned ||
+-		    block_group->used || block_group->ro ||
++		if (btrfs_is_block_group_used(block_group) || block_group->ro ||
+ 		    list_is_singular(&block_group->list)) {
+ 			/*
+ 			 * We want to bail if we made new allocations or have
+@@ -1535,10 +1537,49 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 			 */
+ 			trace_btrfs_skip_unused_block_group(block_group);
+ 			spin_unlock(&block_group->lock);
++			spin_unlock(&space_info->lock);
+ 			up_write(&space_info->groups_sem);
+ 			goto next;
+ 		}
++
++		/*
++		 * The block group may be unused but there may be reserved space
++		 * accounted to it due to its existence, that is,
++		 * space_info->bytes_may_use was incremented by a task but no
++		 * space was yet allocated from the block group by the task.
++		 * That space may or may not be allocated, as we are generally
++		 * pessimistic about space reservation for metadata as well as
++		 * for data when using compression (as we reserve space based on
++		 * the worst case, when data can't be compressed, and before
++		 * actually attempting compression, before starting writeback).
++		 *
++		 * So check if the total space of the space_info minus the size
++		 * of this block group is less than the used space of the
++		 * space_info - if that's the case, then it means we have tasks
++		 * that might be relying on the block group in order to allocate
++		 * extents, and add back the block group to the unused list when
++		 * we finish, so that we retry later in case no tasks ended up
++		 * needing to allocate extents from the block group.
++		 */
++		used = btrfs_space_info_used(space_info, true);
++		if (space_info->total_bytes - block_group->length < used) {
++			/*
++			 * Add a reference for the list, compensate for the ref
++			 * drop under the "next" label for the
++			 * fs_info->unused_bgs list.
++			 */
++			btrfs_get_block_group(block_group);
++			list_add_tail(&block_group->bg_list, &retry_list);
++
++			trace_btrfs_skip_unused_block_group(block_group);
++			spin_unlock(&block_group->lock);
++			spin_unlock(&space_info->lock);
++			up_write(&space_info->groups_sem);
++			goto next;
++		}
++
+ 		spin_unlock(&block_group->lock);
++		spin_unlock(&space_info->lock);
+ 
+ 		/* We don't want to force the issue, only flip if it's ok. */
+ 		ret = inc_block_group_ro(block_group, 0);
+@@ -1662,12 +1703,16 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		btrfs_put_block_group(block_group);
+ 		spin_lock(&fs_info->unused_bgs_lock);
+ 	}
++	list_splice_tail(&retry_list, &fs_info->unused_bgs);
+ 	spin_unlock(&fs_info->unused_bgs_lock);
+ 	mutex_unlock(&fs_info->reclaim_bgs_lock);
+ 	return;
+ 
+ flip_async:
+ 	btrfs_end_transaction(trans);
++	spin_lock(&fs_info->unused_bgs_lock);
++	list_splice_tail(&retry_list, &fs_info->unused_bgs);
++	spin_unlock(&fs_info->unused_bgs_lock);
+ 	mutex_unlock(&fs_info->reclaim_bgs_lock);
+ 	btrfs_put_block_group(block_group);
+ 	btrfs_discard_punt_unused_bgs_list(fs_info);
+@@ -2712,6 +2757,37 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ 		btrfs_dec_delayed_refs_rsv_bg_inserts(fs_info);
+ 		list_del_init(&block_group->bg_list);
+ 		clear_bit(BLOCK_GROUP_FLAG_NEW, &block_group->runtime_flags);
++
++		/*
++		 * If the block group is still unused, add it to the list of
++		 * unused block groups. The block group may have been created in
++		 * order to satisfy a space reservation, in which case the
++		 * extent allocation only happens later. But often we don't
++		 * actually need to allocate space that we previously reserved,
++		 * so the block group may become unused for a long time. For
++		 * example for metadata we generally reserve space for a worst
++		 * possible scenario, but then don't end up allocating all that
++		 * space or none at all (due to no need to COW, extent buffers
++		 * were already COWed in the current transaction and still
++		 * unwritten, tree heights lower than the maximum possible
++		 * height, etc). For data we generally reserve the exact amount
++		 * of space we are going to allocate later, the exception is
++		 * when using compression, as we must reserve space based on the
++		 * uncompressed data size, because the compression is only done
++		 * when writeback triggered and we don't know how much space we
++		 * are actually going to need, so we reserve the uncompressed
++		 * size because the data may be uncompressible in the worst case.
++		 */
++		if (ret == 0) {
++			bool used;
++
++			spin_lock(&block_group->lock);
++			used = btrfs_is_block_group_used(block_group);
++			spin_unlock(&block_group->lock);
++
++			if (!used)
++				btrfs_mark_bg_unused(block_group);
++		}
+ 	}
+ 	btrfs_trans_release_chunk_metadata(trans);
+ }
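The block-group change above introduces a retry_list: block groups that look unused but may still back an outstanding reservation are parked locally and spliced back onto fs_info->unused_bgs at the end, so they are reconsidered on a later pass instead of being deleted out from under a task. The pattern in miniature (singly linked list, illustrative fields, not the btrfs types):

#include <stdbool.h>
#include <stddef.h>

struct node {
	struct node *next;
	bool maybe_needed;	/* stand-in for "reservations may use this" */
};

static void delete_entry(struct node *n) { (void)n; /* ... */ }

/* Drain *list; park entries we cannot safely delete yet, retry later. */
static void process_unused(struct node **list)
{
	struct node *retry = NULL, *n;

	while ((n = *list) != NULL) {
		*list = n->next;
		if (n->maybe_needed) {
			n->next = retry;	/* park on the local list */
			retry = n;
			continue;
		}
		delete_entry(n);
	}
	*list = retry;	/* splice back: retried on the next invocation */
}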
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index 2bdbcb834f954..089979981e4aa 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -255,6 +255,13 @@ static inline u64 btrfs_block_group_end(struct btrfs_block_group *block_group)
+ 	return (block_group->start + block_group->length);
+ }
+ 
++static inline bool btrfs_is_block_group_used(const struct btrfs_block_group *bg)
++{
++	lockdep_assert_held(&bg->lock);
++
++	return (bg->used > 0 || bg->reserved > 0 || bg->pinned > 0);
++}
++
+ static inline bool btrfs_is_block_group_data_only(
+ 					struct btrfs_block_group *block_group)
+ {
+diff --git a/fs/btrfs/delalloc-space.c b/fs/btrfs/delalloc-space.c
+index 2833e8ef4c098..acf9f4b6c0440 100644
+--- a/fs/btrfs/delalloc-space.c
++++ b/fs/btrfs/delalloc-space.c
+@@ -245,7 +245,6 @@ static void btrfs_calculate_inode_block_rsv_size(struct btrfs_fs_info *fs_info,
+ 	struct btrfs_block_rsv *block_rsv = &inode->block_rsv;
+ 	u64 reserve_size = 0;
+ 	u64 qgroup_rsv_size = 0;
+-	u64 csum_leaves;
+ 	unsigned outstanding_extents;
+ 
+ 	lockdep_assert_held(&inode->lock);
+@@ -260,10 +259,12 @@ static void btrfs_calculate_inode_block_rsv_size(struct btrfs_fs_info *fs_info,
+ 						outstanding_extents);
+ 		reserve_size += btrfs_calc_metadata_size(fs_info, 1);
+ 	}
+-	csum_leaves = btrfs_csum_bytes_to_leaves(fs_info,
+-						 inode->csum_bytes);
+-	reserve_size += btrfs_calc_insert_metadata_size(fs_info,
+-							csum_leaves);
++	if (!(inode->flags & BTRFS_INODE_NODATASUM)) {
++		u64 csum_leaves;
++
++		csum_leaves = btrfs_csum_bytes_to_leaves(fs_info, inode->csum_bytes);
++		reserve_size += btrfs_calc_insert_metadata_size(fs_info, csum_leaves);
++	}
+ 	/*
+ 	 * For qgroup rsv, the calculation is very simple:
+ 	 * account one nodesize for each outstanding extent
+@@ -278,14 +279,20 @@ static void btrfs_calculate_inode_block_rsv_size(struct btrfs_fs_info *fs_info,
+ 	spin_unlock(&block_rsv->lock);
+ }
+ 
+-static void calc_inode_reservations(struct btrfs_fs_info *fs_info,
++static void calc_inode_reservations(struct btrfs_inode *inode,
+ 				    u64 num_bytes, u64 disk_num_bytes,
+ 				    u64 *meta_reserve, u64 *qgroup_reserve)
+ {
++	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ 	u64 nr_extents = count_max_extents(fs_info, num_bytes);
+-	u64 csum_leaves = btrfs_csum_bytes_to_leaves(fs_info, disk_num_bytes);
++	u64 csum_leaves;
+ 	u64 inode_update = btrfs_calc_metadata_size(fs_info, 1);
+ 
++	if (inode->flags & BTRFS_INODE_NODATASUM)
++		csum_leaves = 0;
++	else
++		csum_leaves = btrfs_csum_bytes_to_leaves(fs_info, disk_num_bytes);
++
+ 	*meta_reserve = btrfs_calc_insert_metadata_size(fs_info,
+ 						nr_extents + csum_leaves);
+ 
+@@ -337,7 +344,7 @@ int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes,
+ 	 * everything out and try again, which is bad.  This way we just
+ 	 * over-reserve slightly, and clean up the mess when we are done.
+ 	 */
+-	calc_inode_reservations(fs_info, num_bytes, disk_num_bytes,
++	calc_inode_reservations(inode, num_bytes, disk_num_bytes,
+ 				&meta_reserve, &qgroup_reserve);
+ 	ret = btrfs_qgroup_reserve_meta_prealloc(root, qgroup_reserve, true,
+ 						 noflush);
+@@ -359,7 +366,8 @@ int btrfs_delalloc_reserve_metadata(struct btrfs_inode *inode, u64 num_bytes,
+ 	nr_extents = count_max_extents(fs_info, num_bytes);
+ 	spin_lock(&inode->lock);
+ 	btrfs_mod_outstanding_extents(inode, nr_extents);
+-	inode->csum_bytes += disk_num_bytes;
++	if (!(inode->flags & BTRFS_INODE_NODATASUM))
++		inode->csum_bytes += disk_num_bytes;
+ 	btrfs_calculate_inode_block_rsv_size(fs_info, inode);
+ 	spin_unlock(&inode->lock);
+ 
+@@ -393,7 +401,8 @@ void btrfs_delalloc_release_metadata(struct btrfs_inode *inode, u64 num_bytes,
+ 
+ 	num_bytes = ALIGN(num_bytes, fs_info->sectorsize);
+ 	spin_lock(&inode->lock);
+-	inode->csum_bytes -= num_bytes;
++	if (!(inode->flags & BTRFS_INODE_NODATASUM))
++		inode->csum_bytes -= num_bytes;
+ 	btrfs_calculate_inode_block_rsv_size(fs_info, inode);
+ 	spin_unlock(&inode->lock);
+ 
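Both delalloc hunks implement the same rule: inodes flagged NODATASUM never get checksum items, so sizing metadata reservations by csum leaves only inflates bytes_may_use, and the matching release would then misadjust csum_bytes. Reduced to arithmetic (the helper below is a stand-in for the real btrfs_calc_insert_metadata_size(), with a made-up per-leaf cost):

#include <stdbool.h>
#include <stdint.h>

/* Stand-in for btrfs_calc_insert_metadata_size(): cost per leaf insert. */
static uint64_t insert_cost(uint64_t leaves) { return leaves * 16384; }

static uint64_t meta_reserve(bool nodatasum, uint64_t nr_extents,
			     uint64_t csum_leaves)
{
	/* No checksum items will ever be inserted for NODATASUM inodes,
	 * so they must not contribute to the reservation. */
	if (nodatasum)
		csum_leaves = 0;

	return insert_cost(nr_extents + csum_leaves);
}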
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 62cb97f7c94fa..c03091793aaa4 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1315,8 +1315,17 @@ static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
+ again:
+ 	root = btrfs_lookup_fs_root(fs_info, objectid);
+ 	if (root) {
+-		/* Shouldn't get preallocated anon_dev for cached roots */
+-		ASSERT(!anon_dev);
++		/*
++		 * Some other caller may have read out the newly inserted
++		 * subvolume already (for things like backref walk etc).  Not
++		 * that common but still possible.  In that case, we just need
++		 * to free the anon_dev.
++		 */
++		if (unlikely(anon_dev)) {
++			free_anon_bdev(anon_dev);
++			anon_dev = 0;
++		}
++
+ 		if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
+ 			btrfs_put_root(root);
+ 			return ERR_PTR(-ENOENT);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 97cd57e710cb2..6c2e794efff01 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -3175,8 +3175,23 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
+ 			unwritten_start += logical_len;
+ 		clear_extent_uptodate(io_tree, unwritten_start, end, NULL);
+ 
+-		/* Drop extent maps for the part of the extent we didn't write. */
+-		btrfs_drop_extent_map_range(inode, unwritten_start, end, false);
++		/*
++		 * Drop extent maps for the part of the extent we didn't write.
++		 *
++		 * We have an exception here for the free_space_inode, this is
++		 * because when we do btrfs_get_extent() on the free space inode
++		 * we will search the commit root.  If this is a new block group
++		 * we won't find anything, and we will trip over the assert in
++		 * writepage where we do ASSERT(em->block_start !=
++		 * EXTENT_MAP_HOLE).
++		 *
++		 * Theoretically we could also skip this for any NOCOW extent as
++		 * we don't mess with the extent map tree in the NOCOW case, but
++		 * for now simply skip this if we are the free space inode.
++		 */
++		if (!btrfs_is_free_space_inode(inode))
++			btrfs_drop_extent_map_range(inode, unwritten_start,
++						    end, false);
+ 
+ 		/*
+ 		 * If the ordered extent had an IOERR or something else went
+@@ -10233,6 +10248,13 @@ ssize_t btrfs_do_encoded_write(struct kiocb *iocb, struct iov_iter *from,
+ 	if (encoded->encryption != BTRFS_ENCODED_IO_ENCRYPTION_NONE)
+ 		return -EINVAL;
+ 
++	/*
++	 * Compressed extents should always have checksums, so error out if we
++	 * have a NOCOW file or inode was created while mounted with NODATASUM.
++	 */
++	if (inode->flags & BTRFS_INODE_NODATASUM)
++		return -EINVAL;
++
+ 	orig_count = iov_iter_count(from);
+ 
+ 	/* The extent size must be sane. */
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 69e91341fd637..0bd43b863c43a 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3815,6 +3815,11 @@ static long btrfs_ioctl_qgroup_create(struct file *file, void __user *arg)
+ 		goto out;
+ 	}
+ 
++	if (sa->create && is_fstree(sa->qgroupid)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	trans = btrfs_join_transaction(root);
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index e46774e8f49fd..bbfa44b89bc45 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1736,6 +1736,15 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 	return ret;
+ }
+ 
++static bool qgroup_has_usage(struct btrfs_qgroup *qgroup)
++{
++	return (qgroup->rfer > 0 || qgroup->rfer_cmpr > 0 ||
++		qgroup->excl > 0 || qgroup->excl_cmpr > 0 ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] > 0 ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] > 0 ||
++		qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS] > 0);
++}
++
+ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+@@ -1755,6 +1764,11 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 		goto out;
+ 	}
+ 
++	if (is_fstree(qgroupid) && qgroup_has_usage(qgroup)) {
++		ret = -EBUSY;
++		goto out;
++	}
++
+ 	/* Check if there are no children of this qgroup */
+ 	if (!list_empty(&qgroup->members)) {
+ 		ret = -EBUSY;
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 4e36550618e58..77b182225809e 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -8111,7 +8111,7 @@ long btrfs_ioctl_send(struct inode *inode, struct btrfs_ioctl_send_args *arg)
+ 	}
+ 
+ 	if (arg->flags & ~BTRFS_SEND_FLAG_MASK) {
+-		ret = -EINVAL;
++		ret = -EOPNOTSUPP;
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 5b3333ceef048..c52807d97efa5 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -564,56 +564,22 @@ static int btrfs_reserve_trans_metadata(struct btrfs_fs_info *fs_info,
+ 					u64 num_bytes,
+ 					u64 *delayed_refs_bytes)
+ {
+-	struct btrfs_block_rsv *delayed_refs_rsv = &fs_info->delayed_refs_rsv;
+ 	struct btrfs_space_info *si = fs_info->trans_block_rsv.space_info;
+-	u64 extra_delayed_refs_bytes = 0;
+-	u64 bytes;
++	u64 bytes = num_bytes + *delayed_refs_bytes;
+ 	int ret;
+ 
+-	/*
+-	 * If there's a gap between the size of the delayed refs reserve and
+-	 * its reserved space, than some tasks have added delayed refs or bumped
+-	 * its size otherwise (due to block group creation or removal, or block
+-	 * group item update). Also try to allocate that gap in order to prevent
+-	 * using (and possibly abusing) the global reserve when committing the
+-	 * transaction.
+-	 */
+-	if (flush == BTRFS_RESERVE_FLUSH_ALL &&
+-	    !btrfs_block_rsv_full(delayed_refs_rsv)) {
+-		spin_lock(&delayed_refs_rsv->lock);
+-		if (delayed_refs_rsv->size > delayed_refs_rsv->reserved)
+-			extra_delayed_refs_bytes = delayed_refs_rsv->size -
+-				delayed_refs_rsv->reserved;
+-		spin_unlock(&delayed_refs_rsv->lock);
+-	}
+-
+-	bytes = num_bytes + *delayed_refs_bytes + extra_delayed_refs_bytes;
+-
+ 	/*
+ 	 * We want to reserve all the bytes we may need all at once, so we only
+ 	 * do 1 enospc flushing cycle per transaction start.
+ 	 */
+ 	ret = btrfs_reserve_metadata_bytes(fs_info, si, bytes, flush);
+-	if (ret == 0) {
+-		if (extra_delayed_refs_bytes > 0)
+-			btrfs_migrate_to_delayed_refs_rsv(fs_info,
+-							  extra_delayed_refs_bytes);
+-		return 0;
+-	}
+-
+-	if (extra_delayed_refs_bytes > 0) {
+-		bytes -= extra_delayed_refs_bytes;
+-		ret = btrfs_reserve_metadata_bytes(fs_info, si, bytes, flush);
+-		if (ret == 0)
+-			return 0;
+-	}
+ 
+ 	/*
+ 	 * If we are an emergency flush, which can steal from the global block
+ 	 * reserve, then attempt to not reserve space for the delayed refs, as
+ 	 * we will consume space for them from the global block reserve.
+ 	 */
+-	if (flush == BTRFS_RESERVE_FLUSH_ALL_STEAL) {
++	if (ret && flush == BTRFS_RESERVE_FLUSH_ALL_STEAL) {
+ 		bytes -= *delayed_refs_bytes;
+ 		*delayed_refs_bytes = 0;
+ 		ret = btrfs_reserve_metadata_bytes(fs_info, si, bytes, flush);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 9c02f328c966c..e8bf082105d87 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1452,7 +1452,7 @@ static void __prep_cap(struct cap_msg_args *arg, struct ceph_cap *cap,
+ 	if (flushing & CEPH_CAP_XATTR_EXCL) {
+ 		arg->old_xattr_buf = __ceph_build_xattrs_blob(ci);
+ 		arg->xattr_version = ci->i_xattrs.version;
+-		arg->xattr_buf = ci->i_xattrs.blob;
++		arg->xattr_buf = ceph_buffer_get(ci->i_xattrs.blob);
+ 	} else {
+ 		arg->xattr_buf = NULL;
+ 		arg->old_xattr_buf = NULL;
+@@ -1553,6 +1553,7 @@ static void __send_cap(struct cap_msg_args *arg, struct ceph_inode_info *ci)
+ 	encode_cap_msg(msg, arg);
+ 	ceph_con_send(&arg->session->s_con, msg);
+ 	ceph_buffer_put(arg->old_xattr_buf);
++	ceph_buffer_put(arg->xattr_buf);
+ 	if (arg->wake)
+ 		wake_up_all(&ci->i_cap_wq);
+ }
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 3c5786841c6c8..be63744c95d3b 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1910,11 +1910,6 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 	mb_check_buddy(e4b);
+ 	mb_free_blocks_double(inode, e4b, first, count);
+ 
+-	this_cpu_inc(discard_pa_seq);
+-	e4b->bd_info->bb_free += count;
+-	if (first < e4b->bd_info->bb_first_free)
+-		e4b->bd_info->bb_first_free = first;
+-
+ 	/* access memory sequentially: check left neighbour,
+ 	 * clear range and then check right neighbour
+ 	 */
+@@ -1928,23 +1923,31 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 		struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 		ext4_fsblk_t blocknr;
+ 
++		/*
++		 * Fastcommit replay can free already freed blocks which
++		 * corrupts allocation info. Regenerate it.
++		 */
++		if (sbi->s_mount_state & EXT4_FC_REPLAY) {
++			mb_regenerate_buddy(e4b);
++			goto check;
++		}
++
+ 		blocknr = ext4_group_first_block_no(sb, e4b->bd_group);
+ 		blocknr += EXT4_C2B(sbi, block);
+-		if (!(sbi->s_mount_state & EXT4_FC_REPLAY)) {
+-			ext4_grp_locked_error(sb, e4b->bd_group,
+-					      inode ? inode->i_ino : 0,
+-					      blocknr,
+-					      "freeing already freed block (bit %u); block bitmap corrupt.",
+-					      block);
+-			ext4_mark_group_bitmap_corrupted(
+-				sb, e4b->bd_group,
++		ext4_grp_locked_error(sb, e4b->bd_group,
++				      inode ? inode->i_ino : 0, blocknr,
++				      "freeing already freed block (bit %u); block bitmap corrupt.",
++				      block);
++		ext4_mark_group_bitmap_corrupted(sb, e4b->bd_group,
+ 				EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+-		} else {
+-			mb_regenerate_buddy(e4b);
+-		}
+-		goto done;
++		return;
+ 	}
+ 
++	this_cpu_inc(discard_pa_seq);
++	e4b->bd_info->bb_free += count;
++	if (first < e4b->bd_info->bb_first_free)
++		e4b->bd_info->bb_first_free = first;
++
+ 	/* let's maintain fragments counter */
+ 	if (left_is_free && right_is_free)
+ 		e4b->bd_info->bb_fragments--;
+@@ -1969,9 +1972,9 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
+ 	if (first <= last)
+ 		mb_buddy_mark_free(e4b, first >> 1, last >> 1);
+ 
+-done:
+ 	mb_set_largest_free_order(sb, e4b->bd_info);
+ 	mb_update_avg_fragment_size(sb, e4b->bd_info);
++check:
+ 	mb_check_buddy(e4b);
+ }
+ 
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 3aa57376d9c2e..391efa6d4c56f 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -618,6 +618,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+ 		goto out;
+ 	o_end = o_start + len;
+ 
++	*moved_len = 0;
+ 	while (o_start < o_end) {
+ 		struct ext4_extent *ex;
+ 		ext4_lblk_t cur_blk, next_blk;
+@@ -672,7 +673,7 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+ 		 */
+ 		ext4_double_up_write_data_sem(orig_inode, donor_inode);
+ 		/* Swap original branches with new branches */
+-		move_extent_per_page(o_filp, donor_inode,
++		*moved_len += move_extent_per_page(o_filp, donor_inode,
+ 				     orig_page_index, donor_page_index,
+ 				     offset_in_page, cur_len,
+ 				     unwritten, &ret);
+@@ -682,9 +683,6 @@ ext4_move_extents(struct file *o_filp, struct file *d_filp, __u64 orig_blk,
+ 		o_start += cur_len;
+ 		d_start += cur_len;
+ 	}
+-	*moved_len = o_start - orig_blk;
+-	if (*moved_len > len)
+-		*moved_len = len;
+ 
+ out:
+ 	if (*moved_len) {
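The move_extent fix changes how progress is reported: *moved_len is now accumulated from what move_extent_per_page() actually swapped, instead of being reconstructed from the loop cursor afterwards, which could over-report after a partial failure. The accounting pattern in isolation (move_chunk() is a hypothetical stand-in):

#include <stddef.h>

/* Hypothetical worker: returns how much it really moved, < 0 on error. */
static long move_chunk(size_t off, size_t n) { (void)off; return (long)n; }

static long move_range(size_t len, size_t chunk, size_t *moved)
{
	size_t off = 0;

	*moved = 0;	/* initialise before the first partial step */
	while (off < len) {
		size_t n = len - off < chunk ? len - off : chunk;
		long ret = move_chunk(off, n);

		if (ret < 0)
			return ret;	/* *moved already holds true progress */
		*moved += (size_t)ret;	/* accumulate actual work */
		off += n;
	}
	return 0;
}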
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index f757d4f7ad98a..6f57fa75a6642 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -100,6 +100,7 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 	loff_t len, vma_len;
+ 	int ret;
+ 	struct hstate *h = hstate_file(file);
++	vm_flags_t vm_flags;
+ 
+ 	/*
+ 	 * vma address alignment (but not the pgoff alignment) has
+@@ -141,10 +142,20 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 	file_accessed(file);
+ 
+ 	ret = -ENOMEM;
++
++	vm_flags = vma->vm_flags;
++	/*
++	 * for SHM_HUGETLB, the pages are reserved in the shmget() call so skip
++	 * reserving here. Note: only for SHM hugetlbfs file, the inode
++	 * flag S_PRIVATE is set.
++	 */
++	if (inode->i_flags & S_PRIVATE)
++		vm_flags |= VM_NORESERVE;
++
+ 	if (!hugetlb_reserve_pages(inode,
+ 				vma->vm_pgoff >> huge_page_order(h),
+ 				len >> huge_page_shift(h), vma,
+-				vma->vm_flags))
++				vm_flags))
+ 		goto out;
+ 
+ 	ret = 0;
+@@ -340,7 +351,7 @@ static ssize_t hugetlbfs_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ 		} else {
+ 			folio_unlock(folio);
+ 
+-			if (!folio_test_has_hwpoisoned(folio))
++			if (!folio_test_hwpoison(folio))
+ 				want = nr;
+ 			else {
+ 				/*
+@@ -1354,6 +1365,7 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
+ {
+ 	struct hugetlbfs_fs_context *ctx = fc->fs_private;
+ 	struct fs_parse_result result;
++	struct hstate *h;
+ 	char *rest;
+ 	unsigned long ps;
+ 	int opt;
+@@ -1398,11 +1410,12 @@ static int hugetlbfs_parse_param(struct fs_context *fc, struct fs_parameter *par
+ 
+ 	case Opt_pagesize:
+ 		ps = memparse(param->string, &rest);
+-		ctx->hstate = size_to_hstate(ps);
+-		if (!ctx->hstate) {
++		h = size_to_hstate(ps);
++		if (!h) {
+ 			pr_err("Unsupported page size %lu MB\n", ps / SZ_1M);
+ 			return -EINVAL;
+ 		}
++		ctx->hstate = h;
+ 		return 0;
+ 
+ 	case Opt_min_size:
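The mmap hunk works on a local vm_flags copy: for SHM-backed hugetlbfs files (the only ones with S_PRIVATE set on the inode) the pages were already reserved at shmget() time, so VM_NORESERVE is OR-ed into the copy to stop hugetlb_reserve_pages() from reserving twice, without mutating the vma's real flags. As a pure function (the bit values here are illustrative only):

#define VM_NORESERVE	(1UL << 0)	/* illustrative bit values */
#define S_PRIVATE	(1UL << 1)	/* set only on SHM hugetlb inodes */

static unsigned long reservation_flags(unsigned long vma_flags,
				       unsigned long inode_flags)
{
	unsigned long vm_flags = vma_flags;	/* local copy: vma untouched */

	if (inode_flags & S_PRIVATE)
		vm_flags |= VM_NORESERVE;	/* shmget() already reserved */

	return vm_flags;
}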
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 6c39ec020a5f4..3ffae74652749 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -4472,10 +4472,15 @@ static int do_mount_setattr(struct path *path, struct mount_kattr *kattr)
+ 	/*
+ 	 * If this is an attached mount make sure it's located in the callers
+ 	 * mount namespace. If it's not don't let the caller interact with it.
+-	 * If this is a detached mount make sure it has an anonymous mount
+-	 * namespace attached to it, i.e. we've created it via OPEN_TREE_CLONE.
++	 *
++	 * If this mount doesn't have a parent it's most often simply a
++	 * detached mount with an anonymous mount namespace. IOW, something
++	 * that's simply not attached yet. But there are apparently also users
++	 * that do change mount properties on the rootfs itself. That obviously
++	 * neither has a parent nor is it a detached mount so we cannot
++	 * unconditionally check for detached mounts.
+ 	 */
+-	if (!(mnt_has_parent(mnt) ? check_mnt(mnt) : is_anon_ns(mnt->mnt_ns)))
++	if ((mnt_has_parent(mnt) || !is_anon_ns(mnt->mnt_ns)) && !check_mnt(mnt))
+ 		goto out;
+ 
+ 	/*
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 21180fa953806..57968484c3f89 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4945,10 +4945,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
+ 	 */
+ 	fl->fl_break_time = 0;
+ 
+-	spin_lock(&fp->fi_lock);
+ 	fp->fi_had_conflict = true;
+ 	nfsd_break_one_deleg(dp);
+-	spin_unlock(&fp->fi_lock);
+ 	return false;
+ }
+ 
+@@ -5557,12 +5555,13 @@ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ 	if (status)
+ 		goto out_unlock;
+ 
++	status = -EAGAIN;
++	if (fp->fi_had_conflict)
++		goto out_unlock;
++
+ 	spin_lock(&state_lock);
+ 	spin_lock(&fp->fi_lock);
+-	if (fp->fi_had_conflict)
+-		status = -EAGAIN;
+-	else
+-		status = hash_delegation_locked(dp, fp);
++	status = hash_delegation_locked(dp, fp);
+ 	spin_unlock(&fp->fi_lock);
+ 	spin_unlock(&state_lock);
+ 
+diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
+index 740ce26d1e765..0505feef79f4a 100644
+--- a/fs/nilfs2/file.c
++++ b/fs/nilfs2/file.c
+@@ -105,7 +105,13 @@ static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
+ 	nilfs_transaction_commit(inode->i_sb);
+ 
+  mapped:
+-	wait_for_stable_page(page);
++	/*
++	 * Since checksumming including data blocks is performed to determine
++	 * the validity of the log to be written and used for recovery, it is
++	 * necessary to wait for writeback to finish here, regardless of the
++	 * stable write requirement of the backing device.
++	 */
++	wait_on_page_writeback(page);
+  out:
+ 	sb_end_pagefault(inode->i_sb);
+ 	return vmf_fs_error(ret);
+diff --git a/fs/nilfs2/recovery.c b/fs/nilfs2/recovery.c
+index 0955b657938ff..a9b8d77c8c1d5 100644
+--- a/fs/nilfs2/recovery.c
++++ b/fs/nilfs2/recovery.c
+@@ -472,9 +472,10 @@ static int nilfs_prepare_segment_for_recovery(struct the_nilfs *nilfs,
+ 
+ static int nilfs_recovery_copy_block(struct the_nilfs *nilfs,
+ 				     struct nilfs_recovery_block *rb,
+-				     struct page *page)
++				     loff_t pos, struct page *page)
+ {
+ 	struct buffer_head *bh_org;
++	size_t from = pos & ~PAGE_MASK;
+ 	void *kaddr;
+ 
+ 	bh_org = __bread(nilfs->ns_bdev, rb->blocknr, nilfs->ns_blocksize);
+@@ -482,7 +483,7 @@ static int nilfs_recovery_copy_block(struct the_nilfs *nilfs,
+ 		return -EIO;
+ 
+ 	kaddr = kmap_atomic(page);
+-	memcpy(kaddr + bh_offset(bh_org), bh_org->b_data, bh_org->b_size);
++	memcpy(kaddr + from, bh_org->b_data, bh_org->b_size);
+ 	kunmap_atomic(kaddr);
+ 	brelse(bh_org);
+ 	return 0;
+@@ -521,7 +522,7 @@ static int nilfs_recover_dsync_blocks(struct the_nilfs *nilfs,
+ 			goto failed_inode;
+ 		}
+ 
+-		err = nilfs_recovery_copy_block(nilfs, rb, page);
++		err = nilfs_recovery_copy_block(nilfs, rb, pos, page);
+ 		if (unlikely(err))
+ 			goto failed_page;
+ 
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 55e31cc903d13..0f21dbcd0bfb3 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -1703,7 +1703,6 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci)
+ 
+ 		list_for_each_entry(bh, &segbuf->sb_payload_buffers,
+ 				    b_assoc_buffers) {
+-			set_buffer_async_write(bh);
+ 			if (bh == segbuf->sb_super_root) {
+ 				if (bh->b_page != bd_page) {
+ 					lock_page(bd_page);
+@@ -1714,6 +1713,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci)
+ 				}
+ 				break;
+ 			}
++			set_buffer_async_write(bh);
+ 			if (bh->b_page != fs_page) {
+ 				nilfs_begin_page_io(fs_page);
+ 				fs_page = bh->b_page;
+@@ -1799,7 +1799,6 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 
+ 		list_for_each_entry(bh, &segbuf->sb_payload_buffers,
+ 				    b_assoc_buffers) {
+-			clear_buffer_async_write(bh);
+ 			if (bh == segbuf->sb_super_root) {
+ 				clear_buffer_uptodate(bh);
+ 				if (bh->b_page != bd_page) {
+@@ -1808,6 +1807,7 @@ static void nilfs_abort_logs(struct list_head *logs, int err)
+ 				}
+ 				break;
+ 			}
++			clear_buffer_async_write(bh);
+ 			if (bh->b_page != fs_page) {
+ 				nilfs_end_page_io(fs_page, err);
+ 				fs_page = bh->b_page;
+@@ -1895,8 +1895,9 @@ static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci)
+ 				 BIT(BH_Delay) | BIT(BH_NILFS_Volatile) |
+ 				 BIT(BH_NILFS_Redirected));
+ 
+-			set_mask_bits(&bh->b_state, clear_bits, set_bits);
+ 			if (bh == segbuf->sb_super_root) {
++				set_buffer_uptodate(bh);
++				clear_buffer_dirty(bh);
+ 				if (bh->b_page != bd_page) {
+ 					end_page_writeback(bd_page);
+ 					bd_page = bh->b_page;
+@@ -1904,6 +1905,7 @@ static void nilfs_segctor_complete_write(struct nilfs_sc_info *sci)
+ 				update_sr = true;
+ 				break;
+ 			}
++			set_mask_bits(&bh->b_state, clear_bits, set_bits);
+ 			if (bh->b_page != fs_page) {
+ 				nilfs_end_page_io(fs_page, 0);
+ 				fs_page = bh->b_page;
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index ff08a8957552a..34a47fb0c57f2 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -477,13 +477,13 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ 	int permitted;
+ 	struct mm_struct *mm;
+ 	unsigned long long start_time;
+-	unsigned long cmin_flt = 0, cmaj_flt = 0;
+-	unsigned long  min_flt = 0,  maj_flt = 0;
+-	u64 cutime, cstime, utime, stime;
+-	u64 cgtime, gtime;
++	unsigned long cmin_flt, cmaj_flt, min_flt, maj_flt;
++	u64 cutime, cstime, cgtime, utime, stime, gtime;
+ 	unsigned long rsslim = 0;
+ 	unsigned long flags;
+ 	int exit_code = task->exit_code;
++	struct signal_struct *sig = task->signal;
++	unsigned int seq = 1;
+ 
+ 	state = *get_task_state(task);
+ 	vsize = eip = esp = 0;
+@@ -511,12 +511,8 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ 
+ 	sigemptyset(&sigign);
+ 	sigemptyset(&sigcatch);
+-	cutime = cstime = utime = stime = 0;
+-	cgtime = gtime = 0;
+ 
+ 	if (lock_task_sighand(task, &flags)) {
+-		struct signal_struct *sig = task->signal;
+-
+ 		if (sig->tty) {
+ 			struct pid *pgrp = tty_get_pgrp(sig->tty);
+ 			tty_pgrp = pid_nr_ns(pgrp, ns);
+@@ -527,28 +523,9 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ 		num_threads = get_nr_threads(task);
+ 		collect_sigign_sigcatch(task, &sigign, &sigcatch);
+ 
+-		cmin_flt = sig->cmin_flt;
+-		cmaj_flt = sig->cmaj_flt;
+-		cutime = sig->cutime;
+-		cstime = sig->cstime;
+-		cgtime = sig->cgtime;
+ 		rsslim = READ_ONCE(sig->rlim[RLIMIT_RSS].rlim_cur);
+ 
+-		/* add up live thread stats at the group level */
+ 		if (whole) {
+-			struct task_struct *t;
+-
+-			__for_each_thread(sig, t) {
+-				min_flt += t->min_flt;
+-				maj_flt += t->maj_flt;
+-				gtime += task_gtime(t);
+-			}
+-
+-			min_flt += sig->min_flt;
+-			maj_flt += sig->maj_flt;
+-			thread_group_cputime_adjusted(task, &utime, &stime);
+-			gtime += sig->gtime;
+-
+ 			if (sig->flags & (SIGNAL_GROUP_EXIT | SIGNAL_STOP_STOPPED))
+ 				exit_code = sig->group_exit_code;
+ 		}
+@@ -562,10 +539,41 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ 
+ 	if (permitted && (!whole || num_threads < 2))
+ 		wchan = !task_is_running(task);
+-	if (!whole) {
++
++	do {
++		seq++; /* 2 on the 1st/lockless path, otherwise odd */
++		flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
++
++		cmin_flt = sig->cmin_flt;
++		cmaj_flt = sig->cmaj_flt;
++		cutime = sig->cutime;
++		cstime = sig->cstime;
++		cgtime = sig->cgtime;
++
++		if (whole) {
++			struct task_struct *t;
++
++			min_flt = sig->min_flt;
++			maj_flt = sig->maj_flt;
++			gtime = sig->gtime;
++
++			rcu_read_lock();
++			__for_each_thread(sig, t) {
++				min_flt += t->min_flt;
++				maj_flt += t->maj_flt;
++				gtime += task_gtime(t);
++			}
++			rcu_read_unlock();
++		}
++	} while (need_seqretry(&sig->stats_lock, seq));
++	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
++
++	if (whole) {
++		thread_group_cputime_adjusted(task, &utime, &stime);
++	} else {
++		task_cputime_adjusted(task, &utime, &stime);
+ 		min_flt = task->min_flt;
+ 		maj_flt = task->maj_flt;
+-		task_cputime_adjusted(task, &utime, &stime);
+ 		gtime = task_gtime(task);
+ 	}
+ 
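The do_task_stat rework replaces accumulation under the sighand lock with sig->stats_lock, using the read_seqbegin_or_lock() idiom: the first pass is a lockless snapshot validated by a sequence counter, and only a torn read falls back to taking the lock. A stripped-down userspace reader showing just the lockless half (this sketch omits the locked fallback and, being a sketch, glosses over C11 data-race formalities on the payload fields):

#include <stdatomic.h>

struct stats {
	atomic_uint seq;	/* even = stable, odd = writer active */
	unsigned long min_flt, maj_flt;
};

static void stats_read(struct stats *s, unsigned long *min, unsigned long *maj)
{
	unsigned int start;

	do {
		/* Wait for an even (stable) sequence before snapshotting. */
		while ((start = atomic_load_explicit(&s->seq,
						     memory_order_acquire)) & 1)
			;
		*min = s->min_flt;
		*maj = s->maj_flt;
		/* Retry if a writer ran while we were reading. */
	} while (atomic_load_explicit(&s->seq, memory_order_acquire) != start);
}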
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index dc9b95ca71e69..ea1da3ce0d401 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -3425,8 +3425,18 @@ int cifs_mount_get_tcon(struct cifs_mount_ctx *mnt_ctx)
+ 	 * the user on mount
+ 	 */
+ 	if ((cifs_sb->ctx->wsize == 0) ||
+-	    (cifs_sb->ctx->wsize > server->ops->negotiate_wsize(tcon, ctx)))
+-		cifs_sb->ctx->wsize = server->ops->negotiate_wsize(tcon, ctx);
++	    (cifs_sb->ctx->wsize > server->ops->negotiate_wsize(tcon, ctx))) {
++		cifs_sb->ctx->wsize =
++			round_down(server->ops->negotiate_wsize(tcon, ctx), PAGE_SIZE);
++		/*
++		 * In the very unlikely event that the server sent a max write size
++		 * under PAGE_SIZE (which would get rounded down to 0), reset wsize
++		 * to the absolute minimum, i.e. PAGE_SIZE (usually 4096).
++		 */
++		if (cifs_sb->ctx->wsize == 0) {
++			cifs_sb->ctx->wsize = PAGE_SIZE;
++			cifs_dbg(VFS, "wsize too small, reset to minimum ie PAGE_SIZE, usually 4096\n");
++		}
++	}
+ 	if ((cifs_sb->ctx->rsize == 0) ||
+ 	    (cifs_sb->ctx->rsize > server->ops->negotiate_rsize(tcon, ctx)))
+ 		cifs_sb->ctx->rsize = server->ops->negotiate_rsize(tcon, ctx);
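The negotiated write size is now rounded down to a whole number of pages, with PAGE_SIZE as a floor for the degenerate case where the server advertises less than one page. The clamping is easy to verify in isolation (assuming 4 KiB pages):

#include <stdio.h>

#define PAGE_SIZE 4096u

static unsigned int clamp_wsize(unsigned int negotiated)
{
	unsigned int wsize = negotiated & ~(PAGE_SIZE - 1);	/* round_down */

	if (wsize == 0)
		wsize = PAGE_SIZE;	/* absolute minimum */
	return wsize;
}

int main(void)
{
	printf("%u -> %u\n", 65536u, clamp_wsize(65536u));	/* 65536 */
	printf("%u -> %u\n", 7000u, clamp_wsize(7000u));	/* 4096 */
	printf("%u -> %u\n", 100u, clamp_wsize(100u));		/* 4096 */
	return 0;
}

The fs_context.c hunk that follows enforces the same multiple-of-PAGE_SIZE rule at mount-option parsing time.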
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index a3493da12ad1e..75f2c8734ff56 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1107,6 +1107,17 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 	case Opt_wsize:
+ 		ctx->wsize = result.uint_32;
+ 		ctx->got_wsize = true;
++		if (ctx->wsize % PAGE_SIZE != 0) {
++			ctx->wsize = round_down(ctx->wsize, PAGE_SIZE);
++			if (ctx->wsize == 0) {
++				ctx->wsize = PAGE_SIZE;
++				cifs_dbg(VFS, "wsize too small, reset to minimum %ld\n", PAGE_SIZE);
++			} else {
++				cifs_dbg(VFS,
++					 "wsize rounded down to %d to multiple of PAGE_SIZE %ld\n",
++					 ctx->wsize, PAGE_SIZE);
++			}
++		}
+ 		break;
+ 	case Opt_acregmax:
+ 		ctx->acregmax = HZ * result.uint_32;
+diff --git a/fs/smb/client/namespace.c b/fs/smb/client/namespace.c
+index a6968573b775e..4a517b280f2b7 100644
+--- a/fs/smb/client/namespace.c
++++ b/fs/smb/client/namespace.c
+@@ -168,6 +168,21 @@ static char *automount_fullpath(struct dentry *dentry, void *page)
+ 	return s;
+ }
+ 
++static void fs_context_set_ids(struct smb3_fs_context *ctx)
++{
++	kuid_t uid = current_fsuid();
++	kgid_t gid = current_fsgid();
++
++	if (ctx->multiuser) {
++		if (!ctx->uid_specified)
++			ctx->linux_uid = uid;
++		if (!ctx->gid_specified)
++			ctx->linux_gid = gid;
++	}
++	if (!ctx->cruid_specified)
++		ctx->cred_uid = uid;
++}
++
+ /*
+  * Create a vfsmount that we can automount
+  */
+@@ -205,6 +220,7 @@ static struct vfsmount *cifs_do_automount(struct path *path)
+ 	tmp.leaf_fullpath = NULL;
+ 	tmp.UNC = tmp.prepath = NULL;
+ 	tmp.dfs_root_ses = NULL;
++	fs_context_set_ids(&tmp);
+ 
+ 	rc = smb3_fs_context_dup(ctx, &tmp);
+ 	if (rc) {
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index beb81fa00cff4..ba734395b0360 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -619,7 +619,7 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 		goto out;
+ 	}
+ 
+-	while (bytes_left >= sizeof(*p)) {
++	while (bytes_left >= (ssize_t)sizeof(*p)) {
+ 		memset(&tmp_iface, 0, sizeof(tmp_iface));
+ 		tmp_iface.speed = le64_to_cpu(p->LinkSpeed);
+ 		tmp_iface.rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index ba7a72a6a4f45..0c97d3c860726 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -6173,8 +6173,10 @@ static noinline int smb2_read_pipe(struct ksmbd_work *work)
+ 		err = ksmbd_iov_pin_rsp_read(work, (void *)rsp,
+ 					     offsetof(struct smb2_read_rsp, Buffer),
+ 					     aux_payload_buf, nbytes);
+-		if (err)
++		if (err) {
++			kvfree(aux_payload_buf);
+ 			goto out;
++		}
+ 		kvfree(rpc_resp);
+ 	} else {
+ 		err = ksmbd_iov_pin_rsp(work, (void *)rsp,
+@@ -6384,8 +6386,10 @@ int smb2_read(struct ksmbd_work *work)
+ 	err = ksmbd_iov_pin_rsp_read(work, (void *)rsp,
+ 				     offsetof(struct smb2_read_rsp, Buffer),
+ 				     aux_payload_buf, nbytes);
+-	if (err)
++	if (err) {
++		kvfree(aux_payload_buf);
+ 		goto out;
++	}
+ 	ksmbd_fd_put(work, fp);
+ 	return 0;
+ 
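Both ksmbd hunks fix the same leak class: ksmbd_iov_pin_rsp_read() takes ownership of aux_payload_buf only on success, so on failure the caller still owns the buffer and must free it before jumping to the error path. The ownership rule, reduced to a sketch (pin_response() is a hypothetical stand-in):

#include <stdlib.h>

/* Hypothetical: consumes buf on success (returns 0), not on failure. */
static int pin_response(void *rsp, void *buf)
{
	(void)rsp; (void)buf;
	return -1;	/* simulate the failure path for the sketch */
}

static int send_read_reply(void *rsp, size_t nbytes)
{
	void *buf = malloc(nbytes);
	int err;

	if (!buf)
		return -1;

	err = pin_response(rsp, buf);
	if (err) {
		free(buf);	/* still ours on failure: no leak */
		return err;
	}

	return 0;	/* ownership transferred to the response */
}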
+diff --git a/fs/tracefs/event_inode.c b/fs/tracefs/event_inode.c
+index f2d7ad155c00f..110e8a2721890 100644
+--- a/fs/tracefs/event_inode.c
++++ b/fs/tracefs/event_inode.c
+@@ -32,6 +32,18 @@
+  */
+ static DEFINE_MUTEX(eventfs_mutex);
+ 
++/* Choose something "unique" ;-) */
++#define EVENTFS_FILE_INODE_INO		0x12c4e37
++
++/* Just try to make something consistent and unique */
++static int eventfs_dir_ino(struct eventfs_inode *ei)
++{
++	if (!ei->ino)
++		ei->ino = get_next_ino();
++
++	return ei->ino;
++}
++
+ /*
+  * The eventfs_inode (ei) itself is protected by SRCU. It is released from
+  * its parent's list and will have is_freed set (under eventfs_mutex).
+@@ -50,12 +62,50 @@ enum {
+ 
+ #define EVENTFS_MODE_MASK	(EVENTFS_SAVE_MODE - 1)
+ 
++/*
++ * eventfs_inode reference count management.
++ *
++ * NOTE! We count only references from dentries, in the
++ * form 'dentry->d_fsdata'. There are also references from
++ * directory inodes ('ti->private'), but the dentry reference
++ * count is always a superset of the inode reference count.
++ */
++static void release_ei(struct kref *ref)
++{
++	struct eventfs_inode *ei = container_of(ref, struct eventfs_inode, kref);
++
++	WARN_ON_ONCE(!ei->is_freed);
++
++	kfree(ei->entry_attrs);
++	kfree_const(ei->name);
++	kfree_rcu(ei, rcu);
++}
++
++static inline void put_ei(struct eventfs_inode *ei)
++{
++	if (ei)
++		kref_put(&ei->kref, release_ei);
++}
++
++static inline void free_ei(struct eventfs_inode *ei)
++{
++	if (ei) {
++		ei->is_freed = 1;
++		put_ei(ei);
++	}
++}
++
++static inline struct eventfs_inode *get_ei(struct eventfs_inode *ei)
++{
++	if (ei)
++		kref_get(&ei->kref);
++	return ei;
++}
++
+ static struct dentry *eventfs_root_lookup(struct inode *dir,
+ 					  struct dentry *dentry,
+ 					  unsigned int flags);
+-static int dcache_dir_open_wrapper(struct inode *inode, struct file *file);
+-static int dcache_readdir_wrapper(struct file *file, struct dir_context *ctx);
+-static int eventfs_release(struct inode *inode, struct file *file);
++static int eventfs_iterate(struct file *file, struct dir_context *ctx);
+ 
+ static void update_attr(struct eventfs_attr *attr, struct iattr *iattr)
+ {
+@@ -95,7 +145,7 @@ static int eventfs_set_attr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	/* Preallocate the children mode array if necessary */
+ 	if (!(dentry->d_inode->i_mode & S_IFDIR)) {
+ 		if (!ei->entry_attrs) {
+-			ei->entry_attrs = kzalloc(sizeof(*ei->entry_attrs) * ei->nr_entries,
++			ei->entry_attrs = kcalloc(ei->nr_entries, sizeof(*ei->entry_attrs),
+ 						  GFP_NOFS);
+ 			if (!ei->entry_attrs) {
+ 				ret = -ENOMEM;
+@@ -146,33 +196,30 @@ static int eventfs_set_attr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	return ret;
+ }
+ 
+-static void update_top_events_attr(struct eventfs_inode *ei, struct dentry *dentry)
++static void update_top_events_attr(struct eventfs_inode *ei, struct super_block *sb)
+ {
+-	struct inode *inode;
++	struct inode *root;
+ 
+ 	/* Only update if the "events" was on the top level */
+ 	if (!ei || !(ei->attr.mode & EVENTFS_TOPLEVEL))
+ 		return;
+ 
+ 	/* Get the tracefs root inode. */
+-	inode = d_inode(dentry->d_sb->s_root);
+-	ei->attr.uid = inode->i_uid;
+-	ei->attr.gid = inode->i_gid;
++	root = d_inode(sb->s_root);
++	ei->attr.uid = root->i_uid;
++	ei->attr.gid = root->i_gid;
+ }
+ 
+ static void set_top_events_ownership(struct inode *inode)
+ {
+ 	struct tracefs_inode *ti = get_tracefs(inode);
+ 	struct eventfs_inode *ei = ti->private;
+-	struct dentry *dentry;
+ 
+ 	/* The top events directory doesn't get automatically updated */
+ 	if (!ei || !ei->is_events || !(ei->attr.mode & EVENTFS_TOPLEVEL))
+ 		return;
+ 
+-	dentry = ei->dentry;
+-
+-	update_top_events_attr(ei, dentry);
++	update_top_events_attr(ei, inode->i_sb);
+ 
+ 	if (!(ei->attr.mode & EVENTFS_SAVE_UID))
+ 		inode->i_uid = ei->attr.uid;
+@@ -213,11 +260,9 @@ static const struct inode_operations eventfs_file_inode_operations = {
+ };
+ 
+ static const struct file_operations eventfs_file_operations = {
+-	.open		= dcache_dir_open_wrapper,
+ 	.read		= generic_read_dir,
+-	.iterate_shared	= dcache_readdir_wrapper,
++	.iterate_shared	= eventfs_iterate,
+ 	.llseek		= generic_file_llseek,
+-	.release	= eventfs_release,
+ };
+ 
+ /* Return the eventfs_inode of the "events" directory */
+@@ -225,10 +270,11 @@ static struct eventfs_inode *eventfs_find_events(struct dentry *dentry)
+ {
+ 	struct eventfs_inode *ei;
+ 
+-	mutex_lock(&eventfs_mutex);
+ 	do {
+-		/* The parent always has an ei, except for events itself */
+-		ei = dentry->d_parent->d_fsdata;
++		// The parent is stable because we do not do renames
++		dentry = dentry->d_parent;
++		// ... and directories always have d_fsdata
++		ei = dentry->d_fsdata;
+ 
+ 		/*
+ 		 * If the ei is being freed, the ownership of the children
+@@ -238,12 +284,10 @@ static struct eventfs_inode *eventfs_find_events(struct dentry *dentry)
+ 			ei = NULL;
+ 			break;
+ 		}
+-
+-		dentry = ei->dentry;
++		// Walk upwards until you find the events inode
+ 	} while (!ei->is_events);
+-	mutex_unlock(&eventfs_mutex);
+ 
+-	update_top_events_attr(ei, dentry);
++	update_top_events_attr(ei, dentry->d_sb);
+ 
+ 	return ei;
+ }
+@@ -274,11 +318,10 @@ static void update_inode_attr(struct dentry *dentry, struct inode *inode,
+ }
+ 
+ /**
+- * create_file - create a file in the tracefs filesystem
+- * @name: the name of the file to create.
++ * lookup_file - look up a file in the tracefs filesystem
++ * @dentry: the dentry to look up
+  * @mode: the permission that the file should have.
+  * @attr: saved attributes changed by user
+- * @parent: parent dentry for this file.
+  * @data: something that the caller will want to get to later on.
+  * @fop: struct file_operations that should be used for this file.
+  *
+@@ -286,30 +329,25 @@ static void update_inode_attr(struct dentry *dentry, struct inode *inode,
+  * directory. The inode.i_private pointer will point to @data in the open()
+  * call.
+  */
+-static struct dentry *create_file(const char *name, umode_t mode,
++static struct dentry *lookup_file(struct eventfs_inode *parent_ei,
++				  struct dentry *dentry,
++				  umode_t mode,
+ 				  struct eventfs_attr *attr,
+-				  struct dentry *parent, void *data,
++				  void *data,
+ 				  const struct file_operations *fop)
+ {
+ 	struct tracefs_inode *ti;
+-	struct dentry *dentry;
+ 	struct inode *inode;
+ 
+ 	if (!(mode & S_IFMT))
+ 		mode |= S_IFREG;
+ 
+ 	if (WARN_ON_ONCE(!S_ISREG(mode)))
+-		return NULL;
+-
+-	WARN_ON_ONCE(!parent);
+-	dentry = eventfs_start_creating(name, parent);
+-
+-	if (IS_ERR(dentry))
+-		return dentry;
++		return ERR_PTR(-EIO);
+ 
+ 	inode = tracefs_get_inode(dentry->d_sb);
+ 	if (unlikely(!inode))
+-		return eventfs_failed_creating(dentry);
++		return ERR_PTR(-ENOMEM);
+ 
+ 	/* If the user updated the directory's attributes, use them */
+ 	update_inode_attr(dentry, inode, attr, mode);
+@@ -318,34 +356,36 @@ static struct dentry *create_file(const char *name, umode_t mode,
+ 	inode->i_fop = fop;
+ 	inode->i_private = data;
+ 
++	/* All files will have the same inode number */
++	inode->i_ino = EVENTFS_FILE_INODE_INO;
++
+ 	ti = get_tracefs(inode);
+ 	ti->flags |= TRACEFS_EVENT_INODE;
+-	d_instantiate(dentry, inode);
+-	fsnotify_create(dentry->d_parent->d_inode, dentry);
+-	return eventfs_end_creating(dentry);
++
++	// Files have their parent's ei as their fsdata
++	dentry->d_fsdata = get_ei(parent_ei);
++
++	d_add(dentry, inode);
++	return NULL;
+ };
+ 
+ /**
+- * create_dir - create a dir in the tracefs filesystem
++ * lookup_dir_entry - look up a dir in the tracefs filesystem
++ * @dentry: the directory to look up
+  * @ei: the eventfs_inode that represents the directory to create
+- * @parent: parent dentry for this file.
+  *
+- * This function will create a dentry for a directory represented by
++ * This function will look up a dentry for a directory represented by
+  * a eventfs_inode.
+  */
+-static struct dentry *create_dir(struct eventfs_inode *ei, struct dentry *parent)
++static struct dentry *lookup_dir_entry(struct dentry *dentry,
++	struct eventfs_inode *pei, struct eventfs_inode *ei)
+ {
+ 	struct tracefs_inode *ti;
+-	struct dentry *dentry;
+ 	struct inode *inode;
+ 
+-	dentry = eventfs_start_creating(ei->name, parent);
+-	if (IS_ERR(dentry))
+-		return dentry;
+-
+ 	inode = tracefs_get_inode(dentry->d_sb);
+ 	if (unlikely(!inode))
+-		return eventfs_failed_creating(dentry);
++		return ERR_PTR(-ENOMEM);
+ 
+ 	/* If the user updated the directory's attributes, use them */
+ 	update_inode_attr(dentry, inode, &ei->attr,
+@@ -354,247 +394,72 @@ static struct dentry *create_dir(struct eventfs_inode *ei, struct dentry *parent
+ 	inode->i_op = &eventfs_root_dir_inode_operations;
+ 	inode->i_fop = &eventfs_file_operations;
+ 
++	/* All directories will have the same inode number */
++	inode->i_ino = eventfs_dir_ino(ei);
++
+ 	ti = get_tracefs(inode);
+ 	ti->flags |= TRACEFS_EVENT_INODE;
++	/* Only directories have ti->private set to an ei, not files */
++	ti->private = ei;
+ 
+-	inc_nlink(inode);
+-	d_instantiate(dentry, inode);
+-	inc_nlink(dentry->d_parent->d_inode);
+-	fsnotify_mkdir(dentry->d_parent->d_inode, dentry);
+-	return eventfs_end_creating(dentry);
++	dentry->d_fsdata = get_ei(ei);
++
++	d_add(dentry, inode);
++	return NULL;
+ }
+ 
+-static void free_ei(struct eventfs_inode *ei)
++static inline struct eventfs_inode *alloc_ei(const char *name)
+ {
+-	kfree_const(ei->name);
+-	kfree(ei->d_children);
+-	kfree(ei->entry_attrs);
+-	kfree(ei);
++	struct eventfs_inode *ei = kzalloc(sizeof(*ei), GFP_KERNEL);
++
++	if (!ei)
++		return NULL;
++
++	ei->name = kstrdup_const(name, GFP_KERNEL);
++	if (!ei->name) {
++		kfree(ei);
++		return NULL;
++	}
++	kref_init(&ei->kref);
++	return ei;
+ }
+ 
+ /**
+- * eventfs_set_ei_status_free - remove the dentry reference from an eventfs_inode
+- * @ti: the tracefs_inode of the dentry
++ * eventfs_d_release - dentry is going away
+  * @dentry: dentry which has the reference to remove.
+  *
+  * Remove the association between a dentry from an eventfs_inode.
+  */
+-void eventfs_set_ei_status_free(struct tracefs_inode *ti, struct dentry *dentry)
++void eventfs_d_release(struct dentry *dentry)
+ {
+-	struct eventfs_inode *ei;
+-	int i;
+-
+-	mutex_lock(&eventfs_mutex);
+-
+-	ei = dentry->d_fsdata;
+-	if (!ei)
+-		goto out;
+-
+-	/* This could belong to one of the files of the ei */
+-	if (ei->dentry != dentry) {
+-		for (i = 0; i < ei->nr_entries; i++) {
+-			if (ei->d_children[i] == dentry)
+-				break;
+-		}
+-		if (WARN_ON_ONCE(i == ei->nr_entries))
+-			goto out;
+-		ei->d_children[i] = NULL;
+-	} else if (ei->is_freed) {
+-		free_ei(ei);
+-	} else {
+-		ei->dentry = NULL;
+-	}
+-
+-	dentry->d_fsdata = NULL;
+- out:
+-	mutex_unlock(&eventfs_mutex);
++	put_ei(dentry->d_fsdata);
+ }
+ 
+ /**
+- * create_file_dentry - create a dentry for a file of an eventfs_inode
++ * lookup_file_dentry - create a dentry for a file of an eventfs_inode
+  * @ei: the eventfs_inode that the file will be created under
+- * @idx: the index into the d_children[] of the @ei
++ * @idx: the index into the entry_attrs[] of the @ei
+  * @parent: The parent dentry of the created file.
+  * @name: The name of the file to create
+  * @mode: The mode of the file.
+  * @data: The data to use to set the inode of the file with on open()
+  * @fops: The fops of the file to be created.
+- * @lookup: If called by the lookup routine, in which case, dput() the created dentry.
+  *
+  * Create a dentry for a file of an eventfs_inode @ei and place it into the
+- * address located at @e_dentry. If the @e_dentry already has a dentry, then
+- * just do a dget() on it and return. Otherwise create the dentry and attach it.
++ * address located at @e_dentry.
+  */
+ static struct dentry *
+-create_file_dentry(struct eventfs_inode *ei, int idx,
+-		   struct dentry *parent, const char *name, umode_t mode, void *data,
+-		   const struct file_operations *fops, bool lookup)
++lookup_file_dentry(struct dentry *dentry,
++		   struct eventfs_inode *ei, int idx,
++		   umode_t mode, void *data,
++		   const struct file_operations *fops)
+ {
+ 	struct eventfs_attr *attr = NULL;
+-	struct dentry **e_dentry = &ei->d_children[idx];
+-	struct dentry *dentry;
+ 
+-	WARN_ON_ONCE(!inode_is_locked(parent->d_inode));
+-
+-	mutex_lock(&eventfs_mutex);
+-	if (ei->is_freed) {
+-		mutex_unlock(&eventfs_mutex);
+-		return NULL;
+-	}
+-	/* If the e_dentry already has a dentry, use it */
+-	if (*e_dentry) {
+-		/* lookup does not need to up the ref count */
+-		if (!lookup)
+-			dget(*e_dentry);
+-		mutex_unlock(&eventfs_mutex);
+-		return *e_dentry;
+-	}
+-
+-	/* ei->entry_attrs are protected by SRCU */
+ 	if (ei->entry_attrs)
+ 		attr = &ei->entry_attrs[idx];
+ 
+-	mutex_unlock(&eventfs_mutex);
+-
+-	dentry = create_file(name, mode, attr, parent, data, fops);
+-
+-	mutex_lock(&eventfs_mutex);
+-
+-	if (IS_ERR_OR_NULL(dentry)) {
+-		/*
+-		 * When the mutex was released, something else could have
+-		 * created the dentry for this e_dentry. In which case
+-		 * use that one.
+-		 *
+-		 * If ei->is_freed is set, the e_dentry is currently on its
+-		 * way to being freed, don't return it. If e_dentry is NULL
+-		 * it means it was already freed.
+-		 */
+-		if (ei->is_freed)
+-			dentry = NULL;
+-		else
+-			dentry = *e_dentry;
+-		/* The lookup does not need to up the dentry refcount */
+-		if (dentry && !lookup)
+-			dget(dentry);
+-		mutex_unlock(&eventfs_mutex);
+-		return dentry;
+-	}
+-
+-	if (!*e_dentry && !ei->is_freed) {
+-		*e_dentry = dentry;
+-		dentry->d_fsdata = ei;
+-	} else {
+-		/*
+-		 * Should never happen unless we get here due to being freed.
+-		 * Otherwise it means two dentries exist with the same name.
+-		 */
+-		WARN_ON_ONCE(!ei->is_freed);
+-		dentry = NULL;
+-	}
+-	mutex_unlock(&eventfs_mutex);
+-
+-	if (lookup)
+-		dput(dentry);
+-
+-	return dentry;
+-}
+-
+-/**
+- * eventfs_post_create_dir - post create dir routine
+- * @ei: eventfs_inode of recently created dir
+- *
+- * Map the meta-data of files within an eventfs dir to their parent dentry
+- */
+-static void eventfs_post_create_dir(struct eventfs_inode *ei)
+-{
+-	struct eventfs_inode *ei_child;
+-	struct tracefs_inode *ti;
+-
+-	lockdep_assert_held(&eventfs_mutex);
+-
+-	/* srcu lock already held */
+-	/* fill parent-child relation */
+-	list_for_each_entry_srcu(ei_child, &ei->children, list,
+-				 srcu_read_lock_held(&eventfs_srcu)) {
+-		ei_child->d_parent = ei->dentry;
+-	}
+-
+-	ti = get_tracefs(ei->dentry->d_inode);
+-	ti->private = ei;
+-}
+-
+-/**
+- * create_dir_dentry - Create a directory dentry for the eventfs_inode
+- * @pei: The eventfs_inode parent of ei.
+- * @ei: The eventfs_inode to create the directory for
+- * @parent: The dentry of the parent of this directory
+- * @lookup: True if this is called by the lookup code
+- *
+- * This creates and attaches a directory dentry to the eventfs_inode @ei.
+- */
+-static struct dentry *
+-create_dir_dentry(struct eventfs_inode *pei, struct eventfs_inode *ei,
+-		  struct dentry *parent, bool lookup)
+-{
+-	struct dentry *dentry = NULL;
+-
+-	WARN_ON_ONCE(!inode_is_locked(parent->d_inode));
+-
+-	mutex_lock(&eventfs_mutex);
+-	if (pei->is_freed || ei->is_freed) {
+-		mutex_unlock(&eventfs_mutex);
+-		return NULL;
+-	}
+-	if (ei->dentry) {
+-		/* If the dentry already has a dentry, use it */
+-		dentry = ei->dentry;
+-		/* lookup does not need to up the ref count */
+-		if (!lookup)
+-			dget(dentry);
+-		mutex_unlock(&eventfs_mutex);
+-		return dentry;
+-	}
+-	mutex_unlock(&eventfs_mutex);
+-
+-	dentry = create_dir(ei, parent);
+-
+-	mutex_lock(&eventfs_mutex);
+-
+-	if (IS_ERR_OR_NULL(dentry) && !ei->is_freed) {
+-		/*
+-		 * When the mutex was released, something else could have
+-		 * created the dentry for this e_dentry. In which case
+-		 * use that one.
+-		 *
+-		 * If ei->is_freed is set, the e_dentry is currently on its
+-		 * way to being freed.
+-		 */
+-		dentry = ei->dentry;
+-		if (dentry && !lookup)
+-			dget(dentry);
+-		mutex_unlock(&eventfs_mutex);
+-		return dentry;
+-	}
+-
+-	if (!ei->dentry && !ei->is_freed) {
+-		ei->dentry = dentry;
+-		eventfs_post_create_dir(ei);
+-		dentry->d_fsdata = ei;
+-	} else {
+-		/*
+-		 * Should never happen unless we get here due to being freed.
+-		 * Otherwise it means two dentries exist with the same name.
+-		 */
+-		WARN_ON_ONCE(!ei->is_freed);
+-		dentry = NULL;
+-	}
+-	mutex_unlock(&eventfs_mutex);
+-
+-	if (lookup)
+-		dput(dentry);
+-
+-	return dentry;
++	return lookup_file(ei, dentry, mode, attr, data, fops);
+ }
+ 
+ /**
+@@ -611,250 +476,153 @@ static struct dentry *eventfs_root_lookup(struct inode *dir,
+ 					  struct dentry *dentry,
+ 					  unsigned int flags)
+ {
+-	const struct file_operations *fops;
+-	const struct eventfs_entry *entry;
+ 	struct eventfs_inode *ei_child;
+ 	struct tracefs_inode *ti;
+ 	struct eventfs_inode *ei;
+-	struct dentry *ei_dentry = NULL;
+-	struct dentry *ret = NULL;
+ 	const char *name = dentry->d_name.name;
+-	bool created = false;
+-	umode_t mode;
+-	void *data;
+-	int idx;
+-	int i;
+-	int r;
++	struct dentry *result = NULL;
+ 
+ 	ti = get_tracefs(dir);
+ 	if (!(ti->flags & TRACEFS_EVENT_INODE))
+-		return NULL;
++		return ERR_PTR(-EIO);
+ 
+-	/* Grab srcu to prevent the ei from going away */
+-	idx = srcu_read_lock(&eventfs_srcu);
+-
+-	/*
+-	 * Grab the eventfs_mutex to consistent value from ti->private.
+-	 * This s
+-	 */
+ 	mutex_lock(&eventfs_mutex);
+-	ei = READ_ONCE(ti->private);
+-	if (ei && !ei->is_freed)
+-		ei_dentry = READ_ONCE(ei->dentry);
+-	mutex_unlock(&eventfs_mutex);
+ 
+-	if (!ei || !ei_dentry)
++	ei = ti->private;
++	if (!ei || ei->is_freed)
+ 		goto out;
+ 
+-	data = ei->data;
+-
+-	list_for_each_entry_srcu(ei_child, &ei->children, list,
+-				 srcu_read_lock_held(&eventfs_srcu)) {
++	list_for_each_entry(ei_child, &ei->children, list) {
+ 		if (strcmp(ei_child->name, name) != 0)
+ 			continue;
+-		ret = simple_lookup(dir, dentry, flags);
+-		if (IS_ERR(ret))
++		if (ei_child->is_freed)
+ 			goto out;
+-		create_dir_dentry(ei, ei_child, ei_dentry, true);
+-		created = true;
+-		break;
+-	}
+-
+-	if (created)
++		result = lookup_dir_entry(dentry, ei, ei_child);
+ 		goto out;
+-
+-	for (i = 0; i < ei->nr_entries; i++) {
+-		entry = &ei->entries[i];
+-		if (strcmp(name, entry->name) == 0) {
+-			void *cdata = data;
+-			mutex_lock(&eventfs_mutex);
+-			/* If ei->is_freed, then the event itself may be too */
+-			if (!ei->is_freed)
+-				r = entry->callback(name, &mode, &cdata, &fops);
+-			else
+-				r = -1;
+-			mutex_unlock(&eventfs_mutex);
+-			if (r <= 0)
+-				continue;
+-			ret = simple_lookup(dir, dentry, flags);
+-			if (IS_ERR(ret))
+-				goto out;
+-			create_file_dentry(ei, i, ei_dentry, name, mode, cdata,
+-					   fops, true);
+-			break;
+-		}
+ 	}
+- out:
+-	srcu_read_unlock(&eventfs_srcu, idx);
+-	return ret;
+-}
+ 
+-struct dentry_list {
+-	void			*cursor;
+-	struct dentry		**dentries;
+-};
+-
+-/**
+- * eventfs_release - called to release eventfs file/dir
+- * @inode: inode to be released
+- * @file: file to be released (not used)
+- */
+-static int eventfs_release(struct inode *inode, struct file *file)
+-{
+-	struct tracefs_inode *ti;
+-	struct dentry_list *dlist = file->private_data;
+-	void *cursor;
+-	int i;
++	for (int i = 0; i < ei->nr_entries; i++) {
++		void *data;
++		umode_t mode;
++		const struct file_operations *fops;
++		const struct eventfs_entry *entry = &ei->entries[i];
+ 
+-	ti = get_tracefs(inode);
+-	if (!(ti->flags & TRACEFS_EVENT_INODE))
+-		return -EINVAL;
++		if (strcmp(name, entry->name) != 0)
++			continue;
+ 
+-	if (WARN_ON_ONCE(!dlist))
+-		return -EINVAL;
++		data = ei->data;
++		if (entry->callback(name, &mode, &data, &fops) <= 0)
++			goto out;
+ 
+-	for (i = 0; dlist->dentries && dlist->dentries[i]; i++) {
+-		dput(dlist->dentries[i]);
++		result = lookup_file_dentry(dentry, ei, i, mode, data, fops);
++		goto out;
+ 	}
+-
+-	cursor = dlist->cursor;
+-	kfree(dlist->dentries);
+-	kfree(dlist);
+-	file->private_data = cursor;
+-	return dcache_dir_close(inode, file);
+-}
+-
+-static int add_dentries(struct dentry ***dentries, struct dentry *d, int cnt)
+-{
+-	struct dentry **tmp;
+-
+-	tmp = krealloc(*dentries, sizeof(d) * (cnt + 2), GFP_NOFS);
+-	if (!tmp)
+-		return -1;
+-	tmp[cnt] = d;
+-	tmp[cnt + 1] = NULL;
+-	*dentries = tmp;
+-	return 0;
++ out:
++	mutex_unlock(&eventfs_mutex);
++	return result;
+ }
+ 
+-/**
+- * dcache_dir_open_wrapper - eventfs open wrapper
+- * @inode: not used
+- * @file: dir to be opened (to create it's children)
+- *
+- * Used to dynamic create file/dir with-in @file, all the
+- * file/dir will be created. If already created then references
+- * will be increased
++/*
++ * Walk the children of an eventfs_inode to fill in getdents().
+  */
+-static int dcache_dir_open_wrapper(struct inode *inode, struct file *file)
++static int eventfs_iterate(struct file *file, struct dir_context *ctx)
+ {
+ 	const struct file_operations *fops;
++	struct inode *f_inode = file_inode(file);
+ 	const struct eventfs_entry *entry;
+ 	struct eventfs_inode *ei_child;
+ 	struct tracefs_inode *ti;
+ 	struct eventfs_inode *ei;
+-	struct dentry_list *dlist;
+-	struct dentry **dentries = NULL;
+-	struct dentry *parent = file_dentry(file);
+-	struct dentry *d;
+-	struct inode *f_inode = file_inode(file);
+-	const char *name = parent->d_name.name;
++	const char *name;
+ 	umode_t mode;
+-	void *data;
+-	int cnt = 0;
+ 	int idx;
+-	int ret;
+-	int i;
+-	int r;
++	int ret = -EINVAL;
++	int ino;
++	int i, r, c;
++
++	if (!dir_emit_dots(file, ctx))
++		return 0;
+ 
+ 	ti = get_tracefs(f_inode);
+ 	if (!(ti->flags & TRACEFS_EVENT_INODE))
+ 		return -EINVAL;
+ 
+-	if (WARN_ON_ONCE(file->private_data))
+-		return -EINVAL;
++	c = ctx->pos - 2;
+ 
+ 	idx = srcu_read_lock(&eventfs_srcu);
+ 
+ 	mutex_lock(&eventfs_mutex);
+ 	ei = READ_ONCE(ti->private);
++	if (ei && ei->is_freed)
++		ei = NULL;
+ 	mutex_unlock(&eventfs_mutex);
+ 
+-	if (!ei) {
+-		srcu_read_unlock(&eventfs_srcu, idx);
+-		return -EINVAL;
+-	}
+-
+-
+-	data = ei->data;
++	if (!ei)
++		goto out;
+ 
+-	dlist = kmalloc(sizeof(*dlist), GFP_KERNEL);
+-	if (!dlist) {
+-		srcu_read_unlock(&eventfs_srcu, idx);
+-		return -ENOMEM;
+-	}
++	/*
++	 * Need to create the dentries and inodes to have a consistent
++	 * inode number.
++	 */
++	ret = 0;
+ 
+-	inode_lock(parent->d_inode);
+-	list_for_each_entry_srcu(ei_child, &ei->children, list,
+-				 srcu_read_lock_held(&eventfs_srcu)) {
+-		d = create_dir_dentry(ei, ei_child, parent, false);
+-		if (d) {
+-			ret = add_dentries(&dentries, d, cnt);
+-			if (ret < 0)
+-				break;
+-			cnt++;
+-		}
+-	}
++	/* Start at 'c' to jump over already read entries */
++	for (i = c; i < ei->nr_entries; i++, ctx->pos++) {
++		void *cdata = ei->data;
+ 
+-	for (i = 0; i < ei->nr_entries; i++) {
+-		void *cdata = data;
+ 		entry = &ei->entries[i];
+ 		name = entry->name;
++
+ 		mutex_lock(&eventfs_mutex);
+-		/* If ei->is_freed, then the event itself may be too */
+-		if (!ei->is_freed)
+-			r = entry->callback(name, &mode, &cdata, &fops);
+-		else
+-			r = -1;
++		/* If ei->is_freed then just bail here, nothing more to do */
++		if (ei->is_freed) {
++			mutex_unlock(&eventfs_mutex);
++			goto out;
++		}
++		r = entry->callback(name, &mode, &cdata, &fops);
+ 		mutex_unlock(&eventfs_mutex);
+ 		if (r <= 0)
+ 			continue;
+-		d = create_file_dentry(ei, i, parent, name, mode, cdata, fops, false);
+-		if (d) {
+-			ret = add_dentries(&dentries, d, cnt);
+-			if (ret < 0)
+-				break;
+-			cnt++;
++
++		ino = EVENTFS_FILE_INODE_INO;
++
++		if (!dir_emit(ctx, name, strlen(name), ino, DT_REG))
++			goto out;
++	}
++
++	/* Subtract the skipped entries above */
++	c -= min((unsigned int)c, (unsigned int)ei->nr_entries);
++
++	list_for_each_entry_srcu(ei_child, &ei->children, list,
++				 srcu_read_lock_held(&eventfs_srcu)) {
++
++		if (c > 0) {
++			c--;
++			continue;
+ 		}
++
++		ctx->pos++;
++
++		if (ei_child->is_freed)
++			continue;
++
++		name = ei_child->name;
++
++		ino = eventfs_dir_ino(ei_child);
++
++		if (!dir_emit(ctx, name, strlen(name), ino, DT_DIR))
++			goto out_dec;
+ 	}
+-	inode_unlock(parent->d_inode);
++	ret = 1;
++ out:
+ 	srcu_read_unlock(&eventfs_srcu, idx);
+-	ret = dcache_dir_open(inode, file);
+ 
+-	/*
+-	 * dcache_dir_open() sets file->private_data to a dentry cursor.
+-	 * Need to save that but also save all the dentries that were
+-	 * opened by this function.
+-	 */
+-	dlist->cursor = file->private_data;
+-	dlist->dentries = dentries;
+-	file->private_data = dlist;
+ 	return ret;
+-}
+ 
+-/*
+- * This just sets the file->private_data back to the cursor and back.
+- */
+-static int dcache_readdir_wrapper(struct file *file, struct dir_context *ctx)
+-{
+-	struct dentry_list *dlist = file->private_data;
+-	int ret;
+-
+-	file->private_data = dlist->cursor;
+-	ret = dcache_readdir(file, ctx);
+-	dlist->cursor = file->private_data;
+-	file->private_data = dlist;
+-	return ret;
++ out_dec:
++	/* Incremented ctx->pos without adding something, reset it */
++	ctx->pos--;
++	goto out;
+ }
+ 
+ /**
+@@ -901,25 +669,10 @@ struct eventfs_inode *eventfs_create_dir(const char *name, struct eventfs_inode
+ 	if (!parent)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	ei = kzalloc(sizeof(*ei), GFP_KERNEL);
++	ei = alloc_ei(name);
+ 	if (!ei)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	ei->name = kstrdup_const(name, GFP_KERNEL);
+-	if (!ei->name) {
+-		kfree(ei);
+-		return ERR_PTR(-ENOMEM);
+-	}
+-
+-	if (size) {
+-		ei->d_children = kzalloc(sizeof(*ei->d_children) * size, GFP_KERNEL);
+-		if (!ei->d_children) {
+-			kfree_const(ei->name);
+-			kfree(ei);
+-			return ERR_PTR(-ENOMEM);
+-		}
+-	}
+-
+ 	ei->entries = entries;
+ 	ei->nr_entries = size;
+ 	ei->data = data;
+@@ -927,10 +680,8 @@ struct eventfs_inode *eventfs_create_dir(const char *name, struct eventfs_inode
+ 	INIT_LIST_HEAD(&ei->list);
+ 
+ 	mutex_lock(&eventfs_mutex);
+-	if (!parent->is_freed) {
++	if (!parent->is_freed)
+ 		list_add_tail(&ei->list, &parent->children);
+-		ei->d_parent = parent->dentry;
+-	}
+ 	mutex_unlock(&eventfs_mutex);
+ 
+ 	/* Was the parent freed? */
+@@ -970,28 +721,20 @@ struct eventfs_inode *eventfs_create_events_dir(const char *name, struct dentry
+ 	if (IS_ERR(dentry))
+ 		return ERR_CAST(dentry);
+ 
+-	ei = kzalloc(sizeof(*ei), GFP_KERNEL);
++	ei = alloc_ei(name);
+ 	if (!ei)
+-		goto fail_ei;
++		goto fail;
+ 
+ 	inode = tracefs_get_inode(dentry->d_sb);
+ 	if (unlikely(!inode))
+ 		goto fail;
+ 
+-	if (size) {
+-		ei->d_children = kzalloc(sizeof(*ei->d_children) * size, GFP_KERNEL);
+-		if (!ei->d_children)
+-			goto fail;
+-	}
+-
+-	ei->dentry = dentry;
++	// Note: we have a ref to the dentry from tracefs_start_creating()
++	ei->events_dir = dentry;
+ 	ei->entries = entries;
+ 	ei->nr_entries = size;
+ 	ei->is_events = 1;
+ 	ei->data = data;
+-	ei->name = kstrdup_const(name, GFP_KERNEL);
+-	if (!ei->name)
+-		goto fail;
+ 
+ 	/* Save the ownership of this directory */
+ 	uid = d_inode(dentry->d_parent)->i_uid;
+@@ -1022,11 +765,19 @@ struct eventfs_inode *eventfs_create_events_dir(const char *name, struct dentry
+ 	inode->i_op = &eventfs_root_dir_inode_operations;
+ 	inode->i_fop = &eventfs_file_operations;
+ 
+-	dentry->d_fsdata = ei;
++	dentry->d_fsdata = get_ei(ei);
+ 
+-	/* directory inodes start off with i_nlink == 2 (for "." entry) */
+-	inc_nlink(inode);
++	/*
++	 * Keep all eventfs directories with i_nlink == 1.
++	 * Due to the dynamic nature of the dentry creations and not
++	 * wanting to add a pointer to the parent eventfs_inode in the
++	 * eventfs_inode structure, keeping the i_nlink in sync with the
++	 * number of directories would cause too much complexity for
++	 * something not worth much. Keeping directory links at 1
++	 * tells userspace not to trust the link number.
++	 */
+ 	d_instantiate(dentry, inode);
++	/* The dentry of the "events" parent does keep track though */
+ 	inc_nlink(dentry->d_parent->d_inode);
+ 	fsnotify_mkdir(dentry->d_parent->d_inode, dentry);
+ 	tracefs_end_creating(dentry);
+@@ -1034,72 +785,11 @@ struct eventfs_inode *eventfs_create_events_dir(const char *name, struct dentry
+ 	return ei;
+ 
+  fail:
+-	kfree(ei->d_children);
+-	kfree(ei);
+- fail_ei:
++	free_ei(ei);
+ 	tracefs_failed_creating(dentry);
+ 	return ERR_PTR(-ENOMEM);
+ }
+ 
+-static LLIST_HEAD(free_list);
+-
+-static void eventfs_workfn(struct work_struct *work)
+-{
+-        struct eventfs_inode *ei, *tmp;
+-        struct llist_node *llnode;
+-
+-	llnode = llist_del_all(&free_list);
+-        llist_for_each_entry_safe(ei, tmp, llnode, llist) {
+-		/* This dput() matches the dget() from unhook_dentry() */
+-		for (int i = 0; i < ei->nr_entries; i++) {
+-			if (ei->d_children[i])
+-				dput(ei->d_children[i]);
+-		}
+-		/* This should only get here if it had a dentry */
+-		if (!WARN_ON_ONCE(!ei->dentry))
+-			dput(ei->dentry);
+-        }
+-}
+-
+-static DECLARE_WORK(eventfs_work, eventfs_workfn);
+-
+-static void free_rcu_ei(struct rcu_head *head)
+-{
+-	struct eventfs_inode *ei = container_of(head, struct eventfs_inode, rcu);
+-
+-	if (ei->dentry) {
+-		/* Do not free the ei until all references of dentry are gone */
+-		if (llist_add(&ei->llist, &free_list))
+-			queue_work(system_unbound_wq, &eventfs_work);
+-		return;
+-	}
+-
+-	/* If the ei doesn't have a dentry, neither should its children */
+-	for (int i = 0; i < ei->nr_entries; i++) {
+-		WARN_ON_ONCE(ei->d_children[i]);
+-	}
+-
+-	free_ei(ei);
+-}
+-
+-static void unhook_dentry(struct dentry *dentry)
+-{
+-	if (!dentry)
+-		return;
+-	/*
+-	 * Need to add a reference to the dentry that is expected by
+-	 * simple_recursive_removal(), which will include a dput().
+-	 */
+-	dget(dentry);
+-
+-	/*
+-	 * Also add a reference for the dput() in eventfs_workfn().
+-	 * That is required as that dput() will free the ei after
+-	 * the SRCU grace period is over.
+-	 */
+-	dget(dentry);
+-}
+-
+ /**
+  * eventfs_remove_rec - remove eventfs dir or file from list
+  * @ei: eventfs_inode to be removed.
+@@ -1112,8 +802,6 @@ static void eventfs_remove_rec(struct eventfs_inode *ei, int level)
+ {
+ 	struct eventfs_inode *ei_child;
+ 
+-	if (!ei)
+-		return;
+ 	/*
+ 	 * Check recursion depth. It should never be greater than 3:
+ 	 * 0 - events/
+@@ -1125,28 +813,11 @@ static void eventfs_remove_rec(struct eventfs_inode *ei, int level)
+ 		return;
+ 
+ 	/* search for nested folders or files */
+-	list_for_each_entry_srcu(ei_child, &ei->children, list,
+-				 lockdep_is_held(&eventfs_mutex)) {
+-		/* Children only have dentry if parent does */
+-		WARN_ON_ONCE(ei_child->dentry && !ei->dentry);
++	list_for_each_entry(ei_child, &ei->children, list)
+ 		eventfs_remove_rec(ei_child, level + 1);
+-	}
+-
+-
+-	ei->is_freed = 1;
+-
+-	for (int i = 0; i < ei->nr_entries; i++) {
+-		if (ei->d_children[i]) {
+-			/* Children only have dentry if parent does */
+-			WARN_ON_ONCE(!ei->dentry);
+-			unhook_dentry(ei->d_children[i]);
+-		}
+-	}
+ 
+-	unhook_dentry(ei->dentry);
+-
+-	list_del_rcu(&ei->list);
+-	call_srcu(&eventfs_srcu, &ei->rcu, free_rcu_ei);
++	list_del(&ei->list);
++	free_ei(ei);
+ }
+ 
+ /**
+@@ -1157,22 +828,12 @@ static void eventfs_remove_rec(struct eventfs_inode *ei, int level)
+  */
+ void eventfs_remove_dir(struct eventfs_inode *ei)
+ {
+-	struct dentry *dentry;
+-
+ 	if (!ei)
+ 		return;
+ 
+ 	mutex_lock(&eventfs_mutex);
+-	dentry = ei->dentry;
+ 	eventfs_remove_rec(ei, 0);
+ 	mutex_unlock(&eventfs_mutex);
+-
+-	/*
+-	 * If any of the ei children has a dentry, then the ei itself
+-	 * must have a dentry.
+-	 */
+-	if (dentry)
+-		simple_recursive_removal(dentry, NULL);
+ }
+ 
+ /**
+@@ -1185,7 +846,11 @@ void eventfs_remove_events_dir(struct eventfs_inode *ei)
+ {
+ 	struct dentry *dentry;
+ 
+-	dentry = ei->dentry;
++	dentry = ei->events_dir;
++	if (!dentry)
++		return;
++
++	ei->events_dir = NULL;
+ 	eventfs_remove_dir(ei);
+ 
+ 	/*
+@@ -1195,5 +860,6 @@ void eventfs_remove_events_dir(struct eventfs_inode *ei)
+ 	 * sticks around while the other ei->dentry are created
+ 	 * and destroyed dynamically.
+ 	 */
++	d_invalidate(dentry);
+ 	dput(dentry);
+ }
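
The event_inode.c rework above replaces eagerly created, manually refcounted
dentries with plain kref lifetime on the eventfs_inode itself: dentries are
now created on demand by lookup, pin their ei through d_fsdata, and drop that
reference from eventfs_d_release(). The get_ei()/put_ei() helpers this relies
on are introduced earlier in the patch; a minimal sketch of the shape such
helpers take (simplified; the real free path also has to respect the SRCU
grace period):

	static struct eventfs_inode *get_ei(struct eventfs_inode *ei)
	{
		if (ei)
			kref_get(&ei->kref);
		return ei;
	}

	static void release_ei(struct kref *ref)
	{
		struct eventfs_inode *ei =
			container_of(ref, struct eventfs_inode, kref);

		kfree_const(ei->name);
		kfree(ei->entry_attrs);
		kfree(ei);
	}

	static void put_ei(struct eventfs_inode *ei)
	{
		if (ei)
			kref_put(&ei->kref, release_ei);
	}
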
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index e1b172c0e091a..d65ffad4c327c 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -38,8 +38,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
+ 	if (!ti)
+ 		return NULL;
+ 
+-	ti->flags = 0;
+-
+ 	return &ti->vfs_inode;
+ }
+ 
+@@ -379,21 +377,30 @@ static const struct super_operations tracefs_super_operations = {
+ 	.show_options	= tracefs_show_options,
+ };
+ 
+-static void tracefs_dentry_iput(struct dentry *dentry, struct inode *inode)
++/*
++ * It would be cleaner if eventfs had its own dentry ops.
++ *
++ * Note that d_revalidate is called potentially under RCU,
++ * so it can't take the eventfs mutex etc. It's fine - if
++ * we open a file just as it's marked dead, things will
++ * still work just fine, and just see the old stale case.
++ */
++static void tracefs_d_release(struct dentry *dentry)
+ {
+-	struct tracefs_inode *ti;
++	if (dentry->d_fsdata)
++		eventfs_d_release(dentry);
++}
+ 
+-	if (!dentry || !inode)
+-		return;
++static int tracefs_d_revalidate(struct dentry *dentry, unsigned int flags)
++{
++	struct eventfs_inode *ei = dentry->d_fsdata;
+ 
+-	ti = get_tracefs(inode);
+-	if (ti && ti->flags & TRACEFS_EVENT_INODE)
+-		eventfs_set_ei_status_free(ti, dentry);
+-	iput(inode);
++	return !(ei && ei->is_freed);
+ }
+ 
+ static const struct dentry_operations tracefs_dentry_operations = {
+-	.d_iput = tracefs_dentry_iput,
++	.d_revalidate = tracefs_d_revalidate,
++	.d_release = tracefs_d_release,
+ };
+ 
+ static int trace_fill_super(struct super_block *sb, void *data, int silent)
+@@ -497,75 +504,6 @@ struct dentry *tracefs_end_creating(struct dentry *dentry)
+ 	return dentry;
+ }
+ 
+-/**
+- * eventfs_start_creating - start the process of creating a dentry
+- * @name: Name of the file created for the dentry
+- * @parent: The parent dentry where this dentry will be created
+- *
+- * This is a simple helper function for the dynamically created eventfs
+- * files. When the directory of the eventfs files are accessed, their
+- * dentries are created on the fly. This function is used to start that
+- * process.
+- */
+-struct dentry *eventfs_start_creating(const char *name, struct dentry *parent)
+-{
+-	struct dentry *dentry;
+-	int error;
+-
+-	/* Must always have a parent. */
+-	if (WARN_ON_ONCE(!parent))
+-		return ERR_PTR(-EINVAL);
+-
+-	error = simple_pin_fs(&trace_fs_type, &tracefs_mount,
+-			      &tracefs_mount_count);
+-	if (error)
+-		return ERR_PTR(error);
+-
+-	if (unlikely(IS_DEADDIR(parent->d_inode)))
+-		dentry = ERR_PTR(-ENOENT);
+-	else
+-		dentry = lookup_one_len(name, parent, strlen(name));
+-
+-	if (!IS_ERR(dentry) && dentry->d_inode) {
+-		dput(dentry);
+-		dentry = ERR_PTR(-EEXIST);
+-	}
+-
+-	if (IS_ERR(dentry))
+-		simple_release_fs(&tracefs_mount, &tracefs_mount_count);
+-
+-	return dentry;
+-}
+-
+-/**
+- * eventfs_failed_creating - clean up a failed eventfs dentry creation
+- * @dentry: The dentry to clean up
+- *
+- * If after calling eventfs_start_creating(), a failure is detected, the
+- * resources created by eventfs_start_creating() needs to be cleaned up. In
+- * that case, this function should be called to perform that clean up.
+- */
+-struct dentry *eventfs_failed_creating(struct dentry *dentry)
+-{
+-	dput(dentry);
+-	simple_release_fs(&tracefs_mount, &tracefs_mount_count);
+-	return NULL;
+-}
+-
+-/**
+- * eventfs_end_creating - Finish the process of creating a eventfs dentry
+- * @dentry: The dentry that has successfully been created.
+- *
+- * This function is currently just a place holder to match
+- * eventfs_start_creating(). In case any synchronization needs to be added,
+- * this function will be used to implement that without having to modify
+- * the callers of eventfs_start_creating().
+- */
+-struct dentry *eventfs_end_creating(struct dentry *dentry)
+-{
+-	return dentry;
+-}
+-
+ /* Find the inode that this will use for default */
+ static struct inode *instance_inode(struct dentry *parent, struct inode *inode)
+ {
+@@ -779,7 +717,11 @@ static void init_once(void *foo)
+ {
+ 	struct tracefs_inode *ti = (struct tracefs_inode *) foo;
+ 
++	/* inode_init_once() calls memset() on the vfs_inode portion */
+ 	inode_init_once(&ti->vfs_inode);
++
++	/* Zero out the rest */
++	memset_after(ti, 0, vfs_inode);
+ }
+ 
+ static int __init tracefs_init(void)
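
The init_once() change depends on the struct layout change in internal.h just
below: with vfs_inode placed first, memset_after(ti, 0, vfs_inode) (from
linux/string.h) zeroes every byte of the structure that follows the named
member. Expanded by hand, it is roughly:

	/* zero flags, private, and anything later added after vfs_inode */
	memset((char *)ti + offsetofend(struct tracefs_inode, vfs_inode), 0,
	       sizeof(*ti) - offsetofend(struct tracefs_inode, vfs_inode));

This keeps recycled tracefs inodes from leaking stale flags or a stale
private pointer out of the slab cache.
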
+diff --git a/fs/tracefs/internal.h b/fs/tracefs/internal.h
+index af3b43fa75cdb..beb3dcd0e4342 100644
+--- a/fs/tracefs/internal.h
++++ b/fs/tracefs/internal.h
+@@ -11,9 +11,10 @@ enum {
+ };
+ 
+ struct tracefs_inode {
++	struct inode            vfs_inode;
++	/* The below gets initialized with memset_after(ti, 0, vfs_inode) */
+ 	unsigned long           flags;
+ 	void                    *private;
+-	struct inode            vfs_inode;
+ };
+ 
+ /*
+@@ -31,42 +32,37 @@ struct eventfs_attr {
+ /*
+  * struct eventfs_inode - hold the properties of the eventfs directories.
+  * @list:	link list into the parent directory
++ * @rcu:	Union with @list for freeing
++ * @children:	link list into the child eventfs_inode
+  * @entries:	the array of entries representing the files in the directory
+  * @name:	the name of the directory to create
+- * @children:	link list into the child eventfs_inode
+- * @dentry:     the dentry of the directory
+- * @d_parent:   pointer to the parent's dentry
+- * @d_children: The array of dentries to represent the files when created
++ * @events_dir: the dentry of the events directory
+  * @entry_attrs: Saved mode and ownership of the @d_children
+- * @attr:	Saved mode and ownership of eventfs_inode itself
+  * @data:	The private data to pass to the callbacks
++ * @attr:	Saved mode and ownership of eventfs_inode itself
+  * @is_freed:	Flag set if the eventfs is on its way to be freed
+  *                Note if is_freed is set, then dentry is corrupted.
++ * @is_events:	Flag set for only the top level "events" directory
+  * @nr_entries: The number of items in @entries
++ * @ino:	The saved inode number
+  */
+ struct eventfs_inode {
+-	struct list_head		list;
++	union {
++		struct list_head	list;
++		struct rcu_head		rcu;
++	};
++	struct list_head		children;
+ 	const struct eventfs_entry	*entries;
+ 	const char			*name;
+-	struct list_head		children;
+-	struct dentry			*dentry; /* Check is_freed to access */
+-	struct dentry			*d_parent;
+-	struct dentry			**d_children;
++	struct dentry			*events_dir;
+ 	struct eventfs_attr		*entry_attrs;
+-	struct eventfs_attr		attr;
+ 	void				*data;
+-	/*
+-	 * Union - used for deletion
+-	 * @llist:	for calling dput() if needed after RCU
+-	 * @rcu:	eventfs_inode to delete in RCU
+-	 */
+-	union {
+-		struct llist_node	llist;
+-		struct rcu_head		rcu;
+-	};
++	struct eventfs_attr		attr;
++	struct kref			kref;
+ 	unsigned int			is_freed:1;
+ 	unsigned int			is_events:1;
+ 	unsigned int			nr_entries:30;
++	unsigned int			ino;
+ };
+ 
+ static inline struct tracefs_inode *get_tracefs(const struct inode *inode)
+@@ -78,9 +74,7 @@ struct dentry *tracefs_start_creating(const char *name, struct dentry *parent);
+ struct dentry *tracefs_end_creating(struct dentry *dentry);
+ struct dentry *tracefs_failed_creating(struct dentry *dentry);
+ struct inode *tracefs_get_inode(struct super_block *sb);
+-struct dentry *eventfs_start_creating(const char *name, struct dentry *parent);
+-struct dentry *eventfs_failed_creating(struct dentry *dentry);
+-struct dentry *eventfs_end_creating(struct dentry *dentry);
+-void eventfs_set_ei_status_free(struct tracefs_inode *ti, struct dentry *dentry);
++
++void eventfs_d_release(struct dentry *dentry);
+ 
+ #endif /* _TRACEFS_INTERNAL_H */
+diff --git a/fs/zonefs/file.c b/fs/zonefs/file.c
+index b2c9b35df8f76..897b12ec61e29 100644
+--- a/fs/zonefs/file.c
++++ b/fs/zonefs/file.c
+@@ -348,7 +348,12 @@ static int zonefs_file_write_dio_end_io(struct kiocb *iocb, ssize_t size,
+ 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
+ 
+ 	if (error) {
+-		zonefs_io_error(inode, true);
++		/*
++		 * For Sync IOs, error recovery is called from
++		 * zonefs_file_dio_write().
++		 */
++		if (!is_sync_kiocb(iocb))
++			zonefs_io_error(inode, true);
+ 		return error;
+ 	}
+ 
+@@ -491,6 +496,14 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
+ 			ret = -EINVAL;
+ 			goto inode_unlock;
+ 		}
++		/*
++		 * Advance the zone write pointer offset. This assumes that the
++		 * IO will succeed, which is OK to do because we do not allow
++		 * partial writes (IOMAP_DIO_PARTIAL is not set) and if the IO
++		 * fails, the error path will correct the write pointer offset.
++		 */
++		z->z_wpoffset += count;
++		zonefs_inode_account_active(inode);
+ 		mutex_unlock(&zi->i_truncate_mutex);
+ 	}
+ 
+@@ -504,20 +517,19 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from)
+ 	if (ret == -ENOTBLK)
+ 		ret = -EBUSY;
+ 
+-	if (zonefs_zone_is_seq(z) &&
+-	    (ret > 0 || ret == -EIOCBQUEUED)) {
+-		if (ret > 0)
+-			count = ret;
+-
+-		/*
+-		 * Update the zone write pointer offset assuming the write
+-		 * operation succeeded. If it did not, the error recovery path
+-		 * will correct it. Also do active seq file accounting.
+-		 */
+-		mutex_lock(&zi->i_truncate_mutex);
+-		z->z_wpoffset += count;
+-		zonefs_inode_account_active(inode);
+-		mutex_unlock(&zi->i_truncate_mutex);
++	/*
++	 * For a failed IO or partial completion, trigger error recovery
++	 * to update the zone write pointer offset to a correct value.
++	 * For asynchronous IOs, zonefs_file_write_dio_end_io() may already
++	 * have executed error recovery if the IO already completed when we
++	 * reach here. However, we cannot know that and execute error recovery
++	 * again (that will not change anything).
++	 */
++	if (zonefs_zone_is_seq(z)) {
++		if (ret > 0 && ret != count)
++			ret = -EIO;
++		if (ret < 0 && ret != -EIOCBQUEUED)
++			zonefs_io_error(inode, true);
+ 	}
+ 
+ inode_unlock:
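
Taken together, the two zonefs/file.c hunks change the ordering for
sequential-zone direct writes: the write pointer offset is advanced up front,
and every failure path afterwards funnels through zonefs_io_error() to
re-derive the correct offset from the device. A condensed sketch of the
resulting flow:

	/* under zi->i_truncate_mutex, before issuing the IO */
	z->z_wpoffset += count;			/* assume success */
	zonefs_inode_account_active(inode);

	ret = issue_dio_write(iocb, from);	/* stand-in for the iomap call */

	if (zonefs_zone_is_seq(z)) {
		if (ret > 0 && ret != count)	/* partial writes not allowed */
			ret = -EIO;
		if (ret < 0 && ret != -EIOCBQUEUED)
			zonefs_io_error(inode, true);	/* fixes z_wpoffset */
	}

The sync-IO case is excluded from end_io error handling precisely so that
recovery runs exactly once, here.
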
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index e6a75401677de..a46fd023ecdea 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -246,16 +246,18 @@ static void zonefs_inode_update_mode(struct inode *inode)
+ 	z->z_mode = inode->i_mode;
+ }
+ 
+-struct zonefs_ioerr_data {
+-	struct inode	*inode;
+-	bool		write;
+-};
+-
+ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 			      void *data)
+ {
+-	struct zonefs_ioerr_data *err = data;
+-	struct inode *inode = err->inode;
++	struct blk_zone *z = data;
++
++	*z = *zone;
++	return 0;
++}
++
++static void zonefs_handle_io_error(struct inode *inode, struct blk_zone *zone,
++				   bool write)
++{
+ 	struct zonefs_zone *z = zonefs_inode_zone(inode);
+ 	struct super_block *sb = inode->i_sb;
+ 	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+@@ -270,8 +272,8 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 	data_size = zonefs_check_zone_condition(sb, z, zone);
+ 	isize = i_size_read(inode);
+ 	if (!(z->z_flags & (ZONEFS_ZONE_READONLY | ZONEFS_ZONE_OFFLINE)) &&
+-	    !err->write && isize == data_size)
+-		return 0;
++	    !write && isize == data_size)
++		return;
+ 
+ 	/*
+ 	 * At this point, we detected either a bad zone or an inconsistency
+@@ -292,7 +294,7 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 	 * In all cases, warn about inode size inconsistency and handle the
+ 	 * IO error according to the zone condition and to the mount options.
+ 	 */
+-	if (zonefs_zone_is_seq(z) && isize != data_size)
++	if (isize != data_size)
+ 		zonefs_warn(sb,
+ 			    "inode %lu: invalid size %lld (should be %lld)\n",
+ 			    inode->i_ino, isize, data_size);
+@@ -352,8 +354,6 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 	zonefs_i_size_write(inode, data_size);
+ 	z->z_wpoffset = data_size;
+ 	zonefs_inode_account_active(inode);
+-
+-	return 0;
+ }
+ 
+ /*
+@@ -367,23 +367,25 @@ void __zonefs_io_error(struct inode *inode, bool write)
+ {
+ 	struct zonefs_zone *z = zonefs_inode_zone(inode);
+ 	struct super_block *sb = inode->i_sb;
+-	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+ 	unsigned int noio_flag;
+-	unsigned int nr_zones = 1;
+-	struct zonefs_ioerr_data err = {
+-		.inode = inode,
+-		.write = write,
+-	};
++	struct blk_zone zone;
+ 	int ret;
+ 
+ 	/*
+-	 * The only files that have more than one zone are conventional zone
+-	 * files with aggregated conventional zones, for which the inode zone
+-	 * size is always larger than the device zone size.
++	 * Conventional zones have no write pointer and cannot become read-only
++	 * or offline. So simply fake a report for a single or aggregated zone
++	 * and let zonefs_handle_io_error() correct the zone inode information
++	 * according to the mount options.
+ 	 */
+-	if (z->z_size > bdev_zone_sectors(sb->s_bdev))
+-		nr_zones = z->z_size >>
+-			(sbi->s_zone_sectors_shift + SECTOR_SHIFT);
++	if (!zonefs_zone_is_seq(z)) {
++		zone.start = z->z_sector;
++		zone.len = z->z_size >> SECTOR_SHIFT;
++		zone.wp = zone.start + zone.len;
++		zone.type = BLK_ZONE_TYPE_CONVENTIONAL;
++		zone.cond = BLK_ZONE_COND_NOT_WP;
++		zone.capacity = zone.len;
++		goto handle_io_error;
++	}
+ 
+ 	/*
+ 	 * Memory allocations in blkdev_report_zones() can trigger a memory
+@@ -394,12 +396,20 @@ void __zonefs_io_error(struct inode *inode, bool write)
+ 	 * the GFP_NOIO context avoids both problems.
+ 	 */
+ 	noio_flag = memalloc_noio_save();
+-	ret = blkdev_report_zones(sb->s_bdev, z->z_sector, nr_zones,
+-				  zonefs_io_error_cb, &err);
+-	if (ret != nr_zones)
++	ret = blkdev_report_zones(sb->s_bdev, z->z_sector, 1,
++				  zonefs_io_error_cb, &zone);
++	memalloc_noio_restore(noio_flag);
++
++	if (ret != 1) {
+ 		zonefs_err(sb, "Get inode %lu zone information failed %d\n",
+ 			   inode->i_ino, ret);
+-	memalloc_noio_restore(noio_flag);
++		zonefs_warn(sb, "remounting filesystem read-only\n");
++		sb->s_flags |= SB_RDONLY;
++		return;
++	}
++
++handle_io_error:
++	zonefs_handle_io_error(inode, &zone, write);
+ }
+ 
+ static struct kmem_cache *zonefs_inode_cachep;
+diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
+index ae12696ec492c..2ad261082bba5 100644
+--- a/include/linux/backing-dev-defs.h
++++ b/include/linux/backing-dev-defs.h
+@@ -141,8 +141,6 @@ struct bdi_writeback {
+ 	struct delayed_work dwork;	/* work item used for writeback */
+ 	struct delayed_work bw_dwork;	/* work item used for bandwidth estimate */
+ 
+-	unsigned long dirty_sleep;	/* last wait */
+-
+ 	struct list_head bdi_node;	/* anchored at bdi->wb_list */
+ 
+ #ifdef CONFIG_CGROUP_WRITEBACK
+@@ -179,6 +177,11 @@ struct backing_dev_info {
+ 	 * any dirty wbs, which is depended upon by bdi_has_dirty().
+ 	 */
+ 	atomic_long_t tot_write_bandwidth;
++	/*
++	 * Jiffies when last process was dirty throttled on this bdi. Used by
++	 * blk-wbt.
++	 */
++	unsigned long last_bdp_sleep;
+ 
+ 	struct bdi_writeback wb;  /* the root writeback info for this bdi */
+ 	struct list_head wb_list; /* list of all wbs */
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 2ceba3fe4ec16..687aaaff807ec 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -64,6 +64,26 @@
+ 		__builtin_unreachable();	\
+ 	} while (0)
+ 
++/*
++ * GCC 'asm goto' with outputs miscompiles certain code sequences:
++ *
++ *   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113921
++ *
++ * Work around it via the same compiler barrier quirk that we used
++ * to use for the old 'asm goto' workaround.
++ *
++ * Also, always mark such 'asm goto' statements as volatile: all
++ * asm goto statements are supposed to be volatile as per the
++ * documentation, but some versions of gcc didn't actually do
++ * that for asms with outputs:
++ *
++ *    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98619
++ */
++#ifdef CONFIG_GCC_ASM_GOTO_OUTPUT_WORKAROUND
++#define asm_goto_output(x...) \
++	do { asm volatile goto(x); asm (""); } while (0)
++#endif
++
+ #if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP)
+ #define __HAVE_BUILTIN_BSWAP32__
+ #define __HAVE_BUILTIN_BSWAP64__
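
Concretely, with CONFIG_GCC_ASM_GOTO_OUTPUT_WORKAROUND enabled, each
asm_goto_output() site expands so that a trailing empty asm statement follows
the asm goto, acting as a compiler barrier. Unrolled for a hypothetical use:

	do {
		/* the original 'asm goto' with an output */
		asm volatile goto("..." : "=r"(val) : : : out_of_line);
		/* empty asm: the barrier that avoids GCC PR 113921 */
		asm ("");
	} while (0);

The matching Kconfig entry (GCC_ASM_GOTO_OUTPUT_WORKAROUND, see the
init/Kconfig hunk below) turns this on only for the GCC releases that still
carry the bug.
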
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index 6f1ca49306d2f..0caf354cb94b5 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -362,8 +362,15 @@ struct ftrace_likely_data {
+ #define __member_size(p)	__builtin_object_size(p, 1)
+ #endif
+ 
+-#ifndef asm_volatile_goto
+-#define asm_volatile_goto(x...) asm goto(x)
++/*
++ * Some versions of gcc do not mark 'asm goto' volatile:
++ *
++ *  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103979
++ *
++ * We do it here by hand, because it doesn't hurt.
++ */
++#ifndef asm_goto_output
++#define asm_goto_output(x...) asm volatile goto(x)
+ #endif
+ 
+ #ifdef CONFIG_CC_HAS_ASM_INLINE
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index f2878b3614634..725e7eca99772 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -800,6 +800,18 @@ static inline struct gpio_chip *gpiod_to_chip(const struct gpio_desc *desc)
+ 	return ERR_PTR(-ENODEV);
+ }
+ 
++static inline int gpio_device_get_base(struct gpio_device *gdev)
++{
++	WARN_ON(1);
++	return -ENODEV;
++}
++
++static inline struct gpio_device *gpiod_to_gpio_device(struct gpio_desc *desc)
++{
++	WARN_ON(1);
++	return ERR_PTR(-ENODEV);
++}
++
+ static inline int gpiochip_lock_as_irq(struct gpio_chip *gc,
+ 				       unsigned int offset)
+ {
+diff --git a/include/linux/iio/adc/ad_sigma_delta.h b/include/linux/iio/adc/ad_sigma_delta.h
+index 7852f6c9a714c..719cf9cc6e1ac 100644
+--- a/include/linux/iio/adc/ad_sigma_delta.h
++++ b/include/linux/iio/adc/ad_sigma_delta.h
+@@ -8,6 +8,8 @@
+ #ifndef __AD_SIGMA_DELTA_H__
+ #define __AD_SIGMA_DELTA_H__
+ 
++#include <linux/iio/iio.h>
++
+ enum ad_sigma_delta_mode {
+ 	AD_SD_MODE_CONTINUOUS = 0,
+ 	AD_SD_MODE_SINGLE = 1,
+@@ -99,7 +101,7 @@ struct ad_sigma_delta {
+ 	 * 'rx_buf' is up to 32 bits per sample + 64 bit timestamp,
+ 	 * rounded to 16 bytes to take into account padding.
+ 	 */
+-	uint8_t				tx_buf[4] ____cacheline_aligned;
++	uint8_t				tx_buf[4] __aligned(IIO_DMA_MINALIGN);
+ 	uint8_t				rx_buf[16] __aligned(8);
+ };
+ 
+diff --git a/include/linux/iio/common/st_sensors.h b/include/linux/iio/common/st_sensors.h
+index 607c3a89a6471..f9ae5cdd884f5 100644
+--- a/include/linux/iio/common/st_sensors.h
++++ b/include/linux/iio/common/st_sensors.h
+@@ -258,9 +258,9 @@ struct st_sensor_data {
+ 	bool hw_irq_trigger;
+ 	s64 hw_timestamp;
+ 
+-	char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] ____cacheline_aligned;
+-
+ 	struct mutex odr_lock;
++
++	char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] __aligned(IIO_DMA_MINALIGN);
+ };
+ 
+ #ifdef CONFIG_IIO_BUFFER
+diff --git a/include/linux/iio/imu/adis.h b/include/linux/iio/imu/adis.h
+index dc9ea299e0885..8898966bc0f08 100644
+--- a/include/linux/iio/imu/adis.h
++++ b/include/linux/iio/imu/adis.h
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/spi/spi.h>
+ #include <linux/interrupt.h>
++#include <linux/iio/iio.h>
+ #include <linux/iio/types.h>
+ 
+ #define ADIS_WRITE_REG(reg) ((0x80 | (reg)))
+@@ -131,7 +132,7 @@ struct adis {
+ 	unsigned long		irq_flag;
+ 	void			*buffer;
+ 
+-	u8			tx[10] ____cacheline_aligned;
++	u8			tx[10] __aligned(IIO_DMA_MINALIGN);
+ 	u8			rx[4];
+ };
+ 
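The three IIO header hunks (ad_sigma_delta, st_sensors, adis) apply the same
fix: buffers handed to the SPI core for DMA must not share cachelines with
fields the CPU writes concurrently, and ____cacheline_aligned can be
insufficient where the DMA alignment requirement exceeds the L1 cacheline
size. IIO_DMA_MINALIGN (hence the new linux/iio/iio.h includes) is the
alignment IIO provides for this. The resulting pattern, in a hypothetical
driver struct:

	struct my_sensor_state {		/* illustrative only */
		struct spi_device	*spi;
		struct mutex		lock;
		/*
		 * DMA-safe area: aligned to start on a fresh boundary and
		 * kept at the end so no CPU-written field shares its lines.
		 */
		u8			tx_buf[4] __aligned(IIO_DMA_MINALIGN);
		u8			rx_buf[4];
	};

This is also why st_sensor_data moves buffer_data below odr_lock: the
DMA-safe region must come last.
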
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 472cb16458b0b..ed3d517460f8a 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -311,9 +311,9 @@ LSM_HOOK(int, 0, socket_getsockopt, struct socket *sock, int level, int optname)
+ LSM_HOOK(int, 0, socket_setsockopt, struct socket *sock, int level, int optname)
+ LSM_HOOK(int, 0, socket_shutdown, struct socket *sock, int how)
+ LSM_HOOK(int, 0, socket_sock_rcv_skb, struct sock *sk, struct sk_buff *skb)
+-LSM_HOOK(int, 0, socket_getpeersec_stream, struct socket *sock,
++LSM_HOOK(int, -ENOPROTOOPT, socket_getpeersec_stream, struct socket *sock,
+ 	 sockptr_t optval, sockptr_t optlen, unsigned int len)
+-LSM_HOOK(int, 0, socket_getpeersec_dgram, struct socket *sock,
++LSM_HOOK(int, -ENOPROTOOPT, socket_getpeersec_dgram, struct socket *sock,
+ 	 struct sk_buff *skb, u32 *secid)
+ LSM_HOOK(int, 0, sk_alloc_security, struct sock *sk, int family, gfp_t priority)
+ LSM_HOOK(void, LSM_RET_VOID, sk_free_security, struct sock *sk)
+diff --git a/include/linux/mman.h b/include/linux/mman.h
+index 40d94411d4920..dc7048824be81 100644
+--- a/include/linux/mman.h
++++ b/include/linux/mman.h
+@@ -156,6 +156,7 @@ calc_vm_flag_bits(unsigned long flags)
+ 	return _calc_vm_trans(flags, MAP_GROWSDOWN,  VM_GROWSDOWN ) |
+ 	       _calc_vm_trans(flags, MAP_LOCKED,     VM_LOCKED    ) |
+ 	       _calc_vm_trans(flags, MAP_SYNC,	     VM_SYNC      ) |
++	       _calc_vm_trans(flags, MAP_STACK,	     VM_NOHUGEPAGE) |
+ 	       arch_calc_vm_flag_bits(flags);
+ }
+ 
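_calc_vm_trans() translates one flag bit into another when set, so with this
hunk a MAP_STACK mmap() now also marks the resulting VMA VM_NOHUGEPAGE. A
rough equivalent of what the translation does (sketch, not the kernel macro,
which also scales between differing bit positions):

	/* return 'to' iff bit 'from' is set in 'flags' */
	static inline unsigned long calc_trans(unsigned long flags,
					       unsigned long from,
					       unsigned long to)
	{
		return (flags & from) ? to : 0;
	}
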
+diff --git a/include/linux/netfilter/ipset/ip_set.h b/include/linux/netfilter/ipset/ip_set.h
+index e8c350a3ade15..e9f4f845d760a 100644
+--- a/include/linux/netfilter/ipset/ip_set.h
++++ b/include/linux/netfilter/ipset/ip_set.h
+@@ -186,6 +186,8 @@ struct ip_set_type_variant {
+ 	/* Return true if "b" set is the same as "a"
+ 	 * according to the create set parameters */
+ 	bool (*same_set)(const struct ip_set *a, const struct ip_set *b);
++	/* Cancel ongoing garbage collectors before destroying the set */
++	void (*cancel_gc)(struct ip_set *set);
+ 	/* Region-locking is used */
+ 	bool region_lock;
+ };
+@@ -242,6 +244,8 @@ extern void ip_set_type_unregister(struct ip_set_type *set_type);
+ 
+ /* A generic IP set */
+ struct ip_set {
++	/* For call_rcu in destroy */
++	struct rcu_head rcu;
+ 	/* The name of the set */
+ 	char name[IPSET_MAXNAMELEN];
+ 	/* Lock protecting the set data */
+diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
+index eaaef3ffec221..90507d4afcd6d 100644
+--- a/include/linux/ptrace.h
++++ b/include/linux/ptrace.h
+@@ -393,6 +393,10 @@ static inline void user_single_step_report(struct pt_regs *regs)
+ #define current_user_stack_pointer() user_stack_pointer(current_pt_regs())
+ #endif
+ 
++#ifndef exception_ip
++#define exception_ip(x) instruction_pointer(x)
++#endif
++
+ extern int task_current_syscall(struct task_struct *target, struct syscall_info *info);
+ 
+ extern void sigaction_compat_abi(struct k_sigaction *act, struct k_sigaction *oact);
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index 89f7b6c63598c..f43aca7f3b01e 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -748,8 +748,17 @@ struct uart_driver {
+ 
+ void uart_write_wakeup(struct uart_port *port);
+ 
+-#define __uart_port_tx(uport, ch, tx_ready, put_char, tx_done, for_test,      \
+-		for_post)						      \
++/**
++ * enum UART_TX_FLAGS -- flags for uart_port_tx_flags()
++ *
++ * @UART_TX_NOSTOP: don't call port->ops->stop_tx() on empty buffer
++ */
++enum UART_TX_FLAGS {
++	UART_TX_NOSTOP = BIT(0),
++};
++
++#define __uart_port_tx(uport, ch, flags, tx_ready, put_char, tx_done,	      \
++		       for_test, for_post)				      \
+ ({									      \
+ 	struct uart_port *__port = (uport);				      \
+ 	struct circ_buf *xmit = &__port->state->xmit;			      \
+@@ -777,7 +786,7 @@ void uart_write_wakeup(struct uart_port *port);
+ 	if (pending < WAKEUP_CHARS) {					      \
+ 		uart_write_wakeup(__port);				      \
+ 									      \
+-		if (pending == 0)					      \
++		if (!((flags) & UART_TX_NOSTOP) && pending == 0)	      \
+ 			__port->ops->stop_tx(__port);			      \
+ 	}								      \
+ 									      \
+@@ -812,7 +821,7 @@ void uart_write_wakeup(struct uart_port *port);
+  */
+ #define uart_port_tx_limited(port, ch, count, tx_ready, put_char, tx_done) ({ \
+ 	unsigned int __count = (count);					      \
+-	__uart_port_tx(port, ch, tx_ready, put_char, tx_done, __count,	      \
++	__uart_port_tx(port, ch, 0, tx_ready, put_char, tx_done, __count,     \
+ 			__count--);					      \
+ })
+ 
+@@ -826,8 +835,21 @@ void uart_write_wakeup(struct uart_port *port);
+  * See uart_port_tx_limited() for more details.
+  */
+ #define uart_port_tx(port, ch, tx_ready, put_char)			\
+-	__uart_port_tx(port, ch, tx_ready, put_char, ({}), true, ({}))
++	__uart_port_tx(port, ch, 0, tx_ready, put_char, ({}), true, ({}))
++
+ 
++/**
++ * uart_port_tx_flags -- transmit helper for uart_port with flags
++ * @port: uart port
++ * @ch: variable to store a character to be written to the HW
++ * @flags: %UART_TX_NOSTOP or similar
++ * @tx_ready: can HW accept more data function
++ * @put_char: function to write a character
++ *
++ * See uart_port_tx_limited() for more details.
++ */
++#define uart_port_tx_flags(port, ch, flags, tx_ready, put_char)		\
++	__uart_port_tx(port, ch, flags, tx_ready, put_char, ({}), true, ({}))
+ /*
+  * Baud rate helpers.
+  */
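
A driver that must keep the transmitter enabled even when the circular buffer
drains (for instance, when handing the tail of a transfer to DMA) would use
the new variant like this (hypothetical driver; the two helpers are assumed):

	static void foo_handle_tx(struct uart_port *port)
	{
		u8 ch;

		/* UART_TX_NOSTOP: skip ->stop_tx() on an empty buffer */
		uart_port_tx_flags(port, ch, UART_TX_NOSTOP,
				   foo_tx_fifo_has_room(port),
				   foo_write_fifo(port, ch));
	}

uart_port_tx() keeps its old semantics, now spelled as the flags variant with
a 0 flags argument.
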
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 962f0c501111b..340ad43971e47 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -97,9 +97,6 @@ struct tls_sw_context_tx {
+ 	struct tls_rec *open_rec;
+ 	struct list_head tx_list;
+ 	atomic_t encrypt_pending;
+-	/* protect crypto_wait with encrypt_pending */
+-	spinlock_t encrypt_compl_lock;
+-	int async_notify;
+ 	u8 async_capable:1;
+ 
+ #define BIT_TX_SCHEDULED	0
+@@ -136,8 +133,6 @@ struct tls_sw_context_rx {
+ 	struct tls_strparser strp;
+ 
+ 	atomic_t decrypt_pending;
+-	/* protect crypto_wait with decrypt_pending*/
+-	spinlock_t decrypt_compl_lock;
+ 	struct sk_buff_head async_hold;
+ 	struct wait_queue_head wq;
+ };
+diff --git a/include/sound/tas2781.h b/include/sound/tas2781.h
+index a6c808b223183..475294c853aa4 100644
+--- a/include/sound/tas2781.h
++++ b/include/sound/tas2781.h
+@@ -135,6 +135,7 @@ struct tasdevice_priv {
+ 
+ void tas2781_reset(struct tasdevice_priv *tas_dev);
+ int tascodec_init(struct tasdevice_priv *tas_priv, void *codec,
++	struct module *module,
+ 	void (*cont)(const struct firmware *fw, void *context));
+ struct tasdevice_priv *tasdevice_kzalloc(struct i2c_client *i2c);
+ int tasdevice_init(struct tasdevice_priv *tas_priv);
+diff --git a/init/Kconfig b/init/Kconfig
+index 9ffb103fc927b..bfde8189c2bec 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -89,6 +89,15 @@ config CC_HAS_ASM_GOTO_TIED_OUTPUT
+ 	# Detect buggy gcc and clang, fixed in gcc-11 clang-14.
+ 	def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $CC -x c - -c -o /dev/null)
+ 
++config GCC_ASM_GOTO_OUTPUT_WORKAROUND
++	bool
++	depends on CC_IS_GCC && CC_HAS_ASM_GOTO_OUTPUT
++	# Fixed in GCC 14, 13.3, 12.4 and 11.5
++	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113921
++	default y if GCC_VERSION < 110500
++	default y if GCC_VERSION >= 120000 && GCC_VERSION < 120400
++	default y if GCC_VERSION >= 130000 && GCC_VERSION < 130300
++
+ config TOOLS_SUPPORT_RELR
+ 	def_bool $(success,env "CC=$(CC)" "LD=$(LD)" "NM=$(NM)" "OBJCOPY=$(OBJCOPY)" $(srctree)/scripts/tools-support-relr.sh)
+ 
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 43bc9a5f96f9d..161622029147c 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -1372,7 +1372,7 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
+ 			 * has already been done
+ 			 */
+ 			if (issue_flags & IO_URING_F_MULTISHOT)
+-				ret = IOU_ISSUE_SKIP_COMPLETE;
++				return IOU_ISSUE_SKIP_COMPLETE;
+ 			return ret;
+ 		}
+ 		if (ret == -ERESTARTSYS)
+@@ -1397,7 +1397,8 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
+ 				ret, IORING_CQE_F_MORE))
+ 		goto retry;
+ 
+-	return -ECANCELED;
++	io_req_set_res(req, ret, 0);
++	return IOU_STOP_MULTISHOT;
+ }
+ 
+ int io_socket_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
+index 2ad881d07752c..4e715b9b278e7 100644
+--- a/kernel/sched/membarrier.c
++++ b/kernel/sched/membarrier.c
+@@ -162,6 +162,9 @@
+ 	| MEMBARRIER_PRIVATE_EXPEDITED_RSEQ_BITMASK			\
+ 	| MEMBARRIER_CMD_GET_REGISTRATIONS)
+ 
++static DEFINE_MUTEX(membarrier_ipi_mutex);
++#define SERIALIZE_IPI() guard(mutex)(&membarrier_ipi_mutex)
++
+ static void ipi_mb(void *info)
+ {
+ 	smp_mb();	/* IPIs should be serializing but paranoid. */
+@@ -259,6 +262,7 @@ static int membarrier_global_expedited(void)
+ 	if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
++	SERIALIZE_IPI();
+ 	cpus_read_lock();
+ 	rcu_read_lock();
+ 	for_each_online_cpu(cpu) {
+@@ -347,6 +351,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
+ 	if (cpu_id < 0 && !zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
++	SERIALIZE_IPI();
+ 	cpus_read_lock();
+ 
+ 	if (cpu_id >= 0) {
+@@ -460,6 +465,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
+ 	 * between threads which are users of @mm has its membarrier state
+ 	 * updated.
+ 	 */
++	SERIALIZE_IPI();
+ 	cpus_read_lock();
+ 	rcu_read_lock();
+ 	for_each_online_cpu(cpu) {
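
SERIALIZE_IPI() leans on the scope-based guard() helper from linux/cleanup.h:
the mutex is acquired where the macro appears and released automatically when
the enclosing scope ends, which is why none of the three hunks adds an
unlock. Open-coded, each call site behaves like this (sketch):

	mutex_lock(&membarrier_ipi_mutex);
	cpus_read_lock();
	/* ... pick CPUs and send the membarrier IPIs ... */
	cpus_read_unlock();
	mutex_unlock(&membarrier_ipi_mutex);	/* implicit, at scope exit */

Serializing the IPI senders bounds the IPI load that concurrent membarrier()
callers can generate.
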
+diff --git a/kernel/sys.c b/kernel/sys.c
+index e219fcfa112d8..f8e543f1e38a0 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1785,21 +1785,24 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
+ 	struct task_struct *t;
+ 	unsigned long flags;
+ 	u64 tgutime, tgstime, utime, stime;
+-	unsigned long maxrss = 0;
++	unsigned long maxrss;
++	struct mm_struct *mm;
+ 	struct signal_struct *sig = p->signal;
++	unsigned int seq = 0;
+ 
+-	memset((char *)r, 0, sizeof (*r));
++retry:
++	memset(r, 0, sizeof(*r));
+ 	utime = stime = 0;
++	maxrss = 0;
+ 
+ 	if (who == RUSAGE_THREAD) {
+ 		task_cputime_adjusted(current, &utime, &stime);
+ 		accumulate_thread_rusage(p, r);
+ 		maxrss = sig->maxrss;
+-		goto out;
++		goto out_thread;
+ 	}
+ 
+-	if (!lock_task_sighand(p, &flags))
+-		return;
++	flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
+ 
+ 	switch (who) {
+ 	case RUSAGE_BOTH:
+@@ -1819,9 +1822,6 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
+ 		fallthrough;
+ 
+ 	case RUSAGE_SELF:
+-		thread_group_cputime_adjusted(p, &tgutime, &tgstime);
+-		utime += tgutime;
+-		stime += tgstime;
+ 		r->ru_nvcsw += sig->nvcsw;
+ 		r->ru_nivcsw += sig->nivcsw;
+ 		r->ru_minflt += sig->min_flt;
+@@ -1830,28 +1830,42 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
+ 		r->ru_oublock += sig->oublock;
+ 		if (maxrss < sig->maxrss)
+ 			maxrss = sig->maxrss;
++
++		rcu_read_lock();
+ 		__for_each_thread(sig, t)
+ 			accumulate_thread_rusage(t, r);
++		rcu_read_unlock();
++
+ 		break;
+ 
+ 	default:
+ 		BUG();
+ 	}
+-	unlock_task_sighand(p, &flags);
+ 
+-out:
+-	r->ru_utime = ns_to_kernel_old_timeval(utime);
+-	r->ru_stime = ns_to_kernel_old_timeval(stime);
++	if (need_seqretry(&sig->stats_lock, seq)) {
++		seq = 1;
++		goto retry;
++	}
++	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
+ 
+-	if (who != RUSAGE_CHILDREN) {
+-		struct mm_struct *mm = get_task_mm(p);
++	if (who == RUSAGE_CHILDREN)
++		goto out_children;
+ 
+-		if (mm) {
+-			setmax_mm_hiwater_rss(&maxrss, mm);
+-			mmput(mm);
+-		}
++	thread_group_cputime_adjusted(p, &tgutime, &tgstime);
++	utime += tgutime;
++	stime += tgstime;
++
++out_thread:
++	mm = get_task_mm(p);
++	if (mm) {
++		setmax_mm_hiwater_rss(&maxrss, mm);
++		mmput(mm);
+ 	}
++
++out_children:
+ 	r->ru_maxrss = maxrss * (PAGE_SIZE / 1024); /* convert pages to KBs */
++	r->ru_utime = ns_to_kernel_old_timeval(utime);
++	r->ru_stime = ns_to_kernel_old_timeval(stime);
+ }
+ 
+ SYSCALL_DEFINE2(getrusage, int, who, struct rusage __user *, ru)
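
The getrusage() rework replaces lock_task_sighand() with the sig->stats_lock
seqlock, read via the read_seqbegin_or_lock pattern: the first pass is
lockless, and if a writer raced (need_seqretry() returns true) seq is set to
1 so the retry takes the lock for a stable pass. The idiom, reduced to its
skeleton:

	unsigned int seq = 0;
	unsigned long flags;

retry:
	flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
	/* ... read the accumulated statistics ... */
	if (need_seqretry(&sig->stats_lock, seq)) {
		seq = 1;	/* second pass takes the lock */
		goto retry;
	}
	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);

This avoids blocking the whole thread group's signal handling while the
statistics are summed.
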
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index b01ae7d360218..83ba342aef31f 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5325,7 +5325,17 @@ static LIST_HEAD(ftrace_direct_funcs);
+ 
+ static int register_ftrace_function_nolock(struct ftrace_ops *ops);
+ 
++/*
++ * If there are multiple ftrace_ops, use SAVE_REGS by default, so that the
++ * direct call will be jumped to from ftrace_regs_caller. Only if the
++ * architecture supports direct calls but not ftrace_regs_caller, use
++ * SAVE_ARGS so that the jump is made from ftrace_caller instead.
++ */
++#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS
+ #define MULTI_FLAGS (FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_ARGS)
++#else
++#define MULTI_FLAGS (FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_REGS)
++#endif
+ 
+ static int check_direct_multi(struct ftrace_ops *ops)
+ {
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 9286f88fcd32a..6fa67c297e8fa 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1091,7 +1091,7 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 		full = 0;
+ 	} else {
+ 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
+-			return -EINVAL;
++			return EPOLLERR;
+ 
+ 		cpu_buffer = buffer->buffers[cpu];
+ 		work = &cpu_buffer->irq_work;
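
The one-line ring_buffer_poll_wait() fix works because poll handlers return a
__poll_t event mask rather than a negative errno; returning -EINVAL would be
interpreted as a mask with nearly every event bit set. Seen from userspace,
such failures arrive as an EPOLLERR event. A small, self-contained Linux
example of that convention (it polls a pipe, nothing to do with the ring
buffer itself):

  #include <sys/epoll.h>
  #include <unistd.h>
  #include <stdio.h>

  int main(void)
  {
      int fds[2];
      struct epoll_event ev = { .events = EPOLLOUT }, out;
      int ep;

      if (pipe(fds))
          return 1;
      close(fds[0]);                  /* no reader left on the pipe */

      ep = epoll_create1(0);
      ev.data.fd = fds[1];
      epoll_ctl(ep, EPOLL_CTL_ADD, fds[1], &ev);

      /* The error is delivered as an EPOLLERR event in the mask,
       * not as a failing epoll_wait() call. */
      if (epoll_wait(ep, &out, 1, 1000) == 1 && (out.events & EPOLLERR))
          printf("EPOLLERR reported via the event mask\n");
      return 0;
  }
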
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index a0defe156b571..3fdc57450e79e 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -39,6 +39,7 @@
+ #include <linux/ctype.h>
+ #include <linux/init.h>
+ #include <linux/panic_notifier.h>
++#include <linux/kmemleak.h>
+ #include <linux/poll.h>
+ #include <linux/nmi.h>
+ #include <linux/fs.h>
+@@ -2312,7 +2313,7 @@ struct saved_cmdlines_buffer {
+ 	unsigned *map_cmdline_to_pid;
+ 	unsigned cmdline_num;
+ 	int cmdline_idx;
+-	char *saved_cmdlines;
++	char saved_cmdlines[];
+ };
+ static struct saved_cmdlines_buffer *savedcmd;
+ 
+@@ -2326,47 +2327,60 @@ static inline void set_cmdline(int idx, const char *cmdline)
+ 	strncpy(get_saved_cmdlines(idx), cmdline, TASK_COMM_LEN);
+ }
+ 
+-static int allocate_cmdlines_buffer(unsigned int val,
+-				    struct saved_cmdlines_buffer *s)
++static void free_saved_cmdlines_buffer(struct saved_cmdlines_buffer *s)
+ {
++	int order = get_order(sizeof(*s) + s->cmdline_num * TASK_COMM_LEN);
++
++	kfree(s->map_cmdline_to_pid);
++	kmemleak_free(s);
++	free_pages((unsigned long)s, order);
++}
++
++static struct saved_cmdlines_buffer *allocate_cmdlines_buffer(unsigned int val)
++{
++	struct saved_cmdlines_buffer *s;
++	struct page *page;
++	int orig_size, size;
++	int order;
++
++	/* Figure out how much is needed to hold the given number of cmdlines */
++	orig_size = sizeof(*s) + val * TASK_COMM_LEN;
++	order = get_order(orig_size);
++	size = 1 << (order + PAGE_SHIFT);
++	page = alloc_pages(GFP_KERNEL, order);
++	if (!page)
++		return NULL;
++
++	s = page_address(page);
++	kmemleak_alloc(s, size, 1, GFP_KERNEL);
++	memset(s, 0, sizeof(*s));
++
++	/* Round up to actual allocation */
++	val = (size - sizeof(*s)) / TASK_COMM_LEN;
++	s->cmdline_num = val;
++
+ 	s->map_cmdline_to_pid = kmalloc_array(val,
+ 					      sizeof(*s->map_cmdline_to_pid),
+ 					      GFP_KERNEL);
+-	if (!s->map_cmdline_to_pid)
+-		return -ENOMEM;
+-
+-	s->saved_cmdlines = kmalloc_array(TASK_COMM_LEN, val, GFP_KERNEL);
+-	if (!s->saved_cmdlines) {
+-		kfree(s->map_cmdline_to_pid);
+-		return -ENOMEM;
++	if (!s->map_cmdline_to_pid) {
++		free_saved_cmdlines_buffer(s);
++		return NULL;
+ 	}
+ 
+ 	s->cmdline_idx = 0;
+-	s->cmdline_num = val;
+ 	memset(&s->map_pid_to_cmdline, NO_CMDLINE_MAP,
+ 	       sizeof(s->map_pid_to_cmdline));
+ 	memset(s->map_cmdline_to_pid, NO_CMDLINE_MAP,
+ 	       val * sizeof(*s->map_cmdline_to_pid));
+ 
+-	return 0;
++	return s;
+ }
+ 
+ static int trace_create_savedcmd(void)
+ {
+-	int ret;
+-
+-	savedcmd = kmalloc(sizeof(*savedcmd), GFP_KERNEL);
+-	if (!savedcmd)
+-		return -ENOMEM;
++	savedcmd = allocate_cmdlines_buffer(SAVED_CMDLINES_DEFAULT);
+ 
+-	ret = allocate_cmdlines_buffer(SAVED_CMDLINES_DEFAULT, savedcmd);
+-	if (ret < 0) {
+-		kfree(savedcmd);
+-		savedcmd = NULL;
+-		return -ENOMEM;
+-	}
+-
+-	return 0;
++	return savedcmd ? 0 : -ENOMEM;
+ }
+ 
+ int is_tracing_stopped(void)
+@@ -6048,26 +6062,14 @@ tracing_saved_cmdlines_size_read(struct file *filp, char __user *ubuf,
+ 	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
+ }
+ 
+-static void free_saved_cmdlines_buffer(struct saved_cmdlines_buffer *s)
+-{
+-	kfree(s->saved_cmdlines);
+-	kfree(s->map_cmdline_to_pid);
+-	kfree(s);
+-}
+-
+ static int tracing_resize_saved_cmdlines(unsigned int val)
+ {
+ 	struct saved_cmdlines_buffer *s, *savedcmd_temp;
+ 
+-	s = kmalloc(sizeof(*s), GFP_KERNEL);
++	s = allocate_cmdlines_buffer(val);
+ 	if (!s)
+ 		return -ENOMEM;
+ 
+-	if (allocate_cmdlines_buffer(val, s) < 0) {
+-		kfree(s);
+-		return -ENOMEM;
+-	}
+-
+ 	preempt_disable();
+ 	arch_spin_lock(&trace_cmdline_lock);
+ 	savedcmd_temp = savedcmd;
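
allocate_cmdlines_buffer() now carries the command-line strings in the same
page-order allocation as the header via a flexible array member, and
recomputes cmdline_num from whatever the rounded-up allocation can actually
hold, so the slack is used instead of wasted. A userspace sketch of that
round-up-and-reuse idea; NAME_LEN and every other name here is a stand-in:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  #define NAME_LEN 16             /* stand-in for TASK_COMM_LEN */

  struct cmdline_buf {
      unsigned int num;           /* usable entries after rounding */
      char names[];               /* flexible array, same allocation */
  };

  static struct cmdline_buf *cmdline_buf_alloc(unsigned int want)
  {
      size_t page = (size_t)sysconf(_SC_PAGESIZE);
      size_t need = sizeof(struct cmdline_buf) + (size_t)want * NAME_LEN;
      size_t size = (need + page - 1) & ~(page - 1);  /* round up to pages */
      struct cmdline_buf *b = aligned_alloc(page, size);

      if (!b)
          return NULL;
      memset(b, 0, sizeof(*b));
      /* Use everything the rounded allocation gives us. */
      b->num = (size - sizeof(*b)) / NAME_LEN;
      return b;
  }

  int main(void)
  {
      struct cmdline_buf *b = cmdline_buf_alloc(100);

      if (b) {
          printf("asked for 100 entries, got %u\n", b->num);
          free(b);
      }
      return 0;
  }
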
+diff --git a/kernel/trace/trace_btf.c b/kernel/trace/trace_btf.c
+index ca224d53bfdcd..5bbdbcbbde3cd 100644
+--- a/kernel/trace/trace_btf.c
++++ b/kernel/trace/trace_btf.c
+@@ -91,8 +91,8 @@ const struct btf_member *btf_find_struct_member(struct btf *btf,
+ 	for_each_member(i, type, member) {
+ 		if (!member->name_off) {
+ 			/* Anonymous union/struct: push it for later use */
+-			type = btf_type_skip_modifiers(btf, member->type, &tid);
+-			if (type && top < BTF_ANON_STACK_MAX) {
++			if (btf_type_skip_modifiers(btf, member->type, &tid) &&
++			    top < BTF_ANON_STACK_MAX) {
+ 				anon_stack[top].tid = tid;
+ 				anon_stack[top++].offset =
+ 					cur_offset + member->offset;
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index e7af286af4f1a..c82b401a294d9 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -441,8 +441,9 @@ static unsigned int trace_string(struct synth_trace_event *entry,
+ 	if (is_dynamic) {
+ 		union trace_synth_field *data = &entry->fields[*n_u64];
+ 
++		len = fetch_store_strlen((unsigned long)str_val);
+ 		data->as_dynamic.offset = struct_size(entry, fields, event->n_u64) + data_size;
+-		data->as_dynamic.len = fetch_store_strlen((unsigned long)str_val);
++		data->as_dynamic.len = len;
+ 
+ 		ret = fetch_store_string((unsigned long)str_val, &entry->fields[*n_u64], entry);
+ 
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 46439e3bcec4d..b33c3861fbbbf 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -1470,8 +1470,10 @@ register_snapshot_trigger(char *glob,
+ 			  struct event_trigger_data *data,
+ 			  struct trace_event_file *file)
+ {
+-	if (tracing_alloc_snapshot_instance(file->tr) != 0)
+-		return 0;
++	int ret = tracing_alloc_snapshot_instance(file->tr);
++
++	if (ret < 0)
++		return ret;
+ 
+ 	return register_trigger(glob, data, file);
+ }
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index bd0d01d00fb9d..a8e28f9b9271c 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2444,6 +2444,9 @@ static int timerlat_fd_open(struct inode *inode, struct file *file)
+ 	tlat = this_cpu_tmr_var();
+ 	tlat->count = 0;
+ 
++	hrtimer_init(&tlat->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
++	tlat->timer.function = timerlat_irq;
++
+ 	migrate_enable();
+ 	return 0;
+ };
+@@ -2526,9 +2529,6 @@ timerlat_fd_read(struct file *file, char __user *ubuf, size_t count,
+ 		tlat->tracing_thread = false;
+ 		tlat->kthread = current;
+ 
+-		hrtimer_init(&tlat->timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
+-		tlat->timer.function = timerlat_irq;
+-
+ 		/* Annotate now to drift new period */
+ 		tlat->abs_period = hrtimer_cb_get_time(&tlat->timer);
+ 
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 4dc74d73fc1df..34289f9c67076 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -1159,9 +1159,12 @@ static int traceprobe_parse_probe_arg_body(const char *argv, ssize_t *size,
+ 	if (!(ctx->flags & TPARG_FL_TEVENT) &&
+ 	    (strcmp(arg, "$comm") == 0 || strcmp(arg, "$COMM") == 0 ||
+ 	     strncmp(arg, "\\\"", 2) == 0)) {
+-		/* The type of $comm must be "string", and not an array. */
+-		if (parg->count || (t && strcmp(t, "string")))
++		/* The type of $comm must be "string", and not an array type. */
++		if (parg->count || (t && strcmp(t, "string"))) {
++			trace_probe_log_err(ctx->offset + (t ? (t - arg) : 0),
++					NEED_STRING_TYPE);
+ 			goto out;
++		}
+ 		parg->type = find_fetch_type("string", ctx->flags);
+ 	} else
+ 		parg->type = find_fetch_type(t, ctx->flags);
+@@ -1169,18 +1172,6 @@ static int traceprobe_parse_probe_arg_body(const char *argv, ssize_t *size,
+ 		trace_probe_log_err(ctx->offset + (t ? (t - arg) : 0), BAD_TYPE);
+ 		goto out;
+ 	}
+-	parg->offset = *size;
+-	*size += parg->type->size * (parg->count ?: 1);
+-
+-	ret = -ENOMEM;
+-	if (parg->count) {
+-		len = strlen(parg->type->fmttype) + 6;
+-		parg->fmt = kmalloc(len, GFP_KERNEL);
+-		if (!parg->fmt)
+-			goto out;
+-		snprintf(parg->fmt, len, "%s[%d]", parg->type->fmttype,
+-			 parg->count);
+-	}
+ 
+ 	code = tmp = kcalloc(FETCH_INSN_MAX, sizeof(*code), GFP_KERNEL);
+ 	if (!code)
+@@ -1204,6 +1195,19 @@ static int traceprobe_parse_probe_arg_body(const char *argv, ssize_t *size,
+ 				goto fail;
+ 		}
+ 	}
++	parg->offset = *size;
++	*size += parg->type->size * (parg->count ?: 1);
++
++	if (parg->count) {
++		len = strlen(parg->type->fmttype) + 6;
++		parg->fmt = kmalloc(len, GFP_KERNEL);
++		if (!parg->fmt) {
++			ret = -ENOMEM;
++			goto out;
++		}
++		snprintf(parg->fmt, len, "%s[%d]", parg->type->fmttype,
++			 parg->count);
++	}
+ 
+ 	ret = -EINVAL;
+ 	/* Store operation */
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index 850d9ecb6765a..c1877d0182691 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -515,7 +515,8 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
+ 	C(BAD_HYPHEN,		"Failed to parse single hyphen. Forgot '>'?"),	\
+ 	C(NO_BTF_FIELD,		"This field is not found."),	\
+ 	C(BAD_BTF_TID,		"Failed to get BTF type info."),\
+-	C(BAD_TYPE4STR,		"This type does not fit for string."),
++	C(BAD_TYPE4STR,		"This type does not fit for string."),\
++	C(NEED_STRING_TYPE,	"$comm and immediate-string only accepts string type"),
+ 
+ #undef C
+ #define C(a, b)		TP_ERR_##a
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 2989b57e154a7..4f87b1851c74a 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5793,13 +5793,9 @@ static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
+ 	list_for_each_entry(wq, &workqueues, list) {
+ 		if (!(wq->flags & WQ_UNBOUND))
+ 			continue;
+-
+ 		/* creating multiple pwqs breaks ordering guarantee */
+-		if (!list_empty(&wq->pwqs)) {
+-			if (wq->flags & __WQ_ORDERED_EXPLICIT)
+-				continue;
+-			wq->flags &= ~__WQ_ORDERED;
+-		}
++		if (wq->flags & __WQ_ORDERED)
++			continue;
+ 
+ 		ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs, unbound_cpumask);
+ 		if (IS_ERR(ctx)) {
+diff --git a/lib/kobject.c b/lib/kobject.c
+index 59dbcbdb1c916..72fa20f405f15 100644
+--- a/lib/kobject.c
++++ b/lib/kobject.c
+@@ -74,10 +74,12 @@ static int create_dir(struct kobject *kobj)
+ 	if (error)
+ 		return error;
+ 
+-	error = sysfs_create_groups(kobj, ktype->default_groups);
+-	if (error) {
+-		sysfs_remove_dir(kobj);
+-		return error;
++	if (ktype) {
++		error = sysfs_create_groups(kobj, ktype->default_groups);
++		if (error) {
++			sysfs_remove_dir(kobj);
++			return error;
++		}
+ 	}
+ 
+ 	/*
+@@ -589,7 +591,8 @@ static void __kobject_del(struct kobject *kobj)
+ 	sd = kobj->sd;
+ 	ktype = get_ktype(kobj);
+ 
+-	sysfs_remove_groups(kobj, ktype->default_groups);
++	if (ktype)
++		sysfs_remove_groups(kobj, ktype->default_groups);
+ 
+ 	/* send "remove" if the caller did not do it but sent "add" */
+ 	if (kobj->state_add_uevent_sent && !kobj->state_remove_uevent_sent) {
+@@ -666,6 +669,10 @@ static void kobject_cleanup(struct kobject *kobj)
+ 	pr_debug("'%s' (%p): %s, parent %p\n",
+ 		 kobject_name(kobj), kobj, __func__, kobj->parent);
+ 
++	if (t && !t->release)
++		pr_debug("'%s' (%p): does not have a release() function, it is broken and must be fixed. See Documentation/core-api/kobject.rst.\n",
++			 kobject_name(kobj), kobj);
++
+ 	/* remove from sysfs if the caller did not do it */
+ 	if (kobj->state_in_sysfs) {
+ 		pr_debug("'%s' (%p): auto cleanup kobject_del\n",
+@@ -676,13 +683,10 @@ static void kobject_cleanup(struct kobject *kobj)
+ 		parent = NULL;
+ 	}
+ 
+-	if (t->release) {
++	if (t && t->release) {
+ 		pr_debug("'%s' (%p): calling ktype release\n",
+ 			 kobject_name(kobj), kobj);
+ 		t->release(kobj);
+-	} else {
+-		pr_debug("'%s' (%p): does not have a release() function, it is broken and must be fixed. See Documentation/core-api/kobject.rst.\n",
+-			 kobject_name(kobj), kobj);
+ 	}
+ 
+ 	/* free name if we allocated it */
+@@ -1056,7 +1060,7 @@ const struct kobj_ns_type_operations *kobj_child_ns_ops(const struct kobject *pa
+ {
+ 	const struct kobj_ns_type_operations *ops = NULL;
+ 
+-	if (parent && parent->ktype->child_ns_type)
++	if (parent && parent->ktype && parent->ktype->child_ns_type)
+ 		ops = parent->ktype->child_ns_type(parent);
+ 
+ 	return ops;
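
The three kobject hunks share one rule: a NULL ktype means "no ops table", so
each use of it is guarded instead of dereferenced, and the missing-release()
warning is issued up front rather than from an else branch that the guarded
call no longer provides. A tiny self-contained illustration of guarded
ops-table dispatch, with invented names:

  #include <stdio.h>

  struct obj_type {
      void (*release)(void *obj);
  };

  struct obj {
      const struct obj_type *type;    /* may legitimately be NULL */
      const char *name;
  };

  static void obj_cleanup(struct obj *o)
  {
      const struct obj_type *t = o->type;

      if (t && !t->release)
          printf("'%s': no release(), object type is broken\n", o->name);

      /* Guard every use of the ops table, not just the first one. */
      if (t && t->release)
          t->release(o);
  }

  static void demo_release(void *obj)
  {
      printf("released '%s'\n", ((struct obj *)obj)->name);
  }

  int main(void)
  {
      const struct obj_type ty = { .release = demo_release };
      struct obj a = { .type = &ty,  .name = "a" };
      struct obj b = { .type = NULL, .name = "b" };   /* must not crash */

      obj_cleanup(&a);
      obj_cleanup(&b);
      return 0;
  }
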
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 1e3447bccdb14..e039d05304dd9 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -436,7 +436,6 @@ static int wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi,
+ 	INIT_LIST_HEAD(&wb->work_list);
+ 	INIT_DELAYED_WORK(&wb->dwork, wb_workfn);
+ 	INIT_DELAYED_WORK(&wb->bw_dwork, wb_update_bandwidth_workfn);
+-	wb->dirty_sleep = jiffies;
+ 
+ 	err = fprop_local_init_percpu(&wb->completions, gfp);
+ 	if (err)
+@@ -921,6 +920,7 @@ int bdi_init(struct backing_dev_info *bdi)
+ 	INIT_LIST_HEAD(&bdi->bdi_list);
+ 	INIT_LIST_HEAD(&bdi->wb_list);
+ 	init_waitqueue_head(&bdi->wb_waitq);
++	bdi->last_bdp_sleep = jiffies;
+ 
+ 	return cgwb_bdi_init(bdi);
+ }
+diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
+index fe0fe2562000b..786b06239ccfd 100644
+--- a/mm/damon/sysfs-schemes.c
++++ b/mm/damon/sysfs-schemes.c
+@@ -1928,7 +1928,7 @@ static void damos_tried_regions_init_upd_status(
+ 		sysfs_regions->upd_timeout_jiffies = jiffies +
+ 			2 * usecs_to_jiffies(scheme->apply_interval_us ?
+ 					scheme->apply_interval_us :
+-					ctx->attrs.sample_interval);
++					ctx->attrs.aggr_interval);
+ 	}
+ }
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 86ee29b5c39cc..3f50578eb933f 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -37,6 +37,7 @@
+ #include <linux/page_owner.h>
+ #include <linux/sched/sysctl.h>
+ #include <linux/memory-tiers.h>
++#include <linux/compat.h>
+ 
+ #include <asm/tlb.h>
+ #include <asm/pgalloc.h>
+@@ -632,7 +633,10 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
+ {
+ 	loff_t off_end = off + len;
+ 	loff_t off_align = round_up(off, size);
+-	unsigned long len_pad, ret;
++	unsigned long len_pad, ret, off_sub;
++
++	if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall())
++		return 0;
+ 
+ 	if (off_end <= off_align || (off_end - off_align) < size)
+ 		return 0;
+@@ -658,7 +662,13 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
+ 	if (ret == addr)
+ 		return addr;
+ 
+-	ret += (off - ret) & (size - 1);
++	off_sub = (off - ret) & (size - 1);
++
++	if (current->mm->get_unmapped_area == arch_get_unmapped_area_topdown &&
++	    !off_sub)
++		return ret + size;
++
++	ret += off_sub;
+ 	return ret;
+ }
+ 
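
__thp_get_unmapped_area() over-allocates by one alignment unit and then
selects the address in the padded range that is congruent to the file offset
modulo the THP size (with a special case so top-down allocators keep the
higher candidate). The congruence step is plain arithmetic; a sketch with
made-up numbers:

  #include <stdio.h>

  /* Given an allocator result 'ret' for a padded request, choose the
   * address congruent to 'off' modulo 'size' (a power of two). */
  static unsigned long thp_align(unsigned long ret, unsigned long off,
                                 unsigned long size)
  {
      unsigned long off_sub = (off - ret) & (size - 1);

      return ret + off_sub;
  }

  int main(void)
  {
      unsigned long size = 1UL << 21;       /* 2 MiB THP */
      unsigned long ret  = 0x7f0000100000UL;
      unsigned long off  = 0x00000000345000UL;
      unsigned long addr = thp_align(ret, off, size);

      /* prints 0: address and file offset now share alignment */
      printf("addr=%#lx, (addr - off) %% size = %lu\n",
             addr, (addr - off) & (size - 1));
      return 0;
  }
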
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 455093f73a70c..17298a615a12c 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -976,7 +976,7 @@ static bool has_extra_refcount(struct page_state *ps, struct page *p,
+ 	int count = page_count(p) - 1;
+ 
+ 	if (extra_pins)
+-		count -= 1;
++		count -= folio_nr_pages(page_folio(p));
+ 
+ 	if (count > 0) {
+ 		pr_err("%#lx: %s still referenced by %d users\n",
+diff --git a/mm/memory.c b/mm/memory.c
+index 6e0712d06cd41..f941489d60414 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -5373,7 +5373,7 @@ static inline bool get_mmap_lock_carefully(struct mm_struct *mm, struct pt_regs
+ 		return true;
+ 
+ 	if (regs && !user_mode(regs)) {
+-		unsigned long ip = instruction_pointer(regs);
++		unsigned long ip = exception_ip(regs);
+ 		if (!search_exception_tables(ip))
+ 			return false;
+ 	}
+@@ -5398,7 +5398,7 @@ static inline bool upgrade_mmap_lock_carefully(struct mm_struct *mm, struct pt_r
+ {
+ 	mmap_read_unlock(mm);
+ 	if (regs && !user_mode(regs)) {
+-		unsigned long ip = instruction_pointer(regs);
++		unsigned long ip = exception_ip(regs);
+ 		if (!search_exception_tables(ip))
+ 			return false;
+ 	}
+diff --git a/mm/mmap.c b/mm/mmap.c
+index aa82eec17489c..b9a43872acadb 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1825,15 +1825,17 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+ 		/*
+ 		 * mmap_region() will call shmem_zero_setup() to create a file,
+ 		 * so use shmem's get_unmapped_area in case it can be huge.
+-		 * do_mmap() will clear pgoff, so match alignment.
+ 		 */
+-		pgoff = 0;
+ 		get_area = shmem_get_unmapped_area;
+ 	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+ 		/* Ensures that larger anonymous mappings are THP aligned. */
+ 		get_area = thp_get_unmapped_area;
+ 	}
+ 
++	/* Always treat pgoff as zero for anonymous memory. */
++	if (!file)
++		pgoff = 0;
++
+ 	addr = get_area(file, addr, len, pgoff, flags);
+ 	if (IS_ERR_VALUE(addr))
+ 		return addr;
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index 05e5c425b3ff7..f735d28de38ca 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -1638,7 +1638,7 @@ static inline void wb_dirty_limits(struct dirty_throttle_control *dtc)
+ 	 */
+ 	dtc->wb_thresh = __wb_calc_thresh(dtc);
+ 	dtc->wb_bg_thresh = dtc->thresh ?
+-		div_u64((u64)dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;
++		div64_u64(dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;
+ 
+ 	/*
+ 	 * In order to avoid the stacked BDI deadlock we need
+@@ -1921,7 +1921,7 @@ static int balance_dirty_pages(struct bdi_writeback *wb,
+ 			break;
+ 		}
+ 		__set_current_state(TASK_KILLABLE);
+-		wb->dirty_sleep = now;
++		bdi->last_bdp_sleep = jiffies;
+ 		io_schedule_timeout(pause);
+ 
+ 		current->dirty_paused_when = now + pause;
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 0b6ca553bebec..2213d887ec7d8 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -357,6 +357,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
+ 					      unsigned long dst_start,
+ 					      unsigned long src_start,
+ 					      unsigned long len,
++					      atomic_t *mmap_changing,
+ 					      uffd_flags_t flags)
+ {
+ 	struct mm_struct *dst_mm = dst_vma->vm_mm;
+@@ -472,6 +473,15 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
+ 				goto out;
+ 			}
+ 			mmap_read_lock(dst_mm);
++			/*
++			 * If the memory mappings are changing because of a
++			 * non-cooperative operation (e.g. mremap) running in
++			 * parallel, bail out and ask the caller to retry later.
++			 */
++			if (mmap_changing && atomic_read(mmap_changing)) {
++				err = -EAGAIN;
++				break;
++			}
+ 
+ 			dst_vma = NULL;
+ 			goto retry;
+@@ -506,6 +516,7 @@ extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+ 				    unsigned long dst_start,
+ 				    unsigned long src_start,
+ 				    unsigned long len,
++				    atomic_t *mmap_changing,
+ 				    uffd_flags_t flags);
+ #endif /* CONFIG_HUGETLB_PAGE */
+ 
+@@ -622,8 +633,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
+ 	 * If this is a HUGETLB vma, pass off to appropriate routine
+ 	 */
+ 	if (is_vm_hugetlb_page(dst_vma))
+-		return  mfill_atomic_hugetlb(dst_vma, dst_start,
+-					     src_start, len, flags);
++		return  mfill_atomic_hugetlb(dst_vma, dst_start, src_start,
++					     len, mmap_changing, flags);
+ 
+ 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
+ 		goto out_unlock;
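
The hugetlb path above gains the same check the common path already had:
after re-taking mmap_lock mid-copy, look at the mmap_changing counter and bail
out with -EAGAIN so userspace retries, instead of continuing against mappings
that may have moved. The shape of that bail-out, as a compact pthread sketch
(all names illustrative; link with -lpthread):

  #include <errno.h>
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
  static atomic_int map_changing; /* set by "non-cooperative" operations */

  static int fill_range(int npages)
  {
      int done = 0;

      pthread_mutex_lock(&map_lock);
      while (done < npages) {
          /* ... copy one page; pretend we must drop the lock ... */
          pthread_mutex_unlock(&map_lock);
          pthread_mutex_lock(&map_lock);

          /* Mappings may have changed while the lock was dropped:
           * bail out and let the caller retry later. */
          if (atomic_load(&map_changing)) {
              pthread_mutex_unlock(&map_lock);
              return -EAGAIN;
          }
          done++;
      }
      pthread_mutex_unlock(&map_lock);
      return done;
  }

  int main(void)
  {
      atomic_store(&map_changing, 1);
      printf("fill_range() = %d (expect %d)\n", fill_range(4), -EAGAIN);
      return 0;
  }
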
+diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
+index 16af1a7f80f60..31a93cae5111b 100644
+--- a/net/can/j1939/j1939-priv.h
++++ b/net/can/j1939/j1939-priv.h
+@@ -86,7 +86,7 @@ struct j1939_priv {
+ 	unsigned int tp_max_packet_size;
+ 
+ 	/* lock for j1939_socks list */
+-	spinlock_t j1939_socks_lock;
++	rwlock_t j1939_socks_lock;
+ 	struct list_head j1939_socks;
+ 
+ 	struct kref rx_kref;
+@@ -301,6 +301,7 @@ struct j1939_sock {
+ 
+ 	int ifindex;
+ 	struct j1939_addr addr;
++	spinlock_t filters_lock;
+ 	struct j1939_filter *filters;
+ 	int nfilters;
+ 	pgn_t pgn_rx_filter;
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index ecff1c947d683..a6fb89fa62785 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -274,7 +274,7 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	j1939_tp_init(priv);
+-	spin_lock_init(&priv->j1939_socks_lock);
++	rwlock_init(&priv->j1939_socks_lock);
+ 	INIT_LIST_HEAD(&priv->j1939_socks);
+ 
+ 	mutex_lock(&j1939_netdev_lock);
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 14c4316632339..305dd72c844c7 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -80,16 +80,16 @@ static void j1939_jsk_add(struct j1939_priv *priv, struct j1939_sock *jsk)
+ 	jsk->state |= J1939_SOCK_BOUND;
+ 	j1939_priv_get(priv);
+ 
+-	spin_lock_bh(&priv->j1939_socks_lock);
++	write_lock_bh(&priv->j1939_socks_lock);
+ 	list_add_tail(&jsk->list, &priv->j1939_socks);
+-	spin_unlock_bh(&priv->j1939_socks_lock);
++	write_unlock_bh(&priv->j1939_socks_lock);
+ }
+ 
+ static void j1939_jsk_del(struct j1939_priv *priv, struct j1939_sock *jsk)
+ {
+-	spin_lock_bh(&priv->j1939_socks_lock);
++	write_lock_bh(&priv->j1939_socks_lock);
+ 	list_del_init(&jsk->list);
+-	spin_unlock_bh(&priv->j1939_socks_lock);
++	write_unlock_bh(&priv->j1939_socks_lock);
+ 
+ 	j1939_priv_put(priv);
+ 	jsk->state &= ~J1939_SOCK_BOUND;
+@@ -262,12 +262,17 @@ static bool j1939_sk_match_dst(struct j1939_sock *jsk,
+ static bool j1939_sk_match_filter(struct j1939_sock *jsk,
+ 				  const struct j1939_sk_buff_cb *skcb)
+ {
+-	const struct j1939_filter *f = jsk->filters;
+-	int nfilter = jsk->nfilters;
++	const struct j1939_filter *f;
++	int nfilter;
++
++	spin_lock_bh(&jsk->filters_lock);
++
++	f = jsk->filters;
++	nfilter = jsk->nfilters;
+ 
+ 	if (!nfilter)
+ 		/* receive all when no filters are assigned */
+-		return true;
++		goto filter_match_found;
+ 
+ 	for (; nfilter; ++f, --nfilter) {
+ 		if ((skcb->addr.pgn & f->pgn_mask) != f->pgn)
+@@ -276,9 +281,15 @@ static bool j1939_sk_match_filter(struct j1939_sock *jsk,
+ 			continue;
+ 		if ((skcb->addr.src_name & f->name_mask) != f->name)
+ 			continue;
+-		return true;
++		goto filter_match_found;
+ 	}
++
++	spin_unlock_bh(&jsk->filters_lock);
+ 	return false;
++
++filter_match_found:
++	spin_unlock_bh(&jsk->filters_lock);
++	return true;
+ }
+ 
+ static bool j1939_sk_recv_match_one(struct j1939_sock *jsk,
+@@ -329,13 +340,13 @@ bool j1939_sk_recv_match(struct j1939_priv *priv, struct j1939_sk_buff_cb *skcb)
+ 	struct j1939_sock *jsk;
+ 	bool match = false;
+ 
+-	spin_lock_bh(&priv->j1939_socks_lock);
++	read_lock_bh(&priv->j1939_socks_lock);
+ 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
+ 		match = j1939_sk_recv_match_one(jsk, skcb);
+ 		if (match)
+ 			break;
+ 	}
+-	spin_unlock_bh(&priv->j1939_socks_lock);
++	read_unlock_bh(&priv->j1939_socks_lock);
+ 
+ 	return match;
+ }
+@@ -344,11 +355,11 @@ void j1939_sk_recv(struct j1939_priv *priv, struct sk_buff *skb)
+ {
+ 	struct j1939_sock *jsk;
+ 
+-	spin_lock_bh(&priv->j1939_socks_lock);
++	read_lock_bh(&priv->j1939_socks_lock);
+ 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
+ 		j1939_sk_recv_one(jsk, skb);
+ 	}
+-	spin_unlock_bh(&priv->j1939_socks_lock);
++	read_unlock_bh(&priv->j1939_socks_lock);
+ }
+ 
+ static void j1939_sk_sock_destruct(struct sock *sk)
+@@ -401,6 +412,7 @@ static int j1939_sk_init(struct sock *sk)
+ 	atomic_set(&jsk->skb_pending, 0);
+ 	spin_lock_init(&jsk->sk_session_queue_lock);
+ 	INIT_LIST_HEAD(&jsk->sk_session_queue);
++	spin_lock_init(&jsk->filters_lock);
+ 
+ 	/* j1939_sk_sock_destruct() depends on SOCK_RCU_FREE flag */
+ 	sock_set_flag(sk, SOCK_RCU_FREE);
+@@ -703,9 +715,11 @@ static int j1939_sk_setsockopt(struct socket *sock, int level, int optname,
+ 		}
+ 
+ 		lock_sock(&jsk->sk);
++		spin_lock_bh(&jsk->filters_lock);
+ 		ofilters = jsk->filters;
+ 		jsk->filters = filters;
+ 		jsk->nfilters = count;
++		spin_unlock_bh(&jsk->filters_lock);
+ 		release_sock(&jsk->sk);
+ 		kfree(ofilters);
+ 		return 0;
+@@ -1080,12 +1094,12 @@ void j1939_sk_errqueue(struct j1939_session *session,
+ 	}
+ 
+ 	/* spread RX notifications to all sockets subscribed to this session */
+-	spin_lock_bh(&priv->j1939_socks_lock);
++	read_lock_bh(&priv->j1939_socks_lock);
+ 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
+ 		if (j1939_sk_recv_match_one(jsk, &session->skcb))
+ 			__j1939_sk_errqueue(session, &jsk->sk, type);
+ 	}
+-	spin_unlock_bh(&priv->j1939_socks_lock);
++	read_unlock_bh(&priv->j1939_socks_lock);
+ };
+ 
+ void j1939_sk_send_loop_abort(struct sock *sk, int err)
+@@ -1273,7 +1287,7 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
+ 	struct j1939_sock *jsk;
+ 	int error_code = ENETDOWN;
+ 
+-	spin_lock_bh(&priv->j1939_socks_lock);
++	read_lock_bh(&priv->j1939_socks_lock);
+ 	list_for_each_entry(jsk, &priv->j1939_socks, list) {
+ 		jsk->sk.sk_err = error_code;
+ 		if (!sock_flag(&jsk->sk, SOCK_DEAD))
+@@ -1281,7 +1295,7 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
+ 
+ 		j1939_sk_queue_drop_all(priv, jsk, error_code);
+ 	}
+-	spin_unlock_bh(&priv->j1939_socks_lock);
++	read_unlock_bh(&priv->j1939_socks_lock);
+ }
+ 
+ static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
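
The j1939 conversion from spinlock to rwlock reflects the access pattern: the
socket list is scanned for every received frame but modified only on
bind/unbind, so readers should run concurrently and only writers need
exclusivity (the new per-socket filters_lock then protects the filter array
that used to ride along under the list lock). A minimal pthread analogue of
the read-mostly split:

  #include <pthread.h>
  #include <stdio.h>

  struct sock_node {
      struct sock_node *next;
      int id;
  };

  static pthread_rwlock_t socks_lock = PTHREAD_RWLOCK_INITIALIZER;
  static struct sock_node *socks;

  /* Hot path: many concurrent readers may scan the list at once. */
  static int match_any(int id)
  {
      const struct sock_node *n;
      int found = 0;

      pthread_rwlock_rdlock(&socks_lock);
      for (n = socks; n; n = n->next)
          if (n->id == id) {
              found = 1;
              break;
          }
      pthread_rwlock_unlock(&socks_lock);
      return found;
  }

  /* Rare path: list mutation takes the write side exclusively. */
  static void add_sock(struct sock_node *n)
  {
      pthread_rwlock_wrlock(&socks_lock);
      n->next = socks;
      socks = n;
      pthread_rwlock_unlock(&socks_lock);
  }

  int main(void)
  {
      struct sock_node a = { .id = 42 };

      add_sock(&a);
      printf("match_any(42) = %d\n", match_any(42));
      return 0;
  }
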
+diff --git a/net/handshake/handshake-test.c b/net/handshake/handshake-test.c
+index 16ed7bfd29e4f..34fd1d9b2db86 100644
+--- a/net/handshake/handshake-test.c
++++ b/net/handshake/handshake-test.c
+@@ -471,7 +471,10 @@ static void handshake_req_destroy_test1(struct kunit *test)
+ 	handshake_req_cancel(sock->sk);
+ 
+ 	/* Act */
+-	fput(filp);
++	/* Ensure the close/release/put process has run to
++	 * completion before checking the result.
++	 */
++	__fput_sync(filp);
+ 
+ 	/* Assert */
+ 	KUNIT_EXPECT_PTR_EQ(test, handshake_req_destroy_test, req);
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index 306f942c3b28a..dd4b5f0aa1318 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -291,7 +291,7 @@ static void send_hsr_supervision_frame(struct hsr_port *master,
+ 
+ 	skb = hsr_init_skb(master);
+ 	if (!skb) {
+-		WARN_ONCE(1, "HSR: Could not send supervision frame\n");
++		netdev_warn_once(master->dev, "HSR: Could not send supervision frame\n");
+ 		return;
+ 	}
+ 
+@@ -338,7 +338,7 @@ static void send_prp_supervision_frame(struct hsr_port *master,
+ 
+ 	skb = hsr_init_skb(master);
+ 	if (!skb) {
+-		WARN_ONCE(1, "PRP: Could not send supervision frame\n");
++		netdev_warn_once(master->dev, "PRP: Could not send supervision frame\n");
+ 		return;
+ 	}
+ 
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index a94feb6b579bb..d7aa75b7fd917 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -5,7 +5,7 @@
+  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
+  * Copyright 2007	Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+- * Copyright (C) 2018-2022 Intel Corporation
++ * Copyright (C) 2018-2024 Intel Corporation
+  *
+  * Transmit and frame generation functions.
+  */
+@@ -3927,6 +3927,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 			goto begin;
+ 
+ 		skb = __skb_dequeue(&tx.skbs);
++		info = IEEE80211_SKB_CB(skb);
+ 
+ 		if (!skb_queue_empty(&tx.skbs)) {
+ 			spin_lock_bh(&fq->lock);
+@@ -3971,7 +3972,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 	}
+ 
+ encap_out:
+-	IEEE80211_SKB_CB(skb)->control.vif = vif;
++	info->control.vif = vif;
+ 
+ 	if (tx.sta &&
+ 	    wiphy_ext_feature_isset(local->hw.wiphy, NL80211_EXT_FEATURE_AQL)) {
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index 5c01b9bc619a8..80857ef3b90f5 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -130,10 +130,21 @@ int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk,
+ int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk,
+ 				    struct mptcp_addr_info *skc)
+ {
+-	struct mptcp_pm_addr_entry new_entry;
++	struct mptcp_pm_addr_entry *entry = NULL, *e, new_entry;
+ 	__be16 msk_sport =  ((struct inet_sock *)
+ 			     inet_sk((struct sock *)msk))->inet_sport;
+ 
++	spin_lock_bh(&msk->pm.lock);
++	list_for_each_entry(e, &msk->pm.userspace_pm_local_addr_list, list) {
++		if (mptcp_addresses_equal(&e->addr, skc, false)) {
++			entry = e;
++			break;
++		}
++	}
++	spin_unlock_bh(&msk->pm.lock);
++	if (entry)
++		return entry->addr.id;
++
+ 	memset(&new_entry, 0, sizeof(struct mptcp_pm_addr_entry));
+ 	new_entry.addr = *skc;
+ 	new_entry.addr.id = 0;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 5cd5c3f535a82..0b42ce7de45cc 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1519,8 +1519,11 @@ static void mptcp_update_post_push(struct mptcp_sock *msk,
+ 
+ void mptcp_check_and_set_pending(struct sock *sk)
+ {
+-	if (mptcp_send_head(sk))
+-		mptcp_sk(sk)->push_pending |= BIT(MPTCP_PUSH_PENDING);
++	if (mptcp_send_head(sk)) {
++		mptcp_data_lock(sk);
++		mptcp_sk(sk)->cb_flags |= BIT(MPTCP_PUSH_PENDING);
++		mptcp_data_unlock(sk);
++	}
+ }
+ 
+ static int __subflow_push_pending(struct sock *sk, struct sock *ssk,
+@@ -1974,6 +1977,9 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
+ 	if (copied <= 0)
+ 		return;
+ 
++	if (!msk->rcvspace_init)
++		mptcp_rcv_space_init(msk, msk->first);
++
+ 	msk->rcvq_space.copied += copied;
+ 
+ 	mstamp = div_u64(tcp_clock_ns(), NSEC_PER_USEC);
+@@ -2328,9 +2334,6 @@ bool __mptcp_retransmit_pending_data(struct sock *sk)
+ 	if (__mptcp_check_fallback(msk))
+ 		return false;
+ 
+-	if (tcp_rtx_and_write_queues_empty(sk))
+-		return false;
+-
+ 	/* the closing socket has some data untransmitted and/or unacked:
+ 	 * some data in the mptcp rtx queue has not really xmitted yet.
+ 	 * keep it simple and re-inject the whole mptcp level rtx queue
+@@ -3141,7 +3144,6 @@ static int mptcp_disconnect(struct sock *sk, int flags)
+ 	mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
+ 	WRITE_ONCE(msk->flags, 0);
+ 	msk->cb_flags = 0;
+-	msk->push_pending = 0;
+ 	msk->recovery = false;
+ 	msk->can_ack = false;
+ 	msk->fully_established = false;
+@@ -3157,6 +3159,7 @@ static int mptcp_disconnect(struct sock *sk, int flags)
+ 	msk->bytes_received = 0;
+ 	msk->bytes_sent = 0;
+ 	msk->bytes_retrans = 0;
++	msk->rcvspace_init = 0;
+ 
+ 	WRITE_ONCE(sk->sk_shutdown, 0);
+ 	sk_error_report(sk);
+@@ -3244,6 +3247,7 @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk)
+ {
+ 	const struct tcp_sock *tp = tcp_sk(ssk);
+ 
++	msk->rcvspace_init = 1;
+ 	msk->rcvq_space.copied = 0;
+ 	msk->rcvq_space.rtt_us = 0;
+ 
+@@ -3254,8 +3258,6 @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk)
+ 				      TCP_INIT_CWND * tp->advmss);
+ 	if (msk->rcvq_space.space == 0)
+ 		msk->rcvq_space.space = TCP_INIT_CWND * TCP_MSS_DEFAULT;
+-
+-	WRITE_ONCE(msk->wnd_end, msk->snd_nxt + tcp_sk(ssk)->snd_wnd);
+ }
+ 
+ static struct sock *mptcp_accept(struct sock *ssk, int flags, int *err,
+@@ -3367,8 +3369,7 @@ static void mptcp_release_cb(struct sock *sk)
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+ 	for (;;) {
+-		unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED) |
+-				      msk->push_pending;
++		unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
+ 		struct list_head join_list;
+ 
+ 		if (!flags)
+@@ -3384,7 +3385,6 @@ static void mptcp_release_cb(struct sock *sk)
+ 		 *    datapath acquires the msk socket spinlock while helding
+ 		 *    the subflow socket lock
+ 		 */
+-		msk->push_pending = 0;
+ 		msk->cb_flags &= ~flags;
+ 		spin_unlock_bh(&sk->sk_lock.slock);
+ 
+@@ -3515,10 +3515,9 @@ void mptcp_finish_connect(struct sock *ssk)
+ 	WRITE_ONCE(msk->write_seq, subflow->idsn + 1);
+ 	WRITE_ONCE(msk->snd_nxt, msk->write_seq);
+ 	WRITE_ONCE(msk->snd_una, msk->write_seq);
++	WRITE_ONCE(msk->wnd_end, msk->snd_nxt + tcp_sk(ssk)->snd_wnd);
+ 
+ 	mptcp_pm_new_connection(msk, ssk, 0);
+-
+-	mptcp_rcv_space_init(msk, ssk);
+ }
+ 
+ void mptcp_sock_graft(struct sock *sk, struct socket *parent)
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index aa1a93fe40ffa..27f3fedb9c366 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -286,7 +286,6 @@ struct mptcp_sock {
+ 	int		rmem_released;
+ 	unsigned long	flags;
+ 	unsigned long	cb_flags;
+-	unsigned long	push_pending;
+ 	bool		recovery;		/* closing subflow write queue reinjected */
+ 	bool		can_ack;
+ 	bool		fully_established;
+@@ -305,7 +304,8 @@ struct mptcp_sock {
+ 			nodelay:1,
+ 			fastopening:1,
+ 			in_accept_queue:1,
+-			free_first:1;
++			free_first:1,
++			rcvspace_init:1;
+ 	struct work_struct work;
+ 	struct sk_buff  *ooo_last_skb;
+ 	struct rb_root  out_of_order_queue;
+@@ -1118,7 +1118,8 @@ static inline bool subflow_simultaneous_connect(struct sock *sk)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 
+-	return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_FIN_WAIT1) &&
++	return (1 << sk->sk_state) &
++	       (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | TCPF_CLOSING) &&
+ 	       is_active_ssk(subflow) &&
+ 	       !subflow->conn_finished;
+ }
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 7785cda48e6bd..5155ce5b71088 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -424,6 +424,8 @@ void __mptcp_sync_state(struct sock *sk, int state)
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+ 	__mptcp_propagate_sndbuf(sk, msk->first);
++	if (!msk->rcvspace_init)
++		mptcp_rcv_space_init(msk, msk->first);
+ 	if (sk->sk_state == TCP_SYN_SENT) {
+ 		inet_sk_state_store(sk, state);
+ 		sk->sk_state_change(sk);
+@@ -545,7 +547,6 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 		}
+ 	} else if (mptcp_check_fallback(sk)) {
+ fallback:
+-		mptcp_rcv_space_init(msk, sk);
+ 		mptcp_propagate_state(parent, sk);
+ 	}
+ 	return;
+@@ -1744,7 +1745,6 @@ static void subflow_state_change(struct sock *sk)
+ 	msk = mptcp_sk(parent);
+ 	if (subflow_simultaneous_connect(sk)) {
+ 		mptcp_do_fallback(sk);
+-		mptcp_rcv_space_init(msk, sk);
+ 		pr_fallback(msk);
+ 		subflow->conn_finished = 1;
+ 		mptcp_propagate_state(parent, sk);
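
Across protocol.c and subflow.c, the scattered mptcp_rcv_space_init() calls
collapse into one rule: a new rcvspace_init flag, cleared on disconnect, makes
whichever path touches the receive space first perform the one-time
initialization. The skeleton of such flag-guarded lazy init (single-threaded
sketch; the real code relies on the msk socket lock for ordering):

  #include <stdio.h>

  struct conn {
      unsigned int rcvspace_init:1;
      unsigned long rcv_space;
  };

  static void rcv_space_init(struct conn *c)
  {
      c->rcvspace_init = 1;
      c->rcv_space = 65536;   /* placeholder initial window */
      printf("rcv space initialized\n");
  }

  static void on_data_copied(struct conn *c)
  {
      /* Initialize on first use, wherever that happens to be. */
      if (!c->rcvspace_init)
          rcv_space_init(c);
      /* ... adjust receive space ... */
  }

  static void on_state_sync(struct conn *c)
  {
      if (!c->rcvspace_init)
          rcv_space_init(c);
      /* ... propagate state ... */
  }

  int main(void)
  {
      struct conn c = { 0 };

      on_data_copied(&c);     /* initializes */
      on_state_sync(&c);      /* no-op: already done */
      return 0;
  }
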
+diff --git a/net/netfilter/ipset/ip_set_bitmap_gen.h b/net/netfilter/ipset/ip_set_bitmap_gen.h
+index 26ab0e9612d82..9523104a90da4 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_gen.h
++++ b/net/netfilter/ipset/ip_set_bitmap_gen.h
+@@ -28,6 +28,7 @@
+ #define mtype_del		IPSET_TOKEN(MTYPE, _del)
+ #define mtype_list		IPSET_TOKEN(MTYPE, _list)
+ #define mtype_gc		IPSET_TOKEN(MTYPE, _gc)
++#define mtype_cancel_gc		IPSET_TOKEN(MTYPE, _cancel_gc)
+ #define mtype			MTYPE
+ 
+ #define get_ext(set, map, id)	((map)->extensions + ((set)->dsize * (id)))
+@@ -57,9 +58,6 @@ mtype_destroy(struct ip_set *set)
+ {
+ 	struct mtype *map = set->data;
+ 
+-	if (SET_WITH_TIMEOUT(set))
+-		del_timer_sync(&map->gc);
+-
+ 	if (set->dsize && set->extensions & IPSET_EXT_DESTROY)
+ 		mtype_ext_cleanup(set);
+ 	ip_set_free(map->members);
+@@ -288,6 +286,15 @@ mtype_gc(struct timer_list *t)
+ 	add_timer(&map->gc);
+ }
+ 
++static void
++mtype_cancel_gc(struct ip_set *set)
++{
++	struct mtype *map = set->data;
++
++	if (SET_WITH_TIMEOUT(set))
++		del_timer_sync(&map->gc);
++}
++
+ static const struct ip_set_type_variant mtype = {
+ 	.kadt	= mtype_kadt,
+ 	.uadt	= mtype_uadt,
+@@ -301,6 +308,7 @@ static const struct ip_set_type_variant mtype = {
+ 	.head	= mtype_head,
+ 	.list	= mtype_list,
+ 	.same_set = mtype_same_set,
++	.cancel_gc = mtype_cancel_gc,
+ };
+ 
+ #endif /* __IP_SET_BITMAP_IP_GEN_H */
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 4c133e06be1de..3184cc6be4c9d 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -1154,6 +1154,7 @@ static int ip_set_create(struct sk_buff *skb, const struct nfnl_info *info,
+ 	return ret;
+ 
+ cleanup:
++	set->variant->cancel_gc(set);
+ 	set->variant->destroy(set);
+ put_out:
+ 	module_put(set->type->me);
+@@ -1182,6 +1183,14 @@ ip_set_destroy_set(struct ip_set *set)
+ 	kfree(set);
+ }
+ 
++static void
++ip_set_destroy_set_rcu(struct rcu_head *head)
++{
++	struct ip_set *set = container_of(head, struct ip_set, rcu);
++
++	ip_set_destroy_set(set);
++}
++
+ static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info,
+ 			  const struct nlattr * const attr[])
+ {
+@@ -1193,8 +1202,6 @@ static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info,
+ 	if (unlikely(protocol_min_failed(attr)))
+ 		return -IPSET_ERR_PROTOCOL;
+ 
+-	/* Must wait for flush to be really finished in list:set */
+-	rcu_barrier();
+ 
+ 	/* Commands are serialized and references are
+ 	 * protected by the ip_set_ref_lock.
+@@ -1206,8 +1213,10 @@ static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info,
+ 	 * counter, so if it's already zero, we can proceed
+ 	 * without holding the lock.
+ 	 */
+-	read_lock_bh(&ip_set_ref_lock);
+ 	if (!attr[IPSET_ATTR_SETNAME]) {
++		/* Must wait for flush to be really finished in list:set */
++		rcu_barrier();
++		read_lock_bh(&ip_set_ref_lock);
+ 		for (i = 0; i < inst->ip_set_max; i++) {
+ 			s = ip_set(inst, i);
+ 			if (s && (s->ref || s->ref_netlink)) {
+@@ -1221,6 +1230,8 @@ static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info,
+ 			s = ip_set(inst, i);
+ 			if (s) {
+ 				ip_set(inst, i) = NULL;
++				/* Must cancel garbage collectors */
++				s->variant->cancel_gc(s);
+ 				ip_set_destroy_set(s);
+ 			}
+ 		}
+@@ -1228,6 +1239,9 @@ static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info,
+ 		inst->is_destroyed = false;
+ 	} else {
+ 		u32 flags = flag_exist(info->nlh);
++		u16 features = 0;
++
++		read_lock_bh(&ip_set_ref_lock);
+ 		s = find_set_and_id(inst, nla_data(attr[IPSET_ATTR_SETNAME]),
+ 				    &i);
+ 		if (!s) {
+@@ -1238,10 +1252,16 @@ static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info,
+ 			ret = -IPSET_ERR_BUSY;
+ 			goto out;
+ 		}
++		features = s->type->features;
+ 		ip_set(inst, i) = NULL;
+ 		read_unlock_bh(&ip_set_ref_lock);
+-
+-		ip_set_destroy_set(s);
++		if (features & IPSET_TYPE_NAME) {
++			/* Must wait for flush to be really finished */
++			rcu_barrier();
++		}
++		/* Must cancel garbage collectors */
++		s->variant->cancel_gc(s);
++		call_rcu(&s->rcu, ip_set_destroy_set_rcu);
+ 	}
+ 	return 0;
+ out:
+@@ -1394,9 +1414,6 @@ static int ip_set_swap(struct sk_buff *skb, const struct nfnl_info *info,
+ 	ip_set(inst, to_id) = from;
+ 	write_unlock_bh(&ip_set_ref_lock);
+ 
+-	/* Make sure all readers of the old set pointers are completed. */
+-	synchronize_rcu();
+-
+ 	return 0;
+ }
+ 
+@@ -2362,6 +2379,7 @@ ip_set_net_exit(struct net *net)
+ 		set = ip_set(inst, i);
+ 		if (set) {
+ 			ip_set(inst, i) = NULL;
++			set->variant->cancel_gc(set);
+ 			ip_set_destroy_set(set);
+ 		}
+ 	}
+@@ -2409,8 +2427,11 @@ ip_set_fini(void)
+ {
+ 	nf_unregister_sockopt(&so_set);
+ 	nfnetlink_subsys_unregister(&ip_set_netlink_subsys);
+-
+ 	unregister_pernet_subsys(&ip_set_net_ops);
++
++	/* Wait for call_rcu() in destroy */
++	rcu_barrier();
++
+ 	pr_debug("these are the famous last words\n");
+ }
+ 
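
The ipset series splits teardown into two ordered steps: a new ->cancel_gc()
op synchronously stops the per-set garbage collector (timer or delayed work),
and only afterwards is the set freed, now via call_rcu(), with rcu_barrier()
at module exit so no callback outlives the module. A userspace sketch of the
stop-then-free ordering, with a thread standing in for the gc timer (link
with -lpthread; names invented):

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  struct set {
      atomic_int gc_stop;
      pthread_t gc_thread;
      int data;
  };

  static void *gc_worker(void *arg)
  {
      struct set *s = arg;

      while (!atomic_load(&s->gc_stop)) {
          /* ... expire stale entries, touching s->data ... */
          usleep(1000);
      }
      return NULL;
  }

  /* Equivalent of ->cancel_gc(): synchronously stop the collector. */
  static void set_cancel_gc(struct set *s)
  {
      atomic_store(&s->gc_stop, 1);
      pthread_join(s->gc_thread, NULL);
  }

  static void set_destroy(struct set *s)
  {
      /* Safe only because cancel_gc() already ran: the worker can
       * no longer touch the memory we are about to free. */
      free(s);
  }

  int main(void)
  {
      struct set *s = calloc(1, sizeof(*s));

      pthread_create(&s->gc_thread, NULL, gc_worker, s);
      set_cancel_gc(s);       /* first: stop concurrent access */
      set_destroy(s);         /* then: free */
      return 0;
  }
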
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index 7c2399541771f..20aad81fcad7e 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -221,6 +221,7 @@ static const union nf_inet_addr zeromask = {};
+ #undef mtype_gc_do
+ #undef mtype_gc
+ #undef mtype_gc_init
++#undef mtype_cancel_gc
+ #undef mtype_variant
+ #undef mtype_data_match
+ 
+@@ -265,6 +266,7 @@ static const union nf_inet_addr zeromask = {};
+ #define mtype_gc_do		IPSET_TOKEN(MTYPE, _gc_do)
+ #define mtype_gc		IPSET_TOKEN(MTYPE, _gc)
+ #define mtype_gc_init		IPSET_TOKEN(MTYPE, _gc_init)
++#define mtype_cancel_gc		IPSET_TOKEN(MTYPE, _cancel_gc)
+ #define mtype_variant		IPSET_TOKEN(MTYPE, _variant)
+ #define mtype_data_match	IPSET_TOKEN(MTYPE, _data_match)
+ 
+@@ -429,7 +431,7 @@ mtype_ahash_destroy(struct ip_set *set, struct htable *t, bool ext_destroy)
+ 	u32 i;
+ 
+ 	for (i = 0; i < jhash_size(t->htable_bits); i++) {
+-		n = __ipset_dereference(hbucket(t, i));
++		n = (__force struct hbucket *)hbucket(t, i);
+ 		if (!n)
+ 			continue;
+ 		if (set->extensions & IPSET_EXT_DESTROY && ext_destroy)
+@@ -449,10 +451,7 @@ mtype_destroy(struct ip_set *set)
+ 	struct htype *h = set->data;
+ 	struct list_head *l, *lt;
+ 
+-	if (SET_WITH_TIMEOUT(set))
+-		cancel_delayed_work_sync(&h->gc.dwork);
+-
+-	mtype_ahash_destroy(set, ipset_dereference_nfnl(h->table), true);
++	mtype_ahash_destroy(set, (__force struct htable *)h->table, true);
+ 	list_for_each_safe(l, lt, &h->ad) {
+ 		list_del(l);
+ 		kfree(l);
+@@ -598,6 +597,15 @@ mtype_gc_init(struct htable_gc *gc)
+ 	queue_delayed_work(system_power_efficient_wq, &gc->dwork, HZ);
+ }
+ 
++static void
++mtype_cancel_gc(struct ip_set *set)
++{
++	struct htype *h = set->data;
++
++	if (SET_WITH_TIMEOUT(set))
++		cancel_delayed_work_sync(&h->gc.dwork);
++}
++
+ static int
+ mtype_add(struct ip_set *set, void *value, const struct ip_set_ext *ext,
+ 	  struct ip_set_ext *mext, u32 flags);
+@@ -1440,6 +1448,7 @@ static const struct ip_set_type_variant mtype_variant = {
+ 	.uref	= mtype_uref,
+ 	.resize	= mtype_resize,
+ 	.same_set = mtype_same_set,
++	.cancel_gc = mtype_cancel_gc,
+ 	.region_lock = true,
+ };
+ 
+diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c
+index e162636525cfb..6c3f28bc59b32 100644
+--- a/net/netfilter/ipset/ip_set_list_set.c
++++ b/net/netfilter/ipset/ip_set_list_set.c
+@@ -426,9 +426,6 @@ list_set_destroy(struct ip_set *set)
+ 	struct list_set *map = set->data;
+ 	struct set_elem *e, *n;
+ 
+-	if (SET_WITH_TIMEOUT(set))
+-		timer_shutdown_sync(&map->gc);
+-
+ 	list_for_each_entry_safe(e, n, &map->members, list) {
+ 		list_del(&e->list);
+ 		ip_set_put_byindex(map->net, e->id);
+@@ -545,6 +542,15 @@ list_set_same_set(const struct ip_set *a, const struct ip_set *b)
+ 	       a->extensions == b->extensions;
+ }
+ 
++static void
++list_set_cancel_gc(struct ip_set *set)
++{
++	struct list_set *map = set->data;
++
++	if (SET_WITH_TIMEOUT(set))
++		timer_shutdown_sync(&map->gc);
++}
++
+ static const struct ip_set_type_variant set_variant = {
+ 	.kadt	= list_set_kadt,
+ 	.uadt	= list_set_uadt,
+@@ -558,6 +564,7 @@ static const struct ip_set_type_variant set_variant = {
+ 	.head	= list_set_head,
+ 	.list	= list_set_list,
+ 	.same_set = list_set_same_set,
++	.cancel_gc = list_set_cancel_gc,
+ };
+ 
+ static void
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index 90e275bb3e5d7..a3a8ddca99189 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -57,7 +57,7 @@
+ 
+ /* Jump to label if @reg is zero */
+ #define NFT_PIPAPO_AVX2_NOMATCH_GOTO(reg, label)			\
+-	asm_volatile_goto("vptest %%ymm" #reg ", %%ymm" #reg ";"	\
++	asm goto("vptest %%ymm" #reg ", %%ymm" #reg ";"	\
+ 			  "je %l[" #label "]" : : : : label)
+ 
+ /* Store 256 bits from YMM register into memory. Contrary to bucket load
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index 6c9592d051206..12684d835cb53 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -1208,6 +1208,10 @@ void nci_free_device(struct nci_dev *ndev)
+ {
+ 	nfc_free_device(ndev->nfc_dev);
+ 	nci_hci_deallocate(ndev);
++
++	/* drop partial rx data packet if present */
++	if (ndev->rx_data_reassembly)
++		kfree_skb(ndev->rx_data_reassembly);
+ 	kfree(ndev);
+ }
+ EXPORT_SYMBOL(nci_free_device);
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 88965e2068ac6..ebc5728aab4ea 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -48,6 +48,7 @@ struct ovs_len_tbl {
+ 
+ #define OVS_ATTR_NESTED -1
+ #define OVS_ATTR_VARIABLE -2
++#define OVS_COPY_ACTIONS_MAX_DEPTH 16
+ 
+ static bool actions_may_change_flow(const struct nlattr *actions)
+ {
+@@ -2545,13 +2546,15 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 				  const struct sw_flow_key *key,
+ 				  struct sw_flow_actions **sfa,
+ 				  __be16 eth_type, __be16 vlan_tci,
+-				  u32 mpls_label_count, bool log);
++				  u32 mpls_label_count, bool log,
++				  u32 depth);
+ 
+ static int validate_and_copy_sample(struct net *net, const struct nlattr *attr,
+ 				    const struct sw_flow_key *key,
+ 				    struct sw_flow_actions **sfa,
+ 				    __be16 eth_type, __be16 vlan_tci,
+-				    u32 mpls_label_count, bool log, bool last)
++				    u32 mpls_label_count, bool log, bool last,
++				    u32 depth)
+ {
+ 	const struct nlattr *attrs[OVS_SAMPLE_ATTR_MAX + 1];
+ 	const struct nlattr *probability, *actions;
+@@ -2602,7 +2605,8 @@ static int validate_and_copy_sample(struct net *net, const struct nlattr *attr,
+ 		return err;
+ 
+ 	err = __ovs_nla_copy_actions(net, actions, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 
+ 	if (err)
+ 		return err;
+@@ -2617,7 +2621,8 @@ static int validate_and_copy_dec_ttl(struct net *net,
+ 				     const struct sw_flow_key *key,
+ 				     struct sw_flow_actions **sfa,
+ 				     __be16 eth_type, __be16 vlan_tci,
+-				     u32 mpls_label_count, bool log)
++				     u32 mpls_label_count, bool log,
++				     u32 depth)
+ {
+ 	const struct nlattr *attrs[OVS_DEC_TTL_ATTR_MAX + 1];
+ 	int start, action_start, err, rem;
+@@ -2660,7 +2665,8 @@ static int validate_and_copy_dec_ttl(struct net *net,
+ 		return action_start;
+ 
+ 	err = __ovs_nla_copy_actions(net, actions, key, sfa, eth_type,
+-				     vlan_tci, mpls_label_count, log);
++				     vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 	if (err)
+ 		return err;
+ 
+@@ -2674,7 +2680,8 @@ static int validate_and_copy_clone(struct net *net,
+ 				   const struct sw_flow_key *key,
+ 				   struct sw_flow_actions **sfa,
+ 				   __be16 eth_type, __be16 vlan_tci,
+-				   u32 mpls_label_count, bool log, bool last)
++				   u32 mpls_label_count, bool log, bool last,
++				   u32 depth)
+ {
+ 	int start, err;
+ 	u32 exec;
+@@ -2694,7 +2701,8 @@ static int validate_and_copy_clone(struct net *net,
+ 		return err;
+ 
+ 	err = __ovs_nla_copy_actions(net, attr, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 	if (err)
+ 		return err;
+ 
+@@ -3063,7 +3071,7 @@ static int validate_and_copy_check_pkt_len(struct net *net,
+ 					   struct sw_flow_actions **sfa,
+ 					   __be16 eth_type, __be16 vlan_tci,
+ 					   u32 mpls_label_count,
+-					   bool log, bool last)
++					   bool log, bool last, u32 depth)
+ {
+ 	const struct nlattr *acts_if_greater, *acts_if_lesser_eq;
+ 	struct nlattr *a[OVS_CHECK_PKT_LEN_ATTR_MAX + 1];
+@@ -3111,7 +3119,8 @@ static int validate_and_copy_check_pkt_len(struct net *net,
+ 		return nested_acts_start;
+ 
+ 	err = __ovs_nla_copy_actions(net, acts_if_lesser_eq, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 
+ 	if (err)
+ 		return err;
+@@ -3124,7 +3133,8 @@ static int validate_and_copy_check_pkt_len(struct net *net,
+ 		return nested_acts_start;
+ 
+ 	err = __ovs_nla_copy_actions(net, acts_if_greater, key, sfa,
+-				     eth_type, vlan_tci, mpls_label_count, log);
++				     eth_type, vlan_tci, mpls_label_count, log,
++				     depth + 1);
+ 
+ 	if (err)
+ 		return err;
+@@ -3152,12 +3162,16 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 				  const struct sw_flow_key *key,
+ 				  struct sw_flow_actions **sfa,
+ 				  __be16 eth_type, __be16 vlan_tci,
+-				  u32 mpls_label_count, bool log)
++				  u32 mpls_label_count, bool log,
++				  u32 depth)
+ {
+ 	u8 mac_proto = ovs_key_mac_proto(key);
+ 	const struct nlattr *a;
+ 	int rem, err;
+ 
++	if (depth > OVS_COPY_ACTIONS_MAX_DEPTH)
++		return -EOVERFLOW;
++
+ 	nla_for_each_nested(a, attr, rem) {
+ 		/* Expected argument lengths, (u32)-1 for variable length. */
+ 		static const u32 action_lens[OVS_ACTION_ATTR_MAX + 1] = {
+@@ -3355,7 +3369,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 			err = validate_and_copy_sample(net, a, key, sfa,
+ 						       eth_type, vlan_tci,
+ 						       mpls_label_count,
+-						       log, last);
++						       log, last, depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3426,7 +3440,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 			err = validate_and_copy_clone(net, a, key, sfa,
+ 						      eth_type, vlan_tci,
+ 						      mpls_label_count,
+-						      log, last);
++						      log, last, depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3440,7 +3454,8 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 							      eth_type,
+ 							      vlan_tci,
+ 							      mpls_label_count,
+-							      log, last);
++							      log, last,
++							      depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3450,7 +3465,8 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 		case OVS_ACTION_ATTR_DEC_TTL:
+ 			err = validate_and_copy_dec_ttl(net, a, key, sfa,
+ 							eth_type, vlan_tci,
+-							mpls_label_count, log);
++							mpls_label_count, log,
++							depth);
+ 			if (err)
+ 				return err;
+ 			skip_copy = true;
+@@ -3495,7 +3511,8 @@ int ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 
+ 	(*sfa)->orig_len = nla_len(attr);
+ 	err = __ovs_nla_copy_actions(net, attr, key, sfa, key->eth.type,
+-				     key->eth.vlan.tci, mpls_label_count, log);
++				     key->eth.vlan.tci, mpls_label_count, log,
++				     0);
+ 	if (err)
+ 		ovs_nla_free_flow_actions(*sfa);
+ 
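
The openvswitch fix threads an explicit depth parameter through every
mutually recursive validate_and_copy_*() helper and rejects nesting past
OVS_COPY_ACTIONS_MAX_DEPTH with -EOVERFLOW, bounding kernel stack usage on
untrusted input. The same guard in a toy recursive-descent parser:

  #include <stdio.h>

  #define MAX_DEPTH 16

  /* Count matched parens, refusing pathological nesting that could
   * otherwise exhaust the stack. Returns chars consumed, or -1. */
  static int parse(const char *s, unsigned int depth)
  {
      int i = 0, sub;

      if (depth > MAX_DEPTH)
          return -1;          /* -EOVERFLOW in the kernel version */

      while (s[i] == '(') {
          sub = parse(s + i + 1, depth + 1);
          if (sub < 0 || s[i + 1 + sub] != ')')
              return -1;
          i += sub + 2;
      }
      return i;
  }

  int main(void)
  {
      printf("shallow: %d\n", parse("(())()", 0));    /* accepted */
      printf("deep:    %d\n",
             parse("((((((((((((((((((()))))))))))))))))))", 0));
      return 0;
  }
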
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 31e8a94dfc111..9fbc70200cd0f 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -63,6 +63,7 @@ struct tls_decrypt_ctx {
+ 	u8 iv[TLS_MAX_IV_SIZE];
+ 	u8 aad[TLS_MAX_AAD_SIZE];
+ 	u8 tail;
++	bool free_sgout;
+ 	struct scatterlist sg[];
+ };
+ 
+@@ -187,7 +188,6 @@ static void tls_decrypt_done(void *data, int err)
+ 	struct aead_request *aead_req = data;
+ 	struct crypto_aead *aead = crypto_aead_reqtfm(aead_req);
+ 	struct scatterlist *sgout = aead_req->dst;
+-	struct scatterlist *sgin = aead_req->src;
+ 	struct tls_sw_context_rx *ctx;
+ 	struct tls_decrypt_ctx *dctx;
+ 	struct tls_context *tls_ctx;
+@@ -196,6 +196,17 @@ static void tls_decrypt_done(void *data, int err)
+ 	struct sock *sk;
+ 	int aead_size;
+ 
++	/* If requests get too backlogged, the crypto API returns -EBUSY and
++	 * calls ->complete(-EINPROGRESS) immediately followed by ->complete(0)
++	 * to make waiting for the backlog to flush with crypto_wait_req()
++	 * easier. The first wait converts -EBUSY -> -EINPROGRESS, and the
++	 * second one converts -EINPROGRESS -> 0.
++	 * We have a single struct crypto_async_request per direction, so this
++	 * scheme doesn't help us; just ignore the first ->complete().
++	 */
++	if (err == -EINPROGRESS)
++		return;
++
+ 	aead_size = sizeof(*aead_req) + crypto_aead_reqsize(aead);
+ 	aead_size = ALIGN(aead_size, __alignof__(*dctx));
+ 	dctx = (void *)((u8 *)aead_req + aead_size);
+@@ -213,7 +224,7 @@ static void tls_decrypt_done(void *data, int err)
+ 	}
+ 
+ 	/* Free the destination pages if skb was not decrypted inplace */
+-	if (sgout != sgin) {
++	if (dctx->free_sgout) {
+ 		/* Skip the first S/G entry as it points to AAD */
+ 		for_each_sg(sg_next(sgout), sg, UINT_MAX, pages) {
+ 			if (!sg)
+@@ -224,10 +235,17 @@ static void tls_decrypt_done(void *data, int err)
+ 
+ 	kfree(aead_req);
+ 
+-	spin_lock_bh(&ctx->decrypt_compl_lock);
+-	if (!atomic_dec_return(&ctx->decrypt_pending))
++	if (atomic_dec_and_test(&ctx->decrypt_pending))
+ 		complete(&ctx->async_wait.completion);
+-	spin_unlock_bh(&ctx->decrypt_compl_lock);
++}
++
++static int tls_decrypt_async_wait(struct tls_sw_context_rx *ctx)
++{
++	if (!atomic_dec_and_test(&ctx->decrypt_pending))
++		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
++	atomic_inc(&ctx->decrypt_pending);
++
++	return ctx->async_wait.err;
+ }
+ 
+ static int tls_do_decryption(struct sock *sk,
+@@ -253,6 +271,7 @@ static int tls_do_decryption(struct sock *sk,
+ 		aead_request_set_callback(aead_req,
+ 					  CRYPTO_TFM_REQ_MAY_BACKLOG,
+ 					  tls_decrypt_done, aead_req);
++		DEBUG_NET_WARN_ON_ONCE(atomic_read(&ctx->decrypt_pending) < 1);
+ 		atomic_inc(&ctx->decrypt_pending);
+ 	} else {
+ 		aead_request_set_callback(aead_req,
+@@ -261,6 +280,10 @@ static int tls_do_decryption(struct sock *sk,
+ 	}
+ 
+ 	ret = crypto_aead_decrypt(aead_req);
++	if (ret == -EBUSY) {
++		ret = tls_decrypt_async_wait(ctx);
++		ret = ret ?: -EINPROGRESS;
++	}
+ 	if (ret == -EINPROGRESS) {
+ 		if (darg->async)
+ 			return 0;
+@@ -439,9 +462,10 @@ static void tls_encrypt_done(void *data, int err)
+ 	struct tls_rec *rec = data;
+ 	struct scatterlist *sge;
+ 	struct sk_msg *msg_en;
+-	bool ready = false;
+ 	struct sock *sk;
+-	int pending;
++
++	if (err == -EINPROGRESS) /* see the comment in tls_decrypt_done() */
++		return;
+ 
+ 	msg_en = &rec->msg_encrypted;
+ 
+@@ -476,23 +500,25 @@ static void tls_encrypt_done(void *data, int err)
+ 		/* If received record is at head of tx_list, schedule tx */
+ 		first_rec = list_first_entry(&ctx->tx_list,
+ 					     struct tls_rec, list);
+-		if (rec == first_rec)
+-			ready = true;
++		if (rec == first_rec) {
++			/* Schedule the transmission */
++			if (!test_and_set_bit(BIT_TX_SCHEDULED,
++					      &ctx->tx_bitmask))
++				schedule_delayed_work(&ctx->tx_work.work, 1);
++		}
+ 	}
+ 
+-	spin_lock_bh(&ctx->encrypt_compl_lock);
+-	pending = atomic_dec_return(&ctx->encrypt_pending);
+-
+-	if (!pending && ctx->async_notify)
++	if (atomic_dec_and_test(&ctx->encrypt_pending))
+ 		complete(&ctx->async_wait.completion);
+-	spin_unlock_bh(&ctx->encrypt_compl_lock);
++}
+ 
+-	if (!ready)
+-		return;
++static int tls_encrypt_async_wait(struct tls_sw_context_tx *ctx)
++{
++	if (!atomic_dec_and_test(&ctx->encrypt_pending))
++		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
++	atomic_inc(&ctx->encrypt_pending);
+ 
+-	/* Schedule the transmission */
+-	if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
+-		schedule_delayed_work(&ctx->tx_work.work, 1);
++	return ctx->async_wait.err;
+ }
+ 
+ static int tls_do_encryption(struct sock *sk,
+@@ -541,9 +567,14 @@ static int tls_do_encryption(struct sock *sk,
+ 
+ 	/* Add the record in tx_list */
+ 	list_add_tail((struct list_head *)&rec->list, &ctx->tx_list);
++	DEBUG_NET_WARN_ON_ONCE(atomic_read(&ctx->encrypt_pending) < 1);
+ 	atomic_inc(&ctx->encrypt_pending);
+ 
+ 	rc = crypto_aead_encrypt(aead_req);
++	if (rc == -EBUSY) {
++		rc = tls_encrypt_async_wait(ctx);
++		rc = rc ?: -EINPROGRESS;
++	}
+ 	if (!rc || rc != -EINPROGRESS) {
+ 		atomic_dec(&ctx->encrypt_pending);
+ 		sge->offset -= prot->prepend_size;
+@@ -984,7 +1015,6 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
+ 	int num_zc = 0;
+ 	int orig_size;
+ 	int ret = 0;
+-	int pending;
+ 
+ 	if (!eor && (msg->msg_flags & MSG_EOR))
+ 		return -EINVAL;
+@@ -1163,24 +1193,12 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
+ 	if (!num_async) {
+ 		goto send_end;
+ 	} else if (num_zc) {
+-		/* Wait for pending encryptions to get completed */
+-		spin_lock_bh(&ctx->encrypt_compl_lock);
+-		ctx->async_notify = true;
+-
+-		pending = atomic_read(&ctx->encrypt_pending);
+-		spin_unlock_bh(&ctx->encrypt_compl_lock);
+-		if (pending)
+-			crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
+-		else
+-			reinit_completion(&ctx->async_wait.completion);
+-
+-		/* There can be no concurrent accesses, since we have no
+-		 * pending encrypt operations
+-		 */
+-		WRITE_ONCE(ctx->async_notify, false);
++		int err;
+ 
+-		if (ctx->async_wait.err) {
+-			ret = ctx->async_wait.err;
++		/* Wait for pending encryptions to get completed */
++		err = tls_encrypt_async_wait(ctx);
++		if (err) {
++			ret = err;
+ 			copied = 0;
+ 		}
+ 	}
+@@ -1229,7 +1247,6 @@ void tls_sw_splice_eof(struct socket *sock)
+ 	ssize_t copied = 0;
+ 	bool retrying = false;
+ 	int ret = 0;
+-	int pending;
+ 
+ 	if (!ctx->open_rec)
+ 		return;
+@@ -1264,22 +1281,7 @@ void tls_sw_splice_eof(struct socket *sock)
+ 	}
+ 
+ 	/* Wait for pending encryptions to get completed */
+-	spin_lock_bh(&ctx->encrypt_compl_lock);
+-	ctx->async_notify = true;
+-
+-	pending = atomic_read(&ctx->encrypt_pending);
+-	spin_unlock_bh(&ctx->encrypt_compl_lock);
+-	if (pending)
+-		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
+-	else
+-		reinit_completion(&ctx->async_wait.completion);
+-
+-	/* There can be no concurrent accesses, since we have no pending
+-	 * encrypt operations
+-	 */
+-	WRITE_ONCE(ctx->async_notify, false);
+-
+-	if (ctx->async_wait.err)
++	if (tls_encrypt_async_wait(ctx))
+ 		goto unlock;
+ 
+ 	/* Transmit if any encryptions have completed */
+@@ -1581,6 +1583,7 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
+ 	} else if (out_sg) {
+ 		memcpy(sgout, out_sg, n_sgout * sizeof(*sgout));
+ 	}
++	dctx->free_sgout = !!pages;
+ 
+ 	/* Prepare and submit AEAD request */
+ 	err = tls_do_decryption(sk, sgin, sgout, dctx->iv,
+@@ -2109,16 +2112,10 @@ int tls_sw_recvmsg(struct sock *sk,
+ 
+ recv_end:
+ 	if (async) {
+-		int ret, pending;
++		int ret;
+ 
+ 		/* Wait for all previously submitted records to be decrypted */
+-		spin_lock_bh(&ctx->decrypt_compl_lock);
+-		reinit_completion(&ctx->async_wait.completion);
+-		pending = atomic_read(&ctx->decrypt_pending);
+-		spin_unlock_bh(&ctx->decrypt_compl_lock);
+-		ret = 0;
+-		if (pending)
+-			ret = crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
++		ret = tls_decrypt_async_wait(ctx);
+ 		__skb_queue_purge(&ctx->async_hold);
+ 
+ 		if (ret) {
+@@ -2135,7 +2132,6 @@ int tls_sw_recvmsg(struct sock *sk,
+ 		else
+ 			err = process_rx_list(ctx, msg, &control, 0,
+ 					      async_copy_bytes, is_peek);
+-		decrypted += max(err, 0);
+ 	}
+ 
+ 	copied += decrypted;
+@@ -2435,16 +2431,9 @@ void tls_sw_release_resources_tx(struct sock *sk)
+ 	struct tls_context *tls_ctx = tls_get_ctx(sk);
+ 	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
+ 	struct tls_rec *rec, *tmp;
+-	int pending;
+ 
+ 	/* Wait for any pending async encryptions to complete */
+-	spin_lock_bh(&ctx->encrypt_compl_lock);
+-	ctx->async_notify = true;
+-	pending = atomic_read(&ctx->encrypt_pending);
+-	spin_unlock_bh(&ctx->encrypt_compl_lock);
+-
+-	if (pending)
+-		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
++	tls_encrypt_async_wait(ctx);
+ 
+ 	tls_tx_records(sk, -1);
+ 
+@@ -2607,7 +2596,7 @@ static struct tls_sw_context_tx *init_ctx_tx(struct tls_context *ctx, struct soc
+ 	}
+ 
+ 	crypto_init_wait(&sw_ctx_tx->async_wait);
+-	spin_lock_init(&sw_ctx_tx->encrypt_compl_lock);
++	atomic_set(&sw_ctx_tx->encrypt_pending, 1);
+ 	INIT_LIST_HEAD(&sw_ctx_tx->tx_list);
+ 	INIT_DELAYED_WORK(&sw_ctx_tx->tx_work.work, tx_work_handler);
+ 	sw_ctx_tx->tx_work.sk = sk;
+@@ -2628,7 +2617,7 @@ static struct tls_sw_context_rx *init_ctx_rx(struct tls_context *ctx)
+ 	}
+ 
+ 	crypto_init_wait(&sw_ctx_rx->async_wait);
+-	spin_lock_init(&sw_ctx_rx->decrypt_compl_lock);
++	atomic_set(&sw_ctx_rx->decrypt_pending, 1);
+ 	init_waitqueue_head(&sw_ctx_rx->wq);
+ 	skb_queue_head_init(&sw_ctx_rx->rx_list);
+ 	skb_queue_head_init(&sw_ctx_rx->async_hold);
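The tls_sw hunks above replace the encrypt_compl_lock/async_notify scheme with a pending counter that is initialized to 1 rather than 0: the extra reference is the waiter's bias, each in-flight crypto request takes another reference, the completion path drops one and signals only when the count reaches zero, and tls_encrypt_async_wait() drops its bias before sleeping and restores it afterwards so the context can be reused. A minimal userspace sketch of the same biased-counter pattern, using pthreads instead of the kernel's completion API (names here are illustrative, not the kernel's):

#include <pthread.h>
#include <stdatomic.h>

struct pending_ctx {
	atomic_int pending;		/* starts at 1: the waiter's bias */
	pthread_mutex_t lock;
	pthread_cond_t done;
};

void pending_init(struct pending_ctx *ctx)
{
	atomic_init(&ctx->pending, 1);
	pthread_mutex_init(&ctx->lock, NULL);
	pthread_cond_init(&ctx->done, NULL);
}

/* One call per submitted async request. */
void pending_get(struct pending_ctx *ctx)
{
	atomic_fetch_add(&ctx->pending, 1);
}

/* Completion callback: mirrors atomic_dec_and_test() + complete(). */
void pending_put(struct pending_ctx *ctx)
{
	if (atomic_fetch_sub(&ctx->pending, 1) == 1) {
		pthread_mutex_lock(&ctx->lock);
		pthread_cond_signal(&ctx->done);
		pthread_mutex_unlock(&ctx->lock);
	}
}

/* Waiter: drop the bias, sleep until everything drains, re-arm. */
void pending_wait(struct pending_ctx *ctx)
{
	if (atomic_fetch_sub(&ctx->pending, 1) != 1) {
		pthread_mutex_lock(&ctx->lock);
		while (atomic_load(&ctx->pending) != 0)
			pthread_cond_wait(&ctx->done, &ctx->lock);
		pthread_mutex_unlock(&ctx->lock);
	}
	atomic_fetch_add(&ctx->pending, 1);	/* restore the bias */
}

The bias is what lets the completion side pair a bare decrement-and-test with a signal: the counter can never be observed at zero while a submitter still holds its reference, so no separate lock or notify flag is needed.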
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 409d74c57ca0d..3fb1b637352a9 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -5,7 +5,7 @@
+  * Copyright 2006-2010		Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright 2015-2017	Intel Deutschland GmbH
+- * Copyright (C) 2018-2023 Intel Corporation
++ * Copyright (C) 2018-2024 Intel Corporation
+  */
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+@@ -1661,6 +1661,7 @@ void wiphy_delayed_work_queue(struct wiphy *wiphy,
+ 			      unsigned long delay)
+ {
+ 	if (!delay) {
++		del_timer(&dwork->timer);
+ 		wiphy_work_queue(wiphy, &dwork->work);
+ 		return;
+ 	}
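The del_timer() added above closes a double-queue race: if wiphy_delayed_work_queue() is called with delay == 0 while an earlier, longer delay is still armed, the stale timer later fires and queues the work a second time. A self-contained POSIX-timer sketch of the same disarm-before-immediate-dispatch rule (illustrative, not the cfg80211 API; link with -lrt on older glibc):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static timer_t timer;

static void work(void)
{
	puts("work ran once");
}

static void on_timer(union sigval sv)
{
	(void)sv;
	work();
}

static void queue_delayed(long delay_ms)
{
	struct itimerspec its;

	memset(&its, 0, sizeof(its));
	if (!delay_ms) {
		/* an all-zero itimerspec disarms any pending expiry */
		timer_settime(timer, 0, &its, NULL);
		work();			/* then run immediately */
		return;
	}
	its.it_value.tv_sec = delay_ms / 1000;
	its.it_value.tv_nsec = (delay_ms % 1000) * 1000000L;
	timer_settime(timer, 0, &its, NULL);
}

int main(void)
{
	struct sigevent sev;
	struct timespec ts = { 0, 200000000L };	/* 200 ms */

	memset(&sev, 0, sizeof(sev));
	sev.sigev_notify = SIGEV_THREAD;
	sev.sigev_notify_function = on_timer;
	timer_create(CLOCK_MONOTONIC, &sev, &timer);

	queue_delayed(50);	/* arm a delayed run ...             */
	queue_delayed(0);	/* ... then request an immediate one */
	nanosleep(&ts, NULL);	/* without the disarm, work runs twice */
	return 0;
}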
+diff --git a/samples/bpf/asm_goto_workaround.h b/samples/bpf/asm_goto_workaround.h
+index 7048bb3594d65..634e81d83efd9 100644
+--- a/samples/bpf/asm_goto_workaround.h
++++ b/samples/bpf/asm_goto_workaround.h
+@@ -4,14 +4,14 @@
+ #define __ASM_GOTO_WORKAROUND_H
+ 
+ /*
+- * This will bring in asm_volatile_goto and asm_inline macro definitions
++ * This will bring in asm_goto_output and asm_inline macro definitions
+  * if enabled by compiler and config options.
+  */
+ #include <linux/types.h>
+ 
+-#ifdef asm_volatile_goto
+-#undef asm_volatile_goto
+-#define asm_volatile_goto(x...) asm volatile("invalid use of asm_volatile_goto")
++#ifdef asm_goto_output
++#undef asm_goto_output
++#define asm_goto_output(x...) asm volatile("invalid use of asm_goto_output")
+ #endif
+ 
+ /*
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index a432b171be826..7862a81017477 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -135,8 +135,13 @@ gen_btf()
+ 	${OBJCOPY} --only-section=.BTF --set-section-flags .BTF=alloc,readonly \
+ 		--strip-all ${1} ${2} 2>/dev/null
+ 	# Change e_type to ET_REL so that it can be used to link final vmlinux.
+-	# Unlike GNU ld, lld does not allow an ET_EXEC input.
+-	printf '\1' | dd of=${2} conv=notrunc bs=1 seek=16 status=none
++	# GNU ld 2.35+ and lld do not allow an ET_EXEC input.
++	if is_enabled CONFIG_CPU_BIG_ENDIAN; then
++		et_rel='\0\1'
++	else
++		et_rel='\1\0'
++	fi
++	printf "${et_rel}" | dd of=${2} conv=notrunc bs=1 seek=16 status=none
+ }
+ 
+ # Create ${2} .S file with all symbols from the ${1} object file
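The link-vmlinux.sh change above stops assuming a little-endian target: e_type is a 16-bit field at offset 16 of the ELF header, so ET_REL (1) must be written as 01 00 or 00 01 depending on CONFIG_CPU_BIG_ENDIAN. A hedged C equivalent that derives the byte order from the file's own EI_DATA byte rather than the kernel config:

#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	unsigned char ident[EI_NIDENT];
	unsigned char et_rel[2] = { ET_REL, 0 };	/* little-endian default */
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
		return 1;
	}
	f = fopen(argv[1], "r+b");
	if (!f || fread(ident, 1, EI_NIDENT, f) != EI_NIDENT) {
		perror(argv[1]);
		return 1;
	}
	if (ident[EI_DATA] == ELFDATA2MSB) {	/* big-endian object */
		et_rel[0] = 0;
		et_rel[1] = ET_REL;
	}
	/* e_type is the 16-bit field right after e_ident (offset 16) */
	fseek(f, EI_NIDENT, SEEK_SET);
	fwrite(et_rel, 1, 2, f);
	fclose(f);
	return 0;
}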
+diff --git a/scripts/mksysmap b/scripts/mksysmap
+index 9ba1c9da0a40f..57ff5656d566f 100755
+--- a/scripts/mksysmap
++++ b/scripts/mksysmap
+@@ -48,17 +48,8 @@ ${NM} -n ${1} | sed >${2} -e "
+ / __kvm_nvhe_\\$/d
+ / __kvm_nvhe_\.L/d
+ 
+-# arm64 lld
+-/ __AArch64ADRPThunk_/d
+-
+-# arm lld
+-/ __ARMV5PILongThunk_/d
+-/ __ARMV7PILongThunk_/d
+-/ __ThumbV7PILongThunk_/d
+-
+-# mips lld
+-/ __LA25Thunk_/d
+-/ __microLA25Thunk_/d
++# lld arm/aarch64/mips thunks
++/ __[[:alnum:]]*Thunk_/d
+ 
+ # CFI type identifiers
+ / __kcfi_typeid_/d
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index cb6406f485a96..f7c4d3fe4381f 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -807,7 +807,8 @@ static void check_section(const char *modname, struct elf_info *elf,
+ 
+ #define DATA_SECTIONS ".data", ".data.rel"
+ #define TEXT_SECTIONS ".text", ".text.*", ".sched.text", \
+-		".kprobes.text", ".cpuidle.text", ".noinstr.text"
++		".kprobes.text", ".cpuidle.text", ".noinstr.text", \
++		".ltext", ".ltext.*"
+ #define OTHER_TEXT_SECTIONS ".ref.text", ".head.text", ".spinlock.text", \
+ 		".fixup", ".entry.text", ".exception.text", \
+ 		".coldtext", ".softirqentry.text"
+diff --git a/scripts/mod/sumversion.c b/scripts/mod/sumversion.c
+index 31066bfdba04e..dc4878502276c 100644
+--- a/scripts/mod/sumversion.c
++++ b/scripts/mod/sumversion.c
+@@ -326,7 +326,12 @@ static int parse_source_files(const char *objfile, struct md4_ctx *md)
+ 
+ 	/* Sum all files in the same dir or subdirs. */
+ 	while ((line = get_line(&pos))) {
+-		char* p = line;
++		char* p;
++
++		/* trim the leading spaces away */
++		while (isspace(*line))
++			line++;
++		p = line;
+ 
+ 		if (strncmp(line, "source_", sizeof("source_")-1) == 0) {
+ 			p = strrchr(line, ' ');
+diff --git a/security/security.c b/security/security.c
+index 266cec94369b2..2cfecdb054c36 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -4030,7 +4030,19 @@ EXPORT_SYMBOL(security_inode_setsecctx);
+  */
+ int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen)
+ {
+-	return call_int_hook(inode_getsecctx, -EOPNOTSUPP, inode, ctx, ctxlen);
++	struct security_hook_list *hp;
++	int rc;
++
++	/*
++	 * Only one module will provide a security context.
++	 */
++	hlist_for_each_entry(hp, &security_hook_heads.inode_getsecctx, list) {
++		rc = hp->hook.inode_getsecctx(inode, ctx, ctxlen);
++		if (rc != LSM_RET_DEFAULT(inode_getsecctx))
++			return rc;
++	}
++
++	return LSM_RET_DEFAULT(inode_getsecctx);
+ }
+ EXPORT_SYMBOL(security_inode_getsecctx);
+ 
+@@ -4387,8 +4399,20 @@ EXPORT_SYMBOL(security_sock_rcv_skb);
+ int security_socket_getpeersec_stream(struct socket *sock, sockptr_t optval,
+ 				      sockptr_t optlen, unsigned int len)
+ {
+-	return call_int_hook(socket_getpeersec_stream, -ENOPROTOOPT, sock,
+-			     optval, optlen, len);
++	struct security_hook_list *hp;
++	int rc;
++
++	/*
++	 * Only one module will provide a security context.
++	 */
++	hlist_for_each_entry(hp, &security_hook_heads.socket_getpeersec_stream,
++			     list) {
++		rc = hp->hook.socket_getpeersec_stream(sock, optval, optlen,
++						       len);
++		if (rc != LSM_RET_DEFAULT(socket_getpeersec_stream))
++			return rc;
++	}
++	return LSM_RET_DEFAULT(socket_getpeersec_stream);
+ }
+ 
+ /**
+@@ -4408,8 +4432,19 @@ int security_socket_getpeersec_stream(struct socket *sock, sockptr_t optval,
+ int security_socket_getpeersec_dgram(struct socket *sock,
+ 				     struct sk_buff *skb, u32 *secid)
+ {
+-	return call_int_hook(socket_getpeersec_dgram, -ENOPROTOOPT, sock,
+-			     skb, secid);
++	struct security_hook_list *hp;
++	int rc;
++
++	/*
++	 * Only one module will provide a security context.
++	 */
++	hlist_for_each_entry(hp, &security_hook_heads.socket_getpeersec_dgram,
++			     list) {
++		rc = hp->hook.socket_getpeersec_dgram(sock, skb, secid);
++		if (rc != LSM_RET_DEFAULT(socket_getpeersec_dgram))
++			return rc;
++	}
++	return LSM_RET_DEFAULT(socket_getpeersec_dgram);
+ }
+ EXPORT_SYMBOL(security_socket_getpeersec_dgram);
+ 
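Both security.c hunks above swap call_int_hook(), which dispatches to a single hook, for an explicit walk of the registered hook list that returns the first result differing from the default (-EOPNOTSUPP or -ENOPROTOOPT), on the premise that only one LSM provides a security context. A minimal sketch of that dispatch shape, with illustrative names rather than the LSM infrastructure's:

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

#define DEFAULT_RET (-EOPNOTSUPP)	/* the hook's LSM_RET_DEFAULT() */

typedef int (*getsecctx_hook)(void **ctx, unsigned int *len);

static int hook_declines(void **ctx, unsigned int *len)
{
	(void)ctx; (void)len;
	return DEFAULT_RET;		/* this module has no opinion */
}

static int hook_provides(void **ctx, unsigned int *len)
{
	static char label[] = "example_t";

	*ctx = label;
	*len = sizeof(label) - 1;
	return 0;
}

static getsecctx_hook hooks[] = { hook_declines, hook_provides };

static int get_secctx(void **ctx, unsigned int *len)
{
	for (size_t i = 0; i < sizeof(hooks) / sizeof(hooks[0]); i++) {
		int rc = hooks[i](ctx, len);

		/* first non-default answer wins */
		if (rc != DEFAULT_RET)
			return rc;
	}
	return DEFAULT_RET;
}

int main(void)
{
	void *ctx;
	unsigned int len;

	printf("rc=%d\n", get_secctx(&ctx, &len));	/* rc=0 */
	return 0;
}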
+diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig
+index 21a90b3c4cc73..8e0ff70fb6101 100644
+--- a/sound/pci/hda/Kconfig
++++ b/sound/pci/hda/Kconfig
+@@ -156,7 +156,7 @@ config SND_HDA_SCODEC_CS35L56_I2C
+ 	depends on I2C
+ 	depends on ACPI || COMPILE_TEST
+ 	depends on SND_SOC
+-	select CS_DSP
++	select FW_CS_DSP
+ 	select SND_HDA_GENERIC
+ 	select SND_SOC_CS35L56_SHARED
+ 	select SND_HDA_SCODEC_CS35L56
+@@ -171,7 +171,7 @@ config SND_HDA_SCODEC_CS35L56_SPI
+ 	depends on SPI_MASTER
+ 	depends on ACPI || COMPILE_TEST
+ 	depends on SND_SOC
+-	select CS_DSP
++	select FW_CS_DSP
+ 	select SND_HDA_GENERIC
+ 	select SND_SOC_CS35L56_SHARED
+ 	select SND_HDA_SCODEC_CS35L56
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index e8819e8a98763..e8209178d87bb 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -344,6 +344,7 @@ enum {
+ 	CXT_FIXUP_HP_ZBOOK_MUTE_LED,
+ 	CXT_FIXUP_HEADSET_MIC,
+ 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
++	CXT_PINCFG_SWS_JS201D,
+ };
+ 
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -841,6 +842,17 @@ static const struct hda_pintbl cxt_pincfg_lemote[] = {
+ 	{}
+ };
+ 
++/* SuoWoSi/South-holding JS201D with sn6140 */
++static const struct hda_pintbl cxt_pincfg_sws_js201d[] = {
++	{ 0x16, 0x03211040 }, /* hp out */
++	{ 0x17, 0x91170110 }, /* SPK/Class_D */
++	{ 0x18, 0x95a70130 }, /* Internal mic */
++	{ 0x19, 0x03a11020 }, /* Headset Mic */
++	{ 0x1a, 0x40f001f0 }, /* Not used */
++	{ 0x21, 0x40f001f0 }, /* Not used */
++	{}
++};
++
+ static const struct hda_fixup cxt_fixups[] = {
+ 	[CXT_PINCFG_LENOVO_X200] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -996,6 +1008,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = CXT_FIXUP_HEADSET_MIC,
+ 	},
++	[CXT_PINCFG_SWS_JS201D] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = cxt_pincfg_sws_js201d,
++	},
+ };
+ 
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -1069,6 +1085,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
++	SND_PCI_QUIRK(0x14f1, 0x0265, "SWS JS201D", CXT_PINCFG_SWS_JS201D),
+ 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo T410", CXT_PINCFG_LENOVO_TP410),
+@@ -1109,6 +1126,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ 	{ .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" },
+ 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
+ 	{ .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
++	{ .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" },
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c
+index 627899959ffe8..e41316e2e9833 100644
+--- a/sound/pci/hda/patch_cs8409.c
++++ b/sound/pci/hda/patch_cs8409.c
+@@ -1371,6 +1371,7 @@ void dolphin_fixups(struct hda_codec *codec, const struct hda_fixup *fix, int ac
+ 		spec->scodecs[CS8409_CODEC1] = &dolphin_cs42l42_1;
+ 		spec->scodecs[CS8409_CODEC1]->codec = codec;
+ 		spec->num_scodecs = 2;
++		spec->gen.suppress_vmaster = 1;
+ 
+ 		codec->patch_ops = cs8409_dolphin_patch_ops;
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 084857723ce6e..e3096572881dc 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -439,6 +439,10 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x67, 0xf000, 0x3000);
+ 		fallthrough;
+ 	case 0x10ec0215:
++	case 0x10ec0285:
++	case 0x10ec0289:
++		alc_update_coef_idx(codec, 0x36, 1<<13, 0);
++		fallthrough;
+ 	case 0x10ec0230:
+ 	case 0x10ec0233:
+ 	case 0x10ec0235:
+@@ -452,9 +456,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0283:
+ 	case 0x10ec0286:
+ 	case 0x10ec0288:
+-	case 0x10ec0285:
+ 	case 0x10ec0298:
+-	case 0x10ec0289:
+ 	case 0x10ec0300:
+ 		alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+ 		break;
+@@ -9570,7 +9572,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_i2c_two,
+ 		.chained = true,
+-		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++		.chain_id = ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 	},
+ 	[ALC287_FIXUP_TAS2781_I2C] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -9591,6 +9593,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC287_FIXUP_THINKPAD_I2S_SPK] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc287_fixup_bind_dacs,
++		.chained = true,
++		.chain_id = ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 	},
+ 	[ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -9640,6 +9644,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1247, "Acer vCopperbox", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x1248, "Acer Veriton N4660G", ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1269, "Acer SWIFT SF314-54", ALC256_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x126a, "Acer Swift SF114-32", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+@@ -9719,6 +9724,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0b71, "Dell Inspiron 16 Plus 7620", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
+ 	SND_PCI_QUIRK(0x1028, 0x0beb, "Dell XPS 15 9530 (2023)", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1028, 0x0c03, "Dell Precision 5340", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x0c0b, "Dell Oasis 14 RPL-P", ALC289_FIXUP_RTK_AMP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0c0d, "Dell Oasis", ALC289_FIXUP_RTK_AMP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x0c0e, "Dell Oasis 16", ALC289_FIXUP_RTK_AMP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x0c19, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1028, 0x0c1a, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1028, 0x0c1b, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS),
+@@ -9839,6 +9847,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8786, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -9908,6 +9917,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8abb, "HP ZBook Firefly 14 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8b0f, "HP Elite mt645 G7 Mobile Thin Client U81", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b2f, "HP 255 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8b42, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b43, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -9915,6 +9925,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8b45, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b46, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b47, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8b59, "HP Elite mt645 G7 Mobile Thin Client U89", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b63, "HP Elite Dragonfly 13.5 inch G4", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+@@ -9944,6 +9955,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c72, "HP EliteBook 865 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8ca1, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca7, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+@@ -10308,6 +10321,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
++	SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ 	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index dfe281b57aa64..9f98965742e83 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -627,7 +627,7 @@ static int tas2781_hda_bind(struct device *dev, struct device *master,
+ 
+ 	strscpy(comps->name, dev_name(dev), sizeof(comps->name));
+ 
+-	ret = tascodec_init(tas_hda->priv, codec, tasdev_fw_ready);
++	ret = tascodec_init(tas_hda->priv, codec, THIS_MODULE, tasdev_fw_ready);
+ 	if (!ret)
+ 		comps->playback_hook = tas2781_hda_playback_hook;
+ 
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index d83cb6e4c62ae..80ad60d485ea0 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -248,6 +248,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82YM"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "83AS"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -297,6 +304,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 15 B7ED"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 15 C7VF"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index edcb85bd8ea7f..ea08b7cfc31da 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3314,6 +3314,7 @@ static void rt5645_jack_detect_work(struct work_struct *work)
+ 				    report, SND_JACK_HEADPHONE);
+ 		snd_soc_jack_report(rt5645->mic_jack,
+ 				    report, SND_JACK_MICROPHONE);
++		mutex_unlock(&rt5645->jd_mutex);
+ 		return;
+ 	case 4:
+ 		val = snd_soc_component_read(rt5645->component, RT5645_A_JD_CTRL1) & 0x0020;
+diff --git a/sound/soc/codecs/tas2781-comlib.c b/sound/soc/codecs/tas2781-comlib.c
+index 00e35169ae495..add16302f711e 100644
+--- a/sound/soc/codecs/tas2781-comlib.c
++++ b/sound/soc/codecs/tas2781-comlib.c
+@@ -267,6 +267,7 @@ void tas2781_reset(struct tasdevice_priv *tas_dev)
+ EXPORT_SYMBOL_GPL(tas2781_reset);
+ 
+ int tascodec_init(struct tasdevice_priv *tas_priv, void *codec,
++	struct module *module,
+ 	void (*cont)(const struct firmware *fw, void *context))
+ {
+ 	int ret = 0;
+@@ -280,7 +281,7 @@ int tascodec_init(struct tasdevice_priv *tas_priv, void *codec,
+ 		tas_priv->dev_name, tas_priv->ndev);
+ 	crc8_populate_msb(tas_priv->crc8_lkp_tbl, TASDEVICE_CRC8_POLYNOMIAL);
+ 	tas_priv->codec = codec;
+-	ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_UEVENT,
++	ret = request_firmware_nowait(module, FW_ACTION_UEVENT,
+ 		tas_priv->rca_binaryname, tas_priv->dev, GFP_KERNEL, tas_priv,
+ 		cont);
+ 	if (ret)
+diff --git a/sound/soc/codecs/tas2781-i2c.c b/sound/soc/codecs/tas2781-i2c.c
+index 917b1c15f71d4..2f7f8b18c36fa 100644
+--- a/sound/soc/codecs/tas2781-i2c.c
++++ b/sound/soc/codecs/tas2781-i2c.c
+@@ -564,7 +564,7 @@ static int tasdevice_codec_probe(struct snd_soc_component *codec)
+ {
+ 	struct tasdevice_priv *tas_priv = snd_soc_component_get_drvdata(codec);
+ 
+-	return tascodec_init(tas_priv, codec, tasdevice_fw_ready);
++	return tascodec_init(tas_priv, codec, THIS_MODULE, tasdevice_fw_ready);
+ }
+ 
+ static void tasdevice_deinit(void *context)
+diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
+index 98055dd39b782..75834ed0365d9 100644
+--- a/sound/soc/codecs/wcd938x.c
++++ b/sound/soc/codecs/wcd938x.c
+@@ -3589,7 +3589,7 @@ static int wcd938x_probe(struct platform_device *pdev)
+ 	ret = wcd938x_populate_dt_data(wcd938x, dev);
+ 	if (ret) {
+ 		dev_err(dev, "%s: Fail to obtain platform data\n", __func__);
+-		return -EINVAL;
++		return ret;
+ 	}
+ 
+ 	ret = wcd938x_add_slave_components(wcd938x, dev, &match);
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index 59c3793f65df0..db78eb2f01080 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -477,6 +477,9 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 	return 0;
+ 
+ err_i915_init:
++	pci_free_irq(pci, 0, adev);
++	pci_free_irq(pci, 0, bus);
++	pci_free_irq_vectors(pci);
+ 	pci_clear_master(pci);
+ 	pci_set_drvdata(pci, NULL);
+ err_acquire_irq:
+diff --git a/sound/soc/intel/avs/topology.c b/sound/soc/intel/avs/topology.c
+index c74e9d622e4c6..41020409ffb60 100644
+--- a/sound/soc/intel/avs/topology.c
++++ b/sound/soc/intel/avs/topology.c
+@@ -857,7 +857,7 @@ assign_copier_gtw_instance(struct snd_soc_component *comp, struct avs_tplg_modcf
+ 	}
+ 
+ 	/* If topology sets value don't overwrite it */
+-	if (cfg->copier.vindex.i2s.instance)
++	if (cfg->copier.vindex.val)
+ 		return;
+ 
+ 	mach = dev_get_platdata(comp->card->dev);
+diff --git a/sound/soc/sof/ipc3-topology.c b/sound/soc/sof/ipc3-topology.c
+index 2c7a5e7a364cf..d96555438c6bf 100644
+--- a/sound/soc/sof/ipc3-topology.c
++++ b/sound/soc/sof/ipc3-topology.c
+@@ -2309,27 +2309,16 @@ static int sof_tear_down_left_over_pipelines(struct snd_sof_dev *sdev)
+ 	return 0;
+ }
+ 
+-/*
+- * For older firmware, this function doesn't free widgets for static pipelines during suspend.
+- * It only resets use_count for all widgets.
+- */
+-static int sof_ipc3_tear_down_all_pipelines(struct snd_sof_dev *sdev, bool verify)
++static int sof_ipc3_free_widgets_in_list(struct snd_sof_dev *sdev, bool include_scheduler,
++					 bool *dyn_widgets, bool verify)
+ {
+ 	struct sof_ipc_fw_version *v = &sdev->fw_ready.version;
+ 	struct snd_sof_widget *swidget;
+-	struct snd_sof_route *sroute;
+-	bool dyn_widgets = false;
+ 	int ret;
+ 
+-	/*
+-	 * This function is called during suspend and for one-time topology verification during
+-	 * first boot. In both cases, there is no need to protect swidget->use_count and
+-	 * sroute->setup because during suspend all running streams are suspended and during
+-	 * topology loading the sound card unavailable to open PCMs.
+-	 */
+ 	list_for_each_entry(swidget, &sdev->widget_list, list) {
+ 		if (swidget->dynamic_pipeline_widget) {
+-			dyn_widgets = true;
++			*dyn_widgets = true;
+ 			continue;
+ 		}
+ 
+@@ -2344,11 +2333,49 @@ static int sof_ipc3_tear_down_all_pipelines(struct snd_sof_dev *sdev, bool verif
+ 			continue;
+ 		}
+ 
++		if (include_scheduler && swidget->id != snd_soc_dapm_scheduler)
++			continue;
++
++		if (!include_scheduler && swidget->id == snd_soc_dapm_scheduler)
++			continue;
++
+ 		ret = sof_widget_free(sdev, swidget);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+ 
++	return 0;
++}
++
++/*
++ * For older firmware, this function doesn't free widgets for static pipelines during suspend.
++ * It only resets use_count for all widgets.
++ */
++static int sof_ipc3_tear_down_all_pipelines(struct snd_sof_dev *sdev, bool verify)
++{
++	struct sof_ipc_fw_version *v = &sdev->fw_ready.version;
++	struct snd_sof_widget *swidget;
++	struct snd_sof_route *sroute;
++	bool dyn_widgets = false;
++	int ret;
++
++	/*
++	 * This function is called during suspend and for one-time topology verification during
++	 * first boot. In both cases, there is no need to protect swidget->use_count and
++	 * sroute->setup because during suspend all running streams are suspended and during
++	 * topology loading the sound card is unavailable to open PCMs. Do not free the scheduler
++	 * widgets yet so that the secondary cores do not get powered down before all the widgets
++	 * associated with the scheduler are freed.
++	 */
++	ret = sof_ipc3_free_widgets_in_list(sdev, false, &dyn_widgets, verify);
++	if (ret < 0)
++		return ret;
++
++	/* free all the scheduler widgets now */
++	ret = sof_ipc3_free_widgets_in_list(sdev, true, &dyn_widgets, verify);
++	if (ret < 0)
++		return ret;
++
+ 	/*
+ 	 * Tear down all pipelines associated with PCMs that did not get suspended
+ 	 * and unset the prepare flag so that they can be set up again during resume.
+diff --git a/sound/soc/sof/ipc3.c b/sound/soc/sof/ipc3.c
+index fb40378ad0840..c03dd513fbff1 100644
+--- a/sound/soc/sof/ipc3.c
++++ b/sound/soc/sof/ipc3.c
+@@ -1067,7 +1067,7 @@ static void sof_ipc3_rx_msg(struct snd_sof_dev *sdev)
+ 		return;
+ 	}
+ 
+-	if (hdr.size < sizeof(hdr)) {
++	if (hdr.size < sizeof(hdr) || hdr.size > SOF_IPC_MSG_MAX_SIZE) {
+ 		dev_err(sdev->dev, "The received message size is invalid\n");
+ 		return;
+ 	}
+diff --git a/tools/arch/x86/include/asm/rmwcc.h b/tools/arch/x86/include/asm/rmwcc.h
+index 11ff975242cac..e2ff22b379a44 100644
+--- a/tools/arch/x86/include/asm/rmwcc.h
++++ b/tools/arch/x86/include/asm/rmwcc.h
+@@ -4,7 +4,7 @@
+ 
+ #define __GEN_RMWcc(fullop, var, cc, ...)				\
+ do {									\
+-	asm_volatile_goto (fullop "; j" cc " %l[cc_label]"		\
++	asm goto (fullop "; j" cc " %l[cc_label]"		\
+ 			: : "m" (var), ## __VA_ARGS__ 			\
+ 			: "memory" : cc_label);				\
+ 	return 0;							\
+diff --git a/tools/include/linux/compiler_types.h b/tools/include/linux/compiler_types.h
+index 1bdd834bdd571..d09f9dc172a48 100644
+--- a/tools/include/linux/compiler_types.h
++++ b/tools/include/linux/compiler_types.h
+@@ -36,8 +36,8 @@
+ #include <linux/compiler-gcc.h>
+ #endif
+ 
+-#ifndef asm_volatile_goto
+-#define asm_volatile_goto(x...) asm goto(x)
++#ifndef asm_goto_output
++#define asm_goto_output(x...) asm goto(x)
+ #endif
+ 
+ #endif /* __LINUX_COMPILER_TYPES_H */
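The renames above track the upstream switch from asm_volatile_goto() to plain asm goto, with asm_goto_output() kept only as the wrapper for the output-capable form. For reference, a small x86-only example in the classic no-output form that __GEN_RMWcc above emits (illustrative; a GCC or clang with asm goto support is assumed):

#include <stdio.h>

static int dec_and_test(int *v)
{
	asm goto("decl %0; jz %l[is_zero]"
		 : /* no outputs: the pre-asm_goto_output() form */
		 : "m" (*v)
		 : "cc", "memory"
		 : is_zero);
	return 0;
is_zero:
	return 1;
}

int main(void)
{
	int v = 1;
	int hit = dec_and_test(&v);

	printf("hit=%d v=%d\n", hit, v);	/* hit=1 v=0 */
	return 0;
}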
+diff --git a/tools/testing/selftests/dt/test_unprobed_devices.sh b/tools/testing/selftests/dt/test_unprobed_devices.sh
+index b07af2a4c4de0..7fae90293a9d8 100755
+--- a/tools/testing/selftests/dt/test_unprobed_devices.sh
++++ b/tools/testing/selftests/dt/test_unprobed_devices.sh
+@@ -33,8 +33,8 @@ if [[ ! -d "${PDT}" ]]; then
+ fi
+ 
+ nodes_compatible=$(
+-	for node_compat in $(find ${PDT} -name compatible); do
+-		node=$(dirname "${node_compat}")
++	for node in $(find ${PDT} -type d); do
++		[ ! -f "${node}"/compatible ] && continue
+ 		# Check if node is available
+ 		if [[ -e "${node}"/status ]]; then
+ 			status=$(tr -d '\000' < "${node}"/status)
+@@ -46,10 +46,11 @@ nodes_compatible=$(
+ 
+ nodes_dev_bound=$(
+ 	IFS=$'\n'
+-	for uevent in $(find /sys/devices -name uevent); do
+-		if [[ -d "$(dirname "${uevent}")"/driver ]]; then
+-			grep '^OF_FULLNAME=' "${uevent}" | sed -e 's|OF_FULLNAME=||'
+-		fi
++	for dev_dir in $(find /sys/devices -type d); do
++		[ ! -f "${dev_dir}"/uevent ] && continue
++		[ ! -d "${dev_dir}"/driver ] && continue
++
++		grep '^OF_FULLNAME=' "${dev_dir}"/uevent | sed -e 's|OF_FULLNAME=||'
+ 	done
+ 	)
+ 
+diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
+index 5b79758cae627..e64bbdf0e86ea 100644
+--- a/tools/testing/selftests/landlock/common.h
++++ b/tools/testing/selftests/landlock/common.h
+@@ -9,6 +9,7 @@
+ 
+ #include <errno.h>
+ #include <linux/landlock.h>
++#include <linux/securebits.h>
+ #include <sys/capability.h>
+ #include <sys/socket.h>
+ #include <sys/syscall.h>
+@@ -115,11 +116,16 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
+ 		/* clang-format off */
+ 		CAP_DAC_OVERRIDE,
+ 		CAP_MKNOD,
++		CAP_NET_ADMIN,
++		CAP_NET_BIND_SERVICE,
+ 		CAP_SYS_ADMIN,
+ 		CAP_SYS_CHROOT,
+-		CAP_NET_BIND_SERVICE,
+ 		/* clang-format on */
+ 	};
++	const unsigned int noroot = SECBIT_NOROOT | SECBIT_NOROOT_LOCKED;
++
++	if ((cap_get_secbits() & noroot) != noroot)
++		EXPECT_EQ(0, cap_set_secbits(noroot));
+ 
+ 	cap_p = cap_get_proc();
+ 	EXPECT_NE(NULL, cap_p)
+@@ -137,6 +143,8 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
+ 			TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
+ 		}
+ 	}
++
++	/* Automatically resets ambient capabilities. */
+ 	EXPECT_NE(-1, cap_set_proc(cap_p))
+ 	{
+ 		TH_LOG("Failed to cap_set_proc: %s", strerror(errno));
+@@ -145,6 +153,9 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
+ 	{
+ 		TH_LOG("Failed to cap_free: %s", strerror(errno));
+ 	}
++
++	/* Quickly checks that ambient capabilities are cleared. */
++	EXPECT_NE(-1, cap_get_ambient(caps[0]));
+ }
+ 
+ /* We cannot put such helpers in a library because of kselftest_harness.h . */
+@@ -158,8 +169,9 @@ static void __maybe_unused drop_caps(struct __test_metadata *const _metadata)
+ 	_init_caps(_metadata, true);
+ }
+ 
+-static void _effective_cap(struct __test_metadata *const _metadata,
+-			   const cap_value_t caps, const cap_flag_value_t value)
++static void _change_cap(struct __test_metadata *const _metadata,
++			const cap_flag_t flag, const cap_value_t cap,
++			const cap_flag_value_t value)
+ {
+ 	cap_t cap_p;
+ 
+@@ -168,7 +180,7 @@ static void _effective_cap(struct __test_metadata *const _metadata,
+ 	{
+ 		TH_LOG("Failed to cap_get_proc: %s", strerror(errno));
+ 	}
+-	EXPECT_NE(-1, cap_set_flag(cap_p, CAP_EFFECTIVE, 1, &caps, value))
++	EXPECT_NE(-1, cap_set_flag(cap_p, flag, 1, &cap, value))
+ 	{
+ 		TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
+ 	}
+@@ -183,15 +195,35 @@ static void _effective_cap(struct __test_metadata *const _metadata,
+ }
+ 
+ static void __maybe_unused set_cap(struct __test_metadata *const _metadata,
+-				   const cap_value_t caps)
++				   const cap_value_t cap)
+ {
+-	_effective_cap(_metadata, caps, CAP_SET);
++	_change_cap(_metadata, CAP_EFFECTIVE, cap, CAP_SET);
+ }
+ 
+ static void __maybe_unused clear_cap(struct __test_metadata *const _metadata,
+-				     const cap_value_t caps)
++				     const cap_value_t cap)
++{
++	_change_cap(_metadata, CAP_EFFECTIVE, cap, CAP_CLEAR);
++}
++
++static void __maybe_unused
++set_ambient_cap(struct __test_metadata *const _metadata, const cap_value_t cap)
++{
++	_change_cap(_metadata, CAP_INHERITABLE, cap, CAP_SET);
++
++	EXPECT_NE(-1, cap_set_ambient(cap, CAP_SET))
++	{
++		TH_LOG("Failed to set ambient capability %d: %s", cap,
++		       strerror(errno));
++	}
++}
++
++static void __maybe_unused clear_ambient_cap(
++	struct __test_metadata *const _metadata, const cap_value_t cap)
+ {
+-	_effective_cap(_metadata, caps, CAP_CLEAR);
++	EXPECT_EQ(1, cap_get_ambient(cap));
++	_change_cap(_metadata, CAP_INHERITABLE, cap, CAP_CLEAR);
++	EXPECT_EQ(0, cap_get_ambient(cap));
+ }
+ 
+ /* Receives an FD from a UNIX socket. Returns the received FD, or -errno. */
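The new set_ambient_cap()/clear_ambient_cap() helpers above use libcap's cap_set_ambient()/cap_get_ambient(); underneath, ambient capabilities are toggled with prctl(), and a raise only succeeds if the capability is already in the permitted and inheritable sets (which is what _change_cap() arranges via CAP_INHERITABLE). A bare-prctl sketch of the same raise:

#include <linux/capability.h>
#include <linux/prctl.h>
#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	/*
	 * Needs CAP_NET_ADMIN in the permitted and inheritable sets,
	 * and SECBIT_NO_CAP_AMBIENT_RAISE clear, to succeed.
	 */
	if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE,
		  CAP_NET_ADMIN, 0, 0))
		perror("PR_CAP_AMBIENT_RAISE");
	else
		puts("CAP_NET_ADMIN is now ambient");
	return 0;
}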
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index 18e1f86a6234c..fde1a96ef9f41 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -241,9 +241,11 @@ struct mnt_opt {
+ 	const char *const data;
+ };
+ 
+-const struct mnt_opt mnt_tmp = {
++#define MNT_TMP_DATA "size=4m,mode=700"
++
++static const struct mnt_opt mnt_tmp = {
+ 	.type = "tmpfs",
+-	.data = "size=4m,mode=700",
++	.data = MNT_TMP_DATA,
+ };
+ 
+ static int mount_opt(const struct mnt_opt *const mnt, const char *const target)
+@@ -4572,7 +4574,10 @@ FIXTURE_VARIANT(layout3_fs)
+ /* clang-format off */
+ FIXTURE_VARIANT_ADD(layout3_fs, tmpfs) {
+ 	/* clang-format on */
+-	.mnt = mnt_tmp,
++	.mnt = {
++		.type = "tmpfs",
++		.data = MNT_TMP_DATA,
++	},
+ 	.file_path = file1_s1d1,
+ };
+ 
+diff --git a/tools/testing/selftests/landlock/net_test.c b/tools/testing/selftests/landlock/net_test.c
+index 929e21c4db05f..4499b2736e1a2 100644
+--- a/tools/testing/selftests/landlock/net_test.c
++++ b/tools/testing/selftests/landlock/net_test.c
+@@ -17,6 +17,7 @@
+ #include <string.h>
+ #include <sys/prctl.h>
+ #include <sys/socket.h>
++#include <sys/syscall.h>
+ #include <sys/un.h>
+ 
+ #include "common.h"
+@@ -54,6 +55,11 @@ struct service_fixture {
+ 	};
+ };
+ 
++static pid_t sys_gettid(void)
++{
++	return syscall(__NR_gettid);
++}
++
+ static int set_service(struct service_fixture *const srv,
+ 		       const struct protocol_variant prot,
+ 		       const unsigned short index)
+@@ -88,7 +94,7 @@ static int set_service(struct service_fixture *const srv,
+ 	case AF_UNIX:
+ 		srv->unix_addr.sun_family = prot.domain;
+ 		sprintf(srv->unix_addr.sun_path,
+-			"_selftests-landlock-net-tid%d-index%d", gettid(),
++			"_selftests-landlock-net-tid%d-index%d", sys_gettid(),
+ 			index);
+ 		srv->unix_addr_len = SUN_LEN(&srv->unix_addr);
+ 		srv->unix_addr.sun_path[0] = '\0';
+@@ -101,8 +107,11 @@ static void setup_loopback(struct __test_metadata *const _metadata)
+ {
+ 	set_cap(_metadata, CAP_SYS_ADMIN);
+ 	ASSERT_EQ(0, unshare(CLONE_NEWNET));
+-	ASSERT_EQ(0, system("ip link set dev lo up"));
+ 	clear_cap(_metadata, CAP_SYS_ADMIN);
++
++	set_ambient_cap(_metadata, CAP_NET_ADMIN);
++	ASSERT_EQ(0, system("ip link set dev lo up"));
++	clear_ambient_cap(_metadata, CAP_NET_ADMIN);
+ }
+ 
+ static bool is_restricted(const struct protocol_variant *const prot,
+diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+index 0899019a7fcb4..e14bdd4455f2d 100755
+--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ # Kselftest framework requirement - SKIP code is 4.
+diff --git a/tools/testing/selftests/mm/ksm_tests.c b/tools/testing/selftests/mm/ksm_tests.c
+index 380b691d3eb9f..b748c48908d9d 100644
+--- a/tools/testing/selftests/mm/ksm_tests.c
++++ b/tools/testing/selftests/mm/ksm_tests.c
+@@ -566,7 +566,7 @@ static int ksm_merge_hugepages_time(int merge_type, int mapping, int prot,
+ 	if (map_ptr_orig == MAP_FAILED)
+ 		err(2, "initial mmap");
+ 
+-	if (madvise(map_ptr, len + HPAGE_SIZE, MADV_HUGEPAGE))
++	if (madvise(map_ptr, len, MADV_HUGEPAGE))
+ 		err(2, "MADV_HUGEPAGE");
+ 
+ 	pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
+diff --git a/tools/testing/selftests/mm/map_hugetlb.c b/tools/testing/selftests/mm/map_hugetlb.c
+index 193281560b61b..86e8f2048a409 100644
+--- a/tools/testing/selftests/mm/map_hugetlb.c
++++ b/tools/testing/selftests/mm/map_hugetlb.c
+@@ -15,6 +15,7 @@
+ #include <unistd.h>
+ #include <sys/mman.h>
+ #include <fcntl.h>
++#include "vm_util.h"
+ 
+ #define LENGTH (256UL*1024*1024)
+ #define PROTECTION (PROT_READ | PROT_WRITE)
+@@ -58,10 +59,16 @@ int main(int argc, char **argv)
+ {
+ 	void *addr;
+ 	int ret;
++	size_t hugepage_size;
+ 	size_t length = LENGTH;
+ 	int flags = FLAGS;
+ 	int shift = 0;
+ 
++	hugepage_size = default_huge_page_size();
++	/* munmap will fail if the length is not page aligned */
++	if (hugepage_size > length)
++		length = hugepage_size;
++
+ 	if (argc > 1)
+ 		length = atol(argv[1]) << 20;
+ 	if (argc > 2) {
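The length adjustment above exists because munmap() on a hugetlb mapping returns EINVAL unless the length is a multiple of the huge page size. default_huge_page_size() comes from the selftests' vm_util.h; the sketch below approximates it by parsing /proc/meminfo and rounds a request up to whole huge pages (illustrative, not the selftest helper itself):

#include <stdio.h>

static unsigned long huge_page_size(void)
{
	unsigned long kb = 0;
	char line[128];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Hugepagesize: %lu kB", &kb) == 1)
			break;
	fclose(f);
	return kb * 1024;
}

int main(void)
{
	unsigned long hps = huge_page_size();
	unsigned long length = 3UL * 1024 * 1024;	/* e.g. a 3 MiB request */

	if (hps && length % hps)
		length += hps - length % hps;	/* round up to whole huge pages */
	printf("huge page: %lu bytes, mapping: %lu bytes\n", hps, length);
	return 0;
}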
+diff --git a/tools/testing/selftests/mm/va_high_addr_switch.sh b/tools/testing/selftests/mm/va_high_addr_switch.sh
+index 45cae7cab27e1..a0a75f3029043 100755
+--- a/tools/testing/selftests/mm/va_high_addr_switch.sh
++++ b/tools/testing/selftests/mm/va_high_addr_switch.sh
+@@ -29,9 +29,15 @@ check_supported_x86_64()
+ 	# See man 1 gzip under '-f'.
+ 	local pg_table_levels=$(gzip -dcfq "${config}" | grep PGTABLE_LEVELS | cut -d'=' -f 2)
+ 
++	local cpu_supports_pl5=$(awk '/^flags/ {if (/la57/) {print 0;}
++		else {print 1}; exit}' /proc/cpuinfo 2>/dev/null)
++
+ 	if [[ "${pg_table_levels}" -lt 5 ]]; then
+ 		echo "$0: PGTABLE_LEVELS=${pg_table_levels}, must be >= 5 to run this test"
+ 		exit $ksft_skip
++	elif [[ "${cpu_supports_pl5}" -ne 0 ]]; then
++		echo "$0: CPU does not have the necessary la57 flag to support page table level 5"
++		exit $ksft_skip
+ 	fi
+ }
+ 
+diff --git a/tools/testing/selftests/mm/write_hugetlb_memory.sh b/tools/testing/selftests/mm/write_hugetlb_memory.sh
+index 70a02301f4c27..3d2d2eb9d6fff 100755
+--- a/tools/testing/selftests/mm/write_hugetlb_memory.sh
++++ b/tools/testing/selftests/mm/write_hugetlb_memory.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ set -e
+diff --git a/tools/testing/selftests/net/forwarding/bridge_locked_port.sh b/tools/testing/selftests/net/forwarding/bridge_locked_port.sh
+index 9af9f6964808b..c62331b2e0060 100755
+--- a/tools/testing/selftests/net/forwarding/bridge_locked_port.sh
++++ b/tools/testing/selftests/net/forwarding/bridge_locked_port.sh
+@@ -327,10 +327,10 @@ locked_port_mab_redirect()
+ 	RET=0
+ 	check_port_mab_support || return 0
+ 
+-	bridge link set dev $swp1 learning on locked on mab on
+ 	tc qdisc add dev $swp1 clsact
+ 	tc filter add dev $swp1 ingress protocol all pref 1 handle 101 flower \
+ 		action mirred egress redirect dev $swp2
++	bridge link set dev $swp1 learning on locked on mab on
+ 
+ 	ping_do $h1 192.0.2.2
+ 	check_err $? "Ping did not work with redirection"
+@@ -349,8 +349,8 @@ locked_port_mab_redirect()
+ 	check_err $? "Locked entry not created after deleting filter"
+ 
+ 	bridge fdb del `mac_get $h1` vlan 1 dev $swp1 master
+-	tc qdisc del dev $swp1 clsact
+ 	bridge link set dev $swp1 learning off locked off mab off
++	tc qdisc del dev $swp1 clsact
+ 
+ 	log_test "Locked port MAB redirect"
+ }
+diff --git a/tools/testing/selftests/net/forwarding/bridge_mdb.sh b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
+index e4e3e9405056e..a3678dfe5848a 100755
+--- a/tools/testing/selftests/net/forwarding/bridge_mdb.sh
++++ b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
+@@ -329,7 +329,7 @@ __cfg_test_port_ip_star_g()
+ 
+ 	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
+ 	check_err $? "(*, G) \"permanent\" entry has a pending group timer"
+-	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
++	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "/0.00"
+ 	check_err $? "\"permanent\" source entry has a pending source timer"
+ 
+ 	bridge mdb del dev br0 port $swp1 grp $grp vid 10
+@@ -346,7 +346,7 @@ __cfg_test_port_ip_star_g()
+ 
+ 	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
+ 	check_fail $? "(*, G) EXCLUDE entry does not have a pending group timer"
+-	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
++	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "/0.00"
+ 	check_err $? "\"blocked\" source entry has a pending source timer"
+ 
+ 	bridge mdb del dev br0 port $swp1 grp $grp vid 10
+@@ -363,7 +363,7 @@ __cfg_test_port_ip_star_g()
+ 
+ 	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
+ 	check_err $? "(*, G) INCLUDE entry has a pending group timer"
+-	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
++	bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "/0.00"
+ 	check_fail $? "Source entry does not have a pending source timer"
+ 
+ 	bridge mdb del dev br0 port $swp1 grp $grp vid 10
+@@ -1065,14 +1065,17 @@ fwd_test()
+ 	echo
+ 	log_info "# Forwarding tests"
+ 
++	# Set the Max Response Delay to 100 centiseconds (1 second) so that the
++	# bridge will start forwarding according to its MDB soon after a
++	# multicast querier is enabled.
++	ip link set dev br0 type bridge mcast_query_response_interval 100
++
+ 	# Forwarding according to MDB entries only takes place when the bridge
+ 	# detects that there is a valid querier in the network. Set the bridge
+ 	# as the querier and assign it a valid IPv6 link-local address to be
+ 	# used as the source address for MLD queries.
+ 	ip -6 address add fe80::1/64 nodad dev br0
+ 	ip link set dev br0 type bridge mcast_querier 1
+-	# Wait the default Query Response Interval (10 seconds) for the bridge
+-	# to determine that there are no other queriers in the network.
+ 	sleep 10
+ 
+ 	fwd_test_host
+@@ -1080,6 +1083,7 @@ fwd_test()
+ 
+ 	ip link set dev br0 type bridge mcast_querier 0
+ 	ip -6 address del fe80::1/64 dev br0
++	ip link set dev br0 type bridge mcast_query_response_interval 1000
+ }
+ 
+ ctrl_igmpv3_is_in_test()
+diff --git a/tools/testing/selftests/net/forwarding/tc_flower_l2_miss.sh b/tools/testing/selftests/net/forwarding/tc_flower_l2_miss.sh
+index 20a7cb7222b8b..c2420bb72c128 100755
+--- a/tools/testing/selftests/net/forwarding/tc_flower_l2_miss.sh
++++ b/tools/testing/selftests/net/forwarding/tc_flower_l2_miss.sh
+@@ -209,14 +209,17 @@ test_l2_miss_multicast()
+ 	# both registered and unregistered multicast traffic.
+ 	bridge link set dev $swp2 mcast_router 2
+ 
++	# Set the Max Response Delay to 100 centiseconds (1 second) so that the
++	# bridge will start forwarding according to its MDB soon after a
++	# multicast querier is enabled.
++	ip link set dev br1 type bridge mcast_query_response_interval 100
++
+ 	# Forwarding according to MDB entries only takes place when the bridge
+ 	# detects that there is a valid querier in the network. Set the bridge
+ 	# as the querier and assign it a valid IPv6 link-local address to be
+ 	# used as the source address for MLD queries.
+ 	ip link set dev br1 type bridge mcast_querier 1
+ 	ip -6 address add fe80::1/64 nodad dev br1
+-	# Wait the default Query Response Interval (10 seconds) for the bridge
+-	# to determine that there are no other queriers in the network.
+ 	sleep 10
+ 
+ 	test_l2_miss_multicast_ipv4
+@@ -224,6 +227,7 @@ test_l2_miss_multicast()
+ 
+ 	ip -6 address del fe80::1/64 dev br1
+ 	ip link set dev br1 type bridge mcast_querier 0
++	ip link set dev br1 type bridge mcast_query_response_interval 1000
+ 	bridge link set dev $swp2 mcast_router 1
+ }
+ 
+diff --git a/tools/testing/selftests/net/mptcp/config b/tools/testing/selftests/net/mptcp/config
+index e317c2e44dae8..4f80014cae494 100644
+--- a/tools/testing/selftests/net/mptcp/config
++++ b/tools/testing/selftests/net/mptcp/config
+@@ -22,8 +22,11 @@ CONFIG_NFT_TPROXY=m
+ CONFIG_NFT_SOCKET=m
+ CONFIG_IP_ADVANCED_ROUTER=y
+ CONFIG_IP_MULTIPLE_TABLES=y
++CONFIG_IP_NF_FILTER=m
++CONFIG_IP_NF_MANGLE=m
+ CONFIG_IP_NF_TARGET_REJECT=m
+ CONFIG_IPV6_MULTIPLE_TABLES=y
++CONFIG_IP6_NF_FILTER=m
+ CONFIG_NET_ACT_CSUM=m
+ CONFIG_NET_ACT_PEDIT=m
+ CONFIG_NET_CLS_ACT=y
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 24a57b3ae2155..ed665e0057d33 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -682,16 +682,10 @@ wait_mpj()
+ 	done
+ }
+ 
+-kill_wait()
+-{
+-	kill $1 > /dev/null 2>&1
+-	wait $1 2>/dev/null
+-}
+-
+ kill_events_pids()
+ {
+-	kill_wait $evts_ns1_pid
+-	kill_wait $evts_ns2_pid
++	mptcp_lib_kill_wait $evts_ns1_pid
++	mptcp_lib_kill_wait $evts_ns2_pid
+ }
+ 
+ kill_tests_wait()
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+index 92a5befe80394..4cd4297ca86de 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_lib.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -6,7 +6,7 @@ readonly KSFT_FAIL=1
+ readonly KSFT_SKIP=4
+ 
+ # shellcheck disable=SC2155 # declare and assign separately
+-readonly KSFT_TEST=$(basename "${0}" | sed 's/\.sh$//g')
++readonly KSFT_TEST="${MPTCP_LIB_KSFT_TEST:-$(basename "${0}" .sh)}"
+ 
+ MPTCP_LIB_SUBTESTS=()
+ 
+@@ -207,3 +207,12 @@ mptcp_lib_result_print_all_tap() {
+ 		printf "%s\n" "${subtest}"
+ 	done
+ }
++
++# $1: PID
++mptcp_lib_kill_wait() {
++	[ "${1}" -eq 0 ] && return 0
++
++	kill -SIGUSR1 "${1}" > /dev/null 2>&1
++	kill "${1}" > /dev/null 2>&1
++	wait "${1}" 2>/dev/null
++}
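mptcp_lib_kill_wait() above centralizes the signal-then-reap idiom so callers neither leak zombies nor print job-control noise; SIGUSR1 is sent first so pm_nl_ctl-style event listeners can exit cleanly before the plain kill. The same shape in C, for reference (illustrative, not part of the selftests):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

void kill_wait(pid_t pid)
{
	if (pid <= 0)
		return;			/* matches the "-eq 0" early return */
	kill(pid, SIGUSR1);		/* polite nudge first */
	kill(pid, SIGTERM);		/* then terminate for real */
	waitpid(pid, NULL, 0);		/* reap; discard the exit status */
}

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {			/* child: a stand-in background job */
		pause();
		_exit(0);
	}
	kill_wait(pid);
	puts("background job reaped");
	return 0;
}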
+diff --git a/tools/testing/selftests/net/mptcp/settings b/tools/testing/selftests/net/mptcp/settings
+index 79b65bdf05db6..abc5648b59abd 100644
+--- a/tools/testing/selftests/net/mptcp/settings
++++ b/tools/testing/selftests/net/mptcp/settings
+@@ -1 +1 @@
+-timeout=1200
++timeout=1800
+diff --git a/tools/testing/selftests/net/mptcp/userspace_pm.sh b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+index b25a3e33eb253..c44bf5c7c6e04 100755
+--- a/tools/testing/selftests/net/mptcp/userspace_pm.sh
++++ b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+@@ -108,15 +108,6 @@ test_fail()
+ 	mptcp_lib_result_fail "${test_name}"
+ }
+ 
+-kill_wait()
+-{
+-	[ $1 -eq 0 ] && return 0
+-
+-	kill -SIGUSR1 $1 > /dev/null 2>&1
+-	kill $1 > /dev/null 2>&1
+-	wait $1 2>/dev/null
+-}
+-
+ # This function is used in the cleanup trap
+ #shellcheck disable=SC2317
+ cleanup()
+@@ -128,7 +119,7 @@ cleanup()
+ 	for pid in $client4_pid $server4_pid $client6_pid $server6_pid\
+ 		   $server_evts_pid $client_evts_pid
+ 	do
+-		kill_wait $pid
++		mptcp_lib_kill_wait $pid
+ 	done
+ 
+ 	local netns
+@@ -210,7 +201,7 @@ make_connection()
+ 	fi
+ 	:>"$client_evts"
+ 	if [ $client_evts_pid -ne 0 ]; then
+-		kill_wait $client_evts_pid
++		mptcp_lib_kill_wait $client_evts_pid
+ 	fi
+ 	ip netns exec "$ns2" ./pm_nl_ctl events >> "$client_evts" 2>&1 &
+ 	client_evts_pid=$!
+@@ -219,7 +210,7 @@ make_connection()
+ 	fi
+ 	:>"$server_evts"
+ 	if [ $server_evts_pid -ne 0 ]; then
+-		kill_wait $server_evts_pid
++		mptcp_lib_kill_wait $server_evts_pid
+ 	fi
+ 	ip netns exec "$ns1" ./pm_nl_ctl events >> "$server_evts" 2>&1 &
+ 	server_evts_pid=$!
+@@ -627,7 +618,7 @@ test_subflows()
+ 			      "10.0.2.2" "$client4_port" "23" "$client_addr_id" "ns1" "ns2"
+ 
+ 	# Delete the listener from the client ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	local sport
+ 	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
+@@ -666,7 +657,7 @@ test_subflows()
+ 			      "$client_addr_id" "ns1" "ns2"
+ 
+ 	# Delete the listener from the client ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
+ 
+@@ -705,7 +696,7 @@ test_subflows()
+ 			      "$client_addr_id" "ns1" "ns2"
+ 
+ 	# Delete the listener from the client ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
+ 
+@@ -743,7 +734,7 @@ test_subflows()
+ 			      "10.0.2.1" "$app4_port" "23" "$server_addr_id" "ns2" "ns1"
+ 
+ 	# Delete the listener from the server ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
+ 
+@@ -782,7 +773,7 @@ test_subflows()
+ 			      "$server_addr_id" "ns2" "ns1"
+ 
+ 	# Delete the listener from the server ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
+ 
+@@ -819,7 +810,7 @@ test_subflows()
+ 			      "10.0.2.2" "10.0.2.1" "$new4_port" "23" "$server_addr_id" "ns2" "ns1"
+ 
+ 	# Delete the listener from the server ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
+ 
+@@ -865,7 +856,7 @@ test_subflows_v4_v6_mix()
+ 			      "$server_addr_id" "ns2" "ns1"
+ 
+ 	# Delete the listener from the server ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
+ 
+@@ -982,7 +973,7 @@ test_listener()
+ 	sleep 0.5
+ 
+ 	# Delete the listener from the client ns, if one was created
+-	kill_wait $listener_pid
++	mptcp_lib_kill_wait $listener_pid
+ 
+ 	sleep 0.5
+ 	verify_listener_events $client_evts $LISTENER_CLOSED $AF_INET 10.0.2.2 $client4_port
+diff --git a/tools/testing/selftests/net/test_bridge_backup_port.sh b/tools/testing/selftests/net/test_bridge_backup_port.sh
+index 112cfd8a10ad9..1b3f89e2b86e6 100755
+--- a/tools/testing/selftests/net/test_bridge_backup_port.sh
++++ b/tools/testing/selftests/net/test_bridge_backup_port.sh
+@@ -35,9 +35,8 @@
+ # | sw1                                | | sw2                                |
+ # +------------------------------------+ +------------------------------------+
+ 
++source lib.sh
+ ret=0
+-# Kselftest framework requirement - SKIP code is 4.
+-ksft_skip=4
+ 
+ # All tests in this script. Can be overridden with -t option.
+ TESTS="
+@@ -125,6 +124,16 @@ tc_check_packets()
+ 	[[ $pkts == $count ]]
+ }
+ 
++bridge_link_check()
++{
++	local ns=$1; shift
++	local dev=$1; shift
++	local state=$1; shift
++
++	bridge -n $ns -d -j link show dev $dev | \
++		jq -e ".[][\"state\"] == \"$state\"" &> /dev/null
++}
++
+ ################################################################################
+ # Setup
+ 
+@@ -132,9 +141,6 @@ setup_topo_ns()
+ {
+ 	local ns=$1; shift
+ 
+-	ip netns add $ns
+-	ip -n $ns link set dev lo up
+-
+ 	ip netns exec $ns sysctl -qw net.ipv6.conf.all.keep_addr_on_down=1
+ 	ip netns exec $ns sysctl -qw net.ipv6.conf.default.ignore_routes_with_linkdown=1
+ 	ip netns exec $ns sysctl -qw net.ipv6.conf.all.accept_dad=0
+@@ -145,13 +151,14 @@ setup_topo()
+ {
+ 	local ns
+ 
+-	for ns in sw1 sw2; do
++	setup_ns sw1 sw2
++	for ns in $sw1 $sw2; do
+ 		setup_topo_ns $ns
+ 	done
+ 
+ 	ip link add name veth0 type veth peer name veth1
+-	ip link set dev veth0 netns sw1 name veth0
+-	ip link set dev veth1 netns sw2 name veth0
++	ip link set dev veth0 netns $sw1 name veth0
++	ip link set dev veth1 netns $sw2 name veth0
+ }
+ 
+ setup_sw_common()
+@@ -190,7 +197,7 @@ setup_sw_common()
+ 
+ setup_sw1()
+ {
+-	local ns=sw1
++	local ns=$sw1
+ 	local local_addr=192.0.2.33
+ 	local remote_addr=192.0.2.34
+ 	local veth_addr=192.0.2.49
+@@ -203,7 +210,7 @@ setup_sw1()
+ 
+ setup_sw2()
+ {
+-	local ns=sw2
++	local ns=$sw2
+ 	local local_addr=192.0.2.34
+ 	local remote_addr=192.0.2.33
+ 	local veth_addr=192.0.2.50
+@@ -229,11 +236,7 @@ setup()
+ 
+ cleanup()
+ {
+-	local ns
+-
+-	for ns in h1 h2 sw1 sw2; do
+-		ip netns del $ns &> /dev/null
+-	done
++	cleanup_ns $sw1 $sw2
+ }
+ 
+ ################################################################################
+@@ -248,85 +251,90 @@ backup_port()
+ 	echo "Backup port"
+ 	echo "-----------"
+ 
+-	run_cmd "tc -n sw1 qdisc replace dev swp1 clsact"
+-	run_cmd "tc -n sw1 filter replace dev swp1 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev swp1 clsact"
++	run_cmd "tc -n $sw1 filter replace dev swp1 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
+ 
+-	run_cmd "tc -n sw1 qdisc replace dev vx0 clsact"
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev vx0 clsact"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
+ 
+-	run_cmd "bridge -n sw1 fdb replace $dmac dev swp1 master static vlan 10"
++	run_cmd "bridge -n $sw1 fdb replace $dmac dev swp1 master static vlan 10"
+ 
+ 	# Initial state - check that packets are forwarded out of swp1 when it
+ 	# has a carrier and not forwarded out of any port when it does not have
+ 	# a carrier.
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 1
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 1
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 0
++	tc_check_packets $sw1 "dev vx0 egress" 101 0
+ 	log_test $? 0 "No forwarding out of vx0"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
+ 	log_test $? 0 "swp1 carrier off"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 1
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 1
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 0
++	tc_check_packets $sw1 "dev vx0 egress" 101 0
+ 	log_test $? 0 "No forwarding out of vx0"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier on"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier on"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 forwarding
+ 	log_test $? 0 "swp1 carrier on"
+ 
+ 	# Configure vx0 as the backup port of swp1 and check that packets are
+ 	# forwarded out of swp1 when it has a carrier and out of vx0 when swp1
+ 	# does not have a carrier.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_port vx0"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_port vx0\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_port vx0"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_port vx0\""
+ 	log_test $? 0 "vx0 configured as backup port of swp1"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 2
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 2
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 0
++	tc_check_packets $sw1 "dev vx0 egress" 101 0
+ 	log_test $? 0 "No forwarding out of vx0"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
+ 	log_test $? 0 "swp1 carrier off"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 2
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 2
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "Forwarding out of vx0"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier on"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier on"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 forwarding
+ 	log_test $? 0 "swp1 carrier on"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 3
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 3
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "No forwarding out of vx0"
+ 
+ 	# Remove vx0 as the backup port of swp1 and check that packets are no
+ 	# longer forwarded out of vx0 when swp1 does not have a carrier.
+-	run_cmd "bridge -n sw1 link set dev swp1 nobackup_port"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_port vx0\""
++	run_cmd "bridge -n $sw1 link set dev swp1 nobackup_port"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_port vx0\""
+ 	log_test $? 1 "vx0 not configured as backup port of swp1"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 4
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 4
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "No forwarding out of vx0"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
+ 	log_test $? 0 "swp1 carrier off"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 4
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 4
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "No forwarding out of vx0"
+ }
+ 
+@@ -339,125 +347,130 @@ backup_nhid()
+ 	echo "Backup nexthop ID"
+ 	echo "-----------------"
+ 
+-	run_cmd "tc -n sw1 qdisc replace dev swp1 clsact"
+-	run_cmd "tc -n sw1 filter replace dev swp1 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev swp1 clsact"
++	run_cmd "tc -n $sw1 filter replace dev swp1 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
+ 
+-	run_cmd "tc -n sw1 qdisc replace dev vx0 clsact"
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev vx0 clsact"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
+ 
+-	run_cmd "ip -n sw1 nexthop replace id 1 via 192.0.2.34 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 2 via 192.0.2.34 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 10 group 1/2 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 1 via 192.0.2.34 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 2 via 192.0.2.34 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 10 group 1/2 fdb"
+ 
+-	run_cmd "bridge -n sw1 fdb replace $dmac dev swp1 master static vlan 10"
+-	run_cmd "bridge -n sw1 fdb replace $dmac dev vx0 self static dst 192.0.2.36 src_vni 10010"
++	run_cmd "bridge -n $sw1 fdb replace $dmac dev swp1 master static vlan 10"
++	run_cmd "bridge -n $sw1 fdb replace $dmac dev vx0 self static dst 192.0.2.36 src_vni 10010"
+ 
+-	run_cmd "ip -n sw2 address replace 192.0.2.36/32 dev lo"
++	run_cmd "ip -n $sw2 address replace 192.0.2.36/32 dev lo"
+ 
+ 	# The first filter matches on packets forwarded using the backup
+ 	# nexthop ID and the second filter matches on packets forwarded using a
+ 	# regular VXLAN FDB entry.
+-	run_cmd "tc -n sw2 qdisc replace dev vx0 clsact"
+-	run_cmd "tc -n sw2 filter replace dev vx0 ingress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac enc_key_id 10010 enc_dst_ip 192.0.2.34 action pass"
+-	run_cmd "tc -n sw2 filter replace dev vx0 ingress pref 1 handle 102 proto ip flower src_mac $smac dst_mac $dmac enc_key_id 10010 enc_dst_ip 192.0.2.36 action pass"
++	run_cmd "tc -n $sw2 qdisc replace dev vx0 clsact"
++	run_cmd "tc -n $sw2 filter replace dev vx0 ingress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac enc_key_id 10010 enc_dst_ip 192.0.2.34 action pass"
++	run_cmd "tc -n $sw2 filter replace dev vx0 ingress pref 1 handle 102 proto ip flower src_mac $smac dst_mac $dmac enc_key_id 10010 enc_dst_ip 192.0.2.36 action pass"
+ 
+ 	# Configure vx0 as the backup port of swp1 and check that packets are
+ 	# forwarded out of swp1 when it has a carrier and out of vx0 when swp1
+ 	# does not have a carrier. When packets are forwarded out of vx0, check
+ 	# that they are forwarded by the VXLAN FDB entry.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_port vx0"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_port vx0\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_port vx0"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_port vx0\""
+ 	log_test $? 0 "vx0 configured as backup port of swp1"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 1
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 1
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 0
++	tc_check_packets $sw1 "dev vx0 egress" 101 0
+ 	log_test $? 0 "No forwarding out of vx0"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
+ 	log_test $? 0 "swp1 carrier off"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 1
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 1
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 0
++	tc_check_packets $sw2 "dev vx0 ingress" 101 0
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	tc_check_packets sw2 "dev vx0 ingress" 102 1
++	tc_check_packets $sw2 "dev vx0 ingress" 102 1
+ 	log_test $? 0 "Forwarding using VXLAN FDB entry"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier on"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier on"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 forwarding
+ 	log_test $? 0 "swp1 carrier on"
+ 
+ 	# Configure nexthop ID 10 as the backup nexthop ID of swp1 and check
+ 	# that when packets are forwarded out of vx0, they are forwarded using
+ 	# the backup nexthop ID.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 10"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_nhid 10\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 10"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_nhid 10\""
+ 	log_test $? 0 "nexthop ID 10 configured as backup nexthop ID of swp1"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 2
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 2
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "No forwarding out of vx0"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
+ 	log_test $? 0 "swp1 carrier off"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 2
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 2
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "Forwarding using backup nexthop ID"
+-	tc_check_packets sw2 "dev vx0 ingress" 102 1
++	tc_check_packets $sw2 "dev vx0 ingress" 102 1
+ 	log_test $? 0 "No forwarding using VXLAN FDB entry"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier on"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier on"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 forwarding
+ 	log_test $? 0 "swp1 carrier on"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 3
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 3
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "No forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	tc_check_packets sw2 "dev vx0 ingress" 102 1
++	tc_check_packets $sw2 "dev vx0 ingress" 102 1
+ 	log_test $? 0 "No forwarding using VXLAN FDB entry"
+ 
+ 	# Reset the backup nexthop ID to 0 and check that packets are no longer
+ 	# forwarded using the backup nexthop ID when swp1 does not have a
+ 	# carrier and are instead forwarded by the VXLAN FDB.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 0"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_nhid\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 0"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_nhid\""
+ 	log_test $? 1 "No backup nexthop ID configured for swp1"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 4
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 4
+ 	log_test $? 0 "Forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "No forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	tc_check_packets sw2 "dev vx0 ingress" 102 1
++	tc_check_packets $sw2 "dev vx0 ingress" 102 1
+ 	log_test $? 0 "No forwarding using VXLAN FDB entry"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
+ 	log_test $? 0 "swp1 carrier off"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 4
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 4
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 3
++	tc_check_packets $sw1 "dev vx0 egress" 101 3
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	tc_check_packets sw2 "dev vx0 ingress" 102 2
++	tc_check_packets $sw2 "dev vx0 ingress" 102 2
+ 	log_test $? 0 "Forwarding using VXLAN FDB entry"
+ }
+ 
+@@ -475,109 +488,110 @@ backup_nhid_invalid()
+ 	# is forwarded out of the VXLAN port, but dropped by the VXLAN driver
+ 	# and does not crash the host.
+ 
+-	run_cmd "tc -n sw1 qdisc replace dev swp1 clsact"
+-	run_cmd "tc -n sw1 filter replace dev swp1 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev swp1 clsact"
++	run_cmd "tc -n $sw1 filter replace dev swp1 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
+ 
+-	run_cmd "tc -n sw1 qdisc replace dev vx0 clsact"
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev vx0 clsact"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac action pass"
+ 	# Drop all other Tx traffic to avoid changes to Tx drop counter.
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 2 handle 102 proto all matchall action drop"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 2 handle 102 proto all matchall action drop"
+ 
+-	tx_drop=$(ip -n sw1 -s -j link show dev vx0 | jq '.[]["stats64"]["tx"]["dropped"]')
++	tx_drop=$(ip -n $sw1 -s -j link show dev vx0 | jq '.[]["stats64"]["tx"]["dropped"]')
+ 
+-	run_cmd "ip -n sw1 nexthop replace id 1 via 192.0.2.34 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 2 via 192.0.2.34 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 10 group 1/2 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 1 via 192.0.2.34 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 2 via 192.0.2.34 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 10 group 1/2 fdb"
+ 
+-	run_cmd "bridge -n sw1 fdb replace $dmac dev swp1 master static vlan 10"
++	run_cmd "bridge -n $sw1 fdb replace $dmac dev swp1 master static vlan 10"
+ 
+-	run_cmd "tc -n sw2 qdisc replace dev vx0 clsact"
+-	run_cmd "tc -n sw2 filter replace dev vx0 ingress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac enc_key_id 10010 enc_dst_ip 192.0.2.34 action pass"
++	run_cmd "tc -n $sw2 qdisc replace dev vx0 clsact"
++	run_cmd "tc -n $sw2 filter replace dev vx0 ingress pref 1 handle 101 proto ip flower src_mac $smac dst_mac $dmac enc_key_id 10010 enc_dst_ip 192.0.2.34 action pass"
+ 
+ 	# First, check that redirection works.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_port vx0"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_port vx0\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_port vx0"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_port vx0\""
+ 	log_test $? 0 "vx0 configured as backup port of swp1"
+ 
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 10"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_nhid 10\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 10"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_nhid 10\""
+ 	log_test $? 0 "Valid nexthop as backup nexthop"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
+ 	log_test $? 0 "swp1 carrier off"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 0
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 0
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "Forwarding using backup nexthop ID"
+-	run_cmd "ip -n sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $tx_drop'"
++	run_cmd "ip -n $sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $tx_drop'"
+ 	log_test $? 0 "No Tx drop increase"
+ 
+ 	# Use a non-existent nexthop ID.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 20"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_nhid 20\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 20"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_nhid 20\""
+ 	log_test $? 0 "Non-existent nexthop as backup nexthop"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 0
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 0
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	run_cmd "ip -n sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 1))'"
++	run_cmd "ip -n $sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 1))'"
+ 	log_test $? 0 "Tx drop increased"
+ 
+ 	# Use a blackhole nexthop.
+-	run_cmd "ip -n sw1 nexthop replace id 30 blackhole"
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 30"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_nhid 30\""
++	run_cmd "ip -n $sw1 nexthop replace id 30 blackhole"
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 30"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_nhid 30\""
+ 	log_test $? 0 "Blackhole nexthop as backup nexthop"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 0
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 0
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 3
++	tc_check_packets $sw1 "dev vx0 egress" 101 3
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	run_cmd "ip -n sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 2))'"
++	run_cmd "ip -n $sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 2))'"
+ 	log_test $? 0 "Tx drop increased"
+ 
+ 	# Non-group FDB nexthop.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 1"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_nhid 1\""
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 1"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_nhid 1\""
+ 	log_test $? 0 "Non-group FDB nexthop as backup nexthop"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 0
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 0
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 4
++	tc_check_packets $sw1 "dev vx0 egress" 101 4
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	run_cmd "ip -n sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 3))'"
++	run_cmd "ip -n $sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 3))'"
+ 	log_test $? 0 "Tx drop increased"
+ 
+ 	# IPv6 address family nexthop.
+-	run_cmd "ip -n sw1 nexthop replace id 100 via 2001:db8:100::1 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 200 via 2001:db8:100::1 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 300 group 100/200 fdb"
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 300"
+-	run_cmd "bridge -n sw1 -d link show dev swp1 | grep \"backup_nhid 300\""
++	run_cmd "ip -n $sw1 nexthop replace id 100 via 2001:db8:100::1 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 200 via 2001:db8:100::1 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 300 group 100/200 fdb"
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 300"
++	run_cmd "bridge -n $sw1 -d link show dev swp1 | grep \"backup_nhid 300\""
+ 	log_test $? 0 "IPv6 address family nexthop as backup nexthop"
+ 
+-	run_cmd "ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
+-	tc_check_packets sw1 "dev swp1 egress" 101 0
++	run_cmd "ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 1"
++	tc_check_packets $sw1 "dev swp1 egress" 101 0
+ 	log_test $? 0 "No forwarding out of swp1"
+-	tc_check_packets sw1 "dev vx0 egress" 101 5
++	tc_check_packets $sw1 "dev vx0 egress" 101 5
+ 	log_test $? 0 "Forwarding out of vx0"
+-	tc_check_packets sw2 "dev vx0 ingress" 101 1
++	tc_check_packets $sw2 "dev vx0 ingress" 101 1
+ 	log_test $? 0 "No forwarding using backup nexthop ID"
+-	run_cmd "ip -n sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 4))'"
++	run_cmd "ip -n $sw1 -s -j link show dev vx0 | jq -e '.[][\"stats64\"][\"tx\"][\"dropped\"] == $((tx_drop + 4))'"
+ 	log_test $? 0 "Tx drop increased"
+ }
+ 
+@@ -591,44 +605,46 @@ backup_nhid_ping()
+ 	echo "------------------------"
+ 
+ 	# Test bidirectional traffic when traffic is redirected in both VTEPs.
+-	sw1_mac=$(ip -n sw1 -j -p link show br0.10 | jq -r '.[]["address"]')
+-	sw2_mac=$(ip -n sw2 -j -p link show br0.10 | jq -r '.[]["address"]')
++	sw1_mac=$(ip -n $sw1 -j -p link show br0.10 | jq -r '.[]["address"]')
++	sw2_mac=$(ip -n $sw2 -j -p link show br0.10 | jq -r '.[]["address"]')
+ 
+-	run_cmd "bridge -n sw1 fdb replace $sw2_mac dev swp1 master static vlan 10"
+-	run_cmd "bridge -n sw2 fdb replace $sw1_mac dev swp1 master static vlan 10"
++	run_cmd "bridge -n $sw1 fdb replace $sw2_mac dev swp1 master static vlan 10"
++	run_cmd "bridge -n $sw2 fdb replace $sw1_mac dev swp1 master static vlan 10"
+ 
+-	run_cmd "ip -n sw1 neigh replace 192.0.2.66 lladdr $sw2_mac nud perm dev br0.10"
+-	run_cmd "ip -n sw2 neigh replace 192.0.2.65 lladdr $sw1_mac nud perm dev br0.10"
++	run_cmd "ip -n $sw1 neigh replace 192.0.2.66 lladdr $sw2_mac nud perm dev br0.10"
++	run_cmd "ip -n $sw2 neigh replace 192.0.2.65 lladdr $sw1_mac nud perm dev br0.10"
+ 
+-	run_cmd "ip -n sw1 nexthop replace id 1 via 192.0.2.34 fdb"
+-	run_cmd "ip -n sw2 nexthop replace id 1 via 192.0.2.33 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 10 group 1 fdb"
+-	run_cmd "ip -n sw2 nexthop replace id 10 group 1 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 1 via 192.0.2.34 fdb"
++	run_cmd "ip -n $sw2 nexthop replace id 1 via 192.0.2.33 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 10 group 1 fdb"
++	run_cmd "ip -n $sw2 nexthop replace id 10 group 1 fdb"
+ 
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_port vx0"
+-	run_cmd "bridge -n sw2 link set dev swp1 backup_port vx0"
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 10"
+-	run_cmd "bridge -n sw2 link set dev swp1 backup_nhid 10"
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_port vx0"
++	run_cmd "bridge -n $sw2 link set dev swp1 backup_port vx0"
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 10"
++	run_cmd "bridge -n $sw2 link set dev swp1 backup_nhid 10"
+ 
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
+-	run_cmd "ip -n sw2 link set dev swp1 carrier off"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw1 swp1 disabled
++	run_cmd "ip -n $sw2 link set dev swp1 carrier off"
++	busywait $BUSYWAIT_TIMEOUT bridge_link_check $sw2 swp1 disabled
+ 
+-	run_cmd "ip netns exec sw1 ping -i 0.1 -c 10 -w $PING_TIMEOUT 192.0.2.66"
++	run_cmd "ip netns exec $sw1 ping -i 0.1 -c 10 -w $PING_TIMEOUT 192.0.2.66"
+ 	log_test $? 0 "Ping with backup nexthop ID"
+ 
+ 	# Reset the backup nexthop ID to 0 and check that ping fails.
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 0"
+-	run_cmd "bridge -n sw2 link set dev swp1 backup_nhid 0"
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 0"
++	run_cmd "bridge -n $sw2 link set dev swp1 backup_nhid 0"
+ 
+-	run_cmd "ip netns exec sw1 ping -i 0.1 -c 10 -w $PING_TIMEOUT 192.0.2.66"
++	run_cmd "ip netns exec $sw1 ping -i 0.1 -c 10 -w $PING_TIMEOUT 192.0.2.66"
+ 	log_test $? 1 "Ping after disabling backup nexthop ID"
+ }
+ 
+ backup_nhid_add_del_loop()
+ {
+ 	while true; do
+-		ip -n sw1 nexthop del id 10
+-		ip -n sw1 nexthop replace id 10 group 1/2 fdb
++		ip -n $sw1 nexthop del id 10
++		ip -n $sw1 nexthop replace id 10 group 1/2 fdb
+ 	done >/dev/null 2>&1
+ }
+ 
+@@ -648,19 +664,19 @@ backup_nhid_torture()
+ 	# deleting the group. The test is considered successful if nothing
+ 	# crashed.
+ 
+-	run_cmd "ip -n sw1 nexthop replace id 1 via 192.0.2.34 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 2 via 192.0.2.34 fdb"
+-	run_cmd "ip -n sw1 nexthop replace id 10 group 1/2 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 1 via 192.0.2.34 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 2 via 192.0.2.34 fdb"
++	run_cmd "ip -n $sw1 nexthop replace id 10 group 1/2 fdb"
+ 
+-	run_cmd "bridge -n sw1 fdb replace $dmac dev swp1 master static vlan 10"
++	run_cmd "bridge -n $sw1 fdb replace $dmac dev swp1 master static vlan 10"
+ 
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_port vx0"
+-	run_cmd "bridge -n sw1 link set dev swp1 backup_nhid 10"
+-	run_cmd "ip -n sw1 link set dev swp1 carrier off"
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_port vx0"
++	run_cmd "bridge -n $sw1 link set dev swp1 backup_nhid 10"
++	run_cmd "ip -n $sw1 link set dev swp1 carrier off"
+ 
+ 	backup_nhid_add_del_loop &
+ 	pid1=$!
+-	ip netns exec sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 0 &
++	ip netns exec $sw1 mausezahn br0.10 -a $smac -b $dmac -A 198.51.100.1 -B 198.51.100.2 -t ip -p 100 -q -c 0 &
+ 	pid2=$!
+ 
+ 	sleep 30
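
A note on the carrier toggles above: each one is now followed by
"busywait $BUSYWAIT_TIMEOUT bridge_link_check ..." so the test waits for the
bridge port to actually reach its new STP state instead of racing it. A
minimal sketch of what such helpers could look like (illustrative only, not
the real selftest library code):

	busywait()
	{
		local timeout_ms=$1; shift
		local deadline=$((SECONDS * 1000 + timeout_ms))

		while ! "$@"; do
			[ $((SECONDS * 1000)) -gt "$deadline" ] && return 1
			sleep 0.1
		done
	}

	bridge_link_check()
	{
		local ns=$1 dev=$2 state=$3

		ip -n "$ns" -d -j link show dev "$dev" | \
			jq -e --arg state "$state" \
			'.[].linkinfo.info_slave_data.state == $state' &> /dev/null
	}
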
+diff --git a/tools/tracing/rtla/Makefile b/tools/tracing/rtla/Makefile
+index 2456a399eb9ae..afd18c678ff5a 100644
+--- a/tools/tracing/rtla/Makefile
++++ b/tools/tracing/rtla/Makefile
+@@ -28,10 +28,15 @@ FOPTS	:=	-flto=auto -ffat-lto-objects -fexceptions -fstack-protector-strong \
+ 		-fasynchronous-unwind-tables -fstack-clash-protection
+ WOPTS	:= 	-Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -Wno-maybe-uninitialized
+ 
++ifeq ($(CC),clang)
++  FOPTS := $(filter-out -ffat-lto-objects, $(FOPTS))
++  WOPTS := $(filter-out -Wno-maybe-uninitialized, $(WOPTS))
++endif
++
+ TRACEFS_HEADERS	:= $$($(PKG_CONFIG) --cflags libtracefs)
+ 
+ CFLAGS	:=	-O -g -DVERSION=\"$(VERSION)\" $(FOPTS) $(MOPTS) $(WOPTS) $(TRACEFS_HEADERS) $(EXTRA_CFLAGS)
+-LDFLAGS	:=	-ggdb $(EXTRA_LDFLAGS)
++LDFLAGS	:=	-flto=auto -ggdb $(EXTRA_LDFLAGS)
+ LIBS	:=	$$($(PKG_CONFIG) --libs libtracefs)
+ 
+ SRC	:=	$(wildcard src/*.c)
+diff --git a/tools/tracing/rtla/src/osnoise_hist.c b/tools/tracing/rtla/src/osnoise_hist.c
+index 8f81fa0073648..01870d50942a1 100644
+--- a/tools/tracing/rtla/src/osnoise_hist.c
++++ b/tools/tracing/rtla/src/osnoise_hist.c
+@@ -135,8 +135,7 @@ static void osnoise_hist_update_multiple(struct osnoise_tool *tool, int cpu,
+ 	if (params->output_divisor)
+ 		duration = duration / params->output_divisor;
+ 
+-	if (data->bucket_size)
+-		bucket = duration / data->bucket_size;
++	bucket = duration / data->bucket_size;
+ 
+ 	total_duration = duration * count;
+ 
+@@ -480,7 +479,11 @@ static void osnoise_hist_usage(char *usage)
+ 
+ 	for (i = 0; msg[i]; i++)
+ 		fprintf(stderr, "%s\n", msg[i]);
+-	exit(1);
++
++	if (usage)
++		exit(EXIT_FAILURE);
++
++	exit(EXIT_SUCCESS);
+ }
+ 
+ /*
+diff --git a/tools/tracing/rtla/src/osnoise_top.c b/tools/tracing/rtla/src/osnoise_top.c
+index f7c959be86777..457360db07673 100644
+--- a/tools/tracing/rtla/src/osnoise_top.c
++++ b/tools/tracing/rtla/src/osnoise_top.c
+@@ -331,7 +331,11 @@ static void osnoise_top_usage(struct osnoise_top_params *params, char *usage)
+ 
+ 	for (i = 0; msg[i]; i++)
+ 		fprintf(stderr, "%s\n", msg[i]);
+-	exit(1);
++
++	if (usage)
++		exit(EXIT_FAILURE);
++
++	exit(EXIT_SUCCESS);
+ }
+ 
+ /*
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index 47d3d8b53cb21..dbf154082f958 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -178,8 +178,7 @@ timerlat_hist_update(struct osnoise_tool *tool, int cpu,
+ 	if (params->output_divisor)
+ 		latency = latency / params->output_divisor;
+ 
+-	if (data->bucket_size)
+-		bucket = latency / data->bucket_size;
++	bucket = latency / data->bucket_size;
+ 
+ 	if (!context) {
+ 		hist = data->hist[cpu].irq;
+@@ -546,7 +545,11 @@ static void timerlat_hist_usage(char *usage)
+ 
+ 	for (i = 0; msg[i]; i++)
+ 		fprintf(stderr, "%s\n", msg[i]);
+-	exit(1);
++
++	if (usage)
++		exit(EXIT_FAILURE);
++
++	exit(EXIT_SUCCESS);
+ }
+ 
+ /*
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 1640f121baca5..3e9af2c386888 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -375,7 +375,11 @@ static void timerlat_top_usage(char *usage)
+ 
+ 	for (i = 0; msg[i]; i++)
+ 		fprintf(stderr, "%s\n", msg[i]);
+-	exit(1);
++
++	if (usage)
++		exit(EXIT_FAILURE);
++
++	exit(EXIT_SUCCESS);
+ }
+ 
+ /*
+diff --git a/tools/tracing/rtla/src/utils.c b/tools/tracing/rtla/src/utils.c
+index c769d7b3842c0..9ac71a66840c1 100644
+--- a/tools/tracing/rtla/src/utils.c
++++ b/tools/tracing/rtla/src/utils.c
+@@ -238,12 +238,6 @@ static inline int sched_setattr(pid_t pid, const struct sched_attr *attr,
+ 	return syscall(__NR_sched_setattr, pid, attr, flags);
+ }
+ 
+-static inline int sched_getattr(pid_t pid, struct sched_attr *attr,
+-				unsigned int size, unsigned int flags)
+-{
+-	return syscall(__NR_sched_getattr, pid, attr, size, flags);
+-}
+-
+ int __set_sched_attr(int pid, struct sched_attr *attr)
+ {
+ 	int flags = 0;
+@@ -479,13 +473,13 @@ int parse_prio(char *arg, struct sched_attr *sched_param)
+ 		if (prio == INVALID_VAL)
+ 			return -1;
+ 
+-		if (prio < sched_get_priority_min(SCHED_OTHER))
++		if (prio < MIN_NICE)
+ 			return -1;
+-		if (prio > sched_get_priority_max(SCHED_OTHER))
++		if (prio > MAX_NICE)
+ 			return -1;
+ 
+ 		sched_param->sched_policy   = SCHED_OTHER;
+-		sched_param->sched_priority = prio;
++		sched_param->sched_nice = prio;
+ 		break;
+ 	default:
+ 		return -1;
+@@ -536,7 +530,7 @@ int set_cpu_dma_latency(int32_t latency)
+  */
+ static const int find_mount(const char *fs, char *mp, int sizeof_mp)
+ {
+-	char mount_point[MAX_PATH];
++	char mount_point[MAX_PATH+1];
+ 	char type[100];
+ 	int found = 0;
+ 	FILE *fp;
+diff --git a/tools/tracing/rtla/src/utils.h b/tools/tracing/rtla/src/utils.h
+index 04ed1e650495a..d44513e6c66a0 100644
+--- a/tools/tracing/rtla/src/utils.h
++++ b/tools/tracing/rtla/src/utils.h
+@@ -9,6 +9,8 @@
+  */
+ #define BUFF_U64_STR_SIZE	24
+ #define MAX_PATH		1024
++#define MAX_NICE		20
++#define MIN_NICE		-19
+ 
+ #define container_of(ptr, type, member)({			\
+ 	const typeof(((type *)0)->member) *__mptr = (ptr);	\
+diff --git a/tools/verification/rv/Makefile b/tools/verification/rv/Makefile
+index 3d0f3888a58c6..485f8aeddbe03 100644
+--- a/tools/verification/rv/Makefile
++++ b/tools/verification/rv/Makefile
+@@ -28,10 +28,15 @@ FOPTS	:=	-flto=auto -ffat-lto-objects -fexceptions -fstack-protector-strong \
+ 		-fasynchronous-unwind-tables -fstack-clash-protection
+ WOPTS	:= 	-Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -Wno-maybe-uninitialized
+ 
++ifeq ($(CC),clang)
++  FOPTS := $(filter-out -ffat-lto-objects, $(FOPTS))
++  WOPTS := $(filter-out -Wno-maybe-uninitialized, $(WOPTS))
++endif
++
+ TRACEFS_HEADERS	:= $$($(PKG_CONFIG) --cflags libtracefs)
+ 
+ CFLAGS	:=	-O -g -DVERSION=\"$(VERSION)\" $(FOPTS) $(MOPTS) $(WOPTS) $(TRACEFS_HEADERS) $(EXTRA_CFLAGS) -I include
+-LDFLAGS	:=	-ggdb $(EXTRA_LDFLAGS)
++LDFLAGS	:=	-flto=auto -ggdb $(EXTRA_LDFLAGS)
+ LIBS	:=	$$($(PKG_CONFIG) --libs libtracefs)
+ 
+ SRC	:=	$(wildcard src/*.c)
+diff --git a/tools/verification/rv/src/in_kernel.c b/tools/verification/rv/src/in_kernel.c
+index ad28582bcf2b1..f04479ecc96c0 100644
+--- a/tools/verification/rv/src/in_kernel.c
++++ b/tools/verification/rv/src/in_kernel.c
+@@ -210,9 +210,9 @@ static char *ikm_read_reactor(char *monitor_name)
+ static char *ikm_get_current_reactor(char *monitor_name)
+ {
+ 	char *reactors = ikm_read_reactor(monitor_name);
++	char *curr_reactor = NULL;
+ 	char *start;
+ 	char *end;
+-	char *curr_reactor;
+ 
+ 	if (!reactors)
+ 		return NULL;
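
A note on the recurring usage() change above: each rtla tool now exits with
a success status when help was requested explicitly, and with a failure
status only when usage() was reached through an argument error. The same
convention as a small shell sketch (illustrative names, not the rtla
sources):

	usage()
	{
		local err=$1

		[ -n "$err" ] && echo "error: $err" >&2
		echo "usage: tool [-h] [-d duration]" >&2

		[ -n "$err" ] && exit 1
		exit 0	# an explicit -h is not an error
	}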



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-23 13:28 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-23 13:28 UTC (permalink / raw
  To: gentoo-commits

commit:     d9c8947f54fabbccc95968fa207c21e8843cb4f6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 23 13:28:43 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 23 13:28:43 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d9c8947f

Remove redundant patch

Removed:
1800_mm-32-bit-huge-page-alignment-fix.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |  4 --
 1800_mm-32-bit-huge-page-alignment-fix.patch | 57 ----------------------------
 2 files changed, 61 deletions(-)

diff --git a/0000_README b/0000_README
index 4a76a3a5..879736ee 100644
--- a/0000_README
+++ b/0000_README
@@ -79,10 +79,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:	  https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:	  prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1800_mm-32-bit-huge-page-alignment-fix.patch
-From:   https://lore.kernel.org/all/20240118180505.2914778-1-shy828301@gmail.com/T/
-Desc:   mm: huge_memory: don't force huge page alignment on 32 bit
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1800_mm-32-bit-huge-page-alignment-fix.patch b/1800_mm-32-bit-huge-page-alignment-fix.patch
deleted file mode 100644
index 5ad5d49b..00000000
--- a/1800_mm-32-bit-huge-page-alignment-fix.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From: Yang Shi @ 2024-01-18 13:35 UTC (permalink / raw)
-  To: jirislaby, surenb, riel, willy, cl, akpm; +Cc: yang, linux-mm, linux-kernel
-
-From: Yang Shi <yang@os.amperecomputing.com>
-
-The commit efa7df3e3bb5 ("mm: align larger anonymous mappings on THP
-boundaries") caused two issues [1] [2] reported on 32 bit system or compat
-userspace.
-
-It doesn't make too much sense to force huge page alignment on 32 bit
-system due to the constrained virtual address space.
-
-[1] https://lore.kernel.org/linux-mm/CAHbLzkqa1SCBA10yjWTtA2mKCsoK5+M1BthSDL8ROvUq2XxZMw@mail.gmail.com/T/#mf211643a0427f8d6495b5b53f8132f453d60ab95
-[2] https://lore.kernel.org/linux-mm/CAHbLzkqa1SCBA10yjWTtA2mKCsoK5+M1BthSDL8ROvUq2XxZMw@mail.gmail.com/T/#me93dff2ccbd9902c3e395e1c022fb454e48ecb1d
-
-Fixes: efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries")
-Reported-by: Jiri Slaby <jirislaby@kernel.org>
-Reported-by: Suren Baghdasaryan <surenb@google.com>
-Tested-by: Jiri Slaby <jirislaby@kernel.org>
-Tested-by: Suren Baghdasaryan <surenb@google.com>
-Cc: Rik van Riel <riel@surriel.com>
-Cc: Matthew Wilcox <willy@infradead.org>
-Cc: Christopher Lameter <cl@linux.com>
-Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
----
- mm/huge_memory.c | 9 +++++++++
- 1 file changed, 9 insertions(+)
-
-diff --git a/mm/huge_memory.c b/mm/huge_memory.c
-index 94ef5c02b459..e9fbaccbe0c0 100644
---- a/mm/huge_memory.c
-+++ b/mm/huge_memory.c
-@@ -37,6 +37,7 @@
- #include <linux/page_owner.h>
- #include <linux/sched/sysctl.h>
- #include <linux/memory-tiers.h>
-+#include <linux/compat.h>
- 
- #include <asm/tlb.h>
- #include <asm/pgalloc.h>
-@@ -811,6 +812,14 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
- 	loff_t off_align = round_up(off, size);
- 	unsigned long len_pad, ret;
- 
-+	/*
-+	 * It doesn't make too much sense to force huge page alignment on
-+	 * 32 bit system or compat userspace due to the constrained virtual
-+	 * address space and address entropy.
-+	 */
-+	if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall())
-+		return 0;
-+
- 	if (off_end <= off_align || (off_end - off_align) < size)
- 		return 0;
- 
--- 
-2.41.0
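
For context, the alignment test kept in the surrounding lines of the removed
patch only aligns when a full huge page fits between the rounded-up offset
and the end of the mapping. A worked example with made-up numbers (shell
arithmetic, illustrative only):

	size=$((2 << 20))	# 2 MiB huge page
	off=$((3 << 20))	# file offset of the mapping: 3 MiB
	len=$((5 << 20))	# requested length: 5 MiB
	off_end=$((off + len))				# 8 MiB
	off_align=$(( (off + size - 1) / size * size ))	# round_up: 4 MiB
	if [ "$off_end" -le "$off_align" ] || \
	   [ $((off_end - off_align)) -lt "$size" ]; then
		echo "no benefit: use the regular unmapped-area search"
	else
		echo "align: a huge page fits at $((off_align >> 20)) MiB"
	fi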



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-23 14:19 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-23 14:19 UTC (permalink / raw
  To: gentoo-commits

commit:     c47d61c29e3ec292ae842c52419716f21dec82ab
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 23 14:19:22 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 23 14:19:22 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c47d61c2

Update cpu optimization patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 71 ++++++++++++++-------------
 1 file changed, 36 insertions(+), 35 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index 7a1b717a..596cade6 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,6 +1,6 @@
-From 70d4906b87983ed2ed5da78930a701625d881dd0 Mon Sep 17 00:00:00 2001
+From 71dd30c3e2ab2852b0290ae1f34ce1c7f8655040 Mon Sep 17 00:00:00 2001
 From: graysky <therealgraysky@proton.me>
-Date: Thu, 5 Jan 2023 14:29:37 -0500
+Date: Wed, 21 Feb 2024 08:38:13 -0500
 
 FEATURES
 This patch adds additional CPU options to the Linux kernel accessible under:
@@ -13,9 +13,10 @@ offered which are good for supported Intel or AMD CPUs:
 • x86-64-v3
 • x86-64-v4
 
-Users of glibc 2.33 and above can see which level is supported by current
-hardware by running:
+Users of glibc 2.33 and above can see which level is supported by running:
   /lib/ld-linux-x86-64.so.2 --help | grep supported
+Or
+  /lib64/ld-linux-x86-64.so.2 --help | grep supported
 
 Alternatively, compare the flags from /proc/cpuinfo to this list.[1]
 
@@ -106,12 +107,12 @@ REFERENCES
  3 files changed, 528 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 542377cd419d..f589971df2d3 100644
+index 87396575c..5ac6e8463 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
- 
- 
+
+
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -120,7 +121,7 @@ index 542377cd419d..f589971df2d3 100644
  	  Select this for an AMD K6-family processor.  Enables use of
 @@ -165,7 +165,7 @@ config MK6
  	  flags to GCC.
- 
+
  config MK7
 -	bool "Athlon/Duron/K7"
 +	bool "AMD Athlon/Duron/K7"
@@ -129,7 +130,7 @@ index 542377cd419d..f589971df2d3 100644
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
 @@ -173,12 +173,106 @@ config MK7
  	  flags to GCC.
- 
+
  config MK8
 -	bool "Opteron/Athlon64/Hammer/K8"
 +	bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -137,7 +138,7 @@ index 542377cd419d..f589971df2d3 100644
  	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
  	  Enables use of some extended instructions, and passes appropriate
  	  optimization flags to GCC.
- 
+
 +config MK8SSE3
 +	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
 +	help
@@ -237,17 +238,17 @@ index 542377cd419d..f589971df2d3 100644
  	depends on X86_32
 @@ -270,7 +364,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
+
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
  	help
- 
+
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 @@ -278,6 +372,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
- 
+
 +	  Enables -march=core2
 +
  config MATOM
@@ -256,7 +257,7 @@ index 542377cd419d..f589971df2d3 100644
 @@ -287,6 +383,212 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
- 
+
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
@@ -469,7 +470,7 @@ index 542377cd419d..f589971df2d3 100644
 @@ -294,6 +596,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
- 
+
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
 +	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -515,7 +516,7 @@ index 542377cd419d..f589971df2d3 100644
 +	  Enables -march=native
 +
  endchoice
- 
+
  config X86_GENERIC
 @@ -318,9 +664,17 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
@@ -534,17 +535,17 @@ index 542377cd419d..f589971df2d3 100644
 -	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
 +	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII \
 +	|| MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
+
  config X86_F00F_BUG
  	def_bool y
 @@ -332,15 +686,27 @@ config X86_INVD_BUG
- 
+
  config X86_ALIGNMENT_16
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MELAN || MK6 || M586MMX || M586TSC || M586 || M486SX || M486 || MVIAC3_2 || MGEODEGX1
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MELAN || MK6 || M586MMX || M586TSC \
 +	|| M586 || M486SX || M486 || MVIAC3_2 || MGEODEGX1
- 
+
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
@@ -553,7 +554,7 @@ index 542377cd419d..f589971df2d3 100644
 +	|| MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX \
 +	|| MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS \
 +	|| MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
- 
+
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
@@ -565,7 +566,7 @@ index 542377cd419d..f589971df2d3 100644
 +	|| MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE \
 +	|| MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE \
 +	|| MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  #
  # P6_NOPs are a relatively minor optimization that require a family >=
 @@ -356,32 +722,63 @@ config X86_USE_PPRO_CHECKSUM
@@ -578,7 +579,7 @@ index 542377cd419d..f589971df2d3 100644
 +	|| MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE \
 +	|| MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS \
 +	|| MNATIVE_INTEL)
- 
+
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
@@ -590,7 +591,7 @@ index 542377cd419d..f589971df2d3 100644
 +	|| MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE \
 +	|| MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS \
 +	|| MNATIVE_INTEL || MNATIVE_AMD) || X86_64
- 
+
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
@@ -601,7 +602,7 @@ index 542377cd419d..f589971df2d3 100644
 +	|| MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE \
 +	|| MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE \
 +	|| MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
@@ -614,13 +615,13 @@ index 542377cd419d..f589971df2d3 100644
 +	|| MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX \
 +	|| MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS \
 +	|| MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD)
- 
+
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
--	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 \
-+	|| MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 \
++	|| MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8 ||  MK8SSE3 \
 +	|| MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER \
 +	|| MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MNEHALEM || MWESTMERE || MSILVERMONT \
 +	|| MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL \
@@ -629,20 +630,20 @@ index 542377cd419d..f589971df2d3 100644
 +	|| MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
- 
+
  config X86_DEBUGCTLMSR
  	def_bool y
 -	depends on !(MK6 || MWINCHIPC6 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486SX || M486) && !UML
 +	depends on !(MK6 || MWINCHIPC6 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 \
 +	|| M486SX || M486) && !UML
- 
+
  config IA32_FEAT_CTL
  	def_bool y
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 415a5d138de4..17b1e039d955 100644
+index 1a068de12..23b2ec69d 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -151,8 +151,48 @@ else
+@@ -152,8 +152,48 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
          cflags-$(CONFIG_MK8)		+= -march=k8
          cflags-$(CONFIG_MPSC)		+= -march=nocona
@@ -692,9 +693,9 @@ index 415a5d138de4..17b1e039d955 100644
 +        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
          cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
          KBUILD_CFLAGS += $(cflags-y)
- 
+
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec3..02c1386eb653 100644
+index 75884d2cd..02c1386eb 100644
 --- a/arch/x86/include/asm/vermagic.h
 +++ b/arch/x86/include/asm/vermagic.h
 @@ -17,6 +17,54 @@
@@ -785,5 +786,5 @@ index 75884d2cdec3..02c1386eb653 100644
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
--- 
-2.39.0
+--
+2.43.0.232.ge79552d197
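
Two quick checks before picking one of the new options this patch adds: what
the running CPU supports (the ld.so method quoted above, glibc 2.33+), and
whether the compiler accepts the level (the new Kconfig entries depend on
GCC 11.1+ or Clang 12+):

	/lib64/ld-linux-x86-64.so.2 --help | grep supported
	echo 'int main(void){return 0;}' | \
		gcc -x c -march=x86-64-v3 -o /dev/null - && \
		echo "gcc can target x86-64-v3"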



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-02-24 19:39 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-02-24 19:39 UTC (permalink / raw
  To: gentoo-commits

commit:     57890d89359c90e488de0b9eb51bc3a0c6de1720
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 24 19:38:57 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 24 19:38:57 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=57890d89

Add the BMQ(BitMap Queue) Scheduler

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                 |     4 +
 5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch | 11452 ++++++++++++++++++++++++++
 2 files changed, 11456 insertions(+)

diff --git a/0000_README b/0000_README
index 879736ee..b343fd35 100644
--- a/0000_README
+++ b/0000_README
@@ -122,3 +122,7 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
+From:   https://gitlab.com/alfredchen/projectc
+Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch b/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
new file mode 100644
index 00000000..977bdcd6
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.7-r2.patch
@@ -0,0 +1,11452 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 6584a1f9bfe3..226c79dd34cc 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1646,3 +1646,13 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield a call
++to sched_yield() will perform.
++
++  0 - No yield.
++  1 - Requeue task. (default)
++  2 - Set run queue skip task. Same as CFS.
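
On a kernel built with BMQ/PDS, the knob documented above is a regular
sysctl under kernel.*; a usage sketch (path inferred from the file being
patched):

	sysctl kernel.yield_type	# read the current value
	sysctl -w kernel.yield_type=0	# disable yielding entirely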
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of the previous Priority and Deadline based Skiplist multiple queue scheduler
++(PDS), and is inspired by the Zircon scheduler. Its goal is to keep the
++scheduler code simple, while staying efficient and scalable for interactive
++tasks such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue, and each CPU is responsible for scheduling the tasks that are put
++into its run queue.
++
++The run queue is a set of priority queues. Note that, as data structures,
++these queues are FIFO queues for non-rt tasks and priority queues for rt
++tasks. See BitMap Queue below for details. BMQ is optimized for non-rt tasks
++because most applications are non-rt tasks. Whether a queue is FIFO or
++priority, each queue is an ordered list of runnable tasks awaiting execution,
++and the data structures are the same. When it is time for a new task to run,
++the scheduler simply looks for the lowest numbered queue that contains a
++task and runs the first task from the head of that queue. The per-CPU idle
++task is also in the run queue, so the scheduler can always find a task to
++run from its run queue.
++
++Each task is assigned the same timeslice (default 4ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue, it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires, the scheduler
++stops execution of that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler. But BMQ is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
++details of each policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in the BMQ run queue design.
++The complexity of the insert operation is O(n). BMQ is not designed for
++systems that mostly run rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they just don't boost. To control the priority
++of NORMAL/BATCH/IDLE tasks, simply use nice levels.
++
++ISO
++	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0-99. For non-rt tasks, there are three
++different factors used to determine the effective priority of a task, the
++effective priority being what is used to determine which queue it will be in.
++
++The first factor is simply the task's static priority, which is assigned
++from the task's nice level: within [-20, 19] from userland's point of view
++and [0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded within
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority; it
++is modified in the following cases:
++
++*When a thread has used up its entire timeslice, always deboost it by
++increasing its boost value by one.
++*When a thread gives up cpu control (voluntarily or not) to reschedule, and
++its switch-in time (time after last switch and run) is below the threshold
++based on its priority boost, boost it by decreasing its boost value by one,
++capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
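
A minimal shell sketch of the boost rule described in sched-BMQ.txt above,
with hypothetical names and bounds; the effective priority would be the
static priority plus this boost value:

	MAX_PRIORITY_ADJ=4	# assumed bound, not necessarily the patch's value
	boost=0

	used_whole_timeslice()	# deboost: drift toward +MAX_PRIORITY_ADJ
	{
		[ "$boost" -lt "$MAX_PRIORITY_ADJ" ] && boost=$((boost + 1))
	}

	switched_in_quickly()	# boost: drift toward 0, never below it
	{
		[ "$boost" -gt 0 ] && boost=$((boost - 1))
	}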
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index dd31e3b6bf77..12d1248cb4df 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -480,7 +480,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 292c31697248..f5b026795dc6 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -769,8 +769,14 @@ struct task_struct {
+ 	unsigned int			ptrace;
+ 
+ #ifdef CONFIG_SMP
+-	int				on_cpu;
+ 	struct __call_single_node	wake_entry;
++#endif
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
++	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -784,6 +790,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -792,6 +799,20 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	int				sq_idx;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -802,6 +823,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -1561,6 +1583,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ static inline struct pid *task_pid(struct task_struct *task)
+ {
+ 	return task->thread_pid;
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index df3aca89d4f5..1df1f7635188 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
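++/*
++ * Illustrative note: prio occupies the top 8 bits, so comparing
++ * __tsk_deadline() values orders tasks by prio first, then by deadline
++ * within the same prio.
++ */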
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..a9a1dfa99140 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,32 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(12)
++
++#define MIN_NORMAL_PRIO		(MAX_RT_PRIO)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH)
++#define DEFAULT_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH / 2)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - NICE_WIDTH / 2)
++#endif
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index b2b9e6eb9683..09bd4d8758b2 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index de545ba85218..941bb18ff72c 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -238,7 +238,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index 9ffb103fc927..8f0b7eeff77e 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -629,6 +629,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -794,6 +795,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -840,6 +842,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -893,6 +924,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -990,6 +1022,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -1012,6 +1045,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config SCHED_MM_CID
+@@ -1260,6 +1294,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index 5727d42149c3..e2e2622d50d5 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -75,9 +75,15 @@ struct task_struct init_task
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -89,6 +95,17 @@ struct task_struct init_task
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++	.sq_idx		= 15,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -96,6 +113,7 @@ struct task_struct init_task
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index c2f1fd95a821..41654679b1b2 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 615daaf87f1f..16fb54ec732c 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -848,7 +848,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1247,7 +1247,7 @@ static void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3206,12 +3206,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 				goto out_unlock;
+ 		}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		if (dl_task(task)) {
+ 			cs->nr_migrate_dl_tasks++;
+ 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ 		}
++#endif
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (!cs->nr_migrate_dl_tasks)
+ 		goto out_success;
+ 
+@@ -3232,6 +3235,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 	}
+ 
+ out_success:
++#endif
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -3255,12 +3259,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ 	if (!cs->attach_in_progress)
+ 		wake_up(&cpuset_attach_wq);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (cs->nr_migrate_dl_tasks) {
+ 		int cpu = cpumask_any(cs->effective_cpus);
+ 
+ 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ 		reset_migrate_dl_data(cs);
+ 	}
++#endif
+ 
+ 	mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index 6f0c358e73d8..8111481ce8b1 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -150,7 +150,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index aedc0832c9f4..ff8bf6cddc34 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -174,7 +174,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -195,7 +195,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 4a10e8c16fd2..cfbbdd64b851 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -362,7 +362,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+ 
+ 	waiter->tree.prio = __waiter_prio(task);
+-	waiter->tree.deadline = task->dl.deadline;
++	waiter->tree.deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+@@ -383,16 +383,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+  * Only use with rt_waiter_node_{less,equal}()
+  */
+ #define task_to_waiter_node(p)	\
+-	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p)	\
+ 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+ 
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 					       struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -401,16 +405,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 						 struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -419,8 +429,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..5b6bdff6e630
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,8944 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.7-r2"
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++struct affinity_context {
++	const struct cpumask *new_mask;
++	struct cpumask *user_mask;
++	unsigned int flags;
++};
++
++/* Reschedule if less than this many ns (about 100 μs) left */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - The type of yield sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ * 2: Set rq skip task. (Same as mainline)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPU number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
++#endif
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS] ____cacheline_aligned_in_smp;
++static cpumask_t *const sched_idle_mask = &sched_preempt_mask[0];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	idle->sq_idx = IDLE_TASK_SCHED_PRIO;
++	INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
++	list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
++}
++
++static inline void
++clear_recorded_preempt_mask(int pr, int low, int high, int cpu)
++{
++	if (low < pr && pr <= high)
++		cpumask_clear_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
++}
++
++static inline void
++set_recorded_preempt_mask(int pr, int low, int high, int cpu)
++{
++	if (low < pr && pr <= high)
++		cpumask_set_cpu(cpu, sched_preempt_mask + SCHED_QUEUE_BITS - pr);
++}
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	unsigned long prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	unsigned long last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++#ifdef CONFIG_SCHED_SMT
++			if (static_branch_likely(&sched_smt_present))
++				cpumask_andnot(&sched_sg_idle_mask,
++					       &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++			cpumask_clear_cpu(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		clear_recorded_preempt_mask(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++#ifdef CONFIG_SCHED_SMT
++		if (static_branch_likely(&sched_smt_present) &&
++		    cpumask_intersects(cpu_smt_mask(cpu), sched_idle_mask))
++			cpumask_or(&sched_sg_idle_mask,
++				   &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++		cpumask_set_cpu(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	set_recorded_preempt_mask(pr, last_prio, prio, cpu);
++}
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_prio2idx(rq->prio, rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *
++sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	unsigned long idx = p->sq_idx;
++	struct list_head *head = &rq->queue.heads[idx];
++
++	if (list_is_last(&p->sq_node, head)) {
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++static inline struct task_struct *rq_runnable_task(struct rq *rq)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (unlikely(next == rq->skip))
++		next = sched_rq_next_task(next, rq);
++
++	return next;
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq
++*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void
++__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq
++*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
++			  unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq &&
++				   rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
++			      unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void
++rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void
++rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++		    rq_lock_irqsave(_T->lock, &_T->rf),
++		    rq_unlock_irqrestore(_T->lock, &_T->rf),
++		    struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compile should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight miss-attribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++	delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	sched_update_rq_clock(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp),
++			RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
++						  cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)				\
++	sched_info_dequeue(rq, p);						\
++										\
++	list_del(&p->sq_node);							\
++	if (list_empty(&rq->queue.heads[p->sq_idx])) { 				\
++		clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);	\
++		func;								\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags)				\
++	sched_info_enqueue(rq, p);					\
++									\
++	p->sq_idx = task_sched_prio_idx(p, rq);				\
++	list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]);	\
++	set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags);
++	update_sched_preempt_mask(rq);
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq, int idx)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++
++	list_del(&p->sq_node);
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
++	if (idx != p->sq_idx) {
++		if (list_empty(&rq->queue.heads[p->sq_idx]))
++			clear_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		p->sq_idx = idx;
++		set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		update_sched_preempt_mask(rq);
++	}
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	do {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(cpu_rq(cpu));
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++void nohz_balance_enter_idle(int cpu) {}
++
++void select_nohz_load_balancer(int stop_tick) {}
++
++void set_cpu_sd_state_idle(void) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be uptodate wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++	if (READ_ONCE(p->__state) & state)
++		return 1;
++
++	if (READ_ONCE(p->saved_state) & state)
++		return -1;
++
++	return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++	/*
++	 * Serialize against current_save_and_set_rtlock_wait_state(),
++	 * current_restore_rtlock_saved_state(), and __refrigerator().
++	 */
++	guard(raw_spinlock_irq)(&p->pi_lock);
++
++	return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	int running, queued, match;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p)) {
++			if (!task_state_match(p, match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		queued = p->on_rq;
++		ncsw = 0;
++		if ((match = __task_state_match(p, match_state))) {
++			/*
++			 * When matching on p->saved_state, consider this task
++			 * still queued so it will wait.
++			 */
++			if (match < 0)
++				queued = 1;
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		}
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(queued)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/**
++	 * Alt schedule FW doesn't support sched_feat yet
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) :
++		static_prio + MAX_PRIORITY_ADJ;
++}
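++
++/*
++ * For illustration: SCHED_FIFO with rt_priority 1 yields prio
++ * MAX_RT_PRIO - 2; a nice-0 SCHED_NORMAL task yields DEFAULT_PRIO +
++ * MAX_PRIORITY_ADJ (which is 0 under PDS, so the shift only applies
++ * to BMQ).
++ */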
++
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If we are RT tasks or we were boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++	p->on_rq = 0;
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++#define SCA_CHECK		0x01
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu) {
++		rseq_migrate(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	guard(preempt)();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (p->migration_disabled == 0)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	if (WARN_ON_ONCE(!p->migration_disabled))
++		return;
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	guard(preempt)();
++	/*
++	 * Assumption: current should be running on allowed cpu
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
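++
++/*
++ * Example: migrate_disable()/migrate_enable() sections nest; only the
++ * outermost migrate_enable() restores p->cpus_ptr and drops nr_pinned:
++ *
++ *   migrate_disable();
++ *       migrate_disable();      nested call just bumps the counter
++ *       migrate_enable();       counter back to 1, still pinned
++ *   migrate_enable();           migration actually re-enabled here
++ */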
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p,
++				   int new_cpu)
++{
++	int src_cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	src_cpu = cpu_of(rq);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++	sched_mm_cid_migrate_to(rq, p, src_cpu);
++
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++	wakeup_preempt(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p,
++				 int dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	}
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++static cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() above for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++	 * may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	guard(preempt)();
++	int cpu = task_cpu(p);
++
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
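++
++/*
++ * A typical caller is the signal code: after marking a signal pending,
++ * signal_wake_up() uses kick_process() so a task running in user mode
++ * on another CPU re-enters the kernel and notices the pending signal.
++ */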
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    avoid the load balancer to place new tasks on the to be removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select the CPU on the other node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_idle_mask);
++
++	for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++		if (prio < cpu_rq(cpu)->prio)
++			cpumask_set_cpu(cpu, mask);
++	}
++}
++
++static inline int
++preempt_mask_check(struct task_struct *p, cpumask_t *allow_mask, cpumask_t *preempt_mask)
++{
++	int task_prio = task_sched_prio(p);
++	cpumask_t *mask = sched_preempt_mask + SCHED_QUEUE_BITS - 1 - task_prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != task_prio) {
++		sched_preempt_mask_flush(mask, task_prio);
++		atomic_set(&sched_prio_record, task_prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
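++
++/*
++ * Example: back-to-back wakeups at the same priority reuse the cached
++ * preempt mask; only a wakeup at a priority different from the one in
++ * sched_prio_record pays for a rebuild in sched_preempt_mask_flush().
++ */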
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (
++#ifdef CONFIG_SCHED_SMT
++	    cpumask_and(&mask, &allow_mask, &sched_sg_idle_mask) ||
++#endif
++	    cpumask_and(&mask, &allow_mask, sched_idle_mask) ||
++	    preempt_mask_check(p, &allow_mask, &mask))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
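++
++/*
++ * Example of the resulting placement order for a woken task: a fully
++ * idle SMT sibling group first (under CONFIG_SCHED_SMT), then any idle
++ * allowed CPU, then a CPU running lower-priority work that can be
++ * preempted, and finally best_mask_cpu() over the plain allowed mask.
++ */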
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task, it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (p->migration_disabled) {
++			if (likely(p->cpus_ptr != &p->cpus_mask))
++				__do_set_cpus_ptr(p, &p->cpus_mask);
++			p->migration_disabled = 0;
++			p->migration_flags |= MDF_FORCE_ENABLED;
++			/* When p is migrate_disabled, rq->lock should be held */
++			rq->nr_pinned--;
++		}
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
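++
++/*
++ * Example: pinning a task to CPU 2 (assuming that CPU is active):
++ *
++ *   set_cpus_allowed_ptr(p, cpumask_of(2));
++ */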
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/** Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	wakeup_preempt(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			wakeup_preempt(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least one task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is OK to clear ttwu_pending while another task is pending;
++	 * we will receive an IPI once local IRQs are enabled and enqueue it
++	 * then. Since nr_running > 0 now, idle_cpu() gets the correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++		return false;
++	}
++
++	return true;
++}
++
++/*
++ * Queue a task on the target CPUs wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee cpu is idle, or the task is descheduling and the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	guard(rcu)();
++	if (is_idle_task(rcu_dereference(rq->curr))) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		if (is_idle_task(rq->curr))
++			resched_curr(rq);
++	}
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ *  No other bits set. This allows to distinguish all wakeup scenarios.
++ *
++ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ *  allows us to prevent early wakeup of tasks before they can be run on
++ *  asymmetric ISA architectures (eg ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	int match;
++
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	*success = !!(match = __task_state_match(p, state));
++
++	/*
++	 * Saved state preserves the task state across blocking on
++	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
++	 * set p::saved_state to TASK_RUNNING, but do not wake the task
++	 * because it waits for a lock wakeup or __thaw_task(). Also
++	 * indicate success because from the regular waker's point of
++	 * view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success
++	 */
++	if (match < 0)
++		p->saved_state = TASK_RUNNING;
++
++	return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++	guard(preempt)();
++	int cpu, success = 0;
++
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++		smp_mb__after_spinlock();
++		if (!ttwu_state_match(p, state, &success))
++			break;
++
++		trace_sched_waking(p);
++
++		/*
++		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++		 * in smp_cond_load_acquire() below.
++		 *
++		 * sched_ttwu_pending()			try_to_wake_up()
++		 *   STORE p->on_rq = 1			  LOAD p->state
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   UNLOCK rq->lock
++		 *
++		 * [task p]
++		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * A similar smp_rmb() lives in __task_needs_rq_lock().
++		 */
++		smp_rmb();
++		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++			break;
++
++#ifdef CONFIG_SMP
++		/*
++		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++		 * possible to, falsely, observe p->on_cpu == 0.
++		 *
++		 * One must be running (->on_cpu == 1) in order to remove oneself
++		 * from the runqueue.
++		 *
++		 * __schedule() (switch to task 'p')	try_to_wake_up()
++		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (put 'p' to sleep)
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++		 * schedule()'s deactivate_task() has 'happened' and p will no longer
++		 * care about its own p->state. See the comment in __schedule().
++		 */
++		smp_acquire__after_ctrl_dep();
++
++		/*
++		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++		 * == 0), which means we need to do an enqueue, change p->state to
++		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++		 * enqueue, such as ttwu_queue_wakelist().
++		 */
++		WRITE_ONCE(p->__state, TASK_WAKING);
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, consider queueing p on the remote CPU's wake_list
++		 * which potentially sends an IPI instead of spinning on p->on_cpu to
++		 * let the waker make forward progress. This is safe because IRQs are
++		 * disabled and the IPI will deliver after on_cpu is cleared.
++		 *
++		 * Ensure we load task_cpu(p) after p->on_cpu:
++		 *
++		 * set_task_cpu(p, cpu);
++		 *   STORE p->cpu = @cpu
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock
++		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++		 *   STORE p->on_cpu = 1                LOAD p->cpu
++		 *
++		 * to ensure we observe the correct CPU on which the task is currently
++		 * scheduling.
++		 */
++		if (smp_load_acquire(&p->on_cpu) &&
++		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++			break;
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, wait until it's done referencing the task.
++		 *
++		 * Pairs with the smp_store_release() in finish_task().
++		 *
++		 * This ensures that tasks getting woken will be fully ordered against
++		 * their previous state and preserve Program Order.
++		 */
++		smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		sched_task_ttwu(p);
++
++		if ((wake_flags & WF_CURRENT_CPU) &&
++		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++			cpu = smp_processor_id();
++		else
++			cpu = select_task_rq(p);
++
++		if (cpu != task_cpu(p)) {
++			if (p->in_iowait) {
++				delayacct_blkio_end(p);
++				atomic_dec(&task_rq(p)->nr_iowait);
++			}
++
++			wake_flags |= WF_MIGRATED;
++			set_task_cpu(p, cpu);
++		}
++#else
++		sched_task_ttwu(p);
++
++		cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++		ttwu_queue(p, cpu, wake_flags);
++	}
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++
++	return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
++ * to work out what the state is, if required.  Given that @func can be invoked
++ * with a runqueue lock held, it had better be quite lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
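++
++/*
++ * Example: a minimal sketch of a caller; "read_state" here is purely
++ * illustrative, not an existing helper:
++ *
++ *   static int read_state(struct task_struct *p, void *arg)
++ *   {
++ *           *(unsigned int *)arg = READ_ONCE(p->__state);
++ *           return 0;
++ *   }
++ *
++ *   unsigned int state;
++ *   task_call_func(p, read_state, &state);
++ */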
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
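++
++/*
++ * Example waker side, pairing with the wait loop shown above
++ * ttwu_runnable():
++ *
++ *   CONDITION = 1;
++ *   wake_up_process(p);
++ *
++ * The full barrier in try_to_wake_up() orders the CONDITION store
++ * against the p->state check, so the wakeup cannot be lost.
++ */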
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++	init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, thus the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sysctl_sched_base_slice;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++	{}
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
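++
++/*
++ * Example: once registered, the knob can be toggled at runtime:
++ *
++ *   echo 1 > /proc/sys/kernel/sched_schedstats
++ */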
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	wakeup_preempt(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
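++
++/*
++ * Typical pairing (an illustrative sketch, not code from this patch):
++ *
++ *	head = splice_balance_callbacks(rq);	// still under rq->lock
++ *	raw_spin_unlock(&rq->lock);
++ *	...
++ *	balance_callbacks(rq, head);		// re-takes rq->lock if needed
++ *
++ * Splitting like this across two rq->lock sections is why
++ * balance_push_callback must be left on the list, per the comment in
++ * __splice_balance_callbacks() above.
++ */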
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op, but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 *
++	 * switch_mm_cid() needs to be updated if the barriers provided
++	 * by context_switch() are modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	/* switch_mm_cid() requires the memory barriers above. */
++	switch_mm_cid(rq, prev, next);
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
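++
++/*
++ * Minimal usage sketch (illustrative; do_lightweight_work() is a
++ * hypothetical helper), honouring the caution above by calling from a
++ * non-preemptible section:
++ *
++ *	preempt_disable();
++ *	if (single_task_running())
++ *		do_lightweight_work();
++ *	preempt_enable();
++ */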
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait pending, even though the waiting
++ * task might not even end up running on that CPU once it becomes runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we
++ * could have spent running if it were not for IO. That is, if we were to
++ * improve the storage performance, we'd have a proportional reduction in
++ * IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU; only that
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, due to under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * Separately return the current task's pending runtime that has not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
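++
++/*
++ * Worked example (illustrative, assuming a 4 ms base slice): with
++ * p->time_slice at 4,000,000 ns, a task that has consumed 3,950,000 ns of
++ * rq->clock_task time is left with 50,000 ns after update_curr(); once that
++ * remainder falls below RESCHED_NS, the tick above marks the task with
++ * set_tsk_need_resched()/set_preempt_need_resched().
++ */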
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void scheduler_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr = rq->curr;
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	task_tick_mm_cid(rq, rq->curr);
++
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++
++	if (curr->flags & PF_WQ_WORKER)
++		wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_SCHED_SMT
++static inline int sg_balance_cpu_stop(void *data)
++{
++	struct rq *rq = this_rq();
++	struct task_struct *p = data;
++	cpumask_t tmp;
++	unsigned long flags;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	rq->active_balance = 0;
++	/* _something_ may have changed the task, double check again */
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
++	    !is_migration_disabled(p)) {
++		int cpu = cpu_of(rq);
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock(&p->pi_lock);
++
++	local_irq_restore(flags);
++
++	return 0;
++}
++
++/* sg_balance_trigger - trigger sibling group balance for @cpu */
++static inline int sg_balance_trigger(const int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	struct task_struct *curr;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++	curr = rq->curr;
++	res = (!is_idle_task(curr)) && (1 == rq->nr_running) &&
++	      cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&
++	      !is_migration_disabled(curr) && (!rq->active_balance);
++
++	if (res)
++		rq->active_balance = 1;
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res)
++		stop_one_cpu_nowait(cpu, sg_balance_cpu_stop, curr,
++				    &rq->active_balance_work);
++	return res;
++}
++
++/*
++ * sg_balance - sibling group balance check for run queue @rq
++ */
++static inline void sg_balance(struct rq *rq, int cpu)
++{
++	cpumask_t chk;
++
++	/* exit when cpu is offline */
++	if (unlikely(!rq->online))
++		return;
++
++	/*
++	 * Only a cpu in the sibling idle group will do the checking and then
++	 * find potential cpus which can migrate the currently running task
++	 */
++	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
++	    cpumask_andnot(&chk, cpu_online_mask, sched_idle_mask) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++		int i;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			if (!cpumask_intersects(cpu_smt_mask(i), sched_idle_mask) &&
++			    sg_balance_trigger(i))
++				return;
++		}
++	}
++}
++#endif /* CONFIG_SCHED_SMT */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (tick_nohz_tick_stopped_cpu(cpu)) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		struct task_struct *curr = rq->curr;
++
++		if (cpu_online(cpu)) {
++			update_rq_clock(rq);
++
++			if (!is_idle_task(curr)) {
++				/*
++				 * Make sure the next tick runs within a
++				 * reasonable amount of time.
++				 */
++				u64 delta = rq_clock_task(rq) - curr->last_ran;
++				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++			}
++			scheduler_task_tick(rq);
++
++			calc_load_nohz_remote(rq);
++		}
++	}
++
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround: check that rq->curr is still on the rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			sched_mm_cid_migrate_to(dest_rq, p, cpu_of(rq));
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	struct cpumask *topo_mask, *end_mask;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++	do {
++		int i;
++		for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++
++	sched_task_renew(p, rq);
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq, task_sched_prio_idx(p, rq));
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next;
++
++	if (unlikely(rq->skip)) {
++		next = rq_runnable_task(rq);
++		if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++			if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++				rq->skip = NULL;
++				schedstat_inc(rq->sched_goidle);
++				return next;
++#ifdef	CONFIG_SMP
++			}
++			next = rq_runnable_task(rq);
++#endif
++		}
++		rq->skip = NULL;
++#ifdef CONFIG_HIGH_RES_TIMERS
++		hrtick_start(rq, next->time_slice);
++#endif
++		return next;
++	}
++
++	next = sched_rq_first_task(rq);
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
++ * optimize the AND operation out and just check for zero.
++ */
++#define SM_NONE			0x0
++#define SM_PREEMPT		0x1
++#define SM_RTLOCK_WAIT		0x2
++
++#ifndef CONFIG_PREEMPT_RT
++# define SM_MASK_PREEMPT	(~0U)
++#else
++# define SM_MASK_PREEMPT	SM_PREEMPT
++#endif
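++
++/*
++ * Illustrative expansion (not part of the patch): on !PREEMPT_RT,
++ * SM_MASK_PREEMPT is ~0U, so a test such as
++ *
++ *	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state)
++ *
++ * reduces to "if (!sched_mode && prev_state)", letting the compiler drop
++ * the AND entirely, as the comment above notes.
++ */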
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler scheduler_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outermost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(unsigned int sched_mode)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, !!sched_mode);
++
++	/* bypass the sched_feat(HRTICK) check, which the Alt schedule framework doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(!!sched_mode);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
++		if (signal_pending_state(prev_state, prev)) {
++			WRITE_ONCE(prev->__state, TASK_RUNNING);
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev_state & TASK_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++#ifdef CONFIG_SCHED_BMQ
++		rq->last_ts_switch = rq->clock;
++#endif
++		next->last_ran = rq->clock_task;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
++		 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 */
++		++*switch_count;
++
++		trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	sg_balance(rq, cpu);
++#endif
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++	unsigned int task_flags;
++
++	/*
++	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
++	 * will use a blocking primitive -- which would lead to recursion.
++	 */
++	lock_map_acquire_try(&sched_map);
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++	else if (task_flags & PF_IO_WORKER)
++		io_wq_worker_sleeping(tsk);
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++
++	lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else
++			io_wq_worker_running(tsk);
++	}
++}
++
++static __always_inline void __schedule_loop(unsigned int sched_mode)
++{
++	do {
++		preempt_disable();
++		__schedule(sched_mode);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++	lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++	if (!task_is_running(tsk))
++		sched_submit_work(tsk);
++	__schedule_loop(SM_NONE);
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_NONE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	__schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note that this is called and returns with irqs disabled. This
++ * protects us against recursive calls from irq context.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		int idx;
++
++		update_rq_clock(rq);
++		idx = task_sched_prio_idx(p, rq);
++		if (idx != p->sq_idx) {
++			requeue_task(p, rq, idx);
++			wakeup_preempt(rq);
++		}
++	}
++}
++
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++	sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++	lockdep_assert(current->sched_rt_mutex);
++	__schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++	sched_update_worker(current);
++	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that there are loads of tricky details in making this pointer cache
++	 * work right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling until the task
++	 * returns to SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
++
++/*
++ * is_nice_reduction - check if nice value is an actual reduction
++ *
++ * Similar to can_nice() but does not perform a capability check.
++ *
++ * @p: task
++ * @nice: nice value
++ */
++static bool is_nice_reduction(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40]: */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE));
++}
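++
++/*
++ * Example: nice_to_rlimit() maps nice 19 -> 1 and nice -20 -> 40, so a
++ * task with RLIMIT_NICE = 25 may lower its nice value down to -5.
++ */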
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	return is_nice_reduction(p, nice) || capable(CAP_SYS_NICE);
++}
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
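++
++/*
++ * Example: a task at nice 0 calling nice(-5) targets nice -5; as the
++ * increment is negative, this needs CAP_SYS_NICE or RLIMIT_NICE >= 25.
++ */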
++
++#endif
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy               return value    kernel prio     user prio/nice
++ *
++ * (BMQ) normal, batch, idle  [0 ... 53]      [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
++ * (PDS) normal, batch, idle  [0 ... 39]      100             0/[-20 ... 19]
++ * fifo, rr                   [-1 ... -100]   [99 ... 0]      [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++static struct task_struct *find_get_task(pid_t pid)
++{
++	struct task_struct *p;
++	guard(rcu)();
++
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++
++	return p;
++}
++
++DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T),
++	     find_get_task(pid), pid_t pid)
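++
++/*
++ * CLASS(find_get_task, p)(pid) declares 'p', takes a reference on the
++ * task found for 'pid', and drops that reference automatically when 'p'
++ * goes out of scope (see linux/cleanup.h), so callers need no explicit
++ * put_task_struct() on their exit paths.
++ */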
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * Allow the normal nice value to be set, but it will not have
++	 * any effect on scheduling until the task returns to
++	 * SCHED_NORMAL/SCHED_BATCH.
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	guard(rcu)();
++
++	pcred = __task_cred(p);
++	return (uid_eq(cred->euid, pcred->euid) ||
++	        uid_eq(cred->euid, pcred->uid));
++}
++
++/*
++ * Allow unprivileged RT tasks to decrease priority.
++ * Only issue a capable test if needed and only once to avoid an audit
++ * event on permitted non-privileged operations:
++ */
++static int user_check_sched_setscheduler(struct task_struct *p,
++					 const struct sched_attr *attr,
++					 int policy, int reset_on_fork)
++{
++	if (rt_policy(policy)) {
++		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
++
++		/* Can't set/change the rt policy: */
++		if (policy != p->policy && !rlim_rtprio)
++			goto req_priv;
++
++		/* Can't increase priority: */
++		if (attr->sched_priority > p->rt_priority &&
++		    attr->sched_priority > rlim_rtprio)
++			goto req_priv;
++	}
++
++	/* Can't change other user's priorities: */
++	if (!check_same_owner(p))
++		goto req_priv;
++
++	/* Normal users shall not reset the sched_reset_on_fork flag: */
++	if (p->sched_reset_on_fork && !reset_on_fork)
++		goto req_priv;
++
++	return 0;
++
++req_priv:
++	if (!capable(CAP_SYS_NICE))
++		return -EPERM;
++
++	return 0;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into max-priority SCHED_FIFO
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here: for a task p which is
++	 * not running, reading rq->stop is racy but acceptable, as
++	 * ->stop doesn't change much.
++	 * An enhancement could be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop thread is a very bad idea
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboosts
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi)
++		rt_mutex_adjust_pi(p);
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may be already dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++EXPORT_SYMBOL_GPL(sched_setattr_nocheck);
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * have enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
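++
++/*
++ * Sketch of typical usage (all names hypothetical): a driver needing an
++ * RT worker would do something like
++ *
++ *	worker = kthread_create(worker_fn, NULL, "my_rt_worker");
++ *	sched_set_fifo(worker);
++ *	wake_up_process(worker);
++ */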
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	return sched_setscheduler(p, policy, &lparam);
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
++
++static void get_params(struct task_struct *p, struct sched_attr *attr)
++{
++	if (task_has_rt_policy(p))
++		attr->sched_priority = p->rt_priority;
++	else
++		attr->sched_nice = task_nice(p);
++}
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS)
++		get_params(p, &attr);
++
++	return sched_setattr(p, &attr);
++}
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		return -ESRCH;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (!retval)
++		retval = p->policy;
++
++	return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		int retval;
++
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -EINVAL;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		if (task_has_rt_policy(p))
++			lp.sched_priority = p->rt_priority;
++	}
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * which part the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	scoped_guard (rcu) {
++		p = find_process_by_pid(pid);
++		if (!p)
++			return -ESRCH;
++
++		retval = security_task_getscheduler(p);
++		if (retval)
++			return retval;
++
++		kattr.sched_policy = p->policy;
++		if (p->sched_reset_on_fork)
++			kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		get_params(p, &kattr);
++		kattr.sched_flags &= SCHED_FLAG_ALL;
++
++#ifdef CONFIG_UCLAMP_TASK
++		kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++		kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++	}
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++}
++
++#ifdef CONFIG_SMP
++int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
++{
++	return 0;
++}
++#endif
++
++static int
++__sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
++{
++	int retval;
++	cpumask_var_t cpus_allowed, new_mask;
++
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
++		return -ENOMEM;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, ctx->new_mask, cpus_allowed);
++
++	ctx->new_mask = new_mask;
++	ctx->flags |= SCA_CHECK;
++
++	retval = __set_cpus_allowed_ptr(p, ctx);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	if (!cpumask_subset(new_mask, cpus_allowed)) {
++		/*
++		 * We must have raced with a concurrent cpuset
++		 * update. Just reset the cpus_allowed to the
++		 * cpuset's cpus_allowed
++		 */
++		cpumask_copy(new_mask, cpus_allowed);
++
++		/*
++		 * If SCA_USER is set, a 2nd call to __set_cpus_allowed_ptr()
++		 * will restore the previous user_cpus_ptr value.
++		 *
++		 * In the unlikely event a previous user_cpus_ptr exists,
++		 * we need to further restrict the mask to what is allowed
++		 * by that old user_cpus_ptr.
++		 */
++		if (unlikely((ctx->flags & SCA_USER) && ctx->user_mask)) {
++			bool empty = !cpumask_and(new_mask, new_mask,
++						  ctx->user_mask);
++
++			if (WARN_ON_ONCE(empty))
++				cpumask_copy(new_mask, cpus_allowed);
++		}
++		__set_cpus_allowed_ptr(p, ctx);
++		retval = -EINVAL;
++	}
++
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++	return retval;
++}
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	struct affinity_context ac;
++	struct cpumask *user_mask;
++	int retval;
++
++	CLASS(find_get_task, p)(pid);
++	if (!p)
++		return -ESRCH;
++
++	if (p->flags & PF_NO_SETAFFINITY)
++		return -EINVAL;
++
++	if (!check_same_owner(p)) {
++		guard(rcu)();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE))
++			return -EPERM;
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		return retval;
++
++	/*
++	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
++	 * alloc_user_cpus_ptr() returns NULL.
++	 */
++	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
++	if (user_mask) {
++		cpumask_copy(user_mask, in_mask);
++	} else if (IS_ENABLED(CONFIG_SMP)) {
++		return -ENOMEM;
++	}
++
++	ac = (struct affinity_context){
++		.new_mask  = in_mask,
++		.user_mask = user_mask,
++		.flags     = SCA_USER,
++	};
++
++	retval = __sched_setaffinity(p, &ac);
++	kfree(ac.user_mask);
++
++	return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	int retval;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -ESRCH;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	guard(raw_spinlock_irqsave)(&p->pi_lock);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min(len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
++
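++/*
++ * Apply the sched_yield_type policy: 0 - sched_yield() is a no-op;
++ * 1 - deboost and requeue the task; 2 - mark the current task as the
++ * run queue's skip task.
++ */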
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++	struct task_struct *p;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	p = current;
++	if (rt_task(p)) {
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq, task_sched_prio_idx(p, rq));
++	} else if (rq->nr_running > 1) {
++		if (1 == sched_yield_type) {
++			do_sched_yield_type_1(p, rq);
++			if (task_on_rq_queued(p))
++				requeue_task(p, rq, task_sched_prio_idx(p, rq));
++		} else if (2 == sched_yield_type) {
++			rq->skip = p;
++		}
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	klp_sched_try_switch();
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	if (!klp_override)
++		preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		if (!klp_override)
++			preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		if (!klp_override)
++			preempt_dynamic_disable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++	mutex_lock(&sched_dynamic_mutex);
++	__sched_dynamic_update(mode);
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++	__klp_sched_try_switch();
++	return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = true;
++	static_call_update(cond_resched, klp_cond_resched);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++	mutex_lock(&sched_dynamic_mutex);
++
++	klp_override = false;
++	__sched_dynamic_update(preempt_dynamic_mode);
++
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
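++ *
++ * For example, instead of the broken loop above, a correct wait (with a
++ * hypothetical waitqueue 'wq' guarding 'event') is:
++ *
++ *	wait_event(wq, event);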
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	guard(rcu)();
++	p = find_process_by_pid(pid);
++	if (!p)
++		return -EINVAL;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		return retval;
++
++	*t = ns_to_timespec64(sysctl_sched_base_slice);
++	return 0;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++		free, task_pid_nr(p), task_tgid_nr(p),
++		ppid, read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	if (cpu == smp_processor_id() && in_hardirq()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but it
++ * only takes effect while the CPU is going down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	preempt_disable();
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	preempt_enable();
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task
++	 * which kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPUs hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online) {
++		update_rq_clock(rq);
++		rq->online = false;
++	}
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in suspend
++		 * resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Synchronize before parking the smpboot threads to take care of
++	 * the RCU boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(&sched_sg_idle_mask);
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
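++
++/*
++ * For example, assuming the usual calc_load_fold_active() semantics
++ * (nr_active = nr_running - adjust + nr_uninterruptible, delta is taken
++ * against the previous snapshot): if this rq last folded with
++ * calc_load_active == 3 and only the stopper remains (nr_running == 1,
++ * nr_uninterruptible == 0), adjust == 1 yields delta == -3, removing
++ * this rq's whole contribution from the global load-average count.
++ */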
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpumask_of(cpu));
++		tmp++;
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++		/*per_cpu(sd_llc_id, cpu) = cpu;*/
++	}
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
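++
++/*
++ * Illustrative walk-through of the macro above, assuming a hypothetical
++ * 4-CPU box (two cores, two SMT siblings per core, one shared LLC).
++ * For cpu#0 the per-cpu topology masks end up as:
++ *
++ *   slot 0: {0}        (self, set in sched_init_topology_cpumask_early())
++ *   slot 1: {0,1}      (SMT siblings)
++ *   slot 2: {0,1,2,3}  (coregroup/LLC; this also becomes the llc_mask slot)
++ *
++ * The core and "others" levels add no new CPUs here and are skipped:
++ * each TOPOLOGY_CPUMASK() step only stores a level whose mask still
++ * intersects the complement of the previous one.
++ */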
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		/* take the chance to reset the time slice for idle tasks */
++		cpu_rq(cpu)->idle->time_slice = sysctl_sched_base_slice;
++
++		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++		rq->skip = NULL;
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++		rq->active_balance = 0;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
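++
++/*
++ * Usage sketch for the cgroup v1 "cpu.shares" file (see
++ * cpu_shares_write_u64() below): writing 2048 stores
++ * tg->shares = scale_load(2048), and reading back returns
++ * scale_load_down(tg->shares) == 2048.  As the function above shows,
++ * BMQ/PDS only stores the value for interface compatibility.
++ */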
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, s64 cfs_quota_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 cfs_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, u64 cfs_burst_us)
++{
++	return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 val)
++{
++	return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 rt_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{
++		.name = "cfs_quota_us",
++		.read_s64 = cpu_cfs_quota_read_s64,
++		.write_s64 = cpu_cfs_quota_write_s64,
++	},
++	{
++		.name = "cfs_period_us",
++		.read_u64 = cpu_cfs_period_read_u64,
++		.write_u64 = cpu_cfs_period_write_u64,
++	},
++	{
++		.name = "cfs_burst_us",
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "stat",
++		.seq_show = cpu_cfs_stat_show,
++	},
++	{
++		.name = "stat.local",
++		.seq_show = cpu_cfs_local_stat_show,
++	},
++	{
++		.name = "rt_runtime_us",
++		.read_s64 = cpu_rt_runtime_read,
++		.write_s64 = cpu_rt_runtime_write,
++	},
++	{
++		.name = "rt_period_us",
++		.read_u64 = cpu_rt_period_read_uint,
++		.write_u64 = cpu_rt_period_write_uint,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* Terminate */
++};
++
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cft, u64 weight)
++{
++	return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++				    struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++				     struct cftype *cft, s64 nice)
++{
++	return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 idle)
++{
++	return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++			     char *buf, size_t nbytes, loff_t off)
++{
++	return nbytes;
++}
++
++static struct cftype cpu_files[] = {
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_weight_read_u64,
++		.write_u64 = cpu_weight_write_u64,
++	},
++	{
++		.name = "weight.nice",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_weight_nice_read_s64,
++		.write_s64 = cpu_weight_nice_write_s64,
++	},
++	{
++		.name = "idle",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++	{
++		.name = "max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_max_show,
++		.write = cpu_max_write,
++	},
++	{
++		.name = "max.burst",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ *      X = Y = 0
++ *
++ *      w[X]=1          w[Y]=1
++ *      MB              MB
++ *      r[Y]=y          r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0                                      CPU1
++ *
++ * Context switch CS-1                       Remote-clear
++ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
++ *                                             (implied barrier after cmpxchg)
++ *   - switch_mm_cid()
++ *     - memory barrier (see switch_mm_cid()
++ *       comment explaining how this barrier
++ *       is combined with other scheduler
++ *       barriers)
++ *     - mm_cid_get (next)
++ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
++
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++	t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++					  struct task_struct *t,
++					  struct mm_cid *src_pcpu_cid)
++{
++	struct mm_struct *mm = t->mm;
++	struct task_struct *src_task;
++	int src_cid, last_mm_cid;
++
++	if (!mm)
++		return -1;
++
++	last_mm_cid = t->last_mm_cid;
++	/*
++	 * If the migrated task has no last cid, or if the current
++	 * task on src rq uses the cid, it means the source cid does not need
++	 * to be moved to the destination cpu.
++	 */
++	if (last_mm_cid == -1)
++		return -1;
++	src_cid = READ_ONCE(src_pcpu_cid->cid);
++	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++		return -1;
++
++	/*
++	 * If we observe an active task using the mm on this rq, it means we
++	 * are not the last task to be migrated from this cpu for this mm, so
++	 * there is no need to move src_cid to the destination cpu.
++	 */
++	rcu_read_lock();
++	src_task = rcu_dereference(src_rq->curr);
++	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++		rcu_read_unlock();
++		t->last_mm_cid = -1;
++		return -1;
++	}
++	rcu_read_unlock();
++
++	return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++					      struct task_struct *t,
++					      struct mm_cid *src_pcpu_cid,
++					      int src_cid)
++{
++	struct task_struct *src_task;
++	struct mm_struct *mm = t->mm;
++	int lazy_cid;
++
++	if (src_cid == -1)
++		return -1;
++
++	/*
++	 * Attempt to clear the source cpu cid to move it to the destination
++	 * cpu.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(src_cid);
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++		return -1;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, this task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		src_task = rcu_dereference(src_rq->curr);
++		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++			/*
++			 * We observed an active task for this mm, there is therefore
++			 * no point in moving this cid to the destination cpu.
++			 */
++			t->last_mm_cid = -1;
++			return -1;
++		}
++	}
++
++	/*
++	 * The src_cid is unused, so it can be unset.
++	 */
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++		return -1;
++	return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu)
++{
++	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++	struct mm_struct *mm = t->mm;
++	int src_cid, dst_cid;
++	struct rq *src_rq;
++
++	lockdep_assert_rq_held(dst_rq);
++
++	if (!mm)
++		return;
++	if (src_cpu == -1) {
++		t->last_mm_cid = -1;
++		return;
++	}
++	/*
++	 * Move the src cid if the dst cid is unset. This keeps id
++	 * allocation closest to 0 in cases where few threads migrate around
++	 * many cpus.
++	 *
++	 * If destination cid is already set, we may have to just clear
++	 * the src cid to ensure compactness in frequent migrations
++	 * scenarios.
++	 *
++	 * It is not useful to clear the src cid when the number of threads is
++	 * greater than or equal to the number of allowed cpus, because user-space
++	 * can expect that the number of allowed cids can reach the number of
++	 * allowed cpus.
++	 */
++	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++	if (!mm_cid_is_unset(dst_cid) &&
++	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++		return;
++	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++	src_rq = cpu_rq(src_cpu);
++	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++	if (src_cid == -1)
++		return;
++	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++							    src_cid);
++	if (src_cid == -1)
++		return;
++	if (!mm_cid_is_unset(dst_cid)) {
++		__mm_cid_put(mm, src_cid);
++		return;
++	}
++	/* Move src_cid to dst cpu. */
++	mm_cid_snapshot_time(dst_rq, mm);
++	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
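++
++/*
++ * Worked example for the compaction policy above (hypothetical numbers):
++ * an mm with 2 users bouncing across 8 allowed CPUs keeps mm_users (2)
++ * below nr_cpus_allowed (8), so even when the destination cid is already
++ * set we fall through, steal the source cid and __mm_cid_put() it,
++ * keeping the allocated cid set packed near 0 (e.g. {0,1}) instead of
++ * letting it grow with every migration.
++ */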
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++				      int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *t;
++	int cid, lazy_cid;
++
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid))
++		return;
++
++	/*
++	 * If the cpu cid is set, clear it to keep cid allocation compact.  If
++	 * there happen to be other tasks left on the source cpu using this
++	 * mm, the next task using this mm will reallocate its cid on context
++	 * switch.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(cid);
++	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++		return;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, that task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		t = rcu_dereference(rq->curr);
++		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++			return;
++	}
++
++	/*
++	 * The cid is unused, so it can be unset.
++	 * Disable interrupts to keep the window of cid ownership without rq
++	 * lock small.
++	 */
++	scoped_guard (irqsave) {
++		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++			__mm_cid_put(mm, cid);
++	}
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct mm_cid *pcpu_cid;
++	struct task_struct *curr;
++	u64 rq_clock;
++
++	/*
++	 * rq->clock load is racy on 32-bit but one spurious clear once in a
++	 * while is irrelevant.
++	 */
++	rq_clock = READ_ONCE(rq->clock);
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++	/*
++	 * In order to take care of infrequently scheduled tasks, bump the time
++	 * snapshot associated with this cid if an active task using the mm is
++	 * observed on this rq.
++	 */
++	scoped_guard (rcu) {
++		curr = rcu_dereference(rq->curr);
++		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++			WRITE_ONCE(pcpu_cid->time, rq_clock);
++			return;
++		}
++	}
++
++	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++					     int weight)
++{
++	struct mm_cid *pcpu_cid;
++	int cid;
++
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid) || cid < weight)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++	unsigned long now = jiffies, old_scan, next_scan;
++	struct task_struct *t = current;
++	struct cpumask *cidmask;
++	struct mm_struct *mm;
++	int weight, cpu;
++
++	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++	work->next = work;	/* Prevent double-add */
++	if (t->flags & PF_EXITING)
++		return;
++	mm = t->mm;
++	if (!mm)
++		return;
++	old_scan = READ_ONCE(mm->mm_cid_next_scan);
++	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	if (!old_scan) {
++		unsigned long res;
++
++		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++		if (res != old_scan)
++			old_scan = res;
++		else
++			old_scan = next_scan;
++	}
++	if (time_before(now, old_scan))
++		return;
++	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++		return;
++	cidmask = mm_cidmask(mm);
++	/* Clear cids that were not recently used. */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_old(mm, cpu);
++	weight = cpumask_weight(cidmask);
++	/*
++	 * Clear cids that are greater than or equal to the cidmask weight to
++	 * recompact it.
++	 */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
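++
++/*
++ * Compaction example (hypothetical state): if the cids in use are
++ * {0, 1, 7}, then cpumask_weight(cidmask) == 3 and the second pass
++ * (sched_mm_cid_remote_clear_weight()) clears any per-cpu cid >= 3,
++ * here cid 7, so a later allocation can re-pack into {0, 1, 2}.
++ */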
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	int mm_users = 0;
++
++	if (mm) {
++		mm_users = atomic_read(&mm->mm_users);
++		if (mm_users == 1)
++			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	}
++	t->cid_work.next = &t->cid_work;	/* Protect against double add */
++	init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++	struct callback_head *work = &curr->cid_work;
++	unsigned long now = jiffies;
++
++	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++	    work->next != work)
++		return;
++	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++		return;
++	task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	scoped_guard (rq_lock_irqsave, rq) {
++		preempt_enable_no_resched();	/* holding spinlock */
++		WRITE_ONCE(t->mm_cid_active, 1);
++		/*
++		 * Store t->mm_cid_active before loading per-mm/cpu cid.
++		 * Matches barrier in sched_mm_cid_remote_clear_old().
++		 */
++		smp_mb();
++		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++	}
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1212a031700e
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,31 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..0eff5391092c
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,923 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS		(64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
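++
++/*
++ * SCHED_LEVELS == 65: levels 0..63 are tracked by the SCHED_QUEUE_BITS
++ * bitmap further below, while the top level (IDLE_TASK_SCHED_PRIO == 64)
++ * is reserved for the per-rq idle task and needs no bitmap bit.
++ */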
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
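++
++/*
++ * E.g. with SCHED_FIXEDPOINT_SHIFT == 10 on 64-bit:
++ * scale_load(1024) == 1048576 and scale_load_down(1048576) == 1024,
++ * while scale_load_down(1) == 2 because of the max(2UL, ...) clamp:
++ * a non-zero weight never scales down to zero.
++ */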
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so the weight of an entity should not be too large,
++ * and neither should the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
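++
++/*
++ * Intended invariant (illustrative): bit i of @bitmap is set iff
++ * heads[i] is non-empty, so the lowest set bit names the best
++ * (highest-priority) level with runnable tasks, e.g.:
++ *
++ *   bitmap = 0b0100100  ->  pick from heads[2] first, then heads[5]
++ */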
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t			lock;
++
++	struct task_struct __rcu	*curr;
++	struct task_struct		*idle;
++	struct task_struct		*stop;
++	struct task_struct		*skip;
++	struct mm_struct		*prev_mm;
++
++	struct sched_queue	queue;
++#ifdef CONFIG_SCHED_PDS
++	u64			time_edge;
++#endif
++	unsigned long		prio;
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++	int active_balance;
++	struct cpu_stop_work	active_balance_work;
++#endif
++	struct balance_callback	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	/* Ensure that all clocks are in the same cache line */
++	u64			clock ____cacheline_aligned;
++	u64			clock_task;
++#ifdef CONFIG_SCHED_BMQ
++	u64			last_ts_switch;
++#endif
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++	ITSELF_LEVEL_SPACE_HOLDER,
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
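++
++/*
++ * Example, reusing the hypothetical cpu#0 masks {0}, {0,1}, {0,1,2,3}:
++ * best_mask_cpu(0, cpumask_of(3)) misses the first two levels and
++ * returns 3 at the LLC level.  Note that __best_mask_cpu() appears to
++ * assume @cpumask intersects some level, so callers must not pass a
++ * mask with no reachable CPU in it.
++ */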
++
++extern void flush_smp_call_function_queue(void);
++
++#else  /* !CONFIG_SMP */
++static inline void flush_smp_call_function_queue(void) { }
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax the lockdep_assert_held() check: as in VRQ, callers such as
++	 * sched_info_xxxx() may not hold rq->lock
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax the lockdep_assert_held() check: as in VRQ, callers such as
++	 * sched_info_xxxx() may not hold rq->lock
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are scheduler APIs used by other kernel code.
++ * They take a dummy rq_flags argument.
++ * TODO: BMQ needs to support these APIs for compatibility with the
++ * mainline scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted from it and would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
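++/* e.g. cap_scale(1000, 512) == 500, i.e. v scaled by s/SCHED_CAPACITY_SCALE. */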
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++				  struct task_struct *p)
++{
++	return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
++#define MM_CID_SCAN_DELAY	100			/* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++	if (cid < 0)
++		return;
++	cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (!mm_cid_is_lazy_put(cid) ||
++	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, res;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	for (;;) {
++		if (mm_cid_is_unset(cid))
++			return MM_CID_UNSET;
++		/*
++		 * Attempt transition from valid or lazy-put to unset.
++		 */
++		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++		if (res == cid)
++			break;
++		cid = res;
++	}
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = mm_cid_pcpu_unset(mm);
++	if (cid == MM_CID_UNSET)
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++	struct cpumask *cpumask;
++	int cid;
++
++	cpumask = mm_cidmask(mm);
++	/*
++	 * Retry finding first zero bit if the mask is temporarily
++	 * filled. This only happens during concurrent remote-clear
++	 * which owns a cid without holding a rq lock.
++	 */
++	for (;;) {
++		cid = cpumask_first_zero(cpumask);
++		if (cid < nr_cpu_ids)
++			break;
++		cpu_relax();
++	}
++	if (cpumask_test_and_set_cpu(cid, cpumask))
++		return -1;
++	return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, so we can estimate how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++	lockdep_assert_rq_held(rq);
++	WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	int cid;
++
++	/*
++	 * All allocations (even those using the cid_lock) are lock-free. If
++	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++	 * guarantee forward progress.
++	 */
++	if (!READ_ONCE(use_cid_lock)) {
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto end;
++		raw_spin_lock(&cid_lock);
++	} else {
++		raw_spin_lock(&cid_lock);
++		cid = __mm_cid_try_get(mm);
++		if (cid >= 0)
++			goto unlock;
++	}
++
++	/*
++	 * The cid was concurrently allocated. Retry while forcing subsequent
++	 * allocations to use the cid_lock to ensure forward progress.
++	 */
++	WRITE_ONCE(use_cid_lock, 1);
++	/*
++	 * Set use_cid_lock before allocation. Only care about program order
++	 * because this is only required for forward progress.
++	 */
++	barrier();
++	/*
++	 * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all newly arriving allocations observe the use_cid_lock flag set.
++	 */
++	do {
++		cid = __mm_cid_try_get(mm);
++		cpu_relax();
++	} while (cid < 0);
++	/*
++	 * Allocate before clearing use_cid_lock. Only care about
++	 * program order because this is for forward progress.
++	 */
++	barrier();
++	WRITE_ONCE(use_cid_lock, 0);
++unlock:
++	raw_spin_unlock(&cid_lock);
++end:
++	mm_cid_snapshot_time(rq, mm);
++	return cid;
++}
++
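++/*
++ * Fast path: reuse this cpu's cached cid when it is still valid. A
++ * lazy-put cid is released first; in either slow case a fresh cid is
++ * then allocated via __mm_cid_get().
++ */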
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	struct cpumask *cpumask;
++	int cid;
++
++	lockdep_assert_rq_held(rq);
++	cpumask = mm_cidmask(mm);
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (mm_cid_is_valid(cid)) {
++		mm_cid_snapshot_time(rq, mm);
++		return cid;
++	}
++	if (mm_cid_is_lazy_put(cid)) {
++		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++	}
++	cid = __mm_cid_get(rq, mm);
++	__this_cpu_write(pcpu_cid->cid, cid);
++	return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++				 struct task_struct *prev,
++				 struct task_struct *next)
++{
++	/*
++	 * Provide a memory barrier between rq->curr store and load of
++	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++	 *
++	 * Should be adapted if context_switch() is modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		/*
++		 * user -> kernel transition does not guarantee a barrier, but
++		 * we can use the fact that it performs an atomic operation in
++		 * mmgrab().
++		 */
++		if (prev->mm)                           // from user
++			smp_mb__after_mmgrab();
++		/*
++		 * kernel -> kernel transition does not change rq->curr->mm
++		 * state. It stays NULL.
++		 */
++	} else {                                        // to user
++		/*
++		 * kernel -> user transition does not provide a barrier
++		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++		 * Provide it here.
++		 */
++		if (!prev->mm)                          // from kernel
++			smp_mb();
++		/*
++		 * user -> user transition guarantees a memory barrier through
++		 * switch_mm() when current->mm changes. If current->mm is
++		 * unchanged, no barrier is needed.
++		 */
++	}
++	if (prev->mm_cid_active) {
++		mm_cid_snapshot_time(rq, prev->mm);
++		mm_cid_put_lazy(prev);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t, int src_cpu) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#endif /* ALT_SCHED_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..840009dc1e8d
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,99 @@
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
++#define boost_threshold(p)	(sysctl_sched_base_slice >> ((14 - (p)->boost_prio) / 2))
++
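++/*
++ * Boost @p by one level; a lower boost_prio means a better queue slot.
++ * SCHED_NORMAL tasks may boost down to -MAX_PRIORITY_ADJ, SCHED_BATCH
++ * and SCHED_IDLE only back to 0, and other policies are never boosted.
++ */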
++static inline void boost_task(struct task_struct *p)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	if (p->boost_prio > limit)
++		p->boost_prio--;
++}
++
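++/* Step @p one level back toward the un-boosted state. */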
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MAX_RT_PRIO;
++}
++
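++/*
++ * Map a task to its BMQ bitmap index: RT priorities are compressed
++ * four-to-one, while normal tasks take a slot derived from half of
++ * their boosted priority. For example, a nice-0 task (prio 120) with
++ * boost_prio 0 maps to MIN_SCHED_NORMAL_PRIO + (120 - 100) / 2.
++ */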
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MAX_RT_PRIO) / 2;
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	return task_sched_prio(p);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
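++/* Deboost a task that held the CPU for more than a full base slice. */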
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (rq_switch_time(rq) > sysctl_sched_base_slice)
++		deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
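++/* Treat a wakeup after at least a base slice of sleep as interactive. */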
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	if (this_rq()->clock_task - p->last_ran > sysctl_sched_base_slice)
++		boost_task(p);
++}
++
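++/*
++ * A task that blocks before its boost threshold expires earns a boost;
++ * the threshold shrinks as boost_prio drops, so already-boosted tasks
++ * must keep yielding the CPU quickly to stay boosted.
++ */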
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	if (rq_switch_time(rq) < boost_threshold(p))
++		boost_task(p);
++}
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index d9dc9ab3773f..71a25540d65e 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -42,13 +42,19 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
+-#include "deadline.c"
+ 
++#ifndef CONFIG_SCHED_ALT
++#include "deadline.c"
++#endif
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 80a3df49ab47..bc17d5a6fc41 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -84,7 +84,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 5888176354e2..6ab2534714f6 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -155,12 +155,18 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
+ 
+ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ {
+-	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
+ 	struct rq *rq = cpu_rq(sg_cpu->cpu);
+ 
++#ifndef CONFIG_SCHED_ALT
++	unsigned long util = cpu_util_cfs_boost(sg_cpu->cpu);
++
+ 	sg_cpu->bw_dl = cpu_bw_dl(rq);
+ 	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, util,
+ 					  FREQUENCY_UTIL, NULL);
++#else
++	sg_cpu->bw_dl = 0;
++	sg_cpu->util = rq_load_util(rq, arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -306,8 +312,10 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -636,6 +644,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index af7952f12e6c..6461cbbb734d 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -630,7 +630,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 4580a450700e..8c8fd7da4617 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /sys/kernel/debug/sched/debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ #ifdef CONFIG_SMP
+@@ -332,6 +335,7 @@ static const struct file_operations sched_debug_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -341,14 +345,17 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
+ 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+ 
+@@ -373,11 +380,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1106,6 +1115,7 @@ void proc_sched_set_task(struct task_struct *p)
+ 	memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 565f8374ddbb..67d51e05a8ac 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -380,6 +380,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -501,3 +502,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..c35dfb909f23
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,141 @@
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice of 4ms -> shift 22; two time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms)
++{
++	if (2 == timeslice_ms)
++		sched_timeslice_shift = 22;
++}
++
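++/*
++ * A task's normal-policy priority is its deadline's distance from the
++ * runqueue's time edge, biased by SCHED_EDGE_DELTA so that earlier
++ * deadlines map to lower (better) priority slots.
++ */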
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	s64 delta = p->deadline - rq->time_edge + SCHED_EDGE_DELTA;
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(delta > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", delta))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return max(0LL, delta);
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	u64 idx;
++
++	if (p->prio < MIN_NORMAL_PRIO)
++		return p->prio >> 2;
++
++	idx = max(p->deadline + SCHED_EDGE_DELTA, rq->time_edge);
++	/*printk(KERN_INFO "sched: task_sched_prio_idx edge:%llu, deadline=%llu idx=%llu\n", rq->time_edge, p->deadline, idx);*/
++	return MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(idx);
++}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
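++/*
++ * Advance rq->time_edge to the current time slot and rotate the normal
++ * priority queues to match: tasks in slots that expired since the old
++ * edge are spliced onto the head slot of the new edge, the queue bitmap
++ * is shifted by the same delta, and the RT bits are preserved via
++ * RT_MASK.
++ */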
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		struct task_struct *p;
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		list_for_each_entry(p, &head, sq_node)
++			p->sq_idx = idx;
++
++		list_splice(&head, rq->queue.heads + idx);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = (rq->prio < MIN_SCHED_NORMAL_PRIO + delta) ?
++		MIN_SCHED_NORMAL_PRIO : rq->prio - delta;
++}
++
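++/*
++ * Renew the deadline relative to the current time edge: the nice-derived
++ * static priority adds up to NICE_WIDTH / 2 - 1 slots, so nicer tasks
++ * are scheduled later.
++ */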
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + NICE_WIDTH / 2 - 1;
++
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++	sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index 63b6cf898220..9ca10ece4d3a 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * thermal:
+  *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index 3a0e0dc28721..e8a7d84aa5a5 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 thermal_load_avg(struct rq *rq)
+@@ -44,6 +46,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -180,9 +183,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -200,6 +205,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 2e5a95486a42..0c86131a2a64 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3509,4 +3513,9 @@ static inline void init_sched_mm_cid(struct task_struct *t) { }
+ extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+ extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 857f837f52cb..5486c63e4790 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -125,8 +125,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -143,6 +145,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -171,6 +174,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 38f3698f5e5b..b9d597394316 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delt
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 10d1391e7416..120933a5b206 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1445,8 +1446,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1680,6 +1683,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1716,6 +1720,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2793,3 +2798,20 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 157f7ce2942d..63083a9a2935 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ 
+ /* Constants used for minimum and maximum */
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1912,6 +1916,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof(int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 760793998cdd..3198ed8ab40a 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2091,8 +2091,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+-	if (rt_task(current))
++	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index e9c6f9d0e42c..43ee0a94abdd 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -867,6 +867,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -874,6 +875,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -901,8 +903,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -916,7 +920,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1152,8 +1156,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index 529590499b1f..d04bb99b4f0e 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1155,10 +1155,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 2989b57e154a..7313d9f5585f 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1114,6 +1114,7 @@ static bool kick_pool(struct worker_pool *pool)
+ 
+ 	p = worker->task;
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Idle @worker is about to execute @work and waking up provides an
+@@ -1139,6 +1140,8 @@ static bool kick_pool(struct worker_pool *pool)
+ 		get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++;
+ 	}
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ 	wake_up_process(p);
+ 	return true;
+ }
+@@ -1263,7 +1266,11 @@ void wq_worker_running(struct task_struct *task)
+ 	 * CPU intensive auto-detection cares about how long a work item hogged
+ 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
+ 	 */
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 
+ 	WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1348,7 +1355,11 @@ void wq_worker_tick(struct task_struct *task)
+ 	 * We probably want to make this prettier in the future.
+ 	 */
+ 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++	    worker->task->sched_time - worker->current_at <
++#else
+ 	    worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ 		return;
+ 
+@@ -2559,7 +2570,11 @@ __acquires(&pool->lock)
+ 	worker->current_work = work;
+ 	worker->current_func = work->func;
+ 	worker->current_pwq = pwq;
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 	work_data = *work_data_bits(work);
+ 	worker->current_color = get_work_color(work_data);
+ 


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-03-01 13:06 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-03-01 13:06 UTC (permalink / raw
  To: gentoo-commits

commit:     b0ddef42428ee2b183e51e4db4fdc18dad61fbd9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar  1 13:06:04 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar  1 13:06:04 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b0ddef42

Linux patch 6.7.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1006_linux-6.7.7.patch | 14267 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 14271 insertions(+)

diff --git a/0000_README b/0000_README
index b343fd35..178ef64f 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-6.7.6.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.6
 
+Patch:  1006_linux-6.7.7.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.7
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1006_linux-6.7.7.patch b/1006_linux-6.7.7.patch
new file mode 100644
index 00000000..97237e43
--- /dev/null
+++ b/1006_linux-6.7.7.patch
@@ -0,0 +1,14267 @@
+diff --git a/Documentation/conf.py b/Documentation/conf.py
+index d4fdf6a3875a8..dfc19c915d5c4 100644
+--- a/Documentation/conf.py
++++ b/Documentation/conf.py
+@@ -383,6 +383,12 @@ latex_elements = {
+         verbatimhintsturnover=false,
+     ''',
+ 
++    #
++    # Some of our authors are fond of deep nesting; tell latex to
++    # cope.
++    #
++    'maxlistdepth': '10',
++
+     # For CJK One-half spacing, need to be in front of hyperref
+     'extrapackages': r'\usepackage{setspace}',
+ 
+diff --git a/Documentation/dev-tools/kunit/usage.rst b/Documentation/dev-tools/kunit/usage.rst
+index c27e1646ecd9c..9db12e91668e3 100644
+--- a/Documentation/dev-tools/kunit/usage.rst
++++ b/Documentation/dev-tools/kunit/usage.rst
+@@ -651,12 +651,16 @@ For example:
+ 	}
+ 
+ Note that, for functions like device_unregister which only accept a single
+-pointer-sized argument, it's possible to directly cast that function to
+-a ``kunit_action_t`` rather than writing a wrapper function, for example:
++pointer-sized argument, it's possible to automatically generate a wrapper
++with the ``KUNIT_DEFINE_ACTION_WRAPPER()`` macro, for example:
+ 
+ .. code-block:: C
+ 
+-	kunit_add_action(test, (kunit_action_t *)&device_unregister, &dev);
++	KUNIT_DEFINE_ACTION_WRAPPER(device_unregister, device_unregister_wrapper, struct device *);
++	kunit_add_action(test, &device_unregister_wrapper, &dev);
++
++You should do this in preference to manually casting to the ``kunit_action_t`` type,
++as casting function pointers will break Control Flow Integrity (CFI).
+ 
+ ``kunit_add_action`` can fail if, for example, the system is out of memory.
+ You can use ``kunit_add_action_or_reset`` instead which runs the action
+diff --git a/Makefile b/Makefile
+index d07a8e0179acf..72ee3609aae34 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-bletchley.dts b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-bletchley.dts
+index e899de681f475..5be0e8fd2633c 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-bletchley.dts
++++ b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-bletchley.dts
+@@ -45,8 +45,8 @@ spi1_gpio: spi1-gpio {
+ 		num-chipselects = <1>;
+ 		cs-gpios = <&gpio0 ASPEED_GPIO(Z, 0) GPIO_ACTIVE_LOW>;
+ 
+-		tpmdev@0 {
+-			compatible = "tcg,tpm_tis-spi";
++		tpm@0 {
++			compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
+ 			spi-max-frequency = <33000000>;
+ 			reg = <0>;
+ 		};
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-wedge400.dts b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-wedge400.dts
+index a677c827e758f..5a8169bbda879 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-wedge400.dts
++++ b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-wedge400.dts
+@@ -80,8 +80,8 @@ spi_gpio: spi {
+ 		gpio-miso = <&gpio ASPEED_GPIO(R, 5) GPIO_ACTIVE_HIGH>;
+ 		num-chipselects = <1>;
+ 
+-		tpmdev@0 {
+-			compatible = "tcg,tpm_tis-spi";
++		tpm@0 {
++			compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
+ 			spi-max-frequency = <33000000>;
+ 			reg = <0>;
+ 		};
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-bmc-opp-tacoma.dts b/arch/arm/boot/dts/aspeed/aspeed-bmc-opp-tacoma.dts
+index 3f6010ef2b86f..213023bc5aec4 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-bmc-opp-tacoma.dts
++++ b/arch/arm/boot/dts/aspeed/aspeed-bmc-opp-tacoma.dts
+@@ -456,7 +456,7 @@ &i2c1 {
+ 	status = "okay";
+ 
+ 	tpm: tpm@2e {
+-		compatible = "tcg,tpm-tis-i2c";
++		compatible = "nuvoton,npct75x", "tcg,tpm-tis-i2c";
+ 		reg = <0x2e>;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/aspeed/ast2600-facebook-netbmc-common.dtsi b/arch/arm/boot/dts/aspeed/ast2600-facebook-netbmc-common.dtsi
+index 31590d3186a2e..00e5887c926f1 100644
+--- a/arch/arm/boot/dts/aspeed/ast2600-facebook-netbmc-common.dtsi
++++ b/arch/arm/boot/dts/aspeed/ast2600-facebook-netbmc-common.dtsi
+@@ -35,8 +35,8 @@ spi_gpio: spi {
+ 		gpio-mosi = <&gpio0 ASPEED_GPIO(X, 4) GPIO_ACTIVE_HIGH>;
+ 		gpio-miso = <&gpio0 ASPEED_GPIO(X, 5) GPIO_ACTIVE_HIGH>;
+ 
+-		tpmdev@0 {
+-			compatible = "tcg,tpm_tis-spi";
++		tpm@0 {
++			compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
+ 			spi-max-frequency = <33000000>;
+ 			reg = <0>;
+ 		};
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ull-phytec-tauri.dtsi b/arch/arm/boot/dts/nxp/imx/imx6ull-phytec-tauri.dtsi
+index 44cc4ff1d0df3..d12fb44aeb140 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ull-phytec-tauri.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6ull-phytec-tauri.dtsi
+@@ -116,7 +116,7 @@ &ecspi1 {
+ 	tpm_tis: tpm@1 {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_tpm>;
+-		compatible = "tcg,tpm_tis-spi";
++		compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
+ 		reg = <1>;
+ 		spi-max-frequency = <20000000>;
+ 		interrupt-parent = <&gpio5>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7d-flex-concentrator.dts b/arch/arm/boot/dts/nxp/imx/imx7d-flex-concentrator.dts
+index 3a723843d5626..9984b343cdf0c 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7d-flex-concentrator.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx7d-flex-concentrator.dts
+@@ -130,7 +130,7 @@ &ecspi4 {
+ 	 * TCG specification - Section 6.4.1 Clocking:
+ 	 * TPM shall support a SPI clock frequency range of 10-24 MHz.
+ 	 */
+-	st33htph: tpm-tis@0 {
++	st33htph: tpm@0 {
+ 		compatible = "st,st33htpm-spi", "tcg,tpm_tis-spi";
+ 		reg = <0>;
+ 		spi-max-frequency = <24000000>;
+diff --git a/arch/arm/boot/dts/ti/omap/am335x-moxa-uc-2100-common.dtsi b/arch/arm/boot/dts/ti/omap/am335x-moxa-uc-2100-common.dtsi
+index b8730aa52ce6f..a59331aa58e55 100644
+--- a/arch/arm/boot/dts/ti/omap/am335x-moxa-uc-2100-common.dtsi
++++ b/arch/arm/boot/dts/ti/omap/am335x-moxa-uc-2100-common.dtsi
+@@ -217,7 +217,7 @@ &spi1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&spi1_pins>;
+ 
+-	tpm_spi_tis@0 {
++	tpm@0 {
+ 		compatible = "tcg,tpm_tis-spi";
+ 		reg = <0>;
+ 		spi-max-frequency = <500000>;
+diff --git a/arch/arm/mach-ep93xx/core.c b/arch/arm/mach-ep93xx/core.c
+index 71b1139764204..8b1ec60a9a467 100644
+--- a/arch/arm/mach-ep93xx/core.c
++++ b/arch/arm/mach-ep93xx/core.c
+@@ -339,6 +339,7 @@ static struct gpiod_lookup_table ep93xx_i2c_gpiod_table = {
+ 				GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+ 		GPIO_LOOKUP_IDX("G", 0, NULL, 1,
+ 				GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
++		{ }
+ 	},
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
+index d98a040860a48..5828c9d7821de 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
+@@ -486,7 +486,7 @@ &uart3 {	/* A53 Debug */
+ &uart4 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_uart4>;
+-	status = "okay";
++	status = "disabled";
+ };
+ 
+ &usb3_phy0 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
+index 4240e20d38ac3..258e90cc16ff3 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
+@@ -168,6 +168,13 @@ reg_vcc_12v0: regulator-12v0 {
+ 		enable-active-high;
+ 	};
+ 
++	reg_vcc_1v8: regulator-1v8 {
++		compatible = "regulator-fixed";
++		regulator-name = "VCC_1V8";
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++	};
++
+ 	reg_vcc_3v3: regulator-3v3 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "VCC_3V3";
+@@ -464,7 +471,7 @@ tlv320aic3x04: audio-codec@18 {
+ 		clock-names = "mclk";
+ 		clocks = <&audio_blk_ctrl IMX8MP_CLK_AUDIOMIX_SAI3_MCLK1>;
+ 		reset-gpios = <&gpio4 29 GPIO_ACTIVE_LOW>;
+-		iov-supply = <&reg_vcc_3v3>;
++		iov-supply = <&reg_vcc_1v8>;
+ 		ldoin-supply = <&reg_vcc_3v3>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index 42ce78beb4134..20955556b624d 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -632,6 +632,7 @@ spi0: spi@ff1d0000 {
+ 		clock-names = "spiclk", "apb_pclk";
+ 		dmas = <&dmac 12>, <&dmac 13>;
+ 		dma-names = "tx", "rx";
++		num-cs = <2>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&spi0_clk &spi0_csn &spi0_miso &spi0_mosi>;
+ 		#address-cells = <1>;
+@@ -647,6 +648,7 @@ spi1: spi@ff1d8000 {
+ 		clock-names = "spiclk", "apb_pclk";
+ 		dmas = <&dmac 14>, <&dmac 15>;
+ 		dma-names = "tx", "rx";
++		num-cs = <2>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&spi1_clk &spi1_csn0 &spi1_csn1 &spi1_miso &spi1_mosi>;
+ 		#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+index 60f00ceb630e1..3b675fd0c5ea5 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+@@ -196,13 +196,13 @@ &gpio0 {
+ 
+ &gpio1 {
+ 	gpio-line-names = /* GPIO1 A0-A7 */
+-			  "HEADER_27_3v3", "HEADER_28_3v3", "", "",
++			  "HEADER_27_3v3", "", "", "",
+ 			  "HEADER_29_1v8", "", "HEADER_7_1v8", "",
+ 			  /* GPIO1 B0-B7 */
+ 			  "", "HEADER_31_1v8", "HEADER_33_1v8", "",
+ 			  "HEADER_11_1v8", "HEADER_13_1v8", "", "",
+ 			  /* GPIO1 C0-C7 */
+-			  "", "", "", "",
++			  "", "HEADER_28_3v3", "", "",
+ 			  "", "", "", "",
+ 			  /* GPIO1 D0-D7 */
+ 			  "", "", "", "",
+@@ -226,11 +226,11 @@ &gpio3 {
+ 
+ &gpio4 {
+ 	gpio-line-names = /* GPIO4 A0-A7 */
+-			  "", "", "HEADER_37_3v3", "HEADER_32_3v3",
+-			  "HEADER_36_3v3", "", "HEADER_35_3v3", "HEADER_38_3v3",
++			  "", "", "HEADER_37_3v3", "HEADER_8_3v3",
++			  "HEADER_10_3v3", "", "HEADER_32_3v3", "HEADER_35_3v3",
+ 			  /* GPIO4 B0-B7 */
+ 			  "", "", "", "HEADER_40_3v3",
+-			  "HEADER_8_3v3", "HEADER_10_3v3", "", "",
++			  "HEADER_38_3v3", "HEADER_36_3v3", "", "",
+ 			  /* GPIO4 C0-C7 */
+ 			  "", "", "", "",
+ 			  "", "", "", "",
+diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
+index 50e5f25d3024c..7780d343ef080 100644
+--- a/arch/arm64/include/asm/fpsimd.h
++++ b/arch/arm64/include/asm/fpsimd.h
+@@ -386,6 +386,7 @@ extern void sme_alloc(struct task_struct *task, bool flush);
+ extern unsigned int sme_get_vl(void);
+ extern int sme_set_current_vl(unsigned long arg);
+ extern int sme_get_current_vl(void);
++extern void sme_suspend_exit(void);
+ 
+ /*
+  * Return how many bytes of memory are required to store the full SME
+@@ -421,6 +422,7 @@ static inline int sme_max_vl(void) { return 0; }
+ static inline int sme_max_virtualisable_vl(void) { return 0; }
+ static inline int sme_set_current_vl(unsigned long arg) { return -EINVAL; }
+ static inline int sme_get_current_vl(void) { return -EINVAL; }
++static inline void sme_suspend_exit(void) { }
+ 
+ static inline size_t sme_state_size(struct task_struct const *task)
+ {
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index f7d8f5d81cfe9..0898ac9979045 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1339,6 +1339,22 @@ void __init sme_setup(void)
+ 		get_sme_default_vl());
+ }
+ 
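++/*
++ * Restore the SME control registers after a suspend cycle, since the
++ * core may have lost their contents while powered down: re-enable FA64
++ * and ZT0 where supported and reset the streaming-mode priority.
++ */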
++void sme_suspend_exit(void)
++{
++	u64 smcr = 0;
++
++	if (!system_supports_sme())
++		return;
++
++	if (system_supports_fa64())
++		smcr |= SMCR_ELx_FA64;
++	if (system_supports_sme2())
++		smcr |= SMCR_ELx_EZT0;
++
++	write_sysreg_s(smcr, SYS_SMCR_EL1);
++	write_sysreg_s(0, SYS_SMPRI_EL1);
++}
++
+ #endif /* CONFIG_ARM64_SME */
+ 
+ static void sve_init_regs(void)
+diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
+index eca4d04352118..eaaff94329cdd 100644
+--- a/arch/arm64/kernel/suspend.c
++++ b/arch/arm64/kernel/suspend.c
+@@ -12,6 +12,7 @@
+ #include <asm/daifflags.h>
+ #include <asm/debug-monitors.h>
+ #include <asm/exec.h>
++#include <asm/fpsimd.h>
+ #include <asm/mte.h>
+ #include <asm/memory.h>
+ #include <asm/mmu_context.h>
+@@ -80,6 +81,8 @@ void notrace __cpu_suspend_exit(void)
+ 	 */
+ 	spectre_v4_enable_mitigation(NULL);
+ 
++	sme_suspend_exit();
++
+ 	/* Restore additional feature-specific configuration */
+ 	ptrauth_suspend_exit();
+ }
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index e2764d0ffa9f3..28a93074eca17 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -468,6 +468,9 @@ static int its_sync_lpi_pending_table(struct kvm_vcpu *vcpu)
+ 		}
+ 
+ 		irq = vgic_get_irq(vcpu->kvm, NULL, intids[i]);
++		if (!irq)
++			continue;
++
+ 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+ 		irq->pending_latch = pendmask & (1U << bit_nr);
+ 		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+@@ -1432,6 +1435,8 @@ static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its,
+ 
+ 	for (i = 0; i < irq_count; i++) {
+ 		irq = vgic_get_irq(kvm, NULL, intids[i]);
++		if (!irq)
++			continue;
+ 
+ 		update_affinity(irq, vcpu2);
+ 
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index ee123820a4760..205956041d7d0 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -11,6 +11,7 @@ config LOONGARCH
+ 	select ARCH_DISABLE_KASAN_INLINE
+ 	select ARCH_ENABLE_MEMORY_HOTPLUG
+ 	select ARCH_ENABLE_MEMORY_HOTREMOVE
++	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
+ 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
+ 	select ARCH_HAS_CPU_FINALIZE_INIT
+ 	select ARCH_HAS_FORTIFY_SOURCE
+@@ -97,6 +98,7 @@ config LOONGARCH
+ 	select HAVE_ARCH_KFENCE
+ 	select HAVE_ARCH_KGDB if PERF_EVENTS
+ 	select HAVE_ARCH_MMAP_RND_BITS if MMU
++	select HAVE_ARCH_SECCOMP
+ 	select HAVE_ARCH_SECCOMP_FILTER
+ 	select HAVE_ARCH_TRACEHOOK
+ 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+@@ -608,23 +610,6 @@ config RANDOMIZE_BASE_MAX_OFFSET
+ 
+ 	  This is limited by the size of the lower address memory, 256MB.
+ 
+-config SECCOMP
+-	bool "Enable seccomp to safely compute untrusted bytecode"
+-	depends on PROC_FS
+-	default y
+-	help
+-	  This kernel feature is useful for number crunching applications
+-	  that may need to compute untrusted bytecode during their
+-	  execution. By using pipes or other transports made available to
+-	  the process as file descriptors supporting the read/write
+-	  syscalls, it's possible to isolate those applications in
+-	  their own address space using seccomp. Once seccomp is
+-	  enabled via /proc/<pid>/seccomp, it cannot be disabled
+-	  and the task is only allowed to execute a few safe syscalls
+-	  defined by each seccomp mode.
+-
+-	  If unsure, say Y. Only embedded should say N here.
+-
+ endmenu
+ 
+ config ARCH_SELECT_MEMORY_MODEL
+@@ -643,10 +628,6 @@ config ARCH_SPARSEMEM_ENABLE
+ 	  or have huge holes in the physical address space for other reasons.
+ 	  See <file:Documentation/mm/numa.rst> for more.
+ 
+-config ARCH_ENABLE_THP_MIGRATION
+-	def_bool y
+-	depends on TRANSPARENT_HUGEPAGE
+-
+ config ARCH_MEMORY_PROBE
+ 	def_bool y
+ 	depends on MEMORY_HOTPLUG
+diff --git a/arch/loongarch/include/asm/acpi.h b/arch/loongarch/include/asm/acpi.h
+index 8de6c4b83a61a..49e29b29996f0 100644
+--- a/arch/loongarch/include/asm/acpi.h
++++ b/arch/loongarch/include/asm/acpi.h
+@@ -32,8 +32,10 @@ static inline bool acpi_has_cpu_in_madt(void)
+ 	return true;
+ }
+ 
++#define MAX_CORE_PIC 256
++
+ extern struct list_head acpi_wakeup_device_list;
+-extern struct acpi_madt_core_pic acpi_core_pic[NR_CPUS];
++extern struct acpi_madt_core_pic acpi_core_pic[MAX_CORE_PIC];
+ 
+ extern int __init parse_acpi_topology(void);
+ 
+diff --git a/arch/loongarch/kernel/acpi.c b/arch/loongarch/kernel/acpi.c
+index 8e00a754e5489..55d6a48c76a82 100644
+--- a/arch/loongarch/kernel/acpi.c
++++ b/arch/loongarch/kernel/acpi.c
+@@ -29,11 +29,9 @@ int disabled_cpus;
+ 
+ u64 acpi_saved_sp;
+ 
+-#define MAX_CORE_PIC 256
+-
+ #define PREFIX			"ACPI: "
+ 
+-struct acpi_madt_core_pic acpi_core_pic[NR_CPUS];
++struct acpi_madt_core_pic acpi_core_pic[MAX_CORE_PIC];
+ 
+ void __init __iomem * __acpi_map_table(unsigned long phys, unsigned long size)
+ {
+diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
+index d183a745fb85d..b7c14a1242276 100644
+--- a/arch/loongarch/kernel/setup.c
++++ b/arch/loongarch/kernel/setup.c
+@@ -366,6 +366,8 @@ void __init platform_init(void)
+ 	acpi_gbl_use_default_register_widths = false;
+ 	acpi_boot_table_init();
+ #endif
++
++	early_init_fdt_scan_reserved_mem();
+ 	unflatten_and_copy_device_tree();
+ 
+ #ifdef CONFIG_NUMA
+@@ -399,8 +401,6 @@ static void __init arch_mem_init(char **cmdline_p)
+ 
+ 	check_kernel_sections_mem();
+ 
+-	early_init_fdt_scan_reserved_mem();
+-
+ 	/*
+ 	 * In order to reduce the possibility of kernel panic when failed to
+ 	 * get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate
+diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
+index 2fbf541d7b4f6..378ffa78ffeb4 100644
+--- a/arch/loongarch/kernel/smp.c
++++ b/arch/loongarch/kernel/smp.c
+@@ -88,6 +88,73 @@ void show_ipi_list(struct seq_file *p, int prec)
+ 	}
+ }
+ 
++static inline void set_cpu_core_map(int cpu)
++{
++	int i;
++
++	cpumask_set_cpu(cpu, &cpu_core_setup_map);
++
++	for_each_cpu(i, &cpu_core_setup_map) {
++		if (cpu_data[cpu].package == cpu_data[i].package) {
++			cpumask_set_cpu(i, &cpu_core_map[cpu]);
++			cpumask_set_cpu(cpu, &cpu_core_map[i]);
++		}
++	}
++}
++
++static inline void set_cpu_sibling_map(int cpu)
++{
++	int i;
++
++	cpumask_set_cpu(cpu, &cpu_sibling_setup_map);
++
++	for_each_cpu(i, &cpu_sibling_setup_map) {
++		if (cpus_are_siblings(cpu, i)) {
++			cpumask_set_cpu(i, &cpu_sibling_map[cpu]);
++			cpumask_set_cpu(cpu, &cpu_sibling_map[i]);
++		}
++	}
++}
++
++static inline void clear_cpu_sibling_map(int cpu)
++{
++	int i;
++
++	for_each_cpu(i, &cpu_sibling_setup_map) {
++		if (cpus_are_siblings(cpu, i)) {
++			cpumask_clear_cpu(i, &cpu_sibling_map[cpu]);
++			cpumask_clear_cpu(cpu, &cpu_sibling_map[i]);
++		}
++	}
++
++	cpumask_clear_cpu(cpu, &cpu_sibling_setup_map);
++}
++
++/*
++ * Calculate a new cpu_foreign_map mask whenever a
++ * new cpu appears or disappears.
++ */
++void calculate_cpu_foreign_map(void)
++{
++	int i, k, core_present;
++	cpumask_t temp_foreign_map;
++
++	/* Re-calculate the mask */
++	cpumask_clear(&temp_foreign_map);
++	for_each_online_cpu(i) {
++		core_present = 0;
++		for_each_cpu(k, &temp_foreign_map)
++			if (cpus_are_siblings(i, k))
++				core_present = 1;
++		if (!core_present)
++			cpumask_set_cpu(i, &temp_foreign_map);
++	}
++
++	for_each_online_cpu(i)
++		cpumask_andnot(&cpu_foreign_map[i],
++			       &temp_foreign_map, &cpu_sibling_map[i]);
++}
++
+ /* Send mailbox buffer via Mail_Send */
+ static void csr_mail_send(uint64_t data, int cpu, int mailbox)
+ {
+@@ -300,6 +367,7 @@ int loongson_cpu_disable(void)
+ 	numa_remove_cpu(cpu);
+ #endif
+ 	set_cpu_online(cpu, false);
++	clear_cpu_sibling_map(cpu);
+ 	calculate_cpu_foreign_map();
+ 	local_irq_save(flags);
+ 	irq_migrate_all_off_this_cpu();
+@@ -334,6 +402,7 @@ void __noreturn arch_cpu_idle_dead(void)
+ 		addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0);
+ 	} while (addr == 0);
+ 
++	local_irq_disable();
+ 	init_fn = (void *)TO_CACHE(addr);
+ 	iocsr_write32(0xffffffff, LOONGARCH_IOCSR_IPI_CLEAR);
+ 
+@@ -376,59 +445,6 @@ static int __init ipi_pm_init(void)
+ core_initcall(ipi_pm_init);
+ #endif
+ 
+-static inline void set_cpu_sibling_map(int cpu)
+-{
+-	int i;
+-
+-	cpumask_set_cpu(cpu, &cpu_sibling_setup_map);
+-
+-	for_each_cpu(i, &cpu_sibling_setup_map) {
+-		if (cpus_are_siblings(cpu, i)) {
+-			cpumask_set_cpu(i, &cpu_sibling_map[cpu]);
+-			cpumask_set_cpu(cpu, &cpu_sibling_map[i]);
+-		}
+-	}
+-}
+-
+-static inline void set_cpu_core_map(int cpu)
+-{
+-	int i;
+-
+-	cpumask_set_cpu(cpu, &cpu_core_setup_map);
+-
+-	for_each_cpu(i, &cpu_core_setup_map) {
+-		if (cpu_data[cpu].package == cpu_data[i].package) {
+-			cpumask_set_cpu(i, &cpu_core_map[cpu]);
+-			cpumask_set_cpu(cpu, &cpu_core_map[i]);
+-		}
+-	}
+-}
+-
+-/*
+- * Calculate a new cpu_foreign_map mask whenever a
+- * new cpu appears or disappears.
+- */
+-void calculate_cpu_foreign_map(void)
+-{
+-	int i, k, core_present;
+-	cpumask_t temp_foreign_map;
+-
+-	/* Re-calculate the mask */
+-	cpumask_clear(&temp_foreign_map);
+-	for_each_online_cpu(i) {
+-		core_present = 0;
+-		for_each_cpu(k, &temp_foreign_map)
+-			if (cpus_are_siblings(i, k))
+-				core_present = 1;
+-		if (!core_present)
+-			cpumask_set_cpu(i, &temp_foreign_map);
+-	}
+-
+-	for_each_online_cpu(i)
+-		cpumask_andnot(&cpu_foreign_map[i],
+-			       &temp_foreign_map, &cpu_sibling_map[i]);
+-}
+-
+ /* Preload SMP state for boot cpu */
+ void smp_prepare_boot_cpu(void)
+ {
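
The LoongArch hunk above moves the topology-map helpers ahead of their first user and adds clear_cpu_sibling_map(), so that taking a CPU offline removes it from its siblings' masks symmetrically, mirroring how set_cpu_sibling_map() added it. Below is a minimal userspace sketch of that invariant; the names (sibling_map, set_sibling, clear_sibling_all) are hypothetical stand-ins for the kernel's cpumask API, not the driver code itself.

#include <stdint.h>
#include <stdio.h>

#define NCPUS 8
static uint64_t sibling_map[NCPUS];	/* bit j set in [i]: j is i's sibling */

static void set_sibling(int a, int b)
{
	sibling_map[a] |= 1ULL << b;	/* a records b ... */
	sibling_map[b] |= 1ULL << a;	/* ... and b records a */
}

static void clear_sibling_all(int cpu)
{
	for (int i = 0; i < NCPUS; i++) {
		sibling_map[i] &= ~(1ULL << cpu);	/* drop cpu everywhere */
		sibling_map[cpu] &= ~(1ULL << i);	/* and everyone from cpu */
	}
}

int main(void)
{
	set_sibling(0, 1);
	clear_sibling_all(1);	/* mimics the new loongson_cpu_disable() step */
	printf("cpu0=%#llx cpu1=%#llx\n",	/* both print 0: maps stay symmetric */
	       (unsigned long long)sibling_map[0],
	       (unsigned long long)sibling_map[1]);
	return 0;
}

Without the clear step, re-onlining CPU 1 later would stack fresh bits on top of whatever stale state the old masks still held.
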
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index c74c9921304f2..f597cd08a96be 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -2,6 +2,7 @@
+ # Objects to go into the VDSO.
+ 
+ KASAN_SANITIZE := n
++UBSAN_SANITIZE := n
+ KCOV_INSTRUMENT := n
+ 
+ # Include the generic Makefile to check the built vdso.
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index 246c6a6b02614..5b778995d4483 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -2007,7 +2007,13 @@ unsigned long vi_handlers[64];
+ 
+ void reserve_exception_space(phys_addr_t addr, unsigned long size)
+ {
+-	memblock_reserve(addr, size);
++	/*
++	 * Reserving exception space on CPUs other than CPU0 is too
++	 * late, since memblock is no longer available by the time
++	 * the APs come up.
++	 */
++	if (smp_processor_id() == 0)
++		memblock_reserve(addr, size);
+ }
+ 
+ void __init *set_except_vector(int n, void *addr)
+diff --git a/arch/parisc/kernel/processor.c b/arch/parisc/kernel/processor.c
+index e95a977ba5f37..bf73562706b2e 100644
+--- a/arch/parisc/kernel/processor.c
++++ b/arch/parisc/kernel/processor.c
+@@ -172,7 +172,6 @@ static int __init processor_probe(struct parisc_device *dev)
+ 	p->cpu_num = cpu_info.cpu_num;
+ 	p->cpu_loc = cpu_info.cpu_loc;
+ 
+-	set_cpu_possible(cpuid, true);
+ 	store_cpu_topology(cpuid);
+ 
+ #ifdef CONFIG_SMP
+@@ -474,13 +473,6 @@ static struct parisc_driver cpu_driver __refdata = {
+  */
+ void __init processor_init(void)
+ {
+-	unsigned int cpu;
+-
+ 	reset_cpu_topology();
+-
+-	/* reset possible mask. We will mark those which are possible. */
+-	for_each_possible_cpu(cpu)
+-		set_cpu_possible(cpu, false);
+-
+ 	register_parisc_driver(&cpu_driver);
+ }
+diff --git a/arch/parisc/kernel/unwind.c b/arch/parisc/kernel/unwind.c
+index 27ae40a443b80..f7e0fee5ee55a 100644
+--- a/arch/parisc/kernel/unwind.c
++++ b/arch/parisc/kernel/unwind.c
+@@ -228,10 +228,8 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ #ifdef CONFIG_IRQSTACKS
+ 	extern void * const _call_on_stack;
+ #endif /* CONFIG_IRQSTACKS */
+-	void *ptr;
+ 
+-	ptr = dereference_kernel_function_descriptor(&handle_interruption);
+-	if (pc_is_kernel_fn(pc, ptr)) {
++	if (pc_is_kernel_fn(pc, handle_interruption)) {
+ 		struct pt_regs *regs = (struct pt_regs *)(info->sp - frame_size - PT_SZ_ALGN);
+ 		dbg("Unwinding through handle_interruption()\n");
+ 		info->prev_sp = regs->gr[30];
+@@ -239,13 +237,13 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 		return 1;
+ 	}
+ 
+-	if (pc_is_kernel_fn(pc, ret_from_kernel_thread) ||
+-	    pc_is_kernel_fn(pc, syscall_exit)) {
++	if (pc == (unsigned long)&ret_from_kernel_thread ||
++	    pc == (unsigned long)&syscall_exit) {
+ 		info->prev_sp = info->prev_ip = 0;
+ 		return 1;
+ 	}
+ 
+-	if (pc_is_kernel_fn(pc, intr_return)) {
++	if (pc == (unsigned long)&intr_return) {
+ 		struct pt_regs *regs;
+ 
+ 		dbg("Found intr_return()\n");
+@@ -257,14 +255,14 @@ static int unwind_special(struct unwind_frame_info *info, unsigned long pc, int
+ 	}
+ 
+ 	if (pc_is_kernel_fn(pc, _switch_to) ||
+-	    pc_is_kernel_fn(pc, _switch_to_ret)) {
++	    pc == (unsigned long)&_switch_to_ret) {
+ 		info->prev_sp = info->sp - CALLEE_SAVE_FRAME_SIZE;
+ 		info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET);
+ 		return 1;
+ 	}
+ 
+ #ifdef CONFIG_IRQSTACKS
+-	if (pc_is_kernel_fn(pc, _call_on_stack)) {
++	if (pc == (unsigned long)&_call_on_stack) {
+ 		info->prev_sp = *(unsigned long *)(info->sp - FRAME_SIZE - REG_SZ);
+ 		info->prev_ip = *(unsigned long *)(info->sp - FRAME_SIZE - RP_OFFSET);
+ 		return 1;
+diff --git a/arch/powerpc/include/asm/ppc-pci.h b/arch/powerpc/include/asm/ppc-pci.h
+index d9fcff5750271..2689e7139b9ea 100644
+--- a/arch/powerpc/include/asm/ppc-pci.h
++++ b/arch/powerpc/include/asm/ppc-pci.h
+@@ -30,6 +30,16 @@ void *pci_traverse_device_nodes(struct device_node *start,
+ 				void *data);
+ extern void pci_devs_phb_init_dynamic(struct pci_controller *phb);
+ 
++#if defined(CONFIG_IOMMU_API) && (defined(CONFIG_PPC_PSERIES) || \
++				  defined(CONFIG_PPC_POWERNV))
++extern void ppc_iommu_register_device(struct pci_controller *phb);
++extern void ppc_iommu_unregister_device(struct pci_controller *phb);
++#else
++static inline void ppc_iommu_register_device(struct pci_controller *phb) { }
++static inline void ppc_iommu_unregister_device(struct pci_controller *phb) { }
++#endif
++
++
+ /* From rtas_pci.h */
+ extern void init_pci_config_tokens (void);
+ extern unsigned long get_phb_buid (struct device_node *);
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index df17b33b89d13..2c0173e7094da 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -1341,7 +1341,7 @@ static struct iommu_device *spapr_tce_iommu_probe_device(struct device *dev)
+ 	struct pci_controller *hose;
+ 
+ 	if (!dev_is_pci(dev))
+-		return ERR_PTR(-EPERM);
++		return ERR_PTR(-ENODEV);
+ 
+ 	pdev = to_pci_dev(dev);
+ 	hose = pdev->bus->sysdata;
+@@ -1390,6 +1390,21 @@ static const struct attribute_group *spapr_tce_iommu_groups[] = {
+ 	NULL,
+ };
+ 
++void ppc_iommu_register_device(struct pci_controller *phb)
++{
++	iommu_device_sysfs_add(&phb->iommu, phb->parent,
++				spapr_tce_iommu_groups, "iommu-phb%04x",
++				phb->global_number);
++	iommu_device_register(&phb->iommu, &spapr_tce_iommu_ops,
++				phb->parent);
++}
++
++void ppc_iommu_unregister_device(struct pci_controller *phb)
++{
++	iommu_device_unregister(&phb->iommu);
++	iommu_device_sysfs_remove(&phb->iommu);
++}
++
+ /*
+  * This registers IOMMU devices of PHBs. This needs to happen
+  * after core_initcall(iommu_init) + postcore_initcall(pci_driver_init) and
+@@ -1400,11 +1415,7 @@ static int __init spapr_tce_setup_phb_iommus_initcall(void)
+ 	struct pci_controller *hose;
+ 
+ 	list_for_each_entry(hose, &hose_list, list_node) {
+-		iommu_device_sysfs_add(&hose->iommu, hose->parent,
+-				       spapr_tce_iommu_groups, "iommu-phb%04x",
+-				       hose->global_number);
+-		iommu_device_register(&hose->iommu, &spapr_tce_iommu_ops,
+-				      hose->parent);
++		ppc_iommu_register_device(hose);
+ 	}
+ 	return 0;
+ }
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 002a7573a5d44..b5c6af0bef81e 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -391,6 +391,24 @@ static void kvmppc_set_pvr_hv(struct kvm_vcpu *vcpu, u32 pvr)
+ /* Dummy value used in computing PCR value below */
+ #define PCR_ARCH_31    (PCR_ARCH_300 << 1)
+ 
++static inline unsigned long map_pcr_to_cap(unsigned long pcr)
++{
++	unsigned long cap = 0;
++
++	switch (pcr) {
++	case PCR_ARCH_300:
++		cap = H_GUEST_CAP_POWER9;
++		break;
++	case PCR_ARCH_31:
++		cap = H_GUEST_CAP_POWER10;
++		break;
++	default:
++		break;
++	}
++
++	return cap;
++}
++
+ static int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
+ {
+ 	unsigned long host_pcr_bit = 0, guest_pcr_bit = 0, cap = 0;
+@@ -424,11 +442,9 @@ static int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
+ 			break;
+ 		case PVR_ARCH_300:
+ 			guest_pcr_bit = PCR_ARCH_300;
+-			cap = H_GUEST_CAP_POWER9;
+ 			break;
+ 		case PVR_ARCH_31:
+ 			guest_pcr_bit = PCR_ARCH_31;
+-			cap = H_GUEST_CAP_POWER10;
+ 			break;
+ 		default:
+ 			return -EINVAL;
+@@ -440,6 +456,12 @@ static int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
+ 		return -EINVAL;
+ 
+ 	if (kvmhv_on_pseries() && kvmhv_is_nestedv2()) {
++		/*
++		 * 'arch_compat == 0' would mean the guest should default to
++		 * L1's compatibility. In this case, the guest would pick
++		 * host's PCR and evaluate the corresponding capabilities.
++		 */
++		cap = map_pcr_to_cap(guest_pcr_bit);
+ 		if (!(cap & nested_capabilities))
+ 			return -EINVAL;
+ 	}
+diff --git a/arch/powerpc/kvm/book3s_hv_nestedv2.c b/arch/powerpc/kvm/book3s_hv_nestedv2.c
+index fd3c4f2d94805..f354af7e85114 100644
+--- a/arch/powerpc/kvm/book3s_hv_nestedv2.c
++++ b/arch/powerpc/kvm/book3s_hv_nestedv2.c
+@@ -138,6 +138,7 @@ static int gs_msg_ops_vcpu_fill_info(struct kvmppc_gs_buff *gsb,
+ 	vector128 v;
+ 	int rc, i;
+ 	u16 iden;
++	u32 arch_compat = 0;
+ 
+ 	vcpu = gsm->data;
+ 
+@@ -347,8 +348,23 @@ static int gs_msg_ops_vcpu_fill_info(struct kvmppc_gs_buff *gsb,
+ 			break;
+ 		}
+ 		case KVMPPC_GSID_LOGICAL_PVR:
+-			rc = kvmppc_gse_put_u32(gsb, iden,
+-						vcpu->arch.vcore->arch_compat);
++			/*
++			 * Though 'arch_compat == 0' would mean the default
++			 * compatibility, arch_compat, being a Guest Wide
++			 * Element, cannot be filled with a value of 0 in GSB
++			 * as this would result in a kernel trap.
++			 * Hence, when `arch_compat == 0`, arch_compat should
++			 * default to L1's PVR.
++			 */
++			if (!vcpu->arch.vcore->arch_compat) {
++				if (cpu_has_feature(CPU_FTR_ARCH_31))
++					arch_compat = PVR_ARCH_31;
++				else if (cpu_has_feature(CPU_FTR_ARCH_300))
++					arch_compat = PVR_ARCH_300;
++			} else {
++				arch_compat = vcpu->arch.vcore->arch_compat;
++			}
++			rc = kvmppc_gse_put_u32(gsb, iden, arch_compat);
+ 			break;
+ 		}
+ 
+diff --git a/arch/powerpc/platforms/pseries/pci_dlpar.c b/arch/powerpc/platforms/pseries/pci_dlpar.c
+index 4ba8245681192..4448386268d99 100644
+--- a/arch/powerpc/platforms/pseries/pci_dlpar.c
++++ b/arch/powerpc/platforms/pseries/pci_dlpar.c
+@@ -35,6 +35,8 @@ struct pci_controller *init_phb_dynamic(struct device_node *dn)
+ 
+ 	pseries_msi_allocate_domains(phb);
+ 
++	ppc_iommu_register_device(phb);
++
+ 	/* Create EEH devices for the PHB */
+ 	eeh_phb_pe_create(phb);
+ 
+@@ -76,6 +78,8 @@ int remove_phb_dynamic(struct pci_controller *phb)
+ 		}
+ 	}
+ 
++	ppc_iommu_unregister_device(phb);
++
+ 	pseries_msi_free_domains(phb);
+ 
+ 	/* Keep a reference so phb isn't freed yet */
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 676ac74026a82..52a44e353796c 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -252,7 +252,7 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
+ /* combine single writes by using store-block insn */
+ void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
+ {
+-       zpci_memcpy_toio(to, from, count);
++	zpci_memcpy_toio(to, from, count * 8);
+ }
+ 
+ void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
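
The s390 fix above is a unit-conversion bug: __iowrite64_copy() receives a count of 64-bit words, but zpci_memcpy_toio() expects bytes, so only one eighth of the data was being copied. A minimal sketch of the pattern, with copy_bytes() as a hypothetical stand-in for the byte-granular helper:

#include <stdint.h>
#include <string.h>

static void copy_bytes(void *to, const void *from, size_t bytes)
{
	memcpy(to, from, bytes);	/* byte-granular, like zpci_memcpy_toio() */
}

static void iowrite64_copy(void *to, const void *from, size_t count)
{
	/* count is in u64 units; forgetting the conversion copies count
	 * bytes instead of count words, i.e. 1/8th of the data. */
	copy_bytes(to, from, count * sizeof(uint64_t));
}
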
+diff --git a/arch/sparc/Makefile b/arch/sparc/Makefile
+index 5f60359361312..2a03daa68f285 100644
+--- a/arch/sparc/Makefile
++++ b/arch/sparc/Makefile
+@@ -60,7 +60,7 @@ libs-y                 += arch/sparc/prom/
+ libs-y                 += arch/sparc/lib/
+ 
+ drivers-$(CONFIG_PM) += arch/sparc/power/
+-drivers-$(CONFIG_FB) += arch/sparc/video/
++drivers-$(CONFIG_FB_CORE) += arch/sparc/video/
+ 
+ boot := arch/sparc/boot
+ 
+diff --git a/arch/sparc/video/Makefile b/arch/sparc/video/Makefile
+index 6baddbd58e4db..d4d83f1702c61 100644
+--- a/arch/sparc/video/Makefile
++++ b/arch/sparc/video/Makefile
+@@ -1,3 +1,3 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ 
+-obj-$(CONFIG_FB) += fbdev.o
++obj-$(CONFIG_FB_CORE) += fbdev.o
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index 8c8d38f0cb1df..0033790499245 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -6,6 +6,9 @@
+ #include <linux/export.h>
+ #include <linux/linkage.h>
+ #include <asm/msr-index.h>
++#include <asm/unwind_hints.h>
++#include <asm/segment.h>
++#include <asm/cache.h>
+ 
+ .pushsection .noinstr.text, "ax"
+ 
+@@ -20,3 +23,23 @@ SYM_FUNC_END(entry_ibpb)
+ EXPORT_SYMBOL_GPL(entry_ibpb);
+ 
+ .popsection
++
++/*
++ * Define the VERW operand that is disguised as entry code so that
++ * it can be referenced with KPTI enabled. This ensures VERW can be
++ * used late in the exit-to-user path after page tables are switched.
++ */
++.pushsection .entry.text, "ax"
++
++.align L1_CACHE_BYTES, 0xcc
++SYM_CODE_START_NOALIGN(mds_verw_sel)
++	UNWIND_HINT_UNDEFINED
++	ANNOTATE_NOENDBR
++	.word __KERNEL_DS
++.align L1_CACHE_BYTES, 0xcc
++SYM_CODE_END(mds_verw_sel);
++/* For KVM */
++EXPORT_SYMBOL_GPL(mds_verw_sel);
++
++.popsection
++
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 3e973ff20e235..caf4cf2e1036f 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -97,7 +97,7 @@
+ #define X86_FEATURE_SYSENTER32		( 3*32+15) /* "" sysenter in IA32 userspace */
+ #define X86_FEATURE_REP_GOOD		( 3*32+16) /* REP microcode works well */
+ #define X86_FEATURE_AMD_LBR_V2		( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
+-/* FREE, was #define X86_FEATURE_LFENCE_RDTSC		( 3*32+18) "" LFENCE synchronizes RDTSC */
++#define X86_FEATURE_CLEAR_CPU_BUF	( 3*32+18) /* "" Clear CPU buffers using VERW */
+ #define X86_FEATURE_ACC_POWER		( 3*32+19) /* AMD Accumulated Power Mechanism */
+ #define X86_FEATURE_NOPL		( 3*32+20) /* The NOPL (0F 1F) instructions */
+ #define X86_FEATURE_ALWAYS		( 3*32+21) /* "" Always-present feature */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index f93e9b96927ac..2519615936e95 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -315,6 +315,17 @@
+ #endif
+ .endm
+ 
++/*
++ * Macro to execute the VERW instruction that mitigates transient data sampling
++ * attacks such as MDS. On affected systems a microcode update overloaded the
++ * VERW instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
++ *
++ * Note: Only the memory operand variant of VERW clears the CPU buffers.
++ */
++.macro CLEAR_CPU_BUFFERS
++	ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
++.endm
++
+ #else /* __ASSEMBLY__ */
+ 
+ #define ANNOTATE_RETPOLINE_SAFE					\
+@@ -536,6 +547,8 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
+ 
+ DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+ 
++extern u16 mds_verw_sel;
++
+ #include <asm/segment.h>
+ 
+ /**
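
The entry.S and nospec-branch.h hunks above place a selector word (mds_verw_sel) in .entry.text so it stays mapped when KPTI swaps page tables, and CLEAR_CPU_BUFFERS is patched in via ALTERNATIVE only when X86_FEATURE_CLEAR_CPU_BUF is set. A hedged sketch of how the memory-operand form might be emitted from C; only this form, not the register form, triggers the microcode buffer clear, and clear_cpu_buffers() here is an illustrative name, not a kernel function:

extern u16 mds_verw_sel;	/* the selector word defined in entry.S above */

static inline void clear_cpu_buffers(void)
{
	/* "m" forces the memory-operand encoding; "cc" covers the ZF clobber */
	asm volatile("verw %[sel]" : : [sel] "m" (mds_verw_sel) : "cc");
}

In the kernel itself the assembly macro is used on the exit-to-user path so that unaffected CPUs pay no cost once the alternative is patched out.
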
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index c876f1d36a81a..832f4413d96a8 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -591,7 +591,7 @@ static bool try_fixup_enqcmd_gp(void)
+ 	if (!mm_valid_pasid(current->mm))
+ 		return false;
+ 
+-	pasid = current->mm->pasid;
++	pasid = mm_get_enqcmd_pasid(current->mm);
+ 
+ 	/*
+ 	 * Did this thread already have its PASID activated?
+diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
+index b29ceb19e46ec..9d63cfff1fd41 100644
+--- a/arch/x86/mm/numa.c
++++ b/arch/x86/mm/numa.c
+@@ -964,7 +964,7 @@ static int __init cmp_memblk(const void *a, const void *b)
+ 	const struct numa_memblk *ma = *(const struct numa_memblk **)a;
+ 	const struct numa_memblk *mb = *(const struct numa_memblk **)b;
+ 
+-	return ma->start - mb->start;
++	return (ma->start > mb->start) - (ma->start < mb->start);
+ }
+ 
+ static struct numa_memblk *numa_memblk_list[NR_NODE_MEMBLKS] __initdata;
+@@ -974,14 +974,12 @@ static struct numa_memblk *numa_memblk_list[NR_NODE_MEMBLKS] __initdata;
+  * @start: address to begin fill
+  * @end: address to end fill
+  *
+- * Find and extend numa_meminfo memblks to cover the @start-@end
+- * physical address range, such that the first memblk includes
+- * @start, the last memblk includes @end, and any gaps in between
+- * are filled.
++ * Find and extend numa_meminfo memblks to cover the physical
++ * address range @start-@end
+  *
+  * RETURNS:
+  * 0		  : Success
+- * NUMA_NO_MEMBLK : No memblk exists in @start-@end range
++ * NUMA_NO_MEMBLK : No memblks exist in address range @start-@end
+  */
+ 
+ int __init numa_fill_memblks(u64 start, u64 end)
+@@ -993,17 +991,14 @@ int __init numa_fill_memblks(u64 start, u64 end)
+ 
+ 	/*
+ 	 * Create a list of pointers to numa_meminfo memblks that
+-	 * overlap start, end. Exclude (start == bi->end) since
+-	 * end addresses in both a CFMWS range and a memblk range
+-	 * are exclusive.
+-	 *
+-	 * This list of pointers is used to make in-place changes
+-	 * that fill out the numa_meminfo memblks.
++	 * overlap start, end. The list is used to make in-place
++	 * changes that fill out the numa_meminfo memblks.
+ 	 */
+ 	for (int i = 0; i < mi->nr_blks; i++) {
+ 		struct numa_memblk *bi = &mi->blk[i];
+ 
+-		if (start < bi->end && end >= bi->start) {
++		if (memblock_addrs_overlap(start, end - start, bi->start,
++					   bi->end - bi->start)) {
+ 			blk[count] = &mi->blk[i];
+ 			count++;
+ 		}
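
The cmp_memblk() change above fixes a classic comparator bug: for 64-bit start addresses, returning ma->start - mb->start truncates the difference to int and can flip its sign, so the sort may order blocks incorrectly. The (a > b) - (a < b) idiom always returns -1, 0 or 1. A small standalone sketch of the two variants:

#include <stdint.h>
#include <stdlib.h>

static int cmp_bad(const void *pa, const void *pb)
{
	uint64_t a = *(const uint64_t *)pa, b = *(const uint64_t *)pb;

	return a - b;	/* truncated to int: wrong once the delta exceeds 31 bits */
}

static int cmp_good(const void *pa, const void *pb)
{
	uint64_t a = *(const uint64_t *)pa, b = *(const uint64_t *)pb;

	return (a > b) - (a < b);	/* sign-safe three-way compare */
}

For example, with a = 0x100000000 and b = 0, cmp_bad() returns 0 and qsort() treats the two as equal, while cmp_good() correctly returns 1.
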
+diff --git a/block/blk-map.c b/block/blk-map.c
+index 8584babf3ea0c..71210cdb34426 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -205,12 +205,19 @@ static int bio_copy_user_iov(struct request *rq, struct rq_map_data *map_data,
+ 	/*
+ 	 * success
+ 	 */
+-	if ((iov_iter_rw(iter) == WRITE &&
+-	     (!map_data || !map_data->null_mapped)) ||
+-	    (map_data && map_data->from_user)) {
++	if (iov_iter_rw(iter) == WRITE &&
++	     (!map_data || !map_data->null_mapped)) {
+ 		ret = bio_copy_from_iter(bio, iter);
+ 		if (ret)
+ 			goto cleanup;
++	} else if (map_data && map_data->from_user) {
++		struct iov_iter iter2 = *iter;
++
++		/* This is the copy-in part of SG_DXFER_TO_FROM_DEV. */
++		iter2.data_source = ITER_SOURCE;
++		ret = bio_copy_from_iter(bio, &iter2);
++		if (ret)
++			goto cleanup;
+ 	} else {
+ 		if (bmd->is_our_pages)
+ 			zero_fill_bio(bio);
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index 7906030176538..c856c417a1451 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -479,9 +479,8 @@ static int ivpu_pci_init(struct ivpu_device *vdev)
+ 	/* Clear any pending errors */
+ 	pcie_capability_clear_word(pdev, PCI_EXP_DEVSTA, 0x3f);
+ 
+-	/* VPU 37XX does not require 10m D3hot delay */
+-	if (ivpu_hw_gen(vdev) == IVPU_HW_37XX)
+-		pdev->d3hot_delay = 0;
++	/* NPU does not require the 10 ms D3hot delay */
++	pdev->d3hot_delay = 0;
+ 
+ 	ret = pcim_enable_device(pdev);
+ 	if (ret) {
+diff --git a/drivers/accel/ivpu/ivpu_hw_37xx.c b/drivers/accel/ivpu/ivpu_hw_37xx.c
+index d530384f8d607..e658fcf849f7a 100644
+--- a/drivers/accel/ivpu/ivpu_hw_37xx.c
++++ b/drivers/accel/ivpu/ivpu_hw_37xx.c
+@@ -523,7 +523,7 @@ static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev)
+ 	u32 val = REGV_RD32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES);
+ 
+ 	val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, NOSNOOP_OVERRIDE_EN, val);
+-	val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AW_NOSNOOP_OVERRIDE, val);
++	val = REG_CLR_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AW_NOSNOOP_OVERRIDE, val);
+ 	val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val);
+ 
+ 	REGV_WR32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, val);
+diff --git a/drivers/accel/ivpu/ivpu_hw_40xx.c b/drivers/accel/ivpu/ivpu_hw_40xx.c
+index e691c49c98410..40f9ee99eccc1 100644
+--- a/drivers/accel/ivpu/ivpu_hw_40xx.c
++++ b/drivers/accel/ivpu/ivpu_hw_40xx.c
+@@ -24,7 +24,7 @@
+ #define SKU_HW_ID_SHIFT              16u
+ #define SKU_HW_ID_MASK               0xffff0000u
+ 
+-#define PLL_CONFIG_DEFAULT           0x1
++#define PLL_CONFIG_DEFAULT           0x0
+ #define PLL_CDYN_DEFAULT             0x80
+ #define PLL_EPP_DEFAULT              0x80
+ #define PLL_REF_CLK_FREQ	     (50 * 1000000)
+@@ -526,7 +526,7 @@ static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev)
+ 	u32 val = REGV_RD32(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES);
+ 
+ 	val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, SNOOP_OVERRIDE_EN, val);
+-	val = REG_CLR_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AW_SNOOP_OVERRIDE, val);
++	val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AW_SNOOP_OVERRIDE, val);
+ 	val = REG_CLR_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val);
+ 
+ 	REGV_WR32(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, val);
+@@ -700,7 +700,6 @@ static int ivpu_hw_40xx_info_init(struct ivpu_device *vdev)
+ {
+ 	struct ivpu_hw_info *hw = vdev->hw;
+ 	u32 tile_disable;
+-	u32 tile_enable;
+ 	u32 fuse;
+ 
+ 	fuse = REGB_RD32(VPU_40XX_BUTTRESS_TILE_FUSE);
+@@ -721,10 +720,6 @@ static int ivpu_hw_40xx_info_init(struct ivpu_device *vdev)
+ 	else
+ 		ivpu_dbg(vdev, MISC, "Fuse: All %d tiles enabled\n", TILE_MAX_NUM);
+ 
+-	tile_enable = (~tile_disable) & TILE_MAX_MASK;
+-
+-	hw->sku = REG_SET_FLD_NUM(SKU, HW_ID, LNL_HW_ID, hw->sku);
+-	hw->sku = REG_SET_FLD_NUM(SKU, TILE, tile_enable, hw->sku);
+ 	hw->tile_fuse = tile_disable;
+ 	hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT;
+ 
+diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c
+index 2538c78fbebe2..9898946174fd9 100644
+--- a/drivers/accel/ivpu/ivpu_mmu.c
++++ b/drivers/accel/ivpu/ivpu_mmu.c
+@@ -533,7 +533,6 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
+ 	mmu->cmdq.cons = 0;
+ 
+ 	memset(mmu->evtq.base, 0, IVPU_MMU_EVTQ_SIZE);
+-	clflush_cache_range(mmu->evtq.base, IVPU_MMU_EVTQ_SIZE);
+ 	mmu->evtq.prod = 0;
+ 	mmu->evtq.cons = 0;
+ 
+@@ -847,8 +846,6 @@ static u32 *ivpu_mmu_get_event(struct ivpu_device *vdev)
+ 	if (!CIRC_CNT(IVPU_MMU_Q_IDX(evtq->prod), IVPU_MMU_Q_IDX(evtq->cons), IVPU_MMU_Q_COUNT))
+ 		return NULL;
+ 
+-	clflush_cache_range(evt, IVPU_MMU_EVTQ_CMD_SIZE);
+-
+ 	evtq->cons = (evtq->cons + 1) & IVPU_MMU_Q_WRAP_MASK;
+ 	REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, evtq->cons);
+ 
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 3a5f3255f51b3..da2e74fce2d99 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -48,6 +48,7 @@ enum {
+ enum board_ids {
+ 	/* board IDs by feature in alphabetical order */
+ 	board_ahci,
++	board_ahci_43bit_dma,
+ 	board_ahci_ign_iferr,
+ 	board_ahci_low_power,
+ 	board_ahci_no_debounce_delay,
+@@ -128,6 +129,13 @@ static const struct ata_port_info ahci_port_info[] = {
+ 		.udma_mask	= ATA_UDMA6,
+ 		.port_ops	= &ahci_ops,
+ 	},
++	[board_ahci_43bit_dma] = {
++		AHCI_HFLAGS	(AHCI_HFLAG_43BIT_ONLY),
++		.flags		= AHCI_FLAG_COMMON,
++		.pio_mask	= ATA_PIO4,
++		.udma_mask	= ATA_UDMA6,
++		.port_ops	= &ahci_ops,
++	},
+ 	[board_ahci_ign_iferr] = {
+ 		AHCI_HFLAGS	(AHCI_HFLAG_IGN_IRQ_IF_ERR),
+ 		.flags		= AHCI_FLAG_COMMON,
+@@ -597,14 +605,14 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(PROMISE, 0x3f20), board_ahci },	/* PDC42819 */
+ 	{ PCI_VDEVICE(PROMISE, 0x3781), board_ahci },   /* FastTrak TX8660 ahci-mode */
+ 
+-	/* Asmedia */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci },	/* ASM1060 */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci },	/* ASM1060 */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci },	/* ASM1061 */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci },	/* ASM1062 */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0621), board_ahci },   /* ASM1061R */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0622), board_ahci },   /* ASM1062R */
+-	{ PCI_VDEVICE(ASMEDIA, 0x0624), board_ahci },   /* ASM1062+JMB575 */
++	/* ASMedia */
++	{ PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci_43bit_dma },	/* ASM1060 */
++	{ PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci_43bit_dma },	/* ASM1060 */
++	{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci_43bit_dma },	/* ASM1061 */
++	{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci_43bit_dma },	/* ASM1061/1062 */
++	{ PCI_VDEVICE(ASMEDIA, 0x0621), board_ahci_43bit_dma },	/* ASM1061R */
++	{ PCI_VDEVICE(ASMEDIA, 0x0622), board_ahci_43bit_dma },	/* ASM1062R */
++	{ PCI_VDEVICE(ASMEDIA, 0x0624), board_ahci_43bit_dma },	/* ASM1062+JMB575 */
+ 	{ PCI_VDEVICE(ASMEDIA, 0x1062), board_ahci },	/* ASM1062A */
+ 	{ PCI_VDEVICE(ASMEDIA, 0x1064), board_ahci },	/* ASM1064 */
+ 	{ PCI_VDEVICE(ASMEDIA, 0x1164), board_ahci },   /* ASM1164 */
+@@ -663,6 +671,11 @@ MODULE_PARM_DESC(mobile_lpm_policy, "Default LPM policy for mobile chipsets");
+ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
+ 					 struct ahci_host_priv *hpriv)
+ {
++	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == 0x1166) {
++		dev_info(&pdev->dev, "ASM1166 has only six ports\n");
++		hpriv->saved_port_map = 0x3f;
++	}
++
+ 	if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) {
+ 		dev_info(&pdev->dev, "JMB361 has only one port\n");
+ 		hpriv->saved_port_map = 1;
+@@ -949,11 +962,20 @@ static int ahci_pci_device_resume(struct device *dev)
+ 
+ #endif /* CONFIG_PM */
+ 
+-static int ahci_configure_dma_masks(struct pci_dev *pdev, int using_dac)
++static int ahci_configure_dma_masks(struct pci_dev *pdev,
++				    struct ahci_host_priv *hpriv)
+ {
+-	const int dma_bits = using_dac ? 64 : 32;
++	int dma_bits;
+ 	int rc;
+ 
++	if (hpriv->cap & HOST_CAP_64) {
++		dma_bits = 64;
++		if (hpriv->flags & AHCI_HFLAG_43BIT_ONLY)
++			dma_bits = 43;
++	} else {
++		dma_bits = 32;
++	}
++
+ 	/*
+ 	 * If the device fixup already set the dma_mask to some non-standard
+ 	 * value, don't extend it here. This happens on STA2X11, for example.
+@@ -1926,7 +1948,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	ahci_gtf_filter_workaround(host);
+ 
+ 	/* initialize adapter */
+-	rc = ahci_configure_dma_masks(pdev, hpriv->cap & HOST_CAP_64);
++	rc = ahci_configure_dma_masks(pdev, hpriv);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index 4bae95b06ae3c..df8f8a1a3a34c 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -247,6 +247,7 @@ enum {
+ 	AHCI_HFLAG_SUSPEND_PHYS		= BIT(26), /* handle PHYs during
+ 						      suspend/resume */
+ 	AHCI_HFLAG_NO_SXS		= BIT(28), /* SXS not supported */
++	AHCI_HFLAG_43BIT_ONLY		= BIT(29), /* 43bit DMA addr limit */
+ 
+ 	/* ap->flags bits */
+ 
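
The AHCI change above introduces AHCI_HFLAG_43BIT_ONLY because the ASMedia ASM106x parts advertise 64-bit addressing (HOST_CAP_64) yet can only drive 43 address bits, corrupting DMA above that limit. A condensed sketch of the mask selection, assuming the flag and capability bits from ahci.h; this is an illustration, not the driver's exact code:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int configure_dma(struct pci_dev *pdev, u32 cap, unsigned long hflags)
{
	int bits = 32;

	if (cap & HOST_CAP_64)
		bits = (hflags & AHCI_HFLAG_43BIT_ONLY) ? 43 : 64;

	/* DMA_BIT_MASK(43) caps bus addresses at 8 TiB - 1 */
	return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bits));
}
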
+diff --git a/drivers/ata/ahci_ceva.c b/drivers/ata/ahci_ceva.c
+index 64f7f7d6ba84e..11a2c199a7c24 100644
+--- a/drivers/ata/ahci_ceva.c
++++ b/drivers/ata/ahci_ceva.c
+@@ -88,7 +88,6 @@ struct ceva_ahci_priv {
+ 	u32 axicc;
+ 	bool is_cci_enabled;
+ 	int flags;
+-	struct reset_control *rst;
+ };
+ 
+ static unsigned int ceva_ahci_read_id(struct ata_device *dev,
+@@ -189,6 +188,60 @@ static const struct scsi_host_template ahci_platform_sht = {
+ 	AHCI_SHT(DRV_NAME),
+ };
+ 
++static int ceva_ahci_platform_enable_resources(struct ahci_host_priv *hpriv)
++{
++	int rc, i;
++
++	rc = ahci_platform_enable_regulators(hpriv);
++	if (rc)
++		return rc;
++
++	rc = ahci_platform_enable_clks(hpriv);
++	if (rc)
++		goto disable_regulator;
++
++	/* Assert the controller reset */
++	rc = ahci_platform_assert_rsts(hpriv);
++	if (rc)
++		goto disable_clks;
++
++	for (i = 0; i < hpriv->nports; i++) {
++		rc = phy_init(hpriv->phys[i]);
++		if (rc)
++			goto disable_rsts;
++	}
++
++	/* De-assert the controller reset */
++	ahci_platform_deassert_rsts(hpriv);
++
++	for (i = 0; i < hpriv->nports; i++) {
++		rc = phy_power_on(hpriv->phys[i]);
++		if (rc) {
++			phy_exit(hpriv->phys[i]);
++			goto disable_phys;
++		}
++	}
++
++	return 0;
++
++disable_rsts:
++	ahci_platform_deassert_rsts(hpriv);
++
++disable_phys:
++	while (--i >= 0) {
++		phy_power_off(hpriv->phys[i]);
++		phy_exit(hpriv->phys[i]);
++	}
++
++disable_clks:
++	ahci_platform_disable_clks(hpriv);
++
++disable_regulator:
++	ahci_platform_disable_regulators(hpriv);
++
++	return rc;
++}
++
+ static int ceva_ahci_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+@@ -203,47 +256,19 @@ static int ceva_ahci_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	cevapriv->ahci_pdev = pdev;
+-
+-	cevapriv->rst = devm_reset_control_get_optional_exclusive(&pdev->dev,
+-								  NULL);
+-	if (IS_ERR(cevapriv->rst))
+-		dev_err_probe(&pdev->dev, PTR_ERR(cevapriv->rst),
+-			      "failed to get reset\n");
+-
+ 	hpriv = ahci_platform_get_resources(pdev, 0);
+ 	if (IS_ERR(hpriv))
+ 		return PTR_ERR(hpriv);
+ 
+-	if (!cevapriv->rst) {
+-		rc = ahci_platform_enable_resources(hpriv);
+-		if (rc)
+-			return rc;
+-	} else {
+-		int i;
++	hpriv->rsts = devm_reset_control_get_optional_exclusive(&pdev->dev,
++								NULL);
++	if (IS_ERR(hpriv->rsts))
++		return dev_err_probe(&pdev->dev, PTR_ERR(hpriv->rsts),
++				     "failed to get reset\n");
+ 
+-		rc = ahci_platform_enable_clks(hpriv);
+-		if (rc)
+-			return rc;
+-		/* Assert the controller reset */
+-		reset_control_assert(cevapriv->rst);
+-
+-		for (i = 0; i < hpriv->nports; i++) {
+-			rc = phy_init(hpriv->phys[i]);
+-			if (rc)
+-				return rc;
+-		}
+-
+-		/* De-assert the controller reset */
+-		reset_control_deassert(cevapriv->rst);
+-
+-		for (i = 0; i < hpriv->nports; i++) {
+-			rc = phy_power_on(hpriv->phys[i]);
+-			if (rc) {
+-				phy_exit(hpriv->phys[i]);
+-				return rc;
+-			}
+-		}
+-	}
++	rc = ceva_ahci_platform_enable_resources(hpriv);
++	if (rc)
++		return rc;
+ 
+ 	if (of_property_read_bool(np, "ceva,broken-gen2"))
+ 		cevapriv->flags = CEVA_FLAG_BROKEN_GEN2;
+@@ -252,52 +277,60 @@ static int ceva_ahci_probe(struct platform_device *pdev)
+ 	if (of_property_read_u8_array(np, "ceva,p0-cominit-params",
+ 					(u8 *)&cevapriv->pp2c[0], 4) < 0) {
+ 		dev_warn(dev, "ceva,p0-cominit-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	if (of_property_read_u8_array(np, "ceva,p1-cominit-params",
+ 					(u8 *)&cevapriv->pp2c[1], 4) < 0) {
+ 		dev_warn(dev, "ceva,p1-cominit-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	/* Read OOB timing value for COMWAKE from device-tree*/
+ 	if (of_property_read_u8_array(np, "ceva,p0-comwake-params",
+ 					(u8 *)&cevapriv->pp3c[0], 4) < 0) {
+ 		dev_warn(dev, "ceva,p0-comwake-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	if (of_property_read_u8_array(np, "ceva,p1-comwake-params",
+ 					(u8 *)&cevapriv->pp3c[1], 4) < 0) {
+ 		dev_warn(dev, "ceva,p1-comwake-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	/* Read phy BURST timing value from device-tree */
+ 	if (of_property_read_u8_array(np, "ceva,p0-burst-params",
+ 					(u8 *)&cevapriv->pp4c[0], 4) < 0) {
+ 		dev_warn(dev, "ceva,p0-burst-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	if (of_property_read_u8_array(np, "ceva,p1-burst-params",
+ 					(u8 *)&cevapriv->pp4c[1], 4) < 0) {
+ 		dev_warn(dev, "ceva,p1-burst-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	/* Read phy RETRY interval timing value from device-tree */
+ 	if (of_property_read_u16_array(np, "ceva,p0-retry-params",
+ 					(u16 *)&cevapriv->pp5c[0], 2) < 0) {
+ 		dev_warn(dev, "ceva,p0-retry-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	if (of_property_read_u16_array(np, "ceva,p1-retry-params",
+ 					(u16 *)&cevapriv->pp5c[1], 2) < 0) {
+ 		dev_warn(dev, "ceva,p1-retry-params property not defined\n");
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto disable_resources;
+ 	}
+ 
+ 	/*
+@@ -335,7 +368,7 @@ static int __maybe_unused ceva_ahci_resume(struct device *dev)
+ 	struct ahci_host_priv *hpriv = host->private_data;
+ 	int rc;
+ 
+-	rc = ahci_platform_enable_resources(hpriv);
++	rc = ceva_ahci_platform_enable_resources(hpriv);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 09ed67772fae4..be3412cdb22e7 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2001,6 +2001,33 @@ bool ata_dev_power_init_tf(struct ata_device *dev, struct ata_taskfile *tf,
+ 	return true;
+ }
+ 
++static bool ata_dev_power_is_active(struct ata_device *dev)
++{
++	struct ata_taskfile tf;
++	unsigned int err_mask;
++
++	ata_tf_init(dev, &tf);
++	tf.flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR;
++	tf.protocol = ATA_PROT_NODATA;
++	tf.command = ATA_CMD_CHK_POWER;
++
++	err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
++	if (err_mask) {
++		ata_dev_err(dev, "Check power mode failed (err_mask=0x%x)\n",
++			    err_mask);
++		/*
++		 * Assume we are in standby mode so that we always force a
++		 * spinup in ata_dev_power_set_active().
++		 */
++		return false;
++	}
++
++	ata_dev_dbg(dev, "Power mode: 0x%02x\n", tf.nsect);
++
++	/* Active or idle */
++	return tf.nsect == 0xff;
++}
++
+ /**
+  *	ata_dev_power_set_standby - Set a device power mode to standby
+  *	@dev: target device
+@@ -2017,6 +2044,11 @@ void ata_dev_power_set_standby(struct ata_device *dev)
+ 	struct ata_taskfile tf;
+ 	unsigned int err_mask;
+ 
++	/* If the device is already sleeping or in standby, do nothing. */
++	if ((dev->flags & ATA_DFLAG_SLEEPING) ||
++	    !ata_dev_power_is_active(dev))
++		return;
++
+ 	/*
+ 	 * Some odd clown BIOSes issue spindown on power off (ACPI S4 or S5)
+ 	 * causing some drives to spin up and down again. For these, do nothing
+@@ -2042,33 +2074,6 @@ void ata_dev_power_set_standby(struct ata_device *dev)
+ 			    err_mask);
+ }
+ 
+-static bool ata_dev_power_is_active(struct ata_device *dev)
+-{
+-	struct ata_taskfile tf;
+-	unsigned int err_mask;
+-
+-	ata_tf_init(dev, &tf);
+-	tf.flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR;
+-	tf.protocol = ATA_PROT_NODATA;
+-	tf.command = ATA_CMD_CHK_POWER;
+-
+-	err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
+-	if (err_mask) {
+-		ata_dev_err(dev, "Check power mode failed (err_mask=0x%x)\n",
+-			    err_mask);
+-		/*
+-		 * Assume we are in standby mode so that we always force a
+-		 * spinup in ata_dev_power_set_active().
+-		 */
+-		return false;
+-	}
+-
+-	ata_dev_dbg(dev, "Power mode: 0x%02x\n", tf.nsect);
+-
+-	/* Active or idle */
+-	return tf.nsect == 0xff;
+-}
+-
+ /**
+  *	ata_dev_power_set_active -  Set a device power mode to active
+  *	@dev: target device
+diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
+index cf6883756155a..37eff1c974515 100644
+--- a/drivers/block/aoe/aoeblk.c
++++ b/drivers/block/aoe/aoeblk.c
+@@ -333,6 +333,7 @@ aoeblk_gdalloc(void *vp)
+ 	struct gendisk *gd;
+ 	mempool_t *mp;
+ 	struct blk_mq_tag_set *set;
++	sector_t ssize;
+ 	ulong flags;
+ 	int late = 0;
+ 	int err;
+@@ -395,7 +396,7 @@ aoeblk_gdalloc(void *vp)
+ 	gd->minors = AOE_PARTITIONS;
+ 	gd->fops = &aoe_bdops;
+ 	gd->private_data = d;
+-	set_capacity(gd, d->ssize);
++	ssize = d->ssize;
+ 	snprintf(gd->disk_name, sizeof gd->disk_name, "etherd/e%ld.%d",
+ 		d->aoemajor, d->aoeminor);
+ 
+@@ -404,6 +405,8 @@ aoeblk_gdalloc(void *vp)
+ 
+ 	spin_unlock_irqrestore(&d->lock, flags);
+ 
++	set_capacity(gd, ssize);
++
+ 	err = device_add_disk(NULL, gd, aoe_attr_groups);
+ 	if (err)
+ 		goto out_disk_cleanup;
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 47556d8ccc320..2c846eed5a2af 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -1627,14 +1627,15 @@ static int virtblk_freeze(struct virtio_device *vdev)
+ {
+ 	struct virtio_blk *vblk = vdev->priv;
+ 
++	/* Ensure no requests in virtqueues before deleting vqs. */
++	blk_mq_freeze_queue(vblk->disk->queue);
++
+ 	/* Ensure we don't receive any more interrupts */
+ 	virtio_reset_device(vdev);
+ 
+ 	/* Make sure no work handler is accessing the device. */
+ 	flush_work(&vblk->config_work);
+ 
+-	blk_mq_quiesce_queue(vblk->disk->queue);
+-
+ 	vdev->config->del_vqs(vdev);
+ 	kfree(vblk->vqs);
+ 
+@@ -1652,7 +1653,7 @@ static int virtblk_restore(struct virtio_device *vdev)
+ 
+ 	virtio_device_ready(vdev);
+ 
+-	blk_mq_unquiesce_queue(vblk->disk->queue);
++	blk_mq_unfreeze_queue(vblk->disk->queue);
+ 	return 0;
+ }
+ #endif
+diff --git a/drivers/bus/imx-weim.c b/drivers/bus/imx-weim.c
+index 42c9386a7b423..f9fd1582f150d 100644
+--- a/drivers/bus/imx-weim.c
++++ b/drivers/bus/imx-weim.c
+@@ -117,7 +117,7 @@ static int imx_weim_gpr_setup(struct platform_device *pdev)
+ 		i++;
+ 	}
+ 
+-	if (i == 0 || i % 4)
++	if (i == 0)
+ 		goto err;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(gprvals); i++) {
+diff --git a/drivers/cache/ax45mp_cache.c b/drivers/cache/ax45mp_cache.c
+index 57186c58dc849..1d7dd3d2c101c 100644
+--- a/drivers/cache/ax45mp_cache.c
++++ b/drivers/cache/ax45mp_cache.c
+@@ -129,8 +129,12 @@ static void ax45mp_dma_cache_wback(phys_addr_t paddr, size_t size)
+ 	unsigned long line_size;
+ 	unsigned long flags;
+ 
++	if (unlikely(start == end))
++		return;
++
+ 	line_size = ax45mp_priv.ax45mp_cache_line_size;
+ 	start = start & (~(line_size - 1));
++	end = ((end + line_size - 1) & (~(line_size - 1)));
+ 	local_irq_save(flags);
+ 	ax45mp_cpu_dcache_wb_range(start, end);
+ 	local_irq_restore(flags);
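
The ax45mp change above makes the writeback range line-granular: the start is rounded down and the end rounded up to the cache line size, and zero-length ranges are skipped before any work is done. A standalone sketch of the alignment arithmetic, assuming the line size is a power of two:

#include <stdint.h>

static void wb_range(uintptr_t start, uintptr_t end, uintptr_t line)
{
	if (start == end)
		return;				/* nothing to write back */

	start &= ~(line - 1);			/* round down to line start */
	end = (end + line - 1) & ~(line - 1);	/* round up to line end */

	for (uintptr_t p = start; p < end; p += line)
		;	/* one cache-maintenance op per line would go here */
}

Without the end round-up, a range ending mid-line would miss its final line; the empty-range check avoids a spurious writeback when a caller passes a zero-length range.
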
+diff --git a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+index 2621ff8a93764..de53eddf6796b 100644
+--- a/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
++++ b/drivers/crypto/virtio/virtio_crypto_akcipher_algs.c
+@@ -104,7 +104,8 @@ static void virtio_crypto_dataq_akcipher_callback(struct virtio_crypto_request *
+ }
+ 
+ static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher_ctx *ctx,
+-		struct virtio_crypto_ctrl_header *header, void *para,
++		struct virtio_crypto_ctrl_header *header,
++		struct virtio_crypto_akcipher_session_para *para,
+ 		const uint8_t *key, unsigned int keylen)
+ {
+ 	struct scatterlist outhdr_sg, key_sg, inhdr_sg, *sgs[3];
+@@ -128,7 +129,7 @@ static int virtio_crypto_alg_akcipher_init_session(struct virtio_crypto_akcipher
+ 
+ 	ctrl = &vc_ctrl_req->ctrl;
+ 	memcpy(&ctrl->header, header, sizeof(ctrl->header));
+-	memcpy(&ctrl->u, para, sizeof(ctrl->u));
++	memcpy(&ctrl->u.akcipher_create_session.para, para, sizeof(*para));
+ 	input = &vc_ctrl_req->input;
+ 	input->status = cpu_to_le32(VIRTIO_CRYPTO_ERR);
+ 
+diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
+index 2034eb4ce83fb..b3cec89126b24 100644
+--- a/drivers/cxl/acpi.c
++++ b/drivers/cxl/acpi.c
+@@ -194,31 +194,27 @@ struct cxl_cfmws_context {
+ 	int id;
+ };
+ 
+-static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
+-			   const unsigned long end)
++static int __cxl_parse_cfmws(struct acpi_cedt_cfmws *cfmws,
++			     struct cxl_cfmws_context *ctx)
+ {
+ 	int target_map[CXL_DECODER_MAX_INTERLEAVE];
+-	struct cxl_cfmws_context *ctx = arg;
+ 	struct cxl_port *root_port = ctx->root_port;
+ 	struct resource *cxl_res = ctx->cxl_res;
+ 	struct cxl_cxims_context cxims_ctx;
+ 	struct cxl_root_decoder *cxlrd;
+ 	struct device *dev = ctx->dev;
+-	struct acpi_cedt_cfmws *cfmws;
+ 	cxl_calc_hb_fn cxl_calc_hb;
+ 	struct cxl_decoder *cxld;
+ 	unsigned int ways, i, ig;
+ 	struct resource *res;
+ 	int rc;
+ 
+-	cfmws = (struct acpi_cedt_cfmws *) header;
+-
+ 	rc = cxl_acpi_cfmws_verify(dev, cfmws);
+ 	if (rc) {
+ 		dev_err(dev, "CFMWS range %#llx-%#llx not registered\n",
+ 			cfmws->base_hpa,
+ 			cfmws->base_hpa + cfmws->window_size - 1);
+-		return 0;
++		return rc;
+ 	}
+ 
+ 	rc = eiw_to_ways(cfmws->interleave_ways, &ways);
+@@ -254,7 +250,7 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
+ 
+ 	cxlrd = cxl_root_decoder_alloc(root_port, ways, cxl_calc_hb);
+ 	if (IS_ERR(cxlrd))
+-		return 0;
++		return PTR_ERR(cxlrd);
+ 
+ 	cxld = &cxlrd->cxlsd.cxld;
+ 	cxld->flags = cfmws_to_decoder_flags(cfmws->restrictions);
+@@ -298,16 +294,7 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
+ 		put_device(&cxld->dev);
+ 	else
+ 		rc = cxl_decoder_autoremove(dev, cxld);
+-	if (rc) {
+-		dev_err(dev, "Failed to add decode range: %pr", res);
+-		return rc;
+-	}
+-	dev_dbg(dev, "add: %s node: %d range [%#llx - %#llx]\n",
+-		dev_name(&cxld->dev),
+-		phys_to_target_node(cxld->hpa_range.start),
+-		cxld->hpa_range.start, cxld->hpa_range.end);
+-
+-	return 0;
++	return rc;
+ 
+ err_insert:
+ 	kfree(res->name);
+@@ -316,6 +303,29 @@ static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
+ 	return -ENOMEM;
+ }
+ 
++static int cxl_parse_cfmws(union acpi_subtable_headers *header, void *arg,
++			   const unsigned long end)
++{
++	struct acpi_cedt_cfmws *cfmws = (struct acpi_cedt_cfmws *)header;
++	struct cxl_cfmws_context *ctx = arg;
++	struct device *dev = ctx->dev;
++	int rc;
++
++	rc = __cxl_parse_cfmws(cfmws, ctx);
++	if (rc)
++		dev_err(dev,
++			"Failed to add decode range: [%#llx - %#llx] (%d)\n",
++			cfmws->base_hpa,
++			cfmws->base_hpa + cfmws->window_size - 1, rc);
++	else
++		dev_dbg(dev, "decode range: node: %d range [%#llx - %#llx]\n",
++			phys_to_target_node(cfmws->base_hpa), cfmws->base_hpa,
++			cfmws->base_hpa + cfmws->window_size - 1);
++
++	/* never fail cxl_acpi load for a single window failure */
++	return 0;
++}
++
+ __mock struct acpi_device *to_cxl_host_bridge(struct device *host,
+ 					      struct device *dev)
+ {
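
The cxl/acpi.c refactor above splits the CFMWS parser in two: the inner __cxl_parse_cfmws() now returns real error codes, while the subtable callback logs one consolidated message and swallows the error, so a single bad window can never fail the whole cxl_acpi load. A hedged sketch of the wrapper pattern, where parse_one() and parse_cb() are hypothetical stand-ins:

#include <linux/errno.h>
#include <linux/printk.h>

static int parse_one(int window)
{
	return window < 0 ? -EINVAL : 0;	/* stand-in for the real parser */
}

static int parse_cb(int window)
{
	int rc = parse_one(window);

	if (rc)
		pr_err("window %d rejected (%d)\n", window, rc);
	return 0;	/* never propagate: other windows must still register */
}
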
+diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
+index 37e1652afbc7e..deae3289de844 100644
+--- a/drivers/cxl/core/pci.c
++++ b/drivers/cxl/core/pci.c
+@@ -476,9 +476,9 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
+ 		allowed++;
+ 	}
+ 
+-	if (!allowed) {
+-		cxl_set_mem_enable(cxlds, 0);
+-		info->mem_enabled = 0;
++	if (!allowed && info->mem_enabled) {
++		dev_err(dev, "Range register decodes outside platform defined CXL ranges.\n");
++		return -ENXIO;
+ 	}
+ 
+ 	/*
+@@ -931,11 +931,21 @@ static void cxl_handle_rdport_errors(struct cxl_dev_state *cxlds) { }
+ void cxl_cor_error_detected(struct pci_dev *pdev)
+ {
+ 	struct cxl_dev_state *cxlds = pci_get_drvdata(pdev);
++	struct device *dev = &cxlds->cxlmd->dev;
++
++	scoped_guard(device, dev) {
++		if (!dev->driver) {
++			dev_warn(&pdev->dev,
++				 "%s: memdev disabled, abort error handling\n",
++				 dev_name(dev));
++			return;
++		}
+ 
+-	if (cxlds->rcd)
+-		cxl_handle_rdport_errors(cxlds);
++		if (cxlds->rcd)
++			cxl_handle_rdport_errors(cxlds);
+ 
+-	cxl_handle_endpoint_cor_ras(cxlds);
++		cxl_handle_endpoint_cor_ras(cxlds);
++	}
+ }
+ EXPORT_SYMBOL_NS_GPL(cxl_cor_error_detected, CXL);
+ 
+@@ -947,16 +957,25 @@ pci_ers_result_t cxl_error_detected(struct pci_dev *pdev,
+ 	struct device *dev = &cxlmd->dev;
+ 	bool ue;
+ 
+-	if (cxlds->rcd)
+-		cxl_handle_rdport_errors(cxlds);
++	scoped_guard(device, dev) {
++		if (!dev->driver) {
++			dev_warn(&pdev->dev,
++				 "%s: memdev disabled, abort error handling\n",
++				 dev_name(dev));
++			return PCI_ERS_RESULT_DISCONNECT;
++		}
++
++		if (cxlds->rcd)
++			cxl_handle_rdport_errors(cxlds);
++		/*
++		 * A frozen channel indicates an impending reset which is fatal to
++		 * CXL.mem operation, and will likely crash the system. On the off
++		 * chance the situation is recoverable, dump the status of the RAS
++		 * capability registers and bounce the active state of the memdev.
++		 */
++		ue = cxl_handle_endpoint_ras(cxlds);
++	}
+ 
+-	/*
+-	 * A frozen channel indicates an impending reset which is fatal to
+-	 * CXL.mem operation, and will likely crash the system. On the off
+-	 * chance the situation is recoverable dump the status of the RAS
+-	 * capability registers and bounce the active state of the memdev.
+-	 */
+-	ue = cxl_handle_endpoint_ras(cxlds);
+ 
+ 	switch (state) {
+ 	case pci_channel_io_normal:
+diff --git a/drivers/dma/apple-admac.c b/drivers/dma/apple-admac.c
+index 5b63996640d9d..9588773dd2eb6 100644
+--- a/drivers/dma/apple-admac.c
++++ b/drivers/dma/apple-admac.c
+@@ -57,6 +57,8 @@
+ 
+ #define REG_BUS_WIDTH(ch)	(0x8040 + (ch) * 0x200)
+ 
++#define BUS_WIDTH_WORD_SIZE	GENMASK(3, 0)
++#define BUS_WIDTH_FRAME_SIZE	GENMASK(7, 4)
+ #define BUS_WIDTH_8BIT		0x00
+ #define BUS_WIDTH_16BIT		0x01
+ #define BUS_WIDTH_32BIT		0x02
+@@ -740,7 +742,8 @@ static int admac_device_config(struct dma_chan *chan,
+ 	struct admac_data *ad = adchan->host;
+ 	bool is_tx = admac_chan_direction(adchan->no) == DMA_MEM_TO_DEV;
+ 	int wordsize = 0;
+-	u32 bus_width = 0;
++	u32 bus_width = readl_relaxed(ad->base + REG_BUS_WIDTH(adchan->no)) &
++		~(BUS_WIDTH_WORD_SIZE | BUS_WIDTH_FRAME_SIZE);
+ 
+ 	switch (is_tx ? config->dst_addr_width : config->src_addr_width) {
+ 	case DMA_SLAVE_BUSWIDTH_1_BYTE:
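
The apple-admac change above turns a blind register write into a read-modify-write: only the word-size and frame-size fields (GENMASK(3, 0) and GENMASK(7, 4)) are cleared before the new width is OR-ed in, so the remaining bits of REG_BUS_WIDTH keep their hardware-configured values. A minimal sketch of the pattern with the masks spelled out as plain constants:

#include <stdint.h>

#define WORD_SIZE_MASK	0x0000000fu	/* bits 3:0, i.e. GENMASK(3, 0) */
#define FRAME_SIZE_MASK	0x000000f0u	/* bits 7:4, i.e. GENMASK(7, 4) */

static uint32_t update_bus_width(uint32_t reg, uint32_t width)
{
	reg &= ~(WORD_SIZE_MASK | FRAME_SIZE_MASK);	/* clear only our fields */
	return reg | width;				/* other bits survive */
}
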
+diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
+index 0745d9e7d259b..406f169b09a75 100644
+--- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
++++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
+@@ -176,7 +176,7 @@ dw_edma_debugfs_regs_wr(struct dw_edma *dw, struct dentry *dent)
+ 	};
+ 	struct dentry *regs_dent, *ch_dent;
+ 	int nr_entries, i;
+-	char name[16];
++	char name[32];
+ 
+ 	regs_dent = debugfs_create_dir(WRITE_STR, dent);
+ 
+@@ -239,7 +239,7 @@ static noinline_for_stack void dw_edma_debugfs_regs_rd(struct dw_edma *dw,
+ 	};
+ 	struct dentry *regs_dent, *ch_dent;
+ 	int nr_entries, i;
+-	char name[16];
++	char name[32];
+ 
+ 	regs_dent = debugfs_create_dir(READ_STR, dent);
+ 
+diff --git a/drivers/dma/dw-edma/dw-hdma-v0-debugfs.c b/drivers/dma/dw-edma/dw-hdma-v0-debugfs.c
+index 520c81978b085..dcdc57fe976c1 100644
+--- a/drivers/dma/dw-edma/dw-hdma-v0-debugfs.c
++++ b/drivers/dma/dw-edma/dw-hdma-v0-debugfs.c
+@@ -116,7 +116,7 @@ static void dw_hdma_debugfs_regs_ch(struct dw_edma *dw, enum dw_edma_dir dir,
+ static void dw_hdma_debugfs_regs_wr(struct dw_edma *dw, struct dentry *dent)
+ {
+ 	struct dentry *regs_dent, *ch_dent;
+-	char name[16];
++	char name[32];
+ 	int i;
+ 
+ 	regs_dent = debugfs_create_dir(WRITE_STR, dent);
+@@ -133,7 +133,7 @@ static void dw_hdma_debugfs_regs_wr(struct dw_edma *dw, struct dentry *dent)
+ static void dw_hdma_debugfs_regs_rd(struct dw_edma *dw, struct dentry *dent)
+ {
+ 	struct dentry *regs_dent, *ch_dent;
+-	char name[16];
++	char name[32];
+ 	int i;
+ 
+ 	regs_dent = debugfs_create_dir(READ_STR, dent);
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index 3a5595a1d4421..9b141369bea5d 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -805,7 +805,7 @@ fsl_qdma_irq_init(struct platform_device *pdev,
+ 	int i;
+ 	int cpu;
+ 	int ret;
+-	char irq_name[20];
++	char irq_name[32];
+ 
+ 	fsl_qdma->error_irq =
+ 		platform_get_irq_byname(pdev, "qdma-error");
+diff --git a/drivers/dma/sh/shdma.h b/drivers/dma/sh/shdma.h
+index 9c121a4b33ad8..f97d80343aea4 100644
+--- a/drivers/dma/sh/shdma.h
++++ b/drivers/dma/sh/shdma.h
+@@ -25,7 +25,7 @@ struct sh_dmae_chan {
+ 	const struct sh_dmae_slave_config *config; /* Slave DMA configuration */
+ 	int xmit_shift;			/* log_2(bytes_per_xfer) */
+ 	void __iomem *base;
+-	char dev_id[16];		/* unique name per DMAC of channel */
++	char dev_id[32];		/* unique name per DMAC of channel */
+ 	int pm_error;
+ 	dma_addr_t slave_addr;
+ };
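
The three name[16] to name[32] bumps above share one cause: snprintf() never overflows the buffer, but it silently truncates, and names like a "dw-edma-write" prefix plus a channel suffix no longer fit in 16 bytes (modern GCC also flags this with -Wformat-truncation). A small sketch showing how the return value reveals truncation; the prefix string here is illustrative:

#include <stdio.h>

int main(void)
{
	char name[16];
	int n = snprintf(name, sizeof(name), "%s_ch%u", "dw-edma-write", 12);

	/* n is the length the full string would have needed */
	if (n >= (int)sizeof(name))
		printf("truncated: wanted %d bytes, buffer holds %zu\n",
		       n + 1, sizeof(name));
	return 0;
}

"dw-edma-write_ch12" needs 19 bytes including the terminator, so the check fires for a 16-byte buffer.
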
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index f1f920861fa9d..5f8d2e93ff3fb 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -2404,6 +2404,11 @@ static int edma_probe(struct platform_device *pdev)
+ 	if (irq > 0) {
+ 		irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccint",
+ 					  dev_name(dev));
++		if (!irq_name) {
++			ret = -ENOMEM;
++			goto err_disable_pm;
++		}
++
+ 		ret = devm_request_irq(dev, irq, dma_irq_handler, 0, irq_name,
+ 				       ecc);
+ 		if (ret) {
+@@ -2420,6 +2425,11 @@ static int edma_probe(struct platform_device *pdev)
+ 	if (irq > 0) {
+ 		irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccerrint",
+ 					  dev_name(dev));
++		if (!irq_name) {
++			ret = -ENOMEM;
++			goto err_disable_pm;
++		}
++
+ 		ret = devm_request_irq(dev, irq, dma_ccerr_handler, 0, irq_name,
+ 				       ecc);
+ 		if (ret) {
+diff --git a/drivers/firewire/core-card.c b/drivers/firewire/core-card.c
+index 6ac5ff20a2fe2..8aaa7fcb2630d 100644
+--- a/drivers/firewire/core-card.c
++++ b/drivers/firewire/core-card.c
+@@ -429,7 +429,23 @@ static void bm_work(struct work_struct *work)
+ 	 */
+ 	card->bm_generation = generation;
+ 
+-	if (root_device == NULL) {
++	if (card->gap_count == 0) {
++		/*
++		 * If self IDs have inconsistent gap counts, do a
++		 * bus reset ASAP. The config rom read might never
++		 * complete, so don't wait for it. However, still
++		 * send a PHY configuration packet prior to the
++		 * bus reset. The PHY configuration packet might
++		 * fail, but 1394-2008 8.4.5.2 explicitly permits
++		 * it in this case, so it should be safe to try.
++		 */
++		new_root_id = local_id;
++		/*
++		 * We must always send a bus reset if the gap count
++		 * is inconsistent, so bypass the 5-reset limit.
++		 */
++		card->bm_retries = 0;
++	} else if (root_device == NULL) {
+ 		/*
+ 		 * Either link_on is false, or we failed to read the
+ 		 * config rom.  In either case, pick another root.
+diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
+index 83f5bb57fa4c4..83092d93f36a6 100644
+--- a/drivers/firmware/efi/arm-runtime.c
++++ b/drivers/firmware/efi/arm-runtime.c
+@@ -107,7 +107,7 @@ static int __init arm_enable_runtime_services(void)
+ 		efi_memory_desc_t *md;
+ 
+ 		for_each_efi_memory_desc(md) {
+-			int md_size = md->num_pages << EFI_PAGE_SHIFT;
++			u64 md_size = md->num_pages << EFI_PAGE_SHIFT;
+ 			struct resource *res;
+ 
+ 			if (!(md->attribute & EFI_MEMORY_SP))
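
The arm and riscv runtime fixes above are the same overflow: md->num_pages is a u64 and EFI_PAGE_SHIFT is 12, so any region of 2^19 pages (2 GiB) or more no longer fits in a signed int and the computed resource size goes wrong. A standalone sketch of the truncation:

#include <stdint.h>
#include <stdio.h>

#define EFI_PAGE_SHIFT 12

int main(void)
{
	uint64_t num_pages = 1ULL << 20;		/* a 4 GiB region */
	int bad = num_pages << EFI_PAGE_SHIFT;		/* truncates (here to 0) */
	uint64_t good = num_pages << EFI_PAGE_SHIFT;	/* 4294967296 */

	printf("bad=%d good=%llu\n", bad, (unsigned long long)good);
	return 0;
}
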
+diff --git a/drivers/firmware/efi/efi-init.c b/drivers/firmware/efi/efi-init.c
+index d4987d0130801..a00e07b853f22 100644
+--- a/drivers/firmware/efi/efi-init.c
++++ b/drivers/firmware/efi/efi-init.c
+@@ -143,15 +143,6 @@ static __init int is_usable_memory(efi_memory_desc_t *md)
+ 	case EFI_BOOT_SERVICES_DATA:
+ 	case EFI_CONVENTIONAL_MEMORY:
+ 	case EFI_PERSISTENT_MEMORY:
+-		/*
+-		 * Special purpose memory is 'soft reserved', which means it
+-		 * is set aside initially, but can be hotplugged back in or
+-		 * be assigned to the dax driver after boot.
+-		 */
+-		if (efi_soft_reserve_enabled() &&
+-		    (md->attribute & EFI_MEMORY_SP))
+-			return false;
+-
+ 		/*
+ 		 * According to the spec, these regions are no longer reserved
+ 		 * after calling ExitBootServices(). However, we can only use
+@@ -196,6 +187,16 @@ static __init void reserve_regions(void)
+ 		size = npages << PAGE_SHIFT;
+ 
+ 		if (is_memory(md)) {
++			/*
++			 * Special purpose memory is 'soft reserved', which
++			 * means it is set aside initially. Don't add a memblock
++			 * for it now so that it can be hotplugged back in or
++			 * be assigned to the dax driver after boot.
++			 */
++			if (efi_soft_reserve_enabled() &&
++			    (md->attribute & EFI_MEMORY_SP))
++				continue;
++
+ 			early_init_dt_add_memory_arch(paddr, size);
+ 
+ 			if (!is_usable_memory(md))
+diff --git a/drivers/firmware/efi/riscv-runtime.c b/drivers/firmware/efi/riscv-runtime.c
+index 09525fb5c240e..01f0f90ea4183 100644
+--- a/drivers/firmware/efi/riscv-runtime.c
++++ b/drivers/firmware/efi/riscv-runtime.c
+@@ -85,7 +85,7 @@ static int __init riscv_enable_runtime_services(void)
+ 		efi_memory_desc_t *md;
+ 
+ 		for_each_efi_memory_desc(md) {
+-			int md_size = md->num_pages << EFI_PAGE_SHIFT;
++			u64 md_size = md->num_pages << EFI_PAGE_SHIFT;
+ 			struct resource *res;
+ 
+ 			if (!(md->attribute & EFI_MEMORY_SP))
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 95d2a7b2ea3e2..15de124d5b402 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -2043,6 +2043,11 @@ EXPORT_SYMBOL_GPL(gpiochip_generic_free);
+ int gpiochip_generic_config(struct gpio_chip *gc, unsigned int offset,
+ 			    unsigned long config)
+ {
++#ifdef CONFIG_PINCTRL
++	if (list_empty(&gc->gpiodev->pin_ranges))
++		return -ENOTSUPP;
++#endif
++
+ 	return pinctrl_gpio_set_config(gc, offset, config);
+ }
+ EXPORT_SYMBOL_GPL(gpiochip_generic_config);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 50f57d4dfd8ff..31d4b5a2c5e83 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1041,6 +1041,8 @@ struct amdgpu_device {
+ 	bool				in_s3;
+ 	bool				in_s4;
+ 	bool				in_s0ix;
++	/* indicate amdgpu suspension status */
++	bool				suspend_complete;
+ 
+ 	enum pp_mp1_state               mp1_state;
+ 	struct amdgpu_doorbell_index doorbell_index;
+@@ -1509,9 +1511,11 @@ static inline int amdgpu_acpi_smart_shift_update(struct drm_device *dev,
+ #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
+ bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev);
+ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev);
++void amdgpu_choose_low_power_state(struct amdgpu_device *adev);
+ #else
+ static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; }
+ static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; }
++static inline void amdgpu_choose_low_power_state(struct amdgpu_device *adev) { }
+ #endif
+ 
+ #if defined(CONFIG_DRM_AMD_DC)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 2deebece810e7..7099ff9cf8c50 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -1519,4 +1519,22 @@ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev)
+ #endif /* CONFIG_AMD_PMC */
+ }
+ 
++/**
++ * amdgpu_choose_low_power_state
++ *
++ * @adev: amdgpu device pointer
++ *
++ * Choose the target low power state for the GPU
++ */
++void amdgpu_choose_low_power_state(struct amdgpu_device *adev)
++{
++	if (adev->in_runpm)
++		return;
++
++	if (amdgpu_acpi_is_s0ix_active(adev))
++		adev->in_s0ix = true;
++	else if (amdgpu_acpi_is_s3_active(adev))
++		adev->in_s3 = true;
++}
++
+ #endif /* CONFIG_SUSPEND */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 19bc8d47317bb..7f48c7ec41366 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4441,13 +4441,15 @@ int amdgpu_device_prepare(struct drm_device *dev)
+ 	struct amdgpu_device *adev = drm_to_adev(dev);
+ 	int i, r;
+ 
++	amdgpu_choose_low_power_state(adev);
++
+ 	if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ 		return 0;
+ 
+ 	/* Evict the majority of BOs before starting suspend sequence */
+ 	r = amdgpu_device_evict_resources(adev);
+ 	if (r)
+-		return r;
++		goto unprepare;
+ 
+ 	for (i = 0; i < adev->num_ip_blocks; i++) {
+ 		if (!adev->ip_blocks[i].status.valid)
+@@ -4456,10 +4458,15 @@ int amdgpu_device_prepare(struct drm_device *dev)
+ 			continue;
+ 		r = adev->ip_blocks[i].version->funcs->prepare_suspend((void *)adev);
+ 		if (r)
+-			return r;
++			goto unprepare;
+ 	}
+ 
+ 	return 0;
++
++unprepare:
++	adev->in_s0ix = adev->in_s3 = false;
++
++	return r;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index a7ad77ed09ca4..10c4a8cfa18a0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2449,6 +2449,7 @@ static int amdgpu_pmops_suspend(struct device *dev)
+ 	struct drm_device *drm_dev = dev_get_drvdata(dev);
+ 	struct amdgpu_device *adev = drm_to_adev(drm_dev);
+ 
++	adev->suspend_complete = false;
+ 	if (amdgpu_acpi_is_s0ix_active(adev))
+ 		adev->in_s0ix = true;
+ 	else if (amdgpu_acpi_is_s3_active(adev))
+@@ -2463,6 +2464,7 @@ static int amdgpu_pmops_suspend_noirq(struct device *dev)
+ 	struct drm_device *drm_dev = dev_get_drvdata(dev);
+ 	struct amdgpu_device *adev = drm_to_adev(drm_dev);
+ 
++	adev->suspend_complete = true;
+ 	if (amdgpu_acpi_should_gpu_reset(adev))
+ 		return amdgpu_asic_reset(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+index 468a67b302d4c..ca5c86e5f7cd6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.c
+@@ -362,7 +362,7 @@ static ssize_t ta_if_invoke_debugfs_write(struct file *fp, const char *buf, size
+ 		}
+ 	}
+ 
+-	if (copy_to_user((char *)buf, context->mem_context.shared_buf, shared_buf_len))
++	if (copy_to_user((char *)&buf[copy_pos], context->mem_context.shared_buf, shared_buf_len))
+ 		ret = -EFAULT;
+ 
+ err_free_shared_buf:
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 69c5009107460..3bc6943365a4f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3034,6 +3034,14 @@ static int gfx_v9_0_cp_gfx_start(struct amdgpu_device *adev)
+ 
+ 	gfx_v9_0_cp_gfx_enable(adev, true);
+ 
++	/* For now, limit this quirk to the APU gfx9 series; it is already
++	 * confirmed that the APU gfx10/gfx11 do not need this update.
++	 */
++	if (adev->flags & AMD_IS_APU &&
++			adev->in_s3 && !adev->suspend_complete) {
++		DRM_INFO(" Will skip the CSB packet resubmit\n");
++		return 0;
++	}
+ 	r = amdgpu_ring_alloc(ring, gfx_v9_0_get_csb_size(adev) + 4 + 3);
+ 	if (r) {
+ 		DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
+index 25a3da83e0fb9..db013f86147a1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
+@@ -431,6 +431,12 @@ static void nbio_v7_9_init_registers(struct amdgpu_device *adev)
+ 	u32 inst_mask;
+ 	int i;
+ 
++	if (amdgpu_sriov_vf(adev))
++		adev->rmmio_remap.reg_offset =
++			SOC15_REG_OFFSET(
++				NBIO, 0,
++				regBIF_BX_DEV0_EPF0_VF0_HDP_MEM_COHERENCY_FLUSH_CNTL)
++			<< 2;
+ 	WREG32_SOC15(NBIO, 0, regXCC_DOORBELL_FENCE,
+ 		0xff & ~(adev->gfx.xcc_mask));
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 51342809af034..9b5af3f1383a7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1297,10 +1297,32 @@ static int soc15_common_suspend(void *handle)
+ 	return soc15_common_hw_fini(adev);
+ }
+ 
++static bool soc15_need_reset_on_resume(struct amdgpu_device *adev)
++{
++	u32 sol_reg;
++
++	sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
++
++	/* Will reset for the following suspend abort cases.
++	 * 1) For now, reset only on the APU side; dGPU hasn't been checked yet.
++	 * 2) S3 suspend was aborted and the TOS has already launched.
++	 */
++	if (adev->flags & AMD_IS_APU && adev->in_s3 &&
++			!adev->suspend_complete &&
++			sol_reg)
++		return true;
++
++	return false;
++}
++
+ static int soc15_common_resume(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	if (soc15_need_reset_on_resume(adev)) {
++		dev_info(adev->dev, "S3 suspend abort case, let's reset ASIC.\n");
++		soc15_asic_reset(adev);
++	}
+ 	return soc15_common_hw_init(adev);
+ }
+ 
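
Together, the amdgpu_drv.c and soc15.c hunks above bracket the suspend sequence: suspend_complete is cleared when suspend starts and set only once the noirq phase runs, so on resume an APU that entered S3 (in_s3) without finishing (!suspend_complete) while MP0_SMN_C2PMSG_81 reports the TOS already launched must be an aborted suspend, and the ASIC is reset before hw init. A compact sketch of that predicate, using hypothetical flag names that mirror the patch:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical mirror of the flags the patch consults on resume. */
struct dev_flags {
	bool is_apu;
	bool in_s3;
	bool suspend_complete;	/* set only once the noirq phase ran */
	unsigned int sol_reg;	/* nonzero once the TOS has launched */
};

/* Mirrors the shape of soc15_need_reset_on_resume(): reset only when
 * an APU's S3 suspend aborted mid-way after the TOS came up. */
static bool need_reset_on_resume(const struct dev_flags *f)
{
	return f->is_apu && f->in_s3 && !f->suspend_complete && f->sol_reg;
}

int main(void)
{
	struct dev_flags aborted = { true, true, false, 1 };
	struct dev_flags clean   = { true, true, true, 1 };

	printf("aborted: %d, clean: %d\n",
	       need_reset_on_resume(&aborted), need_reset_on_resume(&clean));
	return 0;
}
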
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 28162bfbe1b33..71445ab63b5e5 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -1482,10 +1482,15 @@ void kfd_dec_compute_active(struct kfd_node *dev);
+ 
+ /* Cgroup Support */
+ /* Check with device cgroup if @kfd device is accessible */
+-static inline int kfd_devcgroup_check_permission(struct kfd_node *kfd)
++static inline int kfd_devcgroup_check_permission(struct kfd_node *node)
+ {
+ #if defined(CONFIG_CGROUP_DEVICE) || defined(CONFIG_CGROUP_BPF)
+-	struct drm_device *ddev = adev_to_drm(kfd->adev);
++	struct drm_device *ddev;
++
++	if (node->xcp)
++		ddev = node->xcp->ddev;
++	else
++		ddev = adev_to_drm(node->adev);
+ 
+ 	return devcgroup_check_permission(DEVCG_DEV_CHAR, DRM_MAJOR,
+ 					  ddev->render->index,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d3d1ccf66ece0..272c27495ede6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1810,21 +1810,12 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 			DRM_ERROR("amdgpu: fail to register dmub aux callback");
+ 			goto error;
+ 		}
+-		if (!register_dmub_notify_callback(adev, DMUB_NOTIFICATION_HPD, dmub_hpd_callback, true)) {
+-			DRM_ERROR("amdgpu: fail to register dmub hpd callback");
+-			goto error;
+-		}
+-		if (!register_dmub_notify_callback(adev, DMUB_NOTIFICATION_HPD_IRQ, dmub_hpd_callback, true)) {
+-			DRM_ERROR("amdgpu: fail to register dmub hpd callback");
+-			goto error;
+-		}
+-	}
+-
+-	/* Enable outbox notification only after IRQ handlers are registered and DMUB is alive.
+-	 * It is expected that DMUB will resend any pending notifications at this point, for
+-	 * example HPD from DPIA.
+-	 */
+-	if (dc_is_dmub_outbox_supported(adev->dm.dc)) {
++		/* Enable outbox notification only after IRQ handlers are registered and DMUB is alive.
++		 * It is expected that DMUB will resend any pending notifications at this point. Note
++		 * that hpd and hpd_irq handler registration are deferred to register_hpd_handlers() to
++		 * align with the legacy interface initialization sequence. Connection status will be
++		 * proactively detected once in amdgpu_dm_initialize_drm_device.
++		 */
+ 		dc_enable_dmub_outbox(adev->dm.dc);
+ 
+ 		/* DPIA trace goes to dmesg logs only if outbox is enabled */
+@@ -2254,6 +2245,7 @@ static int dm_sw_fini(void *handle)
+ 
+ 	if (adev->dm.dmub_srv) {
+ 		dmub_srv_destroy(adev->dm.dmub_srv);
++		kfree(adev->dm.dmub_srv);
+ 		adev->dm.dmub_srv = NULL;
+ 	}
+ 
+@@ -3494,6 +3486,14 @@ static void register_hpd_handlers(struct amdgpu_device *adev)
+ 	int_params.requested_polarity = INTERRUPT_POLARITY_DEFAULT;
+ 	int_params.current_polarity = INTERRUPT_POLARITY_DEFAULT;
+ 
++	if (dc_is_dmub_outbox_supported(adev->dm.dc)) {
++		if (!register_dmub_notify_callback(adev, DMUB_NOTIFICATION_HPD, dmub_hpd_callback, true))
++			DRM_ERROR("amdgpu: fail to register dmub hpd callback");
++
++		if (!register_dmub_notify_callback(adev, DMUB_NOTIFICATION_HPD_IRQ, dmub_hpd_callback, true))
++			DRM_ERROR("amdgpu: fail to register dmub hpd callback");
++	}
++
+ 	list_for_each_entry(connector,
+ 			&dev->mode_config.connector_list, head)	{
+ 
+@@ -3519,10 +3519,6 @@ static void register_hpd_handlers(struct amdgpu_device *adev)
+ 					handle_hpd_rx_irq,
+ 					(void *) aconnector);
+ 		}
+-
+-		if (adev->dm.hpd_rx_offload_wq)
+-			adev->dm.hpd_rx_offload_wq[connector->index].aconnector =
+-				aconnector;
+ 	}
+ }
+ 
+@@ -4493,6 +4489,10 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 
+ 		link = dc_get_link_at_index(dm->dc, i);
+ 
++		if (dm->hpd_rx_offload_wq)
++			dm->hpd_rx_offload_wq[aconnector->base.index].aconnector =
++				aconnector;
++
+ 		if (!dc_link_detect_connection_type(link, &new_connection_type))
+ 			DRM_ERROR("KMS: Failed to detect connector\n");
+ 
+@@ -6445,10 +6445,15 @@ amdgpu_dm_connector_late_register(struct drm_connector *connector)
+ static void amdgpu_dm_connector_funcs_force(struct drm_connector *connector)
+ {
+ 	struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
+-	struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
+ 	struct dc_link *dc_link = aconnector->dc_link;
+ 	struct dc_sink *dc_em_sink = aconnector->dc_em_sink;
+ 	struct edid *edid;
++	struct i2c_adapter *ddc;
++
++	if (dc_link->aux_mode)
++		ddc = &aconnector->dm_dp_aux.aux.ddc;
++	else
++		ddc = &aconnector->i2c->base;
+ 
+ 	/*
+ 	 * Note: drm_get_edid gets edid in the following order:
+@@ -6456,7 +6461,7 @@ static void amdgpu_dm_connector_funcs_force(struct drm_connector *connector)
+ 	 * 2) firmware EDID if set via edid_firmware module parameter
+ 	 * 3) regular DDC read.
+ 	 */
+-	edid = drm_get_edid(connector, &amdgpu_connector->ddc_bus->aux.ddc);
++	edid = drm_get_edid(connector, ddc);
+ 	if (!edid) {
+ 		DRM_ERROR("No EDID found on connector: %s.\n", connector->name);
+ 		return;
+@@ -6497,12 +6502,18 @@ static int get_modes(struct drm_connector *connector)
+ static void create_eml_sink(struct amdgpu_dm_connector *aconnector)
+ {
+ 	struct drm_connector *connector = &aconnector->base;
+-	struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(&aconnector->base);
++	struct dc_link *dc_link = aconnector->dc_link;
+ 	struct dc_sink_init_data init_params = {
+ 			.link = aconnector->dc_link,
+ 			.sink_signal = SIGNAL_TYPE_VIRTUAL
+ 	};
+ 	struct edid *edid;
++	struct i2c_adapter *ddc;
++
++	if (dc_link->aux_mode)
++		ddc = &aconnector->dm_dp_aux.aux.ddc;
++	else
++		ddc = &aconnector->i2c->base;
+ 
+ 	/*
+ 	 * Note: drm_get_edid gets edid in the following order:
+@@ -6510,7 +6521,7 @@ static void create_eml_sink(struct amdgpu_dm_connector *aconnector)
+ 	 * 2) firmware EDID if set via edid_firmware module parameter
+ 	 * 3) regular DDC read.
+ 	 */
+-	edid = drm_get_edid(connector, &amdgpu_connector->ddc_bus->aux.ddc);
++	edid = drm_get_edid(connector, ddc);
+ 	if (!edid) {
+ 		DRM_ERROR("No EDID found on connector: %s.\n", connector->name);
+ 		return;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index 51467f132c260..d595030c3359e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -711,7 +711,7 @@ static inline int dm_irq_state(struct amdgpu_device *adev,
+ {
+ 	bool st;
+ 	enum dc_irq_source irq_source;
+-
++	struct dc *dc = adev->dm.dc;
+ 	struct amdgpu_crtc *acrtc = adev->mode_info.crtcs[crtc_id];
+ 
+ 	if (!acrtc) {
+@@ -729,6 +729,9 @@ static inline int dm_irq_state(struct amdgpu_device *adev,
+ 
+ 	st = (state == AMDGPU_IRQ_STATE_ENABLE);
+ 
++	if (dc && dc->caps.ips_support && dc->idle_optimizations_allowed)
++		dc_allow_idle_optimizations(dc, false);
++
+ 	dc_interrupt_set(adev->dm.dc, irq_source, st);
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index b5b29451d2db8..bc7a375f43c0c 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -1850,19 +1850,21 @@ static enum bp_result get_firmware_info_v3_2(
+ 		/* Vega12 */
+ 		smu_info_v3_2 = GET_IMAGE(struct atom_smu_info_v3_2,
+ 							DATA_TABLES(smu_info));
+-		DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_2->gpuclk_ss_percentage);
+ 		if (!smu_info_v3_2)
+ 			return BP_RESULT_BADBIOSTABLE;
+ 
++		DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_2->gpuclk_ss_percentage);
++
+ 		info->default_engine_clk = smu_info_v3_2->bootup_dcefclk_10khz * 10;
+ 	} else if (revision.minor == 3) {
+ 		/* Vega20 */
+ 		smu_info_v3_3 = GET_IMAGE(struct atom_smu_info_v3_3,
+ 							DATA_TABLES(smu_info));
+-		DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_3->gpuclk_ss_percentage);
+ 		if (!smu_info_v3_3)
+ 			return BP_RESULT_BADBIOSTABLE;
+ 
++		DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", smu_info_v3_3->gpuclk_ss_percentage);
++
+ 		info->default_engine_clk = smu_info_v3_3->bootup_dcefclk_10khz * 10;
+ 	}
+ 
+@@ -2423,10 +2425,11 @@ static enum bp_result get_integrated_info_v11(
+ 	info_v11 = GET_IMAGE(struct atom_integrated_system_info_v1_11,
+ 					DATA_TABLES(integratedsysteminfo));
+ 
+-	DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v11->gpuclk_ss_percentage);
+ 	if (info_v11 == NULL)
+ 		return BP_RESULT_BADBIOSTABLE;
+ 
++	DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v11->gpuclk_ss_percentage);
++
+ 	info->gpu_cap_info =
+ 	le32_to_cpu(info_v11->gpucapinfo);
+ 	/*
+@@ -2638,11 +2641,12 @@ static enum bp_result get_integrated_info_v2_1(
+ 
+ 	info_v2_1 = GET_IMAGE(struct atom_integrated_system_info_v2_1,
+ 					DATA_TABLES(integratedsysteminfo));
+-	DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_1->gpuclk_ss_percentage);
+ 
+ 	if (info_v2_1 == NULL)
+ 		return BP_RESULT_BADBIOSTABLE;
+ 
++	DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_1->gpuclk_ss_percentage);
++
+ 	info->gpu_cap_info =
+ 	le32_to_cpu(info_v2_1->gpucapinfo);
+ 	/*
+@@ -2800,11 +2804,11 @@ static enum bp_result get_integrated_info_v2_2(
+ 	info_v2_2 = GET_IMAGE(struct atom_integrated_system_info_v2_2,
+ 					DATA_TABLES(integratedsysteminfo));
+ 
+-	DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_2->gpuclk_ss_percentage);
+-
+ 	if (info_v2_2 == NULL)
+ 		return BP_RESULT_BADBIOSTABLE;
+ 
++	DC_LOG_BIOS("gpuclk_ss_percentage (unit of 0.001 percent): %d\n", info_v2_2->gpuclk_ss_percentage);
++
+ 	info->gpu_cap_info =
+ 	le32_to_cpu(info_v2_2->gpucapinfo);
+ 	/*
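
Every bios_parser2.c hunk above fixes the same ordering bug: the DC_LOG_BIOS() call dereferenced a table pointer on the line before the NULL check that guards it, so a missing BIOS table meant a NULL dereference instead of BP_RESULT_BADBIOSTABLE. The rule the patch restores — validate, then log — in a standalone sketch with a hypothetical table type:

#include <stdio.h>
#include <stddef.h>

struct smu_info { int gpuclk_ss_percentage; };

static int parse(const struct smu_info *info)
{
	if (!info)	/* check the pointer first ... */
		return -1;

	/* ... and only then log through it */
	printf("gpuclk_ss_percentage: %d\n", info->gpuclk_ss_percentage);
	return 0;
}

int main(void)
{
	struct smu_info info = { 38 };

	parse(NULL);		/* rejected cleanly, no dereference */
	return parse(&info);
}
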
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
+index ed94187c2afa2..f365773d57148 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_exports.c
+@@ -497,7 +497,7 @@ void dc_link_enable_hpd_filter(struct dc_link *link, bool enable)
+ 	link->dc->link_srv->enable_hpd_filter(link, enable);
+ }
+ 
+-bool dc_link_validate(struct dc *dc, const struct dc_stream_state *streams, const unsigned int count)
++bool dc_link_dp_dpia_validate(struct dc *dc, const struct dc_stream_state *streams, const unsigned int count)
+ {
+ 	return dc->link_srv->validate_dpia_bandwidth(streams, count);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 2cafd644baff8..8164a534048c4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -2187,11 +2187,11 @@ int dc_link_dp_dpia_handle_usb4_bandwidth_allocation_for_link(
+  *
+  * @dc: pointer to dc struct
+  * @stream: pointer to all possible streams
+- * @num_streams: number of valid DPIA streams
++ * @count: number of valid DPIA streams
+  *
+  * return: TRUE if bw used by DPIAs doesn't exceed available BW else return FALSE
+  */
+-bool dc_link_validate(struct dc *dc, const struct dc_stream_state *streams,
++bool dc_link_dp_dpia_validate(struct dc *dc, const struct dc_stream_state *streams,
+ 		const unsigned int count);
+ 
+ /* Sink Interfaces - A sink corresponds to a display output device */
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+index 61d1b4eadbee3..05b3433cbb0b4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+@@ -124,7 +124,7 @@ bool dc_dmub_srv_cmd_list_queue_execute(struct dc_dmub_srv *dc_dmub_srv,
+ 		unsigned int count,
+ 		union dmub_rb_cmd *cmd_list)
+ {
+-	struct dc_context *dc_ctx = dc_dmub_srv->ctx;
++	struct dc_context *dc_ctx;
+ 	struct dmub_srv *dmub;
+ 	enum dmub_status status;
+ 	int i;
+@@ -132,6 +132,7 @@ bool dc_dmub_srv_cmd_list_queue_execute(struct dc_dmub_srv *dc_dmub_srv,
+ 	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
+ 		return false;
+ 
++	dc_ctx = dc_dmub_srv->ctx;
+ 	dmub = dc_dmub_srv->dmub;
+ 
+ 	for (i = 0 ; i < count; i++) {
+@@ -1129,7 +1130,7 @@ void dc_dmub_srv_subvp_save_surf_addr(const struct dc_dmub_srv *dc_dmub_srv, con
+ 
+ bool dc_dmub_srv_is_hw_pwr_up(struct dc_dmub_srv *dc_dmub_srv, bool wait)
+ {
+-	struct dc_context *dc_ctx = dc_dmub_srv->ctx;
++	struct dc_context *dc_ctx;
+ 	enum dmub_status status;
+ 
+ 	if (!dc_dmub_srv || !dc_dmub_srv->dmub)
+@@ -1138,6 +1139,8 @@ bool dc_dmub_srv_is_hw_pwr_up(struct dc_dmub_srv *dc_dmub_srv, bool wait)
+ 	if (dc_dmub_srv->ctx->dc->debug.dmcub_emulation)
+ 		return true;
+ 
++	dc_ctx = dc_dmub_srv->ctx;
++
+ 	if (wait) {
+ 		status = dmub_srv_wait_for_hw_pwr_up(dc_dmub_srv->dmub, 500000);
+ 		if (status != DMUB_STATUS_OK) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+index eeeeeef4d7173..1cb7765f593aa 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+@@ -1377,6 +1377,12 @@ struct dp_trace {
+ #ifndef DP_TUNNELING_STATUS
+ #define DP_TUNNELING_STATUS				0xE0025 /* 1.4a */
+ #endif
++#ifndef DP_TUNNELING_MAX_LINK_RATE
++#define DP_TUNNELING_MAX_LINK_RATE			0xE0028 /* 1.4a */
++#endif
++#ifndef DP_TUNNELING_MAX_LANE_COUNT
++#define DP_TUNNELING_MAX_LANE_COUNT			0xE0029 /* 1.4a */
++#endif
+ #ifndef DPTX_BW_ALLOCATION_MODE_CONTROL
+ #define DPTX_BW_ALLOCATION_MODE_CONTROL			0xE0030 /* 1.4a */
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 35d146217aef0..66d0774bef527 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -1100,21 +1100,25 @@ struct dc_panel_config {
+ 	} ilr;
+ };
+ 
++#define MAX_SINKS_PER_LINK 4
++
+ /*
+  *  USB4 DPIA BW ALLOCATION STRUCTS
+  */
+ struct dc_dpia_bw_alloc {
+-	int sink_verified_bw;  // The Verified BW that sink can allocated and use that has been verified already
+-	int sink_allocated_bw; // The Actual Allocated BW that sink currently allocated
+-	int sink_max_bw;       // The Max BW that sink can require/support
++	int remote_sink_req_bw[MAX_SINKS_PER_LINK]; // BW requested by remote sinks
++	int link_verified_bw;  // The verified BW that the link can allocate and use
++	int link_max_bw;       // The max BW that the link can require/support
++	int allocated_bw;      // The Actual Allocated BW for this DPIA
+ 	int estimated_bw;      // The estimated available BW for this DPIA
+ 	int bw_granularity;    // BW Granularity
++	int dp_overhead;       // DP overhead in dp tunneling
+ 	bool bw_alloc_enabled; // The BW Alloc Mode Support is turned ON for all 3:  DP-Tx & Dpia & CM
+ 	bool response_ready;   // Response ready from the CM side
++	uint8_t nrd_max_lane_count; // Non-reduced max lane count
++	uint8_t nrd_max_link_rate; // Non-reduced max link rate
+ };
+ 
+-#define MAX_SINKS_PER_LINK 4
+-
+ enum dc_hpd_enable_select {
+ 	HPD_EN_FOR_ALL_EDP = 0,
+ 	HPD_EN_FOR_PRIMARY_EDP_ONLY,
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
+index e8570060d007b..5bca67407c5b1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
+@@ -290,4 +290,5 @@ void dce_panel_cntl_construct(
+ 	dce_panel_cntl->base.funcs = &dce_link_panel_cntl_funcs;
+ 	dce_panel_cntl->base.ctx = init_data->ctx;
+ 	dce_panel_cntl->base.inst = init_data->inst;
++	dce_panel_cntl->base.pwrseq_inst = 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_panel_cntl.c
+index ad0df1a72a90a..9e96a3ace2077 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_panel_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_panel_cntl.c
+@@ -215,4 +215,5 @@ void dcn301_panel_cntl_construct(
+ 	dcn301_panel_cntl->base.funcs = &dcn301_link_panel_cntl_funcs;
+ 	dcn301_panel_cntl->base.ctx = init_data->ctx;
+ 	dcn301_panel_cntl->base.inst = init_data->inst;
++	dcn301_panel_cntl->base.pwrseq_inst = 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
+index 03248422d6ffd..281be20b1a107 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_panel_cntl.c
+@@ -154,8 +154,24 @@ void dcn31_panel_cntl_construct(
+ 	struct dcn31_panel_cntl *dcn31_panel_cntl,
+ 	const struct panel_cntl_init_data *init_data)
+ {
++	uint8_t pwrseq_inst = 0xF;
++
+ 	dcn31_panel_cntl->base.funcs = &dcn31_link_panel_cntl_funcs;
+ 	dcn31_panel_cntl->base.ctx = init_data->ctx;
+ 	dcn31_panel_cntl->base.inst = init_data->inst;
+-	dcn31_panel_cntl->base.pwrseq_inst = init_data->pwrseq_inst;
++
++	switch (init_data->eng_id) {
++	case ENGINE_ID_DIGA:
++		pwrseq_inst = 0;
++		break;
++	case ENGINE_ID_DIGB:
++		pwrseq_inst = 1;
++		break;
++	default:
++		DC_LOG_WARNING("Unsupported pwrseq engine id: %d!\n", init_data->eng_id);
++		ASSERT(false);
++		break;
++	}
++
++	dcn31_panel_cntl->base.pwrseq_inst = pwrseq_inst;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
+index 501388014855c..d761b0df28784 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
+@@ -203,12 +203,12 @@ void dcn32_link_encoder_construct(
+ 	enc10->base.hpd_source = init_data->hpd_source;
+ 	enc10->base.connector = init_data->connector;
+ 
+-	if (enc10->base.connector.id == CONNECTOR_ID_USBC)
+-		enc10->base.features.flags.bits.DP_IS_USB_C = 1;
+ 
+ 	enc10->base.preferred_engine = ENGINE_ID_UNKNOWN;
+ 
+ 	enc10->base.features = *enc_features;
++	if (enc10->base.connector.id == CONNECTOR_ID_USBC)
++		enc10->base.features.flags.bits.DP_IS_USB_C = 1;
+ 
+ 	enc10->base.transmitter = init_data->transmitter;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c
+index da94e5309fbaf..81e349d5835bb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn35/dcn35_dio_link_encoder.c
+@@ -184,8 +184,6 @@ void dcn35_link_encoder_construct(
+ 	enc10->base.hpd_source = init_data->hpd_source;
+ 	enc10->base.connector = init_data->connector;
+ 
+-	if (enc10->base.connector.id == CONNECTOR_ID_USBC)
+-		enc10->base.features.flags.bits.DP_IS_USB_C = 1;
+ 
+ 	enc10->base.preferred_engine = ENGINE_ID_UNKNOWN;
+ 
+@@ -240,6 +238,8 @@ void dcn35_link_encoder_construct(
+ 	}
+ 
+ 	enc10->base.features.flags.bits.HDMI_6GB_EN = 1;
++	if (enc10->base.connector.id == CONNECTOR_ID_USBC)
++		enc10->base.features.flags.bits.DP_IS_USB_C = 1;
+ 
+ 	if (bp_funcs->get_connector_speed_cap_info)
+ 		result = bp_funcs->get_connector_speed_cap_info(enc10->base.ctx->dc_bios,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 9fedf99475695..c1f1665e553d6 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -1182,9 +1182,9 @@ void dce110_disable_stream(struct pipe_ctx *pipe_ctx)
+ 		dto_params.timing = &pipe_ctx->stream->timing;
+ 		dp_hpo_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst;
+ 		if (dccg) {
+-			dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
+ 			dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
+ 			dccg->funcs->set_dpstreamclk(dccg, REFCLK, tg->inst, dp_hpo_inst);
++			dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
+ 		}
+ 	} else if (dccg && dccg->funcs->disable_symclk_se) {
+ 		dccg->funcs->disable_symclk_se(dccg, stream_enc->stream_enc_inst,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index da0181fef411f..c966f38583cb9 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -2776,18 +2776,17 @@ void dcn20_enable_stream(struct pipe_ctx *pipe_ctx)
+ 	}
+ 
+ 	if (dc->link_srv->dp_is_128b_132b_signal(pipe_ctx)) {
+-		dp_hpo_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst;
+-		dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, dp_hpo_inst);
+-
+-		phyd32clk = get_phyd32clk_src(link);
+-		dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
+-
+ 		dto_params.otg_inst = tg->inst;
+ 		dto_params.pixclk_khz = pipe_ctx->stream->timing.pix_clk_100hz / 10;
+ 		dto_params.num_odm_segments = get_odm_segment_count(pipe_ctx);
+ 		dto_params.timing = &pipe_ctx->stream->timing;
+ 		dto_params.ref_dtbclk_khz = dc->clk_mgr->funcs->get_dtb_ref_clk_frequency(dc->clk_mgr);
+ 		dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
++		dp_hpo_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst;
++		dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, dp_hpo_inst);
++
++		phyd32clk = get_phyd32clk_src(link);
++		dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
+ 	} else {
+ 		if (dccg->funcs->enable_symclk_se)
+ 			dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h b/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h
+index 248adc1705e35..660897e12a9ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/panel_cntl.h
+@@ -56,7 +56,7 @@ struct panel_cntl_funcs {
+ struct panel_cntl_init_data {
+ 	struct dc_context *ctx;
+ 	uint32_t inst;
+-	uint32_t pwrseq_inst;
++	uint32_t eng_id;
+ };
+ 
+ struct panel_cntl {
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+index a08ae59c1ea9f..007ee32c202e8 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+@@ -2098,17 +2098,11 @@ static enum dc_status enable_link_dp(struct dc_state *state,
+ 		}
+ 	}
+ 
+-	/*
+-	 * If the link is DP-over-USB4 do the following:
+-	 * - Train with fallback when enabling DPIA link. Conventional links are
++	/* Train with fallback when enabling DPIA link. Conventional links are
+ 	 * trained with fallback during sink detection.
+-	 * - Allocate only what the stream needs for bw in Gbps. Inform the CM
+-	 * in case stream needs more or less bw from what has been allocated
+-	 * earlier at plug time.
+ 	 */
+-	if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA) {
++	if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
+ 		do_fallback = true;
+-	}
+ 
+ 	/*
+ 	 * Temporary w/a to get DP2.0 link rates to work with SST.
+@@ -2290,6 +2284,32 @@ static enum dc_status enable_link(
+ 	return status;
+ }
+ 
++static bool allocate_usb4_bandwidth_for_stream(struct dc_stream_state *stream, int bw)
++{
++	return true;
++}
++
++static bool allocate_usb4_bandwidth(struct dc_stream_state *stream)
++{
++	bool ret;
++
++	int bw = dc_bandwidth_in_kbps_from_timing(&stream->timing,
++			dc_link_get_highest_encoding_format(stream->sink->link));
++
++	ret = allocate_usb4_bandwidth_for_stream(stream, bw);
++
++	return ret;
++}
++
++static bool deallocate_usb4_bandwidth(struct dc_stream_state *stream)
++{
++	bool ret;
++
++	ret = allocate_usb4_bandwidth_for_stream(stream, 0);
++
++	return ret;
++}
++
+ void link_set_dpms_off(struct pipe_ctx *pipe_ctx)
+ {
+ 	struct dc  *dc = pipe_ctx->stream->ctx->dc;
+@@ -2325,6 +2345,9 @@ void link_set_dpms_off(struct pipe_ctx *pipe_ctx)
+ 	update_psp_stream_config(pipe_ctx, true);
+ 	dc->hwss.blank_stream(pipe_ctx);
+ 
++	if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
++		deallocate_usb4_bandwidth(pipe_ctx->stream);
++
+ 	if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ 		deallocate_mst_payload(pipe_ctx);
+ 	else if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT &&
+@@ -2567,6 +2590,9 @@ void link_set_dpms_on(
+ 		}
+ 	}
+ 
++	if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
++		allocate_usb4_bandwidth(pipe_ctx->stream);
++
+ 	if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ 		allocate_mst_payload(pipe_ctx);
+ 	else if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT &&
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_factory.c b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+index ff7801aa552a4..41f8230f942bd 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_factory.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+@@ -368,30 +368,6 @@ static enum transmitter translate_encoder_to_transmitter(
+ 	}
+ }
+ 
+-static uint8_t translate_dig_inst_to_pwrseq_inst(struct dc_link *link)
+-{
+-	uint8_t pwrseq_inst = 0xF;
+-	struct dc_context *dc_ctx = link->dc->ctx;
+-
+-	DC_LOGGER_INIT(dc_ctx->logger);
+-
+-	switch (link->eng_id) {
+-	case ENGINE_ID_DIGA:
+-		pwrseq_inst = 0;
+-		break;
+-	case ENGINE_ID_DIGB:
+-		pwrseq_inst = 1;
+-		break;
+-	default:
+-		DC_LOG_WARNING("Unsupported pwrseq engine id: %d!\n", link->eng_id);
+-		ASSERT(false);
+-		break;
+-	}
+-
+-	return pwrseq_inst;
+-}
+-
+-
+ static void link_destruct(struct dc_link *link)
+ {
+ 	int i;
+@@ -655,7 +631,7 @@ static bool construct_phy(struct dc_link *link,
+ 			link->link_id.id == CONNECTOR_ID_LVDS)) {
+ 		panel_cntl_init_data.ctx = dc_ctx;
+ 		panel_cntl_init_data.inst = panel_cntl_init_data.ctx->dc_edp_id_count;
+-		panel_cntl_init_data.pwrseq_inst = translate_dig_inst_to_pwrseq_inst(link);
++		panel_cntl_init_data.eng_id = link->eng_id;
+ 		link->panel_cntl =
+ 			link->dc->res_pool->funcs->panel_cntl_create(
+ 								&panel_cntl_init_data);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_validation.c b/drivers/gpu/drm/amd/display/dc/link/link_validation.c
+index b45fda96eaf64..5b0bc7f6a188c 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_validation.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_validation.c
+@@ -346,23 +346,61 @@ enum dc_status link_validate_mode_timing(
+ 	return DC_OK;
+ }
+ 
++/*
++ * This function calculates the bandwidth required for the stream timing
++ * and aggregates the stream bandwidth for the respective DPIA link.
++ *
++ * @stream: pointer to the dc_stream_state struct instance
++ * @num_streams: number of streams to be validated
++ *
++ * return: true if validation succeeds
++ */
+ bool link_validate_dpia_bandwidth(const struct dc_stream_state *stream, const unsigned int num_streams)
+ {
+-	bool ret = true;
+-	int bw_needed[MAX_DPIA_NUM];
+-	struct dc_link *link[MAX_DPIA_NUM];
++	int bw_needed[MAX_DPIA_NUM] = {0};
++	struct dc_link *dpia_link[MAX_DPIA_NUM] = {0};
++	int num_dpias = 0;
++
++	for (unsigned int i = 0; i < num_streams; ++i) {
++		if (stream[i].signal == SIGNAL_TYPE_DISPLAY_PORT) {
++			/* new dpia sst stream, check whether it exceeds max dpia */
++			if (num_dpias >= MAX_DPIA_NUM)
++				return false;
+ 
+-	if (!num_streams || num_streams > MAX_DPIA_NUM)
+-		return ret;
++			dpia_link[num_dpias] = stream[i].link;
++			bw_needed[num_dpias] = dc_bandwidth_in_kbps_from_timing(&stream[i].timing,
++					dc_link_get_highest_encoding_format(dpia_link[num_dpias]));
++			num_dpias++;
++		} else if (stream[i].signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
++			uint8_t j = 0;
++			/* check whether it's a known dpia link */
++			for (; j < num_dpias; ++j) {
++				if (dpia_link[j] == stream[i].link)
++					break;
++			}
++
++			if (j == num_dpias) {
++				/* new dpia mst stream, check whether it exceeds max dpia */
++				if (num_dpias >= MAX_DPIA_NUM)
++					return false;
++				else {
++					dpia_link[j] = stream[i].link;
++					num_dpias++;
++				}
++			}
++
++			bw_needed[j] += dc_bandwidth_in_kbps_from_timing(&stream[i].timing,
++				dc_link_get_highest_encoding_format(dpia_link[j]));
++		}
++	}
+ 
+-	for (uint8_t i = 0; i < num_streams; ++i) {
++	/* Include dp overheads */
++	for (uint8_t i = 0; i < num_dpias; ++i) {
++		int dp_overhead = 0;
+ 
+-		link[i] = stream[i].link;
+-		bw_needed[i] = dc_bandwidth_in_kbps_from_timing(&stream[i].timing,
+-				dc_link_get_highest_encoding_format(link[i]));
++		dp_overhead = link_dp_dpia_get_dp_overhead_in_dp_tunneling(dpia_link[i]);
++		bw_needed[i] += dp_overhead;
+ 	}
+ 
+-	ret = dpia_validate_usb4_bw(link, bw_needed, num_streams);
+-
+-	return ret;
++	return dpia_validate_usb4_bw(dpia_link, bw_needed, num_dpias);
+ }
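
The rewritten link_validate_dpia_bandwidth() above aggregates per-link rather than per-stream: each SST stream claims a fresh DPIA slot, MST streams are matched against already-seen links and summed onto that link's slot, and the per-link DP tunneling overhead is added at the end. The sketch below shows the core aggregation loop in isolation; it is simplified (every repeated link folds onto one slot, with no SST/MST split) and the stream type is hypothetical:

#include <stdio.h>

#define MAX_DPIA_NUM 6

struct stream { int link_id; int bw_kbps; };

/* One slot per distinct link; streams on the same link are summed. */
static int aggregate(const struct stream *s, int n,
		     int link_ids[MAX_DPIA_NUM], int bw[MAX_DPIA_NUM])
{
	int num = 0;

	for (int i = 0; i < n; i++) {
		int j = 0;

		while (j < num && link_ids[j] != s[i].link_id)
			j++;	/* look for a known link */

		if (j == num) {	/* new link: claim a slot */
			if (num >= MAX_DPIA_NUM)
				return -1;
			link_ids[num] = s[i].link_id;
			bw[num] = 0;
			num++;
		}
		bw[j] += s[i].bw_kbps;	/* fold the stream onto its link */
	}
	return num;
}

int main(void)
{
	struct stream s[] = {
		{ 0, 8100000 }, { 0, 8100000 },	/* two MST streams, link 0 */
		{ 1, 17280000 },		/* one SST stream, link 1 */
	};
	int ids[MAX_DPIA_NUM], bw[MAX_DPIA_NUM];
	int n = aggregate(s, 3, ids, bw);

	for (int i = 0; i < n; i++)
		printf("link %d needs %d kbps\n", ids[i], bw[i]);
	return 0;
}
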
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+index d6e1f969bfd52..5491b707cec88 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+@@ -54,11 +54,18 @@ static bool get_bw_alloc_proceed_flag(struct dc_link *tmp)
+ static void reset_bw_alloc_struct(struct dc_link *link)
+ {
+ 	link->dpia_bw_alloc_config.bw_alloc_enabled = false;
+-	link->dpia_bw_alloc_config.sink_verified_bw = 0;
+-	link->dpia_bw_alloc_config.sink_max_bw = 0;
++	link->dpia_bw_alloc_config.link_verified_bw = 0;
++	link->dpia_bw_alloc_config.link_max_bw = 0;
++	link->dpia_bw_alloc_config.allocated_bw = 0;
+ 	link->dpia_bw_alloc_config.estimated_bw = 0;
+ 	link->dpia_bw_alloc_config.bw_granularity = 0;
++	link->dpia_bw_alloc_config.dp_overhead = 0;
+ 	link->dpia_bw_alloc_config.response_ready = false;
++	link->dpia_bw_alloc_config.nrd_max_lane_count = 0;
++	link->dpia_bw_alloc_config.nrd_max_link_rate = 0;
++	for (int i = 0; i < MAX_SINKS_PER_LINK; i++)
++		link->dpia_bw_alloc_config.remote_sink_req_bw[i] = 0;
++	DC_LOG_DEBUG("reset usb4 bw alloc of link(%d)\n", link->link_index);
+ }
+ 
+ #define BW_GRANULARITY_0 4 // 0.25 Gbps
+@@ -104,6 +111,32 @@ static int get_estimated_bw(struct dc_link *link)
+ 	return bw_estimated_bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
+ }
+ 
++static int get_non_reduced_max_link_rate(struct dc_link *link)
++{
++	uint8_t nrd_max_link_rate = 0;
++
++	core_link_read_dpcd(
++			link,
++			DP_TUNNELING_MAX_LINK_RATE,
++			&nrd_max_link_rate,
++			sizeof(uint8_t));
++
++	return nrd_max_link_rate;
++}
++
++static int get_non_reduced_max_lane_count(struct dc_link *link)
++{
++	uint8_t nrd_max_lane_count = 0;
++
++	core_link_read_dpcd(
++			link,
++			DP_TUNNELING_MAX_LANE_COUNT,
++			&nrd_max_lane_count,
++			sizeof(uint8_t));
++
++	return nrd_max_lane_count;
++}
++
+ /*
+  * Read all New BW alloc configuration ex: estimated_bw, allocated_bw,
+  * granuality, Driver_ID, CM_Group, & populate the BW allocation structs
+@@ -111,13 +144,20 @@ static int get_estimated_bw(struct dc_link *link)
+  */
+ static void init_usb4_bw_struct(struct dc_link *link)
+ {
+-	// Init the known values
++	reset_bw_alloc_struct(link);
++
++	/* init the known values */
+ 	link->dpia_bw_alloc_config.bw_granularity = get_bw_granularity(link);
+ 	link->dpia_bw_alloc_config.estimated_bw = get_estimated_bw(link);
++	link->dpia_bw_alloc_config.nrd_max_link_rate = get_non_reduced_max_link_rate(link);
++	link->dpia_bw_alloc_config.nrd_max_lane_count = get_non_reduced_max_lane_count(link);
+ 
+ 	DC_LOG_DEBUG("%s: bw_granularity(%d), estimated_bw(%d)\n",
+ 		__func__, link->dpia_bw_alloc_config.bw_granularity,
+ 		link->dpia_bw_alloc_config.estimated_bw);
++	DC_LOG_DEBUG("%s: nrd_max_link_rate(%d), nrd_max_lane_count(%d)\n",
++		__func__, link->dpia_bw_alloc_config.nrd_max_link_rate,
++		link->dpia_bw_alloc_config.nrd_max_lane_count);
+ }
+ 
+ static uint8_t get_lowest_dpia_index(struct dc_link *link)
+@@ -142,39 +182,50 @@ static uint8_t get_lowest_dpia_index(struct dc_link *link)
+ }
+ 
+ /*
+- * Get the Max Available BW or Max Estimated BW for each Host Router
++ * Get the maximum dp tunnel bandwidth of the host router
+  *
+- * @link: pointer to the dc_link struct instance
+- * @type: ESTIMATD BW or MAX AVAILABLE BW
++ * @dc: pointer to the dc struct instance
++ * @hr_index: host router index
+  *
+- * return: response_ready flag from dc_link struct
++ * return: host router maximum dp tunnel bandwidth
+  */
+-static int get_host_router_total_bw(struct dc_link *link, uint8_t type)
++static int get_host_router_total_dp_tunnel_bw(const struct dc *dc, uint8_t hr_index)
+ {
+-	const struct dc *dc_struct = link->dc;
+-	uint8_t lowest_dpia_index = get_lowest_dpia_index(link);
+-	uint8_t idx = (link->link_index - lowest_dpia_index) / 2, idx_temp = 0;
+-	struct dc_link *link_temp;
++	uint8_t lowest_dpia_index = get_lowest_dpia_index(dc->links[0]);
++	uint8_t hr_index_temp = 0;
++	struct dc_link *link_dpia_primary, *link_dpia_secondary;
+ 	int total_bw = 0;
+-	int i;
+ 
+-	for (i = 0; i < MAX_PIPES * 2; ++i) {
++	for (uint8_t i = 0; i < (MAX_PIPES * 2) - 1; ++i) {
+ 
+-		if (!dc_struct->links[i] || dc_struct->links[i]->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
++		if (!dc->links[i] || dc->links[i]->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
+ 			continue;
+ 
+-		link_temp = dc_struct->links[i];
+-		if (!link_temp || !link_temp->hpd_status)
+-			continue;
+-
+-		idx_temp = (link_temp->link_index - lowest_dpia_index) / 2;
+-
+-		if (idx_temp == idx) {
+-
+-			if (type == HOST_ROUTER_BW_ESTIMATED)
+-				total_bw += link_temp->dpia_bw_alloc_config.estimated_bw;
+-			else if (type == HOST_ROUTER_BW_ALLOCATED)
+-				total_bw += link_temp->dpia_bw_alloc_config.sink_allocated_bw;
++		hr_index_temp = (dc->links[i]->link_index - lowest_dpia_index) / 2;
++
++		if (hr_index_temp == hr_index) {
++			link_dpia_primary = dc->links[i];
++			link_dpia_secondary = dc->links[i + 1];
++
++			/**
++			 * If BW allocation is enabled on both DPIAs, then
++			 * HR BW = Estimated(dpia_primary) + Allocated(dpia_secondary);
++			 * otherwise HR BW = Estimated(bw alloc enabled dpia)
++			 */
++			if ((link_dpia_primary->hpd_status &&
++				link_dpia_primary->dpia_bw_alloc_config.bw_alloc_enabled) &&
++				(link_dpia_secondary->hpd_status &&
++				link_dpia_secondary->dpia_bw_alloc_config.bw_alloc_enabled)) {
++					total_bw += link_dpia_primary->dpia_bw_alloc_config.estimated_bw +
++						link_dpia_secondary->dpia_bw_alloc_config.allocated_bw;
++			} else if (link_dpia_primary->hpd_status &&
++					link_dpia_primary->dpia_bw_alloc_config.bw_alloc_enabled) {
++				total_bw = link_dpia_primary->dpia_bw_alloc_config.estimated_bw;
++			} else if (link_dpia_secondary->hpd_status &&
++				link_dpia_secondary->dpia_bw_alloc_config.bw_alloc_enabled) {
++				total_bw += link_dpia_secondary->dpia_bw_alloc_config.estimated_bw;
++			}
++			break;
+ 		}
+ 	}
+ 
+@@ -194,7 +245,6 @@ static void dpia_bw_alloc_unplug(struct dc_link *link)
+ 	if (link) {
+ 		DC_LOG_DEBUG("%s: resetting bw alloc config for link(%d)\n",
+ 			__func__, link->link_index);
+-		link->dpia_bw_alloc_config.sink_allocated_bw = 0;
+ 		reset_bw_alloc_struct(link);
+ 	}
+ }
+@@ -220,7 +270,7 @@ static void set_usb4_req_bw_req(struct dc_link *link, int req_bw)
+ 
+ 	/* Error check whether requested and allocated are equal */
+ 	req_bw = requested_bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
+-	if (req_bw == link->dpia_bw_alloc_config.sink_allocated_bw) {
++	if (req_bw == link->dpia_bw_alloc_config.allocated_bw) {
+ 		DC_LOG_ERROR("%s: Request bw equals to allocated bw for link(%d)\n",
+ 			__func__, link->link_index);
+ 	}
+@@ -343,9 +393,9 @@ void dpia_handle_bw_alloc_response(struct dc_link *link, uint8_t bw, uint8_t res
+ 		DC_LOG_DEBUG("%s: BW REQ SUCCESS for DP-TX Request for link(%d)\n",
+ 			__func__, link->link_index);
+ 		DC_LOG_DEBUG("%s: current allocated_bw(%d), new allocated_bw(%d)\n",
+-			__func__, link->dpia_bw_alloc_config.sink_allocated_bw, bw_needed);
++			__func__, link->dpia_bw_alloc_config.allocated_bw, bw_needed);
+ 
+-		link->dpia_bw_alloc_config.sink_allocated_bw = bw_needed;
++		link->dpia_bw_alloc_config.allocated_bw = bw_needed;
+ 
+ 		link->dpia_bw_alloc_config.response_ready = true;
+ 		break;
+@@ -383,8 +433,8 @@ int dpia_handle_usb4_bandwidth_allocation_for_link(struct dc_link *link, int pea
+ 	if (link->hpd_status && peak_bw > 0) {
+ 
+ 		// If DP over USB4 then we need to check BW allocation
+-		link->dpia_bw_alloc_config.sink_max_bw = peak_bw;
+-		set_usb4_req_bw_req(link, link->dpia_bw_alloc_config.sink_max_bw);
++		link->dpia_bw_alloc_config.link_max_bw = peak_bw;
++		set_usb4_req_bw_req(link, link->dpia_bw_alloc_config.link_max_bw);
+ 
+ 		do {
+ 			if (timeout > 0)
+@@ -396,8 +446,8 @@ int dpia_handle_usb4_bandwidth_allocation_for_link(struct dc_link *link, int pea
+ 
+ 		if (!timeout)
+ 			ret = 0;// ERROR TIMEOUT waiting for response for allocating bw
+-		else if (link->dpia_bw_alloc_config.sink_allocated_bw > 0)
+-			ret = get_host_router_total_bw(link, HOST_ROUTER_BW_ALLOCATED);
++		else if (link->dpia_bw_alloc_config.allocated_bw > 0)
++			ret = link->dpia_bw_alloc_config.allocated_bw;
+ 	}
+ 	//2. Cold Unplug
+ 	else if (!link->hpd_status)
+@@ -406,7 +456,6 @@ int dpia_handle_usb4_bandwidth_allocation_for_link(struct dc_link *link, int pea
+ out:
+ 	return ret;
+ }
+-
+ bool link_dp_dpia_allocate_usb4_bandwidth_for_stream(struct dc_link *link, int req_bw)
+ {
+ 	bool ret = false;
+@@ -414,7 +463,7 @@ bool link_dp_dpia_allocate_usb4_bandwidth_for_stream(struct dc_link *link, int r
+ 
+ 	DC_LOG_DEBUG("%s: ENTER: link(%d), hpd_status(%d), current allocated_bw(%d), req_bw(%d)\n",
+ 		__func__, link->link_index, link->hpd_status,
+-		link->dpia_bw_alloc_config.sink_allocated_bw, req_bw);
++		link->dpia_bw_alloc_config.allocated_bw, req_bw);
+ 
+ 	if (!get_bw_alloc_proceed_flag(link))
+ 		goto out;
+@@ -439,31 +488,70 @@ bool link_dp_dpia_allocate_usb4_bandwidth_for_stream(struct dc_link *link, int r
+ bool dpia_validate_usb4_bw(struct dc_link **link, int *bw_needed_per_dpia, const unsigned int num_dpias)
+ {
+ 	bool ret = true;
+-	int bw_needed_per_hr[MAX_HR_NUM] = { 0, 0 };
+-	uint8_t lowest_dpia_index = 0, dpia_index = 0;
+-	uint8_t i;
++	int bw_needed_per_hr[MAX_HR_NUM] = { 0, 0 }, host_router_total_dp_bw = 0;
++	uint8_t lowest_dpia_index, i, hr_index;
+ 
+ 	if (!num_dpias || num_dpias > MAX_DPIA_NUM)
+ 		return ret;
+ 
+-	//Get total Host Router BW & Validate against each Host Router max BW
++	lowest_dpia_index = get_lowest_dpia_index(link[0]);
++
++	/* get total Host Router BW with granularity for the given modes */
+ 	for (i = 0; i < num_dpias; ++i) {
++		int granularity_Gbps = 0;
++		int bw_granularity = 0;
+ 
+ 		if (!link[i]->dpia_bw_alloc_config.bw_alloc_enabled)
+ 			continue;
+ 
+-		lowest_dpia_index = get_lowest_dpia_index(link[i]);
+ 		if (link[i]->link_index < lowest_dpia_index)
+ 			continue;
+ 
+-		dpia_index = (link[i]->link_index - lowest_dpia_index) / 2;
+-		bw_needed_per_hr[dpia_index] += bw_needed_per_dpia[i];
+-		if (bw_needed_per_hr[dpia_index] > get_host_router_total_bw(link[i], HOST_ROUTER_BW_ALLOCATED)) {
++		granularity_Gbps = (Kbps_TO_Gbps / link[i]->dpia_bw_alloc_config.bw_granularity);
++		bw_granularity = (bw_needed_per_dpia[i] / granularity_Gbps) * granularity_Gbps +
++				((bw_needed_per_dpia[i] % granularity_Gbps) ? granularity_Gbps : 0);
+ 
+-			ret = false;
+-			break;
++		hr_index = (link[i]->link_index - lowest_dpia_index) / 2;
++		bw_needed_per_hr[hr_index] += bw_granularity;
++	}
++
++	/* validate against each Host Router max BW */
++	for (hr_index = 0; hr_index < MAX_HR_NUM; ++hr_index) {
++		if (bw_needed_per_hr[hr_index]) {
++			host_router_total_dp_bw = get_host_router_total_dp_tunnel_bw(link[0]->dc, hr_index);
++			if (bw_needed_per_hr[hr_index] > host_router_total_dp_bw) {
++				ret = false;
++				break;
++			}
+ 		}
+ 	}
+ 
+ 	return ret;
+ }
++
++int link_dp_dpia_get_dp_overhead_in_dp_tunneling(struct dc_link *link)
++{
++	int dp_overhead = 0, link_mst_overhead = 0;
++
++	if (!get_bw_alloc_proceed_flag((link)))
++		return dp_overhead;
++
++	/* if its mst link, add MTPH overhead */
++	if ((link->type == dc_connection_mst_branch) &&
++		!link->dpcd_caps.channel_coding_cap.bits.DP_128b_132b_SUPPORTED) {
++		/* For 8b/10b encoding: MTP is 64 time slots long, slot 0 is used for MTPH
++		 * MST overhead is 1/64 of link bandwidth (excluding any overhead)
++		 */
++		const struct dc_link_settings *link_cap =
++			dc_link_get_link_cap(link);
++		uint32_t link_bw_in_kbps = (uint32_t)link_cap->link_rate *
++					   (uint32_t)link_cap->lane_count *
++					   LINK_RATE_REF_FREQ_IN_KHZ * 8;
++		link_mst_overhead = (link_bw_in_kbps / 64) + ((link_bw_in_kbps % 64) ? 1 : 0);
++	}
++
++	/* add all the overheads */
++	dp_overhead = link_mst_overhead;
++
++	return dp_overhead;
++}
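
Two bits of integer arithmetic above deserve a worked example. dpia_validate_usb4_bw() rounds each request up to the CM's allocation granularity — floor to a multiple of g, then add one more g if a remainder was dropped — and link_dp_dpia_get_dp_overhead_in_dp_tunneling() charges an 8b/10b MST link one time slot in 64, i.e. ceil(link_bw / 64). A sketch of both ceilings, with an assumed 0.25 Gbps (250000 kbps) granularity:

#include <stdio.h>

/* Round bw up to the next multiple of the granularity g, exactly as
 * the patch does with integer ops: floor to a multiple, then add one
 * granule when a remainder was dropped. */
static int round_up_to_granularity(int bw, int g)
{
	return (bw / g) * g + ((bw % g) ? g : 0);
}

/* MTPH overhead for 8b/10b MST: one of 64 time slots, rounded up. */
static int mst_overhead_kbps(unsigned int link_bw_kbps)
{
	return (link_bw_kbps / 64) + ((link_bw_kbps % 64) ? 1 : 0);
}

int main(void)
{
	printf("%d\n", round_up_to_granularity(8100000, 250000)); /* 8250000 */
	printf("%d\n", mst_overhead_kbps(8100000));               /* 126563 */
	return 0;
}
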
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h
+index 981bc4eb6120e..3b6d8494f9d5d 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.h
+@@ -99,4 +99,13 @@ void dpia_handle_bw_alloc_response(struct dc_link *link, uint8_t bw, uint8_t res
+  */
+ bool dpia_validate_usb4_bw(struct dc_link **link, int *bw_needed, const unsigned int num_dpias);
+ 
++/*
++ * Obtain all the DP overheads in DP tunneling for the DPIA link
++ *
++ * @link: pointer to the dc_link struct instance
++ *
++ * return: DP overheads in DP tunneling
++ */
++int link_dp_dpia_get_dp_overhead_in_dp_tunneling(struct dc_link *link);
++
+ #endif /* DC_INC_LINK_DP_DPIA_BW_H_ */
+diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
+index c1a99bf4dffd1..c4222b886db7c 100644
+--- a/drivers/gpu/drm/drm_buddy.c
++++ b/drivers/gpu/drm/drm_buddy.c
+@@ -538,13 +538,13 @@ static int __alloc_range(struct drm_buddy *mm,
+ 		list_add(&block->left->tmp_link, dfs);
+ 	} while (1);
+ 
+-	list_splice_tail(&allocated, blocks);
+-
+ 	if (total_allocated < size) {
+ 		err = -ENOSPC;
+ 		goto err_free;
+ 	}
+ 
++	list_splice_tail(&allocated, blocks);
++
+ 	return 0;
+ 
+ err_undo:
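
The drm_buddy fix above is a commit-after-validate reordering: __alloc_range() spliced the allocated blocks onto the caller's list before the total_allocated < size check, so the -ENOSPC path freed blocks that were already visible on the caller's list. Splicing after the check means the caller only ever sees a complete allocation. A generic, self-contained sketch of the same rule:

#include <stdio.h>

#define NEEDED 4

/* Build into a private scratch area and publish to *out only after the
 * result is validated; on failure the caller's buffer is untouched. */
static int alloc_range(int *out, int avail)
{
	int scratch[NEEDED];
	int got = avail < NEEDED ? avail : NEEDED;

	for (int i = 0; i < got; i++)
		scratch[i] = i;

	if (got < NEEDED)
		return -1;	/* validate first (the moved check) ... */

	for (int i = 0; i < NEEDED; i++)
		out[i] = scratch[i];	/* ... then commit to the caller */
	return 0;
}

int main(void)
{
	int blocks[NEEDED] = { 0 };

	printf("short alloc: %d\n", alloc_range(blocks, 2));	/* fails clean */
	printf("full alloc:  %d\n", alloc_range(blocks, 8));	/* succeeds */
	return 0;
}
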
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index 01da6789d0440..5860428da8de8 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -1034,7 +1034,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 	uint64_t *points;
+ 	uint32_t signaled_count, i;
+ 
+-	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT)
++	if (flags & (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
++		     DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
+ 		lockdep_assert_none_held_once();
+ 
+ 	points = kmalloc_array(count, sizeof(*points), GFP_KERNEL);
+@@ -1103,7 +1104,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 	 * fallthough and try a 0 timeout wait!
+ 	 */
+ 
+-	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
++	if (flags & (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
++		     DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE)) {
+ 		for (i = 0; i < count; ++i)
+ 			drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
+ 	}
+@@ -1378,10 +1380,21 @@ syncobj_eventfd_entry_func(struct drm_syncobj *syncobj,
+ 
+ 	/* This happens inside the syncobj lock */
+ 	fence = dma_fence_get(rcu_dereference_protected(syncobj->fence, 1));
++	if (!fence)
++		return;
++
+ 	ret = dma_fence_chain_find_seqno(&fence, entry->point);
+-	if (ret != 0 || !fence) {
++	if (ret != 0) {
++		/* The given seqno has not been submitted yet. */
+ 		dma_fence_put(fence);
+ 		return;
++	} else if (!fence) {
++		/* If dma_fence_chain_find_seqno returns 0 but sets the fence
++		 * to NULL, it implies that the given seqno is signaled and a
++		 * later seqno has already been submitted. Assign a stub fence
++		 * so that the eventfd still gets signaled below.
++		 */
++		fence = dma_fence_get_stub();
+ 	}
+ 
+ 	list_del_init(&entry->node);
+diff --git a/drivers/gpu/drm/i915/display/intel_sdvo.c b/drivers/gpu/drm/i915/display/intel_sdvo.c
+index a9ac7d45d1f33..312f88d90af95 100644
+--- a/drivers/gpu/drm/i915/display/intel_sdvo.c
++++ b/drivers/gpu/drm/i915/display/intel_sdvo.c
+@@ -1208,7 +1208,7 @@ static bool intel_sdvo_set_tv_format(struct intel_sdvo *intel_sdvo,
+ 	struct intel_sdvo_tv_format format;
+ 	u32 format_map;
+ 
+-	format_map = 1 << conn_state->tv.mode;
++	format_map = 1 << conn_state->tv.legacy_mode;
+ 	memset(&format, 0, sizeof(format));
+ 	memcpy(&format, &format_map, min(sizeof(format), sizeof(format_map)));
+ 
+@@ -2288,7 +2288,7 @@ static int intel_sdvo_get_tv_modes(struct drm_connector *connector)
+ 	 * Read the list of supported input resolutions for the selected TV
+ 	 * format.
+ 	 */
+-	format_map = 1 << conn_state->tv.mode;
++	format_map = 1 << conn_state->tv.legacy_mode;
+ 	memcpy(&tv_res, &format_map,
+ 	       min(sizeof(format_map), sizeof(struct intel_sdvo_sdtv_resolution_request)));
+ 
+@@ -2353,7 +2353,7 @@ intel_sdvo_connector_atomic_get_property(struct drm_connector *connector,
+ 		int i;
+ 
+ 		for (i = 0; i < intel_sdvo_connector->format_supported_num; i++)
+-			if (state->tv.mode == intel_sdvo_connector->tv_format_supported[i]) {
++			if (state->tv.legacy_mode == intel_sdvo_connector->tv_format_supported[i]) {
+ 				*val = i;
+ 
+ 				return 0;
+@@ -2409,7 +2409,7 @@ intel_sdvo_connector_atomic_set_property(struct drm_connector *connector,
+ 	struct intel_sdvo_connector_state *sdvo_state = to_intel_sdvo_connector_state(state);
+ 
+ 	if (property == intel_sdvo_connector->tv_format) {
+-		state->tv.mode = intel_sdvo_connector->tv_format_supported[val];
++		state->tv.legacy_mode = intel_sdvo_connector->tv_format_supported[val];
+ 
+ 		if (state->crtc) {
+ 			struct drm_crtc_state *crtc_state =
+@@ -3066,7 +3066,7 @@ static bool intel_sdvo_tv_create_property(struct intel_sdvo *intel_sdvo,
+ 		drm_property_add_enum(intel_sdvo_connector->tv_format, i,
+ 				      tv_format_names[intel_sdvo_connector->tv_format_supported[i]]);
+ 
+-	intel_sdvo_connector->base.base.state->tv.mode = intel_sdvo_connector->tv_format_supported[0];
++	intel_sdvo_connector->base.base.state->tv.legacy_mode = intel_sdvo_connector->tv_format_supported[0];
+ 	drm_object_attach_property(&intel_sdvo_connector->base.base.base,
+ 				   intel_sdvo_connector->tv_format, 0);
+ 	return true;
+diff --git a/drivers/gpu/drm/i915/display/intel_tv.c b/drivers/gpu/drm/i915/display/intel_tv.c
+index 2ee4f0d958513..f790fd10ba00a 100644
+--- a/drivers/gpu/drm/i915/display/intel_tv.c
++++ b/drivers/gpu/drm/i915/display/intel_tv.c
+@@ -949,7 +949,7 @@ intel_disable_tv(struct intel_atomic_state *state,
+ 
+ static const struct tv_mode *intel_tv_mode_find(const struct drm_connector_state *conn_state)
+ {
+-	int format = conn_state->tv.mode;
++	int format = conn_state->tv.legacy_mode;
+ 
+ 	return &tv_modes[format];
+ }
+@@ -1710,7 +1710,7 @@ static void intel_tv_find_better_format(struct drm_connector *connector)
+ 			break;
+ 	}
+ 
+-	connector->state->tv.mode = i;
++	connector->state->tv.legacy_mode = i;
+ }
+ 
+ static int
+@@ -1865,7 +1865,7 @@ static int intel_tv_atomic_check(struct drm_connector *connector,
+ 	old_state = drm_atomic_get_old_connector_state(state, connector);
+ 	new_crtc_state = drm_atomic_get_new_crtc_state(state, new_state->crtc);
+ 
+-	if (old_state->tv.mode != new_state->tv.mode ||
++	if (old_state->tv.legacy_mode != new_state->tv.legacy_mode ||
+ 	    old_state->tv.margins.left != new_state->tv.margins.left ||
+ 	    old_state->tv.margins.right != new_state->tv.margins.right ||
+ 	    old_state->tv.margins.top != new_state->tv.margins.top ||
+@@ -1902,7 +1902,7 @@ static void intel_tv_add_properties(struct drm_connector *connector)
+ 	conn_state->tv.margins.right = 46;
+ 	conn_state->tv.margins.bottom = 37;
+ 
+-	conn_state->tv.mode = 0;
++	conn_state->tv.legacy_mode = 0;
+ 
+ 	/* Create TV properties then attach current values */
+ 	for (i = 0; i < ARRAY_SIZE(tv_modes); i++) {
+@@ -1916,7 +1916,7 @@ static void intel_tv_add_properties(struct drm_connector *connector)
+ 
+ 	drm_object_attach_property(&connector->base,
+ 				   i915->drm.mode_config.legacy_tv_mode_property,
+-				   conn_state->tv.mode);
++				   conn_state->tv.legacy_mode);
+ 	drm_object_attach_property(&connector->base,
+ 				   i915->drm.mode_config.tv_left_margin_property,
+ 				   conn_state->tv.margins.left);
+diff --git a/drivers/gpu/drm/meson/meson_encoder_cvbs.c b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
+index 3f73b211fa8e3..3407450435e20 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_cvbs.c
++++ b/drivers/gpu/drm/meson/meson_encoder_cvbs.c
+@@ -294,6 +294,5 @@ void meson_encoder_cvbs_remove(struct meson_drm *priv)
+ 	if (priv->encoders[MESON_ENC_CVBS]) {
+ 		meson_encoder_cvbs = priv->encoders[MESON_ENC_CVBS];
+ 		drm_bridge_remove(&meson_encoder_cvbs->bridge);
+-		drm_bridge_remove(meson_encoder_cvbs->next_bridge);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/meson/meson_encoder_dsi.c b/drivers/gpu/drm/meson/meson_encoder_dsi.c
+index 3f93c70488cad..311b91630fbe5 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_dsi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_dsi.c
+@@ -168,6 +168,5 @@ void meson_encoder_dsi_remove(struct meson_drm *priv)
+ 	if (priv->encoders[MESON_ENC_DSI]) {
+ 		meson_encoder_dsi = priv->encoders[MESON_ENC_DSI];
+ 		drm_bridge_remove(&meson_encoder_dsi->bridge);
+-		drm_bridge_remove(meson_encoder_dsi->next_bridge);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index 25ea765586908..c4686568c9ca5 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -474,6 +474,5 @@ void meson_encoder_hdmi_remove(struct meson_drm *priv)
+ 	if (priv->encoders[MESON_ENC_HDMI]) {
+ 		meson_encoder_hdmi = priv->encoders[MESON_ENC_HDMI];
+ 		drm_bridge_remove(&meson_encoder_hdmi->bridge);
+-		drm_bridge_remove(meson_encoder_hdmi->next_bridge);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c
+index 4135690326f44..3a30bea30e366 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/r535.c
+@@ -168,12 +168,11 @@ r535_bar_new_(const struct nvkm_bar_func *hw, struct nvkm_device *device,
+ 	rm->flush = r535_bar_flush;
+ 
+ 	ret = gf100_bar_new_(rm, device, type, inst, &bar);
+-	*pbar = bar;
+ 	if (ret) {
+-		if (!bar)
+-			kfree(rm);
++		kfree(rm);
+ 		return ret;
+ 	}
++	*pbar = bar;
+ 
+ 	bar->flushBAR2PhysMode = ioremap(device->func->resource_addr(device, 3), PAGE_SIZE);
+ 	if (!bar->flushBAR2PhysMode)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+index 19188683c8fca..8c2bf1c16f2a9 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c
+@@ -154,11 +154,17 @@ shadow_fw_init(struct nvkm_bios *bios, const char *name)
+ 	return (void *)fw;
+ }
+ 
++static void
++shadow_fw_release(void *fw)
++{
++	release_firmware(fw);
++}
++
+ static const struct nvbios_source
+ shadow_fw = {
+ 	.name = "firmware",
+ 	.init = shadow_fw_init,
+-	.fini = (void(*)(void *))release_firmware,
++	.fini = shadow_fw_release,
+ 	.read = shadow_fw_read,
+ 	.rw = false,
+ };
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index 6208ddd929645..a41735ab60683 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -1950,20 +1950,20 @@ nvkm_gsp_radix3_dtor(struct nvkm_gsp *gsp, struct nvkm_gsp_radix3 *rx3)
+  * See kgspCreateRadix3_IMPL
+  */
+ static int
+-nvkm_gsp_radix3_sg(struct nvkm_device *device, struct sg_table *sgt, u64 size,
++nvkm_gsp_radix3_sg(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size,
+ 		   struct nvkm_gsp_radix3 *rx3)
+ {
+ 	u64 addr;
+ 
+ 	for (int i = ARRAY_SIZE(rx3->mem) - 1; i >= 0; i--) {
+ 		u64 *ptes;
+-		int idx;
++		size_t bufsize;
++		int ret, idx;
+ 
+-		rx3->mem[i].size = ALIGN((size / GSP_PAGE_SIZE) * sizeof(u64), GSP_PAGE_SIZE);
+-		rx3->mem[i].data = dma_alloc_coherent(device->dev, rx3->mem[i].size,
+-						      &rx3->mem[i].addr, GFP_KERNEL);
+-		if (WARN_ON(!rx3->mem[i].data))
+-			return -ENOMEM;
++		bufsize = ALIGN((size / GSP_PAGE_SIZE) * sizeof(u64), GSP_PAGE_SIZE);
++		ret = nvkm_gsp_mem_ctor(gsp, bufsize, &rx3->mem[i]);
++		if (ret)
++			return ret;
+ 
+ 		ptes = rx3->mem[i].data;
+ 		if (i == 2) {
+@@ -2003,7 +2003,7 @@ r535_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = nvkm_gsp_radix3_sg(gsp->subdev.device, &gsp->sr.sgt, len, &gsp->sr.radix3);
++		ret = nvkm_gsp_radix3_sg(gsp, &gsp->sr.sgt, len, &gsp->sr.radix3);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -2211,7 +2211,7 @@ r535_gsp_oneinit(struct nvkm_gsp *gsp)
+ 	memcpy(gsp->sig.data, data, size);
+ 
+ 	/* Build radix3 page table for ELF image. */
+-	ret = nvkm_gsp_radix3_sg(device, &gsp->fw.mem.sgt, gsp->fw.len, &gsp->radix3);
++	ret = nvkm_gsp_radix3_sg(gsp, &gsp->fw.mem.sgt, gsp->fw.len, &gsp->radix3);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
+index fe610a3cace00..dbc96984d331f 100644
+--- a/drivers/gpu/drm/ttm/ttm_pool.c
++++ b/drivers/gpu/drm/ttm/ttm_pool.c
+@@ -387,7 +387,7 @@ static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt,
+ 				enum ttm_caching caching,
+ 				pgoff_t start_page, pgoff_t end_page)
+ {
+-	struct page **pages = tt->pages;
++	struct page **pages = &tt->pages[start_page];
+ 	unsigned int order;
+ 	pgoff_t i, nr;
+ 
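
The ttm_pool fix above is a base-pointer bug: ttm_pool_free_range() always started its page cursor at tt->pages, even when asked to free from start_page, so a partial free walked the wrong slice of the array. Taking &tt->pages[start_page] anchors the cursor at the requested offset. A tiny sketch of the difference:

#include <stdio.h>
#include <stddef.h>

static void free_range(int *pages, size_t start, size_t end)
{
	int *cur = &pages[start];	/* the fix: index from start, not 0 */

	for (size_t i = start; i < end; i++, cur++)
		printf("freeing page %d\n", *cur);
}

int main(void)
{
	int pages[] = { 10, 11, 12, 13, 14 };

	free_range(pages, 2, 5);	/* frees 12 13 14, not 10 11 12 */
	return 0;
}
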
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index fd6d8f1d9b8f6..6ef0c88e3e60a 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -4610,6 +4610,8 @@ static const struct hid_device_id hidpp_devices[] = {
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC088) },
+ 	{ /* Logitech G Pro X Superlight Gaming Mouse over USB */
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC094) },
++	{ /* Logitech G Pro X Superlight 2 Gaming Mouse over USB */
++	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC09b) },
+ 
+ 	{ /* G935 Gaming Headset */
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0x0a87),
+diff --git a/drivers/hid/hid-nvidia-shield.c b/drivers/hid/hid-nvidia-shield.c
+index 82d0a77359c46..58b15750dbb0a 100644
+--- a/drivers/hid/hid-nvidia-shield.c
++++ b/drivers/hid/hid-nvidia-shield.c
+@@ -800,6 +800,8 @@ static inline int thunderstrike_led_create(struct thunderstrike *ts)
+ 
+ 	led->name = devm_kasprintf(&ts->base.hdev->dev, GFP_KERNEL,
+ 				   "thunderstrike%d:blue:led", ts->id);
++	if (!led->name)
++		return -ENOMEM;
+ 	led->max_brightness = 1;
+ 	led->flags = LED_CORE_SUSPENDRESUME | LED_RETAIN_AT_SHUTDOWN;
+ 	led->brightness_get = &thunderstrike_led_get_brightness;
+@@ -831,6 +833,8 @@ static inline int thunderstrike_psy_create(struct shield_device *shield_dev)
+ 	shield_dev->battery_dev.desc.name =
+ 		devm_kasprintf(&ts->base.hdev->dev, GFP_KERNEL,
+ 			       "thunderstrike_%d", ts->id);
++	if (!shield_dev->battery_dev.desc.name)
++		return -ENOMEM;
+ 
+ 	shield_dev->battery_dev.psy = power_supply_register(
+ 		&hdev->dev, &shield_dev->battery_dev.desc, &psy_cfg);
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index 95f4c0b00b2d8..b8fc8d1ef20df 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -41,7 +41,7 @@ MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
+ 
+ #define PKG_SYSFS_ATTR_NO	1	/* Sysfs attribute for package temp */
+ #define BASE_SYSFS_ATTR_NO	2	/* Sysfs Base attr no for coretemp */
+-#define NUM_REAL_CORES		128	/* Number of Real cores per cpu */
++#define NUM_REAL_CORES		512	/* Number of Real cores per cpu */
+ #define CORETEMP_NAME_LENGTH	28	/* String Length of attrs */
+ #define MAX_CORE_ATTRS		4	/* Maximum no of basic attrs */
+ #define TOTAL_ATTRS		(MAX_CORE_ATTRS + 1)
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index 92a49fafe2c02..f3bf2e4701c38 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -3512,6 +3512,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 	const u16 *reg_temp_mon, *reg_temp_alternate, *reg_temp_crit;
+ 	const u16 *reg_temp_crit_l = NULL, *reg_temp_crit_h = NULL;
+ 	int num_reg_temp, num_reg_temp_mon, num_reg_tsi_temp;
++	int num_reg_temp_config;
+ 	struct device *hwmon_dev;
+ 	struct sensor_template_group tsi_temp_tg;
+ 
+@@ -3594,6 +3595,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		reg_temp_over = NCT6106_REG_TEMP_OVER;
+ 		reg_temp_hyst = NCT6106_REG_TEMP_HYST;
+ 		reg_temp_config = NCT6106_REG_TEMP_CONFIG;
++		num_reg_temp_config = ARRAY_SIZE(NCT6106_REG_TEMP_CONFIG);
+ 		reg_temp_alternate = NCT6106_REG_TEMP_ALTERNATE;
+ 		reg_temp_crit = NCT6106_REG_TEMP_CRIT;
+ 		reg_temp_crit_l = NCT6106_REG_TEMP_CRIT_L;
+@@ -3669,6 +3671,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		reg_temp_over = NCT6106_REG_TEMP_OVER;
+ 		reg_temp_hyst = NCT6106_REG_TEMP_HYST;
+ 		reg_temp_config = NCT6106_REG_TEMP_CONFIG;
++		num_reg_temp_config = ARRAY_SIZE(NCT6106_REG_TEMP_CONFIG);
+ 		reg_temp_alternate = NCT6106_REG_TEMP_ALTERNATE;
+ 		reg_temp_crit = NCT6106_REG_TEMP_CRIT;
+ 		reg_temp_crit_l = NCT6106_REG_TEMP_CRIT_L;
+@@ -3746,6 +3749,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		reg_temp_over = NCT6775_REG_TEMP_OVER;
+ 		reg_temp_hyst = NCT6775_REG_TEMP_HYST;
+ 		reg_temp_config = NCT6775_REG_TEMP_CONFIG;
++		num_reg_temp_config = ARRAY_SIZE(NCT6775_REG_TEMP_CONFIG);
+ 		reg_temp_alternate = NCT6775_REG_TEMP_ALTERNATE;
+ 		reg_temp_crit = NCT6775_REG_TEMP_CRIT;
+ 
+@@ -3821,6 +3825,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		reg_temp_over = NCT6775_REG_TEMP_OVER;
+ 		reg_temp_hyst = NCT6775_REG_TEMP_HYST;
+ 		reg_temp_config = NCT6776_REG_TEMP_CONFIG;
++		num_reg_temp_config = ARRAY_SIZE(NCT6776_REG_TEMP_CONFIG);
+ 		reg_temp_alternate = NCT6776_REG_TEMP_ALTERNATE;
+ 		reg_temp_crit = NCT6776_REG_TEMP_CRIT;
+ 
+@@ -3900,6 +3905,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		reg_temp_over = NCT6779_REG_TEMP_OVER;
+ 		reg_temp_hyst = NCT6779_REG_TEMP_HYST;
+ 		reg_temp_config = NCT6779_REG_TEMP_CONFIG;
++		num_reg_temp_config = ARRAY_SIZE(NCT6779_REG_TEMP_CONFIG);
+ 		reg_temp_alternate = NCT6779_REG_TEMP_ALTERNATE;
+ 		reg_temp_crit = NCT6779_REG_TEMP_CRIT;
+ 
+@@ -4034,6 +4040,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		reg_temp_over = NCT6779_REG_TEMP_OVER;
+ 		reg_temp_hyst = NCT6779_REG_TEMP_HYST;
+ 		reg_temp_config = NCT6779_REG_TEMP_CONFIG;
++		num_reg_temp_config = ARRAY_SIZE(NCT6779_REG_TEMP_CONFIG);
+ 		reg_temp_alternate = NCT6779_REG_TEMP_ALTERNATE;
+ 		reg_temp_crit = NCT6779_REG_TEMP_CRIT;
+ 
+@@ -4123,6 +4130,7 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		reg_temp_over = NCT6798_REG_TEMP_OVER;
+ 		reg_temp_hyst = NCT6798_REG_TEMP_HYST;
+ 		reg_temp_config = NCT6779_REG_TEMP_CONFIG;
++		num_reg_temp_config = ARRAY_SIZE(NCT6779_REG_TEMP_CONFIG);
+ 		reg_temp_alternate = NCT6798_REG_TEMP_ALTERNATE;
+ 		reg_temp_crit = NCT6798_REG_TEMP_CRIT;
+ 
+@@ -4204,7 +4212,8 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 				  = reg_temp_crit[src - 1];
+ 			if (reg_temp_crit_l && reg_temp_crit_l[i])
+ 				data->reg_temp[4][src - 1] = reg_temp_crit_l[i];
+-			data->reg_temp_config[src - 1] = reg_temp_config[i];
++			if (i < num_reg_temp_config)
++				data->reg_temp_config[src - 1] = reg_temp_config[i];
+ 			data->temp_src[src - 1] = src;
+ 			continue;
+ 		}
+@@ -4217,7 +4226,8 @@ int nct6775_probe(struct device *dev, struct nct6775_data *data,
+ 		data->reg_temp[0][s] = reg_temp[i];
+ 		data->reg_temp[1][s] = reg_temp_over[i];
+ 		data->reg_temp[2][s] = reg_temp_hyst[i];
+-		data->reg_temp_config[s] = reg_temp_config[i];
++		if (i < num_reg_temp_config)
++			data->reg_temp_config[s] = reg_temp_config[i];
+ 		if (reg_temp_crit_h && reg_temp_crit_h[i])
+ 			data->reg_temp[3][s] = reg_temp_crit_h[i];
+ 		else if (reg_temp_crit[src - 1])
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 1775a79aeba2a..0951bfdc89cfa 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -803,6 +803,11 @@ static irqreturn_t i2c_imx_slave_handle(struct imx_i2c_struct *i2c_imx,
+ 		ctl &= ~I2CR_MTX;
+ 		imx_i2c_write_reg(ctl, i2c_imx, IMX_I2C_I2CR);
+ 		imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR);
++
++		/* flag the last byte as processed */
++		i2c_imx_slave_event(i2c_imx,
++				    I2C_SLAVE_READ_PROCESSED, &value);
++
+ 		i2c_imx_slave_finish_op(i2c_imx);
+ 		return IRQ_HANDLED;
+ 	}
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index faa88d12ee868..cc466dfd792b0 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1809,7 +1809,7 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
+ 	switch (srq_attr_mask) {
+ 	case IB_SRQ_MAX_WR:
+ 		/* SRQ resize is not supported */
+-		break;
++		return -EINVAL;
+ 	case IB_SRQ_LIMIT:
+ 		/* Change the SRQ threshold */
+ 		if (srq_attr->srq_limit > srq->qplib_srq.max_wqe)
+@@ -1824,13 +1824,12 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
+ 		/* On success, update the shadow */
+ 		srq->srq_limit = srq_attr->srq_limit;
+ 		/* No need to Build and send response back to udata */
+-		break;
++		return 0;
+ 	default:
+ 		ibdev_err(&rdev->ibdev,
+ 			  "Unsupported srq_attr_mask 0x%x", srq_attr_mask);
+ 		return -EINVAL;
+ 	}
+-	return 0;
+ }
+ 
+ int bnxt_re_query_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index abbabea7f5fa3..2a62239187622 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -748,7 +748,8 @@ int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
+ 	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, &sbuf, sizeof(req),
+ 				sizeof(resp), 0);
+ 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
+-	srq->threshold = le16_to_cpu(sb->srq_limit);
++	if (!rc)
++		srq->threshold = le16_to_cpu(sb->srq_limit);
+ 	dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+ 			  sbuf.sb, sbuf.dma_addr);
+ 
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 68c621ff59d03..5a91cbda4aee6 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -2086,7 +2086,7 @@ int init_credit_return(struct hfi1_devdata *dd)
+ 				   "Unable to allocate credit return DMA range for NUMA %d\n",
+ 				   i);
+ 			ret = -ENOMEM;
+-			goto done;
++			goto free_cr_base;
+ 		}
+ 	}
+ 	set_dev_node(&dd->pcidev->dev, dd->node);
+@@ -2094,6 +2094,10 @@ int init_credit_return(struct hfi1_devdata *dd)
+ 	ret = 0;
+ done:
+ 	return ret;
++
++free_cr_base:
++	free_credit_return(dd);
++	goto done;
+ }
+ 
+ void free_credit_return(struct hfi1_devdata *dd)
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 6e5ac2023328a..b67d23b1f2862 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -3158,7 +3158,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ {
+ 	int rval = 0;
+ 
+-	if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) {
++	if (unlikely(tx->num_desc == tx->desc_limit)) {
+ 		rval = _extend_sdma_tx_descs(dd, tx);
+ 		if (rval) {
+ 			__sdma_txclean(dd, tx);
+diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h
+index 8fb752f2eda29..2cb4b96db7212 100644
+--- a/drivers/infiniband/hw/irdma/defs.h
++++ b/drivers/infiniband/hw/irdma/defs.h
+@@ -346,6 +346,7 @@ enum irdma_cqp_op_type {
+ #define IRDMA_AE_LLP_TOO_MANY_KEEPALIVE_RETRIES				0x050b
+ #define IRDMA_AE_LLP_DOUBT_REACHABILITY					0x050c
+ #define IRDMA_AE_LLP_CONNECTION_ESTABLISHED				0x050e
++#define IRDMA_AE_LLP_TOO_MANY_RNRS					0x050f
+ #define IRDMA_AE_RESOURCE_EXHAUSTION					0x0520
+ #define IRDMA_AE_RESET_SENT						0x0601
+ #define IRDMA_AE_TERMINATE_SENT						0x0602
+diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
+index bd4b2b8964444..ad50b77282f8a 100644
+--- a/drivers/infiniband/hw/irdma/hw.c
++++ b/drivers/infiniband/hw/irdma/hw.c
+@@ -387,6 +387,7 @@ static void irdma_process_aeq(struct irdma_pci_f *rf)
+ 		case IRDMA_AE_LLP_TOO_MANY_RETRIES:
+ 		case IRDMA_AE_LCE_QP_CATASTROPHIC:
+ 		case IRDMA_AE_LCE_FUNCTION_CATASTROPHIC:
++		case IRDMA_AE_LLP_TOO_MANY_RNRS:
+ 		case IRDMA_AE_LCE_CQ_CATASTROPHIC:
+ 		case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG:
+ 		default:
+@@ -570,6 +571,13 @@ static void irdma_destroy_irq(struct irdma_pci_f *rf,
+ 	dev->irq_ops->irdma_dis_irq(dev, msix_vec->idx);
+ 	irq_update_affinity_hint(msix_vec->irq, NULL);
+ 	free_irq(msix_vec->irq, dev_id);
++	if (rf == dev_id) {
++		tasklet_kill(&rf->dpc_tasklet);
++	} else {
++		struct irdma_ceq *iwceq = (struct irdma_ceq *)dev_id;
++
++		tasklet_kill(&iwceq->dpc_tasklet);
++	}
+ }
+ 
+ /**
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index b5eb8d421988c..0b046c061742b 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -839,7 +839,9 @@ static int irdma_validate_qp_attrs(struct ib_qp_init_attr *init_attr,
+ 
+ 	if (init_attr->cap.max_inline_data > uk_attrs->max_hw_inline ||
+ 	    init_attr->cap.max_send_sge > uk_attrs->max_hw_wq_frags ||
+-	    init_attr->cap.max_recv_sge > uk_attrs->max_hw_wq_frags)
++	    init_attr->cap.max_recv_sge > uk_attrs->max_hw_wq_frags ||
++	    init_attr->cap.max_send_wr > uk_attrs->max_hw_wq_quanta ||
++	    init_attr->cap.max_recv_wr > uk_attrs->max_hw_rq_quanta)
+ 		return -EINVAL;
+ 
+ 	if (rdma_protocol_roce(&iwdev->ibdev, 1)) {
+@@ -2184,9 +2186,8 @@ static int irdma_create_cq(struct ib_cq *ibcq,
+ 		info.cq_base_pa = iwcq->kmem.pa;
+ 	}
+ 
+-	if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2)
+-		info.shadow_read_threshold = min(info.cq_uk_init_info.cq_size / 2,
+-						 (u32)IRDMA_MAX_CQ_READ_THRESH);
++	info.shadow_read_threshold = min(info.cq_uk_init_info.cq_size / 2,
++					 (u32)IRDMA_MAX_CQ_READ_THRESH);
+ 
+ 	if (irdma_sc_cq_init(cq, &info)) {
+ 		ibdev_dbg(&iwdev->ibdev, "VERBS: init cq fail\n");
+diff --git a/drivers/infiniband/hw/mlx5/cong.c b/drivers/infiniband/hw/mlx5/cong.c
+index f87531318feb8..a78a067e3ce7f 100644
+--- a/drivers/infiniband/hw/mlx5/cong.c
++++ b/drivers/infiniband/hw/mlx5/cong.c
+@@ -458,6 +458,12 @@ void mlx5_ib_init_cong_debugfs(struct mlx5_ib_dev *dev, u32 port_num)
+ 	dbg_cc_params->root = debugfs_create_dir("cc_params", mlx5_debugfs_get_dev_root(mdev));
+ 
+ 	for (i = 0; i < MLX5_IB_DBG_CC_MAX; i++) {
++		if ((i == MLX5_IB_DBG_CC_GENERAL_RTT_RESP_DSCP_VALID ||
++		     i == MLX5_IB_DBG_CC_GENERAL_RTT_RESP_DSCP))
++			if (!MLX5_CAP_GEN(mdev, roce) ||
++			    !MLX5_CAP_ROCE(mdev, roce_cc_general))
++				continue;
++
+ 		dbg_cc_params->params[i].offset = i;
+ 		dbg_cc_params->params[i].dev = dev;
+ 		dbg_cc_params->params[i].port_num = port_num;
+diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
+index 7887a6786ed43..f118ce0a9a617 100644
+--- a/drivers/infiniband/hw/qedr/verbs.c
++++ b/drivers/infiniband/hw/qedr/verbs.c
+@@ -1879,8 +1879,17 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
+ 		/* RQ - read access only (0) */
+ 		rc = qedr_init_user_queue(udata, dev, &qp->urq, ureq.rq_addr,
+ 					  ureq.rq_len, true, 0, alloc_and_init);
+-		if (rc)
++		if (rc) {
++			ib_umem_release(qp->usq.umem);
++			qp->usq.umem = NULL;
++			if (rdma_protocol_roce(&dev->ibdev, 1)) {
++				qedr_free_pbl(dev, &qp->usq.pbl_info,
++					      qp->usq.pbl_tbl);
++			} else {
++				kfree(qp->usq.pbl_tbl);
++			}
+ 			return rc;
++		}
+ 	}
+ 
+ 	memset(&in_params, 0, sizeof(in_params));
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 58f70cfec45a7..040234c01be4d 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -79,12 +79,16 @@ module_param(srpt_srq_size, int, 0444);
+ MODULE_PARM_DESC(srpt_srq_size,
+ 		 "Shared receive queue (SRQ) size.");
+ 
++static int srpt_set_u64_x(const char *buffer, const struct kernel_param *kp)
++{
++	return kstrtou64(buffer, 16, (u64 *)kp->arg);
++}
+ static int srpt_get_u64_x(char *buffer, const struct kernel_param *kp)
+ {
+ 	return sprintf(buffer, "0x%016llx\n", *(u64 *)kp->arg);
+ }
+-module_param_call(srpt_service_guid, NULL, srpt_get_u64_x, &srpt_service_guid,
+-		  0444);
++module_param_call(srpt_service_guid, srpt_set_u64_x, srpt_get_u64_x,
++		  &srpt_service_guid, 0444);
+ MODULE_PARM_DESC(srpt_service_guid,
+ 		 "Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA.");
+ 
+@@ -210,10 +214,12 @@ static const char *get_ch_state_name(enum rdma_ch_state s)
+ /**
+  * srpt_qp_event - QP event callback function
+  * @event: Description of the event that occurred.
+- * @ch: SRPT RDMA channel.
++ * @ptr: SRPT RDMA channel.
+  */
+-static void srpt_qp_event(struct ib_event *event, struct srpt_rdma_ch *ch)
++static void srpt_qp_event(struct ib_event *event, void *ptr)
+ {
++	struct srpt_rdma_ch *ch = ptr;
++
+ 	pr_debug("QP event %d on ch=%p sess_name=%s-%d state=%s\n",
+ 		 event->event, ch, ch->sess_name, ch->qp->qp_num,
+ 		 get_ch_state_name(ch->state));
+@@ -1807,8 +1813,7 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
+ 	ch->cq_size = ch->rq_size + sq_size;
+ 
+ 	qp_init->qp_context = (void *)ch;
+-	qp_init->event_handler
+-		= (void(*)(struct ib_event *, void*))srpt_qp_event;
++	qp_init->event_handler = srpt_qp_event;
+ 	qp_init->send_cq = ch->cq;
+ 	qp_init->recv_cq = ch->cq;
+ 	qp_init->sq_sig_type = IB_SIGNAL_REQ_WR;
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index e2c1848182de9..d0bb3edfd0a09 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -294,6 +294,7 @@ static const struct xpad_device {
+ 	{ 0x1689, 0xfd00, "Razer Onza Tournament Edition", 0, XTYPE_XBOX360 },
+ 	{ 0x1689, 0xfd01, "Razer Onza Classic Edition", 0, XTYPE_XBOX360 },
+ 	{ 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 },
++	{ 0x17ef, 0x6182, "Lenovo Legion Controller for Windows", 0, XTYPE_XBOX360 },
+ 	{ 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 },
+ 	{ 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+@@ -491,6 +492,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOX360_VENDOR(0x15e4),		/* Numark Xbox 360 controllers */
+ 	XPAD_XBOX360_VENDOR(0x162e),		/* Joytech Xbox 360 controllers */
+ 	XPAD_XBOX360_VENDOR(0x1689),		/* Razer Onza */
++	XPAD_XBOX360_VENDOR(0x17ef),		/* Lenovo */
+ 	XPAD_XBOX360_VENDOR(0x1949),		/* Amazon controllers */
+ 	XPAD_XBOX360_VENDOR(0x1bad),		/* Harmonix Rock Band guitar and drums */
+ 	XPAD_XBOX360_VENDOR(0x20d6),		/* PowerA controllers */
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index cd45a65e17f2c..dfc6c581873b7 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -634,6 +634,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		},
+ 		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
+ 	},
++	{
++		/* Fujitsu Lifebook U728 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U728"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
++	},
+ 	{
+ 		/* Gigabyte M912 */
+ 		.matches = {
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index af32fbe57b630..b068ff8afbc9a 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -884,7 +884,8 @@ static int goodix_add_acpi_gpio_mappings(struct goodix_ts_data *ts)
+ 		}
+ 	}
+ 
+-	if (ts->gpio_count == 2 && ts->gpio_int_idx == 0) {
++	/* Some devices with gpio_int_idx 0 list a third unused GPIO */
++	if ((ts->gpio_count == 2 || ts->gpio_count == 3) && ts->gpio_int_idx == 0) {
+ 		ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_GPIO;
+ 		gpio_mapping = acpi_goodix_int_first_gpios;
+ 	} else if (ts->gpio_count == 2 && ts->gpio_int_idx == 1) {
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+index 353248ab18e76..4a27fbdb2d844 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+@@ -246,7 +246,8 @@ static void arm_smmu_mm_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
+ 						    smmu_domain);
+ 	}
+ 
+-	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
++	arm_smmu_atc_inv_domain(smmu_domain, mm_get_enqcmd_pasid(mm), start,
++				size);
+ }
+ 
+ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
+@@ -264,10 +265,11 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
+ 	 * DMA may still be running. Keep the cd valid to avoid C_BAD_CD events,
+ 	 * but disable translation.
+ 	 */
+-	arm_smmu_update_ctx_desc_devices(smmu_domain, mm->pasid, &quiet_cd);
++	arm_smmu_update_ctx_desc_devices(smmu_domain, mm_get_enqcmd_pasid(mm),
++					 &quiet_cd);
+ 
+ 	arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid);
+-	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
++	arm_smmu_atc_inv_domain(smmu_domain, mm_get_enqcmd_pasid(mm), 0, 0);
+ 
+ 	smmu_mn->cleared = true;
+ 	mutex_unlock(&sva_lock);
+@@ -290,10 +292,8 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
+ 			  struct mm_struct *mm)
+ {
+ 	int ret;
+-	unsigned long flags;
+ 	struct arm_smmu_ctx_desc *cd;
+ 	struct arm_smmu_mmu_notifier *smmu_mn;
+-	struct arm_smmu_master *master;
+ 
+ 	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
+ 		if (smmu_mn->mn.mm == mm) {
+@@ -323,25 +323,9 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
+ 		goto err_free_cd;
+ 	}
+ 
+-	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+-	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+-		ret = arm_smmu_write_ctx_desc(master, mm->pasid, cd);
+-		if (ret) {
+-			list_for_each_entry_from_reverse(master, &smmu_domain->devices, domain_head)
+-				arm_smmu_write_ctx_desc(master, mm->pasid, NULL);
+-			break;
+-		}
+-	}
+-	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+-	if (ret)
+-		goto err_put_notifier;
+-
+ 	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
+ 	return smmu_mn;
+ 
+-err_put_notifier:
+-	/* Frees smmu_mn */
+-	mmu_notifier_put(&smmu_mn->mn);
+ err_free_cd:
+ 	arm_smmu_free_shared_cd(cd);
+ 	return ERR_PTR(ret);
+@@ -358,15 +342,14 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
+ 
+ 	list_del(&smmu_mn->list);
+ 
+-	arm_smmu_update_ctx_desc_devices(smmu_domain, mm->pasid, NULL);
+-
+ 	/*
+ 	 * If we went through clear(), we've already invalidated, and no
+ 	 * new TLB entry can have been formed.
+ 	 */
+ 	if (!smmu_mn->cleared) {
+ 		arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
+-		arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
++		arm_smmu_atc_inv_domain(smmu_domain, mm_get_enqcmd_pasid(mm), 0,
++					0);
+ 	}
+ 
+ 	/* Frees smmu_mn */
+@@ -374,7 +357,8 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
+ 	arm_smmu_free_shared_cd(cd);
+ }
+ 
+-static int __arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm)
++static int __arm_smmu_sva_bind(struct device *dev, ioasid_t pasid,
++			       struct mm_struct *mm)
+ {
+ 	int ret;
+ 	struct arm_smmu_bond *bond;
+@@ -397,9 +381,15 @@ static int __arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm)
+ 		goto err_free_bond;
+ 	}
+ 
++	ret = arm_smmu_write_ctx_desc(master, pasid, bond->smmu_mn->cd);
++	if (ret)
++		goto err_put_notifier;
++
+ 	list_add(&bond->list, &master->bonds);
+ 	return 0;
+ 
++err_put_notifier:
++	arm_smmu_mmu_notifier_put(bond->smmu_mn);
+ err_free_bond:
+ 	kfree(bond);
+ 	return ret;
+@@ -561,6 +551,9 @@ void arm_smmu_sva_remove_dev_pasid(struct iommu_domain *domain,
+ 	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+ 
+ 	mutex_lock(&sva_lock);
++
++	arm_smmu_write_ctx_desc(master, id, NULL);
++
+ 	list_for_each_entry(t, &master->bonds, list) {
+ 		if (t->mm == mm) {
+ 			bond = t;
+@@ -583,7 +576,7 @@ static int arm_smmu_sva_set_dev_pasid(struct iommu_domain *domain,
+ 	struct mm_struct *mm = domain->mm;
+ 
+ 	mutex_lock(&sva_lock);
+-	ret = __arm_smmu_sva_bind(dev, mm);
++	ret = __arm_smmu_sva_bind(dev, id, mm);
+ 	mutex_unlock(&sva_lock);
+ 
+ 	return ret;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 897159dba47de..a8366b1f4f48b 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -540,8 +540,6 @@ static int domain_update_device_node(struct dmar_domain *domain)
+ 	return nid;
+ }
+ 
+-static void domain_update_iotlb(struct dmar_domain *domain);
+-
+ /* Return the super pagesize bitmap if supported. */
+ static unsigned long domain_super_pgsize_bitmap(struct dmar_domain *domain)
+ {
+@@ -1362,7 +1360,7 @@ domain_lookup_dev_info(struct dmar_domain *domain,
+ 	return NULL;
+ }
+ 
+-static void domain_update_iotlb(struct dmar_domain *domain)
++void domain_update_iotlb(struct dmar_domain *domain)
+ {
+ 	struct dev_pasid_info *dev_pasid;
+ 	struct device_domain_info *info;
+@@ -4071,6 +4069,7 @@ intel_iommu_domain_alloc_user(struct device *dev, u32 flags,
+ 	bool dirty_tracking = flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING;
+ 	bool nested_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
+ 	struct intel_iommu *iommu = info->iommu;
++	struct dmar_domain *dmar_domain;
+ 	struct iommu_domain *domain;
+ 
+ 	/* Must be NESTING domain */
+@@ -4096,11 +4095,16 @@ intel_iommu_domain_alloc_user(struct device *dev, u32 flags,
+ 	if (!domain)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	if (nested_parent)
+-		to_dmar_domain(domain)->nested_parent = true;
++	dmar_domain = to_dmar_domain(domain);
++
++	if (nested_parent) {
++		dmar_domain->nested_parent = true;
++		INIT_LIST_HEAD(&dmar_domain->s1_domains);
++		spin_lock_init(&dmar_domain->s1_lock);
++	}
+ 
+ 	if (dirty_tracking) {
+-		if (to_dmar_domain(domain)->use_first_level) {
++		if (dmar_domain->use_first_level) {
+ 			iommu_domain_free(domain);
+ 			return ERR_PTR(-EOPNOTSUPP);
+ 		}
+@@ -4112,8 +4116,12 @@ intel_iommu_domain_alloc_user(struct device *dev, u32 flags,
+ 
+ static void intel_iommu_domain_free(struct iommu_domain *domain)
+ {
++	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
++
++	WARN_ON(dmar_domain->nested_parent &&
++		!list_empty(&dmar_domain->s1_domains));
+ 	if (domain != &si_domain->domain)
+-		domain_exit(to_dmar_domain(domain));
++		domain_exit(dmar_domain);
+ }
+ 
+ int prepare_domain_attach_device(struct iommu_domain *domain,
+@@ -4857,21 +4865,70 @@ static void *intel_iommu_hw_info(struct device *dev, u32 *length, u32 *type)
+ 	return vtd;
+ }
+ 
++/*
++ * Set dirty tracking for the device list of a domain. The caller must
++ * hold the domain->lock when calling it.
++ */
++static int device_set_dirty_tracking(struct list_head *devices, bool enable)
++{
++	struct device_domain_info *info;
++	int ret = 0;
++
++	list_for_each_entry(info, devices, link) {
++		ret = intel_pasid_setup_dirty_tracking(info->iommu, info->dev,
++						       IOMMU_NO_PASID, enable);
++		if (ret)
++			break;
++	}
++
++	return ret;
++}
++
++static int parent_domain_set_dirty_tracking(struct dmar_domain *domain,
++					    bool enable)
++{
++	struct dmar_domain *s1_domain;
++	unsigned long flags;
++	int ret;
++
++	spin_lock(&domain->s1_lock);
++	list_for_each_entry(s1_domain, &domain->s1_domains, s2_link) {
++		spin_lock_irqsave(&s1_domain->lock, flags);
++		ret = device_set_dirty_tracking(&s1_domain->devices, enable);
++		spin_unlock_irqrestore(&s1_domain->lock, flags);
++		if (ret)
++			goto err_unwind;
++	}
++	spin_unlock(&domain->s1_lock);
++	return 0;
++
++err_unwind:
++	list_for_each_entry(s1_domain, &domain->s1_domains, s2_link) {
++		spin_lock_irqsave(&s1_domain->lock, flags);
++		device_set_dirty_tracking(&s1_domain->devices,
++					  domain->dirty_tracking);
++		spin_unlock_irqrestore(&s1_domain->lock, flags);
++	}
++	spin_unlock(&domain->s1_lock);
++	return ret;
++}
++
+ static int intel_iommu_set_dirty_tracking(struct iommu_domain *domain,
+ 					  bool enable)
+ {
+ 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+-	struct device_domain_info *info;
+ 	int ret;
+ 
+ 	spin_lock(&dmar_domain->lock);
+ 	if (dmar_domain->dirty_tracking == enable)
+ 		goto out_unlock;
+ 
+-	list_for_each_entry(info, &dmar_domain->devices, link) {
+-		ret = intel_pasid_setup_dirty_tracking(info->iommu,
+-						       info->domain, info->dev,
+-						       IOMMU_NO_PASID, enable);
++	ret = device_set_dirty_tracking(&dmar_domain->devices, enable);
++	if (ret)
++		goto err_unwind;
++
++	if (dmar_domain->nested_parent) {
++		ret = parent_domain_set_dirty_tracking(dmar_domain, enable);
+ 		if (ret)
+ 			goto err_unwind;
+ 	}
+@@ -4883,10 +4940,8 @@ static int intel_iommu_set_dirty_tracking(struct iommu_domain *domain,
+ 	return 0;
+ 
+ err_unwind:
+-	list_for_each_entry(info, &dmar_domain->devices, link)
+-		intel_pasid_setup_dirty_tracking(info->iommu, dmar_domain,
+-						 info->dev, IOMMU_NO_PASID,
+-						 dmar_domain->dirty_tracking);
++	device_set_dirty_tracking(&dmar_domain->devices,
++				  dmar_domain->dirty_tracking);
+ 	spin_unlock(&dmar_domain->lock);
+ 	return ret;
+ }
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index ce030c5b5772a..efc00d2b4527a 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -630,6 +630,10 @@ struct dmar_domain {
+ 			int		agaw;
+ 			/* maximum mapped address */
+ 			u64		max_addr;
++			/* Protect the s1_domains list */
++			spinlock_t	s1_lock;
++			/* Track s1_domains nested on this domain */
++			struct list_head s1_domains;
+ 		};
+ 
+ 		/* Nested user domain */
+@@ -640,6 +644,8 @@ struct dmar_domain {
+ 			unsigned long s1_pgtbl;
+ 			/* page table attributes */
+ 			struct iommu_hwpt_vtd_s1 s1_cfg;
++			/* link to parent domain siblings */
++			struct list_head s2_link;
+ 		};
+ 	};
+ 
+@@ -888,6 +894,7 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
+  */
+ #define QI_OPT_WAIT_DRAIN		BIT(0)
+ 
++void domain_update_iotlb(struct dmar_domain *domain);
+ int domain_attach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu);
+ void domain_detach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu);
+ void device_block_translation(struct device *dev);
+diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
+index b5a5563ab32c6..92e82b33ea979 100644
+--- a/drivers/iommu/intel/nested.c
++++ b/drivers/iommu/intel/nested.c
+@@ -65,12 +65,20 @@ static int intel_nested_attach_dev(struct iommu_domain *domain,
+ 	list_add(&info->link, &dmar_domain->devices);
+ 	spin_unlock_irqrestore(&dmar_domain->lock, flags);
+ 
++	domain_update_iotlb(dmar_domain);
++
+ 	return 0;
+ }
+ 
+ static void intel_nested_domain_free(struct iommu_domain *domain)
+ {
+-	kfree(to_dmar_domain(domain));
++	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
++	struct dmar_domain *s2_domain = dmar_domain->s2_domain;
++
++	spin_lock(&s2_domain->s1_lock);
++	list_del(&dmar_domain->s2_link);
++	spin_unlock(&s2_domain->s1_lock);
++	kfree(dmar_domain);
+ }
+ 
+ static const struct iommu_domain_ops intel_nested_domain_ops = {
+@@ -113,5 +121,9 @@ struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *parent,
+ 	spin_lock_init(&domain->lock);
+ 	xa_init(&domain->iommu_array);
+ 
++	spin_lock(&s2_domain->s1_lock);
++	list_add(&domain->s2_link, &s2_domain->s1_domains);
++	spin_unlock(&s2_domain->s1_lock);
++
+ 	return &domain->domain;
+ }
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index 74e8e4c17e814..6e102cbbde845 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -695,7 +695,6 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
+  * Set up dirty tracking on a second only or nested translation type.
+  */
+ int intel_pasid_setup_dirty_tracking(struct intel_iommu *iommu,
+-				     struct dmar_domain *domain,
+ 				     struct device *dev, u32 pasid,
+ 				     bool enabled)
+ {
+@@ -712,7 +711,7 @@ int intel_pasid_setup_dirty_tracking(struct intel_iommu *iommu,
+ 		return -ENODEV;
+ 	}
+ 
+-	did = domain_id_iommu(domain, iommu);
++	did = pasid_get_domain_id(pte);
+ 	pgtt = pasid_pte_get_pgtt(pte);
+ 	if (pgtt != PASID_ENTRY_PGTT_SL_ONLY &&
+ 	    pgtt != PASID_ENTRY_PGTT_NESTED) {
+@@ -926,6 +925,8 @@ int intel_pasid_setup_nested(struct intel_iommu *iommu, struct device *dev,
+ 	pasid_set_domain_id(pte, did);
+ 	pasid_set_address_width(pte, s2_domain->agaw);
+ 	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
++	if (s2_domain->dirty_tracking)
++		pasid_set_ssade(pte);
+ 	pasid_set_translation_type(pte, PASID_ENTRY_PGTT_NESTED);
+ 	pasid_set_present(pte);
+ 	spin_unlock(&iommu->lock);
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index dd37611175cc1..3568adca1fd82 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -107,7 +107,6 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
+ 				   struct dmar_domain *domain,
+ 				   struct device *dev, u32 pasid);
+ int intel_pasid_setup_dirty_tracking(struct intel_iommu *iommu,
+-				     struct dmar_domain *domain,
+ 				     struct device *dev, u32 pasid,
+ 				     bool enabled);
+ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
+diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
+index b78671a8a9143..4a2f5699747f1 100644
+--- a/drivers/iommu/iommu-sva.c
++++ b/drivers/iommu/iommu-sva.c
+@@ -141,7 +141,7 @@ u32 iommu_sva_get_pasid(struct iommu_sva *handle)
+ {
+ 	struct iommu_domain *domain = handle->domain;
+ 
+-	return domain->mm->pasid;
++	return mm_get_enqcmd_pasid(domain->mm);
+ }
+ EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
+ 
+diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
+index cbb5df0a6c32f..6f680959b23ed 100644
+--- a/drivers/iommu/iommufd/hw_pagetable.c
++++ b/drivers/iommu/iommufd/hw_pagetable.c
+@@ -261,7 +261,8 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
+ 
+ 	if (cmd->__reserved)
+ 		return -EOPNOTSUPP;
+-	if (cmd->data_type == IOMMU_HWPT_DATA_NONE && cmd->data_len)
++	if ((cmd->data_type == IOMMU_HWPT_DATA_NONE && cmd->data_len) ||
++	    (cmd->data_type != IOMMU_HWPT_DATA_NONE && !cmd->data_len))
+ 		return -EINVAL;
+ 
+ 	idev = iommufd_get_device(ucmd, cmd->dev_id);
+diff --git a/drivers/iommu/iommufd/iova_bitmap.c b/drivers/iommu/iommufd/iova_bitmap.c
+index 0a92c9eeaf7f5..db8c46bee1559 100644
+--- a/drivers/iommu/iommufd/iova_bitmap.c
++++ b/drivers/iommu/iommufd/iova_bitmap.c
+@@ -100,7 +100,7 @@ struct iova_bitmap {
+ 	struct iova_bitmap_map mapped;
+ 
+ 	/* userspace address of the bitmap */
+-	u64 __user *bitmap;
++	u8 __user *bitmap;
+ 
+ 	/* u64 index that @mapped points to */
+ 	unsigned long mapped_base_index;
+@@ -113,6 +113,9 @@ struct iova_bitmap {
+ 
+ 	/* length of the IOVA range for the whole bitmap */
+ 	size_t length;
++
++	/* length of the IOVA range set ahead of the pinned pages */
++	unsigned long set_ahead_length;
+ };
+ 
+ /*
+@@ -162,7 +165,7 @@ static int iova_bitmap_get(struct iova_bitmap *bitmap)
+ {
+ 	struct iova_bitmap_map *mapped = &bitmap->mapped;
+ 	unsigned long npages;
+-	u64 __user *addr;
++	u8 __user *addr;
+ 	long ret;
+ 
+ 	/*
+@@ -175,18 +178,19 @@ static int iova_bitmap_get(struct iova_bitmap *bitmap)
+ 			       bitmap->mapped_base_index) *
+ 			       sizeof(*bitmap->bitmap), PAGE_SIZE);
+ 
+-	/*
+-	 * We always cap at max number of 'struct page' a base page can fit.
+-	 * This is, for example, on x86 means 2M of bitmap data max.
+-	 */
+-	npages = min(npages,  PAGE_SIZE / sizeof(struct page *));
+-
+ 	/*
+ 	 * Bitmap address to be pinned is calculated via pointer arithmetic
+ 	 * with bitmap u64 word index.
+ 	 */
+ 	addr = bitmap->bitmap + bitmap->mapped_base_index;
+ 
++	/*
++	 * We always cap at max number of 'struct page' a base page can fit.
++	 * For example, on x86 this means 2M of bitmap data max.
++	 */
++	npages = min(npages + !!offset_in_page(addr),
++		     PAGE_SIZE / sizeof(struct page *));
++
+ 	ret = pin_user_pages_fast((unsigned long)addr, npages,
+ 				  FOLL_WRITE, mapped->pages);
+ 	if (ret <= 0)
+@@ -247,7 +251,7 @@ struct iova_bitmap *iova_bitmap_alloc(unsigned long iova, size_t length,
+ 
+ 	mapped = &bitmap->mapped;
+ 	mapped->pgshift = __ffs(page_size);
+-	bitmap->bitmap = data;
++	bitmap->bitmap = (u8 __user *)data;
+ 	bitmap->mapped_total_index =
+ 		iova_bitmap_offset_to_index(bitmap, length - 1) + 1;
+ 	bitmap->iova = iova;
+@@ -304,7 +308,7 @@ static unsigned long iova_bitmap_mapped_remaining(struct iova_bitmap *bitmap)
+ 
+ 	remaining = bitmap->mapped_total_index - bitmap->mapped_base_index;
+ 	remaining = min_t(unsigned long, remaining,
+-			  bytes / sizeof(*bitmap->bitmap));
++			  DIV_ROUND_UP(bytes, sizeof(*bitmap->bitmap)));
+ 
+ 	return remaining;
+ }
+@@ -341,6 +345,32 @@ static bool iova_bitmap_done(struct iova_bitmap *bitmap)
+ 	return bitmap->mapped_base_index >= bitmap->mapped_total_index;
+ }
+ 
++static int iova_bitmap_set_ahead(struct iova_bitmap *bitmap,
++				 size_t set_ahead_length)
++{
++	int ret = 0;
++
++	while (set_ahead_length > 0 && !iova_bitmap_done(bitmap)) {
++		unsigned long length = iova_bitmap_mapped_length(bitmap);
++		unsigned long iova = iova_bitmap_mapped_iova(bitmap);
++
++		ret = iova_bitmap_get(bitmap);
++		if (ret)
++			break;
++
++		length = min(length, set_ahead_length);
++		iova_bitmap_set(bitmap, iova, length);
++
++		set_ahead_length -= length;
++		bitmap->mapped_base_index +=
++			iova_bitmap_offset_to_index(bitmap, length - 1) + 1;
++		iova_bitmap_put(bitmap);
++	}
++
++	bitmap->set_ahead_length = 0;
++	return ret;
++}
++
+ /*
+  * Advances to the next range, releases the current pinned
+  * pages and pins the next set of bitmap pages.
+@@ -357,6 +387,15 @@ static int iova_bitmap_advance(struct iova_bitmap *bitmap)
+ 	if (iova_bitmap_done(bitmap))
+ 		return 0;
+ 
++	/* Iterate, set and skip any bits requested for next iteration */
++	if (bitmap->set_ahead_length) {
++		int ret;
++
++		ret = iova_bitmap_set_ahead(bitmap, bitmap->set_ahead_length);
++		if (ret)
++			return ret;
++	}
++
+ 	/* When advancing the index we pin the next set of bitmap pages */
+ 	return iova_bitmap_get(bitmap);
+ }
+@@ -409,6 +448,7 @@ void iova_bitmap_set(struct iova_bitmap *bitmap,
+ 			mapped->pgshift) + mapped->pgoff * BITS_PER_BYTE;
+ 	unsigned long last_bit = (((iova + length - 1) - mapped->iova) >>
+ 			mapped->pgshift) + mapped->pgoff * BITS_PER_BYTE;
++	unsigned long last_page_idx = mapped->npages - 1;
+ 
+ 	do {
+ 		unsigned int page_idx = cur_bit / BITS_PER_PAGE;
+@@ -417,10 +457,18 @@ void iova_bitmap_set(struct iova_bitmap *bitmap,
+ 					 last_bit - cur_bit + 1);
+ 		void *kaddr;
+ 
++		if (unlikely(page_idx > last_page_idx))
++			break;
++
+ 		kaddr = kmap_local_page(mapped->pages[page_idx]);
+ 		bitmap_set(kaddr, offset, nbits);
+ 		kunmap_local(kaddr);
+ 		cur_bit += nbits;
+ 	} while (cur_bit <= last_bit);
++
++	if (unlikely(cur_bit <= last_bit)) {
++		bitmap->set_ahead_length =
++			((last_bit - cur_bit + 1) << bitmap->mapped.pgshift);
++	}
+ }
+ EXPORT_SYMBOL_NS_GPL(iova_bitmap_set, IOMMUFD);
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 3632c92cd183c..676c9250d3f28 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3181,6 +3181,7 @@ static void its_cpu_init_lpis(void)
+ 	val |= GICR_CTLR_ENABLE_LPIS;
+ 	writel_relaxed(val, rbase + GICR_CTLR);
+ 
++out:
+ 	if (gic_rdists->has_vlpis && !gic_rdists->has_rvpeid) {
+ 		void __iomem *vlpi_base = gic_data_rdist_vlpi_base();
+ 
+@@ -3216,7 +3217,6 @@ static void its_cpu_init_lpis(void)
+ 
+ 	/* Make sure the GIC has seen the above */
+ 	dsb(sy);
+-out:
+ 	gic_data_rdist()->flags |= RD_LOCAL_LPI_ENABLED;
+ 	pr_info("GICv3: CPU%d: using %s LPI pending table @%pa\n",
+ 		smp_processor_id(),
+diff --git a/drivers/irqchip/irq-mbigen.c b/drivers/irqchip/irq-mbigen.c
+index 5101a3fb11df5..58881d3139792 100644
+--- a/drivers/irqchip/irq-mbigen.c
++++ b/drivers/irqchip/irq-mbigen.c
+@@ -235,22 +235,17 @@ static const struct irq_domain_ops mbigen_domain_ops = {
+ static int mbigen_of_create_domain(struct platform_device *pdev,
+ 				   struct mbigen_device *mgn_chip)
+ {
+-	struct device *parent;
+ 	struct platform_device *child;
+ 	struct irq_domain *domain;
+ 	struct device_node *np;
+ 	u32 num_pins;
+ 	int ret = 0;
+ 
+-	parent = bus_get_dev_root(&platform_bus_type);
+-	if (!parent)
+-		return -ENODEV;
+-
+ 	for_each_child_of_node(pdev->dev.of_node, np) {
+ 		if (!of_property_read_bool(np, "interrupt-controller"))
+ 			continue;
+ 
+-		child = of_platform_device_create(np, NULL, parent);
++		child = of_platform_device_create(np, NULL, NULL);
+ 		if (!child) {
+ 			ret = -ENOMEM;
+ 			break;
+@@ -273,7 +268,6 @@ static int mbigen_of_create_domain(struct platform_device *pdev,
+ 		}
+ 	}
+ 
+-	put_device(parent);
+ 	if (ret)
+ 		of_node_put(np);
+ 
+diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
+index 5b7bc4fd9517c..bf0b40b0fad4b 100644
+--- a/drivers/irqchip/irq-sifive-plic.c
++++ b/drivers/irqchip/irq-sifive-plic.c
+@@ -148,7 +148,13 @@ static void plic_irq_eoi(struct irq_data *d)
+ {
+ 	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
+ 
+-	writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++	if (unlikely(irqd_irq_disabled(d))) {
++		plic_toggle(handler, d->hwirq, 1);
++		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++		plic_toggle(handler, d->hwirq, 0);
++	} else {
++		writel(d->hwirq, handler->hart_base + CONTEXT_CLAIM);
++	}
+ }
+ 
+ #ifdef CONFIG_SMP
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 08681f4265205..4ab4e8dcfd3e2 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -62,6 +62,8 @@ struct convert_context {
+ 		struct skcipher_request *req;
+ 		struct aead_request *req_aead;
+ 	} r;
++	bool aead_recheck;
++	bool aead_failed;
+ 
+ };
+ 
+@@ -82,6 +84,8 @@ struct dm_crypt_io {
+ 	blk_status_t error;
+ 	sector_t sector;
+ 
++	struct bvec_iter saved_bi_iter;
++
+ 	struct rb_node rb_node;
+ } CRYPTO_MINALIGN_ATTR;
+ 
+@@ -1370,10 +1374,13 @@ static int crypt_convert_block_aead(struct crypt_config *cc,
+ 	if (r == -EBADMSG) {
+ 		sector_t s = le64_to_cpu(*sector);
+ 
+-		DMERR_LIMIT("%pg: INTEGRITY AEAD ERROR, sector %llu",
+-			    ctx->bio_in->bi_bdev, s);
+-		dm_audit_log_bio(DM_MSG_PREFIX, "integrity-aead",
+-				 ctx->bio_in, s, 0);
++		ctx->aead_failed = true;
++		if (ctx->aead_recheck) {
++			DMERR_LIMIT("%pg: INTEGRITY AEAD ERROR, sector %llu",
++				    ctx->bio_in->bi_bdev, s);
++			dm_audit_log_bio(DM_MSG_PREFIX, "integrity-aead",
++					 ctx->bio_in, s, 0);
++		}
+ 	}
+ 
+ 	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
+@@ -1757,6 +1764,8 @@ static void crypt_io_init(struct dm_crypt_io *io, struct crypt_config *cc,
+ 	io->base_bio = bio;
+ 	io->sector = sector;
+ 	io->error = 0;
++	io->ctx.aead_recheck = false;
++	io->ctx.aead_failed = false;
+ 	io->ctx.r.req = NULL;
+ 	io->integrity_metadata = NULL;
+ 	io->integrity_metadata_from_pool = false;
+@@ -1768,6 +1777,8 @@ static void crypt_inc_pending(struct dm_crypt_io *io)
+ 	atomic_inc(&io->io_pending);
+ }
+ 
++static void kcryptd_queue_read(struct dm_crypt_io *io);
++
+ /*
+  * One of the bios was finished. Check for completion of
+  * the whole request and correctly clean up the buffer.
+@@ -1781,6 +1792,15 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
+ 	if (!atomic_dec_and_test(&io->io_pending))
+ 		return;
+ 
++	if (likely(!io->ctx.aead_recheck) && unlikely(io->ctx.aead_failed) &&
++	    cc->on_disk_tag_size && bio_data_dir(base_bio) == READ) {
++		io->ctx.aead_recheck = true;
++		io->ctx.aead_failed = false;
++		io->error = 0;
++		kcryptd_queue_read(io);
++		return;
++	}
++
+ 	if (io->ctx.r.req)
+ 		crypt_free_req(cc, io->ctx.r.req, base_bio);
+ 
+@@ -1816,15 +1836,19 @@ static void crypt_endio(struct bio *clone)
+ 	struct dm_crypt_io *io = clone->bi_private;
+ 	struct crypt_config *cc = io->cc;
+ 	unsigned int rw = bio_data_dir(clone);
+-	blk_status_t error;
++	blk_status_t error = clone->bi_status;
++
++	if (io->ctx.aead_recheck && !error) {
++		kcryptd_queue_crypt(io);
++		return;
++	}
+ 
+ 	/*
+ 	 * free the processed pages
+ 	 */
+-	if (rw == WRITE)
++	if (rw == WRITE || io->ctx.aead_recheck)
+ 		crypt_free_buffer_pages(cc, clone);
+ 
+-	error = clone->bi_status;
+ 	bio_put(clone);
+ 
+ 	if (rw == READ && !error) {
+@@ -1845,6 +1869,22 @@ static int kcryptd_io_read(struct dm_crypt_io *io, gfp_t gfp)
+ 	struct crypt_config *cc = io->cc;
+ 	struct bio *clone;
+ 
++	if (io->ctx.aead_recheck) {
++		if (!(gfp & __GFP_DIRECT_RECLAIM))
++			return 1;
++		crypt_inc_pending(io);
++		clone = crypt_alloc_buffer(io, io->base_bio->bi_iter.bi_size);
++		if (unlikely(!clone)) {
++			crypt_dec_pending(io);
++			return 1;
++		}
++		clone->bi_iter.bi_sector = cc->start + io->sector;
++		crypt_convert_init(cc, &io->ctx, clone, clone, io->sector);
++		io->saved_bi_iter = clone->bi_iter;
++		dm_submit_bio_remap(io->base_bio, clone);
++		return 0;
++	}
++
+ 	/*
+ 	 * We need the original biovec array in order to decrypt the whole bio
+ 	 * data *afterwards* -- thanks to immutable biovecs we don't need to
+@@ -2071,6 +2111,12 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ 	io->ctx.bio_out = clone;
+ 	io->ctx.iter_out = clone->bi_iter;
+ 
++	if (crypt_integrity_aead(cc)) {
++		bio_copy_data(clone, io->base_bio);
++		io->ctx.bio_in = clone;
++		io->ctx.iter_in = clone->bi_iter;
++	}
++
+ 	sector += bio_sectors(clone);
+ 
+ 	crypt_inc_pending(io);
+@@ -2107,6 +2153,14 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ 
+ static void kcryptd_crypt_read_done(struct dm_crypt_io *io)
+ {
++	if (io->ctx.aead_recheck) {
++		if (!io->error) {
++			io->ctx.bio_in->bi_iter = io->saved_bi_iter;
++			bio_copy_data(io->base_bio, io->ctx.bio_in);
++		}
++		crypt_free_buffer_pages(io->cc, io->ctx.bio_in);
++		bio_put(io->ctx.bio_in);
++	}
+ 	crypt_dec_pending(io);
+ }
+ 
+@@ -2136,11 +2190,17 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io)
+ 
+ 	crypt_inc_pending(io);
+ 
+-	crypt_convert_init(cc, &io->ctx, io->base_bio, io->base_bio,
+-			   io->sector);
++	if (io->ctx.aead_recheck) {
++		io->ctx.cc_sector = io->sector + cc->iv_offset;
++		r = crypt_convert(cc, &io->ctx,
++				  test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags), true);
++	} else {
++		crypt_convert_init(cc, &io->ctx, io->base_bio, io->base_bio,
++				   io->sector);
+ 
+-	r = crypt_convert(cc, &io->ctx,
+-			  test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags), true);
++		r = crypt_convert(cc, &io->ctx,
++				  test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags), true);
++	}
+ 	/*
+ 	 * Crypto API backlogged the request, because its queue was full
+ 	 * and we're in softirq context, so continue from a workqueue
+@@ -2182,10 +2242,13 @@ static void kcryptd_async_done(void *data, int error)
+ 	if (error == -EBADMSG) {
+ 		sector_t s = le64_to_cpu(*org_sector_of_dmreq(cc, dmreq));
+ 
+-		DMERR_LIMIT("%pg: INTEGRITY AEAD ERROR, sector %llu",
+-			    ctx->bio_in->bi_bdev, s);
+-		dm_audit_log_bio(DM_MSG_PREFIX, "integrity-aead",
+-				 ctx->bio_in, s, 0);
++		ctx->aead_failed = true;
++		if (ctx->aead_recheck) {
++			DMERR_LIMIT("%pg: INTEGRITY AEAD ERROR, sector %llu",
++				    ctx->bio_in->bi_bdev, s);
++			dm_audit_log_bio(DM_MSG_PREFIX, "integrity-aead",
++					 ctx->bio_in, s, 0);
++		}
+ 		io->error = BLK_STS_PROTECTION;
+ 	} else if (error < 0)
+ 		io->error = BLK_STS_IOERR;
+@@ -3110,7 +3173,7 @@ static int crypt_ctr_optional(struct dm_target *ti, unsigned int argc, char **ar
+ 			sval = strchr(opt_string + strlen("integrity:"), ':') + 1;
+ 			if (!strcasecmp(sval, "aead")) {
+ 				set_bit(CRYPT_MODE_INTEGRITY_AEAD, &cc->cipher_flags);
+-			} else  if (strcasecmp(sval, "none")) {
++			} else if (strcasecmp(sval, "none")) {
+ 				ti->error = "Unknown integrity profile";
+ 				return -EINVAL;
+ 			}
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index c5f03aab45525..e8e8fc33d3440 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -278,6 +278,8 @@ struct dm_integrity_c {
+ 
+ 	atomic64_t number_of_mismatches;
+ 
++	mempool_t recheck_pool;
++
+ 	struct notifier_block reboot_notifier;
+ };
+ 
+@@ -1689,6 +1691,77 @@ static void integrity_sector_checksum(struct dm_integrity_c *ic, sector_t sector
+ 	get_random_bytes(result, ic->tag_size);
+ }
+ 
++static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checksum)
++{
++	struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
++	struct dm_integrity_c *ic = dio->ic;
++	struct bvec_iter iter;
++	struct bio_vec bv;
++	sector_t sector, logical_sector, area, offset;
++	struct page *page;
++	void *buffer;
++
++	get_area_and_offset(ic, dio->range.logical_sector, &area, &offset);
++	dio->metadata_block = get_metadata_sector_and_offset(ic, area, offset,
++							     &dio->metadata_offset);
++	sector = get_data_sector(ic, area, offset);
++	logical_sector = dio->range.logical_sector;
++
++	page = mempool_alloc(&ic->recheck_pool, GFP_NOIO);
++	buffer = page_to_virt(page);
++
++	__bio_for_each_segment(bv, bio, iter, dio->bio_details.bi_iter) {
++		unsigned pos = 0;
++
++		do {
++			char *mem;
++			int r;
++			struct dm_io_request io_req;
++			struct dm_io_region io_loc;
++			io_req.bi_opf = REQ_OP_READ;
++			io_req.mem.type = DM_IO_KMEM;
++			io_req.mem.ptr.addr = buffer;
++			io_req.notify.fn = NULL;
++			io_req.client = ic->io;
++			io_loc.bdev = ic->dev->bdev;
++			io_loc.sector = sector;
++			io_loc.count = ic->sectors_per_block;
++
++			r = dm_io(&io_req, 1, &io_loc, NULL);
++			if (unlikely(r)) {
++				dio->bi_status = errno_to_blk_status(r);
++				goto free_ret;
++			}
++
++			integrity_sector_checksum(ic, logical_sector, buffer, checksum);
++			r = dm_integrity_rw_tag(ic, checksum, &dio->metadata_block,
++						&dio->metadata_offset, ic->tag_size, TAG_CMP);
++			if (r) {
++				if (r > 0) {
++					DMERR_LIMIT("%pg: Checksum failed at sector 0x%llx",
++						    bio->bi_bdev, logical_sector);
++					atomic64_inc(&ic->number_of_mismatches);
++					dm_audit_log_bio(DM_MSG_PREFIX, "integrity-checksum",
++							 bio, logical_sector, 0);
++					r = -EILSEQ;
++				}
++				dio->bi_status = errno_to_blk_status(r);
++				goto free_ret;
++			}
++
++			mem = bvec_kmap_local(&bv);
++			memcpy(mem + pos, buffer, ic->sectors_per_block << SECTOR_SHIFT);
++			kunmap_local(mem);
++
++			pos += ic->sectors_per_block << SECTOR_SHIFT;
++			sector += ic->sectors_per_block;
++			logical_sector += ic->sectors_per_block;
++		} while (pos < bv.bv_len);
++	}
++free_ret:
++	mempool_free(page, &ic->recheck_pool);
++}
++
+ static void integrity_metadata(struct work_struct *w)
+ {
+ 	struct dm_integrity_io *dio = container_of(w, struct dm_integrity_io, work);
+@@ -1776,15 +1849,8 @@ static void integrity_metadata(struct work_struct *w)
+ 						checksums_ptr - checksums, dio->op == REQ_OP_READ ? TAG_CMP : TAG_WRITE);
+ 			if (unlikely(r)) {
+ 				if (r > 0) {
+-					sector_t s;
+-
+-					s = sector - ((r + ic->tag_size - 1) / ic->tag_size);
+-					DMERR_LIMIT("%pg: Checksum failed at sector 0x%llx",
+-						    bio->bi_bdev, s);
+-					r = -EILSEQ;
+-					atomic64_inc(&ic->number_of_mismatches);
+-					dm_audit_log_bio(DM_MSG_PREFIX, "integrity-checksum",
+-							 bio, s, 0);
++					integrity_recheck(dio, checksums);
++					goto skip_io;
+ 				}
+ 				if (likely(checksums != checksums_onstack))
+ 					kfree(checksums);
+@@ -4261,6 +4327,12 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
+ 		goto bad;
+ 	}
+ 
++	r = mempool_init_page_pool(&ic->recheck_pool, 1, 0);
++	if (r) {
++		ti->error = "Cannot allocate mempool";
++		goto bad;
++	}
++
+ 	ic->metadata_wq = alloc_workqueue("dm-integrity-metadata",
+ 					  WQ_MEM_RECLAIM, METADATA_WORKQUEUE_MAX_ACTIVE);
+ 	if (!ic->metadata_wq) {
+@@ -4609,6 +4681,7 @@ static void dm_integrity_dtr(struct dm_target *ti)
+ 	kvfree(ic->bbs);
+ 	if (ic->bufio)
+ 		dm_bufio_client_destroy(ic->bufio);
++	mempool_exit(&ic->recheck_pool);
+ 	mempool_exit(&ic->journal_io_mempool);
+ 	if (ic->io)
+ 		dm_io_client_destroy(ic->io);
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 82662f5769c4a..7b620b187da90 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -482,6 +482,63 @@ int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
+ 	return 0;
+ }
+ 
++static int verity_recheck_copy(struct dm_verity *v, struct dm_verity_io *io,
++			       u8 *data, size_t len)
++{
++	memcpy(data, io->recheck_buffer, len);
++	io->recheck_buffer += len;
++
++	return 0;
++}
++
++static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io,
++				   struct bvec_iter start, sector_t cur_block)
++{
++	struct page *page;
++	void *buffer;
++	int r;
++	struct dm_io_request io_req;
++	struct dm_io_region io_loc;
++
++	page = mempool_alloc(&v->recheck_pool, GFP_NOIO);
++	buffer = page_to_virt(page);
++
++	io_req.bi_opf = REQ_OP_READ;
++	io_req.mem.type = DM_IO_KMEM;
++	io_req.mem.ptr.addr = buffer;
++	io_req.notify.fn = NULL;
++	io_req.client = v->io;
++	io_loc.bdev = v->data_dev->bdev;
++	io_loc.sector = cur_block << (v->data_dev_block_bits - SECTOR_SHIFT);
++	io_loc.count = 1 << (v->data_dev_block_bits - SECTOR_SHIFT);
++	r = dm_io(&io_req, 1, &io_loc, NULL);
++	if (unlikely(r))
++		goto free_ret;
++
++	r = verity_hash(v, verity_io_hash_req(v, io), buffer,
++			1 << v->data_dev_block_bits,
++			verity_io_real_digest(v, io), true);
++	if (unlikely(r))
++		goto free_ret;
++
++	if (memcmp(verity_io_real_digest(v, io),
++		   verity_io_want_digest(v, io), v->digest_size)) {
++		r = -EIO;
++		goto free_ret;
++	}
++
++	io->recheck_buffer = buffer;
++	r = verity_for_bv_block(v, io, &start, verity_recheck_copy);
++	if (unlikely(r))
++		goto free_ret;
++
++	r = 0;
++free_ret:
++	mempool_free(page, &v->recheck_pool);
++
++	return r;
++}
++
+ static int verity_bv_zero(struct dm_verity *v, struct dm_verity_io *io,
+ 			  u8 *data, size_t len)
+ {
+@@ -508,9 +565,7 @@ static int verity_verify_io(struct dm_verity_io *io)
+ {
+ 	bool is_zero;
+ 	struct dm_verity *v = io->v;
+-#if defined(CONFIG_DM_VERITY_FEC)
+ 	struct bvec_iter start;
+-#endif
+ 	struct bvec_iter iter_copy;
+ 	struct bvec_iter *iter;
+ 	struct crypto_wait wait;
+@@ -561,10 +616,7 @@ static int verity_verify_io(struct dm_verity_io *io)
+ 		if (unlikely(r < 0))
+ 			return r;
+ 
+-#if defined(CONFIG_DM_VERITY_FEC)
+-		if (verity_fec_is_enabled(v))
+-			start = *iter;
+-#endif
++		start = *iter;
+ 		r = verity_for_io_block(v, io, iter, &wait);
+ 		if (unlikely(r < 0))
+ 			return r;
+@@ -586,6 +638,10 @@ static int verity_verify_io(struct dm_verity_io *io)
+ 			 * tasklet since it may sleep, so fallback to work-queue.
+ 			 */
+ 			return -EAGAIN;
++		} else if (verity_recheck(v, io, start, cur_block) == 0) {
++			if (v->validated_blocks)
++				set_bit(cur_block, v->validated_blocks);
++			continue;
+ #if defined(CONFIG_DM_VERITY_FEC)
+ 		} else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA,
+ 					     cur_block, NULL, &start) == 0) {
+@@ -941,6 +997,10 @@ static void verity_dtr(struct dm_target *ti)
+ 	if (v->verify_wq)
+ 		destroy_workqueue(v->verify_wq);
+ 
++	mempool_exit(&v->recheck_pool);
++	if (v->io)
++		dm_io_client_destroy(v->io);
++
+ 	if (v->bufio)
+ 		dm_bufio_client_destroy(v->bufio);
+ 
+@@ -1379,6 +1439,20 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	}
+ 	v->hash_blocks = hash_position;
+ 
++	r = mempool_init_page_pool(&v->recheck_pool, 1, 0);
++	if (unlikely(r)) {
++		ti->error = "Cannot allocate mempool";
++		goto bad;
++	}
++
++	v->io = dm_io_client_create();
++	if (IS_ERR(v->io)) {
++		r = PTR_ERR(v->io);
++		v->io = NULL;
++		ti->error = "Cannot allocate dm io";
++		goto bad;
++	}
++
+ 	v->bufio = dm_bufio_client_create(v->hash_dev->bdev,
+ 		1 << v->hash_dev_block_bits, 1, sizeof(struct buffer_aux),
+ 		dm_bufio_alloc_callback, NULL,
+diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
+index f3f6070084196..4620a98c99561 100644
+--- a/drivers/md/dm-verity.h
++++ b/drivers/md/dm-verity.h
+@@ -11,6 +11,7 @@
+ #ifndef DM_VERITY_H
+ #define DM_VERITY_H
+ 
++#include <linux/dm-io.h>
+ #include <linux/dm-bufio.h>
+ #include <linux/device-mapper.h>
+ #include <linux/interrupt.h>
+@@ -68,6 +69,9 @@ struct dm_verity {
+ 	unsigned long *validated_blocks; /* bitset blocks validated */
+ 
+ 	char *signature_key_desc; /* signature keyring reference */
++
++	struct dm_io_client *io;
++	mempool_t recheck_pool;
+ };
+ 
+ struct dm_verity_io {
+@@ -84,6 +88,8 @@ struct dm_verity_io {
+ 
+ 	struct work_struct work;
+ 
++	char *recheck_buffer;
++
+ 	/*
+ 	 * Three variably-size fields follow this struct:
+ 	 *
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index d243c196c5980..58889bc72659a 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -579,8 +579,12 @@ static void submit_flushes(struct work_struct *ws)
+ 			rcu_read_lock();
+ 		}
+ 	rcu_read_unlock();
+-	if (atomic_dec_and_test(&mddev->flush_pending))
++	if (atomic_dec_and_test(&mddev->flush_pending)) {
++		/* The pair is percpu_ref_get() from md_flush_request() */
++		percpu_ref_put(&mddev->active_io);
++
+ 		queue_work(md_wq, &mddev->flush_work);
++	}
+ }
+ 
+ static void md_submit_flush_data(struct work_struct *ws)
+@@ -8813,12 +8817,16 @@ void md_do_sync(struct md_thread *thread)
+ 	int ret;
+ 
+ 	/* just incase thread restarts... */
+-	if (test_bit(MD_RECOVERY_DONE, &mddev->recovery) ||
+-	    test_bit(MD_RECOVERY_WAIT, &mddev->recovery))
++	if (test_bit(MD_RECOVERY_DONE, &mddev->recovery))
+ 		return;
+-	if (!md_is_rdwr(mddev)) {/* never try to sync a read-only array */
++
++	if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
++		goto skip;
++
++	if (test_bit(MD_RECOVERY_WAIT, &mddev->recovery) ||
++	    !md_is_rdwr(mddev)) {/* never try to sync a read-only array */
+ 		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+-		return;
++		goto skip;
+ 	}
+ 
+ 	if (mddev_is_clustered(mddev)) {
+@@ -9418,13 +9426,19 @@ static void md_start_sync(struct work_struct *ws)
+ 	struct mddev *mddev = container_of(ws, struct mddev, sync_work);
+ 	int spares = 0;
+ 	bool suspend = false;
++	char *name;
+ 
+-	if (md_spares_need_change(mddev))
++	/*
++	 * If reshape is still in progress, spares won't be added or removed
++	 * from conf until reshape is done.
++	 */
++	if (mddev->reshape_position == MaxSector &&
++	    md_spares_need_change(mddev)) {
+ 		suspend = true;
++		mddev_suspend(mddev, false);
++	}
+ 
+-	suspend ? mddev_suspend_and_lock_nointr(mddev) :
+-		  mddev_lock_nointr(mddev);
+-
++	mddev_lock_nointr(mddev);
+ 	if (!md_is_rdwr(mddev)) {
+ 		/*
+ 		 * On a read-only array we can:
+@@ -9450,8 +9464,10 @@ static void md_start_sync(struct work_struct *ws)
+ 	if (spares)
+ 		md_bitmap_write_all(mddev->bitmap);
+ 
++	name = test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) ?
++			"reshape" : "resync";
+ 	rcu_assign_pointer(mddev->sync_thread,
+-			   md_register_thread(md_do_sync, mddev, "resync"));
++			   md_register_thread(md_do_sync, mddev, name));
+ 	if (!mddev->sync_thread) {
+ 		pr_warn("%s: could not start resync thread...\n",
+ 			mdname(mddev));
+@@ -9495,6 +9511,20 @@ static void md_start_sync(struct work_struct *ws)
+ 		sysfs_notify_dirent_safe(mddev->sysfs_action);
+ }
+ 
++static void unregister_sync_thread(struct mddev *mddev)
++{
++	if (!test_bit(MD_RECOVERY_DONE, &mddev->recovery)) {
++		/* resync/recovery still happening */
++		clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++		return;
++	}
++
++	if (WARN_ON_ONCE(!mddev->sync_thread))
++		return;
++
++	md_reap_sync_thread(mddev);
++}
++
+ /*
+  * This routine is regularly called by all per-raid-array threads to
+  * deal with generic issues like resync and super-block update.
+@@ -9519,9 +9549,6 @@ static void md_start_sync(struct work_struct *ws)
+  */
+ void md_check_recovery(struct mddev *mddev)
+ {
+-	if (READ_ONCE(mddev->suspended))
+-		return;
+-
+ 	if (mddev->bitmap)
+ 		md_bitmap_daemon_work(mddev);
+ 
+@@ -9535,7 +9562,8 @@ void md_check_recovery(struct mddev *mddev)
+ 	}
+ 
+ 	if (!md_is_rdwr(mddev) &&
+-	    !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))
++	    !test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) &&
++	    !test_bit(MD_RECOVERY_DONE, &mddev->recovery))
+ 		return;
+ 	if ( ! (
+ 		(mddev->sb_flags & ~ (1<<MD_SB_CHANGE_PENDING)) ||
+@@ -9557,8 +9585,7 @@ void md_check_recovery(struct mddev *mddev)
+ 			struct md_rdev *rdev;
+ 
+ 			if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
+-				/* sync_work already queued. */
+-				clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++				unregister_sync_thread(mddev);
+ 				goto unlock;
+ 			}
+ 
+@@ -9621,16 +9648,7 @@ void md_check_recovery(struct mddev *mddev)
+ 		 * still set.
+ 		 */
+ 		if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
+-			if (!test_bit(MD_RECOVERY_DONE, &mddev->recovery)) {
+-				/* resync/recovery still happening */
+-				clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+-				goto unlock;
+-			}
+-
+-			if (WARN_ON_ONCE(!mddev->sync_thread))
+-				goto unlock;
+-
+-			md_reap_sync_thread(mddev);
++			unregister_sync_thread(mddev);
+ 			goto unlock;
+ 		}
+ 
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index a5927e98dc678..b7b0a573e7f8b 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4307,11 +4307,7 @@ static int raid10_run(struct mddev *mddev)
+ 		clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ 		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
+-		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+-		rcu_assign_pointer(mddev->sync_thread,
+-			md_register_thread(md_do_sync, mddev, "reshape"));
+-		if (!mddev->sync_thread)
+-			goto out_free_conf;
++		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	}
+ 
+ 	return 0;
+@@ -4707,16 +4703,8 @@ static int raid10_start_reshape(struct mddev *mddev)
+ 	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ 	clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
+ 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
+-	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+-
+-	rcu_assign_pointer(mddev->sync_thread,
+-			   md_register_thread(md_do_sync, mddev, "reshape"));
+-	if (!mddev->sync_thread) {
+-		ret = -EAGAIN;
+-		goto abort;
+-	}
++	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	conf->reshape_checkpoint = jiffies;
+-	md_wakeup_thread(mddev->sync_thread);
+ 	md_new_event();
+ 	return 0;
+ 
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 26e1e8a5e9419..6fe334bb954ab 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -8002,11 +8002,7 @@ static int raid5_run(struct mddev *mddev)
+ 		clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ 		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ 		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
+-		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+-		rcu_assign_pointer(mddev->sync_thread,
+-			md_register_thread(md_do_sync, mddev, "reshape"));
+-		if (!mddev->sync_thread)
+-			goto abort;
++		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	}
+ 
+ 	/* Ok, everything is just fine now */
+@@ -8585,29 +8581,8 @@ static int raid5_start_reshape(struct mddev *mddev)
+ 	clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
+ 	clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
+ 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
+-	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+-	rcu_assign_pointer(mddev->sync_thread,
+-			   md_register_thread(md_do_sync, mddev, "reshape"));
+-	if (!mddev->sync_thread) {
+-		mddev->recovery = 0;
+-		spin_lock_irq(&conf->device_lock);
+-		write_seqcount_begin(&conf->gen_lock);
+-		mddev->raid_disks = conf->raid_disks = conf->previous_raid_disks;
+-		mddev->new_chunk_sectors =
+-			conf->chunk_sectors = conf->prev_chunk_sectors;
+-		mddev->new_layout = conf->algorithm = conf->prev_algo;
+-		rdev_for_each(rdev, mddev)
+-			rdev->new_data_offset = rdev->data_offset;
+-		smp_wmb();
+-		conf->generation --;
+-		conf->reshape_progress = MaxSector;
+-		mddev->reshape_position = MaxSector;
+-		write_seqcount_end(&conf->gen_lock);
+-		spin_unlock_irq(&conf->device_lock);
+-		return -EAGAIN;
+-	}
++	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	conf->reshape_checkpoint = jiffies;
+-	md_wakeup_thread(mddev->sync_thread);
+ 	md_new_event();
+ 	return 0;
+ }
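
The raid10 and raid5 hunks above share one pattern: instead of each personality calling md_register_thread(..., "reshape") itself and open-coding the error unwinding (the large block deleted from raid5_start_reshape() shows how much of it there was), the personality now only sets MD_RECOVERY_NEEDED, and the common md_start_sync() path registers the thread, picking the "reshape" or "resync" name from MD_RECOVERY_RESHAPE. A toy model of this request-a-flag-and-delegate shape, using ad-hoc flag bits rather than the real MD_RECOVERY_* definitions:

    #include <stdio.h>

    enum { RECOVERY_NEEDED = 1, RECOVERY_RESHAPE = 2 };

    static void start_reshape(unsigned long *recovery)   /* personality */
    {
            *recovery |= RECOVERY_RESHAPE | RECOVERY_NEEDED; /* request only */
    }

    static void check_recovery(unsigned long *recovery)  /* common path */
    {
            if (!(*recovery & RECOVERY_NEEDED))
                    return;
            printf("register %s thread once, in one place\n",
                   (*recovery & RECOVERY_RESHAPE) ? "reshape" : "resync");
    }

    int main(void)
    {
            unsigned long recovery = 0;

            start_reshape(&recovery);
            check_recovery(&recovery);
            return 0;
    }
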
+diff --git a/drivers/misc/open-dice.c b/drivers/misc/open-dice.c
+index 8aea2d070a40c..d279a4f195e2a 100644
+--- a/drivers/misc/open-dice.c
++++ b/drivers/misc/open-dice.c
+@@ -140,7 +140,6 @@ static int __init open_dice_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	*drvdata = (struct open_dice_drvdata){
+-		.lock = __MUTEX_INITIALIZER(drvdata->lock),
+ 		.rmem = rmem,
+ 		.misc = (struct miscdevice){
+ 			.parent	= dev,
+@@ -150,6 +149,7 @@ static int __init open_dice_probe(struct platform_device *pdev)
+ 			.mode	= 0600,
+ 		},
+ 	};
++	mutex_init(&drvdata->lock);
+ 
+ 	/* Index overflow check not needed, misc_register() will fail. */
+ 	snprintf(drvdata->name, sizeof(drvdata->name), DRIVER_NAME"%u", dev_idx++);
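
The open-dice change swaps a static-initializer macro for a runtime init call. __MUTEX_INITIALIZER() is intended for locks in static storage; for an object that is allocated and then assigned from a compound literal, mutex_init() is the supported pattern, and it also sets up the debug/lockdep state that only exists at runtime. A userspace analogue with pthreads, where obj_create() is a hypothetical constructor:

    #include <pthread.h>
    #include <stdlib.h>

    struct obj {
            pthread_mutex_t lock;
            int val;
    };

    static struct obj *obj_create(void)
    {
            struct obj *o = calloc(1, sizeof(*o));

            if (!o)
                    return NULL;
            *o = (struct obj){ .val = 42 };      /* leaves lock zeroed */
            pthread_mutex_init(&o->lock, NULL);  /* runtime init, like mutex_init() */
            return o;
    }

    int main(void)
    {
            struct obj *o = obj_create();

            if (!o)
                    return 1;
            pthread_mutex_lock(&o->lock);
            pthread_mutex_unlock(&o->lock);
            pthread_mutex_destroy(&o->lock);
            free(o);
            return 0;
    }
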
+diff --git a/drivers/net/ethernet/broadcom/asp2/bcmasp.c b/drivers/net/ethernet/broadcom/asp2/bcmasp.c
+index 29b04a274d077..80245c65cc904 100644
+--- a/drivers/net/ethernet/broadcom/asp2/bcmasp.c
++++ b/drivers/net/ethernet/broadcom/asp2/bcmasp.c
+@@ -535,9 +535,6 @@ int bcmasp_netfilt_get_all_active(struct bcmasp_intf *intf, u32 *rule_locs,
+ 	int j = 0, i;
+ 
+ 	for (i = 0; i < NUM_NET_FILTERS; i++) {
+-		if (j == *rule_cnt)
+-			return -EMSGSIZE;
+-
+ 		if (!priv->net_filters[i].claimed ||
+ 		    priv->net_filters[i].port != intf->port)
+ 			continue;
+@@ -547,6 +544,9 @@ int bcmasp_netfilt_get_all_active(struct bcmasp_intf *intf, u32 *rule_locs,
+ 		    priv->net_filters[i - 1].wake_filter)
+ 			continue;
+ 
++		if (j == *rule_cnt)
++			return -EMSGSIZE;
++
+ 		rule_locs[j++] = priv->net_filters[i].fs.location;
+ 	}
+ 
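
The bcmasp fix moves the output-capacity test below the skip conditions: before it, every loop iteration, including entries about to be skipped, was counted against *rule_cnt, so walking a table whose tail holds only non-matching entries could return -EMSGSIZE with room still left in the buffer. A runnable toy version of the corrected loop shape:

    #include <stdio.h>
    #include <errno.h>

    /* Check capacity only when an entry will actually be emitted,
     * after the skip conditions, never before them. */
    static int collect(const int *items, int n, int *out, int cap)
    {
            int j = 0, i;

            for (i = 0; i < n; i++) {
                    if (items[i] == 0)      /* skip non-matching entries */
                            continue;
                    if (j == cap)           /* capacity check after the skips */
                            return -EMSGSIZE;
                    out[j++] = items[i];
            }
            return j;                       /* number of entries emitted */
    }

    int main(void)
    {
            int items[] = { 0, 0, 7, 0, 9 }, out[2];

            printf("%d\n", collect(items, 5, out, 2)); /* 2, not -EMSGSIZE */
            return 0;
    }
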
+diff --git a/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c b/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
+index 53e5428812552..9cae5a3090000 100644
+--- a/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
++++ b/drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
+@@ -1048,6 +1048,9 @@ static int bcmasp_netif_init(struct net_device *dev, bool phy_connect)
+ 			netdev_err(dev, "could not attach to PHY\n");
+ 			goto err_phy_disable;
+ 		}
++
++		/* Indicate that the MAC is responsible for PHY PM */
++		phydev->mac_managed_pm = true;
+ 	} else if (!intf->wolopts) {
+ 		ret = phy_resume(dev->phydev);
+ 		if (ret)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 3784347b6fd88..55639c133dd02 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -437,6 +437,10 @@ static void npc_fixup_vf_rule(struct rvu *rvu, struct npc_mcam *mcam,
+ 			return;
+ 	}
+ 
++	/* AF modifies the given action only if the PF/VF has requested it */
++	if ((entry->action & 0xFULL) != NIX_RX_ACTION_DEFAULT)
++		return;
++
+ 	/* copy VF default entry action to the VF mcam entry */
+ 	rx_action = npc_get_default_entry_action(rvu, mcam, blkaddr,
+ 						 target_func);
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+index d1f7fc8b1b71a..3c066b62e6894 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+@@ -757,6 +757,7 @@ static int mchp_sparx5_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, sparx5);
+ 	sparx5->pdev = pdev;
+ 	sparx5->dev = &pdev->dev;
++	spin_lock_init(&sparx5->tx_lock);
+ 
+ 	/* Do switch core reset if available */
+ 	reset = devm_reset_control_get_optional_shared(&pdev->dev, "switch");
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.h b/drivers/net/ethernet/microchip/sparx5/sparx5_main.h
+index 6f565c0c0c3dc..316fed5f27355 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.h
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.h
+@@ -280,6 +280,7 @@ struct sparx5 {
+ 	int xtr_irq;
+ 	/* Frame DMA */
+ 	int fdma_irq;
++	spinlock_t tx_lock; /* lock for frame transmission */
+ 	struct sparx5_rx rx;
+ 	struct sparx5_tx tx;
+ 	/* PTP */
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+index 6db6ac6a3bbc2..ac7e1cffbcecf 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_packet.c
+@@ -244,10 +244,12 @@ netdev_tx_t sparx5_port_xmit_impl(struct sk_buff *skb, struct net_device *dev)
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
++	spin_lock(&sparx5->tx_lock);
+ 	if (sparx5->fdma_irq > 0)
+ 		ret = sparx5_fdma_xmit(sparx5, ifh, skb);
+ 	else
+ 		ret = sparx5_inject(sparx5, ifh, skb, dev);
++	spin_unlock(&sparx5->tx_lock);
+ 
+ 	if (ret == -EBUSY)
+ 		goto busy;
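
The sparx5 hunks add a classic shared-resource lock: the FDMA path and the register-based injection path both drive the same injection hardware, and ndo_start_xmit can run concurrently on different CPUs, so the two paths are now serialized with tx_lock. A plain spin_lock() rather than spin_lock_irqsave() appears sufficient here because the transmit path does not run in hard-IRQ context. A sketch of the shape, with port_priv, dma_xmit() and pio_xmit() as hypothetical names:

    /* sketch: serialize two alternative paths onto one injection engine */
    static netdev_tx_t xmit(struct sk_buff *skb, struct net_device *ndev)
    {
            struct port_priv *p = netdev_priv(ndev);  /* hypothetical priv */
            int ret;

            spin_lock(&p->tx_lock);
            ret = p->use_dma ? dma_xmit(p, skb) : pio_xmit(p, skb);
            spin_unlock(&p->tx_lock);

            return ret ? NETDEV_TX_BUSY : NETDEV_TX_OK;
    }
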
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index ec34768e054da..e9a1b60ebb503 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -5977,11 +5977,6 @@ static irqreturn_t stmmac_mac_interrupt(int irq, void *dev_id)
+ 	struct net_device *dev = (struct net_device *)dev_id;
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 
+-	if (unlikely(!dev)) {
+-		netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);
+-		return IRQ_NONE;
+-	}
+-
+ 	/* Check if adapter is up */
+ 	if (test_bit(STMMAC_DOWN, &priv->state))
+ 		return IRQ_HANDLED;
+@@ -5997,11 +5992,6 @@ static irqreturn_t stmmac_safety_interrupt(int irq, void *dev_id)
+ 	struct net_device *dev = (struct net_device *)dev_id;
+ 	struct stmmac_priv *priv = netdev_priv(dev);
+ 
+-	if (unlikely(!dev)) {
+-		netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);
+-		return IRQ_NONE;
+-	}
+-
+ 	/* Check if adapter is up */
+ 	if (test_bit(STMMAC_DOWN, &priv->state))
+ 		return IRQ_HANDLED;
+@@ -6023,11 +6013,6 @@ static irqreturn_t stmmac_msi_intr_tx(int irq, void *data)
+ 	dma_conf = container_of(tx_q, struct stmmac_dma_conf, tx_queue[chan]);
+ 	priv = container_of(dma_conf, struct stmmac_priv, dma_conf);
+ 
+-	if (unlikely(!data)) {
+-		netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);
+-		return IRQ_NONE;
+-	}
+-
+ 	/* Check if adapter is up */
+ 	if (test_bit(STMMAC_DOWN, &priv->state))
+ 		return IRQ_HANDLED;
+@@ -6054,11 +6039,6 @@ static irqreturn_t stmmac_msi_intr_rx(int irq, void *data)
+ 	dma_conf = container_of(rx_q, struct stmmac_dma_conf, rx_queue[chan]);
+ 	priv = container_of(dma_conf, struct stmmac_priv, dma_conf);
+ 
+-	if (unlikely(!data)) {
+-		netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);
+-		return IRQ_NONE;
+-	}
+-
+ 	/* Check if adapter is up */
+ 	if (test_bit(STMMAC_DOWN, &priv->state))
+ 		return IRQ_HANDLED;
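
The four deleted stmmac checks were dead code: priv is computed from dev via netdev_priv(), which is plain pointer arithmetic, so the assignment itself never faults, but every later use of priv, including the netdev_err(priv->dev, ...) inside the deleted check, dereferences memory relative to a NULL dev. The check could therefore never report the condition it was testing for. The shape of the removed pattern, as a non-compilable sketch:

    static irqreturn_t handler(int irq, void *dev_id)
    {
            struct net_device *dev = (struct net_device *)dev_id;
            struct stmmac_priv *priv = netdev_priv(dev); /* dev + offset, no fault */

            if (unlikely(!dev)) {                 /* can only be true if... */
                    netdev_err(priv->dev, "...");  /* ...this already faults */
                    return IRQ_NONE;
            }
            /* ... */
    }
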
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index b1919278e931f..2129ae42c7030 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1907,20 +1907,20 @@ static int __init gtp_init(void)
+ 	if (err < 0)
+ 		goto error_out;
+ 
+-	err = genl_register_family(&gtp_genl_family);
++	err = register_pernet_subsys(&gtp_net_ops);
+ 	if (err < 0)
+ 		goto unreg_rtnl_link;
+ 
+-	err = register_pernet_subsys(&gtp_net_ops);
++	err = genl_register_family(&gtp_genl_family);
+ 	if (err < 0)
+-		goto unreg_genl_family;
++		goto unreg_pernet_subsys;
+ 
+ 	pr_info("GTP module loaded (pdp ctx size %zd bytes)\n",
+ 		sizeof(struct pdp_ctx));
+ 	return 0;
+ 
+-unreg_genl_family:
+-	genl_unregister_family(&gtp_genl_family);
++unreg_pernet_subsys:
++	unregister_pernet_subsys(&gtp_net_ops);
+ unreg_rtnl_link:
+ 	rtnl_link_unregister(&gtp_link_ops);
+ error_out:
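
The gtp reordering follows two rules at once: teardown labels must unwind in exact reverse registration order, and the userspace-visible interface (the generic netlink family) should be registered last, since its handlers can run as soon as genl_register_family() returns and they assume the pernet state from register_pernet_subsys() already exists. A runnable toy skeleton; register_backend() and register_frontend() are hypothetical:

    #include <stdio.h>

    static int register_backend(void)  { puts("backend up");  return 0; }
    static int register_frontend(void) { puts("frontend up"); return 0; }
    static void unregister_backend(void) { puts("backend down"); }

    static int demo_init(void)
    {
            int err;

            err = register_backend();   /* internal state first */
            if (err)
                    return err;
            err = register_frontend();  /* externally visible API last */
            if (err)
                    goto unreg_backend;
            return 0;

    unreg_backend:                      /* unwind in reverse order */
            unregister_backend();
            return err;
    }

    int main(void) { return demo_init(); }
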
+diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
+index 4bc05948f772d..a78c692f2d3c5 100644
+--- a/drivers/net/ipa/ipa_interrupt.c
++++ b/drivers/net/ipa/ipa_interrupt.c
+@@ -212,7 +212,7 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
+ 	u32 unit_count;
+ 	u32 unit;
+ 
+-	unit_count = roundup(ipa->endpoint_count, 32);
++	unit_count = DIV_ROUND_UP(ipa->endpoint_count, 32);
+ 	for (unit = 0; unit < unit_count; unit++) {
+ 		const struct reg *reg;
+ 		u32 val;
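
The ipa fix is a units bug: each suspend-status register covers 32 endpoints, so the loop wants the number of 32-endpoint units. roundup(n, 32) instead returns n padded up to a multiple of 32, e.g. 32 iterations for 20 endpoints rather than 1, walking registers that do not exist. The macros below are equivalent to the kernel's definitions, and the demo compiles and runs in userspace:

    #include <stdio.h>

    #define roundup(x, y)      ((((x) + (y) - 1) / (y)) * (y)) /* next multiple */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))         /* unit count   */

    int main(void)
    {
            unsigned int endpoints = 20;

            printf("roundup      = %u\n", roundup(endpoints, 32));      /* 32 */
            printf("DIV_ROUND_UP = %u\n", DIV_ROUND_UP(endpoints, 32)); /*  1 */
            return 0;
    }
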
+diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c
+index 894172a3e15fe..337899c69738e 100644
+--- a/drivers/net/phy/realtek.c
++++ b/drivers/net/phy/realtek.c
+@@ -421,9 +421,11 @@ static int rtl8211f_config_init(struct phy_device *phydev)
+ 				ERR_PTR(ret));
+ 			return ret;
+ 		}
++
++		return genphy_soft_reset(phydev);
+ 	}
+ 
+-	return genphy_soft_reset(phydev);
++	return 0;
+ }
+ 
+ static int rtl821x_suspend(struct phy_device *phydev)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index 480f8edbfd35d..678c4a071869f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -668,7 +668,6 @@ static const struct ieee80211_sband_iftype_data iwl_he_eht_capa[] = {
+ 			.has_eht = true,
+ 			.eht_cap_elem = {
+ 				.mac_cap_info[0] =
+-					IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
+ 					IEEE80211_EHT_MAC_CAP0_OM_CONTROL |
+ 					IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE1 |
+ 					IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE2 |
+@@ -793,7 +792,6 @@ static const struct ieee80211_sband_iftype_data iwl_he_eht_capa[] = {
+ 			.has_eht = true,
+ 			.eht_cap_elem = {
+ 				.mac_cap_info[0] =
+-					IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
+ 					IEEE80211_EHT_MAC_CAP0_OM_CONTROL |
+ 					IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE1 |
+ 					IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE2,
+@@ -1020,8 +1018,7 @@ iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
+ 	if (CSR_HW_REV_TYPE(trans->hw_rev) == IWL_CFG_MAC_TYPE_GL &&
+ 	    iftype_data->eht_cap.has_eht) {
+ 		iftype_data->eht_cap.eht_cap_elem.mac_cap_info[0] &=
+-			~(IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
+-			  IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE1 |
++			~(IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE1 |
+ 			  IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE2);
+ 		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[3] &=
+ 			~(IEEE80211_EHT_PHY_CAP0_PARTIAL_BW_UL_MU_MIMO |
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 1d51925ea67fd..e0f4129c3a8ed 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -221,11 +221,6 @@ static LIST_HEAD(nvme_fc_lport_list);
+ static DEFINE_IDA(nvme_fc_local_port_cnt);
+ static DEFINE_IDA(nvme_fc_ctrl_cnt);
+ 
+-static struct workqueue_struct *nvme_fc_wq;
+-
+-static bool nvme_fc_waiting_to_unload;
+-static DECLARE_COMPLETION(nvme_fc_unload_proceed);
+-
+ /*
+  * These items are short-term. They will eventually be moved into
+  * a generic FC class. See comments in module init.
+@@ -255,8 +250,6 @@ nvme_fc_free_lport(struct kref *ref)
+ 	/* remove from transport list */
+ 	spin_lock_irqsave(&nvme_fc_lock, flags);
+ 	list_del(&lport->port_list);
+-	if (nvme_fc_waiting_to_unload && list_empty(&nvme_fc_lport_list))
+-		complete(&nvme_fc_unload_proceed);
+ 	spin_unlock_irqrestore(&nvme_fc_lock, flags);
+ 
+ 	ida_free(&nvme_fc_local_port_cnt, lport->localport.port_num);
+@@ -3896,10 +3889,6 @@ static int __init nvme_fc_init_module(void)
+ {
+ 	int ret;
+ 
+-	nvme_fc_wq = alloc_workqueue("nvme_fc_wq", WQ_MEM_RECLAIM, 0);
+-	if (!nvme_fc_wq)
+-		return -ENOMEM;
+-
+ 	/*
+ 	 * NOTE:
+ 	 * It is expected that in the future the kernel will combine
+@@ -3917,7 +3906,7 @@ static int __init nvme_fc_init_module(void)
+ 	ret = class_register(&fc_class);
+ 	if (ret) {
+ 		pr_err("couldn't register class fc\n");
+-		goto out_destroy_wq;
++		return ret;
+ 	}
+ 
+ 	/*
+@@ -3941,8 +3930,6 @@ static int __init nvme_fc_init_module(void)
+ 	device_destroy(&fc_class, MKDEV(0, 0));
+ out_destroy_class:
+ 	class_unregister(&fc_class);
+-out_destroy_wq:
+-	destroy_workqueue(nvme_fc_wq);
+ 
+ 	return ret;
+ }
+@@ -3962,45 +3949,23 @@ nvme_fc_delete_controllers(struct nvme_fc_rport *rport)
+ 	spin_unlock(&rport->lock);
+ }
+ 
+-static void
+-nvme_fc_cleanup_for_unload(void)
++static void __exit nvme_fc_exit_module(void)
+ {
+ 	struct nvme_fc_lport *lport;
+ 	struct nvme_fc_rport *rport;
+-
+-	list_for_each_entry(lport, &nvme_fc_lport_list, port_list) {
+-		list_for_each_entry(rport, &lport->endp_list, endp_list) {
+-			nvme_fc_delete_controllers(rport);
+-		}
+-	}
+-}
+-
+-static void __exit nvme_fc_exit_module(void)
+-{
+ 	unsigned long flags;
+-	bool need_cleanup = false;
+ 
+ 	spin_lock_irqsave(&nvme_fc_lock, flags);
+-	nvme_fc_waiting_to_unload = true;
+-	if (!list_empty(&nvme_fc_lport_list)) {
+-		need_cleanup = true;
+-		nvme_fc_cleanup_for_unload();
+-	}
++	list_for_each_entry(lport, &nvme_fc_lport_list, port_list)
++		list_for_each_entry(rport, &lport->endp_list, endp_list)
++			nvme_fc_delete_controllers(rport);
+ 	spin_unlock_irqrestore(&nvme_fc_lock, flags);
+-	if (need_cleanup) {
+-		pr_info("%s: waiting for ctlr deletes\n", __func__);
+-		wait_for_completion(&nvme_fc_unload_proceed);
+-		pr_info("%s: ctrl deletes complete\n", __func__);
+-	}
++	flush_workqueue(nvme_delete_wq);
+ 
+ 	nvmf_unregister_transport(&nvme_fc_transport);
+ 
+-	ida_destroy(&nvme_fc_local_port_cnt);
+-	ida_destroy(&nvme_fc_ctrl_cnt);
+-
+ 	device_destroy(&fc_class, MKDEV(0, 0));
+ 	class_unregister(&fc_class);
+-	destroy_workqueue(nvme_fc_wq);
+ }
+ 
+ module_init(nvme_fc_init_module);
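
The nvme-fc unload rework deletes a whole synchronization mechanism: previously module exit set nvme_fc_waiting_to_unload and blocked on a completion that the last lport free signalled. The replacement queues every remaining controller deletion and then calls flush_workqueue(nvme_delete_wq), which already provides the "wait until all queued deletes have run" guarantee, so the flag, the completion, and the private nvme_fc_wq can all go. The shape of the simplification, as a sketch with a hypothetical helper:

    static void exit_module(void)
    {
            schedule_all_ctrl_deletes();      /* hypothetical; queues work */
            flush_workqueue(nvme_delete_wq);  /* waits for them to finish */
            /* ... unregister transport and tear down in reverse order ... */
    }
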
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index bd59990b52501..666130878ee0f 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -111,6 +111,8 @@ struct nvmet_fc_tgtport {
+ 	struct nvmet_fc_port_entry	*pe;
+ 	struct kref			ref;
+ 	u32				max_sg_cnt;
++
++	struct work_struct		put_work;
+ };
+ 
+ struct nvmet_fc_port_entry {
+@@ -145,7 +147,6 @@ struct nvmet_fc_tgt_queue {
+ 	struct list_head		avail_defer_list;
+ 	struct workqueue_struct		*work_q;
+ 	struct kref			ref;
+-	struct rcu_head			rcu;
+ 	/* array of fcp_iods */
+ 	struct nvmet_fc_fcp_iod		fod[] __counted_by(sqsize);
+ } __aligned(sizeof(unsigned long long));
+@@ -166,10 +167,9 @@ struct nvmet_fc_tgt_assoc {
+ 	struct nvmet_fc_hostport	*hostport;
+ 	struct nvmet_fc_ls_iod		*rcv_disconn;
+ 	struct list_head		a_list;
+-	struct nvmet_fc_tgt_queue __rcu	*queues[NVMET_NR_QUEUES + 1];
++	struct nvmet_fc_tgt_queue	*queues[NVMET_NR_QUEUES + 1];
+ 	struct kref			ref;
+ 	struct work_struct		del_work;
+-	struct rcu_head			rcu;
+ };
+ 
+ 
+@@ -249,6 +249,13 @@ static int nvmet_fc_tgt_a_get(struct nvmet_fc_tgt_assoc *assoc);
+ static void nvmet_fc_tgt_q_put(struct nvmet_fc_tgt_queue *queue);
+ static int nvmet_fc_tgt_q_get(struct nvmet_fc_tgt_queue *queue);
+ static void nvmet_fc_tgtport_put(struct nvmet_fc_tgtport *tgtport);
++static void nvmet_fc_put_tgtport_work(struct work_struct *work)
++{
++	struct nvmet_fc_tgtport *tgtport =
++		container_of(work, struct nvmet_fc_tgtport, put_work);
++
++	nvmet_fc_tgtport_put(tgtport);
++}
+ static int nvmet_fc_tgtport_get(struct nvmet_fc_tgtport *tgtport);
+ static void nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+ 					struct nvmet_fc_fcp_iod *fod);
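
The new nvmet_fc_put_tgtport_work() above introduces a deferred-put pattern: a kref put that may drop the last reference, and hence run a potentially heavy release function, is not safe from every calling context (here the LS request completion path; see the queue_work() added to __nvmet_fc_finish_ls_req() below). Queueing a small work item that performs the put moves the release into workqueue context. A generic sketch, with obj and obj_put() as hypothetical names:

    /* sketch: defer a possibly-final reference drop to process context */
    static void obj_put_work(struct work_struct *work)
    {
            struct obj *o = container_of(work, struct obj, put_work);

            obj_put(o);     /* may free o; safe here in the workqueue */
    }

    static void obj_put_async(struct obj *o)
    {
            queue_work(system_wq, &o->put_work);  /* callable under locks */
    }
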
+@@ -360,7 +367,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
+ 
+ 	if (!lsop->req_queued) {
+ 		spin_unlock_irqrestore(&tgtport->lock, flags);
+-		return;
++		goto out_putwork;
+ 	}
+ 
+ 	list_del(&lsop->lsreq_list);
+@@ -373,7 +380,8 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
+ 				  (lsreq->rqstlen + lsreq->rsplen),
+ 				  DMA_BIDIRECTIONAL);
+ 
+-	nvmet_fc_tgtport_put(tgtport);
++out_putwork:
++	queue_work(nvmet_wq, &tgtport->put_work);
+ }
+ 
+ static int
+@@ -802,14 +810,11 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc,
+ 	if (!queue)
+ 		return NULL;
+ 
+-	if (!nvmet_fc_tgt_a_get(assoc))
+-		goto out_free_queue;
+-
+ 	queue->work_q = alloc_workqueue("ntfc%d.%d.%d", 0, 0,
+ 				assoc->tgtport->fc_target_port.port_num,
+ 				assoc->a_id, qid);
+ 	if (!queue->work_q)
+-		goto out_a_put;
++		goto out_free_queue;
+ 
+ 	queue->qid = qid;
+ 	queue->sqsize = sqsize;
+@@ -831,15 +836,13 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *assoc,
+ 		goto out_fail_iodlist;
+ 
+ 	WARN_ON(assoc->queues[qid]);
+-	rcu_assign_pointer(assoc->queues[qid], queue);
++	assoc->queues[qid] = queue;
+ 
+ 	return queue;
+ 
+ out_fail_iodlist:
+ 	nvmet_fc_destroy_fcp_iodlist(assoc->tgtport, queue);
+ 	destroy_workqueue(queue->work_q);
+-out_a_put:
+-	nvmet_fc_tgt_a_put(assoc);
+ out_free_queue:
+ 	kfree(queue);
+ 	return NULL;
+@@ -852,15 +855,11 @@ nvmet_fc_tgt_queue_free(struct kref *ref)
+ 	struct nvmet_fc_tgt_queue *queue =
+ 		container_of(ref, struct nvmet_fc_tgt_queue, ref);
+ 
+-	rcu_assign_pointer(queue->assoc->queues[queue->qid], NULL);
+-
+ 	nvmet_fc_destroy_fcp_iodlist(queue->assoc->tgtport, queue);
+ 
+-	nvmet_fc_tgt_a_put(queue->assoc);
+-
+ 	destroy_workqueue(queue->work_q);
+ 
+-	kfree_rcu(queue, rcu);
++	kfree(queue);
+ }
+ 
+ static void
+@@ -969,7 +968,7 @@ nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
+ 		if (association_id == assoc->association_id) {
+-			queue = rcu_dereference(assoc->queues[qid]);
++			queue = assoc->queues[qid];
+ 			if (queue &&
+ 			    (!atomic_read(&queue->connected) ||
+ 			     !nvmet_fc_tgt_q_get(queue)))
+@@ -1078,8 +1077,6 @@ nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+ 		/* new allocation not needed */
+ 		kfree(newhost);
+ 		newhost = match;
+-		/* no new allocation - release reference */
+-		nvmet_fc_tgtport_put(tgtport);
+ 	} else {
+ 		newhost->tgtport = tgtport;
+ 		newhost->hosthandle = hosthandle;
+@@ -1094,13 +1091,28 @@ nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+ }
+ 
+ static void
+-nvmet_fc_delete_assoc(struct work_struct *work)
++nvmet_fc_delete_assoc(struct nvmet_fc_tgt_assoc *assoc)
++{
++	nvmet_fc_delete_target_assoc(assoc);
++	nvmet_fc_tgt_a_put(assoc);
++}
++
++static void
++nvmet_fc_delete_assoc_work(struct work_struct *work)
+ {
+ 	struct nvmet_fc_tgt_assoc *assoc =
+ 		container_of(work, struct nvmet_fc_tgt_assoc, del_work);
++	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
+ 
+-	nvmet_fc_delete_target_assoc(assoc);
+-	nvmet_fc_tgt_a_put(assoc);
++	nvmet_fc_delete_assoc(assoc);
++	nvmet_fc_tgtport_put(tgtport);
++}
++
++static void
++nvmet_fc_schedule_delete_assoc(struct nvmet_fc_tgt_assoc *assoc)
++{
++	nvmet_fc_tgtport_get(assoc->tgtport);
++	queue_work(nvmet_wq, &assoc->del_work);
+ }
+ 
+ static struct nvmet_fc_tgt_assoc *
+@@ -1112,6 +1124,9 @@ nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+ 	int idx;
+ 	bool needrandom = true;
+ 
++	if (!tgtport->pe)
++		return NULL;
++
+ 	assoc = kzalloc(sizeof(*assoc), GFP_KERNEL);
+ 	if (!assoc)
+ 		return NULL;
+@@ -1131,7 +1146,7 @@ nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+ 	assoc->a_id = idx;
+ 	INIT_LIST_HEAD(&assoc->a_list);
+ 	kref_init(&assoc->ref);
+-	INIT_WORK(&assoc->del_work, nvmet_fc_delete_assoc);
++	INIT_WORK(&assoc->del_work, nvmet_fc_delete_assoc_work);
+ 	atomic_set(&assoc->terminating, 0);
+ 
+ 	while (needrandom) {
+@@ -1172,13 +1187,18 @@ nvmet_fc_target_assoc_free(struct kref *ref)
+ 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
+ 	struct nvmet_fc_ls_iod	*oldls;
+ 	unsigned long flags;
++	int i;
++
++	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
++		if (assoc->queues[i])
++			nvmet_fc_delete_target_queue(assoc->queues[i]);
++	}
+ 
+ 	/* Send Disconnect now that all i/o has completed */
+ 	nvmet_fc_xmt_disconnect_assoc(assoc);
+ 
+ 	nvmet_fc_free_hostport(assoc->hostport);
+ 	spin_lock_irqsave(&tgtport->lock, flags);
+-	list_del_rcu(&assoc->a_list);
+ 	oldls = assoc->rcv_disconn;
+ 	spin_unlock_irqrestore(&tgtport->lock, flags);
+ 	/* if pending Rcv Disconnect Association LS, send rsp now */
+@@ -1188,8 +1208,8 @@ nvmet_fc_target_assoc_free(struct kref *ref)
+ 	dev_info(tgtport->dev,
+ 		"{%d:%d} Association freed\n",
+ 		tgtport->fc_target_port.port_num, assoc->a_id);
+-	kfree_rcu(assoc, rcu);
+ 	nvmet_fc_tgtport_put(tgtport);
++	kfree(assoc);
+ }
+ 
+ static void
+@@ -1208,7 +1228,7 @@ static void
+ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
+ {
+ 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
+-	struct nvmet_fc_tgt_queue *queue;
++	unsigned long flags;
+ 	int i, terminating;
+ 
+ 	terminating = atomic_xchg(&assoc->terminating, 1);
+@@ -1217,29 +1237,21 @@ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
+ 	if (terminating)
+ 		return;
+ 
++	spin_lock_irqsave(&tgtport->lock, flags);
++	list_del_rcu(&assoc->a_list);
++	spin_unlock_irqrestore(&tgtport->lock, flags);
+ 
+-	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
+-		rcu_read_lock();
+-		queue = rcu_dereference(assoc->queues[i]);
+-		if (!queue) {
+-			rcu_read_unlock();
+-			continue;
+-		}
++	synchronize_rcu();
+ 
+-		if (!nvmet_fc_tgt_q_get(queue)) {
+-			rcu_read_unlock();
+-			continue;
+-		}
+-		rcu_read_unlock();
+-		nvmet_fc_delete_target_queue(queue);
+-		nvmet_fc_tgt_q_put(queue);
++	/* ensure all in-flight I/Os have been processed */
++	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
++		if (assoc->queues[i])
++			flush_workqueue(assoc->queues[i]->work_q);
+ 	}
+ 
+ 	dev_info(tgtport->dev,
+ 		"{%d:%d} Association deleted\n",
+ 		tgtport->fc_target_port.port_num, assoc->a_id);
+-
+-	nvmet_fc_tgt_a_put(assoc);
+ }
+ 
+ static struct nvmet_fc_tgt_assoc *
+@@ -1415,6 +1427,7 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
+ 	kref_init(&newrec->ref);
+ 	ida_init(&newrec->assoc_cnt);
+ 	newrec->max_sg_cnt = template->max_sgl_segments;
++	INIT_WORK(&newrec->put_work, nvmet_fc_put_tgtport_work);
+ 
+ 	ret = nvmet_fc_alloc_ls_iodlist(newrec);
+ 	if (ret) {
+@@ -1492,9 +1505,8 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
+ 	list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
+ 		if (!nvmet_fc_tgt_a_get(assoc))
+ 			continue;
+-		if (!queue_work(nvmet_wq, &assoc->del_work))
+-			/* already deleting - release local reference */
+-			nvmet_fc_tgt_a_put(assoc);
++		nvmet_fc_schedule_delete_assoc(assoc);
++		nvmet_fc_tgt_a_put(assoc);
+ 	}
+ 	rcu_read_unlock();
+ }
+@@ -1547,9 +1559,8 @@ nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
+ 			continue;
+ 		assoc->hostport->invalid = 1;
+ 		noassoc = false;
+-		if (!queue_work(nvmet_wq, &assoc->del_work))
+-			/* already deleting - release local reference */
+-			nvmet_fc_tgt_a_put(assoc);
++		nvmet_fc_schedule_delete_assoc(assoc);
++		nvmet_fc_tgt_a_put(assoc);
+ 	}
+ 	spin_unlock_irqrestore(&tgtport->lock, flags);
+ 
+@@ -1581,7 +1592,7 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
+ 
+ 		rcu_read_lock();
+ 		list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
+-			queue = rcu_dereference(assoc->queues[0]);
++			queue = assoc->queues[0];
+ 			if (queue && queue->nvme_sq.ctrl == ctrl) {
+ 				if (nvmet_fc_tgt_a_get(assoc))
+ 					found_ctrl = true;
+@@ -1593,9 +1604,8 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
+ 		nvmet_fc_tgtport_put(tgtport);
+ 
+ 		if (found_ctrl) {
+-			if (!queue_work(nvmet_wq, &assoc->del_work))
+-				/* already deleting - release local reference */
+-				nvmet_fc_tgt_a_put(assoc);
++			nvmet_fc_schedule_delete_assoc(assoc);
++			nvmet_fc_tgt_a_put(assoc);
+ 			return;
+ 		}
+ 
+@@ -1625,6 +1635,8 @@ nvmet_fc_unregister_targetport(struct nvmet_fc_target_port *target_port)
+ 	/* terminate any outstanding associations */
+ 	__nvmet_fc_free_assocs(tgtport);
+ 
++	flush_workqueue(nvmet_wq);
++
+ 	/*
+ 	 * should terminate LS's as well. However, LS's will be generated
+ 	 * at the tail end of association termination, so they likely don't
+@@ -1870,9 +1882,6 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
+ 				sizeof(struct fcnvme_ls_disconnect_assoc_acc)),
+ 			FCNVME_LS_DISCONNECT_ASSOC);
+ 
+-	/* release get taken in nvmet_fc_find_target_assoc */
+-	nvmet_fc_tgt_a_put(assoc);
+-
+ 	/*
+ 	 * The rules for LS response says the response cannot
+ 	 * go back until ABTS's have been sent for all outstanding
+@@ -1887,8 +1896,6 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
+ 	assoc->rcv_disconn = iod;
+ 	spin_unlock_irqrestore(&tgtport->lock, flags);
+ 
+-	nvmet_fc_delete_target_assoc(assoc);
+-
+ 	if (oldls) {
+ 		dev_info(tgtport->dev,
+ 			"{%d:%d} Multiple Disconnect Association LS's "
+@@ -1904,6 +1911,9 @@ nvmet_fc_ls_disconnect(struct nvmet_fc_tgtport *tgtport,
+ 		nvmet_fc_xmt_ls_rsp(tgtport, oldls);
+ 	}
+ 
++	nvmet_fc_schedule_delete_assoc(assoc);
++	nvmet_fc_tgt_a_put(assoc);
++
+ 	return false;
+ }
+ 
+@@ -2540,8 +2550,9 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+ 
+ 	fod->req.cmd = &fod->cmdiubuf.sqe;
+ 	fod->req.cqe = &fod->rspiubuf.cqe;
+-	if (tgtport->pe)
+-		fod->req.port = tgtport->pe->port;
++	if (!tgtport->pe)
++		goto transport_error;
++	fod->req.port = tgtport->pe->port;
+ 
+ 	/* clear any response payload */
+ 	memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
+@@ -2902,6 +2913,9 @@ nvmet_fc_remove_port(struct nvmet_port *port)
+ 
+ 	nvmet_fc_portentry_unbind(pe);
+ 
++	/* terminate any outstanding associations */
++	__nvmet_fc_free_assocs(pe->tgtport);
++
+ 	kfree(pe);
+ }
+ 
+@@ -2933,6 +2947,9 @@ static int __init nvmet_fc_init_module(void)
+ 
+ static void __exit nvmet_fc_exit_module(void)
+ {
++	/* ensure any shutdown operations, e.g. controller deletes, have finished */
++	flush_workqueue(nvmet_wq);
++
+ 	/* sanity check - all lports should be removed */
+ 	if (!list_empty(&nvmet_fc_target_list))
+ 		pr_warn("%s: targetport list not empty\n", __func__);
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index c65a73433c05f..e6d4226827b52 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -358,7 +358,7 @@ fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
+ 	if (!rport->targetport) {
+ 		tls_req->status = -ECONNREFUSED;
+ 		spin_lock(&rport->lock);
+-		list_add_tail(&rport->ls_list, &tls_req->ls_list);
++		list_add_tail(&tls_req->ls_list, &rport->ls_list);
+ 		spin_unlock(&rport->lock);
+ 		queue_work(nvmet_wq, &rport->ls_work);
+ 		return ret;
+@@ -391,7 +391,7 @@ fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
+ 	if (remoteport) {
+ 		rport = remoteport->private;
+ 		spin_lock(&rport->lock);
+-		list_add_tail(&rport->ls_list, &tls_req->ls_list);
++		list_add_tail(&tls_req->ls_list, &rport->ls_list);
+ 		spin_unlock(&rport->lock);
+ 		queue_work(nvmet_wq, &rport->ls_work);
+ 	}
+@@ -446,7 +446,7 @@ fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
+ 	if (!tport->remoteport) {
+ 		tls_req->status = -ECONNREFUSED;
+ 		spin_lock(&tport->lock);
+-		list_add_tail(&tport->ls_list, &tls_req->ls_list);
++		list_add_tail(&tls_req->ls_list, &tport->ls_list);
+ 		spin_unlock(&tport->lock);
+ 		queue_work(nvmet_wq, &tport->ls_work);
+ 		return ret;
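
All three fcloop changes fix the same swapped-argument bug: list_add_tail(new, head) takes the entry to insert first and the list head second; with the arguments reversed, the port's list head gets spliced onto the request's private node, so the request never appears on the port's list and the queued ls_work finds nothing to process. A self-contained demo with a minimal re-implementation matching the kernel list's semantics:

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    #define LIST_HEAD_INIT(n) { &(n), &(n) }

    /* Same argument order as the kernel's: new entry first, head second. */
    static void list_add_tail(struct list_head *new, struct list_head *head)
    {
            new->prev = head->prev;
            new->next = head;
            head->prev->next = new;
            head->prev = new;
    }

    int main(void)
    {
            struct list_head q = LIST_HEAD_INIT(q), a = LIST_HEAD_INIT(a);

            list_add_tail(&a, &q);   /* correct: request a queued on q */
            /* list_add_tail(&q, &a) would instead splice the queue head
             * onto the request node, so q would appear empty to consumers. */
            printf("queued: %s\n", q.next == &a ? "yes" : "no");
            return 0;
    }
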
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 80db64a4b09f6..bb42ae42b1c6e 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -2204,6 +2204,7 @@ static void __exit nvmet_tcp_exit(void)
+ 	flush_workqueue(nvmet_wq);
+ 
+ 	destroy_workqueue(nvmet_tcp_wq);
++	ida_destroy(&nvmet_tcp_queue_ida);
+ }
+ 
+ module_init(nvmet_tcp_init);
+diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
+index c8be056c248de..cfd84a899c82d 100644
+--- a/drivers/pci/msi/irqdomain.c
++++ b/drivers/pci/msi/irqdomain.c
+@@ -61,7 +61,7 @@ static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc)
+ 
+ 	return (irq_hw_number_t)desc->msi_index |
+ 		pci_dev_id(dev) << 11 |
+-		(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27;
++		((irq_hw_number_t)(pci_domain_nr(dev->bus) & 0xFFFFFFFF)) << 27;
+ }
+ 
+ static void pci_msi_domain_set_desc(msi_alloc_info_t *arg,
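
The MSI hwirq fix is an integer-width bug: pci_domain_nr() returns int, so (domain & 0xFFFFFFFF) << 27 is evaluated in 32 bits and every domain bit above bit 4 is shifted out before the value is widened to irq_hw_number_t. Casting first performs the shift in 64 bits. A runnable demonstration, assuming an LP64 host where unsigned long is 64-bit:

    #include <stdio.h>

    typedef unsigned long irq_hw_number_t;  /* 64-bit on the affected targets */

    int main(void)
    {
            unsigned int domain = 0x10000;  /* PCI domain with bits above bit 4 */

            irq_hw_number_t narrow = domain << 27;                  /* 32-bit shift */
            irq_hw_number_t wide = (irq_hw_number_t)domain << 27;   /* widened first */

            printf("narrow=%#lx wide=%#lx\n", narrow, wide); /* 0 vs 0x80000000000 */
            return 0;
    }
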
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index 5c683b4eaf10a..f39b7b9d2bfea 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -47,6 +47,9 @@
+ /* Message with data needs at least two words (for header & data). */
+ #define MLXBF_TMFIFO_DATA_MIN_WORDS		2
+ 
++/* Tx timeout in milliseconds. */
++#define TMFIFO_TX_TIMEOUT			2000
++
+ /* ACPI UID for BlueField-3. */
+ #define TMFIFO_BF3_UID				1
+ 
+@@ -62,12 +65,14 @@ struct mlxbf_tmfifo;
+  * @drop_desc: dummy desc for packet dropping
+  * @cur_len: processed length of the current descriptor
+  * @rem_len: remaining length of the pending packet
++ * @rem_padding: remaining bytes to send as paddings
+  * @pkt_len: total length of the pending packet
+  * @next_avail: next avail descriptor id
+  * @num: vring size (number of descriptors)
+  * @align: vring alignment size
+  * @index: vring index
+  * @vdev_id: vring virtio id (VIRTIO_ID_xxx)
++ * @tx_timeout: expire time of last tx packet
+  * @fifo: pointer to the tmfifo structure
+  */
+ struct mlxbf_tmfifo_vring {
+@@ -79,12 +84,14 @@ struct mlxbf_tmfifo_vring {
+ 	struct vring_desc drop_desc;
+ 	int cur_len;
+ 	int rem_len;
++	int rem_padding;
+ 	u32 pkt_len;
+ 	u16 next_avail;
+ 	int num;
+ 	int align;
+ 	int index;
+ 	int vdev_id;
++	unsigned long tx_timeout;
+ 	struct mlxbf_tmfifo *fifo;
+ };
+ 
+@@ -819,6 +826,50 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
+ 	return true;
+ }
+ 
++static void mlxbf_tmfifo_check_tx_timeout(struct mlxbf_tmfifo_vring *vring)
++{
++	unsigned long flags;
++
++	/* Only handle Tx timeout for network vdev. */
++	if (vring->vdev_id != VIRTIO_ID_NET)
++		return;
++
++	/* Initialize the timeout or return if not expired. */
++	if (!vring->tx_timeout) {
++		/* Initialize the timeout. */
++		vring->tx_timeout = jiffies +
++			msecs_to_jiffies(TMFIFO_TX_TIMEOUT);
++		return;
++	} else if (time_before(jiffies, vring->tx_timeout)) {
++		/* Not timed out yet. */
++		return;
++	}
++
++	/*
++	 * Drop the packet after the timeout. The outstanding packet is
++	 * released, and the remaining bytes are sent as 0x00 padding for
++	 * recovery. On the peer (host) side, the 0x00 padding bytes are
++	 * either dropped directly, or appended to an existing outstanding
++	 * packet and thus dropped as a corrupted network packet.
++	 */
++	vring->rem_padding = round_up(vring->rem_len, sizeof(u64));
++	mlxbf_tmfifo_release_pkt(vring);
++	vring->cur_len = 0;
++	vring->rem_len = 0;
++	vring->fifo->vring[0] = NULL;
++
++	/*
++	 * Make sure the load/store are in order before
++	 * returning back to virtio.
++	 */
++	virtio_mb(false);
++
++	/* Notify upper layer. */
++	spin_lock_irqsave(&vring->fifo->spin_lock[0], flags);
++	vring_interrupt(0, vring->vq);
++	spin_unlock_irqrestore(&vring->fifo->spin_lock[0], flags);
++}
++
+ /* Rx & Tx processing of a queue. */
+ static void mlxbf_tmfifo_rxtx(struct mlxbf_tmfifo_vring *vring, bool is_rx)
+ {
+@@ -841,6 +892,7 @@ static void mlxbf_tmfifo_rxtx(struct mlxbf_tmfifo_vring *vring, bool is_rx)
+ 		return;
+ 
+ 	do {
++retry:
+ 		/* Get available FIFO space. */
+ 		if (avail == 0) {
+ 			if (is_rx)
+@@ -851,6 +903,17 @@ static void mlxbf_tmfifo_rxtx(struct mlxbf_tmfifo_vring *vring, bool is_rx)
+ 				break;
+ 		}
+ 
++		/* Insert paddings for discarded Tx packet. */
++		if (!is_rx) {
++			vring->tx_timeout = 0;
++			while (vring->rem_padding >= sizeof(u64)) {
++				writeq(0, vring->fifo->tx.data);
++				vring->rem_padding -= sizeof(u64);
++				if (--avail == 0)
++					goto retry;
++			}
++		}
++
+ 		/* Console output always comes from the Tx buffer. */
+ 		if (!is_rx && devid == VIRTIO_ID_CONSOLE) {
+ 			mlxbf_tmfifo_console_tx(fifo, avail);
+@@ -860,6 +923,10 @@ static void mlxbf_tmfifo_rxtx(struct mlxbf_tmfifo_vring *vring, bool is_rx)
+ 		/* Handle one descriptor. */
+ 		more = mlxbf_tmfifo_rxtx_one_desc(vring, is_rx, &avail);
+ 	} while (more);
++
++	/* Check Tx timeout. */
++	if (avail <= 0 && !is_rx)
++		mlxbf_tmfifo_check_tx_timeout(vring);
+ }
+ 
+ /* Handle Rx or Tx queues. */
+diff --git a/drivers/platform/x86/intel/vbtn.c b/drivers/platform/x86/intel/vbtn.c
+index 210b0a81b7ecb..084c355c86f5f 100644
+--- a/drivers/platform/x86/intel/vbtn.c
++++ b/drivers/platform/x86/intel/vbtn.c
+@@ -200,9 +200,6 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
+ 	autorelease = val && (!ke_rel || ke_rel->type == KE_IGNORE);
+ 
+ 	sparse_keymap_report_event(input_dev, event, val, autorelease);
+-
+-	/* Some devices need this to report further events */
+-	acpi_evaluate_object(handle, "VBDL", NULL, NULL);
+ }
+ 
+ /*
+diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
+index 3a396b763c496..ce3e08815a8e6 100644
+--- a/drivers/platform/x86/think-lmi.c
++++ b/drivers/platform/x86/think-lmi.c
+@@ -1009,7 +1009,16 @@ static ssize_t current_value_store(struct kobject *kobj,
+ 		 * Note - this sets the variable and then the password as separate
+ 		 * WMI calls. Function tlmi_save_bios_settings will error if the
+ 		 * password is incorrect.
++		 * Workstations require the opcode to be set before changing the
++		 * attribute.
+ 		 */
++		if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) {
++			ret = tlmi_opcode_setting("WmiOpcodePasswordAdmin",
++						  tlmi_priv.pwd_admin->password);
++			if (ret)
++				goto out;
++		}
++
+ 		set_str = kasprintf(GFP_KERNEL, "%s,%s;", setting->display_name,
+ 				    new_setting);
+ 		if (!set_str) {
+@@ -1021,17 +1030,10 @@ static ssize_t current_value_store(struct kobject *kobj,
+ 		if (ret)
+ 			goto out;
+ 
+-		if (tlmi_priv.save_mode == TLMI_SAVE_BULK) {
++		if (tlmi_priv.save_mode == TLMI_SAVE_BULK)
+ 			tlmi_priv.save_required = true;
+-		} else {
+-			if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) {
+-				ret = tlmi_opcode_setting("WmiOpcodePasswordAdmin",
+-							  tlmi_priv.pwd_admin->password);
+-				if (ret)
+-					goto out;
+-			}
++		else
+ 			ret = tlmi_save_bios_settings("");
+-		}
+ 	} else { /* old non-opcode based authentication method (deprecated) */
+ 		if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) {
+ 			auth_str = kasprintf(GFP_KERNEL, "%s,%s,%s;",
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index c4895e9bc7148..5ecd9d33250d7 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -10308,6 +10308,7 @@ static int convert_dytc_to_profile(int funcmode, int dytcmode,
+ 		return 0;
+ 	default:
+ 		/* Unknown function */
++		pr_debug("unknown function 0x%x\n", funcmode);
+ 		return -EOPNOTSUPP;
+ 	}
+ 	return 0;
+@@ -10493,8 +10494,8 @@ static void dytc_profile_refresh(void)
+ 		return;
+ 
+ 	perfmode = (output >> DYTC_GET_MODE_BIT) & 0xF;
+-	convert_dytc_to_profile(funcmode, perfmode, &profile);
+-	if (profile != dytc_current_profile) {
++	err = convert_dytc_to_profile(funcmode, perfmode, &profile);
++	if (!err && profile != dytc_current_profile) {
+ 		dytc_current_profile = profile;
+ 		platform_profile_notify();
+ 	}
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index 0c67337726984..969477c83e56e 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -81,7 +81,7 @@ static const struct property_entry chuwi_hi8_air_props[] = {
+ };
+ 
+ static const struct ts_dmi_data chuwi_hi8_air_data = {
+-	.acpi_name	= "MSSL1680:00",
++	.acpi_name	= "MSSL1680",
+ 	.properties	= chuwi_hi8_air_props,
+ };
+ 
+@@ -944,6 +944,32 @@ static const struct ts_dmi_data teclast_tbook11_data = {
+ 	.properties	= teclast_tbook11_props,
+ };
+ 
++static const struct property_entry teclast_x16_plus_props[] = {
++	PROPERTY_ENTRY_U32("touchscreen-min-x", 8),
++	PROPERTY_ENTRY_U32("touchscreen-min-y", 14),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 1916),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 1264),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-teclast-x16-plus.fw"),
++	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
++	PROPERTY_ENTRY_BOOL("silead,home-button"),
++	{ }
++};
++
++static const struct ts_dmi_data teclast_x16_plus_data = {
++	.embedded_fw = {
++		.name	= "silead/gsl3692-teclast-x16-plus.fw",
++		.prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 },
++		.length	= 43560,
++		.sha256	= { 0x9d, 0xb0, 0x3d, 0xf1, 0x00, 0x3c, 0xb5, 0x25,
++			    0x62, 0x8a, 0xa0, 0x93, 0x4b, 0xe0, 0x4e, 0x75,
++			    0xd1, 0x27, 0xb1, 0x65, 0x3c, 0xba, 0xa5, 0x0f,
++			    0xcd, 0xb4, 0xbe, 0x00, 0xbb, 0xf6, 0x43, 0x29 },
++	},
++	.acpi_name	= "MSSL1680:00",
++	.properties	= teclast_x16_plus_props,
++};
++
+ static const struct property_entry teclast_x3_plus_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 1980),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 1500),
+@@ -1612,6 +1638,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_SKU, "E5A6_A1"),
+ 		},
+ 	},
++	{
++		/* Teclast X16 Plus */
++		.driver_data = (void *)&teclast_x16_plus_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TECLAST"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
++			DMI_MATCH(DMI_PRODUCT_SKU, "D3A5_A1"),
++		},
++	},
+ 	{
+ 		/* Teclast X3 Plus */
+ 		.driver_data = (void *)&teclast_x3_plus_data,
+@@ -1786,7 +1821,7 @@ static void ts_dmi_add_props(struct i2c_client *client)
+ 	int error;
+ 
+ 	if (has_acpi_companion(dev) &&
+-	    !strncmp(ts_data->acpi_name, client->name, I2C_NAME_SIZE)) {
++	    strstarts(client->name, ts_data->acpi_name)) {
+ 		error = device_create_managed_software_node(dev, ts_data->properties, NULL);
+ 		if (error)
+ 			dev_err(dev, "failed to add properties: %d\n", error);
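
ACPI-enumerated I2C clients are named as HID plus an instance suffix ("MSSL1680:00", "MSSL1680:01", ...). Matching with strstarts() on the bare HID covers every instance, which is also why the chuwi_hi8_air entry above can drop its ":00"; the old strncmp(..., I2C_NAME_SIZE) compared the full string and so required each table entry to hard-code one specific suffix. strstarts() is a one-liner; the demo below re-creates it in userspace:

    #include <stdio.h>
    #include <string.h>

    /* Same definition shape as the kernel helper. */
    static int strstarts(const char *str, const char *prefix)
    {
            return strncmp(str, prefix, strlen(prefix)) == 0;
    }

    int main(void)
    {
            printf("%d\n", strstarts("MSSL1680:00", "MSSL1680")); /* 1 */
            printf("%d\n", strstarts("MSSL1680:01", "MSSL1680")); /* 1: new instance */
            return 0;
    }
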
+diff --git a/drivers/platform/x86/x86-android-tablets/core.c b/drivers/platform/x86/x86-android-tablets/core.c
+index b55957bde0342..ac2e8caaa8526 100644
+--- a/drivers/platform/x86/x86-android-tablets/core.c
++++ b/drivers/platform/x86/x86-android-tablets/core.c
+@@ -113,6 +113,9 @@ int x86_acpi_irq_helper_get(const struct x86_acpi_irq_data *data)
+ 		if (irq_type != IRQ_TYPE_NONE && irq_type != irq_get_trigger_type(irq))
+ 			irq_set_irq_type(irq, irq_type);
+ 
++		if (data->free_gpio)
++			devm_gpiod_put(&x86_android_tablet_device->dev, gpiod);
++
+ 		return irq;
+ 	case X86_ACPI_IRQ_TYPE_PMIC:
+ 		status = acpi_get_handle(NULL, data->chip, &handle);
+diff --git a/drivers/platform/x86/x86-android-tablets/lenovo.c b/drivers/platform/x86/x86-android-tablets/lenovo.c
+index c1e68211283fb..50ac5afa0027e 100644
+--- a/drivers/platform/x86/x86-android-tablets/lenovo.c
++++ b/drivers/platform/x86/x86-android-tablets/lenovo.c
+@@ -96,6 +96,7 @@ static const struct x86_i2c_client_info lenovo_yb1_x90_i2c_clients[] __initconst
+ 			.trigger = ACPI_EDGE_SENSITIVE,
+ 			.polarity = ACPI_ACTIVE_LOW,
+ 			.con_id = "goodix_ts_irq",
++			.free_gpio = true,
+ 		},
+ 	}, {
+ 		/* Wacom Digitizer in keyboard half */
+diff --git a/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h b/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
+index 9d2fb7fded6d7..136a2b359dd22 100644
+--- a/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
++++ b/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
+@@ -38,6 +38,7 @@ struct x86_acpi_irq_data {
+ 	int index;
+ 	int trigger;  /* ACPI_EDGE_SENSITIVE / ACPI_LEVEL_SENSITIVE */
+ 	int polarity; /* ACPI_ACTIVE_HIGH / ACPI_ACTIVE_LOW / ACPI_ACTIVE_BOTH */
++	bool free_gpio; /* Release GPIO after getting IRQ (for TYPE_GPIOINT) */
+ 	const char *con_id;
+ };
+ 
+diff --git a/drivers/regulator/max5970-regulator.c b/drivers/regulator/max5970-regulator.c
+index bc88a40a88d4c..830a1c4cd7057 100644
+--- a/drivers/regulator/max5970-regulator.c
++++ b/drivers/regulator/max5970-regulator.c
+@@ -392,7 +392,7 @@ static int max597x_regmap_read_clear(struct regmap *map, unsigned int reg,
+ 		return ret;
+ 
+ 	if (*val)
+-		return regmap_write(map, reg, *val);
++		return regmap_write(map, reg, 0);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/regulator/pwm-regulator.c b/drivers/regulator/pwm-regulator.c
+index 2aff6db748e2c..e33d10df7a763 100644
+--- a/drivers/regulator/pwm-regulator.c
++++ b/drivers/regulator/pwm-regulator.c
+@@ -158,6 +158,9 @@ static int pwm_regulator_get_voltage(struct regulator_dev *rdev)
+ 	pwm_get_state(drvdata->pwm, &pstate);
+ 
+ 	voltage = pwm_get_relative_duty_cycle(&pstate, duty_unit);
++	if (voltage < min(max_uV_duty, min_uV_duty) ||
++	    voltage > max(max_uV_duty, min_uV_duty))
++		return -ENOTRECOVERABLE;
+ 
+ 	/*
+ 	 * The dutycycle for min_uV might be greater than the one for max_uV.
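
The added range check handles a PWM left by the bootloader at a duty cycle outside the regulator's calibrated window: mapping such a duty back to a voltage would report a level the regulator cannot actually produce, so the driver now returns -ENOTRECOVERABLE to tell the regulator core the current state is unknown. Because the duty for min_uV may exceed the duty for max_uV (the mapping can be inverted, as the comment above notes), the bounds are taken with min()/max(). A runnable sketch of the check:

    #include <stdio.h>
    #include <errno.h>

    #define min(a, b) ((a) < (b) ? (a) : (b))
    #define max(a, b) ((a) > (b) ? (a) : (b))

    /* Validate against the lower/higher of the two bounds, whichever
     * way around the duty window happens to be. */
    static int check_duty(int duty, int min_uV_duty, int max_uV_duty)
    {
            if (duty < min(max_uV_duty, min_uV_duty) ||
                duty > max(max_uV_duty, min_uV_duty))
                    return -ENOTRECOVERABLE;
            return 0;
    }

    int main(void)
    {
            /* inverted window: 80% duty = min_uV, 20% duty = max_uV */
            printf("%d\n", check_duty(95, 80, 20)); /* out of range */
            return 0;
    }
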
+diff --git a/drivers/s390/cio/device_ops.c b/drivers/s390/cio/device_ops.c
+index c533d1dadc6bb..a5dba3829769c 100644
+--- a/drivers/s390/cio/device_ops.c
++++ b/drivers/s390/cio/device_ops.c
+@@ -202,7 +202,8 @@ int ccw_device_start_timeout_key(struct ccw_device *cdev, struct ccw1 *cpa,
+ 		return -EINVAL;
+ 	if (cdev->private->state == DEV_STATE_NOT_OPER)
+ 		return -ENODEV;
+-	if (cdev->private->state == DEV_STATE_VERIFY) {
++	if (cdev->private->state == DEV_STATE_VERIFY ||
++	    cdev->private->flags.doverify) {
+ 		/* Remember to fake irb when finished. */
+ 		if (!cdev->private->flags.fake_irb) {
+ 			cdev->private->flags.fake_irb = FAKE_CMD_IRB;
+@@ -214,8 +215,7 @@ int ccw_device_start_timeout_key(struct ccw_device *cdev, struct ccw1 *cpa,
+ 	}
+ 	if (cdev->private->state != DEV_STATE_ONLINE ||
+ 	    ((sch->schib.scsw.cmd.stctl & SCSW_STCTL_PRIM_STATUS) &&
+-	     !(sch->schib.scsw.cmd.stctl & SCSW_STCTL_SEC_STATUS)) ||
+-	    cdev->private->flags.doverify)
++	     !(sch->schib.scsw.cmd.stctl & SCSW_STCTL_SEC_STATUS)))
+ 		return -EBUSY;
+ 	ret = cio_set_options (sch, flags);
+ 	if (ret)
+diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
+index addac7fbe37b9..9ce27092729c3 100644
+--- a/drivers/scsi/Kconfig
++++ b/drivers/scsi/Kconfig
+@@ -1270,7 +1270,7 @@ source "drivers/scsi/arm/Kconfig"
+ 
+ config JAZZ_ESP
+ 	bool "MIPS JAZZ FAS216 SCSI support"
+-	depends on MACH_JAZZ && SCSI
++	depends on MACH_JAZZ && SCSI=y
+ 	select SCSI_SPI_ATTRS
+ 	help
+ 	  This is the driver for the onboard SCSI host adapter of MIPS Magnum
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index d26941b131fdb..bf879d81846b6 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -1918,7 +1918,7 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+  *
+  * Returns the number of SGEs added to the SGL.
+  **/
+-static int
++static uint32_t
+ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 		struct sli4_sge *sgl, int datasegcnt,
+ 		struct lpfc_io_buf *lpfc_cmd)
+@@ -1926,8 +1926,8 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 	struct scatterlist *sgde = NULL; /* s/g data entry */
+ 	struct sli4_sge_diseed *diseed = NULL;
+ 	dma_addr_t physaddr;
+-	int i = 0, num_sge = 0, status;
+-	uint32_t reftag;
++	int i = 0, status;
++	uint32_t reftag, num_sge = 0;
+ 	uint8_t txop, rxop;
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+ 	uint32_t rc;
+@@ -2099,7 +2099,7 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+  *
+  * Returns the number of SGEs added to the SGL.
+  **/
+-static int
++static uint32_t
+ lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 		struct sli4_sge *sgl, int datacnt, int protcnt,
+ 		struct lpfc_io_buf *lpfc_cmd)
+@@ -2123,8 +2123,8 @@ lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
+ 	uint32_t rc;
+ #endif
+ 	uint32_t checking = 1;
+-	uint32_t dma_offset = 0;
+-	int num_sge = 0, j = 2;
++	uint32_t dma_offset = 0, num_sge = 0;
++	int j = 2;
+ 	struct sli4_hybrid_sgl *sgl_xtra = NULL;
+ 
+ 	sgpe = scsi_prot_sglist(sc);
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index 76d369343c7a9..8cad9792a5627 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -328,21 +328,39 @@ static int scsi_vpd_inquiry(struct scsi_device *sdev, unsigned char *buffer,
+ 	return result + 4;
+ }
+ 
++enum scsi_vpd_parameters {
++	SCSI_VPD_HEADER_SIZE = 4,
++	SCSI_VPD_LIST_SIZE = 36,
++};
++
+ static int scsi_get_vpd_size(struct scsi_device *sdev, u8 page)
+ {
+-	unsigned char vpd_header[SCSI_VPD_HEADER_SIZE] __aligned(4);
++	unsigned char vpd[SCSI_VPD_LIST_SIZE] __aligned(4);
+ 	int result;
+ 
+ 	if (sdev->no_vpd_size)
+ 		return SCSI_DEFAULT_VPD_LEN;
+ 
++	/*
++	 * Fetch the supported pages VPD and validate that the requested page
++	 * number is present.
++	 */
++	if (page != 0) {
++		result = scsi_vpd_inquiry(sdev, vpd, 0, sizeof(vpd));
++		if (result < SCSI_VPD_HEADER_SIZE)
++			return 0;
++
++		result -= SCSI_VPD_HEADER_SIZE;
++		if (!memchr(&vpd[SCSI_VPD_HEADER_SIZE], page, result))
++			return 0;
++	}
+ 	/*
+ 	 * Fetch the VPD page header to find out how big the page
+ 	 * is. This is done to prevent problems on legacy devices
+ 	 * which can not handle allocation lengths as large as
+ 	 * potentially requested by the caller.
+ 	 */
+-	result = scsi_vpd_inquiry(sdev, vpd_header, page, sizeof(vpd_header));
++	result = scsi_vpd_inquiry(sdev, vpd, page, SCSI_VPD_HEADER_SIZE);
+ 	if (result < 0)
+ 		return 0;
+ 
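
scsi_get_vpd_size() now validates a requested page against VPD page 0 (Supported VPD Pages) before asking for it: the page-0 payload is simply a list of supported page codes after the 4-byte header, so a memchr() over the payload answers whether the page is advertised. This guards against legacy devices that mishandle INQUIRY for pages they do not support. A userspace model with a hand-built page-0 response:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            /* Fake "Supported VPD Pages" response: 4-byte header, then
             * the advertised page codes (here 0x00, 0x80 and 0xb1). */
            unsigned char vpd[] = { 0x00, 0x00, 0x00, 0x03, 0x00, 0x80, 0xb1 };
            int payload = sizeof(vpd) - 4;  /* minus SCSI_VPD_HEADER_SIZE */
            unsigned char page = 0xb0;

            if (!memchr(&vpd[4], page, payload))
                    printf("page %#x not advertised, skip it\n", page);
            return 0;
    }
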
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 542a4bbb21bce..a12ff43ac8fd0 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3410,6 +3410,24 @@ static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp,
+ 	return true;
+ }
+ 
++static void sd_read_block_zero(struct scsi_disk *sdkp)
++{
++	unsigned int buf_len = sdkp->device->sector_size;
++	char *buffer, cmd[10] = { };
++
++	buffer = kmalloc(buf_len, GFP_KERNEL);
++	if (!buffer)
++		return;
++
++	cmd[0] = READ_10;
++	put_unaligned_be32(0, &cmd[2]); /* Logical block address 0 */
++	put_unaligned_be16(1, &cmd[7]);	/* Transfer 1 logical block */
++
++	scsi_execute_cmd(sdkp->device, cmd, REQ_OP_DRV_IN, buffer, buf_len,
++			 SD_TIMEOUT, sdkp->max_retries, NULL);
++	kfree(buffer);
++}
++
+ /**
+  *	sd_revalidate_disk - called the first time a new disk is seen,
+  *	performs disk spin up, read_capacity, etc.
+@@ -3449,7 +3467,13 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ 	 */
+ 	if (sdkp->media_present) {
+ 		sd_read_capacity(sdkp, buffer);
+-
++		/*
++		 * Some USB/UAS devices return generic values for mode pages
++		 * until the media has been accessed. Trigger a READ operation
++		 * to force the device to populate mode pages.
++		 */
++		if (sdp->read_before_ms)
++			sd_read_block_zero(sdkp);
+ 		/*
+ 		 * set the default to rotational.  All non-rotational devices
+ 		 * support the block characteristics VPD page, which will
+diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h
+index 0419401835169..cdedc271857aa 100644
+--- a/drivers/scsi/smartpqi/smartpqi.h
++++ b/drivers/scsi/smartpqi/smartpqi.h
+@@ -1347,7 +1347,6 @@ struct pqi_ctrl_info {
+ 	bool		controller_online;
+ 	bool		block_requests;
+ 	bool		scan_blocked;
+-	u8		logical_volume_rescan_needed : 1;
+ 	u8		inbound_spanning_supported : 1;
+ 	u8		outbound_spanning_supported : 1;
+ 	u8		pqi_mode_enabled : 1;
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 9a58df9312fa7..868453b18c9ae 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -2093,8 +2093,6 @@ static void pqi_scsi_update_device(struct pqi_ctrl_info *ctrl_info,
+ 		if (existing_device->devtype == TYPE_DISK) {
+ 			existing_device->raid_level = new_device->raid_level;
+ 			existing_device->volume_status = new_device->volume_status;
+-			if (ctrl_info->logical_volume_rescan_needed)
+-				existing_device->rescan = true;
+ 			memset(existing_device->next_bypass_group, 0, sizeof(existing_device->next_bypass_group));
+ 			if (!pqi_raid_maps_equal(existing_device->raid_map, new_device->raid_map)) {
+ 				kfree(existing_device->raid_map);
+@@ -2164,6 +2162,20 @@ static inline void pqi_init_device_tmf_work(struct pqi_scsi_dev *device)
+ 		INIT_WORK(&tmf_work->work_struct, pqi_tmf_worker);
+ }
+ 
++static inline bool pqi_volume_rescan_needed(struct pqi_scsi_dev *device)
++{
++	if (pqi_device_in_remove(device))
++		return false;
++
++	if (device->sdev == NULL)
++		return false;
++
++	if (!scsi_device_online(device->sdev))
++		return false;
++
++	return device->rescan;
++}
++
+ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info,
+ 	struct pqi_scsi_dev *new_device_list[], unsigned int num_new_devices)
+ {
+@@ -2284,9 +2296,13 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info,
+ 		if (device->sdev && device->queue_depth != device->advertised_queue_depth) {
+ 			device->advertised_queue_depth = device->queue_depth;
+ 			scsi_change_queue_depth(device->sdev, device->advertised_queue_depth);
+-			if (device->rescan) {
+-				scsi_rescan_device(device->sdev);
++			spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
++			if (pqi_volume_rescan_needed(device)) {
+ 				device->rescan = false;
++				spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
++				scsi_rescan_device(device->sdev);
++			} else {
++				spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
+ 			}
+ 		}
+ 	}
+@@ -2308,8 +2324,6 @@ static void pqi_update_device_list(struct pqi_ctrl_info *ctrl_info,
+ 		}
+ 	}
+ 
+-	ctrl_info->logical_volume_rescan_needed = false;
+-
+ }
+ 
+ static inline bool pqi_is_supported_device(struct pqi_scsi_dev *device)
+@@ -3702,6 +3716,21 @@ static bool pqi_ofa_process_event(struct pqi_ctrl_info *ctrl_info,
+ 	return ack_event;
+ }
+ 
++static void pqi_mark_volumes_for_rescan(struct pqi_ctrl_info *ctrl_info)
++{
++	unsigned long flags;
++	struct pqi_scsi_dev *device;
++
++	spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
++
++	list_for_each_entry(device, &ctrl_info->scsi_device_list, scsi_device_list_entry) {
++		if (pqi_is_logical_device(device) && device->devtype == TYPE_DISK)
++			device->rescan = true;
++	}
++
++	spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
++}
++
+ static void pqi_disable_raid_bypass(struct pqi_ctrl_info *ctrl_info)
+ {
+ 	unsigned long flags;
+@@ -3742,7 +3771,7 @@ static void pqi_event_worker(struct work_struct *work)
+ 				ack_event = true;
+ 				rescan_needed = true;
+ 				if (event->event_type == PQI_EVENT_TYPE_LOGICAL_DEVICE)
+-					ctrl_info->logical_volume_rescan_needed = true;
++					pqi_mark_volumes_for_rescan(ctrl_info);
+ 				else if (event->event_type == PQI_EVENT_TYPE_AIO_STATE_CHANGE)
+ 					pqi_disable_raid_bypass(ctrl_info);
+ 			}
+@@ -6504,8 +6533,11 @@ static void pqi_map_queues(struct Scsi_Host *shost)
+ {
+ 	struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost);
+ 
+-	blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
++	if (!ctrl_info->disable_managed_interrupts)
++		return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
+ 			      ctrl_info->pci_dev, 0);
++	else
++		return blk_mq_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT]);
+ }
+ 
+ static inline bool pqi_is_tape_changer_device(struct pqi_scsi_dev *device)
+@@ -10142,6 +10174,18 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 				0x1014, 0x0718)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1137, 0x02f8)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1137, 0x02f9)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1137, 0x02fa)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 				0x1e93, 0x1000)
+@@ -10198,6 +10242,34 @@ static const struct pci_device_id pqi_pci_id_table[] = {
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 				0x1f51, 0x100a)
+ 	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f51, 0x100e)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f51, 0x100f)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f51, 0x1010)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f51, 0x1011)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f51, 0x1043)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f51, 0x1044)
++	},
++	{
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
++			       0x1f51, 0x1045)
++	},
+ 	{
+ 		PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+ 			       PCI_ANY_ID, PCI_ANY_ID)
+diff --git a/drivers/spi/spi-cs42l43.c b/drivers/spi/spi-cs42l43.c
+index d239fc5a49ccc..c1556b6529092 100644
+--- a/drivers/spi/spi-cs42l43.c
++++ b/drivers/spi/spi-cs42l43.c
+@@ -244,7 +244,10 @@ static int cs42l43_spi_probe(struct platform_device *pdev)
+ 	priv->ctlr->use_gpio_descriptors = true;
+ 	priv->ctlr->auto_runtime_pm = true;
+ 
+-	devm_pm_runtime_enable(priv->dev);
++	ret = devm_pm_runtime_enable(priv->dev);
++	if (ret)
++		return ret;
++
+ 	pm_runtime_idle(priv->dev);
+ 
+ 	regmap_write(priv->regmap, CS42L43_TRAN_CONFIG6, CS42L43_FIFO_SIZE - 1);
+diff --git a/drivers/spi/spi-hisi-sfc-v3xx.c b/drivers/spi/spi-hisi-sfc-v3xx.c
+index 9d22018f7985f..1301d14483d48 100644
+--- a/drivers/spi/spi-hisi-sfc-v3xx.c
++++ b/drivers/spi/spi-hisi-sfc-v3xx.c
+@@ -377,6 +377,11 @@ static const struct spi_controller_mem_ops hisi_sfc_v3xx_mem_ops = {
+ static irqreturn_t hisi_sfc_v3xx_isr(int irq, void *data)
+ {
+ 	struct hisi_sfc_v3xx_host *host = data;
++	u32 reg;
++
++	reg = readl(host->regbase + HISI_SFC_V3XX_INT_STAT);
++	if (!reg)
++		return IRQ_NONE;
+ 
+ 	hisi_sfc_v3xx_disable_int(host);
+ 
+diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c
+index b9918dcc38027..07d20ca1164c3 100644
+--- a/drivers/spi/spi-intel-pci.c
++++ b/drivers/spi/spi-intel-pci.c
+@@ -76,6 +76,7 @@ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x7a24), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x7e23), (unsigned long)&cnl_info },
++	{ PCI_VDEVICE(INTEL, 0x7f24), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x9d24), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x9da4), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa0a4), (unsigned long)&cnl_info },
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index cfc3b1ddbd229..6f12e4fb2e2e1 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -136,14 +136,14 @@ struct sh_msiof_spi_priv {
+ 
+ /* SIFCTR */
+ #define SIFCTR_TFWM_MASK	GENMASK(31, 29)	/* Transmit FIFO Watermark */
+-#define SIFCTR_TFWM_64		(0 << 29)	/*  Transfer Request when 64 empty stages */
+-#define SIFCTR_TFWM_32		(1 << 29)	/*  Transfer Request when 32 empty stages */
+-#define SIFCTR_TFWM_24		(2 << 29)	/*  Transfer Request when 24 empty stages */
+-#define SIFCTR_TFWM_16		(3 << 29)	/*  Transfer Request when 16 empty stages */
+-#define SIFCTR_TFWM_12		(4 << 29)	/*  Transfer Request when 12 empty stages */
+-#define SIFCTR_TFWM_8		(5 << 29)	/*  Transfer Request when 8 empty stages */
+-#define SIFCTR_TFWM_4		(6 << 29)	/*  Transfer Request when 4 empty stages */
+-#define SIFCTR_TFWM_1		(7 << 29)	/*  Transfer Request when 1 empty stage */
++#define SIFCTR_TFWM_64		(0UL << 29)	/*  Transfer Request when 64 empty stages */
++#define SIFCTR_TFWM_32		(1UL << 29)	/*  Transfer Request when 32 empty stages */
++#define SIFCTR_TFWM_24		(2UL << 29)	/*  Transfer Request when 24 empty stages */
++#define SIFCTR_TFWM_16		(3UL << 29)	/*  Transfer Request when 16 empty stages */
++#define SIFCTR_TFWM_12		(4UL << 29)	/*  Transfer Request when 12 empty stages */
++#define SIFCTR_TFWM_8		(5UL << 29)	/*  Transfer Request when 8 empty stages */
++#define SIFCTR_TFWM_4		(6UL << 29)	/*  Transfer Request when 4 empty stages */
++#define SIFCTR_TFWM_1		(7UL << 29)	/*  Transfer Request when 1 empty stage */
+ #define SIFCTR_TFUA_MASK	GENMASK(26, 20) /* Transmit FIFO Usable Area */
+ #define SIFCTR_TFUA_SHIFT	20
+ #define SIFCTR_TFUA(i)		((i) << SIFCTR_TFUA_SHIFT)
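
The UL suffixes above are not cosmetic: SIFCTR_TFWM_MASK is GENMASK(31, 29), so the watermark field reaches bit 31. With plain int constants, 7 << 29 overflows a 32-bit signed int (undefined behaviour), and the negative intermediate value sign-extends when widened to a 64-bit type. A small userspace demonstration of the hazard, assuming an LP64 machine:

	#include <stdio.h>

	int main(void)
	{
		/* 7 << 29 overflows int; casting then sign-extends on LP64 */
		unsigned long bad  = (unsigned long)(7 << 29);
		unsigned long good = 7UL << 29;		/* well-defined */

		printf("bad  = %#lx\n", bad);	/* 0xffffffffe0000000 */
		printf("good = %#lx\n", good);	/* 0xe0000000 */
		return 0;
	}
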
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 506193e870c49..7a85e6477e465 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -147,7 +147,6 @@ int transport_lookup_tmr_lun(struct se_cmd *se_cmd)
+ 	struct se_session *se_sess = se_cmd->se_sess;
+ 	struct se_node_acl *nacl = se_sess->se_node_acl;
+ 	struct se_tmr_req *se_tmr = se_cmd->se_tmr_req;
+-	unsigned long flags;
+ 
+ 	rcu_read_lock();
+ 	deve = target_nacl_find_deve(nacl, se_cmd->orig_fe_lun);
+@@ -178,10 +177,6 @@ int transport_lookup_tmr_lun(struct se_cmd *se_cmd)
+ 	se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev);
+ 	se_tmr->tmr_dev = rcu_dereference_raw(se_lun->lun_se_dev);
+ 
+-	spin_lock_irqsave(&se_tmr->tmr_dev->se_tmr_lock, flags);
+-	list_add_tail(&se_tmr->tmr_list, &se_tmr->tmr_dev->dev_tmr_list);
+-	spin_unlock_irqrestore(&se_tmr->tmr_dev->se_tmr_lock, flags);
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL(transport_lookup_tmr_lun);
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index 41b7489d37ce9..ed4fd22eac6e0 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -907,12 +907,15 @@ pscsi_map_sg(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+ 
+ 	return 0;
+ fail:
+-	if (bio)
+-		bio_put(bio);
++	if (bio) {
++		bio_uninit(bio);
++		kfree(bio);
++	}
+ 	while (req->bio) {
+ 		bio = req->bio;
+ 		req->bio = bio->bi_next;
+-		bio_put(bio);
++		bio_uninit(bio);
++		kfree(bio);
+ 	}
+ 	req->biotail = NULL;
+ 	return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
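
The pscsi fix above pairs teardown with allocation: these bios come from bio_kmalloc(), which in current kernels returns a bare kmalloc()ed bio that is not reference counted, so bio_put() must not be used on it. The matching release path is bio_uninit() followed by kfree(). A minimal sketch of the pairing (nr_vecs and opf are placeholders):

	struct bio *bio = bio_kmalloc(nr_vecs, GFP_KERNEL);
	if (bio) {
		bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs, opf);
		/* ... build and use the bio ... */
		bio_uninit(bio);	/* undo bio_init() */
		kfree(bio);		/* plain kfree() matches bio_kmalloc() */
	}
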
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 670cfb7bd426a..73d0d6133ac8f 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -3629,6 +3629,10 @@ int transport_generic_handle_tmr(
+ 	unsigned long flags;
+ 	bool aborted = false;
+ 
++	spin_lock_irqsave(&cmd->se_dev->se_tmr_lock, flags);
++	list_add_tail(&cmd->se_tmr_req->tmr_list, &cmd->se_dev->dev_tmr_list);
++	spin_unlock_irqrestore(&cmd->se_dev->se_tmr_lock, flags);
++
+ 	spin_lock_irqsave(&cmd->t_state_lock, flags);
+ 	if (cmd->transport_state & CMD_T_ABORTED) {
+ 		aborted = true;
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index b7635363373e2..474e2b7f43fe1 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1345,11 +1345,41 @@ static void pl011_start_tx_pio(struct uart_amba_port *uap)
+ 	}
+ }
+ 
++static void pl011_rs485_tx_start(struct uart_amba_port *uap)
++{
++	struct uart_port *port = &uap->port;
++	u32 cr;
++
++	/* Enable transmitter */
++	cr = pl011_read(uap, REG_CR);
++	cr |= UART011_CR_TXE;
++
++	/* Disable receiver if half-duplex */
++	if (!(port->rs485.flags & SER_RS485_RX_DURING_TX))
++		cr &= ~UART011_CR_RXE;
++
++	if (port->rs485.flags & SER_RS485_RTS_ON_SEND)
++		cr &= ~UART011_CR_RTS;
++	else
++		cr |= UART011_CR_RTS;
++
++	pl011_write(cr, uap, REG_CR);
++
++	if (port->rs485.delay_rts_before_send)
++		mdelay(port->rs485.delay_rts_before_send);
++
++	uap->rs485_tx_started = true;
++}
++
+ static void pl011_start_tx(struct uart_port *port)
+ {
+ 	struct uart_amba_port *uap =
+ 	    container_of(port, struct uart_amba_port, port);
+ 
++	if ((uap->port.rs485.flags & SER_RS485_ENABLED) &&
++	    !uap->rs485_tx_started)
++		pl011_rs485_tx_start(uap);
++
+ 	if (!pl011_dma_tx_start(uap))
+ 		pl011_start_tx_pio(uap);
+ }
+@@ -1431,42 +1461,12 @@ static bool pl011_tx_char(struct uart_amba_port *uap, unsigned char c,
+ 	return true;
+ }
+ 
+-static void pl011_rs485_tx_start(struct uart_amba_port *uap)
+-{
+-	struct uart_port *port = &uap->port;
+-	u32 cr;
+-
+-	/* Enable transmitter */
+-	cr = pl011_read(uap, REG_CR);
+-	cr |= UART011_CR_TXE;
+-
+-	/* Disable receiver if half-duplex */
+-	if (!(port->rs485.flags & SER_RS485_RX_DURING_TX))
+-		cr &= ~UART011_CR_RXE;
+-
+-	if (port->rs485.flags & SER_RS485_RTS_ON_SEND)
+-		cr &= ~UART011_CR_RTS;
+-	else
+-		cr |= UART011_CR_RTS;
+-
+-	pl011_write(cr, uap, REG_CR);
+-
+-	if (port->rs485.delay_rts_before_send)
+-		mdelay(port->rs485.delay_rts_before_send);
+-
+-	uap->rs485_tx_started = true;
+-}
+-
+ /* Returns true if tx interrupts have to be (kept) enabled  */
+ static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq)
+ {
+ 	struct circ_buf *xmit = &uap->port.state->xmit;
+ 	int count = uap->fifosize >> 1;
+ 
+-	if ((uap->port.rs485.flags & SER_RS485_ENABLED) &&
+-	    !uap->rs485_tx_started)
+-		pl011_rs485_tx_start(uap);
+-
+ 	if (uap->port.x_char) {
+ 		if (!pl011_tx_char(uap, uap->port.x_char, from_irq))
+ 			return true;
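
Moving pl011_rs485_tx_start() from the character-push loop into pl011_start_tx() fixes the ordering for the DMA path as well: the RS485 transceiver must be switched to transmit (RTS driven, delay_rts_before_send honoured) before any data is handed to the hardware, whichever path then performs the transfer. The resulting shape, sketched with illustrative helper names:

	static void example_start_tx(struct uart_port *port)
	{
		if ((port->rs485.flags & SER_RS485_ENABLED) && !tx_started(port))
			rs485_tx_start(port);	/* drive RTS, honour the RTS delay */

		if (!try_dma_tx(port))		/* DMA path now also sees RTS asserted */
			start_pio_tx(port);
	}
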
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index fc7fd40bca988..e93e3a9b48838 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -251,7 +251,9 @@ static int stm32_usart_config_rs485(struct uart_port *port, struct ktermios *ter
+ 		writel_relaxed(cr3, port->membase + ofs->cr3);
+ 		writel_relaxed(cr1, port->membase + ofs->cr1);
+ 
+-		rs485conf->flags |= SER_RS485_RX_DURING_TX;
++		if (!port->rs485_rx_during_tx_gpio)
++			rs485conf->flags |= SER_RS485_RX_DURING_TX;
++
+ 	} else {
+ 		stm32_usart_clr_bits(port, ofs->cr3,
+ 				     USART_CR3_DEM | USART_CR3_DEP);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 1f8d86b9c4fa0..d2d760143ca30 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -1456,7 +1456,7 @@ static int ufshcd_devfreq_target(struct device *dev,
+ 	int ret = 0;
+ 	struct ufs_hba *hba = dev_get_drvdata(dev);
+ 	ktime_t start;
+-	bool scale_up, sched_clk_scaling_suspend_work = false;
++	bool scale_up = false, sched_clk_scaling_suspend_work = false;
+ 	struct list_head *clk_list = &hba->clk_list_head;
+ 	struct ufs_clk_info *clki;
+ 	unsigned long irq_flags;
+@@ -3045,7 +3045,7 @@ bool ufshcd_cmd_inflight(struct scsi_cmnd *cmd)
+  */
+ static int ufshcd_clear_cmd(struct ufs_hba *hba, u32 task_tag)
+ {
+-	u32 mask = 1U << task_tag;
++	u32 mask;
+ 	unsigned long flags;
+ 	int err;
+ 
+@@ -3063,6 +3063,8 @@ static int ufshcd_clear_cmd(struct ufs_hba *hba, u32 task_tag)
+ 		return 0;
+ 	}
+ 
++	mask = 1U << task_tag;
++
+ 	/* clear outstanding transaction before retry */
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+ 	ufshcd_utrl_clear(hba, mask);
+@@ -6348,7 +6350,6 @@ static void ufshcd_err_handling_prepare(struct ufs_hba *hba)
+ 		ufshcd_hold(hba);
+ 		if (!ufshcd_is_clkgating_allowed(hba))
+ 			ufshcd_setup_clocks(hba, true);
+-		ufshcd_release(hba);
+ 		pm_op = hba->is_sys_suspended ? UFS_SYSTEM_PM : UFS_RUNTIME_PM;
+ 		ufshcd_vops_resume(hba, pm_op);
+ 	} else {
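
The ufshcd_clear_cmd() hunk defers computing the mask until the tag has been validated: with MCQ enabled a task tag can exceed 31, and 1U << task_tag is undefined for shift counts of 32 or more, so the shift should only happen on the legacy path where the tag is known to fit in the 32-bit doorbell mask. The underlying rule in a standalone sketch:

	#include <stdbool.h>
	#include <stdint.h>

	static bool clear_slot(uint32_t *bitmap, unsigned int tag)
	{
		if (tag >= 32)			/* validate before shifting */
			return false;

		*bitmap &= ~(UINT32_C(1) << tag);	/* now well-defined */
		return true;
	}
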
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index d140010257004..b1b46c7c63f8b 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -828,7 +828,11 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
+ 			return;
+ 	}
+ 
+-	if (request->complete) {
++	/*
++	 * A zlp request is appended by the driver itself, so there is no need
++	 * to call usb_gadget_giveback_request() to notify the gadget composite driver.
++	 */
++	if (request->complete && request->buf != priv_dev->zlp_buf) {
+ 		spin_unlock(&priv_dev->lock);
+ 		usb_gadget_giveback_request(&priv_ep->endpoint,
+ 					    request);
+@@ -2539,11 +2543,11 @@ static int cdns3_gadget_ep_disable(struct usb_ep *ep)
+ 
+ 	while (!list_empty(&priv_ep->wa2_descmiss_req_list)) {
+ 		priv_req = cdns3_next_priv_request(&priv_ep->wa2_descmiss_req_list);
++		list_del_init(&priv_req->list);
+ 
+ 		kfree(priv_req->request.buf);
+ 		cdns3_gadget_ep_free_request(&priv_ep->endpoint,
+ 					     &priv_req->request);
+-		list_del_init(&priv_req->list);
+ 		--priv_ep->wa2_counter;
+ 	}
+ 
+diff --git a/drivers/usb/cdns3/core.c b/drivers/usb/cdns3/core.c
+index 33548771a0d3a..465e9267b49c1 100644
+--- a/drivers/usb/cdns3/core.c
++++ b/drivers/usb/cdns3/core.c
+@@ -395,7 +395,6 @@ static int cdns_role_set(struct usb_role_switch *sw, enum usb_role role)
+ 	return ret;
+ }
+ 
+-
+ /**
+  * cdns_wakeup_irq - interrupt handler for wakeup events
+  * @irq: irq number for cdns3/cdnsp core device
+diff --git a/drivers/usb/cdns3/drd.c b/drivers/usb/cdns3/drd.c
+index 04b6d12f2b9a3..ee917f1b091c8 100644
+--- a/drivers/usb/cdns3/drd.c
++++ b/drivers/usb/cdns3/drd.c
+@@ -156,7 +156,8 @@ bool cdns_is_device(struct cdns *cdns)
+  */
+ static void cdns_otg_disable_irq(struct cdns *cdns)
+ {
+-	writel(0, &cdns->otg_irq_regs->ien);
++	if (cdns->version)
++		writel(0, &cdns->otg_irq_regs->ien);
+ }
+ 
+ /**
+@@ -422,15 +423,20 @@ int cdns_drd_init(struct cdns *cdns)
+ 
+ 		cdns->otg_regs = (void __iomem *)&cdns->otg_v1_regs->cmd;
+ 
+-		if (readl(&cdns->otg_cdnsp_regs->did) == OTG_CDNSP_DID) {
++		state = readl(&cdns->otg_cdnsp_regs->did);
++
++		if (OTG_CDNSP_CHECK_DID(state)) {
+ 			cdns->otg_irq_regs = (struct cdns_otg_irq_regs __iomem *)
+ 					      &cdns->otg_cdnsp_regs->ien;
+ 			cdns->version  = CDNSP_CONTROLLER_V2;
+-		} else {
++		} else if (OTG_CDNS3_CHECK_DID(state)) {
+ 			cdns->otg_irq_regs = (struct cdns_otg_irq_regs __iomem *)
+ 					      &cdns->otg_v1_regs->ien;
+ 			writel(1, &cdns->otg_v1_regs->simulate);
+ 			cdns->version  = CDNS3_CONTROLLER_V1;
++		} else {
++			dev_err(cdns->dev, "unsupported DID=0x%08x\n", state);
++			return -EINVAL;
+ 		}
+ 
+ 		dev_dbg(cdns->dev, "DRD version v1 (ID: %08x, rev: %08x)\n",
+@@ -483,7 +489,6 @@ int cdns_drd_exit(struct cdns *cdns)
+ 	return 0;
+ }
+ 
+-
+ /* Indicate the cdns3 core was power lost before */
+ bool cdns_power_is_lost(struct cdns *cdns)
+ {
+diff --git a/drivers/usb/cdns3/drd.h b/drivers/usb/cdns3/drd.h
+index cbdf94f73ed91..d72370c321d39 100644
+--- a/drivers/usb/cdns3/drd.h
++++ b/drivers/usb/cdns3/drd.h
+@@ -79,7 +79,11 @@ struct cdnsp_otg_regs {
+ 	__le32 susp_timing_ctrl;
+ };
+ 
+-#define OTG_CDNSP_DID	0x0004034E
++/* CDNSP driver supports 0x000403xx Cadence USB controller family. */
++#define OTG_CDNSP_CHECK_DID(did) (((did) & GENMASK(31, 8)) == 0x00040300)
++
++/* CDNS3 driver supports 0x000402xx Cadence USB controller family. */
++#define OTG_CDNS3_CHECK_DID(did) (((did) & GENMASK(31, 8)) == 0x00040200)
+ 
+ /*
+  * Common registers interface for both CDNS3 and CDNSP version of DRD.
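
The new CHECK_DID macros match a whole controller family by masking off the low revision byte instead of comparing against one exact device ID, which is also what lets the probe path above reject unknown parts cleanly. The idea in a self-contained form:

	#include <stdint.h>
	#include <stdio.h>

	#define FAMILY_MASK	0xffffff00u	/* GENMASK(31, 8) */
	#define CDNSP_FAMILY	0x00040300u	/* 0x000403xx parts */
	#define CDNS3_FAMILY	0x00040200u	/* 0x000402xx parts */

	int main(void)
	{
		uint32_t did = 0x0004034e;	/* the DID formerly matched exactly */

		printf("cdnsp: %d\n", (did & FAMILY_MASK) == CDNSP_FAMILY);	/* 1 */
		printf("cdns3: %d\n", (did & FAMILY_MASK) == CDNS3_FAMILY);	/* 0 */
		return 0;
	}
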
+diff --git a/drivers/usb/cdns3/host.c b/drivers/usb/cdns3/host.c
+index 6164fc4c96a49..ceca4d839dfd4 100644
+--- a/drivers/usb/cdns3/host.c
++++ b/drivers/usb/cdns3/host.c
+@@ -18,6 +18,11 @@
+ #include "../host/xhci.h"
+ #include "../host/xhci-plat.h"
+ 
++/*
++ * The XECP_PORT_CAP_REG and XECP_AUX_CTRL_REG1 registers exist only in
++ * the Cadence USB3 dual-role controller, so they can't be used with the
++ * Cadence CDNSP dual-role controller.
++ */
+ #define XECP_PORT_CAP_REG	0x8000
+ #define XECP_AUX_CTRL_REG1	0x8120
+ 
+@@ -57,6 +62,8 @@ static const struct xhci_plat_priv xhci_plat_cdns3_xhci = {
+ 	.resume_quirk = xhci_cdns3_resume_quirk,
+ };
+ 
++static const struct xhci_plat_priv xhci_plat_cdnsp_xhci;
++
+ static int __cdns_host_init(struct cdns *cdns)
+ {
+ 	struct platform_device *xhci;
+@@ -81,8 +88,13 @@ static int __cdns_host_init(struct cdns *cdns)
+ 		goto err1;
+ 	}
+ 
+-	cdns->xhci_plat_data = kmemdup(&xhci_plat_cdns3_xhci,
+-			sizeof(struct xhci_plat_priv), GFP_KERNEL);
++	if (cdns->version < CDNSP_CONTROLLER_V2)
++		cdns->xhci_plat_data = kmemdup(&xhci_plat_cdns3_xhci,
++				sizeof(struct xhci_plat_priv), GFP_KERNEL);
++	else
++		cdns->xhci_plat_data = kmemdup(&xhci_plat_cdnsp_xhci,
++				sizeof(struct xhci_plat_priv), GFP_KERNEL);
++
+ 	if (!cdns->xhci_plat_data) {
+ 		ret = -ENOMEM;
+ 		goto err1;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 4c8dd67246788..28f49400f3e8b 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2650,6 +2650,11 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
+ 	int ret;
+ 
+ 	spin_lock_irqsave(&dwc->lock, flags);
++	if (!dwc->pullups_connected) {
++		spin_unlock_irqrestore(&dwc->lock, flags);
++		return 0;
++	}
++
+ 	dwc->connected = false;
+ 
+ 	/*
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index cc0ed29a4adc0..5712883a7527c 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1325,7 +1325,15 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 	     "Parsed NTB with %d frames\n", dgram_counter);
+ 
+ 	to_process -= block_len;
+-	if (to_process != 0) {
++
++	/*
++	 * The Windows NCM driver avoids USB ZLPs by adding a 1-byte
++	 * zero pad as needed.
++	 */
++	if (to_process == 1 &&
++	    (*(unsigned char *)(ntb_ptr + block_len) == 0x00)) {
++		to_process--;
++	} else if (to_process > 0) {
+ 		ntb_ptr = (unsigned char *)(ntb_ptr + block_len);
+ 		goto parse_ntb;
+ 	}
+diff --git a/drivers/usb/gadget/udc/omap_udc.c b/drivers/usb/gadget/udc/omap_udc.c
+index 10c5d7f726a1f..f90eeecf27de1 100644
+--- a/drivers/usb/gadget/udc/omap_udc.c
++++ b/drivers/usb/gadget/udc/omap_udc.c
+@@ -2036,7 +2036,8 @@ static irqreturn_t omap_udc_iso_irq(int irq, void *_dev)
+ 
+ static inline int machine_without_vbus_sense(void)
+ {
+-	return  machine_is_omap_osk() || machine_is_sx1();
++	return  machine_is_omap_osk() || machine_is_omap_palmte() ||
++		machine_is_sx1();
+ }
+ 
+ static int omap_udc_start(struct usb_gadget *g,
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index ae41578bd0149..70165dd86b5de 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -21,7 +21,9 @@ static const struct class role_class = {
+ struct usb_role_switch {
+ 	struct device dev;
+ 	struct mutex lock; /* device lock*/
++	struct module *module; /* the module this device depends on */
+ 	enum usb_role role;
++	bool registered;
+ 
+ 	/* From descriptor */
+ 	struct device *usb2_port;
+@@ -48,6 +50,9 @@ int usb_role_switch_set_role(struct usb_role_switch *sw, enum usb_role role)
+ 	if (IS_ERR_OR_NULL(sw))
+ 		return 0;
+ 
++	if (!sw->registered)
++		return -EOPNOTSUPP;
++
+ 	mutex_lock(&sw->lock);
+ 
+ 	ret = sw->set(sw, role);
+@@ -73,7 +78,7 @@ enum usb_role usb_role_switch_get_role(struct usb_role_switch *sw)
+ {
+ 	enum usb_role role;
+ 
+-	if (IS_ERR_OR_NULL(sw))
++	if (IS_ERR_OR_NULL(sw) || !sw->registered)
+ 		return USB_ROLE_NONE;
+ 
+ 	mutex_lock(&sw->lock);
+@@ -135,7 +140,7 @@ struct usb_role_switch *usb_role_switch_get(struct device *dev)
+ 						  usb_role_switch_match);
+ 
+ 	if (!IS_ERR_OR_NULL(sw))
+-		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++		WARN_ON(!try_module_get(sw->module));
+ 
+ 	return sw;
+ }
+@@ -157,7 +162,7 @@ struct usb_role_switch *fwnode_usb_role_switch_get(struct fwnode_handle *fwnode)
+ 		sw = fwnode_connection_find_match(fwnode, "usb-role-switch",
+ 						  NULL, usb_role_switch_match);
+ 	if (!IS_ERR_OR_NULL(sw))
+-		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++		WARN_ON(!try_module_get(sw->module));
+ 
+ 	return sw;
+ }
+@@ -172,7 +177,7 @@ EXPORT_SYMBOL_GPL(fwnode_usb_role_switch_get);
+ void usb_role_switch_put(struct usb_role_switch *sw)
+ {
+ 	if (!IS_ERR_OR_NULL(sw)) {
+-		module_put(sw->dev.parent->driver->owner);
++		module_put(sw->module);
+ 		put_device(&sw->dev);
+ 	}
+ }
+@@ -189,15 +194,18 @@ struct usb_role_switch *
+ usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
+ {
+ 	struct device *dev;
++	struct usb_role_switch *sw = NULL;
+ 
+ 	if (!fwnode)
+ 		return NULL;
+ 
+ 	dev = class_find_device_by_fwnode(&role_class, fwnode);
+-	if (dev)
+-		WARN_ON(!try_module_get(dev->parent->driver->owner));
++	if (dev) {
++		sw = to_role_switch(dev);
++		WARN_ON(!try_module_get(sw->module));
++	}
+ 
+-	return dev ? to_role_switch(dev) : NULL;
++	return sw;
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_find_by_fwnode);
+ 
+@@ -338,6 +346,7 @@ usb_role_switch_register(struct device *parent,
+ 	sw->set = desc->set;
+ 	sw->get = desc->get;
+ 
++	sw->module = parent->driver->owner;
+ 	sw->dev.parent = parent;
+ 	sw->dev.fwnode = desc->fwnode;
+ 	sw->dev.class = &role_class;
+@@ -352,6 +361,8 @@ usb_role_switch_register(struct device *parent,
+ 		return ERR_PTR(ret);
+ 	}
+ 
++	sw->registered = true;
++
+ 	/* TODO: Symlinks for the host port and the device controller. */
+ 
+ 	return sw;
+@@ -366,8 +377,10 @@ EXPORT_SYMBOL_GPL(usb_role_switch_register);
+  */
+ void usb_role_switch_unregister(struct usb_role_switch *sw)
+ {
+-	if (!IS_ERR_OR_NULL(sw))
++	if (!IS_ERR_OR_NULL(sw)) {
++		sw->registered = false;
+ 		device_unregister(&sw->dev);
++	}
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_unregister);
+ 
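
The role-switch hunks capture the owning module once at registration and add a registered flag, because re-deriving sw->dev.parent->driver->owner in the get/put paths dereferences a pointer that becomes invalid as soon as the parent driver unbinds, while the class device itself can outlive unregistration via its refcount. A sketch of the resulting shape (shape only, not the full driver):

	struct example_switch {
		struct device dev;
		struct module *module;	/* captured at registration, never re-derived */
		bool registered;	/* cleared before device_unregister() */
	};

	static int example_set_role(struct example_switch *sw, int role)
	{
		if (!sw->registered)
			return -EOPNOTSUPP;	/* holder outlived unregistration */

		/* ... forward to the driver's set() callback under sw->lock ... */
		return 0;
	}
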
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index c54e9805da536..12cf9940e5b67 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -179,6 +179,13 @@ static int slave_configure(struct scsi_device *sdev)
+ 		 */
+ 		sdev->use_192_bytes_for_3f = 1;
+ 
++		/*
++		 * Some devices report generic values until the media has been
++		 * accessed. Force a READ(10) prior to querying device
++		 * characteristics.
++		 */
++		sdev->read_before_ms = 1;
++
+ 		/*
+ 		 * Some devices don't like MODE SENSE with page=0x3f,
+ 		 * which is the command used for checking if a device
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 696bb0b235992..299a6767b7b30 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -878,6 +878,13 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ 	if (devinfo->flags & US_FL_CAPACITY_HEURISTICS)
+ 		sdev->guess_capacity = 1;
+ 
++	/*
++	 * Some devices report generic values until the media has been
++	 * accessed. Force a READ(10) prior to querying device
++	 * characteristics.
++	 */
++	sdev->read_before_ms = 1;
++
+ 	/*
+ 	 * Some devices don't like MODE SENSE with page=0x3f,
+ 	 * which is the command used for checking if a device
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index bfb6f9481e87f..a8e03cfde9dda 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -3730,9 +3730,6 @@ static void tcpm_detach(struct tcpm_port *port)
+ 	if (tcpm_port_is_disconnected(port))
+ 		port->hard_reset_count = 0;
+ 
+-	port->try_src_count = 0;
+-	port->try_snk_count = 0;
+-
+ 	if (!port->attached)
+ 		return;
+ 
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index fa222080887d5..928eacbeb21ac 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -25,6 +25,8 @@ struct ucsi_acpi {
+ 	unsigned long flags;
+ 	guid_t guid;
+ 	u64 cmd;
++	bool dell_quirk_probed;
++	bool dell_quirk_active;
+ };
+ 
+ static int ucsi_acpi_dsm(struct ucsi_acpi *ua, int func)
+@@ -126,12 +128,73 @@ static const struct ucsi_operations ucsi_zenbook_ops = {
+ 	.async_write = ucsi_acpi_async_write
+ };
+ 
+-static const struct dmi_system_id zenbook_dmi_id[] = {
++/*
++ * Some Dell laptops expect that an ACK command with the
++ * UCSI_ACK_CONNECTOR_CHANGE bit set is followed by a (separate)
++ * ACK command that only has the UCSI_ACK_COMMAND_COMPLETE bit set.
++ * If this is not done, events are not delivered to OSPM and
++ * subsequent commands will time out.
++ */
++static int
++ucsi_dell_sync_write(struct ucsi *ucsi, unsigned int offset,
++		     const void *val, size_t val_len)
++{
++	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
++	u64 cmd = *(u64 *)val, ack = 0;
++	int ret;
++
++	if (UCSI_COMMAND(cmd) == UCSI_ACK_CC_CI &&
++	    cmd & UCSI_ACK_CONNECTOR_CHANGE)
++		ack = UCSI_ACK_CC_CI | UCSI_ACK_COMMAND_COMPLETE;
++
++	ret = ucsi_acpi_sync_write(ucsi, offset, val, val_len);
++	if (ret != 0)
++		return ret;
++	if (ack == 0)
++		return ret;
++
++	if (!ua->dell_quirk_probed) {
++		ua->dell_quirk_probed = true;
++
++		cmd = UCSI_GET_CAPABILITY;
++		ret = ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &cmd,
++					   sizeof(cmd));
++		if (ret == 0)
++			return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL,
++						    &ack, sizeof(ack));
++		if (ret != -ETIMEDOUT)
++			return ret;
++
++		ua->dell_quirk_active = true;
++		dev_err(ua->dev, "Firmware bug: Additional ACK required after ACKing a connector change.\n");
++		dev_err(ua->dev, "Firmware bug: Enabling workaround\n");
++	}
++
++	if (!ua->dell_quirk_active)
++		return ret;
++
++	return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &ack, sizeof(ack));
++}
++
++static const struct ucsi_operations ucsi_dell_ops = {
++	.read = ucsi_acpi_read,
++	.sync_write = ucsi_dell_sync_write,
++	.async_write = ucsi_acpi_async_write
++};
++
++static const struct dmi_system_id ucsi_acpi_quirks[] = {
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
+ 		},
++		.driver_data = (void *)&ucsi_zenbook_ops,
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++		},
++		.driver_data = (void *)&ucsi_dell_ops,
+ 	},
+ 	{ }
+ };
+@@ -160,6 +223,7 @@ static int ucsi_acpi_probe(struct platform_device *pdev)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+ 	const struct ucsi_operations *ops = &ucsi_acpi_ops;
++	const struct dmi_system_id *id;
+ 	struct ucsi_acpi *ua;
+ 	struct resource *res;
+ 	acpi_status status;
+@@ -189,8 +253,9 @@ static int ucsi_acpi_probe(struct platform_device *pdev)
+ 	init_completion(&ua->complete);
+ 	ua->dev = &pdev->dev;
+ 
+-	if (dmi_check_system(zenbook_dmi_id))
+-		ops = &ucsi_zenbook_ops;
++	id = dmi_first_match(ucsi_acpi_quirks);
++	if (id)
++		ops = id->driver_data;
+ 
+ 	ua->ucsi = ucsi_create(&pdev->dev, ops);
+ 	if (IS_ERR(ua->ucsi))
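
Generalizing the single Zenbook check into a quirk table keys each DMI match to its replacement ops via .driver_data, so the Dell workaround (and any future one) becomes a table entry instead of another dmi_check_system() call in probe. The lookup pattern, with placeholder ops names:

	static const struct dmi_system_id quirks[] = {
		{
			.matches = {
				DMI_MATCH(DMI_SYS_VENDOR, "Some Vendor"),
			},
			.driver_data = (void *)&vendor_ops,
		},
		{ }	/* terminator */
	};

	/* in probe: */
	const struct dmi_system_id *id = dmi_first_match(quirks);
	const struct ucsi_operations *ops = id ? id->driver_data : &default_ops;
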
+diff --git a/drivers/video/fbdev/savage/savagefb_driver.c b/drivers/video/fbdev/savage/savagefb_driver.c
+index dddd6afcb972a..ebc9aeffdde7c 100644
+--- a/drivers/video/fbdev/savage/savagefb_driver.c
++++ b/drivers/video/fbdev/savage/savagefb_driver.c
+@@ -869,6 +869,9 @@ static int savagefb_check_var(struct fb_var_screeninfo   *var,
+ 
+ 	DBG("savagefb_check_var");
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	var->transp.offset = 0;
+ 	var->transp.length = 0;
+ 	switch (var->bits_per_pixel) {
+diff --git a/drivers/video/fbdev/sis/sis_main.c b/drivers/video/fbdev/sis/sis_main.c
+index 6ad47b6b60046..431b1a138c111 100644
+--- a/drivers/video/fbdev/sis/sis_main.c
++++ b/drivers/video/fbdev/sis/sis_main.c
+@@ -1475,6 +1475,8 @@ sisfb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ 
+ 	vtotal = var->upper_margin + var->lower_margin + var->vsync_len;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
+ 	pixclock = var->pixclock;
+ 
+ 	if((var->vmode & FB_VMODE_MASK) == FB_VMODE_NONINTERLACED) {
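
Both fbdev hunks guard the same hazard: check_var implementations, and the mode-setting code behind them, derive refresh timings by dividing by var->pixclock, so a zero value passed in from userspace ends in a divide-by-zero oops. The minimal shape of the guard:

	static int example_check_var(struct fb_var_screeninfo *var,
				     struct fb_info *info)
	{
		if (!var->pixclock)
			return -EINVAL;	/* reject before any division by pixclock */

		/* ... validate the rest of the requested mode ... */
		return 0;
	}
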
+diff --git a/fs/afs/volume.c b/fs/afs/volume.c
+index 115c081a8e2ce..c028598a903c9 100644
+--- a/fs/afs/volume.c
++++ b/fs/afs/volume.c
+@@ -337,7 +337,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
+ {
+ 	struct afs_server_list *new, *old, *discard;
+ 	struct afs_vldb_entry *vldb;
+-	char idbuf[16];
++	char idbuf[24];
+ 	int ret, idsz;
+ 
+ 	_enter("");
+@@ -345,7 +345,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
+ 	/* We look up an ID by passing it as a decimal string in the
+ 	 * operation's name parameter.
+ 	 */
+-	idsz = sprintf(idbuf, "%llu", volume->vid);
++	idsz = snprintf(idbuf, sizeof(idbuf), "%llu", volume->vid);
+ 
+ 	vldb = afs_vl_lookup_vldb(volume->cell, key, idbuf, idsz);
+ 	if (IS_ERR(vldb)) {
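
The afs buffer bump is simple arithmetic: a 64-bit volume ID can need up to 20 decimal digits plus the terminating NUL, so the old 16-byte idbuf could overflow on large IDs; 24 bytes leaves headroom, and snprintf() truncates instead of scribbling past the buffer if anything unexpected happens. A quick check of the worst case:

	#include <inttypes.h>
	#include <stdio.h>

	int main(void)
	{
		char idbuf[24];
		uint64_t vid = UINT64_MAX;	/* worst case: 20 digits */
		int idsz = snprintf(idbuf, sizeof(idbuf), "%" PRIu64, vid);

		printf("\"%s\" -> %d digits\n", idbuf, idsz);	/* 20 */
		return 0;
	}
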
+diff --git a/fs/aio.c b/fs/aio.c
+index f8589caef9c10..3235d4e6cc623 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -594,6 +594,13 @@ void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
+ 	struct kioctx *ctx = req->ki_ctx;
+ 	unsigned long flags;
+ 
++	/*
++	 * The kiocb didn't come from aio or is neither a read nor a write,
++	 * so ignore it.
++	 */
++	if (!(iocb->ki_flags & IOCB_AIO_RW))
++		return;
++
+ 	if (WARN_ON_ONCE(!list_empty(&req->ki_list)))
+ 		return;
+ 
+@@ -1463,7 +1470,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ 	req->ki_complete = aio_complete_rw;
+ 	req->private = NULL;
+ 	req->ki_pos = iocb->aio_offset;
+-	req->ki_flags = req->ki_filp->f_iocb_flags;
++	req->ki_flags = req->ki_filp->f_iocb_flags | IOCB_AIO_RW;
+ 	if (iocb->aio_flags & IOCB_FLAG_RESFD)
+ 		req->ki_flags |= IOCB_EVENTFD;
+ 	if (iocb->aio_flags & IOCB_FLAG_IOPRIO) {
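
The new IOCB_AIO_RW flag tags the kiocbs that aio itself prepared, because kiocb_set_cancel_fn() can be handed a kiocb embedded in some other subsystem's request, where the container_of() back to struct aio_kiocb points at unrelated memory. The guard makes the foreign case a cheap no-op; sketched:

	static void example_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
	{
		if (!(iocb->ki_flags & IOCB_AIO_RW))
			return;	/* not an aio read/write: container_of() would lie */

		/* safe: the kiocb really is embedded in a struct aio_kiocb */
	}
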
+diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
+index 5244561e20168..f99c9029d91e4 100644
+--- a/fs/btrfs/defrag.c
++++ b/fs/btrfs/defrag.c
+@@ -1047,7 +1047,7 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ 			goto add;
+ 
+ 		/* Skip too large extent */
+-		if (range_len >= extent_thresh)
++		if (em->len >= extent_thresh)
+ 			goto next;
+ 
+ 		/*
+diff --git a/fs/cachefiles/cache.c b/fs/cachefiles/cache.c
+index 7077f72e6f474..f449f7340aad0 100644
+--- a/fs/cachefiles/cache.c
++++ b/fs/cachefiles/cache.c
+@@ -168,6 +168,8 @@ int cachefiles_add_cache(struct cachefiles_cache *cache)
+ 	dput(root);
+ error_open_root:
+ 	cachefiles_end_secure(cache, saved_cred);
++	put_cred(cache->cache_cred);
++	cache->cache_cred = NULL;
+ error_getsec:
+ 	fscache_relinquish_cache(cache_cookie);
+ 	cache->cache = NULL;
+diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c
+index aa4efcabb5e37..5f4df9588620f 100644
+--- a/fs/cachefiles/daemon.c
++++ b/fs/cachefiles/daemon.c
+@@ -805,6 +805,7 @@ static void cachefiles_daemon_unbind(struct cachefiles_cache *cache)
+ 	cachefiles_put_directory(cache->graveyard);
+ 	cachefiles_put_directory(cache->store);
+ 	mntput(cache->mnt);
++	put_cred(cache->cache_cred);
+ 
+ 	kfree(cache->rootdirname);
+ 	kfree(cache->secctx);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index e8bf082105d87..ad1f46c66fbff 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3216,7 +3216,6 @@ static int ceph_try_drop_cap_snap(struct ceph_inode_info *ci,
+ 
+ enum put_cap_refs_mode {
+ 	PUT_CAP_REFS_SYNC = 0,
+-	PUT_CAP_REFS_NO_CHECK,
+ 	PUT_CAP_REFS_ASYNC,
+ };
+ 
+@@ -3332,11 +3331,6 @@ void ceph_put_cap_refs_async(struct ceph_inode_info *ci, int had)
+ 	__ceph_put_cap_refs(ci, had, PUT_CAP_REFS_ASYNC);
+ }
+ 
+-void ceph_put_cap_refs_no_check_caps(struct ceph_inode_info *ci, int had)
+-{
+-	__ceph_put_cap_refs(ci, had, PUT_CAP_REFS_NO_CHECK);
+-}
+-
+ /*
+  * Release @nr WRBUFFER refs on dirty pages for the given @snapc snap
+  * context.  Adjust per-snap dirty page accounting as appropriate.
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 558c3af44449c..2eb66dd7d01b2 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1089,7 +1089,7 @@ void ceph_mdsc_release_request(struct kref *kref)
+ 	struct ceph_mds_request *req = container_of(kref,
+ 						    struct ceph_mds_request,
+ 						    r_kref);
+-	ceph_mdsc_release_dir_caps_no_check(req);
++	ceph_mdsc_release_dir_caps_async(req);
+ 	destroy_reply_info(&req->r_reply_info);
+ 	if (req->r_request)
+ 		ceph_msg_put(req->r_request);
+@@ -4247,7 +4247,7 @@ void ceph_mdsc_release_dir_caps(struct ceph_mds_request *req)
+ 	}
+ }
+ 
+-void ceph_mdsc_release_dir_caps_no_check(struct ceph_mds_request *req)
++void ceph_mdsc_release_dir_caps_async(struct ceph_mds_request *req)
+ {
+ 	struct ceph_client *cl = req->r_mdsc->fsc->client;
+ 	int dcaps;
+@@ -4255,8 +4255,7 @@ void ceph_mdsc_release_dir_caps_no_check(struct ceph_mds_request *req)
+ 	dcaps = xchg(&req->r_dir_caps, 0);
+ 	if (dcaps) {
+ 		doutc(cl, "releasing r_dir_caps=%s\n", ceph_cap_string(dcaps));
+-		ceph_put_cap_refs_no_check_caps(ceph_inode(req->r_parent),
+-						dcaps);
++		ceph_put_cap_refs_async(ceph_inode(req->r_parent), dcaps);
+ 	}
+ }
+ 
+@@ -4292,7 +4291,7 @@ static void replay_unsafe_requests(struct ceph_mds_client *mdsc,
+ 		if (req->r_session->s_mds != session->s_mds)
+ 			continue;
+ 
+-		ceph_mdsc_release_dir_caps_no_check(req);
++		ceph_mdsc_release_dir_caps_async(req);
+ 
+ 		__send_request(session, req, true);
+ 	}
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 2e6ddaa13d725..40560af388272 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -552,7 +552,7 @@ extern int ceph_mdsc_do_request(struct ceph_mds_client *mdsc,
+ 				struct inode *dir,
+ 				struct ceph_mds_request *req);
+ extern void ceph_mdsc_release_dir_caps(struct ceph_mds_request *req);
+-extern void ceph_mdsc_release_dir_caps_no_check(struct ceph_mds_request *req);
++extern void ceph_mdsc_release_dir_caps_async(struct ceph_mds_request *req);
+ static inline void ceph_mdsc_get_request(struct ceph_mds_request *req)
+ {
+ 	kref_get(&req->r_kref);
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index fe0f64a0acb27..15d00bdd92065 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -1254,8 +1254,6 @@ extern void ceph_take_cap_refs(struct ceph_inode_info *ci, int caps,
+ extern void ceph_get_cap_refs(struct ceph_inode_info *ci, int caps);
+ extern void ceph_put_cap_refs(struct ceph_inode_info *ci, int had);
+ extern void ceph_put_cap_refs_async(struct ceph_inode_info *ci, int had);
+-extern void ceph_put_cap_refs_no_check_caps(struct ceph_inode_info *ci,
+-					    int had);
+ extern void ceph_put_wrbuffer_cap_refs(struct ceph_inode_info *ci, int nr,
+ 				       struct ceph_snap_context *snapc);
+ extern void __ceph_remove_capsnap(struct inode *inode,
+diff --git a/fs/erofs/namei.c b/fs/erofs/namei.c
+index d4f631d39f0fa..f0110a78acb20 100644
+--- a/fs/erofs/namei.c
++++ b/fs/erofs/namei.c
+@@ -130,24 +130,24 @@ static void *erofs_find_target_block(struct erofs_buf *target,
+ 			/* string comparison without already matched prefix */
+ 			diff = erofs_dirnamecmp(name, &dname, &matched);
+ 
+-			if (!diff) {
+-				*_ndirents = 0;
+-				goto out;
+-			} else if (diff > 0) {
+-				head = mid + 1;
+-				startprfx = matched;
+-
+-				if (!IS_ERR(candidate))
+-					erofs_put_metabuf(target);
+-				*target = buf;
+-				candidate = de;
+-				*_ndirents = ndirents;
+-			} else {
++			if (diff < 0) {
+ 				erofs_put_metabuf(&buf);
+-
+ 				back = mid - 1;
+ 				endprfx = matched;
++				continue;
++			}
++
++			if (!IS_ERR(candidate))
++				erofs_put_metabuf(target);
++			*target = buf;
++			if (!diff) {
++				*_ndirents = 0;
++				return de;
+ 			}
++			head = mid + 1;
++			startprfx = matched;
++			candidate = de;
++			*_ndirents = ndirents;
+ 			continue;
+ 		}
+ out:		/* free if the candidate is valid */
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 01299b55a567a..fadfca75c3197 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -2229,7 +2229,7 @@ static int ext4_fill_es_cache_info(struct inode *inode,
+ 
+ 
+ /*
+- * ext4_ext_determine_hole - determine hole around given block
++ * ext4_ext_find_hole - find hole around given block according to the given path
+  * @inode:	inode we lookup in
+  * @path:	path in extent tree to @lblk
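
The smartpqi hunks above replace one controller-wide rescan flag with a per-device flag that is both set (pqi_mark_volumes_for_rescan) and consumed (pqi_volume_rescan_needed) under scsi_device_list_lock, so a rescan request can no longer be lost or aimed at a device that is mid-removal. A minimal userspace sketch of the same set-under-lock, consume-under-lock, work-outside-lock shape (names are illustrative, not the driver's):

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct device {
		bool removed;
		bool online;
		bool rescan;		/* protected by list_lock */
	};

	static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

	static bool rescan_needed(struct device *dev)
	{
		/* caller holds list_lock */
		return !dev->removed && dev->online && dev->rescan;
	}

	static void maybe_rescan(struct device *dev)
	{
		pthread_mutex_lock(&list_lock);
		if (rescan_needed(dev)) {
			dev->rescan = false;	/* consume the flag under the lock */
			pthread_mutex_unlock(&list_lock);
			puts("rescanning device");	/* slow work, lock dropped */
		} else {
			pthread_mutex_unlock(&list_lock);
		}
	}

	int main(void)
	{
		struct device dev = { .online = true, .rescan = true };

		maybe_rescan(&dev);	/* rescans once */
		maybe_rescan(&dev);	/* flag already consumed: no-op */
		return 0;
	}
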
+  * @lblk:	pointer to logical block around which we want to determine hole
+@@ -2241,9 +2241,9 @@ static int ext4_fill_es_cache_info(struct inode *inode,
+  * The function returns the length of a hole starting at @lblk. We update @lblk
+  * to the beginning of the hole if we managed to find it.
+  */
+-static ext4_lblk_t ext4_ext_determine_hole(struct inode *inode,
+-					   struct ext4_ext_path *path,
+-					   ext4_lblk_t *lblk)
++static ext4_lblk_t ext4_ext_find_hole(struct inode *inode,
++				      struct ext4_ext_path *path,
++				      ext4_lblk_t *lblk)
+ {
+ 	int depth = ext_depth(inode);
+ 	struct ext4_extent *ex;
+@@ -2270,30 +2270,6 @@ static ext4_lblk_t ext4_ext_determine_hole(struct inode *inode,
+ 	return len;
+ }
+ 
+-/*
+- * ext4_ext_put_gap_in_cache:
+- * calculate boundaries of the gap that the requested block fits into
+- * and cache this gap
+- */
+-static void
+-ext4_ext_put_gap_in_cache(struct inode *inode, ext4_lblk_t hole_start,
+-			  ext4_lblk_t hole_len)
+-{
+-	struct extent_status es;
+-
+-	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, hole_start,
+-				  hole_start + hole_len - 1, &es);
+-	if (es.es_len) {
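
The interrupt-handler change above follows the usual convention for possibly-shared IRQ lines: read the status register first and return IRQ_NONE when the device has nothing pending, so the IRQ core can attribute the interrupt to another handler and detect genuinely spurious interrupts. A sketch of the shape, with hypothetical register and helper names:

	static irqreturn_t example_isr(int irq, void *data)
	{
		struct example_host *host = data;
		u32 stat = readl(host->regbase + EXAMPLE_INT_STAT);

		if (!stat)
			return IRQ_NONE;	/* not ours: let other handlers run */

		example_disable_int(host);	/* quiesce, then handle the event */
		complete(&host->done);
		return IRQ_HANDLED;
	}
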
+-		/* There's delayed extent containing lblock? */
+-		if (es.es_lblk <= hole_start)
+-			return;
+-		hole_len = min(es.es_lblk - hole_start, hole_len);
+-	}
+-	ext_debug(inode, " -> %u:%u\n", hole_start, hole_len);
+-	ext4_es_insert_extent(inode, hole_start, hole_len, ~0,
+-			      EXTENT_STATUS_HOLE);
+-}
+-
+ /*
+  * ext4_ext_rm_idx:
+  * removes index from the index block.
+@@ -4062,6 +4038,69 @@ static int get_implied_cluster_alloc(struct super_block *sb,
+ 	return 0;
+ }
+ 
++/*
++ * Determine the hole length around the given logical block: first locate
++ * and expand the hole from the given @path, then shrink it if it has been
++ * partially or completely converted to delayed extents, insert it into
++ * the extent cache tree if it is indeed a hole, and finally return the
++ * length of the determined extent.
++ */
++static ext4_lblk_t ext4_ext_determine_insert_hole(struct inode *inode,
++						  struct ext4_ext_path *path,
++						  ext4_lblk_t lblk)
++{
++	ext4_lblk_t hole_start, len;
++	struct extent_status es;
++
++	hole_start = lblk;
++	len = ext4_ext_find_hole(inode, path, &hole_start);
++again:
++	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, hole_start,
++				  hole_start + len - 1, &es);
++	if (!es.es_len)
++		goto insert_hole;
++
++	/*
++	 * There is a delalloc extent in the hole; handle the cases where it
++	 * lies in front of, behind, or straddles the queried range.
++	 */
++	if (lblk >= es.es_lblk + es.es_len) {
++		/*
++		 * The delalloc extent is in front of the queried range,
++		 * find again from the queried start block.
++		 */
++		len -= lblk - hole_start;
++		hole_start = lblk;
++		goto again;
++	} else if (in_range(lblk, es.es_lblk, es.es_len)) {
++		/*
++		 * The delalloc extent contains lblk; it must have been added
++		 * after ext4_map_blocks() checked the extent status tree.
++		 * Adjust the length to the part of the delalloc extent after
++		 * lblk.
++		 */
++		len = es.es_lblk + es.es_len - lblk;
++		return len;
++	} else {
++		/*
++		 * The delalloc extent is partially or completely behind
++		 * the queried range, update hole length until the
++		 * beginning of the delalloc extent.
++		 */
++		len = min(es.es_lblk - hole_start, len);
++	}
++
++insert_hole:
++	/* Put just found gap into cache to speed up subsequent requests */
++	ext_debug(inode, " -> %u:%u\n", hole_start, len);
++	ext4_es_insert_extent(inode, hole_start, len, ~0, EXTENT_STATUS_HOLE);
++
++	/* Update hole_len to reflect hole size after lblk */
++	if (hole_start != lblk)
++		len -= lblk - hole_start;
++
++	return len;
++}
+ 
+ /*
+  * Block allocation/map/preallocation routine for extents based files
+@@ -4179,22 +4218,12 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
+ 	 * we couldn't try to create block if create flag is zero
+ 	 */
+ 	if ((flags & EXT4_GET_BLOCKS_CREATE) == 0) {
+-		ext4_lblk_t hole_start, hole_len;
++		ext4_lblk_t len;
+ 
+-		hole_start = map->m_lblk;
+-		hole_len = ext4_ext_determine_hole(inode, path, &hole_start);
+-		/*
+-		 * put just found gap into cache to speed up
+-		 * subsequent requests
+-		 */
+-		ext4_ext_put_gap_in_cache(inode, hole_start, hole_len);
++		len = ext4_ext_determine_insert_hole(inode, path, map->m_lblk);
+ 
+-		/* Update hole_len to reflect hole size after map->m_lblk */
+-		if (hole_start != map->m_lblk)
+-			hole_len -= map->m_lblk - hole_start;
+ 		map->m_pblk = 0;
+-		map->m_len = min_t(unsigned int, map->m_len, hole_len);
+-
++		map->m_len = min_t(unsigned int, map->m_len, len);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index be63744c95d3b..7497a789d002e 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -842,7 +842,7 @@ mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	int new_order;
+ 
+-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_free == 0)
++	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_fragments == 0)
+ 		return;
+ 
+ 	new_order = mb_avg_fragment_size_order(sb,
+@@ -2305,6 +2305,9 @@ void ext4_mb_try_best_found(struct ext4_allocation_context *ac,
+ 		return;
+ 
+ 	ext4_lock_group(ac->ac_sb, group);
++	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)))
++		goto out;
++
+ 	max = mb_find_extent(e4b, ex.fe_start, ex.fe_len, &ex);
+ 
+ 	if (max > 0) {
+@@ -2312,6 +2315,7 @@ void ext4_mb_try_best_found(struct ext4_allocation_context *ac,
+ 		ext4_mb_use_best_found(ac, e4b);
+ 	}
+ 
++out:
+ 	ext4_unlock_group(ac->ac_sb, group);
+ 	ext4_mb_unload_buddy(e4b);
+ }
+@@ -2338,12 +2342,10 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
+ 	if (err)
+ 		return err;
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info))) {
+-		ext4_mb_unload_buddy(e4b);
+-		return 0;
+-	}
+-
+ 	ext4_lock_group(ac->ac_sb, group);
++	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)))
++		goto out;
++
+ 	max = mb_find_extent(e4b, ac->ac_g_ex.fe_start,
+ 			     ac->ac_g_ex.fe_len, &ex);
+ 	ex.fe_logical = 0xDEADFA11; /* debug value */
+@@ -2376,6 +2378,7 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
+ 		ac->ac_b_ex = ex;
+ 		ext4_mb_use_best_found(ac, e4b);
+ 	}
++out:
+ 	ext4_unlock_group(ac->ac_sb, group);
+ 	ext4_mb_unload_buddy(e4b);
+ 
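
Both mballoc hunks move the EXT4_MB_GRP_BBITMAP_CORRUPT test inside ext4_lock_group(): the corruption flag can be set concurrently, so testing it before taking the group lock leaves a window in which a group marked corrupt is still handed to mb_find_extent(). The check-only-under-the-lock shape, with illustrative helpers:

	static void example_try_group(struct group *grp)
	{
		lock_group(grp);
		if (unlikely(group_is_corrupt(grp)))
			goto out;	/* flag is only stable while the lock is held */

		scan_group(grp);	/* cannot race with corruption marking */
	out:
		unlock_group(grp);
	}
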
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index 63f70259edc0d..7aadf50109994 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -886,7 +886,7 @@ int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+ 	struct runs_tree *run = &ni->file.run;
+ 	struct ntfs_sb_info *sbi;
+ 	u8 cluster_bits;
+-	struct ATTRIB *attr = NULL, *attr_b;
++	struct ATTRIB *attr, *attr_b;
+ 	struct ATTR_LIST_ENTRY *le, *le_b;
+ 	struct mft_inode *mi, *mi_b;
+ 	CLST hint, svcn, to_alloc, evcn1, next_svcn, asize, end, vcn0, alen;
+@@ -904,12 +904,8 @@ int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+ 		*len = 0;
+ 	up_read(&ni->file.run_lock);
+ 
+-	if (*len) {
+-		if (*lcn != SPARSE_LCN || !new)
+-			return 0; /* Fast normal way without allocation. */
+-		else if (clen > *len)
+-			clen = *len;
+-	}
++	if (*len && (*lcn != SPARSE_LCN || !new))
++		return 0; /* Fast normal way without allocation. */
+ 
+ 	/* No cluster in cache or we need to allocate cluster in hole. */
+ 	sbi = ni->mi.sbi;
+@@ -918,6 +914,17 @@ int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+ 	ni_lock(ni);
+ 	down_write(&ni->file.run_lock);
+ 
++	/* Repeat the code above (under write lock). */
++	if (!run_lookup_entry(run, vcn, lcn, len, NULL))
++		*len = 0;
++
++	if (*len) {
++		if (*lcn != SPARSE_LCN || !new)
++			goto out; /* normal way without allocation. */
++		if (clen > *len)
++			clen = *len;
++	}
++
+ 	le_b = NULL;
+ 	attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL, &mi_b);
+ 	if (!attr_b) {
+@@ -1736,8 +1743,10 @@ int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+ 			le_b = NULL;
+ 			attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL,
+ 					      0, NULL, &mi_b);
+-			if (!attr_b)
+-				return -ENOENT;
++			if (!attr_b) {
++				err = -ENOENT;
++				goto out;
++			}
+ 
+ 			attr = attr_b;
+ 			le = le_b;
+@@ -1818,13 +1827,15 @@ int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+ ok:
+ 	run_truncate_around(run, vcn);
+ out:
+-	if (new_valid > data_size)
+-		new_valid = data_size;
++	if (attr_b) {
++		if (new_valid > data_size)
++			new_valid = data_size;
+ 
+-	valid_size = le64_to_cpu(attr_b->nres.valid_size);
+-	if (new_valid != valid_size) {
+-		attr_b->nres.valid_size = cpu_to_le64(valid_size);
+-		mi_b->dirty = true;
++		valid_size = le64_to_cpu(attr_b->nres.valid_size);
++		if (new_valid != valid_size) {
++			attr_b->nres.valid_size = cpu_to_le64(valid_size);
++			mi_b->dirty = true;
++		}
+ 	}
+ 
+ 	return err;
+@@ -2073,7 +2084,7 @@ int attr_collapse_range(struct ntfs_inode *ni, u64 vbo, u64 bytes)
+ 
+ 	/* Update inode size. */
+ 	ni->i_valid = valid_size;
+-	ni->vfs_inode.i_size = data_size;
++	i_size_write(&ni->vfs_inode, data_size);
+ 	inode_set_bytes(&ni->vfs_inode, total_size);
+ 	ni->ni_flags |= NI_FLAG_UPDATE_PARENT;
+ 	mark_inode_dirty(&ni->vfs_inode);
+@@ -2488,7 +2499,7 @@ int attr_insert_range(struct ntfs_inode *ni, u64 vbo, u64 bytes)
+ 	mi_b->dirty = true;
+ 
+ done:
+-	ni->vfs_inode.i_size += bytes;
++	i_size_write(&ni->vfs_inode, ni->vfs_inode.i_size + bytes);
+ 	ni->ni_flags |= NI_FLAG_UPDATE_PARENT;
+ 	mark_inode_dirty(&ni->vfs_inode);
+ 
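
The attr_data_get_block() change is a classic recheck after a lock upgrade: the optimistic run_lookup_entry() runs under the read lock, but once that lock is dropped and the write lock taken, another thread may already have mapped the range, so the lookup has to be repeated before allocating. The pattern in outline (helpers are illustrative):

	read_lock(&run_lock);
	found = lookup(run, vcn, &lcn, &len);
	read_unlock(&run_lock);

	if (found && !needs_alloc(lcn))
		return 0;			/* fast path, nothing to allocate */

	write_lock(&run_lock);
	found = lookup(run, vcn, &lcn, &len);	/* state may have changed */
	if (found && !needs_alloc(lcn))
		goto out;			/* someone else did the work */

	/* ... allocate clusters under the write lock ... */
out:
	write_unlock(&run_lock);
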
+diff --git a/fs/ntfs3/attrlist.c b/fs/ntfs3/attrlist.c
+index 7c01735d1219d..9f4bd8d260901 100644
+--- a/fs/ntfs3/attrlist.c
++++ b/fs/ntfs3/attrlist.c
+@@ -29,7 +29,7 @@ static inline bool al_is_valid_le(const struct ntfs_inode *ni,
+ void al_destroy(struct ntfs_inode *ni)
+ {
+ 	run_close(&ni->attr_list.run);
+-	kfree(ni->attr_list.le);
++	kvfree(ni->attr_list.le);
+ 	ni->attr_list.le = NULL;
+ 	ni->attr_list.size = 0;
+ 	ni->attr_list.dirty = false;
+@@ -127,12 +127,13 @@ struct ATTR_LIST_ENTRY *al_enumerate(struct ntfs_inode *ni,
+ {
+ 	size_t off;
+ 	u16 sz;
++	const unsigned le_min_size = le_size(0);
+ 
+ 	if (!le) {
+ 		le = ni->attr_list.le;
+ 	} else {
+ 		sz = le16_to_cpu(le->size);
+-		if (sz < sizeof(struct ATTR_LIST_ENTRY)) {
++		if (sz < le_min_size) {
+ 			/* Impossible 'cause we should not return such le. */
+ 			return NULL;
+ 		}
+@@ -141,7 +142,7 @@ struct ATTR_LIST_ENTRY *al_enumerate(struct ntfs_inode *ni,
+ 
+ 	/* Check boundary. */
+ 	off = PtrOffset(ni->attr_list.le, le);
+-	if (off + sizeof(struct ATTR_LIST_ENTRY) > ni->attr_list.size) {
++	if (off + le_min_size > ni->attr_list.size) {
+ 		/* The regular end of list. */
+ 		return NULL;
+ 	}
+@@ -149,8 +150,7 @@ struct ATTR_LIST_ENTRY *al_enumerate(struct ntfs_inode *ni,
+ 	sz = le16_to_cpu(le->size);
+ 
+ 	/* Check le for errors. */
+-	if (sz < sizeof(struct ATTR_LIST_ENTRY) ||
+-	    off + sz > ni->attr_list.size ||
++	if (sz < le_min_size || off + sz > ni->attr_list.size ||
+ 	    sz < le->name_off + le->name_len * sizeof(short)) {
+ 		return NULL;
+ 	}
+@@ -318,7 +318,7 @@ int al_add_le(struct ntfs_inode *ni, enum ATTR_TYPE type, const __le16 *name,
+ 		memcpy(ptr, al->le, off);
+ 		memcpy(Add2Ptr(ptr, off + sz), le, old_size - off);
+ 		le = Add2Ptr(ptr, off);
+-		kfree(al->le);
++		kvfree(al->le);
+ 		al->le = ptr;
+ 	} else {
+ 		memmove(Add2Ptr(le, sz), le, old_size - off);
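
The kfree() to kvfree() conversions in these ntfs3 hunks assume the attribute-list buffer can come from kvmalloc(), which falls back to vmalloc() for large allocations; vmalloc memory must never be handed to kfree(), while kvfree() dispatches to the right allocator either way. The pairing rule:

	void *buf = kvmalloc(size, GFP_KERNEL);	/* kmalloc, or vmalloc fallback */
	if (buf) {
		/* ... use buf ... */
		kvfree(buf);	/* correct for both; plain kfree() would be a bug */
	}
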
+diff --git a/fs/ntfs3/bitmap.c b/fs/ntfs3/bitmap.c
+index 63f14a0232f6a..845f9b22deef0 100644
+--- a/fs/ntfs3/bitmap.c
++++ b/fs/ntfs3/bitmap.c
+@@ -124,7 +124,7 @@ void wnd_close(struct wnd_bitmap *wnd)
+ {
+ 	struct rb_node *node, *next;
+ 
+-	kfree(wnd->free_bits);
++	kvfree(wnd->free_bits);
+ 	wnd->free_bits = NULL;
+ 	run_close(&wnd->run);
+ 
+@@ -1360,7 +1360,7 @@ int wnd_extend(struct wnd_bitmap *wnd, size_t new_bits)
+ 		memcpy(new_free, wnd->free_bits, wnd->nwnd * sizeof(short));
+ 		memset(new_free + wnd->nwnd, 0,
+ 		       (new_wnd - wnd->nwnd) * sizeof(short));
+-		kfree(wnd->free_bits);
++		kvfree(wnd->free_bits);
+ 		wnd->free_bits = new_free;
+ 	}
+ 
+diff --git a/fs/ntfs3/dir.c b/fs/ntfs3/dir.c
+index ec0566b322d5d..effa6accf8a85 100644
+--- a/fs/ntfs3/dir.c
++++ b/fs/ntfs3/dir.c
+@@ -309,11 +309,31 @@ static inline int ntfs_filldir(struct ntfs_sb_info *sbi, struct ntfs_inode *ni,
+ 		return 0;
+ 	}
+ 
+-	/* NTFS: symlinks are "dir + reparse" or "file + reparse" */
+-	if (fname->dup.fa & FILE_ATTRIBUTE_REPARSE_POINT)
+-		dt_type = DT_LNK;
+-	else
+-		dt_type = (fname->dup.fa & FILE_ATTRIBUTE_DIRECTORY) ? DT_DIR : DT_REG;
++	/*
++	 * NTFS: symlinks are "dir + reparse" or "file + reparse".
++	 * Unfortunately the reparse attribute is used for many purposes
++	 * (several dozens), so it is not possible here to tell whether
++	 * this name is a symlink or not. To get the exact type of the
++	 * name we would have to open the inode (read the MFT record);
++	 * getattr on an opened file (fstat) correctly reports a symlink.
++	 */
++	dt_type = (fname->dup.fa & FILE_ATTRIBUTE_DIRECTORY) ? DT_DIR : DT_REG;
++
++	/*
++	 * Detecting the type of a name from the duplicated information
++	 * stored in the parent directory is not reliable; the only
++	 * correct way is to read the MFT record and find ATTR_STD.
++	 * The code below is not ideal: it takes additional locks/reads
++	 * just to get the type of the name. Should this branch be gated
++	 * behind a mount option?
++	 */
++	if ((fname->dup.fa & FILE_ATTRIBUTE_REPARSE_POINT) &&
++	    ino != ni->mi.rno) {
++		struct inode *inode = ntfs_iget5(sbi->sb, &e->ref, NULL);
++		if (!IS_ERR_OR_NULL(inode)) {
++			dt_type = fs_umode_to_dtype(inode->i_mode);
++			iput(inode);
++		}
++	}
+ 
+ 	return !dir_emit(ctx, (s8 *)name, name_len, ino, dt_type);
+ }
+@@ -495,11 +515,9 @@ static int ntfs_dir_count(struct inode *dir, bool *is_empty, size_t *dirs,
+ 	struct INDEX_HDR *hdr;
+ 	const struct ATTR_FILE_NAME *fname;
+ 	u32 e_size, off, end;
+-	u64 vbo = 0;
+ 	size_t drs = 0, fles = 0, bit = 0;
+-	loff_t i_size = ni->vfs_inode.i_size;
+ 	struct indx_node *node = NULL;
+-	u8 index_bits = ni->dir.index_bits;
++	size_t max_indx = i_size_read(&ni->vfs_inode) >> ni->dir.index_bits;
+ 
+ 	if (is_empty)
+ 		*is_empty = true;
+@@ -518,8 +536,10 @@ static int ntfs_dir_count(struct inode *dir, bool *is_empty, size_t *dirs,
+ 			e = Add2Ptr(hdr, off);
+ 			e_size = le16_to_cpu(e->size);
+ 			if (e_size < sizeof(struct NTFS_DE) ||
+-			    off + e_size > end)
++			    off + e_size > end) {
++				/* Looks like corruption. */
+ 				break;
++			}
+ 
+ 			if (de_is_last(e))
+ 				break;
+@@ -543,7 +563,7 @@ static int ntfs_dir_count(struct inode *dir, bool *is_empty, size_t *dirs,
+ 				fles += 1;
+ 		}
+ 
+-		if (vbo >= i_size)
++		if (bit >= max_indx)
+ 			goto out;
+ 
+ 		err = indx_used_bit(&ni->dir, ni, &bit);
+@@ -553,8 +573,7 @@ static int ntfs_dir_count(struct inode *dir, bool *is_empty, size_t *dirs,
+ 		if (bit == MINUS_ONE_T)
+ 			goto out;
+ 
+-		vbo = (u64)bit << index_bits;
+-		if (vbo >= i_size)
++		if (bit >= max_indx)
+ 			goto out;
+ 
+ 		err = indx_read(&ni->dir, ni, bit << ni->dir.idx2vbn_bits,
+@@ -564,7 +583,6 @@ static int ntfs_dir_count(struct inode *dir, bool *is_empty, size_t *dirs,
+ 
+ 		hdr = &node->index->ihdr;
+ 		bit += 1;
+-		vbo = (u64)bit << ni->dir.idx2vbn_bits;
+ 	}
+ 
+ out:
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index a5a30a24ce5df..691b0c9b95ae7 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -188,6 +188,7 @@ static int ntfs_zero_range(struct inode *inode, u64 vbo, u64 vbo_to)
+ 	u32 bh_next, bh_off, to;
+ 	sector_t iblock;
+ 	struct folio *folio;
++	bool dirty = false;
+ 
+ 	for (; idx < idx_end; idx += 1, from = 0) {
+ 		page_off = (loff_t)idx << PAGE_SHIFT;
+@@ -223,29 +224,27 @@ static int ntfs_zero_range(struct inode *inode, u64 vbo, u64 vbo_to)
+ 			/* Ok, it's mapped. Make sure it's up-to-date. */
+ 			if (folio_test_uptodate(folio))
+ 				set_buffer_uptodate(bh);
+-
+-			if (!buffer_uptodate(bh)) {
+-				err = bh_read(bh, 0);
+-				if (err < 0) {
+-					folio_unlock(folio);
+-					folio_put(folio);
+-					goto out;
+-				}
++			else if (bh_read(bh, 0) < 0) {
++				err = -EIO;
++				folio_unlock(folio);
++				folio_put(folio);
++				goto out;
+ 			}
+ 
+ 			mark_buffer_dirty(bh);
+-
+ 		} while (bh_off = bh_next, iblock += 1,
+ 			 head != (bh = bh->b_this_page));
+ 
+ 		folio_zero_segment(folio, from, to);
++		dirty = true;
+ 
+ 		folio_unlock(folio);
+ 		folio_put(folio);
+ 		cond_resched();
+ 	}
+ out:
+-	mark_inode_dirty(inode);
++	if (dirty)
++		mark_inode_dirty(inode);
+ 	return err;
+ }
+ 
+@@ -261,6 +260,9 @@ static int ntfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 	bool rw = vma->vm_flags & VM_WRITE;
+ 	int err;
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	if (is_encrypted(ni)) {
+ 		ntfs_inode_warn(inode, "mmap encrypted not supported");
+ 		return -EOPNOTSUPP;
+@@ -499,10 +501,14 @@ static long ntfs_fallocate(struct file *file, int mode, loff_t vbo, loff_t len)
+ 		ni_lock(ni);
+ 		err = attr_punch_hole(ni, vbo, len, &frame_size);
+ 		ni_unlock(ni);
++		if (!err)
++			goto ok;
++
+ 		if (err != E_NTFS_NOTALIGNED)
+ 			goto out;
+ 
+ 		/* Process not aligned punch. */
++		err = 0;
+ 		mask = frame_size - 1;
+ 		vbo_a = (vbo + mask) & ~mask;
+ 		end_a = end & ~mask;
+@@ -525,6 +531,8 @@ static long ntfs_fallocate(struct file *file, int mode, loff_t vbo, loff_t len)
+ 			ni_lock(ni);
+ 			err = attr_punch_hole(ni, vbo_a, end_a - vbo_a, NULL);
+ 			ni_unlock(ni);
++			if (err)
++				goto out;
+ 		}
+ 	} else if (mode & FALLOC_FL_COLLAPSE_RANGE) {
+ 		/*
+@@ -564,6 +572,8 @@ static long ntfs_fallocate(struct file *file, int mode, loff_t vbo, loff_t len)
+ 		ni_lock(ni);
+ 		err = attr_insert_range(ni, vbo, len);
+ 		ni_unlock(ni);
++		if (err)
++			goto out;
+ 	} else {
+ 		/* Check new size. */
+ 		u8 cluster_bits = sbi->cluster_bits;
+@@ -633,11 +643,18 @@ static long ntfs_fallocate(struct file *file, int mode, loff_t vbo, loff_t len)
+ 					    &ni->file.run, i_size, &ni->i_valid,
+ 					    true, NULL);
+ 			ni_unlock(ni);
++			if (err)
++				goto out;
+ 		} else if (new_size > i_size) {
+-			inode->i_size = new_size;
++			i_size_write(inode, new_size);
+ 		}
+ 	}
+ 
++ok:
++	err = file_modified(file);
++	if (err)
++		goto out;
++
+ out:
+ 	if (map_locked)
+ 		filemap_invalidate_unlock(mapping);
+@@ -663,6 +680,9 @@ int ntfs3_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	umode_t mode = inode->i_mode;
+ 	int err;
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	err = setattr_prepare(idmap, dentry, attr);
+ 	if (err)
+ 		goto out;
+@@ -676,7 +696,7 @@ int ntfs3_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 			goto out;
+ 		}
+ 		inode_dio_wait(inode);
+-		oldsize = inode->i_size;
++		oldsize = i_size_read(inode);
+ 		newsize = attr->ia_size;
+ 
+ 		if (newsize <= oldsize)
+@@ -688,7 +708,7 @@ int ntfs3_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 			goto out;
+ 
+ 		ni->ni_flags |= NI_FLAG_UPDATE_PARENT;
+-		inode->i_size = newsize;
++		i_size_write(inode, newsize);
+ 	}
+ 
+ 	setattr_copy(idmap, inode, attr);
+@@ -718,6 +738,9 @@ static ssize_t ntfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	struct inode *inode = file->f_mapping->host;
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	if (is_encrypted(ni)) {
+ 		ntfs_inode_warn(inode, "encrypted i/o not supported");
+ 		return -EOPNOTSUPP;
+@@ -752,6 +775,9 @@ static ssize_t ntfs_file_splice_read(struct file *in, loff_t *ppos,
+ 	struct inode *inode = in->f_mapping->host;
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	if (is_encrypted(ni)) {
+ 		ntfs_inode_warn(inode, "encrypted i/o not supported");
+ 		return -EOPNOTSUPP;
+@@ -821,7 +847,7 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
+ 	size_t count = iov_iter_count(from);
+ 	loff_t pos = iocb->ki_pos;
+ 	struct inode *inode = file_inode(file);
+-	loff_t i_size = inode->i_size;
++	loff_t i_size = i_size_read(inode);
+ 	struct address_space *mapping = inode->i_mapping;
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 	u64 valid = ni->i_valid;
+@@ -1028,6 +1054,8 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
+ 	iocb->ki_pos += written;
+ 	if (iocb->ki_pos > ni->i_valid)
+ 		ni->i_valid = iocb->ki_pos;
++	if (iocb->ki_pos > i_size)
++		i_size_write(inode, iocb->ki_pos);
+ 
+ 	return written;
+ }
+@@ -1041,8 +1069,12 @@ static ssize_t ntfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	struct address_space *mapping = file->f_mapping;
+ 	struct inode *inode = mapping->host;
+ 	ssize_t ret;
++	int err;
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	if (is_encrypted(ni)) {
+ 		ntfs_inode_warn(inode, "encrypted i/o not supported");
+ 		return -EOPNOTSUPP;
+@@ -1068,6 +1100,12 @@ static ssize_t ntfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	if (ret <= 0)
+ 		goto out;
+ 
++	err = file_modified(iocb->ki_filp);
++	if (err) {
++		ret = err;
++		goto out;
++	}
++
+ 	if (WARN_ON(ni->ni_flags & NI_FLAG_COMPRESSED_MASK)) {
+ 		/* Should never be here, see ntfs_file_open(). */
+ 		ret = -EOPNOTSUPP;
+@@ -1097,6 +1135,9 @@ int ntfs_file_open(struct inode *inode, struct file *file)
+ {
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	if (unlikely((is_compressed(ni) || is_encrypted(ni)) &&
+ 		     (file->f_flags & O_DIRECT))) {
+ 		return -EOPNOTSUPP;
+@@ -1138,7 +1179,8 @@ static int ntfs_file_release(struct inode *inode, struct file *file)
+ 		down_write(&ni->file.run_lock);
+ 
+ 		err = attr_set_size(ni, ATTR_DATA, NULL, 0, &ni->file.run,
+-				    inode->i_size, &ni->i_valid, false, NULL);
++				    i_size_read(inode), &ni->i_valid, false,
++				    NULL);
+ 
+ 		up_write(&ni->file.run_lock);
+ 		ni_unlock(ni);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 3df2d9e34b914..3b42938a9d3b2 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -778,7 +778,7 @@ static int ni_try_remove_attr_list(struct ntfs_inode *ni)
+ 	run_deallocate(sbi, &ni->attr_list.run, true);
+ 	run_close(&ni->attr_list.run);
+ 	ni->attr_list.size = 0;
+-	kfree(ni->attr_list.le);
++	kvfree(ni->attr_list.le);
+ 	ni->attr_list.le = NULL;
+ 	ni->attr_list.dirty = false;
+ 
+@@ -927,7 +927,7 @@ int ni_create_attr_list(struct ntfs_inode *ni)
+ 	return 0;
+ 
+ out:
+-	kfree(ni->attr_list.le);
++	kvfree(ni->attr_list.le);
+ 	ni->attr_list.le = NULL;
+ 	ni->attr_list.size = 0;
+ 	return err;
+@@ -2099,7 +2099,7 @@ int ni_readpage_cmpr(struct ntfs_inode *ni, struct page *page)
+ 	gfp_t gfp_mask;
+ 	struct page *pg;
+ 
+-	if (vbo >= ni->vfs_inode.i_size) {
++	if (vbo >= i_size_read(&ni->vfs_inode)) {
+ 		SetPageUptodate(page);
+ 		err = 0;
+ 		goto out;
+@@ -2173,7 +2173,7 @@ int ni_decompress_file(struct ntfs_inode *ni)
+ {
+ 	struct ntfs_sb_info *sbi = ni->mi.sbi;
+ 	struct inode *inode = &ni->vfs_inode;
+-	loff_t i_size = inode->i_size;
++	loff_t i_size = i_size_read(inode);
+ 	struct address_space *mapping = inode->i_mapping;
+ 	gfp_t gfp_mask = mapping_gfp_mask(mapping);
+ 	struct page **pages = NULL;
+@@ -2457,6 +2457,7 @@ int ni_read_frame(struct ntfs_inode *ni, u64 frame_vbo, struct page **pages,
+ 	struct ATTR_LIST_ENTRY *le = NULL;
+ 	struct runs_tree *run = &ni->file.run;
+ 	u64 valid_size = ni->i_valid;
++	loff_t i_size = i_size_read(&ni->vfs_inode);
+ 	u64 vbo_disk;
+ 	size_t unc_size;
+ 	u32 frame_size, i, npages_disk, ondisk_size;
+@@ -2548,7 +2549,7 @@ int ni_read_frame(struct ntfs_inode *ni, u64 frame_vbo, struct page **pages,
+ 			}
+ 		}
+ 
+-		frames = (ni->vfs_inode.i_size - 1) >> frame_bits;
++		frames = (i_size - 1) >> frame_bits;
+ 
+ 		err = attr_wof_frame_info(ni, attr, run, frame64, frames,
+ 					  frame_bits, &ondisk_size, &vbo_data);
+@@ -2556,8 +2557,7 @@ int ni_read_frame(struct ntfs_inode *ni, u64 frame_vbo, struct page **pages,
+ 			goto out2;
+ 
+ 		if (frame64 == frames) {
+-			unc_size = 1 + ((ni->vfs_inode.i_size - 1) &
+-					(frame_size - 1));
++			unc_size = 1 + ((i_size - 1) & (frame_size - 1));
+ 			ondisk_size = attr_size(attr) - vbo_data;
+ 		} else {
+ 			unc_size = frame_size;
+@@ -3259,6 +3259,9 @@ int ni_write_inode(struct inode *inode, int sync, const char *hint)
+ 	if (is_bad_inode(inode) || sb_rdonly(sb))
+ 		return 0;
+ 
++	if (unlikely(ntfs3_forced_shutdown(sb)))
++		return -EIO;
++
+ 	if (!ni_trylock(ni)) {
+ 		/* 'ni' is under modification, skip for now. */
+ 		mark_inode_dirty_sync(inode);
+@@ -3288,7 +3291,7 @@ int ni_write_inode(struct inode *inode, int sync, const char *hint)
+ 			modified = true;
+ 		}
+ 
+-		ts = inode_get_mtime(inode);
++		ts = inode_get_ctime(inode);
+ 		dup.c_time = kernel2nt(&ts);
+ 		if (std->c_time != dup.c_time) {
+ 			std->c_time = dup.c_time;
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index 98ccb66508583..855519713bf79 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -465,7 +465,7 @@ static inline bool is_rst_area_valid(const struct RESTART_HDR *rhdr)
+ {
+ 	const struct RESTART_AREA *ra;
+ 	u16 cl, fl, ul;
+-	u32 off, l_size, file_dat_bits, file_size_round;
++	u32 off, l_size, seq_bits;
+ 	u16 ro = le16_to_cpu(rhdr->ra_off);
+ 	u32 sys_page = le32_to_cpu(rhdr->sys_page_size);
+ 
+@@ -511,13 +511,15 @@ static inline bool is_rst_area_valid(const struct RESTART_HDR *rhdr)
+ 	/* Make sure the sequence number bits match the log file size. */
+ 	l_size = le64_to_cpu(ra->l_size);
+ 
+-	file_dat_bits = sizeof(u64) * 8 - le32_to_cpu(ra->seq_num_bits);
+-	file_size_round = 1u << (file_dat_bits + 3);
+-	if (file_size_round != l_size &&
+-	    (file_size_round < l_size || (file_size_round / 2) > l_size)) {
+-		return false;
++	seq_bits = sizeof(u64) * 8 + 3;
++	while (l_size) {
++		l_size >>= 1;
++		seq_bits -= 1;
+ 	}
+ 
++	if (seq_bits != ra->seq_num_bits)
++		return false;
++
+ 	/* The log page data offset and record header length must be quad-aligned. */
+ 	if (!IS_ALIGNED(le16_to_cpu(ra->data_off), 8) ||
+ 	    !IS_ALIGNED(le16_to_cpu(ra->rec_hdr_len), 8))
+@@ -974,6 +976,16 @@ static inline void *alloc_rsttbl_from_idx(struct RESTART_TABLE **tbl, u32 vbo)
+ 	return e;
+ }
+ 
++struct restart_info {
++	u64 last_lsn;
++	struct RESTART_HDR *r_page;
++	u32 vbo;
++	bool chkdsk_was_run;
++	bool valid_page;
++	bool initialized;
++	bool restart;
++};
++
+ #define RESTART_SINGLE_PAGE_IO cpu_to_le16(0x0001)
+ 
+ #define NTFSLOG_WRAPPED 0x00000001
+@@ -987,6 +999,7 @@ struct ntfs_log {
+ 	struct ntfs_inode *ni;
+ 
+ 	u32 l_size;
++	u32 orig_file_size;
+ 	u32 sys_page_size;
+ 	u32 sys_page_mask;
+ 	u32 page_size;
+@@ -1040,6 +1053,8 @@ struct ntfs_log {
+ 
+ 	struct CLIENT_ID client_id;
+ 	u32 client_undo_commit;
++
++	struct restart_info rst_info, rst_info2;
+ };
+ 
+ static inline u32 lsn_to_vbo(struct ntfs_log *log, const u64 lsn)
+@@ -1105,16 +1120,6 @@ static inline bool verify_client_lsn(struct ntfs_log *log,
+ 	       lsn <= le64_to_cpu(log->ra->current_lsn) && lsn;
+ }
+ 
+-struct restart_info {
+-	u64 last_lsn;
+-	struct RESTART_HDR *r_page;
+-	u32 vbo;
+-	bool chkdsk_was_run;
+-	bool valid_page;
+-	bool initialized;
+-	bool restart;
+-};
+-
+ static int read_log_page(struct ntfs_log *log, u32 vbo,
+ 			 struct RECORD_PAGE_HDR **buffer, bool *usa_error)
+ {
+@@ -1176,7 +1181,7 @@ static int read_log_page(struct ntfs_log *log, u32 vbo,
+  * restart page header. It will stop the first time we find a
+  * valid page header.
+  */
+-static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
++static int log_read_rst(struct ntfs_log *log, bool first,
+ 			struct restart_info *info)
+ {
+ 	u32 skip, vbo;
+@@ -1192,7 +1197,7 @@ static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
+ 	}
+ 
+ 	/* Loop continuously until we succeed. */
+-	for (; vbo < l_size; vbo = 2 * vbo + skip, skip = 0) {
++	for (; vbo < log->l_size; vbo = 2 * vbo + skip, skip = 0) {
+ 		bool usa_error;
+ 		bool brst, bchk;
+ 		struct RESTART_AREA *ra;
+@@ -1285,22 +1290,17 @@ static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
+ /*
+  * log_init_pg_hdr - Init @log from restart page header.
+  */
+-static void log_init_pg_hdr(struct ntfs_log *log, u32 sys_page_size,
+-			    u32 page_size, u16 major_ver, u16 minor_ver)
++static void log_init_pg_hdr(struct ntfs_log *log, u16 major_ver, u16 minor_ver)
+ {
+-	log->sys_page_size = sys_page_size;
+-	log->sys_page_mask = sys_page_size - 1;
+-	log->page_size = page_size;
+-	log->page_mask = page_size - 1;
+-	log->page_bits = blksize_bits(page_size);
++	log->sys_page_size = log->page_size;
++	log->sys_page_mask = log->page_mask;
+ 
+ 	log->clst_per_page = log->page_size >> log->ni->mi.sbi->cluster_bits;
+ 	if (!log->clst_per_page)
+ 		log->clst_per_page = 1;
+ 
+-	log->first_page = major_ver >= 2 ?
+-				  0x22 * page_size :
+-				  ((sys_page_size << 1) + (page_size << 1));
++	log->first_page = major_ver >= 2 ? 0x22 * log->page_size :
++					   4 * log->page_size;
+ 	log->major_ver = major_ver;
+ 	log->minor_ver = minor_ver;
+ }
+@@ -1308,12 +1308,11 @@ static void log_init_pg_hdr(struct ntfs_log *log, u32 sys_page_size,
+ /*
+  * log_create - Init @log in cases when we don't have a restart area to use.
+  */
+-static void log_create(struct ntfs_log *log, u32 l_size, const u64 last_lsn,
++static void log_create(struct ntfs_log *log, const u64 last_lsn,
+ 		       u32 open_log_count, bool wrapped, bool use_multi_page)
+ {
+-	log->l_size = l_size;
+ 	/* All file offsets must be quadword aligned. */
+-	log->file_data_bits = blksize_bits(l_size) - 3;
++	log->file_data_bits = blksize_bits(log->l_size) - 3;
+ 	log->seq_num_mask = (8 << log->file_data_bits) - 1;
+ 	log->seq_num_bits = sizeof(u64) * 8 - log->file_data_bits;
+ 	log->seq_num = (last_lsn >> log->file_data_bits) + 2;
+@@ -3720,10 +3719,8 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	struct ntfs_sb_info *sbi = ni->mi.sbi;
+ 	struct ntfs_log *log;
+ 
+-	struct restart_info rst_info, rst_info2;
+-	u64 rec_lsn, ra_lsn, checkpt_lsn = 0, rlsn = 0;
++	u64 rec_lsn, checkpt_lsn = 0, rlsn = 0;
+ 	struct ATTR_NAME_ENTRY *attr_names = NULL;
+-	struct ATTR_NAME_ENTRY *ane;
+ 	struct RESTART_TABLE *dptbl = NULL;
+ 	struct RESTART_TABLE *trtbl = NULL;
+ 	const struct RESTART_TABLE *rt;
+@@ -3741,9 +3738,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	struct TRANSACTION_ENTRY *tr;
+ 	struct DIR_PAGE_ENTRY *dp;
+ 	u32 i, bytes_per_attr_entry;
+-	u32 l_size = ni->vfs_inode.i_size;
+-	u32 orig_file_size = l_size;
+-	u32 page_size, vbo, tail, off, dlen;
++	u32 vbo, tail, off, dlen;
+ 	u32 saved_len, rec_len, transact_id;
+ 	bool use_second_page;
+ 	struct RESTART_AREA *ra2, *ra = NULL;
+@@ -3758,52 +3753,50 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	u16 t16;
+ 	u32 t32;
+ 
+-	/* Get the size of page. NOTE: To replay we can use default page. */
+-#if PAGE_SIZE >= DefaultLogPageSize && PAGE_SIZE <= DefaultLogPageSize * 2
+-	page_size = norm_file_page(PAGE_SIZE, &l_size, true);
+-#else
+-	page_size = norm_file_page(PAGE_SIZE, &l_size, false);
+-#endif
+-	if (!page_size)
+-		return -EINVAL;
+-
+ 	log = kzalloc(sizeof(struct ntfs_log), GFP_NOFS);
+ 	if (!log)
+ 		return -ENOMEM;
+ 
+ 	log->ni = ni;
+-	log->l_size = l_size;
+-	log->one_page_buf = kmalloc(page_size, GFP_NOFS);
++	log->l_size = log->orig_file_size = ni->vfs_inode.i_size;
+ 
++	/* Get the size of page. NOTE: To replay we can use default page. */
++#if PAGE_SIZE >= DefaultLogPageSize && PAGE_SIZE <= DefaultLogPageSize * 2
++	log->page_size = norm_file_page(PAGE_SIZE, &log->l_size, true);
++#else
++	log->page_size = norm_file_page(PAGE_SIZE, &log->l_size, false);
++#endif
++	if (!log->page_size) {
++		err = -EINVAL;
++		goto out;
++	}
++
++	log->one_page_buf = kmalloc(log->page_size, GFP_NOFS);
+ 	if (!log->one_page_buf) {
+ 		err = -ENOMEM;
+ 		goto out;
+ 	}
+ 
+-	log->page_size = page_size;
+-	log->page_mask = page_size - 1;
+-	log->page_bits = blksize_bits(page_size);
++	log->page_mask = log->page_size - 1;
++	log->page_bits = blksize_bits(log->page_size);
+ 
+ 	/* Look for a restart area on the disk. */
+-	memset(&rst_info, 0, sizeof(struct restart_info));
+-	err = log_read_rst(log, l_size, true, &rst_info);
++	err = log_read_rst(log, true, &log->rst_info);
+ 	if (err)
+ 		goto out;
+ 
+ 	/* remember 'initialized' */
+-	*initialized = rst_info.initialized;
++	*initialized = log->rst_info.initialized;
+ 
+-	if (!rst_info.restart) {
+-		if (rst_info.initialized) {
++	if (!log->rst_info.restart) {
++		if (log->rst_info.initialized) {
+ 			/* No restart area but the file is not initialized. */
+ 			err = -EINVAL;
+ 			goto out;
+ 		}
+ 
+-		log_init_pg_hdr(log, page_size, page_size, 1, 1);
+-		log_create(log, l_size, 0, get_random_u32(), false, false);
+-
+-		log->ra = ra;
++		log_init_pg_hdr(log, 1, 1);
++		log_create(log, 0, get_random_u32(), false, false);
+ 
+ 		ra = log_create_ra(log);
+ 		if (!ra) {
+@@ -3820,25 +3813,26 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	 * If the restart offset above wasn't zero then we won't
+ 	 * look for a second restart.
+ 	 */
+-	if (rst_info.vbo)
++	if (log->rst_info.vbo)
+ 		goto check_restart_area;
+ 
+-	memset(&rst_info2, 0, sizeof(struct restart_info));
+-	err = log_read_rst(log, l_size, false, &rst_info2);
++	err = log_read_rst(log, false, &log->rst_info2);
+ 	if (err)
+ 		goto out;
+ 
+ 	/* Determine which restart area to use. */
+-	if (!rst_info2.restart || rst_info2.last_lsn <= rst_info.last_lsn)
++	if (!log->rst_info2.restart ||
++	    log->rst_info2.last_lsn <= log->rst_info.last_lsn)
+ 		goto use_first_page;
+ 
+ 	use_second_page = true;
+ 
+-	if (rst_info.chkdsk_was_run && page_size != rst_info.vbo) {
++	if (log->rst_info.chkdsk_was_run &&
++	    log->page_size != log->rst_info.vbo) {
+ 		struct RECORD_PAGE_HDR *sp = NULL;
+ 		bool usa_error;
+ 
+-		if (!read_log_page(log, page_size, &sp, &usa_error) &&
++		if (!read_log_page(log, log->page_size, &sp, &usa_error) &&
+ 		    sp->rhdr.sign == NTFS_CHKD_SIGNATURE) {
+ 			use_second_page = false;
+ 		}
+@@ -3846,52 +3840,43 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	}
+ 
+ 	if (use_second_page) {
+-		kfree(rst_info.r_page);
+-		memcpy(&rst_info, &rst_info2, sizeof(struct restart_info));
+-		rst_info2.r_page = NULL;
++		kfree(log->rst_info.r_page);
++		memcpy(&log->rst_info, &log->rst_info2,
++		       sizeof(struct restart_info));
++		log->rst_info2.r_page = NULL;
+ 	}
+ 
+ use_first_page:
+-	kfree(rst_info2.r_page);
++	kfree(log->rst_info2.r_page);
+ 
+ check_restart_area:
+ 	/*
+ 	 * If the restart area is at offset 0, we want
+ 	 * to write the second restart area first.
+ 	 */
+-	log->init_ra = !!rst_info.vbo;
++	log->init_ra = !!log->rst_info.vbo;
+ 
+ 	/* If we have a valid page then grab a pointer to the restart area. */
+-	ra2 = rst_info.valid_page ?
+-		      Add2Ptr(rst_info.r_page,
+-			      le16_to_cpu(rst_info.r_page->ra_off)) :
++	ra2 = log->rst_info.valid_page ?
++		      Add2Ptr(log->rst_info.r_page,
++			      le16_to_cpu(log->rst_info.r_page->ra_off)) :
+ 		      NULL;
+ 
+-	if (rst_info.chkdsk_was_run ||
++	if (log->rst_info.chkdsk_was_run ||
+ 	    (ra2 && ra2->client_idx[1] == LFS_NO_CLIENT_LE)) {
+ 		bool wrapped = false;
+ 		bool use_multi_page = false;
+ 		u32 open_log_count;
+ 
+ 		/* Do some checks based on whether we have a valid log page. */
+-		if (!rst_info.valid_page) {
+-			open_log_count = get_random_u32();
+-			goto init_log_instance;
+-		}
+-		open_log_count = le32_to_cpu(ra2->open_log_count);
+-
+-		/*
+-		 * If the restart page size isn't changing then we want to
+-		 * check how much work we need to do.
+-		 */
+-		if (page_size != le32_to_cpu(rst_info.r_page->sys_page_size))
+-			goto init_log_instance;
++		open_log_count = log->rst_info.valid_page ?
++					 le32_to_cpu(ra2->open_log_count) :
++					 get_random_u32();
+ 
+-init_log_instance:
+-		log_init_pg_hdr(log, page_size, page_size, 1, 1);
++		log_init_pg_hdr(log, 1, 1);
+ 
+-		log_create(log, l_size, rst_info.last_lsn, open_log_count,
+-			   wrapped, use_multi_page);
++		log_create(log, log->rst_info.last_lsn, open_log_count, wrapped,
++			   use_multi_page);
+ 
+ 		ra = log_create_ra(log);
+ 		if (!ra) {
+@@ -3916,28 +3901,27 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	 * use the log file. We must use the system page size instead of the
+ 	 * default size if there is not a clean shutdown.
+ 	 */
+-	t32 = le32_to_cpu(rst_info.r_page->sys_page_size);
+-	if (page_size != t32) {
+-		l_size = orig_file_size;
+-		page_size =
+-			norm_file_page(t32, &l_size, t32 == DefaultLogPageSize);
++	t32 = le32_to_cpu(log->rst_info.r_page->sys_page_size);
++	if (log->page_size != t32) {
++		log->l_size = log->orig_file_size;
++		log->page_size = norm_file_page(t32, &log->l_size,
++						t32 == DefaultLogPageSize);
+ 	}
+ 
+-	if (page_size != t32 ||
+-	    page_size != le32_to_cpu(rst_info.r_page->page_size)) {
++	if (log->page_size != t32 ||
++	    log->page_size != le32_to_cpu(log->rst_info.r_page->page_size)) {
+ 		err = -EINVAL;
+ 		goto out;
+ 	}
+ 
+ 	/* If the file size has shrunk then we won't mount it. */
+-	if (l_size < le64_to_cpu(ra2->l_size)) {
++	if (log->l_size < le64_to_cpu(ra2->l_size)) {
+ 		err = -EINVAL;
+ 		goto out;
+ 	}
+ 
+-	log_init_pg_hdr(log, page_size, page_size,
+-			le16_to_cpu(rst_info.r_page->major_ver),
+-			le16_to_cpu(rst_info.r_page->minor_ver));
++	log_init_pg_hdr(log, le16_to_cpu(log->rst_info.r_page->major_ver),
++			le16_to_cpu(log->rst_info.r_page->minor_ver));
+ 
+ 	log->l_size = le64_to_cpu(ra2->l_size);
+ 	log->seq_num_bits = le32_to_cpu(ra2->seq_num_bits);
+@@ -3945,7 +3929,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	log->seq_num_mask = (8 << log->file_data_bits) - 1;
+ 	log->last_lsn = le64_to_cpu(ra2->current_lsn);
+ 	log->seq_num = log->last_lsn >> log->file_data_bits;
+-	log->ra_off = le16_to_cpu(rst_info.r_page->ra_off);
++	log->ra_off = le16_to_cpu(log->rst_info.r_page->ra_off);
+ 	log->restart_size = log->sys_page_size - log->ra_off;
+ 	log->record_header_len = le16_to_cpu(ra2->rec_hdr_len);
+ 	log->ra_size = le16_to_cpu(ra2->ra_len);
+@@ -4045,7 +4029,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	log->current_avail = current_log_avail(log);
+ 
+ 	/* Remember which restart area to write first. */
+-	log->init_ra = rst_info.vbo;
++	log->init_ra = log->rst_info.vbo;
+ 
+ process_log:
+ 	/* 1.0, 1.1, 2.0 log->major_ver/minor_ver - short values. */
+@@ -4105,7 +4089,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	log->client_id.seq_num = cr->seq_num;
+ 	log->client_id.client_idx = client;
+ 
+-	err = read_rst_area(log, &rst, &ra_lsn);
++	err = read_rst_area(log, &rst, &checkpt_lsn);
+ 	if (err)
+ 		goto out;
+ 
+@@ -4114,9 +4098,8 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 
+ 	bytes_per_attr_entry = !rst->major_ver ? 0x2C : 0x28;
+ 
+-	checkpt_lsn = le64_to_cpu(rst->check_point_start);
+-	if (!checkpt_lsn)
+-		checkpt_lsn = ra_lsn;
++	if (rst->check_point_start)
++		checkpt_lsn = le64_to_cpu(rst->check_point_start);
+ 
+ 	/* Allocate and Read the Transaction Table. */
+ 	if (!rst->transact_table_len)
+@@ -4330,23 +4313,20 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	lcb = NULL;
+ 
+ check_attribute_names2:
+-	if (!rst->attr_names_len)
+-		goto trace_attribute_table;
+-
+-	ane = attr_names;
+-	if (!oatbl)
+-		goto trace_attribute_table;
+-	while (ane->off) {
+-		/* TODO: Clear table on exit! */
+-		oe = Add2Ptr(oatbl, le16_to_cpu(ane->off));
+-		t16 = le16_to_cpu(ane->name_bytes);
+-		oe->name_len = t16 / sizeof(short);
+-		oe->ptr = ane->name;
+-		oe->is_attr_name = 2;
+-		ane = Add2Ptr(ane, sizeof(struct ATTR_NAME_ENTRY) + t16);
+-	}
+-
+-trace_attribute_table:
++	if (rst->attr_names_len && oatbl) {
++		struct ATTR_NAME_ENTRY *ane = attr_names;
++		while (ane->off) {
++			/* TODO: Clear table on exit! */
++			oe = Add2Ptr(oatbl, le16_to_cpu(ane->off));
++			t16 = le16_to_cpu(ane->name_bytes);
++			oe->name_len = t16 / sizeof(short);
++			oe->ptr = ane->name;
++			oe->is_attr_name = 2;
++			ane = Add2Ptr(ane,
++				      sizeof(struct ATTR_NAME_ENTRY) + t16);
++		}
++	}
++
+ 	/*
+ 	 * If the checkpt_lsn is zero, then this is a freshly
+ 	 * formatted disk and we have no work to do.
+@@ -5189,7 +5169,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	kfree(oatbl);
+ 	kfree(dptbl);
+ 	kfree(attr_names);
+-	kfree(rst_info.r_page);
++	kfree(log->rst_info.r_page);
+ 
+ 	kfree(ra);
+ 	kfree(log->one_page_buf);
+diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
+index fbfe21dbb4259..ae2ef5c11868c 100644
+--- a/fs/ntfs3/fsntfs.c
++++ b/fs/ntfs3/fsntfs.c
+@@ -853,7 +853,8 @@ void ntfs_update_mftmirr(struct ntfs_sb_info *sbi, int wait)
+ 	/*
+ 	 * sb can be NULL here. In this case sbi->flags should be 0 too.
+ 	 */
+-	if (!sb || !(sbi->flags & NTFS_FLAGS_MFTMIRR))
++	if (!sb || !(sbi->flags & NTFS_FLAGS_MFTMIRR) ||
++	    unlikely(ntfs3_forced_shutdown(sb)))
+ 		return;
+ 
+ 	blocksize = sb->s_blocksize;
+@@ -1006,6 +1007,30 @@ static inline __le32 security_hash(const void *sd, size_t bytes)
+ 	return cpu_to_le32(hash);
+ }
+ 
++/*
++ * Simple wrapper for sb_bread_unmovable; refuses reads beyond the volume end.
++ */
++struct buffer_head *ntfs_bread(struct super_block *sb, sector_t block)
++{
++	struct ntfs_sb_info *sbi = sb->s_fs_info;
++	struct buffer_head *bh;
++
++	if (unlikely(block >= sbi->volume.blocks)) {
++		/* prevent generic message "attempt to access beyond end of device" */
++		ntfs_err(sb, "try to read out of volume at offset 0x%llx",
++			 (u64)block << sb->s_blocksize_bits);
++		return NULL;
++	}
++
++	bh = sb_bread_unmovable(sb, block);
++	if (bh)
++		return bh;
++
++	ntfs_err(sb, "failed to read volume at offset 0x%llx",
++		 (u64)block << sb->s_blocksize_bits);
++	return NULL;
++}
++
+ int ntfs_sb_read(struct super_block *sb, u64 lbo, size_t bytes, void *buffer)
+ {
+ 	struct block_device *bdev = sb->s_bdev;
+@@ -2128,8 +2153,8 @@ int ntfs_insert_security(struct ntfs_sb_info *sbi,
+ 			if (le32_to_cpu(d_security->size) == new_sec_size &&
+ 			    d_security->key.hash == hash_key.hash &&
+ 			    !memcmp(d_security + 1, sd, size_sd)) {
+-				*security_id = d_security->key.sec_id;
+ 				/* Such security already exists. */
++				*security_id = d_security->key.sec_id;
+ 				err = 0;
+ 				goto out;
+ 			}
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index cf92b2433f7a7..daabaad63aaf6 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -1462,7 +1462,7 @@ static int indx_create_allocate(struct ntfs_index *indx, struct ntfs_inode *ni,
+ 		goto out2;
+ 
+ 	if (in->name == I30_NAME) {
+-		ni->vfs_inode.i_size = data_size;
++		i_size_write(&ni->vfs_inode, data_size);
+ 		inode_set_bytes(&ni->vfs_inode, alloc_size);
+ 	}
+ 
+@@ -1544,7 +1544,7 @@ static int indx_add_allocate(struct ntfs_index *indx, struct ntfs_inode *ni,
+ 	}
+ 
+ 	if (in->name == I30_NAME)
+-		ni->vfs_inode.i_size = data_size;
++		i_size_write(&ni->vfs_inode, data_size);
+ 
+ 	*vbn = bit << indx->idx2vbn_bits;
+ 
+@@ -2090,7 +2090,7 @@ static int indx_shrink(struct ntfs_index *indx, struct ntfs_inode *ni,
+ 		return err;
+ 
+ 	if (in->name == I30_NAME)
+-		ni->vfs_inode.i_size = new_data;
++		i_size_write(&ni->vfs_inode, new_data);
+ 
+ 	bpb = bitmap_size(bit);
+ 	if (bpb * 8 == nbits)
+@@ -2576,7 +2576,7 @@ int indx_delete_entry(struct ntfs_index *indx, struct ntfs_inode *ni,
+ 		err = attr_set_size(ni, ATTR_ALLOC, in->name, in->name_len,
+ 				    &indx->alloc_run, 0, NULL, false, NULL);
+ 		if (in->name == I30_NAME)
+-			ni->vfs_inode.i_size = 0;
++			i_size_write(&ni->vfs_inode, 0);
+ 
+ 		err = ni_remove_attr(ni, ATTR_ALLOC, in->name, in->name_len,
+ 				     false, NULL);
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 5e3d713749185..eb7a8c9fba018 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -345,9 +345,7 @@ static struct inode *ntfs_read_mft(struct inode *inode,
+ 			inode->i_size = le16_to_cpu(rp.SymbolicLinkReparseBuffer
+ 							    .PrintNameLength) /
+ 					sizeof(u16);
+-
+ 			ni->i_valid = inode->i_size;
+-
+ 			/* Clear directory bit. */
+ 			if (ni->ni_flags & NI_FLAG_DIR) {
+ 				indx_clear(&ni->dir);
+@@ -412,7 +410,6 @@ static struct inode *ntfs_read_mft(struct inode *inode,
+ 		goto out;
+ 
+ 	if (!is_match && name) {
+-		/* Reuse rec as buffer for ascii name. */
+ 		err = -ENOENT;
+ 		goto out;
+ 	}
+@@ -427,6 +424,7 @@ static struct inode *ntfs_read_mft(struct inode *inode,
+ 
+ 	if (names != le16_to_cpu(rec->hard_links)) {
+ 		/* Correct minor error on the fly. Do not mark inode as dirty. */
++		ntfs_inode_warn(inode, "Correct links count -> %u.", names);
+ 		rec->hard_links = cpu_to_le16(names);
+ 		ni->mi.dirty = true;
+ 	}
+@@ -653,9 +651,10 @@ static noinline int ntfs_get_block_vbo(struct inode *inode, u64 vbo,
+ 			off = vbo & (PAGE_SIZE - 1);
+ 			folio_set_bh(bh, folio, off);
+ 
+-			err = bh_read(bh, 0);
+-			if (err < 0)
++			if (bh_read(bh, 0) < 0) {
++				err = -EIO;
+ 				goto out;
++			}
+ 			folio_zero_segment(folio, off + voff, off + block_size);
+ 		}
+ 	}
+@@ -853,9 +852,13 @@ static int ntfs_resident_writepage(struct folio *folio,
+ 				   struct writeback_control *wbc, void *data)
+ {
+ 	struct address_space *mapping = data;
+-	struct ntfs_inode *ni = ntfs_i(mapping->host);
++	struct inode *inode = mapping->host;
++	struct ntfs_inode *ni = ntfs_i(inode);
+ 	int ret;
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	ni_lock(ni);
+ 	ret = attr_data_write_resident(ni, &folio->page);
+ 	ni_unlock(ni);
+@@ -869,7 +872,12 @@ static int ntfs_resident_writepage(struct folio *folio,
+ static int ntfs_writepages(struct address_space *mapping,
+ 			   struct writeback_control *wbc)
+ {
+-	if (is_resident(ntfs_i(mapping->host)))
++	struct inode *inode = mapping->host;
++
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
++	if (is_resident(ntfs_i(inode)))
+ 		return write_cache_pages(mapping, wbc, ntfs_resident_writepage,
+ 					 mapping);
+ 	return mpage_writepages(mapping, wbc, ntfs_get_block);
+@@ -889,6 +897,9 @@ int ntfs_write_begin(struct file *file, struct address_space *mapping,
+ 	struct inode *inode = mapping->host;
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	*pagep = NULL;
+ 	if (is_resident(ni)) {
+ 		struct page *page =
+@@ -974,7 +985,7 @@ int ntfs_write_end(struct file *file, struct address_space *mapping, loff_t pos,
+ 		}
+ 
+ 		if (pos + err > inode->i_size) {
+-			inode->i_size = pos + err;
++			i_size_write(inode, pos + err);
+ 			dirty = true;
+ 		}
+ 
+@@ -1306,6 +1317,11 @@ struct inode *ntfs_create_inode(struct mnt_idmap *idmap, struct inode *dir,
+ 		goto out1;
+ 	}
+ 
++	if (unlikely(ntfs3_forced_shutdown(sb))) {
++		err = -EIO;
++		goto out2;
++	}
++
+ 	/* Mark rw ntfs as dirty. it will be cleared at umount. */
+ 	ntfs_set_state(sbi, NTFS_DIRTY_DIRTY);
+ 
+diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c
+index ee3093be51701..cae41db0aaa7d 100644
+--- a/fs/ntfs3/namei.c
++++ b/fs/ntfs3/namei.c
+@@ -181,6 +181,9 @@ static int ntfs_unlink(struct inode *dir, struct dentry *dentry)
+ 	struct ntfs_inode *ni = ntfs_i(dir);
+ 	int err;
+ 
++	if (unlikely(ntfs3_forced_shutdown(dir->i_sb)))
++		return -EIO;
++
+ 	ni_lock_dir(ni);
+ 
+ 	err = ntfs_unlink_inode(dir, dentry);
+@@ -199,6 +202,9 @@ static int ntfs_symlink(struct mnt_idmap *idmap, struct inode *dir,
+ 	u32 size = strlen(symname);
+ 	struct inode *inode;
+ 
++	if (unlikely(ntfs3_forced_shutdown(dir->i_sb)))
++		return -EIO;
++
+ 	inode = ntfs_create_inode(idmap, dir, dentry, NULL, S_IFLNK | 0777, 0,
+ 				  symname, size, NULL);
+ 
+@@ -227,6 +233,9 @@ static int ntfs_rmdir(struct inode *dir, struct dentry *dentry)
+ 	struct ntfs_inode *ni = ntfs_i(dir);
+ 	int err;
+ 
++	if (unlikely(ntfs3_forced_shutdown(dir->i_sb)))
++		return -EIO;
++
+ 	ni_lock_dir(ni);
+ 
+ 	err = ntfs_unlink_inode(dir, dentry);
+@@ -264,6 +273,9 @@ static int ntfs_rename(struct mnt_idmap *idmap, struct inode *dir,
+ 		      1024);
+ 	static_assert(PATH_MAX >= 4 * 1024);
+ 
++	if (unlikely(ntfs3_forced_shutdown(sb)))
++		return -EIO;
++
+ 	if (flags & ~RENAME_NOREPLACE)
+ 		return -EINVAL;
+ 
+diff --git a/fs/ntfs3/ntfs.h b/fs/ntfs3/ntfs.h
+index 86aecbb01a92f..9c7478150a035 100644
+--- a/fs/ntfs3/ntfs.h
++++ b/fs/ntfs3/ntfs.h
+@@ -523,12 +523,10 @@ struct ATTR_LIST_ENTRY {
+ 	__le64 vcn;		// 0x08: Starting VCN of this attribute.
+ 	struct MFT_REF ref;	// 0x10: MFT record number with attribute.
+ 	__le16 id;		// 0x18: struct ATTRIB ID.
+-	__le16 name[3];		// 0x1A: Just to align. To get real name can use bNameOffset.
++	__le16 name[];		// 0x1A: To get real name use name_off.
+ 
+ }; // sizeof(0x20)
+ 
+-static_assert(sizeof(struct ATTR_LIST_ENTRY) == 0x20);
+-
+ static inline u32 le_size(u8 name_len)
+ {
+ 	return ALIGN(offsetof(struct ATTR_LIST_ENTRY, name) +
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index a46d30b84bf39..627419bd6f778 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -61,6 +61,8 @@ enum utf16_endian;
+ 
+ /* sbi->flags */
+ #define NTFS_FLAGS_NODISCARD		0x00000001
++/* The ntfs volume is in forced-shutdown state. */
++#define NTFS_FLAGS_SHUTDOWN_BIT		0x00000002  /* bit 2 == mask 0x4 */
+ /* Set when LogFile is replaying. */
+ #define NTFS_FLAGS_LOG_REPLAYING	0x00000008
+ /* Set when we changed first MFT's which copy must be updated in $MftMirr. */
+@@ -226,7 +228,7 @@ struct ntfs_sb_info {
+ 	u64 maxbytes; // Maximum size for normal files.
+ 	u64 maxbytes_sparse; // Maximum size for sparse file.
+ 
+-	u32 flags; // See NTFS_FLAGS_XXX.
++	unsigned long flags; // See NTFS_FLAGS_XXX.
+ 
+ 	CLST zone_max; // Maximum MFT zone length in clusters
+ 	CLST bad_clusters; // The count of marked bad clusters.
+@@ -584,6 +586,7 @@ bool check_index_header(const struct INDEX_HDR *hdr, size_t bytes);
+ int log_replay(struct ntfs_inode *ni, bool *initialized);
+ 
+ /* Globals from fsntfs.c */
++struct buffer_head *ntfs_bread(struct super_block *sb, sector_t block);
+ bool ntfs_fix_pre_write(struct NTFS_RECORD_HEADER *rhdr, size_t bytes);
+ int ntfs_fix_post_read(struct NTFS_RECORD_HEADER *rhdr, size_t bytes,
+ 		       bool simple);
+@@ -872,7 +875,7 @@ int ntfs_init_acl(struct mnt_idmap *idmap, struct inode *inode,
+ 
+ int ntfs_acl_chmod(struct mnt_idmap *idmap, struct dentry *dentry);
+ ssize_t ntfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
+-extern const struct xattr_handler * const ntfs_xattr_handlers[];
++extern const struct xattr_handler *const ntfs_xattr_handlers[];
+ 
+ int ntfs_save_wsl_perm(struct inode *inode, __le16 *ea_size);
+ void ntfs_get_wsl_perm(struct inode *inode);
+@@ -999,6 +1002,11 @@ static inline struct ntfs_sb_info *ntfs_sb(struct super_block *sb)
+ 	return sb->s_fs_info;
+ }
+ 
++static inline int ntfs3_forced_shutdown(struct super_block *sb)
++{
++	return test_bit(NTFS_FLAGS_SHUTDOWN_BIT, &ntfs_sb(sb)->flags);
++}
++
+ /*
+  * ntfs_up_cluster - Align up on cluster boundary.
+  */
+@@ -1025,19 +1033,6 @@ static inline u64 bytes_to_block(const struct super_block *sb, u64 size)
+ 	return (size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
+ }
+ 
+-static inline struct buffer_head *ntfs_bread(struct super_block *sb,
+-					     sector_t block)
+-{
+-	struct buffer_head *bh = sb_bread(sb, block);
+-
+-	if (bh)
+-		return bh;
+-
+-	ntfs_err(sb, "failed to read volume at offset 0x%llx",
+-		 (u64)block << sb->s_blocksize_bits);
+-	return NULL;
+-}
+-
+ static inline struct ntfs_inode *ntfs_i(struct inode *inode)
+ {
+ 	return container_of(inode, struct ntfs_inode, vfs_inode);
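[The sbi->flags field above changes from u32 to unsigned long because the new
shutdown state is driven by the atomic bitops set_bit()/test_bit(), which
operate on unsigned long words. A minimal sketch of the pattern this patch
establishes, assuming a mounted ntfs3 super_block; both helpers appear
verbatim in the hunks above:]

/* Sketch only: the shutdown side sets the flag... */
static void example_shutdown(struct super_block *sb)
{
	set_bit(NTFS_FLAGS_SHUTDOWN_BIT, &ntfs_sb(sb)->flags);
}

/* ...and each VFS entry point guarded in this patch then fails fast. */
static int example_entry_point(struct inode *inode)
{
	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
		return -EIO;
	return 0;
}
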
+diff --git a/fs/ntfs3/record.c b/fs/ntfs3/record.c
+index 53629b1f65e99..6aa3a9d44df1b 100644
+--- a/fs/ntfs3/record.c
++++ b/fs/ntfs3/record.c
+@@ -279,7 +279,7 @@ struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr)
+ 		if (t16 > asize)
+ 			return NULL;
+ 
+-		if (t16 + le32_to_cpu(attr->res.data_size) > asize)
++		if (le32_to_cpu(attr->res.data_size) > asize - t16)
+ 			return NULL;
+ 
+ 		t32 = sizeof(short) * attr->name_len;
+@@ -535,8 +535,20 @@ bool mi_remove_attr(struct ntfs_inode *ni, struct mft_inode *mi,
+ 		return false;
+ 
+ 	if (ni && is_attr_indexed(attr)) {
+-		le16_add_cpu(&ni->mi.mrec->hard_links, -1);
+-		ni->mi.dirty = true;
++		u16 links = le16_to_cpu(ni->mi.mrec->hard_links);
++		struct ATTR_FILE_NAME *fname =
++			attr->type != ATTR_NAME ?
++				NULL :
++				resident_data_ex(attr,
++						 SIZEOF_ATTRIBUTE_FILENAME);
++		if (fname && fname->type == FILE_NAME_DOS) {
++			/* Do not decrease links count deleting DOS name. */
++		} else if (!links) {
++			/* Minor error; not critical. */
++		} else {
++			ni->mi.mrec->hard_links = cpu_to_le16(links - 1);
++			ni->mi.dirty = true;
++		}
+ 	}
+ 
+ 	used -= asize;
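[The reordered resident-size check in mi_enum_attr() above is the
overflow-safe form of the same bound: in 32-bit arithmetic t16 + data_size
can wrap, while asize - t16 cannot underflow because t16 > asize was already
rejected a few lines earlier. A worked example with hypothetical crafted
values:]

	u32 asize = 0x100, t16 = 0x18;
	u32 data_size = 0xffffffef;	/* from a corrupted MFT record */

	/* Old form: t16 + data_size wraps to 0x7; 0x7 > asize is false,
	 * so the bogus attribute was accepted. */
	/* New form: data_size > asize - t16 (0xe8) is true,
	 * so the attribute is correctly rejected. */
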
+diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
+index 9153dffde950c..c55a29793a8d8 100644
+--- a/fs/ntfs3/super.c
++++ b/fs/ntfs3/super.c
+@@ -625,7 +625,7 @@ static void ntfs3_free_sbi(struct ntfs_sb_info *sbi)
+ {
+ 	kfree(sbi->new_rec);
+ 	kvfree(ntfs_put_shared(sbi->upcase));
+-	kfree(sbi->def_table);
++	kvfree(sbi->def_table);
+ 	kfree(sbi->compress.lznt);
+ #ifdef CONFIG_NTFS3_LZX_XPRESS
+ 	xpress_free_decompressor(sbi->compress.xpress);
+@@ -714,6 +714,14 @@ static int ntfs_show_options(struct seq_file *m, struct dentry *root)
+ 	return 0;
+ }
+ 
++/*
++ * ntfs_shutdown - super_operations::shutdown
++ */
++static void ntfs_shutdown(struct super_block *sb)
++{
++	set_bit(NTFS_FLAGS_SHUTDOWN_BIT, &ntfs_sb(sb)->flags);
++}
++
+ /*
+  * ntfs_sync_fs - super_operations::sync_fs
+  */
+@@ -724,6 +732,9 @@ static int ntfs_sync_fs(struct super_block *sb, int wait)
+ 	struct ntfs_inode *ni;
+ 	struct inode *inode;
+ 
++	if (unlikely(ntfs3_forced_shutdown(sb)))
++		return -EIO;
++
+ 	ni = sbi->security.ni;
+ 	if (ni) {
+ 		inode = &ni->vfs_inode;
+@@ -763,6 +774,7 @@ static const struct super_operations ntfs_sops = {
+ 	.put_super = ntfs_put_super,
+ 	.statfs = ntfs_statfs,
+ 	.show_options = ntfs_show_options,
++	.shutdown = ntfs_shutdown,
+ 	.sync_fs = ntfs_sync_fs,
+ 	.write_inode = ntfs3_write_inode,
+ };
+@@ -866,6 +878,7 @@ static int ntfs_init_from_boot(struct super_block *sb, u32 sector_size,
+ 	u16 fn, ao;
+ 	u8 cluster_bits;
+ 	u32 boot_off = 0;
++	sector_t boot_block = 0;
+ 	const char *hint = "Primary boot";
+ 
+ 	/* Save original dev_size. Used with alternative boot. */
+@@ -873,11 +886,11 @@ static int ntfs_init_from_boot(struct super_block *sb, u32 sector_size,
+ 
+ 	sbi->volume.blocks = dev_size >> PAGE_SHIFT;
+ 
+-	bh = ntfs_bread(sb, 0);
++read_boot:
++	bh = ntfs_bread(sb, boot_block);
+ 	if (!bh)
+-		return -EIO;
++		return boot_block ? -EINVAL : -EIO;
+ 
+-check_boot:
+ 	err = -EINVAL;
+ 
+ 	/* Corrupted image; do not read OOB */
+@@ -1108,26 +1121,24 @@ static int ntfs_init_from_boot(struct super_block *sb, u32 sector_size,
+ 	}
+ 
+ out:
+-	if (err == -EINVAL && !bh->b_blocknr && dev_size0 > PAGE_SHIFT) {
++	brelse(bh);
++
++	if (err == -EINVAL && !boot_block && dev_size0 > PAGE_SHIFT) {
+ 		u32 block_size = min_t(u32, sector_size, PAGE_SIZE);
+ 		u64 lbo = dev_size0 - sizeof(*boot);
+ 
+-		/*
+-	 	 * Try alternative boot (last sector)
+-		 */
+-		brelse(bh);
+-
+-		sb_set_blocksize(sb, block_size);
+-		bh = ntfs_bread(sb, lbo >> blksize_bits(block_size));
+-		if (!bh)
+-			return -EINVAL;
+-
++		boot_block = lbo >> blksize_bits(block_size);
+ 		boot_off = lbo & (block_size - 1);
+-		hint = "Alternative boot";
+-		dev_size = dev_size0; /* restore original size. */
+-		goto check_boot;
++		if (boot_block && block_size >= boot_off + sizeof(*boot)) {
++			/*
++			 * Try alternative boot (last sector)
++			 */
++			sb_set_blocksize(sb, block_size);
++			hint = "Alternative boot";
++			dev_size = dev_size0; /* restore original size. */
++			goto read_boot;
++		}
+ 	}
+-	brelse(bh);
+ 
+ 	return err;
+ }
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index 4274b6f31cfa1..53e7d1fa036aa 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -219,6 +219,9 @@ static ssize_t ntfs_list_ea(struct ntfs_inode *ni, char *buffer,
+ 		if (!ea->name_len)
+ 			break;
+ 
++		if (ea->name_len > ea_size)
++			break;
++
+ 		if (buffer) {
+ 			/* Check if we can use field ea->name */
+ 			if (off + ea_size > size)
+@@ -744,6 +747,9 @@ static int ntfs_getxattr(const struct xattr_handler *handler, struct dentry *de,
+ 	int err;
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 
++	if (unlikely(ntfs3_forced_shutdown(inode->i_sb)))
++		return -EIO;
++
+ 	/* Dispatch request. */
+ 	if (!strcmp(name, SYSTEM_DOS_ATTRIB)) {
+ 		/* system.dos_attrib */
+diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
+index d64a306a414be..5730c65ffb40d 100644
+--- a/fs/smb/client/cached_dir.c
++++ b/fs/smb/client/cached_dir.c
+@@ -151,7 +151,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ 		return -EOPNOTSUPP;
+ 
+ 	ses = tcon->ses;
+-	server = ses->server;
++	server = cifs_pick_channel(ses);
+ 	cfids = tcon->cfids;
+ 
+ 	if (!server->ops->new_lease_key)
+@@ -367,6 +367,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ 		atomic_inc(&tcon->num_remote_opens);
+ 	}
+ 	kfree(utf16_path);
++
+ 	return rc;
+ }
+ 
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index ef4c2e3c9fa61..6322f0f68a176 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -572,7 +572,7 @@ static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
+ 		len = cifs_strtoUTF16(user, ses->user_name, len, nls_cp);
+ 		UniStrupr(user);
+ 	} else {
+-		memset(user, '\0', 2);
++		*(u16 *)user = 0;
+ 	}
+ 
+ 	rc = crypto_shash_update(ses->server->secmech.hmacmd5,
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 942e6ece56b1a..462554917e5a1 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -82,7 +82,7 @@
+ #define SMB_INTERFACE_POLL_INTERVAL	600
+ 
+ /* maximum number of PDUs in one compound */
+-#define MAX_COMPOUND 5
++#define MAX_COMPOUND 7
+ 
+ /*
+  * Default number of credits to keep available for SMB3.
+@@ -1018,6 +1018,8 @@ struct cifs_chan {
+ 	__u8 signkey[SMB3_SIGN_KEY_SIZE];
+ };
+ 
++#define CIFS_SES_FLAG_SCALE_CHANNELS (0x1)
++
+ /*
+  * Session structure.  One of these for each uid session with a particular host
+  */
+@@ -1050,6 +1052,7 @@ struct cifs_ses {
+ 	enum securityEnum sectype; /* what security flavor was specified? */
+ 	bool sign;		/* is signing required? */
+ 	bool domainAuto:1;
++	unsigned int flags;
+ 	__u16 session_flags;
+ 	__u8 smb3signingkey[SMB3_SIGN_KEY_SIZE];
+ 	__u8 smb3encryptionkey[SMB3_ENC_DEC_KEY_SIZE];
+@@ -1820,6 +1823,13 @@ static inline bool is_retryable_error(int error)
+ 	return false;
+ }
+ 
++static inline bool is_replayable_error(int error)
++{
++	if (error == -EAGAIN || error == -ECONNABORTED)
++		return true;
++	return false;
++}
++
+ 
+ /* cifs_get_writable_file() flags */
+ #define FIND_WR_ANY         0
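[is_replayable_error() pairs with the transport.c change later in this patch,
where a mid-send socket failure is collapsed to -ECONNABORTED and cifsd is
signalled to reconnect. A hedged sketch of the intended caller-side shape;
the replay_request label is hypothetical, not code from this patch:]

	rc = smb_send_rqst(server, 1, &rqst, flags);
	if (is_replayable_error(rc)) {
		/* The connection died mid-send; the request is assumed
		 * safe to resend once the channel is re-established. */
		goto replay_request;
	}
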
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index ea1da3ce0d401..c3d805ecb7f11 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -233,6 +233,12 @@ cifs_mark_tcp_ses_conns_for_reconnect(struct TCP_Server_Info *server,
+ 	list_for_each_entry_safe(ses, nses, &pserver->smb_ses_list, smb_ses_list) {
+ 		/* check if iface is still active */
+ 		spin_lock(&ses->chan_lock);
++		if (cifs_ses_get_chan_index(ses, server) ==
++		    CIFS_INVAL_CHAN_INDEX) {
++			spin_unlock(&ses->chan_lock);
++			continue;
++		}
++
+ 		if (!cifs_chan_is_iface_active(ses, server)) {
+ 			spin_unlock(&ses->chan_lock);
+ 			cifs_chan_update_iface(ses, server);
+@@ -4225,6 +4231,11 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 
+ 	/* only send once per connect */
+ 	spin_lock(&tcon->tc_lock);
++
++	/* if tcon is marked for needing reconnect, update state */
++	if (tcon->need_reconnect)
++		tcon->status = TID_NEED_TCON;
++
+ 	if (tcon->status == TID_GOOD) {
+ 		spin_unlock(&tcon->tc_lock);
+ 		return 0;
+diff --git a/fs/smb/client/dfs.c b/fs/smb/client/dfs.c
+index a8a1d386da656..449c59830039b 100644
+--- a/fs/smb/client/dfs.c
++++ b/fs/smb/client/dfs.c
+@@ -565,6 +565,11 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 
+ 	/* only send once per connect */
+ 	spin_lock(&tcon->tc_lock);
++
++	/* if tcon is marked for needing reconnect, update state */
++	if (tcon->need_reconnect)
++		tcon->status = TID_NEED_TCON;
++
+ 	if (tcon->status == TID_GOOD) {
+ 		spin_unlock(&tcon->tc_lock);
+ 		return 0;
+@@ -625,8 +630,8 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 		spin_lock(&tcon->tc_lock);
+ 		if (tcon->status == TID_IN_TCON)
+ 			tcon->status = TID_GOOD;
+-		spin_unlock(&tcon->tc_lock);
+ 		tcon->need_reconnect = false;
++		spin_unlock(&tcon->tc_lock);
+ 	}
+ 
+ 	return rc;
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 32a8525415d96..4cbb5487bd8d0 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -175,6 +175,9 @@ cifs_mark_open_files_invalid(struct cifs_tcon *tcon)
+ 
+ 	/* only send once per connect */
+ 	spin_lock(&tcon->tc_lock);
++	if (tcon->need_reconnect)
++		tcon->status = TID_NEED_RECON;
++
+ 	if (tcon->status != TID_NEED_RECON) {
+ 		spin_unlock(&tcon->tc_lock);
+ 		return;
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 75f2c8734ff56..6ecbf48d0f0c6 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -210,7 +210,7 @@ cifs_parse_security_flavors(struct fs_context *fc, char *value, struct smb3_fs_c
+ 
+ 	switch (match_token(value, cifs_secflavor_tokens, args)) {
+ 	case Opt_sec_krb5p:
+-		cifs_errorf(fc, "sec=krb5p is not supported!\n");
++		cifs_errorf(fc, "sec=krb5p is not supported. Use sec=krb5,seal instead\n");
+ 		return 1;
+ 	case Opt_sec_krb5i:
+ 		ctx->sign = true;
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index d30ea2005eb36..e23cd216bffbe 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -299,14 +299,16 @@ cifs_dir_info_to_fattr(struct cifs_fattr *fattr, FILE_DIRECTORY_INFO *info,
+ }
+ 
+ static void cifs_fulldir_info_to_fattr(struct cifs_fattr *fattr,
+-				       SEARCH_ID_FULL_DIR_INFO *info,
++				       const void *info,
+ 				       struct cifs_sb_info *cifs_sb)
+ {
++	const FILE_FULL_DIRECTORY_INFO *di = info;
++
+ 	__dir_info_to_fattr(fattr, info);
+ 
+-	/* See MS-FSCC 2.4.19 FileIdFullDirectoryInformation */
++	/* See MS-FSCC 2.4.14, 2.4.19 */
+ 	if (fattr->cf_cifsattrs & ATTR_REPARSE)
+-		fattr->cf_cifstag = le32_to_cpu(info->EaSize);
++		fattr->cf_cifstag = le32_to_cpu(di->EaSize);
+ 	cifs_fill_common_info(fattr, cifs_sb);
+ }
+ 
+@@ -420,7 +422,7 @@ _initiate_cifs_search(const unsigned int xid, struct file *file,
+ 	} else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) {
+ 		cifsFile->srch_inf.info_level = SMB_FIND_FILE_ID_FULL_DIR_INFO;
+ 	} else /* not srvinos - BB fixme add check for backlevel? */ {
+-		cifsFile->srch_inf.info_level = SMB_FIND_FILE_DIRECTORY_INFO;
++		cifsFile->srch_inf.info_level = SMB_FIND_FILE_FULL_DIRECTORY_INFO;
+ 	}
+ 
+ 	search_flags = CIFS_SEARCH_CLOSE_AT_END | CIFS_SEARCH_RETURN_RESUME;
+@@ -1014,10 +1016,9 @@ static int cifs_filldir(char *find_entry, struct file *file,
+ 				       (FIND_FILE_STANDARD_INFO *)find_entry,
+ 				       cifs_sb);
+ 		break;
++	case SMB_FIND_FILE_FULL_DIRECTORY_INFO:
+ 	case SMB_FIND_FILE_ID_FULL_DIR_INFO:
+-		cifs_fulldir_info_to_fattr(&fattr,
+-					   (SEARCH_ID_FULL_DIR_INFO *)find_entry,
+-					   cifs_sb);
++		cifs_fulldir_info_to_fattr(&fattr, find_entry, cifs_sb);
+ 		break;
+ 	default:
+ 		cifs_dir_info_to_fattr(&fattr,
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index a1b9734564711..94c5d50aa3474 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -75,6 +75,10 @@ cifs_ses_get_chan_index(struct cifs_ses *ses,
+ {
+ 	unsigned int i;
+ 
++	/* if the channel is waiting for termination */
++	if (server && server->terminate)
++		return CIFS_INVAL_CHAN_INDEX;
++
+ 	for (i = 0; i < ses->chan_count; i++) {
+ 		if (ses->chans[i].server == server)
+ 			return i;
+@@ -84,7 +88,6 @@ cifs_ses_get_chan_index(struct cifs_ses *ses,
+ 	if (server)
+ 		cifs_dbg(VFS, "unable to get chan index for server: 0x%llx",
+ 			 server->conn_id);
+-	WARN_ON(1);
+ 	return CIFS_INVAL_CHAN_INDEX;
+ }
+ 
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 5d9c87d2e1e01..9d34a55fdb5e4 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -178,6 +178,7 @@ cifs_chan_skip_or_disable(struct cifs_ses *ses,
+ 		}
+ 
+ 		ses->chans[chan_index].server = NULL;
++		server->terminate = true;
+ 		spin_unlock(&ses->chan_lock);
+ 
+ 		/*
+@@ -188,7 +189,6 @@ cifs_chan_skip_or_disable(struct cifs_ses *ses,
+ 		 */
+ 		cifs_put_tcp_session(server, from_reconnect);
+ 
+-		server->terminate = true;
+ 		cifs_signal_cifsd_for_reconnect(server, false);
+ 
+ 		/* mark primary server as needing reconnect */
+@@ -399,6 +399,15 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		goto out;
+ 	}
+ 
++	spin_lock(&ses->ses_lock);
++	if (ses->flags & CIFS_SES_FLAG_SCALE_CHANNELS) {
++		spin_unlock(&ses->ses_lock);
++		mutex_unlock(&ses->session_mutex);
++		goto skip_add_channels;
++	}
++	ses->flags |= CIFS_SES_FLAG_SCALE_CHANNELS;
++	spin_unlock(&ses->ses_lock);
++
+ 	if (!rc &&
+ 	    (server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
+ 		mutex_unlock(&ses->session_mutex);
+@@ -428,15 +437,22 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		if (ses->chan_max > ses->chan_count &&
+ 		    ses->iface_count &&
+ 		    !SERVER_IS_CHAN(server)) {
+-			if (ses->chan_count == 1)
++			if (ses->chan_count == 1) {
+ 				cifs_server_dbg(VFS, "supports multichannel now\n");
++				queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
++						 (SMB_INTERFACE_POLL_INTERVAL * HZ));
++			}
+ 
+ 			cifs_try_adding_channels(ses);
+ 		}
+ 	} else {
+ 		mutex_unlock(&ses->session_mutex);
+ 	}
++
+ skip_add_channels:
++	spin_lock(&ses->ses_lock);
++	ses->flags &= ~CIFS_SES_FLAG_SCALE_CHANNELS;
++	spin_unlock(&ses->ses_lock);
+ 
+ 	if (smb2_command != SMB2_INTERNAL_CMD)
+ 		mod_delayed_work(cifsiod_wq, &server->reconnect, 0);
+@@ -5076,6 +5092,9 @@ int SMB2_query_directory_init(const unsigned int xid,
+ 	case SMB_FIND_FILE_POSIX_INFO:
+ 		req->FileInformationClass = SMB_FIND_FILE_POSIX_INFO;
+ 		break;
++	case SMB_FIND_FILE_FULL_DIRECTORY_INFO:
++		req->FileInformationClass = FILE_FULL_DIRECTORY_INFORMATION;
++		break;
+ 	default:
+ 		cifs_tcon_dbg(VFS, "info level %u isn't supported\n",
+ 			info_level);
+@@ -5145,6 +5164,9 @@ smb2_parse_query_directory(struct cifs_tcon *tcon,
+ 		/* note that posix payload are variable size */
+ 		info_buf_size = sizeof(struct smb2_posix_info);
+ 		break;
++	case SMB_FIND_FILE_FULL_DIRECTORY_INFO:
++		info_buf_size = sizeof(FILE_FULL_DIRECTORY_INFO);
++		break;
+ 	default:
+ 		cifs_tcon_dbg(VFS, "info level %u isn't supported\n",
+ 			 srch_inf->info_level);
+diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
+index 4f717ad7c21b4..994d701934329 100644
+--- a/fs/smb/client/transport.c
++++ b/fs/smb/client/transport.c
+@@ -400,10 +400,17 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 						  server->conn_id, server->hostname);
+ 	}
+ smbd_done:
+-	if (rc < 0 && rc != -EINTR)
++	/*
++	 * there's hardly any use for the layers above to know the
++	 * actual error code here. All they should do at this point is
++	 * to retry the connection and hope it goes away.
++	 */
++	if (rc < 0 && rc != -EINTR && rc != -EAGAIN) {
+ 		cifs_server_dbg(VFS, "Error %d sending data on socket to server\n",
+ 			 rc);
+-	else if (rc > 0)
++		rc = -ECONNABORTED;
++		cifs_signal_cifsd_for_reconnect(server, false);
++	} else if (rc > 0)
+ 		rc = 0;
+ out:
+ 	cifs_in_send_dec(server);
+@@ -428,8 +435,8 @@ smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 	if (!(flags & CIFS_TRANSFORM_REQ))
+ 		return __smb_send_rqst(server, num_rqst, rqst);
+ 
+-	if (num_rqst > MAX_COMPOUND - 1)
+-		return -ENOMEM;
++	if (WARN_ON_ONCE(num_rqst > MAX_COMPOUND - 1))
++		return -EIO;
+ 
+ 	if (!server->ops->init_transform_rq) {
+ 		cifs_server_dbg(VFS, "Encryption requested but transform callback is missing\n");
+@@ -1026,6 +1033,9 @@ struct TCP_Server_Info *cifs_pick_channel(struct cifs_ses *ses)
+ 		if (!server || server->terminate)
+ 			continue;
+ 
++		if (CIFS_CHAN_NEEDS_RECONNECT(ses, i))
++			continue;
++
+ 		/*
+ 		 * strictly speaking, we should pick up req_lock to read
+ 		 * server->in_flight. But it shouldn't matter much here if we
+diff --git a/include/kunit/resource.h b/include/kunit/resource.h
+index c7383e90f5c9e..4ad69a2642a51 100644
+--- a/include/kunit/resource.h
++++ b/include/kunit/resource.h
+@@ -390,6 +390,27 @@ void kunit_remove_resource(struct kunit *test, struct kunit_resource *res);
+ /* A 'deferred action' function to be used with kunit_add_action. */
+ typedef void (kunit_action_t)(void *);
+ 
++/**
++ * KUNIT_DEFINE_ACTION_WRAPPER() - Wrap a function for use as a deferred action.
++ *
++ * @wrapper: The name of the wrapper function to define.
++ * @orig: The original function to wrap.
++ * @arg_type: The type of the argument accepted by @orig.
++ *
++ * Defines a wrapper for a function which accepts a single, pointer-sized
++ * argument. This wrapper can then be passed to kunit_add_action() and
++ * similar. This should be used in preference to casting a function
++ * directly to kunit_action_t, as casting function pointers will break
++ * control flow integrity (CFI), leading to crashes.
++ */
++#define KUNIT_DEFINE_ACTION_WRAPPER(wrapper, orig, arg_type)	\
++	static void wrapper(void *in)				\
++	{							\
++		arg_type arg = (arg_type)in;			\
++		orig(arg);					\
++	}
++
++
+ /**
+  * kunit_add_action() - Call a function when the test ends.
+  * @test: Test case to associate the action with.
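[As a usage sketch of the macro above (hypothetical test function; the same
kfree wrapper pattern is applied in lib/kunit later in this patch):]

/* Expands to: static void kfree_wrapper(void *in)
 *             { kfree((const void *)in); } */
KUNIT_DEFINE_ACTION_WRAPPER(kfree_wrapper, kfree, const void *);

static void example_test(struct kunit *test)
{
	void *buf = kmalloc(16, GFP_KERNEL);

	/* buf is freed automatically when the test ends; no
	 * function-pointer cast is needed, so CFI stays intact. */
	kunit_add_action(test, kfree_wrapper, buf);
}
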
+diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
+index b8610e9d2471f..5edf9fffa0973 100644
+--- a/include/linux/ceph/osd_client.h
++++ b/include/linux/ceph/osd_client.h
+@@ -45,6 +45,7 @@ enum ceph_sparse_read_state {
+ 	CEPH_SPARSE_READ_HDR	= 0,
+ 	CEPH_SPARSE_READ_EXTENTS,
+ 	CEPH_SPARSE_READ_DATA_LEN,
++	CEPH_SPARSE_READ_DATA_PRE,
+ 	CEPH_SPARSE_READ_DATA,
+ };
+ 
+@@ -64,7 +65,7 @@ struct ceph_sparse_read {
+ 	u64				sr_req_len;  /* orig request length */
+ 	u64				sr_pos;      /* current pos in buffer */
+ 	int				sr_index;    /* current extent index */
+-	__le32				sr_datalen;  /* length of actual data */
++	u32				sr_datalen;  /* length of actual data */
+ 	u32				sr_count;    /* extent count in reply */
+ 	int				sr_ext_len;  /* length of extent array */
+ 	struct ceph_sparse_extent	*sr_extent;  /* extent array */
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 98b7a7a8c42e3..7f659c26794b5 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -352,6 +352,8 @@ enum rw_hint {
+  * unrelated IO (like cache flushing, new IO generation, etc).
+  */
+ #define IOCB_DIO_CALLER_COMP	(1 << 22)
++/* kiocb is a read or write operation submitted by fs/aio.c. */
++#define IOCB_AIO_RW		(1 << 23)
+ 
+ /* for use in trace events */
+ #define TRACE_IOCB_STRINGS \
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index 6291aa7b079b0..81553770e411a 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -1346,6 +1346,12 @@ static inline bool mm_valid_pasid(struct mm_struct *mm)
+ {
+ 	return mm->pasid != IOMMU_PASID_INVALID;
+ }
++
++static inline u32 mm_get_enqcmd_pasid(struct mm_struct *mm)
++{
++	return mm->pasid;
++}
++
+ void mm_pasid_drop(struct mm_struct *mm);
+ struct iommu_sva *iommu_sva_bind_device(struct device *dev,
+ 					struct mm_struct *mm);
+@@ -1368,6 +1374,12 @@ static inline u32 iommu_sva_get_pasid(struct iommu_sva *handle)
+ }
+ static inline void mm_pasid_init(struct mm_struct *mm) {}
+ static inline bool mm_valid_pasid(struct mm_struct *mm) { return false; }
++
++static inline u32 mm_get_enqcmd_pasid(struct mm_struct *mm)
++{
++	return IOMMU_PASID_INVALID;
++}
++
+ static inline void mm_pasid_drop(struct mm_struct *mm) {}
+ #endif /* CONFIG_IOMMU_SVA */
+ 
+diff --git a/include/linux/memblock.h b/include/linux/memblock.h
+index ae3bde302f704..ccf0176ba3681 100644
+--- a/include/linux/memblock.h
++++ b/include/linux/memblock.h
+@@ -121,6 +121,8 @@ int memblock_reserve(phys_addr_t base, phys_addr_t size);
+ int memblock_physmem_add(phys_addr_t base, phys_addr_t size);
+ #endif
+ void memblock_trim_memory(phys_addr_t align);
++unsigned long memblock_addrs_overlap(phys_addr_t base1, phys_addr_t size1,
++				     phys_addr_t base2, phys_addr_t size2);
+ bool memblock_overlaps_region(struct memblock_type *type,
+ 			      phys_addr_t base, phys_addr_t size);
+ int memblock_mark_hotplug(phys_addr_t base, phys_addr_t size);
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index fb8d26a15df47..77cd2e13724e7 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -1103,7 +1103,7 @@ struct mlx5_ifc_roce_cap_bits {
+ 	u8         sw_r_roce_src_udp_port[0x1];
+ 	u8         fl_rc_qp_when_roce_disabled[0x1];
+ 	u8         fl_rc_qp_when_roce_enabled[0x1];
+-	u8         reserved_at_7[0x1];
++	u8         roce_cc_general[0x1];
+ 	u8	   qp_ooo_transmit_default[0x1];
+ 	u8         reserved_at_9[0x15];
+ 	u8	   qp_ts_format[0x2];
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index f6dd6575b9054..e73285e3c8708 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -553,6 +553,11 @@ static inline int swap_duplicate(swp_entry_t swp)
+ 	return 0;
+ }
+ 
++static inline int swapcache_prepare(swp_entry_t swp)
++{
++	return 0;
++}
++
+ static inline void swap_free(swp_entry_t swp)
+ {
+ }
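[The new !CONFIG_SWAP stub lets callers invoke swapcache_prepare()
unconditionally; with swap compiled out it simply reports success. A
simplified, hypothetical sketch of the swap-in call shape this supports
(the retry policy shown is illustrative only):]

	swp_entry_t entry = pte_to_swp_entry(vmf->orig_pte);

	if (swapcache_prepare(entry)) {
		/* Another task is concurrently swapping this entry in;
		 * back off and retry the fault later. */
		return VM_FAULT_RETRY;
	}
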
+diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
+index 692d5955911c7..4a767b3d20b9d 100644
+--- a/include/net/netfilter/nf_flow_table.h
++++ b/include/net/netfilter/nf_flow_table.h
+@@ -275,7 +275,7 @@ nf_flow_table_offload_del_cb(struct nf_flowtable *flow_table,
+ }
+ 
+ void flow_offload_route_init(struct flow_offload *flow,
+-			     const struct nf_flow_route *route);
++			     struct nf_flow_route *route);
+ 
+ int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow);
+ void flow_offload_refresh(struct nf_flowtable *flow_table,
+diff --git a/include/net/switchdev.h b/include/net/switchdev.h
+index a43062d4c734b..8346b0d29542c 100644
+--- a/include/net/switchdev.h
++++ b/include/net/switchdev.h
+@@ -308,6 +308,9 @@ void switchdev_deferred_process(void);
+ int switchdev_port_attr_set(struct net_device *dev,
+ 			    const struct switchdev_attr *attr,
+ 			    struct netlink_ext_ack *extack);
++bool switchdev_port_obj_act_is_deferred(struct net_device *dev,
++					enum switchdev_notifier_type nt,
++					const struct switchdev_obj *obj);
+ int switchdev_port_obj_add(struct net_device *dev,
+ 			   const struct switchdev_obj *obj,
+ 			   struct netlink_ext_ack *extack);
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 87f0e6c2e1f2f..2ac9475be144c 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -2502,7 +2502,7 @@ struct tcp_ulp_ops {
+ 	/* cleanup ulp */
+ 	void (*release)(struct sock *sk);
+ 	/* diagnostic */
+-	int (*get_info)(const struct sock *sk, struct sk_buff *skb);
++	int (*get_info)(struct sock *sk, struct sk_buff *skb);
+ 	size_t (*get_info_size)(const struct sock *sk);
+ 	/* clone ulp */
+ 	void (*clone)(const struct request_sock *req, struct sock *newsk,
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index 5ec1e71a09de7..c38f4fe5e64cf 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -100,10 +100,6 @@ struct scsi_vpd {
+ 	unsigned char	data[];
+ };
+ 
+-enum scsi_vpd_parameters {
+-	SCSI_VPD_HEADER_SIZE = 4,
+-};
+-
+ struct scsi_device {
+ 	struct Scsi_Host *host;
+ 	struct request_queue *request_queue;
+@@ -208,6 +204,7 @@ struct scsi_device {
+ 	unsigned use_10_for_rw:1; /* first try 10-byte read / write */
+ 	unsigned use_10_for_ms:1; /* first try 10-byte mode sense/select */
+ 	unsigned set_dbd_for_ms:1; /* Set "DBD" field in mode sense */
++	unsigned read_before_ms:1;	/* perform a READ before MODE SENSE */
+ 	unsigned no_report_opcodes:1;	/* no REPORT SUPPORTED OPERATION CODES */
+ 	unsigned no_write_same:1;	/* no WRITE SAME command */
+ 	unsigned use_16_for_rw:1; /* Use read/write(16) over read/write(10) */
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index b3053af6427d2..ce4729ef1ad2d 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1101,6 +1101,7 @@ struct bpf_hrtimer {
+ 	struct bpf_prog *prog;
+ 	void __rcu *callback_fn;
+ 	void *value;
++	struct rcu_head rcu;
+ };
+ 
+ /* the actual struct hidden inside uapi struct bpf_timer */
+@@ -1332,6 +1333,7 @@ BPF_CALL_1(bpf_timer_cancel, struct bpf_timer_kern *, timer)
+ 
+ 	if (in_nmi())
+ 		return -EOPNOTSUPP;
++	rcu_read_lock();
+ 	__bpf_spin_lock_irqsave(&timer->lock);
+ 	t = timer->timer;
+ 	if (!t) {
+@@ -1353,6 +1355,7 @@ BPF_CALL_1(bpf_timer_cancel, struct bpf_timer_kern *, timer)
+ 	 * if it was running.
+ 	 */
+ 	ret = ret ?: hrtimer_cancel(&t->timer);
++	rcu_read_unlock();
+ 	return ret;
+ }
+ 
+@@ -1407,7 +1410,7 @@ void bpf_timer_cancel_and_free(void *val)
+ 	 */
+ 	if (this_cpu_read(hrtimer_running) != t)
+ 		hrtimer_cancel(&t->timer);
+-	kfree(t);
++	kfree_rcu(t, rcu);
+ }
+ 
+ BPF_CALL_2(bpf_kptr_xchg, void *, map_value, void *, ptr)
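[The rcu_read_lock() added to bpf_timer_cancel() and the kfree_rcu() in
bpf_timer_cancel_and_free() close a window where one CPU could free the
bpf_hrtimer while another was still cancelling it. The generic shape of the
fix, as a standalone sketch with hypothetical names:]

	struct obj {
		struct rcu_head rcu;
		int v;
	};

	static int reader(struct obj __rcu **slot)
	{
		int v;

		rcu_read_lock();
		v = rcu_dereference(*slot)->v;	/* freer defers via RCU */
		rcu_read_unlock();
		return v;
	}

	static void freer(struct obj *o)
	{
		kfree_rcu(o, rcu);	/* real kfree after a grace period */
	}
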
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 4405f81248fbc..ef39dffffd4bc 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -2234,6 +2234,7 @@ config TEST_DIV64
+ config TEST_IOV_ITER
+ 	tristate "Test iov_iter operation" if !KUNIT_ALL_TESTS
+ 	depends on KUNIT
++	depends on MMU
+ 	default KUNIT_ALL_TESTS
+ 	help
+ 	  Enable this to turn on testing of the operation of the I/O iterator
+diff --git a/lib/kunit/kunit-test.c b/lib/kunit/kunit-test.c
+index de2113a58fa03..ee6927c609797 100644
+--- a/lib/kunit/kunit-test.c
++++ b/lib/kunit/kunit-test.c
+@@ -538,10 +538,7 @@ static struct kunit_suite kunit_resource_test_suite = {
+ #if IS_BUILTIN(CONFIG_KUNIT_TEST)
+ 
+ /* This avoids a cast warning if kfree() is passed direct to kunit_add_action(). */
+-static void kfree_wrapper(void *p)
+-{
+-	kfree(p);
+-}
++KUNIT_DEFINE_ACTION_WRAPPER(kfree_wrapper, kfree, const void *);
+ 
+ static void kunit_log_test(struct kunit *test)
+ {
+diff --git a/lib/kunit/test.c b/lib/kunit/test.c
+index 3dc9d66eca494..d6a2b2460d52d 100644
+--- a/lib/kunit/test.c
++++ b/lib/kunit/test.c
+@@ -819,6 +819,8 @@ static struct notifier_block kunit_mod_nb = {
+ };
+ #endif
+ 
++KUNIT_DEFINE_ACTION_WRAPPER(kfree_action_wrapper, kfree, const void *)
++
+ void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t gfp)
+ {
+ 	void *data;
+@@ -828,7 +830,7 @@ void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t gfp)
+ 	if (!data)
+ 		return NULL;
+ 
+-	if (kunit_add_action_or_reset(test, (kunit_action_t *)kfree, data) != 0)
++	if (kunit_add_action_or_reset(test, kfree_action_wrapper, data) != 0)
+ 		return NULL;
+ 
+ 	return data;
+@@ -840,7 +842,7 @@ void kunit_kfree(struct kunit *test, const void *ptr)
+ 	if (!ptr)
+ 		return;
+ 
+-	kunit_release_action(test, (kunit_action_t *)kfree, (void *)ptr);
++	kunit_release_action(test, kfree_action_wrapper, (void *)ptr);
+ }
+ EXPORT_SYMBOL_GPL(kunit_kfree);
+ 
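
Both KUnit hunks above replace a cast of kfree() to a mismatched
function-pointer type with a generated wrapper: calling a function
through a pointer of incompatible type is undefined behavior in C and
is rejected outright by kCFI. A user-space sketch of the same
trampoline pattern; the DEFINE_ACTION_WRAPPER macro below is a
hypothetical stand-in modeled on KUNIT_DEFINE_ACTION_WRAPPER, here
wrapping libc free():

    #include <stdlib.h>

    typedef void (*action_t)(void *);

    /* Generate a correctly-typed trampoline instead of casting the
     * target function pointer. */
    #define DEFINE_ACTION_WRAPPER(wrapper, target, arg_type) \
        static void wrapper(void *in)                        \
        {                                                    \
            arg_type arg = (arg_type)in;                     \
            target(arg);                                     \
        }

    DEFINE_ACTION_WRAPPER(free_wrapper, free, void *)

    static void run_action(action_t a, void *p) { a(p); }

    int main(void)
    {
        run_action(free_wrapper, malloc(16)); /* types match */
        return 0;
    }
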
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 3a05e71509b9d..fb381835760c6 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -1026,6 +1026,9 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
+ 	damon_for_each_scheme(s, c) {
+ 		struct damos_quota *quota = &s->quota;
+ 
++		if (c->passed_sample_intervals != s->next_apply_sis)
++			continue;
++
+ 		if (!s->wmarks.activated)
+ 			continue;
+ 
+@@ -1126,10 +1129,6 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
+ 		if (c->passed_sample_intervals != s->next_apply_sis)
+ 			continue;
+ 
+-		s->next_apply_sis +=
+-			(s->apply_interval_us ? s->apply_interval_us :
+-			 c->attrs.aggr_interval) / sample_interval;
+-
+ 		if (!s->wmarks.activated)
+ 			continue;
+ 
+@@ -1145,6 +1144,14 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
+ 		damon_for_each_region_safe(r, next_r, t)
+ 			damon_do_apply_schemes(c, t, r);
+ 	}
++
++	damon_for_each_scheme(s, c) {
++		if (c->passed_sample_intervals != s->next_apply_sis)
++			continue;
++		s->next_apply_sis +=
++			(s->apply_interval_us ? s->apply_interval_us :
++			 c->attrs.aggr_interval) / sample_interval;
++	}
+ }
+ 
+ /*
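
The damon/core.c change above is a two-pass fix: the first pass applies
due schemes, and a separate second pass advances next_apply_sis for
every due scheme, so the early continue on deactivated watermarks can
no longer skip the bookkeeping and leave a scheme permanently due. A
minimal stand-alone sketch of the pattern (the scheme struct below is
illustrative, not the kernel one):

    #include <stdio.h>

    struct scheme {
        unsigned long next_apply; /* pass counter at which to apply */
        int active;               /* analogous to wmarks.activated */
    };

    static void apply_all(struct scheme *s, int n, unsigned long now)
    {
        /* Pass 1: act only on schemes that are due and active. */
        for (int i = 0; i < n; i++) {
            if (now != s[i].next_apply || !s[i].active)
                continue;
            printf("applying scheme %d at %lu\n", i, now);
        }
        /* Pass 2: advance the deadline of every due scheme, active
         * or not, so inactive schemes do not stay due forever. */
        for (int i = 0; i < n; i++) {
            if (now == s[i].next_apply)
                s[i].next_apply = now + 5;
        }
    }

    int main(void)
    {
        struct scheme s[2] = { { 3, 1 }, { 3, 0 } };

        for (unsigned long now = 0; now < 10; now++)
            apply_all(s, 2, now);
        return 0;
    }
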
+diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
+index f2e5f9431892e..3de2916a65c38 100644
+--- a/mm/damon/lru_sort.c
++++ b/mm/damon/lru_sort.c
+@@ -185,9 +185,21 @@ static struct damos *damon_lru_sort_new_cold_scheme(unsigned int cold_thres)
+ 	return damon_lru_sort_new_scheme(&pattern, DAMOS_LRU_DEPRIO);
+ }
+ 
++static void damon_lru_sort_copy_quota_status(struct damos_quota *dst,
++		struct damos_quota *src)
++{
++	dst->total_charged_sz = src->total_charged_sz;
++	dst->total_charged_ns = src->total_charged_ns;
++	dst->charged_sz = src->charged_sz;
++	dst->charged_from = src->charged_from;
++	dst->charge_target_from = src->charge_target_from;
++	dst->charge_addr_from = src->charge_addr_from;
++}
++
+ static int damon_lru_sort_apply_parameters(void)
+ {
+-	struct damos *scheme;
++	struct damos *scheme, *hot_scheme, *cold_scheme;
++	struct damos *old_hot_scheme = NULL, *old_cold_scheme = NULL;
+ 	unsigned int hot_thres, cold_thres;
+ 	int err = 0;
+ 
+@@ -195,18 +207,35 @@ static int damon_lru_sort_apply_parameters(void)
+ 	if (err)
+ 		return err;
+ 
++	damon_for_each_scheme(scheme, ctx) {
++		if (!old_hot_scheme) {
++			old_hot_scheme = scheme;
++			continue;
++		}
++		old_cold_scheme = scheme;
++	}
++
+ 	hot_thres = damon_max_nr_accesses(&damon_lru_sort_mon_attrs) *
+ 		hot_thres_access_freq / 1000;
+-	scheme = damon_lru_sort_new_hot_scheme(hot_thres);
+-	if (!scheme)
++	hot_scheme = damon_lru_sort_new_hot_scheme(hot_thres);
++	if (!hot_scheme)
+ 		return -ENOMEM;
+-	damon_set_schemes(ctx, &scheme, 1);
++	if (old_hot_scheme)
++		damon_lru_sort_copy_quota_status(&hot_scheme->quota,
++				&old_hot_scheme->quota);
+ 
+ 	cold_thres = cold_min_age / damon_lru_sort_mon_attrs.aggr_interval;
+-	scheme = damon_lru_sort_new_cold_scheme(cold_thres);
+-	if (!scheme)
++	cold_scheme = damon_lru_sort_new_cold_scheme(cold_thres);
++	if (!cold_scheme) {
++		damon_destroy_scheme(hot_scheme);
+ 		return -ENOMEM;
+-	damon_add_scheme(ctx, scheme);
++	}
++	if (old_cold_scheme)
++		damon_lru_sort_copy_quota_status(&cold_scheme->quota,
++				&old_cold_scheme->quota);
++
++	damon_set_schemes(ctx, &hot_scheme, 1);
++	damon_add_scheme(ctx, cold_scheme);
+ 
+ 	return damon_set_region_biggest_system_ram_default(target,
+ 					&monitor_region_start,
+diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
+index ab974e477d2f2..66e190f0374ac 100644
+--- a/mm/damon/reclaim.c
++++ b/mm/damon/reclaim.c
+@@ -150,9 +150,20 @@ static struct damos *damon_reclaim_new_scheme(void)
+ 			&damon_reclaim_wmarks);
+ }
+ 
++static void damon_reclaim_copy_quota_status(struct damos_quota *dst,
++		struct damos_quota *src)
++{
++	dst->total_charged_sz = src->total_charged_sz;
++	dst->total_charged_ns = src->total_charged_ns;
++	dst->charged_sz = src->charged_sz;
++	dst->charged_from = src->charged_from;
++	dst->charge_target_from = src->charge_target_from;
++	dst->charge_addr_from = src->charge_addr_from;
++}
++
+ static int damon_reclaim_apply_parameters(void)
+ {
+-	struct damos *scheme;
++	struct damos *scheme, *old_scheme;
+ 	struct damos_filter *filter;
+ 	int err = 0;
+ 
+@@ -164,6 +175,11 @@ static int damon_reclaim_apply_parameters(void)
+ 	scheme = damon_reclaim_new_scheme();
+ 	if (!scheme)
+ 		return -ENOMEM;
++	if (!list_empty(&ctx->schemes)) {
++		damon_for_each_scheme(old_scheme, ctx)
++			damon_reclaim_copy_quota_status(&scheme->quota,
++					&old_scheme->quota);
++	}
+ 	if (skip_anon) {
+ 		filter = damos_new_filter(DAMOS_FILTER_TYPE_ANON, true);
+ 		if (!filter) {
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 4823ad979b72f..9a5248fe9cf97 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -180,8 +180,9 @@ static inline phys_addr_t memblock_cap_size(phys_addr_t base, phys_addr_t *size)
+ /*
+  * Address comparison utilities
+  */
+-static unsigned long __init_memblock memblock_addrs_overlap(phys_addr_t base1, phys_addr_t size1,
+-				       phys_addr_t base2, phys_addr_t size2)
++unsigned long __init_memblock
++memblock_addrs_overlap(phys_addr_t base1, phys_addr_t size1, phys_addr_t base2,
++		       phys_addr_t size2)
+ {
+ 	return ((base1 < (base2 + size2)) && (base2 < (base1 + size1)));
+ }
+@@ -2214,6 +2215,7 @@ static const char * const flagname[] = {
+ 	[ilog2(MEMBLOCK_MIRROR)] = "MIRROR",
+ 	[ilog2(MEMBLOCK_NOMAP)] = "NOMAP",
+ 	[ilog2(MEMBLOCK_DRIVER_MANAGED)] = "DRV_MNG",
++	[ilog2(MEMBLOCK_RSRV_NOINIT)] = "RSV_NIT",
+ };
+ 
+ static int memblock_debug_show(struct seq_file *m, void *private)
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 651b86ff03dd1..792fb3a5ce3b9 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -7905,9 +7905,13 @@ bool mem_cgroup_swap_full(struct folio *folio)
+ 
+ static int __init setup_swap_account(char *s)
+ {
+-	pr_warn_once("The swapaccount= commandline option is deprecated. "
+-		     "Please report your usecase to linux-mm@kvack.org if you "
+-		     "depend on this functionality.\n");
++	bool res;
++
++	if (!kstrtobool(s, &res) && !res)
++		pr_warn_once("The swapaccount=0 commandline option is deprecated "
++			     "in favor of configuring swap control via cgroupfs. "
++			     "Please report your usecase to linux-mm@kvack.org if you "
++			     "depend on this functionality.\n");
+ 	return 1;
+ }
+ __setup("swapaccount=", setup_swap_account);
+diff --git a/mm/memory.c b/mm/memory.c
+index f941489d60414..ec7b775fb4313 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3785,6 +3785,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 	struct page *page;
+ 	struct swap_info_struct *si = NULL;
+ 	rmap_t rmap_flags = RMAP_NONE;
++	bool need_clear_cache = false;
+ 	bool exclusive = false;
+ 	swp_entry_t entry;
+ 	pte_t pte;
+@@ -3853,6 +3854,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 	if (!folio) {
+ 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
+ 		    __swap_count(entry) == 1) {
++			/*
++			 * Prevent parallel swapin from proceeding with
++			 * the cache flag. Otherwise, another thread may
++			 * finish swapin first, free the entry, and swapout
++			 * reusing the same entry. It's undetectable as
++			 * pte_same() returns true due to entry reuse.
++			 */
++			if (swapcache_prepare(entry)) {
++				/* Relax a bit to prevent rapid repeated page faults */
++				schedule_timeout_uninterruptible(1);
++				goto out;
++			}
++			need_clear_cache = true;
++
+ 			/* skip swapcache */
+ 			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
+ 						vma, vmf->address, false);
+@@ -4099,6 +4114,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 	if (vmf->pte)
+ 		pte_unmap_unlock(vmf->pte, vmf->ptl);
+ out:
++	/* Clear the swap cache pin for direct swapin after PTL unlock */
++	if (need_clear_cache)
++		swapcache_clear(si, entry);
+ 	if (si)
+ 		put_swap_device(si);
+ 	return ret;
+@@ -4113,6 +4131,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 		folio_unlock(swapcache);
+ 		folio_put(swapcache);
+ 	}
++	if (need_clear_cache)
++		swapcache_clear(si, entry);
+ 	if (si)
+ 		put_swap_device(si);
+ 	return ret;
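
The do_swap_page() hunks above use swapcache_prepare() as a per-entry
pin: one faulting thread wins the right to do the skip-swapcache
swap-in, while any concurrent fault backs off briefly and retries,
closing the window in which a parallel swap-in could free and reuse
the same entry undetected. Reduced to a hypothetical atomic flag
(illustrative names, not the kernel API), the claim/back-off/release
shape is:

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_flag entry_pinned = ATOMIC_FLAG_INIT;

    /* Returns true if this caller performed the swap-in; false means
     * another thread owns it and the caller should retry the fault. */
    static bool swapin_one(void)
    {
        if (atomic_flag_test_and_set(&entry_pinned))
            return false;             /* swapcache_prepare() failed */
        /* ... allocate folio, read from swap, install the PTE ... */
        atomic_flag_clear(&entry_pinned); /* swapcache_clear() */
        return true;
    }

    int main(void)
    {
        return swapin_one() ? 0 : 1;
    }
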
+diff --git a/mm/swap.h b/mm/swap.h
+index 73c332ee4d910..3501fdf5f1a6e 100644
+--- a/mm/swap.h
++++ b/mm/swap.h
+@@ -40,6 +40,7 @@ void __delete_from_swap_cache(struct folio *folio,
+ void delete_from_swap_cache(struct folio *folio);
+ void clear_shadow_from_swap_cache(int type, unsigned long begin,
+ 				  unsigned long end);
++void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
+ struct folio *swap_cache_get_folio(swp_entry_t entry,
+ 		struct vm_area_struct *vma, unsigned long addr);
+ struct folio *filemap_get_incore_folio(struct address_space *mapping,
+@@ -97,6 +98,10 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
+ 	return 0;
+ }
+ 
++static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
++{
++}
++
+ static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
+ 		struct vm_area_struct *vma, unsigned long addr)
+ {
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 4bc70f4591641..022581ec40be3 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -3363,6 +3363,19 @@ int swapcache_prepare(swp_entry_t entry)
+ 	return __swap_duplicate(entry, SWAP_HAS_CACHE);
+ }
+ 
++void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
++{
++	struct swap_cluster_info *ci;
++	unsigned long offset = swp_offset(entry);
++	unsigned char usage;
++
++	ci = lock_cluster_or_swap_info(si, offset);
++	usage = __swap_entry_free_locked(si, offset, SWAP_HAS_CACHE);
++	unlock_cluster_or_swap_info(si, ci);
++	if (!usage)
++		free_swap_slot(entry);
++}
++
+ struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ {
+ 	return swap_type_to_swap_info(swp_type(entry));
+diff --git a/mm/zswap.c b/mm/zswap.c
+index 74411dfdad925..870fd6f5a5bb9 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -1105,6 +1105,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
+ 	if (zswap_rb_search(&tree->rbroot, swp_offset(entry->swpentry)) != entry) {
+ 		spin_unlock(&tree->lock);
+ 		delete_from_swap_cache(page_folio(page));
++		unlock_page(page);
++		put_page(page);
+ 		ret = -ENOMEM;
+ 		goto fail;
+ 	}
+@@ -1220,7 +1222,7 @@ bool zswap_store(struct folio *folio)
+ 	if (folio_test_large(folio))
+ 		return false;
+ 
+-	if (!zswap_enabled || !tree)
++	if (!tree)
+ 		return false;
+ 
+ 	/*
+@@ -1236,6 +1238,9 @@ bool zswap_store(struct folio *folio)
+ 	}
+ 	spin_unlock(&tree->lock);
+ 
++	if (!zswap_enabled)
++		return false;
++
+ 	/*
+ 	 * XXX: zswap reclaim does not work with cgroups yet. Without a
+ 	 * cgroup-aware entry LRU, we will push out entries system-wide based on
+diff --git a/net/bridge/br_switchdev.c b/net/bridge/br_switchdev.c
+index ee84e783e1dff..7b41ee8740cbb 100644
+--- a/net/bridge/br_switchdev.c
++++ b/net/bridge/br_switchdev.c
+@@ -595,21 +595,40 @@ br_switchdev_mdb_replay_one(struct notifier_block *nb, struct net_device *dev,
+ }
+ 
+ static int br_switchdev_mdb_queue_one(struct list_head *mdb_list,
++				      struct net_device *dev,
++				      unsigned long action,
+ 				      enum switchdev_obj_id id,
+ 				      const struct net_bridge_mdb_entry *mp,
+ 				      struct net_device *orig_dev)
+ {
+-	struct switchdev_obj_port_mdb *mdb;
++	struct switchdev_obj_port_mdb mdb = {
++		.obj = {
++			.id = id,
++			.orig_dev = orig_dev,
++		},
++	};
++	struct switchdev_obj_port_mdb *pmdb;
+ 
+-	mdb = kzalloc(sizeof(*mdb), GFP_ATOMIC);
+-	if (!mdb)
+-		return -ENOMEM;
++	br_switchdev_mdb_populate(&mdb, mp);
++
++	if (action == SWITCHDEV_PORT_OBJ_ADD &&
++	    switchdev_port_obj_act_is_deferred(dev, action, &mdb.obj)) {
++		/* This event is already in the deferred queue of
++		 * events, so this replay must be elided, lest the
++	 * driver receive duplicate events for it. This can
++		 * only happen when replaying additions, since
++		 * modifications are always immediately visible in
++		 * br->mdb_list, whereas actual event delivery may be
++		 * delayed.
++		 */
++		return 0;
++	}
+ 
+-	mdb->obj.id = id;
+-	mdb->obj.orig_dev = orig_dev;
+-	br_switchdev_mdb_populate(mdb, mp);
+-	list_add_tail(&mdb->obj.list, mdb_list);
++	pmdb = kmemdup(&mdb, sizeof(mdb), GFP_ATOMIC);
++	if (!pmdb)
++		return -ENOMEM;
+ 
++	list_add_tail(&pmdb->obj.list, mdb_list);
+ 	return 0;
+ }
+ 
+@@ -677,51 +696,50 @@ br_switchdev_mdb_replay(struct net_device *br_dev, struct net_device *dev,
+ 	if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
+ 		return 0;
+ 
+-	/* We cannot walk over br->mdb_list protected just by the rtnl_mutex,
+-	 * because the write-side protection is br->multicast_lock. But we
+-	 * need to emulate the [ blocking ] calling context of a regular
+-	 * switchdev event, so since both br->multicast_lock and RCU read side
+-	 * critical sections are atomic, we have no choice but to pick the RCU
+-	 * read side lock, queue up all our events, leave the critical section
+-	 * and notify switchdev from blocking context.
++	if (adding)
++		action = SWITCHDEV_PORT_OBJ_ADD;
++	else
++		action = SWITCHDEV_PORT_OBJ_DEL;
++
++	/* br_switchdev_mdb_queue_one() will take care to not queue a
++	 * replay of an event that is already pending in the switchdev
++	 * deferred queue. In order to safely determine that, there
++	 * must be no new deferred MDB notifications enqueued for the
++	 * duration of the MDB scan. Therefore, grab the write-side
++	 * lock to avoid racing with any concurrent IGMP/MLD snooping.
+ 	 */
+-	rcu_read_lock();
++	spin_lock_bh(&br->multicast_lock);
+ 
+-	hlist_for_each_entry_rcu(mp, &br->mdb_list, mdb_node) {
++	hlist_for_each_entry(mp, &br->mdb_list, mdb_node) {
+ 		struct net_bridge_port_group __rcu * const *pp;
+ 		const struct net_bridge_port_group *p;
+ 
+ 		if (mp->host_joined) {
+-			err = br_switchdev_mdb_queue_one(&mdb_list,
++			err = br_switchdev_mdb_queue_one(&mdb_list, dev, action,
+ 							 SWITCHDEV_OBJ_ID_HOST_MDB,
+ 							 mp, br_dev);
+ 			if (err) {
+-				rcu_read_unlock();
++				spin_unlock_bh(&br->multicast_lock);
+ 				goto out_free_mdb;
+ 			}
+ 		}
+ 
+-		for (pp = &mp->ports; (p = rcu_dereference(*pp)) != NULL;
++		for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL;
+ 		     pp = &p->next) {
+ 			if (p->key.port->dev != dev)
+ 				continue;
+ 
+-			err = br_switchdev_mdb_queue_one(&mdb_list,
++			err = br_switchdev_mdb_queue_one(&mdb_list, dev, action,
+ 							 SWITCHDEV_OBJ_ID_PORT_MDB,
+ 							 mp, dev);
+ 			if (err) {
+-				rcu_read_unlock();
++				spin_unlock_bh(&br->multicast_lock);
+ 				goto out_free_mdb;
+ 			}
+ 		}
+ 	}
+ 
+-	rcu_read_unlock();
+-
+-	if (adding)
+-		action = SWITCHDEV_PORT_OBJ_ADD;
+-	else
+-		action = SWITCHDEV_PORT_OBJ_DEL;
++	spin_unlock_bh(&br->multicast_lock);
+ 
+ 	list_for_each_entry(obj, &mdb_list, list) {
+ 		err = br_switchdev_mdb_replay_one(nb, dev,
+@@ -786,6 +804,16 @@ static void nbp_switchdev_unsync_objs(struct net_bridge_port *p,
+ 	br_switchdev_mdb_replay(br_dev, dev, ctx, false, blocking_nb, NULL);
+ 
+ 	br_switchdev_vlan_replay(br_dev, ctx, false, blocking_nb, NULL);
++
++	/* Make sure that the device leaving this bridge has seen all
++	 * relevant events before it is disassociated. In the normal
++	 * case, when the device is directly attached to the bridge,
++	 * this is covered by del_nbp(). If the association was indirect
++	 * however, e.g. via a team or bond, and the device is leaving
++	 * that intermediate device, then the bridge port remains in
++	 * place.
++	 */
++	switchdev_deferred_process();
+ }
+ 
+ /* Let the bridge know that this port is offloaded, so that it can assign a
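
The bridge replay above trades the RCU read side for the multicast
write-side lock so that the whole mdb_list walk, including the new
"is this event already deferred?" test, is atomic with respect to
concurrent IGMP/MLD deferrals. A reduced sketch of queueing replays
while eliding already-deferred events under one lock (queues and
types below are illustrative):

    #include <pthread.h>
    #include <stdbool.h>

    #define MAX_EVENTS 64

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int deferred[MAX_EVENTS], n_deferred;
    static int replay[MAX_EVENTS], n_replay;

    static bool is_deferred(int ev)
    {
        for (int i = 0; i < n_deferred; i++)
            if (deferred[i] == ev)
                return true;
        return false;
    }

    /* Queue a replay for each event unless an identical one already
     * sits in the deferred queue; holding the lock across the whole
     * scan keeps the two queues consistent. */
    static void queue_replays(const int *cur, int n)
    {
        pthread_mutex_lock(&lock);
        for (int i = 0; i < n; i++) {
            if (is_deferred(cur[i]))
                continue; /* driver sees it via the deferred queue */
            replay[n_replay++] = cur[i];
        }
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        int cur[2] = { 7, 9 };

        deferred[n_deferred++] = 7;
        queue_replays(cur, 2);
        return n_replay == 1 ? 0 : 1; /* only event 9 is replayed */
    }
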
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 8d9760397b887..3babcd5e65e16 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -5856,8 +5856,8 @@ static int osd_sparse_read(struct ceph_connection *con,
+ 	struct ceph_osd *o = con->private;
+ 	struct ceph_sparse_read *sr = &o->o_sparse_read;
+ 	u32 count = sr->sr_count;
+-	u64 eoff, elen;
+-	int ret;
++	u64 eoff, elen, len = 0;
++	int i, ret;
+ 
+ 	switch (sr->sr_state) {
+ 	case CEPH_SPARSE_READ_HDR:
+@@ -5909,8 +5909,20 @@ static int osd_sparse_read(struct ceph_connection *con,
+ 		convert_extent_map(sr);
+ 		ret = sizeof(sr->sr_datalen);
+ 		*pbuf = (char *)&sr->sr_datalen;
+-		sr->sr_state = CEPH_SPARSE_READ_DATA;
++		sr->sr_state = CEPH_SPARSE_READ_DATA_PRE;
+ 		break;
++	case CEPH_SPARSE_READ_DATA_PRE:
++		/* Convert sr_datalen to host-endian */
++		sr->sr_datalen = le32_to_cpu((__force __le32)sr->sr_datalen);
++		for (i = 0; i < count; i++)
++			len += sr->sr_extent[i].len;
++		if (sr->sr_datalen != len) {
++			pr_warn_ratelimited("data len %u != extent len %llu\n",
++					    sr->sr_datalen, len);
++			return -EREMOTEIO;
++		}
++		sr->sr_state = CEPH_SPARSE_READ_DATA;
++		fallthrough;
+ 	case CEPH_SPARSE_READ_DATA:
+ 		if (sr->sr_index >= count) {
+ 			sr->sr_state = CEPH_SPARSE_READ_HDR;
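
The new CEPH_SPARSE_READ_DATA_PRE state converts the on-wire length to
host endianness exactly once and cross-checks it against the sum of
the extent lengths before any payload is consumed, so a malformed
reply fails with -EREMOTEIO instead of desynchronizing the stream.
The validation step in isolation (hypothetical extent struct):

    #include <stdint.h>
    #include <stdio.h>

    struct extent { uint64_t off, len; };

    /* Return 0 if the advertised payload length matches the extent
     * map, -1 (a protocol error) otherwise. */
    static int check_sparse_len(uint32_t datalen,
                                const struct extent *ext, int count)
    {
        uint64_t len = 0;

        for (int i = 0; i < count; i++)
            len += ext[i].len;
        if (datalen != len) {
            fprintf(stderr, "data len %u != extent len %llu\n",
                    datalen, (unsigned long long)len);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct extent e[2] = { { 0, 10 }, { 32, 6 } };

        return check_sparse_len(16, e, 2) ? 1 : 0;
    }
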
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 93ecfceac1bc4..4d75ef9d24bfa 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -1226,8 +1226,11 @@ static void sk_psock_verdict_data_ready(struct sock *sk)
+ 
+ 		rcu_read_lock();
+ 		psock = sk_psock(sk);
+-		if (psock)
+-			psock->saved_data_ready(sk);
++		if (psock) {
++			read_lock_bh(&sk->sk_callback_lock);
++			sk_psock_data_ready(sk, psock);
++			read_unlock_bh(&sk->sk_callback_lock);
++		}
+ 		rcu_read_unlock();
+ 	}
+ }
+diff --git a/net/core/sock.c b/net/core/sock.c
+index e5d43a068f8ed..20160865ede9c 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1192,6 +1192,17 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
+ 		 */
+ 		WRITE_ONCE(sk->sk_txrehash, (u8)val);
+ 		return 0;
++	case SO_PEEK_OFF:
++		{
++		int (*set_peek_off)(struct sock *sk, int val);
++
++		set_peek_off = READ_ONCE(sock->ops)->set_peek_off;
++		if (set_peek_off)
++			ret = set_peek_off(sk, val);
++		else
++			ret = -EOPNOTSUPP;
++		return ret;
++		}
+ 	}
+ 
+ 	sockopt_lock_sock(sk);
+@@ -1434,18 +1445,6 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
+ 		sock_valbool_flag(sk, SOCK_WIFI_STATUS, valbool);
+ 		break;
+ 
+-	case SO_PEEK_OFF:
+-		{
+-		int (*set_peek_off)(struct sock *sk, int val);
+-
+-		set_peek_off = READ_ONCE(sock->ops)->set_peek_off;
+-		if (set_peek_off)
+-			ret = set_peek_off(sk, val);
+-		else
+-			ret = -EOPNOTSUPP;
+-		break;
+-		}
+-
+ 	case SO_NOFCS:
+ 		sock_valbool_flag(sk, SOCK_NOFCS, valbool);
+ 		break;
+diff --git a/net/devlink/core.c b/net/devlink/core.c
+index cbf8560c93752..bc3d265fe2d6e 100644
+--- a/net/devlink/core.c
++++ b/net/devlink/core.c
+@@ -529,14 +529,20 @@ static int __init devlink_init(void)
+ {
+ 	int err;
+ 
+-	err = genl_register_family(&devlink_nl_family);
+-	if (err)
+-		goto out;
+ 	err = register_pernet_subsys(&devlink_pernet_ops);
+ 	if (err)
+ 		goto out;
++	err = genl_register_family(&devlink_nl_family);
++	if (err)
++		goto out_unreg_pernet_subsys;
+ 	err = register_netdevice_notifier(&devlink_port_netdevice_nb);
++	if (!err)
++		return 0;
++
++	genl_unregister_family(&devlink_nl_family);
+ 
++out_unreg_pernet_subsys:
++	unregister_pernet_subsys(&devlink_pernet_ops);
+ out:
+ 	WARN_ON(err);
+ 	return err;
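
The devlink_init() reordering above (the seg6_init() hunk further down
gets the same treatment) is the standard module-init pattern: register
in dependency order and, on failure, unwind in exact reverse order so
no registration outlives a failed init. The skeleton, with stubbed
registrations so it runs stand-alone:

    #include <stdio.h>

    static int register_a(void) { return 0; }
    static int register_b(void) { return 0; }
    static int register_c(void) { return -1; } /* simulate failure */
    static void unregister_a(void) { puts("unreg a"); }
    static void unregister_b(void) { puts("unreg b"); }

    static int example_init(void)
    {
        int err;

        err = register_a();      /* lowest-level dependency first */
        if (err)
            goto out;
        err = register_b();
        if (err)
            goto out_unreg_a;
        err = register_c();
        if (!err)
            return 0;

        unregister_b();          /* unwind in reverse order */
    out_unreg_a:
        unregister_a();
    out:
        return err;
    }

    int main(void)
    {
        return example_init() ? 1 : 0;
    }
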
+diff --git a/net/devlink/port.c b/net/devlink/port.c
+index 841a3eafa328e..d39ee6053cc7b 100644
+--- a/net/devlink/port.c
++++ b/net/devlink/port.c
+@@ -581,7 +581,7 @@ devlink_nl_port_get_dump_one(struct sk_buff *msg, struct devlink *devlink,
+ 
+ 	xa_for_each_start(&devlink->ports, port_index, devlink_port, state->idx) {
+ 		err = devlink_nl_port_fill(msg, devlink_port,
+-					   DEVLINK_CMD_NEW,
++					   DEVLINK_CMD_PORT_NEW,
+ 					   NETLINK_CB(cb->skb).portid,
+ 					   cb->nlh->nlmsg_seq, flags,
+ 					   cb->extack);
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 9456f5bb35e5d..0d0d725b46ad0 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -1125,7 +1125,8 @@ static int arp_req_get(struct arpreq *r, struct net_device *dev)
+ 	if (neigh) {
+ 		if (!(READ_ONCE(neigh->nud_state) & NUD_NOARP)) {
+ 			read_lock_bh(&neigh->lock);
+-			memcpy(r->arp_ha.sa_data, neigh->ha, dev->addr_len);
++			memcpy(r->arp_ha.sa_data, neigh->ha,
++			       min(dev->addr_len, sizeof(r->arp_ha.sa_data_min)));
+ 			r->arp_flags = arp_state_to_flags(neigh);
+ 			read_unlock_bh(&neigh->lock);
+ 			r->arp_ha.sa_family = dev->type;
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index ca0ff15dc8fa3..bc74f131fe4df 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1825,6 +1825,21 @@ static int in_dev_dump_addr(struct in_device *in_dev, struct sk_buff *skb,
+ 	return err;
+ }
+ 
++/* Combine dev_addr_genid and dev_base_seq to detect changes.
++ */
++static u32 inet_base_seq(const struct net *net)
++{
++	u32 res = atomic_read(&net->ipv4.dev_addr_genid) +
++		  net->dev_base_seq;
++
++	/* Must not return 0 (see nl_dump_check_consistent()).
++	 * Choose a value far away from 0.
++	 */
++	if (!res)
++		res = 0x80000000;
++	return res;
++}
++
+ static int inet_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+ 	const struct nlmsghdr *nlh = cb->nlh;
+@@ -1876,8 +1891,7 @@ static int inet_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb)
+ 		idx = 0;
+ 		head = &tgt_net->dev_index_head[h];
+ 		rcu_read_lock();
+-		cb->seq = atomic_read(&tgt_net->ipv4.dev_addr_genid) ^
+-			  tgt_net->dev_base_seq;
++		cb->seq = inet_base_seq(tgt_net);
+ 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
+ 			if (idx < s_idx)
+ 				goto cont;
+@@ -2278,8 +2292,7 @@ static int inet_netconf_dump_devconf(struct sk_buff *skb,
+ 		idx = 0;
+ 		head = &net->dev_index_head[h];
+ 		rcu_read_lock();
+-		cb->seq = atomic_read(&net->ipv4.dev_addr_genid) ^
+-			  net->dev_base_seq;
++		cb->seq = inet_base_seq(net);
+ 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
+ 			if (idx < s_idx)
+ 				goto cont;
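
inet_base_seq() above folds the two generation counters with addition
rather than XOR and steers the result away from zero, because
nl_dump_check_consistent() treats a zero cb->seq as "consistency
checking disabled". The guard in isolation:

    #include <stdint.h>

    /* Combine two generation counters into a netlink dump cookie
     * that is never 0 (0 would disable the consistency check). */
    static uint32_t dump_cookie(uint32_t addr_genid, uint32_t base_seq)
    {
        uint32_t res = addr_genid + base_seq;

        return res ? res : 0x80000000u;
    }

    int main(void)
    {
        /* 5 + (2^32 - 5) wraps to 0 and must be remapped. */
        return dump_cookie(5, (uint32_t)-5) == 0x80000000u ? 0 : 1;
    }
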
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index a532f749e4778..9456bf9e2705b 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -1131,10 +1131,33 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 	return 0;
+ 
+ error:
++	if (sk_hashed(sk)) {
++		spinlock_t *lock = inet_ehash_lockp(hinfo, sk->sk_hash);
++
++		sock_prot_inuse_add(net, sk->sk_prot, -1);
++
++		spin_lock(lock);
++		sk_nulls_del_node_init_rcu(sk);
++		spin_unlock(lock);
++
++		sk->sk_hash = 0;
++		inet_sk(sk)->inet_sport = 0;
++		inet_sk(sk)->inet_num = 0;
++
++		if (tw)
++			inet_twsk_bind_unhash(tw, hinfo);
++	}
++
+ 	spin_unlock(&head2->lock);
+ 	if (tb_created)
+ 		inet_bind_bucket_destroy(hinfo->bind_bucket_cachep, tb);
+-	spin_unlock_bh(&head->lock);
++	spin_unlock(&head->lock);
++
++	if (tw)
++		inet_twsk_deschedule_put(tw);
++
++	local_bh_enable();
++
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index f631b0a21af4c..e474b201900f9 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1589,12 +1589,7 @@ int udp_init_sock(struct sock *sk)
+ 
+ void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len)
+ {
+-	if (unlikely(READ_ONCE(sk->sk_peek_off) >= 0)) {
+-		bool slow = lock_sock_fast(sk);
+-
+-		sk_peek_offset_bwd(sk, len);
+-		unlock_sock_fast(sk, slow);
+-	}
++	sk_peek_offset_bwd(sk, len);
+ 
+ 	if (!skb_unref(skb))
+ 		return;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 733ace18806c6..5a839c5fb1a5a 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -708,6 +708,22 @@ static int inet6_netconf_get_devconf(struct sk_buff *in_skb,
+ 	return err;
+ }
+ 
++/* Combine dev_addr_genid and dev_base_seq to detect changes.
++ */
++static u32 inet6_base_seq(const struct net *net)
++{
++	u32 res = atomic_read(&net->ipv6.dev_addr_genid) +
++		  net->dev_base_seq;
++
++	/* Must not return 0 (see nl_dump_check_consistent()).
++	 * Choose a value far away from 0.
++	 */
++	if (!res)
++		res = 0x80000000;
++	return res;
++}
++
++
+ static int inet6_netconf_dump_devconf(struct sk_buff *skb,
+ 				      struct netlink_callback *cb)
+ {
+@@ -741,8 +757,7 @@ static int inet6_netconf_dump_devconf(struct sk_buff *skb,
+ 		idx = 0;
+ 		head = &net->dev_index_head[h];
+ 		rcu_read_lock();
+-		cb->seq = atomic_read(&net->ipv6.dev_addr_genid) ^
+-			  net->dev_base_seq;
++		cb->seq = inet6_base_seq(net);
+ 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
+ 			if (idx < s_idx)
+ 				goto cont;
+@@ -5362,7 +5377,7 @@ static int inet6_dump_addr(struct sk_buff *skb, struct netlink_callback *cb,
+ 	}
+ 
+ 	rcu_read_lock();
+-	cb->seq = atomic_read(&tgt_net->ipv6.dev_addr_genid) ^ tgt_net->dev_base_seq;
++	cb->seq = inet6_base_seq(tgt_net);
+ 	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
+ 		idx = 0;
+ 		head = &tgt_net->dev_index_head[h];
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index 4952ae7924505..02e9ffb63af19 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -177,6 +177,8 @@ static bool ip6_parse_tlv(bool hopbyhop,
+ 				case IPV6_TLV_IOAM:
+ 					if (!ipv6_hop_ioam(skb, off))
+ 						return false;
++
++					nh = skb_network_header(skb);
+ 					break;
+ 				case IPV6_TLV_JUMBO:
+ 					if (!ipv6_hop_jumbo(skb, off))
+@@ -943,6 +945,14 @@ static bool ipv6_hop_ioam(struct sk_buff *skb, int optoff)
+ 		if (!skb_valid_dst(skb))
+ 			ip6_route_input(skb);
+ 
++		/* About to mangle packet header */
++		if (skb_ensure_writable(skb, optoff + 2 + hdr->opt_len))
++			goto drop;
++
++		/* Trace pointer may have changed */
++		trace = (struct ioam6_trace_hdr *)(skb_network_header(skb)
++						   + optoff + sizeof(*hdr));
++
+ 		ioam6_fill_trace_data(skb, ns, trace, true);
+ 		break;
+ 	default:
+diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
+index 29346a6eec9ff..35508abd76f43 100644
+--- a/net/ipv6/seg6.c
++++ b/net/ipv6/seg6.c
+@@ -512,22 +512,24 @@ int __init seg6_init(void)
+ {
+ 	int err;
+ 
+-	err = genl_register_family(&seg6_genl_family);
++	err = register_pernet_subsys(&ip6_segments_ops);
+ 	if (err)
+ 		goto out;
+ 
+-	err = register_pernet_subsys(&ip6_segments_ops);
++	err = genl_register_family(&seg6_genl_family);
+ 	if (err)
+-		goto out_unregister_genl;
++		goto out_unregister_pernet;
+ 
+ #ifdef CONFIG_IPV6_SEG6_LWTUNNEL
+ 	err = seg6_iptunnel_init();
+ 	if (err)
+-		goto out_unregister_pernet;
++		goto out_unregister_genl;
+ 
+ 	err = seg6_local_init();
+-	if (err)
+-		goto out_unregister_pernet;
++	if (err) {
++		seg6_iptunnel_exit();
++		goto out_unregister_genl;
++	}
+ #endif
+ 
+ #ifdef CONFIG_IPV6_SEG6_HMAC
+@@ -548,11 +550,11 @@ int __init seg6_init(void)
+ #endif
+ #endif
+ #ifdef CONFIG_IPV6_SEG6_LWTUNNEL
+-out_unregister_pernet:
+-	unregister_pernet_subsys(&ip6_segments_ops);
+-#endif
+ out_unregister_genl:
+ 	genl_unregister_family(&seg6_genl_family);
++#endif
++out_unregister_pernet:
++	unregister_pernet_subsys(&ip6_segments_ops);
+ 	goto out;
+ }
+ 
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index bb373e249237a..763a59414b163 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -627,7 +627,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 
+ back_from_confirm:
+ 	lock_sock(sk);
+-	ulen = len + skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0;
++	ulen = len + (skb_queue_empty(&sk->sk_write_queue) ? transhdrlen : 0);
+ 	err = ip6_append_data(sk, ip_generic_getfrag, msg,
+ 			      ulen, transhdrlen, &ipc6,
+ 			      &fl6, (struct rt6_info *)dst,
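
The l2tp_ip6_sendmsg() one-liner above is a precedence fix: '+' binds
tighter than '?:', so the old expression evaluated as
(len + skb_queue_empty(...)) ? transhdrlen : 0 and threw len away
whenever the sum was non-zero. A two-line demonstration:

    #include <assert.h>

    int main(void)
    {
        int len = 100, empty = 1, transhdrlen = 8;

        /* buggy: '+' binds tighter than '?:' */
        int bad  = len + empty ? transhdrlen : 0;   /* == 8   */
        /* fixed: parenthesize the conditional */
        int good = len + (empty ? transhdrlen : 0); /* == 108 */

        assert(bad == 8 && good == 108);
        return 0;
    }
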
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index bfb06dea43c2d..b382c2e0a39a0 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1869,6 +1869,8 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ 					      sband->band);
+ 	}
+ 
++	ieee80211_sta_set_rx_nss(link_sta);
++
+ 	return ret;
+ }
+ 
+diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
+index dce5606ed66da..68596ef78b15e 100644
+--- a/net/mac80211/debugfs_netdev.c
++++ b/net/mac80211/debugfs_netdev.c
+@@ -997,8 +997,8 @@ static void add_link_files(struct ieee80211_link_data *link,
+ 	}
+ }
+ 
+-void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata,
+-				  bool mld_vif)
++static void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata,
++					 bool mld_vif)
+ {
+ 	char buf[10+IFNAMSIZ];
+ 
+diff --git a/net/mac80211/debugfs_netdev.h b/net/mac80211/debugfs_netdev.h
+index b226b1aae88a5..a02ec0a413f61 100644
+--- a/net/mac80211/debugfs_netdev.h
++++ b/net/mac80211/debugfs_netdev.h
+@@ -11,8 +11,6 @@
+ #include "ieee80211_i.h"
+ 
+ #ifdef CONFIG_MAC80211_DEBUGFS
+-void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata,
+-				  bool mld_vif);
+ void ieee80211_debugfs_remove_netdev(struct ieee80211_sub_if_data *sdata);
+ void ieee80211_debugfs_rename_netdev(struct ieee80211_sub_if_data *sdata);
+ void ieee80211_debugfs_recreate_netdev(struct ieee80211_sub_if_data *sdata,
+@@ -24,9 +22,6 @@ void ieee80211_link_debugfs_remove(struct ieee80211_link_data *link);
+ void ieee80211_link_debugfs_drv_add(struct ieee80211_link_data *link);
+ void ieee80211_link_debugfs_drv_remove(struct ieee80211_link_data *link);
+ #else
+-static inline void ieee80211_debugfs_add_netdev(
+-	struct ieee80211_sub_if_data *sdata, bool mld_vif)
+-{}
+ static inline void ieee80211_debugfs_remove_netdev(
+ 	struct ieee80211_sub_if_data *sdata)
+ {}
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index e4e7c0b38cb6e..11c4caa4748e4 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1783,7 +1783,7 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata,
+ 	/* need to do this after the switch so vif.type is correct */
+ 	ieee80211_link_setup(&sdata->deflink);
+ 
+-	ieee80211_debugfs_add_netdev(sdata, false);
++	ieee80211_debugfs_recreate_netdev(sdata, false);
+ }
+ 
+ static int ieee80211_runtime_change_iftype(struct ieee80211_sub_if_data *sdata,
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 5a03bf1de6bb7..241e615189244 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -8,7 +8,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright (C) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2023 Intel Corporation
++ * Copyright (C) 2018 - 2024 Intel Corporation
+  */
+ 
+ #include <linux/delay.h>
+@@ -2903,6 +2903,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ 
+ 	/* other links will be destroyed */
+ 	sdata->deflink.u.mgd.bss = NULL;
++	sdata->deflink.smps_mode = IEEE80211_SMPS_OFF;
+ 
+ 	netif_carrier_off(sdata->dev);
+ 
+@@ -5030,9 +5031,6 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
+ 	if (!link)
+ 		return 0;
+ 
+-	/* will change later if needed */
+-	link->smps_mode = IEEE80211_SMPS_OFF;
+-
+ 	/*
+ 	 * If this fails (possibly due to channel context sharing
+ 	 * on incompatible channels, e.g. 80+80 and 160 sharing the
+@@ -7075,6 +7073,7 @@ void ieee80211_mgd_setup_link(struct ieee80211_link_data *link)
+ 	link->u.mgd.p2p_noa_index = -1;
+ 	link->u.mgd.conn_flags = 0;
+ 	link->conf->bssid = link->u.mgd.bssid;
++	link->smps_mode = IEEE80211_SMPS_OFF;
+ 
+ 	wiphy_work_init(&link->u.mgd.request_smps_work,
+ 			ieee80211_request_smps_mgd_work);
+@@ -8106,6 +8105,7 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
+ 		ieee80211_report_disconnect(sdata, frame_buf,
+ 					    sizeof(frame_buf), true,
+ 					    req->reason_code, false);
++		drv_mgd_complete_tx(sdata->local, sdata, &info);
+ 		return 0;
+ 	}
+ 
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index 24fa06105378d..fca3f67ac0e8e 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2013-2015  Intel Mobile Communications GmbH
+  * Copyright 2016-2017  Intel Deutschland GmbH
+- * Copyright (C) 2018-2023 Intel Corporation
++ * Copyright (C) 2018-2024 Intel Corporation
+  */
+ 
+ #include <linux/if_arp.h>
+@@ -216,14 +216,18 @@ ieee80211_bss_info_update(struct ieee80211_local *local,
+ }
+ 
+ static bool ieee80211_scan_accept_presp(struct ieee80211_sub_if_data *sdata,
++					struct ieee80211_channel *channel,
+ 					u32 scan_flags, const u8 *da)
+ {
+ 	if (!sdata)
+ 		return false;
+-	/* accept broadcast for OCE */
+-	if (scan_flags & NL80211_SCAN_FLAG_ACCEPT_BCAST_PROBE_RESP &&
+-	    is_broadcast_ether_addr(da))
++
++	/* accept broadcast on 6 GHz and for OCE */
++	if (is_broadcast_ether_addr(da) &&
++	    (channel->band == NL80211_BAND_6GHZ ||
++	     scan_flags & NL80211_SCAN_FLAG_ACCEPT_BCAST_PROBE_RESP))
+ 		return true;
++
+ 	if (scan_flags & NL80211_SCAN_FLAG_RANDOM_ADDR)
+ 		return true;
+ 	return ether_addr_equal(da, sdata->vif.addr);
+@@ -272,6 +276,12 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 		wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work, 0);
+ 	}
+ 
++	channel = ieee80211_get_channel_khz(local->hw.wiphy,
++					    ieee80211_rx_status_to_khz(rx_status));
++
++	if (!channel || channel->flags & IEEE80211_CHAN_DISABLED)
++		return;
++
+ 	if (ieee80211_is_probe_resp(mgmt->frame_control)) {
+ 		struct cfg80211_scan_request *scan_req;
+ 		struct cfg80211_sched_scan_request *sched_scan_req;
+@@ -289,19 +299,15 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 		/* ignore ProbeResp to foreign address or non-bcast (OCE)
+ 		 * unless scanning with randomised address
+ 		 */
+-		if (!ieee80211_scan_accept_presp(sdata1, scan_req_flags,
++		if (!ieee80211_scan_accept_presp(sdata1, channel,
++						 scan_req_flags,
+ 						 mgmt->da) &&
+-		    !ieee80211_scan_accept_presp(sdata2, sched_scan_req_flags,
++		    !ieee80211_scan_accept_presp(sdata2, channel,
++						 sched_scan_req_flags,
+ 						 mgmt->da))
+ 			return;
+ 	}
+ 
+-	channel = ieee80211_get_channel_khz(local->hw.wiphy,
+-					ieee80211_rx_status_to_khz(rx_status));
+-
+-	if (!channel || channel->flags & IEEE80211_CHAN_DISABLED)
+-		return;
+-
+ 	bss = ieee80211_bss_info_update(local, rx_status,
+ 					mgmt, skb->len,
+ 					channel);
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index c33decbb97f2d..bcf3f727fc6da 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -913,6 +913,8 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
+ 	if (ieee80211_vif_is_mesh(&sdata->vif))
+ 		mesh_accept_plinks_update(sdata);
+ 
++	ieee80211_check_fast_xmit(sta);
++
+ 	return 0;
+  out_remove:
+ 	if (sta->sta.valid_links)
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index d7aa75b7fd917..a85918594cbe2 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3048,7 +3048,7 @@ void ieee80211_check_fast_xmit(struct sta_info *sta)
+ 	    sdata->vif.type == NL80211_IFTYPE_STATION)
+ 		goto out;
+ 
+-	if (!test_sta_flag(sta, WLAN_STA_AUTHORIZED))
++	if (!test_sta_flag(sta, WLAN_STA_AUTHORIZED) || !sta->uploaded)
+ 		goto out;
+ 
+ 	if (test_sta_flag(sta, WLAN_STA_PS_STA) ||
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 7a47a58aa54b4..6218dcd07e184 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -663,7 +663,7 @@ struct mctp_sk_key *mctp_alloc_local_tag(struct mctp_sock *msk,
+ 	spin_unlock_irqrestore(&mns->keys_lock, flags);
+ 
+ 	if (!tagbits) {
+-		kfree(key);
++		mctp_key_unref(key);
+ 		return ERR_PTR(-EBUSY);
+ 	}
+ 
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index a536586742f28..6ff6f14674aa2 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -13,17 +13,19 @@
+ #include <uapi/linux/mptcp.h>
+ #include "protocol.h"
+ 
+-static int subflow_get_info(const struct sock *sk, struct sk_buff *skb)
++static int subflow_get_info(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *sf;
+ 	struct nlattr *start;
+ 	u32 flags = 0;
++	bool slow;
+ 	int err;
+ 
+ 	start = nla_nest_start_noflag(skb, INET_ULP_INFO_MPTCP);
+ 	if (!start)
+ 		return -EMSGSIZE;
+ 
++	slow = lock_sock_fast(sk);
+ 	rcu_read_lock();
+ 	sf = rcu_dereference(inet_csk(sk)->icsk_ulp_data);
+ 	if (!sf) {
+@@ -63,17 +65,19 @@ static int subflow_get_info(const struct sock *sk, struct sk_buff *skb)
+ 			sf->map_data_len) ||
+ 	    nla_put_u32(skb, MPTCP_SUBFLOW_ATTR_FLAGS, flags) ||
+ 	    nla_put_u8(skb, MPTCP_SUBFLOW_ATTR_ID_REM, sf->remote_id) ||
+-	    nla_put_u8(skb, MPTCP_SUBFLOW_ATTR_ID_LOC, sf->local_id)) {
++	    nla_put_u8(skb, MPTCP_SUBFLOW_ATTR_ID_LOC, subflow_get_local_id(sf))) {
+ 		err = -EMSGSIZE;
+ 		goto nla_failure;
+ 	}
+ 
+ 	rcu_read_unlock();
++	unlock_sock_fast(sk, slow);
+ 	nla_nest_end(skb, start);
+ 	return 0;
+ 
+ nla_failure:
+ 	rcu_read_unlock();
++	unlock_sock_fast(sk, slow);
+ 	nla_nest_cancel(skb, start);
+ 	return err;
+ }
+diff --git a/net/mptcp/fastopen.c b/net/mptcp/fastopen.c
+index 74698582a2859..ad28da655f8bc 100644
+--- a/net/mptcp/fastopen.c
++++ b/net/mptcp/fastopen.c
+@@ -59,13 +59,12 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
+ 	mptcp_data_unlock(sk);
+ }
+ 
+-void mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
+-				   const struct mptcp_options_received *mp_opt)
++void __mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
++				     const struct mptcp_options_received *mp_opt)
+ {
+ 	struct sock *sk = (struct sock *)msk;
+ 	struct sk_buff *skb;
+ 
+-	mptcp_data_lock(sk);
+ 	skb = skb_peek_tail(&sk->sk_receive_queue);
+ 	if (skb) {
+ 		WARN_ON_ONCE(MPTCP_SKB_CB(skb)->end_seq);
+@@ -77,5 +76,4 @@ void mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_
+ 	}
+ 
+ 	pr_debug("msk=%p ack_seq=%llx", msk, msk->ack_seq);
+-	mptcp_data_unlock(sk);
+ }
+diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
+index a0990c365a2ea..c30405e768337 100644
+--- a/net/mptcp/mib.c
++++ b/net/mptcp/mib.c
+@@ -66,6 +66,7 @@ static const struct snmp_mib mptcp_snmp_list[] = {
+ 	SNMP_MIB_ITEM("RcvWndShared", MPTCP_MIB_RCVWNDSHARED),
+ 	SNMP_MIB_ITEM("RcvWndConflictUpdate", MPTCP_MIB_RCVWNDCONFLICTUPDATE),
+ 	SNMP_MIB_ITEM("RcvWndConflict", MPTCP_MIB_RCVWNDCONFLICT),
++	SNMP_MIB_ITEM("MPCurrEstab", MPTCP_MIB_CURRESTAB),
+ 	SNMP_MIB_SENTINEL
+ };
+ 
+diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
+index cae71d9472529..dd7fd1f246b5f 100644
+--- a/net/mptcp/mib.h
++++ b/net/mptcp/mib.h
+@@ -65,6 +65,7 @@ enum linux_mptcp_mib_field {
+ 					 * conflict with another subflow while updating msk rcv wnd
+ 					 */
+ 	MPTCP_MIB_RCVWNDCONFLICT,	/* Conflict with while updating msk rcv wnd */
++	MPTCP_MIB_CURRESTAB,		/* Current established MPTCP connections */
+ 	__MPTCP_MIB_MAX
+ };
+ 
+@@ -95,4 +96,11 @@ static inline void __MPTCP_INC_STATS(struct net *net,
+ 		__SNMP_INC_STATS(net->mib.mptcp_statistics, field);
+ }
+ 
++static inline void MPTCP_DEC_STATS(struct net *net,
++				   enum linux_mptcp_mib_field field)
++{
++	if (likely(net->mib.mptcp_statistics))
++		SNMP_DEC_STATS(net->mib.mptcp_statistics, field);
++}
++
+ bool mptcp_mib_alloc(struct net *net);
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index d2527d189a799..e3e96a49f9229 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -962,9 +962,7 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ 		/* subflows are fully established as soon as we get any
+ 		 * additional ack, including ADD_ADDR.
+ 		 */
+-		subflow->fully_established = 1;
+-		WRITE_ONCE(msk->fully_established, true);
+-		goto check_notify;
++		goto set_fully_established;
+ 	}
+ 
+ 	/* If the first established packet does not contain MP_CAPABLE + data
+@@ -986,7 +984,10 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ set_fully_established:
+ 	if (unlikely(!READ_ONCE(msk->pm.server_side)))
+ 		pr_warn_once("bogus mpc option on established client sk");
+-	mptcp_subflow_fully_established(subflow, mp_opt);
++
++	mptcp_data_lock((struct sock *)msk);
++	__mptcp_subflow_fully_established(msk, subflow, mp_opt);
++	mptcp_data_unlock((struct sock *)msk);
+ 
+ check_notify:
+ 	/* if the subflow is not already linked into the conn_list, we can't
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index bf4d96f6f99a6..cccb720c1c685 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -396,19 +396,6 @@ void mptcp_pm_free_anno_list(struct mptcp_sock *msk)
+ 	}
+ }
+ 
+-static bool lookup_address_in_vec(const struct mptcp_addr_info *addrs, unsigned int nr,
+-				  const struct mptcp_addr_info *addr)
+-{
+-	int i;
+-
+-	for (i = 0; i < nr; i++) {
+-		if (addrs[i].id == addr->id)
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+ /* Fill all the remote addresses into the array addrs[],
+  * and return the array size.
+  */
+@@ -440,18 +427,34 @@ static unsigned int fill_remote_addresses_vec(struct mptcp_sock *msk,
+ 		msk->pm.subflows++;
+ 		addrs[i++] = remote;
+ 	} else {
++		DECLARE_BITMAP(unavail_id, MPTCP_PM_MAX_ADDR_ID + 1);
++
++		/* Forbid creation of new subflows matching existing
++		 * ones, possibly already created by incoming ADD_ADDR
++		 */
++		bitmap_zero(unavail_id, MPTCP_PM_MAX_ADDR_ID + 1);
++		mptcp_for_each_subflow(msk, subflow)
++			if (READ_ONCE(subflow->local_id) == local->id)
++				__set_bit(subflow->remote_id, unavail_id);
++
+ 		mptcp_for_each_subflow(msk, subflow) {
+ 			ssk = mptcp_subflow_tcp_sock(subflow);
+ 			remote_address((struct sock_common *)ssk, &addrs[i]);
+-			addrs[i].id = subflow->remote_id;
++			addrs[i].id = READ_ONCE(subflow->remote_id);
+ 			if (deny_id0 && !addrs[i].id)
+ 				continue;
+ 
++			if (test_bit(addrs[i].id, unavail_id))
++				continue;
++
+ 			if (!mptcp_pm_addr_families_match(sk, local, &addrs[i]))
+ 				continue;
+ 
+-			if (!lookup_address_in_vec(addrs, i, &addrs[i]) &&
+-			    msk->pm.subflows < subflows_max) {
++			if (msk->pm.subflows < subflows_max) {
++				/* forbid creating multiple address towards
++				 * this id
++				 */
++				__set_bit(addrs[i].id, unavail_id);
+ 				msk->pm.subflows++;
+ 				i++;
+ 			}
+@@ -799,18 +802,18 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
+ 
+ 		mptcp_for_each_subflow_safe(msk, subflow, tmp) {
+ 			struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++			u8 remote_id = READ_ONCE(subflow->remote_id);
+ 			int how = RCV_SHUTDOWN | SEND_SHUTDOWN;
+-			u8 id = subflow->local_id;
++			u8 id = subflow_get_local_id(subflow);
+ 
+-			if (rm_type == MPTCP_MIB_RMADDR && subflow->remote_id != rm_id)
++			if (rm_type == MPTCP_MIB_RMADDR && remote_id != rm_id)
+ 				continue;
+ 			if (rm_type == MPTCP_MIB_RMSUBFLOW && !mptcp_local_id_match(msk, id, rm_id))
+ 				continue;
+ 
+ 			pr_debug(" -> %s rm_list_ids[%d]=%u local_id=%u remote_id=%u mpc_id=%u",
+ 				 rm_type == MPTCP_MIB_RMADDR ? "address" : "subflow",
+-				 i, rm_id, subflow->local_id, subflow->remote_id,
+-				 msk->mpc_endpoint_id);
++				 i, rm_id, id, remote_id, msk->mpc_endpoint_id);
+ 			spin_unlock_bh(&msk->pm.lock);
+ 			mptcp_subflow_shutdown(sk, ssk, how);
+ 
+@@ -901,7 +904,8 @@ static void __mptcp_pm_release_addr_entry(struct mptcp_pm_addr_entry *entry)
+ }
+ 
+ static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,
+-					     struct mptcp_pm_addr_entry *entry)
++					     struct mptcp_pm_addr_entry *entry,
++					     bool needs_id)
+ {
+ 	struct mptcp_pm_addr_entry *cur, *del_entry = NULL;
+ 	unsigned int addr_max;
+@@ -949,7 +953,7 @@ static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,
+ 		}
+ 	}
+ 
+-	if (!entry->addr.id) {
++	if (!entry->addr.id && needs_id) {
+ find_next:
+ 		entry->addr.id = find_next_zero_bit(pernet->id_bitmap,
+ 						    MPTCP_PM_MAX_ADDR_ID + 1,
+@@ -960,7 +964,7 @@ static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,
+ 		}
+ 	}
+ 
+-	if (!entry->addr.id)
++	if (!entry->addr.id && needs_id)
+ 		goto out;
+ 
+ 	__set_bit(entry->addr.id, pernet->id_bitmap);
+@@ -1048,6 +1052,11 @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk,
+ 	if (err)
+ 		return err;
+ 
++	/* We don't use mptcp_set_state() here because it needs to be called
++	 * under the msk socket lock. For the moment, that will not bring
++	 * anything more than only calling inet_sk_state_store(), because the
++	 * old status is known (TCP_CLOSE).
++	 */
+ 	inet_sk_state_store(newsk, TCP_LISTEN);
+ 	lock_sock(ssk);
+ 	err = __inet_listen_sk(ssk, backlog);
+@@ -1087,7 +1096,7 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc
+ 	entry->ifindex = 0;
+ 	entry->flags = MPTCP_PM_ADDR_FLAG_IMPLICIT;
+ 	entry->lsk = NULL;
+-	ret = mptcp_pm_nl_append_new_local_addr(pernet, entry);
++	ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true);
+ 	if (ret < 0)
+ 		kfree(entry);
+ 
+@@ -1280,6 +1289,18 @@ static int mptcp_nl_add_subflow_or_signal_addr(struct net *net)
+ 	return 0;
+ }
+ 
++static bool mptcp_pm_has_addr_attr_id(const struct nlattr *attr,
++				      struct genl_info *info)
++{
++	struct nlattr *tb[MPTCP_PM_ADDR_ATTR_MAX + 1];
++
++	if (!nla_parse_nested_deprecated(tb, MPTCP_PM_ADDR_ATTR_MAX, attr,
++					 mptcp_pm_address_nl_policy, info->extack) &&
++	    tb[MPTCP_PM_ADDR_ATTR_ID])
++		return true;
++	return false;
++}
++
+ int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info)
+ {
+ 	struct nlattr *attr = info->attrs[MPTCP_PM_ENDPOINT_ADDR];
+@@ -1321,7 +1342,8 @@ int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info)
+ 			goto out_free;
+ 		}
+ 	}
+-	ret = mptcp_pm_nl_append_new_local_addr(pernet, entry);
++	ret = mptcp_pm_nl_append_new_local_addr(pernet, entry,
++						!mptcp_pm_has_addr_attr_id(attr, info));
+ 	if (ret < 0) {
+ 		GENL_SET_ERR_MSG_FMT(info, "too many addresses or duplicate one: %d", ret);
+ 		goto out_free;
+@@ -1975,7 +1997,7 @@ static int mptcp_event_add_subflow(struct sk_buff *skb, const struct sock *ssk)
+ 	if (WARN_ON_ONCE(!sf))
+ 		return -EINVAL;
+ 
+-	if (nla_put_u8(skb, MPTCP_ATTR_LOC_ID, sf->local_id))
++	if (nla_put_u8(skb, MPTCP_ATTR_LOC_ID, subflow_get_local_id(sf)))
+ 		return -EMSGSIZE;
+ 
+ 	if (nla_put_u8(skb, MPTCP_ATTR_REM_ID, sf->remote_id))
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index 80857ef3b90f5..01b3a8f2f0e9c 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -26,7 +26,8 @@ void mptcp_free_local_addr_list(struct mptcp_sock *msk)
+ }
+ 
+ static int mptcp_userspace_pm_append_new_local_addr(struct mptcp_sock *msk,
+-						    struct mptcp_pm_addr_entry *entry)
++						    struct mptcp_pm_addr_entry *entry,
++						    bool needs_id)
+ {
+ 	DECLARE_BITMAP(id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1);
+ 	struct mptcp_pm_addr_entry *match = NULL;
+@@ -41,7 +42,7 @@ static int mptcp_userspace_pm_append_new_local_addr(struct mptcp_sock *msk,
+ 	spin_lock_bh(&msk->pm.lock);
+ 	list_for_each_entry(e, &msk->pm.userspace_pm_local_addr_list, list) {
+ 		addr_match = mptcp_addresses_equal(&e->addr, &entry->addr, true);
+-		if (addr_match && entry->addr.id == 0)
++		if (addr_match && entry->addr.id == 0 && needs_id)
+ 			entry->addr.id = e->addr.id;
+ 		id_match = (e->addr.id == entry->addr.id);
+ 		if (addr_match && id_match) {
+@@ -64,7 +65,7 @@ static int mptcp_userspace_pm_append_new_local_addr(struct mptcp_sock *msk,
+ 		}
+ 
+ 		*e = *entry;
+-		if (!e->addr.id)
++		if (!e->addr.id && needs_id)
+ 			e->addr.id = find_next_zero_bit(id_bitmap,
+ 							MPTCP_PM_MAX_ADDR_ID + 1,
+ 							1);
+@@ -153,7 +154,7 @@ int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk,
+ 	if (new_entry.addr.port == msk_sport)
+ 		new_entry.addr.port = 0;
+ 
+-	return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry);
++	return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry, true);
+ }
+ 
+ int mptcp_pm_nl_announce_doit(struct sk_buff *skb, struct genl_info *info)
+@@ -198,7 +199,7 @@ int mptcp_pm_nl_announce_doit(struct sk_buff *skb, struct genl_info *info)
+ 		goto announce_err;
+ 	}
+ 
+-	err = mptcp_userspace_pm_append_new_local_addr(msk, &addr_val);
++	err = mptcp_userspace_pm_append_new_local_addr(msk, &addr_val, false);
+ 	if (err < 0) {
+ 		GENL_SET_ERR_MSG(info, "did not match address and id");
+ 		goto announce_err;
+@@ -233,7 +234,7 @@ static int mptcp_userspace_pm_remove_id_zero_address(struct mptcp_sock *msk,
+ 
+ 	lock_sock(sk);
+ 	mptcp_for_each_subflow(msk, subflow) {
+-		if (subflow->local_id == 0) {
++		if (READ_ONCE(subflow->local_id) == 0) {
+ 			has_id_0 = true;
+ 			break;
+ 		}
+@@ -378,7 +379,7 @@ int mptcp_pm_nl_subflow_create_doit(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	local.addr = addr_l;
+-	err = mptcp_userspace_pm_append_new_local_addr(msk, &local);
++	err = mptcp_userspace_pm_append_new_local_addr(msk, &local, false);
+ 	if (err < 0) {
+ 		GENL_SET_ERR_MSG(info, "did not match address and id");
+ 		goto create_err;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 0b42ce7de45cc..5305f2ff0fd21 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -99,7 +99,7 @@ static int __mptcp_socket_create(struct mptcp_sock *msk)
+ 	subflow->subflow_id = msk->subflow_id++;
+ 
+ 	/* This is the first subflow, always with id 0 */
+-	subflow->local_id_valid = 1;
++	WRITE_ONCE(subflow->local_id, 0);
+ 	mptcp_sock_graft(msk->first, sk->sk_socket);
+ 	iput(SOCK_INODE(ssock));
+ 
+@@ -443,11 +443,11 @@ static void mptcp_check_data_fin_ack(struct sock *sk)
+ 
+ 		switch (sk->sk_state) {
+ 		case TCP_FIN_WAIT1:
+-			inet_sk_state_store(sk, TCP_FIN_WAIT2);
++			mptcp_set_state(sk, TCP_FIN_WAIT2);
+ 			break;
+ 		case TCP_CLOSING:
+ 		case TCP_LAST_ACK:
+-			inet_sk_state_store(sk, TCP_CLOSE);
++			mptcp_set_state(sk, TCP_CLOSE);
+ 			break;
+ 		}
+ 
+@@ -608,13 +608,13 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ 
+ 		switch (sk->sk_state) {
+ 		case TCP_ESTABLISHED:
+-			inet_sk_state_store(sk, TCP_CLOSE_WAIT);
++			mptcp_set_state(sk, TCP_CLOSE_WAIT);
+ 			break;
+ 		case TCP_FIN_WAIT1:
+-			inet_sk_state_store(sk, TCP_CLOSING);
++			mptcp_set_state(sk, TCP_CLOSING);
+ 			break;
+ 		case TCP_FIN_WAIT2:
+-			inet_sk_state_store(sk, TCP_CLOSE);
++			mptcp_set_state(sk, TCP_CLOSE);
+ 			break;
+ 		default:
+ 			/* Other states not expected */
+@@ -789,7 +789,7 @@ static bool __mptcp_subflow_error_report(struct sock *sk, struct sock *ssk)
+ 	 */
+ 	ssk_state = inet_sk_state_load(ssk);
+ 	if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD))
+-		inet_sk_state_store(sk, ssk_state);
++		mptcp_set_state(sk, ssk_state);
+ 	WRITE_ONCE(sk->sk_err, -err);
+ 
+ 	/* This barrier is coupled with smp_rmb() in mptcp_poll() */
+@@ -2480,7 +2480,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 	    inet_sk_state_load(msk->first) == TCP_CLOSE) {
+ 		if (sk->sk_state != TCP_ESTABLISHED ||
+ 		    msk->in_accept_queue || sock_flag(sk, SOCK_DEAD)) {
+-			inet_sk_state_store(sk, TCP_CLOSE);
++			mptcp_set_state(sk, TCP_CLOSE);
+ 			mptcp_close_wake_up(sk);
+ 		} else {
+ 			mptcp_start_tout_timer(sk);
+@@ -2575,7 +2575,7 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
+ 		WRITE_ONCE(sk->sk_err, ECONNRESET);
+ 	}
+ 
+-	inet_sk_state_store(sk, TCP_CLOSE);
++	mptcp_set_state(sk, TCP_CLOSE);
+ 	WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);
+ 	smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
+ 	set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags);
+@@ -2710,7 +2710,7 @@ static void mptcp_do_fastclose(struct sock *sk)
+ 	struct mptcp_subflow_context *subflow, *tmp;
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
+ 
+-	inet_sk_state_store(sk, TCP_CLOSE);
++	mptcp_set_state(sk, TCP_CLOSE);
+ 	mptcp_for_each_subflow_safe(msk, subflow, tmp)
+ 		__mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow),
+ 				  subflow, MPTCP_CF_FASTCLOSE);
+@@ -2888,6 +2888,24 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
+ 	release_sock(ssk);
+ }
+ 
++void mptcp_set_state(struct sock *sk, int state)
++{
++	int oldstate = sk->sk_state;
++
++	switch (state) {
++	case TCP_ESTABLISHED:
++		if (oldstate != TCP_ESTABLISHED)
++			MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_CURRESTAB);
++		break;
++
++	default:
++		if (oldstate == TCP_ESTABLISHED)
++			MPTCP_DEC_STATS(sock_net(sk), MPTCP_MIB_CURRESTAB);
++	}
++
++	inet_sk_state_store(sk, state);
++}
++
+ static const unsigned char new_state[16] = {
+ 	/* current state:     new state:      action:	*/
+ 	[0 /* (Invalid) */] = TCP_CLOSE,
+@@ -2910,7 +2928,7 @@ static int mptcp_close_state(struct sock *sk)
+ 	int next = (int)new_state[sk->sk_state];
+ 	int ns = next & TCP_STATE_MASK;
+ 
+-	inet_sk_state_store(sk, ns);
++	mptcp_set_state(sk, ns);
+ 
+ 	return next & TCP_ACTION_FIN;
+ }
+@@ -3021,7 +3039,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) {
+ 		mptcp_check_listen_stop(sk);
+-		inet_sk_state_store(sk, TCP_CLOSE);
++		mptcp_set_state(sk, TCP_CLOSE);
+ 		goto cleanup;
+ 	}
+ 
+@@ -3064,7 +3082,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
+ 	 * state, let's not keep resources busy for no reasons
+ 	 */
+ 	if (subflows_alive == 0)
+-		inet_sk_state_store(sk, TCP_CLOSE);
++		mptcp_set_state(sk, TCP_CLOSE);
+ 
+ 	sock_hold(sk);
+ 	pr_debug("msk=%p state=%d", sk, sk->sk_state);
+@@ -3130,7 +3148,7 @@ static int mptcp_disconnect(struct sock *sk, int flags)
+ 		return -EBUSY;
+ 
+ 	mptcp_check_listen_stop(sk);
+-	inet_sk_state_store(sk, TCP_CLOSE);
++	mptcp_set_state(sk, TCP_CLOSE);
+ 
+ 	mptcp_stop_rtx_timer(sk);
+ 	mptcp_stop_tout_timer(sk);
+@@ -3182,6 +3200,7 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
+ {
+ 	struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);
+ 	struct sock *nsk = sk_clone_lock(sk, GFP_ATOMIC);
++	struct mptcp_subflow_context *subflow;
+ 	struct mptcp_sock *msk;
+ 
+ 	if (!nsk)
+@@ -3218,11 +3237,12 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
+ 	/* this can't race with mptcp_close(), as the msk is
+ 	 * not yet exposted to user-space
+ 	 */
+-	inet_sk_state_store(nsk, TCP_ESTABLISHED);
++	mptcp_set_state(nsk, TCP_ESTABLISHED);
+ 
+ 	/* The msk maintain a ref to each subflow in the connections list */
+ 	WRITE_ONCE(msk->first, ssk);
+-	list_add(&mptcp_subflow_ctx(ssk)->node, &msk->conn_list);
++	subflow = mptcp_subflow_ctx(ssk);
++	list_add(&subflow->node, &msk->conn_list);
+ 	sock_hold(ssk);
+ 
+ 	/* new mpc subflow takes ownership of the newly
+@@ -3237,6 +3257,9 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
+ 	__mptcp_propagate_sndbuf(nsk, ssk);
+ 
+ 	mptcp_rcv_space_init(msk, ssk);
++
++	if (mp_opt->suboptions & OPTION_MPTCP_MPC_ACK)
++		__mptcp_subflow_fully_established(msk, subflow, mp_opt);
+ 	bh_unlock_sock(nsk);
+ 
+ 	/* note: the newly allocated socket refcount is 2 now */
+@@ -3512,10 +3535,6 @@ void mptcp_finish_connect(struct sock *ssk)
+ 	 * accessing the field below
+ 	 */
+ 	WRITE_ONCE(msk->local_key, subflow->local_key);
+-	WRITE_ONCE(msk->write_seq, subflow->idsn + 1);
+-	WRITE_ONCE(msk->snd_nxt, msk->write_seq);
+-	WRITE_ONCE(msk->snd_una, msk->write_seq);
+-	WRITE_ONCE(msk->wnd_end, msk->snd_nxt + tcp_sk(ssk)->snd_wnd);
+ 
+ 	mptcp_pm_new_connection(msk, ssk, 0);
+ }
+@@ -3673,7 +3692,7 @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	if (IS_ERR(ssk))
+ 		return PTR_ERR(ssk);
+ 
+-	inet_sk_state_store(sk, TCP_SYN_SENT);
++	mptcp_set_state(sk, TCP_SYN_SENT);
+ 	subflow = mptcp_subflow_ctx(ssk);
+ #ifdef CONFIG_TCP_MD5SIG
+ 	/* no MPTCP if MD5SIG is enabled on this socket or we may run out of
+@@ -3723,7 +3742,7 @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 	if (unlikely(err)) {
+ 		/* avoid leaving a dangling token in an unconnected socket */
+ 		mptcp_token_destroy(msk);
+-		inet_sk_state_store(sk, TCP_CLOSE);
++		mptcp_set_state(sk, TCP_CLOSE);
+ 		return err;
+ 	}
+ 
+@@ -3813,13 +3832,13 @@ static int mptcp_listen(struct socket *sock, int backlog)
+ 		goto unlock;
+ 	}
+ 
+-	inet_sk_state_store(sk, TCP_LISTEN);
++	mptcp_set_state(sk, TCP_LISTEN);
+ 	sock_set_flag(sk, SOCK_RCU_FREE);
+ 
+ 	lock_sock(ssk);
+ 	err = __inet_listen_sk(ssk, backlog);
+ 	release_sock(ssk);
+-	inet_sk_state_store(sk, inet_sk_state_load(ssk));
++	mptcp_set_state(sk, inet_sk_state_load(ssk));
+ 
+ 	if (!err) {
+ 		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+@@ -3879,7 +3898,7 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
+ 			__mptcp_close_ssk(newsk, msk->first,
+ 					  mptcp_subflow_ctx(msk->first), 0);
+ 			if (unlikely(list_is_singular(&msk->conn_list)))
+-				inet_sk_state_store(newsk, TCP_CLOSE);
++				mptcp_set_state(newsk, TCP_CLOSE);
+ 		}
+ 	}
+ 	release_sock(newsk);
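The mptcp_set_state() helper introduced in the protocol.c hunks above centralizes socket state stores so that the MPTCP_MIB_CURRESTAB counter is touched only on edges into or out of TCP_ESTABLISHED, never on same-state stores. A minimal, self-contained sketch of that edge-counting pattern; the enum, set_state() and cur_estab below are illustrative stand-ins, not kernel APIs:

#include <stdio.h>

enum state { ST_CLOSE, ST_SYN_SENT, ST_ESTABLISHED, ST_LISTEN };

static int cur_estab;	/* stands in for the MPTCP_MIB_CURRESTAB counter */

static void set_state(enum state *sk_state, enum state next)
{
	enum state old = *sk_state;

	if (next == ST_ESTABLISHED && old != ST_ESTABLISHED)
		cur_estab++;	/* edge into ESTABLISHED */
	else if (next != ST_ESTABLISHED && old == ST_ESTABLISHED)
		cur_estab--;	/* edge out of ESTABLISHED */

	*sk_state = next;	/* the actual state store happens last */
}

int main(void)
{
	enum state sk = ST_CLOSE;

	set_state(&sk, ST_SYN_SENT);
	set_state(&sk, ST_ESTABLISHED);
	set_state(&sk, ST_ESTABLISHED);	/* same-state store: no double count */
	set_state(&sk, ST_CLOSE);
	printf("cur_estab = %d\n", cur_estab);	/* prints 0 */
	return 0;
}

Keeping the counter update and the store in one helper is what lets every call site above switch from inet_sk_state_store() mechanically.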
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 27f3fedb9c366..3e50baba1b2a9 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -491,10 +491,9 @@ struct mptcp_subflow_context {
+ 		remote_key_valid : 1,        /* received the peer key from */
+ 		disposable : 1,	    /* ctx can be freed at ulp release time */
+ 		stale : 1,	    /* unable to snd/rcv data, do not use for xmit */
+-		local_id_valid : 1, /* local_id is correctly initialized */
+ 		valid_csum_seen : 1,        /* at least one csum validated */
+ 		is_mptfo : 1,	    /* subflow is doing TFO */
+-		__unused : 9;
++		__unused : 10;
+ 	bool	data_avail;
+ 	bool	scheduled;
+ 	u32	remote_nonce;
+@@ -505,7 +504,7 @@ struct mptcp_subflow_context {
+ 		u8	hmac[MPTCPOPT_HMAC_LEN]; /* MPJ subflow only */
+ 		u64	iasn;	    /* initial ack sequence number, MPC subflows only */
+ 	};
+-	u8	local_id;
++	s16	local_id;	    /* if negative not initialized yet */
+ 	u8	remote_id;
+ 	u8	reset_seen:1;
+ 	u8	reset_transient:1;
+@@ -556,6 +555,7 @@ mptcp_subflow_ctx_reset(struct mptcp_subflow_context *subflow)
+ {
+ 	memset(&subflow->reset, 0, sizeof(subflow->reset));
+ 	subflow->request_mptcp = 1;
++	WRITE_ONCE(subflow->local_id, -1);
+ }
+ 
+ static inline u64
+@@ -622,8 +622,9 @@ unsigned int mptcp_stale_loss_cnt(const struct net *net);
+ unsigned int mptcp_close_timeout(const struct sock *sk);
+ int mptcp_get_pm_type(const struct net *net);
+ const char *mptcp_get_scheduler(const struct net *net);
+-void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
+-				     const struct mptcp_options_received *mp_opt);
++void __mptcp_subflow_fully_established(struct mptcp_sock *msk,
++				       struct mptcp_subflow_context *subflow,
++				       const struct mptcp_options_received *mp_opt);
+ bool __mptcp_retransmit_pending_data(struct sock *sk);
+ void mptcp_check_and_set_pending(struct sock *sk);
+ void __mptcp_push_pending(struct sock *sk, unsigned int flags);
+@@ -641,6 +642,7 @@ bool __mptcp_close(struct sock *sk, long timeout);
+ void mptcp_cancel_work(struct sock *sk);
+ void __mptcp_unaccepted_force_close(struct sock *sk);
+ void mptcp_set_owner_r(struct sk_buff *skb, struct sock *sk);
++void mptcp_set_state(struct sock *sk, int state);
+ 
+ bool mptcp_addresses_equal(const struct mptcp_addr_info *a,
+ 			   const struct mptcp_addr_info *b, bool use_port);
+@@ -951,8 +953,8 @@ void mptcp_event_pm_listener(const struct sock *ssk,
+ 			     enum mptcp_event_type event);
+ bool mptcp_userspace_pm_active(const struct mptcp_sock *msk);
+ 
+-void mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
+-				   const struct mptcp_options_received *mp_opt);
++void __mptcp_fastopen_gen_msk_ackseq(struct mptcp_sock *msk, struct mptcp_subflow_context *subflow,
++				     const struct mptcp_options_received *mp_opt);
+ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subflow,
+ 					      struct request_sock *req);
+ 
+@@ -1020,6 +1022,15 @@ int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc);
+ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
+ int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc);
+ 
++static inline u8 subflow_get_local_id(const struct mptcp_subflow_context *subflow)
++{
++	int local_id = READ_ONCE(subflow->local_id);
++
++	if (local_id < 0)
++		return 0;
++	return local_id;
++}
++
+ void __init mptcp_pm_nl_init(void);
+ void mptcp_pm_nl_work(struct mptcp_sock *msk);
+ void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk,
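The protocol.h hunks above replace the separate local_id_valid bit with a widened s16 local_id, where a negative value means "not assigned yet", and add subflow_get_local_id() to clamp the sentinel for readers that must report a u8. A standalone sketch of the sentinel idiom under those assumptions, with illustrative names:

#include <assert.h>
#include <stdint.h>

struct subflow {
	int16_t local_id;	/* negative: not initialized yet */
};

static void subflow_init(struct subflow *sf)
{
	sf->local_id = -1;	/* sentinel outside the valid 0..255 id range */
}

static void subflow_set_local_id(struct subflow *sf, int id)
{
	assert(id >= 0 && id <= 255);	/* mirrors the WARN_ON_ONCE() above */
	sf->local_id = (int16_t)id;
}

static uint8_t subflow_report_local_id(const struct subflow *sf)
{
	int id = sf->local_id;

	return id < 0 ? 0 : (uint8_t)id;	/* clamp the sentinel to 0 */
}

int main(void)
{
	struct subflow sf;

	subflow_init(&sf);
	assert(subflow_report_local_id(&sf) == 0);	/* unset reads as 0 */
	subflow_set_local_id(&sf, 3);
	assert(subflow_report_local_id(&sf) == 3);
	return 0;
}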
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 5155ce5b71088..71ba86246ff89 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -421,31 +421,26 @@ static bool subflow_use_different_dport(struct mptcp_sock *msk, const struct soc
+ 
+ void __mptcp_sync_state(struct sock *sk, int state)
+ {
++	struct mptcp_subflow_context *subflow;
+ 	struct mptcp_sock *msk = mptcp_sk(sk);
++	struct sock *ssk = msk->first;
+ 
+-	__mptcp_propagate_sndbuf(sk, msk->first);
++	subflow = mptcp_subflow_ctx(ssk);
++	__mptcp_propagate_sndbuf(sk, ssk);
+ 	if (!msk->rcvspace_init)
+-		mptcp_rcv_space_init(msk, msk->first);
++		mptcp_rcv_space_init(msk, ssk);
++
+ 	if (sk->sk_state == TCP_SYN_SENT) {
+-		inet_sk_state_store(sk, state);
++		/* subflow->idsn is always available in TCP_SYN_SENT state,
++		 * even for the FASTOPEN scenarios
++		 */
++		WRITE_ONCE(msk->write_seq, subflow->idsn + 1);
++		WRITE_ONCE(msk->snd_nxt, msk->write_seq);
++		mptcp_set_state(sk, state);
+ 		sk->sk_state_change(sk);
+ 	}
+ }
+ 
+-static void mptcp_propagate_state(struct sock *sk, struct sock *ssk)
+-{
+-	struct mptcp_sock *msk = mptcp_sk(sk);
+-
+-	mptcp_data_lock(sk);
+-	if (!sock_owned_by_user(sk)) {
+-		__mptcp_sync_state(sk, ssk->sk_state);
+-	} else {
+-		msk->pending_state = ssk->sk_state;
+-		__set_bit(MPTCP_SYNC_STATE, &msk->cb_flags);
+-	}
+-	mptcp_data_unlock(sk);
+-}
+-
+ static void subflow_set_remote_key(struct mptcp_sock *msk,
+ 				   struct mptcp_subflow_context *subflow,
+ 				   const struct mptcp_options_received *mp_opt)
+@@ -467,6 +462,31 @@ static void subflow_set_remote_key(struct mptcp_sock *msk,
+ 	atomic64_set(&msk->rcv_wnd_sent, subflow->iasn);
+ }
+ 
++static void mptcp_propagate_state(struct sock *sk, struct sock *ssk,
++				  struct mptcp_subflow_context *subflow,
++				  const struct mptcp_options_received *mp_opt)
++{
++	struct mptcp_sock *msk = mptcp_sk(sk);
++
++	mptcp_data_lock(sk);
++	if (mp_opt) {
++		/* Options are available only in the non fallback cases
++		 * avoid updating rx path fields otherwise
++		 */
++		WRITE_ONCE(msk->snd_una, subflow->idsn + 1);
++		WRITE_ONCE(msk->wnd_end, subflow->idsn + 1 + tcp_sk(ssk)->snd_wnd);
++		subflow_set_remote_key(msk, subflow, mp_opt);
++	}
++
++	if (!sock_owned_by_user(sk)) {
++		__mptcp_sync_state(sk, ssk->sk_state);
++	} else {
++		msk->pending_state = ssk->sk_state;
++		__set_bit(MPTCP_SYNC_STATE, &msk->cb_flags);
++	}
++	mptcp_data_unlock(sk);
++}
++
+ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+@@ -501,10 +521,9 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 		if (mp_opt.deny_join_id0)
+ 			WRITE_ONCE(msk->pm.remote_deny_join_id0, true);
+ 		subflow->mp_capable = 1;
+-		subflow_set_remote_key(msk, subflow, &mp_opt);
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVEACK);
+ 		mptcp_finish_connect(sk);
+-		mptcp_propagate_state(parent, sk);
++		mptcp_propagate_state(parent, sk, subflow, &mp_opt);
+ 	} else if (subflow->request_join) {
+ 		u8 hmac[SHA256_DIGEST_SIZE];
+ 
+@@ -516,7 +535,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 		subflow->backup = mp_opt.backup;
+ 		subflow->thmac = mp_opt.thmac;
+ 		subflow->remote_nonce = mp_opt.nonce;
+-		subflow->remote_id = mp_opt.join_id;
++		WRITE_ONCE(subflow->remote_id, mp_opt.join_id);
+ 		pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u backup=%d",
+ 			 subflow, subflow->thmac, subflow->remote_nonce,
+ 			 subflow->backup);
+@@ -547,7 +566,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 		}
+ 	} else if (mptcp_check_fallback(sk)) {
+ fallback:
+-		mptcp_propagate_state(parent, sk);
++		mptcp_propagate_state(parent, sk, subflow, NULL);
+ 	}
+ 	return;
+ 
+@@ -558,8 +577,8 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 
+ static void subflow_set_local_id(struct mptcp_subflow_context *subflow, int local_id)
+ {
+-	subflow->local_id = local_id;
+-	subflow->local_id_valid = 1;
++	WARN_ON_ONCE(local_id < 0 || local_id > 255);
++	WRITE_ONCE(subflow->local_id, local_id);
+ }
+ 
+ static int subflow_chk_local_id(struct sock *sk)
+@@ -568,7 +587,7 @@ static int subflow_chk_local_id(struct sock *sk)
+ 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+ 	int err;
+ 
+-	if (likely(subflow->local_id_valid))
++	if (likely(subflow->local_id >= 0))
+ 		return 0;
+ 
+ 	err = mptcp_pm_get_local_id(msk, (struct sock_common *)sk);
+@@ -732,17 +751,16 @@ void mptcp_subflow_drop_ctx(struct sock *ssk)
+ 	kfree_rcu(ctx, rcu);
+ }
+ 
+-void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
+-				     const struct mptcp_options_received *mp_opt)
++void __mptcp_subflow_fully_established(struct mptcp_sock *msk,
++				       struct mptcp_subflow_context *subflow,
++				       const struct mptcp_options_received *mp_opt)
+ {
+-	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+-
+ 	subflow_set_remote_key(msk, subflow, mp_opt);
+ 	subflow->fully_established = 1;
+ 	WRITE_ONCE(msk->fully_established, true);
+ 
+ 	if (subflow->is_mptfo)
+-		mptcp_fastopen_gen_msk_ackseq(msk, subflow, mp_opt);
++		__mptcp_fastopen_gen_msk_ackseq(msk, subflow, mp_opt);
+ }
+ 
+ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+@@ -835,7 +853,6 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 			 * mpc option
+ 			 */
+ 			if (mp_opt.suboptions & OPTION_MPTCP_MPC_ACK) {
+-				mptcp_subflow_fully_established(ctx, &mp_opt);
+ 				mptcp_pm_fully_established(owner, child);
+ 				ctx->pm_notified = 1;
+ 			}
+@@ -1550,7 +1567,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
+ 	pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk,
+ 		 remote_token, local_id, remote_id);
+ 	subflow->remote_token = remote_token;
+-	subflow->remote_id = remote_id;
++	WRITE_ONCE(subflow->remote_id, remote_id);
+ 	subflow->request_join = 1;
+ 	subflow->request_bkup = !!(flags & MPTCP_PM_ADDR_FLAG_BACKUP);
+ 	subflow->subflow_id = msk->subflow_id++;
+@@ -1714,6 +1731,7 @@ static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk,
+ 	pr_debug("subflow=%p", ctx);
+ 
+ 	ctx->tcp_sock = sk;
++	WRITE_ONCE(ctx->local_id, -1);
+ 
+ 	return ctx;
+ }
+@@ -1747,7 +1765,7 @@ static void subflow_state_change(struct sock *sk)
+ 		mptcp_do_fallback(sk);
+ 		pr_fallback(msk);
+ 		subflow->conn_finished = 1;
+-		mptcp_propagate_state(parent, sk);
++		mptcp_propagate_state(parent, sk, subflow, NULL);
+ 	}
+ 
+ 	/* as recvmsg() does not acquire the subflow socket for ssk selection
+@@ -1949,14 +1967,14 @@ static void subflow_ulp_clone(const struct request_sock *req,
+ 		new_ctx->idsn = subflow_req->idsn;
+ 
+ 		/* this is the first subflow, id is always 0 */
+-		new_ctx->local_id_valid = 1;
++		subflow_set_local_id(new_ctx, 0);
+ 	} else if (subflow_req->mp_join) {
+ 		new_ctx->ssn_offset = subflow_req->ssn_offset;
+ 		new_ctx->mp_join = 1;
+ 		new_ctx->fully_established = 1;
+ 		new_ctx->remote_key_valid = 1;
+ 		new_ctx->backup = subflow_req->backup;
+-		new_ctx->remote_id = subflow_req->remote_id;
++		WRITE_ONCE(new_ctx->remote_id, subflow_req->remote_id);
+ 		new_ctx->token = subflow_req->token;
+ 		new_ctx->thmac = subflow_req->thmac;
+ 
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index c6bd533983c1f..4cc97f971264e 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -283,7 +283,7 @@ sctp_new(struct nf_conn *ct, const struct sk_buff *skb,
+ 			pr_debug("Setting vtag %x for secondary conntrack\n",
+ 				 sh->vtag);
+ 			ct->proto.sctp.vtag[IP_CT_DIR_ORIGINAL] = sh->vtag;
+-		} else {
++		} else if (sch->type == SCTP_CID_SHUTDOWN_ACK) {
+ 		/* If it is a shutdown ack OOTB packet, we expect a return
+ 		   shutdown complete, otherwise an ABORT Sec 8.4 (5) and (8) */
+ 			pr_debug("Setting vtag %x for new conn OOTB\n",
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index 920a5a29ae1dc..a0571339239c4 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -87,12 +87,22 @@ static u32 flow_offload_dst_cookie(struct flow_offload_tuple *flow_tuple)
+ 	return 0;
+ }
+ 
++static struct dst_entry *nft_route_dst_fetch(struct nf_flow_route *route,
++					     enum flow_offload_tuple_dir dir)
++{
++	struct dst_entry *dst = route->tuple[dir].dst;
++
++	route->tuple[dir].dst = NULL;
++
++	return dst;
++}
++
+ static int flow_offload_fill_route(struct flow_offload *flow,
+-				   const struct nf_flow_route *route,
++				   struct nf_flow_route *route,
+ 				   enum flow_offload_tuple_dir dir)
+ {
+ 	struct flow_offload_tuple *flow_tuple = &flow->tuplehash[dir].tuple;
+-	struct dst_entry *dst = route->tuple[dir].dst;
++	struct dst_entry *dst = nft_route_dst_fetch(route, dir);
+ 	int i, j = 0;
+ 
+ 	switch (flow_tuple->l3proto) {
+@@ -122,6 +132,7 @@ static int flow_offload_fill_route(struct flow_offload *flow,
+ 		       ETH_ALEN);
+ 		flow_tuple->out.ifidx = route->tuple[dir].out.ifindex;
+ 		flow_tuple->out.hw_ifidx = route->tuple[dir].out.hw_ifindex;
++		dst_release(dst);
+ 		break;
+ 	case FLOW_OFFLOAD_XMIT_XFRM:
+ 	case FLOW_OFFLOAD_XMIT_NEIGH:
+@@ -146,7 +157,7 @@ static void nft_flow_dst_release(struct flow_offload *flow,
+ }
+ 
+ void flow_offload_route_init(struct flow_offload *flow,
+-			    const struct nf_flow_route *route)
++			     struct nf_flow_route *route)
+ {
+ 	flow_offload_fill_route(flow, route, FLOW_OFFLOAD_DIR_ORIGINAL);
+ 	flow_offload_fill_route(flow, route, FLOW_OFFLOAD_DIR_REPLY);
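nft_route_dst_fetch() above pairs the read of route->tuple[dir].dst with clearing the slot, so the caller takes over the reference and a later blind release of the slot cannot double-drop it. A hedged sketch of the fetch-and-clear ownership transfer, using a toy refcount rather than real dst_entry handling:

#include <assert.h>
#include <stdlib.h>

struct ref { int refcnt; };

static void ref_put(struct ref *r)
{
	if (r && --r->refcnt == 0)
		free(r);
}

struct route_slot { struct ref *dst; };

static struct ref *slot_fetch(struct route_slot *slot)
{
	struct ref *dst = slot->dst;

	slot->dst = NULL;	/* ownership moves to the caller */
	return dst;
}

int main(void)
{
	struct route_slot slot = { .dst = malloc(sizeof(struct ref)) };
	struct ref *dst;

	assert(slot.dst);
	slot.dst->refcnt = 1;

	dst = slot_fetch(&slot);
	assert(slot.dst == NULL);	/* the slot can now be torn down blindly */
	ref_put(dst);			/* single, unambiguous release */
	ref_put(slot.dst);		/* NULL: a second release is a no-op */
	return 0;
}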
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 04c5aa4debc74..79e088e6f103e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -684,15 +684,16 @@ static int nft_delobj(struct nft_ctx *ctx, struct nft_object *obj)
+ 	return err;
+ }
+ 
+-static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
+-				   struct nft_flowtable *flowtable)
++static struct nft_trans *
++nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
++		        struct nft_flowtable *flowtable)
+ {
+ 	struct nft_trans *trans;
+ 
+ 	trans = nft_trans_alloc(ctx, msg_type,
+ 				sizeof(struct nft_trans_flowtable));
+ 	if (trans == NULL)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	if (msg_type == NFT_MSG_NEWFLOWTABLE)
+ 		nft_activate_next(ctx->net, flowtable);
+@@ -701,22 +702,22 @@ static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
+ 	nft_trans_flowtable(trans) = flowtable;
+ 	nft_trans_commit_list_add_tail(ctx->net, trans);
+ 
+-	return 0;
++	return trans;
+ }
+ 
+ static int nft_delflowtable(struct nft_ctx *ctx,
+ 			    struct nft_flowtable *flowtable)
+ {
+-	int err;
++	struct nft_trans *trans;
+ 
+-	err = nft_trans_flowtable_add(ctx, NFT_MSG_DELFLOWTABLE, flowtable);
+-	if (err < 0)
+-		return err;
++	trans = nft_trans_flowtable_add(ctx, NFT_MSG_DELFLOWTABLE, flowtable);
++	if (IS_ERR(trans))
++		return PTR_ERR(trans);
+ 
+ 	nft_deactivate_next(ctx->net, flowtable);
+ 	nft_use_dec(&ctx->table->use);
+ 
+-	return err;
++	return 0;
+ }
+ 
+ static void __nft_reg_track_clobber(struct nft_regs_track *track, u8 dreg)
+@@ -1251,6 +1252,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 	return 0;
+ 
+ err_register_hooks:
++	ctx->table->flags |= NFT_TABLE_F_DORMANT;
+ 	nft_trans_destroy(trans);
+ 	return ret;
+ }
+@@ -2080,7 +2082,7 @@ static struct nft_hook *nft_netdev_hook_alloc(struct net *net,
+ 	struct nft_hook *hook;
+ 	int err;
+ 
+-	hook = kmalloc(sizeof(struct nft_hook), GFP_KERNEL_ACCOUNT);
++	hook = kzalloc(sizeof(struct nft_hook), GFP_KERNEL_ACCOUNT);
+ 	if (!hook) {
+ 		err = -ENOMEM;
+ 		goto err_hook_alloc;
+@@ -2503,19 +2505,15 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 	RCU_INIT_POINTER(chain->blob_gen_0, blob);
+ 	RCU_INIT_POINTER(chain->blob_gen_1, blob);
+ 
+-	err = nf_tables_register_hook(net, table, chain);
+-	if (err < 0)
+-		goto err_destroy_chain;
+-
+ 	if (!nft_use_inc(&table->use)) {
+ 		err = -EMFILE;
+-		goto err_use;
++		goto err_destroy_chain;
+ 	}
+ 
+ 	trans = nft_trans_chain_add(ctx, NFT_MSG_NEWCHAIN);
+ 	if (IS_ERR(trans)) {
+ 		err = PTR_ERR(trans);
+-		goto err_unregister_hook;
++		goto err_trans;
+ 	}
+ 
+ 	nft_trans_chain_policy(trans) = NFT_CHAIN_POLICY_UNSET;
+@@ -2523,17 +2521,22 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
+ 		nft_trans_chain_policy(trans) = policy;
+ 
+ 	err = nft_chain_add(table, chain);
+-	if (err < 0) {
+-		nft_trans_destroy(trans);
+-		goto err_unregister_hook;
+-	}
++	if (err < 0)
++		goto err_chain_add;
++
++	/* This must be LAST to ensure no packets are walking over this chain. */
++	err = nf_tables_register_hook(net, table, chain);
++	if (err < 0)
++		goto err_register_hook;
+ 
+ 	return 0;
+ 
+-err_unregister_hook:
++err_register_hook:
++	nft_chain_del(chain);
++err_chain_add:
++	nft_trans_destroy(trans);
++err_trans:
+ 	nft_use_dec_restore(&table->use);
+-err_use:
+-	nf_tables_unregister_hook(net, table, chain);
+ err_destroy_chain:
+ 	nf_tables_chain_destroy(ctx);
+ 
+@@ -8372,9 +8375,9 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
+ 	u8 family = info->nfmsg->nfgen_family;
+ 	const struct nf_flowtable_type *type;
+ 	struct nft_flowtable *flowtable;
+-	struct nft_hook *hook, *next;
+ 	struct net *net = info->net;
+ 	struct nft_table *table;
++	struct nft_trans *trans;
+ 	struct nft_ctx ctx;
+ 	int err;
+ 
+@@ -8454,34 +8457,34 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
+ 	err = nft_flowtable_parse_hook(&ctx, nla, &flowtable_hook, flowtable,
+ 				       extack, true);
+ 	if (err < 0)
+-		goto err4;
++		goto err_flowtable_parse_hooks;
+ 
+ 	list_splice(&flowtable_hook.list, &flowtable->hook_list);
+ 	flowtable->data.priority = flowtable_hook.priority;
+ 	flowtable->hooknum = flowtable_hook.num;
+ 
++	trans = nft_trans_flowtable_add(&ctx, NFT_MSG_NEWFLOWTABLE, flowtable);
++	if (IS_ERR(trans)) {
++		err = PTR_ERR(trans);
++		goto err_flowtable_trans;
++	}
++
++	/* This must be LAST to ensure no packets are walking over this flowtable. */
+ 	err = nft_register_flowtable_net_hooks(ctx.net, table,
+ 					       &flowtable->hook_list,
+ 					       flowtable);
+-	if (err < 0) {
+-		nft_hooks_destroy(&flowtable->hook_list);
+-		goto err4;
+-	}
+-
+-	err = nft_trans_flowtable_add(&ctx, NFT_MSG_NEWFLOWTABLE, flowtable);
+ 	if (err < 0)
+-		goto err5;
++		goto err_flowtable_hooks;
+ 
+ 	list_add_tail_rcu(&flowtable->list, &table->flowtables);
+ 
+ 	return 0;
+-err5:
+-	list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
+-		nft_unregister_flowtable_hook(net, flowtable, hook);
+-		list_del_rcu(&hook->list);
+-		kfree_rcu(hook, rcu);
+-	}
+-err4:
++
++err_flowtable_hooks:
++	nft_trans_destroy(trans);
++err_flowtable_trans:
++	nft_hooks_destroy(&flowtable->hook_list);
++err_flowtable_parse_hooks:
+ 	flowtable->data.type->free(&flowtable->data);
+ err3:
+ 	module_put(type->owner);
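Both nf_tables_api.c reorderings above follow the same rule: the step that makes the object visible to the packet path (hook registration) moves to the very end, and the error labels unwind the earlier steps in exact reverse order. A compact sketch of that construct-then-publish shape; step_a(), step_b() and publish() are placeholders, with publish() failing on purpose to exercise the unwind:

#include <stdio.h>

static int step_a(void)  { return 0; }
static void undo_a(void) { puts("undo a"); }
static int step_b(void)  { return 0; }
static void undo_b(void) { puts("undo b"); }
static int publish(void) { return -1; }	/* pretend registration failed */

static int setup(void)
{
	int err;

	err = step_a();
	if (err)
		goto err_a;

	err = step_b();
	if (err)
		goto err_b;

	/* This must be LAST: after it, other contexts can see the object. */
	err = publish();
	if (err)
		goto err_publish;

	return 0;

err_publish:
	undo_b();
err_b:
	undo_a();
err_a:
	return err;
}

int main(void)
{
	printf("setup() = %d\n", setup());	/* prints undo b, undo a, then -1 */
	return 0;
}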
+diff --git a/net/phonet/datagram.c b/net/phonet/datagram.c
+index 3aa50dc7535b7..976fe250b5095 100644
+--- a/net/phonet/datagram.c
++++ b/net/phonet/datagram.c
+@@ -34,10 +34,10 @@ static int pn_ioctl(struct sock *sk, int cmd, int *karg)
+ 
+ 	switch (cmd) {
+ 	case SIOCINQ:
+-		lock_sock(sk);
++		spin_lock_bh(&sk->sk_receive_queue.lock);
+ 		skb = skb_peek(&sk->sk_receive_queue);
+ 		*karg = skb ? skb->len : 0;
+-		release_sock(sk);
++		spin_unlock_bh(&sk->sk_receive_queue.lock);
+ 		return 0;
+ 
+ 	case SIOCPNADDRESOURCE:
+diff --git a/net/phonet/pep.c b/net/phonet/pep.c
+index faba31f2eff29..3dd5f52bc1b58 100644
+--- a/net/phonet/pep.c
++++ b/net/phonet/pep.c
+@@ -917,6 +917,37 @@ static int pep_sock_enable(struct sock *sk, struct sockaddr *addr, int len)
+ 	return 0;
+ }
+ 
++static unsigned int pep_first_packet_length(struct sock *sk)
++{
++	struct pep_sock *pn = pep_sk(sk);
++	struct sk_buff_head *q;
++	struct sk_buff *skb;
++	unsigned int len = 0;
++	bool found = false;
++
++	if (sock_flag(sk, SOCK_URGINLINE)) {
++		q = &pn->ctrlreq_queue;
++		spin_lock_bh(&q->lock);
++		skb = skb_peek(q);
++		if (skb) {
++			len = skb->len;
++			found = true;
++		}
++		spin_unlock_bh(&q->lock);
++	}
++
++	if (likely(!found)) {
++		q = &sk->sk_receive_queue;
++		spin_lock_bh(&q->lock);
++		skb = skb_peek(q);
++		if (skb)
++			len = skb->len;
++		spin_unlock_bh(&q->lock);
++	}
++
++	return len;
++}
++
+ static int pep_ioctl(struct sock *sk, int cmd, int *karg)
+ {
+ 	struct pep_sock *pn = pep_sk(sk);
+@@ -929,15 +960,7 @@ static int pep_ioctl(struct sock *sk, int cmd, int *karg)
+ 			break;
+ 		}
+ 
+-		lock_sock(sk);
+-		if (sock_flag(sk, SOCK_URGINLINE) &&
+-		    !skb_queue_empty(&pn->ctrlreq_queue))
+-			*karg = skb_peek(&pn->ctrlreq_queue)->len;
+-		else if (!skb_queue_empty(&sk->sk_receive_queue))
+-			*karg = skb_peek(&sk->sk_receive_queue)->len;
+-		else
+-			*karg = 0;
+-		release_sock(sk);
++		*karg = pep_first_packet_length(sk);
+ 		ret = 0;
+ 		break;
+ 
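The two phonet hunks above answer SIOCINQ by peeking at the head of a queue under that queue's own spinlock instead of taking the whole-socket lock. A userspace sketch of the narrowed critical section, with a pthread mutex standing in for sk_receive_queue.lock:

#include <pthread.h>
#include <stdio.h>

struct pkt { struct pkt *next; unsigned int len; };

struct queue {
	pthread_mutex_t lock;	/* stands in for sk_receive_queue.lock */
	struct pkt *head;
};

static unsigned int first_packet_length(struct queue *q)
{
	unsigned int len = 0;

	pthread_mutex_lock(&q->lock);	/* queue-scoped, not socket-scoped */
	if (q->head)
		len = q->head->len;	/* peek only; nothing is dequeued */
	pthread_mutex_unlock(&q->lock);
	return len;
}

int main(void)
{
	struct pkt p = { .next = NULL, .len = 42 };
	struct queue q = { .lock = PTHREAD_MUTEX_INITIALIZER, .head = &p };

	printf("SIOCINQ would report %u\n", first_packet_length(&q));
	return 0;
}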
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index 0a711c184c29b..674f7ae356ca2 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -206,18 +206,14 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
+ 	return err;
+ }
+ 
+-static bool is_mirred_nested(void)
+-{
+-	return unlikely(__this_cpu_read(mirred_nest_level) > 1);
+-}
+-
+-static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb)
++static int
++tcf_mirred_forward(bool at_ingress, bool want_ingress, struct sk_buff *skb)
+ {
+ 	int err;
+ 
+ 	if (!want_ingress)
+ 		err = tcf_dev_queue_xmit(skb, dev_queue_xmit);
+-	else if (is_mirred_nested())
++	else if (!at_ingress)
+ 		err = netif_rx(skb);
+ 	else
+ 		err = netif_receive_skb(skb);
+@@ -225,110 +221,123 @@ static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb)
+ 	return err;
+ }
+ 
+-TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb,
+-				     const struct tc_action *a,
+-				     struct tcf_result *res)
++static int tcf_mirred_to_dev(struct sk_buff *skb, struct tcf_mirred *m,
++			     struct net_device *dev,
++			     const bool m_mac_header_xmit, int m_eaction,
++			     int retval)
+ {
+-	struct tcf_mirred *m = to_mirred(a);
+-	struct sk_buff *skb2 = skb;
+-	bool m_mac_header_xmit;
+-	struct net_device *dev;
+-	unsigned int nest_level;
+-	int retval, err = 0;
+-	bool use_reinsert;
++	struct sk_buff *skb_to_send = skb;
+ 	bool want_ingress;
+ 	bool is_redirect;
+ 	bool expects_nh;
+ 	bool at_ingress;
+-	int m_eaction;
++	bool dont_clone;
+ 	int mac_len;
+ 	bool at_nh;
++	int err;
+ 
+-	nest_level = __this_cpu_inc_return(mirred_nest_level);
+-	if (unlikely(nest_level > MIRRED_NEST_LIMIT)) {
+-		net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
+-				     netdev_name(skb->dev));
+-		__this_cpu_dec(mirred_nest_level);
+-		return TC_ACT_SHOT;
+-	}
+-
+-	tcf_lastuse_update(&m->tcf_tm);
+-	tcf_action_update_bstats(&m->common, skb);
+-
+-	m_mac_header_xmit = READ_ONCE(m->tcfm_mac_header_xmit);
+-	m_eaction = READ_ONCE(m->tcfm_eaction);
+-	retval = READ_ONCE(m->tcf_action);
+-	dev = rcu_dereference_bh(m->tcfm_dev);
+-	if (unlikely(!dev)) {
+-		pr_notice_once("tc mirred: target device is gone\n");
+-		goto out;
+-	}
+-
++	is_redirect = tcf_mirred_is_act_redirect(m_eaction);
+ 	if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) {
+ 		net_notice_ratelimited("tc mirred to Houston: device %s is down\n",
+ 				       dev->name);
+-		goto out;
++		goto err_cant_do;
+ 	}
+ 
+ 	/* we could easily avoid the clone only if called by ingress and clsact;
+ 	 * since we can't easily detect the clsact caller, skip clone only for
+ 	 * ingress - that covers the TC S/W datapath.
+ 	 */
+-	is_redirect = tcf_mirred_is_act_redirect(m_eaction);
+ 	at_ingress = skb_at_tc_ingress(skb);
+-	use_reinsert = at_ingress && is_redirect &&
+-		       tcf_mirred_can_reinsert(retval);
+-	if (!use_reinsert) {
+-		skb2 = skb_clone(skb, GFP_ATOMIC);
+-		if (!skb2)
+-			goto out;
++	dont_clone = skb_at_tc_ingress(skb) && is_redirect &&
++		tcf_mirred_can_reinsert(retval);
++	if (!dont_clone) {
++		skb_to_send = skb_clone(skb, GFP_ATOMIC);
++		if (!skb_to_send)
++			goto err_cant_do;
+ 	}
+ 
+ 	want_ingress = tcf_mirred_act_wants_ingress(m_eaction);
+ 
+ 	/* All mirred/redirected skbs should clear previous ct info */
+-	nf_reset_ct(skb2);
++	nf_reset_ct(skb_to_send);
+ 	if (want_ingress && !at_ingress) /* drop dst for egress -> ingress */
+-		skb_dst_drop(skb2);
++		skb_dst_drop(skb_to_send);
+ 
+ 	expects_nh = want_ingress || !m_mac_header_xmit;
+ 	at_nh = skb->data == skb_network_header(skb);
+ 	if (at_nh != expects_nh) {
+-		mac_len = skb_at_tc_ingress(skb) ? skb->mac_len :
++		mac_len = at_ingress ? skb->mac_len :
+ 			  skb_network_offset(skb);
+ 		if (expects_nh) {
+ 			/* target device/action expect data at nh */
+-			skb_pull_rcsum(skb2, mac_len);
++			skb_pull_rcsum(skb_to_send, mac_len);
+ 		} else {
+ 			/* target device/action expect data at mac */
+-			skb_push_rcsum(skb2, mac_len);
++			skb_push_rcsum(skb_to_send, mac_len);
+ 		}
+ 	}
+ 
+-	skb2->skb_iif = skb->dev->ifindex;
+-	skb2->dev = dev;
++	skb_to_send->skb_iif = skb->dev->ifindex;
++	skb_to_send->dev = dev;
+ 
+-	/* mirror is always swallowed */
+ 	if (is_redirect) {
+-		skb_set_redirected(skb2, skb2->tc_at_ingress);
+-
+-		/* let's the caller reinsert the packet, if possible */
+-		if (use_reinsert) {
+-			err = tcf_mirred_forward(want_ingress, skb);
+-			if (err)
+-				tcf_action_inc_overlimit_qstats(&m->common);
+-			__this_cpu_dec(mirred_nest_level);
+-			return TC_ACT_CONSUMED;
+-		}
++		if (skb == skb_to_send)
++			retval = TC_ACT_CONSUMED;
++
++		skb_set_redirected(skb_to_send, skb_to_send->tc_at_ingress);
++
++		err = tcf_mirred_forward(at_ingress, want_ingress, skb_to_send);
++	} else {
++		err = tcf_mirred_forward(at_ingress, want_ingress, skb_to_send);
+ 	}
++	if (err)
++		tcf_action_inc_overlimit_qstats(&m->common);
++
++	return retval;
++
++err_cant_do:
++	if (is_redirect)
++		retval = TC_ACT_SHOT;
++	tcf_action_inc_overlimit_qstats(&m->common);
++	return retval;
++}
++
++TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb,
++				     const struct tc_action *a,
++				     struct tcf_result *res)
++{
++	struct tcf_mirred *m = to_mirred(a);
++	int retval = READ_ONCE(m->tcf_action);
++	unsigned int nest_level;
++	bool m_mac_header_xmit;
++	struct net_device *dev;
++	int m_eaction;
+ 
+-	err = tcf_mirred_forward(want_ingress, skb2);
+-	if (err) {
+-out:
++	nest_level = __this_cpu_inc_return(mirred_nest_level);
++	if (unlikely(nest_level > MIRRED_NEST_LIMIT)) {
++		net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
++				     netdev_name(skb->dev));
++		retval = TC_ACT_SHOT;
++		goto dec_nest_level;
++	}
++
++	tcf_lastuse_update(&m->tcf_tm);
++	tcf_action_update_bstats(&m->common, skb);
++
++	dev = rcu_dereference_bh(m->tcfm_dev);
++	if (unlikely(!dev)) {
++		pr_notice_once("tc mirred: target device is gone\n");
+ 		tcf_action_inc_overlimit_qstats(&m->common);
+-		if (tcf_mirred_is_act_redirect(m_eaction))
+-			retval = TC_ACT_SHOT;
++		goto dec_nest_level;
+ 	}
++
++	m_mac_header_xmit = READ_ONCE(m->tcfm_mac_header_xmit);
++	m_eaction = READ_ONCE(m->tcfm_eaction);
++
++	retval = tcf_mirred_to_dev(skb, m, dev, m_mac_header_xmit, m_eaction,
++				   retval);
++
++dec_nest_level:
+ 	__this_cpu_dec(mirred_nest_level);
+ 
+ 	return retval;
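The refactored tcf_mirred_act() above keeps one invariant: the per-CPU mirred_nest_level is incremented exactly once on entry, checked against MIRRED_NEST_LIMIT, and decremented on the single dec_nest_level exit path. A sketch of that balanced recursion guard, with a thread-local int standing in for the per-CPU counter:

#include <stdio.h>

#define NEST_LIMIT 4

static _Thread_local int nest_level;

static int act(int depth)
{
	int ret = 0;

	if (++nest_level > NEST_LIMIT) {
		ret = -1;	/* the kernel returns TC_ACT_SHOT here */
		goto dec_nest_level;
	}

	if (depth > 0)
		ret = act(depth - 1);	/* a mirred redirect loop re-enters here */

dec_nest_level:
	nest_level--;	/* single exit keeps increment/decrement balanced */
	return ret;
}

int main(void)
{
	printf("depth 2: %d\n", act(2));	/* within the limit: 0 */
	printf("depth 9: %d\n", act(9));	/* trips the limit: -1 */
	return 0;
}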
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index efb9d2811b73d..6ee7064c82fcc 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -2460,8 +2460,11 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ 
+ errout_idr:
+-	if (!fold)
++	if (!fold) {
++		spin_lock(&tp->lock);
+ 		idr_remove(&head->handle_idr, fnew->handle);
++		spin_unlock(&tp->lock);
++	}
+ 	__fl_put(fnew);
+ errout_tb:
+ 	kfree(tb);
+diff --git a/net/switchdev/switchdev.c b/net/switchdev/switchdev.c
+index 5b045284849e0..c9189a970eec3 100644
+--- a/net/switchdev/switchdev.c
++++ b/net/switchdev/switchdev.c
+@@ -19,6 +19,35 @@
+ #include <linux/rtnetlink.h>
+ #include <net/switchdev.h>
+ 
++static bool switchdev_obj_eq(const struct switchdev_obj *a,
++			     const struct switchdev_obj *b)
++{
++	const struct switchdev_obj_port_vlan *va, *vb;
++	const struct switchdev_obj_port_mdb *ma, *mb;
++
++	if (a->id != b->id || a->orig_dev != b->orig_dev)
++		return false;
++
++	switch (a->id) {
++	case SWITCHDEV_OBJ_ID_PORT_VLAN:
++		va = SWITCHDEV_OBJ_PORT_VLAN(a);
++		vb = SWITCHDEV_OBJ_PORT_VLAN(b);
++		return va->flags == vb->flags &&
++			va->vid == vb->vid &&
++			va->changed == vb->changed;
++	case SWITCHDEV_OBJ_ID_PORT_MDB:
++	case SWITCHDEV_OBJ_ID_HOST_MDB:
++		ma = SWITCHDEV_OBJ_PORT_MDB(a);
++		mb = SWITCHDEV_OBJ_PORT_MDB(b);
++		return ma->vid == mb->vid &&
++			ether_addr_equal(ma->addr, mb->addr);
++	default:
++		break;
++	}
++
++	BUG();
++}
++
+ static LIST_HEAD(deferred);
+ static DEFINE_SPINLOCK(deferred_lock);
+ 
+@@ -307,6 +336,50 @@ int switchdev_port_obj_del(struct net_device *dev,
+ }
+ EXPORT_SYMBOL_GPL(switchdev_port_obj_del);
+ 
++/**
++ *	switchdev_port_obj_act_is_deferred - Is object action pending?
++ *
++ *	@dev: port device
++ *	@nt: type of action; add or delete
++ *	@obj: object to test
++ *
++ *	Returns true if a deferred item is pending that is
++ *	equivalent to performing the action @nt on the object @obj.
++ *
++ *	rtnl_lock must be held.
++ */
++bool switchdev_port_obj_act_is_deferred(struct net_device *dev,
++					enum switchdev_notifier_type nt,
++					const struct switchdev_obj *obj)
++{
++	struct switchdev_deferred_item *dfitem;
++	bool found = false;
++
++	ASSERT_RTNL();
++
++	spin_lock_bh(&deferred_lock);
++
++	list_for_each_entry(dfitem, &deferred, list) {
++		if (dfitem->dev != dev)
++			continue;
++
++		if ((dfitem->func == switchdev_port_obj_add_deferred &&
++		     nt == SWITCHDEV_PORT_OBJ_ADD) ||
++		    (dfitem->func == switchdev_port_obj_del_deferred &&
++		     nt == SWITCHDEV_PORT_OBJ_DEL)) {
++			if (switchdev_obj_eq((const void *)dfitem->data, obj)) {
++				found = true;
++				break;
++			}
++		}
++	}
++
++	spin_unlock_bh(&deferred_lock);
++
++	return found;
++}
++EXPORT_SYMBOL_GPL(switchdev_port_obj_act_is_deferred);
++
+ static ATOMIC_NOTIFIER_HEAD(switchdev_notif_chain);
+ static BLOCKING_NOTIFIER_HEAD(switchdev_blocking_notif_chain);
+ 
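switchdev_port_obj_act_is_deferred() above walks the deferred list under deferred_lock and matches on device, action type and object equality. A simplified sketch of the locked membership scan; the struct layout and helper names are illustrative:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

enum action { ACT_ADD, ACT_DEL };

struct item {
	struct item *next;
	int dev;
	enum action act;
	int obj_id;
};

static pthread_mutex_t deferred_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *deferred;

static bool act_is_deferred(int dev, enum action act, int obj_id)
{
	const struct item *it;
	bool found = false;

	pthread_mutex_lock(&deferred_lock);	/* the list may change under us */
	for (it = deferred; it; it = it->next) {
		if (it->dev == dev && it->act == act && it->obj_id == obj_id) {
			found = true;
			break;
		}
	}
	pthread_mutex_unlock(&deferred_lock);
	return found;
}

int main(void)
{
	struct item pending = { .next = NULL, .dev = 1, .act = ACT_DEL, .obj_id = 7 };

	deferred = &pending;
	printf("%d %d\n", act_is_deferred(1, ACT_DEL, 7),	/* 1 */
	       act_is_deferred(1, ACT_ADD, 7));			/* 0 */
	return 0;
}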
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 1c2c6800949dd..b4674f03d71a9 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -1003,7 +1003,7 @@ static u16 tls_user_config(struct tls_context *ctx, bool tx)
+ 	return 0;
+ }
+ 
+-static int tls_get_info(const struct sock *sk, struct sk_buff *skb)
++static int tls_get_info(struct sock *sk, struct sk_buff *skb)
+ {
+ 	u16 version, cipher_type;
+ 	struct tls_context *ctx;
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 9fbc70200cd0f..de96959336c48 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1772,7 +1772,8 @@ static int process_rx_list(struct tls_sw_context_rx *ctx,
+ 			   u8 *control,
+ 			   size_t skip,
+ 			   size_t len,
+-			   bool is_peek)
++			   bool is_peek,
++			   bool *more)
+ {
+ 	struct sk_buff *skb = skb_peek(&ctx->rx_list);
+ 	struct tls_msg *tlm;
+@@ -1785,7 +1786,7 @@ static int process_rx_list(struct tls_sw_context_rx *ctx,
+ 
+ 		err = tls_record_content_type(msg, tlm, control);
+ 		if (err <= 0)
+-			goto out;
++			goto more;
+ 
+ 		if (skip < rxm->full_len)
+ 			break;
+@@ -1803,12 +1804,12 @@ static int process_rx_list(struct tls_sw_context_rx *ctx,
+ 
+ 		err = tls_record_content_type(msg, tlm, control);
+ 		if (err <= 0)
+-			goto out;
++			goto more;
+ 
+ 		err = skb_copy_datagram_msg(skb, rxm->offset + skip,
+ 					    msg, chunk);
+ 		if (err < 0)
+-			goto out;
++			goto more;
+ 
+ 		len = len - chunk;
+ 		copied = copied + chunk;
+@@ -1844,6 +1845,10 @@ static int process_rx_list(struct tls_sw_context_rx *ctx,
+ 
+ out:
+ 	return copied ? : err;
++more:
++	if (more)
++		*more = true;
++	goto out;
+ }
+ 
+ static bool
+@@ -1947,6 +1952,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 	int target, err;
+ 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
+ 	bool is_peek = flags & MSG_PEEK;
++	bool rx_more = false;
+ 	bool released = true;
+ 	bool bpf_strp_enabled;
+ 	bool zc_capable;
+@@ -1966,12 +1972,12 @@ int tls_sw_recvmsg(struct sock *sk,
+ 		goto end;
+ 
+ 	/* Process pending decrypted records. It must be non-zero-copy */
+-	err = process_rx_list(ctx, msg, &control, 0, len, is_peek);
++	err = process_rx_list(ctx, msg, &control, 0, len, is_peek, &rx_more);
+ 	if (err < 0)
+ 		goto end;
+ 
+ 	copied = err;
+-	if (len <= copied)
++	if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA) || rx_more)
+ 		goto end;
+ 
+ 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+@@ -2064,6 +2070,8 @@ int tls_sw_recvmsg(struct sock *sk,
+ 				decrypted += chunk;
+ 				len -= chunk;
+ 				__skb_queue_tail(&ctx->rx_list, skb);
++				if (unlikely(control != TLS_RECORD_TYPE_DATA))
++					break;
+ 				continue;
+ 			}
+ 
+@@ -2128,10 +2136,10 @@ int tls_sw_recvmsg(struct sock *sk,
+ 		/* Drain records from the rx_list & copy if required */
+ 		if (is_peek || is_kvec)
+ 			err = process_rx_list(ctx, msg, &control, copied,
+-					      decrypted, is_peek);
++					      decrypted, is_peek, NULL);
+ 		else
+ 			err = process_rx_list(ctx, msg, &control, 0,
+-					      async_copy_bytes, is_peek);
++					      async_copy_bytes, is_peek, NULL);
+ 	}
+ 
+ 	copied += decrypted;
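The process_rx_list() change above adds an optional bool *more out-parameter so tls_sw_recvmsg() can tell that draining stopped early (on an error or a control record) and must not keep reading past that point. A small sketch of the out-flag pattern under those assumed semantics; drain() and the negative-entry convention are illustrative:

#include <stdbool.h>
#include <stdio.h>

/* Drain queued record sizes until @want bytes are copied; a negative entry
 * models a record that must stop the drain (error or control record).
 */
static int drain(const int *recs, int n, int want, bool *more)
{
	int copied = 0;
	int i;

	for (i = 0; i < n && copied < want; i++) {
		if (recs[i] < 0) {
			if (more)
				*more = true;	/* tell the caller we stopped early */
			break;
		}
		copied += recs[i];
	}
	return copied;
}

int main(void)
{
	int recs[] = { 3, 4, -1, 5 };
	bool more = false;
	int copied = drain(recs, 4, 100, &more);

	printf("copied=%d more=%d\n", copied, more);	/* copied=7 more=1 */
	return 0;
}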
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 30b178ebba60a..0748e7ea5210e 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -782,19 +782,6 @@ static int unix_seqpacket_sendmsg(struct socket *, struct msghdr *, size_t);
+ static int unix_seqpacket_recvmsg(struct socket *, struct msghdr *, size_t,
+ 				  int);
+ 
+-static int unix_set_peek_off(struct sock *sk, int val)
+-{
+-	struct unix_sock *u = unix_sk(sk);
+-
+-	if (mutex_lock_interruptible(&u->iolock))
+-		return -EINTR;
+-
+-	WRITE_ONCE(sk->sk_peek_off, val);
+-	mutex_unlock(&u->iolock);
+-
+-	return 0;
+-}
+-
+ #ifdef CONFIG_PROC_FS
+ static int unix_count_nr_fds(struct sock *sk)
+ {
+@@ -862,7 +849,7 @@ static const struct proto_ops unix_stream_ops = {
+ 	.read_skb =	unix_stream_read_skb,
+ 	.mmap =		sock_no_mmap,
+ 	.splice_read =	unix_stream_splice_read,
+-	.set_peek_off =	unix_set_peek_off,
++	.set_peek_off =	sk_set_peek_off,
+ 	.show_fdinfo =	unix_show_fdinfo,
+ };
+ 
+@@ -886,7 +873,7 @@ static const struct proto_ops unix_dgram_ops = {
+ 	.read_skb =	unix_read_skb,
+ 	.recvmsg =	unix_dgram_recvmsg,
+ 	.mmap =		sock_no_mmap,
+-	.set_peek_off =	unix_set_peek_off,
++	.set_peek_off =	sk_set_peek_off,
+ 	.show_fdinfo =	unix_show_fdinfo,
+ };
+ 
+@@ -909,7 +896,7 @@ static const struct proto_ops unix_seqpacket_ops = {
+ 	.sendmsg =	unix_seqpacket_sendmsg,
+ 	.recvmsg =	unix_seqpacket_recvmsg,
+ 	.mmap =		sock_no_mmap,
+-	.set_peek_off =	unix_set_peek_off,
++	.set_peek_off =	sk_set_peek_off,
+ 	.show_fdinfo =	unix_show_fdinfo,
+ };
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 1cbbb11ea5033..fbf95b7ff6b43 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4008,6 +4008,7 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
+ 		}
+ 		wiphy_unlock(&rdev->wiphy);
+ 
++		if_start = 0;
+ 		wp_idx++;
+ 	}
+  out:
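The one-line nl80211 fix above resets the inner resume cursor when the dump moves to the next wiphy; without it, every subsequent wiphy would skip its first if_start interfaces. A tiny sketch of the two-level resume pattern, with made-up counts:

#include <stdio.h>

int main(void)
{
	int counts[] = { 3, 2 };	/* interfaces per wiphy, illustrative */
	int if_start = 1;		/* resume point left over from a prior dump */
	int wp, i;

	for (wp = 0; wp < 2; wp++) {
		for (i = if_start; i < counts[wp]; i++)
			printf("wiphy %d interface %d\n", wp, i);
		if_start = 0;	/* the added reset: later wiphys start from 0 */
	}
	return 0;
}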
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 688e641cd2784..da1582de6e84a 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -711,7 +711,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 			memcpy(vaddr, buffer, len);
+ 			kunmap_local(vaddr);
+ 
+-			skb_add_rx_frag(skb, nr_frags, page, 0, len, 0);
++			skb_add_rx_frag(skb, nr_frags, page, 0, len, PAGE_SIZE);
++			refcount_add(PAGE_SIZE, &xs->sk.sk_wmem_alloc);
+ 		}
+ 	}
+ 
+diff --git a/scripts/bpf_doc.py b/scripts/bpf_doc.py
+index 61b7dddedc461..0669bac5e900e 100755
+--- a/scripts/bpf_doc.py
++++ b/scripts/bpf_doc.py
+@@ -513,7 +513,7 @@ eBPF programs can have an associated license, passed along with the bytecode
+ instructions to the kernel when the programs are loaded. The format for that
+ string is identical to the one in use for kernel modules (Dual licenses, such
+ as "Dual BSD/GPL", may be used). Some helper functions are only accessible to
+-programs that are compatible with the GNU Privacy License (GPL).
++programs that are compatible with the GNU General Public License (GNU GPL).
+ 
+ In order to use such helpers, the eBPF program must be loaded with the correct
+ license string passed (via **attr**) to the **bpf**\\ () system call, and this
+diff --git a/sound/pci/hda/cs35l41_hda_property.c b/sound/pci/hda/cs35l41_hda_property.c
+index 35277ce890a46..d74cf11eef1ea 100644
+--- a/sound/pci/hda/cs35l41_hda_property.c
++++ b/sound/pci/hda/cs35l41_hda_property.c
+@@ -76,6 +76,8 @@ static const struct cs35l41_config cs35l41_config_table[] = {
+ 	{ "10431533", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+ 	{ "10431573", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+ 	{ "10431663", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 1000, 4500, 24 },
++	{ "10431683", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
++	{ "104316A3", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+ 	{ "104316D3", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+ 	{ "104316F3", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+ 	{ "104317F3", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+@@ -410,6 +412,8 @@ static const struct cs35l41_prop_model cs35l41_prop_model_table[] = {
+ 	{ "CSC3551", "10431533", generic_dsd_config },
+ 	{ "CSC3551", "10431573", generic_dsd_config },
+ 	{ "CSC3551", "10431663", generic_dsd_config },
++	{ "CSC3551", "10431683", generic_dsd_config },
++	{ "CSC3551", "104316A3", generic_dsd_config },
+ 	{ "CSC3551", "104316D3", generic_dsd_config },
+ 	{ "CSC3551", "104316F3", generic_dsd_config },
+ 	{ "CSC3551", "104317F3", generic_dsd_config },
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 2276adc844784..1b550c42db092 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1729,9 +1729,11 @@ static int default_bdl_pos_adj(struct azx *chip)
+ 	/* some exceptions: Atoms seem problematic with value 1 */
+ 	if (chip->pci->vendor == PCI_VENDOR_ID_INTEL) {
+ 		switch (chip->pci->device) {
+-		case 0x0f04: /* Baytrail */
+-		case 0x2284: /* Braswell */
++		case PCI_DEVICE_ID_INTEL_HDA_BYT:
++		case PCI_DEVICE_ID_INTEL_HDA_BSW:
+ 			return 32;
++		case PCI_DEVICE_ID_INTEL_HDA_APL:
++			return 64;
+ 		}
+ 	}
+ 
+diff --git a/sound/soc/amd/acp/acp-mach-common.c b/sound/soc/amd/acp/acp-mach-common.c
+index 34b14f2611ba8..12ff0a558ea8f 100644
+--- a/sound/soc/amd/acp/acp-mach-common.c
++++ b/sound/soc/amd/acp/acp-mach-common.c
+@@ -1428,8 +1428,13 @@ int acp_sofdsp_dai_links_create(struct snd_soc_card *card)
+ 	if (drv_data->amp_cpu_id == I2S_SP) {
+ 		links[i].name = "acp-amp-codec";
+ 		links[i].id = AMP_BE_ID;
+-		links[i].cpus = sof_sp_virtual;
+-		links[i].num_cpus = ARRAY_SIZE(sof_sp_virtual);
++		if (drv_data->platform == RENOIR) {
++			links[i].cpus = sof_sp;
++			links[i].num_cpus = ARRAY_SIZE(sof_sp);
++		} else {
++			links[i].cpus = sof_sp_virtual;
++			links[i].num_cpus = ARRAY_SIZE(sof_sp_virtual);
++		}
+ 		links[i].platforms = sof_component;
+ 		links[i].num_platforms = ARRAY_SIZE(sof_component);
+ 		links[i].dpcm_playback = 1;
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index c01e31175015c..9c0accf5e1888 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -739,19 +739,25 @@ static int wm_adsp_request_firmware_file(struct wm_adsp *dsp,
+ 					 const char *filetype)
+ {
+ 	struct cs_dsp *cs_dsp = &dsp->cs_dsp;
++	const char *fwf;
+ 	char *s, c;
+ 	int ret = 0;
+ 
++	if (dsp->fwf_name)
++		fwf = dsp->fwf_name;
++	else
++		fwf = dsp->cs_dsp.name;
++
+ 	if (system_name && asoc_component_prefix)
+ 		*filename = kasprintf(GFP_KERNEL, "%s%s-%s-%s-%s-%s.%s", dir, dsp->part,
+-				      dsp->fwf_name, wm_adsp_fw[dsp->fw].file, system_name,
++				      fwf, wm_adsp_fw[dsp->fw].file, system_name,
+ 				      asoc_component_prefix, filetype);
+ 	else if (system_name)
+ 		*filename = kasprintf(GFP_KERNEL, "%s%s-%s-%s-%s.%s", dir, dsp->part,
+-				      dsp->fwf_name, wm_adsp_fw[dsp->fw].file, system_name,
++				      fwf, wm_adsp_fw[dsp->fw].file, system_name,
+ 				      filetype);
+ 	else
+-		*filename = kasprintf(GFP_KERNEL, "%s%s-%s-%s.%s", dir, dsp->part, dsp->fwf_name,
++		*filename = kasprintf(GFP_KERNEL, "%s%s-%s-%s.%s", dir, dsp->part, fwf,
+ 				      wm_adsp_fw[dsp->fw].file, filetype);
+ 
+ 	if (*filename == NULL)
+@@ -863,29 +869,18 @@ static int wm_adsp_request_firmware_files(struct wm_adsp *dsp,
+ 	}
+ 
+ 	adsp_err(dsp, "Failed to request firmware <%s>%s-%s-%s<-%s<%s>>.wmfw\n",
+-		 cirrus_dir, dsp->part, dsp->fwf_name, wm_adsp_fw[dsp->fw].file,
+-		 system_name, asoc_component_prefix);
++		 cirrus_dir, dsp->part,
++		 dsp->fwf_name ? dsp->fwf_name : dsp->cs_dsp.name,
++		 wm_adsp_fw[dsp->fw].file, system_name, asoc_component_prefix);
+ 
+ 	return -ENOENT;
+ }
+ 
+ static int wm_adsp_common_init(struct wm_adsp *dsp)
+ {
+-	char *p;
+-
+ 	INIT_LIST_HEAD(&dsp->compr_list);
+ 	INIT_LIST_HEAD(&dsp->buffer_list);
+ 
+-	if (!dsp->fwf_name) {
+-		p = devm_kstrdup(dsp->cs_dsp.dev, dsp->cs_dsp.name, GFP_KERNEL);
+-		if (!p)
+-			return -ENOMEM;
+-
+-		dsp->fwf_name = p;
+-		for (; *p != 0; ++p)
+-			*p = tolower(*p);
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/sunxi/sun4i-spdif.c b/sound/soc/sunxi/sun4i-spdif.c
+index 702386823d172..f41c309558579 100644
+--- a/sound/soc/sunxi/sun4i-spdif.c
++++ b/sound/soc/sunxi/sun4i-spdif.c
+@@ -577,6 +577,11 @@ static const struct of_device_id sun4i_spdif_of_match[] = {
+ 		.compatible = "allwinner,sun50i-h6-spdif",
+ 		.data = &sun50i_h6_spdif_quirks,
+ 	},
++	{
++		.compatible = "allwinner,sun50i-h616-spdif",
++		/* Essentially the same as the H6, but without RX */
++		.data = &sun50i_h6_spdif_quirks,
++	},
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, sun4i_spdif_of_match);
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 33db334e65566..a676ad093d189 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -328,8 +328,16 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip,
+ 			if (chip->quirk_flags & QUIRK_FLAG_SKIP_CLOCK_SELECTOR)
+ 				return ret;
+ 			err = uac_clock_selector_set_val(chip, entity_id, cur);
+-			if (err < 0)
++			if (err < 0) {
++				if (pins == 1) {
++					usb_audio_dbg(chip,
++						      "%s(): selector returned an error, "
++						      "assuming a firmware bug, id %d, ret %d\n",
++						      __func__, clock_id, err);
++					return ret;
++				}
+ 				return err;
++			}
+ 		}
+ 
+ 		if (!validate || ret > 0 || !chip->autoclock)
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index ab5fed9f55b60..3b45d0ee76938 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -470,9 +470,11 @@ static int validate_sample_rate_table_v2v3(struct snd_usb_audio *chip,
+ 					   int clock)
+ {
+ 	struct usb_device *dev = chip->dev;
++	struct usb_host_interface *alts;
+ 	unsigned int *table;
+ 	unsigned int nr_rates;
+ 	int i, err;
++	u32 bmControls;
+ 
+ 	/* performing the rate verification may lead to unexpected USB bus
+ 	 * behavior afterwards for some unknown reason.  Do this only for the
+@@ -481,6 +483,24 @@ static int validate_sample_rate_table_v2v3(struct snd_usb_audio *chip,
+ 	if (!(chip->quirk_flags & QUIRK_FLAG_VALIDATE_RATES))
+ 		return 0; /* don't perform the validation as default */
+ 
++	alts = snd_usb_get_host_interface(chip, fp->iface, fp->altsetting);
++	if (!alts)
++		return 0;
++
++	if (fp->protocol == UAC_VERSION_3) {
++		struct uac3_as_header_descriptor *as = snd_usb_find_csint_desc(
++				alts->extra, alts->extralen, NULL, UAC_AS_GENERAL);
++		bmControls = le32_to_cpu(as->bmControls);
++	} else {
++		struct uac2_as_header_descriptor *as = snd_usb_find_csint_desc(
++				alts->extra, alts->extralen, NULL, UAC_AS_GENERAL);
++		bmControls = as->bmControls;
++	}
++
++	if (!uac_v2v3_control_is_readable(bmControls,
++				UAC2_AS_VAL_ALT_SETTINGS))
++		return 0;
++
+ 	table = kcalloc(fp->nr_rates, sizeof(*table), GFP_KERNEL);
+ 	if (!table)
+ 		return -ENOMEM;
+diff --git a/tools/net/ynl/lib/ynl.c b/tools/net/ynl/lib/ynl.c
+index 830d25097009a..591f5f50ddaab 100644
+--- a/tools/net/ynl/lib/ynl.c
++++ b/tools/net/ynl/lib/ynl.c
+@@ -462,6 +462,8 @@ ynl_gemsg_start_dump(struct ynl_sock *ys, __u32 id, __u8 cmd, __u8 version)
+ 
+ int ynl_recv_ack(struct ynl_sock *ys, int ret)
+ {
++	struct ynl_parse_arg yarg = { .ys = ys, };
++
+ 	if (!ret) {
+ 		yerr(ys, YNL_ERROR_EXPECT_ACK,
+ 		     "Expecting an ACK but nothing received");
+@@ -474,7 +476,7 @@ int ynl_recv_ack(struct ynl_sock *ys, int ret)
+ 		return ret;
+ 	}
+ 	return mnl_cb_run(ys->rx_buf, ret, ys->seq, ys->portid,
+-			  ynl_cb_null, ys);
++			  ynl_cb_null, &yarg);
+ }
+ 
+ int ynl_cb_null(const struct nlmsghdr *nlh, void *data)
+@@ -582,7 +584,13 @@ static int ynl_sock_read_family(struct ynl_sock *ys, const char *family_name)
+ 		return err;
+ 	}
+ 
+-	return ynl_recv_ack(ys, err);
++	err = ynl_recv_ack(ys, err);
++	if (err < 0) {
++		free(ys->mcast_groups);
++		return err;
++	}
++
++	return 0;
+ }
+ 
+ struct ynl_sock *
+@@ -737,11 +745,14 @@ static int ynl_ntf_parse(struct ynl_sock *ys, const struct nlmsghdr *nlh)
+ 
+ static int ynl_ntf_trampoline(const struct nlmsghdr *nlh, void *data)
+ {
+-	return ynl_ntf_parse((struct ynl_sock *)data, nlh);
++	struct ynl_parse_arg *yarg = data;
++
++	return ynl_ntf_parse(yarg->ys, nlh);
+ }
+ 
+ int ynl_ntf_check(struct ynl_sock *ys)
+ {
++	struct ynl_parse_arg yarg = { .ys = ys, };
+ 	ssize_t len;
+ 	int err;
+ 
+@@ -763,7 +774,7 @@ int ynl_ntf_check(struct ynl_sock *ys)
+ 			return len;
+ 
+ 		err = mnl_cb_run2(ys->rx_buf, len, ys->seq, ys->portid,
+-				  ynl_ntf_trampoline, ys,
++				  ynl_ntf_trampoline, &yarg,
+ 				  ynl_cb_array, NLMSG_MIN_TYPE);
+ 		if (err < 0)
+ 			return err;
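The ynl.c hunks above make every mnl callback receive a struct ynl_parse_arg wrapper instead of the bare socket, matching the type the callee casts the void *data argument to. A sketch of why the trampoline's contract matters; all names here are illustrative:

#include <stdio.h>

struct sock { int id; };
struct parse_arg { struct sock *ys; };	/* what the callback expects */

static int cb(void *data)
{
	struct parse_arg *arg = data;	/* callee unconditionally casts */

	return arg->ys->id;
}

static int run(int (*fn)(void *), void *data)
{
	return fn(data);	/* generic trampoline, like mnl_cb_run() */
}

int main(void)
{
	struct sock ys = { .id = 42 };
	struct parse_arg arg = { .ys = &ys };

	/* Passing &ys directly (the old bug) would make cb() reinterpret the
	 * sock as a parse_arg; passing the wrapper matches the contract.
	 */
	printf("%d\n", run(cb, &arg));
	return 0;
}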
+diff --git a/tools/testing/selftests/drivers/net/bonding/bond_options.sh b/tools/testing/selftests/drivers/net/bonding/bond_options.sh
+index d508486cc0bdc..9a3d3c389dadd 100755
+--- a/tools/testing/selftests/drivers/net/bonding/bond_options.sh
++++ b/tools/testing/selftests/drivers/net/bonding/bond_options.sh
+@@ -62,6 +62,8 @@ prio_test()
+ 
+ 	# create bond
+ 	bond_reset "${param}"
++	# set active_slave to primary eth1 specifically
++	ip -n ${s_ns} link set bond0 type bond active_slave eth1
+ 
+ 	# check bonding member prio value
+ 	ip -n ${s_ns} link set eth0 type bond_slave prio 0
+diff --git a/tools/testing/selftests/iommu/config b/tools/testing/selftests/iommu/config
+index 6c4f901d6fed3..110d73917615d 100644
+--- a/tools/testing/selftests/iommu/config
++++ b/tools/testing/selftests/iommu/config
+@@ -1,2 +1,3 @@
+-CONFIG_IOMMUFD
+-CONFIG_IOMMUFD_TEST
++CONFIG_IOMMUFD=y
++CONFIG_FAULT_INJECTION=y
++CONFIG_IOMMUFD_TEST=y
+diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
+index 2709a34a39c52..2c68709062da5 100644
+--- a/tools/testing/selftests/mm/uffd-unit-tests.c
++++ b/tools/testing/selftests/mm/uffd-unit-tests.c
+@@ -1309,6 +1309,12 @@ int main(int argc, char *argv[])
+ 				continue;
+ 
+ 			uffd_test_start("%s on %s", test->name, mem_type->name);
++			if ((mem_type->mem_flag == MEM_HUGETLB ||
++			    mem_type->mem_flag == MEM_HUGETLB_PRIVATE) &&
++			    (default_huge_page_size() == 0)) {
++				uffd_test_skip("huge page size is 0, feature missing?");
++				continue;
++			}
+ 			if (!uffd_feature_supported(test)) {
+ 				uffd_test_skip("feature missing");
+ 				continue;
+diff --git a/tools/testing/selftests/net/forwarding/tc_actions.sh b/tools/testing/selftests/net/forwarding/tc_actions.sh
+index b0f5e55d2d0b2..5896296365022 100755
+--- a/tools/testing/selftests/net/forwarding/tc_actions.sh
++++ b/tools/testing/selftests/net/forwarding/tc_actions.sh
+@@ -235,9 +235,6 @@ mirred_egress_to_ingress_tcp_test()
+ 	check_err $? "didn't mirred redirect ICMP"
+ 	tc_check_packets "dev $h1 ingress" 102 10
+ 	check_err $? "didn't drop mirred ICMP"
+-	local overlimits=$(tc_rule_stats_get ${h1} 101 egress .overlimits)
+-	test ${overlimits} = 10
+-	check_err $? "wrong overlimits, expected 10 got ${overlimits}"
+ 
+ 	tc filter del dev $h1 egress protocol ip pref 100 handle 100 flower
+ 	tc filter del dev $h1 egress protocol ip pref 101 handle 101 flower
+diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
+index 85a8ee9395b39..4d8c59be1b30c 100755
+--- a/tools/testing/selftests/net/mptcp/diag.sh
++++ b/tools/testing/selftests/net/mptcp/diag.sh
+@@ -56,14 +56,14 @@ __chk_nr()
+ 	local command="$1"
+ 	local expected=$2
+ 	local msg="$3"
+-	local skip="${4:-SKIP}"
++	local skip="${4-SKIP}"
+ 	local nr
+ 
+ 	nr=$(eval $command)
+ 
+ 	printf "%-50s" "$msg"
+-	if [ $nr != $expected ]; then
+-		if [ $nr = "$skip" ] && ! mptcp_lib_expect_all_features; then
++	if [ "$nr" != "$expected" ]; then
++		if [ "$nr" = "$skip" ] && ! mptcp_lib_expect_all_features; then
+ 			echo "[ skip ] Feature probably not supported"
+ 			mptcp_lib_result_skip "${msg}"
+ 		else
+@@ -166,9 +166,13 @@ chk_msk_listen()
+ chk_msk_inuse()
+ {
+ 	local expected=$1
+-	local msg="$2"
++	local msg="....chk ${2:-${expected}} msk in use"
+ 	local listen_nr
+ 
++	if [ "${expected}" -eq 0 ]; then
++		msg+=" after flush"
++	fi
++
+ 	listen_nr=$(ss -N "${ns}" -Ml | grep -c LISTEN)
+ 	expected=$((expected + listen_nr))
+ 
+@@ -179,7 +183,7 @@ chk_msk_inuse()
+ 		sleep 0.1
+ 	done
+ 
+-	__chk_nr get_msk_inuse $expected "$msg" 0
++	__chk_nr get_msk_inuse $expected "${msg}" 0
+ }
+ 
+ # $1: ns, $2: port
+@@ -199,6 +203,20 @@ wait_local_port_listen()
+ 	done
+ }
+ 
++# $1: cestab nr
++chk_msk_cestab()
++{
++	local expected=$1
++	local msg="....chk ${2:-${expected}} cestab"
++
++	if [ "${expected}" -eq 0 ]; then
++		msg+=" after flush"
++	fi
++
++	__chk_nr "mptcp_lib_get_counter ${ns} MPTcpExtMPCurrEstab" \
++		 "${expected}" "${msg}" ""
++}
++
+ wait_connected()
+ {
+ 	local listener_ns="${1}"
+@@ -235,10 +253,12 @@ wait_connected $ns 10000
+ chk_msk_nr 2 "after MPC handshake "
+ chk_msk_remote_key_nr 2 "....chk remote_key"
+ chk_msk_fallback_nr 0 "....chk no fallback"
+-chk_msk_inuse 2 "....chk 2 msk in use"
++chk_msk_inuse 2
++chk_msk_cestab 2
+ flush_pids
+ 
+-chk_msk_inuse 0 "....chk 0 msk in use after flush"
++chk_msk_inuse 0 "2->0"
++chk_msk_cestab 0 "2->0"
+ 
+ echo "a" | \
+ 	timeout ${timeout_test} \
+@@ -253,10 +273,12 @@ echo "b" | \
+ 				127.0.0.1 >/dev/null &
+ wait_connected $ns 10001
+ chk_msk_fallback_nr 1 "check fallback"
+-chk_msk_inuse 1 "....chk 1 msk in use"
++chk_msk_inuse 1
++chk_msk_cestab 1
+ flush_pids
+ 
+-chk_msk_inuse 0 "....chk 0 msk in use after flush"
++chk_msk_inuse 0 "1->0"
++chk_msk_cestab 0 "1->0"
+ 
+ NR_CLIENTS=100
+ for I in `seq 1 $NR_CLIENTS`; do
+@@ -277,10 +299,12 @@ for I in `seq 1 $NR_CLIENTS`; do
+ done
+ 
+ wait_msk_nr $((NR_CLIENTS*2)) "many msk socket present"
+-chk_msk_inuse $((NR_CLIENTS*2)) "....chk many msk in use"
++chk_msk_inuse $((NR_CLIENTS*2)) "many"
++chk_msk_cestab $((NR_CLIENTS*2)) "many"
+ flush_pids
+ 
+-chk_msk_inuse 0 "....chk 0 msk in use after flush"
++chk_msk_inuse 0 "many->0"
++chk_msk_cestab 0 "many->0"
+ 
+ mptcp_lib_result_print_all_tap
+ exit $ret
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index b1fc8afd072dc..10cd322e05c42 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -341,21 +341,6 @@ do_ping()
+ 	return 0
+ }
+ 
+-# $1: ns, $2: MIB counter
+-get_mib_counter()
+-{
+-	local listener_ns="${1}"
+-	local mib="${2}"
+-
+-	# strip the header
+-	ip netns exec "${listener_ns}" \
+-		nstat -z -a "${mib}" | \
+-			tail -n+2 | \
+-			while read a count c rest; do
+-				echo $count
+-			done
+-}
+-
+ # $1: ns, $2: port
+ wait_local_port_listen()
+ {
+@@ -441,12 +426,12 @@ do_transfer()
+ 			nstat -n
+ 	fi
+ 
+-	local stat_synrx_last_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableSYNRX")
+-	local stat_ackrx_last_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
+-	local stat_cookietx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesSent")
+-	local stat_cookierx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesRecv")
+-	local stat_csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
+-	local stat_csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
++	local stat_synrx_last_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableSYNRX")
++	local stat_ackrx_last_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
++	local stat_cookietx_last=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesSent")
++	local stat_cookierx_last=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesRecv")
++	local stat_csum_err_s=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtDataCsumErr")
++	local stat_csum_err_c=$(mptcp_lib_get_counter "${connector_ns}" "MPTcpExtDataCsumErr")
+ 
+ 	timeout ${timeout_test} \
+ 		ip netns exec ${listener_ns} \
+@@ -509,11 +494,11 @@ do_transfer()
+ 	check_transfer $cin $sout "file received by server"
+ 	rets=$?
+ 
+-	local stat_synrx_now_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableSYNRX")
+-	local stat_ackrx_now_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
+-	local stat_cookietx_now=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesSent")
+-	local stat_cookierx_now=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesRecv")
+-	local stat_ooo_now=$(get_mib_counter "${listener_ns}" "TcpExtTCPOFOQueue")
++	local stat_synrx_now_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableSYNRX")
++	local stat_ackrx_now_l=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
++	local stat_cookietx_now=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesSent")
++	local stat_cookierx_now=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtSyncookiesRecv")
++	local stat_ooo_now=$(mptcp_lib_get_counter "${listener_ns}" "TcpExtTCPOFOQueue")
+ 
+ 	expect_synrx=$((stat_synrx_last_l))
+ 	expect_ackrx=$((stat_ackrx_last_l))
+@@ -542,8 +527,8 @@ do_transfer()
+ 	fi
+ 
+ 	if $checksum; then
+-		local csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
+-		local csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
++		local csum_err_s=$(mptcp_lib_get_counter "${listener_ns}" "MPTcpExtDataCsumErr")
++		local csum_err_c=$(mptcp_lib_get_counter "${connector_ns}" "MPTcpExtDataCsumErr")
+ 
+ 		local csum_err_s_nr=$((csum_err_s - stat_csum_err_s))
+ 		if [ $csum_err_s_nr -gt 0 ]; then
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index ed665e0057d33..be10b971e912b 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -611,25 +611,9 @@ wait_local_port_listen()
+ 	done
+ }
+ 
+-# $1: ns ; $2: counter
+-get_counter()
+-{
+-	local ns="${1}"
+-	local counter="${2}"
+-	local count
+-
+-	count=$(ip netns exec ${ns} nstat -asz "${counter}" | awk 'NR==1 {next} {print $2}')
+-	if [ -z "${count}" ]; then
+-		mptcp_lib_fail_if_expected_feature "${counter} counter"
+-		return 1
+-	fi
+-
+-	echo "${count}"
+-}
+-
+ rm_addr_count()
+ {
+-	get_counter "${1}" "MPTcpExtRmAddr"
++	mptcp_lib_get_counter "${1}" "MPTcpExtRmAddr"
+ }
+ 
+ # $1: ns, $2: old rm_addr counter in $ns
+@@ -649,7 +633,7 @@ wait_rm_addr()
+ 
+ rm_sf_count()
+ {
+-	get_counter "${1}" "MPTcpExtRmSubflow"
++	mptcp_lib_get_counter "${1}" "MPTcpExtRmSubflow"
+ }
+ 
+ # $1: ns, $2: old rm_sf counter in $ns
+@@ -672,11 +656,11 @@ wait_mpj()
+ 	local ns="${1}"
+ 	local cnt old_cnt
+ 
+-	old_cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx")
++	old_cnt=$(mptcp_lib_get_counter ${ns} "MPTcpExtMPJoinAckRx")
+ 
+ 	local i
+ 	for i in $(seq 10); do
+-		cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx")
++		cnt=$(mptcp_lib_get_counter ${ns} "MPTcpExtMPJoinAckRx")
+ 		[ "$cnt" = "${old_cnt}" ] || break
+ 		sleep 0.1
+ 	done
+@@ -688,13 +672,6 @@ kill_events_pids()
+ 	mptcp_lib_kill_wait $evts_ns2_pid
+ }
+ 
+-kill_tests_wait()
+-{
+-	#shellcheck disable=SC2046
+-	kill -SIGUSR1 $(ip netns pids $ns2) $(ip netns pids $ns1)
+-	wait
+-}
+-
+ pm_nl_set_limits()
+ {
+ 	local ns=$1
+@@ -1278,7 +1255,7 @@ chk_csum_nr()
+ 	fi
+ 
+ 	print_check "sum"
+-	count=$(get_counter ${ns1} "MPTcpExtDataCsumErr")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtDataCsumErr")
+ 	if [ "$count" != "$csum_ns1" ]; then
+ 		extra_msg="$extra_msg ns1=$count"
+ 	fi
+@@ -1291,7 +1268,7 @@ chk_csum_nr()
+ 		print_ok
+ 	fi
+ 	print_check "csum"
+-	count=$(get_counter ${ns2} "MPTcpExtDataCsumErr")
++	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtDataCsumErr")
+ 	if [ "$count" != "$csum_ns2" ]; then
+ 		extra_msg="$extra_msg ns2=$count"
+ 	fi
+@@ -1335,7 +1312,7 @@ chk_fail_nr()
+ 	fi
+ 
+ 	print_check "ftx"
+-	count=$(get_counter ${ns_tx} "MPTcpExtMPFailTx")
++	count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMPFailTx")
+ 	if [ "$count" != "$fail_tx" ]; then
+ 		extra_msg="$extra_msg,tx=$count"
+ 	fi
+@@ -1349,7 +1326,7 @@ chk_fail_nr()
+ 	fi
+ 
+ 	print_check "failrx"
+-	count=$(get_counter ${ns_rx} "MPTcpExtMPFailRx")
++	count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtMPFailRx")
+ 	if [ "$count" != "$fail_rx" ]; then
+ 		extra_msg="$extra_msg,rx=$count"
+ 	fi
+@@ -1382,7 +1359,7 @@ chk_fclose_nr()
+ 	fi
+ 
+ 	print_check "ctx"
+-	count=$(get_counter ${ns_tx} "MPTcpExtMPFastcloseTx")
++	count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMPFastcloseTx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$fclose_tx" ]; then
+@@ -1393,7 +1370,7 @@ chk_fclose_nr()
+ 	fi
+ 
+ 	print_check "fclzrx"
+-	count=$(get_counter ${ns_rx} "MPTcpExtMPFastcloseRx")
++	count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtMPFastcloseRx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$fclose_rx" ]; then
+@@ -1423,7 +1400,7 @@ chk_rst_nr()
+ 	fi
+ 
+ 	print_check "rtx"
+-	count=$(get_counter ${ns_tx} "MPTcpExtMPRstTx")
++	count=$(mptcp_lib_get_counter ${ns_tx} "MPTcpExtMPRstTx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	# accept more rst than expected except if we don't expect any
+@@ -1435,7 +1412,7 @@ chk_rst_nr()
+ 	fi
+ 
+ 	print_check "rstrx"
+-	count=$(get_counter ${ns_rx} "MPTcpExtMPRstRx")
++	count=$(mptcp_lib_get_counter ${ns_rx} "MPTcpExtMPRstRx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	# accept more rst than expected except if we don't expect any
+@@ -1456,7 +1433,7 @@ chk_infi_nr()
+ 	local count
+ 
+ 	print_check "itx"
+-	count=$(get_counter ${ns2} "MPTcpExtInfiniteMapTx")
++	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtInfiniteMapTx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$infi_tx" ]; then
+@@ -1466,7 +1443,7 @@ chk_infi_nr()
+ 	fi
+ 
+ 	print_check "infirx"
+-	count=$(get_counter ${ns1} "MPTcpExtInfiniteMapRx")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtInfiniteMapRx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$infi_rx" ]; then
+@@ -1495,7 +1472,7 @@ chk_join_nr()
+ 	fi
+ 
+ 	print_check "syn"
+-	count=$(get_counter ${ns1} "MPTcpExtMPJoinSynRx")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinSynRx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$syn_nr" ]; then
+@@ -1506,7 +1483,7 @@ chk_join_nr()
+ 
+ 	print_check "synack"
+ 	with_cookie=$(ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies)
+-	count=$(get_counter ${ns2} "MPTcpExtMPJoinSynAckRx")
++	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtMPJoinSynAckRx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$syn_ack_nr" ]; then
+@@ -1523,7 +1500,7 @@ chk_join_nr()
+ 	fi
+ 
+ 	print_check "ack"
+-	count=$(get_counter ${ns1} "MPTcpExtMPJoinAckRx")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinAckRx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$ack_nr" ]; then
+@@ -1556,8 +1533,8 @@ chk_stale_nr()
+ 
+ 	print_check "stale"
+ 
+-	stale_nr=$(get_counter ${ns} "MPTcpExtSubflowStale")
+-	recover_nr=$(get_counter ${ns} "MPTcpExtSubflowRecover")
++	stale_nr=$(mptcp_lib_get_counter ${ns} "MPTcpExtSubflowStale")
++	recover_nr=$(mptcp_lib_get_counter ${ns} "MPTcpExtSubflowRecover")
+ 	if [ -z "$stale_nr" ] || [ -z "$recover_nr" ]; then
+ 		print_skip
+ 	elif [ $stale_nr -lt $stale_min ] ||
+@@ -1594,7 +1571,7 @@ chk_add_nr()
+ 	timeout=$(ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout)
+ 
+ 	print_check "add"
+-	count=$(get_counter ${ns2} "MPTcpExtAddAddr")
++	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtAddAddr")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	# if the test configured a short timeout tolerate greater then expected
+@@ -1606,7 +1583,7 @@ chk_add_nr()
+ 	fi
+ 
+ 	print_check "echo"
+-	count=$(get_counter ${ns1} "MPTcpExtEchoAdd")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtEchoAdd")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$echo_nr" ]; then
+@@ -1617,7 +1594,7 @@ chk_add_nr()
+ 
+ 	if [ $port_nr -gt 0 ]; then
+ 		print_check "pt"
+-		count=$(get_counter ${ns2} "MPTcpExtPortAdd")
++		count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtPortAdd")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$port_nr" ]; then
+@@ -1627,7 +1604,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "syn"
+-		count=$(get_counter ${ns1} "MPTcpExtMPJoinPortSynRx")
++		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinPortSynRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$syn_nr" ]; then
+@@ -1638,7 +1615,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "synack"
+-		count=$(get_counter ${ns2} "MPTcpExtMPJoinPortSynAckRx")
++		count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtMPJoinPortSynAckRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$syn_ack_nr" ]; then
+@@ -1649,7 +1626,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "ack"
+-		count=$(get_counter ${ns1} "MPTcpExtMPJoinPortAckRx")
++		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPJoinPortAckRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$ack_nr" ]; then
+@@ -1660,7 +1637,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "syn"
+-		count=$(get_counter ${ns1} "MPTcpExtMismatchPortSynRx")
++		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMismatchPortSynRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$mis_syn_nr" ]; then
+@@ -1671,7 +1648,7 @@ chk_add_nr()
+ 		fi
+ 
+ 		print_check "ack"
+-		count=$(get_counter ${ns1} "MPTcpExtMismatchPortAckRx")
++		count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMismatchPortAckRx")
+ 		if [ -z "$count" ]; then
+ 			print_skip
+ 		elif [ "$count" != "$mis_ack_nr" ]; then
+@@ -1693,7 +1670,7 @@ chk_add_tx_nr()
+ 	timeout=$(ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout)
+ 
+ 	print_check "add TX"
+-	count=$(get_counter ${ns1} "MPTcpExtAddAddrTx")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtAddAddrTx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	# if the test configured a short timeout tolerate greater then expected
+@@ -1705,7 +1682,7 @@ chk_add_tx_nr()
+ 	fi
+ 
+ 	print_check "echo TX"
+-	count=$(get_counter ${ns2} "MPTcpExtEchoAddTx")
++	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtEchoAddTx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$echo_tx_nr" ]; then
+@@ -1743,7 +1720,7 @@ chk_rm_nr()
+ 	fi
+ 
+ 	print_check "rm"
+-	count=$(get_counter ${addr_ns} "MPTcpExtRmAddr")
++	count=$(mptcp_lib_get_counter ${addr_ns} "MPTcpExtRmAddr")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$rm_addr_nr" ]; then
+@@ -1753,13 +1730,13 @@ chk_rm_nr()
+ 	fi
+ 
+ 	print_check "rmsf"
+-	count=$(get_counter ${subflow_ns} "MPTcpExtRmSubflow")
++	count=$(mptcp_lib_get_counter ${subflow_ns} "MPTcpExtRmSubflow")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ -n "$simult" ]; then
+ 		local cnt suffix
+ 
+-		cnt=$(get_counter ${addr_ns} "MPTcpExtRmSubflow")
++		cnt=$(mptcp_lib_get_counter ${addr_ns} "MPTcpExtRmSubflow")
+ 
+ 		# in case of simult flush, the subflow removal count on each side is
+ 		# unreliable
+@@ -1788,7 +1765,7 @@ chk_rm_tx_nr()
+ 	local rm_addr_tx_nr=$1
+ 
+ 	print_check "rm TX"
+-	count=$(get_counter ${ns2} "MPTcpExtRmAddrTx")
++	count=$(mptcp_lib_get_counter ${ns2} "MPTcpExtRmAddrTx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$rm_addr_tx_nr" ]; then
+@@ -1805,7 +1782,7 @@ chk_prio_nr()
+ 	local count
+ 
+ 	print_check "ptx"
+-	count=$(get_counter ${ns1} "MPTcpExtMPPrioTx")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPPrioTx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$mp_prio_nr_tx" ]; then
+@@ -1815,7 +1792,7 @@ chk_prio_nr()
+ 	fi
+ 
+ 	print_check "prx"
+-	count=$(get_counter ${ns1} "MPTcpExtMPPrioRx")
++	count=$(mptcp_lib_get_counter ${ns1} "MPTcpExtMPPrioRx")
+ 	if [ -z "$count" ]; then
+ 		print_skip
+ 	elif [ "$count" != "$mp_prio_nr_rx" ]; then
+@@ -1915,7 +1892,7 @@ wait_attempt_fail()
+ 	while [ $time -lt $timeout_ms ]; do
+ 		local cnt
+ 
+-		cnt=$(get_counter ${ns} "TcpAttemptFails")
++		cnt=$(mptcp_lib_get_counter ${ns} "TcpAttemptFails")
+ 
+ 		[ "$cnt" = 1 ] && return 1
+ 		time=$((time + 100))
+@@ -3433,7 +3410,7 @@ userspace_tests()
+ 		chk_rm_nr 1 1 invert
+ 		chk_mptcp_info subflows 0 subflows 0
+ 		kill_events_pids
+-		wait $tests_pid
++		mptcp_lib_kill_wait $tests_pid
+ 	fi
+ 
+ 	# userspace pm create destroy subflow
+@@ -3452,7 +3429,7 @@ userspace_tests()
+ 		chk_rm_nr 1 1
+ 		chk_mptcp_info subflows 0 subflows 0
+ 		kill_events_pids
+-		wait $tests_pid
++		mptcp_lib_kill_wait $tests_pid
+ 	fi
+ }
+ 
+@@ -3466,7 +3443,8 @@ endpoint_tests()
+ 		pm_nl_set_limits $ns2 2 2
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+ 		speed=slow \
+-			run_tests $ns1 $ns2 10.0.1.1 2>/dev/null &
++			run_tests $ns1 $ns2 10.0.1.1 &
++		local tests_pid=$!
+ 
+ 		wait_mpj $ns1
+ 		pm_nl_check_endpoint "creation" \
+@@ -3481,7 +3459,7 @@ endpoint_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 flags signal
+ 		pm_nl_check_endpoint "modif is allowed" \
+ 			$ns2 10.0.2.2 id 1 flags signal
+-		kill_tests_wait
++		mptcp_lib_kill_wait $tests_pid
+ 	fi
+ 
+ 	if reset "delete and re-add" &&
+@@ -3490,7 +3468,8 @@ endpoint_tests()
+ 		pm_nl_set_limits $ns2 1 1
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+ 		test_linkfail=4 speed=20 \
+-			run_tests $ns1 $ns2 10.0.1.1 2>/dev/null &
++			run_tests $ns1 $ns2 10.0.1.1 &
++		local tests_pid=$!
+ 
+ 		wait_mpj $ns2
+ 		chk_subflow_nr "before delete" 2
+@@ -3505,7 +3484,7 @@ endpoint_tests()
+ 		wait_mpj $ns2
+ 		chk_subflow_nr "after re-add" 2
+ 		chk_mptcp_info subflows 1 subflows 1
+-		kill_tests_wait
++		mptcp_lib_kill_wait $tests_pid
+ 	fi
+ }
+ 
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+index 4cd4297ca86de..2b10f200de402 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_lib.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -216,3 +216,19 @@ mptcp_lib_kill_wait() {
+ 	kill "${1}" > /dev/null 2>&1
+ 	wait "${1}" 2>/dev/null
+ }
++
++# $1: ns, $2: MIB counter
++mptcp_lib_get_counter() {
++	local ns="${1}"
++	local counter="${2}"
++	local count
++
++	count=$(ip netns exec "${ns}" nstat -asz "${counter}" |
++		awk 'NR==1 {next} {print $2}')
++	if [ -z "${count}" ]; then
++		mptcp_lib_fail_if_expected_feature "${counter} counter"
++		return 1
++	fi
++
++	echo "${count}"
++}
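+# Editor's note, not part of the upstream patch: a quick sketch of what the
+# helper above consumes. "nstat -asz <counter>" prints a "#kernel" header
+# line followed by "<counter> <value> <rate>" rows, e.g.
+#
+#     #kernel
+#     MPTcpExtMPJoinSynRx             2                  0.0
+#
+# so the awk program skips line 1 (NR==1) and prints column 2. Callers are
+# expected to treat an empty result as "feature not supported", roughly:
+#
+#     count=$(mptcp_lib_get_counter "${ns}" "MPTcpExtMPJoinSynRx")
+#     [ -n "${count}" ] || return 1   # counter unavailable on this kernel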
+diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+index 8f4ff123a7eb9..71899a3ffa7a9 100755
+--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+@@ -183,7 +183,7 @@ check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+ subflow 10.0.1.1" "          (nobackup)"
+ 
+ # fullmesh support has been added later
+-ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh
++ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh 2>/dev/null
+ if ip netns exec $ns1 ./pm_nl_ctl dump | grep -q "fullmesh" ||
+    mptcp_lib_expect_all_features; then
+ 	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+@@ -194,6 +194,12 @@ subflow 10.0.1.1" "          (nofullmesh)"
+ 	ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh
+ 	check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \
+ subflow,backup,fullmesh 10.0.1.1" "          (backup,fullmesh)"
++else
++	for st in fullmesh nofullmesh backup,fullmesh; do
++		st="          (${st})"
++		printf "%-50s%s\n" "${st}" "[SKIP]"
++		mptcp_lib_result_skip "${st}"
++	done
+ fi
+ 
+ mptcp_lib_result_print_all_tap
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index ce9203b817f88..9096bf5794888 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -267,7 +267,8 @@ run_test()
+ 		[ $bail -eq 0 ] || exit $ret
+ 	fi
+ 
+-	printf "%-60s" "$msg - reverse direction"
++	msg+=" - reverse direction"
++	printf "%-60s" "${msg}"
+ 	do_transfer $large $small $time
+ 	lret=$?
+ 	mptcp_lib_result_code "${lret}" "${msg}"
+diff --git a/tools/testing/selftests/net/mptcp/userspace_pm.sh b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+index c44bf5c7c6e04..0e748068ee95e 100755
+--- a/tools/testing/selftests/net/mptcp/userspace_pm.sh
++++ b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+@@ -75,7 +75,7 @@ print_test()
+ {
+ 	test_name="${1}"
+ 
+-	_printf "%-63s" "${test_name}"
++	_printf "%-68s" "${test_name}"
+ }
+ 
+ print_results()
+@@ -555,7 +555,7 @@ verify_subflow_events()
+ 	local remid
+ 	local info
+ 
+-	info="${e_saddr} (${e_from}) => ${e_daddr} (${e_to})"
++	info="${e_saddr} (${e_from}) => ${e_daddr}:${e_dport} (${e_to})"
+ 
+ 	if [ "$e_type" = "$SUB_ESTABLISHED" ]
+ 	then
+@@ -887,9 +887,10 @@ test_prio()
+ 
+ 	# Check TX
+ 	print_test "MP_PRIO TX"
+-	count=$(ip netns exec "$ns2" nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ $count != 1 ]; then
++	count=$(mptcp_lib_get_counter "$ns2" "MPTcpExtMPPrioTx")
++	if [ -z "$count" ]; then
++		test_skip
++	elif [ $count != 1 ]; then
+ 		test_fail "Count != 1: ${count}"
+ 	else
+ 		test_pass
+@@ -897,9 +898,10 @@ test_prio()
+ 
+ 	# Check RX
+ 	print_test "MP_PRIO RX"
+-	count=$(ip netns exec "$ns1" nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}')
+-	[ -z "$count" ] && count=0
+-	if [ $count != 1 ]; then
++	count=$(mptcp_lib_get_counter "$ns1" "MPTcpExtMPPrioRx")
++	if [ -z "$count" ]; then
++		test_skip
++	elif [ $count != 1 ]; then
+ 		test_fail "Count != 1: ${count}"
+ 	else
+ 		test_pass
+diff --git a/tools/testing/selftests/riscv/hwprobe/cbo.c b/tools/testing/selftests/riscv/hwprobe/cbo.c
+index c000942c80483..c537d52fafc58 100644
+--- a/tools/testing/selftests/riscv/hwprobe/cbo.c
++++ b/tools/testing/selftests/riscv/hwprobe/cbo.c
+@@ -95,7 +95,7 @@ static void test_zicboz(void *arg)
+ 	block_size = pair.value;
+ 	ksft_test_result(rc == 0 && pair.key == RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE &&
+ 			 is_power_of_2(block_size), "Zicboz block size\n");
+-	ksft_print_msg("Zicboz block size: %ld\n", block_size);
++	ksft_print_msg("Zicboz block size: %llu\n", block_size);
+ 
+ 	illegal_insn = false;
+ 	cbo_zero(&mem[block_size]);
+@@ -119,7 +119,7 @@ static void test_zicboz(void *arg)
+ 		for (j = 0; j < block_size; ++j) {
+ 			if (mem[i * block_size + j] != expected) {
+ 				ksft_test_result_fail("cbo.zero check\n");
+-				ksft_print_msg("cbo.zero check: mem[%d] != 0x%x\n",
++				ksft_print_msg("cbo.zero check: mem[%llu] != 0x%x\n",
+ 					       i * block_size + j, expected);
+ 				return;
+ 			}
+@@ -199,7 +199,7 @@ int main(int argc, char **argv)
+ 	pair.key = RISCV_HWPROBE_KEY_IMA_EXT_0;
+ 	rc = riscv_hwprobe(&pair, 1, sizeof(cpu_set_t), (unsigned long *)&cpus, 0);
+ 	if (rc < 0)
+-		ksft_exit_fail_msg("hwprobe() failed with %d\n", rc);
++		ksft_exit_fail_msg("hwprobe() failed with %ld\n", rc);
+ 	assert(rc == 0 && pair.key == RISCV_HWPROBE_KEY_IMA_EXT_0);
+ 
+ 	if (pair.value & RISCV_HWPROBE_EXT_ZICBOZ) {
+diff --git a/tools/testing/selftests/riscv/hwprobe/hwprobe.c b/tools/testing/selftests/riscv/hwprobe/hwprobe.c
+index c474891df3071..abb825811c70e 100644
+--- a/tools/testing/selftests/riscv/hwprobe/hwprobe.c
++++ b/tools/testing/selftests/riscv/hwprobe/hwprobe.c
+@@ -29,7 +29,7 @@ int main(int argc, char **argv)
+ 		/* Fail if the kernel claims not to recognize a base key. */
+ 		if ((i < 4) && (pairs[i].key != i))
+ 			ksft_exit_fail_msg("Failed to recognize base key: key != i, "
+-					   "key=%ld, i=%ld\n", pairs[i].key, i);
++					   "key=%lld, i=%ld\n", pairs[i].key, i);
+ 
+ 		if (pairs[i].key != RISCV_HWPROBE_KEY_BASE_BEHAVIOR)
+ 			continue;
+@@ -37,7 +37,7 @@ int main(int argc, char **argv)
+ 		if (pairs[i].value & RISCV_HWPROBE_BASE_BEHAVIOR_IMA)
+ 			continue;
+ 
+-		ksft_exit_fail_msg("Unexpected pair: (%ld, %ld)\n", pairs[i].key, pairs[i].value);
++		ksft_exit_fail_msg("Unexpected pair: (%lld, %llu)\n", pairs[i].key, pairs[i].value);
+ 	}
+ 
+ 	out = riscv_hwprobe(pairs, 8, 0, 0, 0);
+diff --git a/tools/testing/selftests/riscv/mm/mmap_test.h b/tools/testing/selftests/riscv/mm/mmap_test.h
+index 9b8434f62f570..2e0db9c5be6c3 100644
+--- a/tools/testing/selftests/riscv/mm/mmap_test.h
++++ b/tools/testing/selftests/riscv/mm/mmap_test.h
+@@ -18,6 +18,8 @@ struct addresses {
+ 	int *on_56_addr;
+ };
+ 
++// Only works on 64 bit
++#if __riscv_xlen == 64
+ static inline void do_mmaps(struct addresses *mmap_addresses)
+ {
+ 	/*
+@@ -50,6 +52,7 @@ static inline void do_mmaps(struct addresses *mmap_addresses)
+ 	mmap_addresses->on_56_addr =
+ 		mmap(on_56_bits, 5 * sizeof(int), prot, flags, 0, 0);
+ }
++#endif /* __riscv_xlen == 64 */
+ 
+ static inline int memory_layout(void)
+ {
+diff --git a/tools/testing/selftests/riscv/vector/v_initval_nolibc.c b/tools/testing/selftests/riscv/vector/v_initval_nolibc.c
+index 66764edb0d526..1dd94197da30c 100644
+--- a/tools/testing/selftests/riscv/vector/v_initval_nolibc.c
++++ b/tools/testing/selftests/riscv/vector/v_initval_nolibc.c
+@@ -27,7 +27,7 @@ int main(void)
+ 
+ 	datap = malloc(MAX_VSIZE);
+ 	if (!datap) {
+-		ksft_test_result_fail("fail to allocate memory for size = %lu\n", MAX_VSIZE);
++		ksft_test_result_fail("fail to allocate memory for size = %d\n", MAX_VSIZE);
+ 		exit(-1);
+ 	}
+ 
+diff --git a/tools/testing/selftests/riscv/vector/vstate_prctl.c b/tools/testing/selftests/riscv/vector/vstate_prctl.c
+index b348b475be570..8ad94e08ff4d0 100644
+--- a/tools/testing/selftests/riscv/vector/vstate_prctl.c
++++ b/tools/testing/selftests/riscv/vector/vstate_prctl.c
+@@ -68,7 +68,7 @@ int test_and_compare_child(long provided, long expected, int inherit)
+ 	}
+ 	rc = launch_test(inherit);
+ 	if (rc != expected) {
+-		ksft_test_result_fail("Test failed, check %d != %d\n", rc,
++		ksft_test_result_fail("Test failed, check %d != %ld\n", rc,
+ 				      expected);
+ 		return -2;
+ 	}
+@@ -87,7 +87,7 @@ int main(void)
+ 	pair.key = RISCV_HWPROBE_KEY_IMA_EXT_0;
+ 	rc = riscv_hwprobe(&pair, 1, 0, NULL, 0);
+ 	if (rc < 0) {
+-		ksft_test_result_fail("hwprobe() failed with %d\n", rc);
++		ksft_test_result_fail("hwprobe() failed with %ld\n", rc);
+ 		return -1;
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-03-02 22:36 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-03-02 22:36 UTC (permalink / raw
  To: gentoo-commits

commit:     917982656114eea0e5d207b9d0a3ef662ca9fc0d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar  2 22:36:08 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar  2 22:36:08 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=91798265

Linux patch 6.7.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |  4 ++++
 1007_linux-6.7.8.patch | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/0000_README b/0000_README
index 178ef64f..83109cbf 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.7.7.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.7
 
+Patch:  1007_linux-6.7.8.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.8
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1007_linux-6.7.8.patch b/1007_linux-6.7.8.patch
new file mode 100644
index 00000000..fd9c8745
--- /dev/null
+++ b/1007_linux-6.7.8.patch
@@ -0,0 +1,33 @@
+diff --git a/Makefile b/Makefile
+index 72ee3609aae34..6569f2255d500 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 3b42938a9d3b2..7f27382e0ce25 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -2457,7 +2457,6 @@ int ni_read_frame(struct ntfs_inode *ni, u64 frame_vbo, struct page **pages,
+ 	struct ATTR_LIST_ENTRY *le = NULL;
+ 	struct runs_tree *run = &ni->file.run;
+ 	u64 valid_size = ni->i_valid;
+-	loff_t i_size = i_size_read(&ni->vfs_inode);
+ 	u64 vbo_disk;
+ 	size_t unc_size;
+ 	u32 frame_size, i, npages_disk, ondisk_size;
+@@ -2509,6 +2508,7 @@ int ni_read_frame(struct ntfs_inode *ni, u64 frame_vbo, struct page **pages,
+ 		err = -EOPNOTSUPP;
+ 		goto out1;
+ #else
++		loff_t i_size = i_size_read(&ni->vfs_inode);
+ 		u32 frame_bits = ni_ext_compress_bits(ni);
+ 		u64 frame64 = frame_vbo >> frame_bits;
+ 		u64 frames, vbo_data;


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-03-06 18:06 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-03-06 18:06 UTC (permalink / raw
  To: gentoo-commits

commit:     bd52fcff26bd976bcfa841d3893ec128f4241998
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar  6 18:06:18 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar  6 18:06:18 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bd52fcff

Linux patch 6.7.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1008_linux-6.7.9.patch | 6129 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6133 insertions(+)

diff --git a/0000_README b/0000_README
index 83109cbf..6b6b26d5 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-6.7.8.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.8
 
+Patch:  1008_linux-6.7.9.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.9
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1008_linux-6.7.9.patch b/1008_linux-6.7.9.patch
new file mode 100644
index 00000000..04b17769
--- /dev/null
+++ b/1008_linux-6.7.9.patch
@@ -0,0 +1,6129 @@
+diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
+index e73fdff62c0aa..c58c72362911c 100644
+--- a/Documentation/arch/x86/mds.rst
++++ b/Documentation/arch/x86/mds.rst
+@@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:
+ 
+     mds_clear_cpu_buffers()
+ 
++The CLEAR_CPU_BUFFERS macro can also be used in ASM late in the exit-to-user
++path. Other than CFLAGS.ZF, this macro doesn't clobber any registers.
++
+ The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
+ (idle) transitions.
+ 
+@@ -138,17 +141,30 @@ Mitigation points
+ 
+    When transitioning from kernel to user space the CPU buffers are flushed
+    on affected CPUs when the mitigation is not disabled on the kernel
+-   command line. The migitation is enabled through the static key
+-   mds_user_clear.
+-
+-   The mitigation is invoked in prepare_exit_to_usermode() which covers
+-   all but one of the kernel to user space transitions.  The exception
+-   is when we return from a Non Maskable Interrupt (NMI), which is
+-   handled directly in do_nmi().
+-
+-   (The reason that NMI is special is that prepare_exit_to_usermode() can
+-    enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
+-    enable IRQs with NMIs blocked.)
++   command line. The mitigation is enabled through the feature flag
++   X86_FEATURE_CLEAR_CPU_BUF.
++
++   The mitigation is invoked just before transitioning to userspace after
++   user registers are restored. This is done to minimize the window in
++   which kernel data could still be accessed after the VERW, e.g. via an
++   NMI arriving right after it.
++
++   **Corner case not handled**
++   Interrupts returning to kernel don't clear CPU buffers since the
++   exit-to-user path is expected to do that anyway. But there could be
++   a case when an NMI is generated in kernel after the exit-to-user path
++   has cleared the buffers. This case is not handled, and an NMI returning
++   to kernel doesn't clear CPU buffers because:
++
++   1. It is rare to get an NMI after VERW, but before returning to userspace.
++   2. For an unprivileged user, there is no known way to make that NMI
++      less rare or target it.
++   3. It would take a large number of these precisely-timed NMIs to mount
++      an actual attack.  There's presumably not enough bandwidth.
++   4. The NMI in question occurs after a VERW, i.e. when user state is
++      restored and most interesting data is already scrubbed. What's left
++      is only the data that NMI touches, and that may or may not be of
++      any interest.
+ 
+ 
+ 2. C-State transition
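+Editor's sketch (an assumption, not taken from this patch): the
+CLEAR_CPU_BUFFERS macro referenced above is expected to reduce to a single
+VERW gated on X86_FEATURE_CLEAR_CPU_BUF, approximately:
+
+    /* illustrative only; the macro shape and selector symbol name are
+     * assumptions, not copied from the real asm/nospec-branch.h */
+    .macro CLEAR_CPU_BUFFERS
+            ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
+    .endm
+
+VERW takes a memory operand holding a segment selector and only sets
+CFLAGS.ZF, which matches the "doesn't clobber any registers" note above.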
+diff --git a/Makefile b/Makefile
+index 6569f2255d500..f1a592b7c7bc8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
+index bac4cabef6073..467ac2f768ac2 100644
+--- a/arch/arm64/crypto/aes-neonbs-glue.c
++++ b/arch/arm64/crypto/aes-neonbs-glue.c
+@@ -227,8 +227,19 @@ static int ctr_encrypt(struct skcipher_request *req)
+ 			src += blocks * AES_BLOCK_SIZE;
+ 		}
+ 		if (nbytes && walk.nbytes == walk.total) {
++			u8 buf[AES_BLOCK_SIZE];
++			u8 *d = dst;
++
++			if (unlikely(nbytes < AES_BLOCK_SIZE))
++				src = dst = memcpy(buf + sizeof(buf) - nbytes,
++						   src, nbytes);
++
+ 			neon_aes_ctr_encrypt(dst, src, ctx->enc, ctx->key.rounds,
+ 					     nbytes, walk.iv);
++
++			if (unlikely(nbytes < AES_BLOCK_SIZE))
++				memcpy(d, dst, nbytes);
++
+ 			nbytes = 0;
+ 		}
+ 		kernel_neon_end();
+diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
+index c697c3c746946..33024a2874a69 100644
+--- a/arch/powerpc/include/asm/rtas.h
++++ b/arch/powerpc/include/asm/rtas.h
+@@ -68,7 +68,7 @@ enum rtas_function_index {
+ 	RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE,
+ 	RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE2,
+ 	RTAS_FNIDX__IBM_REMOVE_PE_DMA_WINDOW,
+-	RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOWS,
++	RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOW,
+ 	RTAS_FNIDX__IBM_SCAN_LOG_DUMP,
+ 	RTAS_FNIDX__IBM_SET_DYNAMIC_INDICATOR,
+ 	RTAS_FNIDX__IBM_SET_EEH_OPTION,
+@@ -163,7 +163,7 @@ typedef struct {
+ #define RTAS_FN_IBM_READ_SLOT_RESET_STATE         rtas_fn_handle(RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE)
+ #define RTAS_FN_IBM_READ_SLOT_RESET_STATE2        rtas_fn_handle(RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE2)
+ #define RTAS_FN_IBM_REMOVE_PE_DMA_WINDOW          rtas_fn_handle(RTAS_FNIDX__IBM_REMOVE_PE_DMA_WINDOW)
+-#define RTAS_FN_IBM_RESET_PE_DMA_WINDOWS          rtas_fn_handle(RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOWS)
++#define RTAS_FN_IBM_RESET_PE_DMA_WINDOW           rtas_fn_handle(RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOW)
+ #define RTAS_FN_IBM_SCAN_LOG_DUMP                 rtas_fn_handle(RTAS_FNIDX__IBM_SCAN_LOG_DUMP)
+ #define RTAS_FN_IBM_SET_DYNAMIC_INDICATOR         rtas_fn_handle(RTAS_FNIDX__IBM_SET_DYNAMIC_INDICATOR)
+ #define RTAS_FN_IBM_SET_EEH_OPTION                rtas_fn_handle(RTAS_FNIDX__IBM_SET_EEH_OPTION)
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 87d65bdd3ecae..46b9476d75824 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -310,8 +310,13 @@ static struct rtas_function rtas_function_table[] __ro_after_init = {
+ 	[RTAS_FNIDX__IBM_REMOVE_PE_DMA_WINDOW] = {
+ 		.name = "ibm,remove-pe-dma-window",
+ 	},
+-	[RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOWS] = {
+-		.name = "ibm,reset-pe-dma-windows",
++	[RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOW] = {
++		/*
++		 * Note: PAPR+ v2.13 7.3.31.4.1 spells this as
++		 * "ibm,reset-pe-dma-windows" (plural), but RTAS
++		 * implementations use the singular form in practice.
++		 */
++		.name = "ibm,reset-pe-dma-window",
+ 	},
+ 	[RTAS_FNIDX__IBM_SCAN_LOG_DUMP] = {
+ 		.name = "ibm,scan-log-dump",
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index 496e16c588aaa..e8c4129697b14 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -574,29 +574,6 @@ static void iommu_table_setparms(struct pci_controller *phb,
+ 
+ struct iommu_table_ops iommu_table_lpar_multi_ops;
+ 
+-/*
+- * iommu_table_setparms_lpar
+- *
+- * Function: On pSeries LPAR systems, return TCE table info, given a pci bus.
+- */
+-static void iommu_table_setparms_lpar(struct pci_controller *phb,
+-				      struct device_node *dn,
+-				      struct iommu_table *tbl,
+-				      struct iommu_table_group *table_group,
+-				      const __be32 *dma_window)
+-{
+-	unsigned long offset, size, liobn;
+-
+-	of_parse_dma_window(dn, dma_window, &liobn, &offset, &size);
+-
+-	iommu_table_setparms_common(tbl, phb->bus->number, liobn, offset, size, IOMMU_PAGE_SHIFT_4K, NULL,
+-				    &iommu_table_lpar_multi_ops);
+-
+-
+-	table_group->tce32_start = offset;
+-	table_group->tce32_size = size;
+-}
+-
+ struct iommu_table_ops iommu_table_pseries_ops = {
+ 	.set = tce_build_pSeries,
+ 	.clear = tce_free_pSeries,
+@@ -724,26 +701,71 @@ struct iommu_table_ops iommu_table_lpar_multi_ops = {
+  * dynamic 64bit DMA window, walking up the device tree.
+  */
+ static struct device_node *pci_dma_find(struct device_node *dn,
+-					const __be32 **dma_window)
++					struct dynamic_dma_window_prop *prop)
+ {
+-	const __be32 *dw = NULL;
++	const __be32 *default_prop = NULL;
++	const __be32 *ddw_prop = NULL;
++	struct device_node *rdn = NULL;
++	bool default_win = false, ddw_win = false;
+ 
+ 	for ( ; dn && PCI_DN(dn); dn = dn->parent) {
+-		dw = of_get_property(dn, "ibm,dma-window", NULL);
+-		if (dw) {
+-			if (dma_window)
+-				*dma_window = dw;
+-			return dn;
++		default_prop = of_get_property(dn, "ibm,dma-window", NULL);
++		if (default_prop) {
++			rdn = dn;
++			default_win = true;
++		}
++		ddw_prop = of_get_property(dn, DIRECT64_PROPNAME, NULL);
++		if (ddw_prop) {
++			rdn = dn;
++			ddw_win = true;
++			break;
++		}
++		ddw_prop = of_get_property(dn, DMA64_PROPNAME, NULL);
++		if (ddw_prop) {
++			rdn = dn;
++			ddw_win = true;
++			break;
+ 		}
+-		dw = of_get_property(dn, DIRECT64_PROPNAME, NULL);
+-		if (dw)
+-			return dn;
+-		dw = of_get_property(dn, DMA64_PROPNAME, NULL);
+-		if (dw)
+-			return dn;
++
++		/* At least found default window, which is the case for normal boot */
++		if (default_win)
++			break;
+ 	}
+ 
+-	return NULL;
++	/* For PCI devices there will always be a DMA window, either on the device
++	 * or parent bus
++	 */
++	WARN_ON(!(default_win | ddw_win));
++
++	/* caller doesn't want to get DMA window property */
++	if (!prop)
++		return rdn;
++
++	/* Parse the DMA window property. During normal system boot, only the
++	 * default DMA window is passed in OF. But for kdump, a dedicated adapter
++	 * might have both the default window and DDW in the FDT. In this
++	 * scenario, DDW takes precedence over the default window.
++	 */
++	if (ddw_win) {
++		struct dynamic_dma_window_prop *p;
++
++		p = (struct dynamic_dma_window_prop *)ddw_prop;
++		prop->liobn = p->liobn;
++		prop->dma_base = p->dma_base;
++		prop->tce_shift = p->tce_shift;
++		prop->window_shift = p->window_shift;
++	} else if (default_win) {
++		unsigned long offset, size, liobn;
++
++		of_parse_dma_window(rdn, default_prop, &liobn, &offset, &size);
++
++		prop->liobn = cpu_to_be32((u32)liobn);
++		prop->dma_base = cpu_to_be64(offset);
++		prop->tce_shift = cpu_to_be32(IOMMU_PAGE_SHIFT_4K);
++		prop->window_shift = cpu_to_be32(order_base_2(size));
++	}
++
++	return rdn;
+ }
+ 
+ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
+@@ -751,17 +773,20 @@ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
+ 	struct iommu_table *tbl;
+ 	struct device_node *dn, *pdn;
+ 	struct pci_dn *ppci;
+-	const __be32 *dma_window = NULL;
++	struct dynamic_dma_window_prop prop;
+ 
+ 	dn = pci_bus_to_OF_node(bus);
+ 
+ 	pr_debug("pci_dma_bus_setup_pSeriesLP: setting up bus %pOF\n",
+ 		 dn);
+ 
+-	pdn = pci_dma_find(dn, &dma_window);
++	pdn = pci_dma_find(dn, &prop);
+ 
+-	if (dma_window == NULL)
+-		pr_debug("  no ibm,dma-window property !\n");
++	/* In the PPC architecture, there will always be a DMA window on the bus
++	 * or on one of the parent buses. During reboot, there will be an
++	 * ibm,dma-window property to define the DMA window. For kdump, there
++	 * will be at least the default window, DDW, or both.
++	 */
+ 
+ 	ppci = PCI_DN(pdn);
+ 
+@@ -771,13 +796,24 @@ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
+ 	if (!ppci->table_group) {
+ 		ppci->table_group = iommu_pseries_alloc_group(ppci->phb->node);
+ 		tbl = ppci->table_group->tables[0];
+-		if (dma_window) {
+-			iommu_table_setparms_lpar(ppci->phb, pdn, tbl,
+-						  ppci->table_group, dma_window);
+ 
+-			if (!iommu_init_table(tbl, ppci->phb->node, 0, 0))
+-				panic("Failed to initialize iommu table");
+-		}
++		iommu_table_setparms_common(tbl, ppci->phb->bus->number,
++				be32_to_cpu(prop.liobn),
++				be64_to_cpu(prop.dma_base),
++				1ULL << be32_to_cpu(prop.window_shift),
++				be32_to_cpu(prop.tce_shift), NULL,
++				&iommu_table_lpar_multi_ops);
++
++		/* Only for normal boot with the default window. It doesn't matter
++		 * even if we set these with DDW (which is 64-bit) during kdump,
++		 * since these will not be used during kdump.
++		 */
++		ppci->table_group->tce32_start = be64_to_cpu(prop.dma_base);
++		ppci->table_group->tce32_size = 1 << be32_to_cpu(prop.window_shift);
++
++		if (!iommu_init_table(tbl, ppci->phb->node, 0, 0))
++			panic("Failed to initialize iommu table");
++
+ 		iommu_register_group(ppci->table_group,
+ 				pci_domain_nr(bus), 0);
+ 		pr_debug("  created table: %p\n", ppci->table_group);
+@@ -968,6 +1004,12 @@ static void find_existing_ddw_windows_named(const char *name)
+ 			continue;
+ 		}
+ 
++		/* If at the time of system initialization, there are DDWs in OF,
++		 * it means this is during kexec. DDW could be direct or dynamic.
++		 * We will just mark DDWs as "dynamic" since this is kdump path,
++		 * no need to worry about performance. ddw_list_new_entry() will
++		 * set window->direct = false.
++		 */
+ 		window = ddw_list_new_entry(pdn, dma64);
+ 		if (!window) {
+ 			of_node_put(pdn);
+@@ -1524,8 +1566,8 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
+ {
+ 	struct device_node *pdn, *dn;
+ 	struct iommu_table *tbl;
+-	const __be32 *dma_window = NULL;
+ 	struct pci_dn *pci;
++	struct dynamic_dma_window_prop prop;
+ 
+ 	pr_debug("pci_dma_dev_setup_pSeriesLP: %s\n", pci_name(dev));
+ 
+@@ -1538,7 +1580,7 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
+ 	dn = pci_device_to_OF_node(dev);
+ 	pr_debug("  node is %pOF\n", dn);
+ 
+-	pdn = pci_dma_find(dn, &dma_window);
++	pdn = pci_dma_find(dn, &prop);
+ 	if (!pdn || !PCI_DN(pdn)) {
+ 		printk(KERN_WARNING "pci_dma_dev_setup_pSeriesLP: "
+ 		       "no DMA window found for pci dev=%s dn=%pOF\n",
+@@ -1551,8 +1593,20 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
+ 	if (!pci->table_group) {
+ 		pci->table_group = iommu_pseries_alloc_group(pci->phb->node);
+ 		tbl = pci->table_group->tables[0];
+-		iommu_table_setparms_lpar(pci->phb, pdn, tbl,
+-				pci->table_group, dma_window);
++
++		iommu_table_setparms_common(tbl, pci->phb->bus->number,
++				be32_to_cpu(prop.liobn),
++				be64_to_cpu(prop.dma_base),
++				1ULL << be32_to_cpu(prop.window_shift),
++				be32_to_cpu(prop.tce_shift), NULL,
++				&iommu_table_lpar_multi_ops);
++
++		/* Only for normal boot with the default window. It doesn't matter
++		 * even if we set these with DDW (which is 64-bit) during kdump,
++		 * since these will not be used during kdump.
++		 */
++		pci->table_group->tce32_start = be64_to_cpu(prop.dma_base);
++		pci->table_group->tce32_size = 1 << be32_to_cpu(prop.window_shift);
+ 
+ 		iommu_init_table(tbl, pci->phb->node, 0, 0);
+ 		iommu_register_group(pci->table_group,
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index cd4c9a204d08c..f6df30e8d18e6 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -294,7 +294,6 @@ config AS_HAS_OPTION_ARCH
+ 	# https://reviews.llvm.org/D123515
+ 	def_bool y
+ 	depends on $(as-instr, .option arch$(comma) +m)
+-	depends on !$(as-instr, .option arch$(comma) -i)
+ 
+ source "arch/riscv/Kconfig.socs"
+ source "arch/riscv/Kconfig.errata"
+diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
+index 306a19a5509c1..0f2e68f4e813b 100644
+--- a/arch/riscv/include/asm/csr.h
++++ b/arch/riscv/include/asm/csr.h
+@@ -415,6 +415,7 @@
+ # define CSR_STATUS	CSR_MSTATUS
+ # define CSR_IE		CSR_MIE
+ # define CSR_TVEC	CSR_MTVEC
++# define CSR_ENVCFG	CSR_MENVCFG
+ # define CSR_SCRATCH	CSR_MSCRATCH
+ # define CSR_EPC	CSR_MEPC
+ # define CSR_CAUSE	CSR_MCAUSE
+@@ -439,6 +440,7 @@
+ # define CSR_STATUS	CSR_SSTATUS
+ # define CSR_IE		CSR_SIE
+ # define CSR_TVEC	CSR_STVEC
++# define CSR_ENVCFG	CSR_SENVCFG
+ # define CSR_SCRATCH	CSR_SSCRATCH
+ # define CSR_EPC	CSR_SEPC
+ # define CSR_CAUSE	CSR_SCAUSE
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index 2b2f5df7ef2c7..42777f91a9c58 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -25,6 +25,11 @@
+ 
+ #define ARCH_SUPPORTS_FTRACE_OPS 1
+ #ifndef __ASSEMBLY__
++
++extern void *return_address(unsigned int level);
++
++#define ftrace_return_address(n) return_address(n)
++
+ void MCOUNT_NAME(void);
+ static inline unsigned long ftrace_call_adjust(unsigned long addr)
+ {
+diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
+index 20f9c3ba23414..22deb7a2a6ec4 100644
+--- a/arch/riscv/include/asm/hugetlb.h
++++ b/arch/riscv/include/asm/hugetlb.h
+@@ -11,8 +11,10 @@ static inline void arch_clear_hugepage_flags(struct page *page)
+ }
+ #define arch_clear_hugepage_flags arch_clear_hugepage_flags
+ 
++#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
+ bool arch_hugetlb_migration_supported(struct hstate *h);
+ #define arch_hugetlb_migration_supported arch_hugetlb_migration_supported
++#endif
+ 
+ #ifdef CONFIG_RISCV_ISA_SVNAPOT
+ #define __HAVE_ARCH_HUGE_PTE_CLEAR
+diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
+index d169a4f41a2e7..c80bb9990d32e 100644
+--- a/arch/riscv/include/asm/pgalloc.h
++++ b/arch/riscv/include/asm/pgalloc.h
+@@ -95,7 +95,13 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
+ 		__pud_free(mm, pud);
+ }
+ 
+-#define __pud_free_tlb(tlb, pud, addr)  pud_free((tlb)->mm, pud)
++#define __pud_free_tlb(tlb, pud, addr)					\
++do {									\
++	if (pgtable_l4_enabled) {					\
++		pagetable_pud_dtor(virt_to_ptdesc(pud));		\
++		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud));	\
++	}								\
++} while (0)
+ 
+ #define p4d_alloc_one p4d_alloc_one
+ static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
+@@ -124,7 +130,11 @@ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
+ 		__p4d_free(mm, p4d);
+ }
+ 
+-#define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
++#define __p4d_free_tlb(tlb, p4d, addr)					\
++do {									\
++	if (pgtable_l5_enabled)						\
++		tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(p4d));	\
++} while (0)
+ #endif /* __PAGETABLE_PMD_FOLDED */
+ 
+ static inline void sync_kernel_mappings(pgd_t *pgd)
+@@ -149,7 +159,11 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+ 
+ #ifndef __PAGETABLE_PMD_FOLDED
+ 
+-#define __pmd_free_tlb(tlb, pmd, addr)  pmd_free((tlb)->mm, pmd)
++#define __pmd_free_tlb(tlb, pmd, addr)				\
++do {								\
++	pagetable_pmd_dtor(virt_to_ptdesc(pmd));		\
++	tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd));	\
++} while (0)
+ 
+ #endif /* __PAGETABLE_PMD_FOLDED */
+ 
+diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
+index 9a2c780a11e95..783837bbd8783 100644
+--- a/arch/riscv/include/asm/pgtable-64.h
++++ b/arch/riscv/include/asm/pgtable-64.h
+@@ -136,7 +136,7 @@ enum napot_cont_order {
+  * 10010 - IO   Strongly-ordered, Non-cacheable, Non-bufferable, Shareable, Non-trustable
+  */
+ #define _PAGE_PMA_THEAD		((1UL << 62) | (1UL << 61) | (1UL << 60))
+-#define _PAGE_NOCACHE_THEAD	((1UL < 61) | (1UL << 60))
++#define _PAGE_NOCACHE_THEAD	((1UL << 61) | (1UL << 60))
+ #define _PAGE_IO_THEAD		((1UL << 63) | (1UL << 60))
+ #define _PAGE_MTMASK_THEAD	(_PAGE_PMA_THEAD | _PAGE_IO_THEAD | (1UL << 59))
+ 
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 74ffb2178f545..76b131e7bbcad 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -84,7 +84,7 @@
+  * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
+  * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
+  */
+-#define vmemmap		((struct page *)VMEMMAP_START)
++#define vmemmap		((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT))
+ 
+ #define PCI_IO_SIZE      SZ_16M
+ #define PCI_IO_END       VMEMMAP_START
+@@ -439,6 +439,10 @@ static inline pte_t pte_mkhuge(pte_t pte)
+ 	return pte;
+ }
+ 
++#define pte_leaf_size(pte)	(pte_napot(pte) ?				\
++					napot_cont_size(napot_cont_order(pte)) :\
++					PAGE_SIZE)
++
+ #ifdef CONFIG_NUMA_BALANCING
+ /*
+  * See the comment in include/asm-generic/pgtable.h
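+/*
+ * Editor's note (worked example, not part of the upstream patch): the
+ * vmemmap fix above keeps pfn_to_page()/page_to_pfn() consistent when RAM
+ * does not start at physical address 0. With pfn_to_page(pfn) expanding to
+ * vmemmap + pfn and 4K pages, e.g. phys_ram_base = 0x80000000:
+ *
+ *     vmemmap              = VMEMMAP_START - 0x80000
+ *     pfn_to_page(0x80000) = VMEMMAP_START   (the first struct page)
+ *
+ * so the first valid PFN lands at the start of the virtual memmap instead
+ * of (phys_ram_base >> PAGE_SHIFT) struct pages past it.
+ */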
+diff --git a/arch/riscv/include/asm/vmalloc.h b/arch/riscv/include/asm/vmalloc.h
+index 924d01b56c9a1..51f6dfe19745a 100644
+--- a/arch/riscv/include/asm/vmalloc.h
++++ b/arch/riscv/include/asm/vmalloc.h
+@@ -19,65 +19,6 @@ static inline bool arch_vmap_pmd_supported(pgprot_t prot)
+ 	return true;
+ }
+ 
+-#ifdef CONFIG_RISCV_ISA_SVNAPOT
+-#include <linux/pgtable.h>
++#endif
+ 
+-#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
+-static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, unsigned long end,
+-							 u64 pfn, unsigned int max_page_shift)
+-{
+-	unsigned long map_size = PAGE_SIZE;
+-	unsigned long size, order;
+-
+-	if (!has_svnapot())
+-		return map_size;
+-
+-	for_each_napot_order_rev(order) {
+-		if (napot_cont_shift(order) > max_page_shift)
+-			continue;
+-
+-		size = napot_cont_size(order);
+-		if (end - addr < size)
+-			continue;
+-
+-		if (!IS_ALIGNED(addr, size))
+-			continue;
+-
+-		if (!IS_ALIGNED(PFN_PHYS(pfn), size))
+-			continue;
+-
+-		map_size = size;
+-		break;
+-	}
+-
+-	return map_size;
+-}
+-
+-#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
+-static inline int arch_vmap_pte_supported_shift(unsigned long size)
+-{
+-	int shift = PAGE_SHIFT;
+-	unsigned long order;
+-
+-	if (!has_svnapot())
+-		return shift;
+-
+-	WARN_ON_ONCE(size >= PMD_SIZE);
+-
+-	for_each_napot_order_rev(order) {
+-		if (napot_cont_size(order) > size)
+-			continue;
+-
+-		if (!IS_ALIGNED(size, napot_cont_size(order)))
+-			continue;
+-
+-		shift = napot_cont_shift(order);
+-		break;
+-	}
+-
+-	return shift;
+-}
+-
+-#endif /* CONFIG_RISCV_ISA_SVNAPOT */
+-#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+ #endif /* _ASM_RISCV_VMALLOC_H */
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index fee22a3d1b534..40d054939ae25 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -7,6 +7,7 @@ ifdef CONFIG_FTRACE
+ CFLAGS_REMOVE_ftrace.o	= $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_patch.o	= $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_sbi.o	= $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_return_address.o	= $(CC_FLAGS_FTRACE)
+ endif
+ CFLAGS_syscall_table.o	+= $(call cc-option,-Wno-override-init,)
+ CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,)
+@@ -46,6 +47,7 @@ obj-y	+= irq.o
+ obj-y	+= process.o
+ obj-y	+= ptrace.o
+ obj-y	+= reset.o
++obj-y	+= return_address.o
+ obj-y	+= setup.o
+ obj-y	+= signal.o
+ obj-y	+= syscall_table.o
+diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
+index b3785ffc15703..867778813353c 100644
+--- a/arch/riscv/kernel/cpufeature.c
++++ b/arch/riscv/kernel/cpufeature.c
+@@ -22,6 +22,7 @@
+ #include <asm/hwprobe.h>
+ #include <asm/patch.h>
+ #include <asm/processor.h>
++#include <asm/sbi.h>
+ #include <asm/vector.h>
+ 
+ #include "copy-unaligned.h"
+@@ -401,6 +402,20 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap)
+ 			set_bit(RISCV_ISA_EXT_ZIHPM, isainfo->isa);
+ 		}
+ 
++		/*
++		 * "V" in ISA strings is ambiguous in practice: it should mean
++		 * just the standard V-1.0 but vendors aren't well behaved.
++		 * Many vendors with T-Head CPU cores which implement the 0.7.1
++		 * version of the vector specification put "v" into their DTs.
++		 * CPU cores with the ratified spec will contain non-zero
++		 * marchid.
++		 */
++		if (acpi_disabled && riscv_cached_mvendorid(cpu) == THEAD_VENDOR_ID &&
++		    riscv_cached_marchid(cpu) == 0x0) {
++			this_hwcap &= ~isa2hwcap[RISCV_ISA_EXT_v];
++			clear_bit(RISCV_ISA_EXT_v, isainfo->isa);
++		}
++
+ 		/*
+ 		 * All "okay" hart should have same isa. Set HWCAP based on
+ 		 * common capabilities of every "okay" hart, in case they don't
+@@ -725,7 +740,7 @@ arch_initcall(check_unaligned_access_all_cpus);
+ void riscv_user_isa_enable(void)
+ {
+ 	if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_ZICBOZ))
+-		csr_set(CSR_SENVCFG, ENVCFG_CBZE);
++		csr_set(CSR_ENVCFG, ENVCFG_CBZE);
+ }
+ 
+ #ifdef CONFIG_RISCV_ALTERNATIVE
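+/*
+ * Editor's sketch (hypothetical device-tree fragment, not from this patch):
+ * a T-Head core that advertises "v" in its ISA string while implementing
+ * the older 0.7.1 vector draft; with marchid == 0 the quirk above masks
+ * the V extension out of the hwcap:
+ *
+ *     cpu@0 {
+ *             compatible = "thead,c906", "riscv";
+ *             riscv,isa = "rv64imafdcv";
+ *     };
+ */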
+diff --git a/arch/riscv/kernel/return_address.c b/arch/riscv/kernel/return_address.c
+new file mode 100644
+index 0000000000000..c8115ec8fb304
+--- /dev/null
++++ b/arch/riscv/kernel/return_address.c
+@@ -0,0 +1,48 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * This code comes from arch/arm64/kernel/return_address.c
++ *
++ * Copyright (C) 2023 SiFive.
++ */
++
++#include <linux/export.h>
++#include <linux/kprobes.h>
++#include <linux/stacktrace.h>
++
++struct return_address_data {
++	unsigned int level;
++	void *addr;
++};
++
++static bool save_return_addr(void *d, unsigned long pc)
++{
++	struct return_address_data *data = d;
++
++	if (!data->level) {
++		data->addr = (void *)pc;
++		return false;
++	}
++
++	--data->level;
++
++	return true;
++}
++NOKPROBE_SYMBOL(save_return_addr);
++
++noinline void *return_address(unsigned int level)
++{
++	struct return_address_data data;
++
++	data.level = level + 3;
++	data.addr = NULL;
++
++	arch_stack_walk(save_return_addr, &data, current, NULL);
++
++	if (!data.level)
++		return data.addr;
++	else
++		return NULL;
++
++}
++EXPORT_SYMBOL_GPL(return_address);
++NOKPROBE_SYMBOL(return_address);
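+/*
+ * Editor's note (an assumption, not stated in the patch): "level + 3"
+ * appears to skip the frames the walk itself contributes (return_address(),
+ * the unwinder entry and save_return_addr()), so level 0 resolves to the
+ * caller of return_address(). A hypothetical use via the ftrace mapping
+ * added above:
+ *
+ *     pr_info("entered from %pS\n", ftrace_return_address(1));
+ */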
+diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
+index e7b69281875b2..fbe918801667d 100644
+--- a/arch/riscv/mm/hugetlbpage.c
++++ b/arch/riscv/mm/hugetlbpage.c
+@@ -426,10 +426,12 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
+ 	return __hugetlb_valid_size(size);
+ }
+ 
++#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
+ bool arch_hugetlb_migration_supported(struct hstate *h)
+ {
+ 	return __hugetlb_valid_size(huge_page_size(h));
+ }
++#endif
+ 
+ #ifdef CONFIG_CONTIG_ALLOC
+ static __init int gigantic_pages_init(void)
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index c73047bf9f4bf..fba427646805d 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -885,6 +885,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
+ 	BUG_IF_WRONG_CR3 no_user_check=1
+ 	popfl
+ 	popl	%eax
++	CLEAR_CPU_BUFFERS
+ 
+ 	/*
+ 	 * Return back to the vDSO, which will pop ecx and edx.
+@@ -954,6 +955,7 @@ restore_all_switch_stack:
+ 
+ 	/* Restore user state */
+ 	RESTORE_REGS pop=4			# skip orig_eax/error_code
++	CLEAR_CPU_BUFFERS
+ .Lirq_return:
+ 	/*
+ 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
+@@ -1146,6 +1148,7 @@ SYM_CODE_START(asm_exc_nmi)
+ 
+ 	/* Not on SYSENTER stack. */
+ 	call	exc_nmi
++	CLEAR_CPU_BUFFERS
+ 	jmp	.Lnmi_return
+ 
+ .Lnmi_from_sysenter_stack:
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index de6469dffe3a4..bdb17fad5d046 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -161,6 +161,7 @@ syscall_return_via_sysret:
+ SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
+ 	ANNOTATE_NOENDBR
+ 	swapgs
++	CLEAR_CPU_BUFFERS
+ 	sysretq
+ SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
+ 	ANNOTATE_NOENDBR
+@@ -601,6 +602,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
+ 	/* Restore RDI. */
+ 	popq	%rdi
+ 	swapgs
++	CLEAR_CPU_BUFFERS
+ 	jmp	.Lnative_iret
+ 
+ 
+@@ -712,6 +714,8 @@ native_irq_return_ldt:
+ 	 */
+ 	popq	%rax				/* Restore user RAX */
+ 
++	CLEAR_CPU_BUFFERS
++
+ 	/*
+ 	 * RSP now points to an ordinary IRET frame, except that the page
+ 	 * is read-only and RSP[31:16] are preloaded with the userspace
+@@ -1438,6 +1442,12 @@ nmi_restore:
+ 	std
+ 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
+ 
++	/*
++	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
++	 * NMI in kernel after user state is restored. For an unprivileged user
++	 * these conditions are hard to meet.
++	 */
++
+ 	/*
+ 	 * iretq reads the "iret" frame and exits the NMI stack in a
+ 	 * single instruction.  We are returning to kernel mode, so this
+@@ -1455,6 +1465,7 @@ SYM_CODE_START(entry_SYSCALL32_ignore)
+ 	UNWIND_HINT_END_OF_STACK
+ 	ENDBR
+ 	mov	$-ENOSYS, %eax
++	CLEAR_CPU_BUFFERS
+ 	sysretl
+ SYM_CODE_END(entry_SYSCALL32_ignore)
+ 
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index de94e2e84ecca..eabf48c4d4b4c 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -270,6 +270,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
+ 	xorl	%r9d, %r9d
+ 	xorl	%r10d, %r10d
+ 	swapgs
++	CLEAR_CPU_BUFFERS
+ 	sysretl
+ SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
+ 	ANNOTATE_NOENDBR
+diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
+index ce8f50192ae3e..7e523bb3d2d31 100644
+--- a/arch/x86/include/asm/entry-common.h
++++ b/arch/x86/include/asm/entry-common.h
+@@ -91,7 +91,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+ 
+ static __always_inline void arch_exit_to_user_mode(void)
+ {
+-	mds_user_clear_cpu_buffers();
+ 	amd_clear_divider();
+ }
+ #define arch_exit_to_user_mode arch_exit_to_user_mode
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 2519615936e95..d15b35815ebae 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -540,7 +540,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
+-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
+ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
+ 
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
+@@ -574,17 +573,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
+ 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
+ }
+ 
+-/**
+- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
+- *
+- * Clear CPU buffers if the corresponding static key is enabled
+- */
+-static __always_inline void mds_user_clear_cpu_buffers(void)
+-{
+-	if (static_branch_likely(&mds_user_clear))
+-		mds_clear_cpu_buffers();
+-}
+-
+ /**
+  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
+  *
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index bb0ab8466b919..48d049cd74e71 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -111,9 +111,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ /* Control unconditional IBPB in switch_mm() */
+ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
+-/* Control MDS CPU buffer clear before returning to user space */
+-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
+-EXPORT_SYMBOL_GPL(mds_user_clear);
+ /* Control MDS CPU buffer clear before idling (halt, mwait) */
+ DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+ EXPORT_SYMBOL_GPL(mds_idle_clear);
+@@ -252,7 +249,7 @@ static void __init mds_select_mitigation(void)
+ 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ 			mds_mitigation = MDS_MITIGATION_VMWERV;
+ 
+-		static_branch_enable(&mds_user_clear);
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ 
+ 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
+ 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
+@@ -356,7 +353,7 @@ static void __init taa_select_mitigation(void)
+ 	 * For guests that can't determine whether the correct microcode is
+ 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
+ 	 */
+-	static_branch_enable(&mds_user_clear);
++	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ 
+ 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ 		cpu_smt_disable(false);
+@@ -424,7 +421,7 @@ static void __init mmio_select_mitigation(void)
+ 	 */
+ 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
+ 					      boot_cpu_has(X86_FEATURE_RTM)))
+-		static_branch_enable(&mds_user_clear);
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ 	else
+ 		static_branch_enable(&mmio_stale_data_clear);
+ 
+@@ -484,12 +481,12 @@ static void __init md_clear_update_mitigation(void)
+ 	if (cpu_mitigations_off())
+ 		return;
+ 
+-	if (!static_key_enabled(&mds_user_clear))
++	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
+ 		goto out;
+ 
+ 	/*
+-	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
+-	 * mitigation, if necessary.
++	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
++	 * Stale Data mitigation, if necessary.
+ 	 */
+ 	if (mds_mitigation == MDS_MITIGATION_OFF &&
+ 	    boot_cpu_has_bug(X86_BUG_MDS)) {
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 98f7ea6b931cb..34cac9ea19171 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1596,6 +1596,7 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ 		get_cpu_vendor(c);
+ 		get_cpu_cap(c);
+ 		setup_force_cpu_cap(X86_FEATURE_CPUID);
++		get_cpu_address_sizes(c);
+ 		cpu_parse_early_param();
+ 
+ 		if (this_cpu->c_early_init)
+@@ -1608,10 +1609,9 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ 			this_cpu->c_bsp_init(c);
+ 	} else {
+ 		setup_clear_cpu_cap(X86_FEATURE_CPUID);
++		get_cpu_address_sizes(c);
+ 	}
+ 
+-	get_cpu_address_sizes(c);
+-
+ 	setup_force_cpu_cap(X86_FEATURE_ALWAYS);
+ 
+ 	cpu_set_bug_bits(c);
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index a927a8fc96244..40dec9b56f87d 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -184,6 +184,90 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+ 	return false;
+ }
+ 
++#define MSR_IA32_TME_ACTIVATE		0x982
++
++/* Helpers to access TME_ACTIVATE MSR */
++#define TME_ACTIVATE_LOCKED(x)		(x & 0x1)
++#define TME_ACTIVATE_ENABLED(x)		(x & 0x2)
++
++#define TME_ACTIVATE_POLICY(x)		((x >> 4) & 0xf)	/* Bits 7:4 */
++#define TME_ACTIVATE_POLICY_AES_XTS_128	0
++
++#define TME_ACTIVATE_KEYID_BITS(x)	((x >> 32) & 0xf)	/* Bits 35:32 */
++
++#define TME_ACTIVATE_CRYPTO_ALGS(x)	((x >> 48) & 0xffff)	/* Bits 63:48 */
++#define TME_ACTIVATE_CRYPTO_AES_XTS_128	1
++
++/* Values for mktme_status (SW only construct) */
++#define MKTME_ENABLED			0
++#define MKTME_DISABLED			1
++#define MKTME_UNINITIALIZED		2
++static int mktme_status = MKTME_UNINITIALIZED;
++
++static void detect_tme_early(struct cpuinfo_x86 *c)
++{
++	u64 tme_activate, tme_policy, tme_crypto_algs;
++	int keyid_bits = 0, nr_keyids = 0;
++	static u64 tme_activate_cpu0 = 0;
++
++	rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate);
++
++	if (mktme_status != MKTME_UNINITIALIZED) {
++		if (tme_activate != tme_activate_cpu0) {
++			/* Broken BIOS? */
++			pr_err_once("x86/tme: configuration is inconsistent between CPUs\n");
++			pr_err_once("x86/tme: MKTME is not usable\n");
++			mktme_status = MKTME_DISABLED;
++
++			/* Proceed. We may need to exclude bits from x86_phys_bits. */
++		}
++	} else {
++		tme_activate_cpu0 = tme_activate;
++	}
++
++	if (!TME_ACTIVATE_LOCKED(tme_activate) || !TME_ACTIVATE_ENABLED(tme_activate)) {
++		pr_info_once("x86/tme: not enabled by BIOS\n");
++		mktme_status = MKTME_DISABLED;
++		return;
++	}
++
++	if (mktme_status != MKTME_UNINITIALIZED)
++		goto detect_keyid_bits;
++
++	pr_info("x86/tme: enabled by BIOS\n");
++
++	tme_policy = TME_ACTIVATE_POLICY(tme_activate);
++	if (tme_policy != TME_ACTIVATE_POLICY_AES_XTS_128)
++		pr_warn("x86/tme: Unknown policy is active: %#llx\n", tme_policy);
++
++	tme_crypto_algs = TME_ACTIVATE_CRYPTO_ALGS(tme_activate);
++	if (!(tme_crypto_algs & TME_ACTIVATE_CRYPTO_AES_XTS_128)) {
++		pr_err("x86/mktme: No known encryption algorithm is supported: %#llx\n",
++				tme_crypto_algs);
++		mktme_status = MKTME_DISABLED;
++	}
++detect_keyid_bits:
++	keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate);
++	nr_keyids = (1UL << keyid_bits) - 1;
++	if (nr_keyids) {
++		pr_info_once("x86/mktme: enabled by BIOS\n");
++		pr_info_once("x86/mktme: %d KeyIDs available\n", nr_keyids);
++	} else {
++		pr_info_once("x86/mktme: disabled by BIOS\n");
++	}
++
++	if (mktme_status == MKTME_UNINITIALIZED) {
++		/* MKTME is usable */
++		mktme_status = MKTME_ENABLED;
++	}
++
++	/*
++	 * KeyID bits effectively lower the number of physical address
++	 * bits.  Update cpuinfo_x86::x86_phys_bits accordingly.
++	 */
++	c->x86_phys_bits -= keyid_bits;
++}
++
+ static void early_init_intel(struct cpuinfo_x86 *c)
+ {
+ 	u64 misc_enable;
+@@ -322,6 +406,13 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ 	 */
+ 	if (detect_extended_topology_early(c) < 0)
+ 		detect_ht_early(c);
++
++	/*
++	 * Adjust the number of physical bits early because it affects the
++	 * valid bits of the MTRR mask registers.
++	 */
++	if (cpu_has(c, X86_FEATURE_TME))
++		detect_tme_early(c);
+ }
+ 
+ static void bsp_init_intel(struct cpuinfo_x86 *c)
+@@ -482,90 +573,6 @@ static void srat_detect_node(struct cpuinfo_x86 *c)
+ #endif
+ }
+ 
+-#define MSR_IA32_TME_ACTIVATE		0x982
+-
+-/* Helpers to access TME_ACTIVATE MSR */
+-#define TME_ACTIVATE_LOCKED(x)		(x & 0x1)
+-#define TME_ACTIVATE_ENABLED(x)		(x & 0x2)
+-
+-#define TME_ACTIVATE_POLICY(x)		((x >> 4) & 0xf)	/* Bits 7:4 */
+-#define TME_ACTIVATE_POLICY_AES_XTS_128	0
+-
+-#define TME_ACTIVATE_KEYID_BITS(x)	((x >> 32) & 0xf)	/* Bits 35:32 */
+-
+-#define TME_ACTIVATE_CRYPTO_ALGS(x)	((x >> 48) & 0xffff)	/* Bits 63:48 */
+-#define TME_ACTIVATE_CRYPTO_AES_XTS_128	1
+-
+-/* Values for mktme_status (SW only construct) */
+-#define MKTME_ENABLED			0
+-#define MKTME_DISABLED			1
+-#define MKTME_UNINITIALIZED		2
+-static int mktme_status = MKTME_UNINITIALIZED;
+-
+-static void detect_tme(struct cpuinfo_x86 *c)
+-{
+-	u64 tme_activate, tme_policy, tme_crypto_algs;
+-	int keyid_bits = 0, nr_keyids = 0;
+-	static u64 tme_activate_cpu0 = 0;
+-
+-	rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate);
+-
+-	if (mktme_status != MKTME_UNINITIALIZED) {
+-		if (tme_activate != tme_activate_cpu0) {
+-			/* Broken BIOS? */
+-			pr_err_once("x86/tme: configuration is inconsistent between CPUs\n");
+-			pr_err_once("x86/tme: MKTME is not usable\n");
+-			mktme_status = MKTME_DISABLED;
+-
+-			/* Proceed. We may need to exclude bits from x86_phys_bits. */
+-		}
+-	} else {
+-		tme_activate_cpu0 = tme_activate;
+-	}
+-
+-	if (!TME_ACTIVATE_LOCKED(tme_activate) || !TME_ACTIVATE_ENABLED(tme_activate)) {
+-		pr_info_once("x86/tme: not enabled by BIOS\n");
+-		mktme_status = MKTME_DISABLED;
+-		return;
+-	}
+-
+-	if (mktme_status != MKTME_UNINITIALIZED)
+-		goto detect_keyid_bits;
+-
+-	pr_info("x86/tme: enabled by BIOS\n");
+-
+-	tme_policy = TME_ACTIVATE_POLICY(tme_activate);
+-	if (tme_policy != TME_ACTIVATE_POLICY_AES_XTS_128)
+-		pr_warn("x86/tme: Unknown policy is active: %#llx\n", tme_policy);
+-
+-	tme_crypto_algs = TME_ACTIVATE_CRYPTO_ALGS(tme_activate);
+-	if (!(tme_crypto_algs & TME_ACTIVATE_CRYPTO_AES_XTS_128)) {
+-		pr_err("x86/mktme: No known encryption algorithm is supported: %#llx\n",
+-				tme_crypto_algs);
+-		mktme_status = MKTME_DISABLED;
+-	}
+-detect_keyid_bits:
+-	keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate);
+-	nr_keyids = (1UL << keyid_bits) - 1;
+-	if (nr_keyids) {
+-		pr_info_once("x86/mktme: enabled by BIOS\n");
+-		pr_info_once("x86/mktme: %d KeyIDs available\n", nr_keyids);
+-	} else {
+-		pr_info_once("x86/mktme: disabled by BIOS\n");
+-	}
+-
+-	if (mktme_status == MKTME_UNINITIALIZED) {
+-		/* MKTME is usable */
+-		mktme_status = MKTME_ENABLED;
+-	}
+-
+-	/*
+-	 * KeyID bits effectively lower the number of physical address
+-	 * bits.  Update cpuinfo_x86::x86_phys_bits accordingly.
+-	 */
+-	c->x86_phys_bits -= keyid_bits;
+-}
+-
+ static void init_cpuid_fault(struct cpuinfo_x86 *c)
+ {
+ 	u64 msr;
+@@ -702,9 +709,6 @@ static void init_intel(struct cpuinfo_x86 *c)
+ 
+ 	init_ia32_feat_ctl(c);
+ 
+-	if (cpu_has(c, X86_FEATURE_TME))
+-		detect_tme(c);
+-
+ 	init_intel_misc_features(c);
+ 
+ 	split_lock_init();
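
[Illustration, not part of the patch] The TME_ACTIVATE helpers moved into
detect_tme_early() above are plain bitfield extractions from a 64-bit MSR
value. A minimal standalone sketch (userspace C; the sample MSR contents
below are made up) of how those fields decode:

/* cc -o tme tme.c && ./tme */
#include <stdio.h>
#include <stdint.h>

#define TME_ACTIVATE_LOCKED(x)		((x) & 0x1)
#define TME_ACTIVATE_ENABLED(x)		((x) & 0x2)
#define TME_ACTIVATE_POLICY(x)		(((x) >> 4) & 0xf)	/* bits 7:4 */
#define TME_ACTIVATE_KEYID_BITS(x)	(((x) >> 32) & 0xf)	/* bits 35:32 */
#define TME_ACTIVATE_CRYPTO_ALGS(x)	(((x) >> 48) & 0xffff)	/* bits 63:48 */

int main(void)
{
	/* Hypothetical MSR value: locked, enabled, AES-XTS-128 policy,
	 * 6 KeyID bits, AES-XTS-128 advertised as supported algorithm.
	 */
	uint64_t tme_activate = 1ULL | (1ULL << 1) | (6ULL << 32) | (1ULL << 48);
	int keyid_bits = (int)TME_ACTIVATE_KEYID_BITS(tme_activate);

	printf("locked=%llu enabled=%llu policy=%llu algs=%#llx\n",
	       (unsigned long long)TME_ACTIVATE_LOCKED(tme_activate),
	       (unsigned long long)TME_ACTIVATE_ENABLED(tme_activate) >> 1,
	       (unsigned long long)TME_ACTIVATE_POLICY(tme_activate),
	       (unsigned long long)TME_ACTIVATE_CRYPTO_ALGS(tme_activate));
	printf("nr_keyids=%d, x86_phys_bits adjustment: -%d\n",
	       (1 << keyid_bits) - 1, keyid_bits);
	return 0;
}

The KeyID bits eat into usable physical address bits, which is why the
patch runs this detection early, before the MTRR mask registers are set up.
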
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index fb8cf953380da..b66f540de054a 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -1017,10 +1017,12 @@ void __init e820__reserve_setup_data(void)
+ 		e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+ 
+ 		/*
+-		 * SETUP_EFI and SETUP_IMA are supplied by kexec and do not need
+-		 * to be reserved.
++		 * SETUP_EFI, SETUP_IMA and SETUP_RNG_SEED are supplied by
++		 * kexec and do not need to be reserved.
+ 		 */
+-		if (data->type != SETUP_EFI && data->type != SETUP_IMA)
++		if (data->type != SETUP_EFI &&
++		    data->type != SETUP_IMA &&
++		    data->type != SETUP_RNG_SEED)
+ 			e820__range_update_kexec(pa_data,
+ 						 sizeof(*data) + data->len,
+ 						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index 17e955ab69fed..3082cf24b69e3 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -563,9 +563,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
+ 	}
+ 	if (this_cpu_dec_return(nmi_state))
+ 		goto nmi_restart;
+-
+-	if (user_mode(regs))
+-		mds_user_clear_cpu_buffers();
+ }
+ 
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
+index edc3f16cc1896..6a9bfdfbb6e59 100644
+--- a/arch/x86/kvm/vmx/run_flags.h
++++ b/arch/x86/kvm/vmx/run_flags.h
+@@ -2,7 +2,10 @@
+ #ifndef __KVM_X86_VMX_RUN_FLAGS_H
+ #define __KVM_X86_VMX_RUN_FLAGS_H
+ 
+-#define VMX_RUN_VMRESUME	(1 << 0)
+-#define VMX_RUN_SAVE_SPEC_CTRL	(1 << 1)
++#define VMX_RUN_VMRESUME_SHIFT		0
++#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT	1
++
++#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
++#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
+ 
+ #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index be275a0410a89..139960deb7362 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -139,7 +139,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 	mov (%_ASM_SP), %_ASM_AX
+ 
+ 	/* Check if vmlaunch or vmresume is needed */
+-	test $VMX_RUN_VMRESUME, %ebx
++	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
+ 
+ 	/* Load guest registers.  Don't clobber flags. */
+ 	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
+@@ -161,8 +161,11 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 	/* Load guest RAX.  This kills the @regs pointer! */
+ 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
+ 
+-	/* Check EFLAGS.ZF from 'test VMX_RUN_VMRESUME' above */
+-	jz .Lvmlaunch
++	/* Clobbers EFLAGS.ZF */
++	CLEAR_CPU_BUFFERS
++
++	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
++	jnc .Lvmlaunch
+ 
+ 	/*
+ 	 * After a successful VMRESUME/VMLAUNCH, control flow "magically"
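
[Illustration, not part of the patch] A note on the vmenter.S hunk above:
the newly inserted CLEAR_CPU_BUFFERS expands to a VERW, which clobbers
EFLAGS.ZF, so the old `test`-then-`jz` pair could no longer straddle it.
`bt` instead latches the tested bit into CF, which VERW leaves intact.
A standalone sketch of the CF-based bit test (x86-64 only; values are
hypothetical):

/* cc -o bt bt.c && ./bt */
#include <stdio.h>

#define VMX_RUN_VMRESUME_SHIFT 0

static int bit_test_cf(unsigned int flags, int shift)
{
	unsigned char cf;

	/* bt copies the selected bit into CF; setc materializes it. */
	asm("bt %2, %1; setc %0" : "=r"(cf) : "r"(flags), "r"(shift) : "cc");
	return cf;
}

int main(void)
{
	unsigned int run_flags = 1u << VMX_RUN_VMRESUME_SHIFT;

	printf("vmresume? %d\n", bit_test_cf(run_flags, VMX_RUN_VMRESUME_SHIFT));
	printf("vmlaunch? %d\n", bit_test_cf(0, VMX_RUN_VMRESUME_SHIFT));
	return 0;
}
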
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 94082169bbbc2..856eef56b3a8f 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -387,7 +387,16 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
+ 
+ static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ {
+-	vmx->disable_fb_clear = (host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
++	/*
++	 * Disable VERW's behavior of clearing CPU buffers for the guest if the
++	 * CPU isn't affected by MDS/TAA, and the host hasn't forcefully enabled
++	 * the mitigation. Disabling the clearing behavior provides a
++	 * performance boost for guests that aren't aware that manually clearing
++	 * CPU buffers is unnecessary, at the cost of MSR accesses on VM-Entry
++	 * and VM-Exit.
++	 */
++	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
++				(host_arch_capabilities & ARCH_CAP_FB_CLEAR_CTRL) &&
+ 				!boot_cpu_has_bug(X86_BUG_MDS) &&
+ 				!boot_cpu_has_bug(X86_BUG_TAA);
+ 
+@@ -7226,11 +7235,14 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 
+ 	guest_state_enter_irqoff();
+ 
+-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
++	/*
++	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
++	 * mitigation for MDS is done late in VM-Entry and is still
++	 * executed in spite of the L1D Flush. This is because an extra
++	 * VERW should not matter much after the big-hammer L1D Flush.
++	 */
+ 	if (static_branch_unlikely(&vmx_l1d_should_flush))
+ 		vmx_l1d_flush(vcpu);
+-	else if (static_branch_unlikely(&mds_user_clear))
+-		mds_clear_cpu_buffers();
+ 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
+ 		 kvm_arch_has_assigned_device(vcpu->kvm))
+ 		mds_clear_cpu_buffers();
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index fdb0fae88d1c5..b40b32fa7f1c3 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -152,7 +152,7 @@ static int qca_send_patch_config_cmd(struct hci_dev *hdev)
+ 	bt_dev_dbg(hdev, "QCA Patch config");
+ 
+ 	skb = __hci_cmd_sync_ev(hdev, EDL_PATCH_CMD_OPCODE, sizeof(cmd),
+-				cmd, HCI_EV_VENDOR, HCI_INIT_TIMEOUT);
++				cmd, 0, HCI_INIT_TIMEOUT);
+ 	if (IS_ERR(skb)) {
+ 		err = PTR_ERR(skb);
+ 		bt_dev_err(hdev, "Sending QCA Patch config failed (%d)", err);
+diff --git a/drivers/bluetooth/hci_bcm4377.c b/drivers/bluetooth/hci_bcm4377.c
+index a617578356953..9a7243d5db71f 100644
+--- a/drivers/bluetooth/hci_bcm4377.c
++++ b/drivers/bluetooth/hci_bcm4377.c
+@@ -1417,7 +1417,7 @@ static int bcm4377_check_bdaddr(struct bcm4377_data *bcm4377)
+ 
+ 	bda = (struct hci_rp_read_bd_addr *)skb->data;
+ 	if (!bcm4377_is_valid_bdaddr(bcm4377, &bda->bdaddr))
+-		set_bit(HCI_QUIRK_INVALID_BDADDR, &bcm4377->hdev->quirks);
++		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &bcm4377->hdev->quirks);
+ 
+ 	kfree_skb(skb);
+ 	return 0;
+@@ -2368,7 +2368,6 @@ static int bcm4377_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	hdev->set_bdaddr = bcm4377_hci_set_bdaddr;
+ 	hdev->setup = bcm4377_hci_setup;
+ 
+-	set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
+ 	if (bcm4377->hw->broken_mws_transport_config)
+ 		set_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &hdev->quirks);
+ 	if (bcm4377->hw->broken_ext_scan)
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 35f74f209d1fc..a65de7309da4c 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -7,6 +7,7 @@
+  *
+  *  Copyright (C) 2007 Texas Instruments, Inc.
+  *  Copyright (c) 2010, 2012, 2018 The Linux Foundation. All rights reserved.
++ *  Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  *
+  *  Acknowledgements:
+  *  This file is based on hci_ll.c, which was...
+@@ -1806,13 +1807,12 @@ static int qca_power_on(struct hci_dev *hdev)
+ 
+ static void hci_coredump_qca(struct hci_dev *hdev)
+ {
++	int err;
+ 	static const u8 param[] = { 0x26 };
+-	struct sk_buff *skb;
+ 
+-	skb = __hci_cmd_sync(hdev, 0xfc0c, 1, param, HCI_CMD_TIMEOUT);
+-	if (IS_ERR(skb))
+-		bt_dev_err(hdev, "%s: trigger crash failed (%ld)", __func__, PTR_ERR(skb));
+-	kfree_skb(skb);
++	err = __hci_cmd_send(hdev, 0xfc0c, 1, param);
++	if (err < 0)
++		bt_dev_err(hdev, "%s: trigger crash failed (%d)", __func__, err);
+ }
+ 
+ static int qca_setup(struct hci_uart *hu)
+@@ -1886,7 +1886,17 @@ static int qca_setup(struct hci_uart *hu)
+ 	case QCA_WCN6750:
+ 	case QCA_WCN6855:
+ 	case QCA_WCN7850:
+-		set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
++
++		/* Set BDA quirk bit for reading BDA value from fwnode property
++		 * only if that property exists in the DT.
++		 */
++		if (fwnode_property_present(dev_fwnode(hdev->dev.parent), "local-bd-address")) {
++			set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
++			bt_dev_info(hdev, "setting quirk bit to read BDA from fwnode later");
++		} else {
++			bt_dev_dbg(hdev, "local-bd-address` is not present in the devicetree so not setting quirk bit for BDA");
++		}
++
+ 		hci_set_aosp_capable(hdev);
+ 
+ 		ret = qca_read_soc_version(hdev, &ver, soc_type);
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index f5c69fa230d9b..357e01a272743 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2983,6 +2983,9 @@ static void intel_cpufreq_adjust_perf(unsigned int cpunum,
+ 	if (min_pstate < cpu->min_perf_ratio)
+ 		min_pstate = cpu->min_perf_ratio;
+ 
++	if (min_pstate > cpu->max_perf_ratio)
++		min_pstate = cpu->max_perf_ratio;
++
+ 	max_pstate = min(cap_pstate, cpu->max_perf_ratio);
+ 	if (max_pstate < min_pstate)
+ 		max_pstate = min_pstate;
+diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c
+index b38786f0ad799..b75fdaffad9a4 100644
+--- a/drivers/dma/dw-edma/dw-edma-v0-core.c
++++ b/drivers/dma/dw-edma/dw-edma-v0-core.c
+@@ -346,6 +346,20 @@ static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
+ 	dw_edma_v0_write_ll_link(chunk, i, control, chunk->ll_region.paddr);
+ }
+ 
++static void dw_edma_v0_sync_ll_data(struct dw_edma_chunk *chunk)
++{
++	/*
++	 * In case of remote eDMA engine setup, the DW PCIe RP/EP internal
++	 * configuration registers and application memory are normally accessed
++	 * over different buses. Ensure LL-data reaches the memory before the
++	 * doorbell register is toggled by issuing the dummy-read from the remote
++	 * LL memory, in the hope that the MRd TLP will return only after
++	 * the last MWr TLP has completed.
++	 */
++	if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
++		readl(chunk->ll_region.vaddr.io);
++}
++
+ static void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+ {
+ 	struct dw_edma_chan *chan = chunk->chan;
+@@ -412,6 +426,9 @@ static void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+ 		SET_CH_32(dw, chan->dir, chan->id, llp.msb,
+ 			  upper_32_bits(chunk->ll_region.paddr));
+ 	}
++
++	dw_edma_v0_sync_ll_data(chunk);
++
+ 	/* Doorbell */
+ 	SET_RW_32(dw, chan->dir, doorbell,
+ 		  FIELD_PREP(EDMA_V0_DOORBELL_CH_MASK, chan->id));
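
[Illustration, not part of the patch] The dummy read added by
dw_edma_v0_sync_ll_data() is the classic posted-write flush: on PCIe, a
read (MRd) to a target cannot complete until prior posted writes (MWr) to
it have landed, so reading back from the remote LL region orders the
descriptor data ahead of the doorbell. A compilable sketch of the idiom
with placeholder MMIO helpers (not the driver's real API):

#include <stdint.h>

static inline uint32_t mmio_read32(const volatile uint32_t *addr)
{
	return *addr;		/* stands in for readl() */
}

static inline void mmio_write32(volatile uint32_t *addr, uint32_t val)
{
	*addr = val;		/* stands in for writel()/SET_RW_32() */
}

void start_transfer(volatile uint32_t *ll_region, volatile uint32_t *doorbell,
		    int remote_setup)	/* !(flags & DW_EDMA_CHIP_LOCAL) */
{
	/* ... linked-list descriptors written to ll_region here ... */

	if (remote_setup)
		(void)mmio_read32(ll_region);	/* flush posted MWr TLPs */

	mmio_write32(doorbell, 1);	/* only now ring the doorbell */
}
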
+diff --git a/drivers/dma/dw-edma/dw-hdma-v0-core.c b/drivers/dma/dw-edma/dw-hdma-v0-core.c
+index 00b735a0202ab..10e8f0715114f 100644
+--- a/drivers/dma/dw-edma/dw-hdma-v0-core.c
++++ b/drivers/dma/dw-edma/dw-hdma-v0-core.c
+@@ -65,18 +65,12 @@ static void dw_hdma_v0_core_off(struct dw_edma *dw)
+ 
+ static u16 dw_hdma_v0_core_ch_count(struct dw_edma *dw, enum dw_edma_dir dir)
+ {
+-	u32 num_ch = 0;
+-	int id;
+-
+-	for (id = 0; id < HDMA_V0_MAX_NR_CH; id++) {
+-		if (GET_CH_32(dw, id, dir, ch_en) & BIT(0))
+-			num_ch++;
+-	}
+-
+-	if (num_ch > HDMA_V0_MAX_NR_CH)
+-		num_ch = HDMA_V0_MAX_NR_CH;
+-
+-	return (u16)num_ch;
++	/*
++	 * The HDMA IP has no way to know the number of hardware channels
++	 * available, so we set it to the maximum and let the platform
++	 * set the right number of channels.
++	 */
++	return HDMA_V0_MAX_NR_CH;
+ }
+ 
+ static enum dma_status dw_hdma_v0_core_ch_status(struct dw_edma_chan *chan)
+@@ -228,6 +222,20 @@ static void dw_hdma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
+ 	dw_hdma_v0_write_ll_link(chunk, i, control, chunk->ll_region.paddr);
+ }
+ 
++static void dw_hdma_v0_sync_ll_data(struct dw_edma_chunk *chunk)
++{
++	/*
++	 * In case of remote HDMA engine setup, the DW PCIe RP/EP internal
++	 * configuration registers and application memory are normally accessed
++	 * over different buses. Ensure LL-data reaches the memory before the
++	 * doorbell register is toggled by issuing the dummy-read from the remote
++	 * LL memory, in the hope that the MRd TLP will return only after
++	 * the last MWr TLP has completed.
++	 */
++	if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
++		readl(chunk->ll_region.vaddr.io);
++}
++
+ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+ {
+ 	struct dw_edma_chan *chan = chunk->chan;
+@@ -242,7 +250,9 @@ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+ 		/* Interrupt enable&unmask - done, abort */
+ 		tmp = GET_CH_32(dw, chan->dir, chan->id, int_setup) |
+ 		      HDMA_V0_STOP_INT_MASK | HDMA_V0_ABORT_INT_MASK |
+-		      HDMA_V0_LOCAL_STOP_INT_EN | HDMA_V0_LOCAL_STOP_INT_EN;
++		      HDMA_V0_LOCAL_STOP_INT_EN | HDMA_V0_LOCAL_ABORT_INT_EN;
++		if (!(dw->chip->flags & DW_EDMA_CHIP_LOCAL))
++			tmp |= HDMA_V0_REMOTE_STOP_INT_EN | HDMA_V0_REMOTE_ABORT_INT_EN;
+ 		SET_CH_32(dw, chan->dir, chan->id, int_setup, tmp);
+ 		/* Channel control */
+ 		SET_CH_32(dw, chan->dir, chan->id, control1, HDMA_V0_LINKLIST_EN);
+@@ -256,6 +266,9 @@ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+ 	/* Set consumer cycle */
+ 	SET_CH_32(dw, chan->dir, chan->id, cycle_sync,
+ 		  HDMA_V0_CONSUMER_CYCLE_STAT | HDMA_V0_CONSUMER_CYCLE_BIT);
++
++	dw_hdma_v0_sync_ll_data(chunk);
++
+ 	/* Doorbell */
+ 	SET_CH_32(dw, chan->dir, chan->id, doorbell, HDMA_V0_DOORBELL_START);
+ }
+diff --git a/drivers/dma/dw-edma/dw-hdma-v0-regs.h b/drivers/dma/dw-edma/dw-hdma-v0-regs.h
+index a974abdf8aaf5..eab5fd7177e54 100644
+--- a/drivers/dma/dw-edma/dw-hdma-v0-regs.h
++++ b/drivers/dma/dw-edma/dw-hdma-v0-regs.h
+@@ -15,7 +15,7 @@
+ #define HDMA_V0_LOCAL_ABORT_INT_EN		BIT(6)
+ #define HDMA_V0_REMOTE_ABORT_INT_EN		BIT(5)
+ #define HDMA_V0_LOCAL_STOP_INT_EN		BIT(4)
+-#define HDMA_V0_REMOTEL_STOP_INT_EN		BIT(3)
++#define HDMA_V0_REMOTE_STOP_INT_EN		BIT(3)
+ #define HDMA_V0_ABORT_INT_MASK			BIT(2)
+ #define HDMA_V0_STOP_INT_MASK			BIT(0)
+ #define HDMA_V0_LINKLIST_EN			BIT(0)
+diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
+index b53f46245c377..793f1a7ad5e34 100644
+--- a/drivers/dma/fsl-edma-common.c
++++ b/drivers/dma/fsl-edma-common.c
+@@ -503,7 +503,7 @@ void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
+ 	if (fsl_chan->is_multi_fifo) {
+ 		/* set mloff to support multiple fifo */
+ 		burst = cfg->direction == DMA_DEV_TO_MEM ?
+-				cfg->src_addr_width : cfg->dst_addr_width;
++				cfg->src_maxburst : cfg->dst_maxburst;
+ 		nbytes |= EDMA_V3_TCD_NBYTES_MLOFF(-(burst * 4));
+ 		/* enable DMLOE/SMLOE */
+ 		if (cfg->direction == DMA_MEM_TO_DEV) {
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index 9b141369bea5d..7a24279318e64 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -109,6 +109,7 @@
+ #define FSL_QDMA_CMD_WTHROTL_OFFSET	20
+ #define FSL_QDMA_CMD_DSEN_OFFSET	19
+ #define FSL_QDMA_CMD_LWC_OFFSET		16
++#define FSL_QDMA_CMD_PF			BIT(17)
+ 
+ /* Field definition for Descriptor status */
+ #define QDMA_CCDF_STATUS_RTE		BIT(5)
+@@ -384,7 +385,8 @@ static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp,
+ 	qdma_csgf_set_f(csgf_dest, len);
+ 	/* Descriptor Buffer */
+ 	cmd = cpu_to_le32(FSL_QDMA_CMD_RWTTYPE <<
+-			  FSL_QDMA_CMD_RWTTYPE_OFFSET);
++			  FSL_QDMA_CMD_RWTTYPE_OFFSET) |
++			  FSL_QDMA_CMD_PF;
+ 	sdf->data = QDMA_SDDF_CMD(cmd);
+ 
+ 	cmd = cpu_to_le32(FSL_QDMA_CMD_RWTTYPE <<
+@@ -1197,10 +1199,6 @@ static int fsl_qdma_probe(struct platform_device *pdev)
+ 	if (!fsl_qdma->queue)
+ 		return -ENOMEM;
+ 
+-	ret = fsl_qdma_irq_init(pdev, fsl_qdma);
+-	if (ret)
+-		return ret;
+-
+ 	fsl_qdma->irq_base = platform_get_irq_byname(pdev, "qdma-queue0");
+ 	if (fsl_qdma->irq_base < 0)
+ 		return fsl_qdma->irq_base;
+@@ -1239,16 +1237,19 @@ static int fsl_qdma_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, fsl_qdma);
+ 
+-	ret = dma_async_device_register(&fsl_qdma->dma_dev);
++	ret = fsl_qdma_reg_init(fsl_qdma);
+ 	if (ret) {
+-		dev_err(&pdev->dev,
+-			"Can't register NXP Layerscape qDMA engine.\n");
++		dev_err(&pdev->dev, "Can't Initialize the qDMA engine.\n");
+ 		return ret;
+ 	}
+ 
+-	ret = fsl_qdma_reg_init(fsl_qdma);
++	ret = fsl_qdma_irq_init(pdev, fsl_qdma);
++	if (ret)
++		return ret;
++
++	ret = dma_async_device_register(&fsl_qdma->dma_dev);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "Can't Initialize the qDMA engine.\n");
++		dev_err(&pdev->dev, "Can't register NXP Layerscape qDMA engine.\n");
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index 0423655f5a880..3dd25a9a04f18 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -345,7 +345,7 @@ static void idxd_cdev_evl_drain_pasid(struct idxd_wq *wq, u32 pasid)
+ 	spin_lock(&evl->lock);
+ 	status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ 	t = status.tail;
+-	h = evl->head;
++	h = status.head;
+ 	size = evl->size;
+ 
+ 	while (h != t) {
+diff --git a/drivers/dma/idxd/debugfs.c b/drivers/dma/idxd/debugfs.c
+index 9cfbd9b14c4c4..f3f25ee676f30 100644
+--- a/drivers/dma/idxd/debugfs.c
++++ b/drivers/dma/idxd/debugfs.c
+@@ -68,9 +68,9 @@ static int debugfs_evl_show(struct seq_file *s, void *d)
+ 
+ 	spin_lock(&evl->lock);
+ 
+-	h = evl->head;
+ 	evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ 	t = evl_status.tail;
++	h = evl_status.head;
+ 	evl_size = evl->size;
+ 
+ 	seq_printf(s, "Event Log head %u tail %u interrupt pending %u\n\n",
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index 1e89c80a07fc2..96062ae39f9a2 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -290,7 +290,6 @@ struct idxd_evl {
+ 	unsigned int log_size;
+ 	/* The number of entries in the event log. */
+ 	u16 size;
+-	u16 head;
+ 	unsigned long *bmap;
+ 	bool batch_fail[IDXD_MAX_BATCH_IDENT];
+ };
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 0eb1c827a215f..d09a8553ea71d 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -342,7 +342,9 @@ static void idxd_cleanup_internals(struct idxd_device *idxd)
+ static int idxd_init_evl(struct idxd_device *idxd)
+ {
+ 	struct device *dev = &idxd->pdev->dev;
++	unsigned int evl_cache_size;
+ 	struct idxd_evl *evl;
++	const char *idxd_name;
+ 
+ 	if (idxd->hw.gen_cap.evl_support == 0)
+ 		return 0;
+@@ -354,9 +356,16 @@ static int idxd_init_evl(struct idxd_device *idxd)
+ 	spin_lock_init(&evl->lock);
+ 	evl->size = IDXD_EVL_SIZE_MIN;
+ 
+-	idxd->evl_cache = kmem_cache_create(dev_name(idxd_confdev(idxd)),
+-					    sizeof(struct idxd_evl_fault) + evl_ent_size(idxd),
+-					    0, 0, NULL);
++	idxd_name = dev_name(idxd_confdev(idxd));
++	evl_cache_size = sizeof(struct idxd_evl_fault) + evl_ent_size(idxd);
++	/*
++	 * Since the completion record in evl_cache will be copied to user
++	 * space when handling a completion record page fault, the cache
++	 * must be created suitable for user copy.
++	 */
++	idxd->evl_cache = kmem_cache_create_usercopy(idxd_name, evl_cache_size,
++						     0, 0, 0, evl_cache_size,
++						     NULL);
+ 	if (!idxd->evl_cache) {
+ 		kfree(evl);
+ 		return -ENOMEM;
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index 2183d7f9cdbdd..3bdfc1797f620 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -367,9 +367,9 @@ static void process_evl_entries(struct idxd_device *idxd)
+ 	/* Clear interrupt pending bit */
+ 	iowrite32(evl_status.bits_upper32,
+ 		  idxd->reg_base + IDXD_EVLSTATUS_OFFSET + sizeof(u32));
+-	h = evl->head;
+ 	evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ 	t = evl_status.tail;
++	h = evl_status.head;
+ 	size = idxd->evl->size;
+ 
+ 	while (h != t) {
+@@ -378,7 +378,6 @@ static void process_evl_entries(struct idxd_device *idxd)
+ 		h = (h + 1) % size;
+ 	}
+ 
+-	evl->head = h;
+ 	evl_status.head = h;
+ 	iowrite32(evl_status.bits_lower32, idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ 	spin_unlock(&evl->lock);
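
[Illustration, not part of the patch] The idxd hunks above drop the
software-cached evl->head and take head from the same EVLSTATUS register
snapshot as tail, so both ring indices come from one coherent hardware
read. A runnable toy model of the traversal (names and values invented):

/* cc -o evl evl.c && ./evl */
#include <stdio.h>

struct evl_status {
	unsigned int head;
	unsigned int tail;
};

static void process_entries(const struct evl_status *hw, unsigned int size)
{
	unsigned int h = hw->head;	/* was: a stale cached evl->head */
	unsigned int t = hw->tail;

	while (h != t) {
		printf("process entry %u\n", h);
		h = (h + 1) % size;	/* ring wrap-around */
	}
	/* the driver then writes h back to the hardware head register */
}

int main(void)
{
	struct evl_status st = { .head = 6, .tail = 2 };

	process_entries(&st, 8);	/* wraps: 6, 7, 0, 1 */
	return 0;
}
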
+diff --git a/drivers/dma/ptdma/ptdma-dmaengine.c b/drivers/dma/ptdma/ptdma-dmaengine.c
+index 1aa65e5de0f3a..f792407348077 100644
+--- a/drivers/dma/ptdma/ptdma-dmaengine.c
++++ b/drivers/dma/ptdma/ptdma-dmaengine.c
+@@ -385,8 +385,6 @@ int pt_dmaengine_register(struct pt_device *pt)
+ 	chan->vc.desc_free = pt_do_cleanup;
+ 	vchan_init(&chan->vc, dma_dev);
+ 
+-	dma_set_mask_and_coherent(pt->dev, DMA_BIT_MASK(64));
+-
+ 	ret = dma_async_device_register(dma_dev);
+ 	if (ret)
+ 		goto err_reg;
+diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c
+index 3e8d4b51a8140..97bafb5f70389 100644
+--- a/drivers/firmware/efi/capsule-loader.c
++++ b/drivers/firmware/efi/capsule-loader.c
+@@ -292,7 +292,7 @@ static int efi_capsule_open(struct inode *inode, struct file *file)
+ 		return -ENOMEM;
+ 	}
+ 
+-	cap_info->phys = kzalloc(sizeof(void *), GFP_KERNEL);
++	cap_info->phys = kzalloc(sizeof(phys_addr_t), GFP_KERNEL);
+ 	if (!cap_info->phys) {
+ 		kfree(cap_info->pages);
+ 		kfree(cap_info);
+diff --git a/drivers/gpio/gpio-74x164.c b/drivers/gpio/gpio-74x164.c
+index e00c333105170..753e7be039e4d 100644
+--- a/drivers/gpio/gpio-74x164.c
++++ b/drivers/gpio/gpio-74x164.c
+@@ -127,8 +127,6 @@ static int gen_74x164_probe(struct spi_device *spi)
+ 	if (IS_ERR(chip->gpiod_oe))
+ 		return PTR_ERR(chip->gpiod_oe);
+ 
+-	gpiod_set_value_cansleep(chip->gpiod_oe, 1);
+-
+ 	spi_set_drvdata(spi, chip);
+ 
+ 	chip->gpio_chip.label = spi->modalias;
+@@ -153,6 +151,8 @@ static int gen_74x164_probe(struct spi_device *spi)
+ 		goto exit_destroy;
+ 	}
+ 
++	gpiod_set_value_cansleep(chip->gpiod_oe, 1);
++
+ 	ret = gpiochip_add_data(&chip->gpio_chip, chip);
+ 	if (!ret)
+ 		return 0;
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 15de124d5b402..1d033106cf396 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -972,11 +972,11 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 
+ 	ret = gpiochip_irqchip_init_valid_mask(gc);
+ 	if (ret)
+-		goto err_remove_acpi_chip;
++		goto err_free_hogs;
+ 
+ 	ret = gpiochip_irqchip_init_hw(gc);
+ 	if (ret)
+-		goto err_remove_acpi_chip;
++		goto err_remove_irqchip_mask;
+ 
+ 	ret = gpiochip_add_irqchip(gc, lock_key, request_key);
+ 	if (ret)
+@@ -1001,13 +1001,13 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ 	gpiochip_irqchip_remove(gc);
+ err_remove_irqchip_mask:
+ 	gpiochip_irqchip_free_valid_mask(gc);
+-err_remove_acpi_chip:
++err_free_hogs:
++	gpiochip_free_hogs(gc);
+ 	acpi_gpiochip_remove(gc);
++	gpiochip_remove_pin_ranges(gc);
+ err_remove_of_chip:
+-	gpiochip_free_hogs(gc);
+ 	of_gpiochip_remove(gc);
+ err_free_gpiochip_mask:
+-	gpiochip_remove_pin_ranges(gc);
+ 	gpiochip_free_valid_mask(gc);
+ 	if (gdev->dev.release) {
+ 		/* release() has been registered by gpiochip_setup_dev() */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 4fad9c478c8aa..7a09a72e182ff 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -66,6 +66,8 @@ static void apply_edid_quirks(struct edid *edid, struct dc_edid_caps *edid_caps)
+ 	/* Workaround for some monitors that do not clear DPCD 0x317 if FreeSync is unsupported */
+ 	case drm_edid_encode_panel_id('A', 'U', 'O', 0xA7AB):
+ 	case drm_edid_encode_panel_id('A', 'U', 'O', 0xE69B):
++	case drm_edid_encode_panel_id('B', 'O', 'E', 0x092A):
++	case drm_edid_encode_panel_id('L', 'G', 'D', 0x06D1):
+ 		DRM_DEBUG_DRIVER("Clearing DPCD 0x317 on monitor with panel id %X\n", panel_id);
+ 		edid_caps->panel_patch.remove_sink_ext_caps = true;
+ 		break;
+@@ -119,6 +121,8 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
+ 
+ 	edid_caps->edid_hdmi = connector->display_info.is_hdmi;
+ 
++	apply_edid_quirks(edid_buf, edid_caps);
++
+ 	sad_count = drm_edid_to_sad((struct edid *) edid->raw_edid, &sads);
+ 	if (sad_count <= 0)
+ 		return result;
+@@ -145,8 +149,6 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
+ 	else
+ 		edid_caps->speaker_flags = DEFAULT_SPEAKER_LOCATION;
+ 
+-	apply_edid_quirks(edid_buf, edid_caps);
+-
+ 	kfree(sads);
+ 	kfree(sadb);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+index 8f231418870f2..c62b61ac45d27 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+@@ -76,6 +76,11 @@ static void map_hw_resources(struct dml2_context *dml2,
+ 			in_out_display_cfg->hw.DLGRefClkFreqMHz = 50;
+ 		}
+ 		for (j = 0; j < mode_support_info->DPPPerSurface[i]; j++) {
++			if (i >= __DML2_WRAPPER_MAX_STREAMS_PLANES__) {
++				dml_print("DML::%s: Index out of bounds: i=%d, __DML2_WRAPPER_MAX_STREAMS_PLANES__=%d\n",
++					  __func__, i, __DML2_WRAPPER_MAX_STREAMS_PLANES__);
++				break;
++			}
+ 			dml2->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_stream_id[num_pipes] = dml2->v20.scratch.dml_to_dc_pipe_mapping.disp_cfg_to_stream_id[i];
+ 			dml2->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_stream_id_valid[num_pipes] = true;
+ 			dml2->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_plane_id[num_pipes] = dml2->v20.scratch.dml_to_dc_pipe_mapping.disp_cfg_to_plane_id[i];
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+index df4f20293c16a..eb4da3666e05d 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+@@ -6925,6 +6925,23 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+ 	return 0;
+ }
+ 
++static int si_set_temperature_range(struct amdgpu_device *adev)
++{
++	int ret;
++
++	ret = si_thermal_enable_alert(adev, false);
++	if (ret)
++		return ret;
++	ret = si_thermal_set_temperature_range(adev, R600_TEMP_RANGE_MIN, R600_TEMP_RANGE_MAX);
++	if (ret)
++		return ret;
++	ret = si_thermal_enable_alert(adev, true);
++	if (ret)
++		return ret;
++
++	return ret;
++}
++
+ static void si_dpm_disable(struct amdgpu_device *adev)
+ {
+ 	struct rv7xx_power_info *pi = rv770_get_pi(adev);
+@@ -7608,6 +7625,18 @@ static int si_dpm_process_interrupt(struct amdgpu_device *adev,
+ 
+ static int si_dpm_late_init(void *handle)
+ {
++	int ret;
++	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
++
++	if (!adev->pm.dpm_enabled)
++		return 0;
++
++	ret = si_set_temperature_range(adev);
++	if (ret)
++		return ret;
++#if 0 //TODO ?
++	si_dpm_powergate_uvd(adev, true);
++#endif
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index 2cb6b68222bab..b9c5398cb986c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1303,13 +1303,12 @@ static int arcturus_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled) {
++	if (smu->od_enabled)
+ 		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+-		od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+-	} else {
++	else
+ 		od_percent_upper = 0;
+-		od_percent_lower = 100;
+-	}
++
++	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 							od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index a38233cc5b7ff..aefe72c8abd2d 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2357,13 +2357,12 @@ static int navi10_get_power_limit(struct smu_context *smu,
+ 		*default_power_limit = power_limit;
+ 
+ 	if (smu->od_enabled &&
+-		    navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_POWER_LIMIT)) {
++		    navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_POWER_LIMIT))
+ 		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+-		od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+-	} else {
++	else
+ 		od_percent_upper = 0;
+-		od_percent_lower = 100;
+-	}
++
++	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 1de9f8b5cc5fa..fa953d4445487 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -640,13 +640,12 @@ static int sienna_cichlid_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled) {
++	if (smu->od_enabled)
+ 		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
+-		od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
+-	} else {
++	else
+ 		od_percent_upper = 0;
+-		od_percent_lower = 100;
+-	}
++
++	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 5625a6e5702a3..ac3c9e0966ed6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2364,13 +2364,12 @@ static int smu_v13_0_0_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled) {
++	if (smu->od_enabled)
+ 		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
+-		od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
+-	} else {
++	else
+ 		od_percent_upper = 0;
+-		od_percent_lower = 100;
+-	}
++
++	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index bc5891c3f6487..b296c5f9d98d0 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2328,13 +2328,12 @@ static int smu_v13_0_7_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled) {
++	if (smu->od_enabled)
+ 		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
+-		od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
+-	} else {
++	else
+ 		od_percent_upper = 0;
+-		od_percent_lower = 100;
+-	}
++
++	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
+index c4222b886db7c..f3a6ac908f815 100644
+--- a/drivers/gpu/drm/drm_buddy.c
++++ b/drivers/gpu/drm/drm_buddy.c
+@@ -332,6 +332,7 @@ alloc_range_bias(struct drm_buddy *mm,
+ 		 u64 start, u64 end,
+ 		 unsigned int order)
+ {
++	u64 req_size = mm->chunk_size << order;
+ 	struct drm_buddy_block *block;
+ 	struct drm_buddy_block *buddy;
+ 	LIST_HEAD(dfs);
+@@ -367,6 +368,15 @@ alloc_range_bias(struct drm_buddy *mm,
+ 		if (drm_buddy_block_is_allocated(block))
+ 			continue;
+ 
++		if (block_start < start || block_end > end) {
++			u64 adjusted_start = max(block_start, start);
++			u64 adjusted_end = min(block_end, end);
++
++			if (round_down(adjusted_end + 1, req_size) <=
++			    round_up(adjusted_start, req_size))
++				continue;
++		}
++
+ 		if (contains(start, end, block_start, block_end) &&
+ 		    order == drm_buddy_block_order(block)) {
+ 			/*
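
[Illustration, not part of the patch] The drm_buddy hunk adds an early
skip for blocks that only partially overlap the requested [start, end]
range: descending is only worthwhile if the overlap still contains a
req_size-aligned window of req_size bytes. A runnable sketch of the
arithmetic (addresses invented):

/* cc -o buddy buddy.c && ./buddy */
#include <stdio.h>
#include <stdint.h>

static uint64_t round_up64(uint64_t x, uint64_t a)   { return (x + a - 1) / a * a; }
static uint64_t round_down64(uint64_t x, uint64_t a) { return x / a * a; }

static int overlap_can_fit(uint64_t block_start, uint64_t block_end,
			   uint64_t start, uint64_t end, uint64_t req_size)
{
	uint64_t adjusted_start = block_start > start ? block_start : start;
	uint64_t adjusted_end = block_end < end ? block_end : end;

	/* mirrors the kernel check: skip the block when
	 * round_down(adjusted_end + 1) <= round_up(adjusted_start)
	 */
	return round_down64(adjusted_end + 1, req_size) >
	       round_up64(adjusted_start, req_size);
}

int main(void)
{
	/* 4 KiB request, block [0x1800, 0x37ff]. Overlap [0x1800, 0x27ff]
	 * holds no aligned 4 KiB chunk -> 0; widening the range to
	 * 0x37ff admits [0x2000, 0x2fff] -> 1.
	 */
	printf("fits: %d\n", overlap_can_fit(0x1800, 0x37ff, 0x0, 0x27ff, 0x1000));
	printf("fits: %d\n", overlap_can_fit(0x1800, 0x37ff, 0x0, 0x37ff, 0x1000));
	return 0;
}
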
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 50589f982d1a4..75545da9d1e91 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -708,10 +708,11 @@ nouveau_drm_device_fini(struct drm_device *dev)
+ 	}
+ 	mutex_unlock(&drm->clients_lock);
+ 
+-	nouveau_sched_fini(drm);
+-
+ 	nouveau_cli_fini(&drm->client);
+ 	nouveau_cli_fini(&drm->master);
++
++	nouveau_sched_fini(drm);
++
+ 	nvif_parent_dtor(&drm->parent);
+ 	mutex_destroy(&drm->clients_lock);
+ 	kfree(drm);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index a41735ab60683..d66fc3570642b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -1054,8 +1054,6 @@ r535_gsp_postinit(struct nvkm_gsp *gsp)
+ 	/* Release the DMA buffers that were needed only for boot and init */
+ 	nvkm_gsp_mem_dtor(gsp, &gsp->boot.fw);
+ 	nvkm_gsp_mem_dtor(gsp, &gsp->libos);
+-	nvkm_gsp_mem_dtor(gsp, &gsp->rmargs);
+-	nvkm_gsp_mem_dtor(gsp, &gsp->wpr_meta);
+ 
+ 	return ret;
+ }
+@@ -2163,6 +2161,8 @@ r535_gsp_dtor(struct nvkm_gsp *gsp)
+ 
+ 	r535_gsp_dtor_fws(gsp);
+ 
++	nvkm_gsp_mem_dtor(gsp, &gsp->rmargs);
++	nvkm_gsp_mem_dtor(gsp, &gsp->wpr_meta);
+ 	nvkm_gsp_mem_dtor(gsp, &gsp->shm.mem);
+ 	nvkm_gsp_mem_dtor(gsp, &gsp->loginit);
+ 	nvkm_gsp_mem_dtor(gsp, &gsp->logintr);
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index ff36171c8fb70..373bcd79257e0 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -1242,9 +1242,26 @@ static int host1x_drm_probe(struct host1x_device *dev)
+ 
+ 	drm_mode_config_reset(drm);
+ 
+-	err = drm_aperture_remove_framebuffers(&tegra_drm_driver);
+-	if (err < 0)
+-		goto hub;
++	/*
++	 * Only take over from a potential firmware framebuffer if any CRTCs
++	 * have been registered. This must not be a fatal error because there
++	 * are other accelerators that are exposed via this driver.
++	 *
++	 * Another case where this happens is on Tegra234 where the display
++	 * hardware is no longer part of the host1x complex, so this driver
++	 * will not expose any modesetting features.
++	 */
++	if (drm->mode_config.num_crtc > 0) {
++		err = drm_aperture_remove_framebuffers(&tegra_drm_driver);
++		if (err < 0)
++			goto hub;
++	} else {
++		/*
++		 * Indicate to userspace that this doesn't expose any display
++		 * capabilities.
++		 */
++		drm->driver_features &= ~(DRIVER_MODESET | DRIVER_ATOMIC);
++	}
+ 
+ 	err = drm_dev_register(drm, 0);
+ 	if (err < 0)
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 42fd504abbcda..89983d7d73ca1 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -169,6 +169,7 @@ static const struct host1x_info host1x06_info = {
+ 	.num_sid_entries = ARRAY_SIZE(tegra186_sid_table),
+ 	.sid_table = tegra186_sid_table,
+ 	.reserve_vblank_syncpts = false,
++	.skip_reset_assert = true,
+ };
+ 
+ static const struct host1x_sid_entry tegra194_sid_table[] = {
+@@ -680,13 +681,15 @@ static int __maybe_unused host1x_runtime_suspend(struct device *dev)
+ 	host1x_intr_stop(host);
+ 	host1x_syncpt_save(host);
+ 
+-	err = reset_control_bulk_assert(host->nresets, host->resets);
+-	if (err) {
+-		dev_err(dev, "failed to assert reset: %d\n", err);
+-		goto resume_host1x;
+-	}
++	if (!host->info->skip_reset_assert) {
++		err = reset_control_bulk_assert(host->nresets, host->resets);
++		if (err) {
++			dev_err(dev, "failed to assert reset: %d\n", err);
++			goto resume_host1x;
++		}
+ 
+-	usleep_range(1000, 2000);
++		usleep_range(1000, 2000);
++	}
+ 
+ 	clk_disable_unprepare(host->clk);
+ 	reset_control_bulk_release(host->nresets, host->resets);
+diff --git a/drivers/gpu/host1x/dev.h b/drivers/gpu/host1x/dev.h
+index c8e302de76257..925a118db23f5 100644
+--- a/drivers/gpu/host1x/dev.h
++++ b/drivers/gpu/host1x/dev.h
+@@ -116,6 +116,12 @@ struct host1x_info {
+ 	 * the display driver disables VBLANK increments.
+ 	 */
+ 	bool reserve_vblank_syncpts;
++	/*
++	 * On Tegra186, secure world applications may require access to
++	 * host1x during suspend/resume. To allow this, we need to leave
++	 * host1x not in reset.
++	 */
++	bool skip_reset_assert;
+ };
+ 
+ struct host1x {
+diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
+index 504ac1b01b2d2..05fd9d3abf1b8 100644
+--- a/drivers/iommu/iommufd/io_pagetable.c
++++ b/drivers/iommu/iommufd/io_pagetable.c
+@@ -1330,20 +1330,23 @@ int iopt_disable_large_pages(struct io_pagetable *iopt)
+ 
+ int iopt_add_access(struct io_pagetable *iopt, struct iommufd_access *access)
+ {
++	u32 new_id;
+ 	int rc;
+ 
+ 	down_write(&iopt->domains_rwsem);
+ 	down_write(&iopt->iova_rwsem);
+-	rc = xa_alloc(&iopt->access_list, &access->iopt_access_list_id, access,
+-		      xa_limit_16b, GFP_KERNEL_ACCOUNT);
++	rc = xa_alloc(&iopt->access_list, &new_id, access, xa_limit_16b,
++		      GFP_KERNEL_ACCOUNT);
++
+ 	if (rc)
+ 		goto out_unlock;
+ 
+ 	rc = iopt_calculate_iova_alignment(iopt);
+ 	if (rc) {
+-		xa_erase(&iopt->access_list, access->iopt_access_list_id);
++		xa_erase(&iopt->access_list, new_id);
+ 		goto out_unlock;
+ 	}
++	access->iopt_access_list_id = new_id;
+ 
+ out_unlock:
+ 	up_write(&iopt->iova_rwsem);
+diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
+index 022ef8f55088a..d437ffb715e3c 100644
+--- a/drivers/iommu/iommufd/selftest.c
++++ b/drivers/iommu/iommufd/selftest.c
+@@ -48,8 +48,8 @@ enum {
+  * In syzkaller mode the 64 bit IOVA is converted into an nth area and offset
+  * value. This has a much smaller randomization space and syzkaller can hit it.
+  */
+-static unsigned long iommufd_test_syz_conv_iova(struct io_pagetable *iopt,
+-						u64 *iova)
++static unsigned long __iommufd_test_syz_conv_iova(struct io_pagetable *iopt,
++						  u64 *iova)
+ {
+ 	struct syz_layout {
+ 		__u32 nth_area;
+@@ -73,6 +73,21 @@ static unsigned long iommufd_test_syz_conv_iova(struct io_pagetable *iopt,
+ 	return 0;
+ }
+ 
++static unsigned long iommufd_test_syz_conv_iova(struct iommufd_access *access,
++						u64 *iova)
++{
++	unsigned long ret;
++
++	mutex_lock(&access->ioas_lock);
++	if (!access->ioas) {
++		mutex_unlock(&access->ioas_lock);
++		return 0;
++	}
++	ret = __iommufd_test_syz_conv_iova(&access->ioas->iopt, iova);
++	mutex_unlock(&access->ioas_lock);
++	return ret;
++}
++
+ void iommufd_test_syz_conv_iova_id(struct iommufd_ucmd *ucmd,
+ 				   unsigned int ioas_id, u64 *iova, u32 *flags)
+ {
+@@ -85,7 +100,7 @@ void iommufd_test_syz_conv_iova_id(struct iommufd_ucmd *ucmd,
+ 	ioas = iommufd_get_ioas(ucmd->ictx, ioas_id);
+ 	if (IS_ERR(ioas))
+ 		return;
+-	*iova = iommufd_test_syz_conv_iova(&ioas->iopt, iova);
++	*iova = __iommufd_test_syz_conv_iova(&ioas->iopt, iova);
+ 	iommufd_put_object(ucmd->ictx, &ioas->obj);
+ }
+ 
+@@ -1045,7 +1060,7 @@ static int iommufd_test_access_pages(struct iommufd_ucmd *ucmd,
+ 	}
+ 
+ 	if (flags & MOCK_FLAGS_ACCESS_SYZ)
+-		iova = iommufd_test_syz_conv_iova(&staccess->access->ioas->iopt,
++		iova = iommufd_test_syz_conv_iova(staccess->access,
+ 					&cmd->access_pages.iova);
+ 
+ 	npages = (ALIGN(iova + length, PAGE_SIZE) -
+@@ -1147,8 +1162,8 @@ static int iommufd_test_access_rw(struct iommufd_ucmd *ucmd,
+ 	}
+ 
+ 	if (flags & MOCK_FLAGS_ACCESS_SYZ)
+-		iova = iommufd_test_syz_conv_iova(&staccess->access->ioas->iopt,
+-					&cmd->access_rw.iova);
++		iova = iommufd_test_syz_conv_iova(staccess->access,
++				&cmd->access_rw.iova);
+ 
+ 	rc = iommufd_access_rw(staccess->access, iova, tmp, length, flags);
+ 	if (rc)
+diff --git a/drivers/mfd/twl6030-irq.c b/drivers/mfd/twl6030-irq.c
+index f9fce8408c2cf..3c03681c124c0 100644
+--- a/drivers/mfd/twl6030-irq.c
++++ b/drivers/mfd/twl6030-irq.c
+@@ -24,10 +24,10 @@
+ #include <linux/kthread.h>
+ #include <linux/mfd/twl.h>
+ #include <linux/platform_device.h>
+-#include <linux/property.h>
+ #include <linux/suspend.h>
+ #include <linux/of.h>
+ #include <linux/irqdomain.h>
++#include <linux/of_device.h>
+ 
+ #include "twl-core.h"
+ 
+@@ -368,10 +368,10 @@ int twl6030_init_irq(struct device *dev, int irq_num)
+ 	int			nr_irqs;
+ 	int			status;
+ 	u8			mask[3];
+-	const int		*irq_tbl;
++	const struct of_device_id *of_id;
+ 
+-	irq_tbl = device_get_match_data(dev);
+-	if (!irq_tbl) {
++	of_id = of_match_device(twl6030_of_match, dev);
++	if (!of_id || !of_id->data) {
+ 		dev_err(dev, "Unknown TWL device model\n");
+ 		return -EINVAL;
+ 	}
+@@ -409,7 +409,7 @@ int twl6030_init_irq(struct device *dev, int irq_num)
+ 
+ 	twl6030_irq->pm_nb.notifier_call = twl6030_irq_pm_notifier;
+ 	atomic_set(&twl6030_irq->wakeirqs, 0);
+-	twl6030_irq->irq_mapping_tbl = irq_tbl;
++	twl6030_irq->irq_mapping_tbl = of_id->data;
+ 
+ 	twl6030_irq->irq_domain =
+ 		irq_domain_add_linear(node, nr_irqs,
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 705942edacc6a..8479cf3df9749 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -1006,10 +1006,12 @@ static int mmc_select_bus_width(struct mmc_card *card)
+ 	static unsigned ext_csd_bits[] = {
+ 		EXT_CSD_BUS_WIDTH_8,
+ 		EXT_CSD_BUS_WIDTH_4,
++		EXT_CSD_BUS_WIDTH_1,
+ 	};
+ 	static unsigned bus_widths[] = {
+ 		MMC_BUS_WIDTH_8,
+ 		MMC_BUS_WIDTH_4,
++		MMC_BUS_WIDTH_1,
+ 	};
+ 	struct mmc_host *host = card->host;
+ 	unsigned idx, bus_width = 0;
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index 35067e1e6cd80..f5da7f9baa52d 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -225,6 +225,8 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
+ 	struct scatterlist *sg;
+ 	int i;
+ 
++	host->dma_in_progress = true;
++
+ 	if (!host->variant->dma_lli || data->sg_len == 1 ||
+ 	    idma->use_bounce_buffer) {
+ 		u32 dma_addr;
+@@ -263,9 +265,30 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
+ 	return 0;
+ }
+ 
++static void sdmmc_idma_error(struct mmci_host *host)
++{
++	struct mmc_data *data = host->data;
++	struct sdmmc_idma *idma = host->dma_priv;
++
++	if (!dma_inprogress(host))
++		return;
++
++	writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
++	host->dma_in_progress = false;
++	data->host_cookie = 0;
++
++	if (!idma->use_bounce_buffer)
++		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
++			     mmc_get_dma_dir(data));
++}
++
+ static void sdmmc_idma_finalize(struct mmci_host *host, struct mmc_data *data)
+ {
++	if (!dma_inprogress(host))
++		return;
++
+ 	writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
++	host->dma_in_progress = false;
+ 
+ 	if (!data->host_cookie)
+ 		sdmmc_idma_unprep_data(host, data, 0);
+@@ -676,6 +699,7 @@ static struct mmci_host_ops sdmmc_variant_ops = {
+ 	.dma_setup = sdmmc_idma_setup,
+ 	.dma_start = sdmmc_idma_start,
+ 	.dma_finalize = sdmmc_idma_finalize,
++	.dma_error = sdmmc_idma_error,
+ 	.set_clkreg = mmci_sdmmc_set_clkreg,
+ 	.set_pwrreg = mmci_sdmmc_set_pwrreg,
+ 	.busy_complete = sdmmc_busy_complete,
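
[Illustration, not part of the patch] The mmci_stm32 hunks above hang off
a dma_in_progress flag: set when a transfer starts, cleared in both the
error and finalize paths, so a late callback can never tear down a
transfer twice, or one that never started. A runnable toy of the guard
pattern (names invented):

/* cc -o guard guard.c && ./guard */
#include <stdbool.h>
#include <stdio.h>

struct host {
	bool dma_in_progress;
};

static void dma_start(struct host *h) { h->dma_in_progress = true; }

static void dma_error(struct host *h)
{
	if (!h->dma_in_progress)	/* nothing to undo */
		return;
	h->dma_in_progress = false;
	printf("error teardown\n");	/* driver: reset IDMA ctrl, unmap sg */
}

static void dma_finalize(struct host *h)
{
	if (!h->dma_in_progress)
		return;
	h->dma_in_progress = false;
	printf("normal completion\n");
}

int main(void)
{
	struct host h = { 0 };

	dma_error(&h);		/* no-op: transfer never started */
	dma_start(&h);
	dma_finalize(&h);	/* completes exactly once */
	dma_error(&h);		/* no-op: already finalized */
	return 0;
}
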
+diff --git a/drivers/mmc/host/sdhci-xenon-phy.c b/drivers/mmc/host/sdhci-xenon-phy.c
+index 8cf3a375de659..cc9d28b75eb91 100644
+--- a/drivers/mmc/host/sdhci-xenon-phy.c
++++ b/drivers/mmc/host/sdhci-xenon-phy.c
+@@ -11,6 +11,7 @@
+ #include <linux/slab.h>
+ #include <linux/delay.h>
+ #include <linux/ktime.h>
++#include <linux/iopoll.h>
+ #include <linux/of_address.h>
+ 
+ #include "sdhci-pltfm.h"
+@@ -109,6 +110,8 @@
+ #define XENON_EMMC_PHY_LOGIC_TIMING_ADJUST	(XENON_EMMC_PHY_REG_BASE + 0x18)
+ #define XENON_LOGIC_TIMING_VALUE		0x00AA8977
+ 
++#define XENON_MAX_PHY_TIMEOUT_LOOPS		100
++
+ /*
+  * List offset of PHY registers and some special register values
+  * in eMMC PHY 5.0 or eMMC PHY 5.1
+@@ -216,6 +219,19 @@ static int xenon_alloc_emmc_phy(struct sdhci_host *host)
+ 	return 0;
+ }
+ 
++static int xenon_check_stability_internal_clk(struct sdhci_host *host)
++{
++	u32 reg;
++	int err;
++
++	err = read_poll_timeout(sdhci_readw, reg, reg & SDHCI_CLOCK_INT_STABLE,
++				1100, 20000, false, host, SDHCI_CLOCK_CONTROL);
++	if (err)
++		dev_err(mmc_dev(host->mmc), "phy_init: Internal clock never stabilized.\n");
++
++	return err;
++}
++
+ /*
+  * eMMC 5.0/5.1 PHY init/re-init.
+  * eMMC PHY init should be executed after:
+@@ -232,6 +248,11 @@ static int xenon_emmc_phy_init(struct sdhci_host *host)
+ 	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
+ 	struct xenon_emmc_phy_regs *phy_regs = priv->emmc_phy_regs;
+ 
++	int ret = xenon_check_stability_internal_clk(host);
++
++	if (ret)
++		return ret;
++
+ 	reg = sdhci_readl(host, phy_regs->timing_adj);
+ 	reg |= XENON_PHY_INITIALIZAION;
+ 	sdhci_writel(host, reg, phy_regs->timing_adj);
+@@ -259,18 +280,27 @@ static int xenon_emmc_phy_init(struct sdhci_host *host)
+ 	/* get the wait time */
+ 	wait /= clock;
+ 	wait++;
+-	/* wait for host eMMC PHY init completes */
+-	udelay(wait);
+ 
+-	reg = sdhci_readl(host, phy_regs->timing_adj);
+-	reg &= XENON_PHY_INITIALIZAION;
+-	if (reg) {
++	/*
++	 * The AC5X spec says the bit must be polled until zero.
++	 * We see cases in which the timeout can take longer than
++	 * the standard calculation on AC5X, which is expected per
++	 * the spec note above: we must wait as long as it takes
++	 * for that bit to toggle on AC5X. Cap the wait at 100
++	 * delay loops so we won't get stuck here forever:
++	 */
++
++	ret = read_poll_timeout(sdhci_readl, reg,
++				!(reg & XENON_PHY_INITIALIZAION),
++				wait, XENON_MAX_PHY_TIMEOUT_LOOPS * wait,
++				false, host, phy_regs->timing_adj);
++	if (ret)
+ 		dev_err(mmc_dev(host->mmc), "eMMC PHY init cannot complete after %d us\n",
+-			wait);
+-		return -ETIMEDOUT;
+-	}
++			wait * XENON_MAX_PHY_TIMEOUT_LOOPS);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ #define ARMADA_3700_SOC_PAD_1_8V	0x1
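
[Illustration, not part of the patch] The xenon change replaces one fixed
udelay()-then-check with a bounded polling loop, capped at 100 iterations
of the computed per-loop wait (XENON_MAX_PHY_TIMEOUT_LOOPS). A standalone
sketch of the pattern (the simulated register below is invented):

/* cc -o poll poll.c && ./poll */
#include <stdio.h>
#include <unistd.h>

#define MAX_LOOPS 100

static unsigned int read_reg(void)
{
	static int countdown = 7;	/* pretend hw clears the bit later */
	return countdown-- > 0 ? 0x1 : 0x0;
}

static int poll_init_done(unsigned int wait_us)
{
	for (int i = 0; i < MAX_LOOPS; i++) {
		if (!(read_reg() & 0x1))	/* init-in-progress bit */
			return 0;
		usleep(wait_us);
	}
	return -1;	/* the driver returns -ETIMEDOUT here */
}

int main(void)
{
	printf("phy init %s\n", poll_init_done(10) ? "timed out" : "done");
	return 0;
}
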
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index a466987448502..5b0f5a9cef81b 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -290,16 +290,13 @@ static const struct marvell_hw_ecc_layout marvell_nfc_layouts[] = {
+ 	MARVELL_LAYOUT( 2048,   512,  4,  1,  1, 2048, 32, 30,  0,  0,  0),
+ 	MARVELL_LAYOUT( 2048,   512,  8,  2,  1, 1024,  0, 30,1024,32, 30),
+ 	MARVELL_LAYOUT( 2048,   512,  8,  2,  1, 1024,  0, 30,1024,64, 30),
+-	MARVELL_LAYOUT( 2048,   512,  12, 3,  2, 704,   0, 30,640,  0, 30),
+-	MARVELL_LAYOUT( 2048,   512,  16, 5,  4, 512,   0, 30,  0, 32, 30),
++	MARVELL_LAYOUT( 2048,   512,  16, 4,  4, 512,   0, 30,  0, 32, 30),
+ 	MARVELL_LAYOUT( 4096,   512,  4,  2,  2, 2048, 32, 30,  0,  0,  0),
+-	MARVELL_LAYOUT( 4096,   512,  8,  5,  4, 1024,  0, 30,  0, 64, 30),
+-	MARVELL_LAYOUT( 4096,   512,  12, 6,  5, 704,   0, 30,576, 32, 30),
+-	MARVELL_LAYOUT( 4096,   512,  16, 9,  8, 512,   0, 30,  0, 32, 30),
++	MARVELL_LAYOUT( 4096,   512,  8,  4,  4, 1024,  0, 30,  0, 64, 30),
++	MARVELL_LAYOUT( 4096,   512,  16, 8,  8, 512,   0, 30,  0, 32, 30),
+ 	MARVELL_LAYOUT( 8192,   512,  4,  4,  4, 2048,  0, 30,  0,  0,  0),
+-	MARVELL_LAYOUT( 8192,   512,  8,  9,  8, 1024,  0, 30,  0, 160, 30),
+-	MARVELL_LAYOUT( 8192,   512,  12, 12, 11, 704,  0, 30,448,  64, 30),
+-	MARVELL_LAYOUT( 8192,   512,  16, 17, 16, 512,  0, 30,  0,  32, 30),
++	MARVELL_LAYOUT( 8192,   512,  8,  8,  8, 1024,  0, 30,  0, 160, 30),
++	MARVELL_LAYOUT( 8192,   512,  16, 16, 16, 512,  0, 30,  0,  32, 30),
+ };
+ 
+ /**
+diff --git a/drivers/mtd/nand/spi/gigadevice.c b/drivers/mtd/nand/spi/gigadevice.c
+index 987710e09441a..6023cba748bb8 100644
+--- a/drivers/mtd/nand/spi/gigadevice.c
++++ b/drivers/mtd/nand/spi/gigadevice.c
+@@ -186,7 +186,7 @@ static int gd5fxgq4uexxg_ecc_get_status(struct spinand_device *spinand,
+ {
+ 	u8 status2;
+ 	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
+-						      &status2);
++						      spinand->scratchbuf);
+ 	int ret;
+ 
+ 	switch (status & STATUS_ECC_MASK) {
+@@ -207,6 +207,7 @@ static int gd5fxgq4uexxg_ecc_get_status(struct spinand_device *spinand,
+ 		 * report the maximum of 4 in this case
+ 		 */
+ 		/* bits sorted this way (3...0): ECCS1,ECCS0,ECCSE1,ECCSE0 */
++		status2 = *(spinand->scratchbuf);
+ 		return ((status & STATUS_ECC_MASK) >> 2) |
+ 			((status2 & STATUS_ECC_MASK) >> 4);
+ 
+@@ -228,7 +229,7 @@ static int gd5fxgq5xexxg_ecc_get_status(struct spinand_device *spinand,
+ {
+ 	u8 status2;
+ 	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
+-						      &status2);
++						      spinand->scratchbuf);
+ 	int ret;
+ 
+ 	switch (status & STATUS_ECC_MASK) {
+@@ -248,6 +249,7 @@ static int gd5fxgq5xexxg_ecc_get_status(struct spinand_device *spinand,
+ 		 * 1 ... 4 bits are flipped (and corrected)
+ 		 */
+ 		/* bits sorted this way (1...0): ECCSE1, ECCSE0 */
++		status2 = *(spinand->scratchbuf);
+ 		return ((status2 & STATUS_ECC_MASK) >> 4) + 1;
+ 
+ 	case STATUS_ECC_UNCOR_ERROR:
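
[Illustration, not part of the patch] The gigadevice hunks above stop
reading the STATUS2 register into an on-stack u8: SPI transfers may be
DMA'd, and kernel stacks can be vmalloc'ed and thus not DMA-addressable.
The transfer now lands in the driver-owned spinand->scratchbuf and is
copied out afterwards. A runnable toy of the bounce-buffer pattern
(types and names invented):

/* cc -o scratch scratch.c && ./scratch */
#include <stdio.h>

struct spinand_dev {
	unsigned char *scratchbuf;	/* kmalloc'ed, DMA-capable in the driver */
};

/* stand-in for spi_mem_exec_op() DMA'ing into the target buffer */
static int spi_read_reg(struct spinand_dev *dev, unsigned char *dma_target)
{
	dma_target[0] = 0x30;	/* pretend the controller wrote here */
	return 0;
}

static int get_status2(struct spinand_dev *dev, unsigned char *out)
{
	int ret;

	/* Wrong: passing &stack_byte as the DMA target. Right: land the
	 * transfer in the scratch buffer, then copy the byte out.
	 */
	ret = spi_read_reg(dev, dev->scratchbuf);
	if (ret)
		return ret;
	*out = dev->scratchbuf[0];
	return 0;
}

int main(void)
{
	unsigned char buf[1], status2;
	struct spinand_dev dev = { .scratchbuf = buf };

	if (!get_status2(&dev, &status2))
		printf("status2 = %#x\n", status2);
	return 0;
}
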
+diff --git a/drivers/net/ethernet/freescale/fman/fman_memac.c b/drivers/net/ethernet/freescale/fman/fman_memac.c
+index 9ba15d3183d75..758535adc9ff5 100644
+--- a/drivers/net/ethernet/freescale/fman/fman_memac.c
++++ b/drivers/net/ethernet/freescale/fman/fman_memac.c
+@@ -1073,6 +1073,14 @@ int memac_initialization(struct mac_device *mac_dev,
+ 	unsigned long		 capabilities;
+ 	unsigned long		*supported;
+ 
++	/* The internal connection to the serdes is XGMII, but this isn't
++	 * really correct for the phy mode (which is the external connection).
++	 * However, this is how all older device trees say that they want
++	 * 10GBASE-R (aka XFI), so just convert it for them.
++	 */
++	if (mac_dev->phy_if == PHY_INTERFACE_MODE_XGMII)
++		mac_dev->phy_if = PHY_INTERFACE_MODE_10GBASER;
++
+ 	mac_dev->phylink_ops		= &memac_mac_ops;
+ 	mac_dev->set_promisc		= memac_set_promiscuous;
+ 	mac_dev->change_addr		= memac_modify_mac_address;
+@@ -1139,7 +1147,7 @@ int memac_initialization(struct mac_device *mac_dev,
+ 	 * (and therefore that xfi_pcs cannot be set). If we are defaulting to
+ 	 * XGMII, assume this is for XFI. Otherwise, assume it is for SGMII.
+ 	 */
+-	if (err && mac_dev->phy_if == PHY_INTERFACE_MODE_XGMII)
++	if (err && mac_dev->phy_if == PHY_INTERFACE_MODE_10GBASER)
+ 		memac->xfi_pcs = pcs;
+ 	else
+ 		memac->sgmii_pcs = pcs;
+@@ -1153,14 +1161,6 @@ int memac_initialization(struct mac_device *mac_dev,
+ 		goto _return_fm_mac_free;
+ 	}
+ 
+-	/* The internal connection to the serdes is XGMII, but this isn't
+-	 * really correct for the phy mode (which is the external connection).
+-	 * However, this is how all older device trees say that they want
+-	 * 10GBASE-R (aka XFI), so just convert it for them.
+-	 */
+-	if (mac_dev->phy_if == PHY_INTERFACE_MODE_XGMII)
+-		mac_dev->phy_if = PHY_INTERFACE_MODE_10GBASER;
+-
+ 	/* TODO: The following interface modes are supported by (some) hardware
+ 	 * but not by this driver:
+ 	 * - 1000BASE-KX
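
The fman_memac.c hunks are a pure ordering fix: the legacy XGMII value
that old device trees use to mean 10GBASE-R must be rewritten before the
code that chooses between the XFI and SGMII PCS tests mac_dev->phy_if,
otherwise the 10GBASER comparison can never match. The shape of the bug,
reduced to a hypothetical sketch:

    enum mode { XGMII, TENG_BASER, SGMII };

    /* old device trees say XGMII but mean 10GBASE-R */
    static enum mode normalize(enum mode m)
    {
    	return m == XGMII ? TENG_BASER : m;
    }

    /* 1 = XFI PCS, 0 = SGMII PCS; correct only because normalize() ran first */
    static int pick_pcs(enum mode m)
    {
    	m = normalize(m);
    	return m == TENG_BASER;
    }

    int main(void)
    {
    	return pick_pcs(XGMII) ? 0 : 1;	/* legacy value must pick the XFI PCS */
    }
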
+diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
+index 86b180cb32a02..2b657d43c769d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
++++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
+@@ -30,6 +30,26 @@ static const char * const pin_type_name[] = {
+ 	[ICE_DPLL_PIN_TYPE_RCLK_INPUT] = "rclk-input",
+ };
+ 
++/**
++ * ice_dpll_is_reset - check if reset is in progress
++ * @pf: private board structure
++ * @extack: error reporting
++ *
++ * If reset is in progress, fill extack with error.
++ *
++ * Return:
++ * * false - no reset in progress
++ * * true - reset in progress
++ */
++static bool ice_dpll_is_reset(struct ice_pf *pf, struct netlink_ext_ack *extack)
++{
++	if (ice_is_reset_in_progress(pf->state)) {
++		NL_SET_ERR_MSG(extack, "PF reset in progress");
++		return true;
++	}
++	return false;
++}
++
+ /**
+  * ice_dpll_pin_freq_set - set pin's frequency
+  * @pf: private board structure
+@@ -109,6 +129,9 @@ ice_dpll_frequency_set(const struct dpll_pin *pin, void *pin_priv,
+ 	struct ice_pf *pf = d->pf;
+ 	int ret;
+ 
++	if (ice_dpll_is_reset(pf, extack))
++		return -EBUSY;
++
+ 	mutex_lock(&pf->dplls.lock);
+ 	ret = ice_dpll_pin_freq_set(pf, p, pin_type, frequency, extack);
+ 	mutex_unlock(&pf->dplls.lock);
+@@ -254,6 +277,7 @@ ice_dpll_output_frequency_get(const struct dpll_pin *pin, void *pin_priv,
+  * ice_dpll_pin_enable - enable a pin on dplls
+  * @hw: board private hw structure
+  * @pin: pointer to a pin
++ * @dpll_idx: dpll index to connect to output pin
+  * @pin_type: type of pin being enabled
+  * @extack: error reporting
+  *
+@@ -266,7 +290,7 @@ ice_dpll_output_frequency_get(const struct dpll_pin *pin, void *pin_priv,
+  */
+ static int
+ ice_dpll_pin_enable(struct ice_hw *hw, struct ice_dpll_pin *pin,
+-		    enum ice_dpll_pin_type pin_type,
++		    u8 dpll_idx, enum ice_dpll_pin_type pin_type,
+ 		    struct netlink_ext_ack *extack)
+ {
+ 	u8 flags = 0;
+@@ -280,10 +304,12 @@ ice_dpll_pin_enable(struct ice_hw *hw, struct ice_dpll_pin *pin,
+ 		ret = ice_aq_set_input_pin_cfg(hw, pin->idx, 0, flags, 0, 0);
+ 		break;
+ 	case ICE_DPLL_PIN_TYPE_OUTPUT:
++		flags = ICE_AQC_SET_CGU_OUT_CFG_UPDATE_SRC_SEL;
+ 		if (pin->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN)
+ 			flags |= ICE_AQC_SET_CGU_OUT_CFG_ESYNC_EN;
+ 		flags |= ICE_AQC_SET_CGU_OUT_CFG_OUT_EN;
+-		ret = ice_aq_set_output_pin_cfg(hw, pin->idx, flags, 0, 0, 0);
++		ret = ice_aq_set_output_pin_cfg(hw, pin->idx, flags, dpll_idx,
++						0, 0);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -370,7 +396,7 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ 	case ICE_DPLL_PIN_TYPE_INPUT:
+ 		ret = ice_aq_get_input_pin_cfg(&pf->hw, pin->idx, NULL, NULL,
+ 					       NULL, &pin->flags[0],
+-					       &pin->freq, NULL);
++					       &pin->freq, &pin->phase_adjust);
+ 		if (ret)
+ 			goto err;
+ 		if (ICE_AQC_GET_CGU_IN_CFG_FLG2_INPUT_EN & pin->flags[0]) {
+@@ -398,14 +424,27 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ 		break;
+ 	case ICE_DPLL_PIN_TYPE_OUTPUT:
+ 		ret = ice_aq_get_output_pin_cfg(&pf->hw, pin->idx,
+-						&pin->flags[0], NULL,
++						&pin->flags[0], &parent,
+ 						&pin->freq, NULL);
+ 		if (ret)
+ 			goto err;
+-		if (ICE_AQC_SET_CGU_OUT_CFG_OUT_EN & pin->flags[0])
+-			pin->state[0] = DPLL_PIN_STATE_CONNECTED;
+-		else
+-			pin->state[0] = DPLL_PIN_STATE_DISCONNECTED;
++
++		parent &= ICE_AQC_GET_CGU_OUT_CFG_DPLL_SRC_SEL;
++		if (ICE_AQC_SET_CGU_OUT_CFG_OUT_EN & pin->flags[0]) {
++			pin->state[pf->dplls.eec.dpll_idx] =
++				parent == pf->dplls.eec.dpll_idx ?
++				DPLL_PIN_STATE_CONNECTED :
++				DPLL_PIN_STATE_DISCONNECTED;
++			pin->state[pf->dplls.pps.dpll_idx] =
++				parent == pf->dplls.pps.dpll_idx ?
++				DPLL_PIN_STATE_CONNECTED :
++				DPLL_PIN_STATE_DISCONNECTED;
++		} else {
++			pin->state[pf->dplls.eec.dpll_idx] =
++				DPLL_PIN_STATE_DISCONNECTED;
++			pin->state[pf->dplls.pps.dpll_idx] =
++				DPLL_PIN_STATE_DISCONNECTED;
++		}
+ 		break;
+ 	case ICE_DPLL_PIN_TYPE_RCLK_INPUT:
+ 		for (parent = 0; parent < pf->dplls.rclk.num_parents;
+@@ -593,9 +632,13 @@ ice_dpll_pin_state_set(const struct dpll_pin *pin, void *pin_priv,
+ 	struct ice_pf *pf = d->pf;
+ 	int ret;
+ 
++	if (ice_dpll_is_reset(pf, extack))
++		return -EBUSY;
++
+ 	mutex_lock(&pf->dplls.lock);
+ 	if (enable)
+-		ret = ice_dpll_pin_enable(&pf->hw, p, pin_type, extack);
++		ret = ice_dpll_pin_enable(&pf->hw, p, d->dpll_idx, pin_type,
++					  extack);
+ 	else
+ 		ret = ice_dpll_pin_disable(&pf->hw, p, pin_type, extack);
+ 	if (!ret)
+@@ -628,6 +671,11 @@ ice_dpll_output_state_set(const struct dpll_pin *pin, void *pin_priv,
+ 			  struct netlink_ext_ack *extack)
+ {
+ 	bool enable = state == DPLL_PIN_STATE_CONNECTED;
++	struct ice_dpll_pin *p = pin_priv;
++	struct ice_dpll *d = dpll_priv;
++
++	if (!enable && p->state[d->dpll_idx] == DPLL_PIN_STATE_DISCONNECTED)
++		return 0;
+ 
+ 	return ice_dpll_pin_state_set(pin, pin_priv, dpll, dpll_priv, enable,
+ 				      extack, ICE_DPLL_PIN_TYPE_OUTPUT);
+@@ -690,14 +738,16 @@ ice_dpll_pin_state_get(const struct dpll_pin *pin, void *pin_priv,
+ 	struct ice_pf *pf = d->pf;
+ 	int ret;
+ 
++	if (ice_dpll_is_reset(pf, extack))
++		return -EBUSY;
++
+ 	mutex_lock(&pf->dplls.lock);
+ 	ret = ice_dpll_pin_state_update(pf, p, pin_type, extack);
+ 	if (ret)
+ 		goto unlock;
+-	if (pin_type == ICE_DPLL_PIN_TYPE_INPUT)
++	if (pin_type == ICE_DPLL_PIN_TYPE_INPUT ||
++	    pin_type == ICE_DPLL_PIN_TYPE_OUTPUT)
+ 		*state = p->state[d->dpll_idx];
+-	else if (pin_type == ICE_DPLL_PIN_TYPE_OUTPUT)
+-		*state = p->state[0];
+ 	ret = 0;
+ unlock:
+ 	mutex_unlock(&pf->dplls.lock);
+@@ -815,6 +865,9 @@ ice_dpll_input_prio_set(const struct dpll_pin *pin, void *pin_priv,
+ 	struct ice_pf *pf = d->pf;
+ 	int ret;
+ 
++	if (ice_dpll_is_reset(pf, extack))
++		return -EBUSY;
++
+ 	mutex_lock(&pf->dplls.lock);
+ 	ret = ice_dpll_hw_input_prio_set(pf, d, p, prio, extack);
+ 	mutex_unlock(&pf->dplls.lock);
+@@ -935,6 +988,9 @@ ice_dpll_pin_phase_adjust_set(const struct dpll_pin *pin, void *pin_priv,
+ 	u8 flag, flags_en = 0;
+ 	int ret;
+ 
++	if (ice_dpll_is_reset(pf, extack))
++		return -EBUSY;
++
+ 	mutex_lock(&pf->dplls.lock);
+ 	switch (type) {
+ 	case ICE_DPLL_PIN_TYPE_INPUT:
+@@ -1094,6 +1150,9 @@ ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv,
+ 	int ret = -EINVAL;
+ 	u32 hw_idx;
+ 
++	if (ice_dpll_is_reset(pf, extack))
++		return -EBUSY;
++
+ 	mutex_lock(&pf->dplls.lock);
+ 	hw_idx = parent->idx - pf->dplls.base_rclk_idx;
+ 	if (hw_idx >= pf->dplls.num_inputs)
+@@ -1148,6 +1207,9 @@ ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv,
+ 	int ret = -EINVAL;
+ 	u32 hw_idx;
+ 
++	if (ice_dpll_is_reset(pf, extack))
++		return -EBUSY;
++
+ 	mutex_lock(&pf->dplls.lock);
+ 	hw_idx = parent->idx - pf->dplls.base_rclk_idx;
+ 	if (hw_idx >= pf->dplls.num_inputs)
+@@ -1331,8 +1393,10 @@ static void ice_dpll_periodic_work(struct kthread_work *work)
+ 	struct ice_pf *pf = container_of(d, struct ice_pf, dplls);
+ 	struct ice_dpll *de = &pf->dplls.eec;
+ 	struct ice_dpll *dp = &pf->dplls.pps;
+-	int ret;
++	int ret = 0;
+ 
++	if (ice_is_reset_in_progress(pf->state))
++		goto resched;
+ 	mutex_lock(&pf->dplls.lock);
+ 	ret = ice_dpll_update_state(pf, de, false);
+ 	if (!ret)
+@@ -1352,6 +1416,7 @@ static void ice_dpll_periodic_work(struct kthread_work *work)
+ 	ice_dpll_notify_changes(de);
+ 	ice_dpll_notify_changes(dp);
+ 
++resched:
+ 	/* Run twice a second or reschedule if update failed */
+ 	kthread_queue_delayed_work(d->kworker, &d->work,
+ 				   ret ? msecs_to_jiffies(10) :
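
Every externally reachable DPLL callback in the ice_dpll.c hunks gains
the same prologue: report -EBUSY through the new ice_dpll_is_reset()
helper before taking the lock or touching hardware, while the periodic
worker simply reschedules itself until the reset finishes. A small
userspace sketch of the guard (names illustrative):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>

    struct pf {
    	bool resetting;			/* mirrors ice_is_reset_in_progress() */
    	pthread_mutex_t lock;
    	long freq;
    };

    static int set_freq(struct pf *pf, long freq)
    {
    	if (pf->resetting)		/* fail fast, never block on the lock */
    		return -EBUSY;

    	pthread_mutex_lock(&pf->lock);
    	pf->freq = freq;
    	pthread_mutex_unlock(&pf->lock);
    	return 0;
    }

    int main(void)
    {
    	struct pf pf = { .resetting = true,
    			 .lock = PTHREAD_MUTEX_INITIALIZER };

    	return set_freq(&pf, 10000000) == -EBUSY ? 0 : 1;
    }
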
+diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
+index 319c544b9f04c..f945705561200 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
++++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
+@@ -957,7 +957,7 @@ static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter)
+ 
+ 	igb_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
+ 	/* adjust timestamp for the TX latency based on link speed */
+-	if (adapter->hw.mac.type == e1000_i210) {
++	if (hw->mac.type == e1000_i210 || hw->mac.type == e1000_i211) {
+ 		switch (adapter->link_speed) {
+ 		case SPEED_10:
+ 			adjust = IGB_I210_TX_LATENCY_10;
+@@ -1003,6 +1003,7 @@ int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
+ 			ktime_t *timestamp)
+ {
+ 	struct igb_adapter *adapter = q_vector->adapter;
++	struct e1000_hw *hw = &adapter->hw;
+ 	struct skb_shared_hwtstamps ts;
+ 	__le64 *regval = (__le64 *)va;
+ 	int adjust = 0;
+@@ -1022,7 +1023,7 @@ int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,
+ 	igb_ptp_systim_to_hwtstamp(adapter, &ts, le64_to_cpu(regval[1]));
+ 
+ 	/* adjust timestamp for the RX latency based on link speed */
+-	if (adapter->hw.mac.type == e1000_i210) {
++	if (hw->mac.type == e1000_i210 || hw->mac.type == e1000_i211) {
+ 		switch (adapter->link_speed) {
+ 		case SPEED_10:
+ 			adjust = IGB_I210_RX_LATENCY_10;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index e9a1b60ebb503..de4d769195174 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3942,8 +3942,10 @@ static void stmmac_fpe_stop_wq(struct stmmac_priv *priv)
+ {
+ 	set_bit(__FPE_REMOVING, &priv->fpe_task_state);
+ 
+-	if (priv->fpe_wq)
++	if (priv->fpe_wq) {
+ 		destroy_workqueue(priv->fpe_wq);
++		priv->fpe_wq = NULL;
++	}
+ 
+ 	netdev_info(priv->dev, "FPE workqueue stop");
+ }
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 2129ae42c7030..2b5357d94ff56 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1903,26 +1903,26 @@ static int __init gtp_init(void)
+ 
+ 	get_random_bytes(&gtp_h_initval, sizeof(gtp_h_initval));
+ 
+-	err = rtnl_link_register(&gtp_link_ops);
++	err = register_pernet_subsys(&gtp_net_ops);
+ 	if (err < 0)
+ 		goto error_out;
+ 
+-	err = register_pernet_subsys(&gtp_net_ops);
++	err = rtnl_link_register(&gtp_link_ops);
+ 	if (err < 0)
+-		goto unreg_rtnl_link;
++		goto unreg_pernet_subsys;
+ 
+ 	err = genl_register_family(&gtp_genl_family);
+ 	if (err < 0)
+-		goto unreg_pernet_subsys;
++		goto unreg_rtnl_link;
+ 
+ 	pr_info("GTP module loaded (pdp ctx size %zd bytes)\n",
+ 		sizeof(struct pdp_ctx));
+ 	return 0;
+ 
+-unreg_pernet_subsys:
+-	unregister_pernet_subsys(&gtp_net_ops);
+ unreg_rtnl_link:
+ 	rtnl_link_unregister(&gtp_link_ops);
++unreg_pernet_subsys:
++	unregister_pernet_subsys(&gtp_net_ops);
+ error_out:
+ 	pr_err("error loading GTP module loaded\n");
+ 	return err;
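
The gtp_init() reordering follows the standard module-init rule: acquire
resources in dependency order (pernet state before the rtnl link ops
that can use it) and release them in exactly the reverse order on
failure. A generic sketch of that unwind shape, with stub step names:

    static int step_a(void) { return 0; }
    static int step_b(void) { return 0; }
    static int step_c(void) { return -1; }	/* pretend the last step fails */
    static void undo_b(void) { }
    static void undo_a(void) { }

    static int init_all(void)
    {
    	int err;

    	err = step_a();
    	if (err)
    		return err;
    	err = step_b();
    	if (err)
    		goto out_undo_a;
    	err = step_c();
    	if (err)
    		goto out_undo_b;
    	return 0;

    out_undo_b:			/* unwind in reverse acquisition order */
    	undo_b();
    out_undo_a:
    	undo_a();
    	return err;
    }

    int main(void)
    {
    	return init_all() ? 0 : 1;	/* expect the unwind path to run */
    }
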
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 4a4f8c8e79fa1..8f95a562b8d0c 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -653,6 +653,7 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 				   tun->tfiles[tun->numqueues - 1]);
+ 		ntfile = rtnl_dereference(tun->tfiles[index]);
+ 		ntfile->queue_index = index;
++		ntfile->xdp_rxq.queue_index = index;
+ 		rcu_assign_pointer(tun->tfiles[tun->numqueues - 1],
+ 				   NULL);
+ 
+diff --git a/drivers/net/usb/dm9601.c b/drivers/net/usb/dm9601.c
+index 99ec1d4a972db..8b6d6a1b3c2ec 100644
+--- a/drivers/net/usb/dm9601.c
++++ b/drivers/net/usb/dm9601.c
+@@ -232,7 +232,7 @@ static int dm9601_mdio_read(struct net_device *netdev, int phy_id, int loc)
+ 	err = dm_read_shared_word(dev, 1, loc, &res);
+ 	if (err < 0) {
+ 		netdev_err(dev->net, "MDIO read error: %d\n", err);
+-		return err;
++		return 0;
+ 	}
+ 
+ 	netdev_dbg(dev->net,
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 5add4145d9fc1..a2dde84499fdd 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1501,7 +1501,9 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 
+ 		lan78xx_rx_urb_submit_all(dev);
+ 
++		local_bh_disable();
+ 		napi_schedule(&dev->napi);
++		local_bh_enable();
+ 	}
+ 
+ 	return 0;
+@@ -3035,7 +3037,8 @@ static int lan78xx_reset(struct lan78xx_net *dev)
+ 	if (dev->chipid == ID_REV_CHIP_ID_7801_)
+ 		buf &= ~MAC_CR_GMII_EN_;
+ 
+-	if (dev->chipid == ID_REV_CHIP_ID_7800_) {
++	if (dev->chipid == ID_REV_CHIP_ID_7800_ ||
++	    dev->chipid == ID_REV_CHIP_ID_7850_) {
+ 		ret = lan78xx_read_raw_eeprom(dev, 0, 1, &sig);
+ 		if (!ret && sig != EEPROM_INDICATOR) {
+ 			/* Implies there is no external eeprom. Set mac speed */
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index 977861c46b1fe..a2e80278eb2f9 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -1208,14 +1208,6 @@ static int veth_enable_xdp(struct net_device *dev)
+ 				veth_disable_xdp_range(dev, 0, dev->real_num_rx_queues, true);
+ 				return err;
+ 			}
+-
+-			if (!veth_gro_requested(dev)) {
+-				/* user-space did not require GRO, but adding XDP
+-				 * is supposed to get GRO working
+-				 */
+-				dev->features |= NETIF_F_GRO;
+-				netdev_features_change(dev);
+-			}
+ 		}
+ 	}
+ 
+@@ -1235,18 +1227,9 @@ static void veth_disable_xdp(struct net_device *dev)
+ 	for (i = 0; i < dev->real_num_rx_queues; i++)
+ 		rcu_assign_pointer(priv->rq[i].xdp_prog, NULL);
+ 
+-	if (!netif_running(dev) || !veth_gro_requested(dev)) {
++	if (!netif_running(dev) || !veth_gro_requested(dev))
+ 		veth_napi_del(dev);
+ 
+-		/* if user-space did not require GRO, since adding XDP
+-		 * enabled it, clear it now
+-		 */
+-		if (!veth_gro_requested(dev) && netif_running(dev)) {
+-			dev->features &= ~NETIF_F_GRO;
+-			netdev_features_change(dev);
+-		}
+-	}
+-
+ 	veth_disable_xdp_range(dev, 0, dev->real_num_rx_queues, false);
+ }
+ 
+@@ -1478,7 +1461,8 @@ static int veth_alloc_queues(struct net_device *dev)
+ 	struct veth_priv *priv = netdev_priv(dev);
+ 	int i;
+ 
+-	priv->rq = kcalloc(dev->num_rx_queues, sizeof(*priv->rq), GFP_KERNEL_ACCOUNT);
++	priv->rq = kvcalloc(dev->num_rx_queues, sizeof(*priv->rq),
++			    GFP_KERNEL_ACCOUNT | __GFP_RETRY_MAYFAIL);
+ 	if (!priv->rq)
+ 		return -ENOMEM;
+ 
+@@ -1494,7 +1478,7 @@ static void veth_free_queues(struct net_device *dev)
+ {
+ 	struct veth_priv *priv = netdev_priv(dev);
+ 
+-	kfree(priv->rq);
++	kvfree(priv->rq);
+ }
+ 
+ static int veth_dev_init(struct net_device *dev)
+@@ -1654,6 +1638,14 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 		}
+ 
+ 		if (!old_prog) {
++			if (!veth_gro_requested(dev)) {
++				/* user-space did not require GRO, but adding
++				 * XDP is supposed to get GRO working
++				 */
++				dev->features |= NETIF_F_GRO;
++				netdev_features_change(dev);
++			}
++
+ 			peer->hw_features &= ~NETIF_F_GSO_SOFTWARE;
+ 			peer->max_mtu = max_mtu;
+ 		}
+@@ -1669,6 +1661,14 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 			if (dev->flags & IFF_UP)
+ 				veth_disable_xdp(dev);
+ 
++			/* if user-space did not require GRO, since adding XDP
++			 * enabled it, clear it now
++			 */
++			if (!veth_gro_requested(dev)) {
++				dev->features &= ~NETIF_F_GRO;
++				netdev_features_change(dev);
++			}
++
+ 			if (peer) {
+ 				peer->hw_features |= NETIF_F_GSO_SOFTWARE;
+ 				peer->max_mtu = ETH_MAX_MTU;
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 885ee040ee2b3..15d1156843256 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1303,7 +1303,7 @@ static struct device_node *parse_remote_endpoint(struct device_node *np,
+ 						 int index)
+ {
+ 	/* Return NULL for index > 0 to signify end of remote-endpoints. */
+-	if (!index || strcmp(prop_name, "remote-endpoint"))
++	if (index > 0 || strcmp(prop_name, "remote-endpoint"))
+ 		return NULL;
+ 
+ 	return of_graph_get_remote_port_parent(np);
+diff --git a/drivers/perf/riscv_pmu.c b/drivers/perf/riscv_pmu.c
+index 0dda70e1ef90a..c78a6fd6c57f6 100644
+--- a/drivers/perf/riscv_pmu.c
++++ b/drivers/perf/riscv_pmu.c
+@@ -150,19 +150,11 @@ u64 riscv_pmu_ctr_get_width_mask(struct perf_event *event)
+ 	struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu);
+ 	struct hw_perf_event *hwc = &event->hw;
+ 
+-	if (!rvpmu->ctr_get_width)
+-	/**
+-	 * If the pmu driver doesn't support counter width, set it to default
+-	 * maximum allowed by the specification.
+-	 */
+-		cwidth = 63;
+-	else {
+-		if (hwc->idx == -1)
+-			/* Handle init case where idx is not initialized yet */
+-			cwidth = rvpmu->ctr_get_width(0);
+-		else
+-			cwidth = rvpmu->ctr_get_width(hwc->idx);
+-	}
++	if (hwc->idx == -1)
++		/* Handle init case where idx is not initialized yet */
++		cwidth = rvpmu->ctr_get_width(0);
++	else
++		cwidth = rvpmu->ctr_get_width(hwc->idx);
+ 
+ 	return GENMASK_ULL(cwidth, 0);
+ }
+diff --git a/drivers/perf/riscv_pmu_legacy.c b/drivers/perf/riscv_pmu_legacy.c
+index 79fdd667922e8..fa0bccf4edf2e 100644
+--- a/drivers/perf/riscv_pmu_legacy.c
++++ b/drivers/perf/riscv_pmu_legacy.c
+@@ -37,6 +37,12 @@ static int pmu_legacy_event_map(struct perf_event *event, u64 *config)
+ 	return pmu_legacy_ctr_get_idx(event);
+ }
+ 
++/* cycle & instret are always 64 bit, one bit less according to SBI spec */
++static int pmu_legacy_ctr_get_width(int idx)
++{
++	return 63;
++}
++
+ static u64 pmu_legacy_read_ctr(struct perf_event *event)
+ {
+ 	struct hw_perf_event *hwc = &event->hw;
+@@ -111,12 +117,14 @@ static void pmu_legacy_init(struct riscv_pmu *pmu)
+ 	pmu->ctr_stop = NULL;
+ 	pmu->event_map = pmu_legacy_event_map;
+ 	pmu->ctr_get_idx = pmu_legacy_ctr_get_idx;
+-	pmu->ctr_get_width = NULL;
++	pmu->ctr_get_width = pmu_legacy_ctr_get_width;
+ 	pmu->ctr_clear_idx = NULL;
+ 	pmu->ctr_read = pmu_legacy_read_ctr;
+ 	pmu->event_mapped = pmu_legacy_event_mapped;
+ 	pmu->event_unmapped = pmu_legacy_event_unmapped;
+ 	pmu->csr_index = pmu_legacy_csr_index;
++	pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
++	pmu->pmu.capabilities |= PERF_PMU_CAP_NO_EXCLUDE;
+ 
+ 	perf_pmu_register(&pmu->pmu, "cpu", PERF_TYPE_RAW);
+ }
+diff --git a/drivers/phy/freescale/phy-fsl-imx8-mipi-dphy.c b/drivers/phy/freescale/phy-fsl-imx8-mipi-dphy.c
+index e625b32889bfc..0928a526e2ab3 100644
+--- a/drivers/phy/freescale/phy-fsl-imx8-mipi-dphy.c
++++ b/drivers/phy/freescale/phy-fsl-imx8-mipi-dphy.c
+@@ -706,7 +706,7 @@ static int mixel_dphy_probe(struct platform_device *pdev)
+ 			return ret;
+ 		}
+ 
+-		priv->id = of_alias_get_id(np, "mipi_dphy");
++		priv->id = of_alias_get_id(np, "mipi-dphy");
+ 		if (priv->id < 0) {
+ 			dev_err(dev, "Failed to get phy node alias id: %d\n",
+ 				priv->id);
+diff --git a/drivers/phy/qualcomm/phy-qcom-m31.c b/drivers/phy/qualcomm/phy-qcom-m31.c
+index c2590579190a9..03fb0d4b75d74 100644
+--- a/drivers/phy/qualcomm/phy-qcom-m31.c
++++ b/drivers/phy/qualcomm/phy-qcom-m31.c
+@@ -299,7 +299,7 @@ static int m31usb_phy_probe(struct platform_device *pdev)
+ 
+ 	qphy->vreg = devm_regulator_get(dev, "vdda-phy");
+ 	if (IS_ERR(qphy->vreg))
+-		return dev_err_probe(dev, PTR_ERR(qphy->phy),
++		return dev_err_probe(dev, PTR_ERR(qphy->vreg),
+ 				     "failed to get vreg\n");
+ 
+ 	phy_set_drvdata(qphy->phy, qphy);
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+index a3719719e2e0f..365f5d85847b8 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+@@ -1276,7 +1276,7 @@ static const char * const qmp_phy_vreg_l[] = {
+ 	"vdda-phy", "vdda-pll",
+ };
+ 
+-static const struct qmp_usb_offsets qmp_usb_offsets_ipq8074 = {
++static const struct qmp_usb_offsets qmp_usb_offsets_v3 = {
+ 	.serdes		= 0,
+ 	.pcs		= 0x800,
+ 	.pcs_misc	= 0x600,
+@@ -1292,7 +1292,7 @@ static const struct qmp_usb_offsets qmp_usb_offsets_ipq9574 = {
+ 	.rx		= 0x400,
+ };
+ 
+-static const struct qmp_usb_offsets qmp_usb_offsets_v3 = {
++static const struct qmp_usb_offsets qmp_usb_offsets_v3_msm8996 = {
+ 	.serdes		= 0,
+ 	.pcs		= 0x600,
+ 	.tx		= 0x200,
+@@ -1328,7 +1328,7 @@ static const struct qmp_usb_offsets qmp_usb_offsets_v5 = {
+ static const struct qmp_phy_cfg ipq6018_usb3phy_cfg = {
+ 	.lanes			= 1,
+ 
+-	.offsets		= &qmp_usb_offsets_ipq8074,
++	.offsets		= &qmp_usb_offsets_v3,
+ 
+ 	.serdes_tbl		= ipq9574_usb3_serdes_tbl,
+ 	.serdes_tbl_num		= ARRAY_SIZE(ipq9574_usb3_serdes_tbl),
+@@ -1346,7 +1346,7 @@ static const struct qmp_phy_cfg ipq6018_usb3phy_cfg = {
+ static const struct qmp_phy_cfg ipq8074_usb3phy_cfg = {
+ 	.lanes			= 1,
+ 
+-	.offsets		= &qmp_usb_offsets_ipq8074,
++	.offsets		= &qmp_usb_offsets_v3,
+ 
+ 	.serdes_tbl		= ipq8074_usb3_serdes_tbl,
+ 	.serdes_tbl_num		= ARRAY_SIZE(ipq8074_usb3_serdes_tbl),
+@@ -1382,7 +1382,7 @@ static const struct qmp_phy_cfg ipq9574_usb3phy_cfg = {
+ static const struct qmp_phy_cfg msm8996_usb3phy_cfg = {
+ 	.lanes			= 1,
+ 
+-	.offsets		= &qmp_usb_offsets_v3,
++	.offsets		= &qmp_usb_offsets_v3_msm8996,
+ 
+ 	.serdes_tbl		= msm8996_usb3_serdes_tbl,
+ 	.serdes_tbl_num		= ARRAY_SIZE(msm8996_usb3_serdes_tbl),
+diff --git a/drivers/pmdomain/arm/scmi_perf_domain.c b/drivers/pmdomain/arm/scmi_perf_domain.c
+index 709bbc448fad4..d7ef46ccd9b8a 100644
+--- a/drivers/pmdomain/arm/scmi_perf_domain.c
++++ b/drivers/pmdomain/arm/scmi_perf_domain.c
+@@ -159,6 +159,9 @@ static void scmi_perf_domain_remove(struct scmi_device *sdev)
+ 	struct genpd_onecell_data *scmi_pd_data = dev_get_drvdata(dev);
+ 	int i;
+ 
++	if (!scmi_pd_data)
++		return;
++
+ 	of_genpd_del_provider(dev->of_node);
+ 
+ 	for (i = 0; i < scmi_pd_data->num_domains; i++)
+diff --git a/drivers/pmdomain/qcom/rpmhpd.c b/drivers/pmdomain/qcom/rpmhpd.c
+index f2e64324deb84..0f0943e3efe50 100644
+--- a/drivers/pmdomain/qcom/rpmhpd.c
++++ b/drivers/pmdomain/qcom/rpmhpd.c
+@@ -692,6 +692,7 @@ static int rpmhpd_aggregate_corner(struct rpmhpd *pd, unsigned int corner)
+ 	unsigned int active_corner, sleep_corner;
+ 	unsigned int this_active_corner = 0, this_sleep_corner = 0;
+ 	unsigned int peer_active_corner = 0, peer_sleep_corner = 0;
++	unsigned int peer_enabled_corner;
+ 
+ 	if (pd->state_synced) {
+ 		to_active_sleep(pd, corner, &this_active_corner, &this_sleep_corner);
+@@ -701,9 +702,11 @@ static int rpmhpd_aggregate_corner(struct rpmhpd *pd, unsigned int corner)
+ 		this_sleep_corner = pd->level_count - 1;
+ 	}
+ 
+-	if (peer && peer->enabled)
+-		to_active_sleep(peer, peer->corner, &peer_active_corner,
++	if (peer && peer->enabled) {
++		peer_enabled_corner = max(peer->corner, peer->enable_corner);
++		to_active_sleep(peer, peer_enabled_corner, &peer_active_corner,
+ 				&peer_sleep_corner);
++	}
+ 
+ 	active_corner = max(this_active_corner, peer_active_corner);
+ 
+diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
+index f21cb05815ec6..3e31375491d58 100644
+--- a/drivers/power/supply/Kconfig
++++ b/drivers/power/supply/Kconfig
+@@ -978,6 +978,7 @@ config CHARGER_QCOM_SMB2
+ config FUEL_GAUGE_MM8013
+ 	tristate "Mitsumi MM8013 fuel gauge driver"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  Say Y here to enable the Mitsumi MM8013 fuel gauge driver.
+ 	  It enables the monitoring of many battery parameters, including
+diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c
+index 9b5475590518f..886e0a8e2abd1 100644
+--- a/drivers/power/supply/bq27xxx_battery_i2c.c
++++ b/drivers/power/supply/bq27xxx_battery_i2c.c
+@@ -209,7 +209,9 @@ static void bq27xxx_battery_i2c_remove(struct i2c_client *client)
+ {
+ 	struct bq27xxx_device_info *di = i2c_get_clientdata(client);
+ 
+-	free_irq(client->irq, di);
++	if (client->irq)
++		free_irq(client->irq, di);
++
+ 	bq27xxx_battery_teardown(di);
+ 
+ 	mutex_lock(&battery_mutex);
+diff --git a/drivers/soc/qcom/pmic_glink.c b/drivers/soc/qcom/pmic_glink.c
+index 914057331afd7..d05e15f107e30 100644
+--- a/drivers/soc/qcom/pmic_glink.c
++++ b/drivers/soc/qcom/pmic_glink.c
+@@ -268,10 +268,17 @@ static int pmic_glink_probe(struct platform_device *pdev)
+ 	else
+ 		pg->client_mask = PMIC_GLINK_CLIENT_DEFAULT;
+ 
++	pg->pdr = pdr_handle_alloc(pmic_glink_pdr_callback, pg);
++	if (IS_ERR(pg->pdr)) {
++		ret = dev_err_probe(&pdev->dev, PTR_ERR(pg->pdr),
++				    "failed to initialize pdr\n");
++		return ret;
++	}
++
+ 	if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_UCSI)) {
+ 		ret = pmic_glink_add_aux_device(pg, &pg->ucsi_aux, "ucsi");
+ 		if (ret)
+-			return ret;
++			goto out_release_pdr_handle;
+ 	}
+ 	if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_ALTMODE)) {
+ 		ret = pmic_glink_add_aux_device(pg, &pg->altmode_aux, "altmode");
+@@ -284,17 +291,11 @@ static int pmic_glink_probe(struct platform_device *pdev)
+ 			goto out_release_altmode_aux;
+ 	}
+ 
+-	pg->pdr = pdr_handle_alloc(pmic_glink_pdr_callback, pg);
+-	if (IS_ERR(pg->pdr)) {
+-		ret = dev_err_probe(&pdev->dev, PTR_ERR(pg->pdr), "failed to initialize pdr\n");
+-		goto out_release_aux_devices;
+-	}
+-
+ 	service = pdr_add_lookup(pg->pdr, "tms/servreg", "msm/adsp/charger_pd");
+ 	if (IS_ERR(service)) {
+ 		ret = dev_err_probe(&pdev->dev, PTR_ERR(service),
+ 				    "failed adding pdr lookup for charger_pd\n");
+-		goto out_release_pdr_handle;
++		goto out_release_aux_devices;
+ 	}
+ 
+ 	mutex_lock(&__pmic_glink_lock);
+@@ -303,8 +304,6 @@ static int pmic_glink_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
+-out_release_pdr_handle:
+-	pdr_handle_release(pg->pdr);
+ out_release_aux_devices:
+ 	if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_BATT))
+ 		pmic_glink_del_aux_device(pg, &pg->ps_aux);
+@@ -314,6 +313,8 @@ static int pmic_glink_probe(struct platform_device *pdev)
+ out_release_ucsi_aux:
+ 	if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_UCSI))
+ 		pmic_glink_del_aux_device(pg, &pg->ucsi_aux);
++out_release_pdr_handle:
++	pdr_handle_release(pg->pdr);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index f94e0d370d466..731775d34d393 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1930,21 +1930,15 @@ static void cqspi_remove(struct platform_device *pdev)
+ static int cqspi_suspend(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
+-	struct spi_controller *host = dev_get_drvdata(dev);
+-	int ret;
+ 
+-	ret = spi_controller_suspend(host);
+ 	cqspi_controller_enable(cqspi, 0);
+-
+ 	clk_disable_unprepare(cqspi->clk);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static int cqspi_resume(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
+-	struct spi_controller *host = dev_get_drvdata(dev);
+ 
+ 	clk_prepare_enable(cqspi->clk);
+ 	cqspi_wait_idle(cqspi);
+@@ -1952,8 +1946,7 @@ static int cqspi_resume(struct device *dev)
+ 
+ 	cqspi->current_cs = -1;
+ 	cqspi->sclk = 0;
+-
+-	return spi_controller_resume(host);
++	return 0;
+ }
+ 
+ static DEFINE_RUNTIME_DEV_PM_OPS(cqspi_dev_pm_ops, cqspi_suspend,
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 63af6ab034b5f..e6c45d585482f 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2400,11 +2400,9 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h, int charcount,
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	struct fbcon_display *p = &fb_display[vc->vc_num];
+ 	int resize, ret, old_userfont, old_width, old_height, old_charcount;
+-	char *old_data = NULL;
++	u8 *old_data = vc->vc_font.data;
+ 
+ 	resize = (w != vc->vc_font.width) || (h != vc->vc_font.height);
+-	if (p->userfont)
+-		old_data = vc->vc_font.data;
+ 	vc->vc_font.data = (void *)(p->fontdata = data);
+ 	old_userfont = p->userfont;
+ 	if ((p->userfont = userfont))
+@@ -2438,13 +2436,13 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h, int charcount,
+ 		update_screen(vc);
+ 	}
+ 
+-	if (old_data && (--REFCOUNT(old_data) == 0))
++	if (old_userfont && (--REFCOUNT(old_data) == 0))
+ 		kfree(old_data - FONT_EXTRA_WORDS * sizeof(int));
+ 	return 0;
+ 
+ err_out:
+ 	p->fontdata = old_data;
+-	vc->vc_font.data = (void *)old_data;
++	vc->vc_font.data = old_data;
+ 
+ 	if (userfont) {
+ 		p->userfont = old_userfont;
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 9140780be5a43..c097da6e9c5b2 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -479,8 +479,10 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ 		    dire->u.name[0] == '.' &&
+ 		    ctx->actor != afs_lookup_filldir &&
+ 		    ctx->actor != afs_lookup_one_filldir &&
+-		    memcmp(dire->u.name, ".__afs", 6) == 0)
++		    memcmp(dire->u.name, ".__afs", 6) == 0) {
++			ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent);
+ 			continue;
++		}
+ 
+ 		/* found the next entry */
+ 		if (!dir_emit(ctx, dire->u.name, nlen,
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index f9544fda38e96..f4928da96499a 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -727,6 +727,23 @@ static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	return ret;
+ }
+ 
++static int btrfs_check_replace_dev_names(struct btrfs_ioctl_dev_replace_args *args)
++{
++	if (args->start.srcdevid == 0) {
++		if (memchr(args->start.srcdev_name, 0,
++			   sizeof(args->start.srcdev_name)) == NULL)
++			return -ENAMETOOLONG;
++	} else {
++		args->start.srcdev_name[0] = 0;
++	}
++
++	if (memchr(args->start.tgtdev_name, 0,
++		   sizeof(args->start.tgtdev_name)) == NULL)
++	    return -ENAMETOOLONG;
++
++	return 0;
++}
++
+ int btrfs_dev_replace_by_ioctl(struct btrfs_fs_info *fs_info,
+ 			    struct btrfs_ioctl_dev_replace_args *args)
+ {
+@@ -739,10 +756,9 @@ int btrfs_dev_replace_by_ioctl(struct btrfs_fs_info *fs_info,
+ 	default:
+ 		return -EINVAL;
+ 	}
+-
+-	if ((args->start.srcdevid == 0 && args->start.srcdev_name[0] == '\0') ||
+-	    args->start.tgtdev_name[0] == '\0')
+-		return -EINVAL;
++	ret = btrfs_check_replace_dev_names(args);
++	if (ret < 0)
++		return ret;
+ 
+ 	ret = btrfs_dev_replace_start(fs_info, args->start.tgtdev_name,
+ 					args->start.srcdevid,
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index c03091793aaa4..233912c07f1c8 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1286,12 +1286,12 @@ void btrfs_free_fs_info(struct btrfs_fs_info *fs_info)
+  *
+  * @objectid:	root id
+  * @anon_dev:	preallocated anonymous block device number for new roots,
+- * 		pass 0 for new allocation.
++ *		pass NULL for a new allocation.
+  * @check_ref:	whether to check root item references, If true, return -ENOENT
+  *		for orphan roots
+  */
+ static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
+-					     u64 objectid, dev_t anon_dev,
++					     u64 objectid, dev_t *anon_dev,
+ 					     bool check_ref)
+ {
+ 	struct btrfs_root *root;
+@@ -1321,9 +1321,9 @@ static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
+ 		 * that common but still possible.  In that case, we just need
+ 		 * to free the anon_dev.
+ 		 */
+-		if (unlikely(anon_dev)) {
+-			free_anon_bdev(anon_dev);
+-			anon_dev = 0;
++		if (unlikely(anon_dev && *anon_dev)) {
++			free_anon_bdev(*anon_dev);
++			*anon_dev = 0;
+ 		}
+ 
+ 		if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
+@@ -1345,7 +1345,7 @@ static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
+ 		goto fail;
+ 	}
+ 
+-	ret = btrfs_init_fs_root(root, anon_dev);
++	ret = btrfs_init_fs_root(root, anon_dev ? *anon_dev : 0);
+ 	if (ret)
+ 		goto fail;
+ 
+@@ -1381,7 +1381,7 @@ static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
+ 	 * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root()
+ 	 * and once again by our caller.
+ 	 */
+-	if (anon_dev)
++	if (anon_dev && *anon_dev)
+ 		root->anon_dev = 0;
+ 	btrfs_put_root(root);
+ 	return ERR_PTR(ret);
+@@ -1397,7 +1397,7 @@ static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info,
+ struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ 				     u64 objectid, bool check_ref)
+ {
+-	return btrfs_get_root_ref(fs_info, objectid, 0, check_ref);
++	return btrfs_get_root_ref(fs_info, objectid, NULL, check_ref);
+ }
+ 
+ /*
+@@ -1405,11 +1405,11 @@ struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+  * the anonymous block device id
+  *
+  * @objectid:	tree objectid
+- * @anon_dev:	if zero, allocate a new anonymous block device or use the
+- *		parameter value
++ * @anon_dev:	if NULL, allocate a new anonymous block device; otherwise
++ *		use the value it points to
+  */
+ struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info,
+-					 u64 objectid, dev_t anon_dev)
++					 u64 objectid, dev_t *anon_dev)
+ {
+ 	return btrfs_get_root_ref(fs_info, objectid, anon_dev, true);
+ }
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index 50dab8f639dcc..fca52385830cf 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -64,7 +64,7 @@ void btrfs_free_fs_roots(struct btrfs_fs_info *fs_info);
+ struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ 				     u64 objectid, bool check_ref);
+ struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info,
+-					 u64 objectid, dev_t anon_dev);
++					 u64 objectid, dev_t *anon_dev);
+ struct btrfs_root *btrfs_get_fs_root_commit_root(struct btrfs_fs_info *fs_info,
+ 						 struct btrfs_path *path,
+ 						 u64 objectid);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 8f724c54fc8e9..eade0432bd9ce 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2436,6 +2436,7 @@ static int emit_fiemap_extent(struct fiemap_extent_info *fieinfo,
+ 				struct fiemap_cache *cache,
+ 				u64 offset, u64 phys, u64 len, u32 flags)
+ {
++	u64 cache_end;
+ 	int ret = 0;
+ 
+ 	/* Set at the end of extent_fiemap(). */
+@@ -2445,15 +2446,102 @@ static int emit_fiemap_extent(struct fiemap_extent_info *fieinfo,
+ 		goto assign;
+ 
+ 	/*
+-	 * Sanity check, extent_fiemap() should have ensured that new
+-	 * fiemap extent won't overlap with cached one.
+-	 * Not recoverable.
++	 * When iterating the extents of the inode, at extent_fiemap(), we may
++	 * find an extent that starts at an offset behind the end offset of the
++	 * previous extent we processed. This happens if fiemap is called
++	 * without FIEMAP_FLAG_SYNC and there are ordered extents completing
++	 * while we call btrfs_next_leaf() (through fiemap_next_leaf_item()).
+ 	 *
+-	 * NOTE: Physical address can overlap, due to compression
++	 * For example we are in leaf X processing its last item, which is the
++	 * file extent item for file range [512K, 1M[, and after
++	 * btrfs_next_leaf() releases the path, there's an ordered extent that
++	 * completes for the file range [768K, 2M[, and that results in trimming
++	 * the file extent item so that it now corresponds to the file range
++	 * [512K, 768K[ and a new file extent item is inserted for the file
++	 * range [768K, 2M[, which may end up as the last item of leaf X or as
++	 * the first item of the next leaf - in either case btrfs_next_leaf()
++	 * will leave us with a path pointing to the new extent item, for the
++	 * file range [768K, 2M[, since that's the first key that follows the
++	 * last one we processed. So in order not to report overlapping extents
++	 * to user space, we trim the length of the previously cached extent and
++	 * emit it.
++	 *
++	 * Upon calling btrfs_next_leaf() we may also find an extent with an
++	 * offset smaller than or equal to cache->offset, and this happens
++	 * when we had a hole or prealloc extent with several delalloc ranges in
++	 * it, but after btrfs_next_leaf() released the path, delalloc was
++	 * flushed and the resulting ordered extents were completed, so we can
++	 * now have found a file extent item for an offset that is smaller than
++	 * or equal to what we have in cache->offset. We deal with this as
++	 * described below.
+ 	 */
+-	if (cache->offset + cache->len > offset) {
+-		WARN_ON(1);
+-		return -EINVAL;
++	cache_end = cache->offset + cache->len;
++	if (cache_end > offset) {
++		if (offset == cache->offset) {
++			/*
++			 * We cached a delalloc range (found in the io tree) for
++			 * a hole or prealloc extent and we have now found a
++			 * file extent item for the same offset. What we have
++			 * now is more recent and up to date, so discard what
++			 * we had in the cache and use what we have just found.
++			 */
++			goto assign;
++		} else if (offset > cache->offset) {
++			/*
++			 * The extent range we previously found ends after the
++			 * offset of the file extent item we found and that
++			 * offset falls somewhere in the middle of that previous
++			 * extent range. So adjust the range we previously found
++			 * to end at the offset of the file extent item we have
++			 * just found, since this extent is more up to date.
++			 * Emit that adjusted range and cache the file extent
++			 * item we have just found. This corresponds to the case
++			 * where a previously found file extent item was split
++			 * due to an ordered extent completing.
++			 */
++			cache->len = offset - cache->offset;
++			goto emit;
++		} else {
++			const u64 range_end = offset + len;
++
++			/*
++			 * The offset of the file extent item we have just found
++			 * is behind the cached offset. This means we were
++			 * processing a hole or prealloc extent for which we
++			 * have found delalloc ranges (in the io tree), so what
++			 * we have in the cache is the last delalloc range we
++			 * found while the file extent item we found can be
++			 * either for a whole delalloc range we previously
++			 * emitted or only a part of that range.
++			 *
++			 * We have two cases here:
++			 *
++			 * 1) The file extent item's range ends at or behind the
++			 *    cached extent's end. In this case just ignore the
++			 *    current file extent item because we don't want to
++			 *    overlap with previous ranges that may have been
++			 *    emitted already;
++			 *
++			 * 2) The file extent item starts behind the currently
++			 *    cached extent but its end offset goes beyond the
++			 *    end offset of the cached extent. We don't want to
++			 *    overlap with a previous range that may have been
++			 *    emitted already, so we emit the currently cached
++			 *    extent and then partially store the current file
++			 *    extent item's range in the cache, for the subrange
++			 *    going from the cached extent's end to the end of the
++			 *    file extent item.
++			 */
++			if (range_end <= cache_end)
++				return 0;
++
++			if (!(flags & (FIEMAP_EXTENT_ENCODED | FIEMAP_EXTENT_DELALLOC)))
++				phys += cache_end - offset;
++
++			offset = cache_end;
++			len = range_end - cache_end;
++			goto emit;
++		}
+ 	}
+ 
+ 	/*
+@@ -2473,6 +2561,7 @@ static int emit_fiemap_extent(struct fiemap_extent_info *fieinfo,
+ 		return 0;
+ 	}
+ 
++emit:
+ 	/* Not mergeable, need to submit cached one */
+ 	ret = fiemap_fill_next_extent(fieinfo, cache->offset, cache->phys,
+ 				      cache->len, cache->flags);
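
The long comment in emit_fiemap_extent() boils down to a three-way
interval decision between the cached extent and the newly found one.
Restated as a standalone, hypothetical helper (the real code also shifts
phys for non-encoded, non-delalloc extents before caching the tail):

    #include <stdint.h>

    struct cached { uint64_t offset, len; };

    enum action {
    	NO_OVERLAP,	/* normal merge-or-emit path */
    	USE_NEW,	/* same start offset: the newer data wins */
    	TRIM_CACHED,	/* shorten cached to [c->offset, offset), emit it */
    	DROP_NEW,	/* new range fully covered by the cached one */
    	KEEP_TAIL,	/* emit cached, keep only [cache_end, offset + len) */
    };

    static enum action resolve(const struct cached *c, uint64_t offset,
    			   uint64_t len)
    {
    	uint64_t cache_end = c->offset + c->len;

    	if (cache_end <= offset)
    		return NO_OVERLAP;
    	if (offset == c->offset)
    		return USE_NEW;
    	if (offset > c->offset)
    		return TRIM_CACHED;
    	if (offset + len <= cache_end)
    		return DROP_NEW;
    	return KEEP_TAIL;
    }

    int main(void)
    {
    	struct cached c = { 512 << 10, 512 << 10 };	/* [512K, 1M) */

    	/* an ordered extent split the cached item at 768K */
    	return resolve(&c, 768 << 10, 512 << 10) == TRIM_CACHED ? 0 : 1;
    }
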
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 0bd43b863c43a..40d141d6894be 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -721,7 +721,7 @@ static noinline int create_subvol(struct mnt_idmap *idmap,
+ 	free_extent_buffer(leaf);
+ 	leaf = NULL;
+ 
+-	new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev);
++	new_root = btrfs_get_new_fs_root(fs_info, objectid, &anon_dev);
+ 	if (IS_ERR(new_root)) {
+ 		ret = PTR_ERR(new_root);
+ 		btrfs_abort_transaction(trans, ret);
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 77b182225809e..4d165479dfc92 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -6705,11 +6705,20 @@ static int finish_inode_if_needed(struct send_ctx *sctx, int at_end)
+ 				if (ret)
+ 					goto out;
+ 			}
+-			if (sctx->cur_inode_last_extent <
+-			    sctx->cur_inode_size) {
+-				ret = send_hole(sctx, sctx->cur_inode_size);
+-				if (ret)
++			if (sctx->cur_inode_last_extent < sctx->cur_inode_size) {
++				ret = range_is_hole_in_parent(sctx,
++						      sctx->cur_inode_last_extent,
++						      sctx->cur_inode_size);
++				if (ret < 0) {
+ 					goto out;
++				} else if (ret == 0) {
++					ret = send_hole(sctx, sctx->cur_inode_size);
++					if (ret < 0)
++						goto out;
++				} else {
++					/* Range is already a hole, skip. */
++					ret = 0;
++				}
+ 			}
+ 		}
+ 		if (need_truncate) {
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index c52807d97efa5..bf8e64c766b63 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1834,7 +1834,7 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	key.offset = (u64)-1;
+-	pending->snap = btrfs_get_new_fs_root(fs_info, objectid, pending->anon_dev);
++	pending->snap = btrfs_get_new_fs_root(fs_info, objectid, &pending->anon_dev);
+ 	if (IS_ERR(pending->snap)) {
+ 		ret = PTR_ERR(pending->snap);
+ 		pending->snap = NULL;
+diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
+index fae97c25ce58d..8109aba66e023 100644
+--- a/fs/ceph/mdsmap.c
++++ b/fs/ceph/mdsmap.c
+@@ -380,10 +380,11 @@ struct ceph_mdsmap *ceph_mdsmap_decode(struct ceph_mds_client *mdsc, void **p,
+ 		ceph_decode_skip_8(p, end, bad_ext);
+ 		/* required_client_features */
+ 		ceph_decode_skip_set(p, end, 64, bad_ext);
++		/* bal_rank_mask */
++		ceph_decode_skip_string(p, end, bad_ext);
++	}
++	if (mdsmap_ev >= 18) {
+ 		ceph_decode_64_safe(p, end, m->m_max_xattr_size, bad_ext);
+-	} else {
+-		/* This forces the usage of the (sync) SETXATTR Op */
+-		m->m_max_xattr_size = 0;
+ 	}
+ bad_ext:
+ 	doutc(cl, "m_enabled: %d, m_damaged: %d, m_num_laggy: %d\n",
+diff --git a/fs/ceph/mdsmap.h b/fs/ceph/mdsmap.h
+index 89f1931f1ba6c..1f2171dd01bfa 100644
+--- a/fs/ceph/mdsmap.h
++++ b/fs/ceph/mdsmap.h
+@@ -27,7 +27,11 @@ struct ceph_mdsmap {
+ 	u32 m_session_timeout;          /* seconds */
+ 	u32 m_session_autoclose;        /* seconds */
+ 	u64 m_max_file_size;
+-	u64 m_max_xattr_size;		/* maximum size for xattrs blob */
++	/*
++	 * maximum size for xattrs blob.
++	 * Zeroed by default to force the usage of the (sync) SETXATTR Op.
++	 */
++	u64 m_max_xattr_size;
+ 	u32 m_max_mds;			/* expected up:active mds number */
+ 	u32 m_num_active_mds;		/* actual up:active mds number */
+ 	u32 possible_max_rank;		/* possible max rank index */
+diff --git a/fs/efivarfs/vars.c b/fs/efivarfs/vars.c
+index 9e4f47808bd5a..13bc606989557 100644
+--- a/fs/efivarfs/vars.c
++++ b/fs/efivarfs/vars.c
+@@ -372,7 +372,7 @@ static void dup_variable_bug(efi_char16_t *str16, efi_guid_t *vendor_guid,
+ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ 		void *data, bool duplicates, struct list_head *head)
+ {
+-	unsigned long variable_name_size = 1024;
++	unsigned long variable_name_size = 512;
+ 	efi_char16_t *variable_name;
+ 	efi_status_t status;
+ 	efi_guid_t vendor_guid;
+@@ -389,12 +389,13 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ 		goto free;
+ 
+ 	/*
+-	 * Per EFI spec, the maximum storage allocated for both
+-	 * the variable name and variable data is 1024 bytes.
++	 * A small set of old UEFI implementations reject sizes
++	 * above a certain threshold; the lowest seen in the wild
++	 * is 512.
+ 	 */
+ 
+ 	do {
+-		variable_name_size = 1024;
++		variable_name_size = 512;
+ 
+ 		status = efivar_get_next_variable(&variable_name_size,
+ 						  variable_name,
+@@ -431,9 +432,13 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
+ 			break;
+ 		case EFI_NOT_FOUND:
+ 			break;
++		case EFI_BUFFER_TOO_SMALL:
++			pr_warn("efivars: Variable name size exceeds maximum (%lu > 512)\n",
++				variable_name_size);
++			status = EFI_NOT_FOUND;
++			break;
+ 		default:
+-			printk(KERN_WARNING "efivars: get_next_variable: status=%lx\n",
+-				status);
++			pr_warn("efivars: get_next_variable: status=%lx\n", status);
+ 			status = EFI_NOT_FOUND;
+ 			break;
+ 		}
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index b664caea8b4e6..9e345d3c305a6 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -668,8 +668,10 @@ static int nfs_writepage_locked(struct folio *folio,
+ 	int err;
+ 
+ 	if (wbc->sync_mode == WB_SYNC_NONE &&
+-	    NFS_SERVER(inode)->write_congested)
++	    NFS_SERVER(inode)->write_congested) {
++		folio_redirty_for_writepage(wbc, folio);
+ 		return AOP_WRITEPAGE_ACTIVATE;
++	}
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSWRITEPAGE);
+ 	nfs_pageio_init_write(&pgio, inode, 0, false,
+diff --git a/include/linux/bvec.h b/include/linux/bvec.h
+index 555aae5448ae4..bd1e361b351c5 100644
+--- a/include/linux/bvec.h
++++ b/include/linux/bvec.h
+@@ -83,7 +83,7 @@ struct bvec_iter {
+ 
+ 	unsigned int            bi_bvec_done;	/* number of bytes completed in
+ 						   current bvec */
+-} __packed;
++} __packed __aligned(4);
+ 
+ struct bvec_iter_all {
+ 	struct bio_vec	bv;
+diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
+index 80900d9109920..ce660d51549b4 100644
+--- a/include/linux/netfilter.h
++++ b/include/linux/netfilter.h
+@@ -474,6 +474,7 @@ struct nf_ct_hook {
+ 			      const struct sk_buff *);
+ 	void (*attach)(struct sk_buff *nskb, const struct sk_buff *skb);
+ 	void (*set_closing)(struct nf_conntrack *nfct);
++	int (*confirm)(struct sk_buff *skb);
+ };
+ extern const struct nf_ct_hook __rcu *nf_ct_hook;
+ 
+diff --git a/include/net/mctp.h b/include/net/mctp.h
+index da86e106c91d5..2bff5f47ce82f 100644
+--- a/include/net/mctp.h
++++ b/include/net/mctp.h
+@@ -249,6 +249,7 @@ struct mctp_route {
+ struct mctp_route *mctp_route_lookup(struct net *net, unsigned int dnet,
+ 				     mctp_eid_t daddr);
+ 
++/* always takes ownership of skb */
+ int mctp_local_output(struct sock *sk, struct mctp_route *rt,
+ 		      struct sk_buff *skb, mctp_eid_t daddr, u8 req_tag);
+ 
+diff --git a/include/sound/soc-card.h b/include/sound/soc-card.h
+index ecc02e955279f..1f4c39922d825 100644
+--- a/include/sound/soc-card.h
++++ b/include/sound/soc-card.h
+@@ -30,6 +30,8 @@ static inline void snd_soc_card_mutex_unlock(struct snd_soc_card *card)
+ 
+ struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card,
+ 					       const char *name);
++struct snd_kcontrol *snd_soc_card_get_kcontrol_locked(struct snd_soc_card *soc_card,
++						      const char *name);
+ int snd_soc_card_jack_new(struct snd_soc_card *card, const char *id, int type,
+ 			  struct snd_soc_jack *jack);
+ int snd_soc_card_jack_new_pins(struct snd_soc_card *card, const char *id,
+diff --git a/include/uapi/linux/in6.h b/include/uapi/linux/in6.h
+index c4c53a9ab9595..ff8d21f9e95b7 100644
+--- a/include/uapi/linux/in6.h
++++ b/include/uapi/linux/in6.h
+@@ -145,7 +145,7 @@ struct in6_flowlabel_req {
+ #define IPV6_TLV_PADN		1
+ #define IPV6_TLV_ROUTERALERT	5
+ #define IPV6_TLV_CALIPSO	7	/* RFC 5570 */
+-#define IPV6_TLV_IOAM		49	/* TEMPORARY IANA allocation for IOAM */
++#define IPV6_TLV_IOAM		49	/* RFC 9486 */
+ #define IPV6_TLV_JUMBO		194
+ #define IPV6_TLV_HAO		201	/* home address option */
+ 
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index 6cd2a4e3afb8f..9ff0182458408 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -189,9 +189,6 @@ static int fprobe_init_rethook(struct fprobe *fp, int num)
+ {
+ 	int size;
+ 
+-	if (num <= 0)
+-		return -EINVAL;
+-
+ 	if (!fp->exit_handler) {
+ 		fp->rethook = NULL;
+ 		return 0;
+@@ -199,15 +196,16 @@ static int fprobe_init_rethook(struct fprobe *fp, int num)
+ 
+ 	/* Initialize rethook if needed */
+ 	if (fp->nr_maxactive)
+-		size = fp->nr_maxactive;
++		num = fp->nr_maxactive;
+ 	else
+-		size = num * num_possible_cpus() * 2;
+-	if (size <= 0)
++		num *= num_possible_cpus() * 2;
++	if (num <= 0)
+ 		return -EINVAL;
+ 
++	size = sizeof(struct fprobe_rethook_node) + fp->entry_data_size;
++
+ 	/* Initialize rethook */
+-	fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler,
+-				sizeof(struct fprobe_rethook_node), size);
++	fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler, size, num);
+ 	if (IS_ERR(fp->rethook))
+ 		return PTR_ERR(fp->rethook);
+ 
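
The fprobe change untangles two quantities the old code conflated: num,
the count of rethook nodes to preallocate, and size, the byte size of
each node (the node header plus fp->entry_data_size, which was dropped
entirely before). A reduced sketch of the corrected arithmetic, with
stand-in types:

    #include <stddef.h>
    #include <stdlib.h>

    struct node {			/* like struct fprobe_rethook_node */
    	void *fp;
    	char data[];		/* entry_data_size bytes follow */
    };

    static void *pool_alloc(int num, size_t entry_data_size,
    			int nr_maxactive, int nr_cpus)
    {
    	size_t size;

    	num = nr_maxactive ? nr_maxactive : num * nr_cpus * 2;	/* node count */
    	if (num <= 0)
    		return NULL;

    	size = sizeof(struct node) + entry_data_size;	/* bytes per node */
    	return calloc((size_t)num, size);
    }

    int main(void)
    {
    	void *p = pool_alloc(8, 32, 0, 4);

    	free(p);
    	return p ? 0 : 1;
    }
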
+diff --git a/lib/nlattr.c b/lib/nlattr.c
+index dc15e7888fc1f..0319e811bb10a 100644
+--- a/lib/nlattr.c
++++ b/lib/nlattr.c
+@@ -30,6 +30,8 @@ static const u8 nla_attr_len[NLA_TYPE_MAX+1] = {
+ 	[NLA_S16]	= sizeof(s16),
+ 	[NLA_S32]	= sizeof(s32),
+ 	[NLA_S64]	= sizeof(s64),
++	[NLA_BE16]	= sizeof(__be16),
++	[NLA_BE32]	= sizeof(__be32),
+ };
+ 
+ static const u8 nla_attr_minlen[NLA_TYPE_MAX+1] = {
+@@ -43,6 +45,8 @@ static const u8 nla_attr_minlen[NLA_TYPE_MAX+1] = {
+ 	[NLA_S16]	= sizeof(s16),
+ 	[NLA_S32]	= sizeof(s32),
+ 	[NLA_S64]	= sizeof(s64),
++	[NLA_BE16]	= sizeof(__be16),
++	[NLA_BE32]	= sizeof(__be32),
+ };
+ 
+ /*
+diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
+index e651500e597a2..5c4dc7a0c50df 100644
+--- a/mm/debug_vm_pgtable.c
++++ b/mm/debug_vm_pgtable.c
+@@ -362,6 +362,12 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args)
+ 	vaddr &= HPAGE_PUD_MASK;
+ 
+ 	pud = pfn_pud(args->pud_pfn, args->page_prot);
++	/*
++	 * Some architectures have debug checks to make sure
++	 * huge pud mappings are only found with devmap entries.
++	 * For now, test only with devmap entries.
++	 */
++	pud = pud_mkdevmap(pud);
+ 	set_pud_at(args->mm, vaddr, args->pudp, pud);
+ 	flush_dcache_page(page);
+ 	pudp_set_wrprotect(args->mm, vaddr, args->pudp);
+@@ -374,6 +380,7 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args)
+ 	WARN_ON(!pud_none(pud));
+ #endif /* __PAGETABLE_PMD_FOLDED */
+ 	pud = pfn_pud(args->pud_pfn, args->page_prot);
++	pud = pud_mkdevmap(pud);
+ 	pud = pud_wrprotect(pud);
+ 	pud = pud_mkclean(pud);
+ 	set_pud_at(args->mm, vaddr, args->pudp, pud);
+@@ -391,6 +398,7 @@ static void __init pud_advanced_tests(struct pgtable_debug_args *args)
+ #endif /* __PAGETABLE_PMD_FOLDED */
+ 
+ 	pud = pfn_pud(args->pud_pfn, args->page_prot);
++	pud = pud_mkdevmap(pud);
+ 	pud = pud_mkyoung(pud);
+ 	set_pud_at(args->mm, vaddr, args->pudp, pud);
+ 	flush_dcache_page(page);
+diff --git a/mm/filemap.c b/mm/filemap.c
+index ad5b4aa049a38..5be7887957c70 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -4108,28 +4108,40 @@ static void filemap_cachestat(struct address_space *mapping,
+ 
+ 	rcu_read_lock();
+ 	xas_for_each(&xas, folio, last_index) {
++		int order;
+ 		unsigned long nr_pages;
+ 		pgoff_t folio_first_index, folio_last_index;
+ 
++		/*
++		 * Don't deref the folio. It is not pinned, and might
++		 * get freed (and reused) underneath us.
++		 *
++		 * We *could* pin it, but that would be expensive for
++		 * what should be a fast and lightweight syscall.
++		 *
++		 * Instead, derive all information of interest from
++		 * the rcu-protected xarray.
++		 */
++
+ 		if (xas_retry(&xas, folio))
+ 			continue;
+ 
++		order = xa_get_order(xas.xa, xas.xa_index);
++		nr_pages = 1 << order;
++		folio_first_index = round_down(xas.xa_index, 1 << order);
++		folio_last_index = folio_first_index + nr_pages - 1;
++
++		/* Folios might straddle the range boundaries, only count covered pages */
++		if (folio_first_index < first_index)
++			nr_pages -= first_index - folio_first_index;
++
++		if (folio_last_index > last_index)
++			nr_pages -= folio_last_index - last_index;
++
+ 		if (xa_is_value(folio)) {
+ 			/* page is evicted */
+ 			void *shadow = (void *)folio;
+ 			bool workingset; /* not used */
+-			int order = xa_get_order(xas.xa, xas.xa_index);
+-
+-			nr_pages = 1 << order;
+-			folio_first_index = round_down(xas.xa_index, 1 << order);
+-			folio_last_index = folio_first_index + nr_pages - 1;
+-
+-			/* Folios might straddle the range boundaries, only count covered pages */
+-			if (folio_first_index < first_index)
+-				nr_pages -= first_index - folio_first_index;
+-
+-			if (folio_last_index > last_index)
+-				nr_pages -= folio_last_index - last_index;
+ 
+ 			cs->nr_evicted += nr_pages;
+ 
+@@ -4147,24 +4159,13 @@ static void filemap_cachestat(struct address_space *mapping,
+ 			goto resched;
+ 		}
+ 
+-		nr_pages = folio_nr_pages(folio);
+-		folio_first_index = folio_pgoff(folio);
+-		folio_last_index = folio_first_index + nr_pages - 1;
+-
+-		/* Folios might straddle the range boundaries, only count covered pages */
+-		if (folio_first_index < first_index)
+-			nr_pages -= first_index - folio_first_index;
+-
+-		if (folio_last_index > last_index)
+-			nr_pages -= folio_last_index - last_index;
+-
+ 		/* page is in cache */
+ 		cs->nr_cache += nr_pages;
+ 
+-		if (folio_test_dirty(folio))
++		if (xas_get_mark(&xas, PAGECACHE_TAG_DIRTY))
+ 			cs->nr_dirty += nr_pages;
+ 
+-		if (folio_test_writeback(folio))
++		if (xas_get_mark(&xas, PAGECACHE_TAG_WRITEBACK))
+ 			cs->nr_writeback += nr_pages;
+ 
+ resched:
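
The filemap_cachestat() rework derives everything from the RCU-protected
xarray instead of dereferencing an unpinned folio, so the page-count
clamping now happens once, up front, for both resident and evicted
entries. The clamping arithmetic on its own, as a small hypothetical
helper:

    #include <stdint.h>

    /* pages of a (1 << order)-page folio starting at folio_first that
     * fall inside the queried range [first, last], both inclusive */
    static uint64_t covered_pages(uint64_t folio_first, unsigned int order,
    			      uint64_t first, uint64_t last)
    {
    	uint64_t nr = 1ULL << order;
    	uint64_t folio_last = folio_first + nr - 1;

    	if (folio_first < first)
    		nr -= first - folio_first;
    	if (folio_last > last)
    		nr -= folio_last - last;
    	return nr;
    }

    int main(void)
    {
    	/* order-2 folio at index 4 vs. range [5, 20]: 3 pages covered */
    	return covered_pages(4, 2, 5, 20) == 3 ? 0 : 1;
    }
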
+diff --git a/mm/migrate.c b/mm/migrate.c
+index bad3039d165e6..a6999ce57e232 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -2517,6 +2517,14 @@ static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
+ 			if (managed_zone(pgdat->node_zones + z))
+ 				break;
+ 		}
++
++		 * If there are no managed zones, do not proceed
++		 * further.
++		 * further.
++		 */
++		if (z < 0)
++			return 0;
++
+ 		wakeup_kswapd(pgdat->node_zones + z, 0,
+ 			      folio_order(folio), ZONE_MOVABLE);
+ 		return 0;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 65601aa52e0d8..2821a42cefdc6 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1049,6 +1049,7 @@ static void hci_error_reset(struct work_struct *work)
+ {
+ 	struct hci_dev *hdev = container_of(work, struct hci_dev, error_reset);
+ 
++	hci_dev_hold(hdev);
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	if (hdev->hw_error)
+@@ -1056,10 +1057,10 @@ static void hci_error_reset(struct work_struct *work)
+ 	else
+ 		bt_dev_err(hdev, "hardware error 0x%2.2x", hdev->hw_error_code);
+ 
+-	if (hci_dev_do_close(hdev))
+-		return;
++	if (!hci_dev_do_close(hdev))
++		hci_dev_do_open(hdev);
+ 
+-	hci_dev_do_open(hdev);
++	hci_dev_put(hdev);
+ }
+ 
+ void hci_uuids_clear(struct hci_dev *hdev)
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index ef8c3bed73617..2a5f5a7d2412b 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5329,9 +5329,12 @@ static void hci_io_capa_request_evt(struct hci_dev *hdev, void *data,
+ 	hci_dev_lock(hdev);
+ 
+ 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
+-	if (!conn || !hci_conn_ssp_enabled(conn))
++	if (!conn || !hci_dev_test_flag(hdev, HCI_SSP_ENABLED))
+ 		goto unlock;
+ 
++	/* Assume remote supports SSP since it has triggered this event */
++	set_bit(HCI_CONN_SSP_ENABLED, &conn->flags);
++
+ 	hci_conn_hold(conn);
+ 
+ 	if (!hci_dev_test_flag(hdev, HCI_MGMT))
+@@ -6794,6 +6797,10 @@ static void hci_le_remote_conn_param_req_evt(struct hci_dev *hdev, void *data,
+ 		return send_conn_param_neg_reply(hdev, handle,
+ 						 HCI_ERROR_UNKNOWN_CONN_ID);
+ 
++	if (max > hcon->le_conn_max_interval)
++		return send_conn_param_neg_reply(hdev, handle,
++						 HCI_ERROR_INVALID_LL_PARAMS);
++
+ 	if (hci_check_conn_params(min, max, latency, timeout))
+ 		return send_conn_param_neg_reply(hdev, handle,
+ 						 HCI_ERROR_INVALID_LL_PARAMS);
+@@ -7420,10 +7427,10 @@ static void hci_store_wake_reason(struct hci_dev *hdev, u8 event,
+ 	 * keep track of the bdaddr of the connection event that woke us up.
+ 	 */
+ 	if (event == HCI_EV_CONN_REQUEST) {
+-		bacpy(&hdev->wake_addr, &conn_complete->bdaddr);
++		bacpy(&hdev->wake_addr, &conn_request->bdaddr);
+ 		hdev->wake_addr_type = BDADDR_BREDR;
+ 	} else if (event == HCI_EV_CONN_COMPLETE) {
+-		bacpy(&hdev->wake_addr, &conn_request->bdaddr);
++		bacpy(&hdev->wake_addr, &conn_complete->bdaddr);
+ 		hdev->wake_addr_type = BDADDR_BREDR;
+ 	} else if (event == HCI_EV_LE_META) {
+ 		struct hci_ev_le_meta *le_ev = (void *)skb->data;
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 97284d9b2a2e3..b90ee68bba1d6 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2274,8 +2274,11 @@ static int hci_le_add_accept_list_sync(struct hci_dev *hdev,
+ 
+ 	/* During suspend, only wakeable devices can be in acceptlist */
+ 	if (hdev->suspended &&
+-	    !(params->flags & HCI_CONN_FLAG_REMOTE_WAKEUP))
++	    !(params->flags & HCI_CONN_FLAG_REMOTE_WAKEUP)) {
++		hci_le_del_accept_list_sync(hdev, &params->addr,
++					    params->addr_type);
+ 		return 0;
++	}
+ 
+ 	/* Select filter policy to accept all advertising */
+ 	if (*num_entries >= hdev->le_accept_list_size)
+@@ -5629,7 +5632,7 @@ static int hci_inquiry_sync(struct hci_dev *hdev, u8 length)
+ 
+ 	bt_dev_dbg(hdev, "");
+ 
+-	if (hci_dev_test_flag(hdev, HCI_INQUIRY))
++	if (test_bit(HCI_INQUIRY, &hdev->flags))
+ 		return 0;
+ 
+ 	hci_dev_lock(hdev);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 60298975d5c45..656f49b299d20 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -5613,7 +5613,13 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn,
+ 
+ 	memset(&rsp, 0, sizeof(rsp));
+ 
+-	err = hci_check_conn_params(min, max, latency, to_multiplier);
++	if (max > hcon->le_conn_max_interval) {
++		BT_DBG("requested connection interval exceeds current bounds.");
++		err = -EINVAL;
++	} else {
++		err = hci_check_conn_params(min, max, latency, to_multiplier);
++	}
++
+ 	if (err)
+ 		rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED);
+ 	else
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index ed17208907578..35e10c5a766d5 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -43,6 +43,10 @@
+ #include <linux/sysctl.h>
+ #endif
+ 
++#if IS_ENABLED(CONFIG_NF_CONNTRACK)
++#include <net/netfilter/nf_conntrack_core.h>
++#endif
++
+ static unsigned int brnf_net_id __read_mostly;
+ 
+ struct brnf_net {
+@@ -553,6 +557,90 @@ static unsigned int br_nf_pre_routing(void *priv,
+ 	return NF_STOLEN;
+ }
+ 
++#if IS_ENABLED(CONFIG_NF_CONNTRACK)
++/* conntracks' nf_confirm logic cannot handle cloned skbs referencing
++ * the same nf_conn entry, which will happen for multicast (broadcast)
++ * frames on bridges.
++ *
++ * Example:
++ *      macvlan0
++ *      br0
++ *  ethX  ethY
++ *
++ * ethX (or Y) receives multicast or broadcast packet containing
++ * an IP packet, not yet in conntrack table.
++ *
++ * 1. skb passes through the bridge and fake-ip (br_netfilter) Prerouting.
++ *    -> skb->_nfct now references an unconfirmed entry
++ * 2. skb is a broad/mcast packet. The bridge now passes clones out on each
++ *    bridge interface.
++ * 3. skb gets passed up the stack.
++ * 4. In the macvlan case, the macvlan driver retains clone(s) of the mcast skb
++ *    and schedules a work queue to send them out on the lower devices.
++ *
++ *    The clone skb->_nfct is not a copy; it is the same entry as the
++ *    original skb.  The macvlan rx handler then returns RX_HANDLER_PASS.
++ * 5. Normal conntrack hooks (in NF_INET_LOCAL_IN) confirm the orig skb.
++ *
++ * The macvlan broadcast worker and the normal confirm path will race.
++ *
++ * This race will not happen if step 2 already confirmed a clone. In that
++ * case later steps perform skb_clone() with skb->_nfct already confirmed (in
++ * the hash table).  This works fine.
++ *
++ * But such confirmation won't happen when eb/ip/nftables rules drop the
++ * packets before they reach the nf_confirm step in postrouting.
++ *
++ * Work around this problem by explicitly confirming the entry at
++ * LOCAL_IN time, before the upper layer has a chance to clone the
++ * unconfirmed entry.
++ *
++ */
++static unsigned int br_nf_local_in(void *priv,
++				   struct sk_buff *skb,
++				   const struct nf_hook_state *state)
++{
++	struct nf_conntrack *nfct = skb_nfct(skb);
++	const struct nf_ct_hook *ct_hook;
++	struct nf_conn *ct;
++	int ret;
++
++	if (!nfct || skb->pkt_type == PACKET_HOST)
++		return NF_ACCEPT;
++
++	ct = container_of(nfct, struct nf_conn, ct_general);
++	if (likely(nf_ct_is_confirmed(ct)))
++		return NF_ACCEPT;
++
++	WARN_ON_ONCE(skb_shared(skb));
++	WARN_ON_ONCE(refcount_read(&nfct->use) != 1);
++
++	/* We can't call nf_confirm here; it would create a dependency
++	 * on the nf_conntrack module.
++	 */
++	ct_hook = rcu_dereference(nf_ct_hook);
++	if (!ct_hook) {
++		skb->_nfct = 0ul;
++		nf_conntrack_put(nfct);
++		return NF_ACCEPT;
++	}
++
++	nf_bridge_pull_encap_header(skb);
++	ret = ct_hook->confirm(skb);
++	switch (ret & NF_VERDICT_MASK) {
++	case NF_STOLEN:
++		return NF_STOLEN;
++	default:
++		nf_bridge_push_encap_header(skb);
++		break;
++	}
++
++	ct = container_of(nfct, struct nf_conn, ct_general);
++	WARN_ON_ONCE(!nf_ct_is_confirmed(ct));
++
++	return ret;
++}
++#endif
+ 
+ /* PF_BRIDGE/FORWARD *************************************************/
+ static int br_nf_forward_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+@@ -964,6 +1052,14 @@ static const struct nf_hook_ops br_nf_ops[] = {
+ 		.hooknum = NF_BR_PRE_ROUTING,
+ 		.priority = NF_BR_PRI_BRNF,
+ 	},
++#if IS_ENABLED(CONFIG_NF_CONNTRACK)
++	{
++		.hook = br_nf_local_in,
++		.pf = NFPROTO_BRIDGE,
++		.hooknum = NF_BR_LOCAL_IN,
++		.priority = NF_BR_PRI_LAST,
++	},
++#endif
+ 	{
+ 		.hook = br_nf_forward,
+ 		.pf = NFPROTO_BRIDGE,
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index abb090f94ed26..6f877e31709ba 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -291,6 +291,30 @@ static unsigned int nf_ct_bridge_pre(void *priv, struct sk_buff *skb,
+ 	return nf_conntrack_in(skb, &bridge_state);
+ }
+ 
++static unsigned int nf_ct_bridge_in(void *priv, struct sk_buff *skb,
++				    const struct nf_hook_state *state)
++{
++	enum ip_conntrack_info ctinfo;
++	struct nf_conn *ct;
++
++	if (skb->pkt_type == PACKET_HOST)
++		return NF_ACCEPT;
++
++	/* nf_conntrack_confirm() cannot handle concurrent clones;
++	 * this happens for broad/multicast frames with e.g. macvlan on top
++	 * of the bridge device.
++	 */
++	ct = nf_ct_get(skb, &ctinfo);
++	if (!ct || nf_ct_is_confirmed(ct) || nf_ct_is_template(ct))
++		return NF_ACCEPT;
++
++	/* let inet prerouting call conntrack again */
++	skb->_nfct = 0;
++	nf_ct_put(ct);
++
++	return NF_ACCEPT;
++}
++
+ static void nf_ct_bridge_frag_save(struct sk_buff *skb,
+ 				   struct nf_bridge_frag_data *data)
+ {
+@@ -385,6 +409,12 @@ static struct nf_hook_ops nf_ct_bridge_hook_ops[] __read_mostly = {
+ 		.hooknum	= NF_BR_PRE_ROUTING,
+ 		.priority	= NF_IP_PRI_CONNTRACK,
+ 	},
++	{
++		.hook		= nf_ct_bridge_in,
++		.pf		= NFPROTO_BRIDGE,
++		.hooknum	= NF_BR_LOCAL_IN,
++		.priority	= NF_IP_PRI_CONNTRACK_CONFIRM,
++	},
+ 	{
+ 		.hook		= nf_ct_bridge_post,
+ 		.pf		= NFPROTO_BRIDGE,
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index bf4c3f65ad99c..8f9cd6b7926e3 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -5164,10 +5164,9 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	struct net *net = sock_net(skb->sk);
+ 	struct ifinfomsg *ifm;
+ 	struct net_device *dev;
+-	struct nlattr *br_spec, *attr = NULL;
++	struct nlattr *br_spec, *attr, *br_flags_attr = NULL;
+ 	int rem, err = -EOPNOTSUPP;
+ 	u16 flags = 0;
+-	bool have_flags = false;
+ 
+ 	if (nlmsg_len(nlh) < sizeof(*ifm))
+ 		return -EINVAL;
+@@ -5185,11 +5184,11 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
+ 	if (br_spec) {
+ 		nla_for_each_nested(attr, br_spec, rem) {
+-			if (nla_type(attr) == IFLA_BRIDGE_FLAGS && !have_flags) {
++			if (nla_type(attr) == IFLA_BRIDGE_FLAGS && !br_flags_attr) {
+ 				if (nla_len(attr) < sizeof(flags))
+ 					return -EINVAL;
+ 
+-				have_flags = true;
++				br_flags_attr = attr;
+ 				flags = nla_get_u16(attr);
+ 			}
+ 
+@@ -5233,8 +5232,8 @@ static int rtnl_bridge_setlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		}
+ 	}
+ 
+-	if (have_flags)
+-		memcpy(nla_data(attr), &flags, sizeof(flags));
++	if (br_flags_attr)
++		memcpy(nla_data(br_flags_attr), &flags, sizeof(flags));
+ out:
+ 	return err;
+ }
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index 80cdc6f6b34c9..0323ab5023c69 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -83,7 +83,7 @@ static bool is_supervision_frame(struct hsr_priv *hsr, struct sk_buff *skb)
+ 		return false;
+ 
+ 	/* Get next tlv */
+-	total_length += sizeof(struct hsr_sup_tlv) + hsr_sup_tag->tlv.HSR_TLV_length;
++	total_length += hsr_sup_tag->tlv.HSR_TLV_length;
+ 	if (!pskb_may_pull(skb, total_length))
+ 		return false;
+ 	skb_pull(skb, total_length);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index beeae624c412d..2d29fce7c5606 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -554,6 +554,20 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ 	return 0;
+ }
+ 
++static void ip_tunnel_adj_headroom(struct net_device *dev, unsigned int headroom)
++{
++	/* we must cap headroom to some upper limit, else pskb_expand_head
++	 * will overflow header offsets in skb_headers_offset_update().
++	 */
++	static const unsigned int max_allowed = 512;
++
++	if (headroom > max_allowed)
++		headroom = max_allowed;
++
++	if (headroom > READ_ONCE(dev->needed_headroom))
++		WRITE_ONCE(dev->needed_headroom, headroom);
++}
++
+ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		       u8 proto, int tunnel_hlen)
+ {
+@@ -632,13 +646,13 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	}
+ 
+ 	headroom += LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len;
+-	if (headroom > READ_ONCE(dev->needed_headroom))
+-		WRITE_ONCE(dev->needed_headroom, headroom);
+-
+-	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
++	if (skb_cow_head(skb, headroom)) {
+ 		ip_rt_put(rt);
+ 		goto tx_dropped;
+ 	}
++
++	ip_tunnel_adj_headroom(dev, headroom);
++
+ 	iptunnel_xmit(NULL, rt, skb, fl4.saddr, fl4.daddr, proto, tos, ttl,
+ 		      df, !net_eq(tunnel->net, dev_net(dev)));
+ 	return;
+@@ -818,16 +832,16 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
+ 			+ rt->dst.header_len + ip_encap_hlen(&tunnel->encap);
+-	if (max_headroom > READ_ONCE(dev->needed_headroom))
+-		WRITE_ONCE(dev->needed_headroom, max_headroom);
+ 
+-	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
++	if (skb_cow_head(skb, max_headroom)) {
+ 		ip_rt_put(rt);
+ 		DEV_STATS_INC(dev, tx_dropped);
+ 		kfree_skb(skb);
+ 		return;
+ 	}
+ 
++	ip_tunnel_adj_headroom(dev, max_headroom);
++
+ 	iptunnel_xmit(NULL, rt, skb, fl4.saddr, fl4.daddr, protocol, tos, ttl,
+ 		      df, !net_eq(tunnel->net, dev_net(dev)));
+ 	return;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 5a839c5fb1a5a..055230b669cf2 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5509,9 +5509,10 @@ static int inet6_rtm_getaddr(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ 	}
+ 
+ 	addr = extract_addr(tb[IFA_ADDRESS], tb[IFA_LOCAL], &peer);
+-	if (!addr)
+-		return -EINVAL;
+-
++	if (!addr) {
++		err = -EINVAL;
++		goto errout;
++	}
+ 	ifm = nlmsg_data(nlh);
+ 	if (ifm->ifa_index)
+ 		dev = dev_get_by_index(tgt_net, ifm->ifa_index);
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 6218dcd07e184..ceee44ea09d97 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -888,7 +888,7 @@ int mctp_local_output(struct sock *sk, struct mctp_route *rt,
+ 		dev = dev_get_by_index_rcu(sock_net(sk), cb->ifindex);
+ 		if (!dev) {
+ 			rcu_read_unlock();
+-			return rc;
++			goto out_free;
+ 		}
+ 		rt->dev = __mctp_dev_get(dev);
+ 		rcu_read_unlock();
+@@ -903,7 +903,8 @@ int mctp_local_output(struct sock *sk, struct mctp_route *rt,
+ 		rt->mtu = 0;
+ 
+ 	} else {
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto out_free;
+ 	}
+ 
+ 	spin_lock_irqsave(&rt->dev->addrs_lock, flags);
+@@ -966,12 +967,17 @@ int mctp_local_output(struct sock *sk, struct mctp_route *rt,
+ 		rc = mctp_do_fragment_route(rt, skb, mtu, tag);
+ 	}
+ 
++	/* route output functions consume the skb, even on error */
++	skb = NULL;
++
+ out_release:
+ 	if (!ext_rt)
+ 		mctp_route_release(rt);
+ 
+ 	mctp_dev_put(tmp_rt.dev);
+ 
++out_free:
++	kfree_skb(skb);
+ 	return rc;
+ }
+ 
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index 6ff6f14674aa2..7017dd60659dc 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -21,6 +21,9 @@ static int subflow_get_info(struct sock *sk, struct sk_buff *skb)
+ 	bool slow;
+ 	int err;
+ 
++	if (inet_sk_state_load(sk) == TCP_LISTEN)
++		return 0;
++
+ 	start = nla_nest_start_noflag(skb, INET_ULP_INFO_MPTCP);
+ 	if (!start)
+ 		return -EMSGSIZE;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index e3e96a49f9229..63fc0758c22d4 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -981,10 +981,10 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ 	if (mp_opt->deny_join_id0)
+ 		WRITE_ONCE(msk->pm.remote_deny_join_id0, true);
+ 
+-set_fully_established:
+ 	if (unlikely(!READ_ONCE(msk->pm.server_side)))
+ 		pr_warn_once("bogus mpc option on established client sk");
+ 
++set_fully_established:
+ 	mptcp_data_lock((struct sock *)msk);
+ 	__mptcp_subflow_fully_established(msk, subflow, mp_opt);
+ 	mptcp_data_unlock((struct sock *)msk);
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index 01b3a8f2f0e9c..6eabd1d79f083 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -495,6 +495,16 @@ int mptcp_pm_nl_subflow_destroy_doit(struct sk_buff *skb, struct genl_info *info
+ 		goto destroy_err;
+ 	}
+ 
++#if IS_ENABLED(CONFIG_MPTCP_IPV6)
++	if (addr_l.family == AF_INET && ipv6_addr_v4mapped(&addr_r.addr6)) {
++		ipv6_addr_set_v4mapped(addr_l.addr.s_addr, &addr_l.addr6);
++		addr_l.family = AF_INET6;
++	}
++	if (addr_r.family == AF_INET && ipv6_addr_v4mapped(&addr_l.addr6)) {
++		ipv6_addr_set_v4mapped(addr_r.addr.s_addr, &addr_r.addr6);
++		addr_r.family = AF_INET6;
++	}
++#endif
+ 	if (addr_l.family != addr_r.family) {
+ 		GENL_SET_ERR_MSG(info, "address families do not match");
+ 		err = -EINVAL;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 5305f2ff0fd21..046ab95bc0b1d 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1274,6 +1274,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 		mpext = mptcp_get_ext(skb);
+ 		if (!mptcp_skb_can_collapse_to(data_seq, skb, mpext)) {
+ 			TCP_SKB_CB(skb)->eor = 1;
++			tcp_mark_push(tcp_sk(ssk), skb);
+ 			goto alloc_skb;
+ 		}
+ 
+@@ -3191,8 +3192,50 @@ static struct ipv6_pinfo *mptcp_inet6_sk(const struct sock *sk)
+ 
+ 	return (struct ipv6_pinfo *)(((u8 *)sk) + offset);
+ }
++
++static void mptcp_copy_ip6_options(struct sock *newsk, const struct sock *sk)
++{
++	const struct ipv6_pinfo *np = inet6_sk(sk);
++	struct ipv6_txoptions *opt;
++	struct ipv6_pinfo *newnp;
++
++	newnp = inet6_sk(newsk);
++
++	rcu_read_lock();
++	opt = rcu_dereference(np->opt);
++	if (opt) {
++		opt = ipv6_dup_options(newsk, opt);
++		if (!opt)
++			net_warn_ratelimited("%s: Failed to copy ip6 options\n", __func__);
++	}
++	RCU_INIT_POINTER(newnp->opt, opt);
++	rcu_read_unlock();
++}
+ #endif
+ 
++static void mptcp_copy_ip_options(struct sock *newsk, const struct sock *sk)
++{
++	struct ip_options_rcu *inet_opt, *newopt = NULL;
++	const struct inet_sock *inet = inet_sk(sk);
++	struct inet_sock *newinet;
++
++	newinet = inet_sk(newsk);
++
++	rcu_read_lock();
++	inet_opt = rcu_dereference(inet->inet_opt);
++	if (inet_opt) {
++		newopt = sock_kmalloc(newsk, sizeof(*inet_opt) +
++				      inet_opt->opt.optlen, GFP_ATOMIC);
++		if (newopt)
++			memcpy(newopt, inet_opt, sizeof(*inet_opt) +
++			       inet_opt->opt.optlen);
++		else
++			net_warn_ratelimited("%s: Failed to copy ip options\n", __func__);
++	}
++	RCU_INIT_POINTER(newinet->inet_opt, newopt);
++	rcu_read_unlock();
++}
++
+ struct sock *mptcp_sk_clone_init(const struct sock *sk,
+ 				 const struct mptcp_options_received *mp_opt,
+ 				 struct sock *ssk,
+@@ -3213,6 +3256,13 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
+ 
+ 	__mptcp_init_sock(nsk);
+ 
++#if IS_ENABLED(CONFIG_MPTCP_IPV6)
++	if (nsk->sk_family == AF_INET6)
++		mptcp_copy_ip6_options(nsk, sk);
++	else
++#endif
++		mptcp_copy_ip_options(nsk, sk);
++
+ 	msk = mptcp_sk(nsk);
+ 	msk->local_key = subflow_req->local_key;
+ 	msk->token = subflow_req->token;
+@@ -3224,7 +3274,7 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
+ 	msk->write_seq = subflow_req->idsn + 1;
+ 	msk->snd_nxt = msk->write_seq;
+ 	msk->snd_una = msk->write_seq;
+-	msk->wnd_end = msk->snd_nxt + req->rsk_rcv_wnd;
++	msk->wnd_end = msk->snd_nxt + tcp_sk(ssk)->snd_wnd;
+ 	msk->setsockopt_seq = mptcp_sk(sk)->setsockopt_seq;
+ 	mptcp_init_sched(msk, mptcp_sk(sk)->sched);
+ 
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 3e50baba1b2a9..7384613ea2444 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -790,6 +790,16 @@ static inline bool mptcp_data_fin_enabled(const struct mptcp_sock *msk)
+ 	       READ_ONCE(msk->write_seq) == READ_ONCE(msk->snd_nxt);
+ }
+ 
++static inline void mptcp_write_space(struct sock *sk)
++{
++	if (sk_stream_is_writeable(sk)) {
++		/* pairs with memory barrier in mptcp_poll */
++		smp_mb();
++		if (test_and_clear_bit(MPTCP_NOSPACE, &mptcp_sk(sk)->flags))
++			sk_stream_write_space(sk);
++	}
++}
++
+ static inline void __mptcp_sync_sndbuf(struct sock *sk)
+ {
+ 	struct mptcp_subflow_context *subflow;
+@@ -808,6 +818,7 @@ static inline void __mptcp_sync_sndbuf(struct sock *sk)
+ 
+ 	/* the msk max wmem limit is <nr_subflows> * tcp wmem[2] */
+ 	WRITE_ONCE(sk->sk_sndbuf, new_sndbuf);
++	mptcp_write_space(sk);
+ }
+ 
+ /* The called held both the msk socket and the subflow socket locks,
+@@ -838,16 +849,6 @@ static inline void mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+ 	local_bh_enable();
+ }
+ 
+-static inline void mptcp_write_space(struct sock *sk)
+-{
+-	if (sk_stream_is_writeable(sk)) {
+-		/* pairs with memory barrier in mptcp_poll */
+-		smp_mb();
+-		if (test_and_clear_bit(MPTCP_NOSPACE, &mptcp_sk(sk)->flags))
+-			sk_stream_write_space(sk);
+-	}
+-}
+-
+ void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags);
+ 
+ #define MPTCP_TOKEN_MAX_RETRIES	4
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 2e5f3864d353a..5b876fa7f9af9 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2756,6 +2756,7 @@ static const struct nf_ct_hook nf_conntrack_hook = {
+ 	.get_tuple_skb  = nf_conntrack_get_tuple_skb,
+ 	.attach		= nf_conntrack_attach,
+ 	.set_closing	= nf_conntrack_set_closing,
++	.confirm	= __nf_conntrack_confirm,
+ };
+ 
+ void nf_conntrack_init_end(void)
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 1f9474fefe849..d3d11dede5450 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -359,10 +359,20 @@ static int nft_target_validate(const struct nft_ctx *ctx,
+ 
+ 	if (ctx->family != NFPROTO_IPV4 &&
+ 	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET &&
+ 	    ctx->family != NFPROTO_BRIDGE &&
+ 	    ctx->family != NFPROTO_ARP)
+ 		return -EOPNOTSUPP;
+ 
++	ret = nft_chain_validate_hooks(ctx->chain,
++				       (1 << NF_INET_PRE_ROUTING) |
++				       (1 << NF_INET_LOCAL_IN) |
++				       (1 << NF_INET_FORWARD) |
++				       (1 << NF_INET_LOCAL_OUT) |
++				       (1 << NF_INET_POST_ROUTING));
++	if (ret)
++		return ret;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+@@ -610,10 +620,20 @@ static int nft_match_validate(const struct nft_ctx *ctx,
+ 
+ 	if (ctx->family != NFPROTO_IPV4 &&
+ 	    ctx->family != NFPROTO_IPV6 &&
++	    ctx->family != NFPROTO_INET &&
+ 	    ctx->family != NFPROTO_BRIDGE &&
+ 	    ctx->family != NFPROTO_ARP)
+ 		return -EOPNOTSUPP;
+ 
++	ret = nft_chain_validate_hooks(ctx->chain,
++				       (1 << NF_INET_PRE_ROUTING) |
++				       (1 << NF_INET_LOCAL_IN) |
++				       (1 << NF_INET_FORWARD) |
++				       (1 << NF_INET_LOCAL_OUT) |
++				       (1 << NF_INET_POST_ROUTING));
++	if (ret)
++		return ret;
++
+ 	if (nft_is_base_chain(ctx->chain)) {
+ 		const struct nft_base_chain *basechain =
+ 						nft_base_chain(ctx->chain);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index d9107b545d360..6ae782efb1ee3 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -167,7 +167,7 @@ static inline u32 netlink_group_mask(u32 group)
+ static struct sk_buff *netlink_to_full_skb(const struct sk_buff *skb,
+ 					   gfp_t gfp_mask)
+ {
+-	unsigned int len = skb_end_offset(skb);
++	unsigned int len = skb->len;
+ 	struct sk_buff *new;
+ 
+ 	new = alloc_skb(len, gfp_mask);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index de96959336c48..211f57164cb61 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -52,6 +52,7 @@ struct tls_decrypt_arg {
+ 	struct_group(inargs,
+ 	bool zc;
+ 	bool async;
++	bool async_done;
+ 	u8 tail;
+ 	);
+ 
+@@ -274,22 +275,30 @@ static int tls_do_decryption(struct sock *sk,
+ 		DEBUG_NET_WARN_ON_ONCE(atomic_read(&ctx->decrypt_pending) < 1);
+ 		atomic_inc(&ctx->decrypt_pending);
+ 	} else {
++		DECLARE_CRYPTO_WAIT(wait);
++
+ 		aead_request_set_callback(aead_req,
+ 					  CRYPTO_TFM_REQ_MAY_BACKLOG,
+-					  crypto_req_done, &ctx->async_wait);
++					  crypto_req_done, &wait);
++		ret = crypto_aead_decrypt(aead_req);
++		if (ret == -EINPROGRESS || ret == -EBUSY)
++			ret = crypto_wait_req(ret, &wait);
++		return ret;
+ 	}
+ 
+ 	ret = crypto_aead_decrypt(aead_req);
++	if (ret == -EINPROGRESS)
++		return 0;
++
+ 	if (ret == -EBUSY) {
+ 		ret = tls_decrypt_async_wait(ctx);
+-		ret = ret ?: -EINPROGRESS;
++		darg->async_done = true;
++		/* all completions have run, we're not doing async anymore */
++		darg->async = false;
++		return ret;
+ 	}
+-	if (ret == -EINPROGRESS) {
+-		if (darg->async)
+-			return 0;
+ 
+-		ret = crypto_wait_req(ret, &ctx->async_wait);
+-	}
++	atomic_dec(&ctx->decrypt_pending);
+ 	darg->async = false;
+ 
+ 	return ret;
+@@ -1588,8 +1597,11 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
+ 	/* Prepare and submit AEAD request */
+ 	err = tls_do_decryption(sk, sgin, sgout, dctx->iv,
+ 				data_len + prot->tail_size, aead_req, darg);
+-	if (err)
++	if (err) {
++		if (darg->async_done)
++			goto exit_free_skb;
+ 		goto exit_free_pages;
++	}
+ 
+ 	darg->skb = clear_skb ?: tls_strp_msg(ctx);
+ 	clear_skb = NULL;
+@@ -1601,6 +1613,9 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
+ 		return err;
+ 	}
+ 
++	if (unlikely(darg->async_done))
++		return 0;
++
+ 	if (prot->tail_size)
+ 		darg->tail = dctx->tail;
+ 
+@@ -1948,6 +1963,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 	struct strp_msg *rxm;
+ 	struct tls_msg *tlm;
+ 	ssize_t copied = 0;
++	ssize_t peeked = 0;
+ 	bool async = false;
+ 	int target, err;
+ 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
+@@ -2095,8 +2111,10 @@ int tls_sw_recvmsg(struct sock *sk,
+ 			if (err < 0)
+ 				goto put_on_rx_list_err;
+ 
+-			if (is_peek)
++			if (is_peek) {
++				peeked += chunk;
+ 				goto put_on_rx_list;
++			}
+ 
+ 			if (partially_consumed) {
+ 				rxm->offset += chunk;
+@@ -2135,8 +2153,8 @@ int tls_sw_recvmsg(struct sock *sk,
+ 
+ 		/* Drain records from the rx_list & copy if required */
+ 		if (is_peek || is_kvec)
+-			err = process_rx_list(ctx, msg, &control, copied,
+-					      decrypted, is_peek, NULL);
++			err = process_rx_list(ctx, msg, &control, copied + peeked,
++					      decrypted - peeked, is_peek, NULL);
+ 		else
+ 			err = process_rx_list(ctx, msg, &control, 0,
+ 					      async_copy_bytes, is_peek, NULL);
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 8f63f0b4bf012..2a81880dac7b7 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -284,9 +284,17 @@ void unix_gc(void)
+ 	 * which are creating the cycle(s).
+ 	 */
+ 	skb_queue_head_init(&hitlist);
+-	list_for_each_entry(u, &gc_candidates, link)
++	list_for_each_entry(u, &gc_candidates, link) {
+ 		scan_children(&u->sk, inc_inflight, &hitlist);
+ 
++#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
++		if (u->oob_skb) {
++			kfree_skb(u->oob_skb);
++			u->oob_skb = NULL;
++		}
++#endif
++	}
++
+ 	/* not_cycle_list contains those sockets which do not make up a
+ 	 * cycle.  Restore these to the inflight list.
+ 	 */
+@@ -314,17 +322,6 @@ void unix_gc(void)
+ 	/* Here we are. Hitlist is filled. Die. */
+ 	__skb_queue_purge(&hitlist);
+ 
+-#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+-	list_for_each_entry_safe(u, next, &gc_candidates, link) {
+-		struct sk_buff *skb = u->oob_skb;
+-
+-		if (skb) {
+-			u->oob_skb = NULL;
+-			kfree_skb(skb);
+-		}
+-	}
+-#endif
+-
+ 	spin_lock(&unix_gc_lock);
+ 
+ 	/* There could be io_uring registered files, just push them back to
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index fbf95b7ff6b43..f853b54415d5d 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4185,6 +4185,8 @@ static int nl80211_set_interface(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		if (ntype != NL80211_IFTYPE_MESH_POINT)
+ 			return -EINVAL;
++		if (otype != NL80211_IFTYPE_MESH_POINT)
++			return -EINVAL;
+ 		if (netif_running(dev))
+ 			return -EBUSY;
+ 
+diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include
+index 5a84b6443875c..3ee8ecfb8c044 100644
+--- a/scripts/Kconfig.include
++++ b/scripts/Kconfig.include
+@@ -33,7 +33,7 @@ ld-option = $(success,$(LD) -v $(1))
+ 
+ # $(as-instr,<instr>)
+ # Return y if the assembler supports <instr>, n otherwise
+-as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler-with-cpp -o /dev/null -)
++as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -Wa$(comma)--fatal-warnings -c -x assembler-with-cpp -o /dev/null -)
+ 
+ # check if $(CC) and $(LD) exist
+ $(error-if,$(failure,command -v $(CC)),C compiler '$(CC)' not found)
+diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
+index 8fcb427405a6f..92be0c9a13eeb 100644
+--- a/scripts/Makefile.compiler
++++ b/scripts/Makefile.compiler
+@@ -38,7 +38,7 @@ as-option = $(call try-run,\
+ # Usage: aflags-y += $(call as-instr,instr,option1,option2)
+ 
+ as-instr = $(call try-run,\
+-	printf "%b\n" "$(1)" | $(CC) -Werror $(CLANG_FLAGS) $(KBUILD_AFLAGS) -c -x assembler-with-cpp -o "$$TMP" -,$(2),$(3))
++	printf "%b\n" "$(1)" | $(CC) -Werror $(CLANG_FLAGS) $(KBUILD_AFLAGS) -Wa$(comma)--fatal-warnings -c -x assembler-with-cpp -o "$$TMP" -,$(2),$(3))
+ 
+ # __cc-option
+ # Usage: MY_CFLAGS += $(call __cc-option,$(CC),$(MY_CFLAGS),-march=winchip-c6,-march=i586)
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index bc7c126deea2f..6ea260cf89f06 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -737,8 +737,8 @@ static int current_check_refer_path(struct dentry *const old_dentry,
+ 	bool allow_parent1, allow_parent2;
+ 	access_mask_t access_request_parent1, access_request_parent2;
+ 	struct path mnt_dir;
+-	layer_mask_t layer_masks_parent1[LANDLOCK_NUM_ACCESS_FS],
+-		layer_masks_parent2[LANDLOCK_NUM_ACCESS_FS];
++	layer_mask_t layer_masks_parent1[LANDLOCK_NUM_ACCESS_FS] = {},
++		     layer_masks_parent2[LANDLOCK_NUM_ACCESS_FS] = {};
+ 
+ 	if (!dom)
+ 		return 0;
+diff --git a/security/tomoyo/common.c b/security/tomoyo/common.c
+index 57ee70ae50f24..ea3140d510ecb 100644
+--- a/security/tomoyo/common.c
++++ b/security/tomoyo/common.c
+@@ -2649,13 +2649,14 @@ ssize_t tomoyo_write_control(struct tomoyo_io_buffer *head,
+ {
+ 	int error = buffer_len;
+ 	size_t avail_len = buffer_len;
+-	char *cp0 = head->write_buf;
++	char *cp0;
+ 	int idx;
+ 
+ 	if (!head->write)
+ 		return -EINVAL;
+ 	if (mutex_lock_interruptible(&head->io_sem))
+ 		return -EINTR;
++	cp0 = head->write_buf;
+ 	head->read_user_buf_avail = 0;
+ 	idx = tomoyo_read_lock();
+ 	/* Read a line and dispatch it to the policy handler. */
+diff --git a/sound/core/Makefile b/sound/core/Makefile
+index a6b444ee28326..f6526b3371375 100644
+--- a/sound/core/Makefile
++++ b/sound/core/Makefile
+@@ -32,7 +32,6 @@ snd-ump-objs      := ump.o
+ snd-ump-$(CONFIG_SND_UMP_LEGACY_RAWMIDI) += ump_convert.o
+ snd-timer-objs    := timer.o
+ snd-hrtimer-objs  := hrtimer.o
+-snd-rtctimer-objs := rtctimer.o
+ snd-hwdep-objs    := hwdep.o
+ snd-seq-device-objs := seq_device.o
+ 
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 3bef1944e955f..fe7911498cc43 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -985,7 +985,7 @@ static int snd_ump_legacy_open(struct snd_rawmidi_substream *substream)
+ 	struct snd_ump_endpoint *ump = substream->rmidi->private_data;
+ 	int dir = substream->stream;
+ 	int group = ump->legacy_mapping[substream->number];
+-	int err;
++	int err = 0;
+ 
+ 	mutex_lock(&ump->open_mutex);
+ 	if (ump->legacy_substreams[dir][group]) {
+@@ -1009,7 +1009,7 @@ static int snd_ump_legacy_open(struct snd_rawmidi_substream *substream)
+ 	spin_unlock_irq(&ump->legacy_locks[dir]);
+  unlock:
+ 	mutex_unlock(&ump->open_mutex);
+-	return 0;
++	return err;
+ }
+ 
+ static int snd_ump_legacy_close(struct snd_rawmidi_substream *substream)
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index a13c0b408aadf..7be17bca257f0 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -951,7 +951,7 @@ static int generate_tx_packet_descs(struct amdtp_stream *s, struct pkt_desc *des
+ 				// to the reason.
+ 				unsigned int safe_cycle = increment_ohci_cycle_count(next_cycle,
+ 								IR_JUMBO_PAYLOAD_MAX_SKIP_CYCLES);
+-				lost = (compare_ohci_cycle_count(safe_cycle, cycle) > 0);
++				lost = (compare_ohci_cycle_count(safe_cycle, cycle) < 0);
+ 			}
+ 			if (lost) {
+ 				dev_err(&s->unit->device, "Detect discontinuity of cycle: %d %d\n",
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e3096572881dc..eb45e5c3db8c6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7438,6 +7438,7 @@ enum {
+ 	ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ 	ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+ 	ALC298_FIXUP_LENOVO_C940_DUET7,
++	ALC287_FIXUP_LENOVO_14IRP8_DUETITL,
+ 	ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ 	ALC256_FIXUP_SET_COEF_DEFAULTS,
+ 	ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+@@ -7488,6 +7489,26 @@ static void alc298_fixup_lenovo_c940_duet7(struct hda_codec *codec,
+ 	__snd_hda_apply_fixup(codec, id, action, 0);
+ }
+ 
++/* A special fixup for Lenovo Slim/Yoga Pro 9 14IRP8 and Yoga DuetITL 2021;
++ * the 14IRP8 PCI SSID will mistakenly be matched with the DuetITL codec SSID,
++ * so we need to apply a different fixup in this case. The only DuetITL codec
++ * SSID reported so far is 17aa:3802, while the 14IRP8 has 17aa:38be
++ * and 17aa:38bf. If it weren't for the PCI SSID, the 14IRP8 models would
++ * have matched correctly by their codecs.
++ */
++static void alc287_fixup_lenovo_14irp8_duetitl(struct hda_codec *codec,
++					      const struct hda_fixup *fix,
++					      int action)
++{
++	int id;
++
++	if (codec->core.subsystem_id == 0x17aa3802)
++		id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* DuetITL */
++	else
++		id = ALC287_FIXUP_TAS2781_I2C; /* 14IRP8 */
++	__snd_hda_apply_fixup(codec, id, action, 0);
++}
++
+ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC269_FIXUP_GPIO2] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -9372,6 +9393,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc298_fixup_lenovo_c940_duet7,
+ 	},
++	[ALC287_FIXUP_LENOVO_14IRP8_DUETITL] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc287_fixup_lenovo_14irp8_duetitl,
++	},
+ 	[ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -9578,7 +9603,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = tas2781_fixup_i2c,
+ 		.chained = true,
+-		.chain_id = ALC269_FIXUP_THINKPAD_ACPI,
++		.chain_id = ALC285_FIXUP_THINKPAD_HEADSET_JACK,
+ 	},
+ 	[ALC245_FIXUP_HP_MUTE_LED_COEFBIT] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -9733,6 +9758,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0c1c, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1028, 0x0c1d, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
++	SND_PCI_QUIRK(0x1028, 0x0c28, "Dell Inspiron 16 Plus 7630", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
+ 	SND_PCI_QUIRK(0x1028, 0x0c4d, "Dell", ALC287_FIXUP_CS35L41_I2C_4),
+ 	SND_PCI_QUIRK(0x1028, 0x0cbd, "Dell Oasis 13 CS MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1028, 0x0cbe, "Dell Oasis 13 2-IN-1 MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2),
+@@ -9889,6 +9915,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8973, "HP EliteBook 860 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8974, "HP EliteBook 840 Aero G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8975, "HP EliteBook x360 840 Aero G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x897d, "HP mt440 Mobile Thin Client U74", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8981, "HP Elite Dragonfly G3", ALC245_FIXUP_CS35L41_SPI_4),
+ 	SND_PCI_QUIRK(0x103c, 0x898e, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x898f, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -9914,11 +9941,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8aa8, "HP EliteBook 640 G9 (MB 8AA6)", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8aab, "HP EliteBook 650 G9 (MB 8AA9)", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8ab9, "HP EliteBook 840 G8 (MB 8AB8)", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8abb, "HP ZBook Firefly 14 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b0f, "HP Elite mt645 G7 Mobile Thin Client U81", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b2f, "HP 255 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
++	SND_PCI_QUIRK(0x103c, 0x8b3f, "HP mt440 Mobile Thin Client U91", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b42, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b43, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b44, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+@@ -10235,7 +10264,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ 	SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+-	SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++	SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8 / DuetITL 2021", ALC287_FIXUP_LENOVO_14IRP8_DUETITL),
+ 	SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7),
+ 	SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+diff --git a/sound/soc/codecs/cs35l45.c b/sound/soc/codecs/cs35l45.c
+index 44c221745c3b2..2392c6effed85 100644
+--- a/sound/soc/codecs/cs35l45.c
++++ b/sound/soc/codecs/cs35l45.c
+@@ -184,7 +184,7 @@ static int cs35l45_activate_ctl(struct snd_soc_component *component,
+ 	else
+ 		snprintf(name, SNDRV_CTL_ELEM_ID_NAME_MAXLEN, "%s", ctl_name);
+ 
+-	kcontrol = snd_soc_card_get_kcontrol(component->card, name);
++	kcontrol = snd_soc_card_get_kcontrol_locked(component->card, name);
+ 	if (!kcontrol) {
+ 		dev_err(component->dev, "Can't find kcontrol %s\n", name);
+ 		return -EINVAL;
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index 953ba066bab1e..2eb397724b4ba 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -34,10 +34,9 @@ static const struct reg_default cs35l56_reg_defaults[] = {
+ 	{ CS35L56_ASP1_FRAME_CONTROL5,		0x00020100 },
+ 	{ CS35L56_ASP1_DATA_CONTROL1,		0x00000018 },
+ 	{ CS35L56_ASP1_DATA_CONTROL5,		0x00000018 },
+-	{ CS35L56_ASP1TX1_INPUT,		0x00000018 },
+-	{ CS35L56_ASP1TX2_INPUT,		0x00000019 },
+-	{ CS35L56_ASP1TX3_INPUT,		0x00000020 },
+-	{ CS35L56_ASP1TX4_INPUT,		0x00000028 },
++
++	/* no defaults for ASP1TX mixer */
++
+ 	{ CS35L56_SWIRE_DP3_CH1_INPUT,		0x00000018 },
+ 	{ CS35L56_SWIRE_DP3_CH2_INPUT,		0x00000019 },
+ 	{ CS35L56_SWIRE_DP3_CH3_INPUT,		0x00000029 },
+@@ -286,6 +285,7 @@ void cs35l56_wait_min_reset_pulse(void)
+ EXPORT_SYMBOL_NS_GPL(cs35l56_wait_min_reset_pulse, SND_SOC_CS35L56_SHARED);
+ 
+ static const struct reg_sequence cs35l56_system_reset_seq[] = {
++	REG_SEQ0(CS35L56_DSP1_HALO_STATE, 0),
+ 	REG_SEQ0(CS35L56_DSP_VIRTUAL1_MBOX_1, CS35L56_MBOX_CMD_SYSTEM_RESET),
+ };
+ 
+diff --git a/sound/soc/codecs/cs35l56.c b/sound/soc/codecs/cs35l56.c
+index 45b4de3eff94f..319347be0524c 100644
+--- a/sound/soc/codecs/cs35l56.c
++++ b/sound/soc/codecs/cs35l56.c
+@@ -59,6 +59,131 @@ static int cs35l56_dspwait_put_volsw(struct snd_kcontrol *kcontrol,
+ 	return snd_soc_put_volsw(kcontrol, ucontrol);
+ }
+ 
++static const unsigned short cs35l56_asp1_mixer_regs[] = {
++	CS35L56_ASP1TX1_INPUT, CS35L56_ASP1TX2_INPUT,
++	CS35L56_ASP1TX3_INPUT, CS35L56_ASP1TX4_INPUT,
++};
++
++static const char * const cs35l56_asp1_mux_control_names[] = {
++	"ASP1 TX1 Source", "ASP1 TX2 Source", "ASP1 TX3 Source", "ASP1 TX4 Source"
++};
++
++static int cs35l56_sync_asp1_mixer_widgets_with_firmware(struct cs35l56_private *cs35l56)
++{
++	struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(cs35l56->component);
++	const char *prefix = cs35l56->component->name_prefix;
++	char full_name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
++	const char *name;
++	struct snd_kcontrol *kcontrol;
++	struct soc_enum *e;
++	unsigned int val[4];
++	int i, item, ret;
++
++	if (cs35l56->asp1_mixer_widgets_initialized)
++		return 0;
++
++	/*
++	 * Resume so we can read the registers from silicon if the regmap
++	 * cache has not yet been populated.
++	 */
++	ret = pm_runtime_resume_and_get(cs35l56->base.dev);
++	if (ret < 0)
++		return ret;
++
++	/* Wait for firmware download and reboot */
++	cs35l56_wait_dsp_ready(cs35l56);
++
++	ret = regmap_bulk_read(cs35l56->base.regmap, CS35L56_ASP1TX1_INPUT,
++			       val, ARRAY_SIZE(val));
++
++	pm_runtime_mark_last_busy(cs35l56->base.dev);
++	pm_runtime_put_autosuspend(cs35l56->base.dev);
++
++	if (ret) {
++		dev_err(cs35l56->base.dev, "Failed to read ASP1 mixer regs: %d\n", ret);
++		return ret;
++	}
++
++	for (i = 0; i < ARRAY_SIZE(cs35l56_asp1_mux_control_names); ++i) {
++		name = cs35l56_asp1_mux_control_names[i];
++
++		if (prefix) {
++			snprintf(full_name, sizeof(full_name), "%s %s", prefix, name);
++			name = full_name;
++		}
++
++		kcontrol = snd_soc_card_get_kcontrol_locked(dapm->card, name);
++		if (!kcontrol) {
++			dev_warn(cs35l56->base.dev, "Could not find control %s\n", name);
++			continue;
++		}
++
++		e = (struct soc_enum *)kcontrol->private_value;
++		item = snd_soc_enum_val_to_item(e, val[i] & CS35L56_ASP_TXn_SRC_MASK);
++		snd_soc_dapm_mux_update_power(dapm, kcontrol, item, e, NULL);
++	}
++
++	cs35l56->asp1_mixer_widgets_initialized = true;
++
++	return 0;
++}
++
++static int cs35l56_dspwait_asp1tx_get(struct snd_kcontrol *kcontrol,
++				      struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *component = snd_soc_dapm_kcontrol_component(kcontrol);
++	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component);
++	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
++	int index = e->shift_l;
++	unsigned int addr, val;
++	int ret;
++
++	ret = cs35l56_sync_asp1_mixer_widgets_with_firmware(cs35l56);
++	if (ret)
++		return ret;
++
++	addr = cs35l56_asp1_mixer_regs[index];
++	ret = regmap_read(cs35l56->base.regmap, addr, &val);
++	if (ret)
++		return ret;
++
++	val &= CS35L56_ASP_TXn_SRC_MASK;
++	ucontrol->value.enumerated.item[0] = snd_soc_enum_val_to_item(e, val);
++
++	return 0;
++}
++
++static int cs35l56_dspwait_asp1tx_put(struct snd_kcontrol *kcontrol,
++				      struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_component *component = snd_soc_dapm_kcontrol_component(kcontrol);
++	struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
++	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component);
++	struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
++	int item = ucontrol->value.enumerated.item[0];
++	int index = e->shift_l;
++	unsigned int addr, val;
++	bool changed;
++	int ret;
++
++	ret = cs35l56_sync_asp1_mixer_widgets_with_firmware(cs35l56);
++	if (ret)
++		return ret;
++
++	addr = cs35l56_asp1_mixer_regs[index];
++	val = snd_soc_enum_item_to_val(e, item);
++
++	ret = regmap_update_bits_check(cs35l56->base.regmap, addr,
++				       CS35L56_ASP_TXn_SRC_MASK, val, &changed);
++	if (ret)
++		return ret;
++
++	if (changed)
++		snd_soc_dapm_mux_update_power(dapm, kcontrol, item, e, NULL);
++
++	return changed;
++}
++
+ static DECLARE_TLV_DB_SCALE(vol_tlv, -10000, 25, 0);
+ 
+ static const struct snd_kcontrol_new cs35l56_controls[] = {
+@@ -77,40 +202,44 @@ static const struct snd_kcontrol_new cs35l56_controls[] = {
+ };
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx1_enum,
+-				  CS35L56_ASP1TX1_INPUT,
+-				  0, CS35L56_ASP_TXn_SRC_MASK,
++				  SND_SOC_NOPM,
++				  0, 0,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx1_mux =
+-	SOC_DAPM_ENUM("ASP1TX1 SRC", cs35l56_asp1tx1_enum);
++	SOC_DAPM_ENUM_EXT("ASP1TX1 SRC", cs35l56_asp1tx1_enum,
++			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx2_enum,
+-				  CS35L56_ASP1TX2_INPUT,
+-				  0, CS35L56_ASP_TXn_SRC_MASK,
++				  SND_SOC_NOPM,
++				  1, 0,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx2_mux =
+-	SOC_DAPM_ENUM("ASP1TX2 SRC", cs35l56_asp1tx2_enum);
++	SOC_DAPM_ENUM_EXT("ASP1TX2 SRC", cs35l56_asp1tx2_enum,
++			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx3_enum,
+-				  CS35L56_ASP1TX3_INPUT,
+-				  0, CS35L56_ASP_TXn_SRC_MASK,
++				  SND_SOC_NOPM,
++				  2, 0,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx3_mux =
+-	SOC_DAPM_ENUM("ASP1TX3 SRC", cs35l56_asp1tx3_enum);
++	SOC_DAPM_ENUM_EXT("ASP1TX3 SRC", cs35l56_asp1tx3_enum,
++			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_asp1tx4_enum,
+-				  CS35L56_ASP1TX4_INPUT,
+-				  0, CS35L56_ASP_TXn_SRC_MASK,
++				  SND_SOC_NOPM,
++				  3, 0,
+ 				  cs35l56_tx_input_texts,
+ 				  cs35l56_tx_input_values);
+ 
+ static const struct snd_kcontrol_new asp1_tx4_mux =
+-	SOC_DAPM_ENUM("ASP1TX4 SRC", cs35l56_asp1tx4_enum);
++	SOC_DAPM_ENUM_EXT("ASP1TX4 SRC", cs35l56_asp1tx4_enum,
++			  cs35l56_dspwait_asp1tx_get, cs35l56_dspwait_asp1tx_put);
+ 
+ static SOC_VALUE_ENUM_SINGLE_DECL(cs35l56_sdw1tx1_enum,
+ 				CS35L56_SWIRE_DP3_CH1_INPUT,
+@@ -753,6 +882,18 @@ static void cs35l56_dsp_work(struct work_struct *work)
+ 
+ 	pm_runtime_get_sync(cs35l56->base.dev);
+ 
++	/* Populate fw file qualifier with the revision and security state */
++	if (!cs35l56->dsp.fwf_name) {
++		cs35l56->dsp.fwf_name = kasprintf(GFP_KERNEL, "%02x%s-dsp1",
++						  cs35l56->base.rev,
++						  cs35l56->base.secured ? "-s" : "");
++		if (!cs35l56->dsp.fwf_name)
++			goto err;
++	}
++
++	dev_dbg(cs35l56->base.dev, "DSP fwf name: '%s' system name: '%s'\n",
++		cs35l56->dsp.fwf_name, cs35l56->dsp.system_name);
++
+ 	/*
+ 	 * When the device is running in secure mode the firmware files can
+ 	 * only contain insecure tunings and therefore we do not need to
+@@ -764,6 +905,7 @@ static void cs35l56_dsp_work(struct work_struct *work)
+ 	else
+ 		cs35l56_patch(cs35l56);
+ 
++err:
+ 	pm_runtime_mark_last_busy(cs35l56->base.dev);
+ 	pm_runtime_put_autosuspend(cs35l56->base.dev);
+ }
+@@ -799,6 +941,13 @@ static int cs35l56_component_probe(struct snd_soc_component *component)
+ 	debugfs_create_bool("can_hibernate", 0444, debugfs_root, &cs35l56->base.can_hibernate);
+ 	debugfs_create_bool("fw_patched", 0444, debugfs_root, &cs35l56->base.fw_patched);
+ 
++	/*
++	 * The widgets for the ASP1TX mixer can't be initialized
++	 * until the firmware has been downloaded and the DSP rebooted.
++	 */
++	regcache_drop_region(cs35l56->base.regmap, CS35L56_ASP1TX1_INPUT, CS35L56_ASP1TX4_INPUT);
++	cs35l56->asp1_mixer_widgets_initialized = false;
++
+ 	queue_work(cs35l56->dsp_wq, &cs35l56->dsp_work);
+ 
+ 	return 0;
+@@ -809,6 +958,16 @@ static void cs35l56_component_remove(struct snd_soc_component *component)
+ 	struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component);
+ 
+ 	cancel_work_sync(&cs35l56->dsp_work);
++
++	if (cs35l56->dsp.cs_dsp.booted)
++		wm_adsp_power_down(&cs35l56->dsp);
++
++	wm_adsp2_component_remove(&cs35l56->dsp, component);
++
++	kfree(cs35l56->dsp.fwf_name);
++	cs35l56->dsp.fwf_name = NULL;
++
++	cs35l56->component = NULL;
+ }
+ 
+ static int cs35l56_set_bias_level(struct snd_soc_component *component,
+@@ -1152,11 +1311,9 @@ int cs35l56_init(struct cs35l56_private *cs35l56)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	/* Populate the DSP information with the revision and security state */
+-	cs35l56->dsp.part = devm_kasprintf(cs35l56->base.dev, GFP_KERNEL, "cs35l56%s-%02x",
+-					   cs35l56->base.secured ? "s" : "", cs35l56->base.rev);
+-	if (!cs35l56->dsp.part)
+-		return -ENOMEM;
++	ret = cs35l56_set_patch(&cs35l56->base);
++	if (ret)
++		return ret;
+ 
+ 	if (!cs35l56->base.reset_gpio) {
+ 		dev_dbg(cs35l56->base.dev, "No reset gpio: using soft reset\n");
+@@ -1190,10 +1347,6 @@ int cs35l56_init(struct cs35l56_private *cs35l56)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = cs35l56_set_patch(&cs35l56->base);
+-	if (ret)
+-		return ret;
+-
+ 	/* Registers could be dirty after soft reset or SoundWire enumeration */
+ 	regcache_sync(cs35l56->base.regmap);
+ 
+diff --git a/sound/soc/codecs/cs35l56.h b/sound/soc/codecs/cs35l56.h
+index 8159c3e217d93..d9fbf568a1958 100644
+--- a/sound/soc/codecs/cs35l56.h
++++ b/sound/soc/codecs/cs35l56.h
+@@ -50,6 +50,7 @@ struct cs35l56_private {
+ 	u8 asp_slot_count;
+ 	bool tdm_mode;
+ 	bool sysclk_set;
++	bool asp1_mixer_widgets_initialized;
+ 	u8 old_sdw_clock_scale;
+ };
+ 
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index f0fb33d719c25..c46f64557a7ff 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -174,7 +174,9 @@ static int fsl_xcvr_activate_ctl(struct snd_soc_dai *dai, const char *name,
+ 	struct snd_kcontrol *kctl;
+ 	bool enabled;
+ 
+-	kctl = snd_soc_card_get_kcontrol(card, name);
++	lockdep_assert_held(&card->snd_card->controls_rwsem);
++
++	kctl = snd_soc_card_get_kcontrol_locked(card, name);
+ 	if (kctl == NULL)
+ 		return -ENOENT;
+ 
+@@ -576,10 +578,14 @@ static int fsl_xcvr_startup(struct snd_pcm_substream *substream,
+ 	xcvr->streams |= BIT(substream->stream);
+ 
+ 	if (!xcvr->soc_data->spdif_only) {
++		struct snd_soc_card *card = dai->component->card;
++
+ 		/* Disable XCVR controls if there is stream started */
++		down_read(&card->snd_card->controls_rwsem);
+ 		fsl_xcvr_activate_ctl(dai, fsl_xcvr_mode_kctl.name, false);
+ 		fsl_xcvr_activate_ctl(dai, fsl_xcvr_arc_mode_kctl.name, false);
+ 		fsl_xcvr_activate_ctl(dai, fsl_xcvr_earc_capds_kctl.name, false);
++		up_read(&card->snd_card->controls_rwsem);
+ 	}
+ 
+ 	return 0;
+@@ -598,11 +604,15 @@ static void fsl_xcvr_shutdown(struct snd_pcm_substream *substream,
+ 	/* Enable XCVR controls if there is no stream started */
+ 	if (!xcvr->streams) {
+ 		if (!xcvr->soc_data->spdif_only) {
++			struct snd_soc_card *card = dai->component->card;
++
++			down_read(&card->snd_card->controls_rwsem);
+ 			fsl_xcvr_activate_ctl(dai, fsl_xcvr_mode_kctl.name, true);
+ 			fsl_xcvr_activate_ctl(dai, fsl_xcvr_arc_mode_kctl.name,
+ 						(xcvr->mode == FSL_XCVR_MODE_ARC));
+ 			fsl_xcvr_activate_ctl(dai, fsl_xcvr_earc_capds_kctl.name,
+ 						(xcvr->mode == FSL_XCVR_MODE_EARC));
++			up_read(&card->snd_card->controls_rwsem);
+ 		}
+ 		ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_IER0,
+ 					 FSL_XCVR_IRQ_EARC_ALL, 0);
+diff --git a/sound/soc/qcom/lpass-cdc-dma.c b/sound/soc/qcom/lpass-cdc-dma.c
+index 48b03e60e3a3d..8106c586f68a4 100644
+--- a/sound/soc/qcom/lpass-cdc-dma.c
++++ b/sound/soc/qcom/lpass-cdc-dma.c
+@@ -259,7 +259,7 @@ static int lpass_cdc_dma_daiops_trigger(struct snd_pcm_substream *substream,
+ 				    int cmd, struct snd_soc_dai *dai)
+ {
+ 	struct snd_soc_pcm_runtime *soc_runtime = snd_soc_substream_to_rtd(substream);
+-	struct lpaif_dmactl *dmactl;
++	struct lpaif_dmactl *dmactl = NULL;
+ 	int ret = 0, id;
+ 
+ 	switch (cmd) {
+diff --git a/sound/soc/soc-card.c b/sound/soc/soc-card.c
+index 285ab4c9c7168..8a2f163da6bc9 100644
+--- a/sound/soc/soc-card.c
++++ b/sound/soc/soc-card.c
+@@ -5,6 +5,9 @@
+ // Copyright (C) 2019 Renesas Electronics Corp.
+ // Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
+ //
++
++#include <linux/lockdep.h>
++#include <linux/rwsem.h>
+ #include <sound/soc.h>
+ #include <sound/jack.h>
+ 
+@@ -26,12 +29,15 @@ static inline int _soc_card_ret(struct snd_soc_card *card,
+ 	return ret;
+ }
+ 
+-struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card,
+-					       const char *name)
++struct snd_kcontrol *snd_soc_card_get_kcontrol_locked(struct snd_soc_card *soc_card,
++						      const char *name)
+ {
+ 	struct snd_card *card = soc_card->snd_card;
+ 	struct snd_kcontrol *kctl;
+ 
++	/* must be held read or write */
++	lockdep_assert_held(&card->controls_rwsem);
++
+ 	if (unlikely(!name))
+ 		return NULL;
+ 
+@@ -40,6 +46,20 @@ struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card,
+ 			return kctl;
+ 	return NULL;
+ }
++EXPORT_SYMBOL_GPL(snd_soc_card_get_kcontrol_locked);
++
++struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card,
++					       const char *name)
++{
++	struct snd_card *card = soc_card->snd_card;
++	struct snd_kcontrol *kctl;
++
++	down_read(&card->controls_rwsem);
++	kctl = snd_soc_card_get_kcontrol_locked(soc_card, name);
++	up_read(&card->controls_rwsem);
++
++	return kctl;
++}
+ EXPORT_SYMBOL_GPL(snd_soc_card_get_kcontrol);
+ 
+ static int jack_new(struct snd_soc_card *card, const char *id, int type,
+diff --git a/tools/net/ynl/lib/ynl.c b/tools/net/ynl/lib/ynl.c
+index 591f5f50ddaab..2aa19004fa0cf 100644
+--- a/tools/net/ynl/lib/ynl.c
++++ b/tools/net/ynl/lib/ynl.c
+@@ -519,6 +519,7 @@ ynl_get_family_info_mcast(struct ynl_sock *ys, const struct nlattr *mcasts)
+ 				ys->mcast_groups[i].name[GENL_NAMSIZ - 1] = 0;
+ 			}
+ 		}
++		i++;
+ 	}
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index 10cd322e05c42..3b971d1617d81 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -310,12 +310,6 @@ check_mptcp_disabled()
+ 	return 0
+ }
+ 
+-# $1: IP address
+-is_v6()
+-{
+-	[ -z "${1##*:*}" ]
+-}
+-
+ do_ping()
+ {
+ 	local listener_ns="$1"
+@@ -324,7 +318,7 @@ do_ping()
+ 	local ping_args="-q -c 1"
+ 	local rc=0
+ 
+-	if is_v6 "${connect_addr}"; then
++	if mptcp_lib_is_v6 "${connect_addr}"; then
+ 		$ipv6 || return 0
+ 		ping_args="${ping_args} -6"
+ 	fi
+@@ -620,12 +614,12 @@ run_tests_lo()
+ 	fi
+ 
+ 	# skip if we don't want v6
+-	if ! $ipv6 && is_v6 "${connect_addr}"; then
++	if ! $ipv6 && mptcp_lib_is_v6 "${connect_addr}"; then
+ 		return 0
+ 	fi
+ 
+ 	local local_addr
+-	if is_v6 "${connect_addr}"; then
++	if mptcp_lib_is_v6 "${connect_addr}"; then
+ 		local_addr="::"
+ 	else
+ 		local_addr="0.0.0.0"
+@@ -693,7 +687,7 @@ run_test_transparent()
+ 	TEST_GROUP="${msg}"
+ 
+ 	# skip if we don't want v6
+-	if ! $ipv6 && is_v6 "${connect_addr}"; then
++	if ! $ipv6 && mptcp_lib_is_v6 "${connect_addr}"; then
+ 		return 0
+ 	fi
+ 
+@@ -726,7 +720,7 @@ EOF
+ 	fi
+ 
+ 	local local_addr
+-	if is_v6 "${connect_addr}"; then
++	if mptcp_lib_is_v6 "${connect_addr}"; then
+ 		local_addr="::"
+ 		r6flag="-6"
+ 	else
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index be10b971e912b..e6b778a9a9375 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -159,6 +159,11 @@ check_tools()
+ 		exit $ksft_skip
+ 	fi
+ 
++	if ! ss -h | grep -q MPTCP; then
++		echo "SKIP: ss tool does not support MPTCP"
++		exit $ksft_skip
++	fi
++
+ 	# Use the legacy version if available to support old kernel versions
+ 	if iptables-legacy -V &> /dev/null; then
+ 		iptables="iptables-legacy"
+@@ -587,12 +592,6 @@ link_failure()
+ 	done
+ }
+ 
+-# $1: IP address
+-is_v6()
+-{
+-	[ -z "${1##*:*}" ]
+-}
+-
+ # $1: ns, $2: port
+ wait_local_port_listen()
+ {
+@@ -872,7 +871,7 @@ pm_nl_set_endpoint()
+ 		local id=10
+ 		while [ $add_nr_ns1 -gt 0 ]; do
+ 			local addr
+-			if is_v6 "${connect_addr}"; then
++			if mptcp_lib_is_v6 "${connect_addr}"; then
+ 				addr="dead:beef:$counter::1"
+ 			else
+ 				addr="10.0.$counter.1"
+@@ -924,7 +923,7 @@ pm_nl_set_endpoint()
+ 		local id=20
+ 		while [ $add_nr_ns2 -gt 0 ]; do
+ 			local addr
+-			if is_v6 "${connect_addr}"; then
++			if mptcp_lib_is_v6 "${connect_addr}"; then
+ 				addr="dead:beef:$counter::2"
+ 			else
+ 				addr="10.0.$counter.2"
+@@ -966,7 +965,7 @@ pm_nl_set_endpoint()
+ 			pm_nl_flush_endpoint ${connector_ns}
+ 		elif [ $rm_nr_ns2 -eq 9 ]; then
+ 			local addr
+-			if is_v6 "${connect_addr}"; then
++			if mptcp_lib_is_v6 "${connect_addr}"; then
+ 				addr="dead:beef:1::2"
+ 			else
+ 				addr="10.0.1.2"
+@@ -1838,12 +1837,10 @@ chk_mptcp_info()
+ 	local cnt2
+ 	local dump_stats
+ 
+-	print_check "mptcp_info ${info1:0:8}=$exp1:$exp2"
++	print_check "mptcp_info ${info1:0:15}=$exp1:$exp2"
+ 
+-	cnt1=$(ss -N $ns1 -inmHM | grep "$info1:" |
+-	       sed -n 's/.*\('"$info1"':\)\([[:digit:]]*\).*$/\2/p;q')
+-	cnt2=$(ss -N $ns2 -inmHM | grep "$info2:" |
+-	       sed -n 's/.*\('"$info2"':\)\([[:digit:]]*\).*$/\2/p;q')
++	cnt1=$(ss -N $ns1 -inmHM | mptcp_lib_get_info_value "$info1" "$info1")
++	cnt2=$(ss -N $ns2 -inmHM | mptcp_lib_get_info_value "$info2" "$info2")
+ 	# 'ss' only display active connections and counters that are not 0.
+ 	[ -z "$cnt1" ] && cnt1=0
+ 	[ -z "$cnt2" ] && cnt2=0
+@@ -1861,6 +1858,42 @@ chk_mptcp_info()
+ 	fi
+ }
+ 
++# $1: subflows in ns1 ; $2: subflows in ns2
++# number of all subflows, including the initial subflow.
++chk_subflows_total()
++{
++	local cnt1
++	local cnt2
++	local info="subflows_total"
++	local dump_stats
++
++	# if subflows_total counter is supported, use it:
++	if [ -n "$(ss -N $ns1 -inmHM | mptcp_lib_get_info_value $info $info)" ]; then
++		chk_mptcp_info $info $1 $info $2
++		return
++	fi
++
++	print_check "$info $1:$2"
++
++	# if not, count the TCP connections that are in fact MPTCP subflows
++	cnt1=$(ss -N $ns1 -ti state established state syn-sent state syn-recv |
++	       grep -c tcp-ulp-mptcp)
++	cnt2=$(ss -N $ns2 -ti state established state syn-sent state syn-recv |
++	       grep -c tcp-ulp-mptcp)
++
++	if [ "$1" != "$cnt1" ] || [ "$2" != "$cnt2" ]; then
++		fail_test "got subflows $cnt1:$cnt2 expected $1:$2"
++		dump_stats=1
++	else
++		print_ok
++	fi
++
++	if [ "$dump_stats" = 1 ]; then
++		ss -N $ns1 -ti
++		ss -N $ns2 -ti
++	fi
++}
++
+ chk_link_usage()
+ {
+ 	local ns=$1
+@@ -2785,6 +2818,7 @@ backup_tests()
+ 	fi
+ }
+ 
++SUB_ESTABLISHED=10 # MPTCP_EVENT_SUB_ESTABLISHED
+ LISTENER_CREATED=15 #MPTCP_EVENT_LISTENER_CREATED
+ LISTENER_CLOSED=16  #MPTCP_EVENT_LISTENER_CLOSED
+ 
+@@ -2819,13 +2853,13 @@ verify_listener_events()
+ 		return
+ 	fi
+ 
+-	type=$(grep "type:$e_type," $evt | sed -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q')
+-	family=$(grep "type:$e_type," $evt | sed -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q')
+-	sport=$(grep "type:$e_type," $evt | sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
++	type=$(mptcp_lib_evts_get_info type "$evt" "$e_type")
++	family=$(mptcp_lib_evts_get_info family "$evt" "$e_type")
++	sport=$(mptcp_lib_evts_get_info sport "$evt" "$e_type")
+ 	if [ $family ] && [ $family = $AF_INET6 ]; then
+-		saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
++		saddr=$(mptcp_lib_evts_get_info saddr6 "$evt" "$e_type")
+ 	else
+-		saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q')
++		saddr=$(mptcp_lib_evts_get_info saddr4 "$evt" "$e_type")
+ 	fi
+ 
+ 	if [ $type ] && [ $type = $e_type ] &&
+@@ -3220,8 +3254,7 @@ fastclose_tests()
+ pedit_action_pkts()
+ {
+ 	tc -n $ns2 -j -s action show action pedit index 100 | \
+-		grep "packets" | \
+-		sed 's/.*"packets":\([0-9]\+\),.*/\1/'
++		mptcp_lib_get_info_value \"packets\" packets
+ }
+ 
+ fail_tests()
+@@ -3246,75 +3279,71 @@ fail_tests()
+ 	fi
+ }
+ 
++# $1: ns ; $2: addr ; $3: id
+ userspace_pm_add_addr()
+ {
+-	local addr=$1
+-	local id=$2
++	local evts=$evts_ns1
+ 	local tk
+ 
+-	tk=$(grep "type:1," "$evts_ns1" |
+-	     sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q')
+-	ip netns exec $ns1 ./pm_nl_ctl ann $addr token $tk id $id
++	[ "$1" == "$ns2" ] && evts=$evts_ns2
++	tk=$(mptcp_lib_evts_get_info token "$evts")
++
++	ip netns exec $1 ./pm_nl_ctl ann $2 token $tk id $3
+ 	sleep 1
+ }
+ 
+-userspace_pm_rm_sf_addr_ns1()
++# $1: ns ; $2: id
++userspace_pm_rm_addr()
+ {
+-	local addr=$1
+-	local id=$2
+-	local tk sp da dp
+-	local cnt_addr cnt_sf
+-
+-	tk=$(grep "type:1," "$evts_ns1" |
+-	     sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q')
+-	sp=$(grep "type:10" "$evts_ns1" |
+-	     sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
+-	da=$(grep "type:10" "$evts_ns1" |
+-	     sed -n 's/.*\(daddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
+-	dp=$(grep "type:10" "$evts_ns1" |
+-	     sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q')
+-	cnt_addr=$(rm_addr_count ${ns1})
+-	cnt_sf=$(rm_sf_count ${ns1})
+-	ip netns exec $ns1 ./pm_nl_ctl rem token $tk id $id
+-	ip netns exec $ns1 ./pm_nl_ctl dsf lip "::ffff:$addr" \
+-				lport $sp rip $da rport $dp token $tk
+-	wait_rm_addr $ns1 "${cnt_addr}"
+-	wait_rm_sf $ns1 "${cnt_sf}"
++	local evts=$evts_ns1
++	local tk
++	local cnt
++
++	[ "$1" == "$ns2" ] && evts=$evts_ns2
++	tk=$(mptcp_lib_evts_get_info token "$evts")
++
++	cnt=$(rm_addr_count ${1})
++	ip netns exec $1 ./pm_nl_ctl rem token $tk id $2
++	wait_rm_addr $1 "${cnt}"
+ }
+ 
++# $1: ns ; $2: addr ; $3: id
+ userspace_pm_add_sf()
+ {
+-	local addr=$1
+-	local id=$2
++	local evts=$evts_ns1
+ 	local tk da dp
+ 
+-	tk=$(sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
+-	da=$(sed -n 's/.*\(daddr4:\)\([0-9.]*\).*$/\2/p;q' "$evts_ns2")
+-	dp=$(sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
+-	ip netns exec $ns2 ./pm_nl_ctl csf lip $addr lid $id \
++	[ "$1" == "$ns2" ] && evts=$evts_ns2
++	tk=$(mptcp_lib_evts_get_info token "$evts")
++	da=$(mptcp_lib_evts_get_info daddr4 "$evts")
++	dp=$(mptcp_lib_evts_get_info dport "$evts")
++
++	ip netns exec $1 ./pm_nl_ctl csf lip $2 lid $3 \
+ 				rip $da rport $dp token $tk
+ 	sleep 1
+ }
+ 
+-userspace_pm_rm_sf_addr_ns2()
++# $1: ns ; $2: addr ; $3: event type
++userspace_pm_rm_sf()
+ {
+-	local addr=$1
+-	local id=$2
++	local evts=$evts_ns1
++	local t=${3:-1}
++	local ip
+ 	local tk da dp sp
+-	local cnt_addr cnt_sf
+-
+-	tk=$(sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
+-	da=$(sed -n 's/.*\(daddr4:\)\([0-9.]*\).*$/\2/p;q' "$evts_ns2")
+-	dp=$(sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
+-	sp=$(grep "type:10" "$evts_ns2" |
+-	     sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
+-	cnt_addr=$(rm_addr_count ${ns2})
+-	cnt_sf=$(rm_sf_count ${ns2})
+-	ip netns exec $ns2 ./pm_nl_ctl rem token $tk id $id
+-	ip netns exec $ns2 ./pm_nl_ctl dsf lip $addr lport $sp \
++	local cnt
++
++	[ "$1" == "$ns2" ] && evts=$evts_ns2
++	[ -n "$(mptcp_lib_evts_get_info "saddr4" "$evts" $t)" ] && ip=4
++	[ -n "$(mptcp_lib_evts_get_info "saddr6" "$evts" $t)" ] && ip=6
++	tk=$(mptcp_lib_evts_get_info token "$evts")
++	da=$(mptcp_lib_evts_get_info "daddr$ip" "$evts" $t $2)
++	dp=$(mptcp_lib_evts_get_info dport "$evts" $t $2)
++	sp=$(mptcp_lib_evts_get_info sport "$evts" $t $2)
++
++	cnt=$(rm_sf_count ${1})
++	ip netns exec $1 ./pm_nl_ctl dsf lip $2 lport $sp \
+ 				rip $da rport $dp token $tk
+-	wait_rm_addr $ns2 "${cnt_addr}"
+-	wait_rm_sf $ns2 "${cnt_sf}"
++	wait_rm_sf $1 "${cnt}"
+ }
+ 
+ userspace_tests()
+@@ -3396,19 +3425,25 @@ userspace_tests()
+ 	if reset_with_events "userspace pm add & remove address" &&
+ 	   continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
+ 		set_userspace_pm $ns1
+-		pm_nl_set_limits $ns2 1 1
++		pm_nl_set_limits $ns2 2 2
+ 		speed=5 \
+ 			run_tests $ns1 $ns2 10.0.1.1 &
+ 		local tests_pid=$!
+ 		wait_mpj $ns1
+-		userspace_pm_add_addr 10.0.2.1 10
+-		chk_join_nr 1 1 1
+-		chk_add_nr 1 1
+-		chk_mptcp_info subflows 1 subflows 1
+-		chk_mptcp_info add_addr_signal 1 add_addr_accepted 1
+-		userspace_pm_rm_sf_addr_ns1 10.0.2.1 10
+-		chk_rm_nr 1 1 invert
++		userspace_pm_add_addr $ns1 10.0.2.1 10
++		userspace_pm_add_addr $ns1 10.0.3.1 20
++		chk_join_nr 2 2 2
++		chk_add_nr 2 2
++		chk_mptcp_info subflows 2 subflows 2
++		chk_subflows_total 3 3
++		chk_mptcp_info add_addr_signal 2 add_addr_accepted 2
++		userspace_pm_rm_addr $ns1 10
++		userspace_pm_rm_sf $ns1 "::ffff:10.0.2.1" $SUB_ESTABLISHED
++		userspace_pm_rm_addr $ns1 20
++		userspace_pm_rm_sf $ns1 10.0.3.1 $SUB_ESTABLISHED
++		chk_rm_nr 2 2 invert
+ 		chk_mptcp_info subflows 0 subflows 0
++		chk_subflows_total 1 1
+ 		kill_events_pids
+ 		mptcp_lib_kill_wait $tests_pid
+ 	fi
+@@ -3422,12 +3457,15 @@ userspace_tests()
+ 			run_tests $ns1 $ns2 10.0.1.1 &
+ 		local tests_pid=$!
+ 		wait_mpj $ns2
+-		userspace_pm_add_sf 10.0.3.2 20
++		userspace_pm_add_sf $ns2 10.0.3.2 20
+ 		chk_join_nr 1 1 1
+ 		chk_mptcp_info subflows 1 subflows 1
+-		userspace_pm_rm_sf_addr_ns2 10.0.3.2 20
++		chk_subflows_total 2 2
++		userspace_pm_rm_addr $ns2 20
++		userspace_pm_rm_sf $ns2 10.0.3.2 $SUB_ESTABLISHED
+ 		chk_rm_nr 1 1
+ 		chk_mptcp_info subflows 0 subflows 0
++		chk_subflows_total 1 1
+ 		kill_events_pids
+ 		mptcp_lib_kill_wait $tests_pid
+ 	fi
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_lib.sh b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+index 2b10f200de402..8939d5c135a0e 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_lib.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_lib.sh
+@@ -208,6 +208,16 @@ mptcp_lib_result_print_all_tap() {
+ 	done
+ }
+ 
++# get the value of keyword $1 in the line marked by keyword $2
++mptcp_lib_get_info_value() {
++	grep "${2}" | sed -n 's/.*\('"${1}"':\)\([0-9a-f:.]*\).*$/\2/p;q'
++}
++
++# $1: info name ; $2: evts_ns ; [$3: event type; [$4: addr]]
++mptcp_lib_evts_get_info() {
++	grep "${4:-}" "${2}" | mptcp_lib_get_info_value "${1}" "^type:${3:-1},"
++}
++
+ # $1: PID
+ mptcp_lib_kill_wait() {
+ 	[ "${1}" -eq 0 ] && return 0
+@@ -217,6 +227,11 @@ mptcp_lib_kill_wait() {
+ 	wait "${1}" 2>/dev/null
+ }
+ 
++# $1: IP address
++mptcp_lib_is_v6() {
++	[ -z "${1##*:*}" ]
++}
++
+ # $1: ns, $2: MIB counter
+ mptcp_lib_get_counter() {
+ 	local ns="${1}"
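
The grep/sed pipeline behind the new mptcp_lib_get_info_value() helper simply
extracts the value that follows a "key:" marker from a comma-separated event
line. A minimal C sketch of the same extraction, assuming a made-up event line
and a hypothetical get_info_value() stand-in rather than actual selftest code:

#include <stdio.h>
#include <string.h>

/* Extract the value following "<key>:" from a comma-separated event
 * line, mirroring what mptcp_lib_get_info_value() does with grep and
 * sed. Naive: it matches the first occurrence of the key anywhere in
 * the line. Returns a pointer into a static buffer, or NULL if the
 * key is absent. */
static const char *get_info_value(const char *line, const char *key)
{
	static char value[64];
	char needle[32];
	const char *p;
	size_t i = 0;

	snprintf(needle, sizeof(needle), "%s:", key);
	p = strstr(line, needle);
	if (!p)
		return NULL;
	p += strlen(needle);

	/* the value runs until the next comma or end of line */
	while (*p && *p != ',' && i < sizeof(value) - 1)
		value[i++] = *p++;
	value[i] = '\0';
	return value;
}

int main(void)
{
	/* hypothetical event line in the format the selftests dump */
	const char *evt = "type:1,token:10007329,server_side:0,saddr4:10.0.1.2";

	printf("token=%s\n", get_info_value(evt, "token"));
	printf("saddr4=%s\n", get_info_value(evt, "saddr4"));
	return 0;
}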
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
+index a817af6616ec9..bfa744e350eff 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
+@@ -161,12 +161,6 @@ check_transfer()
+ 	return 0
+ }
+ 
+-# $1: IP address
+-is_v6()
+-{
+-	[ -z "${1##*:*}" ]
+-}
+-
+ do_transfer()
+ {
+ 	local listener_ns="$1"
+@@ -183,7 +177,7 @@ do_transfer()
+ 	local mptcp_connect="./mptcp_connect -r 20"
+ 
+ 	local local_addr ip
+-	if is_v6 "${connect_addr}"; then
++	if mptcp_lib_is_v6 "${connect_addr}"; then
+ 		local_addr="::"
+ 		ip=ipv6
+ 	else
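
The mptcp_lib_is_v6() helper that replaces the per-script is_v6() copies
relies on the "${1##*:*}" parameter expansion, which succeeds exactly when the
address contains a colon. A minimal C sketch of the same heuristic, a
classification shortcut rather than a full address validator:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Same heuristic as mptcp_lib_is_v6(): an address is treated as IPv6
 * if and only if it contains at least one ':'. */
static bool is_v6(const char *addr)
{
	return strchr(addr, ':') != NULL;
}

int main(void)
{
	printf("%d %d\n", is_v6("dead:beef::1"), is_v6("10.0.1.1")); /* 1 0 */
	return 0;
}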
+diff --git a/tools/testing/selftests/net/mptcp/userspace_pm.sh b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+index 0e748068ee95e..4c62114de0637 100755
+--- a/tools/testing/selftests/net/mptcp/userspace_pm.sh
++++ b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+@@ -238,14 +238,11 @@ make_connection()
+ 	local server_token
+ 	local server_serverside
+ 
+-	client_token=$(sed --unbuffered -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
+-	client_port=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
+-	client_serverside=$(sed --unbuffered -n 's/.*\(server_side:\)\([[:digit:]]*\).*$/\2/p;q'\
+-				      "$client_evts")
+-	server_token=$(grep "type:1," "$server_evts" |
+-		       sed --unbuffered -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q')
+-	server_serverside=$(grep "type:1," "$server_evts" |
+-			    sed --unbuffered -n 's/.*\(server_side:\)\([[:digit:]]*\).*$/\2/p;q')
++	client_token=$(mptcp_lib_evts_get_info token "$client_evts")
++	client_port=$(mptcp_lib_evts_get_info sport "$client_evts")
++	client_serverside=$(mptcp_lib_evts_get_info server_side "$client_evts")
++	server_token=$(mptcp_lib_evts_get_info token "$server_evts")
++	server_serverside=$(mptcp_lib_evts_get_info server_side "$server_evts")
+ 
+ 	print_test "Established IP${is_v6} MPTCP Connection ns2 => ns1"
+ 	if [ "$client_token" != "" ] && [ "$server_token" != "" ] && [ "$client_serverside" = 0 ] &&
+@@ -331,16 +328,16 @@ verify_announce_event()
+ 	local dport
+ 	local id
+ 
+-	type=$(sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	token=$(sed --unbuffered -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
++	type=$(mptcp_lib_evts_get_info type "$evt" $e_type)
++	token=$(mptcp_lib_evts_get_info token "$evt" $e_type)
+ 	if [ "$e_af" = "v6" ]
+ 	then
+-		addr=$(sed --unbuffered -n 's/.*\(daddr6:\)\([0-9a-f:.]*\).*$/\2/p;q' "$evt")
++		addr=$(mptcp_lib_evts_get_info daddr6 "$evt" $e_type)
+ 	else
+-		addr=$(sed --unbuffered -n 's/.*\(daddr4:\)\([0-9.]*\).*$/\2/p;q' "$evt")
++		addr=$(mptcp_lib_evts_get_info daddr4 "$evt" $e_type)
+ 	fi
+-	dport=$(sed --unbuffered -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	id=$(sed --unbuffered -n 's/.*\(rem_id:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
++	dport=$(mptcp_lib_evts_get_info dport "$evt" $e_type)
++	id=$(mptcp_lib_evts_get_info rem_id "$evt" $e_type)
+ 
+ 	check_expected "type" "token" "addr" "dport" "id"
+ }
+@@ -358,7 +355,7 @@ test_announce()
+ 	   $client_addr_id dev ns2eth1 > /dev/null 2>&1
+ 
+ 	local type
+-	type=$(sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
++	type=$(mptcp_lib_evts_get_info type "$server_evts")
+ 	print_test "ADD_ADDR 10.0.2.2 (ns2) => ns1, invalid token"
+ 	if [ "$type" = "" ]
+ 	then
+@@ -437,9 +434,9 @@ verify_remove_event()
+ 	local token
+ 	local id
+ 
+-	type=$(sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	token=$(sed --unbuffered -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	id=$(sed --unbuffered -n 's/.*\(rem_id:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
++	type=$(mptcp_lib_evts_get_info type "$evt" $e_type)
++	token=$(mptcp_lib_evts_get_info token "$evt" $e_type)
++	id=$(mptcp_lib_evts_get_info rem_id "$evt" $e_type)
+ 
+ 	check_expected "type" "token" "id"
+ }
+@@ -457,7 +454,7 @@ test_remove()
+ 	   $client_addr_id > /dev/null 2>&1
+ 	print_test "RM_ADDR id:${client_addr_id} ns2 => ns1, invalid token"
+ 	local type
+-	type=$(sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
++	type=$(mptcp_lib_evts_get_info type "$server_evts")
+ 	if [ "$type" = "" ]
+ 	then
+ 		test_pass
+@@ -470,7 +467,7 @@ test_remove()
+ 	ip netns exec "$ns2" ./pm_nl_ctl rem token "$client4_token" id\
+ 	   $invalid_id > /dev/null 2>&1
+ 	print_test "RM_ADDR id:${invalid_id} ns2 => ns1, invalid id"
+-	type=$(sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
++	type=$(mptcp_lib_evts_get_info type "$server_evts")
+ 	if [ "$type" = "" ]
+ 	then
+ 		test_pass
+@@ -574,19 +571,19 @@ verify_subflow_events()
+ 		fi
+ 	fi
+ 
+-	type=$(sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	token=$(sed --unbuffered -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	family=$(sed --unbuffered -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	dport=$(sed --unbuffered -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	locid=$(sed --unbuffered -n 's/.*\(loc_id:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
+-	remid=$(sed --unbuffered -n 's/.*\(rem_id:\)\([[:digit:]]*\).*$/\2/p;q' "$evt")
++	type=$(mptcp_lib_evts_get_info type "$evt" $e_type)
++	token=$(mptcp_lib_evts_get_info token "$evt" $e_type)
++	family=$(mptcp_lib_evts_get_info family "$evt" $e_type)
++	dport=$(mptcp_lib_evts_get_info dport "$evt" $e_type)
++	locid=$(mptcp_lib_evts_get_info loc_id "$evt" $e_type)
++	remid=$(mptcp_lib_evts_get_info rem_id "$evt" $e_type)
+ 	if [ "$family" = "$AF_INET6" ]
+ 	then
+-		saddr=$(sed --unbuffered -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q' "$evt")
+-		daddr=$(sed --unbuffered -n 's/.*\(daddr6:\)\([0-9a-f:.]*\).*$/\2/p;q' "$evt")
++		saddr=$(mptcp_lib_evts_get_info saddr6 "$evt" $e_type)
++		daddr=$(mptcp_lib_evts_get_info daddr6 "$evt" $e_type)
+ 	else
+-		saddr=$(sed --unbuffered -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q' "$evt")
+-		daddr=$(sed --unbuffered -n 's/.*\(daddr4:\)\([0-9.]*\).*$/\2/p;q' "$evt")
++		saddr=$(mptcp_lib_evts_get_info saddr4 "$evt" $e_type)
++		daddr=$(mptcp_lib_evts_get_info daddr4 "$evt" $e_type)
+ 	fi
+ 
+ 	check_expected "type" "token" "daddr" "dport" "family" "saddr" "locid" "remid"
+@@ -621,7 +618,7 @@ test_subflows()
+ 	mptcp_lib_kill_wait $listener_pid
+ 
+ 	local sport
+-	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
++	sport=$(mptcp_lib_evts_get_info sport "$server_evts" $SUB_ESTABLISHED)
+ 
+ 	# DESTROY_SUBFLOW from server to client machine
+ 	:>"$server_evts"
+@@ -659,7 +656,7 @@ test_subflows()
+ 	# Delete the listener from the client ns, if one was created
+ 	mptcp_lib_kill_wait $listener_pid
+ 
+-	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
++	sport=$(mptcp_lib_evts_get_info sport "$server_evts" $SUB_ESTABLISHED)
+ 
+ 	# DESTROY_SUBFLOW6 from server to client machine
+ 	:>"$server_evts"
+@@ -698,7 +695,7 @@ test_subflows()
+ 	# Delete the listener from the client ns, if one was created
+ 	mptcp_lib_kill_wait $listener_pid
+ 
+-	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$server_evts")
++	sport=$(mptcp_lib_evts_get_info sport "$server_evts" $SUB_ESTABLISHED)
+ 
+ 	# DESTROY_SUBFLOW from server to client machine
+ 	:>"$server_evts"
+@@ -736,7 +733,7 @@ test_subflows()
+ 	# Delete the listener from the server ns, if one was created
+ 	mptcp_lib_kill_wait $listener_pid
+ 
+-	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
++	sport=$(mptcp_lib_evts_get_info sport "$client_evts" $SUB_ESTABLISHED)
+ 
+ 	# DESTROY_SUBFLOW from client to server machine
+ 	:>"$client_evts"
+@@ -775,7 +772,7 @@ test_subflows()
+ 	# Delete the listener from the server ns, if one was created
+ 	mptcp_lib_kill_wait $listener_pid
+ 
+-	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
++	sport=$(mptcp_lib_evts_get_info sport "$client_evts" $SUB_ESTABLISHED)
+ 
+ 	# DESTROY_SUBFLOW6 from client to server machine
+ 	:>"$client_evts"
+@@ -812,7 +809,7 @@ test_subflows()
+ 	# Delete the listener from the server ns, if one was created
+ 	mptcp_lib_kill_wait $listener_pid
+ 
+-	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
++	sport=$(mptcp_lib_evts_get_info sport "$client_evts" $SUB_ESTABLISHED)
+ 
+ 	# DESTROY_SUBFLOW from client to server machine
+ 	:>"$client_evts"
+@@ -858,7 +855,7 @@ test_subflows_v4_v6_mix()
+ 	# Delete the listener from the server ns, if one was created
+ 	mptcp_lib_kill_wait $listener_pid
+ 
+-	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
++	sport=$(mptcp_lib_evts_get_info sport "$client_evts" $SUB_ESTABLISHED)
+ 
+ 	# DESTROY_SUBFLOW from client to server machine
+ 	:>"$client_evts"
+@@ -926,18 +923,13 @@ verify_listener_events()
+ 		print_test "CLOSE_LISTENER $e_saddr:$e_sport"
+ 	fi
+ 
+-	type=$(grep "type:$e_type," $evt |
+-	       sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q')
+-	family=$(grep "type:$e_type," $evt |
+-		 sed --unbuffered -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q')
+-	sport=$(grep "type:$e_type," $evt |
+-		sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
++	type=$(mptcp_lib_evts_get_info type $evt $e_type)
++	family=$(mptcp_lib_evts_get_info family $evt $e_type)
++	sport=$(mptcp_lib_evts_get_info sport $evt $e_type)
+ 	if [ $family ] && [ $family = $AF_INET6 ]; then
+-		saddr=$(grep "type:$e_type," $evt |
+-			sed --unbuffered -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
++		saddr=$(mptcp_lib_evts_get_info saddr6 $evt $e_type)
+ 	else
+-		saddr=$(grep "type:$e_type," $evt |
+-			sed --unbuffered -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q')
++		saddr=$(mptcp_lib_evts_get_info saddr4 $evt $e_type)
+ 	fi
+ 
+ 	check_expected "type" "family" "saddr" "sport"



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-03-15 21:59 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-03-15 21:59 UTC (permalink / raw
  To: gentoo-commits

commit:     c8e12e6b4f0e4903cc60060db6ba840be987fe15
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 15 21:59:14 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 15 21:59:14 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c8e12e6b

Linux patch 6.7.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1009_linux-6.7.10.patch | 2187 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2191 insertions(+)

diff --git a/0000_README b/0000_README
index 6b6b26d5..9b12f1b7 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-6.7.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.9
 
+Patch:  1009_linux-6.7.10.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.10
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1009_linux-6.7.10.patch b/1009_linux-6.7.10.patch
new file mode 100644
index 00000000..30979d1b
--- /dev/null
+++ b/1009_linux-6.7.10.patch
@@ -0,0 +1,2187 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index a1db6db475055..710d47be11e04 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -516,6 +516,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/mds
+ 		/sys/devices/system/cpu/vulnerabilities/meltdown
+ 		/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
++		/sys/devices/system/cpu/vulnerabilities/reg_file_data_sampling
+ 		/sys/devices/system/cpu/vulnerabilities/retbleed
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index de99caabf65a3..ff0b440ef2dc9 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -21,3 +21,4 @@ are configurable at compile, boot or run time.
+    cross-thread-rsb
+    srso
+    gather_data_sampling
++   reg-file-data-sampling
+diff --git a/Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst b/Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst
+new file mode 100644
+index 0000000000000..0585d02b9a6cb
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst
+@@ -0,0 +1,104 @@
++==================================
++Register File Data Sampling (RFDS)
++==================================
++
++Register File Data Sampling (RFDS) is a microarchitectural vulnerability that
++only affects Intel Atom parts (also branded as E-cores). RFDS may allow
++a malicious actor to infer data values previously used in floating point
++registers, vector registers, or integer registers. RFDS does not provide the
++ability to choose which data is inferred. CVE-2023-28746 is assigned to RFDS.
++
++Affected Processors
++===================
++Below is the list of affected Intel processors [#f1]_:
++
++   ===================  ============
++   Common name          Family_Model
++   ===================  ============
++   ATOM_GOLDMONT           06_5CH
++   ATOM_GOLDMONT_D         06_5FH
++   ATOM_GOLDMONT_PLUS      06_7AH
++   ATOM_TREMONT_D          06_86H
++   ATOM_TREMONT            06_96H
++   ALDERLAKE               06_97H
++   ALDERLAKE_L             06_9AH
++   ATOM_TREMONT_L          06_9CH
++   RAPTORLAKE              06_B7H
++   RAPTORLAKE_P            06_BAH
++   ATOM_GRACEMONT          06_BEH
++   RAPTORLAKE_S            06_BFH
++   ===================  ============
++
++As an exception to this table, the Intel Xeon E family parts ALDERLAKE (06_97H)
++and RAPTORLAKE (06_B7H), codenamed Catlow, are not affected. They are reported
++as vulnerable in Linux because they share the same family/model with affected
++parts. Unlike their affected counterparts, they do not enumerate RFDS_CLEAR or
++CPUID.HYBRID. This information could be used to distinguish the affected parts
++from the unaffected ones, but it was deemed not worth the added complexity, as
++the reporting is corrected automatically once these parts enumerate RFDS_NO.
++
++Mitigation
++==========
++Intel released a microcode update that enables software to clear sensitive
++information using the VERW instruction. RFDS deploys the same mitigation
++strategy as MDS: force the CPU to clear the affected buffers before an
++attacker can extract the secrets. This is achieved by repurposing the
++otherwise unused and obsolete VERW instruction in combination with the
++microcode update; with the update applied, the microcode clears the affected
++CPU buffers when the VERW instruction is executed.
++
++Mitigation points
++-----------------
++VERW is executed by the kernel before returning to user space, and by KVM
++before VMentry. None of the affected cores support SMT, so VERW is not required
++at C-state transitions.
++
++New bits in IA32_ARCH_CAPABILITIES
++----------------------------------
++Newer processors, and microcode updates on existing affected processors, add
++new bits to the IA32_ARCH_CAPABILITIES MSR. These bits can be used to
++enumerate the vulnerability and the mitigation capability:
++
++- Bit 27 - RFDS_NO - When set, processor is not affected by RFDS.
++- Bit 28 - RFDS_CLEAR - When set, processor is affected by RFDS, and has the
++  microcode that clears the affected buffers on VERW execution.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The kernel command line allows controlling the RFDS mitigation at boot time
++with the parameter "reg_file_data_sampling=". The valid arguments are:
++
++  ==========  =================================================================
++  on          If the CPU is vulnerable, enable mitigation; CPU buffer clearing
++              on exit to userspace and before entering a VM.
++  off         Disables mitigation.
++  ==========  =================================================================
++
++The default mitigation is selected by CONFIG_MITIGATION_RFDS.
++
++Mitigation status information
++-----------------------------
++The Linux kernel provides a sysfs interface to enumerate the current
++vulnerability status of the system: whether the system is vulnerable, and
++which mitigations are active. The relevant sysfs file is:
++
++	/sys/devices/system/cpu/vulnerabilities/reg_file_data_sampling
++
++The possible values in this file are:
++
++  .. list-table::
++
++     * - 'Not affected'
++       - The processor is not vulnerable
++     * - 'Vulnerable'
++       - The processor is vulnerable, but no mitigation enabled
++     * - 'Vulnerable: No microcode'
++       - The processor is vulnerable but microcode is not updated.
++     * - 'Mitigation: Clear Register File'
++       - The processor is vulnerable and the CPU buffer clearing mitigation is
++	 enabled.
++
++References
++----------
++.. [#f1] Affected Processors
++   https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
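
As a concrete companion to the enumeration described above, here is a small
user-space sketch that reads IA32_ARCH_CAPABILITIES (MSR 0x10a) through the
msr driver and tests bits 27 and 28. It assumes root privileges, a loaded msr
module, and that CPU 0 is representative; it is illustrative only, not part of
the patch:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_ARCH_CAPABILITIES 0x10a
#define ARCH_CAP_RFDS_NO    (1ULL << 27)
#define ARCH_CAP_RFDS_CLEAR (1ULL << 28)

int main(void)
{
	uint64_t cap;
	/* requires root and "modprobe msr"; CPU 0 is assumed */
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0 ||
	    pread(fd, &cap, sizeof(cap), MSR_IA32_ARCH_CAPABILITIES) != sizeof(cap)) {
		perror("reading IA32_ARCH_CAPABILITIES");
		return 1;
	}
	printf("RFDS_NO=%d RFDS_CLEAR=%d\n",
	       !!(cap & ARCH_CAP_RFDS_NO), !!(cap & ARCH_CAP_RFDS_CLEAR));
	close(fd);
	return 0;
}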
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index b72e2049c4876..40b89dd7c0bb3 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1137,6 +1137,26 @@
+ 			The filter can be disabled or changed to another
+ 			driver later using sysfs.
+ 
++	reg_file_data_sampling=
++			[X86] Controls the mitigation for the Register File
++			Data Sampling (RFDS) vulnerability. RFDS is a CPU
++			vulnerability which may allow userspace to infer
++			kernel data values previously stored in floating point
++			registers, vector registers, or integer registers.
++			RFDS only affects Intel Atom processors.
++
++			on:	Turns ON the mitigation.
++			off:	Turns OFF the mitigation.
++
++			This parameter overrides the compile-time default set
++			by CONFIG_MITIGATION_RFDS. The mitigation cannot be
++			disabled when other VERW based mitigations (like MDS)
++			are enabled. In order to disable RFDS mitigation all
++			VERW based mitigations need to be disabled.
++
++			For details see:
++			Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst
++
+ 	driver_async_probe=  [KNL]
+ 			List of driver names to be probed asynchronously. *
+ 			matches with all driver names. If * is specified, the
+@@ -3385,6 +3405,7 @@
+ 					       nospectre_bhb [ARM64]
+ 					       nospectre_v1 [X86,PPC]
+ 					       nospectre_v2 [X86,PPC,S390,ARM64]
++					       reg_file_data_sampling=off [X86]
+ 					       retbleed=off [X86]
+ 					       spec_store_bypass_disable=off [X86,PPC]
+ 					       spectre_v2_user=off [X86]
+diff --git a/Makefile b/Makefile
+index f1a592b7c7bc8..00c159535dabe 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index f8567e95f98be..8f47d6762ea4b 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -35,6 +35,7 @@ config ARM
+ 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
+ 	select ARCH_SUPPORTS_ATOMIC_RMW
+ 	select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
++	select ARCH_SUPPORTS_PER_VMA_LOCK
+ 	select ARCH_USE_BUILTIN_BSWAP
+ 	select ARCH_USE_CMPXCHG_LOCKREF
+ 	select ARCH_USE_MEMTEST
+diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
+index fef62e4a9edde..07565b593ed68 100644
+--- a/arch/arm/mm/fault.c
++++ b/arch/arm/mm/fault.c
+@@ -278,6 +278,37 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ 
+ 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+ 
++	if (!(flags & FAULT_FLAG_USER))
++		goto lock_mmap;
++
++	vma = lock_vma_under_rcu(mm, addr);
++	if (!vma)
++		goto lock_mmap;
++
++	if (!(vma->vm_flags & vm_flags)) {
++		vma_end_read(vma);
++		goto lock_mmap;
++	}
++	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
++	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
++		vma_end_read(vma);
++
++	if (!(fault & VM_FAULT_RETRY)) {
++		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
++		goto done;
++	}
++	count_vm_vma_lock_event(VMA_LOCK_RETRY);
++	if (fault & VM_FAULT_MAJOR)
++		flags |= FAULT_FLAG_TRIED;
++
++	/* Quick path to respond to signals */
++	if (fault_signal_pending(fault, regs)) {
++		if (!user_mode(regs))
++			goto no_context;
++		return 0;
++	}
++lock_mmap:
++
+ retry:
+ 	vma = lock_mm_and_find_vma(mm, addr, regs);
+ 	if (unlikely(!vma)) {
+@@ -316,6 +347,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+ 	}
+ 
+ 	mmap_read_unlock(mm);
++done:
+ 
+ 	/*
+ 	 * Handle the "normal" case first - VM_FAULT_MAJOR
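
The fault-handling hunk above brings arm in line with the per-VMA lock pattern
other architectures already use: attempt the fine-grained, RCU-protected VMA
lookup first and fall back to the coarse mmap lock only when that fails or the
fault must be retried. A loose, self-contained analogy of that
try-fine-else-coarse pattern, with pthreads standing in for the kernel's
locking primitives:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t coarse = PTHREAD_MUTEX_INITIALIZER; /* ~ mmap lock */
static pthread_mutex_t fine   = PTHREAD_MUTEX_INITIALIZER; /* ~ per-VMA lock */
static int shared;

/* optimistic fast path: take the fine-grained lock if it is free,
 * otherwise fall back to the coarse lock */
static void fault_like_access(void)
{
	if (pthread_mutex_trylock(&fine) == 0) {
		shared++;                    /* fast path, coarse lock untouched */
		pthread_mutex_unlock(&fine);
		return;
	}
	pthread_mutex_lock(&coarse);         /* slow path */
	shared++;
	pthread_mutex_unlock(&coarse);
}

int main(void)
{
	fault_like_access();
	printf("shared=%d\n", shared);
	return 0;
}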
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 1566748f16c42..d2003865b7cf6 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2609,6 +2609,17 @@ config GDS_FORCE_MITIGATION
+ 
+ 	  If in doubt, say N.
+ 
++config MITIGATION_RFDS
++	bool "RFDS Mitigation"
++	depends on CPU_SUP_INTEL
++	default y
++	help
++	  Enable mitigation for Register File Data Sampling (RFDS) by default.
++	  RFDS is a hardware vulnerability which affects Intel Atom CPUs. It
++	  allows unprivileged speculative access to stale data previously
++	  stored in floating point, vector and integer registers.
++	  See also <file:Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst>
++
+ endif
+ 
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index caf4cf2e1036f..0e4f2da9f618a 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -499,4 +499,5 @@
+ /* BUG word 2 */
+ #define X86_BUG_SRSO			X86_BUG(1*32 + 0) /* AMD SRSO bug */
+ #define X86_BUG_DIV0			X86_BUG(1*32 + 1) /* AMD DIV0 speculation bug */
++#define X86_BUG_RFDS			X86_BUG(1*32 + 2) /* CPU is vulnerable to Register File Data Sampling */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 1d51e1850ed03..857839df66dfb 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -165,6 +165,14 @@
+ 						 * CPU is not vulnerable to Gather
+ 						 * Data Sampling (GDS).
+ 						 */
++#define ARCH_CAP_RFDS_NO		BIT(27)	/*
++						 * Not susceptible to Register
++						 * File Data Sampling.
++						 */
++#define ARCH_CAP_RFDS_CLEAR		BIT(28)	/*
++						 * VERW clears CPU Register
++						 * File.
++						 */
+ 
+ #define ARCH_CAP_XAPIC_DISABLE		BIT(21)	/*
+ 						 * IA32_XAPIC_DISABLE_STATUS MSR
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 48d049cd74e71..01ac18f56147f 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -422,6 +422,13 @@ static void __init mmio_select_mitigation(void)
+ 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
+ 					      boot_cpu_has(X86_FEATURE_RTM)))
+ 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++
++	/*
++	 * X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
++	 * mitigations; disable the KVM-only mitigation in that case.
++	 */
++	if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
++		static_branch_disable(&mmio_stale_data_clear);
+ 	else
+ 		static_branch_enable(&mmio_stale_data_clear);
+ 
+@@ -473,6 +480,57 @@ static int __init mmio_stale_data_parse_cmdline(char *str)
+ }
+ early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"Register File Data Sampling: " fmt
++
++enum rfds_mitigations {
++	RFDS_MITIGATION_OFF,
++	RFDS_MITIGATION_VERW,
++	RFDS_MITIGATION_UCODE_NEEDED,
++};
++
++/* Default mitigation for Register File Data Sampling */
++static enum rfds_mitigations rfds_mitigation __ro_after_init =
++	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
++
++static const char * const rfds_strings[] = {
++	[RFDS_MITIGATION_OFF]			= "Vulnerable",
++	[RFDS_MITIGATION_VERW]			= "Mitigation: Clear Register File",
++	[RFDS_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
++};
++
++static void __init rfds_select_mitigation(void)
++{
++	if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off()) {
++		rfds_mitigation = RFDS_MITIGATION_OFF;
++		return;
++	}
++	if (rfds_mitigation == RFDS_MITIGATION_OFF)
++		return;
++
++	if (x86_read_arch_cap_msr() & ARCH_CAP_RFDS_CLEAR)
++		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++	else
++		rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
++}
++
++static __init int rfds_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!boot_cpu_has_bug(X86_BUG_RFDS))
++		return 0;
++
++	if (!strcmp(str, "off"))
++		rfds_mitigation = RFDS_MITIGATION_OFF;
++	else if (!strcmp(str, "on"))
++		rfds_mitigation = RFDS_MITIGATION_VERW;
++
++	return 0;
++}
++early_param("reg_file_data_sampling", rfds_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "" fmt
+ 
+@@ -498,11 +556,19 @@ static void __init md_clear_update_mitigation(void)
+ 		taa_mitigation = TAA_MITIGATION_VERW;
+ 		taa_select_mitigation();
+ 	}
+-	if (mmio_mitigation == MMIO_MITIGATION_OFF &&
+-	    boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
++	/*
++	 * MMIO_MITIGATION_OFF is not checked here so that mmio_stale_data_clear
++	 * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
++	 */
++	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
+ 		mmio_mitigation = MMIO_MITIGATION_VERW;
+ 		mmio_select_mitigation();
+ 	}
++	if (rfds_mitigation == RFDS_MITIGATION_OFF &&
++	    boot_cpu_has_bug(X86_BUG_RFDS)) {
++		rfds_mitigation = RFDS_MITIGATION_VERW;
++		rfds_select_mitigation();
++	}
+ out:
+ 	if (boot_cpu_has_bug(X86_BUG_MDS))
+ 		pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
+@@ -512,6 +578,8 @@ static void __init md_clear_update_mitigation(void)
+ 		pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
+ 	else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
+ 		pr_info("MMIO Stale Data: Unknown: No mitigations\n");
++	if (boot_cpu_has_bug(X86_BUG_RFDS))
++		pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
+ }
+ 
+ static void __init md_clear_select_mitigation(void)
+@@ -519,11 +587,12 @@ static void __init md_clear_select_mitigation(void)
+ 	mds_select_mitigation();
+ 	taa_select_mitigation();
+ 	mmio_select_mitigation();
++	rfds_select_mitigation();
+ 
+ 	/*
+-	 * As MDS, TAA and MMIO Stale Data mitigations are inter-related, update
+-	 * and print their mitigation after MDS, TAA and MMIO Stale Data
+-	 * mitigation selection is done.
++	 * As these mitigations are inter-related and rely on the VERW
++	 * instruction to clear the microarchitectural buffers, update and
++	 * print their status once selection is done for each vulnerability.
+ 	 */
+ 	md_clear_update_mitigation();
+ }
+@@ -2612,6 +2681,11 @@ static ssize_t mmio_stale_data_show_state(char *buf)
+ 			  sched_smt_active() ? "vulnerable" : "disabled");
+ }
+ 
++static ssize_t rfds_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s\n", rfds_strings[rfds_mitigation]);
++}
++
+ static char *stibp_state(void)
+ {
+ 	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+@@ -2771,6 +2845,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_GDS:
+ 		return gds_show_state(buf);
+ 
++	case X86_BUG_RFDS:
++		return rfds_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -2845,4 +2922,9 @@ ssize_t cpu_show_gds(struct device *dev, struct device_attribute *attr, char *bu
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_GDS);
+ }
++
++ssize_t cpu_show_reg_file_data_sampling(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_RFDS);
++}
+ #endif
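
With cpu_show_reg_file_data_sampling() in place here, and the sysfs attribute
wired up in drivers/base/cpu.c below, the mitigation status becomes readable
from user space. A minimal sketch, where the sysfs path comes from the patch
and everything else is illustrative:

#include <stdio.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/"
			"reg_file_data_sampling", "r");

	if (!f) {
		perror("open");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("RFDS status: %s", line); /* e.g. "Mitigation: Clear Register File" */
	fclose(f);
	return 0;
}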
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 34cac9ea19171..97ea52a4e8a39 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1274,6 +1274,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define SRSO		BIT(5)
+ /* CPU is affected by GDS */
+ #define GDS		BIT(6)
++/* CPU is affected by Register File Data Sampling */
++#define RFDS		BIT(7)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+@@ -1301,9 +1303,18 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(TIGERLAKE,	X86_STEPPING_ANY,		GDS),
+ 	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
+ 	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE_L,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_P,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_S,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_GRACEMONT,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO | RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_GOLDMONT,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_GOLDMONT_D,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ATOM_GOLDMONT_PLUS, X86_STEPPING_ANY,		RFDS),
+ 
+ 	VULNBL_AMD(0x15, RETBLEED),
+ 	VULNBL_AMD(0x16, RETBLEED),
+@@ -1337,6 +1348,24 @@ static bool arch_cap_mmio_immune(u64 ia32_cap)
+ 		ia32_cap & ARCH_CAP_SBDR_SSDP_NO);
+ }
+ 
++static bool __init vulnerable_to_rfds(u64 ia32_cap)
++{
++	/* The "immunity" bit trumps everything else: */
++	if (ia32_cap & ARCH_CAP_RFDS_NO)
++		return false;
++
++	/*
++	 * VMMs set ARCH_CAP_RFDS_CLEAR for processors not in the blacklist to
++	 * indicate that mitigation is needed because the guest is running on
++	 * vulnerable hardware or may migrate to such hardware:
++	 */
++	if (ia32_cap & ARCH_CAP_RFDS_CLEAR)
++		return true;
++
++	/* Only consult the blacklist when there is no enumeration: */
++	return cpu_matches(cpu_vuln_blacklist, RFDS);
++}
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = x86_read_arch_cap_msr();
+@@ -1448,6 +1477,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	    boot_cpu_has(X86_FEATURE_AVX))
+ 		setup_force_cpu_bug(X86_BUG_GDS);
+ 
++	if (vulnerable_to_rfds(ia32_cap))
++		setup_force_cpu_bug(X86_BUG_RFDS);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 468870450b8ba..8021c62b0e7b0 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1620,7 +1620,8 @@ static bool kvm_is_immutable_feature_msr(u32 msr)
+ 	 ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \
+ 	 ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
+ 	 ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
+-	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO)
++	 ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO | \
++	 ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR)
+ 
+ static u64 kvm_get_arch_capabilities(void)
+ {
+@@ -1652,6 +1653,8 @@ static u64 kvm_get_arch_capabilities(void)
+ 		data |= ARCH_CAP_SSB_NO;
+ 	if (!boot_cpu_has_bug(X86_BUG_MDS))
+ 		data |= ARCH_CAP_MDS_NO;
++	if (!boot_cpu_has_bug(X86_BUG_RFDS))
++		data |= ARCH_CAP_RFDS_NO;
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_RTM)) {
+ 		/*
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 548491de818ef..ef427ee787a99 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -565,6 +565,7 @@ CPU_SHOW_VULN_FALLBACK(mmio_stale_data);
+ CPU_SHOW_VULN_FALLBACK(retbleed);
+ CPU_SHOW_VULN_FALLBACK(spec_rstack_overflow);
+ CPU_SHOW_VULN_FALLBACK(gds);
++CPU_SHOW_VULN_FALLBACK(reg_file_data_sampling);
+ 
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+@@ -579,6 +580,7 @@ static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
+ static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
+ static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NULL);
+ static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
++static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -594,6 +596,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_retbleed.attr,
+ 	&dev_attr_spec_rstack_overflow.attr,
+ 	&dev_attr_gather_data_sampling.attr,
++	&dev_attr_reg_file_data_sampling.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
+index bb5221158a770..f5e216b157c75 100644
+--- a/drivers/dma/fsl-edma-common.h
++++ b/drivers/dma/fsl-edma-common.h
+@@ -30,8 +30,9 @@
+ #define EDMA_TCD_ATTR_SSIZE(x)		(((x) & GENMASK(2, 0)) << 8)
+ #define EDMA_TCD_ATTR_SMOD(x)		(((x) & GENMASK(4, 0)) << 11)
+ 
+-#define EDMA_TCD_CITER_CITER(x)		((x) & GENMASK(14, 0))
+-#define EDMA_TCD_BITER_BITER(x)		((x) & GENMASK(14, 0))
++#define EDMA_TCD_ITER_MASK		GENMASK(14, 0)
++#define EDMA_TCD_CITER_CITER(x)		((x) & EDMA_TCD_ITER_MASK)
++#define EDMA_TCD_BITER_BITER(x)		((x) & EDMA_TCD_ITER_MASK)
+ 
+ #define EDMA_TCD_CSR_START		BIT(0)
+ #define EDMA_TCD_CSR_INT_MAJOR		BIT(1)
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index 75cae7ccae270..d36e28b9c767a 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -9,6 +9,8 @@
+  * Vybrid and Layerscape SoCs.
+  */
+ 
++#include <dt-bindings/dma/fsl-edma.h>
++#include <linux/bitfield.h>
+ #include <linux/module.h>
+ #include <linux/interrupt.h>
+ #include <linux/clk.h>
+@@ -21,12 +23,6 @@
+ 
+ #include "fsl-edma-common.h"
+ 
+-#define ARGS_RX                         BIT(0)
+-#define ARGS_REMOTE                     BIT(1)
+-#define ARGS_MULTI_FIFO                 BIT(2)
+-#define ARGS_EVEN_CH                    BIT(3)
+-#define ARGS_ODD_CH                     BIT(4)
+-
+ static void fsl_edma_synchronize(struct dma_chan *chan)
+ {
+ 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+@@ -155,14 +151,14 @@ static struct dma_chan *fsl_edma3_xlate(struct of_phandle_args *dma_spec,
+ 		i = fsl_chan - fsl_edma->chans;
+ 
+ 		fsl_chan->priority = dma_spec->args[1];
+-		fsl_chan->is_rxchan = dma_spec->args[2] & ARGS_RX;
+-		fsl_chan->is_remote = dma_spec->args[2] & ARGS_REMOTE;
+-		fsl_chan->is_multi_fifo = dma_spec->args[2] & ARGS_MULTI_FIFO;
++		fsl_chan->is_rxchan = dma_spec->args[2] & FSL_EDMA_RX;
++		fsl_chan->is_remote = dma_spec->args[2] & FSL_EDMA_REMOTE;
++		fsl_chan->is_multi_fifo = dma_spec->args[2] & FSL_EDMA_MULTI_FIFO;
+ 
+-		if ((dma_spec->args[2] & ARGS_EVEN_CH) && (i & 0x1))
++		if ((dma_spec->args[2] & FSL_EDMA_EVEN_CH) && (i & 0x1))
+ 			continue;
+ 
+-		if ((dma_spec->args[2] & ARGS_ODD_CH) && !(i & 0x1))
++		if ((dma_spec->args[2] & FSL_EDMA_ODD_CH) && !(i & 0x1))
+ 			continue;
+ 
+ 		if (!b_chmux && i == dma_spec->args[0]) {
+@@ -587,7 +583,8 @@ static int fsl_edma_probe(struct platform_device *pdev)
+ 					DMAENGINE_ALIGN_32_BYTES;
+ 
+ 	/* Per worst case 'nbytes = 1' take CITER as the max_seg_size */
+-	dma_set_max_seg_size(fsl_edma->dma_dev.dev, 0x3fff);
++	dma_set_max_seg_size(fsl_edma->dma_dev.dev,
++			     FIELD_GET(EDMA_TCD_ITER_MASK, EDMA_TCD_ITER_MASK));
+ 
+ 	fsl_edma->dma_dev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+ 
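
The FIELD_GET(EDMA_TCD_ITER_MASK, EDMA_TCD_ITER_MASK) expression above
evaluates to the largest value the ITER field can hold, 0x7fff for
GENMASK(14, 0), replacing the stale 0x3fff literal. A self-contained sketch of
the idiom, with GENMASK() and FIELD_GET() re-implemented purely for
illustration (the kernel's versions live in linux/bits.h and linux/bitfield.h):

#include <stdint.h>
#include <stdio.h>

/* minimal stand-ins for the kernel macros; __builtin_ctzll (GCC/Clang)
 * supplies the shift that linux/bitfield.h derives at compile time */
#define GENMASK(h, l)      (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define FIELD_GET(mask, v) (((v) & (mask)) >> __builtin_ctzll(mask))

int main(void)
{
	uint64_t iter_mask = GENMASK(14, 0);

	/* FIELD_GET(mask, mask) yields the maximum value of the field */
	printf("max ITER = 0x%llx\n",
	       (unsigned long long)FIELD_GET(iter_mask, iter_mask)); /* 0x7fff */
	return 0;
}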
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 6cf7f364704e8..b094c48bebc30 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1811,7 +1811,7 @@ void bond_xdp_set_features(struct net_device *bond_dev)
+ 
+ 	ASSERT_RTNL();
+ 
+-	if (!bond_xdp_check(bond)) {
++	if (!bond_xdp_check(bond) || !bond_has_slaves(bond)) {
+ 		xdp_clear_features_flag(bond_dev);
+ 		return;
+ 	}
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index 4bf4d67557dcf..9048d1f196110 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -49,9 +49,9 @@ static int ksz8_ind_write8(struct ksz_device *dev, u8 table, u16 addr, u8 data)
+ 	mutex_lock(&dev->alu_mutex);
+ 
+ 	ctrl_addr = IND_ACC_TABLE(table) | addr;
+-	ret = ksz_write8(dev, regs[REG_IND_BYTE], data);
++	ret = ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr);
+ 	if (!ret)
+-		ret = ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr);
++		ret = ksz_write8(dev, regs[REG_IND_BYTE], data);
+ 
+ 	mutex_unlock(&dev->alu_mutex);
+ 
+diff --git a/drivers/net/ethernet/amd/pds_core/auxbus.c b/drivers/net/ethernet/amd/pds_core/auxbus.c
+index 11c23a7f3172d..fd1a5149c0031 100644
+--- a/drivers/net/ethernet/amd/pds_core/auxbus.c
++++ b/drivers/net/ethernet/amd/pds_core/auxbus.c
+@@ -160,23 +160,19 @@ static struct pds_auxiliary_dev *pdsc_auxbus_dev_register(struct pdsc *cf,
+ 	if (err < 0) {
+ 		dev_warn(cf->dev, "auxiliary_device_init of %s failed: %pe\n",
+ 			 name, ERR_PTR(err));
+-		goto err_out;
++		kfree(padev);
++		return ERR_PTR(err);
+ 	}
+ 
+ 	err = auxiliary_device_add(aux_dev);
+ 	if (err) {
+ 		dev_warn(cf->dev, "auxiliary_device_add of %s failed: %pe\n",
+ 			 name, ERR_PTR(err));
+-		goto err_out_uninit;
++		auxiliary_device_uninit(aux_dev);
++		return ERR_PTR(err);
+ 	}
+ 
+ 	return padev;
+-
+-err_out_uninit:
+-	auxiliary_device_uninit(aux_dev);
+-err_out:
+-	kfree(padev);
+-	return ERR_PTR(err);
+ }
+ 
+ int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index d9716bcec81bb..b0dc0fc6b1359 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -13629,9 +13629,9 @@ int i40e_queue_pair_disable(struct i40e_vsi *vsi, int queue_pair)
+ 		return err;
+ 
+ 	i40e_queue_pair_disable_irq(vsi, queue_pair);
++	i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
+ 	err = i40e_queue_pair_toggle_rings(vsi, queue_pair, false /* off */);
+ 	i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
+-	i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */);
+ 	i40e_queue_pair_clean_rings(vsi, queue_pair);
+ 	i40e_queue_pair_reset_stats(vsi, queue_pair);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
+index 2b657d43c769d..68b894bb68fe7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
++++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
+@@ -2146,6 +2146,7 @@ void ice_dpll_init(struct ice_pf *pf)
+ 	struct ice_dplls *d = &pf->dplls;
+ 	int err = 0;
+ 
++	mutex_init(&d->lock);
+ 	err = ice_dpll_init_info(pf, cgu);
+ 	if (err)
+ 		goto err_exit;
+@@ -2158,7 +2159,6 @@ void ice_dpll_init(struct ice_pf *pf)
+ 	err = ice_dpll_init_pins(pf, cgu);
+ 	if (err)
+ 		goto deinit_pps;
+-	mutex_init(&d->lock);
+ 	if (cgu) {
+ 		err = ice_dpll_init_worker(pf);
+ 		if (err)
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index adfdea1e2805a..a9cca2d24120a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -7800,6 +7800,8 @@ ice_bridge_setlink(struct net_device *dev, struct nlmsghdr *nlh,
+ 	pf_sw = pf->first_sw;
+ 	/* find the attribute in the netlink message */
+ 	br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
++	if (!br_spec)
++		return -EINVAL;
+ 
+ 	nla_for_each_nested(attr, br_spec, rem) {
+ 		__u16 mode;
+diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
+index e1494f24f661d..cd61928700211 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
++++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
+@@ -762,24 +762,6 @@ static void ice_sriov_clear_reset_trigger(struct ice_vf *vf)
+ 	ice_flush(hw);
+ }
+ 
+-/**
+- * ice_sriov_create_vsi - Create a new VSI for a VF
+- * @vf: VF to create the VSI for
+- *
+- * This is called by ice_vf_recreate_vsi to create the new VSI after the old
+- * VSI has been released.
+- */
+-static int ice_sriov_create_vsi(struct ice_vf *vf)
+-{
+-	struct ice_vsi *vsi;
+-
+-	vsi = ice_vf_vsi_setup(vf);
+-	if (!vsi)
+-		return -ENOMEM;
+-
+-	return 0;
+-}
+-
+ /**
+  * ice_sriov_post_vsi_rebuild - tasks to do after the VF's VSI have been rebuilt
+  * @vf: VF to perform tasks on
+@@ -799,7 +781,6 @@ static const struct ice_vf_ops ice_sriov_vf_ops = {
+ 	.poll_reset_status = ice_sriov_poll_reset_status,
+ 	.clear_reset_trigger = ice_sriov_clear_reset_trigger,
+ 	.irq_close = NULL,
+-	.create_vsi = ice_sriov_create_vsi,
+ 	.post_vsi_rebuild = ice_sriov_post_vsi_rebuild,
+ };
+ 
+@@ -1093,6 +1074,7 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+ 	struct ice_pf *pf = pci_get_drvdata(pdev);
+ 	u16 prev_msix, prev_queues, queues;
+ 	bool needs_rebuild = false;
++	struct ice_vsi *vsi;
+ 	struct ice_vf *vf;
+ 	int id;
+ 
+@@ -1127,6 +1109,10 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+ 	if (!vf)
+ 		return -ENOENT;
+ 
++	vsi = ice_get_vf_vsi(vf);
++	if (!vsi)
++		return -ENOENT;
++
+ 	prev_msix = vf->num_msix;
+ 	prev_queues = vf->num_vf_qs;
+ 
+@@ -1147,8 +1133,7 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+ 	if (vf->first_vector_idx < 0)
+ 		goto unroll;
+ 
+-	ice_vf_vsi_release(vf);
+-	if (vf->vf_ops->create_vsi(vf)) {
++	if (ice_vf_reconfig_vsi(vf) || ice_vf_init_host_cfg(vf, vsi)) {
+ 		/* Try to rebuild with previous values */
+ 		needs_rebuild = true;
+ 		goto unroll;
+@@ -1174,8 +1159,10 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+ 	if (vf->first_vector_idx < 0)
+ 		return -EINVAL;
+ 
+-	if (needs_rebuild)
+-		vf->vf_ops->create_vsi(vf);
++	if (needs_rebuild) {
++		ice_vf_reconfig_vsi(vf);
++		ice_vf_init_host_cfg(vf, vsi);
++	}
+ 
+ 	ice_ena_vf_mappings(vf);
+ 	ice_put_vf(vf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index b7ae099521566..88e3cd09f8d0c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -248,29 +248,44 @@ static void ice_vf_pre_vsi_rebuild(struct ice_vf *vf)
+ }
+ 
+ /**
+- * ice_vf_recreate_vsi - Release and re-create the VF's VSI
+- * @vf: VF to recreate the VSI for
++ * ice_vf_reconfig_vsi - Reconfigure a VF VSI with the device
++ * @vf: VF to reconfigure the VSI for
+  *
+- * This is only called when a single VF is being reset (i.e. VVF, VFLR, host
+- * VF configuration change, etc)
++ * This is called when a single VF is being reset (i.e. VVF, VFLR, host VF
++ * configuration change, etc).
+  *
+- * It releases and then re-creates a new VSI.
++ * It brings the VSI down and then reconfigures it with the hardware.
+  */
+-static int ice_vf_recreate_vsi(struct ice_vf *vf)
++int ice_vf_reconfig_vsi(struct ice_vf *vf)
+ {
++	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
++	struct ice_vsi_cfg_params params = {};
+ 	struct ice_pf *pf = vf->pf;
+ 	int err;
+ 
+-	ice_vf_vsi_release(vf);
++	if (WARN_ON(!vsi))
++		return -EINVAL;
++
++	params = ice_vsi_to_params(vsi);
++	params.flags = ICE_VSI_FLAG_NO_INIT;
+ 
+-	err = vf->vf_ops->create_vsi(vf);
++	ice_vsi_decfg(vsi);
++	ice_fltr_remove_all(vsi);
++
++	err = ice_vsi_cfg(vsi, &params);
+ 	if (err) {
+ 		dev_err(ice_pf_to_dev(pf),
+-			"Failed to recreate the VF%u's VSI, error %d\n",
++			"Failed to reconfigure the VF%u's VSI, error %d\n",
+ 			vf->vf_id, err);
+ 		return err;
+ 	}
+ 
++	/* Update the lan_vsi_num field since it might have been changed. The
++	 * PF lan_vsi_idx number remains the same so we don't need to change
++	 * that.
++	 */
++	vf->lan_vsi_num = vsi->vsi_num;
++
+ 	return 0;
+ }
+ 
+@@ -929,7 +944,7 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+ 
+ 	ice_vf_pre_vsi_rebuild(vf);
+ 
+-	if (ice_vf_recreate_vsi(vf)) {
++	if (ice_vf_reconfig_vsi(vf)) {
+ 		dev_err(dev, "Failed to release and setup the VF%u's VSI\n",
+ 			vf->vf_id);
+ 		err = -EFAULT;
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+index 93c774f2f4376..6b41e0f3d37ed 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+@@ -62,7 +62,6 @@ struct ice_vf_ops {
+ 	bool (*poll_reset_status)(struct ice_vf *vf);
+ 	void (*clear_reset_trigger)(struct ice_vf *vf);
+ 	void (*irq_close)(struct ice_vf *vf);
+-	int (*create_vsi)(struct ice_vf *vf);
+ 	void (*post_vsi_rebuild)(struct ice_vf *vf);
+ };
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+index 0c7e77c0a09fa..91ba7fe0eaee1 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+@@ -23,6 +23,7 @@
+ #warning "Only include ice_vf_lib_private.h in CONFIG_PCI_IOV virtualization files"
+ #endif
+ 
++int ice_vf_reconfig_vsi(struct ice_vf *vf);
+ void ice_initialize_vf_entry(struct ice_vf *vf);
+ void ice_dis_vf_qs(struct ice_vf *vf);
+ int ice_check_vf_init(struct ice_vf *vf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 8872f7a4f4320..d6348f20822e8 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -440,7 +440,6 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
+ 		vf->driver_caps = *(u32 *)msg;
+ 	else
+ 		vf->driver_caps = VIRTCHNL_VF_OFFLOAD_L2 |
+-				  VIRTCHNL_VF_OFFLOAD_RSS_REG |
+ 				  VIRTCHNL_VF_OFFLOAD_VLAN;
+ 
+ 	vfres->vf_cap_flags = VIRTCHNL_VF_OFFLOAD_L2;
+@@ -453,14 +452,8 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
+ 	vfres->vf_cap_flags |= ice_vc_get_vlan_caps(hw, vf, vsi,
+ 						    vf->driver_caps);
+ 
+-	if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
++	if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF)
+ 		vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_PF;
+-	} else {
+-		if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_AQ)
+-			vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_AQ;
+-		else
+-			vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_REG;
+-	}
+ 
+ 	if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC)
+ 		vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
+index 7d547fa616fa6..588b77f1a4bf6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
+@@ -13,8 +13,6 @@
+  * - opcodes needed by VF when caps are activated
+  *
+  * Caps that don't use new opcodes (no opcodes should be allowed):
+- * - VIRTCHNL_VF_OFFLOAD_RSS_AQ
+- * - VIRTCHNL_VF_OFFLOAD_RSS_REG
+  * - VIRTCHNL_VF_OFFLOAD_WB_ON_ITR
+  * - VIRTCHNL_VF_OFFLOAD_CRC
+  * - VIRTCHNL_VF_OFFLOAD_RX_POLLING
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index f3663b3f6390e..0fd5551b108ce 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -179,6 +179,10 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 			return -EBUSY;
+ 		usleep_range(1000, 2000);
+ 	}
++
++	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
++	ice_qvec_toggle_napi(vsi, q_vector, false);
++
+ 	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+ 
+ 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
+@@ -195,13 +199,10 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+ 		if (err)
+ 			return err;
+ 	}
+-	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
+-
+ 	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
+ 	if (err)
+ 		return err;
+ 
+-	ice_qvec_toggle_napi(vsi, q_vector, false);
+ 	ice_qp_clean_rings(vsi, q_idx);
+ 	ice_qp_reset_stats(vsi, q_idx);
+ 
+@@ -259,11 +260,11 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+ 	if (err)
+ 		return err;
+ 
+-	clear_bit(ICE_CFG_BUSY, vsi->state);
+ 	ice_qvec_toggle_napi(vsi, q_vector, true);
+ 	ice_qvec_ena_irq(vsi, q_vector);
+ 
+ 	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
++	clear_bit(ICE_CFG_BUSY, vsi->state);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index 2c1b051fdc0d4..b0c52f17848f6 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -2087,8 +2087,10 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport)
+ 		set_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags);
+ 
+ 	/* schedule the napi to receive all the marker packets */
++	local_bh_disable();
+ 	for (i = 0; i < vport->num_q_vectors; i++)
+ 		napi_schedule(&vport->q_vectors[i].napi);
++	local_bh_enable();
+ 
+ 	return idpf_wait_for_marker_event(vport);
+ }
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index e9bb403bbacf9..58ffddc6419ad 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -6489,7 +6489,7 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
+ 	int cpu = smp_processor_id();
+ 	struct netdev_queue *nq;
+ 	struct igc_ring *ring;
+-	int i, drops;
++	int i, nxmit;
+ 
+ 	if (unlikely(!netif_carrier_ok(dev)))
+ 		return -ENETDOWN;
+@@ -6505,16 +6505,15 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
+ 	/* Avoid transmit queue timeout since we share it with the slow path */
+ 	txq_trans_cond_update(nq);
+ 
+-	drops = 0;
++	nxmit = 0;
+ 	for (i = 0; i < num_frames; i++) {
+ 		int err;
+ 		struct xdp_frame *xdpf = frames[i];
+ 
+ 		err = igc_xdp_init_tx_descriptor(ring, xdpf);
+-		if (err) {
+-			xdp_return_frame_rx_napi(xdpf);
+-			drops++;
+-		}
++		if (err)
++			break;
++		nxmit++;
+ 	}
+ 
+ 	if (flags & XDP_XMIT_FLUSH)
+@@ -6522,7 +6521,7 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
+ 
+ 	__netif_tx_unlock(nq);
+ 
+-	return num_frames - drops;
++	return nxmit;
+ }
+ 
+ static void igc_trigger_rxtxq_interrupt(struct igc_adapter *adapter,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 6a3f633406c4b..ce234e76ea236 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -2939,8 +2939,8 @@ static void ixgbe_check_lsc(struct ixgbe_adapter *adapter)
+ static inline void ixgbe_irq_enable_queues(struct ixgbe_adapter *adapter,
+ 					   u64 qmask)
+ {
+-	u32 mask;
+ 	struct ixgbe_hw *hw = &adapter->hw;
++	u32 mask;
+ 
+ 	switch (hw->mac.type) {
+ 	case ixgbe_mac_82598EB:
+@@ -10524,6 +10524,44 @@ static void ixgbe_reset_rxr_stats(struct ixgbe_ring *rx_ring)
+ 	memset(&rx_ring->rx_stats, 0, sizeof(rx_ring->rx_stats));
+ }
+ 
++/**
++ * ixgbe_irq_disable_single - Disable single IRQ vector
++ * @adapter: adapter structure
++ * @ring: ring index
++ **/
++static void ixgbe_irq_disable_single(struct ixgbe_adapter *adapter, u32 ring)
++{
++	struct ixgbe_hw *hw = &adapter->hw;
++	u64 qmask = BIT_ULL(ring);
++	u32 mask;
++
++	switch (adapter->hw.mac.type) {
++	case ixgbe_mac_82598EB:
++		mask = qmask & IXGBE_EIMC_RTX_QUEUE;
++		IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC, mask);
++		break;
++	case ixgbe_mac_82599EB:
++	case ixgbe_mac_X540:
++	case ixgbe_mac_X550:
++	case ixgbe_mac_X550EM_x:
++	case ixgbe_mac_x550em_a:
++		mask = (qmask & 0xFFFFFFFF);
++		if (mask)
++			IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(0), mask);
++		mask = (qmask >> 32);
++		if (mask)
++			IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(1), mask);
++		break;
++	default:
++		break;
++	}
++	IXGBE_WRITE_FLUSH(&adapter->hw);
++	if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED)
++		synchronize_irq(adapter->msix_entries[ring].vector);
++	else
++		synchronize_irq(adapter->pdev->irq);
++}
++
+ /**
+  * ixgbe_txrx_ring_disable - Disable Rx/Tx/XDP Tx rings
+  * @adapter: adapter structure
+@@ -10540,6 +10578,11 @@ void ixgbe_txrx_ring_disable(struct ixgbe_adapter *adapter, int ring)
+ 	tx_ring = adapter->tx_ring[ring];
+ 	xdp_ring = adapter->xdp_ring[ring];
+ 
++	ixgbe_irq_disable_single(adapter, ring);
++
++	/* Rx/Tx/XDP Tx share the same napi context. */
++	napi_disable(&rx_ring->q_vector->napi);
++
+ 	ixgbe_disable_txr(adapter, tx_ring);
+ 	if (xdp_ring)
+ 		ixgbe_disable_txr(adapter, xdp_ring);
+@@ -10548,9 +10591,6 @@ void ixgbe_txrx_ring_disable(struct ixgbe_adapter *adapter, int ring)
+ 	if (xdp_ring)
+ 		synchronize_rcu();
+ 
+-	/* Rx/Tx/XDP Tx share the same napi context. */
+-	napi_disable(&rx_ring->q_vector->napi);
+-
+ 	ixgbe_clean_tx_ring(tx_ring);
+ 	if (xdp_ring)
+ 		ixgbe_clean_tx_ring(xdp_ring);
+@@ -10578,9 +10618,6 @@ void ixgbe_txrx_ring_enable(struct ixgbe_adapter *adapter, int ring)
+ 	tx_ring = adapter->tx_ring[ring];
+ 	xdp_ring = adapter->xdp_ring[ring];
+ 
+-	/* Rx/Tx/XDP Tx share the same napi context. */
+-	napi_enable(&rx_ring->q_vector->napi);
+-
+ 	ixgbe_configure_tx_ring(adapter, tx_ring);
+ 	if (xdp_ring)
+ 		ixgbe_configure_tx_ring(adapter, xdp_ring);
+@@ -10589,6 +10626,11 @@ void ixgbe_txrx_ring_enable(struct ixgbe_adapter *adapter, int ring)
+ 	clear_bit(__IXGBE_TX_DISABLED, &tx_ring->state);
+ 	if (xdp_ring)
+ 		clear_bit(__IXGBE_TX_DISABLED, &xdp_ring->state);
++
++	/* Rx/Tx/XDP Tx share the same napi context. */
++	napi_enable(&rx_ring->q_vector->napi);
++	ixgbe_irq_enable_queues(adapter, BIT_ULL(ring));
++	IXGBE_WRITE_FLUSH(&adapter->hw);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 3e064234f6fe9..98d4306929f3e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -157,6 +157,12 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	if (action == DEVLINK_RELOAD_ACTION_FW_ACTIVATE &&
++	    !dev->priv.fw_reset) {
++		NL_SET_ERR_MSG_MOD(extack, "FW activate is unsupported for this function");
++		return -EOPNOTSUPP;
++	}
++
+ 	if (mlx5_core_is_pf(dev) && pci_num_vf(pdev))
+ 		NL_SET_ERR_MSG_MOD(extack, "reload while VFs are present is unfavorable");
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+index 803035d4e5976..15d97c685ad33 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+@@ -42,9 +42,9 @@ mlx5e_ptp_port_ts_cqe_list_add(struct mlx5e_ptp_port_ts_cqe_list *list, u8 metad
+ 
+ 	WARN_ON_ONCE(tracker->inuse);
+ 	tracker->inuse = true;
+-	spin_lock(&list->tracker_list_lock);
++	spin_lock_bh(&list->tracker_list_lock);
+ 	list_add_tail(&tracker->entry, &list->tracker_list_head);
+-	spin_unlock(&list->tracker_list_lock);
++	spin_unlock_bh(&list->tracker_list_lock);
+ }
+ 
+ static void
+@@ -54,9 +54,9 @@ mlx5e_ptp_port_ts_cqe_list_remove(struct mlx5e_ptp_port_ts_cqe_list *list, u8 me
+ 
+ 	WARN_ON_ONCE(!tracker->inuse);
+ 	tracker->inuse = false;
+-	spin_lock(&list->tracker_list_lock);
++	spin_lock_bh(&list->tracker_list_lock);
+ 	list_del(&tracker->entry);
+-	spin_unlock(&list->tracker_list_lock);
++	spin_unlock_bh(&list->tracker_list_lock);
+ }
+ 
+ void mlx5e_ptpsq_track_metadata(struct mlx5e_ptpsq *ptpsq, u8 metadata)
+@@ -155,7 +155,7 @@ static void mlx5e_ptpsq_mark_ts_cqes_undelivered(struct mlx5e_ptpsq *ptpsq,
+ 	struct mlx5e_ptp_metadata_map *metadata_map = &ptpsq->metadata_map;
+ 	struct mlx5e_ptp_port_ts_cqe_tracker *pos, *n;
+ 
+-	spin_lock(&cqe_list->tracker_list_lock);
++	spin_lock_bh(&cqe_list->tracker_list_lock);
+ 	list_for_each_entry_safe(pos, n, &cqe_list->tracker_list_head, entry) {
+ 		struct sk_buff *skb =
+ 			mlx5e_ptp_metadata_map_lookup(metadata_map, pos->metadata_id);
+@@ -170,7 +170,7 @@ static void mlx5e_ptpsq_mark_ts_cqes_undelivered(struct mlx5e_ptpsq *ptpsq,
+ 		pos->inuse = false;
+ 		list_del(&pos->entry);
+ 	}
+-	spin_unlock(&cqe_list->tracker_list_lock);
++	spin_unlock_bh(&cqe_list->tracker_list_lock);
+ }
+ 
+ #define PTP_WQE_CTR2IDX(val) ((val) & ptpsq->ts_cqe_ctr_mask)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
+index 86bf007fd05b7..b500cc2c9689d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
+@@ -37,7 +37,7 @@ mlx5e_tc_post_act_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
+ 
+ 	if (!MLX5_CAP_FLOWTABLE_TYPE(priv->mdev, ignore_flow_level, table_type)) {
+ 		if (priv->mdev->coredev_type == MLX5_COREDEV_PF)
+-			mlx5_core_warn(priv->mdev, "firmware level support is missing\n");
++			mlx5_core_dbg(priv->mdev, "firmware flow level support is missing\n");
+ 		err = -EOPNOTSUPP;
+ 		goto err_check;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
+index d4ebd87431145..b2cabd6ab86cb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
+@@ -310,9 +310,9 @@ static void mlx5e_macsec_destroy_object(struct mlx5_core_dev *mdev, u32 macsec_o
+ 	mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+ }
+ 
+-static void mlx5e_macsec_cleanup_sa(struct mlx5e_macsec *macsec,
+-				    struct mlx5e_macsec_sa *sa,
+-				    bool is_tx, struct net_device *netdev, u32 fs_id)
++static void mlx5e_macsec_cleanup_sa_fs(struct mlx5e_macsec *macsec,
++				       struct mlx5e_macsec_sa *sa, bool is_tx,
++				       struct net_device *netdev, u32 fs_id)
+ {
+ 	int action =  (is_tx) ?  MLX5_ACCEL_MACSEC_ACTION_ENCRYPT :
+ 				 MLX5_ACCEL_MACSEC_ACTION_DECRYPT;
+@@ -322,20 +322,49 @@ static void mlx5e_macsec_cleanup_sa(struct mlx5e_macsec *macsec,
+ 
+ 	mlx5_macsec_fs_del_rule(macsec->mdev->macsec_fs, sa->macsec_rule, action, netdev,
+ 				fs_id);
+-	mlx5e_macsec_destroy_object(macsec->mdev, sa->macsec_obj_id);
+ 	sa->macsec_rule = NULL;
+ }
+ 
++static void mlx5e_macsec_cleanup_sa(struct mlx5e_macsec *macsec,
++				    struct mlx5e_macsec_sa *sa, bool is_tx,
++				    struct net_device *netdev, u32 fs_id)
++{
++	mlx5e_macsec_cleanup_sa_fs(macsec, sa, is_tx, netdev, fs_id);
++	mlx5e_macsec_destroy_object(macsec->mdev, sa->macsec_obj_id);
++}
++
++static int mlx5e_macsec_init_sa_fs(struct macsec_context *ctx,
++				   struct mlx5e_macsec_sa *sa, bool encrypt,
++				   bool is_tx, u32 *fs_id)
++{
++	struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev);
++	struct mlx5_macsec_fs *macsec_fs = priv->mdev->macsec_fs;
++	struct mlx5_macsec_rule_attrs rule_attrs;
++	union mlx5_macsec_rule *macsec_rule;
++
++	rule_attrs.macsec_obj_id = sa->macsec_obj_id;
++	rule_attrs.sci = sa->sci;
++	rule_attrs.assoc_num = sa->assoc_num;
++	rule_attrs.action = (is_tx) ? MLX5_ACCEL_MACSEC_ACTION_ENCRYPT :
++				      MLX5_ACCEL_MACSEC_ACTION_DECRYPT;
++
++	macsec_rule = mlx5_macsec_fs_add_rule(macsec_fs, ctx, &rule_attrs, fs_id);
++	if (!macsec_rule)
++		return -ENOMEM;
++
++	sa->macsec_rule = macsec_rule;
++
++	return 0;
++}
++
+ static int mlx5e_macsec_init_sa(struct macsec_context *ctx,
+ 				struct mlx5e_macsec_sa *sa,
+ 				bool encrypt, bool is_tx, u32 *fs_id)
+ {
+ 	struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev);
+ 	struct mlx5e_macsec *macsec = priv->macsec;
+-	struct mlx5_macsec_rule_attrs rule_attrs;
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	struct mlx5_macsec_obj_attrs obj_attrs;
+-	union mlx5_macsec_rule *macsec_rule;
+ 	int err;
+ 
+ 	obj_attrs.next_pn = sa->next_pn;
+@@ -357,20 +386,12 @@ static int mlx5e_macsec_init_sa(struct macsec_context *ctx,
+ 	if (err)
+ 		return err;
+ 
+-	rule_attrs.macsec_obj_id = sa->macsec_obj_id;
+-	rule_attrs.sci = sa->sci;
+-	rule_attrs.assoc_num = sa->assoc_num;
+-	rule_attrs.action = (is_tx) ? MLX5_ACCEL_MACSEC_ACTION_ENCRYPT :
+-				      MLX5_ACCEL_MACSEC_ACTION_DECRYPT;
+-
+-	macsec_rule = mlx5_macsec_fs_add_rule(mdev->macsec_fs, ctx, &rule_attrs, fs_id);
+-	if (!macsec_rule) {
+-		err = -ENOMEM;
+-		goto destroy_macsec_object;
++	if (sa->active) {
++		err = mlx5e_macsec_init_sa_fs(ctx, sa, encrypt, is_tx, fs_id);
++		if (err)
++			goto destroy_macsec_object;
+ 	}
+ 
+-	sa->macsec_rule = macsec_rule;
+-
+ 	return 0;
+ 
+ destroy_macsec_object:
+@@ -526,9 +547,7 @@ static int mlx5e_macsec_add_txsa(struct macsec_context *ctx)
+ 		goto destroy_sa;
+ 
+ 	macsec_device->tx_sa[assoc_num] = tx_sa;
+-	if (!secy->operational ||
+-	    assoc_num != tx_sc->encoding_sa ||
+-	    !tx_sa->active)
++	if (!secy->operational)
+ 		goto out;
+ 
+ 	err = mlx5e_macsec_init_sa(ctx, tx_sa, tx_sc->encrypt, true, NULL);
+@@ -595,7 +614,7 @@ static int mlx5e_macsec_upd_txsa(struct macsec_context *ctx)
+ 		goto out;
+ 
+ 	if (ctx_tx_sa->active) {
+-		err = mlx5e_macsec_init_sa(ctx, tx_sa, tx_sc->encrypt, true, NULL);
++		err = mlx5e_macsec_init_sa_fs(ctx, tx_sa, tx_sc->encrypt, true, NULL);
+ 		if (err)
+ 			goto out;
+ 	} else {
+@@ -604,7 +623,7 @@ static int mlx5e_macsec_upd_txsa(struct macsec_context *ctx)
+ 			goto out;
+ 		}
+ 
+-		mlx5e_macsec_cleanup_sa(macsec, tx_sa, true, ctx->secy->netdev, 0);
++		mlx5e_macsec_cleanup_sa_fs(macsec, tx_sa, true, ctx->secy->netdev, 0);
+ 	}
+ out:
+ 	mutex_unlock(&macsec->lock);
+@@ -1030,8 +1049,9 @@ static int mlx5e_macsec_del_rxsa(struct macsec_context *ctx)
+ 		goto out;
+ 	}
+ 
+-	mlx5e_macsec_cleanup_sa(macsec, rx_sa, false, ctx->secy->netdev,
+-				rx_sc->sc_xarray_element->fs_id);
++	if (rx_sa->active)
++		mlx5e_macsec_cleanup_sa(macsec, rx_sa, false, ctx->secy->netdev,
++					rx_sc->sc_xarray_element->fs_id);
+ 	mlx5_destroy_encryption_key(macsec->mdev, rx_sa->enc_key_id);
+ 	kfree(rx_sa);
+ 	rx_sc->rx_sa[assoc_num] = NULL;
+@@ -1112,8 +1132,8 @@ static int macsec_upd_secy_hw_address(struct macsec_context *ctx,
+ 			if (!rx_sa || !rx_sa->macsec_rule)
+ 				continue;
+ 
+-			mlx5e_macsec_cleanup_sa(macsec, rx_sa, false, ctx->secy->netdev,
+-						rx_sc->sc_xarray_element->fs_id);
++			mlx5e_macsec_cleanup_sa_fs(macsec, rx_sa, false, ctx->secy->netdev,
++						   rx_sc->sc_xarray_element->fs_id);
+ 		}
+ 	}
+ 
+@@ -1124,8 +1144,8 @@ static int macsec_upd_secy_hw_address(struct macsec_context *ctx,
+ 				continue;
+ 
+ 			if (rx_sa->active) {
+-				err = mlx5e_macsec_init_sa(ctx, rx_sa, true, false,
+-							   &rx_sc->sc_xarray_element->fs_id);
++				err = mlx5e_macsec_init_sa_fs(ctx, rx_sa, true, false,
++							      &rx_sc->sc_xarray_element->fs_id);
+ 				if (err)
+ 					goto out;
+ 			}
+@@ -1178,7 +1198,7 @@ static int mlx5e_macsec_upd_secy(struct macsec_context *ctx)
+ 		if (!tx_sa)
+ 			continue;
+ 
+-		mlx5e_macsec_cleanup_sa(macsec, tx_sa, true, ctx->secy->netdev, 0);
++		mlx5e_macsec_cleanup_sa_fs(macsec, tx_sa, true, ctx->secy->netdev, 0);
+ 	}
+ 
+ 	for (i = 0; i < MACSEC_NUM_AN; ++i) {
+@@ -1187,7 +1207,7 @@ static int mlx5e_macsec_upd_secy(struct macsec_context *ctx)
+ 			continue;
+ 
+ 		if (tx_sa->assoc_num == tx_sc->encoding_sa && tx_sa->active) {
+-			err = mlx5e_macsec_init_sa(ctx, tx_sa, tx_sc->encrypt, true, NULL);
++			err = mlx5e_macsec_init_sa_fs(ctx, tx_sa, tx_sc->encrypt, true, NULL);
+ 			if (err)
+ 				goto out;
+ 		}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index f0b506e562df3..1ead69c5f5fa3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -401,6 +401,8 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 		mlx5e_skb_cb_hwtstamp_init(skb);
+ 		mlx5e_ptp_metadata_map_put(&sq->ptpsq->metadata_map, skb,
+ 					   metadata_index);
++		/* ensure skb is put on metadata_map before tracking the index */
++		wmb();
+ 		mlx5e_ptpsq_track_metadata(sq->ptpsq, metadata_index);
+ 		if (!netif_tx_queue_stopped(sq->txq) &&
+ 		    mlx5e_ptpsq_metadata_freelist_empty(sq->ptpsq)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+index 190f10aba1702..5a0047bdcb510 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+@@ -152,7 +152,7 @@ void mlx5_esw_ipsec_restore_dest_uplink(struct mlx5_core_dev *mdev)
+ 
+ 	xa_for_each(&esw->offloads.vport_reps, i, rep) {
+ 		rpriv = rep->rep_data[REP_ETH].priv;
+-		if (!rpriv || !rpriv->netdev || !atomic_read(&rpriv->tc_ht.nelems))
++		if (!rpriv || !rpriv->netdev)
+ 			continue;
+ 
+ 		rhashtable_walk_enter(&rpriv->tc_ht, &iter);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index b0455134c98ef..baaae628b0a0f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -535,21 +535,26 @@ esw_src_port_rewrite_supported(struct mlx5_eswitch *esw)
+ }
+ 
+ static bool
+-esw_dests_to_vf_pf_vports(struct mlx5_flow_destination *dests, int max_dest)
++esw_dests_to_int_external(struct mlx5_flow_destination *dests, int max_dest)
+ {
+-	bool vf_dest = false, pf_dest = false;
++	bool internal_dest = false, external_dest = false;
+ 	int i;
+ 
+ 	for (i = 0; i < max_dest; i++) {
+-		if (dests[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
++		if (dests[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT &&
++		    dests[i].type != MLX5_FLOW_DESTINATION_TYPE_UPLINK)
+ 			continue;
+ 
+-		if (dests[i].vport.num == MLX5_VPORT_UPLINK)
+-			pf_dest = true;
++		/* Uplink dest is external, but considered as internal
++		 * if there is reformat because firmware uses LB+hairpin to support it.
++		 */
++		if (dests[i].vport.num == MLX5_VPORT_UPLINK &&
++		    !(dests[i].vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID))
++			external_dest = true;
+ 		else
+-			vf_dest = true;
++			internal_dest = true;
+ 
+-		if (vf_dest && pf_dest)
++		if (internal_dest && external_dest)
+ 			return true;
+ 	}
+ 
+@@ -695,9 +700,9 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
+ 
+ 		/* Header rewrite with combined wire+loopback in FDB is not allowed */
+ 		if ((flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) &&
+-		    esw_dests_to_vf_pf_vports(dest, i)) {
++		    esw_dests_to_int_external(dest, i)) {
+ 			esw_warn(esw->dev,
+-				 "FDB: Header rewrite with forwarding to both PF and VF is not allowed\n");
++				 "FDB: Header rewrite with forwarding to both internal and external dests is not allowed\n");
+ 			rule = ERR_PTR(-EINVAL);
+ 			goto err_esw_get;
+ 		}
+@@ -3658,22 +3663,6 @@ static int esw_inline_mode_to_devlink(u8 mlx5_mode, u8 *mode)
+ 	return 0;
+ }
+ 
+-static bool esw_offloads_devlink_ns_eq_netdev_ns(struct devlink *devlink)
+-{
+-	struct mlx5_core_dev *dev = devlink_priv(devlink);
+-	struct net *devl_net, *netdev_net;
+-	bool ret = false;
+-
+-	mutex_lock(&dev->mlx5e_res.uplink_netdev_lock);
+-	if (dev->mlx5e_res.uplink_netdev) {
+-		netdev_net = dev_net(dev->mlx5e_res.uplink_netdev);
+-		devl_net = devlink_net(devlink);
+-		ret = net_eq(devl_net, netdev_net);
+-	}
+-	mutex_unlock(&dev->mlx5e_res.uplink_netdev_lock);
+-	return ret;
+-}
+-
+ int mlx5_eswitch_block_mode(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_eswitch *esw = dev->priv.eswitch;
+@@ -3718,13 +3707,6 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
+ 	if (esw_mode_from_devlink(mode, &mlx5_mode))
+ 		return -EINVAL;
+ 
+-	if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV &&
+-	    !esw_offloads_devlink_ns_eq_netdev_ns(devlink)) {
+-		NL_SET_ERR_MSG_MOD(extack,
+-				   "Can't change E-Switch mode to switchdev when netdev net namespace has diverged from the devlink's.");
+-		return -EPERM;
+-	}
+-
+ 	mlx5_lag_disable_change(esw->dev);
+ 	err = mlx5_esw_try_lock(esw);
+ 	if (err < 0) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index c4e19d627da21..3a9cdf79403ae 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -679,19 +679,30 @@ void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 
++	if (!fw_reset)
++		return;
++
+ 	MLX5_NB_INIT(&fw_reset->nb, fw_reset_event_notifier, GENERAL_EVENT);
+ 	mlx5_eq_notifier_register(dev, &fw_reset->nb);
+ }
+ 
+ void mlx5_fw_reset_events_stop(struct mlx5_core_dev *dev)
+ {
+-	mlx5_eq_notifier_unregister(dev, &dev->priv.fw_reset->nb);
++	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
++
++	if (!fw_reset)
++		return;
++
++	mlx5_eq_notifier_unregister(dev, &fw_reset->nb);
+ }
+ 
+ void mlx5_drain_fw_reset(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 
++	if (!fw_reset)
++		return;
++
+ 	set_bit(MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS, &fw_reset->reset_flags);
+ 	cancel_work_sync(&fw_reset->fw_live_patch_work);
+ 	cancel_work_sync(&fw_reset->reset_request_work);
+@@ -709,9 +720,13 @@ static const struct devlink_param mlx5_fw_reset_devlink_params[] = {
+ 
+ int mlx5_fw_reset_init(struct mlx5_core_dev *dev)
+ {
+-	struct mlx5_fw_reset *fw_reset = kzalloc(sizeof(*fw_reset), GFP_KERNEL);
++	struct mlx5_fw_reset *fw_reset;
+ 	int err;
+ 
++	if (!MLX5_CAP_MCAM_REG(dev, mfrl))
++		return 0;
++
++	fw_reset = kzalloc(sizeof(*fw_reset), GFP_KERNEL);
+ 	if (!fw_reset)
+ 		return -ENOMEM;
+ 	fw_reset->wq = create_singlethread_workqueue("mlx5_fw_reset_events");
+@@ -747,6 +762,9 @@ void mlx5_fw_reset_cleanup(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 
++	if (!fw_reset)
++		return;
++
+ 	devl_params_unregister(priv_to_devlink(dev),
+ 			       mlx5_fw_reset_devlink_params,
+ 			       ARRAY_SIZE(mlx5_fw_reset_devlink_params));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index 8ff6dc9bc8033..b5c709bba1553 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -452,10 +452,10 @@ mlx5_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
+ 	struct health_buffer __iomem *h = health->health;
+ 	u8 synd = ioread8(&h->synd);
+ 
++	devlink_fmsg_u8_pair_put(fmsg, "Syndrome", synd);
+ 	if (!synd)
+ 		return 0;
+ 
+-	devlink_fmsg_u8_pair_put(fmsg, "Syndrome", synd);
+ 	devlink_fmsg_string_pair_put(fmsg, "Description", hsynd_str(synd));
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_mactable.c b/drivers/net/ethernet/microchip/sparx5/sparx5_mactable.c
+index 4af285918ea2a..75868b3f548ec 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_mactable.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_mactable.c
+@@ -347,10 +347,10 @@ int sparx5_del_mact_entry(struct sparx5 *sparx5,
+ 				 list) {
+ 		if ((vid == 0 || mact_entry->vid == vid) &&
+ 		    ether_addr_equal(addr, mact_entry->mac)) {
++			sparx5_mact_forget(sparx5, addr, mact_entry->vid);
++
+ 			list_del(&mact_entry->list);
+ 			devm_kfree(sparx5->dev, mact_entry);
+-
+-			sparx5_mact_forget(sparx5, addr, mact_entry->vid);
+ 		}
+ 	}
+ 	mutex_unlock(&sparx5->mact_lock);
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index acd9c615d1f4f..356da958ee81b 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -221,7 +221,7 @@ static void geneve_rx(struct geneve_dev *geneve, struct geneve_sock *gs,
+ 	struct genevehdr *gnvh = geneve_hdr(skb);
+ 	struct metadata_dst *tun_dst = NULL;
+ 	unsigned int len;
+-	int err = 0;
++	int nh, err = 0;
+ 	void *oiph;
+ 
+ 	if (ip_tunnel_collect_metadata() || gs->collect_md) {
+@@ -272,9 +272,23 @@ static void geneve_rx(struct geneve_dev *geneve, struct geneve_sock *gs,
+ 		skb->pkt_type = PACKET_HOST;
+ 	}
+ 
+-	oiph = skb_network_header(skb);
++	/* Save offset of outer header relative to skb->head,
++	 * because we are going to reset the network header to the inner header
++	 * and might change skb->head.
++	 */
++	nh = skb_network_header(skb) - skb->head;
++
+ 	skb_reset_network_header(skb);
+ 
++	if (!pskb_inet_may_pull(skb)) {
++		DEV_STATS_INC(geneve->dev, rx_length_errors);
++		DEV_STATS_INC(geneve->dev, rx_errors);
++		goto drop;
++	}
++
++	/* Get the outer header. */
++	oiph = skb->head + nh;
++
+ 	if (geneve_get_sk_family(gs) == AF_INET)
+ 		err = IP_ECN_decapsulate(oiph, skb);
+ #if IS_ENABLED(CONFIG_IPV6)
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index a2dde84499fdd..f0fb9cd1ff56c 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -3137,7 +3137,8 @@ static int lan78xx_open(struct net_device *net)
+ done:
+ 	mutex_unlock(&dev->dev_mutex);
+ 
+-	usb_autopm_put_interface(dev->intf);
++	if (ret < 0)
++		usb_autopm_put_interface(dev->intf);
+ 
+ 	return ret;
+ }
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index c98aeda8abb21..3d9721b3faa81 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -447,5 +447,6 @@ const struct file_operations erofs_file_fops = {
+ 	.llseek		= generic_file_llseek,
+ 	.read_iter	= erofs_file_read_iter,
+ 	.mmap		= erofs_file_mmap,
++	.get_unmapped_area = thp_get_unmapped_area,
+ 	.splice_read	= filemap_splice_read,
+ };
+diff --git a/include/dt-bindings/dma/fsl-edma.h b/include/dt-bindings/dma/fsl-edma.h
+new file mode 100644
+index 0000000000000..fd11478cfe9cc
+--- /dev/null
++++ b/include/dt-bindings/dma/fsl-edma.h
+@@ -0,0 +1,21 @@
++/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
++
++#ifndef _FSL_EDMA_DT_BINDING_H_
++#define _FSL_EDMA_DT_BINDING_H_
++
++/* Receive Channel */
++#define FSL_EDMA_RX		0x1
++
++/* iMX8 audio remote DMA */
++#define FSL_EDMA_REMOTE		0x2
++
++/* FIFO is continue memory region */
++#define FSL_EDMA_MULTI_FIFO	0x4
++
++/* Channel need stick to even channel */
++#define FSL_EDMA_EVEN_CH	0x8
++
++/* Channel need stick to odd channel */
++#define FSL_EDMA_ODD_CH		0x10
++
++#endif
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index fc8094419084f..e990c180282e7 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -75,6 +75,8 @@ extern ssize_t cpu_show_spec_rstack_overflow(struct device *dev,
+ 					     struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_gds(struct device *dev,
+ 			    struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
++					       struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 77cd2e13724e7..bfc8320fb46cb 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -10215,7 +10215,9 @@ struct mlx5_ifc_mcam_access_reg_bits {
+ 
+ 	u8         regs_63_to_46[0x12];
+ 	u8         mrtc[0x1];
+-	u8         regs_44_to_32[0xd];
++	u8         regs_44_to_41[0x4];
++	u8         mfrl[0x1];
++	u8         regs_39_to_32[0x8];
+ 
+ 	u8         regs_31_to_10[0x16];
+ 	u8         mtmp[0x1];
+diff --git a/include/trace/events/qdisc.h b/include/trace/events/qdisc.h
+index a3995925cb057..1f4258308b967 100644
+--- a/include/trace/events/qdisc.h
++++ b/include/trace/events/qdisc.h
+@@ -81,14 +81,14 @@ TRACE_EVENT(qdisc_reset,
+ 	TP_ARGS(q),
+ 
+ 	TP_STRUCT__entry(
+-		__string(	dev,		qdisc_dev(q)	)
+-		__string(	kind,		q->ops->id	)
+-		__field(	u32,		parent		)
+-		__field(	u32,		handle		)
++		__string(	dev,		qdisc_dev(q)->name	)
++		__string(	kind,		q->ops->id		)
++		__field(	u32,		parent			)
++		__field(	u32,		handle			)
+ 	),
+ 
+ 	TP_fast_assign(
+-		__assign_str(dev, qdisc_dev(q));
++		__assign_str(dev, qdisc_dev(q)->name);
+ 		__assign_str(kind, q->ops->id);
+ 		__entry->parent = q->parent;
+ 		__entry->handle = q->handle;
+@@ -106,14 +106,14 @@ TRACE_EVENT(qdisc_destroy,
+ 	TP_ARGS(q),
+ 
+ 	TP_STRUCT__entry(
+-		__string(	dev,		qdisc_dev(q)	)
+-		__string(	kind,		q->ops->id	)
+-		__field(	u32,		parent		)
+-		__field(	u32,		handle		)
++		__string(	dev,		qdisc_dev(q)->name	)
++		__string(	kind,		q->ops->id		)
++		__field(	u32,		parent			)
++		__field(	u32,		handle			)
+ 	),
+ 
+ 	TP_fast_assign(
+-		__assign_str(dev, qdisc_dev(q));
++		__assign_str(dev, qdisc_dev(q)->name);
+ 		__assign_str(kind, q->ops->id);
+ 		__entry->parent = q->parent;
+ 		__entry->handle = q->handle;
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 8a0bb80fe48a3..ef82ffc90cbe9 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -178,7 +178,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ 				    void **frames, int n,
+ 				    struct xdp_cpumap_stats *stats)
+ {
+-	struct xdp_rxq_info rxq;
++	struct xdp_rxq_info rxq = {};
+ 	struct xdp_buff xdp;
+ 	int i, nframes = 0;
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index e215413c79a52..9698e93d48c6e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -16686,6 +16686,9 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
+ {
+ 	int i;
+ 
++	if (old->callback_depth > cur->callback_depth)
++		return false;
++
+ 	for (i = 0; i < MAX_BPF_REG; i++)
+ 		if (!regsafe(env, &old->regs[i], &cur->regs[i],
+ 			     &env->idmap_scratch, exact))
+diff --git a/mm/readahead.c b/mm/readahead.c
+index 6925e6959fd3f..1d1a84deb5bc5 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -469,7 +469,7 @@ static inline int ra_alloc_folio(struct readahead_control *ractl, pgoff_t index,
+ 
+ 	if (!folio)
+ 		return -ENOMEM;
+-	mark = round_up(mark, 1UL << order);
++	mark = round_down(mark, 1UL << order);
+ 	if (index == mark)
+ 		folio_set_readahead(folio);
+ 	err = filemap_add_folio(ractl->mapping, folio, index, gfp);
+@@ -577,7 +577,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
+ 	 * It's the expected callback index, assume sequential access.
+ 	 * Ramp up sizes, and push forward the readahead window.
+ 	 */
+-	expected = round_up(ra->start + ra->size - ra->async_size,
++	expected = round_down(ra->start + ra->size - ra->async_size,
+ 			1UL << order);
+ 	if (index == expected || index == (ra->start + ra->size)) {
+ 		ra->start += ra->size;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ea1dec8448fce..ef815ba583a8f 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5332,19 +5332,7 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 	err_nh = NULL;
+ 	list_for_each_entry(nh, &rt6_nh_list, next) {
+ 		err = __ip6_ins_rt(nh->fib6_info, info, extack);
+-		fib6_info_release(nh->fib6_info);
+-
+-		if (!err) {
+-			/* save reference to last route successfully inserted */
+-			rt_last = nh->fib6_info;
+-
+-			/* save reference to first route for notification */
+-			if (!rt_notif)
+-				rt_notif = nh->fib6_info;
+-		}
+ 
+-		/* nh->fib6_info is used or freed at this point, reset to NULL*/
+-		nh->fib6_info = NULL;
+ 		if (err) {
+ 			if (replace && nhn)
+ 				NL_SET_ERR_MSG_MOD(extack,
+@@ -5352,6 +5340,12 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 			err_nh = nh;
+ 			goto add_errout;
+ 		}
++		/* save reference to last route successfully inserted */
++		rt_last = nh->fib6_info;
++
++		/* save reference to first route for notification */
++		if (!rt_notif)
++			rt_notif = nh->fib6_info;
+ 
+ 		/* Because each route is added like a single route we remove
+ 		 * these flags after the first nexthop: if there is a collision,
+@@ -5412,8 +5406,7 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
+ 
+ cleanup:
+ 	list_for_each_entry_safe(nh, nh_safe, &rt6_nh_list, next) {
+-		if (nh->fib6_info)
+-			fib6_info_release(nh->fib6_info);
++		fib6_info_release(nh->fib6_info);
+ 		list_del(&nh->next);
+ 		kfree(nh);
+ 	}
+diff --git a/net/netfilter/nf_conntrack_h323_asn1.c b/net/netfilter/nf_conntrack_h323_asn1.c
+index e697a824b0018..540d97715bd23 100644
+--- a/net/netfilter/nf_conntrack_h323_asn1.c
++++ b/net/netfilter/nf_conntrack_h323_asn1.c
+@@ -533,6 +533,8 @@ static int decode_seq(struct bitstr *bs, const struct field_t *f,
+ 	/* Get fields bitmap */
+ 	if (nf_h323_error_boundary(bs, 0, f->sz))
+ 		return H323_ERROR_BOUND;
++	if (f->sz > 32)
++		return H323_ERROR_RANGE;
+ 	bmp = get_bitmap(bs, f->sz);
+ 	if (base)
+ 		*(unsigned int *)base = bmp;
+@@ -589,6 +591,8 @@ static int decode_seq(struct bitstr *bs, const struct field_t *f,
+ 	bmp2_len = get_bits(bs, 7) + 1;
+ 	if (nf_h323_error_boundary(bs, 0, bmp2_len))
+ 		return H323_ERROR_BOUND;
++	if (bmp2_len > 32)
++		return H323_ERROR_RANGE;
+ 	bmp2 = get_bitmap(bs, bmp2_len);
+ 	bmp |= bmp2 >> f->sz;
+ 	if (base)
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index bfd3e5a14dab6..255640013ab84 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -1256,14 +1256,13 @@ static int nft_ct_expect_obj_init(const struct nft_ctx *ctx,
+ 	switch (priv->l3num) {
+ 	case NFPROTO_IPV4:
+ 	case NFPROTO_IPV6:
+-		if (priv->l3num != ctx->family)
+-			return -EINVAL;
++		if (priv->l3num == ctx->family || ctx->family == NFPROTO_INET)
++			break;
+ 
+-		fallthrough;
+-	case NFPROTO_INET:
+-		break;
++		return -EINVAL;
++	case NFPROTO_INET: /* tuple.src.l3num supports NFPROTO_IPV4/6 only */
+ 	default:
+-		return -EOPNOTSUPP;
++		return -EAFNOSUPPORT;
+ 	}
+ 
+ 	priv->l4proto = nla_get_u8(tb[NFTA_CT_EXPECT_L4PROTO]);
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index 0eed00184adf4..104a80b75477f 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -453,16 +453,16 @@ static int nr_create(struct net *net, struct socket *sock, int protocol,
+ 	nr_init_timers(sk);
+ 
+ 	nr->t1     =
+-		msecs_to_jiffies(sysctl_netrom_transport_timeout);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_timeout));
+ 	nr->t2     =
+-		msecs_to_jiffies(sysctl_netrom_transport_acknowledge_delay);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_acknowledge_delay));
+ 	nr->n2     =
+-		msecs_to_jiffies(sysctl_netrom_transport_maximum_tries);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_maximum_tries));
+ 	nr->t4     =
+-		msecs_to_jiffies(sysctl_netrom_transport_busy_delay);
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_busy_delay));
+ 	nr->idle   =
+-		msecs_to_jiffies(sysctl_netrom_transport_no_activity_timeout);
+-	nr->window = sysctl_netrom_transport_requested_window_size;
++		msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_no_activity_timeout));
++	nr->window = READ_ONCE(sysctl_netrom_transport_requested_window_size);
+ 
+ 	nr->bpqext = 1;
+ 	nr->state  = NR_STATE_0;
+@@ -954,7 +954,7 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev)
+ 		 * G8PZT's Xrouter which is sending packets with command type 7
+ 		 * as an extension of the protocol.
+ 		 */
+-		if (sysctl_netrom_reset_circuit &&
++		if (READ_ONCE(sysctl_netrom_reset_circuit) &&
+ 		    (frametype != NR_RESET || flags != 0))
+ 			nr_transmit_reset(skb, 1);
+ 
+diff --git a/net/netrom/nr_dev.c b/net/netrom/nr_dev.c
+index 3aaac4a22b387..2c34389c3ce6f 100644
+--- a/net/netrom/nr_dev.c
++++ b/net/netrom/nr_dev.c
+@@ -81,7 +81,7 @@ static int nr_header(struct sk_buff *skb, struct net_device *dev,
+ 	buff[6] |= AX25_SSSID_SPARE;
+ 	buff    += AX25_ADDR_LEN;
+ 
+-	*buff++ = sysctl_netrom_network_ttl_initialiser;
++	*buff++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 
+ 	*buff++ = NR_PROTO_IP;
+ 	*buff++ = NR_PROTO_IP;
+diff --git a/net/netrom/nr_in.c b/net/netrom/nr_in.c
+index 2f084b6f69d7e..97944db6b5ac6 100644
+--- a/net/netrom/nr_in.c
++++ b/net/netrom/nr_in.c
+@@ -97,7 +97,7 @@ static int nr_state1_machine(struct sock *sk, struct sk_buff *skb,
+ 		break;
+ 
+ 	case NR_RESET:
+-		if (sysctl_netrom_reset_circuit)
++		if (READ_ONCE(sysctl_netrom_reset_circuit))
+ 			nr_disconnect(sk, ECONNRESET);
+ 		break;
+ 
+@@ -128,7 +128,7 @@ static int nr_state2_machine(struct sock *sk, struct sk_buff *skb,
+ 		break;
+ 
+ 	case NR_RESET:
+-		if (sysctl_netrom_reset_circuit)
++		if (READ_ONCE(sysctl_netrom_reset_circuit))
+ 			nr_disconnect(sk, ECONNRESET);
+ 		break;
+ 
+@@ -262,7 +262,7 @@ static int nr_state3_machine(struct sock *sk, struct sk_buff *skb, int frametype
+ 		break;
+ 
+ 	case NR_RESET:
+-		if (sysctl_netrom_reset_circuit)
++		if (READ_ONCE(sysctl_netrom_reset_circuit))
+ 			nr_disconnect(sk, ECONNRESET);
+ 		break;
+ 
+diff --git a/net/netrom/nr_out.c b/net/netrom/nr_out.c
+index 44929657f5b71..5e531394a724b 100644
+--- a/net/netrom/nr_out.c
++++ b/net/netrom/nr_out.c
+@@ -204,7 +204,7 @@ void nr_transmit_buffer(struct sock *sk, struct sk_buff *skb)
+ 	dptr[6] |= AX25_SSSID_SPARE;
+ 	dptr += AX25_ADDR_LEN;
+ 
+-	*dptr++ = sysctl_netrom_network_ttl_initialiser;
++	*dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 
+ 	if (!nr_route_frame(skb, NULL)) {
+ 		kfree_skb(skb);
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index baea3cbd76ca5..70480869ad1c5 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -153,7 +153,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
+ 		nr_neigh->digipeat = NULL;
+ 		nr_neigh->ax25     = NULL;
+ 		nr_neigh->dev      = dev;
+-		nr_neigh->quality  = sysctl_netrom_default_path_quality;
++		nr_neigh->quality  = READ_ONCE(sysctl_netrom_default_path_quality);
+ 		nr_neigh->locked   = 0;
+ 		nr_neigh->count    = 0;
+ 		nr_neigh->number   = nr_neigh_no++;
+@@ -728,7 +728,7 @@ void nr_link_failed(ax25_cb *ax25, int reason)
+ 	nr_neigh->ax25 = NULL;
+ 	ax25_cb_put(ax25);
+ 
+-	if (++nr_neigh->failed < sysctl_netrom_link_fails_count) {
++	if (++nr_neigh->failed < READ_ONCE(sysctl_netrom_link_fails_count)) {
+ 		nr_neigh_put(nr_neigh);
+ 		return;
+ 	}
+@@ -766,7 +766,7 @@ int nr_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 	if (ax25 != NULL) {
+ 		ret = nr_add_node(nr_src, "", &ax25->dest_addr, ax25->digipeat,
+ 				  ax25->ax25_dev->dev, 0,
+-				  sysctl_netrom_obsolescence_count_initialiser);
++				  READ_ONCE(sysctl_netrom_obsolescence_count_initialiser));
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -780,7 +780,7 @@ int nr_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 		return ret;
+ 	}
+ 
+-	if (!sysctl_netrom_routing_control && ax25 != NULL)
++	if (!READ_ONCE(sysctl_netrom_routing_control) && ax25 != NULL)
+ 		return 0;
+ 
+ 	/* Its Time-To-Live has expired */
+diff --git a/net/netrom/nr_subr.c b/net/netrom/nr_subr.c
+index e2d2af924cff4..c3bbd5880850b 100644
+--- a/net/netrom/nr_subr.c
++++ b/net/netrom/nr_subr.c
+@@ -182,7 +182,8 @@ void nr_write_internal(struct sock *sk, int frametype)
+ 		*dptr++ = nr->my_id;
+ 		*dptr++ = frametype;
+ 		*dptr++ = nr->window;
+-		if (nr->bpqext) *dptr++ = sysctl_netrom_network_ttl_initialiser;
++		if (nr->bpqext)
++			*dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 		break;
+ 
+ 	case NR_DISCREQ:
+@@ -236,7 +237,7 @@ void __nr_transmit_reply(struct sk_buff *skb, int mine, unsigned char cmdflags)
+ 	dptr[6] |= AX25_SSSID_SPARE;
+ 	dptr += AX25_ADDR_LEN;
+ 
+-	*dptr++ = sysctl_netrom_network_ttl_initialiser;
++	*dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser);
+ 
+ 	if (mine) {
+ 		*dptr++ = 0;
+diff --git a/net/rds/rdma.c b/net/rds/rdma.c
+index fba82d36593ad..a4e3c5de998be 100644
+--- a/net/rds/rdma.c
++++ b/net/rds/rdma.c
+@@ -301,6 +301,9 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
+ 			kfree(sg);
+ 		}
+ 		ret = PTR_ERR(trans_private);
++		/* Trigger connection so that its ready for the next retry */
++		if (ret == -ENODEV)
++			rds_conn_connect_if_down(cp->cp_conn);
+ 		goto out;
+ 	}
+ 
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 5e57a1581dc60..2899def23865f 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1313,12 +1313,8 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 
+ 	/* Parse any control messages the user may have included. */
+ 	ret = rds_cmsg_send(rs, rm, msg, &allocated_mr, &vct);
+-	if (ret) {
+-		/* Trigger connection so that its ready for the next retry */
+-		if (ret ==  -EAGAIN)
+-			rds_conn_connect_if_down(conn);
++	if (ret)
+ 		goto out;
+-	}
+ 
+ 	if (rm->rdma.op_active && !conn->c_trans->xmit_rdma) {
+ 		printk_ratelimited(KERN_NOTICE "rdma_op %p conn xmit_rdma %p\n",
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index 3784534c91855..653e51ae39648 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -407,7 +407,7 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 	struct xfrm_dst *xdst = (struct xfrm_dst *)dst;
+ 	struct net_device *dev = x->xso.dev;
+ 
+-	if (!x->type_offload || x->encap)
++	if (!x->type_offload)
+ 		return false;
+ 
+ 	if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET ||
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index c13dc3ef79107..e69d588caa0c6 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3416,7 +3416,7 @@ decode_session4(const struct xfrm_flow_keys *flkeys, struct flowi *fl, bool reve
+ 	}
+ 
+ 	fl4->flowi4_proto = flkeys->basic.ip_proto;
+-	fl4->flowi4_tos = flkeys->ip.tos;
++	fl4->flowi4_tos = flkeys->ip.tos & ~INET_ECN_MASK;
+ }
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c b/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
+index c3b45745cbccd..6d8b54124cb35 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
++++ b/tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
+@@ -511,7 +511,7 @@ static void test_xdp_bonding_features(struct skeletons *skeletons)
+ 	if (!ASSERT_OK(err, "bond bpf_xdp_query"))
+ 		goto out;
+ 
+-	if (!ASSERT_EQ(query_opts.feature_flags, NETDEV_XDP_ACT_MASK,
++	if (!ASSERT_EQ(query_opts.feature_flags, 0,
+ 		       "bond query_opts.feature_flags"))
+ 		goto out;
+ 
+@@ -601,7 +601,7 @@ static void test_xdp_bonding_features(struct skeletons *skeletons)
+ 	if (!ASSERT_OK(err, "bond bpf_xdp_query"))
+ 		goto out;
+ 
+-	ASSERT_EQ(query_opts.feature_flags, NETDEV_XDP_ACT_MASK,
++	ASSERT_EQ(query_opts.feature_flags, 0,
+ 		  "bond query_opts.feature_flags");
+ out:
+ 	bpf_link__destroy(link);
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index 9096bf5794888..25693b37f820d 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -302,12 +302,12 @@ done
+ 
+ setup
+ run_test 10 10 0 0 "balanced bwidth"
+-run_test 10 10 1 50 "balanced bwidth with unbalanced delay"
++run_test 10 10 1 25 "balanced bwidth with unbalanced delay"
+ 
+ # we still need some additional infrastructure to pass the following test-cases
+-run_test 30 10 0 0 "unbalanced bwidth"
+-run_test 30 10 1 50 "unbalanced bwidth with unbalanced delay"
+-run_test 30 10 50 1 "unbalanced bwidth with opposed, unbalanced delay"
++run_test 10 3 0 0 "unbalanced bwidth"
++run_test 10 3 1 25 "unbalanced bwidth with unbalanced delay"
++run_test 10 3 25 1 "unbalanced bwidth with opposed, unbalanced delay"
+ 
+ mptcp_lib_result_print_all_tap
+ exit $ret



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-03-27 11:23 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-03-27 11:23 UTC (permalink / raw
  To: gentoo-commits

commit:     77f2c4cd0a4fe5318156c5ad8ff04d111f8e535a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 27 11:23:15 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 27 11:23:15 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=77f2c4cd

Linux patch 6.7.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1010_linux-6.7.11.patch | 33370 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 33374 insertions(+)

diff --git a/0000_README b/0000_README
index 9b12f1b7..6f168e56 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-6.7.10.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.10
 
+Patch:  1010_linux-6.7.11.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.11
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1010_linux-6.7.11.patch b/1010_linux-6.7.11.patch
new file mode 100644
index 00000000..79340830
--- /dev/null
+++ b/1010_linux-6.7.11.patch
@@ -0,0 +1,33370 @@
+diff --git a/Documentation/devicetree/bindings/display/msm/qcom,mdss.yaml b/Documentation/devicetree/bindings/display/msm/qcom,mdss.yaml
+index 0999ea07f47bb..e4576546bf0db 100644
+--- a/Documentation/devicetree/bindings/display/msm/qcom,mdss.yaml
++++ b/Documentation/devicetree/bindings/display/msm/qcom,mdss.yaml
+@@ -127,6 +127,7 @@ patternProperties:
+           - qcom,dsi-phy-20nm
+           - qcom,dsi-phy-28nm-8226
+           - qcom,dsi-phy-28nm-hpm
++          - qcom,dsi-phy-28nm-hpm-fam-b
+           - qcom,dsi-phy-28nm-lp
+           - qcom,hdmi-phy-8084
+           - qcom,hdmi-phy-8660
+diff --git a/Documentation/netlink/specs/devlink.yaml b/Documentation/netlink/specs/devlink.yaml
+index 572d83a414d0d..42a9d77803b2b 100644
+--- a/Documentation/netlink/specs/devlink.yaml
++++ b/Documentation/netlink/specs/devlink.yaml
+@@ -244,7 +244,7 @@ attribute-sets:
+ 
+       -
+         name: eswitch-inline-mode
+-        type: u16
++        type: u8
+         enum: eswitch-inline-mode
+       -
+         name: dpipe-tables
+diff --git a/Documentation/netlink/specs/dpll.yaml b/Documentation/netlink/specs/dpll.yaml
+index 2b4c4bcd83615..5b25ad589e3d0 100644
+--- a/Documentation/netlink/specs/dpll.yaml
++++ b/Documentation/netlink/specs/dpll.yaml
+@@ -274,6 +274,7 @@ attribute-sets:
+       -
+         name: capabilities
+         type: u32
++        enum: pin-capabilities
+       -
+         name: parent-device
+         type: nest
+diff --git a/Makefile b/Makefile
+index 00c159535dabe..6e3182cdf0162 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/amazon/alpine.dtsi b/arch/arm/boot/dts/amazon/alpine.dtsi
+index ff68dfb4eb787..90bd12feac010 100644
+--- a/arch/arm/boot/dts/amazon/alpine.dtsi
++++ b/arch/arm/boot/dts/amazon/alpine.dtsi
+@@ -167,7 +167,6 @@ pcie@fbc00000 {
+ 		msix: msix@fbe00000 {
+ 			compatible = "al,alpine-msix";
+ 			reg = <0x0 0xfbe00000 0x0 0x100000>;
+-			interrupt-controller;
+ 			msi-controller;
+ 			al,msi-base-spi = <96>;
+ 			al,msi-num-spis = <64>;
+diff --git a/arch/arm/boot/dts/arm/arm-realview-pb1176.dts b/arch/arm/boot/dts/arm/arm-realview-pb1176.dts
+index efed325af88d2..d99bac02232b3 100644
+--- a/arch/arm/boot/dts/arm/arm-realview-pb1176.dts
++++ b/arch/arm/boot/dts/arm/arm-realview-pb1176.dts
+@@ -451,7 +451,7 @@ pb1176_serial3: serial@1010f000 {
+ 
+ 		/* Direct-mapped development chip ROM */
+ 		pb1176_rom@10200000 {
+-			compatible = "direct-mapped";
++			compatible = "mtd-rom";
+ 			reg = <0x10200000 0x4000>;
+ 			bank-width = <1>;
+ 		};
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
+index 530491ae5eb26..857cb26ed6d7e 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
++++ b/arch/arm/boot/dts/aspeed/aspeed-g4.dtsi
+@@ -466,7 +466,6 @@ i2c_ic: interrupt-controller@0 {
+ 	i2c0: i2c-bus@40 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x40 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -482,7 +481,6 @@ i2c0: i2c-bus@40 {
+ 	i2c1: i2c-bus@80 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x80 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -498,7 +496,6 @@ i2c1: i2c-bus@80 {
+ 	i2c2: i2c-bus@c0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0xc0 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -515,7 +512,6 @@ i2c2: i2c-bus@c0 {
+ 	i2c3: i2c-bus@100 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x100 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -532,7 +528,6 @@ i2c3: i2c-bus@100 {
+ 	i2c4: i2c-bus@140 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x140 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -549,7 +544,6 @@ i2c4: i2c-bus@140 {
+ 	i2c5: i2c-bus@180 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x180 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -566,7 +560,6 @@ i2c5: i2c-bus@180 {
+ 	i2c6: i2c-bus@1c0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x1c0 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -583,7 +576,6 @@ i2c6: i2c-bus@1c0 {
+ 	i2c7: i2c-bus@300 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x300 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -600,7 +592,6 @@ i2c7: i2c-bus@300 {
+ 	i2c8: i2c-bus@340 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x340 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -617,7 +608,6 @@ i2c8: i2c-bus@340 {
+ 	i2c9: i2c-bus@380 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x380 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -634,7 +624,6 @@ i2c9: i2c-bus@380 {
+ 	i2c10: i2c-bus@3c0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x3c0 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -651,7 +640,6 @@ i2c10: i2c-bus@3c0 {
+ 	i2c11: i2c-bus@400 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x400 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -668,7 +656,6 @@ i2c11: i2c-bus@400 {
+ 	i2c12: i2c-bus@440 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x440 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+@@ -685,7 +672,6 @@ i2c12: i2c-bus@440 {
+ 	i2c13: i2c-bus@480 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x480 0x40>;
+ 		compatible = "aspeed,ast2400-i2c-bus";
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
+index 04f98d1dbb97c..e6f3cf3c721e5 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
++++ b/arch/arm/boot/dts/aspeed/aspeed-g5.dtsi
+@@ -363,6 +363,7 @@ sgpio: sgpio@1e780200 {
+ 				interrupts = <40>;
+ 				reg = <0x1e780200 0x0100>;
+ 				clocks = <&syscon ASPEED_CLK_APB>;
++				#interrupt-cells = <2>;
+ 				interrupt-controller;
+ 				bus-frequency = <12000000>;
+ 				pinctrl-names = "default";
+@@ -594,7 +595,6 @@ i2c_ic: interrupt-controller@0 {
+ 	i2c0: i2c-bus@40 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x40 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -610,7 +610,6 @@ i2c0: i2c-bus@40 {
+ 	i2c1: i2c-bus@80 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x80 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -626,7 +625,6 @@ i2c1: i2c-bus@80 {
+ 	i2c2: i2c-bus@c0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0xc0 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -643,7 +641,6 @@ i2c2: i2c-bus@c0 {
+ 	i2c3: i2c-bus@100 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x100 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -660,7 +657,6 @@ i2c3: i2c-bus@100 {
+ 	i2c4: i2c-bus@140 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x140 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -677,7 +673,6 @@ i2c4: i2c-bus@140 {
+ 	i2c5: i2c-bus@180 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x180 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -694,7 +689,6 @@ i2c5: i2c-bus@180 {
+ 	i2c6: i2c-bus@1c0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x1c0 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -711,7 +705,6 @@ i2c6: i2c-bus@1c0 {
+ 	i2c7: i2c-bus@300 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x300 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -728,7 +721,6 @@ i2c7: i2c-bus@300 {
+ 	i2c8: i2c-bus@340 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x340 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -745,7 +737,6 @@ i2c8: i2c-bus@340 {
+ 	i2c9: i2c-bus@380 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x380 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -762,7 +753,6 @@ i2c9: i2c-bus@380 {
+ 	i2c10: i2c-bus@3c0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x3c0 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -779,7 +769,6 @@ i2c10: i2c-bus@3c0 {
+ 	i2c11: i2c-bus@400 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x400 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -796,7 +785,6 @@ i2c11: i2c-bus@400 {
+ 	i2c12: i2c-bus@440 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x440 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+@@ -813,7 +801,6 @@ i2c12: i2c-bus@440 {
+ 	i2c13: i2c-bus@480 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 
+ 		reg = <0x480 0x40>;
+ 		compatible = "aspeed,ast2500-i2c-bus";
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
+index c4d1faade8be3..29f94696d8b18 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
++++ b/arch/arm/boot/dts/aspeed/aspeed-g6.dtsi
+@@ -474,6 +474,7 @@ sgpiom0: sgpiom@1e780500 {
+ 				reg = <0x1e780500 0x100>;
+ 				interrupts = <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&syscon ASPEED_CLK_APB2>;
++				#interrupt-cells = <2>;
+ 				interrupt-controller;
+ 				bus-frequency = <12000000>;
+ 				pinctrl-names = "default";
+@@ -488,6 +489,7 @@ sgpiom1: sgpiom@1e780600 {
+ 				reg = <0x1e780600 0x100>;
+ 				interrupts = <GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&syscon ASPEED_CLK_APB2>;
++				#interrupt-cells = <2>;
+ 				interrupt-controller;
+ 				bus-frequency = <12000000>;
+ 				pinctrl-names = "default";
+@@ -902,7 +904,6 @@ &i2c {
+ 	i2c0: i2c-bus@80 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x80 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -917,7 +918,6 @@ i2c0: i2c-bus@80 {
+ 	i2c1: i2c-bus@100 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x100 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -932,7 +932,6 @@ i2c1: i2c-bus@100 {
+ 	i2c2: i2c-bus@180 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x180 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -947,7 +946,6 @@ i2c2: i2c-bus@180 {
+ 	i2c3: i2c-bus@200 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x200 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -962,7 +960,6 @@ i2c3: i2c-bus@200 {
+ 	i2c4: i2c-bus@280 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x280 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -977,7 +974,6 @@ i2c4: i2c-bus@280 {
+ 	i2c5: i2c-bus@300 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x300 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -992,7 +988,6 @@ i2c5: i2c-bus@300 {
+ 	i2c6: i2c-bus@380 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x380 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1007,7 +1002,6 @@ i2c6: i2c-bus@380 {
+ 	i2c7: i2c-bus@400 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x400 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1022,7 +1016,6 @@ i2c7: i2c-bus@400 {
+ 	i2c8: i2c-bus@480 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x480 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1037,7 +1030,6 @@ i2c8: i2c-bus@480 {
+ 	i2c9: i2c-bus@500 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x500 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1052,7 +1044,6 @@ i2c9: i2c-bus@500 {
+ 	i2c10: i2c-bus@580 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x580 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1067,7 +1058,6 @@ i2c10: i2c-bus@580 {
+ 	i2c11: i2c-bus@600 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x600 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1082,7 +1072,6 @@ i2c11: i2c-bus@600 {
+ 	i2c12: i2c-bus@680 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x680 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1097,7 +1086,6 @@ i2c12: i2c-bus@680 {
+ 	i2c13: i2c-bus@700 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x700 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1112,7 +1100,6 @@ i2c13: i2c-bus@700 {
+ 	i2c14: i2c-bus@780 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x780 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+@@ -1127,7 +1114,6 @@ i2c14: i2c-bus@780 {
+ 	i2c15: i2c-bus@800 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		#interrupt-cells = <1>;
+ 		reg = <0x800 0x80>;
+ 		compatible = "aspeed,ast2600-i2c-bus";
+ 		clocks = <&syscon ASPEED_CLK_APB2>;
+diff --git a/arch/arm/boot/dts/broadcom/bcm-cygnus.dtsi b/arch/arm/boot/dts/broadcom/bcm-cygnus.dtsi
+index f9f79ed825181..07ca0d993c9fd 100644
+--- a/arch/arm/boot/dts/broadcom/bcm-cygnus.dtsi
++++ b/arch/arm/boot/dts/broadcom/bcm-cygnus.dtsi
+@@ -167,6 +167,7 @@ gpio_crmu: gpio@3024800 {
+ 			#gpio-cells = <2>;
+ 			gpio-controller;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 			interrupt-parent = <&mailbox>;
+ 			interrupts = <0>;
+ 		};
+@@ -247,6 +248,7 @@ gpio_ccm: gpio@1800a000 {
+ 			gpio-controller;
+ 			interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 		};
+ 
+ 		i2c1: i2c@1800b000 {
+@@ -518,6 +520,7 @@ gpio_asiu: gpio@180a5000 {
+ 			gpio-controller;
+ 
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 			interrupts = <GIC_SPI 174 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-ranges = <&pinctrl 0 42 1>,
+ 					<&pinctrl 1 44 3>,
+diff --git a/arch/arm/boot/dts/broadcom/bcm-hr2.dtsi b/arch/arm/boot/dts/broadcom/bcm-hr2.dtsi
+index 788a6806191a3..75545b10ef2fa 100644
+--- a/arch/arm/boot/dts/broadcom/bcm-hr2.dtsi
++++ b/arch/arm/boot/dts/broadcom/bcm-hr2.dtsi
+@@ -200,6 +200,7 @@ gpiob: gpio@30000 {
+ 			gpio-controller;
+ 			ngpios = <4>;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 			interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/broadcom/bcm-nsp.dtsi b/arch/arm/boot/dts/broadcom/bcm-nsp.dtsi
+index 9d20ba3b1ffb1..6a4482c931674 100644
+--- a/arch/arm/boot/dts/broadcom/bcm-nsp.dtsi
++++ b/arch/arm/boot/dts/broadcom/bcm-nsp.dtsi
+@@ -180,6 +180,7 @@ gpioa: gpio@20 {
+ 			gpio-controller;
+ 			ngpios = <32>;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 			interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-ranges = <&pinctrl 0 0 32>;
+ 		};
+@@ -352,6 +353,7 @@ gpiob: gpio@30000 {
+ 			gpio-controller;
+ 			ngpios = <4>;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 			interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateway-7001.dts b/arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateway-7001.dts
+index 4d70f6afd13ab..6d5e69035f94d 100644
+--- a/arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateway-7001.dts
++++ b/arch/arm/boot/dts/intel/ixp/intel-ixp42x-gateway-7001.dts
+@@ -60,6 +60,8 @@ pci@c0000000 {
+ 			 * We have slots (IDSEL) 1 and 2 with one assigned IRQ
+ 			 * each handling all IRQs.
+ 			 */
++			#interrupt-cells = <1>;
++			interrupt-map-mask = <0xf800 0 0 7>;
+ 			interrupt-map =
+ 			/* IDSEL 1 */
+ 			<0x0800 0 0 1 &gpio0 11 IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 1 is irq 11 */
+diff --git a/arch/arm/boot/dts/intel/ixp/intel-ixp42x-goramo-multilink.dts b/arch/arm/boot/dts/intel/ixp/intel-ixp42x-goramo-multilink.dts
+index 9ec0169bacf8c..5f4c849915db7 100644
+--- a/arch/arm/boot/dts/intel/ixp/intel-ixp42x-goramo-multilink.dts
++++ b/arch/arm/boot/dts/intel/ixp/intel-ixp42x-goramo-multilink.dts
+@@ -89,6 +89,8 @@ pci@c0000000 {
+ 			 * The slots have Ethernet, Ethernet, NEC and MPCI.
+ 			 * The IDSELs are 11, 12, 13, 14.
+ 			 */
++			#interrupt-cells = <1>;
++			interrupt-map-mask = <0xf800 0 0 7>;
+ 			interrupt-map =
+ 			/* IDSEL 11 - Ethernet A */
+ 			<0x5800 0 0 1 &gpio0 4 IRQ_TYPE_LEVEL_LOW>, /* INT A on slot 11 is irq 4 */
+diff --git a/arch/arm/boot/dts/marvell/kirkwood-l-50.dts b/arch/arm/boot/dts/marvell/kirkwood-l-50.dts
+index dffb9f84e67c5..c841eb8e7fb1d 100644
+--- a/arch/arm/boot/dts/marvell/kirkwood-l-50.dts
++++ b/arch/arm/boot/dts/marvell/kirkwood-l-50.dts
+@@ -65,6 +65,7 @@ i2c@11000 {
+ 			gpio2: gpio-expander@20 {
+ 				#gpio-cells = <2>;
+ 				#interrupt-cells = <2>;
++				interrupt-controller;
+ 				compatible = "semtech,sx1505q";
+ 				reg = <0x20>;
+ 
+@@ -79,6 +80,7 @@ gpio2: gpio-expander@20 {
+ 			gpio3: gpio-expander@21 {
+ 				#gpio-cells = <2>;
+ 				#interrupt-cells = <2>;
++				interrupt-controller;
+ 				compatible = "semtech,sx1505q";
+ 				reg = <0x21>;
+ 
+diff --git a/arch/arm/boot/dts/nuvoton/nuvoton-wpcm450.dtsi b/arch/arm/boot/dts/nuvoton/nuvoton-wpcm450.dtsi
+index fd671c7a1e5d6..6e1f0f164cb4f 100644
+--- a/arch/arm/boot/dts/nuvoton/nuvoton-wpcm450.dtsi
++++ b/arch/arm/boot/dts/nuvoton/nuvoton-wpcm450.dtsi
+@@ -120,6 +120,7 @@ gpio0: gpio@0 {
+ 				interrupts = <2 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <3 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <4 IRQ_TYPE_LEVEL_HIGH>;
++				#interrupt-cells = <2>;
+ 				interrupt-controller;
+ 			};
+ 
+@@ -128,6 +129,7 @@ gpio1: gpio@1 {
+ 				gpio-controller;
+ 				#gpio-cells = <2>;
+ 				interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
++				#interrupt-cells = <2>;
+ 				interrupt-controller;
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/nvidia/tegra30-apalis-v1.1.dtsi b/arch/arm/boot/dts/nvidia/tegra30-apalis-v1.1.dtsi
+index 1640763fd4af2..ff0d684622f74 100644
+--- a/arch/arm/boot/dts/nvidia/tegra30-apalis-v1.1.dtsi
++++ b/arch/arm/boot/dts/nvidia/tegra30-apalis-v1.1.dtsi
+@@ -997,7 +997,6 @@ touchscreen@41 {
+ 			compatible = "st,stmpe811";
+ 			reg = <0x41>;
+ 			irq-gpio = <&gpio TEGRA_GPIO(V, 0) GPIO_ACTIVE_LOW>;
+-			interrupt-controller;
+ 			id = <0>;
+ 			blocks = <0x5>;
+ 			irq-trigger = <0x1>;
+diff --git a/arch/arm/boot/dts/nvidia/tegra30-apalis.dtsi b/arch/arm/boot/dts/nvidia/tegra30-apalis.dtsi
+index 3b6fad273cabf..d38f1dd38a906 100644
+--- a/arch/arm/boot/dts/nvidia/tegra30-apalis.dtsi
++++ b/arch/arm/boot/dts/nvidia/tegra30-apalis.dtsi
+@@ -980,7 +980,6 @@ touchscreen@41 {
+ 			compatible = "st,stmpe811";
+ 			reg = <0x41>;
+ 			irq-gpio = <&gpio TEGRA_GPIO(V, 0) GPIO_ACTIVE_LOW>;
+-			interrupt-controller;
+ 			id = <0>;
+ 			blocks = <0x5>;
+ 			irq-trigger = <0x1>;
+diff --git a/arch/arm/boot/dts/nvidia/tegra30-colibri.dtsi b/arch/arm/boot/dts/nvidia/tegra30-colibri.dtsi
+index 4eb526fe9c558..81c8a5fd92cce 100644
+--- a/arch/arm/boot/dts/nvidia/tegra30-colibri.dtsi
++++ b/arch/arm/boot/dts/nvidia/tegra30-colibri.dtsi
+@@ -861,7 +861,6 @@ touchscreen@41 {
+ 			compatible = "st,stmpe811";
+ 			reg = <0x41>;
+ 			irq-gpio = <&gpio TEGRA_GPIO(V, 0) GPIO_ACTIVE_LOW>;
+-			interrupt-controller;
+ 			id = <0>;
+ 			blocks = <0x5>;
+ 			irq-trigger = <0x1>;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6dl-yapp4-common.dtsi b/arch/arm/boot/dts/nxp/imx/imx6dl-yapp4-common.dtsi
+index 3be38a3c4bb11..c32ea040fecdd 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6dl-yapp4-common.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6dl-yapp4-common.dtsi
+@@ -117,17 +117,9 @@ mdio {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		phy_port2: phy@1 {
+-			reg = <1>;
+-		};
+-
+-		phy_port3: phy@2 {
+-			reg = <2>;
+-		};
+-
+ 		switch@10 {
+ 			compatible = "qca,qca8334";
+-			reg = <10>;
++			reg = <0x10>;
+ 			reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
+ 
+ 			switch_ports: ports {
+@@ -149,15 +141,30 @@ fixed-link {
+ 				eth2: port@2 {
+ 					reg = <2>;
+ 					label = "eth2";
++					phy-mode = "internal";
+ 					phy-handle = <&phy_port2>;
+ 				};
+ 
+ 				eth1: port@3 {
+ 					reg = <3>;
+ 					label = "eth1";
++					phy-mode = "internal";
+ 					phy-handle = <&phy_port3>;
+ 				};
+ 			};
++
++			mdio {
++				#address-cells = <1>;
++				#size-cells = <0>;
++
++				phy_port2: ethernet-phy@1 {
++					reg = <1>;
++				};
++
++				phy_port3: ethernet-phy@2 {
++					reg = <2>;
++				};
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6q-b850v3.dts b/arch/arm/boot/dts/nxp/imx/imx6q-b850v3.dts
+index db8c332df6a1d..cad112e054758 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6q-b850v3.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx6q-b850v3.dts
+@@ -227,7 +227,6 @@ bridge@1,0 {
+ 
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		#interrupt-cells = <1>;
+ 
+ 		bridge@2,1 {
+ 			compatible = "pci10b5,8605";
+@@ -235,7 +234,6 @@ bridge@2,1 {
+ 
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			#interrupt-cells = <1>;
+ 
+ 			/* Intel Corporation I210 Gigabit Network Connection */
+ 			ethernet@3,0 {
+@@ -250,7 +248,6 @@ bridge@2,2 {
+ 
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-			#interrupt-cells = <1>;
+ 
+ 			/* Intel Corporation I210 Gigabit Network Connection */
+ 			switch_nic: ethernet@4,0 {
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6q-bx50v3.dtsi b/arch/arm/boot/dts/nxp/imx/imx6q-bx50v3.dtsi
+index 99f4f6ac71d4a..c1ae7c47b4422 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6q-bx50v3.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6q-bx50v3.dtsi
+@@ -245,6 +245,7 @@ pca9539: pca9539@74 {
+ 				reg = <0x74>;
+ 				gpio-controller;
+ 				#gpio-cells = <2>;
++				#interrupt-cells = <2>;
+ 				interrupt-controller;
+ 				interrupt-parent = <&gpio2>;
+ 				interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+@@ -390,7 +391,6 @@ pci_root: root@0,0 {
+ 
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+-		#interrupt-cells = <1>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi b/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi
+index 4cc965277c521..dcb4f6a32f809 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi
+@@ -619,7 +619,6 @@ stmpe811@41 {
+ 		blocks = <0x5>;
+ 		id = <0>;
+ 		interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
+-		interrupt-controller;
+ 		interrupt-parent = <&gpio4>;
+ 		irq-trigger = <0x1>;
+ 		pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6qdl-colibri.dtsi b/arch/arm/boot/dts/nxp/imx/imx6qdl-colibri.dtsi
+index 11d9c7a2dacb1..6cc4d6fd5f28b 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6qdl-colibri.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6qdl-colibri.dtsi
+@@ -543,7 +543,6 @@ stmpe811@41 {
+ 		blocks = <0x5>;
+ 		interrupts = <20 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-parent = <&gpio6>;
+-		interrupt-controller;
+ 		id = <0>;
+ 		irq-trigger = <0x1>;
+ 		pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6qdl-emcon.dtsi b/arch/arm/boot/dts/nxp/imx/imx6qdl-emcon.dtsi
+index a63e73adc1fc5..42b2ba23aefc9 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6qdl-emcon.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6qdl-emcon.dtsi
+@@ -225,7 +225,6 @@ da9063: pmic@58 {
+ 		pinctrl-0 = <&pinctrl_pmic>;
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <8 IRQ_TYPE_LEVEL_LOW>;
+-		interrupt-controller;
+ 
+ 		onkey {
+ 			compatible = "dlg,da9063-onkey";
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-pfla02.dtsi
+index 113974520d544..c0c47adc5866e 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-pfla02.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-pfla02.dtsi
+@@ -124,6 +124,7 @@ pmic@58 {
+ 		reg = <0x58>;
+ 		interrupt-parent = <&gpio2>;
+ 		interrupts = <9 IRQ_TYPE_LEVEL_LOW>; /* active-low GPIO2_9 */
++		#interrupt-cells = <2>;
+ 		interrupt-controller;
+ 
+ 		regulators {
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-phycore-som.dtsi b/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-phycore-som.dtsi
+index 86b4269e0e011..85e278eb20161 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-phycore-som.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6qdl-phytec-phycore-som.dtsi
+@@ -100,6 +100,7 @@ pmic: pmic@58 {
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7d-pico-dwarf.dts b/arch/arm/boot/dts/nxp/imx/imx7d-pico-dwarf.dts
+index 12361fcbe24af..1b965652291bf 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7d-pico-dwarf.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx7d-pico-dwarf.dts
+@@ -63,6 +63,7 @@ pca9554: io-expander@25 {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		#interrupt-cells = <2>;
++		interrupt-controller;
+ 		reg = <0x25>;
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/nxp/vf/vf610-zii-dev-rev-b.dts b/arch/arm/boot/dts/nxp/vf/vf610-zii-dev-rev-b.dts
+index 16b4e06c4efad..a248b8a453421 100644
+--- a/arch/arm/boot/dts/nxp/vf/vf610-zii-dev-rev-b.dts
++++ b/arch/arm/boot/dts/nxp/vf/vf610-zii-dev-rev-b.dts
+@@ -338,6 +338,7 @@ gpio6: io-expander@22 {
+ 		reg = <0x22>;
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
++		#interrupt-cells = <2>;
+ 		interrupt-controller;
+ 		interrupt-parent = <&gpio3>;
+ 		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+diff --git a/arch/arm/boot/dts/qcom/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom/qcom-msm8974.dtsi
+index 0bc2e66d15b15..f41cc4b4b518b 100644
+--- a/arch/arm/boot/dts/qcom/qcom-msm8974.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-msm8974.dtsi
+@@ -1226,7 +1226,7 @@ restart@fc4ab000 {
+ 
+ 		qfprom: qfprom@fc4bc000 {
+ 			compatible = "qcom,msm8974-qfprom", "qcom,qfprom";
+-			reg = <0xfc4bc000 0x1000>;
++			reg = <0xfc4bc000 0x2100>;
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
+index a976370264fcf..fb6764b2dcde8 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-sdx55.dtsi
+@@ -345,10 +345,10 @@ pcie_rc: pcie@1c00000 {
+ 					  "msi8";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map = <0 0 0 1 &intc 0 0 0 141 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+-					<0 0 0 2 &intc 0 0 0 142 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+-					<0 0 0 3 &intc 0 0 0 143 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+-					<0 0 0 4 &intc 0 0 0 144 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
++			interrupt-map = <0 0 0 1 &intc 0 141 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
++					<0 0 0 2 &intc 0 142 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
++					<0 0 0 3 &intc 0 143 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
++					<0 0 0 4 &intc 0 144 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+ 			clocks = <&gcc GCC_PCIE_PIPE_CLK>,
+ 				 <&gcc GCC_PCIE_AUX_CLK>,
+diff --git a/arch/arm/boot/dts/renesas/r8a73a4-ape6evm.dts b/arch/arm/boot/dts/renesas/r8a73a4-ape6evm.dts
+index ed75c01dbee10..3d02f065f71c2 100644
+--- a/arch/arm/boot/dts/renesas/r8a73a4-ape6evm.dts
++++ b/arch/arm/boot/dts/renesas/r8a73a4-ape6evm.dts
+@@ -209,6 +209,18 @@ &cmt1 {
+ 	status = "okay";
+ };
+ 
++&extal1_clk {
++	clock-frequency = <26000000>;
++};
++
++&extal2_clk {
++	clock-frequency = <48000000>;
++};
++
++&extalr_clk {
++	clock-frequency = <32768>;
++};
++
+ &pfc {
+ 	scifa0_pins: scifa0 {
+ 		groups = "scifa0_data";
+diff --git a/arch/arm/boot/dts/renesas/r8a73a4.dtsi b/arch/arm/boot/dts/renesas/r8a73a4.dtsi
+index c39066967053f..d1f4cbd099efb 100644
+--- a/arch/arm/boot/dts/renesas/r8a73a4.dtsi
++++ b/arch/arm/boot/dts/renesas/r8a73a4.dtsi
+@@ -450,17 +450,20 @@ clocks {
+ 		extalr_clk: extalr {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+-			clock-frequency = <32768>;
++			/* This value must be overridden by the board. */
++			clock-frequency = <0>;
+ 		};
+ 		extal1_clk: extal1 {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+-			clock-frequency = <25000000>;
++			/* This value must be overridden by the board. */
++			clock-frequency = <0>;
+ 		};
+ 		extal2_clk: extal2 {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+-			clock-frequency = <48000000>;
++			/* This value must be overridden by the board. */
++			clock-frequency = <0>;
+ 		};
+ 		fsiack_clk: fsiack {
+ 			compatible = "fixed-clock";
+diff --git a/arch/arm/boot/dts/renesas/r8a7790-lager.dts b/arch/arm/boot/dts/renesas/r8a7790-lager.dts
+index 4d666ad8b114b..b17a9f9307e59 100644
+--- a/arch/arm/boot/dts/renesas/r8a7790-lager.dts
++++ b/arch/arm/boot/dts/renesas/r8a7790-lager.dts
+@@ -432,6 +432,7 @@ pmic@58 {
+ 			interrupt-parent = <&irqc0>;
+ 			interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 
+ 			rtc {
+ 				compatible = "dlg,da9063-rtc";
+diff --git a/arch/arm/boot/dts/renesas/r8a7790-stout.dts b/arch/arm/boot/dts/renesas/r8a7790-stout.dts
+index fe14727eefe1e..25956661a8754 100644
+--- a/arch/arm/boot/dts/renesas/r8a7790-stout.dts
++++ b/arch/arm/boot/dts/renesas/r8a7790-stout.dts
+@@ -332,6 +332,7 @@ pmic@58 {
+ 		interrupt-parent = <&irqc0>;
+ 		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		onkey {
+ 			compatible = "dlg,da9063-onkey";
+diff --git a/arch/arm/boot/dts/renesas/r8a7791-koelsch.dts b/arch/arm/boot/dts/renesas/r8a7791-koelsch.dts
+index 545515b41ea3f..ec01cc8595161 100644
+--- a/arch/arm/boot/dts/renesas/r8a7791-koelsch.dts
++++ b/arch/arm/boot/dts/renesas/r8a7791-koelsch.dts
+@@ -795,6 +795,7 @@ pmic@58 {
+ 		interrupt-parent = <&irqc0>;
+ 		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		rtc {
+ 			compatible = "dlg,da9063-rtc";
+diff --git a/arch/arm/boot/dts/renesas/r8a7791-porter.dts b/arch/arm/boot/dts/renesas/r8a7791-porter.dts
+index ec0a20d5130d6..fcc9a2313e1df 100644
+--- a/arch/arm/boot/dts/renesas/r8a7791-porter.dts
++++ b/arch/arm/boot/dts/renesas/r8a7791-porter.dts
+@@ -389,6 +389,7 @@ pmic@5a {
+ 		interrupt-parent = <&irqc0>;
+ 		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		watchdog {
+ 			compatible = "dlg,da9063-watchdog";
+diff --git a/arch/arm/boot/dts/renesas/r8a7792-blanche.dts b/arch/arm/boot/dts/renesas/r8a7792-blanche.dts
+index e793134f32a30..af16f251849c6 100644
+--- a/arch/arm/boot/dts/renesas/r8a7792-blanche.dts
++++ b/arch/arm/boot/dts/renesas/r8a7792-blanche.dts
+@@ -332,6 +332,7 @@ pmic@58 {
+ 		interrupt-parent = <&irqc>;
+ 		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		rtc {
+ 			compatible = "dlg,da9063-rtc";
+diff --git a/arch/arm/boot/dts/renesas/r8a7793-gose.dts b/arch/arm/boot/dts/renesas/r8a7793-gose.dts
+index 79b537b246426..9358fc7d0e9f6 100644
+--- a/arch/arm/boot/dts/renesas/r8a7793-gose.dts
++++ b/arch/arm/boot/dts/renesas/r8a7793-gose.dts
+@@ -735,6 +735,7 @@ pmic@58 {
+ 		interrupt-parent = <&irqc0>;
+ 		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		rtc {
+ 			compatible = "dlg,da9063-rtc";
+diff --git a/arch/arm/boot/dts/renesas/r8a7794-alt.dts b/arch/arm/boot/dts/renesas/r8a7794-alt.dts
+index 08df031bc27c9..73ec4d3541541 100644
+--- a/arch/arm/boot/dts/renesas/r8a7794-alt.dts
++++ b/arch/arm/boot/dts/renesas/r8a7794-alt.dts
+@@ -453,6 +453,7 @@ pmic@58 {
+ 		interrupt-parent = <&gpio3>;
+ 		interrupts = <31 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		rtc {
+ 			compatible = "dlg,da9063-rtc";
+diff --git a/arch/arm/boot/dts/renesas/r8a7794-silk.dts b/arch/arm/boot/dts/renesas/r8a7794-silk.dts
+index b7af1befa126b..b825f2e25dd06 100644
+--- a/arch/arm/boot/dts/renesas/r8a7794-silk.dts
++++ b/arch/arm/boot/dts/renesas/r8a7794-silk.dts
+@@ -424,6 +424,7 @@ pmic@58 {
+ 		interrupt-parent = <&gpio3>;
+ 		interrupts = <31 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		onkey {
+ 			compatible = "dlg,da9063-onkey";
+diff --git a/arch/arm/boot/dts/rockchip/rv1108.dtsi b/arch/arm/boot/dts/rockchip/rv1108.dtsi
+index abf3006f0a842..f3291f3bbc6fd 100644
+--- a/arch/arm/boot/dts/rockchip/rv1108.dtsi
++++ b/arch/arm/boot/dts/rockchip/rv1108.dtsi
+@@ -196,7 +196,6 @@ spi: spi@10270000 {
+ 	pwm4: pwm@10280000 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x10280000 0x10>;
+-		interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM>, <&cru PCLK_PWM>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+@@ -208,7 +207,6 @@ pwm4: pwm@10280000 {
+ 	pwm5: pwm@10280010 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x10280010 0x10>;
+-		interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM>, <&cru PCLK_PWM>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+@@ -220,7 +218,6 @@ pwm5: pwm@10280010 {
+ 	pwm6: pwm@10280020 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x10280020 0x10>;
+-		interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM>, <&cru PCLK_PWM>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+@@ -232,7 +229,6 @@ pwm6: pwm@10280020 {
+ 	pwm7: pwm@10280030 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x10280030 0x10>;
+-		interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM>, <&cru PCLK_PWM>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+@@ -386,7 +382,6 @@ i2c0: i2c@20000000 {
+ 	pwm0: pwm@20040000 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x20040000 0x10>;
+-		interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM0_PMU>, <&cru PCLK_PWM0_PMU>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+@@ -398,7 +393,6 @@ pwm0: pwm@20040000 {
+ 	pwm1: pwm@20040010 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x20040010 0x10>;
+-		interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM0_PMU>, <&cru PCLK_PWM0_PMU>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+@@ -410,7 +404,6 @@ pwm1: pwm@20040010 {
+ 	pwm2: pwm@20040020 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x20040020 0x10>;
+-		interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM0_PMU>, <&cru PCLK_PWM0_PMU>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+@@ -422,7 +415,6 @@ pwm2: pwm@20040020 {
+ 	pwm3: pwm@20040030 {
+ 		compatible = "rockchip,rv1108-pwm", "rockchip,rk3288-pwm";
+ 		reg = <0x20040030 0x10>;
+-		interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_PWM0_PMU>, <&cru PCLK_PWM0_PMU>;
+ 		clock-names = "pwm", "pclk";
+ 		pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/st/stm32429i-eval.dts b/arch/arm/boot/dts/st/stm32429i-eval.dts
+index 576235ec3c516..afa417b34b25f 100644
+--- a/arch/arm/boot/dts/st/stm32429i-eval.dts
++++ b/arch/arm/boot/dts/st/stm32429i-eval.dts
+@@ -222,7 +222,6 @@ stmpe1600: stmpe1600@42 {
+ 		reg = <0x42>;
+ 		interrupts = <8 3>;
+ 		interrupt-parent = <&gpioi>;
+-		interrupt-controller;
+ 		wakeup-source;
+ 
+ 		stmpegpio: stmpe_gpio {
+diff --git a/arch/arm/boot/dts/st/stm32mp157c-dk2.dts b/arch/arm/boot/dts/st/stm32mp157c-dk2.dts
+index 510cca5acb79c..7a701f7ef0c70 100644
+--- a/arch/arm/boot/dts/st/stm32mp157c-dk2.dts
++++ b/arch/arm/boot/dts/st/stm32mp157c-dk2.dts
+@@ -64,7 +64,6 @@ touchscreen@38 {
+ 		reg = <0x38>;
+ 		interrupts = <2 2>;
+ 		interrupt-parent = <&gpiof>;
+-		interrupt-controller;
+ 		touchscreen-size-x = <480>;
+ 		touchscreen-size-y = <800>;
+ 		status = "okay";
+diff --git a/arch/arm/boot/dts/ti/omap/am5729-beagleboneai.dts b/arch/arm/boot/dts/ti/omap/am5729-beagleboneai.dts
+index 9a234dc1431d1..5b240769d300e 100644
+--- a/arch/arm/boot/dts/ti/omap/am5729-beagleboneai.dts
++++ b/arch/arm/boot/dts/ti/omap/am5729-beagleboneai.dts
+@@ -415,7 +415,6 @@ stmpe811@41 {
+ 		reg = <0x41>;
+ 		interrupts = <30 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-parent = <&gpio2>;
+-		interrupt-controller;
+ 		id = <0>;
+ 		blocks = <0x5>;
+ 		irq-trigger = <0x1>;
+diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c
+index 433ee4ddce6c8..f85933fdec75f 100644
+--- a/arch/arm/crypto/sha256_glue.c
++++ b/arch/arm/crypto/sha256_glue.c
+@@ -24,8 +24,8 @@
+ 
+ #include "sha256_glue.h"
+ 
+-asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
+-					unsigned int num_blks);
++asmlinkage void sha256_block_data_order(struct sha256_state *state,
++					const u8 *data, int num_blks);
+ 
+ int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+ 			     unsigned int len)
+@@ -33,23 +33,20 @@ int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+ 	/* make sure casting to sha256_block_fn() is safe */
+ 	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
+ 
+-	return sha256_base_do_update(desc, data, len,
+-				(sha256_block_fn *)sha256_block_data_order);
++	return sha256_base_do_update(desc, data, len, sha256_block_data_order);
+ }
+ EXPORT_SYMBOL(crypto_sha256_arm_update);
+ 
+ static int crypto_sha256_arm_final(struct shash_desc *desc, u8 *out)
+ {
+-	sha256_base_do_finalize(desc,
+-				(sha256_block_fn *)sha256_block_data_order);
++	sha256_base_do_finalize(desc, sha256_block_data_order);
+ 	return sha256_base_finish(desc, out);
+ }
+ 
+ int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
+ 			    unsigned int len, u8 *out)
+ {
+-	sha256_base_do_update(desc, data, len,
+-			      (sha256_block_fn *)sha256_block_data_order);
++	sha256_base_do_update(desc, data, len, sha256_block_data_order);
+ 	return crypto_sha256_arm_final(desc, out);
+ }
+ EXPORT_SYMBOL(crypto_sha256_arm_finup);
+diff --git a/arch/arm/crypto/sha512-glue.c b/arch/arm/crypto/sha512-glue.c
+index 0635a65aa488b..1be5bd498af36 100644
+--- a/arch/arm/crypto/sha512-glue.c
++++ b/arch/arm/crypto/sha512-glue.c
+@@ -25,27 +25,25 @@ MODULE_ALIAS_CRYPTO("sha512");
+ MODULE_ALIAS_CRYPTO("sha384-arm");
+ MODULE_ALIAS_CRYPTO("sha512-arm");
+ 
+-asmlinkage void sha512_block_data_order(u64 *state, u8 const *src, int blocks);
++asmlinkage void sha512_block_data_order(struct sha512_state *state,
++					u8 const *src, int blocks);
+ 
+ int sha512_arm_update(struct shash_desc *desc, const u8 *data,
+ 		      unsigned int len)
+ {
+-	return sha512_base_do_update(desc, data, len,
+-		(sha512_block_fn *)sha512_block_data_order);
++	return sha512_base_do_update(desc, data, len, sha512_block_data_order);
+ }
+ 
+ static int sha512_arm_final(struct shash_desc *desc, u8 *out)
+ {
+-	sha512_base_do_finalize(desc,
+-		(sha512_block_fn *)sha512_block_data_order);
++	sha512_base_do_finalize(desc, sha512_block_data_order);
+ 	return sha512_base_finish(desc, out);
+ }
+ 
+ int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
+ 		     unsigned int len, u8 *out)
+ {
+-	sha512_base_do_update(desc, data, len,
+-		(sha512_block_fn *)sha512_block_data_order);
++	sha512_base_do_update(desc, data, len, sha512_block_data_order);
+ 	return sha512_arm_final(desc, out);
+ }
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 456e8680e16ea..77dfbc8ab2d3b 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -196,7 +196,7 @@ config ARM64
+ 		if DYNAMIC_FTRACE_WITH_ARGS && DYNAMIC_FTRACE_WITH_CALL_OPS
+ 	select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS \
+ 		if (DYNAMIC_FTRACE_WITH_ARGS && !CFI_CLANG && \
+-		    !CC_OPTIMIZE_FOR_SIZE)
++		    (CC_IS_CLANG || !CC_OPTIMIZE_FOR_SIZE))
+ 	select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY \
+ 		if DYNAMIC_FTRACE_WITH_ARGS
+ 	select HAVE_SAMPLE_FTRACE_DIRECT
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
+index 9ec49ac2f6fd5..381d58cea092d 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
+@@ -291,6 +291,8 @@ sw {
+ };
+ 
+ &spdif {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spdif_tx_pin>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix.dtsi
+index 4903d6358112d..855b7d43bc503 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix.dtsi
+@@ -166,6 +166,8 @@ &r_ir {
+ };
+ 
+ &spdif {
++	pinctrl-names = "default";
++	pinctrl-0 = <&spdif_tx_pin>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
+index ca1d287a0a01d..d11e5041bae9a 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
+@@ -406,6 +406,7 @@ spi1_cs_pin: spi1-cs-pin {
+ 				function = "spi1";
+ 			};
+ 
++			/omit-if-no-ref/
+ 			spdif_tx_pin: spdif-tx-pin {
+ 				pins = "PH7";
+ 				function = "spdif";
+@@ -655,10 +656,8 @@ spdif: spdif@5093000 {
+ 			clocks = <&ccu CLK_BUS_SPDIF>, <&ccu CLK_SPDIF>;
+ 			clock-names = "apb", "spdif";
+ 			resets = <&ccu RST_BUS_SPDIF>;
+-			dmas = <&dma 2>;
+-			dma-names = "tx";
+-			pinctrl-names = "default";
+-			pinctrl-0 = <&spdif_tx_pin>;
++			dmas = <&dma 2>, <&dma 2>;
++			dma-names = "rx", "tx";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/amazon/alpine-v2.dtsi b/arch/arm64/boot/dts/amazon/alpine-v2.dtsi
+index dccbba6e7f98e..dbf2dce8d1d68 100644
+--- a/arch/arm64/boot/dts/amazon/alpine-v2.dtsi
++++ b/arch/arm64/boot/dts/amazon/alpine-v2.dtsi
+@@ -145,7 +145,6 @@ pci@fbc00000 {
+ 		msix: msix@fbe00000 {
+ 			compatible = "al,alpine-msix";
+ 			reg = <0x0 0xfbe00000 0x0 0x100000>;
+-			interrupt-controller;
+ 			msi-controller;
+ 			al,msi-base-spi = <160>;
+ 			al,msi-num-spis = <160>;
+diff --git a/arch/arm64/boot/dts/amazon/alpine-v3.dtsi b/arch/arm64/boot/dts/amazon/alpine-v3.dtsi
+index 39481d7fd7d4d..3ea178acdddfe 100644
+--- a/arch/arm64/boot/dts/amazon/alpine-v3.dtsi
++++ b/arch/arm64/boot/dts/amazon/alpine-v3.dtsi
+@@ -355,7 +355,6 @@ pcie@fbd00000 {
+ 		msix: msix@fbe00000 {
+ 			compatible = "al,alpine-msix";
+ 			reg = <0x0 0xfbe00000 0x0 0x100000>;
+-			interrupt-controller;
+ 			msi-controller;
+ 			al,msi-base-spi = <336>;
+ 			al,msi-num-spis = <959>;
+diff --git a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
+index 2f124b027bbf0..aadfa0ae05252 100644
+--- a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
+@@ -227,9 +227,6 @@ ethernet-switch@0 {
+ 				brcm,num-gphy = <5>;
+ 				brcm,num-rgmii-ports = <2>;
+ 
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+ 				ports: ports {
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+index 9dcd25ec2c041..896d1f33b5b61 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+@@ -586,6 +586,7 @@ gpio_g: gpio@660a0000 {
+ 			#gpio-cells = <2>;
+ 			gpio-controller;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 			interrupts = <GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+index f049687d6b96d..d8516ec0dae74 100644
+--- a/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
++++ b/arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
+@@ -450,6 +450,7 @@ gpio_hsls: gpio@d0000 {
+ 			#gpio-cells = <2>;
+ 			gpio-controller;
+ 			interrupt-controller;
++			#interrupt-cells = <2>;
+ 			interrupts = <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-ranges = <&pinmux 0 0 16>,
+ 					<&pinmux 16 71 2>,
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl-osm-s.dts b/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl-osm-s.dts
+index 8b16bd68576c0..d9fa0deea7002 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl-osm-s.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl-osm-s.dts
+@@ -294,8 +294,8 @@ MX8MM_IOMUXC_SAI3_MCLK_GPIO5_IO2		0x19
+ 
+ 	pinctrl_i2c4: i2c4grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_I2C4_SCL_I2C4_SCL			0x400001c3
+-			MX8MM_IOMUXC_I2C4_SDA_I2C4_SDA			0x400001c3
++			MX8MM_IOMUXC_I2C4_SCL_I2C4_SCL			0x40000083
++			MX8MM_IOMUXC_I2C4_SDA_I2C4_SDA			0x40000083
+ 		>;
+ 	};
+ 
+@@ -313,19 +313,19 @@ MX8MM_IOMUXC_SAI5_MCLK_GPIO3_IO25		0x19
+ 
+ 	pinctrl_uart1: uart1grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SAI2_RXC_UART1_DCE_RX		0x140
+-			MX8MM_IOMUXC_SAI2_RXFS_UART1_DCE_TX		0x140
+-			MX8MM_IOMUXC_SAI2_RXD0_UART1_DCE_RTS_B		0x140
+-			MX8MM_IOMUXC_SAI2_TXFS_UART1_DCE_CTS_B		0x140
++			MX8MM_IOMUXC_SAI2_RXC_UART1_DCE_RX		0x0
++			MX8MM_IOMUXC_SAI2_RXFS_UART1_DCE_TX		0x0
++			MX8MM_IOMUXC_SAI2_RXD0_UART1_DCE_RTS_B		0x0
++			MX8MM_IOMUXC_SAI2_TXFS_UART1_DCE_CTS_B		0x0
+ 		>;
+ 	};
+ 
+ 	pinctrl_uart2: uart2grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SAI3_TXFS_UART2_DCE_RX		0x140
+-			MX8MM_IOMUXC_SAI3_TXC_UART2_DCE_TX		0x140
+-			MX8MM_IOMUXC_SAI3_RXD_UART2_DCE_RTS_B		0x140
+-			MX8MM_IOMUXC_SAI3_RXC_UART2_DCE_CTS_B		0x140
++			MX8MM_IOMUXC_SAI3_TXFS_UART2_DCE_RX		0x0
++			MX8MM_IOMUXC_SAI3_TXC_UART2_DCE_TX		0x0
++			MX8MM_IOMUXC_SAI3_RXD_UART2_DCE_RTS_B		0x0
++			MX8MM_IOMUXC_SAI3_RXC_UART2_DCE_CTS_B		0x0
+ 		>;
+ 	};
+ 
+@@ -337,40 +337,40 @@ MX8MM_IOMUXC_NAND_CE1_B_GPIO3_IO2		0x19
+ 
+ 	pinctrl_usdhc2: usdhc2grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x190
++			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x90
+ 			MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD			0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0		0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA1_USDHC2_DATA1		0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2		0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3		0x1d0
+-			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x019
+-			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0x1d0
++			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x19
++			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0xd0
+ 		>;
+ 	};
+ 
+ 	pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x194
++			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x94
+ 			MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD			0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0		0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA1_USDHC2_DATA1		0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2		0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3		0x1d4
+-			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x019
+-			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0x1d0
++			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x19
++			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0xd0
+ 		>;
+ 	};
+ 
+ 	pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x196
++			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x96
+ 			MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD			0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0		0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA1_USDHC2_DATA1		0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2		0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3		0x1d6
+-			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x019
+-			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0x1d0
++			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x19
++			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0xd0
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl.dts b/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl.dts
+index dcec57c20399e..aab8e24216501 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-bl.dts
+@@ -279,8 +279,8 @@ MX8MM_IOMUXC_SAI3_MCLK_GPIO5_IO2		0x19
+ 
+ 	pinctrl_i2c4: i2c4grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_I2C4_SCL_I2C4_SCL			0x400001c3
+-			MX8MM_IOMUXC_I2C4_SDA_I2C4_SDA			0x400001c3
++			MX8MM_IOMUXC_I2C4_SCL_I2C4_SCL			0x40000083
++			MX8MM_IOMUXC_I2C4_SDA_I2C4_SDA			0x40000083
+ 		>;
+ 	};
+ 
+@@ -292,19 +292,19 @@ MX8MM_IOMUXC_SPDIF_RX_PWM2_OUT			0x19
+ 
+ 	pinctrl_uart1: uart1grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SAI2_RXC_UART1_DCE_RX		0x140
+-			MX8MM_IOMUXC_SAI2_RXFS_UART1_DCE_TX		0x140
+-			MX8MM_IOMUXC_SAI2_RXD0_UART1_DCE_RTS_B		0x140
+-			MX8MM_IOMUXC_SAI2_TXFS_UART1_DCE_CTS_B		0x140
++			MX8MM_IOMUXC_SAI2_RXC_UART1_DCE_RX		0x0
++			MX8MM_IOMUXC_SAI2_RXFS_UART1_DCE_TX		0x0
++			MX8MM_IOMUXC_SAI2_RXD0_UART1_DCE_RTS_B		0x0
++			MX8MM_IOMUXC_SAI2_TXFS_UART1_DCE_CTS_B		0x0
+ 		>;
+ 	};
+ 
+ 	pinctrl_uart2: uart2grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SAI3_TXFS_UART2_DCE_RX		0x140
+-			MX8MM_IOMUXC_SAI3_TXC_UART2_DCE_TX		0x140
+-			MX8MM_IOMUXC_SAI3_RXD_UART2_DCE_RTS_B		0x140
+-			MX8MM_IOMUXC_SAI3_RXC_UART2_DCE_CTS_B		0x140
++			MX8MM_IOMUXC_SAI3_TXFS_UART2_DCE_RX		0x0
++			MX8MM_IOMUXC_SAI3_TXC_UART2_DCE_TX		0x0
++			MX8MM_IOMUXC_SAI3_RXD_UART2_DCE_RTS_B		0x0
++			MX8MM_IOMUXC_SAI3_RXC_UART2_DCE_CTS_B		0x0
+ 		>;
+ 	};
+ 
+@@ -316,40 +316,40 @@ MX8MM_IOMUXC_NAND_CE1_B_GPIO3_IO2		0x19
+ 
+ 	pinctrl_usdhc2: usdhc2grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x190
++			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x90
+ 			MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD			0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0		0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA1_USDHC2_DATA1		0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2		0x1d0
+ 			MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3		0x1d0
+-			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x019
+-			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0x1d0
++			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x19
++			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0xd0
+ 		>;
+ 	};
+ 
+ 	pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x194
++			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x94
+ 			MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD			0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0		0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA1_USDHC2_DATA1		0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2		0x1d4
+ 			MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3		0x1d4
+-			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x019
+-			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0x1d0
++			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x19
++			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0xd0
+ 		>;
+ 	};
+ 
+ 	pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x196
++			MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK			0x96
+ 			MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD			0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0		0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA1_USDHC2_DATA1		0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA2_USDHC2_DATA2		0x1d6
+ 			MX8MM_IOMUXC_SD2_DATA3_USDHC2_DATA3		0x1d6
+-			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x019
+-			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0x1d0
++			MX8MM_IOMUXC_SD2_CD_B_GPIO2_IO12		0x19
++			MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT		0xd0
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-osm-s.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-kontron-osm-s.dtsi
+index 6e75ab879bf59..60abcb636cedf 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-osm-s.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-osm-s.dtsi
+@@ -210,7 +210,7 @@ rv3028: rtc@52 {
+ 		reg = <0x52>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&pinctrl_rtc>;
+-		interrupts-extended = <&gpio4 1 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts-extended = <&gpio4 1 IRQ_TYPE_LEVEL_LOW>;
+ 		trickle-diode-disable;
+ 	};
+ };
+@@ -252,8 +252,8 @@ MX8MM_IOMUXC_ECSPI1_SS0_GPIO5_IO9		0x19
+ 
+ 	pinctrl_i2c1: i2c1grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_I2C1_SCL_I2C1_SCL			0x400001c3
+-			MX8MM_IOMUXC_I2C1_SDA_I2C1_SDA			0x400001c3
++			MX8MM_IOMUXC_I2C1_SCL_I2C1_SCL			0x40000083
++			MX8MM_IOMUXC_I2C1_SDA_I2C1_SDA			0x40000083
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-kontron-sl.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-kontron-sl.dtsi
+index 1f8326613ee9e..2076148e08627 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-kontron-sl.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-kontron-sl.dtsi
+@@ -237,8 +237,8 @@ MX8MM_IOMUXC_ECSPI1_SS0_GPIO5_IO9		0x19
+ 
+ 	pinctrl_i2c1: i2c1grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_I2C1_SCL_I2C1_SCL			0x400001c3
+-			MX8MM_IOMUXC_I2C1_SDA_I2C1_SDA			0x400001c3
++			MX8MM_IOMUXC_I2C1_SCL_I2C1_SCL			0x40000083
++			MX8MM_IOMUXC_I2C1_SDA_I2C1_SDA			0x40000083
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+index 6425773f68e0a..bbbaf2165ea28 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+@@ -47,17 +47,6 @@ pps {
+ 		gpios = <&gpio1 15 GPIO_ACTIVE_HIGH>;
+ 		status = "okay";
+ 	};
+-
+-	reg_usb_otg1_vbus: regulator-usb-otg1 {
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&pinctrl_reg_usb1_en>;
+-		compatible = "regulator-fixed";
+-		regulator-name = "usb_otg1_vbus";
+-		gpio = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-	};
+ };
+ 
+ /* off-board header */
+@@ -144,9 +133,10 @@ &uart3 {
+ };
+ 
+ &usbotg1 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&pinctrl_usbotg1>;
+ 	dr_mode = "otg";
+ 	over-current-active-low;
+-	vbus-supply = <&reg_usb_otg1_vbus>;
+ 	status = "okay";
+ };
+ 
+@@ -204,14 +194,6 @@ MX8MM_IOMUXC_GPIO1_IO15_GPIO1_IO15	0x41
+ 		>;
+ 	};
+ 
+-	pinctrl_reg_usb1_en: regusb1grp {
+-		fsl,pins = <
+-			MX8MM_IOMUXC_GPIO1_IO10_GPIO1_IO10	0x41
+-			MX8MM_IOMUXC_GPIO1_IO12_GPIO1_IO12	0x141
+-			MX8MM_IOMUXC_GPIO1_IO13_USB1_OTG_OC	0x41
+-		>;
+-	};
+-
+ 	pinctrl_spi2: spi2grp {
+ 		fsl,pins = <
+ 			MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK	0xd6
+@@ -234,4 +216,11 @@ MX8MM_IOMUXC_UART3_RXD_UART3_DCE_RX	0x140
+ 			MX8MM_IOMUXC_UART3_TXD_UART3_DCE_TX	0x140
+ 		>;
+ 	};
++
++	pinctrl_usbotg1: usbotg1grp {
++		fsl,pins = <
++			MX8MM_IOMUXC_GPIO1_IO12_GPIO1_IO12	0x141
++			MX8MM_IOMUXC_GPIO1_IO13_USB1_OTG_OC	0x41
++		>;
++	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
+index 5828c9d7821de..b5ce7b14b5434 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
+@@ -121,7 +121,7 @@ &ecspi1 {
+ 	flash@0 {	/* W25Q128JVEI */
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+-		spi-max-frequency = <100000000>;	/* Up to 133 MHz */
++		spi-max-frequency = <40000000>;
+ 		spi-tx-bus-width = <1>;
+ 		spi-rx-bus-width = <1>;
+ 	};
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+index cc9d468b43ab8..92f8cc05fe9da 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+@@ -23,7 +23,7 @@ hdmi-connector {
+ 
+ 		port {
+ 			hdmi_connector_in: endpoint {
+-				remote-endpoint = <&adv7533_out>;
++				remote-endpoint = <&adv7535_out>;
+ 			};
+ 		};
+ 	};
+@@ -107,6 +107,13 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
+ 		enable-active-high;
+ 	};
+ 
++	reg_vext_3v3: regulator-vext-3v3 {
++		compatible = "regulator-fixed";
++		regulator-name = "VEXT_3V3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++	};
++
+ 	sound {
+ 		compatible = "simple-audio-card";
+ 		simple-audio-card,name = "wm8960-audio";
+@@ -342,7 +349,7 @@ BUCK4 {
+ 				regulator-always-on;
+ 			};
+ 
+-			BUCK5 {
++			reg_buck5: BUCK5 {
+ 				regulator-name = "BUCK5";
+ 				regulator-min-microvolt = <1650000>;
+ 				regulator-max-microvolt = <1950000>;
+@@ -393,14 +400,16 @@ &i2c2 {
+ 
+ 	hdmi@3d {
+ 		compatible = "adi,adv7535";
+-		reg = <0x3d>, <0x3c>, <0x3e>, <0x3f>;
+-		reg-names = "main", "cec", "edid", "packet";
++		reg = <0x3d>;
++		interrupt-parent = <&gpio1>;
++		interrupts = <9 IRQ_TYPE_EDGE_FALLING>;
+ 		adi,dsi-lanes = <4>;
+-		adi,input-depth = <8>;
+-		adi,input-colorspace = "rgb";
+-		adi,input-clock = "1x";
+-		adi,input-style = <1>;
+-		adi,input-justification = "evenly";
++		avdd-supply = <&reg_buck5>;
++		dvdd-supply = <&reg_buck5>;
++		pvdd-supply = <&reg_buck5>;
++		a2vdd-supply = <&reg_buck5>;
++		v3p3-supply = <&reg_vext_3v3>;
++		v1p2-supply = <&reg_buck5>;
+ 
+ 		ports {
+ 			#address-cells = <1>;
+@@ -409,7 +418,7 @@ ports {
+ 			port@0 {
+ 				reg = <0>;
+ 
+-				adv7533_in: endpoint {
++				adv7535_in: endpoint {
+ 					remote-endpoint = <&dsi_out>;
+ 				};
+ 			};
+@@ -417,7 +426,7 @@ adv7533_in: endpoint {
+ 			port@1 {
+ 				reg = <1>;
+ 
+-				adv7533_out: endpoint {
++				adv7535_out: endpoint {
+ 					remote-endpoint = <&hdmi_connector_in>;
+ 				};
+ 			};
+@@ -502,7 +511,7 @@ port@1 {
+ 			reg = <1>;
+ 
+ 			dsi_out: endpoint {
+-				remote-endpoint = <&adv7533_in>;
++				remote-endpoint = <&adv7535_in>;
+ 				data-lanes = <1 2 3 4>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/freescale/imx8qm-ss-dma.dtsi b/arch/arm64/boot/dts/freescale/imx8qm-ss-dma.dtsi
+index 8439dd6b39353..459ba2b9b7f31 100644
+--- a/arch/arm64/boot/dts/freescale/imx8qm-ss-dma.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8qm-ss-dma.dtsi
+@@ -96,15 +96,30 @@ &edma2 {
+ 	status = "okay";
+ };
+ 
++/* It is eDMA1 in 8QM RM, but 8QXP it is eDMA3 */
+ &edma3 {
++	reg = <0x5a9f0000 0x210000>;
++	dma-channels = <10>;
++	interrupts = <GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 426 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 427 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 428 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 429 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 430 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 431 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 432 IRQ_TYPE_LEVEL_HIGH>,
++		     <GIC_SPI 433 IRQ_TYPE_LEVEL_HIGH>;
+ 	power-domains = <&pd IMX_SC_R_DMA_1_CH0>,
+-		     <&pd IMX_SC_R_DMA_1_CH1>,
+-		     <&pd IMX_SC_R_DMA_1_CH2>,
+-		     <&pd IMX_SC_R_DMA_1_CH3>,
+-		     <&pd IMX_SC_R_DMA_1_CH4>,
+-		     <&pd IMX_SC_R_DMA_1_CH5>,
+-		     <&pd IMX_SC_R_DMA_1_CH6>,
+-		     <&pd IMX_SC_R_DMA_1_CH7>;
++			<&pd IMX_SC_R_DMA_1_CH1>,
++			<&pd IMX_SC_R_DMA_1_CH2>,
++			<&pd IMX_SC_R_DMA_1_CH3>,
++			<&pd IMX_SC_R_DMA_1_CH4>,
++			<&pd IMX_SC_R_DMA_1_CH5>,
++			<&pd IMX_SC_R_DMA_1_CH6>,
++			<&pd IMX_SC_R_DMA_1_CH7>,
++			<&pd IMX_SC_R_DMA_1_CH8>,
++			<&pd IMX_SC_R_DMA_1_CH9>;
+ };
+ 
+ &flexcan1 {
+diff --git a/arch/arm64/boot/dts/lg/lg1312.dtsi b/arch/arm64/boot/dts/lg/lg1312.dtsi
+index 48ec4ebec0a83..b864ffa74ea8b 100644
+--- a/arch/arm64/boot/dts/lg/lg1312.dtsi
++++ b/arch/arm64/boot/dts/lg/lg1312.dtsi
+@@ -126,7 +126,6 @@ eth0: ethernet@c1b00000 {
+ 	amba {
+ 		#address-cells = <2>;
+ 		#size-cells = <1>;
+-		#interrupt-cells = <3>;
+ 
+ 		compatible = "simple-bus";
+ 		interrupt-parent = <&gic>;
+diff --git a/arch/arm64/boot/dts/lg/lg1313.dtsi b/arch/arm64/boot/dts/lg/lg1313.dtsi
+index 3869460aa5dcb..996fb39bb50c1 100644
+--- a/arch/arm64/boot/dts/lg/lg1313.dtsi
++++ b/arch/arm64/boot/dts/lg/lg1313.dtsi
+@@ -126,7 +126,6 @@ eth0: ethernet@c3700000 {
+ 	amba {
+ 		#address-cells = <2>;
+ 		#size-cells = <1>;
+-		#interrupt-cells = <3>;
+ 
+ 		compatible = "simple-bus";
+ 		interrupt-parent = <&gic>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index e300145ad1a6f..1cc3fa1c354de 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -431,14 +431,14 @@ xor11 {
+ 			crypto: crypto@90000 {
+ 				compatible = "inside-secure,safexcel-eip97ies";
+ 				reg = <0x90000 0x20000>;
+-				interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
+-					     <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
++				interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
+ 					     <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
+-					     <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>;
+-				interrupt-names = "mem", "ring0", "ring1",
+-						  "ring2", "ring3", "eip";
++					     <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
++					     <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
++				interrupt-names = "ring0", "ring1", "ring2",
++						  "ring3", "eip", "mem";
+ 				clocks = <&nb_periph_clk 15>;
+ 			};
+ 
+diff --git a/arch/arm64/boot/dts/marvell/armada-ap80x.dtsi b/arch/arm64/boot/dts/marvell/armada-ap80x.dtsi
+index 2c920e22cec2b..7ec7c789d87ef 100644
+--- a/arch/arm64/boot/dts/marvell/armada-ap80x.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-ap80x.dtsi
+@@ -138,7 +138,6 @@ pmu {
+ 
+ 			odmi: odmi@300000 {
+ 				compatible = "marvell,odmi-controller";
+-				interrupt-controller;
+ 				msi-controller;
+ 				marvell,odmi-frames = <4>;
+ 				reg = <0x300000 0x4000>,
+diff --git a/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi b/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
+index 4ec1aae0a3a9c..7e595ac80043a 100644
+--- a/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-cp11x.dtsi
+@@ -511,14 +511,14 @@ CP11X_LABEL(sdhci0): mmc@780000 {
+ 		CP11X_LABEL(crypto): crypto@800000 {
+ 			compatible = "inside-secure,safexcel-eip197b";
+ 			reg = <0x800000 0x200000>;
+-			interrupts = <87 IRQ_TYPE_LEVEL_HIGH>,
+-				<88 IRQ_TYPE_LEVEL_HIGH>,
++			interrupts = <88 IRQ_TYPE_LEVEL_HIGH>,
+ 				<89 IRQ_TYPE_LEVEL_HIGH>,
+ 				<90 IRQ_TYPE_LEVEL_HIGH>,
+ 				<91 IRQ_TYPE_LEVEL_HIGH>,
+-				<92 IRQ_TYPE_LEVEL_HIGH>;
+-			interrupt-names = "mem", "ring0", "ring1",
+-				"ring2", "ring3", "eip";
++				<92 IRQ_TYPE_LEVEL_HIGH>,
++				<87 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "ring0", "ring1", "ring2", "ring3",
++					  "eip", "mem";
+ 			clock-names = "core", "reg";
+ 			clocks = <&CP11X_LABEL(clk) 1 26>,
+ 				 <&CP11X_LABEL(clk) 1 17>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+index c46682150e502..26cb3268ccbaa 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
+@@ -75,6 +75,7 @@ led-1 {
+ 
+ 	memory@40000000 {
+ 		reg = <0 0x40000000 0 0x40000000>;
++		device_type = "memory";
+ 	};
+ 
+ 	reg_1p8v: regulator-1p8v {
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+index 2dc1bdc74e212..5c26021fc4cf1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+@@ -57,6 +57,7 @@ key-wps {
+ 
+ 	memory@40000000 {
+ 		reg = <0 0x40000000 0 0x20000000>;
++		device_type = "memory";
+ 	};
+ 
+ 	reg_1p8v: regulator-1p8v {
+diff --git a/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3.dts b/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3.dts
+index b876e501216be..e1ec2cccf4444 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3.dts
+@@ -43,7 +43,7 @@ fan: pwm-fan {
+ 		#cooling-cells = <2>;
+ 		/* cooling level (0, 1, 2) - pwm inverted */
+ 		cooling-levels = <255 96 0>;
+-		pwms = <&pwm 0 10000 0>;
++		pwms = <&pwm 0 10000>;
+ 		status = "okay";
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts b/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts
+index 3ef371ca254e8..2f884c24f1eb4 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts
+@@ -237,12 +237,13 @@ &spi0 {
+ 	pinctrl-0 = <&spi_flash_pins>;
+ 	cs-gpios = <0>, <0>;
+ 	status = "okay";
+-	spi_nand: spi_nand@0 {
++
++	spi_nand: flash@0 {
+ 		compatible = "spi-nand";
+ 		reg = <0>;
+ 		spi-max-frequency = <10000000>;
+-		spi-tx-buswidth = <4>;
+-		spi-rx-buswidth = <4>;
++		spi-tx-bus-width = <4>;
++		spi-rx-bus-width = <4>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
+index fc751e049953c..d974739eae1c9 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
+@@ -153,6 +153,7 @@ infracfg: infracfg@10001000 {
+ 			compatible = "mediatek,mt7986-infracfg", "syscon";
+ 			reg = <0 0x10001000 0 0x1000>;
+ 			#clock-cells = <1>;
++			#reset-cells = <1>;
+ 		};
+ 
+ 		wed_pcie: wed-pcie@10003000 {
+@@ -234,7 +235,6 @@ crypto: crypto@10320000 {
+ 				     <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "ring0", "ring1", "ring2", "ring3";
+ 			clocks = <&infracfg CLK_INFRA_EIP97_CK>;
+-			clock-names = "infra_eip97_ck";
+ 			assigned-clocks = <&topckgen CLK_TOP_EIP_B_SEL>;
+ 			assigned-clock-parents = <&apmixedsys CLK_APMIXED_NET2PLL>;
+ 			status = "disabled";
+@@ -243,7 +243,6 @@ crypto: crypto@10320000 {
+ 		pwm: pwm@10048000 {
+ 			compatible = "mediatek,mt7986-pwm";
+ 			reg = <0 0x10048000 0 0x1000>;
+-			#clock-cells = <1>;
+ 			#pwm-cells = <2>;
+ 			interrupts = <GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&topckgen CLK_TOP_PWM_SEL>,
+diff --git a/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts b/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts
+index dde190442e386..57dcaeef31d7f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts
+@@ -152,12 +152,13 @@ &spi0 {
+ 	pinctrl-0 = <&spi_flash_pins>;
+ 	cs-gpios = <0>, <0>;
+ 	status = "okay";
+-	spi_nand: spi_nand@0 {
++
++	spi_nand: flash@0 {
+ 		compatible = "spi-nand";
+ 		reg = <0>;
+ 		spi-max-frequency = <10000000>;
+-		spi-tx-buswidth = <4>;
+-		spi-rx-buswidth = <4>;
++		spi-tx-bus-width = <4>;
++		spi-rx-bus-width = <4>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+index a11adeb29b1f2..0d3c7b8162ff0 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+@@ -373,6 +373,10 @@ pen_eject {
+ };
+ 
+ &cros_ec {
++	cbas {
++		compatible = "google,cros-cbas";
++	};
++
+ 	keyboard-controller {
+ 		compatible = "google,cros-ec-keyb-switches";
+ 	};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+index 4864c39e53a4f..e73113cb51f53 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+@@ -340,6 +340,10 @@ touch_pin_reset: pin_reset {
+ };
+ 
+ &cros_ec {
++	cbas {
++		compatible = "google,cros-cbas";
++	};
++
+ 	keyboard-controller {
+ 		compatible = "google,cros-ec-keyb-switches";
+ 	};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+index d5f41c6c98814..181da69d18f46 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+@@ -344,6 +344,10 @@ rst_pin {
+ };
+ 
+ &cros_ec {
++	cbas {
++		compatible = "google,cros-cbas";
++	};
++
+ 	keyboard-controller {
+ 		compatible = "google,cros-ec-keyb-switches";
+ 	};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 7881a27be0297..49a1c3ccb0041 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -935,10 +935,6 @@ usbc_extcon: extcon0 {
+ 			google,usb-port-id = <0>;
+ 		};
+ 
+-		cbas {
+-			compatible = "google,cros-cbas";
+-		};
+-
+ 		typec {
+ 			compatible = "google,cros-ec-typec";
+ 			#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186.dtsi b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+index 2fec6fd1c1a71..84ec6c1aa12b9 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+@@ -931,11 +931,17 @@ power-domain@MT8186_POWER_DOMAIN_CSIRX_TOP {
+ 
+ 				power-domain@MT8186_POWER_DOMAIN_SSUSB {
+ 					reg = <MT8186_POWER_DOMAIN_SSUSB>;
++					clocks = <&topckgen CLK_TOP_USB_TOP>,
++						 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_REF>;
++					clock-names = "sys_ck", "ref_ck";
+ 					#power-domain-cells = <0>;
+ 				};
+ 
+ 				power-domain@MT8186_POWER_DOMAIN_SSUSB_P1 {
+ 					reg = <MT8186_POWER_DOMAIN_SSUSB_P1>;
++					clocks = <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_SYS>,
++						 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_REF>;
++					clock-names = "sys_ck", "ref_ck";
+ 					#power-domain-cells = <0>;
+ 				};
+ 
+@@ -1061,7 +1067,7 @@ power-domain@MT8186_POWER_DOMAIN_VENC {
+ 						reg = <MT8186_POWER_DOMAIN_VENC>;
+ 						clocks = <&topckgen CLK_TOP_VENC>,
+ 							 <&vencsys CLK_VENC_CKE1_VENC>;
+-						clock-names = "venc0", "larb";
++						clock-names = "venc0", "subsys-larb";
+ 						mediatek,infracfg = <&infracfg_ao>;
+ 						#power-domain-cells = <0>;
+ 					};
+@@ -1530,8 +1536,9 @@ ssusb0: usb@11201000 {
+ 			clocks = <&topckgen CLK_TOP_USB_TOP>,
+ 				 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_REF>,
+ 				 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_HCLK>,
+-				 <&infracfg_ao CLK_INFRA_AO_ICUSB>;
+-			clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck";
++				 <&infracfg_ao CLK_INFRA_AO_ICUSB>,
++				 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_XHCI>;
++			clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck", "xhci_ck";
+ 			interrupts = <GIC_SPI 303 IRQ_TYPE_LEVEL_HIGH 0>;
+ 			phys = <&u2port0 PHY_TYPE_USB2>;
+ 			power-domains = <&spm MT8186_POWER_DOMAIN_SSUSB>;
+@@ -1595,8 +1602,9 @@ ssusb1: usb@11281000 {
+ 			clocks = <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_SYS>,
+ 				 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_REF>,
+ 				 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_HCLK>,
+-				 <&clk26m>;
+-			clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck";
++				 <&clk26m>,
++				 <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_XHCI>;
++			clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck", "xhci_ck";
+ 			interrupts = <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH 0>;
+ 			phys = <&u2port1 PHY_TYPE_USB2>, <&u3port1 PHY_TYPE_USB3>;
+ 			power-domains = <&spm MT8186_POWER_DOMAIN_SSUSB_P1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+index f2281250ac35d..02ce05bc151ae 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+@@ -1336,10 +1336,6 @@ cros_ec: ec@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		base_detection: cbas {
+-			compatible = "google,cros-cbas";
+-		};
+-
+ 		cros_ec_pwm: pwm {
+ 			compatible = "google,cros-ec-pwm";
+ 			#pwm-cells = <1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 69f4cded5dbbf..f1fc14e53f8c7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -1770,7 +1770,7 @@ vcodec_enc: vcodec@17020000 {
+ 			mediatek,scp = <&scp>;
+ 			power-domains = <&spm MT8192_POWER_DOMAIN_VENC>;
+ 			clocks = <&vencsys CLK_VENC_SET1_VENC>;
+-			clock-names = "venc-set1";
++			clock-names = "venc_sel";
+ 			assigned-clocks = <&topckgen CLK_TOP_VENC_SEL>;
+ 			assigned-clock-parents = <&topckgen CLK_TOP_UNIVPLL_D4>;
+ 		};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r1.dts b/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r1.dts
+index 2d5e8f371b6de..a82d716f10d44 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r1.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r1.dts
+@@ -23,3 +23,7 @@ &sound {
+ &ts_10 {
+ 	status = "okay";
+ };
++
++&watchdog {
++	/delete-property/ mediatek,disable-extrst;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r2.dts b/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r2.dts
+index 2586c32ce6e6f..2fe20e0dad836 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r2.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r2.dts
+@@ -43,3 +43,7 @@ &sound {
+ &ts_10 {
+ 	status = "okay";
+ };
++
++&watchdog {
++	/delete-property/ mediatek,disable-extrst;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r3.dts b/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r3.dts
+index f54f9477b99da..dd294ca98194c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r3.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry-tomato-r3.dts
+@@ -44,3 +44,7 @@ &sound {
+ &ts_10 {
+ 	status = "okay";
+ };
++
++&watchdog {
++	/delete-property/ mediatek,disable-extrst;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-demo.dts b/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
+index 69c7f3954ae59..4127cb84eba41 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
+@@ -128,6 +128,7 @@ mt6360: pmic@34 {
+ 		compatible = "mediatek,mt6360";
+ 		reg = <0x34>;
+ 		interrupt-controller;
++		#interrupt-cells = <1>;
+ 		interrupts-extended = <&pio 101 IRQ_TYPE_EDGE_FALLING>;
+ 		interrupt-names = "IRQB";
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts
+index ea13c4a7027c4..81a82933e3500 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts
+@@ -175,7 +175,7 @@ ethernet@6800000 {
+ 			status = "okay";
+ 
+ 			phy-handle = <&mgbe0_phy>;
+-			phy-mode = "usxgmii";
++			phy-mode = "10gbase-r";
+ 
+ 			mdio {
+ 				#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 0b1330b521df4..cf4e501c84bcc 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -759,10 +759,10 @@ pcie0: pci@20000000 {
+ 
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map = <0 0 0 1 &intc 0 75 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+-					<0 0 0 2 &intc 0 78 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+-					<0 0 0 3 &intc 0 79 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+-					<0 0 0 4 &intc 0 83 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
++			interrupt-map = <0 0 0 1 &intc 0 0 0 75 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
++					<0 0 0 2 &intc 0 0 0 78 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
++					<0 0 0 3 &intc 0 0 0 79 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
++					<0 0 0 4 &intc 0 0 0 83 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+ 			clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
+ 				 <&gcc GCC_PCIE0_AXI_M_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 2f275c84e5665..b33145b756ebe 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -806,13 +806,13 @@ pcie1: pci@10000000 {
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map = <0 0 0 1 &intc 0 142
++			interrupt-map = <0 0 0 1 &intc 0 0 142
+ 					 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+-					<0 0 0 2 &intc 0 143
++					<0 0 0 2 &intc 0 0 143
+ 					 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+-					<0 0 0 3 &intc 0 144
++					<0 0 0 3 &intc 0 0 144
+ 					 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+-					<0 0 0 4 &intc 0 145
++					<0 0 0 4 &intc 0 0 145
+ 					 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+ 			clocks = <&gcc GCC_SYS_NOC_PCIE1_AXI_CLK>,
+@@ -868,13 +868,13 @@ pcie0: pci@20000000 {
+ 			interrupt-names = "msi";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+-			interrupt-map = <0 0 0 1 &intc 0 75
++			interrupt-map = <0 0 0 1 &intc 0 0 75
+ 					 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+-					<0 0 0 2 &intc 0 78
++					<0 0 0 2 &intc 0 0 78
+ 					 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+-					<0 0 0 3 &intc 0 79
++					<0 0 0 3 &intc 0 0 79
+ 					 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+-					<0 0 0 4 &intc 0 83
++					<0 0 0 4 &intc 0 0 83
+ 					 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+ 
+ 			clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/qcm2290.dtsi b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+index d46e591e72b5c..40a8506553ef5 100644
+--- a/arch/arm64/boot/dts/qcom/qcm2290.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+@@ -418,6 +418,11 @@ tcsr_mutex: hwlock@340000 {
+ 			#hwlock-cells = <1>;
+ 		};
+ 
++		tcsr_regs: syscon@3c0000 {
++			compatible = "qcom,qcm2290-tcsr", "syscon";
++			reg = <0x0 0x003c0000 0x0 0x40000>;
++		};
++
+ 		tlmm: pinctrl@500000 {
+ 			compatible = "qcom,qcm2290-tlmm";
+ 			reg = <0x0 0x00500000 0x0 0x300000>;
+@@ -665,6 +670,8 @@ usb_qmpphy: phy@1615000 {
+ 
+ 			#phy-cells = <0>;
+ 
++			qcom,tcsr-reg = <&tcsr_regs 0xb244>;
++
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sa8540p.dtsi b/arch/arm64/boot/dts/qcom/sa8540p.dtsi
+index 96b2c59ad02b4..23888029cc117 100644
+--- a/arch/arm64/boot/dts/qcom/sa8540p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8540p.dtsi
+@@ -168,6 +168,9 @@ opp-2592000000 {
+ };
+ 
+ &gpucc {
++	/* SA8295P and SA8540P don't provide gfx.lvl */
++	/delete-property/ power-domains;
++
+ 	status = "disabled";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index b389d49d3ec8c..6760f6a340975 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -289,7 +289,7 @@ LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
+ 			BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x40000004>;
+-				entry-latency-us = <241>;
++				entry-latency-us = <2411>;
+ 				exit-latency-us = <1461>;
+ 				min-residency-us = <4488>;
+ 				local-timer-stop;
+@@ -297,7 +297,15 @@ BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+ 		};
+ 
+ 		domain-idle-states {
+-			CLUSTER_SLEEP_0: cluster-sleep-0 {
++			CLUSTER_SLEEP_APSS_OFF: cluster-sleep-0 {
++				compatible = "domain-idle-state";
++				arm,psci-suspend-param = <0x41000044>;
++				entry-latency-us = <3300>;
++				exit-latency-us = <3300>;
++				min-residency-us = <6000>;
++			};
++
++			CLUSTER_SLEEP_AOSS_SLEEP: cluster-sleep-1 {
+ 				compatible = "domain-idle-state";
+ 				arm,psci-suspend-param = <0x4100a344>;
+ 				entry-latency-us = <3263>;
+@@ -581,7 +589,7 @@ CPU_PD7: power-domain-cpu7 {
+ 
+ 		CLUSTER_PD: power-domain-cpu-cluster0 {
+ 			#power-domain-cells = <0>;
+-			domain-idle-states = <&CLUSTER_SLEEP_0>;
++			domain-idle-states = <&CLUSTER_SLEEP_APSS_OFF &CLUSTER_SLEEP_AOSS_SLEEP>;
+ 		};
+ 	};
+ 
+@@ -781,6 +789,7 @@ gcc: clock-controller@100000 {
+ 			clock-names = "bi_tcxo",
+ 				      "bi_tcxo_ao",
+ 				      "sleep_clk";
++			power-domains = <&rpmhpd SC8180X_CX>;
+ 		};
+ 
+ 		qupv3_id_0: geniqup@8c0000 {
+@@ -2714,10 +2723,8 @@ mdss_mdp: mdp@ae01000 {
+ 					      "core",
+ 					      "vsync";
+ 
+-				assigned-clocks = <&dispcc DISP_CC_MDSS_MDP_CLK>,
+-						  <&dispcc DISP_CC_MDSS_VSYNC_CLK>;
+-				assigned-clock-rates = <460000000>,
+-						       <19200000>;
++				assigned-clocks = <&dispcc DISP_CC_MDSS_VSYNC_CLK>;
++				assigned-clock-rates = <19200000>;
+ 
+ 				operating-points-v2 = <&mdp_opp_table>;
+ 				power-domains = <&rpmhpd SC8180X_MMCX>;
+@@ -3177,7 +3184,7 @@ edp_phy: phy@aec2a00 {
+ 				 <&dispcc DISP_CC_MDSS_AHB_CLK>;
+ 			clock-names = "aux", "cfg_ahb";
+ 
+-			power-domains = <&dispcc MDSS_GDSC>;
++			power-domains = <&rpmhpd SC8180X_MX>;
+ 
+ 			#clock-cells = <1>;
+ 			#phy-cells = <0>;
+@@ -3203,6 +3210,7 @@ dispcc: clock-controller@af00000 {
+ 				      "edp_phy_pll_link_clk",
+ 				      "edp_phy_pll_vco_div_clk";
+ 			power-domains = <&rpmhpd SC8180X_MMCX>;
++			required-opps = <&rpmhpd_opp_low_svs>;
+ 			#clock-cells = <1>;
+ 			#reset-cells = <1>;
+ 			#power-domain-cells = <1>;
+@@ -3241,7 +3249,7 @@ tsens1: thermal-sensor@c265000 {
+ 
+ 		aoss_qmp: power-controller@c300000 {
+ 			compatible = "qcom,sc8180x-aoss-qmp", "qcom,aoss-qmp";
+-			reg = <0x0 0x0c300000 0x0 0x100000>;
++			reg = <0x0 0x0c300000 0x0 0x400>;
+ 			interrupts = <GIC_SPI 389 IRQ_TYPE_EDGE_RISING>;
+ 			mboxes = <&apss_shared 0>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index 7e7bf3fb3be63..0a891a0122446 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -580,7 +580,7 @@ &mss_pil {
+ &pcie0 {
+ 	status = "okay";
+ 	perst-gpios = <&tlmm 35 GPIO_ACTIVE_LOW>;
+-	enable-gpio = <&tlmm 134 GPIO_ACTIVE_HIGH>;
++	wake-gpios = <&tlmm 134 GPIO_ACTIVE_HIGH>;
+ 
+ 	vddpe-3v3-supply = <&pcie0_3p3v_dual>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
+index b523b5fff7022..13d7e088aa2e1 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
+@@ -485,13 +485,13 @@ &pmi8998_charger {
+ };
+ 
+ &q6afedai {
+-	qi2s@22 {
+-		reg = <22>;
++	dai@22 {
++		reg = <QUATERNARY_MI2S_RX>;
+ 		qcom,sd-lines = <1>;
+ 	};
+ 
+-	qi2s@23 {
+-		reg = <23>;
++	dai@23 {
++		reg = <QUATERNARY_MI2S_TX>;
+ 		qcom,sd-lines = <0>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 9e594d21ecd80..88ed543de81bf 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -3353,8 +3353,8 @@ slpi_pas: remoteproc@5c00000 {
+ 
+ 			qcom,qmp = <&aoss_qmp>;
+ 
+-			power-domains = <&rpmhpd SDM845_CX>,
+-					<&rpmhpd SDM845_MX>;
++			power-domains = <&rpmhpd SDM845_LCX>,
++					<&rpmhpd SDM845_LMX>;
+ 			power-domain-names = "lcx", "lmx";
+ 
+ 			memory-region = <&slpi_mem>;
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index 839c603512403..87cbc4e8b1ed5 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -591,6 +591,11 @@ tcsr_mutex: hwlock@340000 {
+ 			#hwlock-cells = <1>;
+ 		};
+ 
++		tcsr_regs: syscon@3c0000 {
++			compatible = "qcom,sm6115-tcsr", "syscon";
++			reg = <0x0 0x003c0000 0x0 0x40000>;
++		};
++
+ 		tlmm: pinctrl@500000 {
+ 			compatible = "qcom,sm6115-tlmm";
+ 			reg = <0x0 0x00500000 0x0 0x400000>,
+@@ -856,6 +861,8 @@ usb_qmpphy: phy@1615000 {
+ 
+ 			#phy-cells = <0>;
+ 
++			qcom,tcsr-reg = <&tcsr_regs 0xb244>;
++
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 3221478663ac6..2678eac1ec794 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -1878,8 +1878,8 @@ pcie0: pci@1c00000 {
+ 			phys = <&pcie0_phy>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpio = <&tlmm 35 GPIO_ACTIVE_HIGH>;
+-			enable-gpio = <&tlmm 37 GPIO_ACTIVE_HIGH>;
++			perst-gpios = <&tlmm 35 GPIO_ACTIVE_HIGH>;
++			wake-gpios = <&tlmm 37 GPIO_ACTIVE_HIGH>;
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie0_default_state>;
+@@ -1972,7 +1972,7 @@ pcie1: pci@1c08000 {
+ 			phys = <&pcie1_phy>;
+ 			phy-names = "pciephy";
+ 
+-			perst-gpio = <&tlmm 102 GPIO_ACTIVE_HIGH>;
++			perst-gpios = <&tlmm 102 GPIO_ACTIVE_HIGH>;
+ 			enable-gpio = <&tlmm 104 GPIO_ACTIVE_HIGH>;
+ 
+ 			pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index f82480570d4b6..9d06fb0d4d1fd 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -1026,6 +1026,12 @@ uart20: serial@894000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_uart20_default>;
+ 				interrupts = <GIC_SPI 587 IRQ_TYPE_LEVEL_HIGH>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>;
++				interconnect-names = "qup-core",
++						     "qup-config";
+ 				status = "disabled";
+ 			};
+ 
+@@ -1418,6 +1424,12 @@ uart7: serial@99c000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_uart7_tx>, <&qup_uart7_rx>;
+ 				interrupts = <GIC_SPI 608 IRQ_TYPE_LEVEL_HIGH>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>;
++				interconnect-names = "qup-core",
++						     "qup-config";
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index e15564ed54cef..d7e68e0f57131 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -3047,7 +3047,7 @@ sram@c3f0000 {
+ 		spmi_bus: spmi@c400000 {
+ 			compatible = "qcom,spmi-pmic-arb";
+ 			reg = <0 0x0c400000 0 0x3000>,
+-			      <0 0x0c500000 0 0x4000000>,
++			      <0 0x0c500000 0 0x400000>,
+ 			      <0 0x0c440000 0 0x80000>,
+ 			      <0 0x0c4c0000 0 0x20000>,
+ 			      <0 0x0c42d000 0 0x4000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+index 4e67a03564971..504ac8c93faf5 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+@@ -658,7 +658,7 @@ channel7 {
+ 		avb0: ethernet@e6800000 {
+ 			compatible = "renesas,etheravb-r8a779a0",
+ 				     "renesas,etheravb-rcar-gen4";
+-			reg = <0 0xe6800000 0 0x800>;
++			reg = <0 0xe6800000 0 0x1000>;
+ 			interrupts = <GIC_SPI 256 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 257 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 258 IRQ_TYPE_LEVEL_HIGH>,
+@@ -706,7 +706,7 @@ avb0: ethernet@e6800000 {
+ 		avb1: ethernet@e6810000 {
+ 			compatible = "renesas,etheravb-r8a779a0",
+ 				     "renesas,etheravb-rcar-gen4";
+-			reg = <0 0xe6810000 0 0x800>;
++			reg = <0 0xe6810000 0 0x1000>;
+ 			interrupts = <GIC_SPI 281 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 282 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 283 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
+index d3d25e077c5d5..d7677595204dc 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779g0.dtsi
+@@ -161,11 +161,6 @@ L3_CA76_1: cache-controller-1 {
+ 		};
+ 	};
+ 
+-	psci {
+-		compatible = "arm,psci-1.0", "arm,psci-0.2";
+-		method = "smc";
+-	};
+-
+ 	extal_clk: extal {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+@@ -185,13 +180,24 @@ pmu_a76 {
+ 		interrupts-extended = <&gic GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ 
+-	/* External SCIF clock - to be overridden by boards that provide it */
++	psci {
++		compatible = "arm,psci-1.0", "arm,psci-0.2";
++		method = "smc";
++	};
++
++	/* External SCIF clocks - to be overridden by boards that provide them */
+ 	scif_clk: scif {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <0>;
+ 	};
+ 
++	scif_clk2: scif2 {
++		compatible = "fixed-clock";
++		#clock-cells = <0>;
++		clock-frequency = <0>;
++	};
++
+ 	soc: soc {
+ 		compatible = "simple-bus";
+ 		interrupt-parent = <&gic>;
+@@ -681,7 +687,7 @@ hscif2: serial@e6560000 {
+ 			interrupts = <GIC_SPI 248 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 516>,
+ 				 <&cpg CPG_CORE R8A779G0_CLK_SASYNCPERD1>,
+-				 <&scif_clk>;
++				 <&scif_clk2>;
+ 			clock-names = "fck", "brg_int", "scif_clk";
+ 			dmas = <&dmac0 0x35>, <&dmac0 0x34>,
+ 			       <&dmac1 0x35>, <&dmac1 0x34>;
+@@ -761,7 +767,7 @@ channel7 {
+ 		avb0: ethernet@e6800000 {
+ 			compatible = "renesas,etheravb-r8a779g0",
+ 				     "renesas,etheravb-rcar-gen4";
+-			reg = <0 0xe6800000 0 0x800>;
++			reg = <0 0xe6800000 0 0x1000>;
+ 			interrupts = <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
+@@ -808,7 +814,7 @@ avb0: ethernet@e6800000 {
+ 		avb1: ethernet@e6810000 {
+ 			compatible = "renesas,etheravb-r8a779g0",
+ 				     "renesas,etheravb-rcar-gen4";
+-			reg = <0 0xe6810000 0 0x800>;
++			reg = <0 0xe6810000 0 0x1000>;
+ 			interrupts = <GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 362 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1057,7 +1063,7 @@ scif4: serial@e6c40000 {
+ 			interrupts = <GIC_SPI 254 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 705>,
+ 				 <&cpg CPG_CORE R8A779G0_CLK_SASYNCPERD1>,
+-				 <&scif_clk>;
++				 <&scif_clk2>;
+ 			clock-names = "fck", "brg_int", "scif_clk";
+ 			dmas = <&dmac0 0x59>, <&dmac0 0x58>,
+ 			       <&dmac1 0x59>, <&dmac1 0x58>;
+@@ -1777,6 +1783,37 @@ ssi0: ssi-0 {
+ 			};
+ 		};
+ 
++		mmc0: mmc@ee140000 {
++			compatible = "renesas,sdhi-r8a779g0",
++				     "renesas,rcar-gen4-sdhi";
++			reg = <0 0xee140000 0 0x2000>;
++			interrupts = <GIC_SPI 440 IRQ_TYPE_LEVEL_HIGH>;
++			clocks = <&cpg CPG_MOD 706>,
++				 <&cpg CPG_CORE R8A779G0_CLK_SD0H>;
++			clock-names = "core", "clkh";
++			power-domains = <&sysc R8A779G0_PD_ALWAYS_ON>;
++			resets = <&cpg 706>;
++			max-frequency = <200000000>;
++			iommus = <&ipmmu_ds0 32>;
++			status = "disabled";
++		};
++
++		rpc: spi@ee200000 {
++			compatible = "renesas,r8a779g0-rpc-if",
++				     "renesas,rcar-gen4-rpc-if";
++			reg = <0 0xee200000 0 0x200>,
++			      <0 0x08000000 0 0x04000000>,
++			      <0 0xee208000 0 0x100>;
++			reg-names = "regs", "dirmap", "wbuf";
++			interrupts = <GIC_SPI 225 IRQ_TYPE_LEVEL_HIGH>;
++			clocks = <&cpg CPG_MOD 629>;
++			power-domains = <&sysc R8A779G0_PD_ALWAYS_ON>;
++			resets = <&cpg 629>;
++			#address-cells = <1>;
++			#size-cells = <0>;
++			status = "disabled";
++		};
++
+ 		ipmmu_rt0: iommu@ee480000 {
+ 			compatible = "renesas,ipmmu-r8a779g0",
+ 				     "renesas,rcar-gen4-ipmmu-vmsa";
+@@ -1886,37 +1923,6 @@ ipmmu_mm: iommu@eefc0000 {
+ 			#iommu-cells = <1>;
+ 		};
+ 
+-		mmc0: mmc@ee140000 {
+-			compatible = "renesas,sdhi-r8a779g0",
+-				     "renesas,rcar-gen4-sdhi";
+-			reg = <0 0xee140000 0 0x2000>;
+-			interrupts = <GIC_SPI 440 IRQ_TYPE_LEVEL_HIGH>;
+-			clocks = <&cpg CPG_MOD 706>,
+-				 <&cpg CPG_CORE R8A779G0_CLK_SD0H>;
+-			clock-names = "core", "clkh";
+-			power-domains = <&sysc R8A779G0_PD_ALWAYS_ON>;
+-			resets = <&cpg 706>;
+-			max-frequency = <200000000>;
+-			iommus = <&ipmmu_ds0 32>;
+-			status = "disabled";
+-		};
+-
+-		rpc: spi@ee200000 {
+-			compatible = "renesas,r8a779g0-rpc-if",
+-				     "renesas,rcar-gen4-rpc-if";
+-			reg = <0 0xee200000 0 0x200>,
+-			      <0 0x08000000 0 0x04000000>,
+-			      <0 0xee208000 0 0x100>;
+-			reg-names = "regs", "dirmap", "wbuf";
+-			interrupts = <GIC_SPI 225 IRQ_TYPE_LEVEL_HIGH>;
+-			clocks = <&cpg CPG_MOD 629>;
+-			power-domains = <&sysc R8A779G0_PD_ALWAYS_ON>;
+-			resets = <&cpg 629>;
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			status = "disabled";
+-		};
+-
+ 		gic: interrupt-controller@f1000000 {
+ 			compatible = "arm,gic-v3";
+ 			#interrupt-cells = <3>;
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+index 2ab231572d95f..b3f83d0ebcbb5 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g043u.dtsi
+@@ -109,7 +109,13 @@ irqc: interrupt-controller@110a0000 {
+ 			     <SOC_PERIPHERAL_IRQ(473) IRQ_TYPE_LEVEL_HIGH>,
+ 			     <SOC_PERIPHERAL_IRQ(474) IRQ_TYPE_LEVEL_HIGH>,
+ 			     <SOC_PERIPHERAL_IRQ(475) IRQ_TYPE_LEVEL_HIGH>,
+-			     <SOC_PERIPHERAL_IRQ(25) IRQ_TYPE_EDGE_RISING>;
++			     <SOC_PERIPHERAL_IRQ(25) IRQ_TYPE_EDGE_RISING>,
++			     <SOC_PERIPHERAL_IRQ(34) IRQ_TYPE_EDGE_RISING>,
++			     <SOC_PERIPHERAL_IRQ(35) IRQ_TYPE_EDGE_RISING>,
++			     <SOC_PERIPHERAL_IRQ(36) IRQ_TYPE_EDGE_RISING>,
++			     <SOC_PERIPHERAL_IRQ(37) IRQ_TYPE_EDGE_RISING>,
++			     <SOC_PERIPHERAL_IRQ(38) IRQ_TYPE_EDGE_RISING>,
++			     <SOC_PERIPHERAL_IRQ(39) IRQ_TYPE_EDGE_RISING>;
+ 		interrupt-names = "nmi",
+ 				  "irq0", "irq1", "irq2", "irq3",
+ 				  "irq4", "irq5", "irq6", "irq7",
+@@ -121,7 +127,9 @@ irqc: interrupt-controller@110a0000 {
+ 				  "tint20", "tint21", "tint22", "tint23",
+ 				  "tint24", "tint25", "tint26", "tint27",
+ 				  "tint28", "tint29", "tint30", "tint31",
+-				  "bus-err";
++				  "bus-err", "ec7tie1-0", "ec7tie2-0",
++				  "ec7tiovf-0", "ec7tie1-1", "ec7tie2-1",
++				  "ec7tiovf-1";
+ 		clocks = <&cpg CPG_MOD R9A07G043_IA55_CLK>,
+ 			<&cpg CPG_MOD R9A07G043_IA55_PCLK>;
+ 		clock-names = "clk", "pclk";
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+index 66f68fc2b2411..081d8f49db879 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g044.dtsi
+@@ -905,7 +905,27 @@ irqc: interrupt-controller@110a0000 {
+ 				     <GIC_SPI 472 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 473 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 474 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 475 IRQ_TYPE_LEVEL_HIGH>;
++				     <GIC_SPI 475 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 25 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 34 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 35 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 36 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 37 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 38 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 39 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "nmi", "irq0", "irq1", "irq2", "irq3",
++					  "irq4", "irq5", "irq6", "irq7",
++					  "tint0", "tint1", "tint2", "tint3",
++					  "tint4", "tint5", "tint6", "tint7",
++					  "tint8", "tint9", "tint10", "tint11",
++					  "tint12", "tint13", "tint14", "tint15",
++					  "tint16", "tint17", "tint18", "tint19",
++					  "tint20", "tint21", "tint22", "tint23",
++					  "tint24", "tint25", "tint26", "tint27",
++					  "tint28", "tint29", "tint30", "tint31",
++					  "bus-err", "ec7tie1-0", "ec7tie2-0",
++					  "ec7tiovf-0", "ec7tie1-1", "ec7tie2-1",
++					  "ec7tiovf-1";
+ 			clocks = <&cpg CPG_MOD R9A07G044_IA55_CLK>,
+ 				 <&cpg CPG_MOD R9A07G044_IA55_PCLK>;
+ 			clock-names = "clk", "pclk";
+diff --git a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+index 1f1d481dc7830..0d327464d2baf 100644
+--- a/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
++++ b/arch/arm64/boot/dts/renesas/r9a07g054.dtsi
+@@ -912,7 +912,27 @@ irqc: interrupt-controller@110a0000 {
+ 				     <GIC_SPI 472 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 473 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 474 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 475 IRQ_TYPE_LEVEL_HIGH>;
++				     <GIC_SPI 475 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 25 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 34 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 35 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 36 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 37 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 38 IRQ_TYPE_EDGE_RISING>,
++				     <GIC_SPI 39 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "nmi", "irq0", "irq1", "irq2", "irq3",
++					  "irq4", "irq5", "irq6", "irq7",
++					  "tint0", "tint1", "tint2", "tint3",
++					  "tint4", "tint5", "tint6", "tint7",
++					  "tint8", "tint9", "tint10", "tint11",
++					  "tint12", "tint13", "tint14", "tint15",
++					  "tint16", "tint17", "tint18", "tint19",
++					  "tint20", "tint21", "tint22", "tint23",
++					  "tint24", "tint25", "tint26", "tint27",
++					  "tint28", "tint29", "tint30", "tint31",
++					  "bus-err", "ec7tie1-0", "ec7tie2-0",
++					  "ec7tiovf-0", "ec7tie1-1", "ec7tie2-1",
++					  "ec7tiovf-1";
+ 			clocks = <&cpg CPG_MOD R9A07G054_IA55_CLK>,
+ 				 <&cpg CPG_MOD R9A07G054_IA55_PCLK>;
+ 			clock-names = "clk", "pclk";
+diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+index 3885ef3454ff6..50de17e4fb3f2 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+@@ -234,6 +234,7 @@ gpio_exp_74: gpio@74 {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 		interrupt-parent = <&gpio6>;
+ 		interrupts = <8 IRQ_TYPE_EDGE_FALLING>;
+ 
+@@ -294,6 +295,7 @@ gpio_exp_75: gpio@75 {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 		interrupt-parent = <&gpio6>;
+ 		interrupts = <4 IRQ_TYPE_EDGE_FALLING>;
+ 	};
+@@ -314,6 +316,7 @@ gpio_exp_76: gpio@76 {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 		interrupt-parent = <&gpio7>;
+ 		interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
+ 	};
+@@ -324,6 +327,7 @@ gpio_exp_77: gpio@77 {
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 		interrupt-parent = <&gpio5>;
+ 		interrupts = <9 IRQ_TYPE_EDGE_FALLING>;
+ 	};
+diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+index c19c0f1b3778f..92f96ec01385d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+@@ -597,6 +597,7 @@ vpu: video-codec@fdea0400 {
+ 		compatible = "rockchip,rk3568-vpu";
+ 		reg = <0x0 0xfdea0000 0x0 0x800>;
+ 		interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
++		interrupt-names = "vdpu";
+ 		clocks = <&cru ACLK_VPU>, <&cru HCLK_VPU>;
+ 		clock-names = "aclk", "hclk";
+ 		iommus = <&vdpu_mmu>;
+@@ -1123,7 +1124,7 @@ i2s2_2ch: i2s@fe420000 {
+ 		dmas = <&dmac1 4>, <&dmac1 5>;
+ 		dma-names = "tx", "rx";
+ 		resets = <&cru SRST_M_I2S2_2CH>;
+-		reset-names = "m";
++		reset-names = "tx-m";
+ 		rockchip,grf = <&grf>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&i2s2m0_sclktx
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-evb1-v10.dts b/arch/arm64/boot/dts/rockchip/rk3588-evb1-v10.dts
+index b9d789d57862c..bbbe00bcd14e7 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-evb1-v10.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-evb1-v10.dts
+@@ -351,6 +351,7 @@ pmic@0 {
+ 			    <&rk806_dvs2_null>, <&rk806_dvs3_null>;
+ 		pinctrl-names = "default";
+ 		spi-max-frequency = <1000000>;
++		system-power-controller;
+ 
+ 		vcc1-supply = <&vcc5v0_sys>;
+ 		vcc2-supply = <&vcc5v0_sys>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
+index f9f9749848bd9..1d262cd54c990 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588s.dtsi
+@@ -1589,7 +1589,6 @@ i2s2_2ch: i2s@fe490000 {
+ 		dmas = <&dmac1 0>, <&dmac1 1>;
+ 		dma-names = "tx", "rx";
+ 		power-domains = <&power RK3588_PD_AUDIO>;
+-		rockchip,trcm-sync-tx-only;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&i2s2m1_lrck
+ 			     &i2s2m1_sclk
+@@ -1610,7 +1609,6 @@ i2s3_2ch: i2s@fe4a0000 {
+ 		dmas = <&dmac1 2>, <&dmac1 3>;
+ 		dma-names = "tx", "rx";
+ 		power-domains = <&power RK3588_PD_AUDIO>;
+-		rockchip,trcm-sync-tx-only;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&i2s3_lrck
+ 			     &i2s3_sclk
+diff --git a/arch/arm64/boot/dts/ti/Makefile b/arch/arm64/boot/dts/ti/Makefile
+index 77a347f9f47d5..c1513f0fa47fd 100644
+--- a/arch/arm64/boot/dts/ti/Makefile
++++ b/arch/arm64/boot/dts/ti/Makefile
+@@ -57,6 +57,7 @@ dtb-$(CONFIG_ARCH_K3) += k3-am654-base-board.dtb
+ dtb-$(CONFIG_ARCH_K3) += k3-am654-gp-evm.dtb
+ dtb-$(CONFIG_ARCH_K3) += k3-am654-evm.dtb
+ dtb-$(CONFIG_ARCH_K3) += k3-am654-idk.dtb
++dtb-$(CONFIG_ARCH_K3) += k3-am654-base-board-rocktech-rk101-panel.dtbo
+ 
+ # Boards with J7200 SoC
+ k3-j7200-evm-dtbs := k3-j7200-common-proc-board.dtb k3-j7200-evm-quad-port-eth-exp.dtbo
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index e5c64c86d1d5a..2f318c5287581 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -623,6 +623,8 @@ usb0: usb@31000000 {
+ 			interrupt-names = "host", "peripheral";
+ 			maximum-speed = "high-speed";
+ 			dr_mode = "otg";
++			snps,usb2-gadget-lpm-disable;
++			snps,usb2-lpm-disable;
+ 		};
+ 	};
+ 
+@@ -646,6 +648,8 @@ usb1: usb@31100000 {
+ 			interrupt-names = "host", "peripheral";
+ 			maximum-speed = "high-speed";
+ 			dr_mode = "otg";
++			snps,usb2-gadget-lpm-disable;
++			snps,usb2-lpm-disable;
+ 		};
+ 	};
+ 
+@@ -753,9 +757,10 @@ dss: dss@30200000 {
+ 		      <0x00 0x30207000 0x00 0x1000>, /* ovr1 */
+ 		      <0x00 0x30208000 0x00 0x1000>, /* ovr2 */
+ 		      <0x00 0x3020a000 0x00 0x1000>, /* vp1: Used for OLDI */
+-		      <0x00 0x3020b000 0x00 0x1000>; /* vp2: Used as DPI Out */
++		      <0x00 0x3020b000 0x00 0x1000>, /* vp2: Used as DPI Out */
++		      <0x00 0x30201000 0x00 0x1000>; /* common1 */
+ 		reg-names = "common", "vidl1", "vid",
+-			    "ovr1", "ovr2", "vp1", "vp2";
++			    "ovr1", "ovr2", "vp1", "vp2", "common1";
+ 		power-domains = <&k3_pds 186 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 186 6>,
+ 			 <&dss_vp1_clk>,
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-mcu.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-mcu.dtsi
+index c4b0b91d70cf3..14eb9ba836d32 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-mcu.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-mcu.dtsi
+@@ -187,6 +187,8 @@ mcu_r5fss0: r5fss@79000000 {
+ 		ranges = <0x79000000 0x00 0x79000000 0x8000>,
+ 			 <0x79020000 0x00 0x79020000 0x8000>;
+ 		power-domains = <&k3_pds 7 TI_SCI_PD_EXCLUSIVE>;
++		status = "disabled";
++
+ 		mcu_r5fss0_core0: r5f@79000000 {
+ 			compatible = "ti,am62-r5f";
+ 			reg = <0x79000000 0x00008000>,
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-wakeup.dtsi
+index 19f42b39394ee..10a7059b2d9b5 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-wakeup.dtsi
+@@ -78,6 +78,7 @@ wkup_r5fss0: r5fss@78000000 {
+ 		ranges = <0x78000000 0x00 0x78000000 0x8000>,
+ 			 <0x78100000 0x00 0x78100000 0x8000>;
+ 		power-domains = <&k3_pds 119 TI_SCI_PD_EXCLUSIVE>;
++		status = "disabled";
+ 
+ 		wkup_r5fss0_core0: r5f@78000000 {
+ 			compatible = "ti,am62-r5f";
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p.dtsi b/arch/arm64/boot/dts/ti/k3-am62p.dtsi
+index 84ffe7b9dcaf3..4f22b5d9fb9f0 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p.dtsi
+@@ -71,7 +71,7 @@ cbass_main: bus@f0000 {
+ 			 <0x00 0x43600000 0x00 0x43600000 0x00 0x00010000>, /* SA3 sproxy data */
+ 			 <0x00 0x44043000 0x00 0x44043000 0x00 0x00000fe0>, /* TI SCI DEBUG */
+ 			 <0x00 0x44860000 0x00 0x44860000 0x00 0x00040000>, /* SA3 sproxy config */
+-			 <0x00 0x48000000 0x00 0x48000000 0x00 0x06400000>, /* DMSS */
++			 <0x00 0x48000000 0x00 0x48000000 0x00 0x06408000>, /* DMSS */
+ 			 <0x00 0x60000000 0x00 0x60000000 0x00 0x08000000>, /* FSS0 DAT1 */
+ 			 <0x00 0x70000000 0x00 0x70000000 0x00 0x00010000>, /* OCSRAM */
+ 			 <0x01 0x00000000 0x01 0x00000000 0x00 0x00310000>, /* A53 PERIPHBASE */
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts b/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts
+index f377eadef0c12..04e51934d24ca 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am62p5-sk.dts
+@@ -445,6 +445,10 @@ &cpsw_port2 {
+ };
+ 
+ &cpsw3g_mdio {
++	pinctrl-names = "default";
++	pinctrl-0 = <&main_mdio1_pins_default>;
++	status = "okay";
++
+ 	cpsw3g_phy0: ethernet-phy@0 {
+ 		reg = <0>;
+ 		ti,rx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+index 0be642bc1b86d..45042216e5b89 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+@@ -623,6 +623,10 @@ sdhci0: mmc@fa10000 {
+ 		ti,otap-del-sel-mmc-hs = <0x0>;
+ 		ti,otap-del-sel-ddr52 = <0x6>;
+ 		ti,otap-del-sel-hs200 = <0x7>;
++		ti,itap-del-sel-legacy = <0x10>;
++		ti,itap-del-sel-mmc-hs = <0xa>;
++		ti,itap-del-sel-ddr52 = <0x3>;
++		status = "disabled";
+ 	};
+ 
+ 	sdhci1: mmc@fa00000 {
+@@ -634,13 +638,18 @@ sdhci1: mmc@fa00000 {
+ 		clock-names = "clk_ahb", "clk_xin";
+ 		ti,trm-icp = <0x2>;
+ 		ti,otap-del-sel-legacy = <0x0>;
+-		ti,otap-del-sel-sd-hs = <0xf>;
++		ti,otap-del-sel-sd-hs = <0x0>;
+ 		ti,otap-del-sel-sdr12 = <0xf>;
+ 		ti,otap-del-sel-sdr25 = <0xf>;
+ 		ti,otap-del-sel-sdr50 = <0xc>;
+ 		ti,otap-del-sel-sdr104 = <0x6>;
+ 		ti,otap-del-sel-ddr50 = <0x9>;
++		ti,itap-del-sel-legacy = <0x0>;
++		ti,itap-del-sel-sd-hs = <0x0>;
++		ti,itap-del-sel-sdr12 = <0x0>;
++		ti,itap-del-sel-sdr25 = <0x0>;
+ 		ti,clkbuf-sel = <0x7>;
++		status = "disabled";
+ 	};
+ 
+ 	cpsw3g: ethernet@8000000 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi b/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi
+index f87f09d83c956..b8f844f667afc 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi
+@@ -211,6 +211,7 @@ flash@0 {
+ };
+ 
+ &sdhci0 {
++	status = "okay";
+ 	bus-width = <8>;
+ 	non-removable;
+ 	ti,driver-strength-ohm = <50>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-evm.dts b/arch/arm64/boot/dts/ti/k3-am642-evm.dts
+index 4dba18941015d..256606be56fef 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-evm.dts
+@@ -487,17 +487,19 @@ eeprom@0 {
+ 	};
+ };
+ 
++/* eMMC */
+ &sdhci0 {
+-	/* emmc */
++	status = "okay";
+ 	bus-width = <8>;
+ 	non-removable;
+ 	ti,driver-strength-ohm = <50>;
+ 	disable-wp;
+ };
+ 
++/* SD/MMC */
+ &sdhci1 {
+-	/* SD/MMC */
+ 	bootph-all;
++	status = "okay";
+ 	vmmc-supply = <&vdd_mmc1>;
+ 	pinctrl-names = "default";
+ 	bus-width = <4>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts b/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
+index 9175e96842d82..53b64e55413f9 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
+@@ -264,6 +264,7 @@ &main_uart1 {
+ };
+ 
+ &sdhci1 {
++	status = "okay";
+ 	vmmc-supply = <&vcc_3v3_mmc>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&main_mmc1_pins_default>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-sk.dts b/arch/arm64/boot/dts/ti/k3-am642-sk.dts
+index f29c8a9b59ba7..bffbd234f715a 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-sk.dts
+@@ -439,6 +439,7 @@ &mcu_gpio0 {
+ };
+ 
+ &sdhci0 {
++	status = "okay";
+ 	vmmc-supply = <&wlan_en>;
+ 	bus-width = <4>;
+ 	non-removable;
+@@ -458,9 +459,10 @@ wlcore: wlcore@2 {
+ 	};
+ };
+ 
++/* SD/MMC */
+ &sdhci1 {
+-	/* SD/MMC */
+ 	bootph-all;
++	status = "okay";
+ 	vmmc-supply = <&vdd_mmc1>;
+ 	pinctrl-names = "default";
+ 	bus-width = <4>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl-mbax4xxl.dts b/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl-mbax4xxl.dts
+index d95d80076a427..55102d35cecc1 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl-mbax4xxl.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl-mbax4xxl.dts
+@@ -425,7 +425,6 @@ &sdhci1 {
+ 	ti,driver-strength-ohm = <50>;
+ 	ti,fails-without-test-cd;
+ 	/* Enabled by overlay */
+-	status = "disabled";
+ };
+ 
+ &tscadc0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl.dtsi b/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl.dtsi
+index d82d4a98306a7..6c785eff7d2ff 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am642-tqma64xxl.dtsi
+@@ -219,6 +219,7 @@ partitions {
+ };
+ 
+ &sdhci0 {
++	status = "okay";
+ 	non-removable;
+ 	disable-wp;
+ 	no-sdio;
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index 29048d6577cf6..fa2304a7cb1ec 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -1013,9 +1013,10 @@ dss: dss@4a00000 {
+ 		      <0x0 0x04a07000 0x0 0x1000>, /* ovr1 */
+ 		      <0x0 0x04a08000 0x0 0x1000>, /* ovr2 */
+ 		      <0x0 0x04a0a000 0x0 0x1000>, /* vp1 */
+-		      <0x0 0x04a0b000 0x0 0x1000>; /* vp2 */
++		      <0x0 0x04a0b000 0x0 0x1000>, /* vp2 */
++		      <0x0 0x04a01000 0x0 0x1000>; /* common1 */
+ 		reg-names = "common", "vidl1", "vid",
+-			"ovr1", "ovr2", "vp1", "vp2";
++			"ovr1", "ovr2", "vp1", "vp2", "common1";
+ 
+ 		ti,am65x-oldi-io-ctrl = <&dss_oldi_io_ctrl>;
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am69-sk.dts b/arch/arm64/boot/dts/ti/k3-am69-sk.dts
+index 9868c7049bfb9..fafa09d6dcc64 100644
+--- a/arch/arm64/boot/dts/ti/k3-am69-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am69-sk.dts
+@@ -824,13 +824,9 @@ &dss {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&dss_vout0_pins_default>;
+ 	assigned-clocks = <&k3_clks 218 2>,
+-			  <&k3_clks 218 5>,
+-			  <&k3_clks 218 14>,
+-			  <&k3_clks 218 18>;
++			  <&k3_clks 218 5>;
+ 	assigned-clock-parents = <&k3_clks 218 3>,
+-				 <&k3_clks 218 7>,
+-				 <&k3_clks 218 16>,
+-				 <&k3_clks 218 22>;
++				 <&k3_clks 218 7>;
+ };
+ 
+ &serdes_wiz4 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index cee2b4b0eb87d..7a0c599f2b1c3 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -91,24 +91,25 @@ vdd_sd_dv: gpio-regulator-TLV71033 {
+ };
+ 
+ &wkup_pmx0 {
++};
++
++&wkup_pmx2 {
+ 	mcu_uart0_pins_default: mcu-uart0-default-pins {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0xf4, PIN_INPUT, 0) /* (D20) MCU_UART0_RXD */
+-			J721E_WKUP_IOPAD(0xf0, PIN_OUTPUT, 0) /* (D19) MCU_UART0_TXD */
+-			J721E_WKUP_IOPAD(0xf8, PIN_INPUT, 0) /* (E20) MCU_UART0_CTSn */
+-			J721E_WKUP_IOPAD(0xfc, PIN_OUTPUT, 0) /* (E21) MCU_UART0_RTSn */
++			J721E_WKUP_IOPAD(0x90, PIN_INPUT, 0) /* (E20) MCU_UART0_CTSn */
++			J721E_WKUP_IOPAD(0x94, PIN_OUTPUT, 0) /* (E21) MCU_UART0_RTSn */
++			J721E_WKUP_IOPAD(0x8c, PIN_INPUT, 0) /* (D20) MCU_UART0_RXD */
++			J721E_WKUP_IOPAD(0x88, PIN_OUTPUT, 0) /* (D19) MCU_UART0_TXD */
+ 		>;
+ 	};
+ 
+ 	wkup_uart0_pins_default: wkup-uart0-default-pins {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0xb0, PIN_INPUT, 0) /* (B14) WKUP_UART0_RXD */
+-			J721E_WKUP_IOPAD(0xb4, PIN_OUTPUT, 0) /* (A14) WKUP_UART0_TXD */
++			J721E_WKUP_IOPAD(0x48, PIN_INPUT, 0) /* (B14) WKUP_UART0_RXD */
++			J721E_WKUP_IOPAD(0x4c, PIN_OUTPUT, 0) /* (A14) WKUP_UART0_TXD */
+ 		>;
+ 	};
+-};
+ 
+-&wkup_pmx2 {
+ 	mcu_cpsw_pins_default: mcu-cpsw-default-pins {
+ 		pinctrl-single,pins = <
+ 			J721E_WKUP_IOPAD(0x0000, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
+@@ -210,7 +211,6 @@ &mcu_uart0 {
+ 	status = "okay";
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mcu_uart0_pins_default>;
+-	clock-frequency = <96000000>;
+ };
+ 
+ &main_uart0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts
+index c6b85bbf9a179..1ba1f53c72d03 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts
+@@ -190,8 +190,6 @@ J721S2_IOPAD(0x038, PIN_OUTPUT, 0) /* (AB28) MCASP0_ACLKX.MCAN5_TX */
+ &wkup_pmx2 {
+ 	wkup_uart0_pins_default: wkup-uart0-default-pins {
+ 		pinctrl-single,pins = <
+-			J721S2_WKUP_IOPAD(0x070, PIN_INPUT, 0) /* (E25) WKUP_GPIO0_6.WKUP_UART0_CTSn */
+-			J721S2_WKUP_IOPAD(0x074, PIN_OUTPUT, 0) /* (F28) WKUP_GPIO0_7.WKUP_UART0_RTSn */
+ 			J721S2_WKUP_IOPAD(0x048, PIN_INPUT, 0) /* (D28) WKUP_UART0_RXD */
+ 			J721S2_WKUP_IOPAD(0x04c, PIN_OUTPUT, 0) /* (D27) WKUP_UART0_TXD */
+ 		>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+index 7254f3bd3634d..c9cf1dc04f7d9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+@@ -652,7 +652,7 @@ wkup_vtm0: temperature-sensor@42040000 {
+ 		compatible = "ti,j7200-vtm";
+ 		reg = <0x00 0x42040000 0x0 0x350>,
+ 		      <0x00 0x42050000 0x0 0x350>;
+-		power-domains = <&k3_pds 154 TI_SCI_PD_SHARED>;
++		power-domains = <&k3_pds 180 TI_SCI_PD_SHARED>;
+ 		#thermal-sensor-cells = <1>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+index f1f4c8634ab69..50210700fdc8e 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+@@ -331,8 +331,6 @@ &wkup_pmx2 {
+ 	wkup_uart0_pins_default: wkup-uart0-default-pins {
+ 		bootph-all;
+ 		pinctrl-single,pins = <
+-			J721S2_WKUP_IOPAD(0x070, PIN_INPUT, 0) /* (L37) WKUP_GPIO0_6.WKUP_UART0_CTSn */
+-			J721S2_WKUP_IOPAD(0x074, PIN_INPUT, 0) /* (L36) WKUP_GPIO0_7.WKUP_UART0_RTSn */
+ 			J721S2_WKUP_IOPAD(0x048, PIN_INPUT, 0) /* (K35) WKUP_UART0_RXD */
+ 			J721S2_WKUP_IOPAD(0x04c, PIN_INPUT, 0) /* (K34) WKUP_UART0_TXD */
+ 		>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+index adb5ea6b97321..37fc48aae6945 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+@@ -616,7 +616,7 @@ wkup_vtm0: temperature-sensor@42040000 {
+ 		compatible = "ti,j7200-vtm";
+ 		reg = <0x00 0x42040000 0x00 0x350>,
+ 		      <0x00 0x42050000 0x00 0x350>;
+-		power-domains = <&k3_pds 154 TI_SCI_PD_SHARED>;
++		power-domains = <&k3_pds 243 TI_SCI_PD_SHARED>;
+ 		#thermal-sensor-cells = <1>;
+ 	};
+ 
+diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
+index 7780d343ef080..b67b89c54e1c8 100644
+--- a/arch/arm64/include/asm/fpsimd.h
++++ b/arch/arm64/include/asm/fpsimd.h
+@@ -62,13 +62,13 @@ static inline void cpacr_restore(unsigned long cpacr)
+  * When we defined the maximum SVE vector length we defined the ABI so
+  * that the maximum vector length included all the reserved for future
+  * expansion bits in ZCR rather than those just currently defined by
+- * the architecture. While SME follows a similar pattern the fact that
+- * it includes a square matrix means that any allocations that attempt
+- * to cover the maximum potential vector length (such as happen with
+- * the regset used for ptrace) end up being extremely large. Define
+- * the much lower actual limit for use in such situations.
++ * the architecture.  Using this length to allocate worst-case size buffers
++ * results in excessively large allocations, and this effect is even
++ * more pronounced for SME due to ZA.  Define more suitable VLs for
++ * these situations.
+  */
+-#define SME_VQ_MAX	16
++#define ARCH_SVE_VQ_MAX ((ZCR_ELx_LEN_MASK >> ZCR_ELx_LEN_SHIFT) + 1)
++#define SME_VQ_MAX	((SMCR_ELx_LEN_MASK >> SMCR_ELx_LEN_SHIFT) + 1)
+ 
+ struct task_struct;
+ 
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index b3f64144b5cd9..c94c0f8c9a737 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -1499,7 +1499,8 @@ static const struct user_regset aarch64_regsets[] = {
+ #ifdef CONFIG_ARM64_SVE
+ 	[REGSET_SVE] = { /* Scalable Vector Extension */
+ 		.core_note_type = NT_ARM_SVE,
+-		.n = DIV_ROUND_UP(SVE_PT_SIZE(SVE_VQ_MAX, SVE_PT_REGS_SVE),
++		.n = DIV_ROUND_UP(SVE_PT_SIZE(ARCH_SVE_VQ_MAX,
++					      SVE_PT_REGS_SVE),
+ 				  SVE_VQ_BYTES),
+ 		.size = SVE_VQ_BYTES,
+ 		.align = SVE_VQ_BYTES,
+diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
+index 701a233583c2c..d14d0e37ad02d 100644
+--- a/arch/mips/include/asm/ptrace.h
++++ b/arch/mips/include/asm/ptrace.h
+@@ -60,6 +60,7 @@ static inline void instruction_pointer_set(struct pt_regs *regs,
+                                            unsigned long val)
+ {
+ 	regs->cp0_epc = val;
++	regs->cp0_cause &= ~CAUSEF_BD;
+ }
+ 
+ /* Query offset/name of register from its name/offset */
+diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
+index d1defb9ede70c..621a4b386ae4f 100644
+--- a/arch/parisc/kernel/ftrace.c
++++ b/arch/parisc/kernel/ftrace.c
+@@ -78,7 +78,7 @@ asmlinkage void notrace __hot ftrace_function_trampoline(unsigned long parent,
+ #endif
+ }
+ 
+-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
++#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
+ int ftrace_enable_ftrace_graph_caller(void)
+ {
+ 	static_key_enable(&ftrace_graph_enable.key);
+diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h
+index 4c69ece52a31e..59ed89890c902 100644
+--- a/arch/powerpc/include/asm/vmalloc.h
++++ b/arch/powerpc/include/asm/vmalloc.h
+@@ -7,14 +7,14 @@
+ #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+ 
+ #define arch_vmap_pud_supported arch_vmap_pud_supported
+-static inline bool arch_vmap_pud_supported(pgprot_t prot)
++static __always_inline bool arch_vmap_pud_supported(pgprot_t prot)
+ {
+ 	/* HPT does not cope with large pages in the vmalloc area */
+ 	return radix_enabled();
+ }
+ 
+ #define arch_vmap_pmd_supported arch_vmap_pmd_supported
+-static inline bool arch_vmap_pmd_supported(pgprot_t prot)
++static __always_inline bool arch_vmap_pmd_supported(pgprot_t prot)
+ {
+ 	return radix_enabled();
+ }
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index 27f18119fda17..241551d1282f8 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -695,6 +695,20 @@ static unsigned long single_gpci_request(u32 req, u32 starting_index,
+ 
+ 	ret = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO,
+ 			virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE);
++
++	/*
++	 * A ret value of 'H_PARAMETER' with detail_rc as 'GEN_BUF_TOO_SMALL'
++	 * specifies that the current buffer size cannot accommodate
++	 * all the information and that a partial buffer was returned.
++	 * Since this function only accesses data for a given starting index,
++	 * we don't need the whole data and can get the required count by
++	 * accessing the first entry's data.
++	 * Hence the hcall fails only when the ret value is other than H_SUCCESS or
++	 * H_PARAMETER with a detail_rc value of GEN_BUF_TOO_SMALL (0x1B).
++	 */
++	if (ret == H_PARAMETER && be32_to_cpu(arg->params.detail_rc) == 0x1B)
++		ret = 0;
++
+ 	if (ret) {
+ 		pr_devel("hcall failed: 0x%lx\n", ret);
+ 		goto out;
+@@ -759,6 +773,7 @@ static int h_gpci_event_init(struct perf_event *event)
+ {
+ 	u64 count;
+ 	u8 length;
++	unsigned long ret;
+ 
+ 	/* Not our event */
+ 	if (event->attr.type != event->pmu->type)
+@@ -789,13 +804,23 @@ static int h_gpci_event_init(struct perf_event *event)
+ 	}
+ 
+ 	/* check if the request works... */
+-	if (single_gpci_request(event_get_request(event),
++	ret = single_gpci_request(event_get_request(event),
+ 				event_get_starting_index(event),
+ 				event_get_secondary_index(event),
+ 				event_get_counter_info_version(event),
+ 				event_get_offset(event),
+ 				length,
+-				&count)) {
++				&count);
++
++	/*
++	 * A ret value of H_AUTHORITY implies that the partition is not permitted
++	 * to retrieve performance information and is required to set the
++	 * "Enable Performance Information Collection" option.
++	 */
++	if (ret == H_AUTHORITY)
++		return -EPERM;
++
++	if (ret) {
+ 		pr_devel("gpci hcall failed\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/arch/powerpc/platforms/embedded6xx/linkstation.c b/arch/powerpc/platforms/embedded6xx/linkstation.c
+index 9c10aac40c7b1..e265f026eee2a 100644
+--- a/arch/powerpc/platforms/embedded6xx/linkstation.c
++++ b/arch/powerpc/platforms/embedded6xx/linkstation.c
+@@ -99,9 +99,6 @@ static void __init linkstation_init_IRQ(void)
+ 	mpic_init(mpic);
+ }
+ 
+-extern void avr_uart_configure(void);
+-extern void avr_uart_send(const char);
+-
+ static void __noreturn linkstation_restart(char *cmd)
+ {
+ 	local_irq_disable();
+diff --git a/arch/powerpc/platforms/embedded6xx/mpc10x.h b/arch/powerpc/platforms/embedded6xx/mpc10x.h
+index 5ad12023e5628..ebc258fa4858d 100644
+--- a/arch/powerpc/platforms/embedded6xx/mpc10x.h
++++ b/arch/powerpc/platforms/embedded6xx/mpc10x.h
+@@ -156,4 +156,7 @@ int mpc10x_disable_store_gathering(struct pci_controller *hose);
+ /* For MPC107 boards that use the built-in openpic */
+ void mpc10x_set_openpic(void);
+ 
++void avr_uart_configure(void);
++void avr_uart_send(const char c);
++
+ #endif	/* __PPC_KERNEL_MPC10X_H */
+diff --git a/arch/powerpc/platforms/powermac/Kconfig b/arch/powerpc/platforms/powermac/Kconfig
+index 8bdae0caf21e5..84f101ec53a96 100644
+--- a/arch/powerpc/platforms/powermac/Kconfig
++++ b/arch/powerpc/platforms/powermac/Kconfig
+@@ -2,7 +2,7 @@
+ config PPC_PMAC
+ 	bool "Apple PowerMac based machines"
+ 	depends on PPC_BOOK3S && CPU_BIG_ENDIAN
+-	select ADB_CUDA if POWER_RESET && PPC32
++	select ADB_CUDA if POWER_RESET && ADB
+ 	select MPIC
+ 	select FORCE_PCI
+ 	select PPC_INDIRECT_PCI if PPC32
+diff --git a/arch/powerpc/platforms/ps3/Kconfig b/arch/powerpc/platforms/ps3/Kconfig
+index a44869e5ea70f..1bd1b0b49bc62 100644
+--- a/arch/powerpc/platforms/ps3/Kconfig
++++ b/arch/powerpc/platforms/ps3/Kconfig
+@@ -67,6 +67,7 @@ config PS3_VUART
+ config PS3_PS3AV
+ 	depends on PPC_PS3
+ 	tristate "PS3 AV settings driver" if PS3_ADVANCED
++	select VIDEO
+ 	select PS3_VUART
+ 	default y
+ 	help
+diff --git a/arch/powerpc/platforms/pseries/papr_platform_attributes.c b/arch/powerpc/platforms/pseries/papr_platform_attributes.c
+index 526c621b098be..eea2041b270b5 100644
+--- a/arch/powerpc/platforms/pseries/papr_platform_attributes.c
++++ b/arch/powerpc/platforms/pseries/papr_platform_attributes.c
+@@ -101,10 +101,12 @@ static int papr_get_attr(u64 id, struct energy_scale_attribute *esi)
+ 		esi_buf_size = ESI_HDR_SIZE + (CURR_MAX_ESI_ATTRS * max_esi_attrs);
+ 
+ 		temp_buf = krealloc(buf, esi_buf_size, GFP_KERNEL);
+-		if (temp_buf)
++		if (temp_buf) {
+ 			buf = temp_buf;
+-		else
+-			return -ENOMEM;
++		} else {
++			ret = -ENOMEM;
++			goto out_buf;
++		}
+ 
+ 		goto retry;
+ 	}
+diff --git a/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
+index 07387f9c135ca..72b87b08ab444 100644
+--- a/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
++++ b/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
+@@ -123,6 +123,7 @@ pmic@58 {
+ 		interrupt-parent = <&gpio>;
+ 		interrupts = <1 IRQ_TYPE_LEVEL_LOW>;
+ 		interrupt-controller;
++		#interrupt-cells = <2>;
+ 
+ 		onkey {
+ 			compatible = "dlg,da9063-onkey";
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 76b131e7bbcad..a66845e31abad 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -439,9 +439,11 @@ static inline pte_t pte_mkhuge(pte_t pte)
+ 	return pte;
+ }
+ 
++#ifdef CONFIG_RISCV_ISA_SVNAPOT
+ #define pte_leaf_size(pte)	(pte_napot(pte) ?				\
+ 					napot_cont_size(napot_cont_order(pte)) :\
+ 					PAGE_SIZE)
++#endif
+ 
+ #ifdef CONFIG_NUMA_BALANCING
+ /*
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 5255f8134aeff..1ed769c87ae44 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -632,7 +632,7 @@ void unaligned_emulation_finish(void)
+ 	 * accesses emulated since tasks requesting such control can run on any
+ 	 * CPU.
+ 	 */
+-	for_each_present_cpu(cpu) {
++	for_each_online_cpu(cpu) {
+ 		if (per_cpu(misaligned_access_speed, cpu) !=
+ 					RISCV_HWPROBE_MISALIGNED_EMULATED) {
+ 			return;
+diff --git a/arch/s390/kernel/cache.c b/arch/s390/kernel/cache.c
+index 56254fa06f990..4f26690302209 100644
+--- a/arch/s390/kernel/cache.c
++++ b/arch/s390/kernel/cache.c
+@@ -166,5 +166,6 @@ int populate_cache_leaves(unsigned int cpu)
+ 			ci_leaf_init(this_leaf++, pvt, ctype, level, cpu);
+ 		}
+ 	}
++	this_cpu_ci->cpu_map_populated = true;
+ 	return 0;
+ }
+diff --git a/arch/s390/kernel/perf_pai_crypto.c b/arch/s390/kernel/perf_pai_crypto.c
+index 39a91b00438a7..fc2dff1e4f796 100644
+--- a/arch/s390/kernel/perf_pai_crypto.c
++++ b/arch/s390/kernel/perf_pai_crypto.c
+@@ -715,7 +715,7 @@ static int __init attr_event_init(void)
+ 	for (i = 0; i < ARRAY_SIZE(paicrypt_ctrnames); i++) {
+ 		ret = attr_event_init_one(attrs, i);
+ 		if (ret) {
+-			attr_event_free(attrs, i - 1);
++			attr_event_free(attrs, i);
+ 			return ret;
+ 		}
+ 	}
+diff --git a/arch/s390/kernel/perf_pai_ext.c b/arch/s390/kernel/perf_pai_ext.c
+index e7013a2e89605..32b467c300603 100644
+--- a/arch/s390/kernel/perf_pai_ext.c
++++ b/arch/s390/kernel/perf_pai_ext.c
+@@ -604,7 +604,7 @@ static int __init attr_event_init(void)
+ 	for (i = 0; i < ARRAY_SIZE(paiext_ctrnames); i++) {
+ 		ret = attr_event_init_one(attrs, i);
+ 		if (ret) {
+-			attr_event_free(attrs, i - 1);
++			attr_event_free(attrs, i);
+ 			return ret;
+ 		}
+ 	}
+diff --git a/arch/s390/kernel/vdso32/Makefile b/arch/s390/kernel/vdso32/Makefile
+index caec7db6f9668..b12a274cbb473 100644
+--- a/arch/s390/kernel/vdso32/Makefile
++++ b/arch/s390/kernel/vdso32/Makefile
+@@ -22,7 +22,7 @@ KBUILD_CFLAGS_32 := $(filter-out -m64,$(KBUILD_CFLAGS))
+ KBUILD_CFLAGS_32 := $(filter-out -mno-pic-data-is-text-relative,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 += -m31 -fPIC -shared -fno-common -fno-builtin
+ 
+-LDFLAGS_vdso32.so.dbg += -fPIC -shared -soname=linux-vdso32.so.1 \
++LDFLAGS_vdso32.so.dbg += -shared -soname=linux-vdso32.so.1 \
+ 	--hash-style=both --build-id=sha1 -melf_s390 -T
+ 
+ $(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+diff --git a/arch/s390/kernel/vdso64/Makefile b/arch/s390/kernel/vdso64/Makefile
+index e3c9085f8fa72..caa4ebff8a193 100644
+--- a/arch/s390/kernel/vdso64/Makefile
++++ b/arch/s390/kernel/vdso64/Makefile
+@@ -26,7 +26,7 @@ KBUILD_AFLAGS_64 += -m64
+ KBUILD_CFLAGS_64 := $(filter-out -m64,$(KBUILD_CFLAGS))
+ KBUILD_CFLAGS_64 := $(filter-out -mno-pic-data-is-text-relative,$(KBUILD_CFLAGS_64))
+ KBUILD_CFLAGS_64 += -m64 -fPIC -fno-common -fno-builtin
+-ldflags-y := -fPIC -shared -soname=linux-vdso64.so.1 \
++ldflags-y := -shared -soname=linux-vdso64.so.1 \
+ 	     --hash-style=both --build-id=sha1 -T
+ 
+ $(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_64)
+diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c
+index e0a88dcaf5cb7..24a18e5ef6e8e 100644
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -210,13 +210,13 @@ void vtime_flush(struct task_struct *tsk)
+ 		virt_timer_expire();
+ 
+ 	steal = S390_lowcore.steal_timer;
+-	avg_steal = S390_lowcore.avg_steal_timer / 2;
++	avg_steal = S390_lowcore.avg_steal_timer;
+ 	if ((s64) steal > 0) {
+ 		S390_lowcore.steal_timer = 0;
+ 		account_steal_time(cputime_to_nsecs(steal));
+ 		avg_steal += steal;
+ 	}
+-	S390_lowcore.avg_steal_timer = avg_steal;
++	S390_lowcore.avg_steal_timer = avg_steal / 2;
+ }
+ 
+ static u64 vtime_delta(void)
+diff --git a/arch/sparc/kernel/leon_pci_grpci1.c b/arch/sparc/kernel/leon_pci_grpci1.c
+index 8700a0e3b0df7..b2b639bee0684 100644
+--- a/arch/sparc/kernel/leon_pci_grpci1.c
++++ b/arch/sparc/kernel/leon_pci_grpci1.c
+@@ -697,7 +697,7 @@ static int grpci1_of_probe(struct platform_device *ofdev)
+ 	return err;
+ }
+ 
+-static const struct of_device_id grpci1_of_match[] __initconst = {
++static const struct of_device_id grpci1_of_match[] = {
+ 	{
+ 	 .name = "GAISLER_PCIFBRG",
+ 	 },
+diff --git a/arch/sparc/kernel/leon_pci_grpci2.c b/arch/sparc/kernel/leon_pci_grpci2.c
+index 60b6bdf7761fb..ac2acd62a24ec 100644
+--- a/arch/sparc/kernel/leon_pci_grpci2.c
++++ b/arch/sparc/kernel/leon_pci_grpci2.c
+@@ -889,7 +889,7 @@ static int grpci2_of_probe(struct platform_device *ofdev)
+ 	return err;
+ }
+ 
+-static const struct of_device_id grpci2_of_match[] __initconst = {
++static const struct of_device_id grpci2_of_match[] = {
+ 	{
+ 	 .name = "GAISLER_GRPCI2",
+ 	 },
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index e24976593a298..5365d6acbf090 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -604,7 +604,6 @@ static void amd_pmu_cpu_dead(int cpu)
+ 
+ 	kfree(cpuhw->lbr_sel);
+ 	cpuhw->lbr_sel = NULL;
+-	amd_pmu_cpu_reset(cpu);
+ 
+ 	if (!x86_pmu.amd_nb_constraints)
+ 		return;
+diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
+index 96e6c51515f50..c8062975a5316 100644
+--- a/arch/x86/hyperv/hv_vtl.c
++++ b/arch/x86/hyperv/hv_vtl.c
+@@ -12,10 +12,16 @@
+ #include <asm/i8259.h>
+ #include <asm/mshyperv.h>
+ #include <asm/realmode.h>
++#include <../kernel/smpboot.h>
+ 
+ extern struct boot_params boot_params;
+ static struct real_mode_header hv_vtl_real_mode_header;
+ 
++static bool __init hv_vtl_msi_ext_dest_id(void)
++{
++	return true;
++}
++
+ void __init hv_vtl_init_platform(void)
+ {
+ 	pr_info("Linux runs in Hyper-V Virtual Trust Level\n");
+@@ -38,6 +44,8 @@ void __init hv_vtl_init_platform(void)
+ 	x86_platform.legacy.warm_reset = 0;
+ 	x86_platform.legacy.reserve_bios_regions = 0;
+ 	x86_platform.legacy.devices.pnpbios = 0;
++
++	x86_init.hyper.msi_ext_dest_id = hv_vtl_msi_ext_dest_id;
+ }
+ 
+ static inline u64 hv_vtl_system_desc_base(struct ldttss_desc *desc)
+@@ -57,7 +65,7 @@ static void hv_vtl_ap_entry(void)
+ 	((secondary_startup_64_fn)secondary_startup_64)(&boot_params, &boot_params);
+ }
+ 
+-static int hv_vtl_bringup_vcpu(u32 target_vp_index, u64 eip_ignored)
++static int hv_vtl_bringup_vcpu(u32 target_vp_index, int cpu, u64 eip_ignored)
+ {
+ 	u64 status;
+ 	int ret = 0;
+@@ -71,7 +79,9 @@ static int hv_vtl_bringup_vcpu(u32 target_vp_index, u64 eip_ignored)
+ 	struct ldttss_desc *ldt;
+ 	struct desc_struct *gdt;
+ 
+-	u64 rsp = current->thread.sp;
++	struct task_struct *idle = idle_thread_get(cpu);
++	u64 rsp = (unsigned long)idle->thread.sp;
++
+ 	u64 rip = (u64)&hv_vtl_ap_entry;
+ 
+ 	native_store_gdt(&gdt_ptr);
+@@ -198,7 +208,15 @@ static int hv_vtl_apicid_to_vp_id(u32 apic_id)
+ 
+ static int hv_vtl_wakeup_secondary_cpu(u32 apicid, unsigned long start_eip)
+ {
+-	int vp_id;
++	int vp_id, cpu;
++
++	/* Find the logical CPU for the APIC ID */
++	for_each_present_cpu(cpu) {
++		if (arch_match_cpu_phys_id(cpu, apicid))
++			break;
++	}
++	if (cpu >= nr_cpu_ids)
++		return -EINVAL;
+ 
+ 	pr_debug("Bringing up CPU with APIC ID %d in VTL2...\n", apicid);
+ 	vp_id = hv_vtl_apicid_to_vp_id(apicid);
+@@ -212,7 +230,7 @@ static int hv_vtl_wakeup_secondary_cpu(u32 apicid, unsigned long start_eip)
+ 		return -EINVAL;
+ 	}
+ 
+-	return hv_vtl_bringup_vcpu(vp_id, start_eip);
++	return hv_vtl_bringup_vcpu(vp_id, cpu, start_eip);
+ }
+ 
+ int __init hv_vtl_early_init(void)
+diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
+index d18e5c332cb9f..1b93ff80b43bc 100644
+--- a/arch/x86/include/asm/page.h
++++ b/arch/x86/include/asm/page.h
+@@ -66,10 +66,14 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
+  * virt_addr_valid(kaddr) returns true.
+  */
+ #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+-#define pfn_to_kaddr(pfn)      __va((pfn) << PAGE_SHIFT)
+ extern bool __virt_addr_valid(unsigned long kaddr);
+ #define virt_addr_valid(kaddr)	__virt_addr_valid((unsigned long) (kaddr))
+ 
++static __always_inline void *pfn_to_kaddr(unsigned long pfn)
++{
++	return __va(pfn << PAGE_SHIFT);
++}
++
+ static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
+ {
+ 	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
+diff --git a/arch/x86/include/asm/vsyscall.h b/arch/x86/include/asm/vsyscall.h
+index ab60a71a8dcb9..472f0263dbc61 100644
+--- a/arch/x86/include/asm/vsyscall.h
++++ b/arch/x86/include/asm/vsyscall.h
+@@ -4,6 +4,7 @@
+ 
+ #include <linux/seqlock.h>
+ #include <uapi/asm/vsyscall.h>
++#include <asm/page_types.h>
+ 
+ #ifdef CONFIG_X86_VSYSCALL_EMULATION
+ extern void map_vsyscall(void);
+@@ -24,4 +25,13 @@ static inline bool emulate_vsyscall(unsigned long error_code,
+ }
+ #endif
+ 
++/*
++ * The (legacy) vsyscall page is the long page in the kernel portion
++ * of the address space that has user-accessible permissions.
++ */
++static inline bool is_vsyscall_vaddr(unsigned long vaddr)
++{
++	return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR);
++}
++
+ #endif /* _ASM_X86_VSYSCALL_H */
+diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c
+index 8d8752b44f113..ff8f25faca3dd 100644
+--- a/arch/x86/kernel/acpi/cppc.c
++++ b/arch/x86/kernel/acpi/cppc.c
+@@ -20,7 +20,7 @@ bool cpc_supported_by_cpu(void)
+ 		    (boot_cpu_data.x86_model >= 0x20 && boot_cpu_data.x86_model <= 0x2f)))
+ 			return true;
+ 		else if (boot_cpu_data.x86 == 0x17 &&
+-			 boot_cpu_data.x86_model >= 0x70 && boot_cpu_data.x86_model <= 0x7f)
++			 boot_cpu_data.x86_model >= 0x30 && boot_cpu_data.x86_model <= 0x7f)
+ 			return true;
+ 		return boot_cpu_has(X86_FEATURE_CPPC);
+ 	}
+diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
+index 19e0681f04356..d04371e851b4c 100644
+--- a/arch/x86/kernel/cpu/resctrl/core.c
++++ b/arch/x86/kernel/cpu/resctrl/core.c
+@@ -231,9 +231,7 @@ static bool __get_mem_config_intel(struct rdt_resource *r)
+ static bool __rdt_get_mem_config_amd(struct rdt_resource *r)
+ {
+ 	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
+-	union cpuid_0x10_3_eax eax;
+-	union cpuid_0x10_x_edx edx;
+-	u32 ebx, ecx, subleaf;
++	u32 eax, ebx, ecx, edx, subleaf;
+ 
+ 	/*
+ 	 * Query CPUID_Fn80000020_EDX_x01 for MBA and
+@@ -241,9 +239,9 @@ static bool __rdt_get_mem_config_amd(struct rdt_resource *r)
+ 	 */
+ 	subleaf = (r->rid == RDT_RESOURCE_SMBA) ? 2 :  1;
+ 
+-	cpuid_count(0x80000020, subleaf, &eax.full, &ebx, &ecx, &edx.full);
+-	hw_res->num_closid = edx.split.cos_max + 1;
+-	r->default_ctrl = MAX_MBA_BW_AMD;
++	cpuid_count(0x80000020, subleaf, &eax, &ebx, &ecx, &edx);
++	hw_res->num_closid = edx + 1;
++	r->default_ctrl = 1 << eax;
+ 
+ 	/* AMD does not use delay */
+ 	r->membw.delay_linear = false;
+diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
+index a4f1aa15f0a2a..52e7e7deee106 100644
+--- a/arch/x86/kernel/cpu/resctrl/internal.h
++++ b/arch/x86/kernel/cpu/resctrl/internal.h
+@@ -18,7 +18,6 @@
+ #define MBM_OVERFLOW_INTERVAL		1000
+ #define MAX_MBA_BW			100u
+ #define MBA_IS_LINEAR			0x4
+-#define MAX_MBA_BW_AMD			0x800
+ #define MBM_CNTR_WIDTH_OFFSET_AMD	20
+ 
+ #define RMID_VAL_ERROR			BIT_ULL(63)
+@@ -296,14 +295,10 @@ struct rftype {
+  * struct mbm_state - status for each MBM counter in each domain
+  * @prev_bw_bytes: Previous bytes value read for bandwidth calculation
+  * @prev_bw:	The most recent bandwidth in MBps
+- * @delta_bw:	Difference between the current and previous bandwidth
+- * @delta_comp:	Indicates whether to compute the delta_bw
+  */
+ struct mbm_state {
+ 	u64	prev_bw_bytes;
+ 	u32	prev_bw;
+-	u32	delta_bw;
+-	bool	delta_comp;
+ };
+ 
+ /**
+@@ -395,6 +390,8 @@ struct rdt_parse_data {
+  * @msr_update:		Function pointer to update QOS MSRs
+  * @mon_scale:		cqm counter * mon_scale = occupancy in bytes
+  * @mbm_width:		Monitor width, to detect and correct for overflow.
++ * @mbm_cfg_mask:	Bandwidth sources that can be tracked when Bandwidth
++ *			Monitoring Event Configuration (BMEC) is supported.
+  * @cdp_enabled:	CDP state of this resource
+  *
+  * Members of this structure are either private to the architecture
+@@ -409,6 +406,7 @@ struct rdt_hw_resource {
+ 				 struct rdt_resource *r);
+ 	unsigned int		mon_scale;
+ 	unsigned int		mbm_width;
++	unsigned int		mbm_cfg_mask;
+ 	bool			cdp_enabled;
+ };
+ 
+diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
+index f136ac046851c..3a6c069614eb8 100644
+--- a/arch/x86/kernel/cpu/resctrl/monitor.c
++++ b/arch/x86/kernel/cpu/resctrl/monitor.c
+@@ -440,9 +440,6 @@ static void mbm_bw_count(u32 rmid, struct rmid_read *rr)
+ 
+ 	cur_bw = bytes / SZ_1M;
+ 
+-	if (m->delta_comp)
+-		m->delta_bw = abs(cur_bw - m->prev_bw);
+-	m->delta_comp = false;
+ 	m->prev_bw = cur_bw;
+ }
+ 
+@@ -520,11 +517,11 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
+ {
+ 	u32 closid, rmid, cur_msr_val, new_msr_val;
+ 	struct mbm_state *pmbm_data, *cmbm_data;
+-	u32 cur_bw, delta_bw, user_bw;
+ 	struct rdt_resource *r_mba;
+ 	struct rdt_domain *dom_mba;
+ 	struct list_head *head;
+ 	struct rdtgroup *entry;
++	u32 cur_bw, user_bw;
+ 
+ 	if (!is_mbm_local_enabled())
+ 		return;
+@@ -543,7 +540,6 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
+ 
+ 	cur_bw = pmbm_data->prev_bw;
+ 	user_bw = dom_mba->mbps_val[closid];
+-	delta_bw = pmbm_data->delta_bw;
+ 
+ 	/* MBA resource doesn't support CDP */
+ 	cur_msr_val = resctrl_arch_get_config(r_mba, dom_mba, closid, CDP_NONE);
+@@ -555,49 +551,31 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
+ 	list_for_each_entry(entry, head, mon.crdtgrp_list) {
+ 		cmbm_data = &dom_mbm->mbm_local[entry->mon.rmid];
+ 		cur_bw += cmbm_data->prev_bw;
+-		delta_bw += cmbm_data->delta_bw;
+ 	}
+ 
+ 	/*
+ 	 * Scale up/down the bandwidth linearly for the ctrl group.  The
+ 	 * bandwidth step is the bandwidth granularity specified by the
+ 	 * hardware.
+-	 *
+-	 * The delta_bw is used when increasing the bandwidth so that we
+-	 * dont alternately increase and decrease the control values
+-	 * continuously.
+-	 *
+-	 * For ex: consider cur_bw = 90MBps, user_bw = 100MBps and if
+-	 * bandwidth step is 20MBps(> user_bw - cur_bw), we would keep
+-	 * switching between 90 and 110 continuously if we only check
+-	 * cur_bw < user_bw.
++	 * Always increase throttling if current bandwidth is above the
++	 * target set by user.
++	 * But avoid thrashing up and down on every poll by checking
++	 * whether a decrease in throttling is likely to push the group
++	 * back over target. E.g. if currently throttling to 30% of bandwidth
++	 * on a system with 10% granularity steps, check whether moving to
++	 * 40% would go past the limit by multiplying current bandwidth by
++	 * "(30 + 10) / 30".
+ 	 */
+ 	if (cur_msr_val > r_mba->membw.min_bw && user_bw < cur_bw) {
+ 		new_msr_val = cur_msr_val - r_mba->membw.bw_gran;
+ 	} else if (cur_msr_val < MAX_MBA_BW &&
+-		   (user_bw > (cur_bw + delta_bw))) {
++		   (user_bw > (cur_bw * (cur_msr_val + r_mba->membw.min_bw) / cur_msr_val))) {
+ 		new_msr_val = cur_msr_val + r_mba->membw.bw_gran;
+ 	} else {
+ 		return;
+ 	}
+ 
+ 	resctrl_arch_update_one(r_mba, dom_mba, closid, CDP_NONE, new_msr_val);
+-
+-	/*
+-	 * Delta values are updated dynamically package wise for each
+-	 * rdtgrp every time the throttle MSR changes value.
+-	 *
+-	 * This is because (1)the increase in bandwidth is not perfectly
+-	 * linear and only "approximately" linear even when the hardware
+-	 * says it is linear.(2)Also since MBA is a core specific
+-	 * mechanism, the delta values vary based on number of cores used
+-	 * by the rdtgrp.
+-	 */
+-	pmbm_data->delta_comp = true;
+-	list_for_each_entry(entry, head, mon.crdtgrp_list) {
+-		cmbm_data = &dom_mbm->mbm_local[entry->mon.rmid];
+-		cmbm_data->delta_comp = true;
+-	}
+ }
+ 
+ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
+@@ -813,6 +791,12 @@ int __init rdt_get_mon_l3_config(struct rdt_resource *r)
+ 		return ret;
+ 
+ 	if (rdt_cpu_has(X86_FEATURE_BMEC)) {
++		u32 eax, ebx, ecx, edx;
++
++		/* Detect list of bandwidth sources that can be tracked */
++		cpuid_count(0x80000020, 3, &eax, &ebx, &ecx, &edx);
++		hw_res->mbm_cfg_mask = ecx & MAX_EVT_CONFIG_BITS;
++
+ 		if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) {
+ 			mbm_total_event.configurable = true;
+ 			mbm_config_rftype_init("mbm_total_bytes_config");
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 69a1de92384ab..2b69e560b05f1 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -1620,12 +1620,6 @@ static int mbm_config_write_domain(struct rdt_resource *r,
+ 	struct mon_config_info mon_info = {0};
+ 	int ret = 0;
+ 
+-	/* mon_config cannot be more than the supported set of events */
+-	if (val > MAX_EVT_CONFIG_BITS) {
+-		rdt_last_cmd_puts("Invalid event configuration\n");
+-		return -EINVAL;
+-	}
+-
+ 	/*
+ 	 * Read the current config value first. If both are the same then
+ 	 * no need to write it again.
+@@ -1663,6 +1657,7 @@ static int mbm_config_write_domain(struct rdt_resource *r,
+ 
+ static int mon_config_write(struct rdt_resource *r, char *tok, u32 evtid)
+ {
++	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
+ 	char *dom_str = NULL, *id_str;
+ 	unsigned long dom_id, val;
+ 	struct rdt_domain *d;
+@@ -1686,6 +1681,13 @@ static int mon_config_write(struct rdt_resource *r, char *tok, u32 evtid)
+ 		return -EINVAL;
+ 	}
+ 
++	/* Value from user cannot be more than the supported set of events */
++	if ((val & hw_res->mbm_cfg_mask) != val) {
++		rdt_last_cmd_printf("Invalid event configuration: max valid mask is 0x%02x\n",
++				    hw_res->mbm_cfg_mask);
++		return -EINVAL;
++	}
++
+ 	list_for_each_entry(d, &r->domains, list) {
+ 		if (d->id == dom_id) {
+ 			ret = mbm_config_write_domain(r, d, evtid, val);
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 679b09cfe241c..d6375b3c633bc 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -798,15 +798,6 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,
+ 	show_opcodes(regs, loglvl);
+ }
+ 
+-/*
+- * The (legacy) vsyscall page is the long page in the kernel portion
+- * of the address space that has user-accessible permissions.
+- */
+-static bool is_vsyscall_vaddr(unsigned long vaddr)
+-{
+-	return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR);
+-}
+-
+ static void
+ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
+ 		       unsigned long address, u32 pkey, int si_code)
+diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
+index 6993f026adec9..42115ac079cfe 100644
+--- a/arch/x86/mm/maccess.c
++++ b/arch/x86/mm/maccess.c
+@@ -3,6 +3,8 @@
+ #include <linux/uaccess.h>
+ #include <linux/kernel.h>
+ 
++#include <asm/vsyscall.h>
++
+ #ifdef CONFIG_X86_64
+ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+ {
+@@ -15,6 +17,14 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
+ 	if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
+ 		return false;
+ 
++	/*
++	 * Reading from the vsyscall page may cause an unhandled fault in
++	 * certain cases.  Though it is at an address above TASK_SIZE_MAX, it is
++	 * usually considered as a user space address.
++	 */
++	if (is_vsyscall_vaddr(vaddr))
++		return false;
++
+ 	/*
+ 	 * Allow everything during early boot before 'x86_virt_bits'
+ 	 * is initialized.  Needed for instruction decoding in early
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index d73aeb16417fc..7f72472a34d6d 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -507,7 +507,6 @@ void __init sme_enable(struct boot_params *bp)
+ 	const char *cmdline_ptr, *cmdline_arg, *cmdline_on, *cmdline_off;
+ 	unsigned int eax, ebx, ecx, edx;
+ 	unsigned long feature_mask;
+-	bool active_by_default;
+ 	unsigned long me_mask;
+ 	char buffer[16];
+ 	bool snp;
+@@ -593,22 +592,19 @@ void __init sme_enable(struct boot_params *bp)
+ 	     : "p" (sme_cmdline_off));
+ 
+ 	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT))
+-		active_by_default = true;
+-	else
+-		active_by_default = false;
++		sme_me_mask = me_mask;
+ 
+ 	cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
+ 				     ((u64)bp->ext_cmd_line_ptr << 32));
+ 
+ 	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
+-		return;
++		goto out;
+ 
+ 	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
+ 		sme_me_mask = me_mask;
+ 	else if (!strncmp(buffer, cmdline_off, sizeof(buffer)))
+ 		sme_me_mask = 0;
+-	else
+-		sme_me_mask = active_by_default ? me_mask : 0;
++
+ out:
+ 	if (sme_me_mask) {
+ 		physical_mask &= ~sme_me_mask;
+diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
+index d30949e25ebd9..e7013283640f5 100644
+--- a/arch/x86/tools/relocs.c
++++ b/arch/x86/tools/relocs.c
+@@ -653,6 +653,14 @@ static void print_absolute_relocs(void)
+ 		if (!(sec_applies->shdr.sh_flags & SHF_ALLOC)) {
+ 			continue;
+ 		}
++		/*
++		 * Do not perform relocations in .notes section; any
++		 * values there are meant for pre-boot consumption (e.g.
++		 * startup_xen).
++		 */
++		if (sec_applies->shdr.sh_type == SHT_NOTE) {
++			continue;
++		}
+ 		sh_symtab  = sec_symtab->symtab;
+ 		sym_strtab = sec_symtab->link->strtab;
+ 		for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Rel); j++) {
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index 4b0d6fff88de5..1fb9a1644d944 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -65,6 +65,8 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	char *resched_name, *callfunc_name, *debug_name;
+ 
+ 	resched_name = kasprintf(GFP_KERNEL, "resched%d", cpu);
++	if (!resched_name)
++		goto fail_mem;
+ 	per_cpu(xen_resched_irq, cpu).name = resched_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR,
+ 				    cpu,
+@@ -77,6 +79,8 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	per_cpu(xen_resched_irq, cpu).irq = rc;
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfunc%d", cpu);
++	if (!callfunc_name)
++		goto fail_mem;
+ 	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
+ 				    cpu,
+@@ -90,6 +94,9 @@ int xen_smp_intr_init(unsigned int cpu)
+ 
+ 	if (!xen_fifo_events) {
+ 		debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
++		if (!debug_name)
++			goto fail_mem;
++
+ 		per_cpu(xen_debug_irq, cpu).name = debug_name;
+ 		rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu,
+ 					     xen_debug_interrupt,
+@@ -101,6 +108,9 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	}
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
++	if (!callfunc_name)
++		goto fail_mem;
++
+ 	per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
+ 				    cpu,
+@@ -114,6 +124,8 @@ int xen_smp_intr_init(unsigned int cpu)
+ 
+ 	return 0;
+ 
++ fail_mem:
++	rc = -ENOMEM;
+  fail:
+ 	xen_smp_intr_free(cpu);
+ 	return rc;
+diff --git a/block/holder.c b/block/holder.c
+index 37d18c13d9581..791091a7eac23 100644
+--- a/block/holder.c
++++ b/block/holder.c
+@@ -8,6 +8,8 @@ struct bd_holder_disk {
+ 	int			refcnt;
+ };
+ 
++static DEFINE_MUTEX(blk_holder_mutex);
++
+ static struct bd_holder_disk *bd_find_holder_disk(struct block_device *bdev,
+ 						  struct gendisk *disk)
+ {
+@@ -80,7 +82,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
+ 	kobject_get(bdev->bd_holder_dir);
+ 	mutex_unlock(&bdev->bd_disk->open_mutex);
+ 
+-	mutex_lock(&disk->open_mutex);
++	mutex_lock(&blk_holder_mutex);
+ 	WARN_ON_ONCE(!bdev->bd_holder);
+ 
+ 	holder = bd_find_holder_disk(bdev, disk);
+@@ -108,7 +110,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
+ 		goto out_del_symlink;
+ 	list_add(&holder->list, &disk->slave_bdevs);
+ 
+-	mutex_unlock(&disk->open_mutex);
++	mutex_unlock(&blk_holder_mutex);
+ 	return 0;
+ 
+ out_del_symlink:
+@@ -116,7 +118,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
+ out_free_holder:
+ 	kfree(holder);
+ out_unlock:
+-	mutex_unlock(&disk->open_mutex);
++	mutex_unlock(&blk_holder_mutex);
+ 	if (ret)
+ 		kobject_put(bdev->bd_holder_dir);
+ 	return ret;
+@@ -140,7 +142,7 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
+ 	if (WARN_ON_ONCE(!disk->slave_dir))
+ 		return;
+ 
+-	mutex_lock(&disk->open_mutex);
++	mutex_lock(&blk_holder_mutex);
+ 	holder = bd_find_holder_disk(bdev, disk);
+ 	if (!WARN_ON_ONCE(holder == NULL) && !--holder->refcnt) {
+ 		del_symlink(disk->slave_dir, bdev_kobj(bdev));
+@@ -149,6 +151,6 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
+ 		list_del_init(&holder->list);
+ 		kfree(holder);
+ 	}
+-	mutex_unlock(&disk->open_mutex);
++	mutex_unlock(&blk_holder_mutex);
+ }
+ EXPORT_SYMBOL_GPL(bd_unlink_disk_holder);
+diff --git a/block/opal_proto.h b/block/opal_proto.h
+index dec7ce3a3edb7..d247a457bf6e3 100644
+--- a/block/opal_proto.h
++++ b/block/opal_proto.h
+@@ -71,6 +71,7 @@ enum opal_response_token {
+ #define SHORT_ATOM_BYTE  0xBF
+ #define MEDIUM_ATOM_BYTE 0xDF
+ #define LONG_ATOM_BYTE   0xE3
++#define EMPTY_ATOM_BYTE  0xFF
+ 
+ #define OPAL_INVAL_PARAM 12
+ #define OPAL_MANUFACTURED_INACTIVE 0x08
+diff --git a/block/sed-opal.c b/block/sed-opal.c
+index 3d9e9cd250bd5..fa4dba5d85319 100644
+--- a/block/sed-opal.c
++++ b/block/sed-opal.c
+@@ -1056,16 +1056,20 @@ static int response_parse(const u8 *buf, size_t length,
+ 			token_length = response_parse_medium(iter, pos);
+ 		else if (pos[0] <= LONG_ATOM_BYTE) /* long atom */
+ 			token_length = response_parse_long(iter, pos);
++		else if (pos[0] == EMPTY_ATOM_BYTE) /* empty atom */
++			token_length = 1;
+ 		else /* TOKEN */
+ 			token_length = response_parse_token(iter, pos);
+ 
+ 		if (token_length < 0)
+ 			return token_length;
+ 
++		if (pos[0] != EMPTY_ATOM_BYTE)
++			num_entries++;
++
+ 		pos += token_length;
+ 		total -= token_length;
+ 		iter++;
+-		num_entries++;
+ 	}
+ 
+ 	resp->num = num_entries;
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 70661f58ee41c..dd5353c5eb24d 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1292,10 +1292,11 @@ config CRYPTO_JITTERENTROPY
+ 
+ 	  A non-physical non-deterministic ("true") RNG (e.g., an entropy source
+ 	  compliant with NIST SP800-90B) intended to provide a seed to a
+-	  deterministic RNG (e.g.  per NIST SP800-90C).
++	  deterministic RNG (e.g., per NIST SP800-90C).
+ 	  This RNG does not perform any cryptographic whitening of the generated
++	  random numbers.
+ 
+-	  See https://www.chronox.de/jent.html
++	  See https://www.chronox.de/jent/
+ 
+ if CRYPTO_JITTERENTROPY
+ if CRYPTO_FIPS && EXPERT
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 55437f5e0c3ae..bd6a7857ce058 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -1430,6 +1430,8 @@ int acpi_processor_power_exit(struct acpi_processor *pr)
+ 		acpi_processor_registered--;
+ 		if (acpi_processor_registered == 0)
+ 			cpuidle_unregister_driver(&acpi_idle_driver);
++
++		kfree(dev);
+ 	}
+ 
+ 	pr->flags.power_setup_done = 0;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 03b52d31a3e3f..c843feb02e980 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -576,6 +576,39 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "GM6BG0Q"),
+ 		},
+ 	},
++	{
++		/* Infinity E15-5A165-BM */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GM5RG1E0009COM"),
++		},
++	},
++	{
++		/* Infinity E15-5A305-1M */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "GM5RGEE0016COM"),
++		},
++	},
++	{
++		/* Lunnen Ground 15 / AMD Ryzen 5 5500U */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Lunnen"),
++			DMI_MATCH(DMI_BOARD_NAME, "LLL5DAW"),
++		},
++	},
++	{
++		/* Lunnen Ground 16 / AMD Ryzen 7 5800U */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Lunnen"),
++			DMI_MATCH(DMI_BOARD_NAME, "LL6FA"),
++		},
++	},
++	{
++		/* MAIBENBEN X577 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MAIBENBEN"),
++			DMI_MATCH(DMI_BOARD_NAME, "X577"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 02bb2cce423f4..35ad5781f0a5e 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -314,18 +314,14 @@ static int acpi_scan_device_check(struct acpi_device *adev)
+ 		 * again).
+ 		 */
+ 		if (adev->handler) {
+-			dev_warn(&adev->dev, "Already enumerated\n");
+-			return -EALREADY;
++			dev_dbg(&adev->dev, "Already enumerated\n");
++			return 0;
+ 		}
+ 		error = acpi_bus_scan(adev->handle);
+ 		if (error) {
+ 			dev_warn(&adev->dev, "Namespace scan failure\n");
+ 			return error;
+ 		}
+-		if (!adev->handler) {
+-			dev_warn(&adev->dev, "Enumeration failure\n");
+-			error = -ENODEV;
+-		}
+ 	} else {
+ 		error = acpi_scan_device_not_enumerated(adev);
+ 	}
+diff --git a/drivers/base/regmap/regmap-kunit.c b/drivers/base/regmap/regmap-kunit.c
+index e14cc03a17f6c..4c9e38ead5c56 100644
+--- a/drivers/base/regmap/regmap-kunit.c
++++ b/drivers/base/regmap/regmap-kunit.c
+@@ -9,6 +9,23 @@
+ 
+ #define BLOCK_TEST_SIZE 12
+ 
++static void get_changed_bytes(void *orig, void *new, size_t size)
++{
++	char *o = orig;
++	char *n = new;
++	int i;
++
++	get_random_bytes(new, size);
++
++	/*
++	 * This could be nicer and more efficient but we shouldn't
++	 * super care.
++	 */
++	for (i = 0; i < size; i++)
++		while (n[i] == o[i])
++			get_random_bytes(&n[i], 1);
++}
++
+ static const struct regmap_config test_regmap_config = {
+ 	.max_register = BLOCK_TEST_SIZE,
+ 	.reg_stride = 1,
+@@ -1192,7 +1209,7 @@ static void raw_sync(struct kunit *test)
+ 	struct regmap *map;
+ 	struct regmap_config config;
+ 	struct regmap_ram_data *data;
+-	u16 val[2];
++	u16 val[3];
+ 	u16 *hw_buf;
+ 	unsigned int rval;
+ 	int i;
+@@ -1206,17 +1223,13 @@ static void raw_sync(struct kunit *test)
+ 
+ 	hw_buf = (u16 *)data->vals;
+ 
+-	get_random_bytes(&val, sizeof(val));
++	get_changed_bytes(&hw_buf[2], &val[0], sizeof(val));
+ 
+ 	/* Do a regular write and a raw write in cache only mode */
+ 	regcache_cache_only(map, true);
+-	KUNIT_EXPECT_EQ(test, 0, regmap_raw_write(map, 2, val, sizeof(val)));
+-	if (config.val_format_endian == REGMAP_ENDIAN_BIG)
+-		KUNIT_EXPECT_EQ(test, 0, regmap_write(map, 6,
+-						      be16_to_cpu(val[0])));
+-	else
+-		KUNIT_EXPECT_EQ(test, 0, regmap_write(map, 6,
+-						      le16_to_cpu(val[0])));
++	KUNIT_EXPECT_EQ(test, 0, regmap_raw_write(map, 2, val,
++						  sizeof(u16) * 2));
++	KUNIT_EXPECT_EQ(test, 0, regmap_write(map, 4, val[2]));
+ 
+ 	/* We should read back the new values, and defaults for the rest */
+ 	for (i = 0; i < config.max_register + 1; i++) {
+@@ -1225,24 +1238,34 @@ static void raw_sync(struct kunit *test)
+ 		switch (i) {
+ 		case 2:
+ 		case 3:
+-		case 6:
+ 			if (config.val_format_endian == REGMAP_ENDIAN_BIG) {
+ 				KUNIT_EXPECT_EQ(test, rval,
+-						be16_to_cpu(val[i % 2]));
++						be16_to_cpu(val[i - 2]));
+ 			} else {
+ 				KUNIT_EXPECT_EQ(test, rval,
+-						le16_to_cpu(val[i % 2]));
++						le16_to_cpu(val[i - 2]));
+ 			}
+ 			break;
++		case 4:
++			KUNIT_EXPECT_EQ(test, rval, val[i - 2]);
++			break;
+ 		default:
+ 			KUNIT_EXPECT_EQ(test, config.reg_defaults[i].def, rval);
+ 			break;
+ 		}
+ 	}
++
++	/*
++	 * The value written via _write() was translated by the core,
++	 * translate the original copy for comparison purposes.
++	 */
++	if (config.val_format_endian == REGMAP_ENDIAN_BIG)
++		val[2] = cpu_to_be16(val[2]);
++	else
++		val[2] = cpu_to_le16(val[2]);
+ 	
+ 	/* The values should not appear in the "hardware" */
+-	KUNIT_EXPECT_MEMNEQ(test, &hw_buf[2], val, sizeof(val));
+-	KUNIT_EXPECT_MEMNEQ(test, &hw_buf[6], val, sizeof(u16));
++	KUNIT_EXPECT_MEMNEQ(test, &hw_buf[2], &val[0], sizeof(val));
+ 
+ 	for (i = 0; i < config.max_register + 1; i++)
+ 		data->written[i] = false;
+@@ -1253,8 +1276,7 @@ static void raw_sync(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, 0, regcache_sync(map));
+ 
+ 	/* The values should now appear in the "hardware" */
+-	KUNIT_EXPECT_MEMEQ(test, &hw_buf[2], val, sizeof(val));
+-	KUNIT_EXPECT_MEMEQ(test, &hw_buf[6], val, sizeof(u16));
++	KUNIT_EXPECT_MEMEQ(test, &hw_buf[2], &val[0], sizeof(val));
+ 
+ 	regmap_exit(map);
+ }
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index d7317425be510..cc9077b588d7e 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -419,13 +419,16 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff_head *qu
+ 	rcu_read_lock();
+ 	for_each_netdev_rcu(&init_net, ifp) {
+ 		dev_hold(ifp);
+-		if (!is_aoe_netif(ifp))
+-			goto cont;
++		if (!is_aoe_netif(ifp)) {
++			dev_put(ifp);
++			continue;
++		}
+ 
+ 		skb = new_skb(sizeof *h + sizeof *ch);
+ 		if (skb == NULL) {
+ 			printk(KERN_INFO "aoe: skb alloc failure\n");
+-			goto cont;
++			dev_put(ifp);
++			continue;
+ 		}
+ 		skb_put(skb, sizeof *h + sizeof *ch);
+ 		skb->dev = ifp;
+@@ -440,9 +443,6 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff_head *qu
+ 		h->major = cpu_to_be16(aoemajor);
+ 		h->minor = aoeminor;
+ 		h->cmd = AOECMD_CFG;
+-
+-cont:
+-		dev_put(ifp);
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/drivers/block/aoe/aoenet.c b/drivers/block/aoe/aoenet.c
+index c51ea95bc2ce4..923a134fd7665 100644
+--- a/drivers/block/aoe/aoenet.c
++++ b/drivers/block/aoe/aoenet.c
+@@ -63,6 +63,7 @@ tx(int id) __must_hold(&txlock)
+ 			pr_warn("aoe: packet could not be sent on %s.  %s\n",
+ 				ifp ? ifp->name : "netif",
+ 				"consider increasing tx_queue_len");
++		dev_put(ifp);
+ 		spin_lock_irq(&txlock);
+ 	}
+ 	return 0;
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index aa65313aabb8d..df738eab02433 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -2437,6 +2437,12 @@ static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	dev_list = nla_nest_start_noflag(reply, NBD_ATTR_DEVICE_LIST);
++	if (!dev_list) {
++		nlmsg_free(reply);
++		ret = -EMSGSIZE;
++		goto out;
++	}
++
+ 	if (index == -1) {
+ 		ret = idr_for_each(&nbd_index_idr, &status_cb, reply);
+ 		if (ret) {
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index aaabb732082cd..285418dbb43f5 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -372,8 +372,10 @@ int btmtk_process_coredump(struct hci_dev *hdev, struct sk_buff *skb)
+ 	struct btmediatek_data *data = hci_get_priv(hdev);
+ 	int err;
+ 
+-	if (!IS_ENABLED(CONFIG_DEV_COREDUMP))
++	if (!IS_ENABLED(CONFIG_DEV_COREDUMP)) {
++		kfree_skb(skb);
+ 		return 0;
++	}
+ 
+ 	switch (data->cd_info.state) {
+ 	case HCI_DEVCOREDUMP_IDLE:
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index b8e9de887b5de..81ccb88c904a3 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -3271,7 +3271,6 @@ static int btusb_recv_acl_mtk(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ 	struct btusb_data *data = hci_get_drvdata(hdev);
+ 	u16 handle = le16_to_cpu(hci_acl_hdr(skb)->handle);
+-	struct sk_buff *skb_cd;
+ 
+ 	switch (handle) {
+ 	case 0xfc6f:		/* Firmware dump from device */
+@@ -3284,9 +3283,12 @@ static int btusb_recv_acl_mtk(struct hci_dev *hdev, struct sk_buff *skb)
+ 		 * for backward compatibility, so we have to clone the packet
+ 		 * extraly for the in-kernel coredump support.
+ 		 */
+-		skb_cd = skb_clone(skb, GFP_ATOMIC);
+-		if (skb_cd)
+-			btmtk_process_coredump(hdev, skb_cd);
++		if (IS_ENABLED(CONFIG_DEV_COREDUMP)) {
++			struct sk_buff *skb_cd = skb_clone(skb, GFP_ATOMIC);
++
++			if (skb_cd)
++				btmtk_process_coredump(hdev, skb_cd);
++		}
+ 
+ 		fallthrough;
+ 	case 0x05ff:		/* Firmware debug logging 1 */
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 71e748a9477e4..c0436881a533c 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -113,6 +113,7 @@ struct h5_vnd {
+ 	int (*suspend)(struct h5 *h5);
+ 	int (*resume)(struct h5 *h5);
+ 	const struct acpi_gpio_mapping *acpi_gpio_map;
++	int sizeof_priv;
+ };
+ 
+ struct h5_device_data {
+@@ -863,7 +864,8 @@ static int h5_serdev_probe(struct serdev_device *serdev)
+ 	if (IS_ERR(h5->device_wake_gpio))
+ 		return PTR_ERR(h5->device_wake_gpio);
+ 
+-	return hci_uart_register_device(&h5->serdev_hu, &h5p);
++	return hci_uart_register_device_priv(&h5->serdev_hu, &h5p,
++					     h5->vnd->sizeof_priv);
+ }
+ 
+ static void h5_serdev_remove(struct serdev_device *serdev)
+@@ -1070,6 +1072,7 @@ static struct h5_vnd rtl_vnd = {
+ 	.suspend	= h5_btrtl_suspend,
+ 	.resume		= h5_btrtl_resume,
+ 	.acpi_gpio_map	= acpi_btrtl_gpios,
++	.sizeof_priv    = sizeof(struct btrealtek_data),
+ };
+ 
+ static const struct h5_device_data h5_data_rtl8822cs = {
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index a65de7309da4c..bb69b7ea8119d 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2304,7 +2304,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 
+ 		qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
+ 					       GPIOD_OUT_LOW);
+-		if (IS_ERR_OR_NULL(qcadev->bt_en) &&
++		if (IS_ERR(qcadev->bt_en) &&
+ 		    (data->soc_type == QCA_WCN6750 ||
+ 		     data->soc_type == QCA_WCN6855)) {
+ 			dev_err(&serdev->dev, "failed to acquire BT_EN gpio\n");
+@@ -2313,7 +2313,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 
+ 		qcadev->sw_ctrl = devm_gpiod_get_optional(&serdev->dev, "swctrl",
+ 					       GPIOD_IN);
+-		if (IS_ERR_OR_NULL(qcadev->sw_ctrl) &&
++		if (IS_ERR(qcadev->sw_ctrl) &&
+ 		    (data->soc_type == QCA_WCN6750 ||
+ 		     data->soc_type == QCA_WCN6855 ||
+ 		     data->soc_type == QCA_WCN7850))
+@@ -2335,7 +2335,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ 	default:
+ 		qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
+ 					       GPIOD_OUT_LOW);
+-		if (IS_ERR_OR_NULL(qcadev->bt_en)) {
++		if (IS_ERR(qcadev->bt_en)) {
+ 			dev_warn(&serdev->dev, "failed to acquire enable gpio\n");
+ 			power_ctrl_enabled = false;
+ 		}
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index f16fd79bc02b8..611a11fbb2f3a 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -300,8 +300,9 @@ static const struct serdev_device_ops hci_serdev_client_ops = {
+ 	.write_wakeup = hci_uart_write_wakeup,
+ };
+ 
+-int hci_uart_register_device(struct hci_uart *hu,
+-			     const struct hci_uart_proto *p)
++int hci_uart_register_device_priv(struct hci_uart *hu,
++			     const struct hci_uart_proto *p,
++			     int sizeof_priv)
+ {
+ 	int err;
+ 	struct hci_dev *hdev;
+@@ -325,7 +326,7 @@ int hci_uart_register_device(struct hci_uart *hu,
+ 	set_bit(HCI_UART_PROTO_READY, &hu->flags);
+ 
+ 	/* Initialize and register HCI device */
+-	hdev = hci_alloc_dev();
++	hdev = hci_alloc_dev_priv(sizeof_priv);
+ 	if (!hdev) {
+ 		BT_ERR("Can't allocate HCI device");
+ 		err = -ENOMEM;
+@@ -394,7 +395,7 @@ int hci_uart_register_device(struct hci_uart *hu,
+ 	percpu_free_rwsem(&hu->proto_lock);
+ 	return err;
+ }
+-EXPORT_SYMBOL_GPL(hci_uart_register_device);
++EXPORT_SYMBOL_GPL(hci_uart_register_device_priv);
+ 
+ void hci_uart_unregister_device(struct hci_uart *hu)
+ {
+diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h
+index fb4a2d0d8cc80..68c8c7e95d64d 100644
+--- a/drivers/bluetooth/hci_uart.h
++++ b/drivers/bluetooth/hci_uart.h
+@@ -97,7 +97,17 @@ struct hci_uart {
+ 
+ int hci_uart_register_proto(const struct hci_uart_proto *p);
+ int hci_uart_unregister_proto(const struct hci_uart_proto *p);
+-int hci_uart_register_device(struct hci_uart *hu, const struct hci_uart_proto *p);
++
++int hci_uart_register_device_priv(struct hci_uart *hu,
++				  const struct hci_uart_proto *p,
++				  int sizeof_priv);
++
++static inline int hci_uart_register_device(struct hci_uart *hu,
++					   const struct hci_uart_proto *p)
++{
++	return hci_uart_register_device_priv(hu, p, 0);
++}
++
+ void hci_uart_unregister_device(struct hci_uart *hu);
+ 
+ int hci_uart_tx_wakeup(struct hci_uart *hu);
+diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
+index e6742998f372c..d5e7fa9173a16 100644
+--- a/drivers/bus/Kconfig
++++ b/drivers/bus/Kconfig
+@@ -186,11 +186,12 @@ config SUNXI_RSB
+ 
+ config TEGRA_ACONNECT
+ 	tristate "Tegra ACONNECT Bus Driver"
+-	depends on ARCH_TEGRA_210_SOC
++	depends on ARCH_TEGRA
+ 	depends on OF && PM
+ 	help
+ 	  Driver for the Tegra ACONNECT bus which is used to interface with
+-	  the devices inside the Audio Processing Engine (APE) for Tegra210.
++	  the devices inside the Audio Processing Engine (APE) for
++	  Tegra210 and later.
+ 
+ config TEGRA_GMI
+ 	tristate "Tegra Generic Memory Interface bus driver"
+diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
+index 582d5c166a75e..934cdbca08e44 100644
+--- a/drivers/bus/mhi/ep/main.c
++++ b/drivers/bus/mhi/ep/main.c
+@@ -1427,7 +1427,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+ 	mhi_cntrl->ring_item_cache = kmem_cache_create("mhi_ep_ring_item",
+ 							sizeof(struct mhi_ep_ring_item), 0,
+ 							0, NULL);
+-	if (!mhi_cntrl->ev_ring_el_cache) {
++	if (!mhi_cntrl->ring_item_cache) {
+ 		ret = -ENOMEM;
+ 		goto err_destroy_tre_buf_cache;
+ 	}
+diff --git a/drivers/char/xilinx_hwicap/xilinx_hwicap.c b/drivers/char/xilinx_hwicap/xilinx_hwicap.c
+index 019cf6079cecd..6d2eadefd9dc9 100644
+--- a/drivers/char/xilinx_hwicap/xilinx_hwicap.c
++++ b/drivers/char/xilinx_hwicap/xilinx_hwicap.c
+@@ -639,8 +639,8 @@ static int hwicap_setup(struct platform_device *pdev, int id,
+ 	dev_set_drvdata(dev, (void *)drvdata);
+ 
+ 	drvdata->base_address = devm_platform_ioremap_resource(pdev, 0);
+-	if (!drvdata->base_address) {
+-		retval = -ENODEV;
++	if (IS_ERR(drvdata->base_address)) {
++		retval = PTR_ERR(drvdata->base_address);
+ 		goto failed;
+ 	}
+ 
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 2253c154a8248..20c4b28fed061 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -418,6 +418,9 @@ static struct clk_core *clk_core_get(struct clk_core *core, u8 p_index)
+ 	if (IS_ERR(hw))
+ 		return ERR_CAST(hw);
+ 
++	if (!hw)
++		return NULL;
++
+ 	return hw->core;
+ }
+ 
+diff --git a/drivers/clk/hisilicon/clk-hi3519.c b/drivers/clk/hisilicon/clk-hi3519.c
+index b871872d9960d..141b727ff60d6 100644
+--- a/drivers/clk/hisilicon/clk-hi3519.c
++++ b/drivers/clk/hisilicon/clk-hi3519.c
+@@ -130,7 +130,7 @@ static void hi3519_clk_unregister(struct platform_device *pdev)
+ 	of_clk_del_provider(pdev->dev.of_node);
+ 
+ 	hisi_clk_unregister_gate(hi3519_gate_clks,
+-				ARRAY_SIZE(hi3519_mux_clks),
++				ARRAY_SIZE(hi3519_gate_clks),
+ 				crg->clk_data);
+ 	hisi_clk_unregister_mux(hi3519_mux_clks,
+ 				ARRAY_SIZE(hi3519_mux_clks),
+diff --git a/drivers/clk/hisilicon/clk-hi3559a.c b/drivers/clk/hisilicon/clk-hi3559a.c
+index ff4ca0edce06a..4623befafaec4 100644
+--- a/drivers/clk/hisilicon/clk-hi3559a.c
++++ b/drivers/clk/hisilicon/clk-hi3559a.c
+@@ -491,7 +491,6 @@ static void hisi_clk_register_pll(struct hi3559av100_pll_clock *clks,
+ 
+ 		clk = clk_register(NULL, &p_clk->hw);
+ 		if (IS_ERR(clk)) {
+-			devm_kfree(dev, p_clk);
+ 			dev_err(dev, "%s: failed to register clock %s\n",
+ 			       __func__, clks[i].name);
+ 			continue;
+diff --git a/drivers/clk/imx/clk-imx8mp-audiomix.c b/drivers/clk/imx/clk-imx8mp-audiomix.c
+index e4300df88f1ac..55ed211a5e0b1 100644
+--- a/drivers/clk/imx/clk-imx8mp-audiomix.c
++++ b/drivers/clk/imx/clk-imx8mp-audiomix.c
+@@ -18,7 +18,12 @@
+ 
+ #define CLKEN0			0x000
+ #define CLKEN1			0x004
+-#define SAI_MCLK_SEL(n)		(0x300 + 4 * (n))	/* n in 0..5 */
++#define SAI1_MCLK_SEL		0x300
++#define SAI2_MCLK_SEL		0x304
++#define SAI3_MCLK_SEL		0x308
++#define SAI5_MCLK_SEL		0x30C
++#define SAI6_MCLK_SEL		0x310
++#define SAI7_MCLK_SEL		0x314
+ #define PDM_SEL			0x318
+ #define SAI_PLL_GNRL_CTL	0x400
+ 
+@@ -95,13 +100,13 @@ static const struct clk_parent_data clk_imx8mp_audiomix_pll_bypass_sels[] = {
+ 		IMX8MP_CLK_AUDIOMIX_SAI##n##_MCLK1_SEL, {},		\
+ 		clk_imx8mp_audiomix_sai##n##_mclk1_parents,		\
+ 		ARRAY_SIZE(clk_imx8mp_audiomix_sai##n##_mclk1_parents), \
+-		SAI_MCLK_SEL(n), 1, 0					\
++		SAI##n##_MCLK_SEL, 1, 0					\
+ 	}, {								\
+ 		"sai"__stringify(n)"_mclk2_sel",			\
+ 		IMX8MP_CLK_AUDIOMIX_SAI##n##_MCLK2_SEL, {},		\
+ 		clk_imx8mp_audiomix_sai_mclk2_parents,			\
+ 		ARRAY_SIZE(clk_imx8mp_audiomix_sai_mclk2_parents),	\
+-		SAI_MCLK_SEL(n), 4, 1					\
++		SAI##n##_MCLK_SEL, 4, 1					\
+ 	}, {								\
+ 		"sai"__stringify(n)"_ipg_cg",				\
+ 		IMX8MP_CLK_AUDIOMIX_SAI##n##_IPG,			\
+diff --git a/drivers/clk/mediatek/clk-mt7622-apmixedsys.c b/drivers/clk/mediatek/clk-mt7622-apmixedsys.c
+index 9cffd278e9a43..1b8f859b6b6cc 100644
+--- a/drivers/clk/mediatek/clk-mt7622-apmixedsys.c
++++ b/drivers/clk/mediatek/clk-mt7622-apmixedsys.c
+@@ -127,7 +127,6 @@ static void clk_mt7622_apmixed_remove(struct platform_device *pdev)
+ 	of_clk_del_provider(node);
+ 	mtk_clk_unregister_gates(apmixed_clks, ARRAY_SIZE(apmixed_clks), clk_data);
+ 	mtk_clk_unregister_plls(plls, ARRAY_SIZE(plls), clk_data);
+-	mtk_free_clk_data(clk_data);
+ }
+ 
+ static const struct of_device_id of_match_clk_mt7622_apmixed[] = {
+diff --git a/drivers/clk/mediatek/clk-mt7981-topckgen.c b/drivers/clk/mediatek/clk-mt7981-topckgen.c
+index 682f4ca9e89ad..493aa11d3a175 100644
+--- a/drivers/clk/mediatek/clk-mt7981-topckgen.c
++++ b/drivers/clk/mediatek/clk-mt7981-topckgen.c
+@@ -357,8 +357,9 @@ static const struct mtk_mux top_muxes[] = {
+ 	MUX_GATE_CLR_SET_UPD(CLK_TOP_SGM_325M_SEL, "sgm_325m_sel",
+ 			     sgm_325m_parents, 0x050, 0x054, 0x058, 8, 1, 15,
+ 			     0x1C0, 21),
+-	MUX_GATE_CLR_SET_UPD(CLK_TOP_SGM_REG_SEL, "sgm_reg_sel", sgm_reg_parents,
+-			     0x050, 0x054, 0x058, 16, 1, 23, 0x1C0, 22),
++	MUX_GATE_CLR_SET_UPD_FLAGS(CLK_TOP_SGM_REG_SEL, "sgm_reg_sel", sgm_reg_parents,
++				   0x050, 0x054, 0x058, 16, 1, 23, 0x1C0, 22,
++				   CLK_IS_CRITICAL | CLK_SET_RATE_PARENT),
+ 	MUX_GATE_CLR_SET_UPD(CLK_TOP_EIP97B_SEL, "eip97b_sel", eip97b_parents,
+ 			     0x050, 0x054, 0x058, 24, 3, 31, 0x1C0, 23),
+ 	/* CLK_CFG_6 */
+diff --git a/drivers/clk/mediatek/clk-mt8135-apmixedsys.c b/drivers/clk/mediatek/clk-mt8135-apmixedsys.c
+index d1239b4b3db74..41bb2d2e2ea74 100644
+--- a/drivers/clk/mediatek/clk-mt8135-apmixedsys.c
++++ b/drivers/clk/mediatek/clk-mt8135-apmixedsys.c
+@@ -59,7 +59,7 @@ static int clk_mt8135_apmixed_probe(struct platform_device *pdev)
+ 
+ 	ret = mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
+ 	if (ret)
+-		return ret;
++		goto free_clk_data;
+ 
+ 	ret = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ 	if (ret)
+@@ -69,6 +69,8 @@ static int clk_mt8135_apmixed_probe(struct platform_device *pdev)
+ 
+ unregister_plls:
+ 	mtk_clk_unregister_plls(plls, ARRAY_SIZE(plls), clk_data);
++free_clk_data:
++	mtk_free_clk_data(clk_data);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/clk/mediatek/clk-mt8183.c b/drivers/clk/mediatek/clk-mt8183.c
+index 6e23461a04559..934d5a15acfc5 100644
+--- a/drivers/clk/mediatek/clk-mt8183.c
++++ b/drivers/clk/mediatek/clk-mt8183.c
+@@ -790,7 +790,7 @@ static const struct mtk_gate infra_clks[] = {
+ 	/* infra_sspm_26m_self is main clock in co-processor, should not be closed in Linux. */
+ 	GATE_INFRA3_FLAGS(CLK_INFRA_SSPM_26M_SELF, "infra_sspm_26m_self", "f_f26m_ck", 3, CLK_IS_CRITICAL),
+ 	/* infra_sspm_32k_self is main clock in co-processor, should not be closed in Linux. */
+-	GATE_INFRA3_FLAGS(CLK_INFRA_SSPM_32K_SELF, "infra_sspm_32k_self", "f_f26m_ck", 4, CLK_IS_CRITICAL),
++	GATE_INFRA3_FLAGS(CLK_INFRA_SSPM_32K_SELF, "infra_sspm_32k_self", "clk32k", 4, CLK_IS_CRITICAL),
+ 	GATE_INFRA3(CLK_INFRA_UFS_AXI, "infra_ufs_axi", "axi_sel", 5),
+ 	GATE_INFRA3(CLK_INFRA_I2C6, "infra_i2c6", "i2c_sel", 6),
+ 	GATE_INFRA3(CLK_INFRA_AP_MSDC0, "infra_ap_msdc0", "msdc50_hclk_sel", 7),
+diff --git a/drivers/clk/meson/axg.c b/drivers/clk/meson/axg.c
+index c12f81dfa6745..5f60f2bcca592 100644
+--- a/drivers/clk/meson/axg.c
++++ b/drivers/clk/meson/axg.c
+@@ -2142,7 +2142,9 @@ static struct clk_regmap *const axg_clk_regmaps[] = {
+ 	&axg_vclk_input,
+ 	&axg_vclk2_input,
+ 	&axg_vclk_div,
++	&axg_vclk_div1,
+ 	&axg_vclk2_div,
++	&axg_vclk2_div1,
+ 	&axg_vclk_div2_en,
+ 	&axg_vclk_div4_en,
+ 	&axg_vclk_div6_en,
+diff --git a/drivers/clk/qcom/dispcc-sdm845.c b/drivers/clk/qcom/dispcc-sdm845.c
+index 735adfefc3798..e792e0b130d33 100644
+--- a/drivers/clk/qcom/dispcc-sdm845.c
++++ b/drivers/clk/qcom/dispcc-sdm845.c
+@@ -759,6 +759,8 @@ static struct clk_branch disp_cc_mdss_vsync_clk = {
+ 
+ static struct gdsc mdss_gdsc = {
+ 	.gdscr = 0x3000,
++	.en_few_wait_val = 0x6,
++	.en_rest_wait_val = 0x5,
+ 	.pd = {
+ 		.name = "mdss_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/gcc-ipq5018.c b/drivers/clk/qcom/gcc-ipq5018.c
+index 4aba47e8700d2..e2bd54826a4ce 100644
+--- a/drivers/clk/qcom/gcc-ipq5018.c
++++ b/drivers/clk/qcom/gcc-ipq5018.c
+@@ -1754,7 +1754,7 @@ static struct clk_branch gcc_gmac0_sys_clk = {
+ 	.halt_check = BRANCH_HALT_DELAY,
+ 	.halt_bit = 31,
+ 	.clkr = {
+-		.enable_reg = 0x683190,
++		.enable_reg = 0x68190,
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(struct clk_init_data) {
+ 			.name = "gcc_gmac0_sys_clk",
+@@ -2180,7 +2180,7 @@ static struct clk_branch gcc_pcie1_axi_s_clk = {
+ };
+ 
+ static struct clk_branch gcc_pcie1_pipe_clk = {
+-	.halt_reg = 8,
++	.halt_reg = 0x76018,
+ 	.halt_check = BRANCH_HALT_DELAY,
+ 	.halt_bit = 31,
+ 	.clkr = {
+@@ -3632,7 +3632,7 @@ static const struct qcom_reset_map gcc_ipq5018_resets[] = {
+ 	[GCC_SYSTEM_NOC_BCR] = { 0x26000, 0 },
+ 	[GCC_TCSR_BCR] = { 0x28000, 0 },
+ 	[GCC_TLMM_BCR] = { 0x34000, 0 },
+-	[GCC_UBI0_AXI_ARES] = { 0x680},
++	[GCC_UBI0_AXI_ARES] = { 0x68010, 0 },
+ 	[GCC_UBI0_AHB_ARES] = { 0x68010, 1 },
+ 	[GCC_UBI0_NC_AXI_ARES] = { 0x68010, 2 },
+ 	[GCC_UBI0_DBG_ARES] = { 0x68010, 3 },
+diff --git a/drivers/clk/qcom/reset.c b/drivers/clk/qcom/reset.c
+index e45e32804d2c7..d96c96a9089f4 100644
+--- a/drivers/clk/qcom/reset.c
++++ b/drivers/clk/qcom/reset.c
+@@ -22,8 +22,8 @@ static int qcom_reset(struct reset_controller_dev *rcdev, unsigned long id)
+ 	return 0;
+ }
+ 
+-static int
+-qcom_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
++static int qcom_reset_set_assert(struct reset_controller_dev *rcdev,
++				 unsigned long id, bool assert)
+ {
+ 	struct qcom_reset_controller *rst;
+ 	const struct qcom_reset_map *map;
+@@ -33,21 +33,22 @@ qcom_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
+ 	map = &rst->reset_map[id];
+ 	mask = map->bitmask ? map->bitmask : BIT(map->bit);
+ 
+-	return regmap_update_bits(rst->regmap, map->reg, mask, mask);
++	regmap_update_bits(rst->regmap, map->reg, mask, assert ? mask : 0);
++
++	/* Read back the register to ensure write completion, ignore the value */
++	regmap_read(rst->regmap, map->reg, &mask);
++
++	return 0;
+ }
+ 
+-static int
+-qcom_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
++static int qcom_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
+ {
+-	struct qcom_reset_controller *rst;
+-	const struct qcom_reset_map *map;
+-	u32 mask;
+-
+-	rst = to_qcom_reset_controller(rcdev);
+-	map = &rst->reset_map[id];
+-	mask = map->bitmask ? map->bitmask : BIT(map->bit);
++	return qcom_reset_set_assert(rcdev, id, true);
++}
+ 
+-	return regmap_update_bits(rst->regmap, map->reg, mask, 0);
++static int qcom_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
++{
++	return qcom_reset_set_assert(rcdev, id, false);
+ }
+ 
+ const struct reset_control_ops qcom_reset_ops = {
+diff --git a/drivers/clk/renesas/r8a779f0-cpg-mssr.c b/drivers/clk/renesas/r8a779f0-cpg-mssr.c
+index f721835c7e212..cc06127406ab5 100644
+--- a/drivers/clk/renesas/r8a779f0-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a779f0-cpg-mssr.c
+@@ -161,7 +161,7 @@ static const struct mssr_mod_clk r8a779f0_mod_clks[] __initconst = {
+ 	DEF_MOD("cmt1",		911,	R8A779F0_CLK_R),
+ 	DEF_MOD("cmt2",		912,	R8A779F0_CLK_R),
+ 	DEF_MOD("cmt3",		913,	R8A779F0_CLK_R),
+-	DEF_MOD("pfc0",		915,	R8A779F0_CLK_CL16M),
++	DEF_MOD("pfc0",		915,	R8A779F0_CLK_CPEX),
+ 	DEF_MOD("tsc",		919,	R8A779F0_CLK_CL16M),
+ 	DEF_MOD("rswitch2",	1505,	R8A779F0_CLK_RSW2),
+ 	DEF_MOD("ether-serdes",	1506,	R8A779F0_CLK_S0D2_HSC),
+diff --git a/drivers/clk/renesas/r8a779g0-cpg-mssr.c b/drivers/clk/renesas/r8a779g0-cpg-mssr.c
+index 7cc580d673626..7999faa9a921b 100644
+--- a/drivers/clk/renesas/r8a779g0-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a779g0-cpg-mssr.c
+@@ -22,7 +22,7 @@
+ 
+ enum clk_ids {
+ 	/* Core Clock Outputs exported to DT */
+-	LAST_DT_CORE_CLK = R8A779G0_CLK_R,
++	LAST_DT_CORE_CLK = R8A779G0_CLK_CP,
+ 
+ 	/* External Input Clocks */
+ 	CLK_EXTAL,
+@@ -141,6 +141,7 @@ static const struct cpg_core_clk r8a779g0_core_clks[] __initconst = {
+ 	DEF_FIXED("svd2_vip",	R8A779G0_CLK_SVD2_VIP,	CLK_SV_VIP,	2, 1),
+ 	DEF_FIXED("cbfusa",	R8A779G0_CLK_CBFUSA,	CLK_EXTAL,	2, 1),
+ 	DEF_FIXED("cpex",	R8A779G0_CLK_CPEX,	CLK_EXTAL,	2, 1),
++	DEF_FIXED("cp",		R8A779G0_CLK_CP,	CLK_EXTAL,	2, 1),
+ 	DEF_FIXED("viobus",	R8A779G0_CLK_VIOBUS,	CLK_VIO,	1, 1),
+ 	DEF_FIXED("viobusd2",	R8A779G0_CLK_VIOBUSD2,	CLK_VIO,	2, 1),
+ 	DEF_FIXED("vcbus",	R8A779G0_CLK_VCBUS,	CLK_VC,		1, 1),
+@@ -230,10 +231,10 @@ static const struct mssr_mod_clk r8a779g0_mod_clks[] __initconst = {
+ 	DEF_MOD("cmt1",		911,	R8A779G0_CLK_R),
+ 	DEF_MOD("cmt2",		912,	R8A779G0_CLK_R),
+ 	DEF_MOD("cmt3",		913,	R8A779G0_CLK_R),
+-	DEF_MOD("pfc0",		915,	R8A779G0_CLK_CL16M),
+-	DEF_MOD("pfc1",		916,	R8A779G0_CLK_CL16M),
+-	DEF_MOD("pfc2",		917,	R8A779G0_CLK_CL16M),
+-	DEF_MOD("pfc3",		918,	R8A779G0_CLK_CL16M),
++	DEF_MOD("pfc0",		915,	R8A779G0_CLK_CP),
++	DEF_MOD("pfc1",		916,	R8A779G0_CLK_CP),
++	DEF_MOD("pfc2",		917,	R8A779G0_CLK_CP),
++	DEF_MOD("pfc3",		918,	R8A779G0_CLK_CP),
+ 	DEF_MOD("tsc",		919,	R8A779G0_CLK_CL16M),
+ 	DEF_MOD("ssiu",		2926,	R8A779G0_CLK_S0D6_PER),
+ 	DEF_MOD("ssi",		2927,	R8A779G0_CLK_S0D6_PER),
+diff --git a/drivers/clk/renesas/r9a07g043-cpg.c b/drivers/clk/renesas/r9a07g043-cpg.c
+index b70bb378ab469..075ade0925d45 100644
+--- a/drivers/clk/renesas/r9a07g043-cpg.c
++++ b/drivers/clk/renesas/r9a07g043-cpg.c
+@@ -138,7 +138,7 @@ static const struct cpg_core_clk r9a07g043_core_clks[] __initconst = {
+ 	DEF_FIXED("SPI1", R9A07G043_CLK_SPI1, CLK_DIV_PLL3_C, 1, 4),
+ 	DEF_SD_MUX("SD0", R9A07G043_CLK_SD0, SEL_SDHI0, SEL_SDHI0_STS, sel_shdi,
+ 		   mtable_sdhi, 0, rzg2l_cpg_sd_clk_mux_notifier),
+-	DEF_SD_MUX("SD1", R9A07G043_CLK_SD1, SEL_SDHI1, SEL_SDHI0_STS, sel_shdi,
++	DEF_SD_MUX("SD1", R9A07G043_CLK_SD1, SEL_SDHI1, SEL_SDHI1_STS, sel_shdi,
+ 		   mtable_sdhi, 0, rzg2l_cpg_sd_clk_mux_notifier),
+ 	DEF_FIXED("SD0_DIV4", CLK_SD0_DIV4, R9A07G043_CLK_SD0, 1, 4),
+ 	DEF_FIXED("SD1_DIV4", CLK_SD1_DIV4, R9A07G043_CLK_SD1, 1, 4),
+diff --git a/drivers/clk/renesas/r9a07g044-cpg.c b/drivers/clk/renesas/r9a07g044-cpg.c
+index 1047278c9079a..bc822b9fd7ce6 100644
+--- a/drivers/clk/renesas/r9a07g044-cpg.c
++++ b/drivers/clk/renesas/r9a07g044-cpg.c
+@@ -178,7 +178,7 @@ static const struct {
+ 		DEF_FIXED("SPI1", R9A07G044_CLK_SPI1, CLK_DIV_PLL3_C, 1, 4),
+ 		DEF_SD_MUX("SD0", R9A07G044_CLK_SD0, SEL_SDHI0, SEL_SDHI0_STS, sel_shdi,
+ 			   mtable_sdhi, 0, rzg2l_cpg_sd_clk_mux_notifier),
+-		DEF_SD_MUX("SD1", R9A07G044_CLK_SD1, SEL_SDHI1, SEL_SDHI0_STS, sel_shdi,
++		DEF_SD_MUX("SD1", R9A07G044_CLK_SD1, SEL_SDHI1, SEL_SDHI1_STS, sel_shdi,
+ 			   mtable_sdhi, 0, rzg2l_cpg_sd_clk_mux_notifier),
+ 		DEF_FIXED("SD0_DIV4", CLK_SD0_DIV4, R9A07G044_CLK_SD0, 1, 4),
+ 		DEF_FIXED("SD1_DIV4", CLK_SD1_DIV4, R9A07G044_CLK_SD1, 1, 4),
+diff --git a/drivers/clk/samsung/clk-exynos850.c b/drivers/clk/samsung/clk-exynos850.c
+index bdc1eef7d6e54..c7b0b9751307b 100644
+--- a/drivers/clk/samsung/clk-exynos850.c
++++ b/drivers/clk/samsung/clk-exynos850.c
+@@ -605,7 +605,7 @@ static const struct samsung_div_clock apm_div_clks[] __initconst = {
+ 
+ static const struct samsung_gate_clock apm_gate_clks[] __initconst = {
+ 	GATE(CLK_GOUT_CLKCMU_CMGP_BUS, "gout_clkcmu_cmgp_bus", "dout_apm_bus",
+-	     CLK_CON_GAT_CLKCMU_CMGP_BUS, 21, 0, 0),
++	     CLK_CON_GAT_CLKCMU_CMGP_BUS, 21, CLK_SET_RATE_PARENT, 0),
+ 	GATE(CLK_GOUT_CLKCMU_CHUB_BUS, "gout_clkcmu_chub_bus",
+ 	     "mout_clkcmu_chub_bus",
+ 	     CLK_CON_GAT_GATE_CLKCMU_CHUB_BUS, 21, 0, 0),
+@@ -974,19 +974,19 @@ static const struct samsung_fixed_rate_clock cmgp_fixed_clks[] __initconst = {
+ static const struct samsung_mux_clock cmgp_mux_clks[] __initconst = {
+ 	MUX(CLK_MOUT_CMGP_ADC, "mout_cmgp_adc", mout_cmgp_adc_p,
+ 	    CLK_CON_MUX_CLK_CMGP_ADC, 0, 1),
+-	MUX(CLK_MOUT_CMGP_USI0, "mout_cmgp_usi0", mout_cmgp_usi0_p,
+-	    CLK_CON_MUX_MUX_CLK_CMGP_USI_CMGP0, 0, 1),
+-	MUX(CLK_MOUT_CMGP_USI1, "mout_cmgp_usi1", mout_cmgp_usi1_p,
+-	    CLK_CON_MUX_MUX_CLK_CMGP_USI_CMGP1, 0, 1),
++	MUX_F(CLK_MOUT_CMGP_USI0, "mout_cmgp_usi0", mout_cmgp_usi0_p,
++	      CLK_CON_MUX_MUX_CLK_CMGP_USI_CMGP0, 0, 1, CLK_SET_RATE_PARENT, 0),
++	MUX_F(CLK_MOUT_CMGP_USI1, "mout_cmgp_usi1", mout_cmgp_usi1_p,
++	      CLK_CON_MUX_MUX_CLK_CMGP_USI_CMGP1, 0, 1, CLK_SET_RATE_PARENT, 0),
+ };
+ 
+ static const struct samsung_div_clock cmgp_div_clks[] __initconst = {
+ 	DIV(CLK_DOUT_CMGP_ADC, "dout_cmgp_adc", "gout_clkcmu_cmgp_bus",
+ 	    CLK_CON_DIV_DIV_CLK_CMGP_ADC, 0, 4),
+-	DIV(CLK_DOUT_CMGP_USI0, "dout_cmgp_usi0", "mout_cmgp_usi0",
+-	    CLK_CON_DIV_DIV_CLK_CMGP_USI_CMGP0, 0, 5),
+-	DIV(CLK_DOUT_CMGP_USI1, "dout_cmgp_usi1", "mout_cmgp_usi1",
+-	    CLK_CON_DIV_DIV_CLK_CMGP_USI_CMGP1, 0, 5),
++	DIV_F(CLK_DOUT_CMGP_USI0, "dout_cmgp_usi0", "mout_cmgp_usi0",
++	      CLK_CON_DIV_DIV_CLK_CMGP_USI_CMGP0, 0, 5, CLK_SET_RATE_PARENT, 0),
++	DIV_F(CLK_DOUT_CMGP_USI1, "dout_cmgp_usi1", "mout_cmgp_usi1",
++	      CLK_CON_DIV_DIV_CLK_CMGP_USI_CMGP1, 0, 5, CLK_SET_RATE_PARENT, 0),
+ };
+ 
+ static const struct samsung_gate_clock cmgp_gate_clks[] __initconst = {
+@@ -1001,12 +1001,12 @@ static const struct samsung_gate_clock cmgp_gate_clks[] __initconst = {
+ 	     "gout_clkcmu_cmgp_bus",
+ 	     CLK_CON_GAT_GOUT_CMGP_GPIO_PCLK, 21, CLK_IGNORE_UNUSED, 0),
+ 	GATE(CLK_GOUT_CMGP_USI0_IPCLK, "gout_cmgp_usi0_ipclk", "dout_cmgp_usi0",
+-	     CLK_CON_GAT_GOUT_CMGP_USI_CMGP0_IPCLK, 21, 0, 0),
++	     CLK_CON_GAT_GOUT_CMGP_USI_CMGP0_IPCLK, 21, CLK_SET_RATE_PARENT, 0),
+ 	GATE(CLK_GOUT_CMGP_USI0_PCLK, "gout_cmgp_usi0_pclk",
+ 	     "gout_clkcmu_cmgp_bus",
+ 	     CLK_CON_GAT_GOUT_CMGP_USI_CMGP0_PCLK, 21, 0, 0),
+ 	GATE(CLK_GOUT_CMGP_USI1_IPCLK, "gout_cmgp_usi1_ipclk", "dout_cmgp_usi1",
+-	     CLK_CON_GAT_GOUT_CMGP_USI_CMGP1_IPCLK, 21, 0, 0),
++	     CLK_CON_GAT_GOUT_CMGP_USI_CMGP1_IPCLK, 21, CLK_SET_RATE_PARENT, 0),
+ 	GATE(CLK_GOUT_CMGP_USI1_PCLK, "gout_cmgp_usi1_pclk",
+ 	     "gout_clkcmu_cmgp_bus",
+ 	     CLK_CON_GAT_GOUT_CMGP_USI_CMGP1_PCLK, 21, 0, 0),
+@@ -1557,8 +1557,9 @@ static const struct samsung_mux_clock peri_mux_clks[] __initconst = {
+ 	    mout_peri_uart_user_p, PLL_CON0_MUX_CLKCMU_PERI_UART_USER, 4, 1),
+ 	MUX(CLK_MOUT_PERI_HSI2C_USER, "mout_peri_hsi2c_user",
+ 	    mout_peri_hsi2c_user_p, PLL_CON0_MUX_CLKCMU_PERI_HSI2C_USER, 4, 1),
+-	MUX(CLK_MOUT_PERI_SPI_USER, "mout_peri_spi_user", mout_peri_spi_user_p,
+-	    PLL_CON0_MUX_CLKCMU_PERI_SPI_USER, 4, 1),
++	MUX_F(CLK_MOUT_PERI_SPI_USER, "mout_peri_spi_user",
++	      mout_peri_spi_user_p, PLL_CON0_MUX_CLKCMU_PERI_SPI_USER, 4, 1,
++	      CLK_SET_RATE_PARENT, 0),
+ };
+ 
+ static const struct samsung_div_clock peri_div_clks[] __initconst = {
+@@ -1568,8 +1569,8 @@ static const struct samsung_div_clock peri_div_clks[] __initconst = {
+ 	    CLK_CON_DIV_DIV_CLK_PERI_HSI2C_1, 0, 5),
+ 	DIV(CLK_DOUT_PERI_HSI2C2, "dout_peri_hsi2c2", "gout_peri_hsi2c2",
+ 	    CLK_CON_DIV_DIV_CLK_PERI_HSI2C_2, 0, 5),
+-	DIV(CLK_DOUT_PERI_SPI0, "dout_peri_spi0", "mout_peri_spi_user",
+-	    CLK_CON_DIV_DIV_CLK_PERI_SPI_0, 0, 5),
++	DIV_F(CLK_DOUT_PERI_SPI0, "dout_peri_spi0", "mout_peri_spi_user",
++	      CLK_CON_DIV_DIV_CLK_PERI_SPI_0, 0, 5, CLK_SET_RATE_PARENT, 0),
+ };
+ 
+ static const struct samsung_gate_clock peri_gate_clks[] __initconst = {
+@@ -1611,7 +1612,7 @@ static const struct samsung_gate_clock peri_gate_clks[] __initconst = {
+ 	     "mout_peri_bus_user",
+ 	     CLK_CON_GAT_GOUT_PERI_PWM_MOTOR_PCLK, 21, 0, 0),
+ 	GATE(CLK_GOUT_SPI0_IPCLK, "gout_spi0_ipclk", "dout_peri_spi0",
+-	     CLK_CON_GAT_GOUT_PERI_SPI_0_IPCLK, 21, 0, 0),
++	     CLK_CON_GAT_GOUT_PERI_SPI_0_IPCLK, 21, CLK_SET_RATE_PARENT, 0),
+ 	GATE(CLK_GOUT_SPI0_PCLK, "gout_spi0_pclk", "mout_peri_bus_user",
+ 	     CLK_CON_GAT_GOUT_PERI_SPI_0_PCLK, 21, 0, 0),
+ 	GATE(CLK_GOUT_SYSREG_PERI_PCLK, "gout_sysreg_peri_pclk",
+diff --git a/drivers/clk/zynq/clkc.c b/drivers/clk/zynq/clkc.c
+index 7bdeaff2bfd68..c28d3dacf0fb2 100644
+--- a/drivers/clk/zynq/clkc.c
++++ b/drivers/clk/zynq/clkc.c
+@@ -42,6 +42,7 @@ static void __iomem *zynq_clkc_base;
+ #define SLCR_SWDT_CLK_SEL		(zynq_clkc_base + 0x204)
+ 
+ #define NUM_MIO_PINS	54
++#define CLK_NAME_LEN	16
+ 
+ #define DBG_CLK_CTRL_CLKACT_TRC		BIT(0)
+ #define DBG_CLK_CTRL_CPU_1XCLKACT	BIT(1)
+@@ -215,7 +216,7 @@ static void __init zynq_clk_setup(struct device_node *np)
+ 	int i;
+ 	u32 tmp;
+ 	int ret;
+-	char *clk_name;
++	char clk_name[CLK_NAME_LEN];
+ 	unsigned int fclk_enable = 0;
+ 	const char *clk_output_name[clk_max];
+ 	const char *cpu_parents[4];
+@@ -426,12 +427,10 @@ static void __init zynq_clk_setup(struct device_node *np)
+ 			"gem1_emio_mux", CLK_SET_RATE_PARENT,
+ 			SLCR_GEM1_CLK_CTRL, 0, 0, &gem1clk_lock);
+ 
+-	tmp = strlen("mio_clk_00x");
+-	clk_name = kmalloc(tmp, GFP_KERNEL);
+ 	for (i = 0; i < NUM_MIO_PINS; i++) {
+ 		int idx;
+ 
+-		snprintf(clk_name, tmp, "mio_clk_%2.2d", i);
++		snprintf(clk_name, CLK_NAME_LEN, "mio_clk_%2.2d", i);
+ 		idx = of_property_match_string(np, "clock-names", clk_name);
+ 		if (idx >= 0)
+ 			can_mio_mux_parents[i] = of_clk_get_parent_name(np,
+@@ -439,7 +438,6 @@ static void __init zynq_clk_setup(struct device_node *np)
+ 		else
+ 			can_mio_mux_parents[i] = dummy_nm;
+ 	}
+-	kfree(clk_name);
+ 	clk_register_mux(NULL, "can_mux", periph_parents, 4,
+ 			CLK_SET_RATE_NO_REPARENT, SLCR_CAN_CLK_CTRL, 4, 2, 0,
+ 			&canclk_lock);
+diff --git a/drivers/comedi/drivers/comedi_8255.c b/drivers/comedi/drivers/comedi_8255.c
+index e4974b508328d..a933ef53845a5 100644
+--- a/drivers/comedi/drivers/comedi_8255.c
++++ b/drivers/comedi/drivers/comedi_8255.c
+@@ -159,6 +159,7 @@ static int __subdev_8255_init(struct comedi_device *dev,
+ 		return -ENOMEM;
+ 
+ 	spriv->context = context;
++	spriv->io      = io;
+ 
+ 	s->type		= COMEDI_SUBD_DIO;
+ 	s->subdev_flags	= SDF_READABLE | SDF_WRITABLE;
+diff --git a/drivers/comedi/drivers/comedi_test.c b/drivers/comedi/drivers/comedi_test.c
+index 30ea8b53ebf81..05ae9122823f8 100644
+--- a/drivers/comedi/drivers/comedi_test.c
++++ b/drivers/comedi/drivers/comedi_test.c
+@@ -87,6 +87,8 @@ struct waveform_private {
+ 	struct comedi_device *dev;	/* parent comedi device */
+ 	u64 ao_last_scan_time;		/* time of previous AO scan in usec */
+ 	unsigned int ao_scan_period;	/* AO scan period in usec */
++	bool ai_timer_enable:1;		/* should AI timer be running? */
++	bool ao_timer_enable:1;		/* should AO timer be running? */
+ 	unsigned short ao_loopbacks[N_CHANS];
+ };
+ 
+@@ -236,8 +238,12 @@ static void waveform_ai_timer(struct timer_list *t)
+ 			time_increment = devpriv->ai_convert_time - now;
+ 		else
+ 			time_increment = 1;
+-		mod_timer(&devpriv->ai_timer,
+-			  jiffies + usecs_to_jiffies(time_increment));
++		spin_lock(&dev->spinlock);
++		if (devpriv->ai_timer_enable) {
++			mod_timer(&devpriv->ai_timer,
++				  jiffies + usecs_to_jiffies(time_increment));
++		}
++		spin_unlock(&dev->spinlock);
+ 	}
+ 
+ overrun:
+@@ -393,9 +399,12 @@ static int waveform_ai_cmd(struct comedi_device *dev,
+ 	 * Seem to need an extra jiffy here, otherwise timer expires slightly
+ 	 * early!
+ 	 */
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ai_timer_enable = true;
+ 	devpriv->ai_timer.expires =
+ 		jiffies + usecs_to_jiffies(devpriv->ai_convert_period) + 1;
+ 	add_timer(&devpriv->ai_timer);
++	spin_unlock_bh(&dev->spinlock);
+ 	return 0;
+ }
+ 
+@@ -404,6 +413,9 @@ static int waveform_ai_cancel(struct comedi_device *dev,
+ {
+ 	struct waveform_private *devpriv = dev->private;
+ 
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ai_timer_enable = false;
++	spin_unlock_bh(&dev->spinlock);
+ 	if (in_softirq()) {
+ 		/* Assume we were called from the timer routine itself. */
+ 		del_timer(&devpriv->ai_timer);
+@@ -495,8 +507,12 @@ static void waveform_ao_timer(struct timer_list *t)
+ 		unsigned int time_inc = devpriv->ao_last_scan_time +
+ 					devpriv->ao_scan_period - now;
+ 
+-		mod_timer(&devpriv->ao_timer,
+-			  jiffies + usecs_to_jiffies(time_inc));
++		spin_lock(&dev->spinlock);
++		if (devpriv->ao_timer_enable) {
++			mod_timer(&devpriv->ao_timer,
++				  jiffies + usecs_to_jiffies(time_inc));
++		}
++		spin_unlock(&dev->spinlock);
+ 	}
+ 
+ underrun:
+@@ -517,9 +533,12 @@ static int waveform_ao_inttrig_start(struct comedi_device *dev,
+ 	async->inttrig = NULL;
+ 
+ 	devpriv->ao_last_scan_time = ktime_to_us(ktime_get());
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ao_timer_enable = true;
+ 	devpriv->ao_timer.expires =
+ 		jiffies + usecs_to_jiffies(devpriv->ao_scan_period);
+ 	add_timer(&devpriv->ao_timer);
++	spin_unlock_bh(&dev->spinlock);
+ 
+ 	return 1;
+ }
+@@ -604,6 +623,9 @@ static int waveform_ao_cancel(struct comedi_device *dev,
+ 	struct waveform_private *devpriv = dev->private;
+ 
+ 	s->async->inttrig = NULL;
++	spin_lock_bh(&dev->spinlock);
++	devpriv->ao_timer_enable = false;
++	spin_unlock_bh(&dev->spinlock);
+ 	if (in_softirq()) {
+ 		/* Assume we were called from the timer routine itself. */
+ 		del_timer(&devpriv->ao_timer);
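
The comedi_test hunks above all apply one pattern: the cancel path clears an
enable flag under dev->spinlock before deleting the timer, and the timer
handler re-checks that flag under the same lock before re-arming itself, so a
handler already in flight when cancel runs cannot re-arm the timer behind its
back. Below is a generic sketch of the idiom; the names (struct example,
example_timer_fn, example_cancel) are hypothetical, not the driver's code.

#include <linux/jiffies.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

struct example {
	struct timer_list timer;
	spinlock_t lock;
	bool timer_enable;
	unsigned long period;	/* in jiffies */
};

static void example_timer_fn(struct timer_list *t)
{
	struct example *ex = from_timer(ex, t, timer);

	/* ... periodic work ... */

	spin_lock(&ex->lock);
	if (ex->timer_enable)	/* cancel may have cleared this */
		mod_timer(&ex->timer, jiffies + ex->period);
	spin_unlock(&ex->lock);
}

static void example_cancel(struct example *ex)
{
	spin_lock_bh(&ex->lock);
	ex->timer_enable = false;	/* forbid further re-arms */
	spin_unlock_bh(&ex->lock);
	del_timer_sync(&ex->timer);	/* wait out a running handler */
}

Without the flag, a handler racing with cancel could call mod_timer() after
del_timer() had already returned, leaving a pending timer on freed state.
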
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index f911606897b8d..a0ebad77666e3 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -173,6 +173,7 @@ config ARM_QCOM_CPUFREQ_NVMEM
+ config ARM_QCOM_CPUFREQ_HW
+ 	tristate "QCOM CPUFreq HW driver"
+ 	depends on ARCH_QCOM || COMPILE_TEST
++	depends on COMMON_CLK
+ 	help
+ 	  Support for the CPUFreq HW driver.
+ 	  Some QCOM chipsets have a HW engine to offload the steps
+diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+index 35fb3a559ea97..1a1857b0a6f48 100644
+--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+@@ -481,6 +481,11 @@ static bool brcm_avs_is_firmware_loaded(struct private_data *priv)
+ static unsigned int brcm_avs_cpufreq_get(unsigned int cpu)
+ {
+ 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+-	struct private_data *priv = policy->driver_data;
++	struct private_data *priv;
++
++	if (!policy)
++		return 0;
++
++	priv = policy->driver_data;
+ 
+ 	cpufreq_cpu_put(policy);
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 934d35f570b7a..5104f853a923e 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -644,14 +644,16 @@ static ssize_t store_local_boost(struct cpufreq_policy *policy,
+ 	if (policy->boost_enabled == enable)
+ 		return count;
+ 
++	policy->boost_enabled = enable;
++
+ 	cpus_read_lock();
+ 	ret = cpufreq_driver->set_boost(policy, enable);
+ 	cpus_read_unlock();
+ 
+-	if (ret)
++	if (ret) {
++		policy->boost_enabled = !policy->boost_enabled;
+ 		return ret;
+-
+-	policy->boost_enabled = enable;
++	}
+ 
+ 	return count;
+ }
+@@ -1419,6 +1421,9 @@ static int cpufreq_online(unsigned int cpu)
+ 			goto out_free_policy;
+ 		}
+ 
++		/* Let the per-policy boost flag mirror the cpufreq_driver boost during init */
++		policy->boost_enabled = cpufreq_boost_enabled() && policy_has_boost_freq(policy);
++
+ 		/*
+ 		 * The initialization has succeeded and the policy is online.
+ 		 * If there is a problem with its frequency table, take it
+@@ -2755,11 +2760,12 @@ int cpufreq_boost_trigger_state(int state)
+ 
+ 	cpus_read_lock();
+ 	for_each_active_policy(policy) {
++		policy->boost_enabled = state;
+ 		ret = cpufreq_driver->set_boost(policy, state);
+-		if (ret)
++		if (ret) {
++			policy->boost_enabled = !policy->boost_enabled;
+ 			goto err_reset_state;
+-
+-		policy->boost_enabled = state;
++		}
+ 	}
+ 	cpus_read_unlock();
+ 
+diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c
+index c4d4643b6ca65..c17dc51a5a022 100644
+--- a/drivers/cpufreq/freq_table.c
++++ b/drivers/cpufreq/freq_table.c
+@@ -40,7 +40,7 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
+ 	cpufreq_for_each_valid_entry(pos, table) {
+ 		freq = pos->frequency;
+ 
+-		if (!cpufreq_boost_enabled()
++		if ((!cpufreq_boost_enabled() || !policy->boost_enabled)
+ 		    && (pos->flags & CPUFREQ_BOOST_FREQ))
+ 			continue;
+ 
+diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c
+index d46afb3c00923..8d097dcddda47 100644
+--- a/drivers/cpufreq/mediatek-cpufreq-hw.c
++++ b/drivers/cpufreq/mediatek-cpufreq-hw.c
+@@ -13,6 +13,7 @@
+ #include <linux/of.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
++#include <linux/regulator/consumer.h>
+ #include <linux/slab.h>
+ 
+ #define LUT_MAX_ENTRIES			32U
+@@ -300,7 +301,22 @@ static struct cpufreq_driver cpufreq_mtk_hw_driver = {
+ static int mtk_cpufreq_hw_driver_probe(struct platform_device *pdev)
+ {
+ 	const void *data;
+-	int ret;
++	int ret, cpu;
++	struct device *cpu_dev;
++	struct regulator *cpu_reg;
++
++	/* Make sure that all CPU supplies are available before proceeding. */
++	for_each_possible_cpu(cpu) {
++		cpu_dev = get_cpu_device(cpu);
++		if (!cpu_dev)
++			return dev_err_probe(&pdev->dev, -EPROBE_DEFER,
++					     "Failed to get cpu%d device\n", cpu);
++
++		cpu_reg = devm_regulator_get(cpu_dev, "cpu");
++		if (IS_ERR(cpu_reg))
++			return dev_err_probe(&pdev->dev, PTR_ERR(cpu_reg),
++					     "CPU%d regulator get failed\n", cpu);
++	}
+ 
+ 	data = of_device_get_match_data(&pdev->dev);
+ 	if (!data)
+diff --git a/drivers/crypto/ccp/platform-access.c b/drivers/crypto/ccp/platform-access.c
+index 94367bc49e35b..1b8ed33897332 100644
+--- a/drivers/crypto/ccp/platform-access.c
++++ b/drivers/crypto/ccp/platform-access.c
+@@ -118,9 +118,16 @@ int psp_send_platform_access_msg(enum psp_platform_access_msg msg,
+ 		goto unlock;
+ 	}
+ 
+-	/* Store the status in request header for caller to investigate */
++	/*
++	 * Read status from PSP. If status is non-zero, it indicates an error
++	 * occurred during "processing" of the command.
++	 * If status is zero, it indicates the command was "processed"
++	 * successfully, but the result of the command is in the payload.
++	 * Only the non-zero case is reported to the caller as -EIO.
++	 */
+ 	cmd_reg = ioread32(cmd);
+-	req->header.status = FIELD_GET(PSP_CMDRESP_STS, cmd_reg);
++	if (FIELD_GET(PSP_CMDRESP_STS, cmd_reg))
++		req->header.status = FIELD_GET(PSP_CMDRESP_STS, cmd_reg);
+ 	if (req->header.status) {
+ 		ret = -EIO;
+ 		goto unlock;
+diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+index 0faedb5b2eb5a..b64aaecdd98ba 100644
+--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+@@ -120,29 +120,6 @@ static struct adf_hw_device_class adf_4xxx_class = {
+ 	.instances = 0,
+ };
+ 
+-static int get_service_enabled(struct adf_accel_dev *accel_dev)
+-{
+-	char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
+-	int ret;
+-
+-	ret = adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
+-				      ADF_SERVICES_ENABLED, services);
+-	if (ret) {
+-		dev_err(&GET_DEV(accel_dev),
+-			ADF_SERVICES_ENABLED " param not found\n");
+-		return ret;
+-	}
+-
+-	ret = match_string(adf_cfg_services, ARRAY_SIZE(adf_cfg_services),
+-			   services);
+-	if (ret < 0)
+-		dev_err(&GET_DEV(accel_dev),
+-			"Invalid value of " ADF_SERVICES_ENABLED " param: %s\n",
+-			services);
+-
+-	return ret;
+-}
+-
+ static u32 get_accel_mask(struct adf_hw_device_data *self)
+ {
+ 	return ADF_4XXX_ACCELERATORS_MASK;
+@@ -275,7 +252,7 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 		capabilities_dc &= ~ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY64;
+ 	}
+ 
+-	switch (get_service_enabled(accel_dev)) {
++	switch (adf_get_service_enabled(accel_dev)) {
+ 	case SVC_CY:
+ 	case SVC_CY2:
+ 		return capabilities_sym | capabilities_asym;
+@@ -311,7 +288,7 @@ static enum dev_sku_info get_sku(struct adf_hw_device_data *self)
+ 
+ static const u32 *adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev)
+ {
+-	switch (get_service_enabled(accel_dev)) {
++	switch (adf_get_service_enabled(accel_dev)) {
+ 	case SVC_DC:
+ 		return thrd_to_arb_map_dc;
+ 	case SVC_DCC:
+@@ -420,7 +397,7 @@ static u32 uof_get_num_objs(void)
+ 
+ static const struct adf_fw_config *get_fw_config(struct adf_accel_dev *accel_dev)
+ {
+-	switch (get_service_enabled(accel_dev)) {
++	switch (adf_get_service_enabled(accel_dev)) {
+ 	case SVC_CY:
+ 	case SVC_CY2:
+ 		return adf_fw_cy_config;
+@@ -460,6 +437,13 @@ static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
+ 	if (!fw_config)
+ 		return 0;
+ 
++	/* If dcc, all rings handle compression requests */
++	if (adf_get_service_enabled(accel_dev) == SVC_DCC) {
++		for (i = 0; i < RP_GROUP_COUNT; i++)
++			rps[i] = COMP;
++		goto set_mask;
++	}
++
+ 	for (i = 0; i < RP_GROUP_COUNT; i++) {
+ 		switch (fw_config[i].ae_mask) {
+ 		case ADF_AE_GROUP_0:
+@@ -488,6 +472,7 @@ static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
+ 		}
+ 	}
+ 
++set_mask:
+ 	ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
+ 			  rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
+ 			  rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c
+index 8e13fe938959b..2680522944684 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c
+@@ -2,6 +2,9 @@
+ /* Copyright(c) 2023 Intel Corporation */
+ 
+ #include <linux/export.h>
++#include <linux/pci.h>
++#include <linux/string.h>
++#include "adf_cfg.h"
+ #include "adf_cfg_services.h"
+ #include "adf_cfg_strings.h"
+ 
+@@ -18,3 +21,27 @@ const char *const adf_cfg_services[] = {
+ 	[SVC_SYM_DC] = ADF_CFG_SYM_DC,
+ };
+ EXPORT_SYMBOL_GPL(adf_cfg_services);
++
++int adf_get_service_enabled(struct adf_accel_dev *accel_dev)
++{
++	char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
++	int ret;
++
++	ret = adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
++				      ADF_SERVICES_ENABLED, services);
++	if (ret) {
++		dev_err(&GET_DEV(accel_dev),
++			ADF_SERVICES_ENABLED " param not found\n");
++		return ret;
++	}
++
++	ret = match_string(adf_cfg_services, ARRAY_SIZE(adf_cfg_services),
++			   services);
++	if (ret < 0)
++		dev_err(&GET_DEV(accel_dev),
++			"Invalid value of " ADF_SERVICES_ENABLED " param: %s\n",
++			services);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(adf_get_service_enabled);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h
+index f78fd697b4bee..c6b0328b0f5b0 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h
+@@ -5,6 +5,8 @@
+ 
+ #include "adf_cfg_strings.h"
+ 
++struct adf_accel_dev;
++
+ enum adf_services {
+ 	SVC_CY = 0,
+ 	SVC_CY2,
+@@ -21,4 +23,6 @@ enum adf_services {
+ 
+ extern const char *const adf_cfg_services[SVC_COUNT];
+ 
++int adf_get_service_enabled(struct adf_accel_dev *accel_dev);
++
+ #endif
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_clock.c b/drivers/crypto/intel/qat/qat_common/adf_clock.c
+index 01e0a389e462b..cf89f57de2a70 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_clock.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_clock.c
+@@ -83,6 +83,9 @@ static int measure_clock(struct adf_accel_dev *accel_dev, u32 *frequency)
+ 	}
+ 
+ 	delta_us = timespec_to_us(&ts3) - timespec_to_us(&ts1);
++	if (!delta_us)
++		return -EINVAL;
++
+ 	temp = (timestamp2 - timestamp1) * ME_CLK_DIVIDER * 10;
+ 	temp = DIV_ROUND_CLOSEST_ULL(temp, delta_us);
+ 	/*
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c
+index 07119c487da01..627953a72d478 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c
+@@ -16,7 +16,6 @@
+ 
+ #define CNV_ERR_INFO_MASK		GENMASK(11, 0)
+ #define CNV_ERR_TYPE_MASK		GENMASK(15, 12)
+-#define CNV_SLICE_ERR_MASK		GENMASK(7, 0)
+ #define CNV_SLICE_ERR_SIGN_BIT_INDEX	7
+ #define CNV_DELTA_ERR_SIGN_BIT_INDEX	11
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
+index 048c246079390..2dd3772bf58a6 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
+@@ -1007,8 +1007,7 @@ static bool adf_handle_spppar_err(struct adf_accel_dev *accel_dev,
+ static bool adf_handle_ssmcpppar_err(struct adf_accel_dev *accel_dev,
+ 				     void __iomem *csr, u32 iastatssm)
+ {
+-	u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SSMCPPERR);
+-	u32 bits_num = BITS_PER_REG(reg);
++	u32 reg, bits_num = BITS_PER_REG(reg);
+ 	bool reset_required = false;
+ 	unsigned long errs_bits;
+ 	u32 bit_iterator;
+@@ -1106,8 +1105,7 @@ static bool adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
+ static bool adf_handle_ser_err_ssmsh(struct adf_accel_dev *accel_dev,
+ 				     void __iomem *csr, u32 iastatssm)
+ {
+-	u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SER_ERR_SSMSH);
+-	u32 bits_num = BITS_PER_REG(reg);
++	u32 reg, bits_num = BITS_PER_REG(reg);
+ 	bool reset_required = false;
+ 	unsigned long errs_bits;
+ 	u32 bit_iterator;
+diff --git a/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c b/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c
+index bf8c0ee629175..2ba4aa22e0927 100644
+--- a/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c
++++ b/drivers/crypto/intel/qat/qat_common/qat_comp_algs.c
+@@ -13,15 +13,6 @@
+ #include "qat_compression.h"
+ #include "qat_algs_send.h"
+ 
+-#define QAT_RFC_1950_HDR_SIZE 2
+-#define QAT_RFC_1950_FOOTER_SIZE 4
+-#define QAT_RFC_1950_CM_DEFLATE 8
+-#define QAT_RFC_1950_CM_DEFLATE_CINFO_32K 7
+-#define QAT_RFC_1950_CM_MASK 0x0f
+-#define QAT_RFC_1950_CM_OFFSET 4
+-#define QAT_RFC_1950_DICT_MASK 0x20
+-#define QAT_RFC_1950_COMP_HDR 0x785e
+-
+ static DEFINE_MUTEX(algs_lock);
+ static unsigned int active_devs;
+ 
+diff --git a/drivers/crypto/xilinx/zynqmp-aes-gcm.c b/drivers/crypto/xilinx/zynqmp-aes-gcm.c
+index 3c205324b22b6..e614057188409 100644
+--- a/drivers/crypto/xilinx/zynqmp-aes-gcm.c
++++ b/drivers/crypto/xilinx/zynqmp-aes-gcm.c
+@@ -231,7 +231,10 @@ static int zynqmp_handle_aes_req(struct crypto_engine *engine,
+ 		err = zynqmp_aes_aead_cipher(areq);
+ 	}
+ 
++	local_bh_disable();
+ 	crypto_finalize_aead_request(engine, areq, err);
++	local_bh_enable();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index 7bb656237fa0c..8f0a2507ddecf 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -729,12 +729,17 @@ static int match_auto_decoder(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
+-static struct cxl_decoder *cxl_region_find_decoder(struct cxl_port *port,
+-						   struct cxl_region *cxlr)
++static struct cxl_decoder *
++cxl_region_find_decoder(struct cxl_port *port,
++			struct cxl_endpoint_decoder *cxled,
++			struct cxl_region *cxlr)
+ {
+ 	struct device *dev;
+ 	int id = 0;
+ 
++	if (port == cxled_to_port(cxled))
++		return &cxled->cxld;
++
+ 	if (test_bit(CXL_REGION_F_AUTO, &cxlr->flags))
+ 		dev = device_find_child(&port->dev, &cxlr->params,
+ 					match_auto_decoder);
+@@ -752,8 +757,31 @@ static struct cxl_decoder *cxl_region_find_decoder(struct cxl_port *port,
+ 	return to_cxl_decoder(dev);
+ }
+ 
+-static struct cxl_region_ref *alloc_region_ref(struct cxl_port *port,
+-					       struct cxl_region *cxlr)
++static bool auto_order_ok(struct cxl_port *port, struct cxl_region *cxlr_iter,
++			  struct cxl_decoder *cxld)
++{
++	struct cxl_region_ref *rr = cxl_rr_load(port, cxlr_iter);
++	struct cxl_decoder *cxld_iter = rr->decoder;
++
++	/*
++	 * Allow the out of order assembly of auto-discovered regions.
++	 * Per CXL Spec 3.1 8.2.4.20.12 software must commit decoders
++	 * in HPA order. Confirm that the decoder with the lesser HPA
++	 * starting address has the lesser id.
++	 */
++	dev_dbg(&cxld->dev, "check for HPA violation %s:%d < %s:%d\n",
++		dev_name(&cxld->dev), cxld->id,
++		dev_name(&cxld_iter->dev), cxld_iter->id);
++
++	if (cxld_iter->id > cxld->id)
++		return true;
++
++	return false;
++}
++
++static struct cxl_region_ref *
++alloc_region_ref(struct cxl_port *port, struct cxl_region *cxlr,
++		 struct cxl_endpoint_decoder *cxled)
+ {
+ 	struct cxl_region_params *p = &cxlr->params;
+ 	struct cxl_region_ref *cxl_rr, *iter;
+@@ -763,16 +791,21 @@ static struct cxl_region_ref *alloc_region_ref(struct cxl_port *port,
+ 	xa_for_each(&port->regions, index, iter) {
+ 		struct cxl_region_params *ip = &iter->region->params;
+ 
+-		if (!ip->res)
++		if (!ip->res || ip->res->start < p->res->start)
+ 			continue;
+ 
+-		if (ip->res->start > p->res->start) {
+-			dev_dbg(&cxlr->dev,
+-				"%s: HPA order violation %s:%pr vs %pr\n",
+-				dev_name(&port->dev),
+-				dev_name(&iter->region->dev), ip->res, p->res);
+-			return ERR_PTR(-EBUSY);
++		if (test_bit(CXL_REGION_F_AUTO, &cxlr->flags)) {
++			struct cxl_decoder *cxld;
++
++			cxld = cxl_region_find_decoder(port, cxled, cxlr);
++			if (auto_order_ok(port, iter->region, cxld))
++				continue;
+ 		}
++		dev_dbg(&cxlr->dev, "%s: HPA order violation %s:%pr vs %pr\n",
++			dev_name(&port->dev),
++			dev_name(&iter->region->dev), ip->res, p->res);
++
++		return ERR_PTR(-EBUSY);
+ 	}
+ 
+ 	cxl_rr = kzalloc(sizeof(*cxl_rr), GFP_KERNEL);
+@@ -852,10 +885,7 @@ static int cxl_rr_alloc_decoder(struct cxl_port *port, struct cxl_region *cxlr,
+ {
+ 	struct cxl_decoder *cxld;
+ 
+-	if (port == cxled_to_port(cxled))
+-		cxld = &cxled->cxld;
+-	else
+-		cxld = cxl_region_find_decoder(port, cxlr);
++	cxld = cxl_region_find_decoder(port, cxled, cxlr);
+ 	if (!cxld) {
+ 		dev_dbg(&cxlr->dev, "%s: no decoder available\n",
+ 			dev_name(&port->dev));
+@@ -952,7 +982,7 @@ static int cxl_port_attach_region(struct cxl_port *port,
+ 			nr_targets_inc = true;
+ 		}
+ 	} else {
+-		cxl_rr = alloc_region_ref(port, cxlr);
++		cxl_rr = alloc_region_ref(port, cxlr, cxled);
+ 		if (IS_ERR(cxl_rr)) {
+ 			dev_dbg(&cxlr->dev,
+ 				"%s: failed to allocate region reference\n",
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index 70ba506dabab5..de6eb370d485d 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -629,16 +629,16 @@ config TEGRA20_APB_DMA
+ 
+ config TEGRA210_ADMA
+ 	tristate "NVIDIA Tegra210 ADMA support"
+-	depends on (ARCH_TEGRA_210_SOC || COMPILE_TEST)
++	depends on (ARCH_TEGRA || COMPILE_TEST)
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 	help
+-	  Support for the NVIDIA Tegra210 ADMA controller driver. The
+-	  DMA controller has multiple DMA channels and is used to service
+-	  various audio clients in the Tegra210 audio processing engine
+-	  (APE). This DMA controller transfers data from memory to
+-	  peripheral and vice versa. It does not support memory to
+-	  memory data transfer.
++	  Support for the NVIDIA Tegra210/Tegra186/Tegra194/Tegra234 ADMA
++	  controller driver. The DMA controller has multiple DMA channels
++	  and is used to service various audio clients in the Tegra210
++	  audio processing engine (APE). This DMA controller transfers
++	  data from memory to peripheral and vice versa. It does not
++	  support memory to memory data transfer.
+ 
+ config TIMB_DMA
+ 	tristate "Timberdale FPGA DMA support"
+diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
+index f8fbf03942888..e72ebdf621509 100644
+--- a/drivers/dpll/dpll_core.c
++++ b/drivers/dpll/dpll_core.c
+@@ -128,9 +128,9 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin,
+ 		reg = dpll_pin_registration_find(ref, ops, priv);
+ 		if (WARN_ON(!reg))
+ 			return -EINVAL;
++		list_del(&reg->list);
++		kfree(reg);
+ 		if (refcount_dec_and_test(&ref->refcount)) {
+-			list_del(&reg->list);
+-			kfree(reg);
+ 			xa_erase(xa_pins, i);
+ 			WARN_ON(!list_empty(&ref->registration_list));
+ 			kfree(ref);
+@@ -208,9 +208,9 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll,
+ 		reg = dpll_pin_registration_find(ref, ops, priv);
+ 		if (WARN_ON(!reg))
+ 			return;
++		list_del(&reg->list);
++		kfree(reg);
+ 		if (refcount_dec_and_test(&ref->refcount)) {
+-			list_del(&reg->list);
+-			kfree(reg);
+ 			xa_erase(xa_dplls, i);
+ 			WARN_ON(!list_empty(&ref->registration_list));
+ 			kfree(ref);
+diff --git a/drivers/firewire/core-card.c b/drivers/firewire/core-card.c
+index 8aaa7fcb2630d..401a77e3b5fa8 100644
+--- a/drivers/firewire/core-card.c
++++ b/drivers/firewire/core-card.c
+@@ -500,7 +500,19 @@ static void bm_work(struct work_struct *work)
+ 		fw_notice(card, "phy config: new root=%x, gap_count=%d\n",
+ 			  new_root_id, gap_count);
+ 		fw_send_phy_config(card, new_root_id, generation, gap_count);
+-		reset_bus(card, true);
++		/*
++		 * Where possible, use a short bus reset to minimize
++		 * disruption to isochronous transfers. But in the event
++		 * of a gap count inconsistency, use a long bus reset.
++		 *
++		 * As noted in 1394a 8.4.6.2, nodes on a mixed 1394/1394a bus
++		 * may set different gap counts after a bus reset. On a mixed
++		 * 1394/1394a bus, a short bus reset can get doubled. Some
++		 * nodes may treat the double reset as one bus reset and others
++		 * may treat it as two, causing a gap count inconsistency
++		 * again. Using a long bus reset prevents this.
++		 */
++		reset_bus(card, card->gap_count != 0);
+ 		/* Will allocate broadcast channel after the reset. */
+ 		goto out;
+ 	}
+diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c
+index 7611e9665038d..39936e1dd30e9 100644
+--- a/drivers/firmware/arm_scmi/smc.c
++++ b/drivers/firmware/arm_scmi/smc.c
+@@ -214,6 +214,13 @@ static int smc_chan_free(int id, void *p, void *data)
+ 	struct scmi_chan_info *cinfo = p;
+ 	struct scmi_smc *scmi_info = cinfo->transport_info;
+ 
++	/*
++	 * Different protocols might share the same chan info, so a previous
++	 * smc_chan_free call might have already freed the structure.
++	 */
++	if (!scmi_info)
++		return 0;
++
+ 	/* Ignore any possible further reception on the IRQ path */
+ 	if (scmi_info->irq > 0)
+ 		free_irq(scmi_info->irq, scmi_info);
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index 99429bc4b0c7e..c9857ee3880c2 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -21,6 +21,8 @@
+ #include "efistub.h"
+ #include "x86-stub.h"
+ 
++extern char _bss[], _ebss[];
++
+ const efi_system_table_t *efi_system_table;
+ const efi_dxe_services_table_t *efi_dxe_table;
+ static efi_loaded_image_t *image = NULL;
+@@ -465,6 +467,9 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ 	efi_status_t status;
+ 	char *cmdline_ptr;
+ 
++	if (efi_is_native())
++		memset(_bss, 0, _ebss - _bss);
++
+ 	efi_system_table = sys_table_arg;
+ 
+ 	/* Check if we were booted by the EFI firmware */
+@@ -958,8 +963,6 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
+ void efi_handover_entry(efi_handle_t handle, efi_system_table_t *sys_table_arg,
+ 			struct boot_params *boot_params)
+ {
+-	extern char _bss[], _ebss[];
+-
+ 	memset(_bss, 0, _ebss - _bss);
+ 	efi_stub_entry(handle, sys_table_arg, boot_params);
+ }
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index b3a133ed31ee5..41b51536bd867 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -691,7 +691,8 @@ config GPIO_UNIPHIER
+ 	  Say yes here to support UniPhier GPIOs.
+ 
+ config GPIO_VF610
+-	def_bool y
++	bool "VF610 GPIO support"
++	default y if SOC_VF610
+ 	depends on ARCH_MXC
+ 	select GPIOLIB_IRQCHIP
+ 	help
+diff --git a/drivers/gpio/gpiolib-devres.c b/drivers/gpio/gpiolib-devres.c
+index fe9ce6b19f15c..4987e62dcb3d1 100644
+--- a/drivers/gpio/gpiolib-devres.c
++++ b/drivers/gpio/gpiolib-devres.c
+@@ -158,7 +158,7 @@ struct gpio_desc *devm_fwnode_gpiod_get_index(struct device *dev,
+ 	if (!dr)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	desc = fwnode_gpiod_get_index(fwnode, con_id, index, flags, label);
++	desc = gpiod_find_and_request(dev, fwnode, con_id, index, flags, label, false);
+ 	if (IS_ERR(desc)) {
+ 		devres_free(dr);
+ 		return desc;
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 1d033106cf396..2d8c07a442227 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -4127,13 +4127,13 @@ static struct gpio_desc *gpiod_find_by_fwnode(struct fwnode_handle *fwnode,
+ 	return desc;
+ }
+ 
+-static struct gpio_desc *gpiod_find_and_request(struct device *consumer,
+-						struct fwnode_handle *fwnode,
+-						const char *con_id,
+-						unsigned int idx,
+-						enum gpiod_flags flags,
+-						const char *label,
+-						bool platform_lookup_allowed)
++struct gpio_desc *gpiod_find_and_request(struct device *consumer,
++					 struct fwnode_handle *fwnode,
++					 const char *con_id,
++					 unsigned int idx,
++					 enum gpiod_flags flags,
++					 const char *label,
++					 bool platform_lookup_allowed)
+ {
+ 	unsigned long lookupflags = GPIO_LOOKUP_FLAGS_DEFAULT;
+ 	struct gpio_desc *desc;
+diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
+index 3ccacf3c1288e..87ec7ba5b6b38 100644
+--- a/drivers/gpio/gpiolib.h
++++ b/drivers/gpio/gpiolib.h
+@@ -207,6 +207,14 @@ static inline int gpiod_request_user(struct gpio_desc *desc, const char *label)
+ 	return ret;
+ }
+ 
++struct gpio_desc *gpiod_find_and_request(struct device *consumer,
++					 struct fwnode_handle *fwnode,
++					 const char *con_id,
++					 unsigned int idx,
++					 enum gpiod_flags flags,
++					 const char *label,
++					 bool platform_lookup_allowed);
++
+ int gpiod_configure_flags(struct gpio_desc *desc, const char *con_id,
+ 		unsigned long lflags, enum gpiod_flags dflags);
+ int gpio_set_debounce_timeout(struct gpio_desc *desc, unsigned int debounce);
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 3eee8636f847a..9b079f3a1b811 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -198,7 +198,7 @@ config DRM_TTM
+ config DRM_TTM_KUNIT_TEST
+         tristate "KUnit tests for TTM" if !KUNIT_ALL_TESTS
+         default n
+-        depends on DRM && KUNIT && MMU
++        depends on DRM && KUNIT && MMU && (UML || COMPILE_TEST)
+         select DRM_TTM
+         select DRM_EXPORT_FOR_TESTS if m
+         select DRM_KUNIT_TEST_HELPERS
+@@ -206,7 +206,8 @@ config DRM_TTM_KUNIT_TEST
+         help
+           Enables unit tests for TTM, a GPU memory manager subsystem used
+           to manage memory buffers. This option is mostly useful for kernel
+-          developers.
++          developers. It depends on (UML || COMPILE_TEST) since no other driver
++          which uses TTM can be loaded while running the tests.
+ 
+           If in doubt, say "N".
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index c7d60dd0fb975..4f9900779ef9e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -1278,11 +1278,10 @@ static int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
+ 				 *     0b10 : encode is disabled
+ 				 *     0b01 : decode is disabled
+ 				 */
+-				adev->vcn.vcn_config[adev->vcn.num_vcn_inst] =
+-					ip->revision & 0xc0;
+-				ip->revision &= ~0xc0;
+ 				if (adev->vcn.num_vcn_inst <
+ 				    AMDGPU_MAX_VCN_INSTANCES) {
++					adev->vcn.vcn_config[adev->vcn.num_vcn_inst] =
++						ip->revision & 0xc0;
+ 					adev->vcn.num_vcn_inst++;
+ 					adev->vcn.inst_mask |=
+ 						(1U << ip->instance_number);
+@@ -1293,6 +1292,7 @@ static int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
+ 						adev->vcn.num_vcn_inst + 1,
+ 						AMDGPU_MAX_VCN_INSTANCES);
+ 				}
++				ip->revision &= ~0xc0;
+ 			}
+ 			if (le16_to_cpu(ip->hw_id) == SDMA0_HWID ||
+ 			    le16_to_cpu(ip->hw_id) == SDMA1_HWID ||
+diff --git a/drivers/gpu/drm/amd/amdgpu/atom.c b/drivers/gpu/drm/amd/amdgpu/atom.c
+index 2c221000782cd..1b51be9b5f36b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/atom.c
++++ b/drivers/gpu/drm/amd/amdgpu/atom.c
+@@ -313,7 +313,7 @@ static uint32_t atom_get_src_int(atom_exec_context *ctx, uint8_t attr,
+ 				DEBUG("IMM 0x%02X\n", val);
+ 			return val;
+ 		}
+-		return 0;
++		break;
+ 	case ATOM_ARG_PLL:
+ 		idx = U8(*ptr);
+ 		(*ptr)++;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+index c9c653cfc765b..3f1692194b7ad 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+@@ -570,6 +570,7 @@ static void gmc_v11_0_set_mmhub_funcs(struct amdgpu_device *adev)
+ 		adev->mmhub.funcs = &mmhub_v3_0_2_funcs;
+ 		break;
+ 	case IP_VERSION(3, 3, 0):
++	case IP_VERSION(3, 3, 1):
+ 		adev->mmhub.funcs = &mmhub_v3_3_funcs;
+ 		break;
+ 	default:
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
+index dc4812ecc98d6..238ea40c24500 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
+@@ -98,16 +98,16 @@ mmhub_v3_3_print_l2_protection_fault_status(struct amdgpu_device *adev,
+ 
+ 	switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
+ 	case IP_VERSION(3, 3, 0):
+-		mmhub_cid = mmhub_client_ids_v3_3[cid][rw];
++	case IP_VERSION(3, 3, 1):
++		mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_3) ?
++			    mmhub_client_ids_v3_3[cid][rw] :
++			    cid == 0x140 ? "UMSCH" : NULL;
+ 		break;
+ 	default:
+ 		mmhub_cid = NULL;
+ 		break;
+ 	}
+ 
+-	if (!mmhub_cid && cid == 0x140)
+-		mmhub_cid = "UMSCH";
+-
+ 	dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
+ 		mmhub_cid ? mmhub_cid : "unknown", cid);
+ 	dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+index 0f24af6f28102..8eed9a73d0355 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+@@ -429,16 +429,11 @@ static void sdma_v4_4_2_inst_gfx_stop(struct amdgpu_device *adev,
+ 	struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
+ 	u32 doorbell_offset, doorbell;
+ 	u32 rb_cntl, ib_cntl;
+-	int i, unset = 0;
++	int i;
+ 
+ 	for_each_inst(i, inst_mask) {
+ 		sdma[i] = &adev->sdma.instance[i].ring;
+ 
+-		if ((adev->mman.buffer_funcs_ring == sdma[i]) && unset != 1) {
+-			amdgpu_ttm_set_buffer_funcs_status(adev, false);
+-			unset = 1;
+-		}
+-
+ 		rb_cntl = RREG32_SDMA(i, regSDMA_GFX_RB_CNTL);
+ 		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA_GFX_RB_CNTL, RB_ENABLE, 0);
+ 		WREG32_SDMA(i, regSDMA_GFX_RB_CNTL, rb_cntl);
+@@ -485,20 +480,10 @@ static void sdma_v4_4_2_inst_rlc_stop(struct amdgpu_device *adev,
+ static void sdma_v4_4_2_inst_page_stop(struct amdgpu_device *adev,
+ 				       uint32_t inst_mask)
+ {
+-	struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
+ 	u32 rb_cntl, ib_cntl;
+ 	int i;
+-	bool unset = false;
+ 
+ 	for_each_inst(i, inst_mask) {
+-		sdma[i] = &adev->sdma.instance[i].page;
+-
+-		if ((adev->mman.buffer_funcs_ring == sdma[i]) &&
+-			(!unset)) {
+-			amdgpu_ttm_set_buffer_funcs_status(adev, false);
+-			unset = true;
+-		}
+-
+ 		rb_cntl = RREG32_SDMA(i, regSDMA_PAGE_RB_CNTL);
+ 		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA_PAGE_RB_CNTL,
+ 					RB_ENABLE, 0);
+@@ -948,13 +933,7 @@ static int sdma_v4_4_2_inst_start(struct amdgpu_device *adev,
+ 			r = amdgpu_ring_test_helper(page);
+ 			if (r)
+ 				return r;
+-
+-			if (adev->mman.buffer_funcs_ring == page)
+-				amdgpu_ttm_set_buffer_funcs_status(adev, true);
+ 		}
+-
+-		if (adev->mman.buffer_funcs_ring == ring)
+-			amdgpu_ttm_set_buffer_funcs_status(adev, true);
+ 	}
+ 
+ 	return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 9b5af3f1383a7..f9ba1803046d9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -574,11 +574,34 @@ soc15_asic_reset_method(struct amdgpu_device *adev)
+ 		return AMD_RESET_METHOD_MODE1;
+ }
+ 
++static bool soc15_need_reset_on_resume(struct amdgpu_device *adev)
++{
++	u32 sol_reg;
++
++	sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
++
++	/* Will reset for the following suspend abort cases.
++	 * 1) Only reset limit on APU side, dGPU hasn't checked yet.
++	 * 2) S3 suspend abort and TOS already launched.
++	 */
++	if (adev->flags & AMD_IS_APU && adev->in_s3 &&
++			!adev->suspend_complete &&
++			sol_reg)
++		return true;
++
++	return false;
++}
++
+ static int soc15_asic_reset(struct amdgpu_device *adev)
+ {
+ 	/* original raven doesn't have full asic reset */
+-	if ((adev->apu_flags & AMD_APU_IS_RAVEN) ||
+-	    (adev->apu_flags & AMD_APU_IS_RAVEN2))
++	/* On the latest Raven, the GPU reset can be performed
++	 * successfully. So now, temporarily enable it for the
++	 * S3 suspend abort case.
++	 */
++	if (((adev->apu_flags & AMD_APU_IS_RAVEN) ||
++	    (adev->apu_flags & AMD_APU_IS_RAVEN2)) &&
++		!soc15_need_reset_on_resume(adev))
+ 		return 0;
+ 
+ 	switch (soc15_asic_reset_method(adev)) {
+@@ -1297,24 +1320,6 @@ static int soc15_common_suspend(void *handle)
+ 	return soc15_common_hw_fini(adev);
+ }
+ 
+-static bool soc15_need_reset_on_resume(struct amdgpu_device *adev)
+-{
+-	u32 sol_reg;
+-
+-	sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81);
+-
+-	/* Will reset for the following suspend abort cases.
+-	 * 1) Only reset limit on APU side, dGPU hasn't checked yet.
+-	 * 2) S3 suspend abort and TOS already launched.
+-	 */
+-	if (adev->flags & AMD_IS_APU && adev->in_s3 &&
+-			!adev->suspend_complete &&
+-			sol_reg)
+-		return true;
+-
+-	return false;
+-}
+-
+ static int soc15_common_resume(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 272c27495ede6..49f0c9454a6e6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1896,17 +1896,15 @@ static void amdgpu_dm_fini(struct amdgpu_device *adev)
+ 		adev->dm.hdcp_workqueue = NULL;
+ 	}
+ 
+-	if (adev->dm.dc)
++	if (adev->dm.dc) {
+ 		dc_deinit_callbacks(adev->dm.dc);
+-
+-	if (adev->dm.dc)
+ 		dc_dmub_srv_destroy(&adev->dm.dc->ctx->dmub_srv);
+-
+-	if (dc_enable_dmub_notifications(adev->dm.dc)) {
+-		kfree(adev->dm.dmub_notify);
+-		adev->dm.dmub_notify = NULL;
+-		destroy_workqueue(adev->dm.delayed_hpd_wq);
+-		adev->dm.delayed_hpd_wq = NULL;
++		if (dc_enable_dmub_notifications(adev->dm.dc)) {
++			kfree(adev->dm.dmub_notify);
++			adev->dm.dmub_notify = NULL;
++			destroy_workqueue(adev->dm.delayed_hpd_wq);
++			adev->dm.delayed_hpd_wq = NULL;
++		}
+ 	}
+ 
+ 	if (adev->dm.dmub_bo)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 45c972f2630d8..417c9cb47b89e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -1483,7 +1483,7 @@ static ssize_t dp_dsc_clock_en_read(struct file *f, char __user *buf,
+ 	const uint32_t rd_buf_size = 10;
+ 	struct pipe_ctx *pipe_ctx;
+ 	ssize_t result = 0;
+-	int i, r, str_len = 30;
++	int i, r, str_len = 10;
+ 
+ 	rd_buf = kcalloc(rd_buf_size, sizeof(char), GFP_KERNEL);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index 2c379be19aa84..16452dae4acac 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -398,7 +398,6 @@ void dml2_init_soc_states(struct dml2_context *dml2, const struct dc *in_dc,
+ 	/* Copy clocks tables entries, if available */
+ 	if (dml2->config.bbox_overrides.clks_table.num_states) {
+ 		p->in_states->num_states = dml2->config.bbox_overrides.clks_table.num_states;
+-
+ 		for (i = 0; i < dml2->config.bbox_overrides.clks_table.num_entries_per_clk.num_dcfclk_levels; i++) {
+ 			p->in_states->state_array[i].dcfclk_mhz = dml2->config.bbox_overrides.clks_table.clk_entries[i].dcfclk_mhz;
+ 		}
+@@ -437,6 +436,14 @@ void dml2_init_soc_states(struct dml2_context *dml2, const struct dc *in_dc,
+ 	}
+ 
+ 	dml2_policy_build_synthetic_soc_states(s, p);
++	if (dml2->v20.dml_core_ctx.project == dml_project_dcn35 ||
++		dml2->v20.dml_core_ctx.project == dml_project_dcn351) {
++		// Override last out_state with data from last in_state
++		// This will ensure that out_state contains max fclk
++		memcpy(&p->out_states->state_array[p->out_states->num_states - 1],
++				&p->in_states->state_array[p->in_states->num_states - 1],
++				sizeof(struct soc_state_bounding_box_st));
++	}
+ }
+ 
+ void dml2_translate_ip_params(const struct dc *in, struct ip_params_st *out)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index 1fc8436c8130e..2b3ef5cdbd458 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1834,6 +1834,9 @@ bool dcn10_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ {
+ 	struct dpp *dpp = pipe_ctx->plane_res.dpp;
+ 
++	if (!stream)
++		return false;
++
+ 	if (dpp == NULL)
+ 		return false;
+ 
+@@ -1856,8 +1859,8 @@ bool dcn10_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
+ 	} else
+ 		dpp->funcs->dpp_program_regamma_pwl(dpp, NULL, OPP_REGAMMA_BYPASS);
+ 
+-	if (stream != NULL && stream->ctx != NULL &&
+-			stream->out_transfer_func != NULL) {
++	if (stream->ctx &&
++	    stream->out_transfer_func) {
+ 		log_tf(stream->ctx,
+ 				stream->out_transfer_func,
+ 				dpp->regamma_params.hw_points_num);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+index bdae54c4648bc..7c4a93d3cda5d 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+@@ -889,7 +889,8 @@ bool edp_set_replay_allow_active(struct dc_link *link, const bool *allow_active,
+ 
+ 	/* Set power optimization flag */
+ 	if (power_opts && link->replay_settings.replay_power_opt_active != *power_opts) {
+-		if (link->replay_settings.replay_feature_enabled && replay->funcs->replay_set_power_opt) {
++		if (replay != NULL && link->replay_settings.replay_feature_enabled &&
++		    replay->funcs->replay_set_power_opt) {
+ 			replay->funcs->replay_set_power_opt(replay, *power_opts, panel_inst);
+ 			link->replay_settings.replay_power_opt_active = *power_opts;
+ 		}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index b9c5398cb986c..294a890beceb4 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -2272,8 +2272,8 @@ static uint16_t arcturus_get_current_pcie_link_speed(struct smu_context *smu)
+ 
+ 	/* TODO: confirm this on real target */
+ 	esm_ctrl = RREG32_PCIE(smnPCIE_ESM_CTRL);
+-	if ((esm_ctrl >> 15) & 0x1FFFF)
+-		return (uint16_t)(((esm_ctrl >> 8) & 0x3F) + 128);
++	if ((esm_ctrl >> 15) & 0x1)
++		return (uint16_t)(((esm_ctrl >> 8) & 0x7F) + 128);
+ 
+ 	return smu_v11_0_get_current_pcie_link_speed(smu);
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+index f1440869d1ce0..e3579a8fd7fcf 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+@@ -1683,8 +1683,8 @@ static int aldebaran_get_current_pcie_link_speed(struct smu_context *smu)
+ 
+ 	/* TODO: confirm this on real target */
+ 	esm_ctrl = RREG32_PCIE(smnPCIE_ESM_CTRL);
+-	if ((esm_ctrl >> 15) & 0x1FFFF)
+-		return (((esm_ctrl >> 8) & 0x3F) + 128);
++	if ((esm_ctrl >> 15) & 0x1)
++		return (((esm_ctrl >> 8) & 0x7F) + 128);
+ 
+ 	return smu_v13_0_get_current_pcie_link_speed(smu);
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+index 8cd2b8cc3d58a..58f63b7a72cd9 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+@@ -2016,8 +2016,8 @@ static int smu_v13_0_6_get_current_pcie_link_speed(struct smu_context *smu)
+ 
+ 	/* TODO: confirm this on real target */
+ 	esm_ctrl = RREG32_PCIE(smnPCIE_ESM_CTRL);
+-	if ((esm_ctrl >> 15) & 0x1FFFF)
+-		return (((esm_ctrl >> 8) & 0x3F) + 128);
++	if ((esm_ctrl >> 15) & 0x1)
++		return (((esm_ctrl >> 8) & 0x7F) + 128);
+ 
+ 	speed_level = (RREG32_PCIE(smnPCIE_LC_SPEED_CNTL) &
+ 		PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK)
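
The three PCIe link-speed hunks above (arcturus, aldebaran, smu_v13_0_6) fix
the same masking bug: bit 15 of esm_ctrl flags extended speed mode, but
"(esm_ctrl >> 15) & 0x1FFFF" is true whenever any of bits 15..31 is set, and
"& 0x3F" drops the top bit of the 7-bit speed field in bits 8..14. A small
stand-alone check of the arithmetic; the register values here are made up
purely for illustration.

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t esm_ctrl = 1u << 16;	/* ESM bit 15 clear, bit 16 set */

	assert(((esm_ctrl >> 15) & 0x1FFFF) != 0);	/* old test: false positive */
	assert(((esm_ctrl >> 15) & 0x1) == 0);		/* fixed test: ESM off */

	esm_ctrl = 0x7fu << 8;				/* speed field at maximum */
	assert((((esm_ctrl >> 8) & 0x3F) + 128) == 191);	/* old mask truncates */
	assert((((esm_ctrl >> 8) & 0x7F) + 128) == 255);	/* fixed mask */
	return 0;
}
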
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 8be235144f6d9..6fc292393c674 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1277,17 +1277,6 @@ static int adv7511_probe(struct i2c_client *i2c)
+ 
+ 	INIT_WORK(&adv7511->hpd_work, adv7511_hpd_work);
+ 
+-	if (i2c->irq) {
+-		init_waitqueue_head(&adv7511->wq);
+-
+-		ret = devm_request_threaded_irq(dev, i2c->irq, NULL,
+-						adv7511_irq_handler,
+-						IRQF_ONESHOT, dev_name(dev),
+-						adv7511);
+-		if (ret)
+-			goto err_unregister_cec;
+-	}
+-
+ 	adv7511_power_off(adv7511);
+ 
+ 	i2c_set_clientdata(i2c, adv7511);
+@@ -1311,6 +1300,17 @@ static int adv7511_probe(struct i2c_client *i2c)
+ 
+ 	adv7511_audio_init(dev, adv7511);
+ 
++	if (i2c->irq) {
++		init_waitqueue_head(&adv7511->wq);
++
++		ret = devm_request_threaded_irq(dev, i2c->irq, NULL,
++						adv7511_irq_handler,
++						IRQF_ONESHOT, dev_name(dev),
++						adv7511);
++		if (ret)
++			goto err_unregister_audio;
++	}
++
+ 	if (adv7511->info->has_dsi) {
+ 		ret = adv7533_attach_dsi(adv7511);
+ 		if (ret)
+diff --git a/drivers/gpu/drm/ci/test.yml b/drivers/gpu/drm/ci/test.yml
+index f285ed67eb3d8..2aa4b19166c06 100644
+--- a/drivers/gpu/drm/ci/test.yml
++++ b/drivers/gpu/drm/ci/test.yml
+@@ -104,7 +104,10 @@ msm:apq8016:
+     DRIVER_NAME: msm
+     BM_DTB: https://${PIPELINE_ARTIFACTS_BASE}/arm64/apq8016-sbc.dtb
+     GPU_VERSION: apq8016
+-    BM_CMDLINE: "ip=dhcp console=ttyMSM0,115200n8 $BM_KERNEL_EXTRA_ARGS root=/dev/nfs rw nfsrootdebug nfsroot=,tcp,nfsvers=4.2 init=/init $BM_KERNELARGS"
++    # Disabling unused clocks conflicts with the MDSS runtime PM trying to
++    # disable those clocks and causes boot to fail.
++    # Reproducer: DRM_MSM=y, DRM_I2C_ADV7511=m
++    BM_KERNEL_EXTRA_ARGS: clk_ignore_unused
+     RUNNER_TAG: google-freedreno-db410c
+   script:
+     - ./install/bare-metal/fastboot.sh
+diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
+index f3a6ac908f815..5ebdd6f8f36e6 100644
+--- a/drivers/gpu/drm/drm_buddy.c
++++ b/drivers/gpu/drm/drm_buddy.c
+@@ -771,8 +771,12 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
+ 		return -EINVAL;
+ 
+ 	/* Actual range allocation */
+-	if (start + size == end)
++	if (start + size == end) {
++		if (!IS_ALIGNED(start | end, min_block_size))
++			return -EINVAL;
++
+ 		return __drm_buddy_alloc_range(mm, start, size, NULL, blocks);
++	}
+ 
+ 	original_size = size;
+ 	original_min_size = min_block_size;
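
The new range check in drm_buddy_alloc_blocks() relies on a common bit trick:
for a power-of-two min_block_size, IS_ALIGNED(x, a) is ((x & (a - 1)) == 0),
so OR-ing start and end merges their low bits and a single test rejects the
range if either endpoint is misaligned. A stand-alone illustration with
arbitrary values:

#include <stdint.h>
#include <stdio.h>

/* same shape as the kernel macro, minus its typeof() cast */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

int main(void)
{
	uint64_t start = 0x1000, end = 0x3000, min_block_size = 0x1000;

	/* both endpoints aligned: their OR is aligned too */
	printf("%d\n", IS_ALIGNED(start | end, min_block_size));	/* 1 */

	end = 0x3800;	/* one misaligned endpoint poisons the OR */
	printf("%d\n", IS_ALIGNED(start | end, min_block_size));	/* 0 */
	return 0;
}
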
+diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
+index 4f9736e5f929b..7ea244d876ca6 100644
+--- a/drivers/gpu/drm/lima/lima_gem.c
++++ b/drivers/gpu/drm/lima/lima_gem.c
+@@ -75,29 +75,34 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
+ 	} else {
+ 		bo->base.sgt = kmalloc(sizeof(*bo->base.sgt), GFP_KERNEL);
+ 		if (!bo->base.sgt) {
+-			sg_free_table(&sgt);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto err_out0;
+ 		}
+ 	}
+ 
+ 	ret = dma_map_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0);
+-	if (ret) {
+-		sg_free_table(&sgt);
+-		kfree(bo->base.sgt);
+-		bo->base.sgt = NULL;
+-		return ret;
+-	}
++	if (ret)
++		goto err_out1;
+ 
+ 	*bo->base.sgt = sgt;
+ 
+ 	if (vm) {
+ 		ret = lima_vm_map_bo(vm, bo, old_size >> PAGE_SHIFT);
+ 		if (ret)
+-			return ret;
++			goto err_out2;
+ 	}
+ 
+ 	bo->heap_size = new_size;
+ 	return 0;
++
++err_out2:
++	dma_unmap_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0);
++err_out1:
++	kfree(bo->base.sgt);
++	bo->base.sgt = NULL;
++err_out0:
++	sg_free_table(&sgt);
++	return ret;
+ }
+ 
+ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
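
The lima_heap_alloc() rework above replaces duplicated inline cleanup with
the kernel's usual reverse-order goto ladder: each failure jumps to a label
that undoes only the steps that have already succeeded. A generic sketch of
the shape, with hypothetical step_*/undo_* helpers:

struct ctx;
int step_a(struct ctx *c);
int step_b(struct ctx *c);
int step_c(struct ctx *c);
void undo_a(struct ctx *c);
void undo_b(struct ctx *c);

int example_setup(struct ctx *c)
{
	int ret;

	ret = step_a(c);
	if (ret)
		return ret;		/* nothing to unwind yet */

	ret = step_b(c);
	if (ret)
		goto err_undo_a;

	ret = step_c(c);
	if (ret)
		goto err_undo_b;

	return 0;

err_undo_b:
	undo_b(c);
err_undo_a:
	undo_a(c);
	return ret;
}

Note that in the hunk itself err_out0 still frees the sg_table on the
kmalloc() failure path, matching the old inline sg_free_table(&sgt) calls,
while the lima_vm_map_bo() failure path gains unwinding it previously lacked.
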
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index db43f9dff912e..d645b85f97215 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -95,11 +95,13 @@ static void mtk_drm_crtc_finish_page_flip(struct mtk_drm_crtc *mtk_crtc)
+ 	struct drm_crtc *crtc = &mtk_crtc->base;
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&crtc->dev->event_lock, flags);
+-	drm_crtc_send_vblank_event(crtc, mtk_crtc->event);
+-	drm_crtc_vblank_put(crtc);
+-	mtk_crtc->event = NULL;
+-	spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
++	if (mtk_crtc->event) {
++		spin_lock_irqsave(&crtc->dev->event_lock, flags);
++		drm_crtc_send_vblank_event(crtc, mtk_crtc->event);
++		drm_crtc_vblank_put(crtc);
++		mtk_crtc->event = NULL;
++		spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
++	}
+ }
+ 
+ static void mtk_drm_finish_page_flip(struct mtk_drm_crtc *mtk_crtc)
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index a2fdfc8ddb153..cd19885ee017f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -71,8 +71,8 @@
+ #define DSI_PS_WC			0x3fff
+ #define DSI_PS_SEL			(3 << 16)
+ #define PACKED_PS_16BIT_RGB565		(0 << 16)
+-#define LOOSELY_PS_18BIT_RGB666		(1 << 16)
+-#define PACKED_PS_18BIT_RGB666		(2 << 16)
++#define PACKED_PS_18BIT_RGB666		(1 << 16)
++#define LOOSELY_PS_24BIT_RGB666		(2 << 16)
+ #define PACKED_PS_24BIT_RGB888		(3 << 16)
+ 
+ #define DSI_VSA_NL		0x20
+@@ -369,10 +369,10 @@ static void mtk_dsi_ps_control_vact(struct mtk_dsi *dsi)
+ 		ps_bpp_mode |= PACKED_PS_24BIT_RGB888;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666:
+-		ps_bpp_mode |= PACKED_PS_18BIT_RGB666;
++		ps_bpp_mode |= LOOSELY_PS_24BIT_RGB666;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666_PACKED:
+-		ps_bpp_mode |= LOOSELY_PS_18BIT_RGB666;
++		ps_bpp_mode |= PACKED_PS_18BIT_RGB666;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB565:
+ 		ps_bpp_mode |= PACKED_PS_16BIT_RGB565;
+@@ -426,7 +426,7 @@ static void mtk_dsi_ps_control(struct mtk_dsi *dsi)
+ 		dsi_tmp_buf_bpp = 3;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666:
+-		tmp_reg = LOOSELY_PS_18BIT_RGB666;
++		tmp_reg = LOOSELY_PS_24BIT_RGB666;
+ 		dsi_tmp_buf_bpp = 3;
+ 		break;
+ 	case MIPI_DSI_FMT_RGB666_PACKED:
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 500ed2d183fcc..edc1eeef844ef 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -2416,7 +2416,7 @@ static int a6xx_gmu_pm_resume(struct msm_gpu *gpu)
+ 
+ 	msm_devfreq_resume(gpu);
+ 
+-	adreno_is_a7xx(adreno_gpu) ? a7xx_llc_activate : a6xx_llc_activate(a6xx_gpu);
++	adreno_is_a7xx(adreno_gpu) ? a7xx_llc_activate(a6xx_gpu) : a6xx_llc_activate(a6xx_gpu);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index cf0d44b6e7a30..b26d23b2ab6af 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -226,6 +226,13 @@ bool dpu_encoder_is_widebus_enabled(const struct drm_encoder *drm_enc)
+ 	return dpu_enc->wide_bus_en;
+ }
+ 
++bool dpu_encoder_is_dsc_enabled(const struct drm_encoder *drm_enc)
++{
++	const struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc);
++
++	return dpu_enc->dsc ? true : false;
++}
++
+ int dpu_encoder_get_crc_values_cnt(const struct drm_encoder *drm_enc)
+ {
+ 	struct dpu_encoder_virt *dpu_enc;
+@@ -1860,7 +1867,9 @@ static void dpu_encoder_prep_dsc(struct dpu_encoder_virt *dpu_enc,
+ 	dsc_common_mode = 0;
+ 	pic_width = dsc->pic_width;
+ 
+-	dsc_common_mode = DSC_MODE_MULTIPLEX | DSC_MODE_SPLIT_PANEL;
++	dsc_common_mode = DSC_MODE_SPLIT_PANEL;
++	if (dpu_encoder_use_dsc_merge(enc_master->parent))
++		dsc_common_mode |= DSC_MODE_MULTIPLEX;
+ 	if (enc_master->intf_mode == INTF_MODE_VIDEO)
+ 		dsc_common_mode |= DSC_MODE_VIDEO;
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
+index 4c05fd5e9ed18..fe6b1d312a742 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
+@@ -158,6 +158,13 @@ int dpu_encoder_get_vsync_count(struct drm_encoder *drm_enc);
+ 
+ bool dpu_encoder_is_widebus_enabled(const struct drm_encoder *drm_enc);
+ 
++/**
++ * dpu_encoder_is_dsc_enabled - indicate whether dsc is enabled
++ *				for the encoder.
++ * @drm_enc:    Pointer to previously created drm encoder structure
++ */
++bool dpu_encoder_is_dsc_enabled(const struct drm_encoder *drm_enc);
++
+ /**
+  * dpu_encoder_get_crc_values_cnt - get number of physical encoders contained
+  *	in virtual encoder that can collect CRC values
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+index eeb0acf9665e8..f66e0f125a03c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
+@@ -100,6 +100,7 @@ static void drm_mode_to_intf_timing_params(
+ 	}
+ 
+ 	timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent);
++	timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent);
+ 
+ 	/*
+ 	 * for DP, divide the horizonal parameters by 2 when
+@@ -257,12 +258,14 @@ static void dpu_encoder_phys_vid_setup_timing_engine(
+ 		mode.htotal >>= 1;
+ 		mode.hsync_start >>= 1;
+ 		mode.hsync_end >>= 1;
++		mode.hskew >>= 1;
+ 
+ 		DPU_DEBUG_VIDENC(phys_enc,
+-			"split_role %d, halve horizontal %d %d %d %d\n",
++			"split_role %d, halve horizontal %d %d %d %d %d\n",
+ 			phys_enc->split_role,
+ 			mode.hdisplay, mode.htotal,
+-			mode.hsync_start, mode.hsync_end);
++			mode.hsync_start, mode.hsync_end,
++			mode.hskew);
+ 	}
+ 
+ 	drm_mode_to_intf_timing_params(phys_enc, &mode, &timing_params);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+index 86182c7346060..e7b680a151d6d 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+@@ -4,6 +4,9 @@
+  */
+ 
+ #include <linux/delay.h>
++
++#include <drm/drm_managed.h>
++
+ #include "dpu_hwio.h"
+ #include "dpu_hw_ctl.h"
+ #include "dpu_kms.h"
+@@ -680,14 +683,15 @@ static void _setup_ctl_ops(struct dpu_hw_ctl_ops *ops,
+ 		ops->set_active_pipes = dpu_hw_ctl_set_fetch_pipe_active;
+ };
+ 
+-struct dpu_hw_ctl *dpu_hw_ctl_init(const struct dpu_ctl_cfg *cfg,
+-		void __iomem *addr,
+-		u32 mixer_count,
+-		const struct dpu_lm_cfg *mixer)
++struct dpu_hw_ctl *dpu_hw_ctl_init(struct drm_device *dev,
++				   const struct dpu_ctl_cfg *cfg,
++				   void __iomem *addr,
++				   u32 mixer_count,
++				   const struct dpu_lm_cfg *mixer)
+ {
+ 	struct dpu_hw_ctl *c;
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -702,8 +706,3 @@ struct dpu_hw_ctl *dpu_hw_ctl_init(const struct dpu_ctl_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_ctl_destroy(struct dpu_hw_ctl *ctx)
+-{
+-	kfree(ctx);
+-}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h
+index 1c242298ff2ee..279ebd8dfbff7 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h
+@@ -274,20 +274,16 @@ static inline struct dpu_hw_ctl *to_dpu_hw_ctl(struct dpu_hw_blk *hw)
+ /**
+  * dpu_hw_ctl_init() - Initializes the ctl_path hw driver object.
+  * Should be called before accessing any ctl_path register.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  ctl_path catalog entry for which driver object is required
+  * @addr: mapped register io address of MDP
+  * @mixer_count: Number of mixers in @mixer
+  * @mixer: Pointer to an array of Layer Mixers defined in the catalog
+  */
+-struct dpu_hw_ctl *dpu_hw_ctl_init(const struct dpu_ctl_cfg *cfg,
+-		void __iomem *addr,
+-		u32 mixer_count,
+-		const struct dpu_lm_cfg *mixer);
+-
+-/**
+- * dpu_hw_ctl_destroy(): Destroys ctl driver context
+- * should be called to free the context
+- */
+-void dpu_hw_ctl_destroy(struct dpu_hw_ctl *ctx);
++struct dpu_hw_ctl *dpu_hw_ctl_init(struct drm_device *dev,
++				   const struct dpu_ctl_cfg *cfg,
++				   void __iomem *addr,
++				   u32 mixer_count,
++				   const struct dpu_lm_cfg *mixer);
+ 
+ #endif /*_DPU_HW_CTL_H */
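The dpu_hw_ctl conversion above establishes the pattern that every following dpu_hw_* hunk repeats: the kzalloc()/kfree() pair becomes drmm_kzalloc(), which ties each hardware-block object to the drm_device lifetime, so the per-block *_destroy() helpers (and, further down, dpu_rm_destroy() itself) can be deleted. A minimal userspace sketch of the idea, with managed_dev, managed_zalloc() and managed_release() as invented stand-ins for drm_device and the drmm_* helpers:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct managed_node {
	struct managed_node *next;	/* payload follows this header */
};

struct managed_dev {
	struct managed_node *allocs;
};

static void *managed_zalloc(struct managed_dev *dev, size_t size)
{
	struct managed_node *n = calloc(1, sizeof(*n) + size);

	if (!n)
		return NULL;
	n->next = dev->allocs;		/* track for bulk release */
	dev->allocs = n;
	return n + 1;			/* caller sees only the payload */
}

static void managed_release(struct managed_dev *dev)
{
	while (dev->allocs) {
		struct managed_node *n = dev->allocs;

		dev->allocs = n->next;
		free(n);		/* no per-object destroy needed */
	}
}

int main(void)
{
	struct managed_dev dev = { 0 };
	char *ctl = managed_zalloc(&dev, 32);
	char *dsc = managed_zalloc(&dev, 32);

	if (!ctl || !dsc)
		return 1;
	strcpy(ctl, "ctl block");
	strcpy(dsc, "dsc block");
	printf("%s / %s\n", ctl, dsc);
	managed_release(&dev);		/* one teardown frees everything */
	return 0;
}

The payoff is visible in the dpu_rm.c hunk below, where the whole dpu_rm_destroy() walk disappears and the dpu_rm_init() failure path no longer unwinds anything by hand.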
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+index 509dbaa51d878..5e9aad1b2aa28 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+@@ -3,6 +3,8 @@
+  * Copyright (c) 2020-2022, Linaro Limited
+  */
+ 
++#include <drm/drm_managed.h>
++
+ #include <drm/display/drm_dsc_helper.h>
+ 
+ #include "dpu_kms.h"
+@@ -188,12 +190,13 @@ static void _setup_dsc_ops(struct dpu_hw_dsc_ops *ops,
+ 		ops->dsc_bind_pingpong_blk = dpu_hw_dsc_bind_pingpong_blk;
+ };
+ 
+-struct dpu_hw_dsc *dpu_hw_dsc_init(const struct dpu_dsc_cfg *cfg,
++struct dpu_hw_dsc *dpu_hw_dsc_init(struct drm_device *dev,
++				   const struct dpu_dsc_cfg *cfg,
+ 				   void __iomem *addr)
+ {
+ 	struct dpu_hw_dsc *c;
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -206,8 +209,3 @@ struct dpu_hw_dsc *dpu_hw_dsc_init(const struct dpu_dsc_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_dsc_destroy(struct dpu_hw_dsc *dsc)
+-{
+-	kfree(dsc);
+-}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.h
+index d5b597ab8c5ca..989c88d2449b6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.h
+@@ -64,20 +64,24 @@ struct dpu_hw_dsc {
+ 
+ /**
+  * dpu_hw_dsc_init() - Initializes the DSC hw driver object.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  DSC catalog entry for which driver object is required
+  * @addr: Mapped register io address of MDP
+  * Return: Error code or allocated dpu_hw_dsc context
+  */
+-struct dpu_hw_dsc *dpu_hw_dsc_init(const struct dpu_dsc_cfg *cfg,
+-		void __iomem *addr);
++struct dpu_hw_dsc *dpu_hw_dsc_init(struct drm_device *dev,
++				   const struct dpu_dsc_cfg *cfg,
++				   void __iomem *addr);
+ 
+ /**
+  * dpu_hw_dsc_init_1_2() - initializes the v1.2 DSC hw driver object
++ * @dev:  Corresponding device for devres management
+  * @cfg:  DSC catalog entry for which driver object is required
+  * @addr: Mapped register io address of MDP
+  * Returns: Error code or allocated dpu_hw_dsc context
+  */
+-struct dpu_hw_dsc *dpu_hw_dsc_init_1_2(const struct dpu_dsc_cfg *cfg,
++struct dpu_hw_dsc *dpu_hw_dsc_init_1_2(struct drm_device *dev,
++				       const struct dpu_dsc_cfg *cfg,
+ 				       void __iomem *addr);
+ 
+ /**
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc_1_2.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc_1_2.c
+index 24fe1d98eb86a..ba193b0376fe8 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc_1_2.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc_1_2.c
+@@ -4,6 +4,8 @@
+  * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved
+  */
+ 
++#include <drm/drm_managed.h>
++
+ #include <drm/display/drm_dsc_helper.h>
+ 
+ #include "dpu_kms.h"
+@@ -367,12 +369,13 @@ static void _setup_dcs_ops_1_2(struct dpu_hw_dsc_ops *ops,
+ 	ops->dsc_bind_pingpong_blk = dpu_hw_dsc_bind_pingpong_blk_1_2;
+ }
+ 
+-struct dpu_hw_dsc *dpu_hw_dsc_init_1_2(const struct dpu_dsc_cfg *cfg,
++struct dpu_hw_dsc *dpu_hw_dsc_init_1_2(struct drm_device *dev,
++				       const struct dpu_dsc_cfg *cfg,
+ 				       void __iomem *addr)
+ {
+ 	struct dpu_hw_dsc *c;
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
+index 9419b2209af88..b1da88e2935f4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
+@@ -2,6 +2,8 @@
+ /* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+  */
+ 
++#include <drm/drm_managed.h>
++
+ #include "dpu_hwio.h"
+ #include "dpu_hw_catalog.h"
+ #include "dpu_hw_lm.h"
+@@ -68,15 +70,16 @@ static void _setup_dspp_ops(struct dpu_hw_dspp *c,
+ 		c->ops.setup_pcc = dpu_setup_dspp_pcc;
+ }
+ 
+-struct dpu_hw_dspp *dpu_hw_dspp_init(const struct dpu_dspp_cfg *cfg,
+-			void __iomem *addr)
++struct dpu_hw_dspp *dpu_hw_dspp_init(struct drm_device *dev,
++				     const struct dpu_dspp_cfg *cfg,
++				     void __iomem *addr)
+ {
+ 	struct dpu_hw_dspp *c;
+ 
+ 	if (!addr)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -90,10 +93,3 @@ struct dpu_hw_dspp *dpu_hw_dspp_init(const struct dpu_dspp_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_dspp_destroy(struct dpu_hw_dspp *dspp)
+-{
+-	kfree(dspp);
+-}
+-
+-
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.h
+index bea9656813301..3b435690b6ccb 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.h
+@@ -81,18 +81,14 @@ static inline struct dpu_hw_dspp *to_dpu_hw_dspp(struct dpu_hw_blk *hw)
+ /**
+  * dpu_hw_dspp_init() - Initializes the DSPP hw driver object.
+  * should be called once before accessing every DSPP.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  DSPP catalog entry for which driver object is required
+  * @addr: Mapped register io address of MDP
+  * Return: pointer to structure or ERR_PTR
+  */
+-struct dpu_hw_dspp *dpu_hw_dspp_init(const struct dpu_dspp_cfg *cfg,
+-	void __iomem *addr);
+-
+-/**
+- * dpu_hw_dspp_destroy(): Destroys DSPP driver context
+- * @dspp: Pointer to DSPP driver context
+- */
+-void dpu_hw_dspp_destroy(struct dpu_hw_dspp *dspp);
++struct dpu_hw_dspp *dpu_hw_dspp_init(struct drm_device *dev,
++				     const struct dpu_dspp_cfg *cfg,
++				     void __iomem *addr);
+ 
+ #endif /*_DPU_HW_DSPP_H */
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index 27b9373cd7846..965692ef7892c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -12,6 +12,8 @@
+ 
+ #include <linux/iopoll.h>
+ 
++#include <drm/drm_managed.h>
++
+ #define INTF_TIMING_ENGINE_EN           0x000
+ #define INTF_CONFIG                     0x004
+ #define INTF_HSYNC_CTL                  0x008
+@@ -161,13 +163,8 @@ static void dpu_hw_intf_setup_timing_engine(struct dpu_hw_intf *ctx,
+ 	hsync_ctl = (hsync_period << 16) | p->hsync_pulse_width;
+ 	display_hctl = (hsync_end_x << 16) | hsync_start_x;
+ 
+-	/*
+-	 * DATA_HCTL_EN controls data timing which can be different from
+-	 * video timing. It is recommended to enable it for all cases, except
+-	 * if compression is enabled in 1 pixel per clock mode
+-	 */
+ 	if (p->wide_bus_en)
+-		intf_cfg2 |= INTF_CFG2_DATABUS_WIDEN | INTF_CFG2_DATA_HCTL_EN;
++		intf_cfg2 |= INTF_CFG2_DATABUS_WIDEN;
+ 
+ 	data_width = p->width;
+ 
+@@ -227,6 +224,14 @@ static void dpu_hw_intf_setup_timing_engine(struct dpu_hw_intf *ctx,
+ 	DPU_REG_WRITE(c, INTF_CONFIG, intf_cfg);
+ 	DPU_REG_WRITE(c, INTF_PANEL_FORMAT, panel_format);
+ 	if (ctx->cap->features & BIT(DPU_DATA_HCTL_EN)) {
++		/*
++		 * DATA_HCTL_EN controls data timing which can be different from
++		 * video timing. It is recommended to enable it for all cases, except
++		 * if compression is enabled in 1 pixel per clock mode
++		 */
++		if (!(p->compression_en && !p->wide_bus_en))
++			intf_cfg2 |= INTF_CFG2_DATA_HCTL_EN;
++
+ 		DPU_REG_WRITE(c, INTF_CONFIG2, intf_cfg2);
+ 		DPU_REG_WRITE(c, INTF_DISPLAY_DATA_HCTL, display_data_hctl);
+ 		DPU_REG_WRITE(c, INTF_ACTIVE_DATA_HCTL, active_data_hctl);
+@@ -527,8 +532,10 @@ static void dpu_hw_intf_program_intf_cmd_cfg(struct dpu_hw_intf *ctx,
+ 	DPU_REG_WRITE(&ctx->hw, INTF_CONFIG2, intf_cfg2);
+ }
+ 
+-struct dpu_hw_intf *dpu_hw_intf_init(const struct dpu_intf_cfg *cfg,
+-		void __iomem *addr, const struct dpu_mdss_version *mdss_rev)
++struct dpu_hw_intf *dpu_hw_intf_init(struct drm_device *dev,
++				     const struct dpu_intf_cfg *cfg,
++				     void __iomem *addr,
++				     const struct dpu_mdss_version *mdss_rev)
+ {
+ 	struct dpu_hw_intf *c;
+ 
+@@ -537,7 +544,7 @@ struct dpu_hw_intf *dpu_hw_intf_init(const struct dpu_intf_cfg *cfg,
+ 		return NULL;
+ 	}
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -581,9 +588,3 @@ struct dpu_hw_intf *dpu_hw_intf_init(const struct dpu_intf_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_intf_destroy(struct dpu_hw_intf *intf)
+-{
+-	kfree(intf);
+-}
+-
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
+index 66a5603dc7eda..6f4c87244f944 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
+@@ -33,6 +33,7 @@ struct dpu_hw_intf_timing_params {
+ 	u32 hsync_skew;
+ 
+ 	bool wide_bus_en;
++	bool compression_en;
+ };
+ 
+ struct dpu_hw_intf_prog_fetch {
+@@ -131,17 +132,14 @@ struct dpu_hw_intf {
+ /**
+  * dpu_hw_intf_init() - Initializes the INTF driver for the passed
+  * interface catalog entry.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  interface catalog entry for which driver object is required
+  * @addr: mapped register io address of MDP
+  * @mdss_rev: dpu core's major and minor versions
+  */
+-struct dpu_hw_intf *dpu_hw_intf_init(const struct dpu_intf_cfg *cfg,
+-		void __iomem *addr, const struct dpu_mdss_version *mdss_rev);
+-
+-/**
+- * dpu_hw_intf_destroy(): Destroys INTF driver context
+- * @intf:   Pointer to INTF driver context
+- */
+-void dpu_hw_intf_destroy(struct dpu_hw_intf *intf);
++struct dpu_hw_intf *dpu_hw_intf_init(struct drm_device *dev,
++				     const struct dpu_intf_cfg *cfg,
++				     void __iomem *addr,
++				     const struct dpu_mdss_version *mdss_rev);
+ 
+ #endif /*_DPU_HW_INTF_H */
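The DATA_HCTL_EN logic relocated above is easy to misread: the guard !(p->compression_en && !p->wide_bus_en) is just De Morgan for the comment's rule, enable unless compression runs in 1-pixel-per-clock (non-widebus) mode, i.e. !compression_en || wide_bus_en. A throwaway exhaustive check, assuming nothing beyond standard C:

#include <assert.h>
#include <stdbool.h>

int main(void)
{
	for (int c = 0; c <= 1; c++) {
		for (int w = 0; w <= 1; w++) {
			bool compression_en = c, wide_bus_en = w;

			/* hunk's guard == "all cases except 1ppc compression" */
			assert(!(compression_en && !wide_bus_en) ==
			       (!compression_en || wide_bus_en));
		}
	}
	return 0;
}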
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+index a590c1f7465fb..1d3ccf3228c62 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+@@ -4,6 +4,8 @@
+  * Copyright (c) 2015-2021, The Linux Foundation. All rights reserved.
+  */
+ 
++#include <drm/drm_managed.h>
++
+ #include "dpu_kms.h"
+ #include "dpu_hw_catalog.h"
+ #include "dpu_hwio.h"
+@@ -156,8 +158,9 @@ static void _setup_mixer_ops(struct dpu_hw_lm_ops *ops,
+ 	ops->collect_misr = dpu_hw_lm_collect_misr;
+ }
+ 
+-struct dpu_hw_mixer *dpu_hw_lm_init(const struct dpu_lm_cfg *cfg,
+-		void __iomem *addr)
++struct dpu_hw_mixer *dpu_hw_lm_init(struct drm_device *dev,
++				    const struct dpu_lm_cfg *cfg,
++				    void __iomem *addr)
+ {
+ 	struct dpu_hw_mixer *c;
+ 
+@@ -166,7 +169,7 @@ struct dpu_hw_mixer *dpu_hw_lm_init(const struct dpu_lm_cfg *cfg,
+ 		return NULL;
+ 	}
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -180,8 +183,3 @@ struct dpu_hw_mixer *dpu_hw_lm_init(const struct dpu_lm_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_lm_destroy(struct dpu_hw_mixer *lm)
+-{
+-	kfree(lm);
+-}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
+index 98b77cda65472..0a33817552499 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
+@@ -96,16 +96,12 @@ static inline struct dpu_hw_mixer *to_dpu_hw_mixer(struct dpu_hw_blk *hw)
+ /**
+  * dpu_hw_lm_init() - Initializes the mixer hw driver object.
+  * should be called once before accessing every mixer.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  mixer catalog entry for which driver object is required
+  * @addr: mapped register io address of MDP
+  */
+-struct dpu_hw_mixer *dpu_hw_lm_init(const struct dpu_lm_cfg *cfg,
+-		void __iomem *addr);
+-
+-/**
+- * dpu_hw_lm_destroy(): Destroys layer mixer driver context
+- * @lm:   Pointer to LM driver context
+- */
+-void dpu_hw_lm_destroy(struct dpu_hw_mixer *lm);
++struct dpu_hw_mixer *dpu_hw_lm_init(struct drm_device *dev,
++				    const struct dpu_lm_cfg *cfg,
++				    void __iomem *addr);
+ 
+ #endif /*_DPU_HW_LM_H */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.c
+index 90e0e05eff8d3..ddfa40a959cbd 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.c
+@@ -4,6 +4,8 @@
+ 
+ #include <linux/iopoll.h>
+ 
++#include <drm/drm_managed.h>
++
+ #include "dpu_hw_mdss.h"
+ #include "dpu_hwio.h"
+ #include "dpu_hw_catalog.h"
+@@ -37,12 +39,13 @@ static void _setup_merge_3d_ops(struct dpu_hw_merge_3d *c,
+ 	c->ops.setup_3d_mode = dpu_hw_merge_3d_setup_3d_mode;
+ };
+ 
+-struct dpu_hw_merge_3d *dpu_hw_merge_3d_init(const struct dpu_merge_3d_cfg *cfg,
+-		void __iomem *addr)
++struct dpu_hw_merge_3d *dpu_hw_merge_3d_init(struct drm_device *dev,
++					     const struct dpu_merge_3d_cfg *cfg,
++					     void __iomem *addr)
+ {
+ 	struct dpu_hw_merge_3d *c;
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -55,8 +58,3 @@ struct dpu_hw_merge_3d *dpu_hw_merge_3d_init(const struct dpu_merge_3d_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_merge_3d_destroy(struct dpu_hw_merge_3d *hw)
+-{
+-	kfree(hw);
+-}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.h
+index 19cec5e887221..c192f02ec1abc 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_merge3d.h
+@@ -48,18 +48,13 @@ static inline struct dpu_hw_merge_3d *to_dpu_hw_merge_3d(struct dpu_hw_blk *hw)
+ /**
+  * dpu_hw_merge_3d_init() - Initializes the merge_3d driver for the passed
+  * merge3d catalog entry.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  Pingpong catalog entry for which driver object is required
+  * @addr: Mapped register io address of MDP
+  * Return: Error code or allocated dpu_hw_merge_3d context
+  */
+-struct dpu_hw_merge_3d *dpu_hw_merge_3d_init(const struct dpu_merge_3d_cfg *cfg,
+-		void __iomem *addr);
+-
+-/**
+- * dpu_hw_merge_3d_destroy - destroys merge_3d driver context
+- *	should be called to free the context
+- * @pp:   Pointer to PP driver context returned by dpu_hw_merge_3d_init
+- */
+-void dpu_hw_merge_3d_destroy(struct dpu_hw_merge_3d *pp);
++struct dpu_hw_merge_3d *dpu_hw_merge_3d_init(struct drm_device *dev,
++					     const struct dpu_merge_3d_cfg *cfg,
++					     void __iomem *addr);
+ 
+ #endif /*_DPU_HW_MERGE3D_H */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c
+index 057cac7f5d936..2db4c6fba37ac 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c
+@@ -4,6 +4,8 @@
+ 
+ #include <linux/iopoll.h>
+ 
++#include <drm/drm_managed.h>
++
+ #include "dpu_hw_mdss.h"
+ #include "dpu_hwio.h"
+ #include "dpu_hw_catalog.h"
+@@ -281,12 +283,14 @@ static int dpu_hw_pp_setup_dsc(struct dpu_hw_pingpong *pp)
+ 	return 0;
+ }
+ 
+-struct dpu_hw_pingpong *dpu_hw_pingpong_init(const struct dpu_pingpong_cfg *cfg,
+-		void __iomem *addr, const struct dpu_mdss_version *mdss_rev)
++struct dpu_hw_pingpong *dpu_hw_pingpong_init(struct drm_device *dev,
++					     const struct dpu_pingpong_cfg *cfg,
++					     void __iomem *addr,
++					     const struct dpu_mdss_version *mdss_rev)
+ {
+ 	struct dpu_hw_pingpong *c;
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -317,8 +321,3 @@ struct dpu_hw_pingpong *dpu_hw_pingpong_init(const struct dpu_pingpong_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_pingpong_destroy(struct dpu_hw_pingpong *pp)
+-{
+-	kfree(pp);
+-}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h
+index 0d541ca5b0566..a48b69fd79a3b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h
+@@ -121,19 +121,15 @@ static inline struct dpu_hw_pingpong *to_dpu_hw_pingpong(struct dpu_hw_blk *hw)
+ /**
+  * dpu_hw_pingpong_init() - initializes the pingpong driver for the passed
+  * pingpong catalog entry.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  Pingpong catalog entry for which driver object is required
+  * @addr: Mapped register io address of MDP
+  * @mdss_rev: dpu core's major and minor versions
+  * Return: Error code or allocated dpu_hw_pingpong context
+  */
+-struct dpu_hw_pingpong *dpu_hw_pingpong_init(const struct dpu_pingpong_cfg *cfg,
+-		void __iomem *addr, const struct dpu_mdss_version *mdss_rev);
+-
+-/**
+- * dpu_hw_pingpong_destroy - destroys pingpong driver context
+- *	should be called to free the context
+- * @pp:   Pointer to PP driver context returned by dpu_hw_pingpong_init
+- */
+-void dpu_hw_pingpong_destroy(struct dpu_hw_pingpong *pp);
++struct dpu_hw_pingpong *dpu_hw_pingpong_init(struct drm_device *dev,
++					     const struct dpu_pingpong_cfg *cfg,
++					     void __iomem *addr,
++					     const struct dpu_mdss_version *mdss_rev);
+ 
+ #endif /*_DPU_HW_PINGPONG_H */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+index 8e3c65989c498..069bf429e5203 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+@@ -11,6 +11,7 @@
+ #include "msm_mdss.h"
+ 
+ #include <drm/drm_file.h>
++#include <drm/drm_managed.h>
+ 
+ #define DPU_FETCH_CONFIG_RESET_VALUE   0x00000087
+ 
+@@ -685,16 +686,18 @@ int _dpu_hw_sspp_init_debugfs(struct dpu_hw_sspp *hw_pipe, struct dpu_kms *kms,
+ }
+ #endif
+ 
+-struct dpu_hw_sspp *dpu_hw_sspp_init(const struct dpu_sspp_cfg *cfg,
+-		void __iomem *addr, const struct msm_mdss_data *mdss_data,
+-		const struct dpu_mdss_version *mdss_rev)
++struct dpu_hw_sspp *dpu_hw_sspp_init(struct drm_device *dev,
++				     const struct dpu_sspp_cfg *cfg,
++				     void __iomem *addr,
++				     const struct msm_mdss_data *mdss_data,
++				     const struct dpu_mdss_version *mdss_rev)
+ {
+ 	struct dpu_hw_sspp *hw_pipe;
+ 
+ 	if (!addr)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	hw_pipe = kzalloc(sizeof(*hw_pipe), GFP_KERNEL);
++	hw_pipe = drmm_kzalloc(dev, sizeof(*hw_pipe), GFP_KERNEL);
+ 	if (!hw_pipe)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -709,9 +712,3 @@ struct dpu_hw_sspp *dpu_hw_sspp_init(const struct dpu_sspp_cfg *cfg,
+ 
+ 	return hw_pipe;
+ }
+-
+-void dpu_hw_sspp_destroy(struct dpu_hw_sspp *ctx)
+-{
+-	kfree(ctx);
+-}
+-
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
+index f93969fddb225..3641ef6bef530 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.h
+@@ -339,21 +339,17 @@ struct dpu_kms;
+ /**
+  * dpu_hw_sspp_init() - Initializes the sspp hw driver object.
+  * Should be called once before accessing every pipe.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  Pipe catalog entry for which driver object is required
+  * @addr: Mapped register io address of MDP
+  * @mdss_data: UBWC / MDSS configuration data
+  * @mdss_rev: dpu core's major and minor versions
+  */
+-struct dpu_hw_sspp *dpu_hw_sspp_init(const struct dpu_sspp_cfg *cfg,
+-		void __iomem *addr, const struct msm_mdss_data *mdss_data,
+-		const struct dpu_mdss_version *mdss_rev);
+-
+-/**
+- * dpu_hw_sspp_destroy(): Destroys SSPP driver context
+- * should be called during Hw pipe cleanup.
+- * @ctx:  Pointer to SSPP driver context returned by dpu_hw_sspp_init
+- */
+-void dpu_hw_sspp_destroy(struct dpu_hw_sspp *ctx);
++struct dpu_hw_sspp *dpu_hw_sspp_init(struct drm_device *dev,
++				     const struct dpu_sspp_cfg *cfg,
++				     void __iomem *addr,
++				     const struct msm_mdss_data *mdss_data,
++				     const struct dpu_mdss_version *mdss_rev);
+ 
+ int _dpu_hw_sspp_init_debugfs(struct dpu_hw_sspp *hw_pipe, struct dpu_kms *kms,
+ 			      struct dentry *entry);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+index 24e734768a727..05e48cf4ec1d2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+@@ -2,6 +2,8 @@
+ /* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+  */
+ 
++#include <drm/drm_managed.h>
++
+ #include "dpu_hwio.h"
+ #include "dpu_hw_catalog.h"
+ #include "dpu_hw_top.h"
+@@ -247,16 +249,17 @@ static void _setup_mdp_ops(struct dpu_hw_mdp_ops *ops,
+ 		ops->intf_audio_select = dpu_hw_intf_audio_select;
+ }
+ 
+-struct dpu_hw_mdp *dpu_hw_mdptop_init(const struct dpu_mdp_cfg *cfg,
+-		void __iomem *addr,
+-		const struct dpu_mdss_cfg *m)
++struct dpu_hw_mdp *dpu_hw_mdptop_init(struct drm_device *dev,
++				      const struct dpu_mdp_cfg *cfg,
++				      void __iomem *addr,
++				      const struct dpu_mdss_cfg *m)
+ {
+ 	struct dpu_hw_mdp *mdp;
+ 
+ 	if (!addr)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	mdp = kzalloc(sizeof(*mdp), GFP_KERNEL);
++	mdp = drmm_kzalloc(dev, sizeof(*mdp), GFP_KERNEL);
+ 	if (!mdp)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -271,9 +274,3 @@ struct dpu_hw_mdp *dpu_hw_mdptop_init(const struct dpu_mdp_cfg *cfg,
+ 
+ 	return mdp;
+ }
+-
+-void dpu_hw_mdp_destroy(struct dpu_hw_mdp *mdp)
+-{
+-	kfree(mdp);
+-}
+-
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h
+index 8b1463d2b2f0b..6f3dc98087dfe 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h
+@@ -145,13 +145,15 @@ struct dpu_hw_mdp {
+ 
+ /**
+  * dpu_hw_mdptop_init - initializes the top driver for the passed config
++ * @dev:  Corresponding device for devres management
+  * @cfg:  MDP TOP configuration from catalog
+  * @addr: Mapped register io address of MDP
+  * @m:    Pointer to mdss catalog data
+  */
+-struct dpu_hw_mdp *dpu_hw_mdptop_init(const struct dpu_mdp_cfg *cfg,
+-		void __iomem *addr,
+-		const struct dpu_mdss_cfg *m);
++struct dpu_hw_mdp *dpu_hw_mdptop_init(struct drm_device *dev,
++				      const struct dpu_mdp_cfg *cfg,
++				      void __iomem *addr,
++				      const struct dpu_mdss_cfg *m);
+ 
+ void dpu_hw_mdp_destroy(struct dpu_hw_mdp *mdp);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
+index d49b3ef7689eb..e75995f7fcea9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.c
+@@ -3,6 +3,8 @@
+   * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved
+   */
+ 
++#include <drm/drm_managed.h>
++
+ #include "dpu_hw_mdss.h"
+ #include "dpu_hwio.h"
+ #include "dpu_hw_catalog.h"
+@@ -211,15 +213,17 @@ static void _setup_wb_ops(struct dpu_hw_wb_ops *ops,
+ 		ops->setup_clk_force_ctrl = dpu_hw_wb_setup_clk_force_ctrl;
+ }
+ 
+-struct dpu_hw_wb *dpu_hw_wb_init(const struct dpu_wb_cfg *cfg,
+-		void __iomem *addr, const struct dpu_mdss_version *mdss_rev)
++struct dpu_hw_wb *dpu_hw_wb_init(struct drm_device *dev,
++				 const struct dpu_wb_cfg *cfg,
++				 void __iomem *addr,
++				 const struct dpu_mdss_version *mdss_rev)
+ {
+ 	struct dpu_hw_wb *c;
+ 
+ 	if (!addr)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	c = kzalloc(sizeof(*c), GFP_KERNEL);
++	c = drmm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
+ 	if (!c)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -233,8 +237,3 @@ struct dpu_hw_wb *dpu_hw_wb_init(const struct dpu_wb_cfg *cfg,
+ 
+ 	return c;
+ }
+-
+-void dpu_hw_wb_destroy(struct dpu_hw_wb *hw_wb)
+-{
+-	kfree(hw_wb);
+-}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h
+index 88792f450a927..e671796ea379c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_wb.h
+@@ -76,18 +76,15 @@ struct dpu_hw_wb {
+ 
+ /**
+  * dpu_hw_wb_init() - Initializes the writeback hw driver object.
++ * @dev:  Corresponding device for devres management
+  * @cfg:  wb_path catalog entry for which driver object is required
+  * @addr: mapped register io address of MDP
+  * @mdss_rev: dpu core's major and minor versions
+  * Return: Error code or allocated dpu_hw_wb context
+  */
+-struct dpu_hw_wb *dpu_hw_wb_init(const struct dpu_wb_cfg *cfg,
+-		void __iomem *addr, const struct dpu_mdss_version *mdss_rev);
+-
+-/**
+- * dpu_hw_wb_destroy(): Destroy writeback hw driver object.
+- * @hw_wb:  Pointer to writeback hw driver object
+- */
+-void dpu_hw_wb_destroy(struct dpu_hw_wb *hw_wb);
++struct dpu_hw_wb *dpu_hw_wb_init(struct drm_device *dev,
++				 const struct dpu_wb_cfg *cfg,
++				 void __iomem *addr,
++				 const struct dpu_mdss_version *mdss_rev);
+ 
+ #endif /*_DPU_HW_WB_H */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index fe7267b3bff53..81acbebaceadb 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -385,6 +385,12 @@ static int dpu_kms_global_obj_init(struct dpu_kms *dpu_kms)
+ 	return 0;
+ }
+ 
++static void dpu_kms_global_obj_fini(struct dpu_kms *dpu_kms)
++{
++	drm_atomic_private_obj_fini(&dpu_kms->global_state);
++	drm_modeset_lock_fini(&dpu_kms->global_state_lock);
++}
++
+ static int dpu_kms_parse_data_bus_icc_path(struct dpu_kms *dpu_kms)
+ {
+ 	struct icc_path *path0;
+@@ -822,14 +828,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms)
+ 		}
+ 	}
+ 
+-	if (dpu_kms->rm_init)
+-		dpu_rm_destroy(&dpu_kms->rm);
+-	dpu_kms->rm_init = false;
++	dpu_kms_global_obj_fini(dpu_kms);
+ 
+ 	dpu_kms->catalog = NULL;
+ 
+-	if (dpu_kms->hw_mdp)
+-		dpu_hw_mdp_destroy(dpu_kms->hw_mdp);
+ 	dpu_kms->hw_mdp = NULL;
+ }
+ 
+@@ -1104,15 +1106,14 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ 		goto power_error;
+ 	}
+ 
+-	rc = dpu_rm_init(&dpu_kms->rm, dpu_kms->catalog, dpu_kms->mdss, dpu_kms->mmio);
++	rc = dpu_rm_init(dev, &dpu_kms->rm, dpu_kms->catalog, dpu_kms->mdss, dpu_kms->mmio);
+ 	if (rc) {
+ 		DPU_ERROR("rm init failed: %d\n", rc);
+ 		goto power_error;
+ 	}
+ 
+-	dpu_kms->rm_init = true;
+-
+-	dpu_kms->hw_mdp = dpu_hw_mdptop_init(dpu_kms->catalog->mdp,
++	dpu_kms->hw_mdp = dpu_hw_mdptop_init(dev,
++					     dpu_kms->catalog->mdp,
+ 					     dpu_kms->mmio,
+ 					     dpu_kms->catalog);
+ 	if (IS_ERR(dpu_kms->hw_mdp)) {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+index f5473d4dea92f..9112a702a00f2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+@@ -89,7 +89,6 @@ struct dpu_kms {
+ 	struct drm_private_obj global_state;
+ 
+ 	struct dpu_rm rm;
+-	bool rm_init;
+ 
+ 	struct dpu_hw_vbif *hw_vbif[VBIF_MAX];
+ 	struct dpu_hw_mdp *hw_mdp;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+index 8759466e2f379..0bb28cf4a6cbc 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+@@ -34,72 +34,8 @@ struct dpu_rm_requirements {
+ 	struct msm_display_topology topology;
+ };
+ 
+-int dpu_rm_destroy(struct dpu_rm *rm)
+-{
+-	int i;
+-
+-	for (i = 0; i < ARRAY_SIZE(rm->dspp_blks); i++) {
+-		struct dpu_hw_dspp *hw;
+-
+-		if (rm->dspp_blks[i]) {
+-			hw = to_dpu_hw_dspp(rm->dspp_blks[i]);
+-			dpu_hw_dspp_destroy(hw);
+-		}
+-	}
+-	for (i = 0; i < ARRAY_SIZE(rm->pingpong_blks); i++) {
+-		struct dpu_hw_pingpong *hw;
+-
+-		if (rm->pingpong_blks[i]) {
+-			hw = to_dpu_hw_pingpong(rm->pingpong_blks[i]);
+-			dpu_hw_pingpong_destroy(hw);
+-		}
+-	}
+-	for (i = 0; i < ARRAY_SIZE(rm->merge_3d_blks); i++) {
+-		struct dpu_hw_merge_3d *hw;
+-
+-		if (rm->merge_3d_blks[i]) {
+-			hw = to_dpu_hw_merge_3d(rm->merge_3d_blks[i]);
+-			dpu_hw_merge_3d_destroy(hw);
+-		}
+-	}
+-	for (i = 0; i < ARRAY_SIZE(rm->mixer_blks); i++) {
+-		struct dpu_hw_mixer *hw;
+-
+-		if (rm->mixer_blks[i]) {
+-			hw = to_dpu_hw_mixer(rm->mixer_blks[i]);
+-			dpu_hw_lm_destroy(hw);
+-		}
+-	}
+-	for (i = 0; i < ARRAY_SIZE(rm->ctl_blks); i++) {
+-		struct dpu_hw_ctl *hw;
+-
+-		if (rm->ctl_blks[i]) {
+-			hw = to_dpu_hw_ctl(rm->ctl_blks[i]);
+-			dpu_hw_ctl_destroy(hw);
+-		}
+-	}
+-	for (i = 0; i < ARRAY_SIZE(rm->hw_intf); i++)
+-		dpu_hw_intf_destroy(rm->hw_intf[i]);
+-
+-	for (i = 0; i < ARRAY_SIZE(rm->dsc_blks); i++) {
+-		struct dpu_hw_dsc *hw;
+-
+-		if (rm->dsc_blks[i]) {
+-			hw = to_dpu_hw_dsc(rm->dsc_blks[i]);
+-			dpu_hw_dsc_destroy(hw);
+-		}
+-	}
+-
+-	for (i = 0; i < ARRAY_SIZE(rm->hw_wb); i++)
+-		dpu_hw_wb_destroy(rm->hw_wb[i]);
+-
+-	for (i = 0; i < ARRAY_SIZE(rm->hw_sspp); i++)
+-		dpu_hw_sspp_destroy(rm->hw_sspp[i]);
+-
+-	return 0;
+-}
+-
+-int dpu_rm_init(struct dpu_rm *rm,
++int dpu_rm_init(struct drm_device *dev,
++		struct dpu_rm *rm,
+ 		const struct dpu_mdss_cfg *cat,
+ 		const struct msm_mdss_data *mdss_data,
+ 		void __iomem *mmio)
+@@ -119,7 +55,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_mixer *hw;
+ 		const struct dpu_lm_cfg *lm = &cat->mixer[i];
+ 
+-		hw = dpu_hw_lm_init(lm, mmio);
++		hw = dpu_hw_lm_init(dev, lm, mmio);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed lm object creation: err %d\n", rc);
+@@ -132,7 +68,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_merge_3d *hw;
+ 		const struct dpu_merge_3d_cfg *merge_3d = &cat->merge_3d[i];
+ 
+-		hw = dpu_hw_merge_3d_init(merge_3d, mmio);
++		hw = dpu_hw_merge_3d_init(dev, merge_3d, mmio);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed merge_3d object creation: err %d\n",
+@@ -146,7 +82,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_pingpong *hw;
+ 		const struct dpu_pingpong_cfg *pp = &cat->pingpong[i];
+ 
+-		hw = dpu_hw_pingpong_init(pp, mmio, cat->mdss_ver);
++		hw = dpu_hw_pingpong_init(dev, pp, mmio, cat->mdss_ver);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed pingpong object creation: err %d\n",
+@@ -162,7 +98,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_intf *hw;
+ 		const struct dpu_intf_cfg *intf = &cat->intf[i];
+ 
+-		hw = dpu_hw_intf_init(intf, mmio, cat->mdss_ver);
++		hw = dpu_hw_intf_init(dev, intf, mmio, cat->mdss_ver);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed intf object creation: err %d\n", rc);
+@@ -175,7 +111,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_wb *hw;
+ 		const struct dpu_wb_cfg *wb = &cat->wb[i];
+ 
+-		hw = dpu_hw_wb_init(wb, mmio, cat->mdss_ver);
++		hw = dpu_hw_wb_init(dev, wb, mmio, cat->mdss_ver);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed wb object creation: err %d\n", rc);
+@@ -188,7 +124,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_ctl *hw;
+ 		const struct dpu_ctl_cfg *ctl = &cat->ctl[i];
+ 
+-		hw = dpu_hw_ctl_init(ctl, mmio, cat->mixer_count, cat->mixer);
++		hw = dpu_hw_ctl_init(dev, ctl, mmio, cat->mixer_count, cat->mixer);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed ctl object creation: err %d\n", rc);
+@@ -201,7 +137,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_dspp *hw;
+ 		const struct dpu_dspp_cfg *dspp = &cat->dspp[i];
+ 
+-		hw = dpu_hw_dspp_init(dspp, mmio);
++		hw = dpu_hw_dspp_init(dev, dspp, mmio);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed dspp object creation: err %d\n", rc);
+@@ -215,9 +151,9 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		const struct dpu_dsc_cfg *dsc = &cat->dsc[i];
+ 
+ 		if (test_bit(DPU_DSC_HW_REV_1_2, &dsc->features))
+-			hw = dpu_hw_dsc_init_1_2(dsc, mmio);
++			hw = dpu_hw_dsc_init_1_2(dev, dsc, mmio);
+ 		else
+-			hw = dpu_hw_dsc_init(dsc, mmio);
++			hw = dpu_hw_dsc_init(dev, dsc, mmio);
+ 
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+@@ -231,7 +167,7 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 		struct dpu_hw_sspp *hw;
+ 		const struct dpu_sspp_cfg *sspp = &cat->sspp[i];
+ 
+-		hw = dpu_hw_sspp_init(sspp, mmio, mdss_data, cat->mdss_ver);
++		hw = dpu_hw_sspp_init(dev, sspp, mmio, mdss_data, cat->mdss_ver);
+ 		if (IS_ERR(hw)) {
+ 			rc = PTR_ERR(hw);
+ 			DPU_ERROR("failed sspp object creation: err %d\n", rc);
+@@ -243,8 +179,6 @@ int dpu_rm_init(struct dpu_rm *rm,
+ 	return 0;
+ 
+ fail:
+-	dpu_rm_destroy(rm);
+-
+ 	return rc ? rc : -EFAULT;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h
+index 2b551566cbf48..36752d837be4c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h
+@@ -38,24 +38,19 @@ struct dpu_rm {
+ /**
+  * dpu_rm_init - Read hardware catalog and create reservation tracking objects
+  *	for all HW blocks.
++ * @dev:  Corresponding device for devres management
+  * @rm: DPU Resource Manager handle
+  * @cat: Pointer to hardware catalog
+  * @mdss_data: Pointer to MDSS / UBWC configuration
+  * @mmio: mapped register io address of MDP
+  * @Return: 0 on Success otherwise -ERROR
+  */
+-int dpu_rm_init(struct dpu_rm *rm,
++int dpu_rm_init(struct drm_device *dev,
++		struct dpu_rm *rm,
+ 		const struct dpu_mdss_cfg *cat,
+ 		const struct msm_mdss_data *mdss_data,
+ 		void __iomem *mmio);
+ 
+-/**
+- * dpu_rm_destroy - Free all memory allocated by dpu_rm_init
+- * @rm: DPU Resource Manager handle
+- * @Return: 0 on Success otherwise -ERROR
+- */
+-int dpu_rm_destroy(struct dpu_rm *rm);
+-
+ /**
+  * dpu_rm_reserve - Given a CRTC->Encoder->Connector display chain, analyze
+  *	the use connections and user requirements, specified through related
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index 280d1d9a559ba..254d6c9ef2023 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -1255,6 +1255,8 @@ nouveau_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *reg)
+ 			drm_vma_node_unmap(&nvbo->bo.base.vma_node,
+ 					   bdev->dev_mapping);
+ 			nouveau_ttm_io_mem_free_locked(drm, nvbo->bo.resource);
++			nvbo->bo.resource->bus.offset = 0;
++			nvbo->bo.resource->bus.addr = NULL;
+ 			goto retry;
+ 		}
+ 
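The nouveau hunk above is a stale-state-before-retry fix: the eviction path frees the io mapping but previously left bus.offset and bus.addr populated, so the retried reservation could pick up leftovers from the failed attempt. A toy model of the bug's shape (the mapping struct and values are invented):

#include <stdbool.h>
#include <stdio.h>

struct mapping {
	unsigned long offset;
	void *addr;
};

static bool try_map(struct mapping *m, bool ok)
{
	m->offset = 0x1000;		/* partial setup happens first ... */
	m->addr = &m->offset;		/* (placeholder pointer) */
	if (!ok)
		return false;		/* ... then the attempt fails */
	return true;
}

int main(void)
{
	struct mapping m = { 0 };
	bool first_try = true;

retry:
	if (!try_map(&m, !first_try)) {
		/* undo the failed attempt before looping back ... */
		m.offset = 0;		/* ... the fix: clear stale bus info */
		m.addr = NULL;
		first_try = false;
		goto retry;
	}
	printf("mapped at offset %#lx\n", m.offset);
	return 0;
}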
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/r535.c
+index 666eb93b1742c..11b4c9c274a1a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/r535.c
+@@ -41,7 +41,6 @@ r535_devinit_new(const struct nvkm_devinit_func *hw,
+ 
+ 	rm->dtor = r535_devinit_dtor;
+ 	rm->post = hw->post;
+-	rm->disable = hw->disable;
+ 
+ 	ret = nv50_devinit_new_(rm, device, type, inst, pdevinit);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+index c4c0f08e92026..bc08814954f9b 100644
+--- a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
++++ b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
+@@ -1871,6 +1871,8 @@ static int boe_panel_add(struct boe_panel *boe)
+ 
+ 	gpiod_set_value(boe->enable_gpio, 0);
+ 
++	boe->base.prepare_prev_first = true;
++
+ 	drm_panel_init(&boe->base, dev, &boe_panel_funcs,
+ 		       DRM_MODE_CONNECTOR_DSI);
+ 	err = of_drm_get_panel_orientation(dev->of_node, &boe->orientation);
+diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
+index cba5a93e60822..70feee7876114 100644
+--- a/drivers/gpu/drm/panel/panel-edp.c
++++ b/drivers/gpu/drm/panel/panel-edp.c
+@@ -413,8 +413,7 @@ static int panel_edp_unprepare(struct drm_panel *panel)
+ 	if (!p->prepared)
+ 		return 0;
+ 
+-	pm_runtime_mark_last_busy(panel->dev);
+-	ret = pm_runtime_put_autosuspend(panel->dev);
++	ret = pm_runtime_put_sync_suspend(panel->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 	p->prepared = false;
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index 927e5f42e97d0..3e48cbb522a1c 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -813,7 +813,7 @@ int ni_init_microcode(struct radeon_device *rdev)
+ 			err = 0;
+ 		} else if (rdev->smc_fw->size != smc_req_size) {
+ 			pr_err("ni_mc: Bogus length %zu in firmware \"%s\"\n",
+-			       rdev->mc_fw->size, fw_name);
++			       rdev->smc_fw->size, fw_name);
+ 			err = -EINVAL;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/rockchip/inno_hdmi.c b/drivers/gpu/drm/rockchip/inno_hdmi.c
+index 6e5b922a121e2..345253e033c53 100644
+--- a/drivers/gpu/drm/rockchip/inno_hdmi.c
++++ b/drivers/gpu/drm/rockchip/inno_hdmi.c
+@@ -412,7 +412,7 @@ static int inno_hdmi_config_video_timing(struct inno_hdmi *hdmi,
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HBLANK_L, value & 0xFF);
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HBLANK_H, (value >> 8) & 0xFF);
+ 
+-	value = mode->hsync_start - mode->hdisplay;
++	value = mode->htotal - mode->hsync_start;
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HDELAY_L, value & 0xFF);
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_HDELAY_H, (value >> 8) & 0xFF);
+ 
+@@ -427,7 +427,7 @@ static int inno_hdmi_config_video_timing(struct inno_hdmi *hdmi,
+ 	value = mode->vtotal - mode->vdisplay;
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_VBLANK, value & 0xFF);
+ 
+-	value = mode->vsync_start - mode->vdisplay;
++	value = mode->vtotal - mode->vsync_start;
+ 	hdmi_writeb(hdmi, HDMI_VIDEO_EXT_VDELAY, value & 0xFF);
+ 
+ 	value = mode->vsync_end - mode->vsync_start;
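For reading the inno_hdmi hunks above: in a struct drm_display_mode, hsync_start - hdisplay is the front porch, hsync_end - hsync_start the sync width, and htotal - hsync_start the sync width plus back porch. The fix swaps the EXT_HDELAY/EXT_VDELAY values from the front porch to sync-plus-back-porch, which suggests the Innosilicon block counts its delay from the start of sync to active video. Worked numbers for a common 1920x1080@60 CEA mode (illustrative, not taken from the patch):

#include <stdio.h>

int main(void)
{
	int hdisplay = 1920, hsync_start = 2008, hsync_end = 2052, htotal = 2200;

	printf("front porch (old HDELAY): %d\n", hsync_start - hdisplay);  /* 88 */
	printf("sync width:               %d\n", hsync_end - hsync_start); /* 44 */
	printf("sync+back porch (new):    %d\n", htotal - hsync_start);    /* 192 */
	return 0;
}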
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index f0f47e9abf5aa..f2831d304e7bd 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -577,8 +577,7 @@ static int rockchip_lvds_bind(struct device *dev, struct device *master,
+ 		ret = -EINVAL;
+ 		goto err_put_port;
+ 	} else if (ret) {
+-		DRM_DEV_ERROR(dev, "failed to find panel and bridge node\n");
+-		ret = -EPROBE_DEFER;
++		dev_err_probe(dev, ret, "failed to find panel and bridge node\n");
+ 		goto err_put_port;
+ 	}
+ 	if (lvds->panel)
+diff --git a/drivers/gpu/drm/tegra/dpaux.c b/drivers/gpu/drm/tegra/dpaux.c
+index ef02d530f78d7..ae12d001a04bf 100644
+--- a/drivers/gpu/drm/tegra/dpaux.c
++++ b/drivers/gpu/drm/tegra/dpaux.c
+@@ -522,7 +522,7 @@ static int tegra_dpaux_probe(struct platform_device *pdev)
+ 	if (err < 0) {
+ 		dev_err(dpaux->dev, "failed to request IRQ#%u: %d\n",
+ 			dpaux->irq, err);
+-		return err;
++		goto err_pm_disable;
+ 	}
+ 
+ 	disable_irq(dpaux->irq);
+@@ -542,7 +542,7 @@ static int tegra_dpaux_probe(struct platform_device *pdev)
+ 	 */
+ 	err = tegra_dpaux_pad_config(dpaux, DPAUX_PADCTL_FUNC_I2C);
+ 	if (err < 0)
+-		return err;
++		goto err_pm_disable;
+ 
+ #ifdef CONFIG_GENERIC_PINCONF
+ 	dpaux->desc.name = dev_name(&pdev->dev);
+@@ -555,7 +555,8 @@ static int tegra_dpaux_probe(struct platform_device *pdev)
+ 	dpaux->pinctrl = devm_pinctrl_register(&pdev->dev, &dpaux->desc, dpaux);
+ 	if (IS_ERR(dpaux->pinctrl)) {
+ 		dev_err(&pdev->dev, "failed to register pincontrol\n");
+-		return PTR_ERR(dpaux->pinctrl);
++		err = PTR_ERR(dpaux->pinctrl);
++		goto err_pm_disable;
+ 	}
+ #endif
+ 	/* enable and clear all interrupts */
+@@ -571,10 +572,15 @@ static int tegra_dpaux_probe(struct platform_device *pdev)
+ 	err = devm_of_dp_aux_populate_ep_devices(&dpaux->aux);
+ 	if (err < 0) {
+ 		dev_err(dpaux->dev, "failed to populate AUX bus: %d\n", err);
+-		return err;
++		goto err_pm_disable;
+ 	}
+ 
+ 	return 0;
++
++err_pm_disable:
++	pm_runtime_put_sync(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
++	return err;
+ }
+ 
+ static void tegra_dpaux_remove(struct platform_device *pdev)
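The tegra_dpaux_probe() change, like the dsi, hdmi, output and rgb hunks that follow, turns bare early returns into a goto ladder so everything acquired before the failure gets released. A compilable toy of the idiom, with invented acquire()/release() helpers standing in for pm_runtime_enable(), request_irq() and friends:

#include <stdio.h>

static int acquire(const char *what, int fail)
{
	if (fail) {
		fprintf(stderr, "failed to acquire %s\n", what);
		return -1;
	}
	printf("acquired %s\n", what);
	return 0;
}

static void release(const char *what)
{
	printf("released %s\n", what);
}

static int probe(int fail_at)
{
	int err;

	err = acquire("runtime PM", fail_at == 1);
	if (err)
		return err;		/* nothing to unwind yet */

	err = acquire("IRQ", fail_at == 2);
	if (err)
		goto err_pm_disable;

	err = acquire("pinctrl", fail_at == 3);
	if (err)
		goto err_free_irq;

	return 0;

err_free_irq:
	release("IRQ");			/* fallthrough is deliberate */
err_pm_disable:
	release("runtime PM");
	return err;
}

int main(void)
{
	probe(3);	/* watch the unwind release IRQ, then runtime PM */
	return 0;
}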
+diff --git a/drivers/gpu/drm/tegra/dsi.c b/drivers/gpu/drm/tegra/dsi.c
+index fbfe92a816d4b..db606e151afc8 100644
+--- a/drivers/gpu/drm/tegra/dsi.c
++++ b/drivers/gpu/drm/tegra/dsi.c
+@@ -1544,9 +1544,11 @@ static int tegra_dsi_ganged_probe(struct tegra_dsi *dsi)
+ 	np = of_parse_phandle(dsi->dev->of_node, "nvidia,ganged-mode", 0);
+ 	if (np) {
+ 		struct platform_device *gangster = of_find_device_by_node(np);
++		of_node_put(np);
++		if (!gangster)
++			return -EPROBE_DEFER;
+ 
+ 		dsi->slave = platform_get_drvdata(gangster);
+-		of_node_put(np);
+ 
+ 		if (!dsi->slave) {
+ 			put_device(&gangster->dev);
+@@ -1594,44 +1596,58 @@ static int tegra_dsi_probe(struct platform_device *pdev)
+ 
+ 	if (!pdev->dev.pm_domain) {
+ 		dsi->rst = devm_reset_control_get(&pdev->dev, "dsi");
+-		if (IS_ERR(dsi->rst))
+-			return PTR_ERR(dsi->rst);
++		if (IS_ERR(dsi->rst)) {
++			err = PTR_ERR(dsi->rst);
++			goto remove;
++		}
+ 	}
+ 
+ 	dsi->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(dsi->clk))
+-		return dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk),
+-				     "cannot get DSI clock\n");
++	if (IS_ERR(dsi->clk)) {
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk),
++				    "cannot get DSI clock\n");
++		goto remove;
++	}
+ 
+ 	dsi->clk_lp = devm_clk_get(&pdev->dev, "lp");
+-	if (IS_ERR(dsi->clk_lp))
+-		return dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_lp),
+-				     "cannot get low-power clock\n");
++	if (IS_ERR(dsi->clk_lp)) {
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_lp),
++				    "cannot get low-power clock\n");
++		goto remove;
++	}
+ 
+ 	dsi->clk_parent = devm_clk_get(&pdev->dev, "parent");
+-	if (IS_ERR(dsi->clk_parent))
+-		return dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_parent),
+-				     "cannot get parent clock\n");
++	if (IS_ERR(dsi->clk_parent)) {
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->clk_parent),
++				    "cannot get parent clock\n");
++		goto remove;
++	}
+ 
+ 	dsi->vdd = devm_regulator_get(&pdev->dev, "avdd-dsi-csi");
+-	if (IS_ERR(dsi->vdd))
+-		return dev_err_probe(&pdev->dev, PTR_ERR(dsi->vdd),
+-				     "cannot get VDD supply\n");
++	if (IS_ERR(dsi->vdd)) {
++		err = dev_err_probe(&pdev->dev, PTR_ERR(dsi->vdd),
++				    "cannot get VDD supply\n");
++		goto remove;
++	}
+ 
+ 	err = tegra_dsi_setup_clocks(dsi);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "cannot setup clocks\n");
+-		return err;
++		goto remove;
+ 	}
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	dsi->regs = devm_ioremap_resource(&pdev->dev, regs);
+-	if (IS_ERR(dsi->regs))
+-		return PTR_ERR(dsi->regs);
++	if (IS_ERR(dsi->regs)) {
++		err = PTR_ERR(dsi->regs);
++		goto remove;
++	}
+ 
+ 	dsi->mipi = tegra_mipi_request(&pdev->dev, pdev->dev.of_node);
+-	if (IS_ERR(dsi->mipi))
+-		return PTR_ERR(dsi->mipi);
++	if (IS_ERR(dsi->mipi)) {
++		err = PTR_ERR(dsi->mipi);
++		goto remove;
++	}
+ 
+ 	dsi->host.ops = &tegra_dsi_host_ops;
+ 	dsi->host.dev = &pdev->dev;
+@@ -1659,9 +1675,12 @@ static int tegra_dsi_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ unregister:
++	pm_runtime_disable(&pdev->dev);
+ 	mipi_dsi_host_unregister(&dsi->host);
+ mipi_free:
+ 	tegra_mipi_free(dsi->mipi);
++remove:
++	tegra_output_remove(&dsi->output);
+ 	return err;
+ }
+ 
+diff --git a/drivers/gpu/drm/tegra/fb.c b/drivers/gpu/drm/tegra/fb.c
+index a719af1dc9a57..46170753699dc 100644
+--- a/drivers/gpu/drm/tegra/fb.c
++++ b/drivers/gpu/drm/tegra/fb.c
+@@ -159,6 +159,7 @@ struct drm_framebuffer *tegra_fb_create(struct drm_device *drm,
+ 
+ 		if (gem->size < size) {
+ 			err = -EINVAL;
++			drm_gem_object_put(gem);
+ 			goto unreference;
+ 		}
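The tegra_fb_create() one-liner above plugs a reference leak: the GEM lookup took a reference and the size-check error path returned without dropping it. The shape of the bug in a toy refcount model (names and the freed message are illustrative):

#include <stdio.h>

struct obj {
	int refs;
};

static void get_ref(struct obj *o)
{
	o->refs++;
}

static void put_ref(struct obj *o)
{
	if (--o->refs == 0)
		printf("freed\n");
}

static int try_use(struct obj *o, int too_small)
{
	get_ref(o);			/* the lookup takes a reference */
	if (too_small) {
		put_ref(o);		/* the fix: drop it on error too */
		return -1;
	}
	/* ... use the object ... */
	put_ref(o);			/* normal path always dropped it */
	return 0;
}

int main(void)
{
	struct obj o = { .refs = 1 };

	try_use(&o, 1);
	put_ref(&o);			/* final reference; prints "freed" */
	return 0;
}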
+ 
+diff --git a/drivers/gpu/drm/tegra/hdmi.c b/drivers/gpu/drm/tegra/hdmi.c
+index 0ba3ca3ac509d..4cfa4df2141cf 100644
+--- a/drivers/gpu/drm/tegra/hdmi.c
++++ b/drivers/gpu/drm/tegra/hdmi.c
+@@ -1855,12 +1855,14 @@ static int tegra_hdmi_probe(struct platform_device *pdev)
+ 		return err;
+ 
+ 	hdmi->regs = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(hdmi->regs))
+-		return PTR_ERR(hdmi->regs);
++	if (IS_ERR(hdmi->regs)) {
++		err = PTR_ERR(hdmi->regs);
++		goto remove;
++	}
+ 
+ 	err = platform_get_irq(pdev, 0);
+ 	if (err < 0)
+-		return err;
++		goto remove;
+ 
+ 	hdmi->irq = err;
+ 
+@@ -1869,18 +1871,18 @@ static int tegra_hdmi_probe(struct platform_device *pdev)
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n",
+ 			hdmi->irq, err);
+-		return err;
++		goto remove;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, hdmi);
+ 
+ 	err = devm_pm_runtime_enable(&pdev->dev);
+ 	if (err)
+-		return err;
++		goto remove;
+ 
+ 	err = devm_tegra_core_dev_init_opp_table_common(&pdev->dev);
+ 	if (err)
+-		return err;
++		goto remove;
+ 
+ 	INIT_LIST_HEAD(&hdmi->client.list);
+ 	hdmi->client.ops = &hdmi_client_ops;
+@@ -1890,10 +1892,14 @@ static int tegra_hdmi_probe(struct platform_device *pdev)
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to register host1x client: %d\n",
+ 			err);
+-		return err;
++		goto remove;
+ 	}
+ 
+ 	return 0;
++
++remove:
++	tegra_output_remove(&hdmi->output);
++	return err;
+ }
+ 
+ static void tegra_hdmi_remove(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/tegra/output.c b/drivers/gpu/drm/tegra/output.c
+index dc2dcb5ca1c89..d7d2389ac2f5a 100644
+--- a/drivers/gpu/drm/tegra/output.c
++++ b/drivers/gpu/drm/tegra/output.c
+@@ -142,8 +142,10 @@ int tegra_output_probe(struct tegra_output *output)
+ 					GPIOD_IN,
+ 					"HDMI hotplug detect");
+ 	if (IS_ERR(output->hpd_gpio)) {
+-		if (PTR_ERR(output->hpd_gpio) != -ENOENT)
+-			return PTR_ERR(output->hpd_gpio);
++		if (PTR_ERR(output->hpd_gpio) != -ENOENT) {
++			err = PTR_ERR(output->hpd_gpio);
++			goto put_i2c;
++		}
+ 
+ 		output->hpd_gpio = NULL;
+ 	}
+@@ -152,7 +154,7 @@ int tegra_output_probe(struct tegra_output *output)
+ 		err = gpiod_to_irq(output->hpd_gpio);
+ 		if (err < 0) {
+ 			dev_err(output->dev, "gpiod_to_irq(): %d\n", err);
+-			return err;
++			goto put_i2c;
+ 		}
+ 
+ 		output->hpd_irq = err;
+@@ -165,7 +167,7 @@ int tegra_output_probe(struct tegra_output *output)
+ 		if (err < 0) {
+ 			dev_err(output->dev, "failed to request IRQ#%u: %d\n",
+ 				output->hpd_irq, err);
+-			return err;
++			goto put_i2c;
+ 		}
+ 
+ 		output->connector.polled = DRM_CONNECTOR_POLL_HPD;
+@@ -179,6 +181,12 @@ int tegra_output_probe(struct tegra_output *output)
+ 	}
+ 
+ 	return 0;
++
++put_i2c:
++	if (output->ddc)
++		i2c_put_adapter(output->ddc);
++
++	return err;
+ }
+ 
+ void tegra_output_remove(struct tegra_output *output)
+diff --git a/drivers/gpu/drm/tegra/rgb.c b/drivers/gpu/drm/tegra/rgb.c
+index fc66bbd913b24..1e8ec50b759e4 100644
+--- a/drivers/gpu/drm/tegra/rgb.c
++++ b/drivers/gpu/drm/tegra/rgb.c
+@@ -225,26 +225,28 @@ int tegra_dc_rgb_probe(struct tegra_dc *dc)
+ 	rgb->clk = devm_clk_get(dc->dev, NULL);
+ 	if (IS_ERR(rgb->clk)) {
+ 		dev_err(dc->dev, "failed to get clock\n");
+-		return PTR_ERR(rgb->clk);
++		err = PTR_ERR(rgb->clk);
++		goto remove;
+ 	}
+ 
+ 	rgb->clk_parent = devm_clk_get(dc->dev, "parent");
+ 	if (IS_ERR(rgb->clk_parent)) {
+ 		dev_err(dc->dev, "failed to get parent clock\n");
+-		return PTR_ERR(rgb->clk_parent);
++		err = PTR_ERR(rgb->clk_parent);
++		goto remove;
+ 	}
+ 
+ 	err = clk_set_parent(rgb->clk, rgb->clk_parent);
+ 	if (err < 0) {
+ 		dev_err(dc->dev, "failed to set parent clock: %d\n", err);
+-		return err;
++		goto remove;
+ 	}
+ 
+ 	rgb->pll_d_out0 = clk_get_sys(NULL, "pll_d_out0");
+ 	if (IS_ERR(rgb->pll_d_out0)) {
+ 		err = PTR_ERR(rgb->pll_d_out0);
+ 		dev_err(dc->dev, "failed to get pll_d_out0: %d\n", err);
+-		return err;
++		goto remove;
+ 	}
+ 
+ 	if (dc->soc->has_pll_d2_out0) {
+@@ -252,13 +254,19 @@ int tegra_dc_rgb_probe(struct tegra_dc *dc)
+ 		if (IS_ERR(rgb->pll_d2_out0)) {
+ 			err = PTR_ERR(rgb->pll_d2_out0);
+ 			dev_err(dc->dev, "failed to get pll_d2_out0: %d\n", err);
+-			return err;
++			goto put_pll;
+ 		}
+ 	}
+ 
+ 	dc->rgb = &rgb->output;
+ 
+ 	return 0;
++
++put_pll:
++	clk_put(rgb->pll_d_out0);
++remove:
++	tegra_output_remove(&rgb->output);
++	return err;
+ }
+ 
+ void tegra_dc_rgb_remove(struct tegra_dc *dc)
+diff --git a/drivers/gpu/drm/tidss/tidss_crtc.c b/drivers/gpu/drm/tidss/tidss_crtc.c
+index 7c78c074e3a2e..1baa4ace12e15 100644
+--- a/drivers/gpu/drm/tidss/tidss_crtc.c
++++ b/drivers/gpu/drm/tidss/tidss_crtc.c
+@@ -269,6 +269,16 @@ static void tidss_crtc_atomic_disable(struct drm_crtc *crtc,
+ 
+ 	reinit_completion(&tcrtc->framedone_completion);
+ 
++	/*
++	 * If a layer is left enabled when the videoport is disabled, and the
++	 * vid pipeline that was used for the layer is taken into use on
++	 * another videoport, the DSS will report sync lost issues. Disable all
++	 * the layers here as a work-around.
++	 */
++	for (u32 layer = 0; layer < tidss->feat->num_planes; layer++)
++		dispc_ovr_enable_layer(tidss->dispc, tcrtc->hw_videoport, layer,
++				       false);
++
+ 	dispc_vp_disable(tidss->dispc, tcrtc->hw_videoport);
+ 
+ 	if (!wait_for_completion_timeout(&tcrtc->framedone_completion,
+diff --git a/drivers/gpu/drm/tidss/tidss_plane.c b/drivers/gpu/drm/tidss/tidss_plane.c
+index e1c0ef0c3894c..68fed531f6a7f 100644
+--- a/drivers/gpu/drm/tidss/tidss_plane.c
++++ b/drivers/gpu/drm/tidss/tidss_plane.c
+@@ -213,7 +213,7 @@ struct tidss_plane *tidss_plane_create(struct tidss_device *tidss,
+ 
+ 	drm_plane_helper_add(&tplane->plane, &tidss_plane_helper_funcs);
+ 
+-	drm_plane_create_zpos_property(&tplane->plane, hw_plane_id, 0,
++	drm_plane_create_zpos_property(&tplane->plane, tidss->num_planes, 0,
+ 				       num_planes - 1);
+ 
+ 	ret = drm_plane_create_color_properties(&tplane->plane,
+diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c
+index 3c99fb8b54e2d..e7441b227b3ce 100644
+--- a/drivers/gpu/drm/vkms/vkms_composer.c
++++ b/drivers/gpu/drm/vkms/vkms_composer.c
+@@ -123,6 +123,8 @@ static u16 apply_lut_to_channel_value(const struct vkms_color_lut *lut, u16 chan
+ 				      enum lut_channel channel)
+ {
+ 	s64 lut_index = get_lut_index(lut, channel_value);
++	u16 *floor_lut_value, *ceil_lut_value;
++	u16 floor_channel_value, ceil_channel_value;
+ 
+ 	/*
+ 	 * This checks if `struct drm_color_lut` has any gap added by the compiler
+@@ -130,11 +132,15 @@ static u16 apply_lut_to_channel_value(const struct vkms_color_lut *lut, u16 chan
+ 	 */
+ 	static_assert(sizeof(struct drm_color_lut) == sizeof(__u16) * 4);
+ 
+-	u16 *floor_lut_value = (__u16 *)&lut->base[drm_fixp2int(lut_index)];
+-	u16 *ceil_lut_value = (__u16 *)&lut->base[drm_fixp2int_ceil(lut_index)];
++	floor_lut_value = (__u16 *)&lut->base[drm_fixp2int(lut_index)];
++	if (drm_fixp2int(lut_index) == (lut->lut_length - 1))
++		/* We're at the end of the LUT array, use same value for ceil and floor */
++		ceil_lut_value = floor_lut_value;
++	else
++		ceil_lut_value = (__u16 *)&lut->base[drm_fixp2int_ceil(lut_index)];
+ 
+-	u16 floor_channel_value = floor_lut_value[channel];
+-	u16 ceil_channel_value = ceil_lut_value[channel];
++	floor_channel_value = floor_lut_value[channel];
++	ceil_channel_value = ceil_lut_value[channel];
+ 
+ 	return lerp_u16(floor_channel_value, ceil_channel_value,
+ 			lut_index & DRM_FIXED_DECIMAL_MASK);
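In the vkms hunk above, the declarations move to the top because ceil_lut_value is now assigned through an if/else rather than an initializer; the substantive change is the clamp: when the fixed-point lut_index lands exactly on the last entry, drm_fixp2int_ceil() would index one slot past the end of the array. A userspace sketch of the clamped lookup, assuming a 16.16 fixed-point index rather than drm's wider format and a non-decreasing LUT:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 16
#define FRAC_MASK ((1u << FRAC_BITS) - 1)

static uint16_t lerp_u16(uint16_t a, uint16_t b, uint32_t frac)
{
	return a + (uint16_t)(((uint32_t)(b - a) * frac) >> FRAC_BITS);
}

static uint16_t lut_sample(const uint16_t *lut, size_t len, uint32_t idx_fp)
{
	size_t lo = idx_fp >> FRAC_BITS;
	size_t hi = (lo == len - 1) ? lo : lo + 1;	/* the hunk's clamp */

	return lerp_u16(lut[lo], lut[hi], idx_fp & FRAC_MASK);
}

int main(void)
{
	const uint16_t lut[] = { 0, 16384, 32768, 65535 };

	/* halfway between entries 1 and 2: plain interpolation */
	printf("%u\n", (unsigned)lut_sample(lut, 4, (3u << FRAC_BITS) / 2));
	/* exactly on the last entry: no out-of-bounds ceil read */
	printf("%u\n", (unsigned)lut_sample(lut, 4, 3u << FRAC_BITS));
	return 0;
}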
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+index ceb4d3d3b965a..a0b47c9b33f55 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+@@ -64,8 +64,11 @@ static int vmw_gmrid_man_get_node(struct ttm_resource_manager *man,
+ 	ttm_resource_init(bo, place, *res);
+ 
+ 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
+-	if (id < 0)
++	if (id < 0) {
++		ttm_resource_fini(man, *res);
++		kfree(*res);
+ 		return id;
++	}
+ 
+ 	spin_lock(&gman->lock);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 818b7f109f538..b51578918cf8d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -272,6 +272,7 @@ static int vmw_du_get_cursor_mob(struct vmw_cursor_plane *vcp,
+ 	u32 size = vmw_du_cursor_mob_size(vps->base.crtc_w, vps->base.crtc_h);
+ 	u32 i;
+ 	u32 cursor_max_dim, mob_max_size;
++	struct vmw_fence_obj *fence = NULL;
+ 	int ret;
+ 
+ 	if (!dev_priv->has_mob ||
+@@ -313,7 +314,15 @@ static int vmw_du_get_cursor_mob(struct vmw_cursor_plane *vcp,
+ 	if (ret != 0)
+ 		goto teardown;
+ 
+-	vmw_bo_fence_single(&vps->cursor.bo->tbo, NULL);
++	ret = vmw_execbuf_fence_commands(NULL, dev_priv, &fence, NULL);
++	if (ret != 0) {
++		ttm_bo_unreserve(&vps->cursor.bo->tbo);
++		goto teardown;
++	}
++
++	dma_fence_wait(&fence->base, false);
++	dma_fence_put(&fence->base);
++
+ 	ttm_bo_unreserve(&vps->cursor.bo->tbo);
+ 	return 0;
+ 
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 2530fa98b568b..ce449da08e9ba 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -35,6 +35,8 @@ static int sensor_mask_override = -1;
+ module_param_named(sensor_mask, sensor_mask_override, int, 0444);
+ MODULE_PARM_DESC(sensor_mask, "override the detected sensors mask");
+ 
++static bool intr_disable = true;
++
+ static int amd_sfh_wait_response_v2(struct amd_mp2_dev *mp2, u8 sid, u32 sensor_sts)
+ {
+ 	union cmd_response cmd_resp;
+@@ -55,7 +57,7 @@ static void amd_start_sensor_v2(struct amd_mp2_dev *privdata, struct amd_mp2_sen
+ 
+ 	cmd_base.ul = 0;
+ 	cmd_base.cmd_v2.cmd_id = ENABLE_SENSOR;
+-	cmd_base.cmd_v2.intr_disable = 1;
++	cmd_base.cmd_v2.intr_disable = intr_disable;
+ 	cmd_base.cmd_v2.period = info.period;
+ 	cmd_base.cmd_v2.sensor_id = info.sensor_idx;
+ 	cmd_base.cmd_v2.length = 16;
+@@ -73,7 +75,7 @@ static void amd_stop_sensor_v2(struct amd_mp2_dev *privdata, u16 sensor_idx)
+ 
+ 	cmd_base.ul = 0;
+ 	cmd_base.cmd_v2.cmd_id = DISABLE_SENSOR;
+-	cmd_base.cmd_v2.intr_disable = 1;
++	cmd_base.cmd_v2.intr_disable = intr_disable;
+ 	cmd_base.cmd_v2.period = 0;
+ 	cmd_base.cmd_v2.sensor_id = sensor_idx;
+ 	cmd_base.cmd_v2.length  = 16;
+@@ -87,7 +89,7 @@ static void amd_stop_all_sensor_v2(struct amd_mp2_dev *privdata)
+ 	union sfh_cmd_base cmd_base;
+ 
+ 	cmd_base.cmd_v2.cmd_id = STOP_ALL_SENSORS;
+-	cmd_base.cmd_v2.intr_disable = 1;
++	cmd_base.cmd_v2.intr_disable = intr_disable;
+ 	cmd_base.cmd_v2.period = 0;
+ 	cmd_base.cmd_v2.sensor_id = 0;
+ 
+@@ -292,6 +294,26 @@ int amd_sfh_irq_init(struct amd_mp2_dev *privdata)
+ 	return 0;
+ }
+ 
++static int mp2_disable_intr(const struct dmi_system_id *id)
++{
++	intr_disable = false;
++	return 0;
++}
++
++static const struct dmi_system_id dmi_sfh_table[] = {
++	{
++		/*
++		 * https://bugzilla.kernel.org/show_bug.cgi?id=218104
++		 */
++		.callback = mp2_disable_intr,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook x360 435 G7"),
++		},
++	},
++	{}
++};
++
+ static const struct dmi_system_id dmi_nodevs[] = {
+ 	{
+ 		/*
+@@ -315,6 +337,8 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (dmi_first_match(dmi_nodevs))
+ 		return -ENODEV;
+ 
++	dmi_check_system(dmi_sfh_table);
++
+ 	privdata = devm_kzalloc(&pdev->dev, sizeof(*privdata), GFP_KERNEL);
+ 	if (!privdata)
+ 		return -ENOMEM;
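amd_sfh now carries a second DMI table alongside dmi_nodevs: where dmi_first_match() aborts probe outright, dmi_check_system() invokes the callback of every matching entry, here flipping intr_disable off for one ProBook model. A self-contained model of that table walk (a single vendor/product pair instead of the kernel's DMI_MATCH slots):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct quirk {
	const char *vendor;
	const char *product;
	int (*callback)(void);
};

static bool g_intr_disable = true;

static int disable_intr_quirk(void)
{
	g_intr_disable = false;		/* mirrors mp2_disable_intr() */
	return 0;
}

static const struct quirk quirks[] = {
	{ "HP", "HP ProBook x360 435 G7", disable_intr_quirk },
	{ NULL, NULL, NULL },
};

static void check_system(const char *vendor, const char *product)
{
	for (const struct quirk *q = quirks; q->vendor; q++)
		if (!strcmp(q->vendor, vendor) && !strcmp(q->product, product))
			q->callback();
}

int main(void)
{
	check_system("HP", "HP ProBook x360 435 G7");
	printf("intr_disable = %d\n", g_intr_disable);	/* now 0 */
	return 0;
}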
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+index 70add75fc5066..05e400a4a83e4 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+@@ -90,10 +90,10 @@ enum mem_use_type {
+ struct hpd_status {
+ 	union {
+ 		struct {
+-			u32 human_presence_report : 4;
+-			u32 human_presence_actual : 4;
+-			u32 probablity		  : 8;
+ 			u32 object_distance       : 16;
++			u32 probablity		  : 8;
++			u32 human_presence_actual : 4;
++			u32 human_presence_report : 4;
+ 		} shpd;
+ 		u32 val;
+ 	};
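The hpd_status fix above only reverses the declaration order of the bitfields, but that is enough: bit placement inside the storage unit is implementation-defined in C, and on the little-endian GCC convention used here the first-declared field takes the lowest bits, so the new order decodes object_distance from bits 0-15 instead of the top of the word. The same decode written with explicit shifts, which does not depend on bitfield layout at all (field and function names are illustrative):

#include <stdint.h>
#include <stdio.h>

struct hpd_fields {
	unsigned int object_distance;
	unsigned int probability;
	unsigned int presence_actual;
	unsigned int presence_report;
};

static struct hpd_fields decode_hpd(uint32_t val)
{
	struct hpd_fields f;

	f.object_distance = val & 0xffff;	/* bits  0..15 */
	f.probability     = (val >> 16) & 0xff;	/* bits 16..23 */
	f.presence_actual = (val >> 24) & 0xf;	/* bits 24..27 */
	f.presence_report = (val >> 28) & 0xf;	/* bits 28..31 */
	return f;
}

int main(void)
{
	struct hpd_fields f = decode_hpd(0x4A7F1234);

	printf("dist=%u prob=%u actual=%u report=%u\n",
	       f.object_distance, f.probability,
	       f.presence_actual, f.presence_report);
	return 0;
}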
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index 149a3c74346b4..f86c1ea83a037 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -54,10 +54,10 @@ struct lenovo_drvdata {
+ 	/* 0: Up
+ 	 * 1: Down (undecided)
+ 	 * 2: Scrolling
+-	 * 3: Patched firmware, disable workaround
+ 	 */
+ 	u8 middlebutton_state;
+ 	bool fn_lock;
++	bool middleclick_workaround_cptkbd;
+ };
+ 
+ #define map_key_clear(c) hid_map_usage_clear(hi, usage, bit, max, EV_KEY, (c))
+@@ -621,6 +621,36 @@ static ssize_t attr_sensitivity_store_cptkbd(struct device *dev,
+ 	return count;
+ }
+ 
++static ssize_t attr_middleclick_workaround_show_cptkbd(struct device *dev,
++		struct device_attribute *attr,
++		char *buf)
++{
++	struct hid_device *hdev = to_hid_device(dev);
++	struct lenovo_drvdata *cptkbd_data = hid_get_drvdata(hdev);
++
++	return snprintf(buf, PAGE_SIZE, "%u\n",
++		cptkbd_data->middleclick_workaround_cptkbd);
++}
++
++static ssize_t attr_middleclick_workaround_store_cptkbd(struct device *dev,
++		struct device_attribute *attr,
++		const char *buf,
++		size_t count)
++{
++	struct hid_device *hdev = to_hid_device(dev);
++	struct lenovo_drvdata *cptkbd_data = hid_get_drvdata(hdev);
++	int value;
++
++	if (kstrtoint(buf, 10, &value))
++		return -EINVAL;
++	if (value < 0 || value > 1)
++		return -EINVAL;
++
++	cptkbd_data->middleclick_workaround_cptkbd = !!value;
++
++	return count;
++}
++
+ 
+ static struct device_attribute dev_attr_fn_lock =
+ 	__ATTR(fn_lock, S_IWUSR | S_IRUGO,
+@@ -632,10 +662,16 @@ static struct device_attribute dev_attr_sensitivity_cptkbd =
+ 			attr_sensitivity_show_cptkbd,
+ 			attr_sensitivity_store_cptkbd);
+ 
++static struct device_attribute dev_attr_middleclick_workaround_cptkbd =
++	__ATTR(middleclick_workaround, S_IWUSR | S_IRUGO,
++			attr_middleclick_workaround_show_cptkbd,
++			attr_middleclick_workaround_store_cptkbd);
++
+ 
+ static struct attribute *lenovo_attributes_cptkbd[] = {
+ 	&dev_attr_fn_lock.attr,
+ 	&dev_attr_sensitivity_cptkbd.attr,
++	&dev_attr_middleclick_workaround_cptkbd.attr,
+ 	NULL
+ };
+ 
+@@ -686,23 +722,7 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
+ {
+ 	struct lenovo_drvdata *cptkbd_data = hid_get_drvdata(hdev);
+ 
+-	if (cptkbd_data->middlebutton_state != 3) {
+-		/* REL_X and REL_Y events during middle button pressed
+-		 * are only possible on patched, bug-free firmware
+-		 * so set middlebutton_state to 3
+-		 * to never apply workaround anymore
+-		 */
+-		if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD &&
+-				cptkbd_data->middlebutton_state == 1 &&
+-				usage->type == EV_REL &&
+-				(usage->code == REL_X || usage->code == REL_Y)) {
+-			cptkbd_data->middlebutton_state = 3;
+-			/* send middle button press which was hold before */
+-			input_event(field->hidinput->input,
+-				EV_KEY, BTN_MIDDLE, 1);
+-			input_sync(field->hidinput->input);
+-		}
+-
++	if (cptkbd_data->middleclick_workaround_cptkbd) {
+ 		/* "wheel" scroll events */
+ 		if (usage->type == EV_REL && (usage->code == REL_WHEEL ||
+ 				usage->code == REL_HWHEEL)) {
+@@ -1166,6 +1186,7 @@ static int lenovo_probe_cptkbd(struct hid_device *hdev)
+ 	cptkbd_data->middlebutton_state = 0;
+ 	cptkbd_data->fn_lock = true;
+ 	cptkbd_data->sensitivity = 0x05;
++	cptkbd_data->middleclick_workaround_cptkbd = true;
+ 	lenovo_features_set_cptkbd(hdev);
+ 
+ 	ret = sysfs_create_group(&hdev->dev.kobj, &lenovo_attr_group_cptkbd);
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 6ef0c88e3e60a..d2f3f234f29de 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -203,6 +203,8 @@ struct hidpp_device {
+ 	struct hidpp_scroll_counter vertical_wheel_counter;
+ 
+ 	u8 wireless_feature_index;
++
++	bool connected_once;
+ };
+ 
+ /* HID++ 1.0 error codes */
+@@ -988,8 +990,13 @@ static int hidpp_root_get_protocol_version(struct hidpp_device *hidpp)
+ 	hidpp->protocol_minor = response.rap.params[1];
+ 
+ print_version:
+-	hid_info(hidpp->hid_dev, "HID++ %u.%u device connected.\n",
+-		 hidpp->protocol_major, hidpp->protocol_minor);
++	if (!hidpp->connected_once) {
++		hid_info(hidpp->hid_dev, "HID++ %u.%u device connected.\n",
++			 hidpp->protocol_major, hidpp->protocol_minor);
++		hidpp->connected_once = true;
++	} else
++		hid_dbg(hidpp->hid_dev, "HID++ %u.%u device connected.\n",
++			 hidpp->protocol_major, hidpp->protocol_minor);
+ 	return 0;
+ }
+ 
+@@ -4184,7 +4191,7 @@ static void hidpp_connect_event(struct work_struct *work)
+ 	/* Get device version to check if it is connected */
+ 	ret = hidpp_root_get_protocol_version(hidpp);
+ 	if (ret) {
+-		hid_info(hidpp->hid_dev, "Disconnected\n");
++		hid_dbg(hidpp->hid_dev, "Disconnected\n");
+ 		if (hidpp->battery.ps) {
+ 			hidpp->battery.online = false;
+ 			hidpp->battery.status = POWER_SUPPLY_STATUS_UNKNOWN;
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index fd5b0637dad68..3e91e4d6ba6fa 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2151,6 +2151,10 @@ static const struct hid_device_id mt_devices[] = {
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_SYNAPTICS, 0xcd7e) },
+ 
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_SYNAPTICS, 0xcddc) },
++
+ 	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+ 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+ 			USB_VENDOR_ID_SYNAPTICS, 0xce08) },
+diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
+index 00242107d62e0..862c47b191afe 100644
+--- a/drivers/hv/Kconfig
++++ b/drivers/hv/Kconfig
+@@ -16,6 +16,7 @@ config HYPERV
+ config HYPERV_VTL_MODE
+ 	bool "Enable Linux to boot in VTL context"
+ 	depends on X86_64 && HYPERV
++	depends on SMP
+ 	default n
+ 	help
+ 	  Virtual Secure Mode (VSM) is a set of hypervisor capabilities and
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 9fabe00a40d6a..4b80026db1ab6 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -441,8 +441,26 @@ static void coresight_disable_helpers(struct coresight_device *csdev)
+ 	}
+ }
+ 
++/*
++ * Helper function to call source_ops(csdev)->disable and also disable the
++ * helpers.
++ *
++ * There is an imbalance between coresight_enable_path() and
++ * coresight_disable_path(). Enabling also enables the source's helpers as part
++ * of the path, but disabling always skips the first item in the path (which is
++ * the source), so sources and their helpers don't get disabled as part of that
++ * function, and we need the extra step here.
++ */
++void coresight_disable_source(struct coresight_device *csdev, void *data)
++{
++	if (source_ops(csdev)->disable)
++		source_ops(csdev)->disable(csdev, data);
++	coresight_disable_helpers(csdev);
++}
++EXPORT_SYMBOL_GPL(coresight_disable_source);
++
+ /**
+- *  coresight_disable_source - Drop the reference count by 1 and disable
++ *  coresight_disable_source_sysfs - Drop the reference count by 1 and disable
+  *  the device if there are no users left.
+  *
+  *  @csdev: The coresight device to disable
+@@ -451,17 +469,15 @@ static void coresight_disable_helpers(struct coresight_device *csdev)
+  *
+  *  Returns true if the device has been disabled.
+  */
+-bool coresight_disable_source(struct coresight_device *csdev, void *data)
++static bool coresight_disable_source_sysfs(struct coresight_device *csdev,
++					   void *data)
+ {
+ 	if (atomic_dec_return(&csdev->refcnt) == 0) {
+-		if (source_ops(csdev)->disable)
+-			source_ops(csdev)->disable(csdev, data);
+-		coresight_disable_helpers(csdev);
++		coresight_disable_source(csdev, data);
+ 		csdev->enable = false;
+ 	}
+ 	return !csdev->enable;
+ }
+-EXPORT_SYMBOL_GPL(coresight_disable_source);
+ 
+ /*
+  * coresight_disable_path_from : Disable components in the given path beyond
+@@ -1202,7 +1218,7 @@ void coresight_disable(struct coresight_device *csdev)
+ 	if (ret)
+ 		goto out;
+ 
+-	if (!csdev->enable || !coresight_disable_source(csdev, NULL))
++	if (!csdev->enable || !coresight_disable_source_sysfs(csdev, NULL))
+ 		goto out;
+ 
+ 	switch (csdev->subtype.source_subtype) {
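The split above is worth spelling out: coresight_disable_source() now disables unconditionally (the perf path keeps its own accounting), while the new sysfs wrapper keeps the refcount gate so only the last user really disables the device. A minimal userspace sketch of that shape, with illustrative names:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcnt = 2;	/* two sysfs users of one source */
static bool enabled = true;

/* Unconditional disable, including helpers (perf-style path). */
static void disable_source(void)
{
	enabled = false;
}

/* Refcount-gated disable (sysfs-style path): the last user wins. */
static bool disable_source_refcounted(void)
{
	if (atomic_fetch_sub(&refcnt, 1) == 1)
		disable_source();
	return !enabled;
}

int main(void)
{
	printf("%d\n", disable_source_refcounted());	/* 0: still in use */
	printf("%d\n", disable_source_refcounted());	/* 1: now disabled */
	return 0;
}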
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index 89e8ed214ea49..58b32b399fac2 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -587,7 +587,7 @@ static void etm_event_stop(struct perf_event *event, int mode)
+ 		return;
+ 
+ 	/* stop tracer */
+-	source_ops(csdev)->disable(csdev, event);
++	coresight_disable_source(csdev, event);
+ 
+ 	/* tell the core */
+ 	event->hw.state = PERF_HES_STOPPED;
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 34aee59dd1473..18c4544f60454 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -1160,6 +1160,7 @@ static void etm4_init_arch_data(void *info)
+ 	struct etm4_init_arg *init_arg = info;
+ 	struct etmv4_drvdata *drvdata;
+ 	struct csdev_access *csa;
++	struct device *dev = init_arg->dev;
+ 	int i;
+ 
+ 	drvdata = dev_get_drvdata(init_arg->dev);
+@@ -1173,6 +1174,10 @@ static void etm4_init_arch_data(void *info)
+ 	if (!etm4_init_csdev_access(drvdata, csa))
+ 		return;
+ 
++	if (!csa->io_mem ||
++	    fwnode_property_present(dev_fwnode(dev), "qcom,skip-power-up"))
++		drvdata->skip_power_up = true;
++
+ 	/* Detect the support for OS Lock before we actually use it */
+ 	etm_detect_os_lock(drvdata, csa);
+ 
+@@ -1998,11 +2003,6 @@ static int etm4_add_coresight_dev(struct etm4_init_arg *init_arg)
+ 	if (!drvdata->arch)
+ 		return -EINVAL;
+ 
+-	/* TRCPDCR is not accessible with system instructions. */
+-	if (!desc.access.io_mem ||
+-	    fwnode_property_present(dev_fwnode(dev), "qcom,skip-power-up"))
+-		drvdata->skip_power_up = true;
+-
+ 	major = ETM_ARCH_MAJOR_VERSION(drvdata->arch);
+ 	minor = ETM_ARCH_MINOR_VERSION(drvdata->arch);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
+index 767076e079701..30c051055e54b 100644
+--- a/drivers/hwtracing/coresight/coresight-priv.h
++++ b/drivers/hwtracing/coresight/coresight-priv.h
+@@ -233,6 +233,6 @@ void coresight_set_percpu_sink(int cpu, struct coresight_device *csdev);
+ struct coresight_device *coresight_get_percpu_sink(int cpu);
+ int coresight_enable_source(struct coresight_device *csdev, enum cs_mode mode,
+ 			    void *data);
+-bool coresight_disable_source(struct coresight_device *csdev, void *data);
++void coresight_disable_source(struct coresight_device *csdev, void *data);
+ 
+ #endif
+diff --git a/drivers/hwtracing/ptt/hisi_ptt.c b/drivers/hwtracing/ptt/hisi_ptt.c
+index a991ecb7515a3..24a1f7797aeb6 100644
+--- a/drivers/hwtracing/ptt/hisi_ptt.c
++++ b/drivers/hwtracing/ptt/hisi_ptt.c
+@@ -995,6 +995,9 @@ static int hisi_ptt_pmu_event_init(struct perf_event *event)
+ 	int ret;
+ 	u32 val;
+ 
++	if (event->attr.type != hisi_ptt->hisi_ptt_pmu.type)
++		return -ENOENT;
++
+ 	if (event->cpu < 0) {
+ 		dev_dbg(event->pmu->dev, "Per-task mode not supported\n");
+ 		return -EOPNOTSUPP;
+@@ -1003,9 +1006,6 @@ static int hisi_ptt_pmu_event_init(struct perf_event *event)
+ 	if (event->attach_state & PERF_ATTACH_TASK)
+ 		return -EOPNOTSUPP;
+ 
+-	if (event->attr.type != hisi_ptt->hisi_ptt_pmu.type)
+-		return -ENOENT;
+-
+ 	ret = hisi_ptt_trace_valid_filter(hisi_ptt, event->attr.config);
+ 	if (ret < 0)
+ 		return ret;
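The hisi_ptt hunk above encodes a probe-ordering rule rather than new behaviour: pmu->event_init() must reject events belonging to another PMU with -ENOENT before any capability check, so the perf core can keep offering the event to other PMUs instead of failing the syscall with -EOPNOTSUPP. A userspace rendering of the rule (names and types are illustrative):

#include <stdio.h>
#include <errno.h>

static int event_init(int event_type, int pmu_type, int cpu)
{
	if (event_type != pmu_type)
		return -ENOENT;		/* not ours: keep probing PMUs */
	if (cpu < 0)
		return -EOPNOTSUPP;	/* ours, but per-task unsupported */
	return 0;
}

int main(void)
{
	/* foreign event, unsupported own event, supported own event */
	printf("%d %d %d\n", event_init(1, 2, -1), event_init(2, 2, -1),
	       event_init(2, 2, 0));
	return 0;
}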
+diff --git a/drivers/i3c/master/dw-i3c-master.c b/drivers/i3c/master/dw-i3c-master.c
+index ef5751e91cc9e..276153e10f5a4 100644
+--- a/drivers/i3c/master/dw-i3c-master.c
++++ b/drivers/i3c/master/dw-i3c-master.c
+@@ -1163,8 +1163,10 @@ static void dw_i3c_master_set_sir_enabled(struct dw_i3c_master *master,
+ 		global = reg == 0xffffffff;
+ 		reg &= ~BIT(idx);
+ 	} else {
+-		global = reg == 0;
++		bool hj_rejected = !!(readl(master->regs + DEVICE_CTRL) & DEV_CTRL_HOT_JOIN_NACK);
++
+ 		reg |= BIT(idx);
++		global = (reg == 0xffffffff) && hj_rejected;
+ 	}
+ 	writel(reg, master->regs + IBI_SIR_REQ_REJECT);
+ 
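As I read the hunk above, the "globally reject IBIs" shortcut may now only be taken when every per-device SIR bit is set and hot-join is already NACKed, since a hot-join request is itself an in-band interrupt that would otherwise be lost. A toy model with made-up register values:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t sir_reject = 0xfffffffe;	/* one device still enabled */
	bool hj_rejected = true;		/* DEV_CTRL_HOT_JOIN_NACK set */
	bool global;
	int idx = 0;

	sir_reject |= (uint32_t)1 << idx;	/* reject the last device too */
	global = (sir_reject == 0xffffffff) && hj_rejected;
	printf("global reject: %s\n", global ? "yes" : "no");
	return 0;
}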
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index 7653261d2dc2b..b51eb6cb766f3 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -34,24 +34,11 @@
+ static int iio_gts_get_gain(const u64 max, const u64 scale)
+ {
+ 	u64 full = max;
+-	int tmp = 1;
+ 
+ 	if (scale > full || !scale)
+ 		return -EINVAL;
+ 
+-	if (U64_MAX - full < scale) {
+-		/* Risk of overflow */
+-		if (full - scale < scale)
+-			return 1;
+-
+-		full -= scale;
+-		tmp++;
+-	}
+-
+-	while (full > scale * (u64)tmp)
+-		tmp++;
+-
+-	return tmp;
++	return div64_u64(full, scale);
+ }
+ 
+ /**
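The iio-gts change above replaces an O(gain) loop, including its U64_MAX overflow dance, with a single 64-bit division; for the exact-multiple max/scale pairs the gain tables use, the loop's smallest-multiplier answer and the quotient agree. A userspace comparison, with plain '/' standing in for the kernel's div64_u64():

#include <stdint.h>
#include <stdio.h>

/* The removed loop: increments a multiplier until scale * tmp reaches
 * max. Note scale * tmp can overflow for large inputs, which is what
 * the deleted U64_MAX guard worked around. */
static int gain_loop(uint64_t max, uint64_t scale)
{
	int tmp = 1;

	while (max > scale * (uint64_t)tmp)
		tmp++;
	return tmp;
}

int main(void)
{
	uint64_t max = 64, scale = 4;

	printf("loop=%d div=%llu\n", gain_loop(max, scale),
	       (unsigned long long)(max / scale));
	return 0;
}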
+diff --git a/drivers/iio/pressure/mprls0025pa.c b/drivers/iio/pressure/mprls0025pa.c
+index 30fb2de368210..e3f0de020a40c 100644
+--- a/drivers/iio/pressure/mprls0025pa.c
++++ b/drivers/iio/pressure/mprls0025pa.c
+@@ -323,6 +323,7 @@ static int mpr_probe(struct i2c_client *client)
+ 	struct iio_dev *indio_dev;
+ 	struct device *dev = &client->dev;
+ 	s64 scale, offset;
++	u32 func;
+ 
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_READ_BYTE))
+ 		return dev_err_probe(dev, -EOPNOTSUPP,
+@@ -362,10 +363,11 @@ static int mpr_probe(struct i2c_client *client)
+ 			return dev_err_probe(dev, ret,
+ 				"honeywell,pmax-pascal could not be read\n");
+ 		ret = device_property_read_u32(dev,
+-				"honeywell,transfer-function", &data->function);
++				"honeywell,transfer-function", &func);
+ 		if (ret)
+ 			return dev_err_probe(dev, ret,
+ 				"honeywell,transfer-function could not be read\n");
++		data->function = func - 1;
+ 		if (data->function > MPR_FUNCTION_C)
+ 			return dev_err_probe(dev, -EINVAL,
+ 				"honeywell,transfer-function %d invalid\n",
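The off-by-one above comes from the DT binding: "honeywell,transfer-function" counts functions from 1 (A=1 ... C=3) while the driver enum starts at 0, so the value must be decremented before the range check. Reading into a local u32 first also means a bogus 0 wraps to a huge number and is caught by the existing '> MPR_FUNCTION_C' test. A trivial sketch of the mapping:

#include <stdio.h>

enum mpr_func { MPR_FUNCTION_A, MPR_FUNCTION_B, MPR_FUNCTION_C };

int main(void)
{
	unsigned int func = 3;		/* DT value: 1-based */
	unsigned int f = func - 1;	/* driver enum: 0-based */

	if (f > MPR_FUNCTION_C)		/* also rejects func == 0 via wrap */
		puts("invalid transfer function");
	else
		printf("using function index %u\n", f);
	return 0;
}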
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 67bcea7a153c6..07cb6c5ffda00 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1730,7 +1730,7 @@ static int assign_client_id(struct ib_client *client)
+ {
+ 	int ret;
+ 
+-	down_write(&clients_rwsem);
++	lockdep_assert_held(&clients_rwsem);
+ 	/*
+ 	 * The add/remove callbacks must be called in FIFO/LIFO order. To
+ 	 * achieve this we assign client_ids so they are sorted in
+@@ -1739,14 +1739,11 @@ static int assign_client_id(struct ib_client *client)
+ 	client->client_id = highest_client_id;
+ 	ret = xa_insert(&clients, client->client_id, client, GFP_KERNEL);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+ 	highest_client_id++;
+ 	xa_set_mark(&clients, client->client_id, CLIENT_REGISTERED);
+-
+-out:
+-	up_write(&clients_rwsem);
+-	return ret;
++	return 0;
+ }
+ 
+ static void remove_client_id(struct ib_client *client)
+@@ -1776,25 +1773,35 @@ int ib_register_client(struct ib_client *client)
+ {
+ 	struct ib_device *device;
+ 	unsigned long index;
++	bool need_unreg = false;
+ 	int ret;
+ 
+ 	refcount_set(&client->uses, 1);
+ 	init_completion(&client->uses_zero);
++
++	/*
++	 * The devices_rwsem is held in write mode to ensure that a racing
++	 * ib_register_device() sees a consistent view of clients and devices.
++	 */
++	down_write(&devices_rwsem);
++	down_write(&clients_rwsem);
+ 	ret = assign_client_id(client);
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+-	down_read(&devices_rwsem);
++	need_unreg = true;
+ 	xa_for_each_marked (&devices, index, device, DEVICE_REGISTERED) {
+ 		ret = add_client_context(device, client);
+-		if (ret) {
+-			up_read(&devices_rwsem);
+-			ib_unregister_client(client);
+-			return ret;
+-		}
++		if (ret)
++			goto out;
+ 	}
+-	up_read(&devices_rwsem);
+-	return 0;
++	ret = 0;
++out:
++	up_write(&clients_rwsem);
++	up_write(&devices_rwsem);
++	if (need_unreg && ret)
++		ib_unregister_client(client);
++	return ret;
+ }
+ EXPORT_SYMBOL(ib_register_client);
+ 
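The registration rework above has a simple shape: take both rwsems in write mode in a fixed order, remember whether partial state was created, and defer the heavyweight unwind (ib_unregister_client() can sleep) until both locks are dropped. A userspace analogue with POSIX rwlocks (names illustrative; build with -lpthread):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t devices_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t clients_lock = PTHREAD_RWLOCK_INITIALIZER;

static int register_client(bool fail_midway)
{
	bool need_unreg = false;
	int ret = 0;

	pthread_rwlock_wrlock(&devices_lock);	/* fixed order: devices... */
	pthread_rwlock_wrlock(&clients_lock);	/* ...then clients */
	need_unreg = true;			/* client id assigned */
	if (fail_midway)
		ret = -1;			/* adding a context failed */
	pthread_rwlock_unlock(&clients_lock);
	pthread_rwlock_unlock(&devices_lock);

	if (need_unreg && ret)
		puts("unwinding outside the locks");
	return ret;
}

int main(void)
{
	(void)register_client(true);
	return 0;
}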
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 1627f3b0ef28b..f08417d3721dc 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -584,6 +584,13 @@ struct hns_roce_work {
+ 	u32 queue_num;
+ };
+ 
++enum hns_roce_cong_type {
++	CONG_TYPE_DCQCN,
++	CONG_TYPE_LDCP,
++	CONG_TYPE_HC3,
++	CONG_TYPE_DIP,
++};
++
+ struct hns_roce_qp {
+ 	struct ib_qp		ibqp;
+ 	struct hns_roce_wq	rq;
+@@ -627,6 +634,7 @@ struct hns_roce_qp {
+ 	struct list_head	sq_node; /* all send qps are on a list */
+ 	struct hns_user_mmap_entry *dwqe_mmap_entry;
+ 	u32			config;
++	enum hns_roce_cong_type	cong_type;
+ };
+ 
+ struct hns_roce_ib_iboe {
+@@ -698,13 +706,6 @@ struct hns_roce_eq_table {
+ 	struct hns_roce_eq	*eq;
+ };
+ 
+-enum cong_type {
+-	CONG_TYPE_DCQCN,
+-	CONG_TYPE_LDCP,
+-	CONG_TYPE_HC3,
+-	CONG_TYPE_DIP,
+-};
+-
+ struct hns_roce_caps {
+ 	u64		fw_ver;
+ 	u8		num_ports;
+@@ -834,7 +835,7 @@ struct hns_roce_caps {
+ 	u16		default_aeq_period;
+ 	u16		default_aeq_arm_st;
+ 	u16		default_ceq_arm_st;
+-	enum cong_type	cong_type;
++	enum hns_roce_cong_type cong_type;
+ };
+ 
+ enum hns_roce_device_state {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index aa9527ac2fe06..81d6d4331cdd6 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -4733,12 +4733,15 @@ static int check_cong_type(struct ib_qp *ibqp,
+ 			   struct hns_roce_congestion_algorithm *cong_alg)
+ {
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
++	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+ 
+-	if (ibqp->qp_type == IB_QPT_UD)
+-		hr_dev->caps.cong_type = CONG_TYPE_DCQCN;
++	if (ibqp->qp_type == IB_QPT_UD || ibqp->qp_type == IB_QPT_GSI)
++		hr_qp->cong_type = CONG_TYPE_DCQCN;
++	else
++		hr_qp->cong_type = hr_dev->caps.cong_type;
+ 
+ 	/* different congestion types match different configurations */
+-	switch (hr_dev->caps.cong_type) {
++	switch (hr_qp->cong_type) {
+ 	case CONG_TYPE_DCQCN:
+ 		cong_alg->alg_sel = CONG_DCQCN;
+ 		cong_alg->alg_sub_sel = UNSUPPORT_CONG_LEVEL;
+@@ -4766,8 +4769,8 @@ static int check_cong_type(struct ib_qp *ibqp,
+ 	default:
+ 		ibdev_warn(&hr_dev->ib_dev,
+ 			   "invalid type(%u) for congestion selection.\n",
+-			   hr_dev->caps.cong_type);
+-		hr_dev->caps.cong_type = CONG_TYPE_DCQCN;
++			   hr_qp->cong_type);
++		hr_qp->cong_type = CONG_TYPE_DCQCN;
+ 		cong_alg->alg_sel = CONG_DCQCN;
+ 		cong_alg->alg_sub_sel = UNSUPPORT_CONG_LEVEL;
+ 		cong_alg->dip_vld = DIP_INVALID;
+@@ -4786,6 +4789,7 @@ static int fill_cong_field(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ 	struct hns_roce_congestion_algorithm cong_field;
+ 	struct ib_device *ibdev = ibqp->device;
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(ibdev);
++	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+ 	u32 dip_idx = 0;
+ 	int ret;
+ 
+@@ -4798,7 +4802,7 @@ static int fill_cong_field(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ 		return ret;
+ 
+ 	hr_reg_write(context, QPC_CONG_ALGO_TMPL_ID, hr_dev->cong_algo_tmpl_id +
+-		     hr_dev->caps.cong_type * HNS_ROCE_CONG_SIZE);
++		     hr_qp->cong_type * HNS_ROCE_CONG_SIZE);
+ 	hr_reg_clear(qpc_mask, QPC_CONG_ALGO_TMPL_ID);
+ 	hr_reg_write(&context->ext, QPCEX_CONG_ALG_SEL, cong_field.alg_sel);
+ 	hr_reg_clear(&qpc_mask->ext, QPCEX_CONG_ALG_SEL);
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 0b046c061742b..12704efb7b19a 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -719,7 +719,6 @@ static int irdma_setup_kmode_qp(struct irdma_device *iwdev,
+ 		info->rq_pa + (ukinfo->rq_depth * IRDMA_QP_WQE_MIN_SIZE);
+ 	ukinfo->sq_size = ukinfo->sq_depth >> ukinfo->sq_shift;
+ 	ukinfo->rq_size = ukinfo->rq_depth >> ukinfo->rq_shift;
+-	ukinfo->qp_id = iwqp->ibqp.qp_num;
+ 
+ 	iwqp->max_send_wr = (ukinfo->sq_depth - IRDMA_SQ_RSVD) >> ukinfo->sq_shift;
+ 	iwqp->max_recv_wr = (ukinfo->rq_depth - IRDMA_RQ_RSVD) >> ukinfo->rq_shift;
+@@ -944,7 +943,7 @@ static int irdma_create_qp(struct ib_qp *ibqp,
+ 	iwqp->host_ctx.size = IRDMA_QP_CTX_SIZE;
+ 
+ 	init_info.pd = &iwpd->sc_pd;
+-	init_info.qp_uk_init_info.qp_id = iwqp->ibqp.qp_num;
++	init_info.qp_uk_init_info.qp_id = qp_num;
+ 	if (!rdma_protocol_roce(&iwdev->ibdev, 1))
+ 		init_info.qp_uk_init_info.first_sq_wq = 1;
+ 	iwqp->ctx_info.qp_compl_ctx = (uintptr_t)qp;
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index 7be4c3adb4e2b..ab91009aea883 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -358,7 +358,7 @@ int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem,
+ 			     sizeof(struct gdma_create_dma_region_resp));
+ 
+ 	create_req->length = umem->length;
+-	create_req->offset_in_page = umem->address & (page_sz - 1);
++	create_req->offset_in_page = ib_umem_dma_offset(umem, page_sz);
+ 	create_req->gdma_page_type = order_base_2(page_sz) - PAGE_SHIFT;
+ 	create_req->page_count = num_pages_total;
+ 
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 8ba53edf23119..6e19974ecf6e7 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -2949,7 +2949,7 @@ DECLARE_UVERBS_NAMED_METHOD(
+ 	MLX5_IB_METHOD_DEVX_OBJ_MODIFY,
+ 	UVERBS_ATTR_IDR(MLX5_IB_ATTR_DEVX_OBJ_MODIFY_HANDLE,
+ 			UVERBS_IDR_ANY_OBJECT,
+-			UVERBS_ACCESS_WRITE,
++			UVERBS_ACCESS_READ,
+ 			UA_MANDATORY),
+ 	UVERBS_ATTR_PTR_IN(
+ 		MLX5_IB_ATTR_DEVX_OBJ_MODIFY_CMD_IN,
+diff --git a/drivers/infiniband/hw/mlx5/wr.c b/drivers/infiniband/hw/mlx5/wr.c
+index df1d1b0a3ef72..9947feb7fb8a0 100644
+--- a/drivers/infiniband/hw/mlx5/wr.c
++++ b/drivers/infiniband/hw/mlx5/wr.c
+@@ -78,7 +78,7 @@ static void set_eth_seg(const struct ib_send_wr *wr, struct mlx5_ib_qp *qp,
+ 		 */
+ 		copysz = min_t(u64, *cur_edge - (void *)eseg->inline_hdr.start,
+ 			       left);
+-		memcpy(eseg->inline_hdr.start, pdata, copysz);
++		memcpy(eseg->inline_hdr.data, pdata, copysz);
+ 		stride = ALIGN(sizeof(struct mlx5_wqe_eth_seg) -
+ 			       sizeof(eseg->inline_hdr.start) + copysz, 16);
+ 		*size += stride / 16;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c
+index d3c436ead6946..4aa80c9388f05 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c
+@@ -133,7 +133,7 @@ static ssize_t mpath_policy_store(struct device *dev,
+ 
+ 	/* distinguish "mi" and "min-latency" with length */
+ 	len = strnlen(buf, NAME_MAX);
+-	if (buf[len - 1] == '\n')
++	if (len && buf[len - 1] == '\n')
+ 		len--;
+ 
+ 	if (!strncasecmp(buf, "round-robin", 11) ||
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 040234c01be4d..9632afbd727b6 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -3209,7 +3209,6 @@ static int srpt_add_one(struct ib_device *device)
+ 
+ 	INIT_IB_EVENT_HANDLER(&sdev->event_handler, sdev->device,
+ 			      srpt_event_handler);
+-	ib_register_event_handler(&sdev->event_handler);
+ 
+ 	for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+ 		sport = &sdev->port[i - 1];
+@@ -3232,6 +3231,7 @@ static int srpt_add_one(struct ib_device *device)
+ 		}
+ 	}
+ 
++	ib_register_event_handler(&sdev->event_handler);
+ 	spin_lock(&srpt_dev_lock);
+ 	list_add_tail(&sdev->list, &srpt_dev_list);
+ 	spin_unlock(&srpt_dev_lock);
+@@ -3242,7 +3242,6 @@ static int srpt_add_one(struct ib_device *device)
+ 
+ err_port:
+ 	srpt_unregister_mad_agent(sdev, i);
+-	ib_unregister_event_handler(&sdev->event_handler);
+ err_cm:
+ 	if (sdev->cm_id)
+ 		ib_destroy_cm_id(sdev->cm_id);
+diff --git a/drivers/input/keyboard/gpio_keys_polled.c b/drivers/input/keyboard/gpio_keys_polled.c
+index ba00ecfbd343b..b41fd1240f431 100644
+--- a/drivers/input/keyboard/gpio_keys_polled.c
++++ b/drivers/input/keyboard/gpio_keys_polled.c
+@@ -315,12 +315,10 @@ static int gpio_keys_polled_probe(struct platform_device *pdev)
+ 
+ 			error = devm_gpio_request_one(dev, button->gpio,
+ 					flags, button->desc ? : DRV_NAME);
+-			if (error) {
+-				dev_err(dev,
+-					"unable to claim gpio %u, err=%d\n",
+-					button->gpio, error);
+-				return error;
+-			}
++			if (error)
++				return dev_err_probe(dev, error,
++						     "unable to claim gpio %u\n",
++						     button->gpio);
+ 
+ 			bdata->gpiod = gpio_to_desc(button->gpio);
+ 			if (!bdata->gpiod) {
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index 36aeeae776110..9ca5a743f19fe 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -620,6 +620,118 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ 			},
+ 		},
+ 	},
++	{
++		.prod_num = IQS7222_PROD_NUM_D,
++		.fw_major = 1,
++		.fw_minor = 2,
++		.touch_link = 1770,
++		.allow_offset = 9,
++		.event_offset = 10,
++		.comms_offset = 11,
++		.reg_grps = {
++			[IQS7222_REG_GRP_STAT] = {
++				.base = IQS7222_SYS_STATUS,
++				.num_row = 1,
++				.num_col = 7,
++			},
++			[IQS7222_REG_GRP_CYCLE] = {
++				.base = 0x8000,
++				.num_row = 7,
++				.num_col = 2,
++			},
++			[IQS7222_REG_GRP_GLBL] = {
++				.base = 0x8700,
++				.num_row = 1,
++				.num_col = 3,
++			},
++			[IQS7222_REG_GRP_BTN] = {
++				.base = 0x9000,
++				.num_row = 14,
++				.num_col = 3,
++			},
++			[IQS7222_REG_GRP_CHAN] = {
++				.base = 0xA000,
++				.num_row = 14,
++				.num_col = 4,
++			},
++			[IQS7222_REG_GRP_FILT] = {
++				.base = 0xAE00,
++				.num_row = 1,
++				.num_col = 2,
++			},
++			[IQS7222_REG_GRP_TPAD] = {
++				.base = 0xB000,
++				.num_row = 1,
++				.num_col = 24,
++			},
++			[IQS7222_REG_GRP_GPIO] = {
++				.base = 0xC000,
++				.num_row = 3,
++				.num_col = 3,
++			},
++			[IQS7222_REG_GRP_SYS] = {
++				.base = IQS7222_SYS_SETUP,
++				.num_row = 1,
++				.num_col = 12,
++			},
++		},
++	},
++	{
++		.prod_num = IQS7222_PROD_NUM_D,
++		.fw_major = 1,
++		.fw_minor = 1,
++		.touch_link = 1774,
++		.allow_offset = 9,
++		.event_offset = 10,
++		.comms_offset = 11,
++		.reg_grps = {
++			[IQS7222_REG_GRP_STAT] = {
++				.base = IQS7222_SYS_STATUS,
++				.num_row = 1,
++				.num_col = 7,
++			},
++			[IQS7222_REG_GRP_CYCLE] = {
++				.base = 0x8000,
++				.num_row = 7,
++				.num_col = 2,
++			},
++			[IQS7222_REG_GRP_GLBL] = {
++				.base = 0x8700,
++				.num_row = 1,
++				.num_col = 3,
++			},
++			[IQS7222_REG_GRP_BTN] = {
++				.base = 0x9000,
++				.num_row = 14,
++				.num_col = 3,
++			},
++			[IQS7222_REG_GRP_CHAN] = {
++				.base = 0xA000,
++				.num_row = 14,
++				.num_col = 4,
++			},
++			[IQS7222_REG_GRP_FILT] = {
++				.base = 0xAE00,
++				.num_row = 1,
++				.num_col = 2,
++			},
++			[IQS7222_REG_GRP_TPAD] = {
++				.base = 0xB000,
++				.num_row = 1,
++				.num_col = 24,
++			},
++			[IQS7222_REG_GRP_GPIO] = {
++				.base = 0xC000,
++				.num_row = 3,
++				.num_col = 3,
++			},
++			[IQS7222_REG_GRP_SYS] = {
++				.base = IQS7222_SYS_SETUP,
++				.num_row = 1,
++				.num_col = 12,
++			},
++		},
++	},
+ 	{
+ 		.prod_num = IQS7222_PROD_NUM_D,
+ 		.fw_major = 0,
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index 7673bb82945b6..a66f7df7a2b0b 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -195,7 +195,7 @@ source "drivers/iommu/iommufd/Kconfig"
+ config IRQ_REMAP
+ 	bool "Support for Interrupt Remapping"
+ 	depends on X86_64 && X86_IO_APIC && PCI_MSI && ACPI
+-	select DMAR_TABLE
++	select DMAR_TABLE if INTEL_IOMMU
+ 	help
+ 	  Supports Interrupt remapping for IO-APIC and MSI devices.
+ 	  To use x2apic mode in the CPU's which support x2APIC enhancements or
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 64bcf3df37ee5..7f65f3ecb231e 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -2068,6 +2068,9 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
+ 	/* Prevent binding other PCI device drivers to IOMMU devices */
+ 	iommu->dev->match_driver = false;
+ 
++	/* ACPI _PRT won't have an IRQ for IOMMU */
++	iommu->dev->irq_managed = 1;
++
+ 	pci_read_config_dword(iommu->dev, cap_ptr + MMIO_CAP_HDR_OFFSET,
+ 			      &iommu->cap);
+ 
+diff --git a/drivers/iommu/intel/Makefile b/drivers/iommu/intel/Makefile
+index 5dabf081a7793..5402b699a1229 100644
+--- a/drivers/iommu/intel/Makefile
++++ b/drivers/iommu/intel/Makefile
+@@ -5,5 +5,7 @@ obj-$(CONFIG_DMAR_TABLE) += trace.o cap_audit.o
+ obj-$(CONFIG_DMAR_PERF) += perf.o
+ obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) += debugfs.o
+ obj-$(CONFIG_INTEL_IOMMU_SVM) += svm.o
++ifdef CONFIG_INTEL_IOMMU
+ obj-$(CONFIG_IRQ_REMAP) += irq_remapping.o
++endif
+ obj-$(CONFIG_INTEL_IOMMU_PERF_EVENTS) += perfmon.o
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index 6e102cbbde845..cc2490e5cfb6b 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -481,6 +481,9 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
+ 	if (!info || !info->ats_enabled)
+ 		return;
+ 
++	if (pci_dev_is_disconnected(to_pci_dev(dev)))
++		return;
++
+ 	sid = info->bus << 8 | info->devfn;
+ 	qdep = info->ats_qdep;
+ 	pfsid = info->pfsid;
+diff --git a/drivers/iommu/irq_remapping.c b/drivers/iommu/irq_remapping.c
+index 83314b9d8f38b..ee59647c20501 100644
+--- a/drivers/iommu/irq_remapping.c
++++ b/drivers/iommu/irq_remapping.c
+@@ -99,7 +99,8 @@ int __init irq_remapping_prepare(void)
+ 	if (disable_irq_remap)
+ 		return -ENOSYS;
+ 
+-	if (intel_irq_remap_ops.prepare() == 0)
++	if (IS_ENABLED(CONFIG_INTEL_IOMMU) &&
++	    intel_irq_remap_ops.prepare() == 0)
+ 		remap_ops = &intel_irq_remap_ops;
+ 	else if (IS_ENABLED(CONFIG_AMD_IOMMU) &&
+ 		 amd_iommu_irq_ops.prepare() == 0)
+diff --git a/drivers/leds/flash/leds-sgm3140.c b/drivers/leds/flash/leds-sgm3140.c
+index eb648ff54b4e5..db0ac6641954e 100644
+--- a/drivers/leds/flash/leds-sgm3140.c
++++ b/drivers/leds/flash/leds-sgm3140.c
+@@ -114,8 +114,11 @@ static int sgm3140_brightness_set(struct led_classdev *led_cdev,
+ 				"failed to enable regulator: %d\n", ret);
+ 			return ret;
+ 		}
++		gpiod_set_value_cansleep(priv->flash_gpio, 0);
+ 		gpiod_set_value_cansleep(priv->enable_gpio, 1);
+ 	} else {
++		del_timer_sync(&priv->powerdown_timer);
++		gpiod_set_value_cansleep(priv->flash_gpio, 0);
+ 		gpiod_set_value_cansleep(priv->enable_gpio, 0);
+ 		ret = regulator_disable(priv->vin_regulator);
+ 		if (ret) {
+diff --git a/drivers/leds/leds-aw2013.c b/drivers/leds/leds-aw2013.c
+index 91f44b23cb113..17235a5e576ae 100644
+--- a/drivers/leds/leds-aw2013.c
++++ b/drivers/leds/leds-aw2013.c
+@@ -405,6 +405,7 @@ static int aw2013_probe(struct i2c_client *client)
+ 			       chip->regulators);
+ 
+ error:
++	mutex_unlock(&chip->mutex);
+ 	mutex_destroy(&chip->mutex);
+ 	return ret;
+ }
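The aw2013 fix above adds the unlock that was missing on the error path: destroying a mutex that is still locked is undefined behaviour. The userspace equivalent of the rule, in miniature:

#include <pthread.h>

int main(void)
{
	pthread_mutex_t m;

	pthread_mutex_init(&m, NULL);
	pthread_mutex_lock(&m);
	/* ... error path ... */
	pthread_mutex_unlock(&m);	/* the step the patch adds */
	pthread_mutex_destroy(&m);	/* only legal on an unlocked mutex */
	return 0;
}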
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index f03d7dba270c5..4f2808ef387f6 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -1315,7 +1315,7 @@ static void use_dmio(struct dm_buffer *b, enum req_op op, sector_t sector,
+ 		io_req.mem.ptr.vma = (char *)b->data + offset;
+ 	}
+ 
+-	r = dm_io(&io_req, 1, &region, NULL);
++	r = dm_io(&io_req, 1, &region, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r))
+ 		b->end_io(b, errno_to_blk_status(r));
+ }
+@@ -2167,7 +2167,7 @@ int dm_bufio_issue_flush(struct dm_bufio_client *c)
+ 	if (WARN_ON_ONCE(dm_bufio_in_request()))
+ 		return -EINVAL;
+ 
+-	return dm_io(&io_req, 1, &io_reg, NULL);
++	return dm_io(&io_req, 1, &io_reg, NULL, IOPRIO_DEFAULT);
+ }
+ EXPORT_SYMBOL_GPL(dm_bufio_issue_flush);
+ 
+@@ -2191,7 +2191,7 @@ int dm_bufio_issue_discard(struct dm_bufio_client *c, sector_t block, sector_t c
+ 	if (WARN_ON_ONCE(dm_bufio_in_request()))
+ 		return -EINVAL; /* discards are optional */
+ 
+-	return dm_io(&io_req, 1, &io_reg, NULL);
++	return dm_io(&io_req, 1, &io_reg, NULL, IOPRIO_DEFAULT);
+ }
+ EXPORT_SYMBOL_GPL(dm_bufio_issue_discard);
+ 
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 4ab4e8dcfd3e2..35f50193959e0 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -53,11 +53,11 @@
+ struct convert_context {
+ 	struct completion restart;
+ 	struct bio *bio_in;
+-	struct bio *bio_out;
+ 	struct bvec_iter iter_in;
++	struct bio *bio_out;
+ 	struct bvec_iter iter_out;
+-	u64 cc_sector;
+ 	atomic_t cc_pending;
++	u64 cc_sector;
+ 	union {
+ 		struct skcipher_request *req;
+ 		struct aead_request *req_aead;
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index e8e8fc33d3440..2cc30b9ab232a 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -555,7 +555,7 @@ static int sync_rw_sb(struct dm_integrity_c *ic, blk_opf_t opf)
+ 		}
+ 	}
+ 
+-	r = dm_io(&io_req, 1, &io_loc, NULL);
++	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r))
+ 		return r;
+ 
+@@ -1073,7 +1073,7 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, blk_opf_t opf,
+ 	io_loc.sector = ic->start + SB_SECTORS + sector;
+ 	io_loc.count = n_sectors;
+ 
+-	r = dm_io(&io_req, 1, &io_loc, NULL);
++	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r)) {
+ 		dm_integrity_io_error(ic, (opf & REQ_OP_MASK) == REQ_OP_READ ?
+ 				      "reading journal" : "writing journal", r);
+@@ -1190,7 +1190,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned int section, u
+ 	io_loc.sector = target;
+ 	io_loc.count = n_sectors;
+ 
+-	r = dm_io(&io_req, 1, &io_loc, NULL);
++	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r)) {
+ 		WARN_ONCE(1, "asynchronous dm_io failed: %d", r);
+ 		fn(-1UL, data);
+@@ -1519,7 +1519,7 @@ static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_dat
+ 		fr.io_reg.count = 0,
+ 		fr.ic = ic;
+ 		init_completion(&fr.comp);
+-		r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL);
++		r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL, IOPRIO_DEFAULT);
+ 		BUG_ON(r);
+ 	}
+ 
+@@ -1699,7 +1699,6 @@ static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checks
+ 	struct bio_vec bv;
+ 	sector_t sector, logical_sector, area, offset;
+ 	struct page *page;
+-	void *buffer;
+ 
+ 	get_area_and_offset(ic, dio->range.logical_sector, &area, &offset);
+ 	dio->metadata_block = get_metadata_sector_and_offset(ic, area, offset,
+@@ -1708,13 +1707,14 @@ static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checks
+ 	logical_sector = dio->range.logical_sector;
+ 
+ 	page = mempool_alloc(&ic->recheck_pool, GFP_NOIO);
+-	buffer = page_to_virt(page);
+ 
+ 	__bio_for_each_segment(bv, bio, iter, dio->bio_details.bi_iter) {
+ 		unsigned pos = 0;
+ 
+ 		do {
++			sector_t alignment;
+ 			char *mem;
++			char *buffer = page_to_virt(page);
+ 			int r;
+ 			struct dm_io_request io_req;
+ 			struct dm_io_region io_loc;
+@@ -1727,7 +1727,15 @@ static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checks
+ 			io_loc.sector = sector;
+ 			io_loc.count = ic->sectors_per_block;
+ 
+-			r = dm_io(&io_req, 1, &io_loc, NULL);
++			/* Align the bio to logical block size */
++			alignment = dio->range.logical_sector | bio_sectors(bio) | (PAGE_SIZE >> SECTOR_SHIFT);
++			alignment &= -alignment;
++			io_loc.sector = round_down(io_loc.sector, alignment);
++			io_loc.count += sector - io_loc.sector;
++			buffer += (sector - io_loc.sector) << SECTOR_SHIFT;
++			io_loc.count = round_up(io_loc.count, alignment);
++
++			r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ 			if (unlikely(r)) {
+ 				dio->bi_status = errno_to_blk_status(r);
+ 				goto free_ret;
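The alignment computation in the hunk above uses a classic trick: OR-ing the start sector, the request size and the page size in sectors, then AND-ing the result with its two's-complement negation, isolates the lowest set bit, which is the largest power of two dividing all three. The recheck read is then widened to that alignment so it never straddles a logical block. A standalone demonstration:

#include <stdint.h>
#include <stdio.h>

#define round_down(x, a)	((x) & ~((a) - 1))

int main(void)
{
	uint64_t sector = 24, sectors = 8, page_sectors = 8;
	uint64_t alignment = sector | sectors | page_sectors;

	alignment &= -alignment;	/* lowest set bit: 8 here */
	printf("alignment=%llu aligned_start=%llu\n",
	       (unsigned long long)alignment,
	       (unsigned long long)round_down(sector + 6, alignment));
	return 0;
}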
+@@ -1848,12 +1856,12 @@ static void integrity_metadata(struct work_struct *w)
+ 			r = dm_integrity_rw_tag(ic, checksums, &dio->metadata_block, &dio->metadata_offset,
+ 						checksums_ptr - checksums, dio->op == REQ_OP_READ ? TAG_CMP : TAG_WRITE);
+ 			if (unlikely(r)) {
++				if (likely(checksums != checksums_onstack))
++					kfree(checksums);
+ 				if (r > 0) {
+-					integrity_recheck(dio, checksums);
++					integrity_recheck(dio, checksums_onstack);
+ 					goto skip_io;
+ 				}
+-				if (likely(checksums != checksums_onstack))
+-					kfree(checksums);
+ 				goto error;
+ 			}
+ 
+@@ -2806,7 +2814,7 @@ static void integrity_recalc(struct work_struct *w)
+ 	io_loc.sector = get_data_sector(ic, area, offset);
+ 	io_loc.count = n_sectors;
+ 
+-	r = dm_io(&io_req, 1, &io_loc, NULL);
++	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r)) {
+ 		dm_integrity_io_error(ic, "reading data", r);
+ 		goto err;
+diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
+index f053ce2458147..7409490259d1d 100644
+--- a/drivers/md/dm-io.c
++++ b/drivers/md/dm-io.c
+@@ -305,7 +305,7 @@ static void km_dp_init(struct dpages *dp, void *data)
+  */
+ static void do_region(const blk_opf_t opf, unsigned int region,
+ 		      struct dm_io_region *where, struct dpages *dp,
+-		      struct io *io)
++		      struct io *io, unsigned short ioprio)
+ {
+ 	struct bio *bio;
+ 	struct page *page;
+@@ -354,6 +354,7 @@ static void do_region(const blk_opf_t opf, unsigned int region,
+ 				       &io->client->bios);
+ 		bio->bi_iter.bi_sector = where->sector + (where->count - remaining);
+ 		bio->bi_end_io = endio;
++		bio->bi_ioprio = ioprio;
+ 		store_io_and_region_in_bio(bio, io, region);
+ 
+ 		if (op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES) {
+@@ -383,7 +384,7 @@ static void do_region(const blk_opf_t opf, unsigned int region,
+ 
+ static void dispatch_io(blk_opf_t opf, unsigned int num_regions,
+ 			struct dm_io_region *where, struct dpages *dp,
+-			struct io *io, int sync)
++			struct io *io, int sync, unsigned short ioprio)
+ {
+ 	int i;
+ 	struct dpages old_pages = *dp;
+@@ -400,7 +401,7 @@ static void dispatch_io(blk_opf_t opf, unsigned int num_regions,
+ 	for (i = 0; i < num_regions; i++) {
+ 		*dp = old_pages;
+ 		if (where[i].count || (opf & REQ_PREFLUSH))
+-			do_region(opf, i, where + i, dp, io);
++			do_region(opf, i, where + i, dp, io, ioprio);
+ 	}
+ 
+ 	/*
+@@ -425,7 +426,7 @@ static void sync_io_complete(unsigned long error, void *context)
+ 
+ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
+ 		   struct dm_io_region *where, blk_opf_t opf, struct dpages *dp,
+-		   unsigned long *error_bits)
++		   unsigned long *error_bits, unsigned short ioprio)
+ {
+ 	struct io *io;
+ 	struct sync_io sio;
+@@ -447,7 +448,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
+ 	io->vma_invalidate_address = dp->vma_invalidate_address;
+ 	io->vma_invalidate_size = dp->vma_invalidate_size;
+ 
+-	dispatch_io(opf, num_regions, where, dp, io, 1);
++	dispatch_io(opf, num_regions, where, dp, io, 1, ioprio);
+ 
+ 	wait_for_completion_io(&sio.wait);
+ 
+@@ -459,7 +460,8 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
+ 
+ static int async_io(struct dm_io_client *client, unsigned int num_regions,
+ 		    struct dm_io_region *where, blk_opf_t opf,
+-		    struct dpages *dp, io_notify_fn fn, void *context)
++		    struct dpages *dp, io_notify_fn fn, void *context,
++		    unsigned short ioprio)
+ {
+ 	struct io *io;
+ 
+@@ -479,7 +481,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions,
+ 	io->vma_invalidate_address = dp->vma_invalidate_address;
+ 	io->vma_invalidate_size = dp->vma_invalidate_size;
+ 
+-	dispatch_io(opf, num_regions, where, dp, io, 0);
++	dispatch_io(opf, num_regions, where, dp, io, 0, ioprio);
+ 	return 0;
+ }
+ 
+@@ -521,7 +523,8 @@ static int dp_init(struct dm_io_request *io_req, struct dpages *dp,
+ }
+ 
+ int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
+-	  struct dm_io_region *where, unsigned long *sync_error_bits)
++	  struct dm_io_region *where, unsigned long *sync_error_bits,
++	  unsigned short ioprio)
+ {
+ 	int r;
+ 	struct dpages dp;
+@@ -532,11 +535,11 @@ int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
+ 
+ 	if (!io_req->notify.fn)
+ 		return sync_io(io_req->client, num_regions, where,
+-			       io_req->bi_opf, &dp, sync_error_bits);
++			       io_req->bi_opf, &dp, sync_error_bits, ioprio);
+ 
+ 	return async_io(io_req->client, num_regions, where,
+ 			io_req->bi_opf, &dp, io_req->notify.fn,
+-			io_req->notify.context);
++			io_req->notify.context, ioprio);
+ }
+ EXPORT_SYMBOL(dm_io);
+ 
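The dm-io change above is pure plumbing: dm_io() grows an ioprio argument that is threaded through both the sync and async paths down to where each bio is issued, so callers can pass a real I/O priority instead of every request silently getting the default. A toy of the same threading pattern (names illustrative):

#include <stdio.h>

static void do_region(int region, unsigned short ioprio)
{
	printf("region %d issued with ioprio %u\n", region, ioprio);
}

static void dispatch_io(int num_regions, unsigned short ioprio)
{
	for (int i = 0; i < num_regions; i++)
		do_region(i, ioprio);	/* ends up in bio->bi_ioprio */
}

int main(void)
{
	dispatch_io(2, 4);	/* most callers here pass IOPRIO_DEFAULT */
	return 0;
}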
+diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
+index d01807c50f20b..79c65c9ad5fa8 100644
+--- a/drivers/md/dm-kcopyd.c
++++ b/drivers/md/dm-kcopyd.c
+@@ -578,9 +578,9 @@ static int run_io_job(struct kcopyd_job *job)
+ 	io_job_start(job->kc->throttle);
+ 
+ 	if (job->op == REQ_OP_READ)
+-		r = dm_io(&io_req, 1, &job->source, NULL);
++		r = dm_io(&io_req, 1, &job->source, NULL, IOPRIO_DEFAULT);
+ 	else
+-		r = dm_io(&io_req, job->num_dests, job->dests, NULL);
++		r = dm_io(&io_req, job->num_dests, job->dests, NULL, IOPRIO_DEFAULT);
+ 
+ 	return r;
+ }
+diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
+index f9f84236dfcd7..f7f9c2100937b 100644
+--- a/drivers/md/dm-log.c
++++ b/drivers/md/dm-log.c
+@@ -300,7 +300,7 @@ static int rw_header(struct log_c *lc, enum req_op op)
+ {
+ 	lc->io_req.bi_opf = op;
+ 
+-	return dm_io(&lc->io_req, 1, &lc->header_location, NULL);
++	return dm_io(&lc->io_req, 1, &lc->header_location, NULL, IOPRIO_DEFAULT);
+ }
+ 
+ static int flush_header(struct log_c *lc)
+@@ -313,7 +313,7 @@ static int flush_header(struct log_c *lc)
+ 
+ 	lc->io_req.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+ 
+-	return dm_io(&lc->io_req, 1, &null_location, NULL);
++	return dm_io(&lc->io_req, 1, &null_location, NULL, IOPRIO_DEFAULT);
+ }
+ 
+ static int read_header(struct log_c *log)
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index eb009d6bb03a1..13eb47b997f94 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3329,14 +3329,14 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
+ 	struct mddev *mddev = &rs->md;
+ 
+ 	/*
+-	 * If we're reshaping to add disk(s)), ti->len and
++	 * If we're reshaping to add disk(s), ti->len and
+ 	 * mddev->array_sectors will differ during the process
+ 	 * (ti->len > mddev->array_sectors), so we have to requeue
+ 	 * bios with addresses > mddev->array_sectors here or
+ 	 * there will occur accesses past EOD of the component
+ 	 * data images thus erroring the raid set.
+ 	 */
+-	if (unlikely(bio_end_sector(bio) > mddev->array_sectors))
++	if (unlikely(bio_has_data(bio) && bio_end_sector(bio) > mddev->array_sectors))
+ 		return DM_MAPIO_REQUEUE;
+ 
+ 	md_handle_request(mddev, bio);
+diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
+index ddcb2bc4a6179..9511dae5b556a 100644
+--- a/drivers/md/dm-raid1.c
++++ b/drivers/md/dm-raid1.c
+@@ -278,7 +278,7 @@ static int mirror_flush(struct dm_target *ti)
+ 	}
+ 
+ 	error_bits = -1;
+-	dm_io(&io_req, ms->nr_mirrors, io, &error_bits);
++	dm_io(&io_req, ms->nr_mirrors, io, &error_bits, IOPRIO_DEFAULT);
+ 	if (unlikely(error_bits != 0)) {
+ 		for (i = 0; i < ms->nr_mirrors; i++)
+ 			if (test_bit(i, &error_bits))
+@@ -554,7 +554,7 @@ static void read_async_bio(struct mirror *m, struct bio *bio)
+ 
+ 	map_region(&io, m, bio);
+ 	bio_set_m(bio, m);
+-	BUG_ON(dm_io(&io_req, 1, &io, NULL));
++	BUG_ON(dm_io(&io_req, 1, &io, NULL, IOPRIO_DEFAULT));
+ }
+ 
+ static inline int region_in_sync(struct mirror_set *ms, region_t region,
+@@ -681,7 +681,7 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
+ 	 */
+ 	bio_set_m(bio, get_default_mirror(ms));
+ 
+-	BUG_ON(dm_io(&io_req, ms->nr_mirrors, io, NULL));
++	BUG_ON(dm_io(&io_req, ms->nr_mirrors, io, NULL, IOPRIO_DEFAULT));
+ }
+ 
+ static void do_writes(struct mirror_set *ms, struct bio_list *writes)
+diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
+index 15649921f2a9b..568d10842b1f4 100644
+--- a/drivers/md/dm-snap-persistent.c
++++ b/drivers/md/dm-snap-persistent.c
+@@ -223,7 +223,7 @@ static void do_metadata(struct work_struct *work)
+ {
+ 	struct mdata_req *req = container_of(work, struct mdata_req, work);
+ 
+-	req->result = dm_io(req->io_req, 1, req->where, NULL);
++	req->result = dm_io(req->io_req, 1, req->where, NULL, IOPRIO_DEFAULT);
+ }
+ 
+ /*
+@@ -247,7 +247,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, blk_opf_t opf,
+ 	struct mdata_req req;
+ 
+ 	if (!metadata)
+-		return dm_io(&io_req, 1, &where, NULL);
++		return dm_io(&io_req, 1, &where, NULL, IOPRIO_DEFAULT);
+ 
+ 	req.where = &where;
+ 	req.io_req = &io_req;
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 7b620b187da90..49e4a35d70196 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -511,7 +511,7 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io,
+ 	io_loc.bdev = v->data_dev->bdev;
+ 	io_loc.sector = cur_block << (v->data_dev_block_bits - SECTOR_SHIFT);
+ 	io_loc.count = 1 << (v->data_dev_block_bits - SECTOR_SHIFT);
+-	r = dm_io(&io_req, 1, &io_loc, NULL);
++	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r))
+ 		goto free_ret;
+ 
+diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
+index 4620a98c99561..db93a91169d5e 100644
+--- a/drivers/md/dm-verity.h
++++ b/drivers/md/dm-verity.h
+@@ -80,12 +80,12 @@ struct dm_verity_io {
+ 	/* original value of bio->bi_end_io */
+ 	bio_end_io_t *orig_bi_end_io;
+ 
++	struct bvec_iter iter;
++
+ 	sector_t block;
+ 	unsigned int n_blocks;
+ 	bool in_tasklet;
+ 
+-	struct bvec_iter iter;
+-
+ 	struct work_struct work;
+ 
+ 	char *recheck_buffer;
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 074cb785eafc1..6a4279bfb1e77 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -531,7 +531,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc, bool wait_for_ios)
+ 		req.notify.context = &endio;
+ 
+ 		/* writing via async dm-io (implied by notify.fn above) won't return an error */
+-		(void) dm_io(&req, 1, &region, NULL);
++		(void) dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
+ 		i = j;
+ 	}
+ 
+@@ -568,7 +568,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc)
+ 	req.notify.fn = NULL;
+ 	req.notify.context = NULL;
+ 
+-	r = dm_io(&req, 1, &region, NULL);
++	r = dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r))
+ 		writecache_error(wc, r, "error writing superblock");
+ }
+@@ -596,7 +596,7 @@ static void writecache_disk_flush(struct dm_writecache *wc, struct dm_dev *dev)
+ 	req.client = wc->dm_io;
+ 	req.notify.fn = NULL;
+ 
+-	r = dm_io(&req, 1, &region, NULL);
++	r = dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
+ 	if (unlikely(r))
+ 		writecache_error(wc, r, "error flushing metadata: %d", r);
+ }
+@@ -990,7 +990,7 @@ static int writecache_read_metadata(struct dm_writecache *wc, sector_t n_sectors
+ 	req.client = wc->dm_io;
+ 	req.notify.fn = NULL;
+ 
+-	return dm_io(&req, 1, &region, NULL);
++	return dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
+ }
+ 
+ static void writecache_resume(struct dm_target *ti)
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 23c32cd1f1d88..4ff9bebb81ad5 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2945,6 +2945,9 @@ static void __dm_internal_suspend(struct mapped_device *md, unsigned int suspend
+ 
+ static void __dm_internal_resume(struct mapped_device *md)
+ {
++	int r;
++	struct dm_table *map;
++
+ 	BUG_ON(!md->internal_suspend_count);
+ 
+ 	if (--md->internal_suspend_count)
+@@ -2953,12 +2956,23 @@ static void __dm_internal_resume(struct mapped_device *md)
+ 	if (dm_suspended_md(md))
+ 		goto done; /* resume from nested suspend */
+ 
+-	/*
+-	 * NOTE: existing callers don't need to call dm_table_resume_targets
+-	 * (which may fail -- so best to avoid it for now by passing NULL map)
+-	 */
+-	(void) __dm_resume(md, NULL);
+-
++	map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
++	r = __dm_resume(md, map);
++	if (r) {
++		/*
++		 * If a preresume method of some target failed, we are in a
++		 * tricky situation. We can't return an error to the caller. We
++		 * can't fake success because then the "resume" and
++		 * "postsuspend" methods would not be paired correctly, and it
++		 * would break various targets, for example it would cause list
++		 * corruption in the "origin" target.
++		 *
++		 * So, we fake normal suspend here, to make sure that the
++		 * "resume" and "postsuspend" methods will be paired correctly.
++		 */
++		DMERR("Preresume method failed: %d", r);
++		set_bit(DMF_SUSPENDED, &md->flags);
++	}
+ done:
+ 	clear_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
+ 	smp_mb__after_atomic();
+diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c
+index d22276870283d..aa77133f31887 100644
+--- a/drivers/md/md-multipath.c
++++ b/drivers/md/md-multipath.c
+@@ -258,15 +258,6 @@ static int multipath_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 			goto abort;
+ 		}
+ 		p->rdev = NULL;
+-		if (!test_bit(RemoveSynchronized, &rdev->flags)) {
+-			synchronize_rcu();
+-			if (atomic_read(&rdev->nr_pending)) {
+-				/* lost the race, try later */
+-				err = -EBUSY;
+-				p->rdev = rdev;
+-				goto abort;
+-			}
+-		}
+ 		err = md_integrity_register(mddev);
+ 	}
+ abort:
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 58889bc72659a..99b60d37114c4 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2582,6 +2582,7 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
+  fail:
+ 	pr_warn("md: failed to register dev-%s for %s\n",
+ 		b, mdname(mddev));
++	mddev_destroy_serial_pool(mddev, rdev);
+ 	return err;
+ }
+ 
+@@ -6303,7 +6304,15 @@ static void md_clean(struct mddev *mddev)
+ 	mddev->persistent = 0;
+ 	mddev->level = LEVEL_NONE;
+ 	mddev->clevel[0] = 0;
+-	mddev->flags = 0;
++	/*
++	 * Don't clear MD_CLOSING, or mddev can be opened again.
++	 * 'hold_active != 0' means mddev is still in the creation
++	 * process and will be used later.
++	 */
++	if (mddev->hold_active)
++		mddev->flags = 0;
++	else
++		mddev->flags &= BIT_ULL_MASK(MD_CLOSING);
+ 	mddev->sb_flags = 0;
+ 	mddev->ro = MD_RDWR;
+ 	mddev->metadata_type[0] = 0;
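The flag handling above is the interesting part: md_clean() used to zero mddev->flags wholesale, which dropped MD_CLOSING and let a half-stopped array be reopened. The masked form keeps exactly that one bit. In miniature (the bit number is illustrative; BIT_ULL stands in for the kernel's BIT_ULL_MASK()):

#include <stdio.h>

#define MD_CLOSING	7	/* illustrative bit number */
#define BIT_ULL(nr)	(1ULL << (nr))

int main(void)
{
	unsigned long long flags = BIT_ULL(MD_CLOSING) | BIT_ULL(2) | BIT_ULL(5);

	flags &= BIT_ULL(MD_CLOSING);	/* all other state cleared */
	printf("flags=%#llx closing=%d\n", flags,
	       !!(flags & BIT_ULL(MD_CLOSING)));
	return 0;
}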
+@@ -7649,7 +7658,6 @@ static int md_ioctl(struct block_device *bdev, blk_mode_t mode,
+ 	int err = 0;
+ 	void __user *argp = (void __user *)arg;
+ 	struct mddev *mddev = NULL;
+-	bool did_set_md_closing = false;
+ 
+ 	if (!md_ioctl_valid(cmd))
+ 		return -ENOTTY;
+@@ -7733,7 +7741,6 @@ static int md_ioctl(struct block_device *bdev, blk_mode_t mode,
+ 			err = -EBUSY;
+ 			goto out;
+ 		}
+-		did_set_md_closing = true;
+ 		mutex_unlock(&mddev->open_mutex);
+ 		sync_blockdev(bdev);
+ 	}
+@@ -7875,7 +7882,7 @@ static int md_ioctl(struct block_device *bdev, blk_mode_t mode,
+ 				     mddev_unlock(mddev);
+ 
+ out:
+-	if(did_set_md_closing)
++	if (cmd == STOP_ARRAY_RO || (err && cmd == STOP_ARRAY))
+ 		clear_bit(MD_CLOSING, &mddev->flags);
+ 	return err;
+ }
+@@ -9307,44 +9314,19 @@ static int remove_and_add_spares(struct mddev *mddev,
+ 	struct md_rdev *rdev;
+ 	int spares = 0;
+ 	int removed = 0;
+-	bool remove_some = false;
+ 
+ 	if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
+ 		/* Mustn't remove devices when resync thread is running */
+ 		return 0;
+ 
+ 	rdev_for_each(rdev, mddev) {
+-		if ((this == NULL || rdev == this) &&
+-		    rdev->raid_disk >= 0 &&
+-		    !test_bit(Blocked, &rdev->flags) &&
+-		    test_bit(Faulty, &rdev->flags) &&
+-		    atomic_read(&rdev->nr_pending)==0) {
+-			/* Faulty non-Blocked devices with nr_pending == 0
+-			 * never get nr_pending incremented,
+-			 * never get Faulty cleared, and never get Blocked set.
+-			 * So we can synchronize_rcu now rather than once per device
+-			 */
+-			remove_some = true;
+-			set_bit(RemoveSynchronized, &rdev->flags);
+-		}
+-	}
+-
+-	if (remove_some)
+-		synchronize_rcu();
+-	rdev_for_each(rdev, mddev) {
+-		if ((this == NULL || rdev == this) &&
+-		    (test_bit(RemoveSynchronized, &rdev->flags) ||
+-		     rdev_removeable(rdev))) {
+-			if (mddev->pers->hot_remove_disk(
+-				    mddev, rdev) == 0) {
+-				sysfs_unlink_rdev(mddev, rdev);
+-				rdev->saved_raid_disk = rdev->raid_disk;
+-				rdev->raid_disk = -1;
+-				removed++;
+-			}
++		if ((this == NULL || rdev == this) && rdev_removeable(rdev) &&
++		    !mddev->pers->hot_remove_disk(mddev, rdev)) {
++			sysfs_unlink_rdev(mddev, rdev);
++			rdev->saved_raid_disk = rdev->raid_disk;
++			rdev->raid_disk = -1;
++			removed++;
+ 		}
+-		if (remove_some && test_bit(RemoveSynchronized, &rdev->flags))
+-			clear_bit(RemoveSynchronized, &rdev->flags);
+ 	}
+ 
+ 	if (removed && mddev->kobj.sd)
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index ade83af123a22..27d187ca6258a 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -190,11 +190,6 @@ enum flag_bits {
+ 				 * than other devices in the array
+ 				 */
+ 	ClusterRemove,
+-	RemoveSynchronized,	/* synchronize_rcu() was called after
+-				 * this device was known to be faulty,
+-				 * so it is safe to remove without
+-				 * another synchronize_rcu() call.
+-				 */
+ 	ExternalBbl,            /* External metadata provides bad
+ 				 * block management for a disk
+ 				 */
+@@ -212,6 +207,7 @@ enum flag_bits {
+ 				 * check if there is collision between raid1
+ 				 * serial bios.
+ 				 */
++	Nonrot,			/* non-rotational device (SSD) */
+ };
+ 
+ static inline int is_badblock(struct md_rdev *rdev, sector_t s, int sectors,
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index e138922d51292..750a802478e52 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -600,16 +600,13 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 	const sector_t this_sector = r1_bio->sector;
+ 	int sectors;
+ 	int best_good_sectors;
+-	int best_disk, best_dist_disk, best_pending_disk;
+-	int has_nonrot_disk;
++	int best_disk, best_dist_disk, best_pending_disk, sequential_disk;
+ 	int disk;
+ 	sector_t best_dist;
+ 	unsigned int min_pending;
+ 	struct md_rdev *rdev;
+ 	int choose_first;
+-	int choose_next_idle;
+ 
+-	rcu_read_lock();
+ 	/*
+ 	 * Check if we can balance. We can balance on the whole
+ 	 * device if no resync is going on, or below the resync window.
+@@ -619,12 +616,11 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 	sectors = r1_bio->sectors;
+ 	best_disk = -1;
+ 	best_dist_disk = -1;
++	sequential_disk = -1;
+ 	best_dist = MaxSector;
+ 	best_pending_disk = -1;
+ 	min_pending = UINT_MAX;
+ 	best_good_sectors = 0;
+-	has_nonrot_disk = 0;
+-	choose_next_idle = 0;
+ 	clear_bit(R1BIO_FailFast, &r1_bio->state);
+ 
+ 	if ((conf->mddev->recovery_cp < this_sector + sectors) ||
+@@ -640,9 +636,8 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 		sector_t first_bad;
+ 		int bad_sectors;
+ 		unsigned int pending;
+-		bool nonrot;
+ 
+-		rdev = rcu_dereference(conf->mirrors[disk].rdev);
++		rdev = conf->mirrors[disk].rdev;
+ 		if (r1_bio->bios[disk] == IO_BLOCKED
+ 		    || rdev == NULL
+ 		    || test_bit(Faulty, &rdev->flags))
+@@ -706,8 +701,6 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 			/* At least two disks to choose from so failfast is OK */
+ 			set_bit(R1BIO_FailFast, &r1_bio->state);
+ 
+-		nonrot = bdev_nonrot(rdev->bdev);
+-		has_nonrot_disk |= nonrot;
+ 		pending = atomic_read(&rdev->nr_pending);
+ 		dist = abs(this_sector - conf->mirrors[disk].head_position);
+ 		if (choose_first) {
+@@ -720,7 +713,6 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 			int opt_iosize = bdev_io_opt(rdev->bdev) >> 9;
+ 			struct raid1_info *mirror = &conf->mirrors[disk];
+ 
+-			best_disk = disk;
+ 			/*
+ 			 * If buffered sequential IO size exceeds optimal
+ 			 * iosize, check if there is idle disk. If yes, choose
+@@ -734,20 +726,27 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 			 * small, but not a big deal since when the second disk
+ 			 * starts IO, the first disk is likely still busy.
+ 			 */
+-			if (nonrot && opt_iosize > 0 &&
++			if (test_bit(Nonrot, &rdev->flags) && opt_iosize > 0 &&
+ 			    mirror->seq_start != MaxSector &&
+ 			    mirror->next_seq_sect > opt_iosize &&
+ 			    mirror->next_seq_sect - opt_iosize >=
+ 			    mirror->seq_start) {
+-				choose_next_idle = 1;
+-				continue;
++				/*
++				 * Add 'pending' to avoid choosing this disk if
++				 * there is another idle disk.
++				 */
++				pending++;
++				/*
++				 * If there is no other idle disk, this disk
++				 * will be chosen.
++				 */
++				sequential_disk = disk;
++			} else {
++				best_disk = disk;
++				break;
+ 			}
+-			break;
+ 		}
+ 
+-		if (choose_next_idle)
+-			continue;
+-
+ 		if (min_pending > pending) {
+ 			min_pending = pending;
+ 			best_pending_disk = disk;
+@@ -759,6 +758,13 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 		}
+ 	}
+ 
++	/*
++	 * The sequential IO size exceeds the optimal iosize but there is no
++	 * other idle disk, so choose the sequential disk.
++	 */
++	if (best_disk == -1 && min_pending != 0)
++		best_disk = sequential_disk;
++
+ 	/*
+ 	 * If all disks are rotational, choose the closest disk. If any disk is
+ 	 * non-rotational, choose the disk with fewer pending requests even the
+@@ -766,14 +772,14 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 	 * mixed rotational/non-rotational disks depending on workload.
+ 	 */
+ 	if (best_disk == -1) {
+-		if (has_nonrot_disk || min_pending == 0)
++		if (READ_ONCE(conf->nonrot_disks) || min_pending == 0)
+ 			best_disk = best_pending_disk;
+ 		else
+ 			best_disk = best_dist_disk;
+ 	}
+ 
+ 	if (best_disk >= 0) {
+-		rdev = rcu_dereference(conf->mirrors[best_disk].rdev);
++		rdev = conf->mirrors[best_disk].rdev;
+ 		if (!rdev)
+ 			goto retry;
+ 		atomic_inc(&rdev->nr_pending);
+@@ -784,7 +790,6 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
+ 
+ 		conf->mirrors[best_disk].next_seq_sect = this_sector + sectors;
+ 	}
+-	rcu_read_unlock();
+ 	*max_sectors = sectors;
+ 
+ 	return best_disk;
+@@ -1235,14 +1240,12 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
+ 
+ 	if (r1bio_existed) {
+ 		/* Need to get the block device name carefully */
+-		struct md_rdev *rdev;
+-		rcu_read_lock();
+-		rdev = rcu_dereference(conf->mirrors[r1_bio->read_disk].rdev);
++		struct md_rdev *rdev = conf->mirrors[r1_bio->read_disk].rdev;
++
+ 		if (rdev)
+ 			snprintf(b, sizeof(b), "%pg", rdev->bdev);
+ 		else
+ 			strcpy(b, "???");
+-		rcu_read_unlock();
+ 	}
+ 
+ 	/*
+@@ -1396,10 +1399,9 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 
+ 	disks = conf->raid_disks * 2;
+ 	blocked_rdev = NULL;
+-	rcu_read_lock();
+ 	max_sectors = r1_bio->sectors;
+ 	for (i = 0;  i < disks; i++) {
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
++		struct md_rdev *rdev = conf->mirrors[i].rdev;
+ 
+ 		/*
+ 		 * The write-behind io is only attempted on drives marked as
+@@ -1465,7 +1467,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 		}
+ 		r1_bio->bios[i] = bio;
+ 	}
+-	rcu_read_unlock();
+ 
+ 	if (unlikely(blocked_rdev)) {
+ 		/* Wait for this device to become unblocked */
+@@ -1617,15 +1618,16 @@ static void raid1_status(struct seq_file *seq, struct mddev *mddev)
+ 	struct r1conf *conf = mddev->private;
+ 	int i;
+ 
++	lockdep_assert_held(&mddev->lock);
++
+ 	seq_printf(seq, " [%d/%d] [", conf->raid_disks,
+ 		   conf->raid_disks - mddev->degraded);
+-	rcu_read_lock();
+ 	for (i = 0; i < conf->raid_disks; i++) {
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
++		struct md_rdev *rdev = READ_ONCE(conf->mirrors[i].rdev);
++
+ 		seq_printf(seq, "%s",
+ 			   rdev && test_bit(In_sync, &rdev->flags) ? "U" : "_");
+ 	}
+-	rcu_read_unlock();
+ 	seq_printf(seq, "]");
+ }
+ 
+@@ -1691,16 +1693,15 @@ static void print_conf(struct r1conf *conf)
+ 	pr_debug(" --- wd:%d rd:%d\n", conf->raid_disks - conf->mddev->degraded,
+ 		 conf->raid_disks);
+ 
+-	rcu_read_lock();
++	lockdep_assert_held(&conf->mddev->reconfig_mutex);
+ 	for (i = 0; i < conf->raid_disks; i++) {
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
++		struct md_rdev *rdev = conf->mirrors[i].rdev;
+ 		if (rdev)
+ 			pr_debug(" disk %d, wo:%d, o:%d, dev:%pg\n",
+ 				 i, !test_bit(In_sync, &rdev->flags),
+ 				 !test_bit(Faulty, &rdev->flags),
+ 				 rdev->bdev);
+ 	}
+-	rcu_read_unlock();
+ }
+ 
+ static void close_sync(struct r1conf *conf)
+@@ -1767,6 +1768,52 @@ static int raid1_spare_active(struct mddev *mddev)
+ 	return count;
+ }
+ 
++static bool raid1_add_conf(struct r1conf *conf, struct md_rdev *rdev, int disk,
++			   bool replacement)
++{
++	struct raid1_info *info = conf->mirrors + disk;
++
++	if (replacement)
++		info += conf->raid_disks;
++
++	if (info->rdev)
++		return false;
++
++	if (bdev_nonrot(rdev->bdev)) {
++		set_bit(Nonrot, &rdev->flags);
++		WRITE_ONCE(conf->nonrot_disks, conf->nonrot_disks + 1);
++	}
++
++	rdev->raid_disk = disk;
++	info->head_position = 0;
++	info->seq_start = MaxSector;
++	WRITE_ONCE(info->rdev, rdev);
++
++	return true;
++}
++
++static bool raid1_remove_conf(struct r1conf *conf, int disk)
++{
++	struct raid1_info *info = conf->mirrors + disk;
++	struct md_rdev *rdev = info->rdev;
++
++	if (!rdev || test_bit(In_sync, &rdev->flags) ||
++	    atomic_read(&rdev->nr_pending))
++		return false;
++
++	/* Only remove non-faulty devices if recovery is not possible. */
++	if (!test_bit(Faulty, &rdev->flags) &&
++	    rdev->mddev->recovery_disabled != conf->recovery_disabled &&
++	    rdev->mddev->degraded < conf->raid_disks)
++		return false;
++
++	if (test_and_clear_bit(Nonrot, &rdev->flags))
++		WRITE_ONCE(conf->nonrot_disks, conf->nonrot_disks - 1);
++
++	WRITE_ONCE(info->rdev, NULL);
++	return true;
++}
++
+ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ {
+ 	struct r1conf *conf = mddev->private;
+@@ -1802,15 +1849,13 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 				disk_stack_limits(mddev->gendisk, rdev->bdev,
+ 						  rdev->data_offset << 9);
+ 
+-			p->head_position = 0;
+-			rdev->raid_disk = mirror;
++			raid1_add_conf(conf, rdev, mirror, false);
+ 			err = 0;
+ 			/* As all devices are equivalent, we don't need a full recovery
+ 			 * if this was recently any drive of the array
+ 			 */
+ 			if (rdev->saved_raid_disk < 0)
+ 				conf->fullsync = 1;
+-			rcu_assign_pointer(p->rdev, rdev);
+ 			break;
+ 		}
+ 		if (test_bit(WantReplacement, &p->rdev->flags) &&
+@@ -1820,13 +1865,11 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 
+ 	if (err && repl_slot >= 0) {
+ 		/* Add this device as a replacement */
+-		p = conf->mirrors + repl_slot;
+ 		clear_bit(In_sync, &rdev->flags);
+ 		set_bit(Replacement, &rdev->flags);
+-		rdev->raid_disk = repl_slot;
++		raid1_add_conf(conf, rdev, repl_slot, true);
+ 		err = 0;
+ 		conf->fullsync = 1;
+-		rcu_assign_pointer(p[conf->raid_disks].rdev, rdev);
+ 	}
+ 
+ 	print_conf(conf);
+@@ -1843,36 +1886,20 @@ static int raid1_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	if (unlikely(number >= conf->raid_disks))
+ 		goto abort;
+ 
+-	if (rdev != p->rdev)
+-		p = conf->mirrors + conf->raid_disks + number;
++	if (rdev != p->rdev) {
++		number += conf->raid_disks;
++		p = conf->mirrors + number;
++	}
+ 
+ 	print_conf(conf);
+ 	if (rdev == p->rdev) {
+-		if (test_bit(In_sync, &rdev->flags) ||
+-		    atomic_read(&rdev->nr_pending)) {
++		if (!raid1_remove_conf(conf, number)) {
+ 			err = -EBUSY;
+ 			goto abort;
+ 		}
+-		/* Only remove non-faulty devices if recovery
+-		 * is not possible.
+-		 */
+-		if (!test_bit(Faulty, &rdev->flags) &&
+-		    mddev->recovery_disabled != conf->recovery_disabled &&
+-		    mddev->degraded < conf->raid_disks) {
+-			err = -EBUSY;
+-			goto abort;
+-		}
+-		p->rdev = NULL;
+-		if (!test_bit(RemoveSynchronized, &rdev->flags)) {
+-			synchronize_rcu();
+-			if (atomic_read(&rdev->nr_pending)) {
+-				/* lost the race, try later */
+-				err = -EBUSY;
+-				p->rdev = rdev;
+-				goto abort;
+-			}
+-		}
+-		if (conf->mirrors[conf->raid_disks + number].rdev) {
++
++		if (number < conf->raid_disks &&
++		    conf->mirrors[conf->raid_disks + number].rdev) {
+ 			/* We just removed a device that is being replaced.
+ 			 * Move down the replacement.  We drain all IO before
+ 			 * doing this to avoid confusion.
+@@ -1892,7 +1919,7 @@ static int raid1_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 				goto abort;
+ 			}
+ 			clear_bit(Replacement, &repl->flags);
+-			p->rdev = repl;
++			WRITE_ONCE(p->rdev, repl);
+ 			conf->mirrors[conf->raid_disks + number].rdev = NULL;
+ 			unfreeze_array(conf);
+ 		}
+@@ -2290,8 +2317,7 @@ static void fix_read_error(struct r1conf *conf, int read_disk,
+ 			sector_t first_bad;
+ 			int bad_sectors;
+ 
+-			rcu_read_lock();
+-			rdev = rcu_dereference(conf->mirrors[d].rdev);
++			rdev = conf->mirrors[d].rdev;
+ 			if (rdev &&
+ 			    (test_bit(In_sync, &rdev->flags) ||
+ 			     (!test_bit(Faulty, &rdev->flags) &&
+@@ -2299,15 +2325,14 @@ static void fix_read_error(struct r1conf *conf, int read_disk,
+ 			    is_badblock(rdev, sect, s,
+ 					&first_bad, &bad_sectors) == 0) {
+ 				atomic_inc(&rdev->nr_pending);
+-				rcu_read_unlock();
+ 				if (sync_page_io(rdev, sect, s<<9,
+ 					 conf->tmppage, REQ_OP_READ, false))
+ 					success = 1;
+ 				rdev_dec_pending(rdev, mddev);
+ 				if (success)
+ 					break;
+-			} else
+-				rcu_read_unlock();
++			}
++
+ 			d++;
+ 			if (d == conf->raid_disks * 2)
+ 				d = 0;
+@@ -2326,29 +2351,24 @@ static void fix_read_error(struct r1conf *conf, int read_disk,
+ 			if (d==0)
+ 				d = conf->raid_disks * 2;
+ 			d--;
+-			rcu_read_lock();
+-			rdev = rcu_dereference(conf->mirrors[d].rdev);
++			rdev = conf->mirrors[d].rdev;
+ 			if (rdev &&
+ 			    !test_bit(Faulty, &rdev->flags)) {
+ 				atomic_inc(&rdev->nr_pending);
+-				rcu_read_unlock();
+ 				r1_sync_page_io(rdev, sect, s,
+ 						conf->tmppage, REQ_OP_WRITE);
+ 				rdev_dec_pending(rdev, mddev);
+-			} else
+-				rcu_read_unlock();
++			}
+ 		}
+ 		d = start;
+ 		while (d != read_disk) {
+ 			if (d==0)
+ 				d = conf->raid_disks * 2;
+ 			d--;
+-			rcu_read_lock();
+-			rdev = rcu_dereference(conf->mirrors[d].rdev);
++			rdev = conf->mirrors[d].rdev;
+ 			if (rdev &&
+ 			    !test_bit(Faulty, &rdev->flags)) {
+ 				atomic_inc(&rdev->nr_pending);
+-				rcu_read_unlock();
+ 				if (r1_sync_page_io(rdev, sect, s,
+ 						conf->tmppage, REQ_OP_READ)) {
+ 					atomic_add(s, &rdev->corrected_errors);
+@@ -2359,8 +2379,7 @@ static void fix_read_error(struct r1conf *conf, int read_disk,
+ 						rdev->bdev);
+ 				}
+ 				rdev_dec_pending(rdev, mddev);
+-			} else
+-				rcu_read_unlock();
++			}
+ 		}
+ 		sectors -= s;
+ 		sect += s;
+@@ -2741,7 +2760,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 
+ 	r1_bio = raid1_alloc_init_r1buf(conf);
+ 
+-	rcu_read_lock();
+ 	/*
+ 	 * If we get a correctably read error during resync or recovery,
+ 	 * we might want to read from a different device.  So we
+@@ -2762,7 +2780,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 		struct md_rdev *rdev;
+ 		bio = r1_bio->bios[i];
+ 
+-		rdev = rcu_dereference(conf->mirrors[i].rdev);
++		rdev = conf->mirrors[i].rdev;
+ 		if (rdev == NULL ||
+ 		    test_bit(Faulty, &rdev->flags)) {
+ 			if (i < conf->raid_disks)
+@@ -2820,7 +2838,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 				bio->bi_opf |= MD_FAILFAST;
+ 		}
+ 	}
+-	rcu_read_unlock();
+ 	if (disk < 0)
+ 		disk = wonly;
+ 	r1_bio->read_disk = disk;
+@@ -3025,23 +3042,17 @@ static struct r1conf *setup_conf(struct mddev *mddev)
+ 
+ 	err = -EINVAL;
+ 	spin_lock_init(&conf->device_lock);
++	conf->raid_disks = mddev->raid_disks;
+ 	rdev_for_each(rdev, mddev) {
+ 		int disk_idx = rdev->raid_disk;
+-		if (disk_idx >= mddev->raid_disks
+-		    || disk_idx < 0)
++
++		if (disk_idx >= conf->raid_disks || disk_idx < 0)
+ 			continue;
+-		if (test_bit(Replacement, &rdev->flags))
+-			disk = conf->mirrors + mddev->raid_disks + disk_idx;
+-		else
+-			disk = conf->mirrors + disk_idx;
+ 
+-		if (disk->rdev)
++		if (!raid1_add_conf(conf, rdev, disk_idx,
++				    test_bit(Replacement, &rdev->flags)))
+ 			goto abort;
+-		disk->rdev = rdev;
+-		disk->head_position = 0;
+-		disk->seq_start = MaxSector;
+ 	}
+-	conf->raid_disks = mddev->raid_disks;
+ 	conf->mddev = mddev;
+ 	INIT_LIST_HEAD(&conf->retry_list);
+ 	INIT_LIST_HEAD(&conf->bio_end_io_list);
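
The read_balance() rework above stops skipping a disk outright once its
buffered sequential IO exceeds the optimal iosize: the disk is instead
penalized with one artificial pending request, remembered in
sequential_disk, and chosen again only when no other disk is idle. A
minimal userspace sketch of that selection policy follows; the struct
and helper are hypothetical stand-ins for struct r1conf and
read_balance(), not the kernel code itself.

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

struct disk {
	bool ok;                /* rdev present and not Faulty */
	bool seq_exceeds_opt;   /* buffered sequential IO > optimal iosize */
	unsigned int pending;   /* in-flight requests */
};

static int pick_read_disk(const struct disk *d, int n)
{
	unsigned int min_pending = UINT_MAX;
	int best = -1, sequential = -1;

	for (int i = 0; i < n; i++) {
		unsigned int pending;

		if (!d[i].ok)
			continue;
		pending = d[i].pending;
		if (d[i].seq_exceeds_opt) {
			pending++;      /* discourage, but don't forbid */
			sequential = i; /* remember as a fallback */
		}
		if (pending < min_pending) {
			min_pending = pending;
			best = i;
		}
	}
	/* No disk is idle, so keep the sequential streak going. */
	if (min_pending != 0 && sequential >= 0)
		best = sequential;
	return best;
}

int main(void)
{
	struct disk d[] = {
		{ true, true,  2 },  /* long sequential reader */
		{ true, false, 0 },  /* idle disk */
	};

	printf("%d\n", pick_read_disk(d, 2)); /* 1: the idle disk wins */
	d[1].pending = 4;
	printf("%d\n", pick_read_disk(d, 2)); /* 0: sequential fallback */
	return 0;
}

The +1 penalty only matters while some other disk is idle; once every
disk is busy, min_pending is non-zero and the sequential disk keeps its
streak, which is exactly what the new fallback hunk implements.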
+diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
+index 14d4211a123a8..5300cbaa58a41 100644
+--- a/drivers/md/raid1.h
++++ b/drivers/md/raid1.h
+@@ -71,6 +71,7 @@ struct r1conf {
+ 						 * allow for replacements.
+ 						 */
+ 	int			raid_disks;
++	int			nonrot_disks;
+ 
+ 	spinlock_t		device_lock;
+ 
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index b7b0a573e7f8b..6e828a6aa0b0a 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -2247,15 +2247,6 @@ static int raid10_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 		goto abort;
+ 	}
+ 	*rdevp = NULL;
+-	if (!test_bit(RemoveSynchronized, &rdev->flags)) {
+-		synchronize_rcu();
+-		if (atomic_read(&rdev->nr_pending)) {
+-			/* lost the race, try later */
+-			err = -EBUSY;
+-			*rdevp = rdev;
+-			goto abort;
+-		}
+-	}
+ 	if (p->replacement) {
+ 		/* We must have just cleared 'rdev' */
+ 		p->rdev = p->replacement;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 6fe334bb954ab..f03e4231bec11 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -8241,15 +8241,6 @@ static int raid5_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 		goto abort;
+ 	}
+ 	*rdevp = NULL;
+-	if (!test_bit(RemoveSynchronized, &rdev->flags)) {
+-		lockdep_assert_held(&mddev->reconfig_mutex);
+-		synchronize_rcu();
+-		if (atomic_read(&rdev->nr_pending)) {
+-			/* lost the race, try later */
+-			err = -EBUSY;
+-			rcu_assign_pointer(*rdevp, rdev);
+-		}
+-	}
+ 	if (!err) {
+ 		err = log_modify(conf, rdev, false);
+ 		if (err)
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+index a366566f22c3b..642c48e8c1f58 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+@@ -113,6 +113,7 @@ int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
+ {
+ 	unsigned pat;
+ 	unsigned plane;
++	int ret = 0;
+ 
+ 	tpg->max_line_width = max_w;
+ 	for (pat = 0; pat < TPG_MAX_PAT_LINES; pat++) {
+@@ -121,14 +122,18 @@ int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
+ 
+ 			tpg->lines[pat][plane] =
+ 				vzalloc(array3_size(max_w, 2, pixelsz));
+-			if (!tpg->lines[pat][plane])
+-				return -ENOMEM;
++			if (!tpg->lines[pat][plane]) {
++				ret = -ENOMEM;
++				goto free_lines;
++			}
+ 			if (plane == 0)
+ 				continue;
+ 			tpg->downsampled_lines[pat][plane] =
+ 				vzalloc(array3_size(max_w, 2, pixelsz));
+-			if (!tpg->downsampled_lines[pat][plane])
+-				return -ENOMEM;
++			if (!tpg->downsampled_lines[pat][plane]) {
++				ret = -ENOMEM;
++				goto free_lines;
++			}
+ 		}
+ 	}
+ 	for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
+@@ -136,18 +141,45 @@ int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
+ 
+ 		tpg->contrast_line[plane] =
+ 			vzalloc(array_size(pixelsz, max_w));
+-		if (!tpg->contrast_line[plane])
+-			return -ENOMEM;
++		if (!tpg->contrast_line[plane]) {
++			ret = -ENOMEM;
++			goto free_contrast_line;
++		}
+ 		tpg->black_line[plane] =
+ 			vzalloc(array_size(pixelsz, max_w));
+-		if (!tpg->black_line[plane])
+-			return -ENOMEM;
++		if (!tpg->black_line[plane]) {
++			ret = -ENOMEM;
++			goto free_contrast_line;
++		}
+ 		tpg->random_line[plane] =
+ 			vzalloc(array3_size(max_w, 2, pixelsz));
+-		if (!tpg->random_line[plane])
+-			return -ENOMEM;
++		if (!tpg->random_line[plane]) {
++			ret = -ENOMEM;
++			goto free_contrast_line;
++		}
+ 	}
+ 	return 0;
++
++free_contrast_line:
++	for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
++		vfree(tpg->contrast_line[plane]);
++		vfree(tpg->black_line[plane]);
++		vfree(tpg->random_line[plane]);
++		tpg->contrast_line[plane] = NULL;
++		tpg->black_line[plane] = NULL;
++		tpg->random_line[plane] = NULL;
++	}
++free_lines:
++	for (pat = 0; pat < TPG_MAX_PAT_LINES; pat++)
++		for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
++			vfree(tpg->lines[pat][plane]);
++			tpg->lines[pat][plane] = NULL;
++			if (plane == 0)
++				continue;
++			vfree(tpg->downsampled_lines[pat][plane]);
++			tpg->downsampled_lines[pat][plane] = NULL;
++		}
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(tpg_alloc);
+ 
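
The tpg_alloc() hunk converts bare -ENOMEM returns into goto-based
unwinding so every vzalloc() made before the failing one is released.
A runnable reduction of that pattern, with hypothetical names; the real
function frees whole arrays of buffers at each label:

#include <stdlib.h>

struct ctx {
	void *a;
	void *b;
	void *c;
};

/* Allocate all-or-nothing: on failure, free whatever was already
 * allocated, in reverse order, and reset the pointers. */
static int ctx_alloc(struct ctx *ctx, size_t sz)
{
	ctx->a = calloc(1, sz);
	if (!ctx->a)
		goto err;
	ctx->b = calloc(1, sz);
	if (!ctx->b)
		goto err_a;
	ctx->c = calloc(1, sz);
	if (!ctx->c)
		goto err_b;
	return 0;

err_b:
	free(ctx->b);
	ctx->b = NULL;
err_a:
	free(ctx->a);
	ctx->a = NULL;
err:
	return -1;
}

int main(void)
{
	struct ctx ctx = { 0 };

	return ctx_alloc(&ctx, 64);
}

Resetting the pointers at each label matters as much as the frees: it
keeps a later release path from double-freeing a stale pointer.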
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 49f0eb7d0b9d3..733d0bc4b4cc3 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -490,6 +490,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		dvbdevfops = kmemdup(template->fops, sizeof(*dvbdevfops), GFP_KERNEL);
+ 		if (!dvbdevfops) {
+ 			kfree(dvbdev);
++			*pdvbdev = NULL;
+ 			mutex_unlock(&dvbdev_register_lock);
+ 			return -ENOMEM;
+ 		}
+@@ -498,6 +499,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		if (!new_node) {
+ 			kfree(dvbdevfops);
+ 			kfree(dvbdev);
++			*pdvbdev = NULL;
+ 			mutex_unlock(&dvbdev_register_lock);
+ 			return -ENOMEM;
+ 		}
+@@ -531,6 +533,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		}
+ 		list_del(&dvbdev->list_head);
+ 		kfree(dvbdev);
++		*pdvbdev = NULL;
+ 		up_write(&minor_rwsem);
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return -EINVAL;
+@@ -553,6 +556,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		dvb_media_device_free(dvbdev);
+ 		list_del(&dvbdev->list_head);
+ 		kfree(dvbdev);
++		*pdvbdev = NULL;
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return ret;
+ 	}
+@@ -571,6 +575,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		dvb_media_device_free(dvbdev);
+ 		list_del(&dvbdev->list_head);
+ 		kfree(dvbdev);
++		*pdvbdev = NULL;
+ 		mutex_unlock(&dvbdev_register_lock);
+ 		return PTR_ERR(clsdev);
+ 	}
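
Every error path in dvb_register_device() now clears *pdvbdev after
freeing the device, so a caller that consults its pointer on failure
cannot dereference freed memory. A small sketch of that
output-parameter contract, with hypothetical names and a simulated late
failure:

#include <stdbool.h>
#include <stdlib.h>

struct dev { int minor; };

static int register_dev(struct dev **out, bool fail_late)
{
	struct dev *d = calloc(1, sizeof(*d));

	if (!d) {
		*out = NULL;
		return -1;
	}
	*out = d;	/* the caller's pointer is live from here on */

	if (fail_late) {
		free(d);
		*out = NULL;	/* never leave it dangling */
		return -1;
	}
	return 0;
}

int main(void)
{
	struct dev *d;

	if (register_dev(&d, true) < 0)
		return d != NULL;	/* 0: pointer is safely NULL */
	free(d);
	return 0;
}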
+diff --git a/drivers/media/dvb-frontends/stv0367.c b/drivers/media/dvb-frontends/stv0367.c
+index 48326434488c4..72540ef4e5f88 100644
+--- a/drivers/media/dvb-frontends/stv0367.c
++++ b/drivers/media/dvb-frontends/stv0367.c
+@@ -118,50 +118,32 @@ static const s32 stv0367cab_RF_LookUp2[RF_LOOKUP_TABLE2_SIZE][RF_LOOKUP_TABLE2_S
+ 	}
+ };
+ 
+-static
+-int stv0367_writeregs(struct stv0367_state *state, u16 reg, u8 *data, int len)
++static noinline_for_stack
++int stv0367_writereg(struct stv0367_state *state, u16 reg, u8 data)
+ {
+-	u8 buf[MAX_XFER_SIZE];
++	u8 buf[3] = { MSB(reg), LSB(reg), data };
+ 	struct i2c_msg msg = {
+ 		.addr = state->config->demod_address,
+ 		.flags = 0,
+ 		.buf = buf,
+-		.len = len + 2
++		.len = 3,
+ 	};
+ 	int ret;
+ 
+-	if (2 + len > sizeof(buf)) {
+-		printk(KERN_WARNING
+-		       "%s: i2c wr reg=%04x: len=%d is too big!\n",
+-		       KBUILD_MODNAME, reg, len);
+-		return -EINVAL;
+-	}
+-
+-
+-	buf[0] = MSB(reg);
+-	buf[1] = LSB(reg);
+-	memcpy(buf + 2, data, len);
+-
+ 	if (i2cdebug)
+ 		printk(KERN_DEBUG "%s: [%02x] %02x: %02x\n", __func__,
+-			state->config->demod_address, reg, buf[2]);
++			state->config->demod_address, reg, data);
+ 
+ 	ret = i2c_transfer(state->i2c, &msg, 1);
+ 	if (ret != 1)
+ 		printk(KERN_ERR "%s: i2c write error! ([%02x] %02x: %02x)\n",
+-			__func__, state->config->demod_address, reg, buf[2]);
++			__func__, state->config->demod_address, reg, data);
+ 
+ 	return (ret != 1) ? -EREMOTEIO : 0;
+ }
+ 
+-static int stv0367_writereg(struct stv0367_state *state, u16 reg, u8 data)
+-{
+-	u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
+-
+-	return stv0367_writeregs(state, reg, &tmp, 1);
+-}
+-
+-static u8 stv0367_readreg(struct stv0367_state *state, u16 reg)
++static noinline_for_stack
++u8 stv0367_readreg(struct stv0367_state *state, u16 reg)
+ {
+ 	u8 b0[] = { 0, 0 };
+ 	u8 b1[] = { 0 };
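
The rewritten stv0367_writereg() builds the whole I2C message in a
fixed 3-byte stack buffer, a big-endian 16-bit register address
followed by one data byte, instead of going through the variable-length
writeregs helper, and noinline_for_stack keeps that buffer out of every
caller's frame. A sketch of the payload layout only; the MSB/LSB macros
are reimplemented here rather than taken from the driver:

#include <stdint.h>
#include <stdio.h>

#define MSB(x) (((x) >> 8) & 0xff)
#define LSB(x) ((x) & 0xff)

/* 16-bit register address (big-endian) followed by the data byte. */
static void build_wr_msg(uint16_t reg, uint8_t data, uint8_t buf[3])
{
	buf[0] = MSB(reg);
	buf[1] = LSB(reg);
	buf[2] = data;
}

int main(void)
{
	uint8_t buf[3];

	build_wr_msg(0xf001, 0x5a, buf);
	printf("%02x %02x %02x\n", buf[0], buf[1], buf[2]);
	return 0;
}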
+diff --git a/drivers/media/i2c/imx290.c b/drivers/media/i2c/imx290.c
+index c6fea5837a19f..6dacd38ae947a 100644
+--- a/drivers/media/i2c/imx290.c
++++ b/drivers/media/i2c/imx290.c
+@@ -150,10 +150,10 @@
+ 
+ #define IMX290_PIXEL_ARRAY_WIDTH			1945
+ #define IMX290_PIXEL_ARRAY_HEIGHT			1097
+-#define IMX920_PIXEL_ARRAY_MARGIN_LEFT			12
+-#define IMX920_PIXEL_ARRAY_MARGIN_RIGHT			13
+-#define IMX920_PIXEL_ARRAY_MARGIN_TOP			8
+-#define IMX920_PIXEL_ARRAY_MARGIN_BOTTOM		9
++#define IMX290_PIXEL_ARRAY_MARGIN_LEFT			12
++#define IMX290_PIXEL_ARRAY_MARGIN_RIGHT			13
++#define IMX290_PIXEL_ARRAY_MARGIN_TOP			8
++#define IMX290_PIXEL_ARRAY_MARGIN_BOTTOM		9
+ #define IMX290_PIXEL_ARRAY_RECORDING_WIDTH		1920
+ #define IMX290_PIXEL_ARRAY_RECORDING_HEIGHT		1080
+ 
+@@ -1161,10 +1161,10 @@ static int imx290_get_selection(struct v4l2_subdev *sd,
+ 		 * The sensor moves the readout by 1 pixel based on flips to
+ 		 * keep the Bayer order the same.
+ 		 */
+-		sel->r.top = IMX920_PIXEL_ARRAY_MARGIN_TOP
++		sel->r.top = IMX290_PIXEL_ARRAY_MARGIN_TOP
+ 			   + (IMX290_PIXEL_ARRAY_RECORDING_HEIGHT - format->height) / 2
+ 			   + imx290->vflip->val;
+-		sel->r.left = IMX920_PIXEL_ARRAY_MARGIN_LEFT
++		sel->r.left = IMX290_PIXEL_ARRAY_MARGIN_LEFT
+ 			    + (IMX290_PIXEL_ARRAY_RECORDING_WIDTH - format->width) / 2
+ 			    + imx290->hflip->val;
+ 		sel->r.width = format->width;
+@@ -1183,8 +1183,8 @@ static int imx290_get_selection(struct v4l2_subdev *sd,
+ 		return 0;
+ 
+ 	case V4L2_SEL_TGT_CROP_DEFAULT:
+-		sel->r.top = IMX920_PIXEL_ARRAY_MARGIN_TOP;
+-		sel->r.left = IMX920_PIXEL_ARRAY_MARGIN_LEFT;
++		sel->r.top = IMX290_PIXEL_ARRAY_MARGIN_TOP;
++		sel->r.left = IMX290_PIXEL_ARRAY_MARGIN_LEFT;
+ 		sel->r.width = IMX290_PIXEL_ARRAY_RECORDING_WIDTH;
+ 		sel->r.height = IMX290_PIXEL_ARRAY_RECORDING_HEIGHT;
+ 
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 2785935da497b..558152575d102 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -2091,9 +2091,6 @@ static int tc358743_probe(struct i2c_client *client)
+ 	state->mbus_fmt_code = MEDIA_BUS_FMT_RGB888_1X24;
+ 
+ 	sd->dev = &client->dev;
+-	err = v4l2_async_register_subdev(sd);
+-	if (err < 0)
+-		goto err_hdl;
+ 
+ 	mutex_init(&state->confctl_mutex);
+ 
+@@ -2151,6 +2148,10 @@ static int tc358743_probe(struct i2c_client *client)
+ 	if (err)
+ 		goto err_work_queues;
+ 
++	err = v4l2_async_register_subdev(sd);
++	if (err < 0)
++		goto err_work_queues;
++
+ 	v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name,
+ 		  client->addr << 1, client->adapter->name);
+ 
+diff --git a/drivers/media/pci/intel/ivsc/mei_csi.c b/drivers/media/pci/intel/ivsc/mei_csi.c
+index 2a6b828fd8dd5..e111fd6ff6f6b 100644
+--- a/drivers/media/pci/intel/ivsc/mei_csi.c
++++ b/drivers/media/pci/intel/ivsc/mei_csi.c
+@@ -71,8 +71,8 @@ enum ivsc_privacy_status {
+ };
+ 
+ enum csi_pads {
+-	CSI_PAD_SOURCE,
+ 	CSI_PAD_SINK,
++	CSI_PAD_SOURCE,
+ 	CSI_NUM_PADS
+ };
+ 
+@@ -584,7 +584,7 @@ static int mei_csi_notify_bound(struct v4l2_async_notifier *notifier,
+ 	csi->remote_pad = pad;
+ 
+ 	return media_create_pad_link(&subdev->entity, pad,
+-				     &csi->subdev.entity, 1,
++				     &csi->subdev.entity, CSI_PAD_SINK,
+ 				     MEDIA_LNK_FL_ENABLED |
+ 				     MEDIA_LNK_FL_IMMUTABLE);
+ }
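
Reordering enum csi_pads puts CSI_PAD_SINK at index 0, matching the
order in which the pads are registered, and the link-creation call now
names the constant instead of hard-coding 1. A tiny sketch of why the
named index matters; create_link() is a hypothetical stand-in for
media_create_pad_link():

#include <stdio.h>

enum csi_pads {
	CSI_PAD_SINK,	/* 0: data arriving from the sensor side */
	CSI_PAD_SOURCE,	/* 1: data leaving towards the host */
	CSI_NUM_PADS
};

static int create_link(int remote_pad, int local_pad)
{
	printf("link: remote pad %d -> local pad %d\n",
	       remote_pad, local_pad);
	return 0;
}

int main(void)
{
	/*
	 * Hard-coding "1" as the sink index was only correct while
	 * CSI_PAD_SINK happened to be 1; after the reorder it is 0,
	 * and naming the constant stays correct either way.
	 */
	return create_link(0, CSI_PAD_SINK);
}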
+diff --git a/drivers/media/pci/ttpci/budget-av.c b/drivers/media/pci/ttpci/budget-av.c
+index 230b104a7cdf0..a47c5850ef875 100644
+--- a/drivers/media/pci/ttpci/budget-av.c
++++ b/drivers/media/pci/ttpci/budget-av.c
+@@ -1463,7 +1463,8 @@ static int budget_av_attach(struct saa7146_dev *dev, struct saa7146_pci_extensio
+ 		budget_av->has_saa7113 = 1;
+ 		err = saa7146_vv_init(dev, &vv_data);
+ 		if (err != 0) {
+-			/* fixme: proper cleanup here */
++			ttpci_budget_deinit(&budget_av->budget);
++			kfree(budget_av);
+ 			ERR("cannot init vv subsystem\n");
+ 			return err;
+ 		}
+@@ -1472,9 +1473,10 @@ static int budget_av_attach(struct saa7146_dev *dev, struct saa7146_pci_extensio
+ 		vv_data.vid_ops.vidioc_s_input = vidioc_s_input;
+ 
+ 		if ((err = saa7146_register_device(&budget_av->vd, dev, "knc1", VFL_TYPE_VIDEO))) {
+-			/* fixme: proper cleanup here */
+-			ERR("cannot register capture v4l2 device\n");
+ 			saa7146_vv_release(dev);
++			ttpci_budget_deinit(&budget_av->budget);
++			kfree(budget_av);
++			ERR("cannot register capture v4l2 device\n");
+ 			return err;
+ 		}
+ 
+diff --git a/drivers/media/platform/cadence/cdns-csi2rx.c b/drivers/media/platform/cadence/cdns-csi2rx.c
+index 889f4fbbafb3c..c2a979397b39f 100644
+--- a/drivers/media/platform/cadence/cdns-csi2rx.c
++++ b/drivers/media/platform/cadence/cdns-csi2rx.c
+@@ -465,7 +465,7 @@ static int csi2rx_async_bound(struct v4l2_async_notifier *notifier,
+ 	struct csi2rx_priv *csi2rx = v4l2_subdev_to_csi2rx(subdev);
+ 
+ 	csi2rx->source_pad = media_entity_get_fwnode_pad(&s_subdev->entity,
+-							 s_subdev->fwnode,
++							 asd->match.fwnode,
+ 							 MEDIA_PAD_FL_SOURCE);
+ 	if (csi2rx->source_pad < 0) {
+ 		dev_err(csi2rx->dev, "Couldn't find output pad for subdev %s\n",
+diff --git a/drivers/media/platform/mediatek/mdp/mtk_mdp_vpu.c b/drivers/media/platform/mediatek/mdp/mtk_mdp_vpu.c
+index b065ccd069140..378a1cba0144f 100644
+--- a/drivers/media/platform/mediatek/mdp/mtk_mdp_vpu.c
++++ b/drivers/media/platform/mediatek/mdp/mtk_mdp_vpu.c
+@@ -26,7 +26,7 @@ static void mtk_mdp_vpu_handle_init_ack(const struct mdp_ipi_comm_ack *msg)
+ 	vpu->inst_addr = msg->vpu_inst_addr;
+ }
+ 
+-static void mtk_mdp_vpu_ipi_handler(const void *data, unsigned int len,
++static void mtk_mdp_vpu_ipi_handler(void *data, unsigned int len,
+ 				    void *priv)
+ {
+ 	const struct mdp_ipi_comm_ack *msg = data;
+diff --git a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_vpu.c b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_vpu.c
+index 9f6e4b59455da..4c34344dc7dcb 100644
+--- a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_vpu.c
++++ b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_vpu.c
+@@ -29,15 +29,7 @@ static int mtk_vcodec_vpu_set_ipi_register(struct mtk_vcodec_fw *fw, int id,
+ 					   mtk_vcodec_ipi_handler handler,
+ 					   const char *name, void *priv)
+ {
+-	/*
+-	 * The handler we receive takes a void * as its first argument. We
+-	 * cannot change this because it needs to be passed down to the rproc
+-	 * subsystem when SCP is used. VPU takes a const argument, which is
+-	 * more constrained, so the conversion below is safe.
+-	 */
+-	ipi_handler_t handler_const = (ipi_handler_t)handler;
+-
+-	return vpu_ipi_register(fw->pdev, id, handler_const, name, priv);
++	return vpu_ipi_register(fw->pdev, id, handler, name, priv);
+ }
+ 
+ static int mtk_vcodec_vpu_ipi_send(struct mtk_vcodec_fw *fw, int id, void *buf,
+diff --git a/drivers/media/platform/mediatek/vpu/mtk_vpu.c b/drivers/media/platform/mediatek/vpu/mtk_vpu.c
+index 7243604a82a5b..724ae7c2ab3ba 100644
+--- a/drivers/media/platform/mediatek/vpu/mtk_vpu.c
++++ b/drivers/media/platform/mediatek/vpu/mtk_vpu.c
+@@ -635,7 +635,7 @@ int vpu_load_firmware(struct platform_device *pdev)
+ }
+ EXPORT_SYMBOL_GPL(vpu_load_firmware);
+ 
+-static void vpu_init_ipi_handler(const void *data, unsigned int len, void *priv)
++static void vpu_init_ipi_handler(void *data, unsigned int len, void *priv)
+ {
+ 	struct mtk_vpu *vpu = priv;
+ 	const struct vpu_run *run = data;
+diff --git a/drivers/media/platform/mediatek/vpu/mtk_vpu.h b/drivers/media/platform/mediatek/vpu/mtk_vpu.h
+index a56053ff135af..da05f3e740810 100644
+--- a/drivers/media/platform/mediatek/vpu/mtk_vpu.h
++++ b/drivers/media/platform/mediatek/vpu/mtk_vpu.h
+@@ -17,7 +17,7 @@
+  * VPU interfaces with other blocks by share memory and interrupt.
+  */
+ 
+-typedef void (*ipi_handler_t) (const void *data,
++typedef void (*ipi_handler_t) (void *data,
+ 			       unsigned int len,
+ 			       void *priv);
+ 
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-capture.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-capture.c
+index c6d7e01c89494..3752b702e270b 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-capture.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-capture.c
+@@ -725,6 +725,9 @@ irqreturn_t rkisp1_capture_isr(int irq, void *ctx)
+ 	unsigned int i;
+ 	u32 status;
+ 
++	if (!rkisp1->irqs_enabled)
++		return IRQ_NONE;
++
+ 	status = rkisp1_read(rkisp1, RKISP1_CIF_MI_MIS);
+ 	if (!status)
+ 		return IRQ_NONE;
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h b/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
+index 2d7f06281c390..a4e272adc1ad0 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
+@@ -449,6 +449,7 @@ struct rkisp1_debug {
+  * @debug:	   debug params to be exposed on debugfs
+  * @info:	   version-specific ISP information
+  * @irqs:          IRQ line numbers
++ * @irqs_enabled:  the hardware is enabled and can cause interrupts
+  */
+ struct rkisp1_device {
+ 	void __iomem *base_addr;
+@@ -470,6 +471,7 @@ struct rkisp1_device {
+ 	struct rkisp1_debug debug;
+ 	const struct rkisp1_info *info;
+ 	int irqs[RKISP1_NUM_IRQS];
++	bool irqs_enabled;
+ };
+ 
+ /*
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c
+index 702adee83322b..7320c1c72e688 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-csi.c
+@@ -196,6 +196,9 @@ irqreturn_t rkisp1_csi_isr(int irq, void *ctx)
+ 	struct rkisp1_device *rkisp1 = dev_get_drvdata(dev);
+ 	u32 val, status;
+ 
++	if (!rkisp1->irqs_enabled)
++		return IRQ_NONE;
++
+ 	status = rkisp1_read(rkisp1, RKISP1_CIF_MIPI_MIS);
+ 	if (!status)
+ 		return IRQ_NONE;
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+index acc559652d6eb..73cf08a740118 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-dev.c
+@@ -305,6 +305,24 @@ static int __maybe_unused rkisp1_runtime_suspend(struct device *dev)
+ {
+ 	struct rkisp1_device *rkisp1 = dev_get_drvdata(dev);
+ 
++	rkisp1->irqs_enabled = false;
++	/* Make sure the IRQ handler will see the above */
++	mb();
++
++	/*
++	 * Wait until any running IRQ handler has returned. The IRQ handler
++	 * may get called even after this (as it's a shared interrupt line)
++	 * but the 'irqs_enabled' flag will make the handler return immediately.
++	 */
++	for (unsigned int il = 0; il < ARRAY_SIZE(rkisp1->irqs); ++il) {
++		if (rkisp1->irqs[il] == -1)
++			continue;
++
++		/* Skip if the irq line is the same as previous */
++		if (il == 0 || rkisp1->irqs[il - 1] != rkisp1->irqs[il])
++			synchronize_irq(rkisp1->irqs[il]);
++	}
++
+ 	clk_bulk_disable_unprepare(rkisp1->clk_size, rkisp1->clks);
+ 	return pinctrl_pm_select_sleep_state(dev);
+ }
+@@ -321,6 +339,10 @@ static int __maybe_unused rkisp1_runtime_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	rkisp1->irqs_enabled = true;
++	/* Make sure the IRQ handler will see the above */
++	mb();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
+index 5fbc47bda6831..caffea6a46186 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
+@@ -971,6 +971,9 @@ irqreturn_t rkisp1_isp_isr(int irq, void *ctx)
+ 	struct rkisp1_device *rkisp1 = dev_get_drvdata(dev);
+ 	u32 status, isp_err;
+ 
++	if (!rkisp1->irqs_enabled)
++		return IRQ_NONE;
++
+ 	status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_MIS);
+ 	if (!status)
+ 		return IRQ_NONE;
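
The rkisp1 suspend path sets irqs_enabled to false, issues a memory
barrier, then calls synchronize_irq() on each distinct line, so a
handler racing with clock gating returns IRQ_NONE instead of touching
powered-down registers. A hedged userspace sketch of the
flag-plus-barrier shape, using C11 atomics in place of mb() and with
synchronize_irq() left as a comment:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool irqs_enabled = true;

/* ISR side: mirrors the "if (!rkisp1->irqs_enabled) return IRQ_NONE;"
 * check added to the capture, CSI and ISP handlers above. */
static int fake_isr(void)
{
	if (!atomic_load(&irqs_enabled))
		return 0;	/* IRQ_NONE: registers may be gated */
	/* ... read and ack the interrupt status here ... */
	return 1;		/* IRQ_HANDLED */
}

/* Suspend side: publish the flag before gating the clocks. The driver
 * additionally calls synchronize_irq() per line so a handler that
 * already passed the check finishes before the clocks go away. */
static void fake_suspend(void)
{
	atomic_store(&irqs_enabled, false);
	/* synchronize_irq(irq) would go here, then clock gating. */
}

int main(void)
{
	fake_suspend();
	return fake_isr();	/* 0: the handler refuses to run */
}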
+diff --git a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+index 90ab1d77b6a5e..f7ff0937828cf 100644
+--- a/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
++++ b/drivers/media/platform/sunxi/sun8i-di/sun8i-di.c
+@@ -66,6 +66,7 @@ static void deinterlace_device_run(void *priv)
+ 	struct vb2_v4l2_buffer *src, *dst;
+ 	unsigned int hstep, vstep;
+ 	dma_addr_t addr;
++	int i;
+ 
+ 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+@@ -160,6 +161,26 @@ static void deinterlace_device_run(void *priv)
+ 	deinterlace_write(dev, DEINTERLACE_CH1_HORZ_FACT, hstep);
+ 	deinterlace_write(dev, DEINTERLACE_CH1_VERT_FACT, vstep);
+ 
++	/* neutral filter coefficients */
++	deinterlace_set_bits(dev, DEINTERLACE_FRM_CTRL,
++			     DEINTERLACE_FRM_CTRL_COEF_ACCESS);
++	readl_poll_timeout(dev->base + DEINTERLACE_STATUS, val,
++			   val & DEINTERLACE_STATUS_COEF_STATUS, 2, 40);
++
++	for (i = 0; i < 32; i++) {
++		deinterlace_write(dev, DEINTERLACE_CH0_HORZ_COEF0 + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++		deinterlace_write(dev, DEINTERLACE_CH0_VERT_COEF + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++		deinterlace_write(dev, DEINTERLACE_CH1_HORZ_COEF0 + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++		deinterlace_write(dev, DEINTERLACE_CH1_VERT_COEF + i * 4,
++				  DEINTERLACE_IDENTITY_COEF);
++	}
++
++	deinterlace_clr_set_bits(dev, DEINTERLACE_FRM_CTRL,
++				 DEINTERLACE_FRM_CTRL_COEF_ACCESS, 0);
++
+ 	deinterlace_clr_set_bits(dev, DEINTERLACE_FIELD_CTRL,
+ 				 DEINTERLACE_FIELD_CTRL_FIELD_CNT_MSK,
+ 				 DEINTERLACE_FIELD_CTRL_FIELD_CNT(ctx->field));
+@@ -248,7 +269,6 @@ static irqreturn_t deinterlace_irq(int irq, void *data)
+ static void deinterlace_init(struct deinterlace_dev *dev)
+ {
+ 	u32 val;
+-	int i;
+ 
+ 	deinterlace_write(dev, DEINTERLACE_BYPASS,
+ 			  DEINTERLACE_BYPASS_CSC);
+@@ -284,27 +304,7 @@ static void deinterlace_init(struct deinterlace_dev *dev)
+ 
+ 	deinterlace_clr_set_bits(dev, DEINTERLACE_CHROMA_DIFF,
+ 				 DEINTERLACE_CHROMA_DIFF_TH_MSK,
+-				 DEINTERLACE_CHROMA_DIFF_TH(5));
+-
+-	/* neutral filter coefficients */
+-	deinterlace_set_bits(dev, DEINTERLACE_FRM_CTRL,
+-			     DEINTERLACE_FRM_CTRL_COEF_ACCESS);
+-	readl_poll_timeout(dev->base + DEINTERLACE_STATUS, val,
+-			   val & DEINTERLACE_STATUS_COEF_STATUS, 2, 40);
+-
+-	for (i = 0; i < 32; i++) {
+-		deinterlace_write(dev, DEINTERLACE_CH0_HORZ_COEF0 + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-		deinterlace_write(dev, DEINTERLACE_CH0_VERT_COEF + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-		deinterlace_write(dev, DEINTERLACE_CH1_HORZ_COEF0 + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-		deinterlace_write(dev, DEINTERLACE_CH1_VERT_COEF + i * 4,
+-				  DEINTERLACE_IDENTITY_COEF);
+-	}
+-
+-	deinterlace_clr_set_bits(dev, DEINTERLACE_FRM_CTRL,
+-				 DEINTERLACE_FRM_CTRL_COEF_ACCESS, 0);
++				 DEINTERLACE_CHROMA_DIFF_TH(31));
+ }
+ 
+ static inline struct deinterlace_ctx *deinterlace_file2ctx(struct file *file)
+@@ -929,11 +929,18 @@ static int deinterlace_runtime_resume(struct device *device)
+ 		return ret;
+ 	}
+ 
++	ret = reset_control_deassert(dev->rstc);
++	if (ret) {
++		dev_err(dev->dev, "Failed to apply reset\n");
++
++		goto err_exclusive_rate;
++	}
++
+ 	ret = clk_prepare_enable(dev->bus_clk);
+ 	if (ret) {
+ 		dev_err(dev->dev, "Failed to enable bus clock\n");
+ 
+-		goto err_exclusive_rate;
++		goto err_rst;
+ 	}
+ 
+ 	ret = clk_prepare_enable(dev->mod_clk);
+@@ -950,23 +957,16 @@ static int deinterlace_runtime_resume(struct device *device)
+ 		goto err_mod_clk;
+ 	}
+ 
+-	ret = reset_control_deassert(dev->rstc);
+-	if (ret) {
+-		dev_err(dev->dev, "Failed to apply reset\n");
+-
+-		goto err_ram_clk;
+-	}
+-
+ 	deinterlace_init(dev);
+ 
+ 	return 0;
+ 
+-err_ram_clk:
+-	clk_disable_unprepare(dev->ram_clk);
+ err_mod_clk:
+ 	clk_disable_unprepare(dev->mod_clk);
+ err_bus_clk:
+ 	clk_disable_unprepare(dev->bus_clk);
++err_rst:
++	reset_control_assert(dev->rstc);
+ err_exclusive_rate:
+ 	clk_rate_exclusive_put(dev->mod_clk);
+ 
+@@ -977,11 +977,12 @@ static int deinterlace_runtime_suspend(struct device *device)
+ {
+ 	struct deinterlace_dev *dev = dev_get_drvdata(device);
+ 
+-	reset_control_assert(dev->rstc);
+-
+ 	clk_disable_unprepare(dev->ram_clk);
+ 	clk_disable_unprepare(dev->mod_clk);
+ 	clk_disable_unprepare(dev->bus_clk);
++
++	reset_control_assert(dev->rstc);
++
+ 	clk_rate_exclusive_put(dev->mod_clk);
+ 
+ 	return 0;
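
The sun8i-di runtime-PM fix makes bring-up and tear-down symmetric:
resume deasserts the reset line before enabling clocks and unwinds in
reverse on failure, while suspend disables the clocks before asserting
reset. A sketch of that discipline with hypothetical stand-ins for the
reset_control and clk helpers:

#include <stdio.h>

static int  deassert_reset(void) { puts("reset deasserted"); return 0; }
static void assert_reset(void)   { puts("reset asserted"); }
static int  enable_clk(void)     { puts("clk on"); return 0; }
static void disable_clk(void)    { puts("clk off"); }

static int resume(void)
{
	/* Bring-up order: reset first, then clocks... */
	if (deassert_reset())
		return -1;
	if (enable_clk()) {
		assert_reset();	/* ...and unwind in reverse on failure */
		return -1;
	}
	return 0;
}

static void suspend(void)
{
	/* Tear-down mirrors resume: clocks off, then reset asserted. */
	disable_clk();
	assert_reset();
}

int main(void)
{
	if (resume())
		return 1;
	suspend();
	return 0;
}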
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 4d037c92af7c5..bae76023cf71d 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -4094,6 +4094,10 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ 	 * topology will likely change after the load of the em28xx subdrivers.
+ 	 */
+ #ifdef CONFIG_MEDIA_CONTROLLER
++	/*
++	 * No need to check the return value: the device will still be
++	 * usable without the media controller API.
++	 */
+ 	retval = media_device_register(dev->media_dev);
+ #endif
+ 
+diff --git a/drivers/media/usb/go7007/go7007-driver.c b/drivers/media/usb/go7007/go7007-driver.c
+index 0c24e29843048..eb03f98b2ef11 100644
+--- a/drivers/media/usb/go7007/go7007-driver.c
++++ b/drivers/media/usb/go7007/go7007-driver.c
+@@ -80,7 +80,7 @@ static int go7007_load_encoder(struct go7007 *go)
+ 	const struct firmware *fw_entry;
+ 	char fw_name[] = "go7007/go7007fw.bin";
+ 	void *bounce;
+-	int fw_len, rv = 0;
++	int fw_len;
+ 	u16 intr_val, intr_data;
+ 
+ 	if (go->boot_fw == NULL) {
+@@ -109,9 +109,11 @@ static int go7007_load_encoder(struct go7007 *go)
+ 	    go7007_read_interrupt(go, &intr_val, &intr_data) < 0 ||
+ 			(intr_val & ~0x1) != 0x5a5a) {
+ 		v4l2_err(go, "error transferring firmware\n");
+-		rv = -1;
++		kfree(go->boot_fw);
++		go->boot_fw = NULL;
++		return -1;
+ 	}
+-	return rv;
++	return 0;
+ }
+ 
+ MODULE_FIRMWARE("go7007/go7007fw.bin");
+diff --git a/drivers/media/usb/go7007/go7007-usb.c b/drivers/media/usb/go7007/go7007-usb.c
+index eeb85981e02b6..762c13e49bfa5 100644
+--- a/drivers/media/usb/go7007/go7007-usb.c
++++ b/drivers/media/usb/go7007/go7007-usb.c
+@@ -1201,7 +1201,9 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ 				u16 channel;
+ 
+ 				/* read channel number from GPIO[1:0] */
+-				go7007_read_addr(go, 0x3c81, &channel);
++				if (go7007_read_addr(go, 0x3c81, &channel))
++					goto allocfail;
++
+ 				channel &= 0x3;
+ 				go->board_id = GO7007_BOARDID_ADLINK_MPG24;
+ 				usb->board = board = &board_adlink_mpg24;
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-context.c b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+index 1764674de98bc..73c95ba2328a4 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-context.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-context.c
+@@ -90,8 +90,10 @@ static void pvr2_context_destroy(struct pvr2_context *mp)
+ }
+ 
+ 
+-static void pvr2_context_notify(struct pvr2_context *mp)
++static void pvr2_context_notify(void *ptr)
+ {
++	struct pvr2_context *mp = ptr;
++
+ 	pvr2_context_set_notify(mp,!0);
+ }
+ 
+@@ -106,9 +108,7 @@ static void pvr2_context_check(struct pvr2_context *mp)
+ 		pvr2_trace(PVR2_TRACE_CTXT,
+ 			   "pvr2_context %p (initialize)", mp);
+ 		/* Finish hardware initialization */
+-		if (pvr2_hdw_initialize(mp->hdw,
+-					(void (*)(void *))pvr2_context_notify,
+-					mp)) {
++		if (pvr2_hdw_initialize(mp->hdw, pvr2_context_notify, mp)) {
+ 			mp->video_stream.stream =
+ 				pvr2_hdw_get_video_stream(mp->hdw);
+ 			/* Trigger interface initialization.  By doing this
+@@ -267,9 +267,9 @@ static void pvr2_context_exit(struct pvr2_context *mp)
+ void pvr2_context_disconnect(struct pvr2_context *mp)
+ {
+ 	pvr2_hdw_disconnect(mp->hdw);
+-	mp->disconnect_flag = !0;
+ 	if (!pvr2_context_shutok())
+ 		pvr2_context_notify(mp);
++	mp->disconnect_flag = !0;
+ }
+ 
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-dvb.c b/drivers/media/usb/pvrusb2/pvrusb2-dvb.c
+index 26811efe0fb58..9a9bae21c6147 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-dvb.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-dvb.c
+@@ -88,8 +88,10 @@ static int pvr2_dvb_feed_thread(void *data)
+ 	return stat;
+ }
+ 
+-static void pvr2_dvb_notify(struct pvr2_dvb_adapter *adap)
++static void pvr2_dvb_notify(void *ptr)
+ {
++	struct pvr2_dvb_adapter *adap = ptr;
++
+ 	wake_up(&adap->buffer_wait_data);
+ }
+ 
+@@ -149,7 +151,7 @@ static int pvr2_dvb_stream_do_start(struct pvr2_dvb_adapter *adap)
+ 	}
+ 
+ 	pvr2_stream_set_callback(pvr->video_stream.stream,
+-				 (pvr2_stream_callback) pvr2_dvb_notify, adap);
++				 pvr2_dvb_notify, adap);
+ 
+ 	ret = pvr2_stream_set_buffer_count(stream, PVR2_DVB_BUFFER_COUNT);
+ 	if (ret < 0) return ret;
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c b/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
+index c04ab7258d645..d608b793fa847 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
+@@ -1033,8 +1033,10 @@ static int pvr2_v4l2_open(struct file *file)
+ }
+ 
+ 
+-static void pvr2_v4l2_notify(struct pvr2_v4l2_fh *fhp)
++static void pvr2_v4l2_notify(void *ptr)
+ {
++	struct pvr2_v4l2_fh *fhp = ptr;
++
+ 	wake_up(&fhp->wait_data);
+ }
+ 
+@@ -1067,7 +1069,7 @@ static int pvr2_v4l2_iosetup(struct pvr2_v4l2_fh *fh)
+ 
+ 	hdw = fh->channel.mc_head->hdw;
+ 	sp = fh->pdi->stream->stream;
+-	pvr2_stream_set_callback(sp,(pvr2_stream_callback)pvr2_v4l2_notify,fh);
++	pvr2_stream_set_callback(sp, pvr2_v4l2_notify, fh);
+ 	pvr2_hdw_set_stream_type(hdw,fh->pdi->config);
+ 	if ((ret = pvr2_hdw_set_streaming(hdw,!0)) < 0) return ret;
+ 	return pvr2_ioread_set_enabled(fh->rhp,!0);
+@@ -1198,11 +1200,6 @@ static void pvr2_v4l2_dev_init(struct pvr2_v4l2_dev *dip,
+ 		dip->minor_type = pvr2_v4l_type_video;
+ 		nr_ptr = video_nr;
+ 		caps |= V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_AUDIO;
+-		if (!dip->stream) {
+-			pr_err(KBUILD_MODNAME
+-				": Failed to set up pvrusb2 v4l video dev due to missing stream instance\n");
+-			return;
+-		}
+ 		break;
+ 	case VFL_TYPE_VBI:
+ 		dip->config = pvr2_config_vbi;
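
All three pvrusb2 hunks replace callbacks that were cast to
pvr2_stream_callback (or to void (*)(void *)) with functions that
really take a void * and recover the typed pointer inside. Calling a
function through an incompatible pointer type is undefined behaviour
and is rejected by kernel CFI. A minimal sketch of the corrected shape;
the names are illustrative, not the driver's:

#include <stdio.h>

struct ctx { int value; };

typedef void (*callback_t)(void *ptr);

/* Take void *, cast inside: the same shape pvr2_context_notify(),
 * pvr2_dvb_notify() and pvr2_v4l2_notify() now use. */
static void notify(void *ptr)
{
	struct ctx *c = ptr;

	printf("notified: %d\n", c->value);
}

static void fire(callback_t cb, void *priv)
{
	cb(priv);	/* called through its declared type */
}

int main(void)
{
	struct ctx c = { .value = 42 };

	/* The old code cast a typed notifier to callback_t instead. */
	fire(notify, &c);
	return 0;
}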
+diff --git a/drivers/media/v4l2-core/v4l2-cci.c b/drivers/media/v4l2-core/v4l2-cci.c
+index 10005c80f43b5..ee3475bed37fa 100644
+--- a/drivers/media/v4l2-core/v4l2-cci.c
++++ b/drivers/media/v4l2-core/v4l2-cci.c
+@@ -32,7 +32,7 @@ int cci_read(struct regmap *map, u32 reg, u64 *val, int *err)
+ 
+ 	ret = regmap_bulk_read(map, reg, buf, len);
+ 	if (ret) {
+-		dev_err(regmap_get_device(map), "Error reading reg 0x%4x: %d\n",
++		dev_err(regmap_get_device(map), "Error reading reg 0x%04x: %d\n",
+ 			reg, ret);
+ 		goto out;
+ 	}
+@@ -131,7 +131,7 @@ int cci_write(struct regmap *map, u32 reg, u64 val, int *err)
+ 
+ 	ret = regmap_bulk_write(map, reg, buf, len);
+ 	if (ret)
+-		dev_err(regmap_get_device(map), "Error writing reg 0x%4x: %d\n",
++		dev_err(regmap_get_device(map), "Error writing reg 0x%04x: %d\n",
+ 			reg, ret);
+ 
+ out:
+diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
+index 0cc30397fbad5..8db9ac9c1433f 100644
+--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
++++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
+@@ -1084,11 +1084,17 @@ static int v4l2_m2m_register_entity(struct media_device *mdev,
+ 	entity->function = function;
+ 
+ 	ret = media_entity_pads_init(entity, num_pads, pads);
+-	if (ret)
++	if (ret) {
++		kfree(entity->name);
++		entity->name = NULL;
+ 		return ret;
++	}
+ 	ret = media_device_register_entity(mdev, entity);
+-	if (ret)
++	if (ret) {
++		kfree(entity->name);
++		entity->name = NULL;
+ 		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/memory/tegra/tegra234.c b/drivers/memory/tegra/tegra234.c
+index abff87f917cb4..b8a7af2d36c11 100644
+--- a/drivers/memory/tegra/tegra234.c
++++ b/drivers/memory/tegra/tegra234.c
+@@ -121,7 +121,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1RDB,
+-		.name = "dla0rdb",
++		.name = "dla1rdb",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+@@ -407,7 +407,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1RDB1,
+-		.name = "dla0rdb1",
++		.name = "dla1rdb1",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+@@ -417,7 +417,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1WRB,
+-		.name = "dla0wrb",
++		.name = "dla1wrb",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+@@ -699,7 +699,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1RDA,
+-		.name = "dla0rda",
++		.name = "dla1rda",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+@@ -709,7 +709,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1FALRDB,
+-		.name = "dla0falrdb",
++		.name = "dla1falrdb",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+@@ -719,7 +719,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1WRA,
+-		.name = "dla0wra",
++		.name = "dla1wra",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+@@ -729,7 +729,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1FALWRB,
+-		.name = "dla0falwrb",
++		.name = "dla1falwrb",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+@@ -917,7 +917,7 @@ static const struct tegra_mc_client tegra234_mc_clients[] = {
+ 		},
+ 	}, {
+ 		.id = TEGRA234_MEMORY_CLIENT_DLA1RDA1,
+-		.name = "dla0rda1",
++		.name = "dla1rda1",
+ 		.sid = TEGRA234_SID_NVDLA1,
+ 		.regs = {
+ 			.sid = {
+diff --git a/drivers/mfd/altera-sysmgr.c b/drivers/mfd/altera-sysmgr.c
+index 0e52bd2ebd74b..fb5f988e61f37 100644
+--- a/drivers/mfd/altera-sysmgr.c
++++ b/drivers/mfd/altera-sysmgr.c
+@@ -109,7 +109,9 @@ struct regmap *altr_sysmgr_regmap_lookup_by_phandle(struct device_node *np,
+ 
+ 	dev = driver_find_device_by_of_node(&altr_sysmgr_driver.driver,
+ 					    (void *)sysmgr_np);
+-	of_node_put(sysmgr_np);
++	if (property)
++		of_node_put(sysmgr_np);
++
+ 	if (!dev)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 
+diff --git a/drivers/mfd/cs42l43.c b/drivers/mfd/cs42l43.c
+index 7b6d07cbe6fc6..1cea3f8f467d4 100644
+--- a/drivers/mfd/cs42l43.c
++++ b/drivers/mfd/cs42l43.c
+@@ -84,7 +84,7 @@ const struct reg_default cs42l43_reg_default[CS42L43_N_DEFAULTS] = {
+ 	{ CS42L43_DRV_CTRL_5,				0x136C00C0 },
+ 	{ CS42L43_GPIO_CTRL1,				0x00000707 },
+ 	{ CS42L43_GPIO_CTRL2,				0x00000000 },
+-	{ CS42L43_GPIO_FN_SEL,				0x00000000 },
++	{ CS42L43_GPIO_FN_SEL,				0x00000004 },
+ 	{ CS42L43_MCLK_SRC_SEL,				0x00000000 },
+ 	{ CS42L43_SAMPLE_RATE1,				0x00000003 },
+ 	{ CS42L43_SAMPLE_RATE2,				0x00000003 },
+@@ -131,38 +131,38 @@ const struct reg_default cs42l43_reg_default[CS42L43_N_DEFAULTS] = {
+ 	{ CS42L43_ASP_TX_CH4_CTRL,			0x00170091 },
+ 	{ CS42L43_ASP_TX_CH5_CTRL,			0x001700C1 },
+ 	{ CS42L43_ASP_TX_CH6_CTRL,			0x001700F1 },
+-	{ CS42L43_ASPTX1_INPUT,				0x00800000 },
+-	{ CS42L43_ASPTX2_INPUT,				0x00800000 },
+-	{ CS42L43_ASPTX3_INPUT,				0x00800000 },
+-	{ CS42L43_ASPTX4_INPUT,				0x00800000 },
+-	{ CS42L43_ASPTX5_INPUT,				0x00800000 },
+-	{ CS42L43_ASPTX6_INPUT,				0x00800000 },
+-	{ CS42L43_SWIRE_DP1_CH1_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP1_CH2_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP1_CH3_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP1_CH4_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP2_CH1_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP2_CH2_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP3_CH1_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP3_CH2_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP4_CH1_INPUT,			0x00800000 },
+-	{ CS42L43_SWIRE_DP4_CH2_INPUT,			0x00800000 },
+-	{ CS42L43_ASRC_INT1_INPUT1,			0x00800000 },
+-	{ CS42L43_ASRC_INT2_INPUT1,			0x00800000 },
+-	{ CS42L43_ASRC_INT3_INPUT1,			0x00800000 },
+-	{ CS42L43_ASRC_INT4_INPUT1,			0x00800000 },
+-	{ CS42L43_ASRC_DEC1_INPUT1,			0x00800000 },
+-	{ CS42L43_ASRC_DEC2_INPUT1,			0x00800000 },
+-	{ CS42L43_ASRC_DEC3_INPUT1,			0x00800000 },
+-	{ CS42L43_ASRC_DEC4_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC1INT1_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC1INT2_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC1DEC1_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC1DEC2_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC2INT1_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC2INT2_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC2DEC1_INPUT1,			0x00800000 },
+-	{ CS42L43_ISRC2DEC2_INPUT1,			0x00800000 },
++	{ CS42L43_ASPTX1_INPUT,				0x00000000 },
++	{ CS42L43_ASPTX2_INPUT,				0x00000000 },
++	{ CS42L43_ASPTX3_INPUT,				0x00000000 },
++	{ CS42L43_ASPTX4_INPUT,				0x00000000 },
++	{ CS42L43_ASPTX5_INPUT,				0x00000000 },
++	{ CS42L43_ASPTX6_INPUT,				0x00000000 },
++	{ CS42L43_SWIRE_DP1_CH1_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP1_CH2_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP1_CH3_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP1_CH4_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP2_CH1_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP2_CH2_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP3_CH1_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP3_CH2_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP4_CH1_INPUT,			0x00000000 },
++	{ CS42L43_SWIRE_DP4_CH2_INPUT,			0x00000000 },
++	{ CS42L43_ASRC_INT1_INPUT1,			0x00000000 },
++	{ CS42L43_ASRC_INT2_INPUT1,			0x00000000 },
++	{ CS42L43_ASRC_INT3_INPUT1,			0x00000000 },
++	{ CS42L43_ASRC_INT4_INPUT1,			0x00000000 },
++	{ CS42L43_ASRC_DEC1_INPUT1,			0x00000000 },
++	{ CS42L43_ASRC_DEC2_INPUT1,			0x00000000 },
++	{ CS42L43_ASRC_DEC3_INPUT1,			0x00000000 },
++	{ CS42L43_ASRC_DEC4_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC1INT1_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC1INT2_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC1DEC1_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC1DEC2_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC2INT1_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC2INT2_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC2DEC1_INPUT1,			0x00000000 },
++	{ CS42L43_ISRC2DEC2_INPUT1,			0x00000000 },
+ 	{ CS42L43_EQ1MIX_INPUT1,			0x00800000 },
+ 	{ CS42L43_EQ1MIX_INPUT2,			0x00800000 },
+ 	{ CS42L43_EQ1MIX_INPUT3,			0x00800000 },
+@@ -171,8 +171,8 @@ const struct reg_default cs42l43_reg_default[CS42L43_N_DEFAULTS] = {
+ 	{ CS42L43_EQ2MIX_INPUT2,			0x00800000 },
+ 	{ CS42L43_EQ2MIX_INPUT3,			0x00800000 },
+ 	{ CS42L43_EQ2MIX_INPUT4,			0x00800000 },
+-	{ CS42L43_SPDIF1_INPUT1,			0x00800000 },
+-	{ CS42L43_SPDIF2_INPUT1,			0x00800000 },
++	{ CS42L43_SPDIF1_INPUT1,			0x00000000 },
++	{ CS42L43_SPDIF2_INPUT1,			0x00000000 },
+ 	{ CS42L43_AMP1MIX_INPUT1,			0x00800000 },
+ 	{ CS42L43_AMP1MIX_INPUT2,			0x00800000 },
+ 	{ CS42L43_AMP1MIX_INPUT3,			0x00800000 },
+@@ -217,7 +217,7 @@ const struct reg_default cs42l43_reg_default[CS42L43_N_DEFAULTS] = {
+ 	{ CS42L43_CTRL_REG,				0x00000006 },
+ 	{ CS42L43_FDIV_FRAC,				0x40000000 },
+ 	{ CS42L43_CAL_RATIO,				0x00000080 },
+-	{ CS42L43_SPI_CLK_CONFIG1,			0x00000000 },
++	{ CS42L43_SPI_CLK_CONFIG1,			0x00000001 },
+ 	{ CS42L43_SPI_CONFIG1,				0x00000000 },
+ 	{ CS42L43_SPI_CONFIG2,				0x00000000 },
+ 	{ CS42L43_SPI_CONFIG3,				0x00000001 },
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index c9550368d9ea5..7d0e91164cbaa 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -238,7 +238,9 @@ struct regmap *syscon_regmap_lookup_by_phandle(struct device_node *np,
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	regmap = syscon_node_to_regmap(syscon_np);
+-	of_node_put(syscon_np);
++
++	if (property)
++		of_node_put(syscon_np);
+ 
+ 	return regmap;
+ }
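
In both the altera-sysmgr and syscon hunks the node reference is now
dropped only when it was actually taken, that is, when a property name
was given and the node came from of_parse_phandle(); when the caller
passed the node directly, no reference was acquired and the
unconditional of_node_put() underflowed someone else's refcount. A
sketch of the pairing rule, with get_phandle()/put_node() as
hypothetical stand-ins:

#include <stdio.h>

struct node { int refcount; };

static struct node *get_phandle(struct node *n) { n->refcount++; return n; }
static void put_node(struct node *n) { n->refcount--; }

/* Only drop the reference we took: a NULL property means the node
 * was borrowed from the caller. */
static void lookup(struct node *np, const char *property)
{
	struct node *target = property ? get_phandle(np) : np;

	/* ... use target ... */

	if (property)
		put_node(target);
}

int main(void)
{
	struct node n = { .refcount = 1 };

	lookup(&n, "some-prop");	/* balanced: get + put */
	lookup(&n, NULL);		/* borrowed: no put */
	printf("refcount: %d\n", n.refcount);	/* still 1 */
	return 0;
}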
+diff --git a/drivers/misc/mei/gsc_proxy/mei_gsc_proxy.c b/drivers/misc/mei/gsc_proxy/mei_gsc_proxy.c
+index be52b113aea93..89364bdbb1290 100644
+--- a/drivers/misc/mei/gsc_proxy/mei_gsc_proxy.c
++++ b/drivers/misc/mei/gsc_proxy/mei_gsc_proxy.c
+@@ -96,7 +96,8 @@ static const struct component_master_ops mei_component_master_ops = {
+  *
+  *    The function checks if the device is a PCI device and an
+  *    Intel VGA adapter, the subcomponent is SW Proxy
+- *    and the parent of MEI PCI and the parent of VGA are the same PCH device.
++ *    and the VGA adapter is on bus 0, which is reserved for built-in
++ *    devices, in order to reject discrete GFX.
+  *
+  * @dev: master device
+  * @subcomponent: subcomponent to match (I915_COMPONENT_SWPROXY)
+@@ -123,7 +124,8 @@ static int mei_gsc_proxy_component_match(struct device *dev, int subcomponent,
+ 	if (subcomponent != I915_COMPONENT_GSC_PROXY)
+ 		return 0;
+ 
+-	return component_compare_dev(dev->parent, ((struct device *)data)->parent);
++	/* Only built-in GFX */
++	return (pdev->bus->number == 0);
+ }
+ 
+ static int mei_gsc_proxy_probe(struct mei_cl_device *cldev,
+@@ -146,7 +148,7 @@ static int mei_gsc_proxy_probe(struct mei_cl_device *cldev,
+ 	}
+ 
+ 	component_match_add_typed(&cldev->dev, &master_match,
+-				  mei_gsc_proxy_component_match, cldev->dev.parent);
++				  mei_gsc_proxy_component_match, NULL);
+ 	if (IS_ERR_OR_NULL(master_match)) {
+ 		ret = -ENOMEM;
+ 		goto err_exit;
+diff --git a/drivers/mmc/host/wmt-sdmmc.c b/drivers/mmc/host/wmt-sdmmc.c
+index 77d5f1d244899..860380931b6cd 100644
+--- a/drivers/mmc/host/wmt-sdmmc.c
++++ b/drivers/mmc/host/wmt-sdmmc.c
+@@ -883,7 +883,6 @@ static void wmt_mci_remove(struct platform_device *pdev)
+ {
+ 	struct mmc_host *mmc;
+ 	struct wmt_mci_priv *priv;
+-	struct resource *res;
+ 	u32 reg_tmp;
+ 
+ 	mmc = platform_get_drvdata(pdev);
+@@ -911,9 +910,6 @@ static void wmt_mci_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(priv->clk_sdmmc);
+ 	clk_put(priv->clk_sdmmc);
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	release_mem_region(res->start, resource_size(res));
+-
+ 	mmc_free_host(mmc);
+ 
+ 	dev_info(&pdev->dev, "WMT MCI device removed\n");
+diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
+index 746a27d15d440..96eb2e782c382 100644
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -518,7 +518,7 @@ static int physmap_flash_probe(struct platform_device *dev)
+ 		if (!info->maps[i].phys)
+ 			info->maps[i].phys = res->start;
+ 
+-		info->win_order = get_bitmask_order(resource_size(res)) - 1;
++		info->win_order = fls64(resource_size(res)) - 1;
+ 		info->maps[i].size = BIT(info->win_order +
+ 					 (info->gpios ?
+ 					  info->gpios->ndescs : 0));
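
get_bitmask_order() takes an unsigned int, so a resource of 4 GiB or
more was truncated before the window order was computed; fls64()
operates on the full 64-bit resource size. A worked example of the two
computations, using userspace stand-ins for the kernel's fls helpers:

#include <stdint.h>
#include <stdio.h>

static int fls32(uint32_t x) { return x ? 32 - __builtin_clz(x) : 0; }
static int my_fls64(uint64_t x) { return x ? 64 - __builtin_clzll(x) : 0; }

int main(void)
{
	uint64_t size = 1ULL << 32;	/* a 4 GiB flash window */

	/* The old path truncated the size to 32 bits first... */
	printf("old order: %d\n", fls32((uint32_t)size) - 1);	/* -1 */
	/* ...while fls64() sees the whole resource size. */
	printf("new order: %d\n", my_fls64(size) - 1);		/* 32 */
	return 0;
}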
+diff --git a/drivers/mtd/nand/raw/lpc32xx_mlc.c b/drivers/mtd/nand/raw/lpc32xx_mlc.c
+index 488fd452611a6..677fcb03f9bef 100644
+--- a/drivers/mtd/nand/raw/lpc32xx_mlc.c
++++ b/drivers/mtd/nand/raw/lpc32xx_mlc.c
+@@ -303,8 +303,9 @@ static int lpc32xx_nand_device_ready(struct nand_chip *nand_chip)
+ 	return 0;
+ }
+ 
+-static irqreturn_t lpc3xxx_nand_irq(int irq, struct lpc32xx_nand_host *host)
++static irqreturn_t lpc3xxx_nand_irq(int irq, void *data)
+ {
++	struct lpc32xx_nand_host *host = data;
+ 	uint8_t sr;
+ 
+ 	/* Clear interrupt flag by reading status */
+@@ -780,7 +781,7 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
+ 		goto release_dma_chan;
+ 	}
+ 
+-	if (request_irq(host->irq, (irq_handler_t)&lpc3xxx_nand_irq,
++	if (request_irq(host->irq, &lpc3xxx_nand_irq,
+ 			IRQF_TRIGGER_HIGH, DRV_NAME, host)) {
+ 		dev_err(&pdev->dev, "Error requesting NAND IRQ\n");
+ 		res = -ENXIO;
+diff --git a/drivers/mtd/nand/spi/esmt.c b/drivers/mtd/nand/spi/esmt.c
+index 31c439a557b18..4597a82de23a4 100644
+--- a/drivers/mtd/nand/spi/esmt.c
++++ b/drivers/mtd/nand/spi/esmt.c
+@@ -104,7 +104,8 @@ static const struct mtd_ooblayout_ops f50l1g41lb_ooblayout = {
+ 
+ static const struct spinand_info esmt_c8_spinand_table[] = {
+ 	SPINAND_INFO("F50L1G41LB",
+-		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x01),
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x01, 0x7f,
++				0x7f, 0x7f),
+ 		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+ 		     NAND_ECCREQ(1, 512),
+ 		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+@@ -113,7 +114,8 @@ static const struct spinand_info esmt_c8_spinand_table[] = {
+ 		     0,
+ 		     SPINAND_ECCINFO(&f50l1g41lb_ooblayout, NULL)),
+ 	SPINAND_INFO("F50D1G41LB",
+-		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x11),
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x11, 0x7f,
++				0x7f, 0x7f),
+ 		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+ 		     NAND_ECCREQ(1, 512),
+ 		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+@@ -122,7 +124,8 @@ static const struct spinand_info esmt_c8_spinand_table[] = {
+ 		     0,
+ 		     SPINAND_ECCINFO(&f50l1g41lb_ooblayout, NULL)),
+ 	SPINAND_INFO("F50D2G41KA",
+-		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x51),
++		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x51, 0x7f,
++				0x7f, 0x7f),
+ 		     NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+ 		     NAND_ECCREQ(8, 512),
+ 		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 16ecc11c7f62a..2395b1225cc8a 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -418,6 +418,13 @@ static void m_can_config_endisable(struct m_can_classdev *cdev, bool enable)
+ 
+ static inline void m_can_enable_all_interrupts(struct m_can_classdev *cdev)
+ {
++	if (!cdev->net->irq) {
++		dev_dbg(cdev->dev, "Start hrtimer\n");
++		hrtimer_start(&cdev->hrtimer,
++			      ms_to_ktime(HRTIMER_POLL_INTERVAL_MS),
++			      HRTIMER_MODE_REL_PINNED);
++	}
++
+ 	/* Only interrupt line 0 is used in this driver */
+ 	m_can_write(cdev, M_CAN_ILE, ILE_EINT0);
+ }
+@@ -425,6 +432,11 @@ static inline void m_can_enable_all_interrupts(struct m_can_classdev *cdev)
+ static inline void m_can_disable_all_interrupts(struct m_can_classdev *cdev)
+ {
+ 	m_can_write(cdev, M_CAN_ILE, 0x0);
++
++	if (!cdev->net->irq) {
++		dev_dbg(cdev->dev, "Stop hrtimer\n");
++		hrtimer_cancel(&cdev->hrtimer);
++	}
+ }
+ 
+ /* Retrieve internal timestamp counter from TSCV.TSC, and shift it to 32-bit
+@@ -1417,12 +1429,6 @@ static int m_can_start(struct net_device *dev)
+ 
+ 	m_can_enable_all_interrupts(cdev);
+ 
+-	if (!dev->irq) {
+-		dev_dbg(cdev->dev, "Start hrtimer\n");
+-		hrtimer_start(&cdev->hrtimer, ms_to_ktime(HRTIMER_POLL_INTERVAL_MS),
+-			      HRTIMER_MODE_REL_PINNED);
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -1577,11 +1583,6 @@ static void m_can_stop(struct net_device *dev)
+ {
+ 	struct m_can_classdev *cdev = netdev_priv(dev);
+ 
+-	if (!dev->irq) {
+-		dev_dbg(cdev->dev, "Stop hrtimer\n");
+-		hrtimer_cancel(&cdev->hrtimer);
+-	}
+-
+ 	/* disable all interrupts */
+ 	m_can_disable_all_interrupts(cdev);
+ 
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index ff4b39601c937..d5d7e48b7b928 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -2176,6 +2176,8 @@ static int ksz_pirq_setup(struct ksz_device *dev, u8 p)
+ 	return ksz_irq_common_setup(dev, pirq);
+ }
+ 
++static int ksz_parse_drive_strength(struct ksz_device *dev);
++
+ static int ksz_setup(struct dsa_switch *ds)
+ {
+ 	struct ksz_device *dev = ds->priv;
+@@ -2197,6 +2199,10 @@ static int ksz_setup(struct dsa_switch *ds)
+ 		return ret;
+ 	}
+ 
++	ret = ksz_parse_drive_strength(dev);
++	if (ret)
++		return ret;
++
+ 	/* set broadcast storm protection 10% rate */
+ 	regmap_update_bits(ksz_regmap_16(dev), regs[S_BROADCAST_CTRL],
+ 			   BROADCAST_STORM_RATE,
+@@ -4236,10 +4242,6 @@ int ksz_switch_register(struct ksz_device *dev)
+ 	for (port_num = 0; port_num < dev->info->port_cnt; ++port_num)
+ 		dev->ports[port_num].interface = PHY_INTERFACE_MODE_NA;
+ 	if (dev->dev->of_node) {
+-		ret = ksz_parse_drive_strength(dev);
+-		if (ret)
+-			return ret;
+-
+ 		ret = of_get_phy_mode(dev->dev->of_node, &interface);
+ 		if (ret == 0)
+ 			dev->compat_interface = interface;
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 2333f6383b542..e6b8bf6035565 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -998,20 +998,56 @@ static void mt7530_setup_port5(struct dsa_switch *ds, phy_interface_t interface)
+ 	mutex_unlock(&priv->reg_mutex);
+ }
+ 
++/* On page 205, section "8.6.3 Frame filtering" of the active standard, IEEE Std
++ * 802.1Q™-2022, it is stated that frames with 01:80:C2:00:00:00-0F as MAC DA
++ * must only be propagated to C-VLAN and MAC Bridge components. That means
++ * VLAN-aware and VLAN-unaware bridges. On the switch designs with CPU ports,
++ * these frames are supposed to be processed by the CPU (software). So we make
++ * the switch only forward them to the CPU port. And if received from a CPU
++ * port, forward to a single port. The software is responsible for making the
++ * switch conform to the latter by setting a single port as destination port on
++ * the special tag.
++ *
++ * This switch intellectual property cannot conform to this part of the standard
++ * fully. Whilst the REV_UN frame tag covers the remaining :04-0D and :0F MAC
++ * DAs, it also includes :22-FF, whose scope of propagation is not supposed
++ * to be restricted.
++ */
+ static void
+ mt753x_trap_frames(struct mt7530_priv *priv)
+ {
+-	/* Trap BPDUs to the CPU port(s) */
+-	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
++	/* Trap 802.1X PAE frames and BPDUs to the CPU port(s) and egress them
++	 * VLAN-untagged.
++	 */
++	mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_EG_TAG_MASK |
++		   MT753X_PAE_PORT_FW_MASK | MT753X_BPDU_EG_TAG_MASK |
++		   MT753X_BPDU_PORT_FW_MASK,
++		   MT753X_PAE_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++		   MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY) |
++		   MT753X_BPDU_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+ 		   MT753X_BPDU_CPU_ONLY);
+ 
+-	/* Trap 802.1X PAE frames to the CPU port(s) */
+-	mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_PORT_FW_MASK,
+-		   MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY));
++	/* Trap frames with :01 and :02 MAC DAs to the CPU port(s) and egress
++	 * them VLAN-untagged.
++	 */
++	mt7530_rmw(priv, MT753X_RGAC1, MT753X_R02_EG_TAG_MASK |
++		   MT753X_R02_PORT_FW_MASK | MT753X_R01_EG_TAG_MASK |
++		   MT753X_R01_PORT_FW_MASK,
++		   MT753X_R02_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++		   MT753X_R02_PORT_FW(MT753X_BPDU_CPU_ONLY) |
++		   MT753X_R01_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++		   MT753X_BPDU_CPU_ONLY);
+ 
+-	/* Trap LLDP frames with :0E MAC DA to the CPU port(s) */
+-	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_PORT_FW_MASK,
+-		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));
++	/* Trap frames with :03 and :0E MAC DAs to the CPU port(s) and egress
++	 * them VLAN-untagged.
++	 */
++	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_EG_TAG_MASK |
++		   MT753X_R0E_PORT_FW_MASK | MT753X_R03_EG_TAG_MASK |
++		   MT753X_R03_PORT_FW_MASK,
++		   MT753X_R0E_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY) |
++		   MT753X_R03_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++		   MT753X_BPDU_CPU_ONLY);
+ }
+ 
+ static int
+@@ -2243,11 +2279,11 @@ mt7530_setup(struct dsa_switch *ds)
+ 	 */
+ 	if (priv->mcm) {
+ 		reset_control_assert(priv->rstc);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		reset_control_deassert(priv->rstc);
+ 	} else {
+ 		gpiod_set_value_cansleep(priv->reset, 0);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		gpiod_set_value_cansleep(priv->reset, 1);
+ 	}
+ 
+@@ -2449,11 +2485,11 @@ mt7531_setup(struct dsa_switch *ds)
+ 	 */
+ 	if (priv->mcm) {
+ 		reset_control_assert(priv->rstc);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		reset_control_deassert(priv->rstc);
+ 	} else {
+ 		gpiod_set_value_cansleep(priv->reset, 0);
+-		usleep_range(1000, 1100);
++		usleep_range(5000, 5100);
+ 		gpiod_set_value_cansleep(priv->reset, 1);
+ 	}
+ 
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index 17e42d30fff4b..75bc9043c8c0a 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -65,14 +65,33 @@ enum mt753x_id {
+ 
+ /* Registers for BPDU and PAE frame control*/
+ #define MT753X_BPC			0x24
+-#define  MT753X_BPDU_PORT_FW_MASK	GENMASK(2, 0)
++#define  MT753X_PAE_EG_TAG_MASK		GENMASK(24, 22)
++#define  MT753X_PAE_EG_TAG(x)		FIELD_PREP(MT753X_PAE_EG_TAG_MASK, x)
+ #define  MT753X_PAE_PORT_FW_MASK	GENMASK(18, 16)
+ #define  MT753X_PAE_PORT_FW(x)		FIELD_PREP(MT753X_PAE_PORT_FW_MASK, x)
++#define  MT753X_BPDU_EG_TAG_MASK	GENMASK(8, 6)
++#define  MT753X_BPDU_EG_TAG(x)		FIELD_PREP(MT753X_BPDU_EG_TAG_MASK, x)
++#define  MT753X_BPDU_PORT_FW_MASK	GENMASK(2, 0)
++
++/* Register for :01 and :02 MAC DA frame control */
++#define MT753X_RGAC1			0x28
++#define  MT753X_R02_EG_TAG_MASK		GENMASK(24, 22)
++#define  MT753X_R02_EG_TAG(x)		FIELD_PREP(MT753X_R02_EG_TAG_MASK, x)
++#define  MT753X_R02_PORT_FW_MASK	GENMASK(18, 16)
++#define  MT753X_R02_PORT_FW(x)		FIELD_PREP(MT753X_R02_PORT_FW_MASK, x)
++#define  MT753X_R01_EG_TAG_MASK		GENMASK(8, 6)
++#define  MT753X_R01_EG_TAG(x)		FIELD_PREP(MT753X_R01_EG_TAG_MASK, x)
++#define  MT753X_R01_PORT_FW_MASK	GENMASK(2, 0)
+ 
+ /* Register for :03 and :0E MAC DA frame control */
+ #define MT753X_RGAC2			0x2c
++#define  MT753X_R0E_EG_TAG_MASK		GENMASK(24, 22)
++#define  MT753X_R0E_EG_TAG(x)		FIELD_PREP(MT753X_R0E_EG_TAG_MASK, x)
+ #define  MT753X_R0E_PORT_FW_MASK	GENMASK(18, 16)
+ #define  MT753X_R0E_PORT_FW(x)		FIELD_PREP(MT753X_R0E_PORT_FW_MASK, x)
++#define  MT753X_R03_EG_TAG_MASK		GENMASK(8, 6)
++#define  MT753X_R03_EG_TAG(x)		FIELD_PREP(MT753X_R03_EG_TAG_MASK, x)
++#define  MT753X_R03_PORT_FW_MASK	GENMASK(2, 0)
+ 
+ enum mt753x_bpdu_port_fw {
+ 	MT753X_BPDU_FOLLOW_MFC,
+@@ -253,6 +272,7 @@ enum mt7530_port_mode {
+ enum mt7530_vlan_port_eg_tag {
+ 	MT7530_VLAN_EG_DISABLED = 0,
+ 	MT7530_VLAN_EG_CONSISTENT = 1,
++	MT7530_VLAN_EG_UNTAGGED = 4,
+ };
+ 
+ enum mt7530_vlan_port_attr {
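
The new mt7530.h register fields follow the usual GENMASK()/FIELD_PREP() pattern: GENMASK(h, l) builds a contiguous bit mask and FIELD_PREP() shifts a value into that mask. A userspace sketch with simplified reimplementations of the two macros (the kernel versions add constant folding and type checks), composing part of the MT753X_BPC value from the hunk above; the BPDU_CPU_ONLY value here is an illustrative assumption:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel macros. */
#define GENMASK(h, l)	(((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define FIELD_PREP(m, v)	(((uint32_t)(v) << __builtin_ctz(m)) & (m))

#define PAE_EG_TAG_MASK		GENMASK(24, 22)
#define PAE_PORT_FW_MASK	GENMASK(18, 16)
#define VLAN_EG_UNTAGGED	4	/* from the enum above */
#define BPDU_CPU_ONLY		6	/* assumption: illustrative value */

int main(void)
{
	uint32_t val = FIELD_PREP(PAE_EG_TAG_MASK, VLAN_EG_UNTAGGED) |
		       FIELD_PREP(PAE_PORT_FW_MASK, BPDU_CPU_ONLY);

	printf("mask=%#x val=%#x\n",
	       PAE_EG_TAG_MASK | PAE_PORT_FW_MASK, val);
	return 0;
}
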
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index c44c44e26ddfe..4fa27c9a33974 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -3238,22 +3238,6 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return NETDEV_TX_OK;
+ }
+ 
+-static u16 ena_select_queue(struct net_device *dev, struct sk_buff *skb,
+-			    struct net_device *sb_dev)
+-{
+-	u16 qid;
+-	/* we suspect that this is good for in--kernel network services that
+-	 * want to loop incoming skb rx to tx in normal user generated traffic,
+-	 * most probably we will not get to this
+-	 */
+-	if (skb_rx_queue_recorded(skb))
+-		qid = skb_get_rx_queue(skb);
+-	else
+-		qid = netdev_pick_tx(dev, skb, NULL);
+-
+-	return qid;
+-}
+-
+ static void ena_config_host_info(struct ena_com_dev *ena_dev, struct pci_dev *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -3424,7 +3408,6 @@ static const struct net_device_ops ena_netdev_ops = {
+ 	.ndo_open		= ena_open,
+ 	.ndo_stop		= ena_close,
+ 	.ndo_start_xmit		= ena_start_xmit,
+-	.ndo_select_queue	= ena_select_queue,
+ 	.ndo_get_stats64	= ena_get_stats64,
+ 	.ndo_tx_timeout		= ena_tx_timeout,
+ 	.ndo_change_mtu		= ena_change_mtu,
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+index d8b1824c334d3..0bc1367fd6492 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+@@ -1002,9 +1002,6 @@ static inline void bnx2x_set_fw_mac_addr(__le16 *fw_hi, __le16 *fw_mid,
+ static inline void bnx2x_free_rx_mem_pool(struct bnx2x *bp,
+ 					  struct bnx2x_alloc_pool *pool)
+ {
+-	if (!pool->page)
+-		return;
+-
+ 	put_page(pool->page);
+ 
+ 	pool->page = NULL;
+@@ -1015,6 +1012,9 @@ static inline void bnx2x_free_rx_sge_range(struct bnx2x *bp,
+ {
+ 	int i;
+ 
++	if (!fp->page_pool.page)
++		return;
++
+ 	if (fp->mode == TPA_MODE_DISABLED)
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
+index 3b6dbf158b98d..f72dc0cee30e5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_dcbnl.c
+@@ -76,7 +76,7 @@ static int hns3_dcbnl_ieee_delapp(struct net_device *ndev, struct dcb_app *app)
+ 	if (hns3_nic_resetting(ndev))
+ 		return -EBUSY;
+ 
+-	if (h->kinfo.dcb_ops->ieee_setapp)
++	if (h->kinfo.dcb_ops->ieee_delapp)
+ 		return h->kinfo.dcb_ops->ieee_delapp(h, app);
+ 
+ 	return -EOPNOTSUPP;
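
The hns3 fix is a copy-paste guard bug: the delapp path checked for the ieee_setapp callback before invoking ieee_delapp, so a driver providing only one of the two would either be skipped or call a NULL pointer. The guard must test the same member it calls. A tiny sketch of the corrected pattern:

#include <errno.h>
#include <stdio.h>

struct dcb_ops {
	int (*ieee_setapp)(int app);
	int (*ieee_delapp)(int app);
};

static int do_delapp(const struct dcb_ops *ops, int app)
{
	if (ops->ieee_delapp)		/* test the callback we invoke... */
		return ops->ieee_delapp(app);
	return -EOPNOTSUPP;		/* ...not its sibling */
}

static int delapp_impl(int app) { printf("del app %d\n", app); return 0; }

int main(void)
{
	/* Ops table implements delapp only: the old "check setapp, call
	 * delapp" guard would wrongly have returned -EOPNOTSUPP here. */
	struct dcb_ops ops = { .ieee_setapp = NULL, .ieee_delapp = delapp_impl };

	return do_delapp(&ops, 7);
}
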
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 5ea9e59569eff..609d3799d7738 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2890,7 +2890,10 @@ static int hclge_mac_init(struct hclge_dev *hdev)
+ 	int ret;
+ 
+ 	hdev->support_sfp_query = true;
+-	hdev->hw.mac.duplex = HCLGE_MAC_FULL;
++
++	if (!test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
++		hdev->hw.mac.duplex = HCLGE_MAC_FULL;
++
+ 	ret = hclge_cfg_mac_speed_dup_hw(hdev, hdev->hw.mac.speed,
+ 					 hdev->hw.mac.duplex, hdev->hw.mac.lane_num);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+index 80a2a0073d97a..507d7ce26d831 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+@@ -108,7 +108,7 @@ void hclge_ptp_get_rx_hwts(struct hnae3_handle *handle, struct sk_buff *skb,
+ 	u64 ns = nsec;
+ 	u32 sec_h;
+ 
+-	if (!test_bit(HCLGE_PTP_FLAG_RX_EN, &hdev->ptp->flags))
++	if (!hdev->ptp || !test_bit(HCLGE_PTP_FLAG_RX_EN, &hdev->ptp->flags))
+ 		return;
+ 
+ 	/* Since the BD does not have enough space for the higher 16 bits of
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index a9cca2d24120a..dabf33cec3e1b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6572,6 +6572,7 @@ static void ice_update_vsi_ring_stats(struct ice_vsi *vsi)
+ {
+ 	struct rtnl_link_stats64 *net_stats, *stats_prev;
+ 	struct rtnl_link_stats64 *vsi_stats;
++	struct ice_pf *pf = vsi->back;
+ 	u64 pkts, bytes;
+ 	int i;
+ 
+@@ -6617,21 +6618,18 @@ static void ice_update_vsi_ring_stats(struct ice_vsi *vsi)
+ 	net_stats = &vsi->net_stats;
+ 	stats_prev = &vsi->net_stats_prev;
+ 
+-	/* clear prev counters after reset */
+-	if (vsi_stats->tx_packets < stats_prev->tx_packets ||
+-	    vsi_stats->rx_packets < stats_prev->rx_packets) {
+-		stats_prev->tx_packets = 0;
+-		stats_prev->tx_bytes = 0;
+-		stats_prev->rx_packets = 0;
+-		stats_prev->rx_bytes = 0;
++	/* Update netdev counters, but keep in mind that values could start at
++	 * a random value after PF reset. And as we increase the reported stat
++	 * by the diff of Cur - Prev, we need to be sure that Prev is valid. If
++	 * it's not, let's skip this round.
++	 */
++	if (likely(pf->stat_prev_loaded)) {
++		net_stats->tx_packets += vsi_stats->tx_packets - stats_prev->tx_packets;
++		net_stats->tx_bytes += vsi_stats->tx_bytes - stats_prev->tx_bytes;
++		net_stats->rx_packets += vsi_stats->rx_packets - stats_prev->rx_packets;
++		net_stats->rx_bytes += vsi_stats->rx_bytes - stats_prev->rx_bytes;
+ 	}
+ 
+-	/* update netdev counters */
+-	net_stats->tx_packets += vsi_stats->tx_packets - stats_prev->tx_packets;
+-	net_stats->tx_bytes += vsi_stats->tx_bytes - stats_prev->tx_bytes;
+-	net_stats->rx_packets += vsi_stats->rx_packets - stats_prev->rx_packets;
+-	net_stats->rx_bytes += vsi_stats->rx_bytes - stats_prev->rx_bytes;
+-
+ 	stats_prev->tx_packets = vsi_stats->tx_packets;
+ 	stats_prev->tx_bytes = vsi_stats->tx_bytes;
+ 	stats_prev->rx_packets = vsi_stats->rx_packets;
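
The ice change makes the rollup robust to PF reset: the hardware counters can restart at an arbitrary value, so the cumulative netdev counters are only advanced by the cur - prev delta when the previous snapshot is known to come from the same counter epoch; prev is refreshed either way, so only one round is skipped. A sketch of the accumulate-by-delta pattern:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct stats { uint64_t tx_packets; };

static void update(struct stats *net, struct stats *prev,
		   const struct stats *cur, bool prev_loaded)
{
	/* Only trust the delta when the previous snapshot is valid;
	 * after a reset, skip one round instead of adding garbage. */
	if (prev_loaded)
		net->tx_packets += cur->tx_packets - prev->tx_packets;

	*prev = *cur;	/* the next round's delta is valid again */
}

int main(void)
{
	struct stats net = {0}, prev = {0}, cur = {0};

	cur.tx_packets = 100; update(&net, &prev, &cur, true);
	cur.tx_packets = 150; update(&net, &prev, &cur, true);
	/* PF reset: the counter restarts at an arbitrary value. */
	cur.tx_packets = 7;   update(&net, &prev, &cur, false);
	cur.tx_packets = 10;  update(&net, &prev, &cur, true);

	printf("tx_packets=%llu\n",	/* 150 + 3 = 153 */
	       (unsigned long long)net.tx_packets);
	return 0;
}
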
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b2295caa2f0ab..ada42ba635498 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -6984,44 +6984,31 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
+ static void igb_tsync_interrupt(struct igb_adapter *adapter)
+ {
+ 	struct e1000_hw *hw = &adapter->hw;
+-	u32 ack = 0, tsicr = rd32(E1000_TSICR);
++	u32 tsicr = rd32(E1000_TSICR);
+ 	struct ptp_clock_event event;
+ 
+ 	if (tsicr & TSINTR_SYS_WRAP) {
+ 		event.type = PTP_CLOCK_PPS;
+ 		if (adapter->ptp_caps.pps)
+ 			ptp_clock_event(adapter->ptp_clock, &event);
+-		ack |= TSINTR_SYS_WRAP;
+ 	}
+ 
+ 	if (tsicr & E1000_TSICR_TXTS) {
+ 		/* retrieve hardware timestamp */
+ 		schedule_work(&adapter->ptp_tx_work);
+-		ack |= E1000_TSICR_TXTS;
+ 	}
+ 
+-	if (tsicr & TSINTR_TT0) {
++	if (tsicr & TSINTR_TT0)
+ 		igb_perout(adapter, 0);
+-		ack |= TSINTR_TT0;
+-	}
+ 
+-	if (tsicr & TSINTR_TT1) {
++	if (tsicr & TSINTR_TT1)
+ 		igb_perout(adapter, 1);
+-		ack |= TSINTR_TT1;
+-	}
+ 
+-	if (tsicr & TSINTR_AUTT0) {
++	if (tsicr & TSINTR_AUTT0)
+ 		igb_extts(adapter, 0);
+-		ack |= TSINTR_AUTT0;
+-	}
+ 
+-	if (tsicr & TSINTR_AUTT1) {
++	if (tsicr & TSINTR_AUTT1)
+ 		igb_extts(adapter, 1);
+-		ack |= TSINTR_AUTT1;
+-	}
+-
+-	/* acknowledge the interrupts */
+-	wr32(E1000_TSICR, ack);
+ }
+ 
+ static irqreturn_t igb_msix_other(int irq, void *data)
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 58ffddc6419ad..45716d271d955 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -5304,25 +5304,22 @@ igc_features_check(struct sk_buff *skb, struct net_device *dev,
+ 
+ static void igc_tsync_interrupt(struct igc_adapter *adapter)
+ {
+-	u32 ack, tsauxc, sec, nsec, tsicr;
+ 	struct igc_hw *hw = &adapter->hw;
++	u32 tsauxc, sec, nsec, tsicr;
+ 	struct ptp_clock_event event;
+ 	struct timespec64 ts;
+ 
+ 	tsicr = rd32(IGC_TSICR);
+-	ack = 0;
+ 
+ 	if (tsicr & IGC_TSICR_SYS_WRAP) {
+ 		event.type = PTP_CLOCK_PPS;
+ 		if (adapter->ptp_caps.pps)
+ 			ptp_clock_event(adapter->ptp_clock, &event);
+-		ack |= IGC_TSICR_SYS_WRAP;
+ 	}
+ 
+ 	if (tsicr & IGC_TSICR_TXTS) {
+ 		/* retrieve hardware timestamp */
+ 		igc_ptp_tx_tstamp_event(adapter);
+-		ack |= IGC_TSICR_TXTS;
+ 	}
+ 
+ 	if (tsicr & IGC_TSICR_TT0) {
+@@ -5336,7 +5333,6 @@ static void igc_tsync_interrupt(struct igc_adapter *adapter)
+ 		wr32(IGC_TSAUXC, tsauxc);
+ 		adapter->perout[0].start = ts;
+ 		spin_unlock(&adapter->tmreg_lock);
+-		ack |= IGC_TSICR_TT0;
+ 	}
+ 
+ 	if (tsicr & IGC_TSICR_TT1) {
+@@ -5350,7 +5346,6 @@ static void igc_tsync_interrupt(struct igc_adapter *adapter)
+ 		wr32(IGC_TSAUXC, tsauxc);
+ 		adapter->perout[1].start = ts;
+ 		spin_unlock(&adapter->tmreg_lock);
+-		ack |= IGC_TSICR_TT1;
+ 	}
+ 
+ 	if (tsicr & IGC_TSICR_AUTT0) {
+@@ -5360,7 +5355,6 @@ static void igc_tsync_interrupt(struct igc_adapter *adapter)
+ 		event.index = 0;
+ 		event.timestamp = sec * NSEC_PER_SEC + nsec;
+ 		ptp_clock_event(adapter->ptp_clock, &event);
+-		ack |= IGC_TSICR_AUTT0;
+ 	}
+ 
+ 	if (tsicr & IGC_TSICR_AUTT1) {
+@@ -5370,11 +5364,7 @@ static void igc_tsync_interrupt(struct igc_adapter *adapter)
+ 		event.index = 1;
+ 		event.timestamp = sec * NSEC_PER_SEC + nsec;
+ 		ptp_clock_event(adapter->ptp_clock, &event);
+-		ack |= IGC_TSICR_AUTT1;
+ 	}
+-
+-	/* acknowledge the interrupts */
+-	wr32(IGC_TSICR, ack);
+ }
+ 
+ /**
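
The igb and igc hunks drop the write-back acknowledgment of TSICR for the same reason: on these parts the read at the top of the handler already clears the latched causes, and writing the accumulated ack mask afterwards can also clear a cause bit that latched between the read and the write, losing that event. A small simulation of the race (reg semantics reconstructed from the fix: read-to-clear plus write-1-to-clear):

#include <stdint.h>
#include <stdio.h>

static uint32_t tsicr;		/* simulated cause register */

/* Read-to-clear: hardware hands over all latched causes at once. */
static uint32_t read_causes(void)
{
	uint32_t v = tsicr;
	tsicr = 0;
	return v;
}

/* Write-1-to-clear: the old "ack" write at the end of the handler. */
static void ack_causes(uint32_t mask)
{
	tsicr &= ~mask;
}

int main(void)
{
	tsicr = 0x1;				/* TT0 fires */
	uint32_t causes = read_causes();	/* handler reads TT0, reg now 0 */

	tsicr |= 0x1;			/* TT0 fires AGAIN mid-handler */
	ack_causes(causes);		/* old code: wipes the new TT0 */

	printf("pending after ack: %#x\n", tsicr);	/* 0x0: event lost */
	return 0;
}
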
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 6c70c84986904..3c0f55b3e48ea 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -1338,7 +1338,7 @@ static irqreturn_t cgx_fwi_event_handler(int irq, void *data)
+ 
+ 		/* Release thread waiting for completion  */
+ 		lmac->cmd_pend = false;
+-		wake_up_interruptible(&lmac->wq_cmd_cmplt);
++		wake_up(&lmac->wq_cmd_cmplt);
+ 		break;
+ 	case CGX_EVT_ASYNC:
+ 		if (cgx_event_is_linkevent(event))
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+index 9690ac01f02c8..7d741e3ba8c51 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+@@ -214,11 +214,12 @@ int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid)
+ }
+ EXPORT_SYMBOL(otx2_mbox_busy_poll_for_rsp);
+ 
+-void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
++static void otx2_mbox_msg_send_data(struct otx2_mbox *mbox, int devid, u64 data)
+ {
+ 	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+ 	struct mbox_hdr *tx_hdr, *rx_hdr;
+ 	void *hw_mbase = mdev->hwbase;
++	u64 intr_val;
+ 
+ 	tx_hdr = hw_mbase + mbox->tx_start;
+ 	rx_hdr = hw_mbase + mbox->rx_start;
+@@ -254,14 +255,52 @@ void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
+ 
+ 	spin_unlock(&mdev->mbox_lock);
+ 
++	/* Check if interrupt pending */
++	intr_val = readq((void __iomem *)mbox->reg_base +
++		     (mbox->trigger | (devid << mbox->tr_shift)));
++
++	intr_val |= data;
+ 	/* The interrupt should be fired after num_msgs is written
+ 	 * to the shared memory
+ 	 */
+-	writeq(1, (void __iomem *)mbox->reg_base +
++	writeq(intr_val, (void __iomem *)mbox->reg_base +
+ 	       (mbox->trigger | (devid << mbox->tr_shift)));
+ }
++
++void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
++{
++	otx2_mbox_msg_send_data(mbox, devid, MBOX_DOWN_MSG);
++}
+ EXPORT_SYMBOL(otx2_mbox_msg_send);
+ 
++void otx2_mbox_msg_send_up(struct otx2_mbox *mbox, int devid)
++{
++	otx2_mbox_msg_send_data(mbox, devid, MBOX_UP_MSG);
++}
++EXPORT_SYMBOL(otx2_mbox_msg_send_up);
++
++bool otx2_mbox_wait_for_zero(struct otx2_mbox *mbox, int devid)
++{
++	u64 data;
++
++	data = readq((void __iomem *)mbox->reg_base +
++		     (mbox->trigger | (devid << mbox->tr_shift)));
++
++	/* If data is non-zero wait for ~1ms and return to caller
++	 * whether data has changed to zero or not after the wait.
++	 */
++	if (!data)
++		return true;
++
++	usleep_range(950, 1000);
++
++	data = readq((void __iomem *)mbox->reg_base +
++		     (mbox->trigger | (devid << mbox->tr_shift)));
++
++	return data == 0;
++}
++EXPORT_SYMBOL(otx2_mbox_wait_for_zero);
++
+ struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
+ 					    int size, int size_rsp)
+ {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index 5df42634ceb84..bd4b9661ee373 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -16,6 +16,9 @@
+ 
+ #define MBOX_SIZE		SZ_64K
+ 
++#define MBOX_DOWN_MSG		1
++#define MBOX_UP_MSG		2
++
+ /* AF/PF: PF initiated, PF/VF VF initiated */
+ #define MBOX_DOWN_RX_START	0
+ #define MBOX_DOWN_RX_SIZE	(46 * SZ_1K)
+@@ -101,6 +104,7 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase,
+ 			   struct pci_dev *pdev, void __force *reg_base,
+ 			   int direction, int ndevs, unsigned long *bmap);
+ void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
++void otx2_mbox_msg_send_up(struct otx2_mbox *mbox, int devid);
+ int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
+ int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
+ struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
+@@ -118,6 +122,8 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
+ 	return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
+ }
+ 
++bool otx2_mbox_wait_for_zero(struct otx2_mbox *mbox, int devid);
++
+ /* Mailbox message types */
+ #define MBOX_MSG_MASK				0xFFFF
+ #define MBOX_MSG_INVALID			0xFFFE
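
The octeontx2 mailbox rework encodes which direction has work in the doorbell register itself: the sender ORs MBOX_DOWN_MSG or MBOX_UP_MSG into the current value instead of writing a bare 1, and the receiver clears only the flag it is about to service. One interrupt line can then multiplex down-requests and up-notifications without the two paths clobbering each other. A logical sketch (the real code does this through readq()/writeq() on a shared trigger register, with the serialization added elsewhere in this series):

#include <stdint.h>
#include <stdio.h>

#define MBOX_DOWN_MSG	1u
#define MBOX_UP_MSG	2u

static uint32_t trigger;	/* simulated doorbell register */

static void send(uint32_t flag)
{
	trigger |= flag;	/* OR, so a pending flag is preserved */
}

static void irq_handler(void)
{
	uint32_t v = trigger;

	if (v & MBOX_UP_MSG) {
		trigger &= ~MBOX_UP_MSG;
		printf("handle UP message\n");
	}
	if (v & MBOX_DOWN_MSG) {
		trigger &= ~MBOX_DOWN_MSG;
		printf("handle DOWN reply\n");
	}
}

int main(void)
{
	send(MBOX_DOWN_MSG);
	send(MBOX_UP_MSG);	/* both directions pending at once */
	irq_handler();		/* neither notification is lost */
	return 0;
}
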
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+index dfd23580e3b8e..d39d86e694ccf 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+@@ -121,13 +121,17 @@ int mcs_add_intr_wq_entry(struct mcs *mcs, struct mcs_intr_event *event)
+ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+ {
+ 	struct mcs_intr_info *req;
+-	int err, pf;
++	int pf;
+ 
+ 	pf = rvu_get_pf(event->pcifunc);
+ 
++	mutex_lock(&rvu->mbox_lock);
++
+ 	req = otx2_mbox_alloc_msg_mcs_intr_notify(rvu, pf);
+-	if (!req)
++	if (!req) {
++		mutex_unlock(&rvu->mbox_lock);
+ 		return -ENOMEM;
++	}
+ 
+ 	req->mcs_id = event->mcs_id;
+ 	req->intr_mask = event->intr_mask;
+@@ -135,10 +139,11 @@ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+ 	req->hdr.pcifunc = event->pcifunc;
+ 	req->lmac_id = event->lmac_id;
+ 
+-	otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pf);
+-	err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pf);
+-	if (err)
+-		dev_warn(rvu->dev, "MCS notification to pf %d failed\n", pf);
++	otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf);
++
++	otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
++
++	mutex_unlock(&rvu->mbox_lock);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 731bb82b577c2..32645aefd5934 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2114,7 +2114,7 @@ MBOX_MESSAGES
+ 	}
+ }
+ 
+-static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
++static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll)
+ {
+ 	struct rvu *rvu = mwork->rvu;
+ 	int offset, err, id, devid;
+@@ -2181,6 +2181,9 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
+ 	}
+ 	mw->mbox_wrk[devid].num_msgs = 0;
+ 
++	if (poll)
++		otx2_mbox_wait_for_zero(mbox, devid);
++
+ 	/* Send mbox responses to VF/PF */
+ 	otx2_mbox_msg_send(mbox, devid);
+ }
+@@ -2188,15 +2191,18 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
+ static inline void rvu_afpf_mbox_handler(struct work_struct *work)
+ {
+ 	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
++	struct rvu *rvu = mwork->rvu;
+ 
+-	__rvu_mbox_handler(mwork, TYPE_AFPF);
++	mutex_lock(&rvu->mbox_lock);
++	__rvu_mbox_handler(mwork, TYPE_AFPF, true);
++	mutex_unlock(&rvu->mbox_lock);
+ }
+ 
+ static inline void rvu_afvf_mbox_handler(struct work_struct *work)
+ {
+ 	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+ 
+-	__rvu_mbox_handler(mwork, TYPE_AFVF);
++	__rvu_mbox_handler(mwork, TYPE_AFVF, false);
+ }
+ 
+ static void __rvu_mbox_up_handler(struct rvu_work *mwork, int type)
+@@ -2371,6 +2377,8 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+ 		}
+ 	}
+ 
++	mutex_init(&rvu->mbox_lock);
++
+ 	mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL);
+ 	if (!mbox_regions) {
+ 		err = -ENOMEM;
+@@ -2520,10 +2528,9 @@ static void rvu_queue_work(struct mbox_wq_info *mw, int first,
+ 	}
+ }
+ 
+-static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
++static irqreturn_t rvu_mbox_pf_intr_handler(int irq, void *rvu_irq)
+ {
+ 	struct rvu *rvu = (struct rvu *)rvu_irq;
+-	int vfs = rvu->vfs;
+ 	u64 intr;
+ 
+ 	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
+@@ -2537,6 +2544,18 @@ static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+ 
+ 	rvu_queue_work(&rvu->afpf_wq_info, 0, rvu->hw->total_pfs, intr);
+ 
++	return IRQ_HANDLED;
++}
++
++static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
++{
++	struct rvu *rvu = (struct rvu *)rvu_irq;
++	int vfs = rvu->vfs;
++	u64 intr;
++
++	/* Sync with mbox memory region */
++	rmb();
++
+ 	/* Handle VF interrupts */
+ 	if (vfs > 64) {
+ 		intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(1));
+@@ -2874,7 +2893,7 @@ static int rvu_register_interrupts(struct rvu *rvu)
+ 	/* Register mailbox interrupt handler */
+ 	sprintf(&rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], "RVUAF Mbox");
+ 	ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_MBOX),
+-			  rvu_mbox_intr_handler, 0,
++			  rvu_mbox_pf_intr_handler, 0,
+ 			  &rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], rvu);
+ 	if (ret) {
+ 		dev_err(rvu->dev,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 8802961b8889f..185c296eaaf0d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -551,6 +551,8 @@ struct rvu {
+ 	spinlock_t		mcs_intrq_lock;
+ 	/* CPT interrupt lock */
+ 	spinlock_t		cpt_intr_lock;
++
++	struct mutex		mbox_lock; /* Serialize mbox up and down msgs */
+ };
+ 
+ static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 38acdc7a73bbe..72e060cf6b618 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -232,7 +232,7 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
+ 	struct cgx_link_user_info *linfo;
+ 	struct cgx_link_info_msg *msg;
+ 	unsigned long pfmap;
+-	int err, pfid;
++	int pfid;
+ 
+ 	linfo = &event->link_uinfo;
+ 	pfmap = cgxlmac_to_pfmap(rvu, event->cgx_id, event->lmac_id);
+@@ -255,16 +255,22 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
+ 			continue;
+ 		}
+ 
++		mutex_lock(&rvu->mbox_lock);
++
+ 		/* Send mbox message to PF */
+ 		msg = otx2_mbox_alloc_msg_cgx_link_event(rvu, pfid);
+-		if (!msg)
++		if (!msg) {
++			mutex_unlock(&rvu->mbox_lock);
+ 			continue;
++		}
++
+ 		msg->link_info = *linfo;
+-		otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pfid);
+-		err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid);
+-		if (err)
+-			dev_warn(rvu->dev, "notification to pf %d failed\n",
+-				 pfid);
++
++		otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pfid);
++
++		otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pfid);
++
++		mutex_unlock(&rvu->mbox_lock);
+ 	} while (pfmap);
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 02d0b707aea5b..a85ac039d779b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1592,7 +1592,7 @@ int otx2_detach_resources(struct mbox *mbox)
+ 	detach->partial = false;
+ 
+ 	/* Send detach request to AF */
+-	otx2_mbox_msg_send(&mbox->mbox, 0);
++	otx2_sync_mbox_msg(mbox);
+ 	mutex_unlock(&mbox->lock);
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 06910307085ef..7e16a341ec588 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -815,7 +815,7 @@ static inline int otx2_sync_mbox_up_msg(struct mbox *mbox, int devid)
+ 
+ 	if (!otx2_mbox_nonempty(&mbox->mbox_up, devid))
+ 		return 0;
+-	otx2_mbox_msg_send(&mbox->mbox_up, devid);
++	otx2_mbox_msg_send_up(&mbox->mbox_up, devid);
+ 	err = otx2_mbox_wait_for_rsp(&mbox->mbox_up, devid);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index e5fe67e738655..b40bd0e467514 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -292,8 +292,8 @@ static int otx2_pf_flr_init(struct otx2_nic *pf, int num_vfs)
+ 	return 0;
+ }
+ 
+-static void otx2_queue_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
+-			    int first, int mdevs, u64 intr, int type)
++static void otx2_queue_vf_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
++			       int first, int mdevs, u64 intr)
+ {
+ 	struct otx2_mbox_dev *mdev;
+ 	struct otx2_mbox *mbox;
+@@ -307,40 +307,26 @@ static void otx2_queue_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
+ 
+ 		mbox = &mw->mbox;
+ 		mdev = &mbox->dev[i];
+-		if (type == TYPE_PFAF)
+-			otx2_sync_mbox_bbuf(mbox, i);
+ 		hdr = mdev->mbase + mbox->rx_start;
+ 		/* The hdr->num_msgs is set to zero immediately in the interrupt
+-		 * handler to  ensure that it holds a correct value next time
+-		 * when the interrupt handler is called.
+-		 * pf->mbox.num_msgs holds the data for use in pfaf_mbox_handler
+-		 * pf>mbox.up_num_msgs holds the data for use in
+-		 * pfaf_mbox_up_handler.
++		 * handler to ensure that it holds a correct value next time
++		 * when the interrupt handler is called. pf->mw[i].num_msgs
++		 * holds the data for use in otx2_pfvf_mbox_handler and
++		 * pf->mw[i].up_num_msgs holds the data for use in
++		 * otx2_pfvf_mbox_up_handler.
+ 		 */
+ 		if (hdr->num_msgs) {
+ 			mw[i].num_msgs = hdr->num_msgs;
+ 			hdr->num_msgs = 0;
+-			if (type == TYPE_PFAF)
+-				memset(mbox->hwbase + mbox->rx_start, 0,
+-				       ALIGN(sizeof(struct mbox_hdr),
+-					     sizeof(u64)));
+-
+ 			queue_work(mbox_wq, &mw[i].mbox_wrk);
+ 		}
+ 
+ 		mbox = &mw->mbox_up;
+ 		mdev = &mbox->dev[i];
+-		if (type == TYPE_PFAF)
+-			otx2_sync_mbox_bbuf(mbox, i);
+ 		hdr = mdev->mbase + mbox->rx_start;
+ 		if (hdr->num_msgs) {
+ 			mw[i].up_num_msgs = hdr->num_msgs;
+ 			hdr->num_msgs = 0;
+-			if (type == TYPE_PFAF)
+-				memset(mbox->hwbase + mbox->rx_start, 0,
+-				       ALIGN(sizeof(struct mbox_hdr),
+-					     sizeof(u64)));
+-
+ 			queue_work(mbox_wq, &mw[i].mbox_up_wrk);
+ 		}
+ 	}
+@@ -356,8 +342,10 @@ static void otx2_forward_msg_pfvf(struct otx2_mbox_dev *mdev,
+ 	/* Msgs are already copied, trigger VF's mbox irq */
+ 	smp_wmb();
+ 
++	otx2_mbox_wait_for_zero(pfvf_mbox, devid);
++
+ 	offset = pfvf_mbox->trigger | (devid << pfvf_mbox->tr_shift);
+-	writeq(1, (void __iomem *)pfvf_mbox->reg_base + offset);
++	writeq(MBOX_DOWN_MSG, (void __iomem *)pfvf_mbox->reg_base + offset);
+ 
+ 	/* Restore VF's mbox bounce buffer region address */
+ 	src_mdev->mbase = bbuf_base;
+@@ -547,7 +535,7 @@ static void otx2_pfvf_mbox_up_handler(struct work_struct *work)
+ end:
+ 		offset = mbox->rx_start + msg->next_msgoff;
+ 		if (mdev->msgs_acked == (vf_mbox->up_num_msgs - 1))
+-			__otx2_mbox_reset(mbox, 0);
++			__otx2_mbox_reset(mbox, vf_idx);
+ 		mdev->msgs_acked++;
+ 	}
+ }
+@@ -564,8 +552,7 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+ 	if (vfs > 64) {
+ 		intr = otx2_read64(pf, RVU_PF_VFPF_MBOX_INTX(1));
+ 		otx2_write64(pf, RVU_PF_VFPF_MBOX_INTX(1), intr);
+-		otx2_queue_work(mbox, pf->mbox_pfvf_wq, 64, vfs, intr,
+-				TYPE_PFVF);
++		otx2_queue_vf_work(mbox, pf->mbox_pfvf_wq, 64, vfs, intr);
+ 		if (intr)
+ 			trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+ 		vfs = 64;
+@@ -574,7 +561,7 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+ 	intr = otx2_read64(pf, RVU_PF_VFPF_MBOX_INTX(0));
+ 	otx2_write64(pf, RVU_PF_VFPF_MBOX_INTX(0), intr);
+ 
+-	otx2_queue_work(mbox, pf->mbox_pfvf_wq, 0, vfs, intr, TYPE_PFVF);
++	otx2_queue_vf_work(mbox, pf->mbox_pfvf_wq, 0, vfs, intr);
+ 
+ 	if (intr)
+ 		trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+@@ -597,8 +584,9 @@ static int otx2_pfvf_mbox_init(struct otx2_nic *pf, int numvfs)
+ 	if (!pf->mbox_pfvf)
+ 		return -ENOMEM;
+ 
+-	pf->mbox_pfvf_wq = alloc_ordered_workqueue("otx2_pfvf_mailbox",
+-						   WQ_HIGHPRI | WQ_MEM_RECLAIM);
++	pf->mbox_pfvf_wq = alloc_workqueue("otx2_pfvf_mailbox",
++					   WQ_UNBOUND | WQ_HIGHPRI |
++					   WQ_MEM_RECLAIM, 0);
+ 	if (!pf->mbox_pfvf_wq)
+ 		return -ENOMEM;
+ 
+@@ -821,20 +809,22 @@ static void otx2_pfaf_mbox_handler(struct work_struct *work)
+ 	struct mbox *af_mbox;
+ 	struct otx2_nic *pf;
+ 	int offset, id;
++	u16 num_msgs;
+ 
+ 	af_mbox = container_of(work, struct mbox, mbox_wrk);
+ 	mbox = &af_mbox->mbox;
+ 	mdev = &mbox->dev[0];
+ 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++	num_msgs = rsp_hdr->num_msgs;
+ 
+ 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+ 	pf = af_mbox->pfvf;
+ 
+-	for (id = 0; id < af_mbox->num_msgs; id++) {
++	for (id = 0; id < num_msgs; id++) {
+ 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ 		otx2_process_pfaf_mbox_msg(pf, msg);
+ 		offset = mbox->rx_start + msg->next_msgoff;
+-		if (mdev->msgs_acked == (af_mbox->num_msgs - 1))
++		if (mdev->msgs_acked == (num_msgs - 1))
+ 			__otx2_mbox_reset(mbox, 0);
+ 		mdev->msgs_acked++;
+ 	}
+@@ -945,12 +935,14 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
+ 	int offset, id, devid = 0;
+ 	struct mbox_hdr *rsp_hdr;
+ 	struct mbox_msghdr *msg;
++	u16 num_msgs;
+ 
+ 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++	num_msgs = rsp_hdr->num_msgs;
+ 
+ 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+ 
+-	for (id = 0; id < af_mbox->up_num_msgs; id++) {
++	for (id = 0; id < num_msgs; id++) {
+ 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ 
+ 		devid = msg->pcifunc & RVU_PFVF_FUNC_MASK;
+@@ -959,10 +951,11 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
+ 			otx2_process_mbox_msg_up(pf, msg);
+ 		offset = mbox->rx_start + msg->next_msgoff;
+ 	}
+-	if (devid) {
++	/* Forward to VF iff VFs are really present */
++	if (devid && pci_num_vf(pf->pdev)) {
+ 		otx2_forward_vf_mbox_msgs(pf, &pf->mbox.mbox_up,
+ 					  MBOX_DIR_PFVF_UP, devid - 1,
+-					  af_mbox->up_num_msgs);
++					  num_msgs);
+ 		return;
+ 	}
+ 
+@@ -972,16 +965,49 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
+ static irqreturn_t otx2_pfaf_mbox_intr_handler(int irq, void *pf_irq)
+ {
+ 	struct otx2_nic *pf = (struct otx2_nic *)pf_irq;
+-	struct mbox *mbox;
++	struct mbox *mw = &pf->mbox;
++	struct otx2_mbox_dev *mdev;
++	struct otx2_mbox *mbox;
++	struct mbox_hdr *hdr;
++	u64 mbox_data;
+ 
+ 	/* Clear the IRQ */
+ 	otx2_write64(pf, RVU_PF_INT, BIT_ULL(0));
+ 
+-	mbox = &pf->mbox;
+ 
+-	trace_otx2_msg_interrupt(mbox->mbox.pdev, "AF to PF", BIT_ULL(0));
++	mbox_data = otx2_read64(pf, RVU_PF_PFAF_MBOX0);
++
++	if (mbox_data & MBOX_UP_MSG) {
++		mbox_data &= ~MBOX_UP_MSG;
++		otx2_write64(pf, RVU_PF_PFAF_MBOX0, mbox_data);
++
++		mbox = &mw->mbox_up;
++		mdev = &mbox->dev[0];
++		otx2_sync_mbox_bbuf(mbox, 0);
++
++		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++		if (hdr->num_msgs)
++			queue_work(pf->mbox_wq, &mw->mbox_up_wrk);
++
++		trace_otx2_msg_interrupt(pf->pdev, "UP message from AF to PF",
++					 BIT_ULL(0));
++	}
++
++	if (mbox_data & MBOX_DOWN_MSG) {
++		mbox_data &= ~MBOX_DOWN_MSG;
++		otx2_write64(pf, RVU_PF_PFAF_MBOX0, mbox_data);
++
++		mbox = &mw->mbox;
++		mdev = &mbox->dev[0];
++		otx2_sync_mbox_bbuf(mbox, 0);
++
++		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++		if (hdr->num_msgs)
++			queue_work(pf->mbox_wq, &mw->mbox_wrk);
+ 
+-	otx2_queue_work(mbox, pf->mbox_wq, 0, 1, 1, TYPE_PFAF);
++		trace_otx2_msg_interrupt(pf->pdev, "DOWN reply from AF to PF",
++					 BIT_ULL(0));
++	}
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -3087,6 +3113,7 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+ 	struct otx2_vf_config *config;
+ 	struct cgx_link_info_msg *req;
+ 	struct mbox_msghdr *msghdr;
++	struct delayed_work *dwork;
+ 	struct otx2_nic *pf;
+ 	int vf_idx;
+ 
+@@ -3095,10 +3122,24 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+ 	vf_idx = config - config->pf->vf_configs;
+ 	pf = config->pf;
+ 
++	if (config->intf_down)
++		return;
++
++	mutex_lock(&pf->mbox.lock);
++
++	dwork = &config->link_event_work;
++
++	if (!otx2_mbox_wait_for_zero(&pf->mbox_pfvf[0].mbox_up, vf_idx)) {
++		schedule_delayed_work(dwork, msecs_to_jiffies(100));
++		mutex_unlock(&pf->mbox.lock);
++		return;
++	}
++
+ 	msghdr = otx2_mbox_alloc_msg_rsp(&pf->mbox_pfvf[0].mbox_up, vf_idx,
+ 					 sizeof(*req), sizeof(struct msg_rsp));
+ 	if (!msghdr) {
+ 		dev_err(pf->dev, "Failed to create VF%d link event\n", vf_idx);
++		mutex_unlock(&pf->mbox.lock);
+ 		return;
+ 	}
+ 
+@@ -3107,7 +3148,11 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+ 	req->hdr.sig = OTX2_MBOX_REQ_SIG;
+ 	memcpy(&req->link_info, &pf->linfo, sizeof(req->link_info));
+ 
++	otx2_mbox_wait_for_zero(&pf->mbox_pfvf[0].mbox_up, vf_idx);
++
+ 	otx2_sync_mbox_up_msg(&pf->mbox_pfvf[0], vf_idx);
++
++	mutex_unlock(&pf->mbox.lock);
+ }
+ 
+ static int otx2_sriov_enable(struct pci_dev *pdev, int numvfs)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index 35e06048356f4..cf0aa16d75407 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -89,16 +89,20 @@ static void otx2vf_vfaf_mbox_handler(struct work_struct *work)
+ 	struct otx2_mbox *mbox;
+ 	struct mbox *af_mbox;
+ 	int offset, id;
++	u16 num_msgs;
+ 
+ 	af_mbox = container_of(work, struct mbox, mbox_wrk);
+ 	mbox = &af_mbox->mbox;
+ 	mdev = &mbox->dev[0];
+ 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+-	if (af_mbox->num_msgs == 0)
++	num_msgs = rsp_hdr->num_msgs;
++
++	if (num_msgs == 0)
+ 		return;
++
+ 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+ 
+-	for (id = 0; id < af_mbox->num_msgs; id++) {
++	for (id = 0; id < num_msgs; id++) {
+ 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ 		otx2vf_process_vfaf_mbox_msg(af_mbox->pfvf, msg);
+ 		offset = mbox->rx_start + msg->next_msgoff;
+@@ -151,6 +155,7 @@ static void otx2vf_vfaf_mbox_up_handler(struct work_struct *work)
+ 	struct mbox *vf_mbox;
+ 	struct otx2_nic *vf;
+ 	int offset, id;
++	u16 num_msgs;
+ 
+ 	vf_mbox = container_of(work, struct mbox, mbox_up_wrk);
+ 	vf = vf_mbox->pfvf;
+@@ -158,12 +163,14 @@ static void otx2vf_vfaf_mbox_up_handler(struct work_struct *work)
+ 	mdev = &mbox->dev[0];
+ 
+ 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+-	if (vf_mbox->up_num_msgs == 0)
++	num_msgs = rsp_hdr->num_msgs;
++
++	if (num_msgs == 0)
+ 		return;
+ 
+ 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+ 
+-	for (id = 0; id < vf_mbox->up_num_msgs; id++) {
++	for (id = 0; id < num_msgs; id++) {
+ 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ 		otx2vf_process_mbox_msg_up(vf, msg);
+ 		offset = mbox->rx_start + msg->next_msgoff;
+@@ -178,40 +185,48 @@ static irqreturn_t otx2vf_vfaf_mbox_intr_handler(int irq, void *vf_irq)
+ 	struct otx2_mbox_dev *mdev;
+ 	struct otx2_mbox *mbox;
+ 	struct mbox_hdr *hdr;
++	u64 mbox_data;
+ 
+ 	/* Clear the IRQ */
+ 	otx2_write64(vf, RVU_VF_INT, BIT_ULL(0));
+ 
++	mbox_data = otx2_read64(vf, RVU_VF_VFPF_MBOX0);
++
+ 	/* Read latest mbox data */
+ 	smp_rmb();
+ 
+-	/* Check for PF => VF response messages */
+-	mbox = &vf->mbox.mbox;
+-	mdev = &mbox->dev[0];
+-	otx2_sync_mbox_bbuf(mbox, 0);
++	if (mbox_data & MBOX_DOWN_MSG) {
++		mbox_data &= ~MBOX_DOWN_MSG;
++		otx2_write64(vf, RVU_VF_VFPF_MBOX0, mbox_data);
++
++		/* Check for PF => VF response messages */
++		mbox = &vf->mbox.mbox;
++		mdev = &mbox->dev[0];
++		otx2_sync_mbox_bbuf(mbox, 0);
+ 
+-	trace_otx2_msg_interrupt(mbox->pdev, "PF to VF", BIT_ULL(0));
++		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++		if (hdr->num_msgs)
++			queue_work(vf->mbox_wq, &vf->mbox.mbox_wrk);
+ 
+-	hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+-	if (hdr->num_msgs) {
+-		vf->mbox.num_msgs = hdr->num_msgs;
+-		hdr->num_msgs = 0;
+-		memset(mbox->hwbase + mbox->rx_start, 0,
+-		       ALIGN(sizeof(struct mbox_hdr), sizeof(u64)));
+-		queue_work(vf->mbox_wq, &vf->mbox.mbox_wrk);
++		trace_otx2_msg_interrupt(mbox->pdev, "DOWN reply from PF to VF",
++					 BIT_ULL(0));
+ 	}
+-	/* Check for PF => VF notification messages */
+-	mbox = &vf->mbox.mbox_up;
+-	mdev = &mbox->dev[0];
+-	otx2_sync_mbox_bbuf(mbox, 0);
+ 
+-	hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+-	if (hdr->num_msgs) {
+-		vf->mbox.up_num_msgs = hdr->num_msgs;
+-		hdr->num_msgs = 0;
+-		memset(mbox->hwbase + mbox->rx_start, 0,
+-		       ALIGN(sizeof(struct mbox_hdr), sizeof(u64)));
+-		queue_work(vf->mbox_wq, &vf->mbox.mbox_up_wrk);
++	if (mbox_data & MBOX_UP_MSG) {
++		mbox_data &= ~MBOX_UP_MSG;
++		otx2_write64(vf, RVU_VF_VFPF_MBOX0, mbox_data);
++
++		/* Check for PF => VF notification messages */
++		mbox = &vf->mbox.mbox_up;
++		mdev = &mbox->dev[0];
++		otx2_sync_mbox_bbuf(mbox, 0);
++
++		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++		if (hdr->num_msgs)
++			queue_work(vf->mbox_wq, &vf->mbox.mbox_up_wrk);
++
++		trace_otx2_msg_interrupt(mbox->pdev, "UP message from PF to VF",
++					 BIT_ULL(0));
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -760,8 +775,8 @@ static void otx2vf_remove(struct pci_dev *pdev)
+ 	otx2_mcam_flow_del(vf);
+ 	otx2_shutdown_tc(vf);
+ 	otx2_shutdown_qos(vf);
+-	otx2vf_disable_mbox_intr(vf);
+ 	otx2_detach_resources(&vf->mbox);
++	otx2vf_disable_mbox_intr(vf);
+ 	free_percpu(vf->hw.lmt_info);
+ 	if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
+ 		qmem_free(vf->dev, vf->dync_lmt);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index d2c039f830195..a1231203ecf3d 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -677,8 +677,7 @@ static int mtk_mac_finish(struct phylink_config *config, unsigned int mode,
+ 	mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
+ 	mcr_new = mcr_cur;
+ 	mcr_new |= MAC_MCR_IPG_CFG | MAC_MCR_FORCE_MODE |
+-		   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK |
+-		   MAC_MCR_RX_FIFO_CLR_DIS;
++		   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_RX_FIFO_CLR_DIS;
+ 
+ 	/* Only update control register when needed! */
+ 	if (mcr_new != mcr_cur)
+@@ -694,7 +693,7 @@ static void mtk_mac_link_down(struct phylink_config *config, unsigned int mode,
+ 					   phylink_config);
+ 	u32 mcr = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
+ 
+-	mcr &= ~(MAC_MCR_TX_EN | MAC_MCR_RX_EN);
++	mcr &= ~(MAC_MCR_TX_EN | MAC_MCR_RX_EN | MAC_MCR_FORCE_LINK);
+ 	mtk_w32(mac->hw, mcr, MTK_MAC_MCR(mac->id));
+ }
+ 
+@@ -803,7 +802,7 @@ static void mtk_mac_link_up(struct phylink_config *config,
+ 	if (rx_pause)
+ 		mcr |= MAC_MCR_FORCE_RX_FC;
+ 
+-	mcr |= MAC_MCR_TX_EN | MAC_MCR_RX_EN;
++	mcr |= MAC_MCR_TX_EN | MAC_MCR_RX_EN | MAC_MCR_FORCE_LINK;
+ 	mtk_w32(mac->hw, mcr, MTK_MAC_MCR(mac->id));
+ }
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
+index b2a5d9c3733d4..6ce0db3a1a920 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
+@@ -994,7 +994,7 @@ void mtk_ppe_start(struct mtk_ppe *ppe)
+ 			 MTK_PPE_KEEPALIVE_DISABLE) |
+ 	      FIELD_PREP(MTK_PPE_TB_CFG_HASH_MODE, 1) |
+ 	      FIELD_PREP(MTK_PPE_TB_CFG_SCAN_MODE,
+-			 MTK_PPE_SCAN_MODE_KEEPALIVE_AGE) |
++			 MTK_PPE_SCAN_MODE_CHECK_AGE) |
+ 	      FIELD_PREP(MTK_PPE_TB_CFG_ENTRY_NUM,
+ 			 MTK_PPE_ENTRIES_SHIFT);
+ 	if (mtk_is_netsys_v2_or_greater(ppe->eth))
+@@ -1090,17 +1090,21 @@ int mtk_ppe_stop(struct mtk_ppe *ppe)
+ 
+ 	mtk_ppe_cache_enable(ppe, false);
+ 
+-	/* disable offload engine */
+-	ppe_clear(ppe, MTK_PPE_GLO_CFG, MTK_PPE_GLO_CFG_EN);
+-	ppe_w32(ppe, MTK_PPE_FLOW_CFG, 0);
+-
+ 	/* disable aging */
+ 	val = MTK_PPE_TB_CFG_AGE_NON_L4 |
+ 	      MTK_PPE_TB_CFG_AGE_UNBIND |
+ 	      MTK_PPE_TB_CFG_AGE_TCP |
+ 	      MTK_PPE_TB_CFG_AGE_UDP |
+-	      MTK_PPE_TB_CFG_AGE_TCP_FIN;
++	      MTK_PPE_TB_CFG_AGE_TCP_FIN |
++		  MTK_PPE_TB_CFG_SCAN_MODE;
+ 	ppe_clear(ppe, MTK_PPE_TB_CFG, val);
+ 
+-	return mtk_ppe_wait_busy(ppe);
++	if (mtk_ppe_wait_busy(ppe))
++		return -ETIMEDOUT;
++
++	/* disable offload engine */
++	ppe_clear(ppe, MTK_PPE_GLO_CFG, MTK_PPE_GLO_CFG_EN);
++	ppe_w32(ppe, MTK_PPE_FLOW_CFG, 0);
++
++	return 0;
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
+index 88d6d992e7d07..86db8e8141407 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/lag_conf.c
+@@ -338,6 +338,11 @@ static void nfp_fl_lag_do_work(struct work_struct *work)
+ 
+ 		acti_netdevs = kmalloc_array(entry->slave_cnt,
+ 					     sizeof(*acti_netdevs), GFP_KERNEL);
++		if (!acti_netdevs) {
++			schedule_delayed_work(&lag->work,
++					      NFP_FL_LAG_DELAY);
++			continue;
++		}
+ 
+ 		/* Include sanity check in the loop. It may be that a bond has
+ 		 * changed between processing the last notification and the
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+index b6c06adb86560..b95187a0847f7 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+@@ -551,7 +551,7 @@ static int txgbe_clock_register(struct txgbe *txgbe)
+ 	char clk_name[32];
+ 	struct clk *clk;
+ 
+-	snprintf(clk_name, sizeof(clk_name), "i2c_designware.%d",
++	snprintf(clk_name, sizeof(clk_name), "i2c_dw.%d",
+ 		 pci_dev_id(pdev));
+ 
+ 	clk = clk_register_fixed_rate(NULL, clk_name, NULL, 0, 156250000);
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index b7cb71817780c..29e1cbea6dc0c 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -380,7 +380,7 @@ static int dp83822_config_init(struct phy_device *phydev)
+ {
+ 	struct dp83822_private *dp83822 = phydev->priv;
+ 	struct device *dev = &phydev->mdio.dev;
+-	int rgmii_delay;
++	int rgmii_delay = 0;
+ 	s32 rx_int_delay;
+ 	s32 tx_int_delay;
+ 	int err = 0;
+@@ -390,30 +390,33 @@ static int dp83822_config_init(struct phy_device *phydev)
+ 		rx_int_delay = phy_get_internal_delay(phydev, dev, NULL, 0,
+ 						      true);
+ 
+-		if (rx_int_delay <= 0)
+-			rgmii_delay = 0;
+-		else
+-			rgmii_delay = DP83822_RX_CLK_SHIFT;
++		/* Set DP83822_RX_CLK_SHIFT to enable rx clk internal delay */
++		if (rx_int_delay > 0)
++			rgmii_delay |= DP83822_RX_CLK_SHIFT;
+ 
+ 		tx_int_delay = phy_get_internal_delay(phydev, dev, NULL, 0,
+ 						      false);
++
++		/* Set DP83822_TX_CLK_SHIFT to disable tx clk internal delay */
+ 		if (tx_int_delay <= 0)
+-			rgmii_delay &= ~DP83822_TX_CLK_SHIFT;
+-		else
+ 			rgmii_delay |= DP83822_TX_CLK_SHIFT;
+ 
+-		if (rgmii_delay) {
+-			err = phy_set_bits_mmd(phydev, DP83822_DEVADDR,
+-					       MII_DP83822_RCSR, rgmii_delay);
+-			if (err)
+-				return err;
+-		}
++		err = phy_modify_mmd(phydev, DP83822_DEVADDR, MII_DP83822_RCSR,
++				     DP83822_RX_CLK_SHIFT | DP83822_TX_CLK_SHIFT, rgmii_delay);
++		if (err)
++			return err;
++
++		err = phy_set_bits_mmd(phydev, DP83822_DEVADDR,
++				       MII_DP83822_RCSR, DP83822_RGMII_MODE_EN);
+ 
+-		phy_set_bits_mmd(phydev, DP83822_DEVADDR,
+-					MII_DP83822_RCSR, DP83822_RGMII_MODE_EN);
++		if (err)
++			return err;
+ 	} else {
+-		phy_clear_bits_mmd(phydev, DP83822_DEVADDR,
+-					MII_DP83822_RCSR, DP83822_RGMII_MODE_EN);
++		err = phy_clear_bits_mmd(phydev, DP83822_DEVADDR,
++					 MII_DP83822_RCSR, DP83822_RGMII_MODE_EN);
++
++		if (err)
++			return err;
+ 	}
+ 
+ 	if (dp83822->fx_enabled) {
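
The dp83822 rework builds both delay bits into one value and applies it with phy_modify_mmd(), a read-modify-write over an explicit mask, so a bit can be cleared as well as set in a single register update; the old set-bits-only path could never turn a delay off again. A sketch of the modify semantics (simplified: the kernel helper also performs the MMD bus access and error handling; the bit positions below are illustrative assumptions, not the DP83822 datasheet values):

#include <stdint.h>
#include <stdio.h>

#define RX_CLK_SHIFT	(1u << 12)	/* assumption: illustrative bits */
#define TX_CLK_SHIFT	(1u << 11)

/* Core of phy_modify(): clear every bit in 'mask', then set 'val'. */
static uint16_t reg_modify(uint16_t old, uint16_t mask, uint16_t val)
{
	return (old & ~mask) | (val & mask);
}

int main(void)
{
	uint16_t rcsr = RX_CLK_SHIFT | TX_CLK_SHIFT;	/* both delays on */
	uint16_t rgmii_delay = 0;

	rgmii_delay |= RX_CLK_SHIFT;	/* want RX delay on, TX delay off */

	rcsr = reg_modify(rcsr, RX_CLK_SHIFT | TX_CLK_SHIFT, rgmii_delay);
	printf("rcsr=%#x\n", rcsr);	/* TX bit cleared, RX bit kept */
	return 0;
}
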
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index a42df2c1bd043..813b753e21dec 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -2701,8 +2701,8 @@ EXPORT_SYMBOL(genphy_resume);
+ int genphy_loopback(struct phy_device *phydev, bool enable)
+ {
+ 	if (enable) {
+-		u16 val, ctl = BMCR_LOOPBACK;
+-		int ret;
++		u16 ctl = BMCR_LOOPBACK;
++		int ret, val;
+ 
+ 		ctl |= mii_bmcr_encode_fixed(phydev->speed, phydev->duplex);
+ 
+@@ -2954,7 +2954,7 @@ s32 phy_get_internal_delay(struct phy_device *phydev, struct device *dev,
+ 	if (delay < 0)
+ 		return delay;
+ 
+-	if (delay && size == 0)
++	if (size == 0)
+ 		return delay;
+ 
+ 	if (delay < delay_values[0] || delay > delay_values[size - 1]) {
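
Two distinct fixes in phy_device.c above. genphy_loopback() now stores the polled value in an int, because the poll helper assigns negative error codes into it and a u16 silently turns -ETIMEDOUT into a large positive, success-looking number. And phy_get_internal_delay() must bail out for size == 0 regardless of delay, since the old "delay && size == 0" test let delay == 0 fall through into indexing the empty delay_values[] table. The truncation half in two lines:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int err = -ETIMEDOUT;	/* what a poll helper reports on failure */

	uint16_t as_u16 = err;	/* silent truncation */
	int as_int = err;

	/* The u16 copy is 65426: "(err < 0)" checks stop working. */
	printf("u16=%u int=%d\n", as_u16, as_int);
	return 0;
}
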
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index a530f20ee2575..2fa46baa589e5 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -2104,6 +2104,11 @@ static const struct usb_device_id products[] = {
+ 		USB_DEVICE(0x0424, 0x9E08),
+ 		.driver_info = (unsigned long) &smsc95xx_info,
+ 	},
++	{
++		/* SYSTEC USB-SPEmodule1 10BASE-T1L Ethernet Device */
++		USB_DEVICE(0x0878, 0x1400),
++		.driver_info = (unsigned long)&smsc95xx_info,
++	},
+ 	{
+ 		/* Microchip's EVB-LAN8670-USB 10BASE-T1S Ethernet Device */
+ 		USB_DEVICE(0x184F, 0x0051),
+diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
+index 143bd4ab160df..57947a5590cca 100644
+--- a/drivers/net/usb/sr9800.c
++++ b/drivers/net/usb/sr9800.c
+@@ -737,7 +737,9 @@ static int sr9800_bind(struct usbnet *dev, struct usb_interface *intf)
+ 
+ 	data->eeprom_len = SR9800_EEPROM_LEN;
+ 
+-	usbnet_get_endpoints(dev, intf);
++	ret = usbnet_get_endpoints(dev, intf);
++	if (ret)
++		goto out;
+ 
+ 	/* LED Setting Rule :
+ 	 * AABB:CCDD
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index a2e80278eb2f9..2f3fd287378fd 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -1533,8 +1533,6 @@ static netdev_features_t veth_fix_features(struct net_device *dev,
+ 		if (peer_priv->_xdp_prog)
+ 			features &= ~NETIF_F_GSO_SOFTWARE;
+ 	}
+-	if (priv->_xdp_prog)
+-		features |= NETIF_F_GRO;
+ 
+ 	return features;
+ }
+@@ -1638,14 +1636,6 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 		}
+ 
+ 		if (!old_prog) {
+-			if (!veth_gro_requested(dev)) {
+-				/* user-space did not require GRO, but adding
+-				 * XDP is supposed to get GRO working
+-				 */
+-				dev->features |= NETIF_F_GRO;
+-				netdev_features_change(dev);
+-			}
+-
+ 			peer->hw_features &= ~NETIF_F_GSO_SOFTWARE;
+ 			peer->max_mtu = max_mtu;
+ 		}
+@@ -1661,14 +1651,6 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 			if (dev->flags & IFF_UP)
+ 				veth_disable_xdp(dev);
+ 
+-			/* if user-space did not require GRO, since adding XDP
+-			 * enabled it, clear it now
+-			 */
+-			if (!veth_gro_requested(dev)) {
+-				dev->features &= ~NETIF_F_GRO;
+-				netdev_features_change(dev);
+-			}
+-
+ 			if (peer) {
+ 				peer->hw_features |= NETIF_F_GSO_SOFTWARE;
+ 				peer->max_mtu = ETH_MAX_MTU;
+diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
+index 80ddaff759d47..a6c787454a1ae 100644
+--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
++++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
+@@ -382,12 +382,12 @@ vmxnet3_process_xdp(struct vmxnet3_adapter *adapter,
+ 	page = rbi->page;
+ 	dma_sync_single_for_cpu(&adapter->pdev->dev,
+ 				page_pool_get_dma_addr(page) +
+-				rq->page_pool->p.offset, rcd->len,
++				rq->page_pool->p.offset, rbi->len,
+ 				page_pool_get_dma_dir(rq->page_pool));
+ 
+-	xdp_init_buff(&xdp, rbi->len, &rq->xdp_rxq);
++	xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);
+ 	xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset,
+-			 rcd->len, false);
++			 rbi->len, false);
+ 	xdp_buff_clear_frags_flag(&xdp);
+ 
+ 	xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index a176653c88616..db01ec03bda00 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -251,7 +251,7 @@ static bool decrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
+ 
+ 	if (unlikely(!READ_ONCE(keypair->receiving.is_valid) ||
+ 		  wg_birthdate_has_expired(keypair->receiving.birthdate, REJECT_AFTER_TIME) ||
+-		  keypair->receiving_counter.counter >= REJECT_AFTER_MESSAGES)) {
++		  READ_ONCE(keypair->receiving_counter.counter) >= REJECT_AFTER_MESSAGES)) {
+ 		WRITE_ONCE(keypair->receiving.is_valid, false);
+ 		return false;
+ 	}
+@@ -318,7 +318,7 @@ static bool counter_validate(struct noise_replay_counter *counter, u64 their_cou
+ 		for (i = 1; i <= top; ++i)
+ 			counter->backtrack[(i + index_current) &
+ 				((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0;
+-		counter->counter = their_counter;
++		WRITE_ONCE(counter->counter, their_counter);
+ 	}
+ 
+ 	index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1;
+@@ -463,7 +463,7 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
+ 			net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n",
+ 					    peer->device->dev->name,
+ 					    PACKET_CB(skb)->nonce,
+-					    keypair->receiving_counter.counter);
++					    READ_ONCE(keypair->receiving_counter.counter));
+ 			goto next;
+ 		}
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 6b6aa3c367448..0ce08e9a0a3d2 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -851,6 +851,10 @@ ath10k_wmi_tlv_op_pull_mgmt_tx_compl_ev(struct ath10k *ar, struct sk_buff *skb,
+ 	}
+ 
+ 	ev = tb[WMI_TLV_TAG_STRUCT_MGMT_TX_COMPL_EVENT];
++	if (!ev) {
++		kfree(tb);
++		return -EPROTO;
++	}
+ 
+ 	arg->desc_id = ev->desc_id;
+ 	arg->status = ev->status;
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 71c6dab1aedba..1c4613b1bd225 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -2297,6 +2297,8 @@ static void ath11k_peer_assoc_h_he(struct ath11k *ar,
+ 	mcs_160_map = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_160);
+ 	mcs_80_map = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_80);
+ 
++	/* Initialize rx_mcs_160 to 9 which is an invalid value */
++	rx_mcs_160 = 9;
+ 	if (support_160) {
+ 		for (i = 7; i >= 0; i--) {
+ 			u8 mcs_160 = (mcs_160_map >> (2 * i)) & 3;
+@@ -2308,6 +2310,8 @@ static void ath11k_peer_assoc_h_he(struct ath11k *ar,
+ 		}
+ 	}
+ 
++	/* Initialize rx_mcs_80 to 9 which is an invalid value */
++	rx_mcs_80 = 9;
+ 	for (i = 7; i >= 0; i--) {
+ 		u8 mcs_80 = (mcs_80_map >> (2 * i)) & 3;
+ 
+@@ -3026,7 +3030,14 @@ static void ath11k_bss_assoc(struct ieee80211_hw *hw,
+ 
+ 	rcu_read_unlock();
+ 
++	if (!ath11k_mac_vif_recalc_sta_he_txbf(ar, vif, &he_cap)) {
++		ath11k_warn(ar->ab, "failed to recalc he txbf for vdev %i on bss %pM\n",
++			    arvif->vdev_id, bss_conf->bssid);
++		return;
++	}
++
+ 	peer_arg.is_assoc = true;
++
+ 	ret = ath11k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
+ 	if (ret) {
+ 		ath11k_warn(ar->ab, "failed to run peer assoc for %pM vdev %i: %d\n",
+@@ -3049,12 +3060,6 @@ static void ath11k_bss_assoc(struct ieee80211_hw *hw,
+ 		return;
+ 	}
+ 
+-	if (!ath11k_mac_vif_recalc_sta_he_txbf(ar, vif, &he_cap)) {
+-		ath11k_warn(ar->ab, "failed to recalc he txbf for vdev %i on bss %pM\n",
+-			    arvif->vdev_id, bss_conf->bssid);
+-		return;
+-	}
+-
+ 	WARN_ON(arvif->is_up);
+ 
+ 	arvif->aid = vif->cfg.aid;
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index 68c42ca44fcb5..d67494de44920 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -418,7 +418,7 @@ struct ath12k_sta {
+ };
+ 
+ #define ATH12K_MIN_5G_FREQ 4150
+-#define ATH12K_MIN_6G_FREQ 5945
++#define ATH12K_MIN_6G_FREQ 5925
+ #define ATH12K_MAX_6G_FREQ 7115
+ #define ATH12K_NUM_CHANS 100
+ #define ATH12K_MAX_5G_CHAN 173
+diff --git a/drivers/net/wireless/ath/ath12k/dbring.c b/drivers/net/wireless/ath/ath12k/dbring.c
+index 8fbf868e6f7ec..788160c84c686 100644
+--- a/drivers/net/wireless/ath/ath12k/dbring.c
++++ b/drivers/net/wireless/ath/ath12k/dbring.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include "core.h"
+diff --git a/drivers/net/wireless/ath/ath12k/debug.c b/drivers/net/wireless/ath/ath12k/debug.c
+index 45d33279e665d..fe5a732ba9ec9 100644
+--- a/drivers/net/wireless/ath/ath12k/debug.c
++++ b/drivers/net/wireless/ath/ath12k/debug.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/vmalloc.h>
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index 492ca6ce67140..62f9cdbb811c0 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include "core.h"
+diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
+index b896dfe66dad3..1bdab8604db94 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.c
++++ b/drivers/net/wireless/ath/ath12k/hal.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include <linux/dma-mapping.h>
+ #include "hal_tx.h"
+@@ -449,8 +449,8 @@ static u8 *ath12k_hw_qcn9274_rx_desc_mpdu_start_addr2(struct hal_rx_desc *desc)
+ 
+ static bool ath12k_hw_qcn9274_rx_desc_is_da_mcbc(struct hal_rx_desc *desc)
+ {
+-	return __le16_to_cpu(desc->u.qcn9274.msdu_end.info5) &
+-	       RX_MSDU_END_INFO5_DA_IS_MCBC;
++	return __le32_to_cpu(desc->u.qcn9274.mpdu_start.info6) &
++	       RX_MPDU_START_INFO6_MCAST_BCAST;
+ }
+ 
+ static void ath12k_hw_qcn9274_rx_desc_get_dot11_hdr(struct hal_rx_desc *desc,
+diff --git a/drivers/net/wireless/ath/ath12k/hal.h b/drivers/net/wireless/ath/ath12k/hal.h
+index 66035a787c728..fc47e7e6b498a 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.h
++++ b/drivers/net/wireless/ath/ath12k/hal.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_HAL_H
+diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.c b/drivers/net/wireless/ath/ath12k/hal_rx.c
+index f6afbd8196bf5..4f25eb9f77453 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_rx.c
++++ b/drivers/net/wireless/ath/ath12k/hal_rx.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include "debug.h"
+diff --git a/drivers/net/wireless/ath/ath12k/hif.h b/drivers/net/wireless/ath/ath12k/hif.h
+index 4095fd82b1b3f..c653ca1f59b22 100644
+--- a/drivers/net/wireless/ath/ath12k/hif.h
++++ b/drivers/net/wireless/ath/ath12k/hif.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_HIF_H
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index b55cf33e37bd8..de60d988d8608 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/types.h>
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 2d6427cf41a4d..d2622bfef9422 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_HW_H
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index b698e55a5b7bf..237fb8e7cfa46 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <net/mac80211.h>
+@@ -5261,7 +5261,7 @@ ath12k_mac_get_vdev_stats_id(struct ath12k_vif *arvif)
+ 	do {
+ 		if (ab->free_vdev_stats_id_map & (1LL << vdev_stats_id)) {
+ 			vdev_stats_id++;
+-			if (vdev_stats_id <= ATH12K_INVAL_VDEV_STATS_ID) {
++			if (vdev_stats_id >= ATH12K_MAX_VDEV_STATS_ID) {
+ 				vdev_stats_id = ATH12K_INVAL_VDEV_STATS_ID;
+ 				break;
+ 			}
+@@ -7188,7 +7188,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 	}
+ 
+ 	if (supported_bands & WMI_HOST_WLAN_5G_CAP) {
+-		if (reg_cap->high_5ghz_chan >= ATH12K_MAX_6G_FREQ) {
++		if (reg_cap->high_5ghz_chan >= ATH12K_MIN_6G_FREQ) {
+ 			channels = kmemdup(ath12k_6ghz_channels,
+ 					   sizeof(ath12k_6ghz_channels), GFP_KERNEL);
+ 			if (!channels) {
+diff --git a/drivers/net/wireless/ath/ath12k/mac.h b/drivers/net/wireless/ath/ath12k/mac.h
+index 59b4e8f5eee05..7d71ae1aba45e 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.h
++++ b/drivers/net/wireless/ath/ath12k/mac.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_MAC_H
+diff --git a/drivers/net/wireless/ath/ath12k/mhi.c b/drivers/net/wireless/ath/ath12k/mhi.c
+index 39e640293cdc0..27eb38b2b1bd2 100644
+--- a/drivers/net/wireless/ath/ath12k/mhi.c
++++ b/drivers/net/wireless/ath/ath12k/mhi.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/msi.h>
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index 3006cd3fbe119..a6a5f9bcffbd6 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/module.h>
+diff --git a/drivers/net/wireless/ath/ath12k/pci.h b/drivers/net/wireless/ath/ath12k/pci.h
+index 0f24fd9395cd9..9a17a7dcdd6a6 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.h
++++ b/drivers/net/wireless/ath/ath12k/pci.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #ifndef ATH12K_PCI_H
+ #define ATH12K_PCI_H
+diff --git a/drivers/net/wireless/ath/ath12k/peer.h b/drivers/net/wireless/ath/ath12k/peer.h
+index c6edb24cbedd8..7b3500b5c8c20 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.h
++++ b/drivers/net/wireless/ath/ath12k/peer.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_PEER_H
+diff --git a/drivers/net/wireless/ath/ath12k/qmi.c b/drivers/net/wireless/ath/ath12k/qmi.c
+index f6e949c618d0a..77a132f6bbd1b 100644
+--- a/drivers/net/wireless/ath/ath12k/qmi.c
++++ b/drivers/net/wireless/ath/ath12k/qmi.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/elf.h>
+diff --git a/drivers/net/wireless/ath/ath12k/qmi.h b/drivers/net/wireless/ath/ath12k/qmi.h
+index e20d6511d1ca0..e25bbaa125e83 100644
+--- a/drivers/net/wireless/ath/ath12k/qmi.h
++++ b/drivers/net/wireless/ath/ath12k/qmi.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_QMI_H
+diff --git a/drivers/net/wireless/ath/ath12k/reg.c b/drivers/net/wireless/ath/ath12k/reg.c
+index 5c006256c82ad..1dac8439802ad 100644
+--- a/drivers/net/wireless/ath/ath12k/reg.c
++++ b/drivers/net/wireless/ath/ath12k/reg.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include <linux/rtnetlink.h>
+ #include "core.h"
+@@ -103,7 +103,7 @@ int ath12k_reg_update_chan_list(struct ath12k *ar)
+ 
+ 	bands = hw->wiphy->bands;
+ 	for (band = 0; band < NUM_NL80211_BANDS; band++) {
+-		if (!bands[band])
++		if (!(ar->mac.sbands[band].channels && bands[band]))
+ 			continue;
+ 
+ 		for (i = 0; i < bands[band]->n_channels; i++) {
+@@ -129,7 +129,7 @@ int ath12k_reg_update_chan_list(struct ath12k *ar)
+ 	ch = arg->channel;
+ 
+ 	for (band = 0; band < NUM_NL80211_BANDS; band++) {
+-		if (!bands[band])
++		if (!(ar->mac.sbands[band].channels && bands[band]))
+ 			continue;
+ 
+ 		for (i = 0; i < bands[band]->n_channels; i++) {
+diff --git a/drivers/net/wireless/ath/ath12k/reg.h b/drivers/net/wireless/ath/ath12k/reg.h
+index 35569f03042d3..d4a0776e10341 100644
+--- a/drivers/net/wireless/ath/ath12k/reg.h
++++ b/drivers/net/wireless/ath/ath12k/reg.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_REG_H
+diff --git a/drivers/net/wireless/ath/ath12k/rx_desc.h b/drivers/net/wireless/ath/ath12k/rx_desc.h
+index c4058abc516ee..55f20c446ca9c 100644
+--- a/drivers/net/wireless/ath/ath12k/rx_desc.h
++++ b/drivers/net/wireless/ath/ath12k/rx_desc.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #ifndef ATH12K_RX_DESC_H
+ #define ATH12K_RX_DESC_H
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 0e5bf5ce8d4c3..11cc3005c0f98 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include <linux/skbuff.h>
+ #include <linux/ctype.h>
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index 237f4ec2cffd7..6c33e898b3000 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -306,7 +306,6 @@ struct ath9k_htc_tx {
+ 	DECLARE_BITMAP(tx_slot, MAX_TX_BUF_NUM);
+ 	struct timer_list cleanup_timer;
+ 	spinlock_t tx_lock;
+-	bool initialized;
+ };
+ 
+ struct ath9k_htc_tx_ctl {
+@@ -515,6 +514,7 @@ struct ath9k_htc_priv {
+ 	unsigned long ps_usecount;
+ 	bool ps_enabled;
+ 	bool ps_idle;
++	bool initialized;
+ 
+ #ifdef CONFIG_MAC80211_LEDS
+ 	enum led_brightness brightness;
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index dae3d9c7b6408..fc339079ee8c9 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -966,6 +966,10 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ 
+ 	htc_handle->drv_priv = priv;
+ 
++	/* Allow ath9k_wmi_event_tasklet() to operate. */
++	smp_wmb();
++	priv->initialized = true;
++
+ 	return 0;
+ 
+ err_init:
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index efcaeccb055aa..ce9c04e418b8d 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -815,10 +815,6 @@ int ath9k_tx_init(struct ath9k_htc_priv *priv)
+ 	skb_queue_head_init(&priv->tx.data_vo_queue);
+ 	skb_queue_head_init(&priv->tx.tx_failed);
+ 
+-	/* Allow ath9k_wmi_event_tasklet(WMI_TXSTATUS_EVENTID) to operate. */
+-	smp_wmb();
+-	priv->tx.initialized = true;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index 1476b42b52a91..805ad31edba2b 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -155,6 +155,12 @@ void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
+ 		}
+ 		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
++		/* Check if ath9k_htc_probe_device() completed. */
++		if (!data_race(priv->initialized)) {
++			kfree_skb(skb);
++			continue;
++		}
++
+ 		hdr = (struct wmi_cmd_hdr *) skb->data;
+ 		cmd_id = be16_to_cpu(hdr->command_id);
+ 		wmi_event = skb_pull(skb, sizeof(struct wmi_cmd_hdr));
+@@ -169,10 +175,6 @@ void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
+ 					     &wmi->drv_priv->fatal_work);
+ 			break;
+ 		case WMI_TXSTATUS_EVENTID:
+-			/* Check if ath9k_tx_init() completed. */
+-			if (!data_race(priv->tx.initialized))
+-				break;
+-
+ 			spin_lock_bh(&priv->tx.tx_lock);
+ 			if (priv->tx.flags & ATH9K_HTC_OP_TX_DRAIN) {
+ 				spin_unlock_bh(&priv->tx.tx_lock);
+diff --git a/drivers/net/wireless/broadcom/b43/b43.h b/drivers/net/wireless/broadcom/b43/b43.h
+index 67b4bac048e58..c0d8fc0b22fb2 100644
+--- a/drivers/net/wireless/broadcom/b43/b43.h
++++ b/drivers/net/wireless/broadcom/b43/b43.h
+@@ -1082,6 +1082,22 @@ static inline bool b43_using_pio_transfers(struct b43_wldev *dev)
+ 	return dev->__using_pio_transfers;
+ }
+ 
++static inline void b43_wake_queue(struct b43_wldev *dev, int queue_prio)
++{
++	if (dev->qos_enabled)
++		ieee80211_wake_queue(dev->wl->hw, queue_prio);
++	else
++		ieee80211_wake_queue(dev->wl->hw, 0);
++}
++
++static inline void b43_stop_queue(struct b43_wldev *dev, int queue_prio)
++{
++	if (dev->qos_enabled)
++		ieee80211_stop_queue(dev->wl->hw, queue_prio);
++	else
++		ieee80211_stop_queue(dev->wl->hw, 0);
++}
++
+ /* Message printing */
+ __printf(2, 3) void b43info(struct b43_wl *wl, const char *fmt, ...);
+ __printf(2, 3) void b43err(struct b43_wl *wl, const char *fmt, ...);
+diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c
+index 760d1a28edc6c..6ac7dcebfff9d 100644
+--- a/drivers/net/wireless/broadcom/b43/dma.c
++++ b/drivers/net/wireless/broadcom/b43/dma.c
+@@ -1399,7 +1399,7 @@ int b43_dma_tx(struct b43_wldev *dev, struct sk_buff *skb)
+ 	    should_inject_overflow(ring)) {
+ 		/* This TX ring is full. */
+ 		unsigned int skb_mapping = skb_get_queue_mapping(skb);
+-		ieee80211_stop_queue(dev->wl->hw, skb_mapping);
++		b43_stop_queue(dev, skb_mapping);
+ 		dev->wl->tx_queue_stopped[skb_mapping] = true;
+ 		ring->stopped = true;
+ 		if (b43_debug(dev, B43_DBG_DMAVERBOSE)) {
+@@ -1570,7 +1570,7 @@ void b43_dma_handle_txstatus(struct b43_wldev *dev,
+ 	} else {
+ 		/* If the driver queue is running wake the corresponding
+ 		 * mac80211 queue. */
+-		ieee80211_wake_queue(dev->wl->hw, ring->queue_prio);
++		b43_wake_queue(dev, ring->queue_prio);
+ 		if (b43_debug(dev, B43_DBG_DMAVERBOSE)) {
+ 			b43dbg(dev->wl, "Woke up TX ring %d\n", ring->index);
+ 		}
+diff --git a/drivers/net/wireless/broadcom/b43/main.c b/drivers/net/wireless/broadcom/b43/main.c
+index 92ca0b2ca286d..effb6c23f8257 100644
+--- a/drivers/net/wireless/broadcom/b43/main.c
++++ b/drivers/net/wireless/broadcom/b43/main.c
+@@ -2587,7 +2587,8 @@ static void b43_request_firmware(struct work_struct *work)
+ 
+ start_ieee80211:
+ 	wl->hw->queues = B43_QOS_QUEUE_NUM;
+-	if (!modparam_qos || dev->fw.opensource)
++	if (!modparam_qos || dev->fw.opensource ||
++	    dev->dev->chip_id == BCMA_CHIP_ID_BCM4331)
+ 		wl->hw->queues = 1;
+ 
+ 	err = ieee80211_register_hw(wl->hw);
+@@ -3603,7 +3604,7 @@ static void b43_tx_work(struct work_struct *work)
+ 				err = b43_dma_tx(dev, skb);
+ 			if (err == -ENOSPC) {
+ 				wl->tx_queue_stopped[queue_num] = true;
+-				ieee80211_stop_queue(wl->hw, queue_num);
++				b43_stop_queue(dev, queue_num);
+ 				skb_queue_head(&wl->tx_queue[queue_num], skb);
+ 				break;
+ 			}
+@@ -3627,6 +3628,7 @@ static void b43_op_tx(struct ieee80211_hw *hw,
+ 		      struct sk_buff *skb)
+ {
+ 	struct b43_wl *wl = hw_to_b43_wl(hw);
++	u16 skb_queue_mapping;
+ 
+ 	if (unlikely(skb->len < 2 + 2 + 6)) {
+ 		/* Too short, this can't be a valid frame. */
+@@ -3635,12 +3637,12 @@ static void b43_op_tx(struct ieee80211_hw *hw,
+ 	}
+ 	B43_WARN_ON(skb_shinfo(skb)->nr_frags);
+ 
+-	skb_queue_tail(&wl->tx_queue[skb->queue_mapping], skb);
+-	if (!wl->tx_queue_stopped[skb->queue_mapping]) {
++	skb_queue_mapping = skb_get_queue_mapping(skb);
++	skb_queue_tail(&wl->tx_queue[skb_queue_mapping], skb);
++	if (!wl->tx_queue_stopped[skb_queue_mapping])
+ 		ieee80211_queue_work(wl->hw, &wl->tx_work);
+-	} else {
+-		ieee80211_stop_queue(wl->hw, skb->queue_mapping);
+-	}
++	else
++		b43_stop_queue(wl->current_dev, skb_queue_mapping);
+ }
+ 
+ static void b43_qos_params_upload(struct b43_wldev *dev,
+diff --git a/drivers/net/wireless/broadcom/b43/pio.c b/drivers/net/wireless/broadcom/b43/pio.c
+index 0cf70fdb60a6a..e41f2f5b4c266 100644
+--- a/drivers/net/wireless/broadcom/b43/pio.c
++++ b/drivers/net/wireless/broadcom/b43/pio.c
+@@ -525,7 +525,7 @@ int b43_pio_tx(struct b43_wldev *dev, struct sk_buff *skb)
+ 	if (total_len > (q->buffer_size - q->buffer_used)) {
+ 		/* Not enough memory on the queue. */
+ 		err = -EBUSY;
+-		ieee80211_stop_queue(dev->wl->hw, skb_get_queue_mapping(skb));
++		b43_stop_queue(dev, skb_get_queue_mapping(skb));
+ 		q->stopped = true;
+ 		goto out;
+ 	}
+@@ -552,7 +552,7 @@ int b43_pio_tx(struct b43_wldev *dev, struct sk_buff *skb)
+ 	if (((q->buffer_size - q->buffer_used) < roundup(2 + 2 + 6, 4)) ||
+ 	    (q->free_packet_slots == 0)) {
+ 		/* The queue is full. */
+-		ieee80211_stop_queue(dev->wl->hw, skb_get_queue_mapping(skb));
++		b43_stop_queue(dev, skb_get_queue_mapping(skb));
+ 		q->stopped = true;
+ 	}
+ 
+@@ -587,7 +587,7 @@ void b43_pio_handle_txstatus(struct b43_wldev *dev,
+ 	list_add(&pack->list, &q->packets_list);
+ 
+ 	if (q->stopped) {
+-		ieee80211_wake_queue(dev->wl->hw, q->queue_prio);
++		b43_wake_queue(dev, q->queue_prio);
+ 		q->stopped = false;
+ 	}
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 44cea18dd20ed..8facd40d713e6 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -4322,6 +4322,9 @@ brcmf_pmksa_v3_op(struct brcmf_if *ifp, struct cfg80211_pmksa *pmksa,
+ 	int ret;
+ 
+ 	pmk_op = kzalloc(sizeof(*pmk_op), GFP_KERNEL);
++	if (!pmk_op)
++		return -ENOMEM;
++
+ 	pmk_op->version = cpu_to_le16(BRCMF_PMKSA_VER_3);
+ 
+ 	if (!pmksa) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
+index ccc621b8ed9f2..4a1fe982a948e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_cmn.c
+@@ -383,8 +383,9 @@ struct shared_phy *wlc_phy_shared_attach(struct shared_phy_params *shp)
+ 	return sh;
+ }
+ 
+-static void wlc_phy_timercb_phycal(struct brcms_phy *pi)
++static void wlc_phy_timercb_phycal(void *ptr)
+ {
++	struct brcms_phy *pi = ptr;
+ 	uint delay = 5;
+ 
+ 	if (PHY_PERICAL_MPHASE_PENDING(pi)) {
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c
+index a0de5db0cd646..b723817915365 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.c
+@@ -57,12 +57,11 @@ void wlc_phy_shim_detach(struct phy_shim_info *physhim)
+ }
+ 
+ struct wlapi_timer *wlapi_init_timer(struct phy_shim_info *physhim,
+-				     void (*fn)(struct brcms_phy *pi),
++				     void (*fn)(void *pi),
+ 				     void *arg, const char *name)
+ {
+ 	return (struct wlapi_timer *)
+-			brcms_init_timer(physhim->wl, (void (*)(void *))fn,
+-					 arg, name);
++			brcms_init_timer(physhim->wl, fn, arg, name);
+ }
+ 
+ void wlapi_free_timer(struct wlapi_timer *t)
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h
+index dd8774717adee..27d0934e600ed 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy_shim.h
+@@ -131,7 +131,7 @@ void wlc_phy_shim_detach(struct phy_shim_info *physhim);
+ 
+ /* PHY to WL utility functions */
+ struct wlapi_timer *wlapi_init_timer(struct phy_shim_info *physhim,
+-				     void (*fn)(struct brcms_phy *pi),
++				     void (*fn)(void *pi),
+ 				     void *arg, const char *name);
+ void wlapi_free_timer(struct wlapi_timer *t);
+ void wlapi_add_timer(struct wlapi_timer *t, uint ms, int periodic);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index dcc4810cb3247..e161b44539069 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -767,7 +767,7 @@ int iwl_sar_get_ewrd_table(struct iwl_fw_runtime *fwrt)
+ 	 * from index 1, so the maximum value allowed here is
+ 	 * ACPI_SAR_PROFILES_NUM - 1.
+ 	 */
+-	if (n_profiles <= 0 || n_profiles >= ACPI_SAR_PROFILE_NUM) {
++	if (n_profiles >= ACPI_SAR_PROFILE_NUM) {
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+@@ -1296,7 +1296,6 @@ void iwl_acpi_get_phy_filters(struct iwl_fw_runtime *fwrt,
+ 	if (IS_ERR(data))
+ 		return;
+ 
+-	/* try to read wtas table revision 1 or revision 0*/
+ 	wifi_pkg = iwl_acpi_get_wifi_pkg(fwrt->dev, data,
+ 					 ACPI_WPFC_WIFI_DATA_SIZE,
+ 					 &tbl_rev);
+@@ -1306,13 +1305,14 @@ void iwl_acpi_get_phy_filters(struct iwl_fw_runtime *fwrt,
+ 	if (tbl_rev != 0)
+ 		goto out_free;
+ 
+-	BUILD_BUG_ON(ARRAY_SIZE(filters->filter_cfg_chains) != ACPI_WPFC_WIFI_DATA_SIZE);
++	BUILD_BUG_ON(ARRAY_SIZE(filters->filter_cfg_chains) !=
++		     ACPI_WPFC_WIFI_DATA_SIZE - 1);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(filters->filter_cfg_chains); i++) {
+-		if (wifi_pkg->package.elements[i].type != ACPI_TYPE_INTEGER)
+-			return;
++		if (wifi_pkg->package.elements[i + 1].type != ACPI_TYPE_INTEGER)
++			goto out_free;
+ 		tmp.filter_cfg_chains[i] =
+-			cpu_to_le32(wifi_pkg->package.elements[i].integer.value);
++			cpu_to_le32(wifi_pkg->package.elements[i + 1].integer.value);
+ 	}
+ 
+ 	IWL_DEBUG_RADIO(fwrt, "Loaded WPFC filter config from ACPI\n");
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
+index e9277f6f35821..39106ccb4b9b6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
+@@ -56,7 +56,7 @@
+ #define ACPI_EWRD_WIFI_DATA_SIZE_REV2	((ACPI_SAR_PROFILE_NUM - 1) * \
+ 					 ACPI_SAR_NUM_CHAINS_REV2 * \
+ 					 ACPI_SAR_NUM_SUB_BANDS_REV2 + 3)
+-#define ACPI_WPFC_WIFI_DATA_SIZE	4 /* 4 filter config words */
++#define ACPI_WPFC_WIFI_DATA_SIZE	5 /* domain and 4 filter config words */
+ 
+ /* revision 0 and 1 are identical, except for the semantics in the FW */
+ #define ACPI_GEO_NUM_BANDS_REV0		2
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h b/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
+index 9c69d36743846..e6c0f928a6bbf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2005-2014, 2019-2021, 2023 Intel Corporation
++ * Copyright (C) 2005-2014, 2019-2021, 2023-2024 Intel Corporation
+  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
+  */
+@@ -66,6 +66,16 @@ enum iwl_gen2_tx_fifo {
+ 	IWL_GEN2_TRIG_TX_FIFO_VO,
+ };
+ 
++enum iwl_bz_tx_fifo {
++	IWL_BZ_EDCA_TX_FIFO_BK,
++	IWL_BZ_EDCA_TX_FIFO_BE,
++	IWL_BZ_EDCA_TX_FIFO_VI,
++	IWL_BZ_EDCA_TX_FIFO_VO,
++	IWL_BZ_TRIG_TX_FIFO_BK,
++	IWL_BZ_TRIG_TX_FIFO_BE,
++	IWL_BZ_TRIG_TX_FIFO_VI,
++	IWL_BZ_TRIG_TX_FIFO_VO,
++};
+ /**
+  * enum iwl_tx_queue_cfg_actions - TXQ config options
+  * @TX_QUEUE_CFG_ENABLE_QUEUE: enable a queue
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+index 650e4bde9c17b..56ee0ceed78ab 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+@@ -255,21 +255,27 @@ static u8 *iwl_get_pnvm_image(struct iwl_trans *trans_p, size_t *len)
+ 	struct pnvm_sku_package *package;
+ 	u8 *image = NULL;
+ 
+-	/* First attempt to get the PNVM from BIOS */
+-	package = iwl_uefi_get_pnvm(trans_p, len);
+-	if (!IS_ERR_OR_NULL(package)) {
+-		if (*len >= sizeof(*package)) {
+-			/* we need only the data */
+-			*len -= sizeof(*package);
+-			image = kmemdup(package->data, *len, GFP_KERNEL);
++	/* Get PNVM from BIOS for non-Intel SKU */
++	if (trans_p->sku_id[2]) {
++		package = iwl_uefi_get_pnvm(trans_p, len);
++		if (!IS_ERR_OR_NULL(package)) {
++			if (*len >= sizeof(*package)) {
++				/* we need only the data */
++				*len -= sizeof(*package);
++				image = kmemdup(package->data,
++						*len, GFP_KERNEL);
++			}
++			/*
++			 * free package regardless of whether kmemdup
++			 * succeeded
++			 */
++			kfree(package);
++			if (image)
++				return image;
+ 		}
+-		/* free package regardless of whether kmemdup succeeded */
+-		kfree(package);
+-		if (image)
+-			return image;
+ 	}
+ 
+-	/* If it's not available, try from the filesystem */
++	/* If it's not available, or for Intel SKU, try from the filesystem */
+ 	if (iwl_pnvm_get_from_fs(trans_p, &image, len))
+ 		return NULL;
+ 	return image;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+index 357727774db90..baabb8e321f8d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+@@ -173,9 +173,9 @@ struct iwl_fw_runtime {
+ 	struct iwl_sar_offset_mapping_cmd sgom_table;
+ 	bool sgom_enabled;
+ 	u8 reduced_power_flags;
+-	bool uats_enabled;
+ 	struct iwl_uats_table_cmd uats_table;
+ #endif
++	bool uats_enabled;
+ };
+ 
+ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 9160d81a871ee..55d2ed13a9c09 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -103,6 +103,12 @@ static int iwl_dbg_tlv_alloc_debug_info(struct iwl_trans *trans,
+ 	if (le32_to_cpu(tlv->length) != sizeof(*debug_info))
+ 		return -EINVAL;
+ 
++	/* we use this as a string, ensure input was NUL terminated */
++	if (strnlen(debug_info->debug_cfg_name,
++		    sizeof(debug_info->debug_cfg_name)) ==
++			sizeof(debug_info->debug_cfg_name))
++		return -EINVAL;
++
+ 	IWL_DEBUG_FW(trans, "WRT: Loading debug cfg: %s\n",
+ 		     debug_info->debug_cfg_name);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index 678c4a071869f..d85cadd8e16cb 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -2090,7 +2090,7 @@ struct iwl_nvm_data *iwl_get_nvm(struct iwl_trans *trans,
+ 		!!(mac_flags & NVM_MAC_SKU_FLAGS_BAND_5_2_ENABLED);
+ 	nvm->sku_cap_mimo_disabled =
+ 		!!(mac_flags & NVM_MAC_SKU_FLAGS_MIMO_DISABLED);
+-	if (CSR_HW_RFID_TYPE(trans->hw_rf_id) == IWL_CFG_RF_TYPE_FM)
++	if (CSR_HW_RFID_TYPE(trans->hw_rf_id) >= IWL_CFG_RF_TYPE_FM)
+ 		nvm->sku_cap_11be_enable = true;
+ 
+ 	/* Initialize PHY sku data */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 92c45571bd691..d99a712e7f44d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -461,12 +461,10 @@ static int iwl_mvm_wowlan_config_rsc_tsc(struct iwl_mvm *mvm,
+ 		struct wowlan_key_rsc_v5_data data = {};
+ 		int i;
+ 
+-		data.rsc = kmalloc(sizeof(*data.rsc), GFP_KERNEL);
++		data.rsc = kzalloc(sizeof(*data.rsc), GFP_KERNEL);
+ 		if (!data.rsc)
+ 			return -ENOMEM;
+ 
+-		memset(data.rsc, 0xff, sizeof(*data.rsc));
+-
+ 		for (i = 0; i < ARRAY_SIZE(data.rsc->mcast_key_id_map); i++)
+ 			data.rsc->mcast_key_id_map[i] =
+ 				IWL_MCAST_KEY_MAP_INVALID;
+@@ -1286,7 +1284,9 @@ static int __iwl_mvm_suspend(struct ieee80211_hw *hw,
+ 
+ 		mvm->net_detect = true;
+ 	} else {
+-		struct iwl_wowlan_config_cmd wowlan_config_cmd = {};
++		struct iwl_wowlan_config_cmd wowlan_config_cmd = {
++			.offloading_tid = 0,
++		};
+ 
+ 		wowlan_config_cmd.sta_id = mvmvif->deflink.ap_sta_id;
+ 
+@@ -1298,6 +1298,11 @@ static int __iwl_mvm_suspend(struct ieee80211_hw *hw,
+ 			goto out_noreset;
+ 		}
+ 
++		ret = iwl_mvm_sta_ensure_queue(
++			mvm, ap_sta->txq[wowlan_config_cmd.offloading_tid]);
++		if (ret)
++			goto out_noreset;
++
+ 		ret = iwl_mvm_get_wowlan_config(mvm, wowlan, &wowlan_config_cmd,
+ 						vif, mvmvif, ap_sta);
+ 		if (ret)
+@@ -2200,7 +2205,10 @@ static void iwl_mvm_convert_gtk_v3(struct iwl_wowlan_status_data *status,
+ static void iwl_mvm_convert_igtk(struct iwl_wowlan_status_data *status,
+ 				 struct iwl_wowlan_igtk_status *data)
+ {
++	int i;
++
+ 	BUILD_BUG_ON(sizeof(status->igtk.key) < sizeof(data->key));
++	BUILD_BUG_ON(sizeof(status->igtk.ipn) != sizeof(data->ipn));
+ 
+ 	if (!data->key_len)
+ 		return;
+@@ -2212,7 +2220,10 @@ static void iwl_mvm_convert_igtk(struct iwl_wowlan_status_data *status,
+ 		+ WOWLAN_IGTK_MIN_INDEX;
+ 
+ 	memcpy(status->igtk.key, data->key, sizeof(data->key));
+-	memcpy(status->igtk.ipn, data->ipn, sizeof(data->ipn));
++
++	/* mac80211 expects big endian for memcmp() to work, convert */
++	for (i = 0; i < sizeof(data->ipn); i++)
++		status->igtk.ipn[i] = data->ipn[sizeof(data->ipn) - i - 1];
+ }
+ 
+ static void iwl_mvm_convert_bigtk(struct iwl_wowlan_status_data *status,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index c4f96125cf33a..25a5a31e63c2a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -31,6 +31,17 @@ const u8 iwl_mvm_ac_to_gen2_tx_fifo[] = {
+ 	IWL_GEN2_TRIG_TX_FIFO_BK,
+ };
+ 
++const u8 iwl_mvm_ac_to_bz_tx_fifo[] = {
++	IWL_BZ_EDCA_TX_FIFO_VO,
++	IWL_BZ_EDCA_TX_FIFO_VI,
++	IWL_BZ_EDCA_TX_FIFO_BE,
++	IWL_BZ_EDCA_TX_FIFO_BK,
++	IWL_BZ_TRIG_TX_FIFO_VO,
++	IWL_BZ_TRIG_TX_FIFO_VI,
++	IWL_BZ_TRIG_TX_FIFO_BE,
++	IWL_BZ_TRIG_TX_FIFO_BK,
++};
++
+ struct iwl_mvm_mac_iface_iterator_data {
+ 	struct iwl_mvm *mvm;
+ 	struct ieee80211_vif *vif;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 350dc6b8a3302..68c3d54f587cc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -3684,6 +3684,19 @@ iwl_mvm_sta_state_notexist_to_none(struct iwl_mvm *mvm,
+ 	if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
+ 		mvmvif->ap_sta = sta;
+ 
++	/*
++	 * Initialize the rates here already - this really tells
++	 * the firmware only what the supported legacy rates are
++	 * (may be) since it's initialized already from what the
++	 * AP advertised in the beacon/probe response. This will
++	 * allow the firmware to send auth/assoc frames with one
++	 * of the supported rates already, rather than having to
++	 * use a mandatory rate.
++	 * If we're the AP, we'll just assume mandatory rates at
++	 * this point, but we know nothing about the STA anyway.
++	 */
++	iwl_mvm_rs_rate_init_all_links(mvm, vif, sta);
++
+ 	return 0;
+ }
+ 
+@@ -3782,13 +3795,17 @@ iwl_mvm_sta_state_assoc_to_authorized(struct iwl_mvm *mvm,
+ 
+ 	mvm_sta->authorized = true;
+ 
+-	iwl_mvm_rs_rate_init_all_links(mvm, vif, sta);
+-
+ 	/* MFP is set by default before the station is authorized.
+ 	 * Clear it here in case it's not used.
+ 	 */
+-	if (!sta->mfp)
+-		return callbacks->update_sta(mvm, vif, sta);
++	if (!sta->mfp) {
++		int ret = callbacks->update_sta(mvm, vif, sta);
++
++		if (ret)
++			return ret;
++	}
++
++	iwl_mvm_rs_rate_init_all_links(mvm, vif, sta);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+index ea3e9e9c6e26c..fe4b39b19a612 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2022 - 2023 Intel Corporation
++ * Copyright (C) 2022 - 2024 Intel Corporation
+  */
+ #include <linux/kernel.h>
+ #include <net/mac80211.h>
+@@ -62,11 +62,13 @@ u32 iwl_mvm_get_sec_flags(struct iwl_mvm *mvm,
+ 			  struct ieee80211_key_conf *keyconf)
+ {
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
++	bool pairwise = keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE;
++	bool igtk = keyconf->keyidx == 4 || keyconf->keyidx == 5;
+ 	u32 flags = 0;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+-	if (!(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE))
++	if (!pairwise)
+ 		flags |= IWL_SEC_KEY_FLAG_MCAST_KEY;
+ 
+ 	switch (keyconf->cipher) {
+@@ -96,12 +98,14 @@ u32 iwl_mvm_get_sec_flags(struct iwl_mvm *mvm,
+ 	if (!sta && vif->type == NL80211_IFTYPE_STATION)
+ 		sta = mvmvif->ap_sta;
+ 
+-	/* Set the MFP flag also for an AP interface where the key is an IGTK
+-	 * key as in such a case the station would always be NULL
++	/*
++	 * If we are installing an iGTK (in AP or STA mode), we need to tell
++	 * the firmware this key will en/decrypt MGMT frames.
++	 * Same goes if we are installing a pairwise key for an MFP station.
++	 * In case we're installing a groupwise key (which is not an iGTK),
++	 * then, we will not use this key for MGMT frames.
+ 	 */
+-	if ((!IS_ERR_OR_NULL(sta) && sta->mfp) ||
+-	    (vif->type == NL80211_IFTYPE_AP &&
+-	     (keyconf->keyidx == 4 || keyconf->keyidx == 5)))
++	if ((!IS_ERR_OR_NULL(sta) && sta->mfp && pairwise) || igtk)
+ 		flags |= IWL_SEC_KEY_FLAG_MFP;
+ 
+ 	return flags;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c
+index f313a8d771e42..ad78c69cc6cb7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c
+@@ -167,7 +167,7 @@ static int iwl_mvm_mld_mac_ctxt_cmd_listener(struct iwl_mvm *mvm,
+ 	iwl_mvm_mld_mac_ctxt_cmd_common(mvm, vif, &cmd, action);
+ 
+ 	cmd.filter_flags = cpu_to_le32(MAC_CFG_FILTER_PROMISC |
+-				       MAC_FILTER_IN_CONTROL_AND_MGMT |
++				       MAC_CFG_FILTER_ACCEPT_CONTROL_AND_MGMT |
+ 				       MAC_CFG_FILTER_ACCEPT_BEACON |
+ 				       MAC_CFG_FILTER_ACCEPT_PROBE_REQ |
+ 				       MAC_CFG_FILTER_ACCEPT_GRP);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index f2af3e5714090..1e33de05d2e2c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -121,7 +121,7 @@ struct iwl_mvm_time_event_data {
+ 	 * if the te is in the time event list or not (when id == TE_MAX)
+ 	 */
+ 	u32 id;
+-	u8 link_id;
++	s8 link_id;
+ };
+ 
+  /* Power management */
+@@ -1574,12 +1574,16 @@ static inline int iwl_mvm_max_active_links(struct iwl_mvm *mvm,
+ 
+ extern const u8 iwl_mvm_ac_to_tx_fifo[];
+ extern const u8 iwl_mvm_ac_to_gen2_tx_fifo[];
++extern const u8 iwl_mvm_ac_to_bz_tx_fifo[];
+ 
+ static inline u8 iwl_mvm_mac_ac_to_tx_fifo(struct iwl_mvm *mvm,
+ 					   enum ieee80211_ac_numbers ac)
+ {
+-	return iwl_mvm_has_new_tx_api(mvm) ?
+-		iwl_mvm_ac_to_gen2_tx_fifo[ac] : iwl_mvm_ac_to_tx_fifo[ac];
++	if (mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ)
++		return iwl_mvm_ac_to_bz_tx_fifo[ac];
++	if (iwl_mvm_has_new_tx_api(mvm))
++		return iwl_mvm_ac_to_gen2_tx_fifo[ac];
++	return iwl_mvm_ac_to_tx_fifo[ac];
+ }
+ 
+ struct iwl_rate_info {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index af15d470c69bd..7bf2a5947e5e9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -282,6 +282,7 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 				u32 status,
+ 				struct ieee80211_rx_status *stats)
+ {
++	struct wireless_dev *wdev;
+ 	struct iwl_mvm_sta *mvmsta;
+ 	struct iwl_mvm_vif *mvmvif;
+ 	u8 keyid;
+@@ -303,9 +304,15 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 	if (!ieee80211_is_beacon(hdr->frame_control))
+ 		return 0;
+ 
++	if (!sta)
++		return -1;
++
++	mvmsta = iwl_mvm_sta_from_mac80211(sta);
++	mvmvif = iwl_mvm_vif_from_mac80211(mvmsta->vif);
++
+ 	/* key mismatch - will also report !MIC_OK but we shouldn't count it */
+ 	if (!(status & IWL_RX_MPDU_STATUS_KEY_VALID))
+-		return -1;
++		goto report;
+ 
+ 	/* good cases */
+ 	if (likely(status & IWL_RX_MPDU_STATUS_MIC_OK &&
+@@ -314,13 +321,6 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 		return 0;
+ 	}
+ 
+-	if (!sta)
+-		return -1;
+-
+-	mvmsta = iwl_mvm_sta_from_mac80211(sta);
+-
+-	mvmvif = iwl_mvm_vif_from_mac80211(mvmsta->vif);
+-
+ 	/*
+ 	 * both keys will have the same cipher and MIC length, use
+ 	 * whichever one is available
+@@ -329,11 +329,11 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 	if (!key) {
+ 		key = rcu_dereference(mvmvif->bcn_prot.keys[1]);
+ 		if (!key)
+-			return -1;
++			goto report;
+ 	}
+ 
+ 	if (len < key->icv_len + IEEE80211_GMAC_PN_LEN + 2)
+-		return -1;
++		goto report;
+ 
+ 	/* get the real key ID */
+ 	keyid = frame[len - key->icv_len - IEEE80211_GMAC_PN_LEN - 2];
+@@ -347,7 +347,7 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 			return -1;
+ 		key = rcu_dereference(mvmvif->bcn_prot.keys[keyid - 6]);
+ 		if (!key)
+-			return -1;
++			goto report;
+ 	}
+ 
+ 	/* Report status to mac80211 */
+@@ -355,6 +355,10 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 		ieee80211_key_mic_failure(key);
+ 	else if (status & IWL_RX_MPDU_STATUS_REPLAY_ERROR)
+ 		ieee80211_key_replay(key);
++report:
++	wdev = ieee80211_vif_to_wdev(mvmsta->vif);
++	if (wdev->netdev)
++		cfg80211_rx_unprot_mlme_mgmt(wdev->netdev, (void *)hdr, len);
+ 
+ 	return -1;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index bba96a9688906..9905925142279 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -1502,6 +1502,34 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
+ 	return ret;
+ }
+ 
++int iwl_mvm_sta_ensure_queue(struct iwl_mvm *mvm,
++			     struct ieee80211_txq *txq)
++{
++	struct iwl_mvm_txq *mvmtxq = iwl_mvm_txq_from_mac80211(txq);
++	int ret = -EINVAL;
++
++	lockdep_assert_held(&mvm->mutex);
++
++	if (likely(test_bit(IWL_MVM_TXQ_STATE_READY, &mvmtxq->state)) ||
++	    !txq->sta) {
++		return 0;
++	}
++
++	if (!iwl_mvm_sta_alloc_queue(mvm, txq->sta, txq->ac, txq->tid)) {
++		set_bit(IWL_MVM_TXQ_STATE_READY, &mvmtxq->state);
++		ret = 0;
++	}
++
++	local_bh_disable();
++	spin_lock(&mvm->add_stream_lock);
++	if (!list_empty(&mvmtxq->list))
++		list_del_init(&mvmtxq->list);
++	spin_unlock(&mvm->add_stream_lock);
++	local_bh_enable();
++
++	return ret;
++}
++
+ void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk)
+ {
+ 	struct iwl_mvm *mvm = container_of(wk, struct iwl_mvm,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+index b33a0ce096d46..3cf8a70274ce8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
+  * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
+  * Copyright (C) 2015-2016 Intel Deutschland GmbH
+  */
+@@ -571,6 +571,7 @@ void iwl_mvm_modify_all_sta_disable_tx(struct iwl_mvm *mvm,
+ 				       bool disable);
+ 
+ void iwl_mvm_csa_client_absent(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
++int iwl_mvm_sta_ensure_queue(struct iwl_mvm *mvm, struct ieee80211_txq *txq);
+ void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk);
+ int iwl_mvm_add_pasn_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ 			 struct iwl_mvm_int_sta *sta, u8 *addr, u32 cipher,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 2e653a417d626..da00ef6e4fbcf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -692,7 +692,7 @@ void iwl_mvm_protect_session(struct iwl_mvm *mvm,
+ /* Determine whether mac or link id should be used, and validate the link id */
+ static int iwl_mvm_get_session_prot_id(struct iwl_mvm *mvm,
+ 				       struct ieee80211_vif *vif,
+-				       u32 link_id)
++				       s8 link_id)
+ {
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ 	int ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+@@ -706,8 +706,7 @@ static int iwl_mvm_get_session_prot_id(struct iwl_mvm *mvm,
+ 		 "Invalid link ID for session protection: %u\n", link_id))
+ 		return -EINVAL;
+ 
+-	if (WARN(ieee80211_vif_is_mld(vif) &&
+-		 !(vif->active_links & BIT(link_id)),
++	if (WARN(!mvmvif->link[link_id]->active,
+ 		 "Session Protection on an inactive link: %u\n", link_id))
+ 		return -EINVAL;
+ 
+@@ -716,7 +715,7 @@ static int iwl_mvm_get_session_prot_id(struct iwl_mvm *mvm,
+ 
+ static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm,
+ 					      struct ieee80211_vif *vif,
+-					      u32 id, u32 link_id)
++					      u32 id, s8 link_id)
+ {
+ 	int mac_link_id = iwl_mvm_get_session_prot_id(mvm, vif, link_id);
+ 	struct iwl_mvm_session_prot_cmd cmd = {
+@@ -745,7 +744,7 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
+ 	struct ieee80211_vif *vif = te_data->vif;
+ 	struct iwl_mvm_vif *mvmvif;
+ 	enum nl80211_iftype iftype;
+-	unsigned int link_id;
++	s8 link_id;
+ 
+ 	if (!vif)
+ 		return false;
+@@ -1297,7 +1296,7 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
+ 	struct iwl_mvm_time_event_data *te_data = &mvmvif->time_event_data;
+ 	const u16 notif[] = { WIDE_ID(MAC_CONF_GROUP, SESSION_PROTECTION_NOTIF) };
+ 	struct iwl_notification_wait wait_notif;
+-	int mac_link_id = iwl_mvm_get_session_prot_id(mvm, vif, link_id);
++	int mac_link_id = iwl_mvm_get_session_prot_id(mvm, vif, (s8)link_id);
+ 	struct iwl_mvm_session_prot_cmd cmd = {
+ 		.id_and_color = cpu_to_le32(mac_link_id),
+ 		.action = cpu_to_le32(FW_CTXT_ACTION_ADD),
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index db986bfc4dc3f..930742e75c02a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
+  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
+  */
+@@ -520,13 +520,24 @@ static void iwl_mvm_set_tx_cmd_crypto(struct iwl_mvm *mvm,
+ 	}
+ }
+ 
++static void iwl_mvm_copy_hdr(void *cmd, const void *hdr, int hdrlen,
++			     const u8 *addr3_override)
++{
++	struct ieee80211_hdr *out_hdr = cmd;
++
++	memcpy(cmd, hdr, hdrlen);
++	if (addr3_override)
++		memcpy(out_hdr->addr3, addr3_override, ETH_ALEN);
++}
++
+ /*
+  * Allocates and sets the Tx cmd the driver data pointers in the skb
+  */
+ static struct iwl_device_tx_cmd *
+ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 		      struct ieee80211_tx_info *info, int hdrlen,
+-		      struct ieee80211_sta *sta, u8 sta_id)
++		      struct ieee80211_sta *sta, u8 sta_id,
++		      const u8 *addr3_override)
+ {
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 	struct iwl_device_tx_cmd *dev_cmd;
+@@ -584,7 +595,7 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 			cmd->len = cpu_to_le16((u16)skb->len);
+ 
+ 			/* Copy MAC header from skb into command buffer */
+-			memcpy(cmd->hdr, hdr, hdrlen);
++			iwl_mvm_copy_hdr(cmd->hdr, hdr, hdrlen, addr3_override);
+ 
+ 			cmd->flags = cpu_to_le16(flags);
+ 			cmd->rate_n_flags = cpu_to_le32(rate_n_flags);
+@@ -599,7 +610,7 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 			cmd->len = cpu_to_le16((u16)skb->len);
+ 
+ 			/* Copy MAC header from skb into command buffer */
+-			memcpy(cmd->hdr, hdr, hdrlen);
++			iwl_mvm_copy_hdr(cmd->hdr, hdr, hdrlen, addr3_override);
+ 
+ 			cmd->flags = cpu_to_le32(flags);
+ 			cmd->rate_n_flags = cpu_to_le32(rate_n_flags);
+@@ -617,7 +628,7 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	iwl_mvm_set_tx_cmd_rate(mvm, tx_cmd, info, sta, hdr->frame_control);
+ 
+ 	/* Copy MAC header from skb into command buffer */
+-	memcpy(tx_cmd->hdr, hdr, hdrlen);
++	iwl_mvm_copy_hdr(tx_cmd->hdr, hdr, hdrlen, addr3_override);
+ 
+ out:
+ 	return dev_cmd;
+@@ -820,7 +831,8 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
+ 
+ 	IWL_DEBUG_TX(mvm, "station Id %d, queue=%d\n", sta_id, queue);
+ 
+-	dev_cmd = iwl_mvm_set_tx_params(mvm, skb, &info, hdrlen, NULL, sta_id);
++	dev_cmd = iwl_mvm_set_tx_params(mvm, skb, &info, hdrlen, NULL, sta_id,
++					NULL);
+ 	if (!dev_cmd)
+ 		return -1;
+ 
+@@ -1140,7 +1152,8 @@ static int iwl_mvm_tx_pkt_queued(struct iwl_mvm *mvm,
+  */
+ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 			   struct ieee80211_tx_info *info,
+-			   struct ieee80211_sta *sta)
++			   struct ieee80211_sta *sta,
++			   const u8 *addr3_override)
+ {
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 	struct iwl_mvm_sta *mvmsta;
+@@ -1172,7 +1185,8 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 		iwl_mvm_probe_resp_set_noa(mvm, skb);
+ 
+ 	dev_cmd = iwl_mvm_set_tx_params(mvm, skb, info, hdrlen,
+-					sta, mvmsta->deflink.sta_id);
++					sta, mvmsta->deflink.sta_id,
++					addr3_override);
+ 	if (!dev_cmd)
+ 		goto drop;
+ 
+@@ -1294,9 +1308,11 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ 	struct ieee80211_tx_info info;
+ 	struct sk_buff_head mpdus_skbs;
++	struct ieee80211_vif *vif;
+ 	unsigned int payload_len;
+ 	int ret;
+ 	struct sk_buff *orig_skb = skb;
++	const u8 *addr3;
+ 
+ 	if (WARN_ON_ONCE(!mvmsta))
+ 		return -1;
+@@ -1307,26 +1323,59 @@ int iwl_mvm_tx_skb_sta(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	memcpy(&info, skb->cb, sizeof(info));
+ 
+ 	if (!skb_is_gso(skb))
+-		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
++		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta, NULL);
+ 
+ 	payload_len = skb_tail_pointer(skb) - skb_transport_header(skb) -
+ 		tcp_hdrlen(skb) + skb->data_len;
+ 
+ 	if (payload_len <= skb_shinfo(skb)->gso_size)
+-		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
++		return iwl_mvm_tx_mpdu(mvm, skb, &info, sta, NULL);
+ 
+ 	__skb_queue_head_init(&mpdus_skbs);
+ 
++	vif = info.control.vif;
++	if (!vif)
++		return -1;
++
+ 	ret = iwl_mvm_tx_tso(mvm, skb, &info, sta, &mpdus_skbs);
+ 	if (ret)
+ 		return ret;
+ 
+ 	WARN_ON(skb_queue_empty(&mpdus_skbs));
+ 
++	/*
++	 * As described in IEEE Std 802.11-2020, table 9-30 (Address
++	 * field contents), A-MSDU address 3 should contain the BSSID
++	 * address.
++	 * Pass address 3 down to iwl_mvm_tx_mpdu() and further to set it
++	 * in the command header. We need to preserve the original
++	 * address 3 in the skb header to correctly create all the
++	 * A-MSDU subframe headers from it.
++	 */
++	switch (vif->type) {
++	case NL80211_IFTYPE_STATION:
++		addr3 = vif->cfg.ap_addr;
++		break;
++	case NL80211_IFTYPE_AP:
++		addr3 = vif->addr;
++		break;
++	default:
++		addr3 = NULL;
++		break;
++	}
++
+ 	while (!skb_queue_empty(&mpdus_skbs)) {
++		struct ieee80211_hdr *hdr;
++		bool amsdu;
++
+ 		skb = __skb_dequeue(&mpdus_skbs);
++		hdr = (void *)skb->data;
++		amsdu = ieee80211_is_data_qos(hdr->frame_control) &&
++			(*ieee80211_get_qos_ctl(hdr) &
++			 IEEE80211_QOS_CTL_A_MSDU_PRESENT);
+ 
+-		ret = iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
++		ret = iwl_mvm_tx_mpdu(mvm, skb, &info, sta,
++				      amsdu ? addr3 : NULL);
+ 		if (ret) {
+ 			/* Free skbs created as part of TSO logic that have not yet been dequeued */
+ 			__skb_queue_purge(&mpdus_skbs);
+@@ -1587,12 +1636,18 @@ static void iwl_mvm_tx_status_check_trigger(struct iwl_mvm *mvm,
+  * of the batch. This is why the SSN of the SCD is written at the end of the
+  * whole struct at a variable offset. This function knows how to cope with the
+  * variable offset and returns the SSN of the SCD.
++ *
++ * For 22000-series and lower, this is just 12 bits. For later, 16 bits.
+  */
+ static inline u32 iwl_mvm_get_scd_ssn(struct iwl_mvm *mvm,
+ 				      struct iwl_mvm_tx_resp *tx_resp)
+ {
+-	return le32_to_cpup((__le32 *)iwl_mvm_get_agg_status(mvm, tx_resp) +
+-			    tx_resp->frame_count) & 0xfff;
++	u32 val = le32_to_cpup((__le32 *)iwl_mvm_get_agg_status(mvm, tx_resp) +
++			       tx_resp->frame_count);
++
++	if (mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
++		return val & 0xFFFF;
++	return val & 0xFFF;
+ }
+ 
+ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+diff --git a/drivers/net/wireless/marvell/libertas/cmd.c b/drivers/net/wireless/marvell/libertas/cmd.c
+index 104d2b6dc9af6..5a525da434c28 100644
+--- a/drivers/net/wireless/marvell/libertas/cmd.c
++++ b/drivers/net/wireless/marvell/libertas/cmd.c
+@@ -1132,7 +1132,7 @@ int lbs_allocate_cmd_buffer(struct lbs_private *priv)
+ 		if (!cmdarray[i].cmdbuf) {
+ 			lbs_deb_host("ALLOC_CMD_BUF: ptempvirtualaddr is NULL\n");
+ 			ret = -1;
+-			goto done;
++			goto free_cmd_array;
+ 		}
+ 	}
+ 
+@@ -1140,8 +1140,17 @@ int lbs_allocate_cmd_buffer(struct lbs_private *priv)
+ 		init_waitqueue_head(&cmdarray[i].cmdwait_q);
+ 		lbs_cleanup_and_insert_cmd(priv, &cmdarray[i]);
+ 	}
+-	ret = 0;
++	return 0;
+ 
++free_cmd_array:
++	for (i = 0; i < LBS_NUM_CMD_BUFFERS; i++) {
++		if (cmdarray[i].cmdbuf) {
++			kfree(cmdarray[i].cmdbuf);
++			cmdarray[i].cmdbuf = NULL;
++		}
++	}
++	kfree(priv->cmd_array);
++	priv->cmd_array = NULL;
+ done:
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/debugfs.c b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+index f9c9fec7c792a..d14a0f4c1b6d7 100644
+--- a/drivers/net/wireless/marvell/mwifiex/debugfs.c
++++ b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+@@ -970,9 +970,6 @@ mwifiex_dev_debugfs_init(struct mwifiex_private *priv)
+ 	priv->dfs_dev_dir = debugfs_create_dir(priv->netdev->name,
+ 					       mwifiex_dfs_dir);
+ 
+-	if (!priv->dfs_dev_dir)
+-		return;
+-
+ 	MWIFIEX_DFS_ADD_FILE(info);
+ 	MWIFIEX_DFS_ADD_FILE(debug);
+ 	MWIFIEX_DFS_ADD_FILE(getlog);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index b475555097ff2..633f56210563c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -2100,7 +2100,7 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
+ 		int j, msg_len, num_ch;
+ 		struct sk_buff *skb;
+ 
+-		num_ch = i == batch_size - 1 ? n_chan % batch_len : batch_len;
++		num_ch = i == batch_size - 1 ? n_chan - i * batch_len : batch_len;
+ 		msg_len = sizeof(tx_power_tlv) + num_ch * sizeof(sku_tlbv);
+ 		skb = mt76_mcu_msg_alloc(dev, NULL, msg_len);
+ 		if (!skb) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 0563b1b22f485..bf023f317031d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -416,6 +416,14 @@ struct sta_rec_he_6g_capa {
+ 	u8 rsv[2];
+ } __packed;
+ 
++struct sta_rec_pn_info {
++	__le16 tag;
++	__le16 len;
++	u8 pn[6];
++	u8 tsc_type;
++	u8 rsv;
++} __packed;
++
+ struct sec_key {
+ 	u8 cipher_id;
+ 	u8 cipher_len;
+@@ -768,6 +776,7 @@ struct wtbl_raw {
+ 					 sizeof(struct sta_rec_sec) +	\
+ 					 sizeof(struct sta_rec_ra_fixed) + \
+ 					 sizeof(struct sta_rec_he_6g_capa) + \
++					 sizeof(struct sta_rec_pn_info) + \
+ 					 sizeof(struct tlv) +		\
+ 					 MT76_CONNAC_WTBL_UPDATE_MAX_SIZE)
+ 
+@@ -798,6 +807,8 @@ enum {
+ 	STA_REC_HE_V2 = 0x19,
+ 	STA_REC_MLD = 0x20,
+ 	STA_REC_EHT = 0x22,
++	STA_REC_PN_INFO = 0x26,
++	STA_REC_KEY_V3 = 0x27,
+ 	STA_REC_HDRT = 0x28,
+ 	STA_REC_HDR_TRANS = 0x2B,
+ 	STA_REC_MAX_NUM
+@@ -925,6 +936,9 @@ enum {
+ 	PHY_TYPE_INDEX_NUM
+ };
+ 
++#define HR_DSSS_ERP_BASIC_RATE			GENMASK(3, 0)
++#define OFDM_BASIC_RATE				(BIT(6) | BIT(8) | BIT(10))
++
+ #define PHY_TYPE_BIT_HR_DSSS			BIT(PHY_TYPE_HR_DSSS_INDEX)
+ #define PHY_TYPE_BIT_ERP			BIT(PHY_TYPE_ERP_INDEX)
+ #define PHY_TYPE_BIT_OFDM			BIT(PHY_TYPE_OFDM_INDEX)
+@@ -1088,6 +1102,13 @@ enum mcu_cipher_type {
+ 	MCU_CIPHER_GCMP_256,
+ 	MCU_CIPHER_WAPI,
+ 	MCU_CIPHER_BIP_CMAC_128,
++	MCU_CIPHER_BIP_CMAC_256,
++	MCU_CIPHER_BCN_PROT_CMAC_128,
++	MCU_CIPHER_BCN_PROT_CMAC_256,
++	MCU_CIPHER_BCN_PROT_GMAC_128,
++	MCU_CIPHER_BCN_PROT_GMAC_256,
++	MCU_CIPHER_BIP_GMAC_128,
++	MCU_CIPHER_BIP_GMAC_256,
+ };
+ 
+ enum {
+@@ -1307,6 +1328,7 @@ enum {
+ 	UNI_BSS_INFO_RATE = 11,
+ 	UNI_BSS_INFO_QBSS = 15,
+ 	UNI_BSS_INFO_SEC = 16,
++	UNI_BSS_INFO_BCN_PROT = 17,
+ 	UNI_BSS_INFO_TXCMD = 18,
+ 	UNI_BSS_INFO_UAPSD = 19,
+ 	UNI_BSS_INFO_PS = 21,
+@@ -1768,6 +1790,12 @@ mt76_connac_mcu_get_cipher(int cipher)
+ 		return MCU_CIPHER_GCMP;
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+ 		return MCU_CIPHER_GCMP_256;
++	case WLAN_CIPHER_SUITE_BIP_GMAC_128:
++		return MCU_CIPHER_BIP_GMAC_128;
++	case WLAN_CIPHER_SUITE_BIP_GMAC_256:
++		return MCU_CIPHER_BIP_GMAC_256;
++	case WLAN_CIPHER_SUITE_BIP_CMAC_256:
++		return MCU_CIPHER_BIP_CMAC_256;
+ 	case WLAN_CIPHER_SUITE_SMS4:
+ 		return MCU_CIPHER_WAPI;
+ 	default:
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 399d7ca6bebcb..18056045d9759 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -1270,7 +1270,7 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ 		.acpi_conf = mt792x_acpi_get_flags(&dev->phy),
+ 	};
+ 	int ret, valid_cnt = 0;
+-	u16 buf_len = 0;
++	u32 buf_len = 0;
+ 	u8 *pos;
+ 
+ 	if (!clc)
+@@ -1281,7 +1281,7 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ 	if (mt76_find_power_limits_node(&dev->mt76))
+ 		req.cap |= CLC_CAP_DTS_EN;
+ 
+-	buf_len = le16_to_cpu(clc->len) - sizeof(*clc);
++	buf_len = le32_to_cpu(clc->len) - sizeof(*clc);
+ 	pos = clc->data;
+ 	while (buf_len > 16) {
+ 		struct mt7921_clc_rule *rule = (struct mt7921_clc_rule *)pos;
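
Widening buf_len from u16 to u32 fixes plain integer truncation: clc->len is a little-endian 32-bit field, so storing le32_to_cpu(clc->len) - sizeof(*clc) in a u16 drops the upper 16 bits. With a hypothetical blob of len 0x10010 and an assumed 8-byte header, the correct buffer length is 0x10008, but a u16 keeps only 0x0008 and the parsing loop above stops after a single rule.
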
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 42fd456eb6faf..4d04911245409 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -387,6 +387,7 @@ static void mt7921_pci_remove(struct pci_dev *pdev)
+ 	struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ 
+ 	mt7921e_unregister_device(dev);
++	set_bit(MT76_REMOVED, &mdev->phy.state);
+ 	devm_free_irq(&pdev->dev, pdev->irq, dev);
+ 	mt76_free_device(&dev->mt76);
+ 	pci_free_irq_vectors(pdev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index aa918b9b0469f..6f0b18ae313de 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -360,6 +360,7 @@ mt7925_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ 	mvif->sta.wcid.phy_idx = mvif->mt76.band_idx;
+ 	mvif->sta.wcid.hw_key_idx = -1;
+ 	mvif->sta.wcid.tx_info |= MT_WCID_TX_INFO_SET;
++	mvif->sta.vif = mvif;
+ 	mt76_wcid_init(&mvif->sta.wcid);
+ 
+ 	mt7925_mac_wtbl_update(dev, idx,
+@@ -527,7 +528,7 @@ static int mt7925_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	if (cmd == SET_KEY && !mvif->mt76.cipher) {
+ 		struct mt792x_phy *phy = mt792x_hw_phy(hw);
+ 
+-		mvif->mt76.cipher = mt76_connac_mcu_get_cipher(key->cipher);
++		mvif->mt76.cipher = mt7925_mcu_get_cipher(key->cipher);
+ 		mt7925_mcu_add_bss_info(phy, mvif->mt76.ctx, vif, sta, true);
+ 	}
+ 
+@@ -711,7 +712,7 @@ static void mt7925_bss_info_changed(struct ieee80211_hw *hw,
+ 
+ 		if (slottime != phy->slottime) {
+ 			phy->slottime = slottime;
+-			mt792x_mac_set_timeing(phy);
++			mt7925_mcu_set_timing(phy, vif);
+ 		}
+ 	}
+ 
+@@ -1274,6 +1275,25 @@ mt7925_channel_switch_beacon(struct ieee80211_hw *hw,
+ 	mt792x_mutex_release(dev);
+ }
+ 
++static int
++mt7925_conf_tx(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
++	       unsigned int link_id, u16 queue,
++	       const struct ieee80211_tx_queue_params *params)
++{
++	struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++	static const u8 mq_to_aci[] = {
++		    [IEEE80211_AC_VO] = 3,
++		    [IEEE80211_AC_VI] = 2,
++		    [IEEE80211_AC_BE] = 0,
++		    [IEEE80211_AC_BK] = 1,
++	};
++
++	/* firmware uses access class index */
++	mvif->queue_params[mq_to_aci[queue]] = *params;
++
++	return 0;
++}
++
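
The mq_to_aci table in the new handler is just the permutation between two numbering schemes: mac80211 passes conf_tx queues in its own order (VO = 0, VI = 1, BE = 2, BK = 3), while the 802.11 WMM access-class index the firmware consumes is BE = 0, BK = 1, VI = 2, VO = 3. Storing the parameters pre-permuted lets mt7925_mcu_set_tx() iterate queue_params linearly and hand the index straight to the firmware as e->queue = ac, as seen in the mcu.c hunk further down.
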
+ static int
+ mt7925_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 		struct ieee80211_bss_conf *link_conf)
+@@ -1397,7 +1417,7 @@ const struct ieee80211_ops mt7925_ops = {
+ 	.add_interface = mt7925_add_interface,
+ 	.remove_interface = mt792x_remove_interface,
+ 	.config = mt7925_config,
+-	.conf_tx = mt792x_conf_tx,
++	.conf_tx = mt7925_conf_tx,
+ 	.configure_filter = mt7925_configure_filter,
+ 	.bss_info_changed = mt7925_bss_info_changed,
+ 	.start_ap = mt7925_start_ap,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 9c0e397537acf..8e72012472874 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -814,6 +814,7 @@ mt7925_mcu_sta_hdr_trans_tlv(struct sk_buff *skb,
+ 			     struct ieee80211_vif *vif,
+ 			     struct ieee80211_sta *sta)
+ {
++	struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ 	struct sta_rec_hdr_trans *hdr_trans;
+ 	struct mt76_wcid *wcid;
+ 	struct tlv *tlv;
+@@ -827,7 +828,11 @@ mt7925_mcu_sta_hdr_trans_tlv(struct sk_buff *skb,
+ 	else
+ 		hdr_trans->from_ds = true;
+ 
+-	wcid = (struct mt76_wcid *)sta->drv_priv;
++	if (sta)
++		wcid = (struct mt76_wcid *)sta->drv_priv;
++	else
++		wcid = &mvif->sta.wcid;
++
+ 	if (!wcid)
+ 		return;
+ 
+@@ -895,7 +900,7 @@ int mt7925_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_vif *vif)
+ 
+ 		e = (struct edca *)tlv;
+ 		e->set = WMM_PARAM_SET;
+-		e->queue = ac + mvif->mt76.wmm_idx * MT76_CONNAC_MAX_WMM_SETS;
++		e->queue = ac;
+ 		e->aifs = q->aifs;
+ 		e->txop = cpu_to_le16(q->txop);
+ 
+@@ -921,61 +926,67 @@ mt7925_mcu_sta_key_tlv(struct mt76_wcid *wcid,
+ 		       struct ieee80211_key_conf *key,
+ 		       enum set_key_cmd cmd)
+ {
++	struct mt792x_sta *msta = container_of(wcid, struct mt792x_sta, wcid);
+ 	struct sta_rec_sec_uni *sec;
++	struct mt792x_vif *mvif = msta->vif;
++	struct ieee80211_sta *sta;
++	struct ieee80211_vif *vif;
+ 	struct tlv *tlv;
+ 
+-	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_KEY_V2, sizeof(*sec));
++	sta = msta == &mvif->sta ?
++		      NULL :
++		      container_of((void *)msta, struct ieee80211_sta, drv_priv);
++	vif = container_of((void *)mvif, struct ieee80211_vif, drv_priv);
++
++	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_KEY_V3, sizeof(*sec));
+ 	sec = (struct sta_rec_sec_uni *)tlv;
+-	sec->add = cmd;
++	sec->bss_idx = mvif->mt76.idx;
++	sec->is_authenticator = 0;
++	sec->mgmt_prot = 0;
++	sec->wlan_idx = (u8)wcid->idx;
++
++	if (sta) {
++		sec->tx_key = 1;
++		sec->key_type = 1;
++		memcpy(sec->peer_addr, sta->addr, ETH_ALEN);
++	} else {
++		memcpy(sec->peer_addr, vif->bss_conf.bssid, ETH_ALEN);
++	}
+ 
+ 	if (cmd == SET_KEY) {
+-		struct sec_key_uni *sec_key;
+ 		u8 cipher;
+ 
+-		cipher = mt76_connac_mcu_get_cipher(key->cipher);
+-		if (cipher == MCU_CIPHER_NONE)
++		sec->add = 1;
++		cipher = mt7925_mcu_get_cipher(key->cipher);
++		if (cipher == CONNAC3_CIPHER_NONE)
+ 			return -EOPNOTSUPP;
+ 
+-		sec_key = &sec->key[0];
+-		sec_key->cipher_len = sizeof(*sec_key);
+-
+-		if (cipher == MCU_CIPHER_BIP_CMAC_128) {
+-			sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+-			sec_key->cipher_id = MCU_CIPHER_AES_CCMP;
+-			sec_key->key_id = sta_key_conf->keyidx;
+-			sec_key->key_len = 16;
+-			memcpy(sec_key->key, sta_key_conf->key, 16);
+-
+-			sec_key = &sec->key[1];
+-			sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+-			sec_key->cipher_id = MCU_CIPHER_BIP_CMAC_128;
+-			sec_key->cipher_len = sizeof(*sec_key);
+-			sec_key->key_len = 16;
+-			memcpy(sec_key->key, key->key, 16);
+-			sec->n_cipher = 2;
++		if (cipher == CONNAC3_CIPHER_BIP_CMAC_128) {
++			sec->cipher_id = CONNAC3_CIPHER_BIP_CMAC_128;
++			sec->key_id = sta_key_conf->keyidx;
++			sec->key_len = 32;
++			memcpy(sec->key, sta_key_conf->key, 16);
++			memcpy(sec->key + 16, key->key, 16);
+ 		} else {
+-			sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+-			sec_key->cipher_id = cipher;
+-			sec_key->key_id = key->keyidx;
+-			sec_key->key_len = key->keylen;
+-			memcpy(sec_key->key, key->key, key->keylen);
++			sec->cipher_id = cipher;
++			sec->key_id = key->keyidx;
++			sec->key_len = key->keylen;
++			memcpy(sec->key, key->key, key->keylen);
+ 
+-			if (cipher == MCU_CIPHER_TKIP) {
++			if (cipher == CONNAC3_CIPHER_TKIP) {
+ 				/* Rx/Tx MIC keys are swapped */
+-				memcpy(sec_key->key + 16, key->key + 24, 8);
+-				memcpy(sec_key->key + 24, key->key + 16, 8);
++				memcpy(sec->key + 16, key->key + 24, 8);
++				memcpy(sec->key + 24, key->key + 16, 8);
+ 			}
+ 
+ 			/* store key_conf for BIP batch update */
+-			if (cipher == MCU_CIPHER_AES_CCMP) {
++			if (cipher == CONNAC3_CIPHER_AES_CCMP) {
+ 				memcpy(sta_key_conf->key, key->key, key->keylen);
+ 				sta_key_conf->keyidx = key->keyidx;
+ 			}
+-
+-			sec->n_cipher = 1;
+ 		}
+ 	} else {
+-		sec->n_cipher = 0;
++		sec->add = 0;
+ 	}
+ 
+ 	return 0;
+@@ -1460,12 +1471,10 @@ mt7925_mcu_sta_phy_tlv(struct sk_buff *skb,
+ 	struct tlv *tlv;
+ 	u8 af = 0, mm = 0;
+ 
+-	if (!sta->deflink.ht_cap.ht_supported && !sta->deflink.he_6ghz_capa.capa)
+-		return;
+-
+ 	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_PHY, sizeof(*phy));
+ 	phy = (struct sta_rec_phy *)tlv;
+ 	phy->phy_type = mt76_connac_get_phy_mode_v2(mvif->phy->mt76, vif, chandef->chan->band, sta);
++	phy->basic_rate = cpu_to_le16((u16)vif->bss_conf.basic_rates);
+ 	if (sta->deflink.ht_cap.ht_supported) {
+ 		af = sta->deflink.ht_cap.ampdu_factor;
+ 		mm = sta->deflink.ht_cap.ampdu_density;
+@@ -1573,8 +1582,6 @@ mt7925_mcu_sta_cmd(struct mt76_phy *phy,
+ {
+ 	struct mt76_vif *mvif = (struct mt76_vif *)info->vif->drv_priv;
+ 	struct mt76_dev *dev = phy->dev;
+-	struct wtbl_req_hdr *wtbl_hdr;
+-	struct tlv *sta_wtbl;
+ 	struct sk_buff *skb;
+ 
+ 	skb = __mt76_connac_mcu_alloc_sta_req(dev, mvif, info->wcid,
+@@ -1598,30 +1605,11 @@ mt7925_mcu_sta_cmd(struct mt76_phy *phy,
+ 		mt7925_mcu_sta_state_v2_tlv(phy, skb, info->sta,
+ 					    info->vif, info->rcpi,
+ 					    info->state);
+-		mt7925_mcu_sta_hdr_trans_tlv(skb, info->vif, info->sta);
+ 		mt7925_mcu_sta_mld_tlv(skb, info->vif, info->sta);
+ 	}
+ 
+-	sta_wtbl = mt76_connac_mcu_add_tlv(skb, STA_REC_WTBL,
+-					   sizeof(struct tlv));
+-
+-	wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(dev, info->wcid,
+-						  WTBL_RESET_AND_SET,
+-						  sta_wtbl, &skb);
+-	if (IS_ERR(wtbl_hdr))
+-		return PTR_ERR(wtbl_hdr);
+-
+-	if (info->enable) {
+-		mt76_connac_mcu_wtbl_generic_tlv(dev, skb, info->vif,
+-						 info->sta, sta_wtbl,
+-						 wtbl_hdr);
+-		mt76_connac_mcu_wtbl_hdr_trans_tlv(skb, info->vif, info->wcid,
+-						   sta_wtbl, wtbl_hdr);
+-		if (info->sta)
+-			mt76_connac_mcu_wtbl_ht_tlv(dev, skb, info->sta,
+-						    sta_wtbl, wtbl_hdr,
+-						    true, true);
+-	}
++	if (info->enable)
++		mt7925_mcu_sta_hdr_trans_tlv(skb, info->vif, info->sta);
+ 
+ 	return mt76_mcu_skb_send_msg(dev, skb, info->cmd, true);
+ }
+@@ -2049,9 +2037,9 @@ mt7925_mcu_bss_basic_tlv(struct sk_buff *skb,
+ 	struct cfg80211_chan_def *chandef = ctx ? &ctx->def : &phy->chandef;
+ 	enum nl80211_band band = chandef->chan->band;
+ 	struct mt76_connac_bss_basic_tlv *basic_req;
+-	u8 idx, basic_phy;
+ 	struct tlv *tlv;
+ 	int conn_type;
++	u8 idx;
+ 
+ 	tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_BASIC, sizeof(*basic_req));
+ 	basic_req = (struct mt76_connac_bss_basic_tlv *)tlv;
+@@ -2062,8 +2050,10 @@ mt7925_mcu_bss_basic_tlv(struct sk_buff *skb,
+ 
+ 	basic_req->phymode_ext = mt7925_get_phy_mode_ext(phy, vif, band, sta);
+ 
+-	basic_phy = mt76_connac_get_phy_mode_v2(phy, vif, band, sta);
+-	basic_req->nonht_basic_phy = cpu_to_le16(basic_phy);
++	if (band == NL80211_BAND_2GHZ)
++		basic_req->nonht_basic_phy = cpu_to_le16(PHY_TYPE_ERP_INDEX);
++	else
++		basic_req->nonht_basic_phy = cpu_to_le16(PHY_TYPE_OFDM_INDEX);
+ 
+ 	memcpy(basic_req->bssid, vif->bss_conf.bssid, ETH_ALEN);
+ 	basic_req->phymode = mt76_connac_get_phy_mode(phy, vif, band, sta);
+@@ -2122,21 +2112,21 @@ mt7925_mcu_bss_sec_tlv(struct sk_buff *skb, struct ieee80211_vif *vif)
+ 	sec = (struct bss_sec_tlv *)tlv;
+ 
+ 	switch (mvif->cipher) {
+-	case MCU_CIPHER_GCMP_256:
+-	case MCU_CIPHER_GCMP:
++	case CONNAC3_CIPHER_GCMP_256:
++	case CONNAC3_CIPHER_GCMP:
+ 		sec->mode = MODE_WPA3_SAE;
+ 		sec->status = 8;
+ 		break;
+-	case MCU_CIPHER_AES_CCMP:
++	case CONNAC3_CIPHER_AES_CCMP:
+ 		sec->mode = MODE_WPA2_PSK;
+ 		sec->status = 6;
+ 		break;
+-	case MCU_CIPHER_TKIP:
++	case CONNAC3_CIPHER_TKIP:
+ 		sec->mode = MODE_WPA2_PSK;
+ 		sec->status = 4;
+ 		break;
+-	case MCU_CIPHER_WEP104:
+-	case MCU_CIPHER_WEP40:
++	case CONNAC3_CIPHER_WEP104:
++	case CONNAC3_CIPHER_WEP40:
+ 		sec->mode = MODE_SHARED;
+ 		sec->status = 0;
+ 		break;
+@@ -2167,6 +2157,11 @@ mt7925_mcu_bss_bmc_tlv(struct sk_buff *skb, struct mt792x_phy *phy,
+ 
+ 	bmc = (struct bss_rate_tlv *)tlv;
+ 
++	if (band == NL80211_BAND_2GHZ)
++		bmc->basic_rate = cpu_to_le16(HR_DSSS_ERP_BASIC_RATE);
++	else
++		bmc->basic_rate = cpu_to_le16(OFDM_BASIC_RATE);
++
+ 	bmc->short_preamble = (band == NL80211_BAND_2GHZ);
+ 	bmc->bc_fixed_rate = idx;
+ 	bmc->mc_fixed_rate = idx;
+@@ -2249,6 +2244,38 @@ mt7925_mcu_bss_color_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ 		vif->bss_conf.he_bss_color.color : 0;
+ }
+ 
++static void
++mt7925_mcu_bss_ifs_tlv(struct sk_buff *skb, struct ieee80211_vif *vif)
++{
++	struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++	struct mt792x_phy *phy = mvif->phy;
++	struct bss_ifs_time_tlv *ifs_time;
++	struct tlv *tlv;
++
++	tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_IFS_TIME, sizeof(*ifs_time));
++	ifs_time = (struct bss_ifs_time_tlv *)tlv;
++	ifs_time->slot_valid = true;
++	ifs_time->slot_time = cpu_to_le16(phy->slottime);
++}
++
++int mt7925_mcu_set_timing(struct mt792x_phy *phy,
++			  struct ieee80211_vif *vif)
++{
++	struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++	struct mt792x_dev *dev = phy->dev;
++	struct sk_buff *skb;
++
++	skb = __mt7925_mcu_alloc_bss_req(&dev->mt76, &mvif->mt76,
++					 MT7925_BSS_UPDATE_MAX_SIZE);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
++	mt7925_mcu_bss_ifs_tlv(skb, vif);
++
++	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
++				     MCU_UNI_CMD(BSS_INFO_UPDATE), true);
++}
++
+ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ 			    struct ieee80211_chanctx_conf *ctx,
+ 			    struct ieee80211_vif *vif,
+@@ -2273,6 +2300,7 @@ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ 	mt7925_mcu_bss_bmc_tlv(skb, phy, ctx, vif, sta);
+ 	mt7925_mcu_bss_qos_tlv(skb, vif);
+ 	mt7925_mcu_bss_mld_tlv(skb, vif, sta);
++	mt7925_mcu_bss_ifs_tlv(skb, vif);
+ 
+ 	if (vif->bss_conf.he_support) {
+ 		mt7925_mcu_bss_he_tlv(skb, vif, phy);
+@@ -2845,12 +2873,16 @@ int mt7925_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ 	if (cmd & __MCU_CMD_FIELD_UNI) {
+ 		uni_txd = (struct mt76_connac2_mcu_uni_txd *)txd;
+ 		uni_txd->len = cpu_to_le16(skb->len - sizeof(uni_txd->txd));
+-		uni_txd->option = MCU_CMD_UNI_EXT_ACK;
+ 		uni_txd->cid = cpu_to_le16(mcu_cmd);
+ 		uni_txd->s2d_index = MCU_S2D_H2N;
+ 		uni_txd->pkt_type = MCU_PKT_ID;
+ 		uni_txd->seq = seq;
+ 
++		if (cmd & __MCU_CMD_FIELD_QUERY)
++			uni_txd->option = MCU_CMD_UNI_QUERY_ACK;
++		else
++			uni_txd->option = MCU_CMD_UNI_EXT_ACK;
++
+ 		goto exit;
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+index 3c41e21303b1f..2cf39276118eb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+@@ -159,6 +159,20 @@ enum {
+ 	UNI_EVENT_SCAN_DONE_NLO = 3,
+ };
+ 
++enum connac3_mcu_cipher_type {
++	CONNAC3_CIPHER_NONE = 0,
++	CONNAC3_CIPHER_WEP40 = 1,
++	CONNAC3_CIPHER_TKIP = 2,
++	CONNAC3_CIPHER_AES_CCMP = 4,
++	CONNAC3_CIPHER_WEP104 = 5,
++	CONNAC3_CIPHER_BIP_CMAC_128 = 6,
++	CONNAC3_CIPHER_WEP128 = 7,
++	CONNAC3_CIPHER_WAPI = 8,
++	CONNAC3_CIPHER_CCMP_256 = 10,
++	CONNAC3_CIPHER_GCMP = 11,
++	CONNAC3_CIPHER_GCMP_256 = 12,
++};
++
+ struct mt7925_mcu_scan_chinfo_event {
+ 	u8 nr_chan;
+ 	u8 alpha2[3];
+@@ -334,7 +348,8 @@ struct bss_req_hdr {
+ struct bss_rate_tlv {
+ 	__le16 tag;
+ 	__le16 len;
+-	u8 __rsv1[4];
++	u8 __rsv1[2];
++	__le16 basic_rate;
+ 	__le16 bc_trans;
+ 	__le16 mc_trans;
+ 	u8 short_preamble;
+@@ -382,25 +397,22 @@ struct sta_rec_eht {
+ 	u8 _rsv2[3];
+ } __packed;
+ 
+-struct sec_key_uni {
+-	__le16 wlan_idx;
+-	u8 mgmt_prot;
+-	u8 cipher_id;
+-	u8 cipher_len;
+-	u8 key_id;
+-	u8 key_len;
+-	u8 need_resp;
+-	u8 key[32];
+-} __packed;
+-
+ struct sta_rec_sec_uni {
+ 	__le16 tag;
+ 	__le16 len;
+ 	u8 add;
+-	u8 n_cipher;
+-	u8 rsv[2];
+-
+-	struct sec_key_uni key[2];
++	u8 tx_key;
++	u8 key_type;
++	u8 is_authenticator;
++	u8 peer_addr[6];
++	u8 bss_idx;
++	u8 cipher_id;
++	u8 key_id;
++	u8 key_len;
++	u8 wlan_idx;
++	u8 mgmt_prot;
++	u8 key[32];
++	u8 key_rsc[16];
+ } __packed;
+ 
+ struct sta_rec_hdr_trans {
+@@ -428,6 +440,22 @@ struct sta_rec_mld {
+ 	} __packed link[2];
+ } __packed;
+ 
++struct bss_ifs_time_tlv {
++	__le16 tag;
++	__le16 len;
++	u8 slot_valid;
++	u8 sifs_valid;
++	u8 rifs_valid;
++	u8 eifs_valid;
++	__le16 slot_time;
++	__le16 sifs_time;
++	__le16 rifs_time;
++	__le16 eifs_time;
++	u8 eifs_cck_valid;
++	u8 rsv;
++	__le16 eifs_cck_time;
++} __packed;
++
+ #define MT7925_STA_UPDATE_MAX_SIZE	(sizeof(struct sta_req_hdr) +		\
+ 					 sizeof(struct sta_rec_basic) +		\
+ 					 sizeof(struct sta_rec_bf) +		\
+@@ -440,7 +468,7 @@ struct sta_rec_mld {
+ 					 sizeof(struct sta_rec_bfee) +		\
+ 					 sizeof(struct sta_rec_phy) +		\
+ 					 sizeof(struct sta_rec_ra) +		\
+-					 sizeof(struct sta_rec_sec) +		\
++					 sizeof(struct sta_rec_sec_uni) +   \
+ 					 sizeof(struct sta_rec_ra_fixed) +	\
+ 					 sizeof(struct sta_rec_he_6g_capa) +	\
+ 					 sizeof(struct sta_rec_eht) +		\
+@@ -455,6 +483,7 @@ struct sta_rec_mld {
+ 					 sizeof(struct bss_mld_tlv) +			\
+ 					 sizeof(struct bss_info_uni_he) +		\
+ 					 sizeof(struct bss_info_uni_bss_color) +	\
++					 sizeof(struct bss_ifs_time_tlv) +		\
+ 					 sizeof(struct tlv))
+ 
+ #define MT_CONNAC3_SKU_POWER_LIMIT      449
+@@ -509,6 +538,33 @@ struct mt7925_wow_pattern_tlv {
+ 	u8 rsv[4];
+ } __packed;
+ 
++static inline enum connac3_mcu_cipher_type
++mt7925_mcu_get_cipher(int cipher)
++{
++	switch (cipher) {
++	case WLAN_CIPHER_SUITE_WEP40:
++		return CONNAC3_CIPHER_WEP40;
++	case WLAN_CIPHER_SUITE_WEP104:
++		return CONNAC3_CIPHER_WEP104;
++	case WLAN_CIPHER_SUITE_TKIP:
++		return CONNAC3_CIPHER_TKIP;
++	case WLAN_CIPHER_SUITE_AES_CMAC:
++		return CONNAC3_CIPHER_BIP_CMAC_128;
++	case WLAN_CIPHER_SUITE_CCMP:
++		return CONNAC3_CIPHER_AES_CCMP;
++	case WLAN_CIPHER_SUITE_CCMP_256:
++		return CONNAC3_CIPHER_CCMP_256;
++	case WLAN_CIPHER_SUITE_GCMP:
++		return CONNAC3_CIPHER_GCMP;
++	case WLAN_CIPHER_SUITE_GCMP_256:
++		return CONNAC3_CIPHER_GCMP_256;
++	case WLAN_CIPHER_SUITE_SMS4:
++		return CONNAC3_CIPHER_WAPI;
++	default:
++		return CONNAC3_CIPHER_NONE;
++	}
++}
++
+ int mt7925_mcu_set_dbdc(struct mt76_phy *phy);
+ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		       struct ieee80211_scan_request *scan_req);
+@@ -525,6 +581,8 @@ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ 			    struct ieee80211_vif *vif,
+ 			    struct ieee80211_sta *sta,
+ 			    int enable);
++int mt7925_mcu_set_timing(struct mt792x_phy *phy,
++			  struct ieee80211_vif *vif);
+ int mt7925_mcu_set_deep_sleep(struct mt792x_dev *dev, bool enable);
+ int mt7925_mcu_set_channel_domain(struct mt76_phy *phy);
+ int mt7925_mcu_set_radio_en(struct mt792x_phy *phy, bool enable);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/pci.c b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
+index 08ef75e24e1cf..bf02a15067ef6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
+@@ -386,6 +386,8 @@ static int mt7925_pci_probe(struct pci_dev *pdev,
+ 
+ 	dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+ 
++	mt76_rmw_field(dev, MT_HW_EMI_CTL, MT_HW_EMI_CTL_SLPPROT_EN, 1);
++
+ 	ret = mt792x_wfsys_reset(dev);
+ 	if (ret)
+ 		goto err_free_dev;
+@@ -425,6 +427,7 @@ static void mt7925_pci_remove(struct pci_dev *pdev)
+ 	struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ 
+ 	mt7925e_unregister_device(dev);
++	set_bit(MT76_REMOVED, &mdev->phy.state);
+ 	devm_free_irq(&pdev->dev, pdev->irq, dev);
+ 	mt76_free_device(&dev->mt76);
+ 	pci_free_irq_vectors(pdev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_acpi_sar.c b/drivers/net/wireless/mediatek/mt76/mt792x_acpi_sar.c
+index 303c0f5c9c662..c4e3bfcc519e2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_acpi_sar.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_acpi_sar.c
+@@ -66,13 +66,15 @@ mt792x_acpi_read(struct mt792x_dev *dev, u8 *method, u8 **tbl, u32 *len)
+ }
+ 
+ /* MTCL : Country List Table for 6G band */
+-static void
++static int
+ mt792x_asar_acpi_read_mtcl(struct mt792x_dev *dev, u8 **table, u8 *version)
+ {
+-	if (mt792x_acpi_read(dev, MT792x_ACPI_MTCL, table, NULL) < 0)
+-		*version = 1;
+-	else
+-		*version = 2;
++	int ret;
++
++	ret = mt792x_acpi_read(dev, MT792x_ACPI_MTCL, table, NULL);
++	*version = ret < 0 ? 1 : 2;
++
++	return ret;
+ }
+ 
+ /* MTDS : Dynamic SAR Power Table */
+@@ -166,16 +168,16 @@ int mt792x_init_acpi_sar(struct mt792x_dev *dev)
+ 	if (!asar)
+ 		return -ENOMEM;
+ 
+-	mt792x_asar_acpi_read_mtcl(dev, (u8 **)&asar->countrylist, &asar->ver);
++	ret = mt792x_asar_acpi_read_mtcl(dev, (u8 **)&asar->countrylist, &asar->ver);
++	if (ret) {
++		devm_kfree(dev->mt76.dev, asar->countrylist);
++		asar->countrylist = NULL;
++	}
+ 
+-	/* MTDS is mandatory. Return error if table is invalid */
+ 	ret = mt792x_asar_acpi_read_mtds(dev, (u8 **)&asar->dyn, asar->ver);
+ 	if (ret) {
+ 		devm_kfree(dev->mt76.dev, asar->dyn);
+-		devm_kfree(dev->mt76.dev, asar->countrylist);
+-		devm_kfree(dev->mt76.dev, asar);
+-
+-		return ret;
++		asar->dyn = NULL;
+ 	}
+ 
+ 	/* MTGS is optional */
+@@ -290,7 +292,7 @@ int mt792x_init_acpi_sar_power(struct mt792x_phy *phy, bool set_default)
+ 	const struct cfg80211_sar_capa *capa = phy->mt76->hw->wiphy->sar_capa;
+ 	int i;
+ 
+-	if (!phy->acpisar)
++	if (!phy->acpisar || !((struct mt792x_acpi_sar *)phy->acpisar)->dyn)
+ 		return 0;
+ 
+ 	/* When ACPI SAR enabled in HW, we should apply rules for .frp
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_core.c b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+index 502be22dbe367..fb35eff6dc7b3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_core.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+@@ -354,6 +354,7 @@ static const char mt792x_gstrings_stats[][ETH_GSTRING_LEN] = {
+ 	"v_tx_bw_40",
+ 	"v_tx_bw_80",
+ 	"v_tx_bw_160",
++	"v_tx_bw_320",
+ 	"v_tx_mcs_0",
+ 	"v_tx_mcs_1",
+ 	"v_tx_mcs_2",
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_dma.c b/drivers/net/wireless/mediatek/mt76/mt792x_dma.c
+index 488326ce5ed4d..5cc2d59b774af 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_dma.c
+@@ -12,6 +12,8 @@ irqreturn_t mt792x_irq_handler(int irq, void *dev_instance)
+ {
+ 	struct mt792x_dev *dev = dev_instance;
+ 
++	if (test_bit(MT76_REMOVED, &dev->mt76.phy.state))
++		return IRQ_NONE;
+ 	mt76_wr(dev, dev->irq_map->host_irq_enable, 0);
+ 
+ 	if (!test_bit(MT76_STATE_INITIALIZED, &dev->mphy.state))
+@@ -123,14 +125,13 @@ static void mt792x_dma_prefetch(struct mt792x_dev *dev)
+ 
+ int mt792x_dma_enable(struct mt792x_dev *dev)
+ {
+-	if (is_mt7925(&dev->mt76))
+-		mt76_rmw(dev, MT_UWFDMA0_GLO_CFG_EXT1, BIT(28), BIT(28));
+-
+ 	/* configure prefetch settings */
+ 	mt792x_dma_prefetch(dev);
+ 
+ 	/* reset dma idx */
+ 	mt76_wr(dev, MT_WFDMA0_RST_DTX_PTR, ~0);
++	if (is_mt7925(&dev->mt76))
++		mt76_wr(dev, MT_WFDMA0_RST_DRX_PTR, ~0);
+ 
+ 	/* configure delay interrupt */
+ 	mt76_wr(dev, MT_WFDMA0_PRI_DLY_INT_CFG0, 0);
+@@ -140,12 +141,20 @@ int mt792x_dma_enable(struct mt792x_dev *dev)
+ 		 MT_WFDMA0_GLO_CFG_FIFO_LITTLE_ENDIAN |
+ 		 MT_WFDMA0_GLO_CFG_CLK_GAT_DIS |
+ 		 MT_WFDMA0_GLO_CFG_OMIT_TX_INFO |
++		 FIELD_PREP(MT_WFDMA0_GLO_CFG_DMA_SIZE, 3) |
++		 MT_WFDMA0_GLO_CFG_FIFO_DIS_CHECK |
++		 MT_WFDMA0_GLO_CFG_RX_WB_DDONE |
+ 		 MT_WFDMA0_GLO_CFG_CSR_DISP_BASE_PTR_CHAIN_EN |
+ 		 MT_WFDMA0_GLO_CFG_OMIT_RX_INFO_PFET2);
+ 
+ 	mt76_set(dev, MT_WFDMA0_GLO_CFG,
+ 		 MT_WFDMA0_GLO_CFG_TX_DMA_EN | MT_WFDMA0_GLO_CFG_RX_DMA_EN);
+ 
++	if (is_mt7925(&dev->mt76)) {
++		mt76_rmw(dev, MT_UWFDMA0_GLO_CFG_EXT1, BIT(28), BIT(28));
++		mt76_set(dev, MT_WFDMA0_INT_RX_PRI, 0x0F00);
++		mt76_set(dev, MT_WFDMA0_INT_TX_PRI, 0x7F00);
++	}
+ 	mt76_set(dev, MT_WFDMA_DUMMY_CR, MT_WFDMA_NEED_REINIT);
+ 
+ 	/* enable interrupts for TX/RX rings */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_regs.h b/drivers/net/wireless/mediatek/mt76/mt792x_regs.h
+index a99af23e4b564..458cfd0260b13 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_regs.h
+@@ -292,9 +292,12 @@
+ #define MT_WFDMA0_GLO_CFG_TX_DMA_BUSY	BIT(1)
+ #define MT_WFDMA0_GLO_CFG_RX_DMA_EN	BIT(2)
+ #define MT_WFDMA0_GLO_CFG_RX_DMA_BUSY	BIT(3)
++#define MT_WFDMA0_GLO_CFG_DMA_SIZE	GENMASK(5, 4)
+ #define MT_WFDMA0_GLO_CFG_TX_WB_DDONE	BIT(6)
+ #define MT_WFDMA0_GLO_CFG_FW_DWLD_BYPASS_DMASHDL BIT(9)
++#define MT_WFDMA0_GLO_CFG_FIFO_DIS_CHECK	BIT(11)
+ #define MT_WFDMA0_GLO_CFG_FIFO_LITTLE_ENDIAN	BIT(12)
++#define MT_WFDMA0_GLO_CFG_RX_WB_DDONE	BIT(13)
+ #define MT_WFDMA0_GLO_CFG_CSR_DISP_BASE_PTR_CHAIN_EN BIT(15)
+ #define MT_WFDMA0_GLO_CFG_OMIT_RX_INFO_PFET2	BIT(21)
+ #define MT_WFDMA0_GLO_CFG_OMIT_RX_INFO	BIT(27)
+@@ -322,6 +325,8 @@
+ 
+ #define MT_WFDMA0_RST_DTX_PTR		MT_WFDMA0(0x20c)
+ #define MT_WFDMA0_RST_DRX_PTR		MT_WFDMA0(0x280)
++#define MT_WFDMA0_INT_RX_PRI		MT_WFDMA0(0x298)
++#define MT_WFDMA0_INT_TX_PRI		MT_WFDMA0(0x29c)
+ #define MT_WFDMA0_GLO_CFG_EXT0		MT_WFDMA0(0x2b0)
+ #define MT_WFDMA0_CSR_TX_DMASHDL_ENABLE	BIT(6)
+ #define MT_WFDMA0_PRI_DLY_INT_CFG0	MT_WFDMA0(0x2f0)
+@@ -389,6 +394,9 @@
+ #define MT_HW_CHIPID			0x70010200
+ #define MT_HW_REV			0x70010204
+ 
++#define MT_HW_EMI_CTL			0x18011100
++#define MT_HW_EMI_CTL_SLPPROT_EN	BIT(1)
++
+ #define MT_PCIE_MAC_BASE		0x10000
+ #define MT_PCIE_MAC(ofs)		(MT_PCIE_MAC_BASE + (ofs))
+ #define MT_PCIE_MAC_INT_ENABLE		MT_PCIE_MAC(0x188)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+index 55cb1770fa34e..f8b61df15fcb1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+@@ -297,7 +297,7 @@ static void mt7996_mac_init_basic_rates(struct mt7996_dev *dev)
+ 
+ void mt7996_mac_init(struct mt7996_dev *dev)
+ {
+-#define HIF_TXD_V2_1	4
++#define HIF_TXD_V2_1	0x21
+ 	int i;
+ 
+ 	mt76_clear(dev, MT_MDP_DCR2, MT_MDP_DCR2_RX_TRANS_SHORT);
+@@ -578,11 +578,12 @@ mt7996_set_stream_he_txbf_caps(struct mt7996_phy *phy,
+ 	/* the maximum cap is 4 x 3, (Nr, Nc) = (3, 2) */
+ 	elem->phy_cap_info[7] |= min_t(int, sts - 1, 2) << 3;
+ 
+-	if (vif != NL80211_IFTYPE_AP)
++	if (!(vif == NL80211_IFTYPE_AP || vif == NL80211_IFTYPE_STATION))
+ 		return;
+ 
+ 	elem->phy_cap_info[3] |= IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER;
+-	elem->phy_cap_info[4] |= IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
++	if (vif == NL80211_IFTYPE_AP)
++		elem->phy_cap_info[4] |= IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER;
+ 
+ 	c = FIELD_PREP(IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_MASK,
+ 		       sts - 1) |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index fa3001e59a364..7d33b0c8955ba 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -1178,25 +1178,28 @@ mt7996_mac_add_txs_skb(struct mt7996_dev *dev, struct mt76_wcid *wcid,
+ 	struct ieee80211_tx_info *info;
+ 	struct sk_buff_head list;
+ 	struct rate_info rate = {};
+-	struct sk_buff *skb;
++	struct sk_buff *skb = NULL;
+ 	bool cck = false;
+ 	u32 txrate, txs, mode, stbc;
+ 
+ 	txs = le32_to_cpu(txs_data[0]);
+ 
+ 	mt76_tx_status_lock(mdev, &list);
+-	skb = mt76_tx_status_skb_get(mdev, wcid, pid, &list);
+ 
+-	if (skb) {
+-		info = IEEE80211_SKB_CB(skb);
+-		if (!(txs & MT_TXS0_ACK_ERROR_MASK))
+-			info->flags |= IEEE80211_TX_STAT_ACK;
++	/* only report MPDU TXS */
++	if (le32_get_bits(txs_data[0], MT_TXS0_TXS_FORMAT) == 0) {
++		skb = mt76_tx_status_skb_get(mdev, wcid, pid, &list);
++		if (skb) {
++			info = IEEE80211_SKB_CB(skb);
++			if (!(txs & MT_TXS0_ACK_ERROR_MASK))
++				info->flags |= IEEE80211_TX_STAT_ACK;
+ 
+-		info->status.ampdu_len = 1;
+-		info->status.ampdu_ack_len =
+-			!!(info->flags & IEEE80211_TX_STAT_ACK);
++			info->status.ampdu_len = 1;
++			info->status.ampdu_ack_len =
++				!!(info->flags & IEEE80211_TX_STAT_ACK);
+ 
+-		info->status.rates[0].idx = -1;
++			info->status.rates[0].idx = -1;
++		}
+ 	}
+ 
+ 	if (mtk_wed_device_active(&dev->mt76.mmio.wed) && wcid->sta) {
+@@ -2458,6 +2461,34 @@ static int mt7996_mac_check_twt_req(struct ieee80211_twt_setup *twt)
+ 	return 0;
+ }
+ 
++static bool
++mt7996_mac_twt_param_equal(struct mt7996_sta *msta,
++			   struct ieee80211_twt_params *twt_agrt)
++{
++	u16 type = le16_to_cpu(twt_agrt->req_type);
++	u8 exp;
++	int i;
++
++	exp = FIELD_GET(IEEE80211_TWT_REQTYPE_WAKE_INT_EXP, type);
++	for (i = 0; i < MT7996_MAX_STA_TWT_AGRT; i++) {
++		struct mt7996_twt_flow *f;
++
++		if (!(msta->twt.flowid_mask & BIT(i)))
++			continue;
++
++		f = &msta->twt.flow[i];
++		if (f->duration == twt_agrt->min_twt_dur &&
++		    f->mantissa == twt_agrt->mantissa &&
++		    f->exp == exp &&
++		    f->protection == !!(type & IEEE80211_TWT_REQTYPE_PROTECTION) &&
++		    f->flowtype == !!(type & IEEE80211_TWT_REQTYPE_FLOWTYPE) &&
++		    f->trigger == !!(type & IEEE80211_TWT_REQTYPE_TRIGGER))
++			return true;
++	}
++
++	return false;
++}
++
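
mt7996_mac_twt_param_equal() closes a duplicate-agreement hole: a station retransmitting the same TWT setup request could previously be granted a second flow with identical parameters, burning another entry in the agreement table (whose mask grows to u16 in the mt7996.h hunk below). The helper compares every negotiated field - duration, mantissa, wake-interval exponent, protection, flow type and trigger - and the setup path bails out early when a matching flow already exists.
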
+ void mt7996_mac_add_twt_setup(struct ieee80211_hw *hw,
+ 			      struct ieee80211_sta *sta,
+ 			      struct ieee80211_twt_setup *twt)
+@@ -2469,8 +2500,7 @@ void mt7996_mac_add_twt_setup(struct ieee80211_hw *hw,
+ 	enum ieee80211_twt_setup_cmd sta_setup_cmd;
+ 	struct mt7996_dev *dev = mt7996_hw_dev(hw);
+ 	struct mt7996_twt_flow *flow;
+-	int flowid, table_id;
+-	u8 exp;
++	u8 flowid, table_id, exp;
+ 
+ 	if (mt7996_mac_check_twt_req(twt))
+ 		goto out;
+@@ -2483,9 +2513,19 @@ void mt7996_mac_add_twt_setup(struct ieee80211_hw *hw,
+ 	if (hweight8(msta->twt.flowid_mask) == ARRAY_SIZE(msta->twt.flow))
+ 		goto unlock;
+ 
++	if (twt_agrt->min_twt_dur < MT7996_MIN_TWT_DUR) {
++		setup_cmd = TWT_SETUP_CMD_DICTATE;
++		twt_agrt->min_twt_dur = MT7996_MIN_TWT_DUR;
++		goto unlock;
++	}
++
++	if (mt7996_mac_twt_param_equal(msta, twt_agrt))
++		goto unlock;
++
+ 	flowid = ffs(~msta->twt.flowid_mask) - 1;
+-	le16p_replace_bits(&twt_agrt->req_type, flowid,
+-			   IEEE80211_TWT_REQTYPE_FLOWID);
++	twt_agrt->req_type &= ~cpu_to_le16(IEEE80211_TWT_REQTYPE_FLOWID);
++	twt_agrt->req_type |= le16_encode_bits(flowid,
++					       IEEE80211_TWT_REQTYPE_FLOWID);
+ 
+ 	table_id = ffs(~dev->twt.table_mask) - 1;
+ 	exp = FIELD_GET(IEEE80211_TWT_REQTYPE_WAKE_INT_EXP, req_type);
+@@ -2532,10 +2572,10 @@ void mt7996_mac_add_twt_setup(struct ieee80211_hw *hw,
+ unlock:
+ 	mutex_unlock(&dev->mt76.mutex);
+ out:
+-	le16p_replace_bits(&twt_agrt->req_type, setup_cmd,
+-			   IEEE80211_TWT_REQTYPE_SETUP_CMD);
+-	twt->control = (twt->control & IEEE80211_TWT_CONTROL_WAKE_DUR_UNIT) |
+-		       (twt->control & IEEE80211_TWT_CONTROL_RX_DISABLED);
++	twt_agrt->req_type &= ~cpu_to_le16(IEEE80211_TWT_REQTYPE_SETUP_CMD);
++	twt_agrt->req_type |=
++		le16_encode_bits(setup_cmd, IEEE80211_TWT_REQTYPE_SETUP_CMD);
++	twt->control = twt->control & IEEE80211_TWT_CONTROL_RX_DISABLED;
+ }
+ 
+ void mt7996_mac_twt_teardown_flow(struct mt7996_dev *dev,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 09c7a28a3d511..482a8f7d75d7a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -342,6 +342,8 @@ static int mt7996_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	case WLAN_CIPHER_SUITE_GCMP:
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+ 	case WLAN_CIPHER_SUITE_SMS4:
++	case WLAN_CIPHER_SUITE_BIP_GMAC_128:
++	case WLAN_CIPHER_SUITE_BIP_GMAC_256:
+ 		break;
+ 	case WLAN_CIPHER_SUITE_WEP40:
+ 	case WLAN_CIPHER_SUITE_WEP104:
+@@ -365,9 +367,13 @@ static int mt7996_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	}
+ 
+ 	mt76_wcid_key_setup(&dev->mt76, wcid, key);
+-	err = mt7996_mcu_add_key(&dev->mt76, vif, &msta->bip,
+-				 key, MCU_WMWA_UNI_CMD(STA_REC_UPDATE),
+-				 &msta->wcid, cmd);
++
++	if (key->keyidx == 6 || key->keyidx == 7)
++		err = mt7996_mcu_bcn_prot_enable(dev, vif, key);
++	else
++		err = mt7996_mcu_add_key(&dev->mt76, vif, key,
++					 MCU_WMWA_UNI_CMD(STA_REC_UPDATE),
++					 &msta->wcid, cmd);
+ out:
+ 	mutex_unlock(&dev->mt76.mutex);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index bf917beb94396..0586f5bd4164f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -1079,6 +1079,9 @@ mt7996_mcu_sta_he_6g_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+ static void
+ mt7996_mcu_sta_eht_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+ {
++	struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
++	struct ieee80211_vif *vif = container_of((void *)msta->vif,
++						 struct ieee80211_vif, drv_priv);
+ 	struct ieee80211_eht_mcs_nss_supp *mcs_map;
+ 	struct ieee80211_eht_cap_elem_fixed *elem;
+ 	struct sta_rec_eht *eht;
+@@ -1098,8 +1101,17 @@ mt7996_mcu_sta_eht_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+ 	eht->phy_cap = cpu_to_le64(*(u64 *)elem->phy_cap_info);
+ 	eht->phy_cap_ext = cpu_to_le64(elem->phy_cap_info[8]);
+ 
+-	if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
+-		memcpy(eht->mcs_map_bw20, &mcs_map->only_20mhz, sizeof(eht->mcs_map_bw20));
++	if (vif->type != NL80211_IFTYPE_STATION &&
++	    (sta->deflink.he_cap.he_cap_elem.phy_cap_info[0] &
++	     (IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G |
++	      IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G |
++	      IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G |
++	      IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G)) == 0) {
++		memcpy(eht->mcs_map_bw20, &mcs_map->only_20mhz,
++		       sizeof(eht->mcs_map_bw20));
++		return;
++	}
++
+ 	memcpy(eht->mcs_map_bw80, &mcs_map->bw._80, sizeof(eht->mcs_map_bw80));
+ 	memcpy(eht->mcs_map_bw160, &mcs_map->bw._160, sizeof(eht->mcs_map_bw160));
+ 	memcpy(eht->mcs_map_bw320, &mcs_map->bw._320, sizeof(eht->mcs_map_bw320));
+@@ -2058,7 +2070,6 @@ int mt7996_mcu_add_sta(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ 
+ static int
+ mt7996_mcu_sta_key_tlv(struct mt76_wcid *wcid,
+-		       struct mt76_connac_sta_key_conf *sta_key_conf,
+ 		       struct sk_buff *skb,
+ 		       struct ieee80211_key_conf *key,
+ 		       enum set_key_cmd cmd)
+@@ -2079,43 +2090,22 @@ mt7996_mcu_sta_key_tlv(struct mt76_wcid *wcid,
+ 			return -EOPNOTSUPP;
+ 
+ 		sec_key = &sec->key[0];
++		sec_key->wlan_idx = cpu_to_le16(wcid->idx);
++		sec_key->mgmt_prot = 0;
++		sec_key->cipher_id = cipher;
+ 		sec_key->cipher_len = sizeof(*sec_key);
+-
+-		if (cipher == MCU_CIPHER_BIP_CMAC_128) {
+-			sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+-			sec_key->cipher_id = MCU_CIPHER_AES_CCMP;
+-			sec_key->key_id = sta_key_conf->keyidx;
+-			sec_key->key_len = 16;
+-			memcpy(sec_key->key, sta_key_conf->key, 16);
+-
+-			sec_key = &sec->key[1];
+-			sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+-			sec_key->cipher_id = MCU_CIPHER_BIP_CMAC_128;
+-			sec_key->cipher_len = sizeof(*sec_key);
+-			sec_key->key_len = 16;
+-			memcpy(sec_key->key, key->key, 16);
+-			sec->n_cipher = 2;
+-		} else {
+-			sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+-			sec_key->cipher_id = cipher;
+-			sec_key->key_id = key->keyidx;
+-			sec_key->key_len = key->keylen;
+-			memcpy(sec_key->key, key->key, key->keylen);
+-
+-			if (cipher == MCU_CIPHER_TKIP) {
+-				/* Rx/Tx MIC keys are swapped */
+-				memcpy(sec_key->key + 16, key->key + 24, 8);
+-				memcpy(sec_key->key + 24, key->key + 16, 8);
+-			}
+-
+-			/* store key_conf for BIP batch update */
+-			if (cipher == MCU_CIPHER_AES_CCMP) {
+-				memcpy(sta_key_conf->key, key->key, key->keylen);
+-				sta_key_conf->keyidx = key->keyidx;
+-			}
+-
+-			sec->n_cipher = 1;
++		sec_key->key_id = key->keyidx;
++		sec_key->key_len = key->keylen;
++		sec_key->need_resp = 0;
++		memcpy(sec_key->key, key->key, key->keylen);
++
++		if (cipher == MCU_CIPHER_TKIP) {
++			/* Rx/Tx MIC keys are swapped */
++			memcpy(sec_key->key + 16, key->key + 24, 8);
++			memcpy(sec_key->key + 24, key->key + 16, 8);
+ 		}
++
++		sec->n_cipher = 1;
+ 	} else {
+ 		sec->n_cipher = 0;
+ 	}
+@@ -2124,7 +2114,6 @@ mt7996_mcu_sta_key_tlv(struct mt76_wcid *wcid,
+ }
+ 
+ int mt7996_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
+-		       struct mt76_connac_sta_key_conf *sta_key_conf,
+ 		       struct ieee80211_key_conf *key, int mcu_cmd,
+ 		       struct mt76_wcid *wcid, enum set_key_cmd cmd)
+ {
+@@ -2137,13 +2126,99 @@ int mt7996_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ 	if (IS_ERR(skb))
+ 		return PTR_ERR(skb);
+ 
+-	ret = mt7996_mcu_sta_key_tlv(wcid, sta_key_conf, skb, key, cmd);
++	ret = mt7996_mcu_sta_key_tlv(wcid, skb, key, cmd);
+ 	if (ret)
+ 		return ret;
+ 
+ 	return mt76_mcu_skb_send_msg(dev, skb, mcu_cmd, true);
+ }
+ 
++static int mt7996_mcu_get_pn(struct mt7996_dev *dev, struct ieee80211_vif *vif,
++			     u8 *pn)
++{
++#define TSC_TYPE_BIGTK_PN 2
++	struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
++	struct sta_rec_pn_info *pn_info;
++	struct sk_buff *skb, *rskb;
++	struct tlv *tlv;
++	int ret;
++
++	skb = mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76, &mvif->sta.wcid);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
++	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_PN_INFO, sizeof(*pn_info));
++	pn_info = (struct sta_rec_pn_info *)tlv;
++
++	pn_info->tsc_type = TSC_TYPE_BIGTK_PN;
++	ret = mt76_mcu_skb_send_and_get_msg(&dev->mt76, skb,
++					    MCU_WM_UNI_CMD_QUERY(STA_REC_UPDATE),
++					    true, &rskb);
++	if (ret)
++		return ret;
++
++	skb_pull(rskb, 4);
++
++	pn_info = (struct sta_rec_pn_info *)rskb->data;
++	if (le16_to_cpu(pn_info->tag) == STA_REC_PN_INFO)
++		memcpy(pn, pn_info->pn, 6);
++
++	dev_kfree_skb(rskb);
++	return 0;
++}
++
++int mt7996_mcu_bcn_prot_enable(struct mt7996_dev *dev, struct ieee80211_vif *vif,
++			       struct ieee80211_key_conf *key)
++{
++	struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
++	struct mt7996_mcu_bcn_prot_tlv *bcn_prot;
++	struct sk_buff *skb;
++	struct tlv *tlv;
++	u8 pn[6] = {};
++	int len = sizeof(struct bss_req_hdr) +
++		  sizeof(struct mt7996_mcu_bcn_prot_tlv);
++	int ret;
++
++	skb = __mt7996_mcu_alloc_bss_req(&dev->mt76, &mvif->mt76, len);
++	if (IS_ERR(skb))
++		return PTR_ERR(skb);
++
++	tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_BCN_PROT, sizeof(*bcn_prot));
++
++	bcn_prot = (struct mt7996_mcu_bcn_prot_tlv *)tlv;
++
++	ret = mt7996_mcu_get_pn(dev, vif, pn);
++	if (ret) {
++		dev_kfree_skb(skb);
++		return ret;
++	}
++
++	switch (key->cipher) {
++	case WLAN_CIPHER_SUITE_AES_CMAC:
++		bcn_prot->cipher_id = MCU_CIPHER_BCN_PROT_CMAC_128;
++		break;
++	case WLAN_CIPHER_SUITE_BIP_GMAC_128:
++		bcn_prot->cipher_id = MCU_CIPHER_BCN_PROT_GMAC_128;
++		break;
++	case WLAN_CIPHER_SUITE_BIP_GMAC_256:
++		bcn_prot->cipher_id = MCU_CIPHER_BCN_PROT_GMAC_256;
++		break;
++	case WLAN_CIPHER_SUITE_BIP_CMAC_256:
++	default:
++		dev_err(dev->mt76.dev, "Unsupported BIGTK cipher\n");
++		dev_kfree_skb(skb);
++		return -EOPNOTSUPP;
++	}
++
++	pn[0]++;
++	memcpy(bcn_prot->pn, pn, 6);
++	bcn_prot->enable = BP_SW_MODE;
++	memcpy(bcn_prot->key, key->key, WLAN_MAX_KEY_LEN);
++	bcn_prot->key_id = key->keyidx;
++
++	return mt76_mcu_skb_send_msg(&dev->mt76, skb,
++				     MCU_WMWA_UNI_CMD(BSS_INFO_UPDATE), true);
++}
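
Beacon protection (BIGTK, key index 6 or 7) gets its own path in the new code: rather than a normal STA_REC key install, the driver first queries the firmware for the current beacon packet number via a STA_REC_PN_INFO query, bumps it so the previous PN is never reused, and then pushes cipher, key and PN down in a UNI_BSS_INFO_BCN_PROT BSS TLV. Condensed, the flow added above is:

	u8 pn[6] = {};

	mt7996_mcu_get_pn(dev, vif, pn);	/* STA_REC_PN_INFO query */
	pn[0]++;				/* continue, don't replay */
	/* fill mt7996_mcu_bcn_prot_tlv: cipher_id, pn, key, key_id,
	 * enable = BP_SW_MODE, then send MCU_WMWA_UNI_CMD(BSS_INFO_UPDATE)
	 */
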
+ int mt7996_mcu_add_dev_info(struct mt7996_phy *phy,
+ 			    struct ieee80211_vif *vif, bool enable)
+ {
+@@ -3351,7 +3426,7 @@ int mt7996_mcu_get_eeprom(struct mt7996_dev *dev, u32 offset)
+ 		u32 addr = le32_to_cpu(*(__le32 *)(skb->data + 12));
+ 		u8 *buf = (u8 *)dev->mt76.eeprom.data + addr;
+ 
+-		skb_pull(skb, 64);
++		skb_pull(skb, 48);
+ 		memcpy(buf, skb->data, MT7996_EEPROM_BLOCK_SIZE);
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+index 9300cd8eeb76b..32ce57c8c4e6b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+@@ -250,6 +250,23 @@ struct bss_rate_tlv {
+ 	u8 __rsv2[9];
+ } __packed;
+ 
++enum {
++	BP_DISABLE,
++	BP_SW_MODE,
++	BP_HW_MODE,
++};
++
++struct mt7996_mcu_bcn_prot_tlv {
++	__le16 tag;
++	__le16 len;
++	u8 pn[6];
++	u8 enable;
++	u8 cipher_id;
++	u8 key[WLAN_MAX_KEY_LEN];
++	u8 key_id;
++	u8 __rsv[3];
++} __packed;
++
+ struct bss_ra_tlv {
+ 	__le16 tag;
+ 	__le16 len;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+index e53cf6a3704c4..6733ee9744d9b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+@@ -43,6 +43,7 @@
+ 
+ #define MT7996_MAX_TWT_AGRT		16
+ #define MT7996_MAX_STA_TWT_AGRT		8
++#define MT7996_MIN_TWT_DUR		64
+ #define MT7996_MAX_QUEUE		(__MT_RXQ_MAX +	__MT_MCUQ_MAX + 3)
+ 
+ /* NOTE: used to map mt76_rates. idx may change if firmware expands table */
+@@ -236,7 +237,7 @@ struct mt7996_dev {
+ 	struct rchan *relay_fwlog;
+ 
+ 	struct {
+-		u8 table_mask;
++		u16 table_mask;
+ 		u8 n_agrt;
+ 	} twt;
+ 
+@@ -485,9 +486,10 @@ int mt7996_init_debugfs(struct mt7996_phy *phy);
+ void mt7996_debugfs_rx_fw_monitor(struct mt7996_dev *dev, const void *data, int len);
+ bool mt7996_debugfs_rx_log(struct mt7996_dev *dev, const void *data, int len);
+ int mt7996_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
+-		       struct mt76_connac_sta_key_conf *sta_key_conf,
+ 		       struct ieee80211_key_conf *key, int mcu_cmd,
+ 		       struct mt76_wcid *wcid, enum set_key_cmd cmd);
++int mt7996_mcu_bcn_prot_enable(struct mt7996_dev *dev, struct ieee80211_vif *vif,
++			       struct ieee80211_key_conf *key);
+ int mt7996_mcu_wtbl_update_hdr_trans(struct mt7996_dev *dev,
+ 				     struct ieee80211_vif *vif,
+ 				     struct ieee80211_sta *sta);
+diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+index da52f91693b5b..e9a047a8c7dce 100644
+--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
++++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+@@ -1615,7 +1615,6 @@ static int del_virtual_intf(struct wiphy *wiphy, struct wireless_dev *wdev)
+ 	cfg80211_unregister_netdevice(vif->ndev);
+ 	vif->monitor_flag = 0;
+ 
+-	wilc_set_operation_mode(vif, 0, 0, 0);
+ 	mutex_lock(&wl->vif_mutex);
+ 	list_del_rcu(&vif->list);
+ 	wl->vif_num--;
+@@ -1810,15 +1809,24 @@ int wilc_cfg80211_init(struct wilc **wilc, struct device *dev, int io_type,
+ 	INIT_LIST_HEAD(&wl->rxq_head.list);
+ 	INIT_LIST_HEAD(&wl->vif_list);
+ 
++	wl->hif_workqueue = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM,
++						    wiphy_name(wl->wiphy));
++	if (!wl->hif_workqueue) {
++		ret = -ENOMEM;
++		goto free_cfg;
++	}
+ 	vif = wilc_netdev_ifc_init(wl, "wlan%d", WILC_STATION_MODE,
+ 				   NL80211_IFTYPE_STATION, false);
+ 	if (IS_ERR(vif)) {
+ 		ret = PTR_ERR(vif);
+-		goto free_cfg;
++		goto free_hq;
+ 	}
+ 
+ 	return 0;
+ 
++free_hq:
++	destroy_workqueue(wl->hif_workqueue);
++
+ free_cfg:
+ 	wilc_wlan_cfg_deinit(wl);
+ 
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index a28da59384813..e202013e6f2fe 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -374,38 +374,49 @@ static void handle_connect_timeout(struct work_struct *work)
+ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 				struct cfg80211_crypto_settings *crypto)
+ {
+-	struct wilc_join_bss_param *param;
+-	struct ieee80211_p2p_noa_attr noa_attr;
+-	u8 rates_len = 0;
+-	const u8 *tim_elm, *ssid_elm, *rates_ie, *supp_rates_ie;
++	const u8 *ies_data, *tim_elm, *ssid_elm, *rates_ie, *supp_rates_ie;
+ 	const u8 *ht_ie, *wpa_ie, *wmm_ie, *rsn_ie;
++	struct ieee80211_p2p_noa_attr noa_attr;
++	const struct cfg80211_bss_ies *ies;
++	struct wilc_join_bss_param *param;
++	u8 rates_len = 0, ies_len;
+ 	int ret;
+-	const struct cfg80211_bss_ies *ies = rcu_dereference(bss->ies);
+ 
+ 	param = kzalloc(sizeof(*param), GFP_KERNEL);
+ 	if (!param)
+ 		return NULL;
+ 
++	rcu_read_lock();
++	ies = rcu_dereference(bss->ies);
++	ies_data = kmemdup(ies->data, ies->len, GFP_ATOMIC);
++	if (!ies_data) {
++		rcu_read_unlock();
++		kfree(param);
++		return NULL;
++	}
++	ies_len = ies->len;
++	rcu_read_unlock();
++
+ 	param->beacon_period = cpu_to_le16(bss->beacon_interval);
+ 	param->cap_info = cpu_to_le16(bss->capability);
+ 	param->bss_type = WILC_FW_BSS_TYPE_INFRA;
+ 	param->ch = ieee80211_frequency_to_channel(bss->channel->center_freq);
+ 	ether_addr_copy(param->bssid, bss->bssid);
+ 
+-	ssid_elm = cfg80211_find_ie(WLAN_EID_SSID, ies->data, ies->len);
++	ssid_elm = cfg80211_find_ie(WLAN_EID_SSID, ies_data, ies_len);
+ 	if (ssid_elm) {
+ 		if (ssid_elm[1] <= IEEE80211_MAX_SSID_LEN)
+ 			memcpy(param->ssid, ssid_elm + 2, ssid_elm[1]);
+ 	}
+ 
+-	tim_elm = cfg80211_find_ie(WLAN_EID_TIM, ies->data, ies->len);
++	tim_elm = cfg80211_find_ie(WLAN_EID_TIM, ies_data, ies_len);
+ 	if (tim_elm && tim_elm[1] >= 2)
+ 		param->dtim_period = tim_elm[3];
+ 
+ 	memset(param->p_suites, 0xFF, 3);
+ 	memset(param->akm_suites, 0xFF, 3);
+ 
+-	rates_ie = cfg80211_find_ie(WLAN_EID_SUPP_RATES, ies->data, ies->len);
++	rates_ie = cfg80211_find_ie(WLAN_EID_SUPP_RATES, ies_data, ies_len);
+ 	if (rates_ie) {
+ 		rates_len = rates_ie[1];
+ 		if (rates_len > WILC_MAX_RATES_SUPPORTED)
+@@ -416,7 +427,7 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 
+ 	if (rates_len < WILC_MAX_RATES_SUPPORTED) {
+ 		supp_rates_ie = cfg80211_find_ie(WLAN_EID_EXT_SUPP_RATES,
+-						 ies->data, ies->len);
++						 ies_data, ies_len);
+ 		if (supp_rates_ie) {
+ 			u8 ext_rates = supp_rates_ie[1];
+ 
+@@ -431,11 +442,11 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 		}
+ 	}
+ 
+-	ht_ie = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, ies->data, ies->len);
++	ht_ie = cfg80211_find_ie(WLAN_EID_HT_CAPABILITY, ies_data, ies_len);
+ 	if (ht_ie)
+ 		param->ht_capable = true;
+ 
+-	ret = cfg80211_get_p2p_attr(ies->data, ies->len,
++	ret = cfg80211_get_p2p_attr(ies_data, ies_len,
+ 				    IEEE80211_P2P_ATTR_ABSENCE_NOTICE,
+ 				    (u8 *)&noa_attr, sizeof(noa_attr));
+ 	if (ret > 0) {
+@@ -459,7 +470,7 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 	}
+ 	wmm_ie = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ 					 WLAN_OUI_TYPE_MICROSOFT_WMM,
+-					 ies->data, ies->len);
++					 ies_data, ies_len);
+ 	if (wmm_ie) {
+ 		struct ieee80211_wmm_param_ie *ie;
+ 
+@@ -474,13 +485,13 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 
+ 	wpa_ie = cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
+ 					 WLAN_OUI_TYPE_MICROSOFT_WPA,
+-					 ies->data, ies->len);
++					 ies_data, ies_len);
+ 	if (wpa_ie) {
+ 		param->mode_802_11i = 1;
+ 		param->rsn_found = true;
+ 	}
+ 
+-	rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies->data, ies->len);
++	rsn_ie = cfg80211_find_ie(WLAN_EID_RSN, ies_data, ies_len);
+ 	if (rsn_ie) {
+ 		int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
+ 		int offset = 8;
+@@ -514,6 +525,7 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 			param->akm_suites[i] = crypto->akm_suites[i] & 0xFF;
+ 	}
+ 
++	kfree(ies_data);
+ 	return (void *)param;
+ }
+ 
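
The wilc1000 change is the standard RCU snapshot idiom: rcu_dereference(bss->ies) is only guaranteed to stay valid for the duration of a read-side critical section, and the old code kept parsing ies->data long after any such section could have ended. The fix takes its own rcu_read_lock(), duplicates the IE blob with kmemdup(..., GFP_ATOMIC) (no sleeping allowed under the lock), records the length, unlocks, and parses the private copy. A minimal sketch of the idiom:

	const struct cfg80211_bss_ies *ies;
	u8 *copy;
	size_t len;

	rcu_read_lock();
	ies = rcu_dereference(bss->ies);
	copy = kmemdup(ies->data, ies->len, GFP_ATOMIC);
	len = ies->len;
	rcu_read_unlock();

	if (!copy)
		return NULL;
	/* ies may be freed from here on; copy[0..len) is ours */
	kfree(copy);
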
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 91d71e0f7ef23..87fce5f41803d 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -890,8 +890,7 @@ static const struct net_device_ops wilc_netdev_ops = {
+ 
+ void wilc_netdev_cleanup(struct wilc *wilc)
+ {
+-	struct wilc_vif *vif;
+-	int srcu_idx, ifc_cnt = 0;
++	struct wilc_vif *vif, *vif_tmp;
+ 
+ 	if (!wilc)
+ 		return;
+@@ -901,32 +900,19 @@ void wilc_netdev_cleanup(struct wilc *wilc)
+ 		wilc->firmware = NULL;
+ 	}
+ 
+-	srcu_idx = srcu_read_lock(&wilc->srcu);
+-	list_for_each_entry_rcu(vif, &wilc->vif_list, list) {
++	list_for_each_entry_safe(vif, vif_tmp, &wilc->vif_list, list) {
++		mutex_lock(&wilc->vif_mutex);
++		list_del_rcu(&vif->list);
++		wilc->vif_num--;
++		mutex_unlock(&wilc->vif_mutex);
++		synchronize_srcu(&wilc->srcu);
+ 		if (vif->ndev)
+ 			unregister_netdev(vif->ndev);
+ 	}
+-	srcu_read_unlock(&wilc->srcu, srcu_idx);
+ 
+ 	wilc_wfi_deinit_mon_interface(wilc, false);
+ 	destroy_workqueue(wilc->hif_workqueue);
+ 
+-	while (ifc_cnt < WILC_NUM_CONCURRENT_IFC) {
+-		mutex_lock(&wilc->vif_mutex);
+-		if (wilc->vif_num <= 0) {
+-			mutex_unlock(&wilc->vif_mutex);
+-			break;
+-		}
+-		vif = wilc_get_wl_to_vif(wilc);
+-		if (!IS_ERR(vif))
+-			list_del_rcu(&vif->list);
+-
+-		wilc->vif_num--;
+-		mutex_unlock(&wilc->vif_mutex);
+-		synchronize_srcu(&wilc->srcu);
+-		ifc_cnt++;
+-	}
+-
+ 	wilc_wlan_cfg_deinit(wilc);
+ 	wlan_deinit_locks(wilc);
+ 	wiphy_unregister(wilc->wiphy);
+@@ -989,13 +975,6 @@ struct wilc_vif *wilc_netdev_ifc_init(struct wilc *wl, const char *name,
+ 		goto error;
+ 	}
+ 
+-	wl->hif_workqueue = alloc_ordered_workqueue("%s-wq", WQ_MEM_RECLAIM,
+-						    ndev->name);
+-	if (!wl->hif_workqueue) {
+-		ret = -ENOMEM;
+-		goto unregister_netdev;
+-	}
+-
+ 	ndev->needs_free_netdev = true;
+ 	vif->iftype = vif_type;
+ 	vif->idx = wilc_get_available_idx(wl);
+@@ -1008,12 +987,11 @@ struct wilc_vif *wilc_netdev_ifc_init(struct wilc *wl, const char *name,
+ 
+ 	return vif;
+ 
+-unregister_netdev:
++error:
+ 	if (rtnl_locked)
+ 		cfg80211_unregister_netdevice(ndev);
+ 	else
+ 		unregister_netdev(ndev);
+-  error:
+ 	free_netdev(ndev);
+ 	return ERR_PTR(ret);
+ }
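
Both wilc1000 hunks above are one teardown/bring-up rework: the ordered HIF workqueue is now created in wilc_cfg80211_init() before the first interface exists (it used to be allocated inside wilc_netdev_ifc_init() and named after a netdev), and wilc_netdev_cleanup() now unlinks each interface in the canonical SRCU order - list_del_rcu() under vif_mutex, synchronize_srcu() so concurrent readers of vif_list drain, and only then unregister_netdev() - instead of unregistering devices while itself holding the SRCU read lock.
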
+diff --git a/drivers/net/wireless/microchip/wilc1000/spi.c b/drivers/net/wireless/microchip/wilc1000/spi.c
+index 77b4cdff73c37..4cf8586ed55ae 100644
+--- a/drivers/net/wireless/microchip/wilc1000/spi.c
++++ b/drivers/net/wireless/microchip/wilc1000/spi.c
+@@ -192,11 +192,11 @@ static void wilc_wlan_power(struct wilc *wilc, bool on)
+ 		/* assert ENABLE: */
+ 		gpiod_set_value(gpios->enable, 1);
+ 		mdelay(5);
+-		/* assert RESET: */
+-		gpiod_set_value(gpios->reset, 1);
+-	} else {
+ 		/* deassert RESET: */
+ 		gpiod_set_value(gpios->reset, 0);
++	} else {
++		/* assert RESET: */
++		gpiod_set_value(gpios->reset, 1);
+ 		/* deassert ENABLE: */
+ 		gpiod_set_value(gpios->enable, 0);
+ 	}
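
The SPI power fix is about when the reset line should be asserted, not about polarity: gpiod_set_value() takes the logical state (active/inactive, with the electrical polarity coming from the GPIO descriptor), so powering the chip on must end by deasserting RESET so the part can run, and powering it off must assert RESET before dropping ENABLE. The old sequence did the opposite and held the chip in reset exactly while it was supposed to be powered up.
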
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 180907319e8cd..04df0f54aa667 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -7304,6 +7304,7 @@ static void rtl8xxxu_stop(struct ieee80211_hw *hw)
+ 	if (priv->usb_interrupts)
+ 		rtl8xxxu_write32(priv, REG_USB_HIMR, 0);
+ 
++	cancel_work_sync(&priv->c2hcmd_work);
+ 	cancel_delayed_work_sync(&priv->ra_watchdog);
+ 
+ 	rtl8xxxu_free_rx_resources(priv);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 4a33d2e47f33f..63673005c2fb1 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -2027,8 +2027,6 @@ static int rtw_chip_board_info_setup(struct rtw_dev *rtwdev)
+ 	rtw_phy_setup_phy_cond(rtwdev, hal->pkg_type);
+ 
+ 	rtw_phy_init_tx_power(rtwdev);
+-	if (rfe_def->agc_btg_tbl)
+-		rtw_load_table(rtwdev, rfe_def->agc_btg_tbl);
+ 	rtw_load_table(rtwdev, rfe_def->phy_pg_tbl);
+ 	rtw_load_table(rtwdev, rfe_def->txpwr_lmt_tbl);
+ 	rtw_phy_tx_power_by_rate_config(hal);
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
+index 128e75a81bf3c..37ef80c9091db 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.c
++++ b/drivers/net/wireless/realtek/rtw88/phy.c
+@@ -1761,12 +1761,15 @@ static void rtw_load_rfk_table(struct rtw_dev *rtwdev)
+ 
+ void rtw_phy_load_tables(struct rtw_dev *rtwdev)
+ {
++	const struct rtw_rfe_def *rfe_def = rtw_get_rfe_def(rtwdev);
+ 	const struct rtw_chip_info *chip = rtwdev->chip;
+ 	u8 rf_path;
+ 
+ 	rtw_load_table(rtwdev, chip->mac_tbl);
+ 	rtw_load_table(rtwdev, chip->bb_tbl);
+ 	rtw_load_table(rtwdev, chip->agc_tbl);
++	if (rfe_def->agc_btg_tbl)
++		rtw_load_table(rtwdev, rfe_def->agc_btg_tbl);
+ 	rtw_load_rfk_table(rtwdev);
+ 
+ 	for (rf_path = 0; rf_path < rtwdev->hal.rf_path_num; rf_path++) {
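
The rtw88 pair of hunks moves the RFE-specific AGC BTG table load from rtw_chip_board_info_setup() into rtw_phy_load_tables(): the BTG entries override values in the base AGC table, so they must be written after chip->agc_tbl, and board-info setup runs before the PHY tables are programmed, which otherwise let the base table clobber the override.
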
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index 429bb420b0563..fe5d8e1883509 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -773,9 +773,9 @@ static void rtw8821c_false_alarm_statistics(struct rtw_dev *rtwdev)
+ 
+ 	dm_info->cck_fa_cnt = cck_fa_cnt;
+ 	dm_info->ofdm_fa_cnt = ofdm_fa_cnt;
++	dm_info->total_fa_cnt = ofdm_fa_cnt;
+ 	if (cck_enable)
+ 		dm_info->total_fa_cnt += cck_fa_cnt;
+-	dm_info->total_fa_cnt = ofdm_fa_cnt;
+ 
+ 	crc32_cnt = rtw_read32(rtwdev, REG_CRC_CCK);
+ 	dm_info->cck_ok_cnt = FIELD_GET(GENMASK(15, 0), crc32_cnt);
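
The two swapped statements above fix an ordering bug: the unconditional total_fa_cnt = ofdm_fa_cnt ran after the conditional += cck_fa_cnt, wiping out the CCK contribution (and making the += operate on a stale value from the previous round). With hypothetical counts ofdm = 100, cck = 40 and CCK enabled, the old order ended at 100; the corrected order gives the intended 140.
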
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index e6ab1ac6d7093..a0188511099a1 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -33,6 +33,36 @@ static void rtw_usb_fill_tx_checksum(struct rtw_usb *rtwusb,
+ 	rtw_tx_fill_txdesc_checksum(rtwdev, &pkt_info, skb->data);
+ }
+ 
++static void rtw_usb_reg_sec(struct rtw_dev *rtwdev, u32 addr, __le32 *data)
++{
++	struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
++	struct usb_device *udev = rtwusb->udev;
++	bool reg_on_section = false;
++	u16 t_reg = 0x4e0;
++	u8 t_len = 1;
++	int status;
++
++	/* There are three sections:
++	 * 1. on (0x00~0xFF; 0x1000~0x10FF): this section is always powered on
++	 * 2. off (< 0xFE00, excluding "on" section): this section could be
++	 *    powered off
++	 * 3. local (>= 0xFE00): usb specific registers section
++	 */
++	if (addr <= 0xff || (addr >= 0x1000 && addr <= 0x10ff))
++		reg_on_section = true;
++
++	if (!reg_on_section)
++		return;
++
++	status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
++				 RTW_USB_CMD_REQ, RTW_USB_CMD_WRITE,
++				 t_reg, 0, data, t_len, 500);
++
++	if (status != t_len && status != -ENODEV)
++		rtw_err(rtwdev, "%s: reg 0x%x, usb write %u fail, status: %d\n",
++			__func__, t_reg, t_len, status);
++}
++
+ static u32 rtw_usb_read(struct rtw_dev *rtwdev, u32 addr, u16 len)
+ {
+ 	struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
+@@ -58,6 +88,11 @@ static u32 rtw_usb_read(struct rtw_dev *rtwdev, u32 addr, u16 len)
+ 		rtw_err(rtwdev, "read register 0x%x failed with %d\n",
+ 			addr, ret);
+ 
++	if (rtwdev->chip->id == RTW_CHIP_TYPE_8822C ||
++	    rtwdev->chip->id == RTW_CHIP_TYPE_8822B ||
++	    rtwdev->chip->id == RTW_CHIP_TYPE_8821C)
++		rtw_usb_reg_sec(rtwdev, addr, data);
++
+ 	return le32_to_cpu(*data);
+ }
+ 
+@@ -102,6 +137,11 @@ static void rtw_usb_write(struct rtw_dev *rtwdev, u32 addr, u32 val, int len)
+ 	if (ret < 0 && ret != -ENODEV && count++ < 4)
+ 		rtw_err(rtwdev, "write register 0x%x failed with %d\n",
+ 			addr, ret);
++
++	if (rtwdev->chip->id == RTW_CHIP_TYPE_8822C ||
++	    rtwdev->chip->id == RTW_CHIP_TYPE_8822B ||
++	    rtwdev->chip->id == RTW_CHIP_TYPE_8821C)
++		rtw_usb_reg_sec(rtwdev, addr, data);
+ }
+ 
+ static void rtw_usb_write8(struct rtw_dev *rtwdev, u32 addr, u8 val)
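+
[Editor's note] The helper added above issues an extra vendor-request write whenever a register in the always-on section is touched, as a workaround for the three listed chips. For reference, a minimal sketch of the usb_control_msg() return convention the error check relies on: a successful control transfer returns the number of bytes moved, so anything else (a short transfer or a negative errno) is a failure. The request value below is a placeholder, not the driver's real one.

	#include <linux/usb.h>

	static int vendor_reg_write(struct usb_device *udev, u16 reg,
				    void *data, u16 len)
	{
		int status;

		status = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
					 0x05, /* placeholder vendor request */
					 USB_DIR_OUT | USB_TYPE_VENDOR |
					 USB_RECIP_DEVICE,
					 reg, 0, data, len, 500);

		/* bytes transferred on success, negative errno on error */
		if (status == len)
			return 0;
		return status < 0 ? status : -EIO;
	}
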
+diff --git a/drivers/net/wireless/silabs/wfx/sta.c b/drivers/net/wireless/silabs/wfx/sta.c
+index 537caf9d914a7..bb4446b88c12b 100644
+--- a/drivers/net/wireless/silabs/wfx/sta.c
++++ b/drivers/net/wireless/silabs/wfx/sta.c
+@@ -344,6 +344,7 @@ static int wfx_set_mfp_ap(struct wfx_vif *wvif)
+ 	const int pairwise_cipher_suite_count_offset = 8 / sizeof(u16);
+ 	const int pairwise_cipher_suite_size = 4 / sizeof(u16);
+ 	const int akm_suite_size = 4 / sizeof(u16);
++	int ret = -EINVAL;
+ 	const u16 *ptr;
+ 
+ 	if (unlikely(!skb))
+@@ -352,22 +353,26 @@ static int wfx_set_mfp_ap(struct wfx_vif *wvif)
+ 	ptr = (u16 *)cfg80211_find_ie(WLAN_EID_RSN, skb->data + ieoffset,
+ 				      skb->len - ieoffset);
+ 	if (unlikely(!ptr))
+-		return -EINVAL;
++		goto free_skb;
+ 
+ 	ptr += pairwise_cipher_suite_count_offset;
+ 	if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
+-		return -EINVAL;
++		goto free_skb;
+ 
+ 	ptr += 1 + pairwise_cipher_suite_size * *ptr;
+ 	if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
+-		return -EINVAL;
++		goto free_skb;
+ 
+ 	ptr += 1 + akm_suite_size * *ptr;
+ 	if (WARN_ON(ptr > (u16 *)skb_tail_pointer(skb)))
+-		return -EINVAL;
++		goto free_skb;
+ 
+ 	wfx_hif_set_mfp(wvif, *ptr & BIT(7), *ptr & BIT(6));
+-	return 0;
++	ret = 0;
++
++free_skb:
++	dev_kfree_skb(skb);
++	return ret;
+ }
+ 
+ int wfx_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
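+
[Editor's note] The wfx change above converts the early returns into a single free_skb exit so the skb acquired earlier in the function is released on every path, plugging a leak. A minimal sketch of the pattern (hypothetical function, not from the driver):

	#include <linux/skbuff.h>

	static int parse_then_free(struct sk_buff *skb)
	{
		int ret = -EINVAL;

		if (!skb)
			return -EINVAL;	/* nothing acquired yet */

		if (skb->len < 4)
			goto free_skb;	/* error paths share the cleanup */

		/* ... parse skb->data ... */
		ret = 0;

	free_skb:
		dev_kfree_skb(skb);
		return ret;
	}
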
+diff --git a/drivers/ntb/core.c b/drivers/ntb/core.c
+index 27dd93deff6e5..d702bee780826 100644
+--- a/drivers/ntb/core.c
++++ b/drivers/ntb/core.c
+@@ -100,6 +100,8 @@ EXPORT_SYMBOL(ntb_unregister_client);
+ 
+ int ntb_register_device(struct ntb_dev *ntb)
+ {
++	int ret;
++
+ 	if (!ntb)
+ 		return -EINVAL;
+ 	if (!ntb->pdev)
+@@ -120,7 +122,11 @@ int ntb_register_device(struct ntb_dev *ntb)
+ 	ntb->ctx_ops = NULL;
+ 	spin_lock_init(&ntb->ctx_lock);
+ 
+-	return device_register(&ntb->dev);
++	ret = device_register(&ntb->dev);
++	if (ret)
++		put_device(&ntb->dev);
++
++	return ret;
+ }
+ EXPORT_SYMBOL(ntb_register_device);
+ 
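+
[Editor's note] With this change a failed device_register() is unwound inside ntb_register_device() itself; the pci-epf-vntb hunk further down drops its now-redundant put_device(). The underlying driver-model rule: once device_register() has been called, the embedded kobject owns the object's lifetime, so the error path must drop the reference with put_device() rather than freeing directly. A rough sketch with a hypothetical mini-device:

	#include <linux/device.h>
	#include <linux/slab.h>

	struct my_dev {
		struct device dev;
	};

	static void my_dev_release(struct device *dev)
	{
		kfree(container_of(dev, struct my_dev, dev));
	}

	static int my_dev_add(struct my_dev *md)
	{
		int ret;

		md->dev.release = my_dev_release;
		ret = device_register(&md->dev);
		if (ret)
			put_device(&md->dev);	/* final ref drop calls release() */
		return ret;
	}
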
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 86149275ccb8e..771eac141298c 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4272,7 +4272,8 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
+ 	set->ops = ops;
+ 	set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
+ 	if (ctrl->ops->flags & NVME_F_FABRICS)
+-		set->reserved_tags = NVMF_RESERVED_TAGS;
++		/* Reserved for fabric connect and keep alive */
++		set->reserved_tags = 2;
+ 	set->numa_node = ctrl->numa_node;
+ 	set->flags = BLK_MQ_F_NO_SCHED;
+ 	if (ctrl->ops->flags & NVME_F_BLOCKING)
+@@ -4341,7 +4342,8 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
+ 	if (ctrl->quirks & NVME_QUIRK_SHARED_TAGS)
+ 		set->reserved_tags = NVME_AQ_DEPTH;
+ 	else if (ctrl->ops->flags & NVME_F_FABRICS)
+-		set->reserved_tags = NVMF_RESERVED_TAGS;
++		/* Reserved for fabric connect */
++		set->reserved_tags = 1;
+ 	set->numa_node = ctrl->numa_node;
+ 	set->flags = BLK_MQ_F_SHOULD_MERGE;
+ 	if (ctrl->ops->flags & NVME_F_BLOCKING)
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index fbaee5a7be196..e4acccc315bd9 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -18,13 +18,6 @@
+ /* default is -1: the fail fast mechanism is disabled  */
+ #define NVMF_DEF_FAIL_FAST_TMO		-1
+ 
+-/*
+- * Reserved one command for internal usage.  This command is used for sending
+- * the connect command, as well as for the keep alive command on the admin
+- * queue once live.
+- */
+-#define NVMF_RESERVED_TAGS	1
+-
+ /*
+  * Define a host as seen by the target.  We allocate one at boot, but also
+  * allow the override it when creating controllers.  This is both to provide
+diff --git a/drivers/opp/debugfs.c b/drivers/opp/debugfs.c
+index ec030b19164a3..f157fb50be50c 100644
+--- a/drivers/opp/debugfs.c
++++ b/drivers/opp/debugfs.c
+@@ -37,10 +37,12 @@ static ssize_t bw_name_read(struct file *fp, char __user *userbuf,
+ 			    size_t count, loff_t *ppos)
+ {
+ 	struct icc_path *path = fp->private_data;
++	const char *name = icc_get_name(path);
+ 	char buf[64];
+-	int i;
++	int i = 0;
+ 
+-	i = scnprintf(buf, sizeof(buf), "%.62s\n", icc_get_name(path));
++	if (name)
++		i = scnprintf(buf, sizeof(buf), "%.62s\n", name);
+ 
+ 	return simple_read_from_buffer(userbuf, count, ppos, buf, i);
+ }
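+
[Editor's note] The guard above handles icc_get_name() returning NULL, in which case the read now yields zero bytes instead of formatting a null pointer. The shape of the pattern, with a hypothetical file whose private_data holds the name string:

	#include <linux/fs.h>
	#include <linux/kernel.h>

	static ssize_t name_read(struct file *fp, char __user *userbuf,
				 size_t count, loff_t *ppos)
	{
		const char *name = fp->private_data;	/* may be NULL */
		char buf[64];
		int i = 0;

		if (name)
			i = scnprintf(buf, sizeof(buf), "%.62s\n", name);

		return simple_read_from_buffer(userbuf, count, ppos, buf, i);
	}
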
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index f9dd6622fe109..e47a77f943b1e 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -330,7 +330,7 @@ static int brcm_pcie_mdio_write(void __iomem *base, u8 port,
+ 	readl(base + PCIE_RC_DL_MDIO_ADDR);
+ 	writel(MDIO_DATA_DONE_MASK | wrdata, base + PCIE_RC_DL_MDIO_WR_DATA);
+ 
+-	err = readw_poll_timeout_atomic(base + PCIE_RC_DL_MDIO_WR_DATA, data,
++	err = readl_poll_timeout_atomic(base + PCIE_RC_DL_MDIO_WR_DATA, data,
+ 					MDIO_WT_DONE(data), 10, 100);
+ 	return err;
+ }
+diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+index 3f60128560ed0..2b7bc5a731dd6 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -1278,15 +1278,11 @@ static int pci_vntb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	ret = ntb_register_device(&ndev->ntb);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to register NTB device\n");
+-		goto err_register_dev;
++		return ret;
+ 	}
+ 
+ 	dev_dbg(dev, "PCI Virtual NTB driver loaded\n");
+ 	return 0;
+-
+-err_register_dev:
+-	put_device(&ndev->ntb.dev);
+-	return -EINVAL;
+ }
+ 
+ static struct pci_device_id pci_vntb_table[] = {
+diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
+index 0c361561b855c..4f47a13cb500f 100644
+--- a/drivers/pci/p2pdma.c
++++ b/drivers/pci/p2pdma.c
+@@ -661,7 +661,7 @@ calc_map_type_and_dist(struct pci_dev *provider, struct pci_dev *client,
+ 	p2pdma = rcu_dereference(provider->p2pdma);
+ 	if (p2pdma)
+ 		xa_store(&p2pdma->map_types, map_types_idx(client),
+-			 xa_mk_value(map_type), GFP_KERNEL);
++			 xa_mk_value(map_type), GFP_ATOMIC);
+ 	rcu_read_unlock();
+ 	return map_type;
+ }
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 24ae29f0d36d7..1697cb8289d49 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -366,11 +366,6 @@ static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
+ 	return 0;
+ }
+ 
+-static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
+-{
+-	return dev->error_state == pci_channel_io_perm_failure;
+-}
+-
+ /* pci_dev priv_flags */
+ #define PCI_DEV_ADDED 0
+ #define PCI_DPC_RECOVERED 1
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index 94111e4382413..e5d7c12854fa0 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -234,7 +234,7 @@ static void dpc_process_rp_pio_error(struct pci_dev *pdev)
+ 
+ 	for (i = 0; i < pdev->dpc_rp_log_size - 5; i++) {
+ 		pci_read_config_dword(pdev,
+-			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix);
++			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG + i * 4, &prefix);
+ 		pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
+ 	}
+  clear_status:
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index a2bf6de11462f..528044237bf9f 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5520,6 +5520,7 @@ static void quirk_no_ext_tags(struct pci_dev *pdev)
+ 
+ 	pci_walk_bus(bridge->bus, pci_configure_extended_tags, NULL);
+ }
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1004, quirk_no_ext_tags);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0132, quirk_no_ext_tags);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0140, quirk_no_ext_tags);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0141, quirk_no_ext_tags);
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index 1804794d0e686..5a4adf6c04cf8 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -1672,7 +1672,7 @@ static int switchtec_pci_probe(struct pci_dev *pdev,
+ 	rc = switchtec_init_isr(stdev);
+ 	if (rc) {
+ 		dev_err(&stdev->dev, "failed to init isr.\n");
+-		goto err_put;
++		goto err_exit_pci;
+ 	}
+ 
+ 	iowrite32(SWITCHTEC_EVENT_CLEAR |
+@@ -1693,6 +1693,8 @@ static int switchtec_pci_probe(struct pci_dev *pdev,
+ 
+ err_devadd:
+ 	stdev_kill(stdev);
++err_exit_pci:
++	switchtec_exit_pci(stdev);
+ err_put:
+ 	ida_free(&switchtec_minor_ida, MINOR(stdev->dev.devt));
+ 	put_device(&stdev->dev);
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index c584165b13bab..7e3aa7e2345fa 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -2305,6 +2305,17 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ 				dev_dbg(cmn->dev, "ignoring external node %llx\n", reg);
+ 				continue;
+ 			}
++			/*
++			 * AmpereOneX erratum AC04_MESH_1 makes some XPs report a bogus
++			 * child count larger than the number of valid child pointers.
++			 * A child offset of 0 can only occur on CMN-600; otherwise it
++			 * would imply the root node being its own grandchild, which
++			 * we can safely dismiss in general.
++			 */
++			if (reg == 0 && cmn->part != PART_CMN600) {
++				dev_dbg(cmn->dev, "bogus child pointer?\n");
++				continue;
++			}
+ 
+ 			arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn);
+ 
+diff --git a/drivers/perf/cxl_pmu.c b/drivers/perf/cxl_pmu.c
+index bc0d414a6aff9..308c9969642e1 100644
+--- a/drivers/perf/cxl_pmu.c
++++ b/drivers/perf/cxl_pmu.c
+@@ -59,7 +59,7 @@
+ #define   CXL_PMU_COUNTER_CFG_EVENT_GRP_ID_IDX_MSK	GENMASK_ULL(63, 59)
+ 
+ #define CXL_PMU_FILTER_CFG_REG(n, f)	(0x400 + 4 * ((f) + (n) * 8))
+-#define   CXL_PMU_FILTER_CFG_VALUE_MSK			GENMASK(15, 0)
++#define   CXL_PMU_FILTER_CFG_VALUE_MSK			GENMASK(31, 0)
+ 
+ #define CXL_PMU_COUNTER_REG(n)		(0xc00 + 8 * (n))
+ 
+@@ -314,9 +314,9 @@ static bool cxl_pmu_config1_get_edge(struct perf_event *event)
+ }
+ 
+ /*
+- * CPMU specification allows for 8 filters, each with a 16 bit value...
+- * So we need to find 8x16bits to store it in.
+- * As the value used for disable is 0xffff, a separate enable switch
++ * The CPMU specification allows for 8 filters, each with a 32-bit value...
++ * So we need to find 8x32 bits to store them in.

++ * As the value used for disable is 0xffff_ffff, a separate enable switch
+  * is needed.
+  */
+ 
+@@ -642,7 +642,7 @@ static void cxl_pmu_event_start(struct perf_event *event, int flags)
+ 		if (cxl_pmu_config1_hdm_filter_en(event))
+ 			cfg = cxl_pmu_config2_get_hdm_decoder(event);
+ 		else
+-			cfg = GENMASK(15, 0); /* No filtering if 0xFFFF_FFFF */
++			cfg = GENMASK(31, 0); /* No filtering if 0xFFFF_FFFF */
+ 		writeq(cfg, base + CXL_PMU_FILTER_CFG_REG(hwc->idx, 0));
+ 	}
+ 
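+
[Editor's note] Both cxl_pmu.c hunks widen the filter value from 16 to 32 bits simply by switching GENMASK(15, 0) to GENMASK(31, 0). GENMASK(h, l) from <linux/bits.h> builds a mask with bits h down to l set; the names below are illustrative, not the driver's:

	#include <linux/bits.h>

	#define FILTER_VALUE_MSK_OLD	GENMASK(15, 0)	/* 0x0000ffff */
	#define FILTER_VALUE_MSK	GENMASK(31, 0)	/* 0xffffffff */
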
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 16acd4dcdb96c..452aab49db1e8 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -512,7 +512,7 @@ static void pmu_sbi_set_scounteren(void *arg)
+ 
+ 	if (event->hw.idx != -1)
+ 		csr_write(CSR_SCOUNTEREN,
+-			  csr_read(CSR_SCOUNTEREN) | (1 << pmu_sbi_csr_index(event)));
++			  csr_read(CSR_SCOUNTEREN) | BIT(pmu_sbi_csr_index(event)));
+ }
+ 
+ static void pmu_sbi_reset_scounteren(void *arg)
+@@ -521,7 +521,7 @@ static void pmu_sbi_reset_scounteren(void *arg)
+ 
+ 	if (event->hw.idx != -1)
+ 		csr_write(CSR_SCOUNTEREN,
+-			  csr_read(CSR_SCOUNTEREN) & ~(1 << pmu_sbi_csr_index(event)));
++			  csr_read(CSR_SCOUNTEREN) & ~BIT(pmu_sbi_csr_index(event)));
+ }
+ 
+ static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival)
+@@ -731,14 +731,14 @@ static irqreturn_t pmu_sbi_ovf_handler(int irq, void *dev)
+ 		/* compute hardware counter index */
+ 		hidx = info->csr - CSR_CYCLE;
+ 		/* check if the corresponding bit is set in sscountovf */
+-		if (!(overflow & (1 << hidx)))
++		if (!(overflow & BIT(hidx)))
+ 			continue;
+ 
+ 		/*
+ 		 * Keep a track of overflowed counters so that they can be started
+ 		 * with updated initial value.
+ 		 */
+-		overflowed_ctrs |= 1 << lidx;
++		overflowed_ctrs |= BIT(lidx);
+ 		hw_evt = &event->hw;
+ 		riscv_pmu_event_update(event);
+ 		perf_sample_data_init(&data, 0, hw_evt->last_period);
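+
[Editor's note] The BIT() conversions above are more than cosmetic: BIT(n) expands to (1UL << (n)), so the shift is done in unsigned long rather than signed int and cannot hit undefined behaviour for counter indexes of 31 and above. A one-line sketch (hypothetical helper):

	#include <linux/bits.h>

	static unsigned long enable_counter(unsigned long mask, int idx)
	{
		return mask | BIT(idx);	/* was: mask | (1 << idx) */
	}
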
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt8186.c b/drivers/pinctrl/mediatek/pinctrl-mt8186.c
+index a02f7c3269707..09edcf47effec 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt8186.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt8186.c
+@@ -1198,7 +1198,6 @@ static const struct mtk_pin_reg_calc mt8186_reg_cals[PINCTRL_PIN_REG_MAX] = {
+ 	[PINCTRL_PIN_REG_DIR] = MTK_RANGE(mt8186_pin_dir_range),
+ 	[PINCTRL_PIN_REG_DI] = MTK_RANGE(mt8186_pin_di_range),
+ 	[PINCTRL_PIN_REG_DO] = MTK_RANGE(mt8186_pin_do_range),
+-	[PINCTRL_PIN_REG_SR] = MTK_RANGE(mt8186_pin_dir_range),
+ 	[PINCTRL_PIN_REG_SMT] = MTK_RANGE(mt8186_pin_smt_range),
+ 	[PINCTRL_PIN_REG_IES] = MTK_RANGE(mt8186_pin_ies_range),
+ 	[PINCTRL_PIN_REG_PU] = MTK_RANGE(mt8186_pin_pu_range),
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt8192.c b/drivers/pinctrl/mediatek/pinctrl-mt8192.c
+index dee1b3aefd36e..bf5788d6810ff 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt8192.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt8192.c
+@@ -1379,7 +1379,6 @@ static const struct mtk_pin_reg_calc mt8192_reg_cals[PINCTRL_PIN_REG_MAX] = {
+ 	[PINCTRL_PIN_REG_DIR] = MTK_RANGE(mt8192_pin_dir_range),
+ 	[PINCTRL_PIN_REG_DI] = MTK_RANGE(mt8192_pin_di_range),
+ 	[PINCTRL_PIN_REG_DO] = MTK_RANGE(mt8192_pin_do_range),
+-	[PINCTRL_PIN_REG_SR] = MTK_RANGE(mt8192_pin_dir_range),
+ 	[PINCTRL_PIN_REG_SMT] = MTK_RANGE(mt8192_pin_smt_range),
+ 	[PINCTRL_PIN_REG_IES] = MTK_RANGE(mt8192_pin_ies_range),
+ 	[PINCTRL_PIN_REG_PU] = MTK_RANGE(mt8192_pin_pu_range),
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index 863732287b1e5..445c61a4a7e55 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -1575,8 +1575,10 @@ static int nmk_pmx_set(struct pinctrl_dev *pctldev, unsigned function,
+ 		 * Then mask the pins that need to be sleeping now when we're
+ 		 * switching to the ALT C function.
+ 		 */
+-		for (i = 0; i < g->grp.npins; i++)
+-			slpm[g->grp.pins[i] / NMK_GPIO_PER_CHIP] &= ~BIT(g->grp.pins[i]);
++		for (i = 0; i < g->grp.npins; i++) {
++			unsigned int bit = g->grp.pins[i] % NMK_GPIO_PER_CHIP;
++			slpm[g->grp.pins[i] / NMK_GPIO_PER_CHIP] &= ~BIT(bit);
++		}
+ 		nmk_gpio_glitch_slpm_init(slpm);
+ 	}
+ 
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 93e51abbf519a..d1e92bbed33ad 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -731,10 +731,12 @@ static int sh_pfc_resume_noirq(struct device *dev)
+ 		sh_pfc_walk_regs(pfc, sh_pfc_restore_reg);
+ 	return 0;
+ }
++#define pm_psci_sleep_ptr(_ptr)	pm_sleep_ptr(_ptr)
+ #else
+ static int sh_pfc_suspend_init(struct sh_pfc *pfc) { return 0; }
+ static int sh_pfc_suspend_noirq(struct device *dev) { return 0; }
+ static int sh_pfc_resume_noirq(struct device *dev) { return 0; }
++#define pm_psci_sleep_ptr(_ptr)	PTR_IF(false, (_ptr))
+ #endif	/* CONFIG_ARM_PSCI_FW */
+ 
+ static DEFINE_NOIRQ_DEV_PM_OPS(sh_pfc_pm, sh_pfc_suspend_noirq, sh_pfc_resume_noirq);
+@@ -1415,7 +1417,7 @@ static struct platform_driver sh_pfc_driver = {
+ 	.driver		= {
+ 		.name	= DRV_NAME,
+ 		.of_match_table = of_match_ptr(sh_pfc_of_table),
+-		.pm	= pm_sleep_ptr(&sh_pfc_pm),
++		.pm	= pm_psci_sleep_ptr(&sh_pfc_pm),
+ 	},
+ };
+ 
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779g0.c b/drivers/pinctrl/renesas/pfc-r8a779g0.c
+index acdea6ac15253..d2de526a3b588 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779g0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779g0.c
+@@ -2384,6 +2384,14 @@ static const unsigned int scif_clk_mux[] = {
+ 	SCIF_CLK_MARK,
+ };
+ 
++static const unsigned int scif_clk2_pins[] = {
++	/* SCIF_CLK2 */
++	RCAR_GP_PIN(8, 11),
++};
++static const unsigned int scif_clk2_mux[] = {
++	SCIF_CLK2_MARK,
++};
++
+ /* - SSI ------------------------------------------------- */
+ static const unsigned int ssi_data_pins[] = {
+ 	/* SSI_SD */
+@@ -2694,6 +2702,7 @@ static const struct sh_pfc_pin_group pinmux_groups[] = {
+ 	SH_PFC_PIN_GROUP(scif4_clk),
+ 	SH_PFC_PIN_GROUP(scif4_ctrl),
+ 	SH_PFC_PIN_GROUP(scif_clk),
++	SH_PFC_PIN_GROUP(scif_clk2),
+ 
+ 	SH_PFC_PIN_GROUP(ssi_data),
+ 	SH_PFC_PIN_GROUP(ssi_ctrl),
+@@ -3015,6 +3024,10 @@ static const char * const scif_clk_groups[] = {
+ 	"scif_clk",
+ };
+ 
++static const char * const scif_clk2_groups[] = {
++	"scif_clk2",
++};
++
+ static const char * const ssi_groups[] = {
+ 	"ssi_data",
+ 	"ssi_ctrl",
+@@ -3102,6 +3115,7 @@ static const struct sh_pfc_function pinmux_functions[] = {
+ 	SH_PFC_FUNCTION(scif3),
+ 	SH_PFC_FUNCTION(scif4),
+ 	SH_PFC_FUNCTION(scif_clk),
++	SH_PFC_FUNCTION(scif_clk2),
+ 
+ 	SH_PFC_FUNCTION(ssi),
+ 
+diff --git a/drivers/platform/x86/p2sb.c b/drivers/platform/x86/p2sb.c
+index 17cc4b45e0239..a64f56ddd4a44 100644
+--- a/drivers/platform/x86/p2sb.c
++++ b/drivers/platform/x86/p2sb.c
+@@ -20,9 +20,11 @@
+ #define P2SBC_HIDE		BIT(8)
+ 
+ #define P2SB_DEVFN_DEFAULT	PCI_DEVFN(31, 1)
++#define P2SB_DEVFN_GOLDMONT	PCI_DEVFN(13, 0)
++#define SPI_DEVFN_GOLDMONT	PCI_DEVFN(13, 2)
+ 
+ static const struct x86_cpu_id p2sb_cpu_ids[] = {
+-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT,	PCI_DEVFN(13, 0)),
++	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, P2SB_DEVFN_GOLDMONT),
+ 	{}
+ };
+ 
+@@ -98,21 +100,12 @@ static void p2sb_scan_and_cache_devfn(struct pci_bus *bus, unsigned int devfn)
+ 
+ static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn)
+ {
+-	unsigned int slot, fn;
+-
+-	if (PCI_FUNC(devfn) == 0) {
+-		/*
+-		 * When function number of the P2SB device is zero, scan it and
+-		 * other function numbers, and if devices are available, cache
+-		 * their BAR0s.
+-		 */
+-		slot = PCI_SLOT(devfn);
+-		for (fn = 0; fn < NR_P2SB_RES_CACHE; fn++)
+-			p2sb_scan_and_cache_devfn(bus, PCI_DEVFN(slot, fn));
+-	} else {
+-		/* Scan the P2SB device and cache its BAR0 */
+-		p2sb_scan_and_cache_devfn(bus, devfn);
+-	}
++	/* Scan the P2SB device and cache its BAR0 */
++	p2sb_scan_and_cache_devfn(bus, devfn);
++
++	/* On Goldmont p2sb_bar() also gets called for the SPI controller */
++	if (devfn == P2SB_DEVFN_GOLDMONT)
++		p2sb_scan_and_cache_devfn(bus, SPI_DEVFN_GOLDMONT);
+ 
+ 	if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res))
+ 		return -ENOENT;
+diff --git a/drivers/platform/x86/x86-android-tablets/other.c b/drivers/platform/x86/x86-android-tablets/other.c
+index bc6bbf7ec6ea1..278402dcb808c 100644
+--- a/drivers/platform/x86/x86-android-tablets/other.c
++++ b/drivers/platform/x86/x86-android-tablets/other.c
+@@ -68,7 +68,7 @@ static const struct x86_i2c_client_info acer_b1_750_i2c_clients[] __initconst =
+ 	},
+ };
+ 
+-static struct gpiod_lookup_table acer_b1_750_goodix_gpios = {
++static struct gpiod_lookup_table acer_b1_750_nvt_ts_gpios = {
+ 	.dev_id = "i2c-NVT-ts",
+ 	.table = {
+ 		GPIO_LOOKUP("INT33FC:01", 26, "reset", GPIO_ACTIVE_LOW),
+@@ -77,7 +77,7 @@ static struct gpiod_lookup_table acer_b1_750_goodix_gpios = {
+ };
+ 
+ static struct gpiod_lookup_table * const acer_b1_750_gpios[] = {
+-	&acer_b1_750_goodix_gpios,
++	&acer_b1_750_nvt_ts_gpios,
+ 	&int3496_reference_gpios,
+ 	NULL
+ };
+diff --git a/drivers/pmdomain/qcom/rpmhpd.c b/drivers/pmdomain/qcom/rpmhpd.c
+index 0f0943e3efe50..e4d5afab7b3b2 100644
+--- a/drivers/pmdomain/qcom/rpmhpd.c
++++ b/drivers/pmdomain/qcom/rpmhpd.c
+@@ -217,7 +217,6 @@ static struct rpmhpd *sa8540p_rpmhpds[] = {
+ 	[SC8280XP_CX] = &cx,
+ 	[SC8280XP_CX_AO] = &cx_ao,
+ 	[SC8280XP_EBI] = &ebi,
+-	[SC8280XP_GFX] = &gfx,
+ 	[SC8280XP_LCX] = &lcx,
+ 	[SC8280XP_LMX] = &lmx,
+ 	[SC8280XP_MMCX] = &mmcx,
+diff --git a/drivers/power/supply/mm8013.c b/drivers/power/supply/mm8013.c
+index caa272b035649..20c1651ca38e0 100644
+--- a/drivers/power/supply/mm8013.c
++++ b/drivers/power/supply/mm8013.c
+@@ -71,7 +71,6 @@ static int mm8013_checkdevice(struct mm8013_chip *chip)
+ 
+ static enum power_supply_property mm8013_battery_props[] = {
+ 	POWER_SUPPLY_PROP_CAPACITY,
+-	POWER_SUPPLY_PROP_CHARGE_BEHAVIOUR,
+ 	POWER_SUPPLY_PROP_CHARGE_FULL,
+ 	POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN,
+ 	POWER_SUPPLY_PROP_CHARGE_NOW,
+@@ -103,16 +102,6 @@ static int mm8013_get_property(struct power_supply *psy,
+ 
+ 		val->intval = regval;
+ 		break;
+-	case POWER_SUPPLY_PROP_CHARGE_BEHAVIOUR:
+-		ret = regmap_read(chip->regmap, REG_FLAGS, &regval);
+-		if (ret < 0)
+-			return ret;
+-
+-		if (regval & MM8013_FLAG_CHG_INH)
+-			val->intval = POWER_SUPPLY_CHARGE_BEHAVIOUR_INHIBIT_CHARGE;
+-		else
+-			val->intval = POWER_SUPPLY_CHARGE_BEHAVIOUR_AUTO;
+-		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_FULL:
+ 		ret = regmap_read(chip->regmap, REG_FULL_CHARGE_CAPACITY, &regval);
+ 		if (ret < 0)
+@@ -187,6 +176,8 @@ static int mm8013_get_property(struct power_supply *psy,
+ 
+ 		if (regval & MM8013_FLAG_DSG)
+ 			val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
++		else if (regval & MM8013_FLAG_CHG_INH)
++			val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ 		else if (regval & MM8013_FLAG_CHG)
+ 			val->intval = POWER_SUPPLY_STATUS_CHARGING;
+ 		else if (regval & MM8013_FLAG_FC)
+diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
+index 9193c3b8edebe..ae7ee611978ba 100644
+--- a/drivers/powercap/dtpm_cpu.c
++++ b/drivers/powercap/dtpm_cpu.c
+@@ -219,7 +219,7 @@ static int __dtpm_cpu_setup(int cpu, struct dtpm *parent)
+ 	ret = freq_qos_add_request(&policy->constraints,
+ 				   &dtpm_cpu->qos_req, FREQ_QOS_MAX,
+ 				   pd->table[pd->nr_perf_states - 1].frequency);
+-	if (ret)
++	if (ret < 0)
+ 		goto out_dtpm_unregister;
+ 
+ 	cpufreq_cpu_put(policy);
+diff --git a/drivers/pwm/pwm-atmel-hlcdc.c b/drivers/pwm/pwm-atmel-hlcdc.c
+index 07920e0347575..584288d8c0503 100644
+--- a/drivers/pwm/pwm-atmel-hlcdc.c
++++ b/drivers/pwm/pwm-atmel-hlcdc.c
+@@ -186,7 +186,7 @@ static int atmel_hlcdc_pwm_suspend(struct device *dev)
+ 	struct atmel_hlcdc_pwm *atmel = dev_get_drvdata(dev);
+ 
+ 	/* Keep the periph clock enabled if the PWM is still running. */
+-	if (pwm_is_enabled(&atmel->chip.pwms[0]))
++	if (!pwm_is_enabled(&atmel->chip.pwms[0]))
+ 		clk_disable_unprepare(atmel->hlcdc->periph_clk);
+ 
+ 	return 0;
+diff --git a/drivers/pwm/pwm-sti.c b/drivers/pwm/pwm-sti.c
+index dc92cea31cd07..d885b30a147dc 100644
+--- a/drivers/pwm/pwm-sti.c
++++ b/drivers/pwm/pwm-sti.c
+@@ -395,8 +395,17 @@ static int sti_pwm_capture(struct pwm_chip *chip, struct pwm_device *pwm,
+ static int sti_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			 const struct pwm_state *state)
+ {
++	struct sti_pwm_chip *pc = to_sti_pwmchip(chip);
++	struct sti_pwm_compat_data *cdata = pc->cdata;
++	struct device *dev = pc->dev;
+ 	int err;
+ 
++	if (pwm->hwpwm >= cdata->pwm_num_devs) {
++		dev_err(dev, "device %u is not valid for pwm mode\n",
++			pwm->hwpwm);
++		return -EINVAL;
++	}
++
+ 	if (state->polarity != PWM_POLARITY_NORMAL)
+ 		return -EINVAL;
+ 
+@@ -646,7 +655,7 @@ static int sti_pwm_probe(struct platform_device *pdev)
+ 
+ 	pc->chip.dev = dev;
+ 	pc->chip.ops = &sti_pwm_ops;
+-	pc->chip.npwm = pc->cdata->pwm_num_devs;
++	pc->chip.npwm = max(cdata->pwm_num_devs, cdata->cpt_num_devs);
+ 
+ 	for (i = 0; i < cdata->cpt_num_devs; i++) {
+ 		struct sti_cpt_ddata *ddata = &cdata->ddata[i];
+diff --git a/drivers/regulator/max5970-regulator.c b/drivers/regulator/max5970-regulator.c
+index 830a1c4cd7057..8bbcd983a74aa 100644
+--- a/drivers/regulator/max5970-regulator.c
++++ b/drivers/regulator/max5970-regulator.c
+@@ -29,8 +29,8 @@ struct max5970_regulator {
+ };
+ 
+ enum max597x_regulator_id {
+-	MAX597X_SW0,
+-	MAX597X_SW1,
++	MAX597X_sw0,
++	MAX597X_sw1,
+ };
+ 
+ static int max5970_read_adc(struct regmap *regmap, int reg, long *val)
+@@ -378,8 +378,8 @@ static int max597x_dt_parse(struct device_node *np,
+ }
+ 
+ static const struct regulator_desc regulators[] = {
+-	MAX597X_SWITCH(SW0, MAX5970_REG_CHXEN, 0, "vss1"),
+-	MAX597X_SWITCH(SW1, MAX5970_REG_CHXEN, 1, "vss2"),
++	MAX597X_SWITCH(sw0, MAX5970_REG_CHXEN, 0, "vss1"),
++	MAX597X_SWITCH(sw1, MAX5970_REG_CHXEN, 1, "vss2"),
+ };
+ 
+ static int max597x_regmap_read_clear(struct regmap *map, unsigned int reg,
+diff --git a/drivers/regulator/userspace-consumer.c b/drivers/regulator/userspace-consumer.c
+index 97f075ed68c95..cb1de24b98626 100644
+--- a/drivers/regulator/userspace-consumer.c
++++ b/drivers/regulator/userspace-consumer.c
+@@ -210,6 +210,7 @@ static const struct of_device_id regulator_userspace_consumer_of_match[] = {
+ 	{ .compatible = "regulator-output", },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, regulator_userspace_consumer_of_match);
+ 
+ static struct platform_driver regulator_userspace_consumer_driver = {
+ 	.probe		= regulator_userspace_consumer_probe,
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 4f469f0bcf8b2..10b442c6f6323 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -120,7 +120,7 @@ static int stm32_rproc_mem_alloc(struct rproc *rproc,
+ 	void *va;
+ 
+ 	dev_dbg(dev, "map memory: %pad+%zx\n", &mem->dma, mem->len);
+-	va = ioremap_wc(mem->dma, mem->len);
++	va = (__force void *)ioremap_wc(mem->dma, mem->len);
+ 	if (IS_ERR_OR_NULL(va)) {
+ 		dev_err(dev, "Unable to map memory region: %pad+0x%zx\n",
+ 			&mem->dma, mem->len);
+@@ -137,7 +137,7 @@ static int stm32_rproc_mem_release(struct rproc *rproc,
+ 				   struct rproc_mem_entry *mem)
+ {
+ 	dev_dbg(rproc->dev.parent, "unmap memory: %pa\n", &mem->dma);
+-	iounmap(mem->va);
++	iounmap((__force __iomem void *)mem->va);
+ 
+ 	return 0;
+ }
+@@ -657,7 +657,7 @@ stm32_rproc_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz)
+ 	 * entire area by overwriting it with the initial values stored in rproc->clean_table.
+ 	 */
+ 	*table_sz = RSC_TBL_SIZE;
+-	return (struct resource_table *)ddata->rsc_va;
++	return (__force struct resource_table *)ddata->rsc_va;
+ }
+ 
+ static const struct rproc_ops st_rproc_ops = {
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index 3814e0845e772..b1e1d277d4593 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -1832,7 +1832,8 @@ config RTC_DRV_MT2712
+ 
+ config RTC_DRV_MT6397
+ 	tristate "MediaTek PMIC based RTC"
+-	depends on MFD_MT6397 || (COMPILE_TEST && IRQ_DOMAIN)
++	depends on MFD_MT6397 || COMPILE_TEST
++	select IRQ_DOMAIN
+ 	help
+ 	  This selects the MediaTek(R) RTC driver. RTC is part of MediaTek
+ 	  MT6397 PMIC. You should enable MT6397 PMIC MFD before select
+diff --git a/drivers/rtc/lib_test.c b/drivers/rtc/lib_test.c
+index d5caf36c56cdc..225c859d6da55 100644
+--- a/drivers/rtc/lib_test.c
++++ b/drivers/rtc/lib_test.c
+@@ -54,7 +54,7 @@ static void rtc_time64_to_tm_test_date_range(struct kunit *test)
+ 
+ 		days = div_s64(secs, 86400);
+ 
+-		#define FAIL_MSG "%d/%02d/%02d (%2d) : %ld", \
++		#define FAIL_MSG "%d/%02d/%02d (%2d) : %lld", \
+ 			year, month, mday, yday, days
+ 
+ 		KUNIT_ASSERT_EQ_MSG(test, year - 1900, result.tm_year, FAIL_MSG);
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 833cfab7d8776..0eea5afe9e9ea 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -8,9 +8,6 @@
+  * Copyright IBM Corp. 1999, 2009
+  */
+ 
+-#define KMSG_COMPONENT "dasd"
+-#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+-
+ #include <linux/kmod.h>
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+@@ -3408,8 +3405,7 @@ static void dasd_generic_auto_online(void *data, async_cookie_t cookie)
+ 
+ 	ret = ccw_device_set_online(cdev);
+ 	if (ret)
+-		pr_warn("%s: Setting the DASD online failed with rc=%d\n",
+-			dev_name(&cdev->dev), ret);
++		dev_warn(&cdev->dev, "Setting the DASD online failed with rc=%d\n", ret);
+ }
+ 
+ /*
+@@ -3496,8 +3492,11 @@ int dasd_generic_set_online(struct ccw_device *cdev,
+ {
+ 	struct dasd_discipline *discipline;
+ 	struct dasd_device *device;
++	struct device *dev;
+ 	int rc;
+ 
++	dev = &cdev->dev;
++
+ 	/* first online clears initial online feature flag */
+ 	dasd_set_feature(cdev, DASD_FEATURE_INITIAL_ONLINE, 0);
+ 	device = dasd_create_device(cdev);
+@@ -3510,11 +3509,10 @@ int dasd_generic_set_online(struct ccw_device *cdev,
+ 			/* Try to load the required module. */
+ 			rc = request_module(DASD_DIAG_MOD);
+ 			if (rc) {
+-				pr_warn("%s Setting the DASD online failed "
+-					"because the required module %s "
+-					"could not be loaded (rc=%d)\n",
+-					dev_name(&cdev->dev), DASD_DIAG_MOD,
+-					rc);
++				dev_warn(dev, "Setting the DASD online failed "
++					 "because the required module %s "
++					 "could not be loaded (rc=%d)\n",
++					 DASD_DIAG_MOD, rc);
+ 				dasd_delete_device(device);
+ 				return -ENODEV;
+ 			}
+@@ -3522,8 +3520,7 @@ int dasd_generic_set_online(struct ccw_device *cdev,
+ 		/* Module init could have failed, so check again here after
+ 		 * request_module(). */
+ 		if (!dasd_diag_discipline_pointer) {
+-			pr_warn("%s Setting the DASD online failed because of missing DIAG discipline\n",
+-				dev_name(&cdev->dev));
++			dev_warn(dev, "Setting the DASD online failed because of missing DIAG discipline\n");
+ 			dasd_delete_device(device);
+ 			return -ENODEV;
+ 		}
+@@ -3533,37 +3530,33 @@ int dasd_generic_set_online(struct ccw_device *cdev,
+ 		dasd_delete_device(device);
+ 		return -EINVAL;
+ 	}
++	device->base_discipline = base_discipline;
+ 	if (!try_module_get(discipline->owner)) {
+-		module_put(base_discipline->owner);
+ 		dasd_delete_device(device);
+ 		return -EINVAL;
+ 	}
+-	device->base_discipline = base_discipline;
+ 	device->discipline = discipline;
+ 
+ 	/* check_device will allocate block device if necessary */
+ 	rc = discipline->check_device(device);
+ 	if (rc) {
+-		pr_warn("%s Setting the DASD online with discipline %s failed with rc=%i\n",
+-			dev_name(&cdev->dev), discipline->name, rc);
+-		module_put(discipline->owner);
+-		module_put(base_discipline->owner);
++		dev_warn(dev, "Setting the DASD online with discipline %s failed with rc=%i\n",
++			 discipline->name, rc);
+ 		dasd_delete_device(device);
+ 		return rc;
+ 	}
+ 
+ 	dasd_set_target_state(device, DASD_STATE_ONLINE);
+ 	if (device->state <= DASD_STATE_KNOWN) {
+-		pr_warn("%s Setting the DASD online failed because of a missing discipline\n",
+-			dev_name(&cdev->dev));
++		dev_warn(dev, "Setting the DASD online failed because of a missing discipline\n");
+ 		rc = -ENODEV;
+ 		dasd_set_target_state(device, DASD_STATE_NEW);
+ 		if (device->block)
+ 			dasd_free_block(device->block);
+ 		dasd_delete_device(device);
+-	} else
+-		pr_debug("dasd_generic device %s found\n",
+-				dev_name(&cdev->dev));
++	} else {
++		dev_dbg(dev, "dasd_generic device found\n");
++	}
+ 
+ 	wait_event(dasd_init_waitq, _wait_for_device(device));
+ 
+@@ -3574,10 +3567,13 @@ EXPORT_SYMBOL_GPL(dasd_generic_set_online);
+ 
+ int dasd_generic_set_offline(struct ccw_device *cdev)
+ {
++	int max_count, open_count, rc;
+ 	struct dasd_device *device;
+ 	struct dasd_block *block;
+-	int max_count, open_count, rc;
+ 	unsigned long flags;
++	struct device *dev;
++
++	dev = &cdev->dev;
+ 
+ 	rc = 0;
+ 	spin_lock_irqsave(get_ccwdev_lock(cdev), flags);
+@@ -3598,11 +3594,10 @@ int dasd_generic_set_offline(struct ccw_device *cdev)
+ 		open_count = atomic_read(&device->block->open_count);
+ 		if (open_count > max_count) {
+ 			if (open_count > 0)
+-				pr_warn("%s: The DASD cannot be set offline with open count %i\n",
+-					dev_name(&cdev->dev), open_count);
++				dev_warn(dev, "The DASD cannot be set offline with open count %i\n",
++					 open_count);
+ 			else
+-				pr_warn("%s: The DASD cannot be set offline while it is in use\n",
+-					dev_name(&cdev->dev));
++				dev_warn(dev, "The DASD cannot be set offline while it is in use\n");
+ 			rc = -EBUSY;
+ 			goto out_err;
+ 		}
+@@ -3962,8 +3957,8 @@ static int dasd_handle_autoquiesce(struct dasd_device *device,
+ 	if (dasd_eer_enabled(device))
+ 		dasd_eer_write(device, NULL, DASD_EER_AUTOQUIESCE);
+ 
+-	pr_info("%s: The DASD has been put in the quiesce state\n",
+-		dev_name(&device->cdev->dev));
++	dev_info(&device->cdev->dev,
++		 "The DASD has been put in the quiesce state\n");
+ 	dasd_device_set_stop_bits(device, DASD_STOPPED_QUIESCE);
+ 
+ 	if (device->features & DASD_FEATURE_REQUEUEQUIESCE)
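+
[Editor's note] The dasd conversions above replace pr_warn("%s: ...", dev_name(&cdev->dev), ...) with dev_warn(), which prefixes messages with the driver and device name automatically; the KMSG_COMPONENT/pr_fmt boilerplate removed at the top of the file then becomes unnecessary. For example (output format shown is illustrative):

	#include <linux/device.h>

	static void report_online_failure(struct device *dev, int rc)
	{
		/* logs e.g. "dasd-eckd 0.0.1234: Setting the DASD online failed with rc=-5" */
		dev_warn(dev, "Setting the DASD online failed with rc=%d\n", rc);
	}
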
+diff --git a/drivers/scsi/bfa/bfa.h b/drivers/scsi/bfa/bfa.h
+index 7bd2ba1ad4d11..f30fe324e6ecc 100644
+--- a/drivers/scsi/bfa/bfa.h
++++ b/drivers/scsi/bfa/bfa.h
+@@ -20,7 +20,6 @@
+ struct bfa_s;
+ 
+ typedef void (*bfa_isr_func_t) (struct bfa_s *bfa, struct bfi_msg_s *m);
+-typedef void (*bfa_cb_cbfn_status_t) (void *cbarg, bfa_status_t status);
+ 
+ /*
+  * Interrupt message handlers
+@@ -437,4 +436,12 @@ struct bfa_cb_pending_q_s {
+ 	(__qe)->data = (__data);				\
+ } while (0)
+ 
++#define bfa_pending_q_init_status(__qe, __cbfn, __cbarg, __data) do {	\
++	bfa_q_qe_init(&((__qe)->hcb_qe.qe));			\
++	(__qe)->hcb_qe.cbfn_status = (__cbfn);			\
++	(__qe)->hcb_qe.cbarg = (__cbarg);			\
++	(__qe)->hcb_qe.pre_rmv = BFA_TRUE;			\
++	(__qe)->data = (__data);				\
++} while (0)
++
+ #endif /* __BFA_H__ */
+diff --git a/drivers/scsi/bfa/bfa_core.c b/drivers/scsi/bfa/bfa_core.c
+index 6846ca8f7313c..3438d0b8ba062 100644
+--- a/drivers/scsi/bfa/bfa_core.c
++++ b/drivers/scsi/bfa/bfa_core.c
+@@ -1907,15 +1907,13 @@ bfa_comp_process(struct bfa_s *bfa, struct list_head *comp_q)
+ 	struct list_head		*qe;
+ 	struct list_head		*qen;
+ 	struct bfa_cb_qe_s	*hcb_qe;
+-	bfa_cb_cbfn_status_t	cbfn;
+ 
+ 	list_for_each_safe(qe, qen, comp_q) {
+ 		hcb_qe = (struct bfa_cb_qe_s *) qe;
+ 		if (hcb_qe->pre_rmv) {
+ 			/* qe is invalid after return, dequeue before cbfn() */
+ 			list_del(qe);
+-			cbfn = (bfa_cb_cbfn_status_t)(hcb_qe->cbfn);
+-			cbfn(hcb_qe->cbarg, hcb_qe->fw_status);
++			hcb_qe->cbfn_status(hcb_qe->cbarg, hcb_qe->fw_status);
+ 		} else
+ 			hcb_qe->cbfn(hcb_qe->cbarg, BFA_TRUE);
+ 	}
+diff --git a/drivers/scsi/bfa/bfa_ioc.h b/drivers/scsi/bfa/bfa_ioc.h
+index 933a1c3890ff5..5e568d6d7b261 100644
+--- a/drivers/scsi/bfa/bfa_ioc.h
++++ b/drivers/scsi/bfa/bfa_ioc.h
+@@ -361,14 +361,18 @@ struct bfa_reqq_wait_s {
+ 	void	*cbarg;
+ };
+ 
+-typedef void	(*bfa_cb_cbfn_t) (void *cbarg, bfa_boolean_t complete);
++typedef void (*bfa_cb_cbfn_t) (void *cbarg, bfa_boolean_t complete);
++typedef void (*bfa_cb_cbfn_status_t) (void *cbarg, bfa_status_t status);
+ 
+ /*
+  * Generic BFA callback element.
+  */
+ struct bfa_cb_qe_s {
+ 	struct list_head	qe;
+-	bfa_cb_cbfn_t	cbfn;
++	union {
++		bfa_cb_cbfn_status_t	cbfn_status;
++		bfa_cb_cbfn_t		cbfn;
++	};
+ 	bfa_boolean_t	once;
+ 	bfa_boolean_t	pre_rmv;	/* set for stack based qe(s) */
+ 	bfa_status_t	fw_status;	/* to access fw status in comp proc */
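+
[Editor's note] Storing both callback flavours in an anonymous union, as above, avoids calling a function through a cast to an incompatible pointer type, which is undefined behaviour and trips kernel control-flow-integrity checking. The shape of the fix, reduced to its essentials (types simplified to int):

	typedef void (*cb_bool_t)(void *cbarg, int complete);
	typedef void (*cb_status_t)(void *cbarg, int status);

	struct cb_entry {
		union {			/* exactly one member is valid */
			cb_bool_t	cbfn;
			cb_status_t	cbfn_status;
		};
		void	*cbarg;
		int	pre_rmv;	/* selects cbfn_status when set */
	};
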
+diff --git a/drivers/scsi/bfa/bfad_bsg.c b/drivers/scsi/bfa/bfad_bsg.c
+index d4ceca2d435ee..54bd11e6d5933 100644
+--- a/drivers/scsi/bfa/bfad_bsg.c
++++ b/drivers/scsi/bfa/bfad_bsg.c
+@@ -2135,8 +2135,7 @@ bfad_iocmd_fcport_get_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_cb_pending_q_s cb_qe;
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp,
+-			   &fcomp, &iocmd->stats);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, &iocmd->stats);
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	iocmd->status = bfa_fcport_get_stats(&bfad->bfa, &cb_qe);
+ 	spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+@@ -2159,7 +2158,7 @@ bfad_iocmd_fcport_reset_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_cb_pending_q_s cb_qe;
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp, &fcomp, NULL);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, NULL);
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	iocmd->status = bfa_fcport_clear_stats(&bfad->bfa, &cb_qe);
+@@ -2443,8 +2442,7 @@ bfad_iocmd_qos_get_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_fcport_s *fcport = BFA_FCPORT_MOD(&bfad->bfa);
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp,
+-			   &fcomp, &iocmd->stats);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, &iocmd->stats);
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	WARN_ON(!bfa_ioc_get_fcmode(&bfad->bfa.ioc));
+@@ -2474,8 +2472,7 @@ bfad_iocmd_qos_reset_stats(struct bfad_s *bfad, void *cmd)
+ 	struct bfa_fcport_s *fcport = BFA_FCPORT_MOD(&bfad->bfa);
+ 
+ 	init_completion(&fcomp.comp);
+-	bfa_pending_q_init(&cb_qe, (bfa_cb_cbfn_t)bfad_hcb_comp,
+-			   &fcomp, NULL);
++	bfa_pending_q_init_status(&cb_qe, bfad_hcb_comp, &fcomp, NULL);
+ 
+ 	spin_lock_irqsave(&bfad->bfad_lock, flags);
+ 	WARN_ON(!bfa_ioc_get_fcmode(&bfad->bfa.ioc));
+diff --git a/drivers/scsi/csiostor/csio_defs.h b/drivers/scsi/csiostor/csio_defs.h
+index c38017b4af982..e50e93e7fe5a1 100644
+--- a/drivers/scsi/csiostor/csio_defs.h
++++ b/drivers/scsi/csiostor/csio_defs.h
+@@ -73,7 +73,21 @@ csio_list_deleted(struct list_head *list)
+ #define csio_list_prev(elem)	(((struct list_head *)(elem))->prev)
+ 
+ /* State machine */
+-typedef void (*csio_sm_state_t)(void *, uint32_t);
++struct csio_lnode;
++
++/* State machine events */
++enum csio_ln_ev {
++	CSIO_LNE_NONE = (uint32_t)0,
++	CSIO_LNE_LINKUP,
++	CSIO_LNE_FAB_INIT_DONE,
++	CSIO_LNE_LINK_DOWN,
++	CSIO_LNE_DOWN_LINK,
++	CSIO_LNE_LOGO,
++	CSIO_LNE_CLOSE,
++	CSIO_LNE_MAX_EVENT,
++};
++
++typedef void (*csio_sm_state_t)(struct csio_lnode *ln, enum csio_ln_ev evt);
+ 
+ struct csio_sm {
+ 	struct list_head	sm_list;
+@@ -83,7 +97,7 @@ struct csio_sm {
+ static inline void
+ csio_set_state(void *smp, void *state)
+ {
+-	((struct csio_sm *)smp)->sm_state = (csio_sm_state_t)state;
++	((struct csio_sm *)smp)->sm_state = state;
+ }
+ 
+ static inline void
+diff --git a/drivers/scsi/csiostor/csio_lnode.c b/drivers/scsi/csiostor/csio_lnode.c
+index d5ac938970232..5b3ffefae476d 100644
+--- a/drivers/scsi/csiostor/csio_lnode.c
++++ b/drivers/scsi/csiostor/csio_lnode.c
+@@ -1095,7 +1095,7 @@ csio_handle_link_down(struct csio_hw *hw, uint8_t portid, uint32_t fcfi,
+ int
+ csio_is_lnode_ready(struct csio_lnode *ln)
+ {
+-	return (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_ready));
++	return (csio_get_state(ln) == csio_lns_ready);
+ }
+ 
+ /*****************************************************************************/
+@@ -1366,15 +1366,15 @@ csio_free_fcfinfo(struct kref *kref)
+ void
+ csio_lnode_state_to_str(struct csio_lnode *ln, int8_t *str)
+ {
+-	if (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_uninit)) {
++	if (csio_get_state(ln) == csio_lns_uninit) {
+ 		strcpy(str, "UNINIT");
+ 		return;
+ 	}
+-	if (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_ready)) {
++	if (csio_get_state(ln) == csio_lns_ready) {
+ 		strcpy(str, "READY");
+ 		return;
+ 	}
+-	if (csio_get_state(ln) == ((csio_sm_state_t)csio_lns_offline)) {
++	if (csio_get_state(ln) == csio_lns_offline) {
+ 		strcpy(str, "OFFLINE");
+ 		return;
+ 	}
+diff --git a/drivers/scsi/csiostor/csio_lnode.h b/drivers/scsi/csiostor/csio_lnode.h
+index 372a67d122d38..607698a0f0631 100644
+--- a/drivers/scsi/csiostor/csio_lnode.h
++++ b/drivers/scsi/csiostor/csio_lnode.h
+@@ -53,19 +53,6 @@
+ extern int csio_fcoe_rnodes;
+ extern int csio_fdmi_enable;
+ 
+-/* State machine evets */
+-enum csio_ln_ev {
+-	CSIO_LNE_NONE = (uint32_t)0,
+-	CSIO_LNE_LINKUP,
+-	CSIO_LNE_FAB_INIT_DONE,
+-	CSIO_LNE_LINK_DOWN,
+-	CSIO_LNE_DOWN_LINK,
+-	CSIO_LNE_LOGO,
+-	CSIO_LNE_CLOSE,
+-	CSIO_LNE_MAX_EVENT,
+-};
+-
+-
+ struct csio_fcf_info {
+ 	struct list_head	list;
+ 	uint8_t			priority;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index bbb7b2d9ffcfb..1abc62b07d24c 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1962,9 +1962,17 @@ static bool hisi_sas_internal_abort_timeout(struct sas_task *task,
+ 	struct hisi_sas_internal_abort_data *timeout = data;
+ 
+ 	if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct) {
+-		down(&hisi_hba->sem);
++		/*
++		 * If the timeout occurs in a device-gone scenario, skip
++		 * taking the semaphore to avoid a circular dependency like:
++		 * hisi_sas_dev_gone() -> down() -> ... ->
++		 * hisi_sas_internal_abort_timeout() -> down().
++		 */
++		if (!timeout->rst_ha_timeout)
++			down(&hisi_hba->sem);
+ 		hisi_hba->hw->debugfs_snapshot_regs(hisi_hba);
+-		up(&hisi_hba->sem);
++		if (!timeout->rst_ha_timeout)
++			up(&hisi_hba->sem);
+ 	}
+ 
+ 	if (task->task_state_flags & SAS_TASK_STATE_DONE) {
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index a75f670bf5519..aa29e250cf15f 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -7387,7 +7387,9 @@ _base_wait_for_iocstate(struct MPT3SAS_ADAPTER *ioc, int timeout)
+ 		return -EFAULT;
+ 	}
+ 
+- issue_diag_reset:
++	return 0;
++
++issue_diag_reset:
+ 	rc = _base_diag_reset(ioc);
+ 	return rc;
+ }
+diff --git a/drivers/soc/fsl/dpio/dpio-service.c b/drivers/soc/fsl/dpio/dpio-service.c
+index 1d2b27e3ea63f..b811446e0fa55 100644
+--- a/drivers/soc/fsl/dpio/dpio-service.c
++++ b/drivers/soc/fsl/dpio/dpio-service.c
+@@ -523,7 +523,7 @@ int dpaa2_io_service_enqueue_multiple_desc_fq(struct dpaa2_io *d,
+ 	struct qbman_eq_desc *ed;
+ 	int i, ret;
+ 
+-	ed = kcalloc(sizeof(struct qbman_eq_desc), 32, GFP_KERNEL);
++	ed = kcalloc(32, sizeof(struct qbman_eq_desc), GFP_KERNEL);
+ 	if (!ed)
+ 		return -ENOMEM;
+ 
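+
[Editor's note] kcalloc(n, size, flags) takes the element count first and the element size second; the fix above swaps the two arguments into that order. The allocation size is the same either way, but the count/size split is what feeds kcalloc's overflow checking. Sketch with a stand-in element type:

	#include <linux/slab.h>
	#include <linux/types.h>

	struct eq_desc { u8 raw[64]; };	/* stand-in for qbman_eq_desc */

	static struct eq_desc *alloc_descs(void)
	{
		/* 32 zeroed elements, overflow-checked multiplication */
		return kcalloc(32, sizeof(struct eq_desc), GFP_KERNEL);
	}
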
+diff --git a/drivers/soc/microchip/Kconfig b/drivers/soc/microchip/Kconfig
+index eb656b33156ba..f19e74d342aa2 100644
+--- a/drivers/soc/microchip/Kconfig
++++ b/drivers/soc/microchip/Kconfig
+@@ -1,5 +1,5 @@
+ config POLARFIRE_SOC_SYS_CTRL
+-	tristate "POLARFIRE_SOC_SYS_CTRL"
++	tristate "Microchip PolarFire SoC (MPFS) system controller support"
+ 	depends on POLARFIRE_SOC_MAILBOX
+ 	help
+ 	  This driver adds support for the PolarFire SoC (MPFS) system controller.
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 57d47dcf11b92..1b7e921d45a11 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -766,6 +766,8 @@ static int llcc_update_act_ctrl(u32 sid,
+ 	ret = regmap_read_poll_timeout(drv_data->bcast_regmap, status_reg,
+ 				      slice_status, !(slice_status & status),
+ 				      0, LLCC_STATUS_READ_DELAY);
++	if (ret)
++		return ret;
+ 
+ 	if (drv_data->version >= LLCC_VERSION_4_1_0_0)
+ 		ret = regmap_write(drv_data->bcast_regmap, act_clear_reg,
+diff --git a/drivers/soc/qcom/pmic_glink_altmode.c b/drivers/soc/qcom/pmic_glink_altmode.c
+index 7ee52cf2570fa..ca58bfa41846c 100644
+--- a/drivers/soc/qcom/pmic_glink_altmode.c
++++ b/drivers/soc/qcom/pmic_glink_altmode.c
+@@ -469,12 +469,6 @@ static int pmic_glink_altmode_probe(struct auxiliary_device *adev,
+ 		alt_port->bridge.ops = DRM_BRIDGE_OP_HPD;
+ 		alt_port->bridge.type = DRM_MODE_CONNECTOR_DisplayPort;
+ 
+-		ret = devm_drm_bridge_add(dev, &alt_port->bridge);
+-		if (ret) {
+-			fwnode_handle_put(fwnode);
+-			return ret;
+-		}
+-
+ 		alt_port->dp_alt.svid = USB_TYPEC_DP_SID;
+ 		alt_port->dp_alt.mode = USB_TYPEC_DP_MODE;
+ 		alt_port->dp_alt.active = 1;
+@@ -525,6 +519,16 @@ static int pmic_glink_altmode_probe(struct auxiliary_device *adev,
+ 		}
+ 	}
+ 
++	for (port = 0; port < ARRAY_SIZE(altmode->ports); port++) {
++		alt_port = &altmode->ports[port];
++		if (!alt_port->altmode)
++			continue;
++
++		ret = devm_drm_bridge_add(dev, &alt_port->bridge);
++		if (ret)
++			return ret;
++	}
++
+ 	altmode->client = devm_pmic_glink_register_client(dev,
+ 							  altmode->owner_id,
+ 							  pmic_glink_altmode_callback,
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index 51e05bec5bfce..9865964cf6dbf 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -114,7 +114,7 @@ static const char *const pmic_models[] = {
+ 	[50] = "PM8350B",
+ 	[51] = "PMR735A",
+ 	[52] = "PMR735B",
+-	[55] = "PM2250",
++	[55] = "PM4125",
+ 	[58] = "PM8450",
+ 	[65] = "PM8010",
+ 	[69] = "PM8550VS",
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 731775d34d393..1a8d03958dffb 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1927,7 +1927,7 @@ static void cqspi_remove(struct platform_device *pdev)
+ 	pm_runtime_disable(&pdev->dev);
+ }
+ 
+-static int cqspi_suspend(struct device *dev)
++static int cqspi_runtime_suspend(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
+ 
+@@ -1936,7 +1936,7 @@ static int cqspi_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int cqspi_resume(struct device *dev)
++static int cqspi_runtime_resume(struct device *dev)
+ {
+ 	struct cqspi_st *cqspi = dev_get_drvdata(dev);
+ 
+@@ -1949,8 +1949,24 @@ static int cqspi_resume(struct device *dev)
+ 	return 0;
+ }
+ 
+-static DEFINE_RUNTIME_DEV_PM_OPS(cqspi_dev_pm_ops, cqspi_suspend,
+-				 cqspi_resume, NULL);
++static int cqspi_suspend(struct device *dev)
++{
++	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++
++	return spi_controller_suspend(cqspi->host);
++}
++
++static int cqspi_resume(struct device *dev)
++{
++	struct cqspi_st *cqspi = dev_get_drvdata(dev);
++
++	return spi_controller_resume(cqspi->host);
++}
++
++static const struct dev_pm_ops cqspi_dev_pm_ops = {
++	RUNTIME_PM_OPS(cqspi_runtime_suspend, cqspi_runtime_resume, NULL)
++	SYSTEM_SLEEP_PM_OPS(cqspi_suspend, cqspi_resume)
++};
+ 
+ static const struct cqspi_driver_platdata cdns_qspi = {
+ 	.quirks = CQSPI_DISABLE_DAC_MODE,
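+
[Editor's note] The cqspi rework above keeps the clock-gating callbacks on the runtime PM side and routes system sleep through the SPI core, so in-flight messages are quiesced before the controller loses state. The resulting dev_pm_ops shape, as a skeleton (hypothetical names, bodies elided):

	#include <linux/pm.h>

	static int foo_runtime_suspend(struct device *dev) { /* gate clocks */ return 0; }
	static int foo_runtime_resume(struct device *dev)  { /* ungate clocks */ return 0; }
	static int foo_suspend(struct device *dev) { /* spi_controller_suspend(...) */ return 0; }
	static int foo_resume(struct device *dev)  { /* spi_controller_resume(...) */ return 0; }

	static const struct dev_pm_ops foo_pm_ops = {
		RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
		SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
	};
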
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 11991eb126364..079035db7dd85 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -830,11 +830,11 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ 
+ 	is_target = of_property_read_bool((&pdev->dev)->of_node, "spi-slave");
+ 	if (is_target)
+-		controller = spi_alloc_target(&pdev->dev,
+-					      sizeof(struct fsl_lpspi_data));
++		controller = devm_spi_alloc_target(&pdev->dev,
++						   sizeof(struct fsl_lpspi_data));
+ 	else
+-		controller = spi_alloc_host(&pdev->dev,
+-					    sizeof(struct fsl_lpspi_data));
++		controller = devm_spi_alloc_host(&pdev->dev,
++						 sizeof(struct fsl_lpspi_data));
+ 
+ 	if (!controller)
+ 		return -ENOMEM;
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index e2d3e3ec13789..0e479c5406217 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -668,8 +668,8 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+ 				ctrl |= (MX51_ECSPI_CTRL_MAX_BURST * BITS_PER_BYTE - 1)
+ 						<< MX51_ECSPI_CTRL_BL_OFFSET;
+ 			else
+-				ctrl |= spi_imx->count / DIV_ROUND_UP(spi_imx->bits_per_word,
+-						BITS_PER_BYTE) * spi_imx->bits_per_word
++				ctrl |= (spi_imx->count / DIV_ROUND_UP(spi_imx->bits_per_word,
++						BITS_PER_BYTE) * spi_imx->bits_per_word - 1)
+ 						<< MX51_ECSPI_CTRL_BL_OFFSET;
+ 		}
+ 	}
+diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c
+index 07d20ca1164c3..4337ca51d7aa2 100644
+--- a/drivers/spi/spi-intel-pci.c
++++ b/drivers/spi/spi-intel-pci.c
+@@ -85,6 +85,7 @@ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0xa2a4), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa324), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa3a4), (unsigned long)&cnl_info },
++	{ PCI_VDEVICE(INTEL, 0xa823), (unsigned long)&cnl_info },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(pci, intel_spi_pci_ids);
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 8d5d170d49cc4..109dac2e69df2 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -787,17 +787,19 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
+ 		mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len);
+ 		mtk_spi_setup_packet(host);
+ 
+-		cnt = mdata->xfer_len / 4;
+-		iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
+-				trans->tx_buf + mdata->num_xfered, cnt);
++		if (trans->tx_buf) {
++			cnt = mdata->xfer_len / 4;
++			iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
++					trans->tx_buf + mdata->num_xfered, cnt);
+ 
+-		remainder = mdata->xfer_len % 4;
+-		if (remainder > 0) {
+-			reg_val = 0;
+-			memcpy(&reg_val,
+-				trans->tx_buf + (cnt * 4) + mdata->num_xfered,
+-				remainder);
+-			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++			remainder = mdata->xfer_len % 4;
++			if (remainder > 0) {
++				reg_val = 0;
++				memcpy(&reg_val,
++					trans->tx_buf + (cnt * 4) + mdata->num_xfered,
++					remainder);
++				writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++			}
+ 		}
+ 
+ 		mtk_spi_enable_transfer(host);
+diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
+index 87d36948c6106..c6bd86a5335ab 100644
+--- a/drivers/staging/greybus/light.c
++++ b/drivers/staging/greybus/light.c
+@@ -100,15 +100,15 @@ static struct led_classdev *get_channel_cdev(struct gb_channel *channel)
+ static struct gb_channel *get_channel_from_mode(struct gb_light *light,
+ 						u32 mode)
+ {
+-	struct gb_channel *channel = NULL;
++	struct gb_channel *channel;
+ 	int i;
+ 
+ 	for (i = 0; i < light->channels_count; i++) {
+ 		channel = &light->channels[i];
+-		if (channel && channel->mode == mode)
+-			break;
++		if (channel->mode == mode)
++			return channel;
+ 	}
+-	return channel;
++	return NULL;
+ }
+ 
+ static int __gb_lights_flash_intensity_set(struct gb_channel *channel,
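+
[Editor's note] The greybus rewrite above returns the match from inside the loop and NULL otherwise; previously the last channel in the array escaped the loop and was returned as a false positive. Reduced form (stand-in struct, hypothetical helper):

	#include <linux/types.h>

	struct channel { u32 mode; };	/* stand-in for gb_channel */

	static struct channel *find_by_mode(struct channel *chs, int n, u32 mode)
	{
		int i;

		for (i = 0; i < n; i++)
			if (chs[i].mode == mode)
				return &chs[i];

		return NULL;	/* no match; caller must check */
	}
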
+diff --git a/drivers/staging/media/imx/imx-media-csc-scaler.c b/drivers/staging/media/imx/imx-media-csc-scaler.c
+index 1fd39a2fca98a..95cca281e8a37 100644
+--- a/drivers/staging/media/imx/imx-media-csc-scaler.c
++++ b/drivers/staging/media/imx/imx-media-csc-scaler.c
+@@ -803,6 +803,7 @@ static int ipu_csc_scaler_release(struct file *file)
+ 
+ 	dev_dbg(priv->dev, "Releasing instance %p\n", ctx);
+ 
++	v4l2_ctrl_handler_free(&ctx->ctrl_hdlr);
+ 	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+ 	v4l2_fh_del(&ctx->fh);
+ 	v4l2_fh_exit(&ctx->fh);
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index fc9297232456f..16c822637dc6e 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -427,11 +427,11 @@ static int cedrus_h265_setup(struct cedrus_ctx *ctx, struct cedrus_run *run)
+ 	unsigned int ctb_addr_x, ctb_addr_y;
+ 	struct cedrus_buffer *cedrus_buf;
+ 	dma_addr_t src_buf_addr;
+-	dma_addr_t src_buf_end_addr;
+ 	u32 chroma_log2_weight_denom;
+ 	u32 num_entry_point_offsets;
+ 	u32 output_pic_list_index;
+ 	u32 pic_order_cnt[2];
++	size_t slice_bytes;
+ 	u8 padding;
+ 	int count;
+ 	u32 reg;
+@@ -443,6 +443,7 @@ static int cedrus_h265_setup(struct cedrus_ctx *ctx, struct cedrus_run *run)
+ 	pred_weight_table = &slice_params->pred_weight_table;
+ 	num_entry_point_offsets = slice_params->num_entry_point_offsets;
+ 	cedrus_buf = vb2_to_cedrus_buffer(&run->dst->vb2_buf);
++	slice_bytes = vb2_get_plane_payload(&run->src->vb2_buf, 0);
+ 
+ 	/*
+ 	 * If entry points offsets are present, we should get them
+@@ -490,7 +491,7 @@ static int cedrus_h265_setup(struct cedrus_ctx *ctx, struct cedrus_run *run)
+ 
+ 	cedrus_write(dev, VE_DEC_H265_BITS_OFFSET, 0);
+ 
+-	reg = slice_params->bit_size;
++	reg = slice_bytes * 8;
+ 	cedrus_write(dev, VE_DEC_H265_BITS_LEN, reg);
+ 
+ 	/* Source beginning and end addresses. */
+@@ -504,10 +505,7 @@ static int cedrus_h265_setup(struct cedrus_ctx *ctx, struct cedrus_run *run)
+ 
+ 	cedrus_write(dev, VE_DEC_H265_BITS_ADDR, reg);
+ 
+-	src_buf_end_addr = src_buf_addr +
+-			   DIV_ROUND_UP(slice_params->bit_size, 8);
+-
+-	reg = VE_DEC_H265_BITS_END_ADDR_BASE(src_buf_end_addr);
++	reg = VE_DEC_H265_BITS_END_ADDR_BASE(src_buf_addr + slice_bytes);
+ 	cedrus_write(dev, VE_DEC_H265_BITS_END_ADDR, reg);
+ 
+ 	/* Coding tree block address */
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 98d9c80bd4c62..fd4bd650c77a6 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -719,8 +719,10 @@ static int lvts_calibration_read(struct device *dev, struct lvts_domain *lvts_td
+ 
+ 		lvts_td->calib = devm_krealloc(dev, lvts_td->calib,
+ 					       lvts_td->calib_len + len, GFP_KERNEL);
+-		if (!lvts_td->calib)
++		if (!lvts_td->calib) {
++			kfree(efuse);
+ 			return -ENOMEM;
++		}
+ 
+ 		memcpy(lvts_td->calib + lvts_td->calib_len, efuse, len);
+ 
+diff --git a/drivers/thermal/qoriq_thermal.c b/drivers/thermal/qoriq_thermal.c
+index ccc2eea7f9f54..404f01cca4dab 100644
+--- a/drivers/thermal/qoriq_thermal.c
++++ b/drivers/thermal/qoriq_thermal.c
+@@ -57,6 +57,9 @@
+ #define REGS_TTRnCR(n)	(0xf10 + 4 * (n)) /* Temperature Range n
+ 					   * Control Register
+ 					   */
++#define NUM_TTRCR_V1	4
++#define NUM_TTRCR_MAX	16
++
+ #define REGS_IPBRR(n)		(0xbf8 + 4 * (n)) /* IP Block Revision
+ 						   * Register n
+ 						   */
+@@ -71,6 +74,7 @@ struct qoriq_sensor {
+ 
+ struct qoriq_tmu_data {
+ 	int ver;
++	u32 ttrcr[NUM_TTRCR_MAX];
+ 	struct regmap *regmap;
+ 	struct clk *clk;
+ 	struct qoriq_sensor	sensor[SITES_MAX];
+@@ -182,17 +186,17 @@ static int qoriq_tmu_calibration(struct device *dev,
+ 				 struct qoriq_tmu_data *data)
+ {
+ 	int i, val, len;
+-	u32 range[4];
+ 	const u32 *calibration;
+ 	struct device_node *np = dev->of_node;
+ 
+ 	len = of_property_count_u32_elems(np, "fsl,tmu-range");
+-	if (len < 0 || len > 4) {
++	if (len < 0 || (data->ver == TMU_VER1 && len > NUM_TTRCR_V1) ||
++	    (data->ver > TMU_VER1 && len > NUM_TTRCR_MAX)) {
+ 		dev_err(dev, "invalid range data.\n");
+ 		return len;
+ 	}
+ 
+-	val = of_property_read_u32_array(np, "fsl,tmu-range", range, len);
++	val = of_property_read_u32_array(np, "fsl,tmu-range", data->ttrcr, len);
+ 	if (val != 0) {
+ 		dev_err(dev, "failed to read range data.\n");
+ 		return val;
+@@ -200,7 +204,7 @@ static int qoriq_tmu_calibration(struct device *dev,
+ 
+ 	/* Init temperature range registers */
+ 	for (i = 0; i < len; i++)
+-		regmap_write(data->regmap, REGS_TTRnCR(i), range[i]);
++		regmap_write(data->regmap, REGS_TTRnCR(i), data->ttrcr[i]);
+ 
+ 	calibration = of_get_property(np, "fsl,tmu-calibration", &len);
+ 	if (calibration == NULL || len % 8) {
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 23366f868ae3a..dab94835b6f5f 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -753,6 +753,7 @@ static void exar_pci_remove(struct pci_dev *pcidev)
+ 	for (i = 0; i < priv->nr; i++)
+ 		serial8250_unregister_port(priv->line[i]);
+ 
++	/* Ensure that every init quirk is properly torn down */
+ 	if (priv->board->exit)
+ 		priv->board->exit(pcidev);
+ }
+@@ -767,10 +768,6 @@ static int __maybe_unused exar_suspend(struct device *dev)
+ 		if (priv->line[i] >= 0)
+ 			serial8250_suspend_port(priv->line[i]);
+ 
+-	/* Ensure that every init quirk is properly torn down */
+-	if (priv->board->exit)
+-		priv->board->exit(pcidev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 07f7b48bad13d..5aff179bf2978 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1461,7 +1461,7 @@ static int max310x_probe(struct device *dev, const struct max310x_devtype *devty
+ 	if (!ret)
+ 		return 0;
+ 
+-	dev_err(dev, "Unable to reguest IRQ %i\n", irq);
++	dev_err(dev, "Unable to request IRQ %i\n", irq);
+ 
+ out_uart:
+ 	for (i = 0; i < devtype->nr; i++) {
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index 3bd552841cd28..06d140c9f56cf 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -987,11 +987,10 @@ static unsigned int s3c24xx_serial_tx_empty(struct uart_port *port)
+ 		if ((ufstat & info->tx_fifomask) != 0 ||
+ 		    (ufstat & info->tx_fifofull))
+ 			return 0;
+-
+-		return 1;
++		return TIOCSER_TEMT;
+ 	}
+ 
+-	return s3c24xx_serial_txempty_nofifo(port);
++	return s3c24xx_serial_txempty_nofifo(port) ? TIOCSER_TEMT : 0;
+ }
+ 
+ /* no modem control lines */
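
The serial core expects ->tx_empty() to report an idle transmitter as TIOCSER_TEMT rather than a bare 1, which is what this hunk corrects. A tiny illustrative sketch of that return convention; the helper and its argument are invented for the example:

#include <stdio.h>

#define TIOCSER_TEMT 0x01	/* transmitter empty; same value as the kernel's */

/* Illustrative tx_empty(): map a boolean "busy" state onto the expected
 * TIOCSER_TEMT / 0 return convention. */
static unsigned int tx_empty(int fifo_busy)
{
	return fifo_busy ? 0 : TIOCSER_TEMT;
}

int main(void)
{
	printf("busy: %u, idle: %u\n", tx_empty(1), tx_empty(0));
	return 0;
}
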
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 156efda7c80d6..6617d3a8e84c9 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -2469,7 +2469,7 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ 		}
+ 		return;
+ 	case EScsiignore:
+-		if (c >= 20 && c <= 0x3f)
++		if (c >= 0x20 && c <= 0x3f)
+ 			return;
+ 		vc->vc_state = ESnormal;
+ 		return;
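
The bug here is a decimal/hex mix-up: ECMA-48 intermediate and parameter bytes span 0x20-0x3f, but the test compared against decimal 20, so bytes 0x14-0x1f were wrongly swallowed while a malformed CSI sequence was being ignored. A standalone sketch of the corrected range check:

#include <stdio.h>

/* Should byte 'c' be swallowed while ignoring a CSI sequence?
 * ECMA-48 intermediate/parameter bytes occupy 0x20..0x3f. */
static int csi_ignore_consumes(unsigned char c)
{
	return c >= 0x20 && c <= 0x3f;
}

int main(void)
{
	/* 0x1c (decimal 28) was wrongly consumed by the old 'c >= 20' test. */
	printf("0x1c: %d, '0' (0x30): %d, '@' (0x40): %d\n",
	       csi_ignore_consumes(0x1c), csi_ignore_consumes(0x30),
	       csi_ignore_consumes(0x40));
	return 0;
}
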
+diff --git a/drivers/usb/gadget/udc/net2272.c b/drivers/usb/gadget/udc/net2272.c
+index 12e76bb62c209..19bbc38f3d35d 100644
+--- a/drivers/usb/gadget/udc/net2272.c
++++ b/drivers/usb/gadget/udc/net2272.c
+@@ -2650,7 +2650,7 @@ net2272_plat_probe(struct platform_device *pdev)
+ 		goto err_req;
+ 	}
+ 
+-	ret = net2272_probe_fin(dev, IRQF_TRIGGER_LOW);
++	ret = net2272_probe_fin(dev, irqflags);
+ 	if (ret)
+ 		goto err_io;
+ 
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index 770081b828a42..b855d291dfe6b 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -268,6 +268,13 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop)
+ 		return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
+ 				     "could not get vbus regulator\n");
+ 
++	nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus");
++	if (PTR_ERR(nop->vbus_draw) == -ENODEV)
++		nop->vbus_draw = NULL;
++	if (IS_ERR(nop->vbus_draw))
++		return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
++				     "could not get vbus regulator\n");
++
+ 	nop->dev		= dev;
+ 	nop->phy.dev		= nop->dev;
+ 	nop->phy.label		= "nop-xceiv";
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 26ba7da6b4106..7795d2b7fcd1c 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -145,8 +145,6 @@ static void teardown_driver(struct mlx5_vdpa_net *ndev);
+ 
+ static bool mlx5_vdpa_debug;
+ 
+-#define MLX5_CVQ_MAX_ENT 16
+-
+ #define MLX5_LOG_VIO_FLAG(_feature)                                                                \
+ 	do {                                                                                       \
+ 		if (features & BIT_ULL(_feature))                                                  \
+@@ -2147,9 +2145,16 @@ static void mlx5_vdpa_set_vq_num(struct vdpa_device *vdev, u16 idx, u32 num)
+ 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+ 	struct mlx5_vdpa_virtqueue *mvq;
+ 
+-	if (!is_index_valid(mvdev, idx) || is_ctrl_vq_idx(mvdev, idx))
++	if (!is_index_valid(mvdev, idx))
+ 		return;
+ 
++	if (is_ctrl_vq_idx(mvdev, idx)) {
++		struct mlx5_control_vq *cvq = &mvdev->cvq;
++
++		cvq->vring.vring.num = num;
++		return;
++	}
++
+ 	mvq = &ndev->vqs[idx];
+ 	mvq->num_ent = num;
+ }
+@@ -2819,7 +2824,7 @@ static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
+ 		u16 idx = cvq->vring.last_avail_idx;
+ 
+ 		err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features,
+-					MLX5_CVQ_MAX_ENT, false,
++					cvq->vring.vring.num, false,
+ 					(struct vring_desc *)(uintptr_t)cvq->desc_addr,
+ 					(struct vring_avail *)(uintptr_t)cvq->driver_addr,
+ 					(struct vring_used *)(uintptr_t)cvq->device_addr);
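
The vdpa fix replaces the hard-coded MLX5_CVQ_MAX_ENT with the size the guest driver actually requested: set_vq_num() now records it and the control-vring setup consumes it. A simplified record-then-use sketch, with invented struct names:

#include <stdio.h>

/* Hypothetical stand-in for the driver's control-VQ state. */
struct cvq_state {
	unsigned int num;	/* entries requested by the guest driver */
};

static void set_vq_num(struct cvq_state *cvq, unsigned int num)
{
	cvq->num = num;		/* remember it instead of a fixed maximum */
}

static void setup_cvq_vring(const struct cvq_state *cvq)
{
	/* the real code passes cvq->vring.vring.num to vringh_init_iotlb() */
	printf("init control vring with %u entries\n", cvq->num);
}

int main(void)
{
	struct cvq_state cvq = { 0 };

	set_vq_num(&cvq, 64);
	setup_cvq_vring(&cvq);
	return 0;
}
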
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index be2925d0d2836..18584ce70bf07 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -160,7 +160,7 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim, u32 flags)
+ 		}
+ 	}
+ 
+-	vdpasim->running = true;
++	vdpasim->running = false;
+ 	spin_unlock(&vdpasim->iommu_lock);
+ 
+ 	vdpasim->features = 0;
+@@ -483,6 +483,7 @@ static void vdpasim_set_status(struct vdpa_device *vdpa, u8 status)
+ 
+ 	mutex_lock(&vdpasim->mutex);
+ 	vdpasim->status = status;
++	vdpasim->running = (status & VIRTIO_CONFIG_S_DRIVER_OK) != 0;
+ 	mutex_unlock(&vdpasim->mutex);
+ }
+ 
+diff --git a/drivers/video/backlight/da9052_bl.c b/drivers/video/backlight/da9052_bl.c
+index 1cdc8543310b4..b8ff7046510eb 100644
+--- a/drivers/video/backlight/da9052_bl.c
++++ b/drivers/video/backlight/da9052_bl.c
+@@ -117,6 +117,7 @@ static int da9052_backlight_probe(struct platform_device *pdev)
+ 	wleds->led_reg = platform_get_device_id(pdev)->driver_data;
+ 	wleds->state = DA9052_WLEDS_OFF;
+ 
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_RAW;
+ 	props.max_brightness = DA9052_MAX_BRIGHTNESS;
+ 
+diff --git a/drivers/video/backlight/ktz8866.c b/drivers/video/backlight/ktz8866.c
+index 9c980f2571ee3..014877b5a9848 100644
+--- a/drivers/video/backlight/ktz8866.c
++++ b/drivers/video/backlight/ktz8866.c
+@@ -97,20 +97,20 @@ static void ktz8866_init(struct ktz8866 *ktz)
+ {
+ 	unsigned int val = 0;
+ 
+-	if (of_property_read_u32(ktz->client->dev.of_node, "current-num-sinks", &val))
++	if (!of_property_read_u32(ktz->client->dev.of_node, "current-num-sinks", &val))
+ 		ktz8866_write(ktz, BL_EN, BIT(val) - 1);
+ 	else
+ 		/* Enable all 6 current sinks if the number of current sinks isn't specified. */
+ 		ktz8866_write(ktz, BL_EN, BIT(6) - 1);
+ 
+-	if (of_property_read_u32(ktz->client->dev.of_node, "kinetic,current-ramp-delay-ms", &val)) {
++	if (!of_property_read_u32(ktz->client->dev.of_node, "kinetic,current-ramp-delay-ms", &val)) {
+ 		if (val <= 128)
+ 			ktz8866_write(ktz, BL_CFG2, BIT(7) | (ilog2(val) << 3) | PWM_HYST);
+ 		else
+ 			ktz8866_write(ktz, BL_CFG2, BIT(7) | ((5 + val / 64) << 3) | PWM_HYST);
+ 	}
+ 
+-	if (of_property_read_u32(ktz->client->dev.of_node, "kinetic,led-enable-ramp-delay-ms", &val)) {
++	if (!of_property_read_u32(ktz->client->dev.of_node, "kinetic,led-enable-ramp-delay-ms", &val)) {
+ 		if (val == 0)
+ 			ktz8866_write(ktz, BL_DIMMING, 0);
+ 		else {
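
All three ktz8866 checks were inverted: of_property_read_u32() returns 0 on success, so the unnegated form took the DT branch exactly when the property was absent. A standalone sketch of the corrected pattern, with read_u32() standing in for the OF helper:

#include <stdio.h>

/* Stand-in for of_property_read_u32(): 0 on success, negative if absent. */
static int read_u32(int present, unsigned int *val)
{
	if (!present)
		return -22;	/* -EINVAL-style failure */
	*val = 3;
	return 0;
}

int main(void)
{
	unsigned int val = 0;

	/* Correct pattern: '!' means "the read succeeded, use val". */
	if (!read_u32(1, &val))
		printf("use DT value %u\n", val);
	else
		printf("fall back to default\n");
	return 0;
}
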
+diff --git a/drivers/video/backlight/lm3630a_bl.c b/drivers/video/backlight/lm3630a_bl.c
+index 8fcb62be597b8..7115d7bb2a141 100644
+--- a/drivers/video/backlight/lm3630a_bl.c
++++ b/drivers/video/backlight/lm3630a_bl.c
+@@ -233,7 +233,7 @@ static int lm3630a_bank_a_get_brightness(struct backlight_device *bl)
+ 		if (rval < 0)
+ 			goto out_i2c_err;
+ 		brightness |= rval;
+-		goto out;
++		return brightness;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -244,11 +244,8 @@ static int lm3630a_bank_a_get_brightness(struct backlight_device *bl)
+ 	rval = lm3630a_read(pchip, REG_BRT_A);
+ 	if (rval < 0)
+ 		goto out_i2c_err;
+-	brightness = rval;
++	return rval;
+ 
+-out:
+-	bl->props.brightness = brightness;
+-	return bl->props.brightness;
+ out_i2c_err:
+ 	dev_err(pchip->dev, "i2c failed to access register\n");
+ 	return 0;
+@@ -310,7 +307,7 @@ static int lm3630a_bank_b_get_brightness(struct backlight_device *bl)
+ 		if (rval < 0)
+ 			goto out_i2c_err;
+ 		brightness |= rval;
+-		goto out;
++		return brightness;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -321,11 +318,8 @@ static int lm3630a_bank_b_get_brightness(struct backlight_device *bl)
+ 	rval = lm3630a_read(pchip, REG_BRT_B);
+ 	if (rval < 0)
+ 		goto out_i2c_err;
+-	brightness = rval;
++	return rval;
+ 
+-out:
+-	bl->props.brightness = brightness;
+-	return bl->props.brightness;
+ out_i2c_err:
+ 	dev_err(pchip->dev, "i2c failed to access register\n");
+ 	return 0;
+@@ -343,6 +337,7 @@ static int lm3630a_backlight_register(struct lm3630a_chip *pchip)
+ 	struct backlight_properties props;
+ 	const char *label;
+ 
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_RAW;
+ 	if (pdata->leda_ctrl != LM3630A_LEDA_DISABLE) {
+ 		props.brightness = pdata->leda_init_brt;
+diff --git a/drivers/video/backlight/lm3639_bl.c b/drivers/video/backlight/lm3639_bl.c
+index 5246c171497d6..564f62acd7211 100644
+--- a/drivers/video/backlight/lm3639_bl.c
++++ b/drivers/video/backlight/lm3639_bl.c
+@@ -338,6 +338,7 @@ static int lm3639_probe(struct i2c_client *client)
+ 	}
+ 
+ 	/* backlight */
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_RAW;
+ 	props.brightness = pdata->init_brt_led;
+ 	props.max_brightness = pdata->max_brt_led;
+diff --git a/drivers/video/backlight/lp8788_bl.c b/drivers/video/backlight/lp8788_bl.c
+index d1a14b0db265b..31f97230ee506 100644
+--- a/drivers/video/backlight/lp8788_bl.c
++++ b/drivers/video/backlight/lp8788_bl.c
+@@ -191,6 +191,7 @@ static int lp8788_backlight_register(struct lp8788_bl *bl)
+ 	int init_brt;
+ 	char *name;
+ 
++	memset(&props, 0, sizeof(struct backlight_properties));
+ 	props.type = BACKLIGHT_PLATFORM;
+ 	props.max_brightness = MAX_BRIGHTNESS;
+ 
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 49299b1f9ec74..6f7e5010a6735 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1340,7 +1340,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
+ 				sizeof(struct vring_packed_desc));
+ 	vq->packed.vring.desc[head].id = cpu_to_le16(id);
+ 
+-	if (vq->do_unmap) {
++	if (vq->use_dma_api) {
+ 		vq->packed.desc_extra[id].addr = addr;
+ 		vq->packed.desc_extra[id].len = total_sg *
+ 				sizeof(struct vring_packed_desc);
+@@ -1481,7 +1481,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 			desc[i].len = cpu_to_le32(sg->length);
+ 			desc[i].id = cpu_to_le16(id);
+ 
+-			if (unlikely(vq->do_unmap)) {
++			if (unlikely(vq->use_dma_api)) {
+ 				vq->packed.desc_extra[curr].addr = addr;
+ 				vq->packed.desc_extra[curr].len = sg->length;
+ 				vq->packed.desc_extra[curr].flags =
+@@ -1615,7 +1615,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
+ 	vq->free_head = id;
+ 	vq->vq.num_free += state->num;
+ 
+-	if (unlikely(vq->do_unmap)) {
++	if (unlikely(vq->use_dma_api)) {
+ 		curr = id;
+ 		for (i = 0; i < state->num; i++) {
+ 			vring_unmap_extra_packed(vq,
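
The virtio_ring change keys the desc_extra addr/len bookkeeping on use_dma_api instead of do_unmap: as the fix's rationale suggests, premapped buffers clear do_unmap, yet later detach and teardown paths still read the recorded metadata. A simplified sketch of the conditional bookkeeping, with invented types:

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical per-descriptor metadata, mirroring desc_extra's role. */
struct desc_extra {
	unsigned long addr;
	unsigned int len;
};

static void add_buf(struct desc_extra *extra, bool use_dma_api,
		    unsigned long addr, unsigned int len)
{
	/* Record metadata whenever the DMA API is involved; later
	 * teardown paths read it even if no unmap is required. */
	if (use_dma_api) {
		extra->addr = addr;
		extra->len = len;
	}
}

int main(void)
{
	struct desc_extra extra = { 0 };

	add_buf(&extra, true, 0x1000, 512);
	printf("recorded addr=0x%lx len=%u\n", extra.addr, extra.len);
	return 0;
}
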
+diff --git a/drivers/watchdog/starfive-wdt.c b/drivers/watchdog/starfive-wdt.c
+index 49b38ecc092dd..e4b344db38030 100644
+--- a/drivers/watchdog/starfive-wdt.c
++++ b/drivers/watchdog/starfive-wdt.c
+@@ -494,8 +494,13 @@ static int starfive_wdt_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_exit;
+ 
+-	if (!early_enable)
+-		pm_runtime_put_sync(&pdev->dev);
++	if (!early_enable) {
++		if (pm_runtime_enabled(&pdev->dev)) {
++			ret = pm_runtime_put_sync(&pdev->dev);
++			if (ret)
++				goto err_exit;
++		}
++	}
+ 
+ 	return 0;
+ 
+diff --git a/drivers/watchdog/stm32_iwdg.c b/drivers/watchdog/stm32_iwdg.c
+index d9fd50df9802c..5404e03876202 100644
+--- a/drivers/watchdog/stm32_iwdg.c
++++ b/drivers/watchdog/stm32_iwdg.c
+@@ -20,6 +20,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/watchdog.h>
+ 
++#define DEFAULT_TIMEOUT 10
++
+ /* IWDG registers */
+ #define IWDG_KR		0x00 /* Key register */
+ #define IWDG_PR		0x04 /* Prescaler Register */
+@@ -248,6 +250,7 @@ static int stm32_iwdg_probe(struct platform_device *pdev)
+ 	wdd->parent = dev;
+ 	wdd->info = &stm32_iwdg_info;
+ 	wdd->ops = &stm32_iwdg_ops;
++	wdd->timeout = DEFAULT_TIMEOUT;
+ 	wdd->min_timeout = DIV_ROUND_UP((RLR_MIN + 1) * PR_MIN, wdt->rate);
+ 	wdd->max_hw_heartbeat_ms = ((RLR_MAX + 1) * wdt->data->max_prescaler *
+ 				    1000) / wdt->rate;
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 3b9f080109d7e..27553673e46bc 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -1190,7 +1190,7 @@ int xen_pirq_from_irq(unsigned irq)
+ EXPORT_SYMBOL_GPL(xen_pirq_from_irq);
+ 
+ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip,
+-				   struct xenbus_device *dev)
++				   struct xenbus_device *dev, bool shared)
+ {
+ 	int ret = -ENOMEM;
+ 	struct irq_info *info;
+@@ -1224,7 +1224,8 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip,
+ 		 */
+ 		bind_evtchn_to_cpu(info, 0, false);
+ 	} else if (!WARN_ON(info->type != IRQT_EVTCHN)) {
+-		info->refcnt++;
++		if (shared && !WARN_ON(info->refcnt < 0))
++			info->refcnt++;
+ 	}
+ 
+ 	ret = info->irq;
+@@ -1237,13 +1238,13 @@ static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip,
+ 
+ int bind_evtchn_to_irq(evtchn_port_t evtchn)
+ {
+-	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip, NULL);
++	return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip, NULL, false);
+ }
+ EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
+ 
+ int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn)
+ {
+-	return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip, NULL);
++	return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip, NULL, false);
+ }
+ EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi);
+ 
+@@ -1295,7 +1296,8 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
+ 
+ static int bind_interdomain_evtchn_to_irq_chip(struct xenbus_device *dev,
+ 					       evtchn_port_t remote_port,
+-					       struct irq_chip *chip)
++					       struct irq_chip *chip,
++					       bool shared)
+ {
+ 	struct evtchn_bind_interdomain bind_interdomain;
+ 	int err;
+@@ -1307,14 +1309,14 @@ static int bind_interdomain_evtchn_to_irq_chip(struct xenbus_device *dev,
+ 					  &bind_interdomain);
+ 
+ 	return err ? : bind_evtchn_to_irq_chip(bind_interdomain.local_port,
+-					       chip, dev);
++					       chip, dev, shared);
+ }
+ 
+ int bind_interdomain_evtchn_to_irq_lateeoi(struct xenbus_device *dev,
+ 					   evtchn_port_t remote_port)
+ {
+ 	return bind_interdomain_evtchn_to_irq_chip(dev, remote_port,
+-						   &xen_lateeoi_chip);
++						   &xen_lateeoi_chip, false);
+ }
+ EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq_lateeoi);
+ 
+@@ -1430,7 +1432,8 @@ static int bind_evtchn_to_irqhandler_chip(evtchn_port_t evtchn,
+ {
+ 	int irq, retval;
+ 
+-	irq = bind_evtchn_to_irq_chip(evtchn, chip, NULL);
++	irq = bind_evtchn_to_irq_chip(evtchn, chip, NULL,
++				      irqflags & IRQF_SHARED);
+ 	if (irq < 0)
+ 		return irq;
+ 	retval = request_irq(irq, handler, irqflags, devname, dev_id);
+@@ -1471,7 +1474,8 @@ static int bind_interdomain_evtchn_to_irqhandler_chip(
+ {
+ 	int irq, retval;
+ 
+-	irq = bind_interdomain_evtchn_to_irq_chip(dev, remote_port, chip);
++	irq = bind_interdomain_evtchn_to_irq_chip(dev, remote_port, chip,
++						  irqflags & IRQF_SHARED);
+ 	if (irq < 0)
+ 		return irq;
+ 
+diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
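
The new 'shared' parameter means an already-bound event channel only gains a reference when the caller asked for a shared binding (IRQF_SHARED), and never when the binding is unmanaged (refcnt < 0). An illustrative reduction of that decision, with error and WARN handling from the real code elided:

#include <stdio.h>
#include <stdbool.h>

/* Rebind of an already-bound event channel: the existing IRQ is reused
 * either way, but only a shared binding takes a reference. */
static int rebind(int irq, int *refcnt, bool shared)
{
	if (shared && *refcnt >= 0)
		(*refcnt)++;
	return irq;
}

int main(void)
{
	int refcnt = 1;

	rebind(17, &refcnt, true);	/* shared: refcnt -> 2 */
	rebind(17, &refcnt, false);	/* exclusive: refcnt unchanged */
	printf("refcnt=%d\n", refcnt);
	return 0;
}
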
+index 59717628ca42b..f6a2216c2c870 100644
+--- a/drivers/xen/evtchn.c
++++ b/drivers/xen/evtchn.c
+@@ -85,6 +85,7 @@ struct user_evtchn {
+ 	struct per_user_data *user;
+ 	evtchn_port_t port;
+ 	bool enabled;
++	bool unbinding;
+ };
+ 
+ static void evtchn_free_ring(evtchn_port_t *ring)
+@@ -164,6 +165,10 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
+ 	struct per_user_data *u = evtchn->user;
+ 	unsigned int prod, cons;
+ 
++	/* Handler might be called when tearing down the IRQ. */
++	if (evtchn->unbinding)
++		return IRQ_HANDLED;
++
+ 	WARN(!evtchn->enabled,
+ 	     "Interrupt for port %u, but apparently not enabled; per-user %p\n",
+ 	     evtchn->port, u);
+@@ -421,6 +426,7 @@ static void evtchn_unbind_from_user(struct per_user_data *u,
+ 
+ 	BUG_ON(irq < 0);
+ 
++	evtchn->unbinding = true;
+ 	unbind_from_irqhandler(irq, evtchn);
+ 
+ 	del_evtchn(u, evtchn);
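
Setting evtchn->unbinding before unbind_from_irqhandler() lets a late interrupt bail out instead of touching a ring that is being torn down. A standalone sketch of the flag check, with simplified types:

#include <stdio.h>
#include <stdbool.h>

struct user_evtchn {
	bool enabled;
	bool unbinding;
};

/* Illustrative interrupt handler: returns 1 for "handled". */
static int evtchn_interrupt(struct user_evtchn *evtchn)
{
	/* Handler might still fire while the IRQ is being torn down. */
	if (evtchn->unbinding)
		return 1;
	/* ...normal ring-buffer processing would go here... */
	return 1;
}

int main(void)
{
	struct user_evtchn e = { .enabled = true };

	e.unbinding = true;	/* set before unbinding the handler */
	printf("late irq handled: %d\n", evtchn_interrupt(&e));
	return 0;
}
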
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index c097da6e9c5b2..7761f25a77f39 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -474,16 +474,6 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ 			continue;
+ 		}
+ 
+-		/* Don't expose silly rename entries to userspace. */
+-		if (nlen > 6 &&
+-		    dire->u.name[0] == '.' &&
+-		    ctx->actor != afs_lookup_filldir &&
+-		    ctx->actor != afs_lookup_one_filldir &&
+-		    memcmp(dire->u.name, ".__afs", 6) == 0) {
+-			ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent);
+-			continue;
+-		}
+-
+ 		/* found the next entry */
+ 		if (!dir_emit(ctx, dire->u.name, nlen,
+ 			      ntohl(dire->u.vnode),
+diff --git a/fs/bcachefs/btree_iter.c b/fs/bcachefs/btree_iter.c
+index da594e0067697..816ecc3375196 100644
+--- a/fs/bcachefs/btree_iter.c
++++ b/fs/bcachefs/btree_iter.c
+@@ -2094,7 +2094,9 @@ struct bkey_s_c bch2_btree_iter_peek_upto(struct btree_iter *iter, struct bpos e
+ 		 * isn't monotonically increasing before FILTER_SNAPSHOTS, and
+ 		 * that's what we check against in extents mode:
+ 		 */
+-		if (k.k->p.inode > end.inode)
++		if (unlikely(!(iter->flags & BTREE_ITER_IS_EXTENTS)
++			     ? bkey_gt(k.k->p, end)
++			     : k.k->p.inode > end.inode))
+ 			goto end;
+ 
+ 		if (iter->update_path &&
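
In non-extents mode the iterator has to stop once the whole key position exceeds 'end'; comparing only the inode number let keys with the same inode but a larger offset slip through. A small sketch of the corrected bound test, using a reduced two-field position:

#include <stdio.h>
#include <stdbool.h>

struct bpos { unsigned long inode, offset; };

static bool bpos_gt(struct bpos a, struct bpos b)
{
	return a.inode > b.inode ||
	       (a.inode == b.inode && a.offset > b.offset);
}

/* Should iteration stop at key position 'k' given upper bound 'end'? */
static bool past_end(struct bpos k, struct bpos end, bool extents_mode)
{
	return extents_mode ? k.inode > end.inode : bpos_gt(k, end);
}

int main(void)
{
	struct bpos end = { 1, 100 }, k = { 1, 200 };

	/* Same inode, larger offset: the old inode-only test missed this. */
	printf("non-extents past end: %d\n", past_end(k, end, false));
	return 0;
}
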
+diff --git a/fs/bcachefs/chardev.c b/fs/bcachefs/chardev.c
+index 4bb88aefed121..64000c8da5ee6 100644
+--- a/fs/bcachefs/chardev.c
++++ b/fs/bcachefs/chardev.c
+@@ -392,10 +392,9 @@ static long bch2_ioctl_data(struct bch_fs *c,
+ 		goto err;
+ 	}
+ 
+-	fd_install(fd, file);
+-
+ 	get_task_struct(ctx->thread);
+ 	wake_up_process(ctx->thread);
++	fd_install(fd, file);
+ 
+ 	return fd;
+ err:
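
fd_install() publishes the file to userspace, after which another thread may close the fd and drop every reference the kernel thread relies on, so taking the task reference and waking the thread must happen first. A userspace analogue of the same publish-last discipline, using a release store on a plain pointer (names are invented for the example):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx { int ready; };

/* Other threads read this; a non-NULL value means "fully initialized". */
static _Atomic(struct ctx *) published;

int main(void)
{
	struct ctx *c = malloc(sizeof(*c));

	c->ready = 1;	/* finish all setup first (get_task_struct, wake) */
	/* Publication is the last step, like fd_install() in the fix. */
	atomic_store_explicit(&published, c, memory_order_release);
	printf("published ctx, ready=%d\n",
	       atomic_load_explicit(&published, memory_order_acquire)->ready);
	free(c);
	return 0;
}
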
+diff --git a/fs/bcachefs/errcode.h b/fs/bcachefs/errcode.h
+index 9ce29681eec96..ac90faccdcb22 100644
+--- a/fs/bcachefs/errcode.h
++++ b/fs/bcachefs/errcode.h
+@@ -224,6 +224,7 @@
+ 	x(BCH_ERR_invalid,		invalid_bkey)				\
+ 	x(BCH_ERR_operation_blocked,    nocow_lock_blocked)			\
+ 	x(EIO,				btree_node_read_err)			\
++	x(EIO,				sb_not_downgraded)			\
+ 	x(BCH_ERR_btree_node_read_err,	btree_node_read_err_fixable)		\
+ 	x(BCH_ERR_btree_node_read_err,	btree_node_read_err_want_retry)		\
+ 	x(BCH_ERR_btree_node_read_err,	btree_node_read_err_must_retry)		\
+diff --git a/fs/bcachefs/mean_and_variance.h b/fs/bcachefs/mean_and_variance.h
+index 647505010b397..056e797383fb5 100644
+--- a/fs/bcachefs/mean_and_variance.h
++++ b/fs/bcachefs/mean_and_variance.h
+@@ -14,7 +14,7 @@
+  * type
+  */
+ 
+-#ifdef __SIZEOF_INT128__
++#if defined(__SIZEOF_INT128__) && defined(__KERNEL__) && !defined(CONFIG_PARISC)
+ 
+ typedef struct {
+ 	unsigned __int128 v;
+diff --git a/fs/bcachefs/super-io.c b/fs/bcachefs/super-io.c
+index 4c98d8cc2a797..c925dc5742fa4 100644
+--- a/fs/bcachefs/super-io.c
++++ b/fs/bcachefs/super-io.c
+@@ -966,6 +966,18 @@ int bch2_write_super(struct bch_fs *c)
+ 	if (!BCH_SB_INITIALIZED(c->disk_sb.sb))
+ 		goto out;
+ 
++	if (le16_to_cpu(c->disk_sb.sb->version) > bcachefs_metadata_version_current) {
++		struct printbuf buf = PRINTBUF;
++		prt_printf(&buf, "attempting to write superblock that wasn't version downgraded (");
++		bch2_version_to_text(&buf, le16_to_cpu(c->disk_sb.sb->version));
++		prt_str(&buf, " > ");
++		bch2_version_to_text(&buf, bcachefs_metadata_version_current);
++		prt_str(&buf, ")");
++		bch2_fs_fatal_error(c, "%s", buf.buf);
++		printbuf_exit(&buf);
++		return -BCH_ERR_sb_not_downgraded;
++	}
++
+ 	for_each_online_member(ca, c, i) {
+ 		__set_bit(ca->dev_idx, sb_written.d);
+ 		ca->sb_write_error = 0;
+@@ -1073,13 +1085,22 @@ bool bch2_check_version_downgrade(struct bch_fs *c)
+ 	/*
+ 	 * Downgrade, if superblock is at a higher version than currently
+ 	 * supported:
++	 *
++	 * c->sb will be checked before we write the superblock, so update it as
++	 * well:
+ 	 */
+-	if (BCH_SB_VERSION_UPGRADE_COMPLETE(c->disk_sb.sb) > bcachefs_metadata_version_current)
++	if (BCH_SB_VERSION_UPGRADE_COMPLETE(c->disk_sb.sb) > bcachefs_metadata_version_current) {
+ 		SET_BCH_SB_VERSION_UPGRADE_COMPLETE(c->disk_sb.sb, bcachefs_metadata_version_current);
+-	if (c->sb.version > bcachefs_metadata_version_current)
++		c->sb.version_upgrade_complete = bcachefs_metadata_version_current;
++	}
++	if (c->sb.version > bcachefs_metadata_version_current) {
+ 		c->disk_sb.sb->version = cpu_to_le16(bcachefs_metadata_version_current);
+-	if (c->sb.version_min > bcachefs_metadata_version_current)
++		c->sb.version = bcachefs_metadata_version_current;
++	}
++	if (c->sb.version_min > bcachefs_metadata_version_current) {
+ 		c->disk_sb.sb->version_min = cpu_to_le16(bcachefs_metadata_version_current);
++		c->sb.version_min = bcachefs_metadata_version_current;
++	}
+ 	c->disk_sb.sb->compat[0] &= cpu_to_le64((1ULL << BCH_COMPAT_NR) - 1);
+ 	return ret;
+ }
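
The new guard refuses to write out a superblock whose version is newer than the running code supports, since such a write could encode metadata an older module cannot parse. A standalone reduction of the check; the version constant here is an assumed value for illustration only:

#include <stdio.h>

#define METADATA_VERSION_CURRENT 20	/* assumed value, illustration only */

static int write_super(unsigned int sb_version)
{
	if (sb_version > METADATA_VERSION_CURRENT) {
		fprintf(stderr,
			"superblock not version downgraded (%u > %u)\n",
			sb_version, METADATA_VERSION_CURRENT);
		return -1;	/* stands in for -BCH_ERR_sb_not_downgraded */
	}
	return 0;	/* proceed with the write */
}

int main(void)
{
	printf("%d %d\n", write_super(19), write_super(21));
	return 0;
}
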
+diff --git a/fs/btrfs/block-rsv.c b/fs/btrfs/block-rsv.c
+index ceb5f586a2d55..1043a8142351b 100644
+--- a/fs/btrfs/block-rsv.c
++++ b/fs/btrfs/block-rsv.c
+@@ -494,7 +494,7 @@ struct btrfs_block_rsv *btrfs_use_block_rsv(struct btrfs_trans_handle *trans,
+ 
+ 	block_rsv = get_block_rsv(trans, root);
+ 
+-	if (unlikely(block_rsv->size == 0))
++	if (unlikely(btrfs_block_rsv_size(block_rsv) == 0))
+ 		goto try_reserve;
+ again:
+ 	ret = btrfs_block_rsv_use_bytes(block_rsv, blocksize);
+diff --git a/fs/btrfs/block-rsv.h b/fs/btrfs/block-rsv.h
+index b0bd12b8652f4..43a9a6b5a79f4 100644
+--- a/fs/btrfs/block-rsv.h
++++ b/fs/btrfs/block-rsv.h
+@@ -101,4 +101,36 @@ static inline bool btrfs_block_rsv_full(const struct btrfs_block_rsv *rsv)
+ 	return data_race(rsv->full);
+ }
+ 
++/*
++ * Get the reserved amount of a block reserve in a context where getting a
++ * stale value is acceptable, instead of accessing it directly and triggering
++ * a data race warning from KCSAN.
++ */
++static inline u64 btrfs_block_rsv_reserved(struct btrfs_block_rsv *rsv)
++{
++	u64 ret;
++
++	spin_lock(&rsv->lock);
++	ret = rsv->reserved;
++	spin_unlock(&rsv->lock);
++
++	return ret;
++}
++
++/*
++ * Get the size of a block reserve in a context where getting a stale value is
++ * acceptable, instead of accessing it directly and triggering a data race
++ * warning from KCSAN.
++ */
++static inline u64 btrfs_block_rsv_size(struct btrfs_block_rsv *rsv)
++{
++	u64 ret;
++
++	spin_lock(&rsv->lock);
++	ret = rsv->size;
++	spin_unlock(&rsv->lock);
++
++	return ret;
++}
++
+ #endif /* BTRFS_BLOCK_RSV_H */
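
Both helpers read a single field under rsv->lock, the same lock writers hold, which is what makes the access race-free and silences KCSAN. A pthread analogue of the locked-getter pattern (the kernel version uses a spinlock):

#include <pthread.h>
#include <stdio.h>

struct block_rsv {
	pthread_mutex_t lock;
	unsigned long long reserved;
	unsigned long long size;
};

/* Read a field under the same lock writers hold, so the access is never
 * a data race. */
static unsigned long long rsv_reserved(struct block_rsv *rsv)
{
	unsigned long long ret;

	pthread_mutex_lock(&rsv->lock);
	ret = rsv->reserved;
	pthread_mutex_unlock(&rsv->lock);
	return ret;
}

int main(void)
{
	struct block_rsv rsv = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.reserved = 4096,
	};

	printf("reserved=%llu\n", rsv_reserved(&rsv));
	return 0;
}
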
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 571bb13587d5e..3b54eb5834746 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -856,7 +856,7 @@ btrfs_calc_reclaim_metadata_size(struct btrfs_fs_info *fs_info,
+ static bool need_preemptive_reclaim(struct btrfs_fs_info *fs_info,
+ 				    struct btrfs_space_info *space_info)
+ {
+-	u64 global_rsv_size = fs_info->global_block_rsv.reserved;
++	const u64 global_rsv_size = btrfs_block_rsv_reserved(&fs_info->global_block_rsv);
+ 	u64 ordered, delalloc;
+ 	u64 thresh;
+ 	u64 used;
+@@ -956,8 +956,8 @@ static bool need_preemptive_reclaim(struct btrfs_fs_info *fs_info,
+ 	ordered = percpu_counter_read_positive(&fs_info->ordered_bytes) >> 1;
+ 	delalloc = percpu_counter_read_positive(&fs_info->delalloc_bytes);
+ 	if (ordered >= delalloc)
+-		used += fs_info->delayed_refs_rsv.reserved +
+-			fs_info->delayed_block_rsv.reserved;
++		used += btrfs_block_rsv_reserved(&fs_info->delayed_refs_rsv) +
++			btrfs_block_rsv_reserved(&fs_info->delayed_block_rsv);
+ 	else
+ 		used += space_info->bytes_may_use - global_rsv_size;
+ 
+@@ -1173,7 +1173,7 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+ 		enum btrfs_flush_state flush;
+ 		u64 delalloc_size = 0;
+ 		u64 to_reclaim, block_rsv_size;
+-		u64 global_rsv_size = global_rsv->reserved;
++		const u64 global_rsv_size = btrfs_block_rsv_reserved(global_rsv);
+ 
+ 		loops++;
+ 
+@@ -1185,9 +1185,9 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+ 		 * assume it's tied up in delalloc reservations.
+ 		 */
+ 		block_rsv_size = global_rsv_size +
+-			delayed_block_rsv->reserved +
+-			delayed_refs_rsv->reserved +
+-			trans_rsv->reserved;
++			btrfs_block_rsv_reserved(delayed_block_rsv) +
++			btrfs_block_rsv_reserved(delayed_refs_rsv) +
++			btrfs_block_rsv_reserved(trans_rsv);
+ 		if (block_rsv_size < space_info->bytes_may_use)
+ 			delalloc_size = space_info->bytes_may_use - block_rsv_size;
+ 
+@@ -1207,16 +1207,16 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+ 			to_reclaim = delalloc_size;
+ 			flush = FLUSH_DELALLOC;
+ 		} else if (space_info->bytes_pinned >
+-			   (delayed_block_rsv->reserved +
+-			    delayed_refs_rsv->reserved)) {
++			   (btrfs_block_rsv_reserved(delayed_block_rsv) +
++			    btrfs_block_rsv_reserved(delayed_refs_rsv))) {
+ 			to_reclaim = space_info->bytes_pinned;
+ 			flush = COMMIT_TRANS;
+-		} else if (delayed_block_rsv->reserved >
+-			   delayed_refs_rsv->reserved) {
+-			to_reclaim = delayed_block_rsv->reserved;
++		} else if (btrfs_block_rsv_reserved(delayed_block_rsv) >
++			   btrfs_block_rsv_reserved(delayed_refs_rsv)) {
++			to_reclaim = btrfs_block_rsv_reserved(delayed_block_rsv);
+ 			flush = FLUSH_DELAYED_ITEMS_NR;
+ 		} else {
+-			to_reclaim = delayed_refs_rsv->reserved;
++			to_reclaim = btrfs_block_rsv_reserved(delayed_refs_rsv);
+ 			flush = FLUSH_DELAYED_REFS_NR;
+ 		}
+ 
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 3779e76a15d64..524532f992746 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1661,6 +1661,15 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ 	}
+ 
+ out:
++	/* Reject non SINGLE data profiles without RST */
++	if ((map->type & BTRFS_BLOCK_GROUP_DATA) &&
++	    (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) &&
++	    !fs_info->stripe_root) {
++		btrfs_err(fs_info, "zoned: data %s needs raid-stripe-tree",
++			  btrfs_bg_type_to_raid_name(map->type));
++		return -EINVAL;
++	}
++
+ 	if (cache->alloc_offset > cache->zone_capacity) {
+ 		btrfs_err(fs_info,
+ "zoned: invalid write pointer %llu (larger than zone capacity %llu) in block group %llu",
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index ad1f46c66fbff..7fb4aae974124 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2156,6 +2156,30 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
+ 		      ceph_cap_string(cap->implemented),
+ 		      ceph_cap_string(revoking));
+ 
++		/* completed revocation? going down and there are no caps? */
++		if (revoking) {
++			if ((revoking & cap_used) == 0) {
++				doutc(cl, "completed revocation of %s\n",
++				      ceph_cap_string(cap->implemented & ~cap->issued));
++				goto ack;
++			}
++
++			/*
++			 * If the "i_wrbuffer_ref" was increased by mmap or generic
++			 * cache write just before the ceph_check_caps() is called,
++			 * the Fb capability revoking will fail this time. Then we
++			 * must wait for the BDI's delayed work to flush the dirty
++			 * pages and to release the "i_wrbuffer_ref", which will cost
++			 * at most 5 seconds. That means the MDS needs to wait at
++			 * most 5 seconds to finish the Fb capability's revocation.
++			 *
++			 * Let's queue a writeback for it.
++			 */
++			if (S_ISREG(inode->i_mode) && ci->i_wrbuffer_ref &&
++			    (revoking & CEPH_CAP_FILE_BUFFER))
++				queue_writeback = true;
++		}
++
+ 		if (cap == ci->i_auth_cap &&
+ 		    (cap->issued & CEPH_CAP_FILE_WR)) {
+ 			/* request larger max_size from MDS? */
+@@ -2183,30 +2207,6 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags)
+ 			}
+ 		}
+ 
+-		/* completed revocation? going down and there are no caps? */
+-		if (revoking) {
+-			if ((revoking & cap_used) == 0) {
+-				doutc(cl, "completed revocation of %s\n",
+-				      ceph_cap_string(cap->implemented & ~cap->issued));
+-				goto ack;
+-			}
+-
+-			/*
+-			 * If the "i_wrbuffer_ref" was increased by mmap or generic
+-			 * cache write just before the ceph_check_caps() is called,
+-			 * the Fb capability revoking will fail this time. Then we
+-			 * must wait for the BDI's delayed work to flush the dirty
+-			 * pages and to release the "i_wrbuffer_ref", which will cost
+-			 * at most 5 seconds. That means the MDS needs to wait at
+-			 * most 5 seconds to finish the Fb capability's revocation.
+-			 *
+-			 * Let's queue a writeback for it.
+-			 */
+-			if (S_ISREG(inode->i_mode) && ci->i_wrbuffer_ref &&
+-			    (revoking & CEPH_CAP_FILE_BUFFER))
+-				queue_writeback = true;
+-		}
+-
+ 		/* want more caps from mds? */
+ 		if (want & ~cap->mds_wanted) {
+ 			if (want & ~(cap->mds_wanted | cap->issued))
+@@ -4772,7 +4772,22 @@ int ceph_drop_caps_for_unlink(struct inode *inode)
+ 		if (__ceph_caps_dirty(ci)) {
+ 			struct ceph_mds_client *mdsc =
+ 				ceph_inode_to_fs_client(inode)->mdsc;
+-			__cap_delay_requeue_front(mdsc, ci);
++
++			doutc(mdsc->fsc->client, "%p %llx.%llx\n", inode,
++			      ceph_vinop(inode));
++			spin_lock(&mdsc->cap_unlink_delay_lock);
++			ci->i_ceph_flags |= CEPH_I_FLUSH;
++			if (!list_empty(&ci->i_cap_delay_list))
++				list_del_init(&ci->i_cap_delay_list);
++			list_add_tail(&ci->i_cap_delay_list,
++				      &mdsc->cap_unlink_delay_list);
++			spin_unlock(&mdsc->cap_unlink_delay_lock);
++
++			/*
++			 * Fire the work immediately, because the MDS may be
++			 * waiting for caps release.
++			 */
++			ceph_queue_cap_unlink_work(mdsc);
+ 		}
+ 	}
+ 	spin_unlock(&ci->i_ceph_lock);
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 3b5aae29e9447..523debc6f23e0 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1135,7 +1135,12 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ 		}
+ 
+ 		idx = 0;
+-		left = ret > 0 ? ret : 0;
++		if (ret <= 0)
++			left = 0;
++		else if (off + ret > i_size)
++			left = i_size - off;
++		else
++			left = ret;
+ 		while (left > 0) {
+ 			size_t plen, copied;
+ 
+@@ -1164,15 +1169,13 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ 	}
+ 
+ 	if (ret > 0) {
+-		if (off > *ki_pos) {
+-			if (off >= i_size) {
+-				*retry_op = CHECK_EOF;
+-				ret = i_size - *ki_pos;
+-				*ki_pos = i_size;
+-			} else {
+-				ret = off - *ki_pos;
+-				*ki_pos = off;
+-			}
++		if (off >= i_size) {
++			*retry_op = CHECK_EOF;
++			ret = i_size - *ki_pos;
++			*ki_pos = i_size;
++		} else {
++			ret = off - *ki_pos;
++			*ki_pos = off;
+ 		}
+ 
+ 		if (last_objver)
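
A racing truncate can leave the OSD returning more bytes than i_size permits, so the fix clamps both the copy length and the final file position to the inode size. The arithmetic, as a standalone sketch:

#include <stdio.h>

/* Clamp the number of readable bytes at offset 'off' to the inode size,
 * mirroring the __ceph_sync_read() fix; a zero result means no usable
 * data at or past EOF. */
static long clamp_read(long ret, long off, long i_size)
{
	if (ret <= 0 || off >= i_size)
		return 0;
	if (off + ret > i_size)
		return i_size - off;
	return ret;
}

int main(void)
{
	/* The OSD returned 512 bytes at offset 4096, but i_size is 4200. */
	printf("usable bytes: %ld\n", clamp_read(512, 4096, 4200));
	return 0;
}
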
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 2eb66dd7d01b2..950360b07536b 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -2470,6 +2470,50 @@ void ceph_reclaim_caps_nr(struct ceph_mds_client *mdsc, int nr)
+ 	}
+ }
+ 
++void ceph_queue_cap_unlink_work(struct ceph_mds_client *mdsc)
++{
++	struct ceph_client *cl = mdsc->fsc->client;
++
++	if (mdsc->stopping)
++		return;
++
++	if (queue_work(mdsc->fsc->cap_wq, &mdsc->cap_unlink_work)) {
++		doutc(cl, "caps unlink work queued\n");
++	} else {
++		doutc(cl, "failed to queue caps unlink work\n");
++	}
++}
++
++static void ceph_cap_unlink_work(struct work_struct *work)
++{
++	struct ceph_mds_client *mdsc =
++		container_of(work, struct ceph_mds_client, cap_unlink_work);
++	struct ceph_client *cl = mdsc->fsc->client;
++
++	doutc(cl, "begin\n");
++	spin_lock(&mdsc->cap_unlink_delay_lock);
++	while (!list_empty(&mdsc->cap_unlink_delay_list)) {
++		struct ceph_inode_info *ci;
++		struct inode *inode;
++
++		ci = list_first_entry(&mdsc->cap_unlink_delay_list,
++				      struct ceph_inode_info,
++				      i_cap_delay_list);
++		list_del_init(&ci->i_cap_delay_list);
++
++		inode = igrab(&ci->netfs.inode);
++		if (inode) {
++			spin_unlock(&mdsc->cap_unlink_delay_lock);
++			doutc(cl, "on %p %llx.%llx\n", inode,
++			      ceph_vinop(inode));
++			ceph_check_caps(ci, CHECK_CAPS_FLUSH);
++			iput(inode);
++			spin_lock(&mdsc->cap_unlink_delay_lock);
++		}
++	}
++	spin_unlock(&mdsc->cap_unlink_delay_lock);
++	doutc(cl, "done\n");
++}
++
+ /*
+  * requests
+  */
+@@ -5345,6 +5389,8 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+ 	INIT_LIST_HEAD(&mdsc->cap_delay_list);
+ 	INIT_LIST_HEAD(&mdsc->cap_wait_list);
+ 	spin_lock_init(&mdsc->cap_delay_lock);
++	INIT_LIST_HEAD(&mdsc->cap_unlink_delay_list);
++	spin_lock_init(&mdsc->cap_unlink_delay_lock);
+ 	INIT_LIST_HEAD(&mdsc->snap_flush_list);
+ 	spin_lock_init(&mdsc->snap_flush_lock);
+ 	mdsc->last_cap_flush_tid = 1;
+@@ -5353,6 +5399,7 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+ 	spin_lock_init(&mdsc->cap_dirty_lock);
+ 	init_waitqueue_head(&mdsc->cap_flushing_wq);
+ 	INIT_WORK(&mdsc->cap_reclaim_work, ceph_cap_reclaim_work);
++	INIT_WORK(&mdsc->cap_unlink_work, ceph_cap_unlink_work);
+ 	err = ceph_metric_init(&mdsc->metric);
+ 	if (err)
+ 		goto err_mdsmap;
+@@ -5626,6 +5673,7 @@ void ceph_mdsc_close_sessions(struct ceph_mds_client *mdsc)
+ 	ceph_cleanup_global_and_empty_realms(mdsc);
+ 
+ 	cancel_work_sync(&mdsc->cap_reclaim_work);
++	cancel_work_sync(&mdsc->cap_unlink_work);
+ 	cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
+ 
+ 	doutc(cl, "done\n");
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 40560af388272..03f8ff00874f7 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -462,6 +462,8 @@ struct ceph_mds_client {
+ 	unsigned long    last_renew_caps;  /* last time we renewed our caps */
+ 	struct list_head cap_delay_list;   /* caps with delayed release */
+ 	spinlock_t       cap_delay_lock;   /* protects cap_delay_list */
++	struct list_head cap_unlink_delay_list;  /* caps with delayed release for unlink */
++	spinlock_t       cap_unlink_delay_lock;  /* protects cap_unlink_delay_list */
+ 	struct list_head snap_flush_list;  /* cap_snaps ready to flush */
+ 	spinlock_t       snap_flush_lock;
+ 
+@@ -475,6 +477,8 @@ struct ceph_mds_client {
+ 	struct work_struct cap_reclaim_work;
+ 	atomic_t	   cap_reclaim_pending;
+ 
++	struct work_struct cap_unlink_work;
++
+ 	/*
+ 	 * Cap reservations
+ 	 *
+@@ -574,6 +578,7 @@ extern void ceph_flush_cap_releases(struct ceph_mds_client *mdsc,
+ 				    struct ceph_mds_session *session);
+ extern void ceph_queue_cap_reclaim_work(struct ceph_mds_client *mdsc);
+ extern void ceph_reclaim_caps_nr(struct ceph_mds_client *mdsc, int nr);
++extern void ceph_queue_cap_unlink_work(struct ceph_mds_client *mdsc);
+ extern int ceph_iterate_session_caps(struct ceph_mds_session *session,
+ 				     int (*cb)(struct inode *, int mds, void *),
+ 				     void *arg);
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 87ff35bff8d5b..afc37c9029ce7 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -3,6 +3,7 @@
+  * Copyright (C) 2022, Alibaba Cloud
+  * Copyright (C) 2022, Bytedance Inc. All rights reserved.
+  */
++#include <linux/pseudo_fs.h>
+ #include <linux/fscache.h>
+ #include "internal.h"
+ 
+@@ -12,6 +13,18 @@ static LIST_HEAD(erofs_domain_list);
+ static LIST_HEAD(erofs_domain_cookies_list);
+ static struct vfsmount *erofs_pseudo_mnt;
+ 
++static int erofs_anon_init_fs_context(struct fs_context *fc)
++{
++	return init_pseudo(fc, EROFS_SUPER_MAGIC) ? 0 : -ENOMEM;
++}
++
++static struct file_system_type erofs_anon_fs_type = {
++	.owner		= THIS_MODULE,
++	.name           = "pseudo_erofs",
++	.init_fs_context = erofs_anon_init_fs_context,
++	.kill_sb        = kill_anon_super,
++};
++
+ struct erofs_fscache_request {
+ 	struct erofs_fscache_request *primary;
+ 	struct netfs_cache_resources cache_resources;
+@@ -381,11 +394,12 @@ static int erofs_fscache_init_domain(struct super_block *sb)
+ 		goto out;
+ 
+ 	if (!erofs_pseudo_mnt) {
+-		erofs_pseudo_mnt = kern_mount(&erofs_fs_type);
+-		if (IS_ERR(erofs_pseudo_mnt)) {
+-			err = PTR_ERR(erofs_pseudo_mnt);
++		struct vfsmount *mnt = kern_mount(&erofs_anon_fs_type);
++		if (IS_ERR(mnt)) {
++			err = PTR_ERR(mnt);
+ 			goto out;
+ 		}
++		erofs_pseudo_mnt = mnt;
+ 	}
+ 
+ 	domain->volume = sbi->volume;
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index b0409badb0172..410f5af623540 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -385,7 +385,6 @@ struct erofs_map_dev {
+ 	unsigned int m_deviceid;
+ };
+ 
+-extern struct file_system_type erofs_fs_type;
+ extern const struct super_operations erofs_sops;
+ 
+ extern const struct address_space_operations erofs_raw_access_aops;
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 3789d62245136..8a82d5cec16da 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -573,13 +573,6 @@ static const struct export_operations erofs_export_ops = {
+ 	.get_parent = erofs_get_parent,
+ };
+ 
+-static int erofs_fc_fill_pseudo_super(struct super_block *sb, struct fs_context *fc)
+-{
+-	static const struct tree_descr empty_descr = {""};
+-
+-	return simple_fill_super(sb, EROFS_SUPER_MAGIC, &empty_descr);
+-}
+-
+ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ {
+ 	struct inode *inode;
+@@ -706,11 +699,6 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	return 0;
+ }
+ 
+-static int erofs_fc_anon_get_tree(struct fs_context *fc)
+-{
+-	return get_tree_nodev(fc, erofs_fc_fill_pseudo_super);
+-}
+-
+ static int erofs_fc_get_tree(struct fs_context *fc)
+ {
+ 	struct erofs_fs_context *ctx = fc->fs_private;
+@@ -783,20 +771,10 @@ static const struct fs_context_operations erofs_context_ops = {
+ 	.free		= erofs_fc_free,
+ };
+ 
+-static const struct fs_context_operations erofs_anon_context_ops = {
+-	.get_tree       = erofs_fc_anon_get_tree,
+-};
+-
+ static int erofs_init_fs_context(struct fs_context *fc)
+ {
+ 	struct erofs_fs_context *ctx;
+ 
+-	/* pseudo mount for anon inodes */
+-	if (fc->sb_flags & SB_KERNMOUNT) {
+-		fc->ops = &erofs_anon_context_ops;
+-		return 0;
+-	}
+-
+ 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ 	if (!ctx)
+ 		return -ENOMEM;
+@@ -818,12 +796,6 @@ static void erofs_kill_sb(struct super_block *sb)
+ {
+ 	struct erofs_sb_info *sbi;
+ 
+-	/* pseudo mount for anon inodes */
+-	if (sb->s_flags & SB_KERNMOUNT) {
+-		kill_anon_super(sb);
+-		return;
+-	}
+-
+ 	if (erofs_is_fscache_mode(sb))
+ 		kill_anon_super(sb);
+ 	else
+@@ -862,7 +834,7 @@ static void erofs_put_super(struct super_block *sb)
+ 	erofs_fscache_unregister_fs(sb);
+ }
+ 
+-struct file_system_type erofs_fs_type = {
++static struct file_system_type erofs_fs_type = {
+ 	.owner          = THIS_MODULE,
+ 	.name           = "erofs",
+ 	.init_fs_context = erofs_init_fs_context,
+diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
+index 677a9ad45dcb7..f38bdd46e4f77 100644
+--- a/fs/ext2/ext2.h
++++ b/fs/ext2/ext2.h
+@@ -674,7 +674,7 @@ struct ext2_inode_info {
+ 	struct inode	vfs_inode;
+ 	struct list_head i_orphan;	/* unlinked but open inodes */
+ #ifdef CONFIG_QUOTA
+-	struct dquot *i_dquot[MAXQUOTAS];
++	struct dquot __rcu *i_dquot[MAXQUOTAS];
+ #endif
+ };
+ 
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 01f9addc8b1f6..6d8587505cea3 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -320,7 +320,7 @@ static ssize_t ext2_quota_read(struct super_block *sb, int type, char *data, siz
+ static ssize_t ext2_quota_write(struct super_block *sb, int type, const char *data, size_t len, loff_t off);
+ static int ext2_quota_on(struct super_block *sb, int type, int format_id,
+ 			 const struct path *path);
+-static struct dquot **ext2_get_dquots(struct inode *inode)
++static struct dquot __rcu **ext2_get_dquots(struct inode *inode)
+ {
+ 	return EXT2_I(inode)->i_dquot;
+ }
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index a5d784872303d..3205d46bc9673 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1156,7 +1156,7 @@ struct ext4_inode_info {
+ 	tid_t i_datasync_tid;
+ 
+ #ifdef CONFIG_QUOTA
+-	struct dquot *i_dquot[MAXQUOTAS];
++	struct dquot __rcu *i_dquot[MAXQUOTAS];
+ #endif
+ 
+ 	/* Precomputed uuid+inum+igen checksum for seeding inode checksums */
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index c5fcf377ab1fa..3f1fa5d182e24 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1600,7 +1600,7 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
+ static int ext4_quota_enable(struct super_block *sb, int type, int format_id,
+ 			     unsigned int flags);
+ 
+-static struct dquot **ext4_get_dquots(struct inode *inode)
++static struct dquot __rcu **ext4_get_dquots(struct inode *inode)
+ {
+ 	return EXT4_I(inode)->i_dquot;
+ }
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index b0597a539fc54..9afc8d24dc369 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1587,8 +1587,9 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 	 */
+ 	if (f2fs_sb_has_encrypt(sbi) || f2fs_sb_has_verity(sbi) ||
+ 		f2fs_sb_has_compression(sbi))
+-		invalidate_mapping_pages(META_MAPPING(sbi),
+-				MAIN_BLKADDR(sbi), MAX_BLKADDR(sbi) - 1);
++		f2fs_bug_on(sbi,
++			invalidate_inode_pages2_range(META_MAPPING(sbi),
++				MAIN_BLKADDR(sbi), MAX_BLKADDR(sbi) - 1));
+ 
+ 	f2fs_release_ino_entry(sbi, false);
+ 
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 62119f3f7206d..52f407b9789b1 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1371,8 +1371,6 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 	add_compr_block_stat(inode, cc->valid_nr_cpages);
+ 
+ 	set_inode_flag(cc->inode, FI_APPEND_WRITE);
+-	if (cc->cluster_idx == 0)
+-		set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
+ 
+ 	f2fs_put_dnode(&dn);
+ 	if (quota_inode)
+@@ -1420,6 +1418,8 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
+ 	struct f2fs_sb_info *sbi = bio->bi_private;
+ 	struct compress_io_ctx *cic =
+ 			(struct compress_io_ctx *)page_private(page);
++	enum count_type type = WB_DATA_TYPE(page,
++				f2fs_is_compressed_page(page));
+ 	int i;
+ 
+ 	if (unlikely(bio->bi_status))
+@@ -1427,7 +1427,7 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
+ 
+ 	f2fs_compress_free_page(page);
+ 
+-	dec_page_count(sbi, F2FS_WB_DATA);
++	dec_page_count(sbi, type);
+ 
+ 	if (atomic_dec_return(&cic->pending_pages))
+ 		return;
+@@ -1443,12 +1443,14 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
+ }
+ 
+ static int f2fs_write_raw_pages(struct compress_ctx *cc,
+-					int *submitted,
++					int *submitted_p,
+ 					struct writeback_control *wbc,
+ 					enum iostat_type io_type)
+ {
+ 	struct address_space *mapping = cc->inode->i_mapping;
+-	int _submitted, compr_blocks, ret, i;
++	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
++	int submitted, compr_blocks, i;
++	int ret = 0;
+ 
+ 	compr_blocks = f2fs_compressed_blocks(cc);
+ 
+@@ -1463,6 +1465,10 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
+ 	if (compr_blocks < 0)
+ 		return compr_blocks;
+ 
++	/* overwrite compressed cluster w/ normal cluster */
++	if (compr_blocks > 0)
++		f2fs_lock_op(sbi);
++
+ 	for (i = 0; i < cc->cluster_size; i++) {
+ 		if (!cc->rpages[i])
+ 			continue;
+@@ -1487,7 +1493,7 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
+ 		if (!clear_page_dirty_for_io(cc->rpages[i]))
+ 			goto continue_unlock;
+ 
+-		ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted,
++		ret = f2fs_write_single_data_page(cc->rpages[i], &submitted,
+ 						NULL, NULL, wbc, io_type,
+ 						compr_blocks, false);
+ 		if (ret) {
+@@ -1495,26 +1501,29 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
+ 				unlock_page(cc->rpages[i]);
+ 				ret = 0;
+ 			} else if (ret == -EAGAIN) {
++				ret = 0;
+ 				/*
+ 				 * for quota file, just redirty left pages to
+ 				 * avoid deadlock caused by cluster update race
+ 				 * from foreground operation.
+ 				 */
+ 				if (IS_NOQUOTA(cc->inode))
+-					return 0;
+-				ret = 0;
++					goto out;
+ 				f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT);
+ 				goto retry_write;
+ 			}
+-			return ret;
++			goto out;
+ 		}
+ 
+-		*submitted += _submitted;
++		*submitted_p += submitted;
+ 	}
+ 
+-	f2fs_balance_fs(F2FS_M_SB(mapping), true);
++out:
++	if (compr_blocks > 0)
++		f2fs_unlock_op(sbi);
+ 
+-	return 0;
++	f2fs_balance_fs(sbi, true);
++	return ret;
+ }
+ 
+ int f2fs_write_multi_pages(struct compress_ctx *cc,
+@@ -1808,16 +1817,18 @@ void f2fs_put_page_dic(struct page *page, bool in_task)
+  * check whether cluster blocks are contiguous, and add extent cache entry
+  * only if cluster blocks are logically and physically contiguous.
+  */
+-unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn)
++unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn,
++						unsigned int ofs_in_node)
+ {
+-	bool compressed = f2fs_data_blkaddr(dn) == COMPRESS_ADDR;
++	bool compressed = data_blkaddr(dn->inode, dn->node_page,
++					ofs_in_node) == COMPRESS_ADDR;
+ 	int i = compressed ? 1 : 0;
+ 	block_t first_blkaddr = data_blkaddr(dn->inode, dn->node_page,
+-						dn->ofs_in_node + i);
++							ofs_in_node + i);
+ 
+ 	for (i += 1; i < F2FS_I(dn->inode)->i_cluster_size; i++) {
+ 		block_t blkaddr = data_blkaddr(dn->inode, dn->node_page,
+-						dn->ofs_in_node + i);
++							ofs_in_node + i);
+ 
+ 		if (!__is_valid_data_blkaddr(blkaddr))
+ 			break;
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index bc3f05d43b624..c611d064aee3e 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -48,7 +48,7 @@ void f2fs_destroy_bioset(void)
+ 	bioset_exit(&f2fs_bioset);
+ }
+ 
+-static bool __is_cp_guaranteed(struct page *page)
++bool f2fs_is_cp_guaranteed(struct page *page)
+ {
+ 	struct address_space *mapping = page->mapping;
+ 	struct inode *inode;
+@@ -65,8 +65,6 @@ static bool __is_cp_guaranteed(struct page *page)
+ 			S_ISDIR(inode->i_mode))
+ 		return true;
+ 
+-	if (f2fs_is_compressed_page(page))
+-		return false;
+ 	if ((S_ISREG(inode->i_mode) && IS_NOQUOTA(inode)) ||
+ 			page_private_gcing(page))
+ 		return true;
+@@ -338,7 +336,7 @@ static void f2fs_write_end_io(struct bio *bio)
+ 
+ 	bio_for_each_segment_all(bvec, bio, iter_all) {
+ 		struct page *page = bvec->bv_page;
+-		enum count_type type = WB_DATA_TYPE(page);
++		enum count_type type = WB_DATA_TYPE(page, false);
+ 
+ 		if (page_private_dummy(page)) {
+ 			clear_page_private_dummy(page);
+@@ -762,7 +760,7 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ 		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+ 
+ 	inc_page_count(fio->sbi, is_read_io(fio->op) ?
+-			__read_io_type(page) : WB_DATA_TYPE(fio->page));
++			__read_io_type(page) : WB_DATA_TYPE(fio->page, false));
+ 
+ 	if (is_read_io(bio_op(bio)))
+ 		f2fs_submit_read_bio(fio->sbi, bio, fio->type);
+@@ -973,7 +971,7 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
+ 	if (fio->io_wbc)
+ 		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+ 
+-	inc_page_count(fio->sbi, WB_DATA_TYPE(page));
++	inc_page_count(fio->sbi, WB_DATA_TYPE(page, false));
+ 
+ 	*fio->last_block = fio->new_blkaddr;
+ 	*fio->bio = bio;
+@@ -1007,11 +1005,12 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ 	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
+ 	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
+ 	struct page *bio_page;
++	enum count_type type;
+ 
+ 	f2fs_bug_on(sbi, is_read_io(fio->op));
+ 
+ 	f2fs_down_write(&io->io_rwsem);
+-
++next:
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	if (f2fs_sb_has_blkzoned(sbi) && btype < META && io->zone_pending_bio) {
+ 		wait_for_completion_io(&io->zone_wait);
+@@ -1021,7 +1020,6 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ 	}
+ #endif
+ 
+-next:
+ 	if (fio->in_list) {
+ 		spin_lock(&io->io_lock);
+ 		if (list_empty(&io->io_list)) {
+@@ -1046,7 +1044,8 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ 	/* set submitted = true as a return value */
+ 	fio->submitted = 1;
+ 
+-	inc_page_count(sbi, WB_DATA_TYPE(bio_page));
++	type = WB_DATA_TYPE(bio_page, fio->compressed_page);
++	inc_page_count(sbi, type);
+ 
+ 	if (io->bio &&
+ 	    (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio,
+@@ -1059,7 +1058,8 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ 		if (F2FS_IO_ALIGNED(sbi) &&
+ 				(fio->type == DATA || fio->type == NODE) &&
+ 				fio->new_blkaddr & F2FS_IO_SIZE_MASK(sbi)) {
+-			dec_page_count(sbi, WB_DATA_TYPE(bio_page));
++			dec_page_count(sbi, WB_DATA_TYPE(bio_page,
++						fio->compressed_page));
+ 			fio->retry = 1;
+ 			goto skip;
+ 		}
+@@ -1080,10 +1080,6 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ 	io->last_block_in_bio = fio->new_blkaddr;
+ 
+ 	trace_f2fs_submit_page_write(fio->page, fio);
+-skip:
+-	if (fio->in_list)
+-		goto next;
+-out:
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	if (f2fs_sb_has_blkzoned(sbi) && btype < META &&
+ 			is_end_zone_blkaddr(sbi, fio->new_blkaddr)) {
+@@ -1096,6 +1092,10 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ 		__submit_merged_bio(io);
+ 	}
+ #endif
++skip:
++	if (fio->in_list)
++		goto next;
++out:
+ 	if (is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) ||
+ 				!f2fs_is_checkpoint_ready(sbi))
+ 		__submit_merged_bio(io);
+@@ -1179,18 +1179,12 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
+ 	return 0;
+ }
+ 
+-static void __set_data_blkaddr(struct dnode_of_data *dn)
++static void __set_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr)
+ {
+-	struct f2fs_node *rn = F2FS_NODE(dn->node_page);
+-	__le32 *addr_array;
+-	int base = 0;
+-
+-	if (IS_INODE(dn->node_page) && f2fs_has_extra_attr(dn->inode))
+-		base = get_extra_isize(dn->inode);
++	__le32 *addr = get_dnode_addr(dn->inode, dn->node_page);
+ 
+-	/* Get physical address of data block */
+-	addr_array = blkaddr_in_node(rn);
+-	addr_array[base + dn->ofs_in_node] = cpu_to_le32(dn->data_blkaddr);
++	dn->data_blkaddr = blkaddr;
++	addr[dn->ofs_in_node] = cpu_to_le32(dn->data_blkaddr);
+ }
+ 
+ /*
+@@ -1199,18 +1193,17 @@ static void __set_data_blkaddr(struct dnode_of_data *dn)
+  *  ->node_page
+  *    update block addresses in the node page
+  */
+-void f2fs_set_data_blkaddr(struct dnode_of_data *dn)
++void f2fs_set_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr)
+ {
+ 	f2fs_wait_on_page_writeback(dn->node_page, NODE, true, true);
+-	__set_data_blkaddr(dn);
++	__set_data_blkaddr(dn, blkaddr);
+ 	if (set_page_dirty(dn->node_page))
+ 		dn->node_changed = true;
+ }
+ 
+ void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr)
+ {
+-	dn->data_blkaddr = blkaddr;
+-	f2fs_set_data_blkaddr(dn);
++	f2fs_set_data_blkaddr(dn, blkaddr);
+ 	f2fs_update_read_extent_cache(dn);
+ }
+ 
+@@ -1225,7 +1218,8 @@ int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
+ 
+ 	if (unlikely(is_inode_flag_set(dn->inode, FI_NO_ALLOC)))
+ 		return -EPERM;
+-	if (unlikely((err = inc_valid_block_count(sbi, dn->inode, &count))))
++	err = inc_valid_block_count(sbi, dn->inode, &count, true);
++	if (unlikely(err))
+ 		return err;
+ 
+ 	trace_f2fs_reserve_new_blocks(dn->inode, dn->nid,
+@@ -1237,8 +1231,7 @@ int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
+ 		block_t blkaddr = f2fs_data_blkaddr(dn);
+ 
+ 		if (blkaddr == NULL_ADDR) {
+-			dn->data_blkaddr = NEW_ADDR;
+-			__set_data_blkaddr(dn);
++			__set_data_blkaddr(dn, NEW_ADDR);
+ 			count--;
+ 		}
+ 	}
+@@ -1483,7 +1476,7 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
+ 
+ 	dn->data_blkaddr = f2fs_data_blkaddr(dn);
+ 	if (dn->data_blkaddr == NULL_ADDR) {
+-		err = inc_valid_block_count(sbi, dn->inode, &count);
++		err = inc_valid_block_count(sbi, dn->inode, &count, true);
+ 		if (unlikely(err))
+ 			return err;
+ 	}
+@@ -1492,11 +1485,9 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
+ 	old_blkaddr = dn->data_blkaddr;
+ 	f2fs_allocate_data_block(sbi, NULL, old_blkaddr, &dn->data_blkaddr,
+ 				&sum, seg_type, NULL);
+-	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) {
+-		invalidate_mapping_pages(META_MAPPING(sbi),
+-					old_blkaddr, old_blkaddr);
+-		f2fs_invalidate_compress_page(sbi, old_blkaddr);
+-	}
++	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO)
++		f2fs_invalidate_internal_cache(sbi, old_blkaddr);
++
+ 	f2fs_update_data_blkaddr(dn, dn->data_blkaddr);
+ 	return 0;
+ }
+@@ -2811,8 +2802,6 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
+ 	f2fs_outplace_write_data(&dn, fio);
+ 	trace_f2fs_do_write_data_page(page, OPU);
+ 	set_inode_flag(inode, FI_APPEND_WRITE);
+-	if (page->index == 0)
+-		set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
+ out_writepage:
+ 	f2fs_put_dnode(&dn);
+ out:
+@@ -2850,7 +2839,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 		.encrypted_page = NULL,
+ 		.submitted = 0,
+ 		.compr_blocks = compr_blocks,
+-		.need_lock = LOCK_RETRY,
++		.need_lock = compr_blocks ? LOCK_DONE : LOCK_RETRY,
+ 		.post_read = f2fs_post_read_required(inode) ? 1 : 0,
+ 		.io_type = io_type,
+ 		.io_wbc = wbc,
+@@ -2895,9 +2884,6 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 
+ 	zero_user_segment(page, offset, PAGE_SIZE);
+ write:
+-	if (f2fs_is_drop_cache(inode))
+-		goto out;
+-
+ 	/* Dentry/quota blocks are controlled by checkpoint */
+ 	if (S_ISDIR(inode->i_mode) || quota_inode) {
+ 		/*
+@@ -2934,6 +2920,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 	if (err == -EAGAIN) {
+ 		err = f2fs_do_write_data_page(&fio);
+ 		if (err == -EAGAIN) {
++			f2fs_bug_on(sbi, compr_blocks);
+ 			fio.need_lock = LOCK_REQ;
+ 			err = f2fs_do_write_data_page(&fio);
+ 		}
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 042593aed1ec0..1b937f7d0414f 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -830,13 +830,14 @@ int f2fs_do_add_link(struct inode *dir, const struct qstr *name,
+ 	return err;
+ }
+ 
+-int f2fs_do_tmpfile(struct inode *inode, struct inode *dir)
++int f2fs_do_tmpfile(struct inode *inode, struct inode *dir,
++					struct f2fs_filename *fname)
+ {
+ 	struct page *page;
+ 	int err = 0;
+ 
+ 	f2fs_down_write(&F2FS_I(inode)->i_sem);
+-	page = f2fs_init_inode_metadata(inode, dir, NULL, NULL);
++	page = f2fs_init_inode_metadata(inode, dir, fname, NULL);
+ 	if (IS_ERR(page)) {
+ 		err = PTR_ERR(page);
+ 		goto fail;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 9043cedfa12ba..200e1e43ef9bb 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -75,6 +75,11 @@ struct f2fs_fault_info {
+ 
+ extern const char *f2fs_fault_name[FAULT_MAX];
+ #define IS_FAULT_SET(fi, type) ((fi)->inject_type & BIT(type))
++
++/* maximum retry count for injected failure */
++#define DEFAULT_FAILURE_RETRY_COUNT		8
++#else
++#define DEFAULT_FAILURE_RETRY_COUNT		1
+ #endif
+ 
+ /*
+@@ -774,8 +779,6 @@ enum {
+ 	FI_UPDATE_WRITE,	/* inode has in-place-update data */
+ 	FI_NEED_IPU,		/* used for ipu per file */
+ 	FI_ATOMIC_FILE,		/* indicate atomic file */
+-	FI_FIRST_BLOCK_WRITTEN,	/* indicate #0 data block was written */
+-	FI_DROP_CACHE,		/* drop dirty page cache */
+ 	FI_DATA_EXIST,		/* indicate data exists */
+ 	FI_INLINE_DOTS,		/* indicate inline dot dentries */
+ 	FI_SKIP_WRITES,		/* should skip data page writeback */
+@@ -824,7 +827,7 @@ struct f2fs_inode_info {
+ 	spinlock_t i_size_lock;		/* protect last_disk_size */
+ 
+ #ifdef CONFIG_QUOTA
+-	struct dquot *i_dquot[MAXQUOTAS];
++	struct dquot __rcu *i_dquot[MAXQUOTAS];
+ 
+ 	/* quota space reservation, managed internally by quota code */
+ 	qsize_t i_reserved_quota;
+@@ -1075,7 +1078,8 @@ struct f2fs_sm_info {
+  * f2fs monitors the number of several block types such as on-writeback,
+  * dirty dentry blocks, dirty node blocks, and dirty meta blocks.
+  */
+-#define WB_DATA_TYPE(p)	(__is_cp_guaranteed(p) ? F2FS_WB_CP_DATA : F2FS_WB_DATA)
++#define WB_DATA_TYPE(p, f)			\
++	(f || f2fs_is_cp_guaranteed(p) ? F2FS_WB_CP_DATA : F2FS_WB_DATA)
+ enum count_type {
+ 	F2FS_DIRTY_DENTS,
+ 	F2FS_DIRTY_DATA,
+@@ -2246,7 +2250,7 @@ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
+ 
+ static inline void f2fs_i_blocks_write(struct inode *, block_t, bool, bool);
+ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+-				 struct inode *inode, blkcnt_t *count)
++				 struct inode *inode, blkcnt_t *count, bool partial)
+ {
+ 	blkcnt_t diff = 0, release = 0;
+ 	block_t avail_user_block_count;
+@@ -2286,6 +2290,11 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ 			avail_user_block_count = 0;
+ 	}
+ 	if (unlikely(sbi->total_valid_block_count > avail_user_block_count)) {
++		if (!partial) {
++			spin_unlock(&sbi->stat_lock);
++			goto enospc;
++		}
++
+ 		diff = sbi->total_valid_block_count - avail_user_block_count;
+ 		if (diff > *count)
+ 			diff = *count;
+@@ -3272,22 +3281,13 @@ static inline bool f2fs_is_cow_file(struct inode *inode)
+ 	return is_inode_flag_set(inode, FI_COW_FILE);
+ }
+ 
+-static inline bool f2fs_is_first_block_written(struct inode *inode)
+-{
+-	return is_inode_flag_set(inode, FI_FIRST_BLOCK_WRITTEN);
+-}
+-
+-static inline bool f2fs_is_drop_cache(struct inode *inode)
+-{
+-	return is_inode_flag_set(inode, FI_DROP_CACHE);
+-}
+-
++static inline __le32 *get_dnode_addr(struct inode *inode,
++					struct page *node_page);
+ static inline void *inline_data_addr(struct inode *inode, struct page *page)
+ {
+-	struct f2fs_inode *ri = F2FS_INODE(page);
+-	int extra_size = get_extra_isize(inode);
++	__le32 *addr = get_dnode_addr(inode, page);
+ 
+-	return (void *)&(ri->i_addr[extra_size + DEF_INLINE_RESERVED_SIZE]);
++	return (void *)(addr + DEF_INLINE_RESERVED_SIZE);
+ }
+ 
+ static inline int f2fs_has_inline_dentry(struct inode *inode)
+@@ -3432,6 +3432,17 @@ static inline int get_inline_xattr_addrs(struct inode *inode)
+ 	return F2FS_I(inode)->i_inline_xattr_size;
+ }
+ 
++static inline __le32 *get_dnode_addr(struct inode *inode,
++					struct page *node_page)
++{
++	int base = 0;
++
++	if (IS_INODE(node_page) && f2fs_has_extra_attr(inode))
++		base = get_extra_isize(inode);
++
++	return blkaddr_in_node(F2FS_NODE(node_page)) + base;
++}
++
+ #define f2fs_get_inode_mode(i) \
+ 	((is_inode_flag_set(i, FI_ACL_MODE)) ? \
+ 	 (F2FS_I(i)->i_acl_mode) : ((i)->i_mode))
+@@ -3457,11 +3468,9 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ static inline void verify_blkaddr(struct f2fs_sb_info *sbi,
+ 					block_t blkaddr, int type)
+ {
+-	if (!f2fs_is_valid_blkaddr(sbi, blkaddr, type)) {
++	if (!f2fs_is_valid_blkaddr(sbi, blkaddr, type))
+ 		f2fs_err(sbi, "invalid blkaddr: %u, type: %d, run fsck to fix.",
+ 			 blkaddr, type);
+-		f2fs_bug_on(sbi, 1);
+-	}
+ }
+ 
+ static inline bool __is_valid_data_blkaddr(block_t blkaddr)
+@@ -3563,7 +3572,8 @@ int f2fs_do_add_link(struct inode *dir, const struct qstr *name,
+ 			struct inode *inode, nid_t ino, umode_t mode);
+ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
+ 			struct inode *dir, struct inode *inode);
+-int f2fs_do_tmpfile(struct inode *inode, struct inode *dir);
++int f2fs_do_tmpfile(struct inode *inode, struct inode *dir,
++					struct f2fs_filename *fname);
+ bool f2fs_empty_dir(struct inode *dir);
+ 
+ static inline int f2fs_add_link(struct dentry *dentry, struct inode *inode)
+@@ -3797,6 +3807,7 @@ void f2fs_init_ckpt_req_control(struct f2fs_sb_info *sbi);
+  */
+ int __init f2fs_init_bioset(void);
+ void f2fs_destroy_bioset(void);
++bool f2fs_is_cp_guaranteed(struct page *page);
+ int f2fs_init_bio_entry_cache(void);
+ void f2fs_destroy_bio_entry_cache(void);
+ void f2fs_submit_read_bio(struct f2fs_sb_info *sbi, struct bio *bio,
+@@ -3815,7 +3826,7 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio);
+ struct block_device *f2fs_target_device(struct f2fs_sb_info *sbi,
+ 		block_t blk_addr, sector_t *sector);
+ int f2fs_target_device_index(struct f2fs_sb_info *sbi, block_t blkaddr);
+-void f2fs_set_data_blkaddr(struct dnode_of_data *dn);
++void f2fs_set_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr);
+ void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr);
+ int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count);
+ int f2fs_reserve_new_block(struct dnode_of_data *dn);
+@@ -4280,7 +4291,8 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc);
+ void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed,
+ 				bool in_task);
+ void f2fs_put_page_dic(struct page *page, bool in_task);
+-unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn);
++unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn,
++						unsigned int ofs_in_node);
+ int f2fs_init_compress_ctx(struct compress_ctx *cc);
+ void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse);
+ void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
+@@ -4337,7 +4349,8 @@ static inline void f2fs_put_page_dic(struct page *page, bool in_task)
+ {
+ 	WARN_ON_ONCE(1);
+ }
+-static inline unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn) { return 0; }
++static inline unsigned int f2fs_cluster_blocks_are_contiguous(
++			struct dnode_of_data *dn, unsigned int ofs_in_node) { return 0; }
+ static inline bool f2fs_sanity_check_cluster(struct dnode_of_data *dn) { return false; }
+ static inline int f2fs_init_compress_inode(struct f2fs_sb_info *sbi) { return 0; }
+ static inline void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi) { }
+@@ -4394,15 +4407,24 @@ static inline bool f2fs_disable_compressed_file(struct inode *inode)
+ {
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
+ 
+-	if (!f2fs_compressed_file(inode))
++	f2fs_down_write(&F2FS_I(inode)->i_sem);
++
++	if (!f2fs_compressed_file(inode)) {
++		f2fs_up_write(&F2FS_I(inode)->i_sem);
+ 		return true;
+-	if (S_ISREG(inode->i_mode) && F2FS_HAS_BLOCKS(inode))
++	}
++	if (f2fs_is_mmap_file(inode) ||
++		(S_ISREG(inode->i_mode) && F2FS_HAS_BLOCKS(inode))) {
++		f2fs_up_write(&F2FS_I(inode)->i_sem);
+ 		return false;
++	}
+ 
+ 	fi->i_flags &= ~F2FS_COMPR_FL;
+ 	stat_dec_compr_inode(inode);
+ 	clear_inode_flag(inode, FI_COMPRESSED_FILE);
+ 	f2fs_mark_inode_dirty_sync(inode, true);
++
++	f2fs_up_write(&F2FS_I(inode)->i_sem);
+ 	return true;
+ }
+ 
+@@ -4606,6 +4628,39 @@ static inline bool f2fs_is_readonly(struct f2fs_sb_info *sbi)
+ 	return f2fs_sb_has_readonly(sbi) || f2fs_readonly(sbi->sb);
+ }
+ 
++static inline void f2fs_truncate_meta_inode_pages(struct f2fs_sb_info *sbi,
++					block_t blkaddr, unsigned int cnt)
++{
++	bool need_submit = false;
++	int i = 0;
++
++	do {
++		struct page *page;
++
++		page = find_get_page(META_MAPPING(sbi), blkaddr + i);
++		if (page) {
++			if (PageWriteback(page))
++				need_submit = true;
++			f2fs_put_page(page, 0);
++		}
++	} while (++i < cnt && !need_submit);
++
++	if (need_submit)
++		f2fs_submit_merged_write_cond(sbi, sbi->meta_inode,
++							NULL, 0, DATA);
++
++	truncate_inode_pages_range(META_MAPPING(sbi),
++			F2FS_BLK_TO_BYTES((loff_t)blkaddr),
++			F2FS_BLK_END_BYTES((loff_t)(blkaddr + cnt - 1)));
++}
++
++static inline void f2fs_invalidate_internal_cache(struct f2fs_sb_info *sbi,
++								block_t blkaddr)
++{
++	f2fs_truncate_meta_inode_pages(sbi, blkaddr, 1);
++	f2fs_invalidate_compress_page(sbi, blkaddr);
++}
++
+ #define EFSBADCRC	EBADMSG		/* Bad CRC detected */
+ #define EFSCORRUPTED	EUCLEAN		/* Filesystem is corrupted */
+ 
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index a05781e708d66..caeae900f797a 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -557,20 +557,14 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
+ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
+-	struct f2fs_node *raw_node;
+ 	int nr_free = 0, ofs = dn->ofs_in_node, len = count;
+ 	__le32 *addr;
+-	int base = 0;
+ 	bool compressed_cluster = false;
+ 	int cluster_index = 0, valid_blocks = 0;
+ 	int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
+ 	bool released = !atomic_read(&F2FS_I(dn->inode)->i_compr_blocks);
+ 
+-	if (IS_INODE(dn->node_page) && f2fs_has_extra_attr(dn->inode))
+-		base = get_extra_isize(dn->inode);
+-
+-	raw_node = F2FS_NODE(dn->node_page);
+-	addr = blkaddr_in_node(raw_node) + base + ofs;
++	addr = get_dnode_addr(dn->inode, dn->node_page) + ofs;
+ 
+ 	/* Assumption: truncation starts with cluster */
+ 	for (; count > 0; count--, addr++, dn->ofs_in_node++, cluster_index++) {
+@@ -588,8 +582,7 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
+ 		if (blkaddr == NULL_ADDR)
+ 			continue;
+ 
+-		dn->data_blkaddr = NULL_ADDR;
+-		f2fs_set_data_blkaddr(dn);
++		f2fs_set_data_blkaddr(dn, NULL_ADDR);
+ 
+ 		if (__is_valid_data_blkaddr(blkaddr)) {
+ 			if (!f2fs_is_valid_blkaddr(sbi, blkaddr,
+@@ -599,9 +592,6 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
+ 				valid_blocks++;
+ 		}
+ 
+-		if (dn->ofs_in_node == 0 && IS_INODE(dn->node_page))
+-			clear_inode_flag(dn->inode, FI_FIRST_BLOCK_WRITTEN);
+-
+ 		f2fs_invalidate_blocks(sbi, blkaddr);
+ 
+ 		if (!released || blkaddr != COMPRESS_ADDR)
+@@ -1488,8 +1478,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
+ 		}
+ 
+ 		f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
+-		dn->data_blkaddr = NEW_ADDR;
+-		f2fs_set_data_blkaddr(dn);
++		f2fs_set_data_blkaddr(dn, NEW_ADDR);
+ 	}
+ 
+ 	f2fs_update_read_extent_cache_range(dn, start, 0, index - start);
+@@ -3469,8 +3458,7 @@ static int release_compress_blocks(struct dnode_of_data *dn, pgoff_t count)
+ 			if (blkaddr != NEW_ADDR)
+ 				continue;
+ 
+-			dn->data_blkaddr = NULL_ADDR;
+-			f2fs_set_data_blkaddr(dn);
++			f2fs_set_data_blkaddr(dn, NULL_ADDR);
+ 		}
+ 
+ 		f2fs_i_compr_blocks_update(dn->inode, compr_blocks, false);
+@@ -3595,10 +3583,10 @@ static int f2fs_release_compress_blocks(struct file *filp, unsigned long arg)
+ 	return ret;
+ }
+ 
+-static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count)
++static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count,
++		unsigned int *reserved_blocks)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
+-	unsigned int reserved_blocks = 0;
+ 	int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
+ 	block_t blkaddr;
+ 	int i;
+@@ -3621,41 +3609,53 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count)
+ 		blkcnt_t reserved;
+ 		int ret;
+ 
+-		for (i = 0; i < cluster_size; i++, dn->ofs_in_node++) {
+-			blkaddr = f2fs_data_blkaddr(dn);
++		for (i = 0; i < cluster_size; i++) {
++			blkaddr = data_blkaddr(dn->inode, dn->node_page,
++						dn->ofs_in_node + i);
+ 
+ 			if (i == 0) {
+-				if (blkaddr == COMPRESS_ADDR)
+-					continue;
+-				dn->ofs_in_node += cluster_size;
+-				goto next;
++				if (blkaddr != COMPRESS_ADDR) {
++					dn->ofs_in_node += cluster_size;
++					goto next;
++				}
++				continue;
+ 			}
+ 
+-			if (__is_valid_data_blkaddr(blkaddr)) {
++			/*
++			 * compressed cluster was not released because it
++			 * failed in release_compress_blocks(), so NEW_ADDR
++			 * is a possible case.
++			 */
++			if (blkaddr == NEW_ADDR ||
++				__is_valid_data_blkaddr(blkaddr)) {
+ 				compr_blocks++;
+ 				continue;
+ 			}
+-
+-			dn->data_blkaddr = NEW_ADDR;
+-			f2fs_set_data_blkaddr(dn);
+ 		}
+ 
+ 		reserved = cluster_size - compr_blocks;
+-		ret = inc_valid_block_count(sbi, dn->inode, &reserved);
+-		if (ret)
++
++		/* for the case where all blocks in the cluster were reserved */
++		if (reserved == 1)
++			goto next;
++
++		ret = inc_valid_block_count(sbi, dn->inode, &reserved, false);
++		if (unlikely(ret))
+ 			return ret;
+ 
+-		if (reserved != cluster_size - compr_blocks)
+-			return -ENOSPC;
++		for (i = 0; i < cluster_size; i++, dn->ofs_in_node++) {
++			if (f2fs_data_blkaddr(dn) == NULL_ADDR)
++				f2fs_set_data_blkaddr(dn, NEW_ADDR);
++		}
+ 
+ 		f2fs_i_compr_blocks_update(dn->inode, compr_blocks, true);
+ 
+-		reserved_blocks += reserved;
++		*reserved_blocks += reserved;
+ next:
+ 		count -= cluster_size;
+ 	}
+ 
+-	return reserved_blocks;
++	return 0;
+ }
+ 
+ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+@@ -3679,9 +3679,6 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (atomic_read(&F2FS_I(inode)->i_compr_blocks))
+-		goto out;
+-
+ 	f2fs_balance_fs(sbi, true);
+ 
+ 	inode_lock(inode);
+@@ -3691,6 +3688,9 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 		goto unlock_inode;
+ 	}
+ 
++	if (atomic_read(&F2FS_I(inode)->i_compr_blocks))
++		goto unlock_inode;
++
+ 	f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 	filemap_invalidate_lock(inode->i_mapping);
+ 
+@@ -3716,7 +3716,7 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 		count = min(end_offset - dn.ofs_in_node, last_idx - page_idx);
+ 		count = round_up(count, F2FS_I(inode)->i_cluster_size);
+ 
+-		ret = reserve_compress_blocks(&dn, count);
++		ret = reserve_compress_blocks(&dn, count, &reserved_blocks);
+ 
+ 		f2fs_put_dnode(&dn);
+ 
+@@ -3724,23 +3724,21 @@ static int f2fs_reserve_compress_blocks(struct file *filp, unsigned long arg)
+ 			break;
+ 
+ 		page_idx += count;
+-		reserved_blocks += ret;
+ 	}
+ 
+ 	filemap_invalidate_unlock(inode->i_mapping);
+ 	f2fs_up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 
+-	if (ret >= 0) {
++	if (!ret) {
+ 		clear_inode_flag(inode, FI_COMPRESS_RELEASED);
+ 		inode_set_ctime_current(inode);
+ 		f2fs_mark_inode_dirty_sync(inode, true);
+ 	}
+ unlock_inode:
+ 	inode_unlock(inode);
+-out:
+ 	mnt_drop_write_file(filp);
+ 
+-	if (ret >= 0) {
++	if (!ret) {
+ 		ret = put_user(reserved_blocks, (u64 __user *)arg);
+ 	} else if (reserved_blocks &&
+ 			atomic_read(&F2FS_I(inode)->i_compr_blocks)) {
+@@ -3989,16 +3987,20 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
+ 				sizeof(option)))
+ 		return -EFAULT;
+ 
+-	if (!f2fs_compressed_file(inode) ||
+-			option.log_cluster_size < MIN_COMPRESS_LOG_SIZE ||
+-			option.log_cluster_size > MAX_COMPRESS_LOG_SIZE ||
+-			option.algorithm >= COMPRESS_MAX)
++	if (option.log_cluster_size < MIN_COMPRESS_LOG_SIZE ||
++		option.log_cluster_size > MAX_COMPRESS_LOG_SIZE ||
++		option.algorithm >= COMPRESS_MAX)
+ 		return -EINVAL;
+ 
+ 	file_start_write(filp);
+ 	inode_lock(inode);
+ 
+ 	f2fs_down_write(&F2FS_I(inode)->i_sem);
++	if (!f2fs_compressed_file(inode)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	if (f2fs_is_mmap_file(inode) || get_dirty_pages(inode)) {
+ 		ret = -EBUSY;
+ 		goto out;
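reserve_compress_blocks() used to return the number of reserved blocks on success and a negative errno on failure, and the caller folded both into one ret variable. The rework above makes the return value pure status and accumulates the count through the new reserved_blocks out-parameter, so a positive count can never be mistaken for an error (or vice versa). A tiny user-space sketch of that convention, with hypothetical names:

#include <errno.h>
#include <stdio.h>

/* status in the return value, payload through the out-parameter */
static int reserve(unsigned int want, unsigned int *reserved_blocks)
{
	if (want == 0)
		return -EINVAL;
	*reserved_blocks += want;
	return 0;
}

int main(void)
{
	unsigned int total = 0;
	int ret = reserve(4, &total);

	if (!ret)
		ret = reserve(2, &total);
	printf("ret=%d total=%u\n", ret, total);	/* ret=0 total=6 */
	return 0;
}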
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index f550cdeaa6638..405a6077bd83b 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1380,9 +1380,8 @@ static int move_data_block(struct inode *inode, block_t bidx,
+ 	memcpy(page_address(fio.encrypted_page),
+ 				page_address(mpage), PAGE_SIZE);
+ 	f2fs_put_page(mpage, 1);
+-	invalidate_mapping_pages(META_MAPPING(fio.sbi),
+-				fio.old_blkaddr, fio.old_blkaddr);
+-	f2fs_invalidate_compress_page(fio.sbi, fio.old_blkaddr);
++
++	f2fs_invalidate_internal_cache(fio.sbi, fio.old_blkaddr);
+ 
+ 	set_page_dirty(fio.encrypted_page);
+ 	if (clear_page_dirty_for_io(fio.encrypted_page))
+@@ -1405,8 +1404,6 @@ static int move_data_block(struct inode *inode, block_t bidx,
+ 
+ 	f2fs_update_data_blkaddr(&dn, newaddr);
+ 	set_inode_flag(inode, FI_APPEND_WRITE);
+-	if (page->index == 0)
+-		set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
+ put_page_out:
+ 	f2fs_put_page(fio.encrypted_page, 1);
+ recover_block:
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 560bfcad1af23..b31410c4afbc1 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -61,49 +61,31 @@ void f2fs_set_inode_flags(struct inode *inode)
+ 			S_ENCRYPTED|S_VERITY|S_CASEFOLD);
+ }
+ 
+-static void __get_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
++static void __get_inode_rdev(struct inode *inode, struct page *node_page)
+ {
+-	int extra_size = get_extra_isize(inode);
++	__le32 *addr = get_dnode_addr(inode, node_page);
+ 
+ 	if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
+ 			S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
+-		if (ri->i_addr[extra_size])
+-			inode->i_rdev = old_decode_dev(
+-				le32_to_cpu(ri->i_addr[extra_size]));
++		if (addr[0])
++			inode->i_rdev = old_decode_dev(le32_to_cpu(addr[0]));
+ 		else
+-			inode->i_rdev = new_decode_dev(
+-				le32_to_cpu(ri->i_addr[extra_size + 1]));
++			inode->i_rdev = new_decode_dev(le32_to_cpu(addr[1]));
+ 	}
+ }
+ 
+-static int __written_first_block(struct f2fs_sb_info *sbi,
+-					struct f2fs_inode *ri)
++static void __set_inode_rdev(struct inode *inode, struct page *node_page)
+ {
+-	block_t addr = le32_to_cpu(ri->i_addr[offset_in_addr(ri)]);
+-
+-	if (!__is_valid_data_blkaddr(addr))
+-		return 1;
+-	if (!f2fs_is_valid_blkaddr(sbi, addr, DATA_GENERIC_ENHANCE)) {
+-		f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
+-		return -EFSCORRUPTED;
+-	}
+-	return 0;
+-}
+-
+-static void __set_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
+-{
+-	int extra_size = get_extra_isize(inode);
++	__le32 *addr = get_dnode_addr(inode, node_page);
+ 
+ 	if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
+ 		if (old_valid_dev(inode->i_rdev)) {
+-			ri->i_addr[extra_size] =
+-				cpu_to_le32(old_encode_dev(inode->i_rdev));
+-			ri->i_addr[extra_size + 1] = 0;
++			addr[0] = cpu_to_le32(old_encode_dev(inode->i_rdev));
++			addr[1] = 0;
+ 		} else {
+-			ri->i_addr[extra_size] = 0;
+-			ri->i_addr[extra_size + 1] =
+-				cpu_to_le32(new_encode_dev(inode->i_rdev));
+-			ri->i_addr[extra_size + 2] = 0;
++			addr[0] = 0;
++			addr[1] = cpu_to_le32(new_encode_dev(inode->i_rdev));
++			addr[2] = 0;
+ 		}
+ 	}
+ }
+@@ -398,7 +380,6 @@ static int do_read_inode(struct inode *inode)
+ 	struct page *node_page;
+ 	struct f2fs_inode *ri;
+ 	projid_t i_projid;
+-	int err;
+ 
+ 	/* Check if ino is within scope */
+ 	if (f2fs_check_nid_range(sbi, inode->i_ino))
+@@ -478,17 +459,7 @@ static int do_read_inode(struct inode *inode)
+ 	}
+ 
+ 	/* get rdev by using inline_info */
+-	__get_inode_rdev(inode, ri);
+-
+-	if (S_ISREG(inode->i_mode)) {
+-		err = __written_first_block(sbi, ri);
+-		if (err < 0) {
+-			f2fs_put_page(node_page, 1);
+-			return err;
+-		}
+-		if (!err)
+-			set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
+-	}
++	__get_inode_rdev(inode, node_page);
+ 
+ 	if (!f2fs_need_inode_block_update(sbi, inode->i_ino))
+ 		fi->last_disk_size = inode->i_size;
+@@ -761,7 +732,7 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
+ 		}
+ 	}
+ 
+-	__set_inode_rdev(inode, ri);
++	__set_inode_rdev(inode, node_page);
+ 
+ 	/* deleted inode */
+ 	if (inode->i_nlink == 0)
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 7f71bae2c83b5..9cdf3f36d1b90 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -853,7 +853,7 @@ static int f2fs_mknod(struct mnt_idmap *idmap, struct inode *dir,
+ 
+ static int __f2fs_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
+ 			  struct file *file, umode_t mode, bool is_whiteout,
+-			  struct inode **new_inode)
++			  struct inode **new_inode, struct f2fs_filename *fname)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+ 	struct inode *inode;
+@@ -881,7 +881,7 @@ static int __f2fs_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
+ 	if (err)
+ 		goto out;
+ 
+-	err = f2fs_do_tmpfile(inode, dir);
++	err = f2fs_do_tmpfile(inode, dir, fname);
+ 	if (err)
+ 		goto release_out;
+ 
+@@ -932,22 +932,24 @@ static int f2fs_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
+ 	if (!f2fs_is_checkpoint_ready(sbi))
+ 		return -ENOSPC;
+ 
+-	err = __f2fs_tmpfile(idmap, dir, file, mode, false, NULL);
++	err = __f2fs_tmpfile(idmap, dir, file, mode, false, NULL, NULL);
+ 
+ 	return finish_open_simple(file, err);
+ }
+ 
+ static int f2fs_create_whiteout(struct mnt_idmap *idmap,
+-				struct inode *dir, struct inode **whiteout)
++				struct inode *dir, struct inode **whiteout,
++				struct f2fs_filename *fname)
+ {
+-	return __f2fs_tmpfile(idmap, dir, NULL,
+-				S_IFCHR | WHITEOUT_MODE, true, whiteout);
++	return __f2fs_tmpfile(idmap, dir, NULL, S_IFCHR | WHITEOUT_MODE,
++						true, whiteout, fname);
+ }
+ 
+ int f2fs_get_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
+ 		     struct inode **new_inode)
+ {
+-	return __f2fs_tmpfile(idmap, dir, NULL, S_IFREG, false, new_inode);
++	return __f2fs_tmpfile(idmap, dir, NULL, S_IFREG,
++				false, new_inode, NULL);
+ }
+ 
+ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+@@ -990,7 +992,14 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 	}
+ 
+ 	if (flags & RENAME_WHITEOUT) {
+-		err = f2fs_create_whiteout(idmap, old_dir, &whiteout);
++		struct f2fs_filename fname;
++
++		err = f2fs_setup_filename(old_dir, &old_dentry->d_name,
++							0, &fname);
++		if (err)
++			return err;
++
++		err = f2fs_create_whiteout(idmap, old_dir, &whiteout, &fname);
+ 		if (err)
+ 			return err;
+ 	}
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 9b546fd210100..2ea9c99e7dcb7 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -852,21 +852,29 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
+ 
+ 	if (is_inode_flag_set(dn->inode, FI_COMPRESSED_FILE) &&
+ 					f2fs_sb_has_readonly(sbi)) {
+-		unsigned int c_len = f2fs_cluster_blocks_are_contiguous(dn);
++		unsigned int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
++		unsigned int ofs_in_node = dn->ofs_in_node;
++		pgoff_t fofs = index;
++		unsigned int c_len;
+ 		block_t blkaddr;
+ 
++		/* should align fofs and ofs_in_node to cluster_size */
++		if (fofs % cluster_size) {
++			fofs = round_down(fofs, cluster_size);
++			ofs_in_node = round_down(ofs_in_node, cluster_size);
++		}
++
++		c_len = f2fs_cluster_blocks_are_contiguous(dn, ofs_in_node);
+ 		if (!c_len)
+ 			goto out;
+ 
+-		blkaddr = f2fs_data_blkaddr(dn);
++		blkaddr = data_blkaddr(dn->inode, dn->node_page, ofs_in_node);
+ 		if (blkaddr == COMPRESS_ADDR)
+ 			blkaddr = data_blkaddr(dn->inode, dn->node_page,
+-						dn->ofs_in_node + 1);
++						ofs_in_node + 1);
+ 
+ 		f2fs_update_read_extent_tree_range_compressed(dn->inode,
+-					index, blkaddr,
+-					F2FS_I(dn->inode)->i_cluster_size,
+-					c_len);
++					fofs, blkaddr, cluster_size, c_len);
+ 	}
+ out:
+ 	return 0;
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index d0f24ccbd1ac6..aad1d1a9b3d64 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -611,6 +611,19 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
+ 	return 0;
+ }
+ 
++static int f2fs_reserve_new_block_retry(struct dnode_of_data *dn)
++{
++	int i, err = 0;
++
++	for (i = DEFAULT_FAILURE_RETRY_COUNT; i > 0; i--) {
++		err = f2fs_reserve_new_block(dn);
++		if (!err)
++			break;
++	}
++
++	return err;
++}
++
+ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
+ 					struct page *page)
+ {
+@@ -712,14 +725,8 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
+ 		 */
+ 		if (dest == NEW_ADDR) {
+ 			f2fs_truncate_data_blocks_range(&dn, 1);
+-			do {
+-				err = f2fs_reserve_new_block(&dn);
+-				if (err == -ENOSPC) {
+-					f2fs_bug_on(sbi, 1);
+-					break;
+-				}
+-			} while (err &&
+-				IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
++
++			err = f2fs_reserve_new_block_retry(&dn);
+ 			if (err)
+ 				goto err;
+ 			continue;
+@@ -727,16 +734,8 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
+ 
+ 		/* dest is valid block, try to recover from src to dest */
+ 		if (f2fs_is_valid_blkaddr(sbi, dest, META_POR)) {
+-
+ 			if (src == NULL_ADDR) {
+-				do {
+-					err = f2fs_reserve_new_block(&dn);
+-					if (err == -ENOSPC) {
+-						f2fs_bug_on(sbi, 1);
+-						break;
+-					}
+-				} while (err &&
+-					IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
++				err = f2fs_reserve_new_block_retry(&dn);
+ 				if (err)
+ 					goto err;
+ 			}
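The two open-coded loops removed above had the same shape, and the old f2fs_bug_on(sbi, 1) on -ENOSPC turned an injected failure into a crash. The new f2fs_reserve_new_block_retry() simply retries the allocation up to DEFAULT_FAILURE_RETRY_COUNT times (8 when fault injection is compiled in, 1 otherwise, per the f2fs.h hunk earlier). Its shape, modelled as standalone C with a hypothetical flaky allocator:

#include <errno.h>
#include <stdio.h>

#define DEFAULT_FAILURE_RETRY_COUNT 8

static int flaky_alloc(void)
{
	static int calls;

	return ++calls < 3 ? -ENOSPC : 0;	/* fails twice, then works */
}

static int alloc_retry(void)
{
	int i, err = 0;

	for (i = DEFAULT_FAILURE_RETRY_COUNT; i > 0; i--) {
		err = flaky_alloc();
		if (!err)
			break;
	}
	return err;
}

int main(void)
{
	printf("alloc_retry() = %d\n", alloc_retry());
	return 0;
}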
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 727d016318f98..dfc7fd8aa1e8f 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -248,7 +248,7 @@ static int __replace_atomic_write_block(struct inode *inode, pgoff_t index,
+ 	} else {
+ 		blkcnt_t count = 1;
+ 
+-		err = inc_valid_block_count(sbi, inode, &count);
++		err = inc_valid_block_count(sbi, inode, &count, true);
+ 		if (err) {
+ 			f2fs_put_dnode(&dn);
+ 			return err;
+@@ -2495,8 +2495,7 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
+ 	if (addr == NEW_ADDR || addr == COMPRESS_ADDR)
+ 		return;
+ 
+-	invalidate_mapping_pages(META_MAPPING(sbi), addr, addr);
+-	f2fs_invalidate_compress_page(sbi, addr);
++	f2fs_invalidate_internal_cache(sbi, addr);
+ 
+ 	/* add it into sit main buffer */
+ 	down_write(&sit_i->sentry_lock);
+@@ -3490,12 +3489,12 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+ 	locate_dirty_segment(sbi, GET_SEGNO(sbi, old_blkaddr));
+ 	locate_dirty_segment(sbi, GET_SEGNO(sbi, *new_blkaddr));
+ 
+-	if (IS_DATASEG(type))
++	if (IS_DATASEG(curseg->seg_type))
+ 		atomic64_inc(&sbi->allocated_data_blocks);
+ 
+ 	up_write(&sit_i->sentry_lock);
+ 
+-	if (page && IS_NODESEG(type)) {
++	if (page && IS_NODESEG(curseg->seg_type)) {
+ 		fill_node_footer_blkaddr(page, NEXT_FREE_BLKADDR(sbi, curseg));
+ 
+ 		f2fs_inode_chksum_set(sbi, page);
+@@ -3557,11 +3556,8 @@ static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info *fio)
+ reallocate:
+ 	f2fs_allocate_data_block(fio->sbi, fio->page, fio->old_blkaddr,
+ 			&fio->new_blkaddr, sum, type, fio);
+-	if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO) {
+-		invalidate_mapping_pages(META_MAPPING(fio->sbi),
+-					fio->old_blkaddr, fio->old_blkaddr);
+-		f2fs_invalidate_compress_page(fio->sbi, fio->old_blkaddr);
+-	}
++	if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO)
++		f2fs_invalidate_internal_cache(fio->sbi, fio->old_blkaddr);
+ 
+ 	/* writeout dirty page into bdev */
+ 	f2fs_submit_page_write(fio);
+@@ -3655,8 +3651,7 @@ int f2fs_inplace_write_data(struct f2fs_io_info *fio)
+ 	}
+ 
+ 	if (fio->post_read)
+-		invalidate_mapping_pages(META_MAPPING(sbi),
+-				fio->new_blkaddr, fio->new_blkaddr);
++		f2fs_truncate_meta_inode_pages(sbi, fio->new_blkaddr, 1);
+ 
+ 	stat_inc_inplace_blocks(fio->sbi);
+ 
+@@ -3757,9 +3752,7 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 		update_sit_entry(sbi, new_blkaddr, 1);
+ 	}
+ 	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) {
+-		invalidate_mapping_pages(META_MAPPING(sbi),
+-					old_blkaddr, old_blkaddr);
+-		f2fs_invalidate_compress_page(sbi, old_blkaddr);
++		f2fs_invalidate_internal_cache(sbi, old_blkaddr);
+ 		if (!from_gc)
+ 			update_segment_mtime(sbi, old_blkaddr, 0);
+ 		update_sit_entry(sbi, old_blkaddr, -1);
+@@ -3848,7 +3841,7 @@ void f2fs_wait_on_block_writeback_range(struct inode *inode, block_t blkaddr,
+ 	for (i = 0; i < len; i++)
+ 		f2fs_wait_on_block_writeback(inode, blkaddr + i);
+ 
+-	invalidate_mapping_pages(META_MAPPING(sbi), blkaddr, blkaddr + len - 1);
++	f2fs_truncate_meta_inode_pages(sbi, blkaddr, len);
+ }
+ 
+ static int read_compacted_summaries(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 8129be788bd56..c77a562831493 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -573,23 +573,22 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+ 			unsigned int node_blocks, unsigned int dent_blocks)
+ {
+ 
+-	unsigned int segno, left_blocks;
++	unsigned segno, left_blocks;
+ 	int i;
+ 
+-	/* check current node segment */
++	/* check current node sections in the worst case. */
+ 	for (i = CURSEG_HOT_NODE; i <= CURSEG_COLD_NODE; i++) {
+ 		segno = CURSEG_I(sbi, i)->segno;
+-		left_blocks = f2fs_usable_blks_in_seg(sbi, segno) -
+-				get_seg_entry(sbi, segno)->ckpt_valid_blocks;
+-
++		left_blocks = CAP_BLKS_PER_SEC(sbi) -
++				get_ckpt_valid_blocks(sbi, segno, true);
+ 		if (node_blocks > left_blocks)
+ 			return false;
+ 	}
+ 
+-	/* check current data segment */
++	/* check current data section for dentry blocks. */
+ 	segno = CURSEG_I(sbi, CURSEG_HOT_DATA)->segno;
+-	left_blocks = f2fs_usable_blks_in_seg(sbi, segno) -
+-			get_seg_entry(sbi, segno)->ckpt_valid_blocks;
++	left_blocks = CAP_BLKS_PER_SEC(sbi) -
++			get_ckpt_valid_blocks(sbi, segno, true);
+ 	if (dent_blocks > left_blocks)
+ 		return false;
+ 	return true;
+@@ -638,7 +637,7 @@ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
+ 
+ 	if (free_secs > upper_secs)
+ 		return false;
+-	else if (free_secs <= lower_secs)
++	if (free_secs <= lower_secs)
+ 		return true;
+ 	return !curseg_space;
+ }
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 5dfbc6b4c0ac8..4151293fc4d67 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -663,7 +663,7 @@ static int f2fs_set_lz4hc_level(struct f2fs_sb_info *sbi, const char *str)
+ #ifdef CONFIG_F2FS_FS_ZSTD
+ static int f2fs_set_zstd_level(struct f2fs_sb_info *sbi, const char *str)
+ {
+-	unsigned int level;
++	int level;
+ 	int len = 4;
+ 
+ 	if (strlen(str) == len) {
+@@ -677,9 +677,15 @@ static int f2fs_set_zstd_level(struct f2fs_sb_info *sbi, const char *str)
+ 		f2fs_info(sbi, "wrong format, e.g. <alg_name>:<compr_level>");
+ 		return -EINVAL;
+ 	}
+-	if (kstrtouint(str + 1, 10, &level))
++	if (kstrtoint(str + 1, 10, &level))
+ 		return -EINVAL;
+ 
++	/* f2fs does not support negative compress levels yet */
++	if (level < 0) {
++		f2fs_info(sbi, "do not support negative compress level: %d", level);
++		return -ERANGE;
++	}
++
+ 	if (!f2fs_is_compress_level_valid(COMPRESS_ZSTD, level)) {
+ 		f2fs_info(sbi, "invalid zstd compress level: %d", level);
+ 		return -EINVAL;
+@@ -2776,7 +2782,7 @@ int f2fs_dquot_initialize(struct inode *inode)
+ 	return dquot_initialize(inode);
+ }
+ 
+-static struct dquot **f2fs_get_dquots(struct inode *inode)
++static struct dquot __rcu **f2fs_get_dquots(struct inode *inode)
+ {
+ 	return F2FS_I(inode)->i_dquot;
+ }
+@@ -3938,11 +3944,6 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
+ 		return 0;
+ 
+ 	zone_sectors = bdev_zone_sectors(bdev);
+-	if (!is_power_of_2(zone_sectors)) {
+-		f2fs_err(sbi, "F2FS does not support non power of 2 zone sizes\n");
+-		return -EINVAL;
+-	}
+-
+ 	if (sbi->blocks_per_blkz && sbi->blocks_per_blkz !=
+ 				SECTOR_TO_BLOCK(zone_sectors))
+ 		return -EINVAL;
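zstd itself accepts negative (fast-mode) compression levels, so parsing the mount option with kstrtouint() rejected "-3" with a generic parse error. The change above parses a signed int and then reports the unsupported negative level explicitly with -ERANGE. Equivalent user-space parsing logic, with strtol standing in for kstrtoint:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_level(const char *str, int *level)
{
	char *end;
	long val = strtol(str, &end, 10);

	if (end == str || *end != '\0')
		return -EINVAL;			/* not a number at all */
	if (val < 0) {
		fprintf(stderr, "negative compress level: %ld\n", val);
		return -ERANGE;			/* parsed, but unsupported */
	}
	*level = (int)val;
	return 0;
}

int main(void)
{
	int level;

	printf("\"3\"  -> %d\n", parse_level("3", &level));
	printf("\"-3\" -> %d\n", parse_level("-3", &level));
	return 0;
}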
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index c80a6acad742f..3ff707bf2743a 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -268,7 +268,7 @@ static int f_getowner_uids(struct file *filp, unsigned long arg)
+ }
+ #endif
+ 
+-static bool rw_hint_valid(enum rw_hint hint)
++static bool rw_hint_valid(u64 hint)
+ {
+ 	switch (hint) {
+ 	case RWH_WRITE_LIFE_NOT_SET:
+@@ -288,19 +288,17 @@ static long fcntl_rw_hint(struct file *file, unsigned int cmd,
+ {
+ 	struct inode *inode = file_inode(file);
+ 	u64 __user *argp = (u64 __user *)arg;
+-	enum rw_hint hint;
+-	u64 h;
++	u64 hint;
+ 
+ 	switch (cmd) {
+ 	case F_GET_RW_HINT:
+-		h = inode->i_write_hint;
+-		if (copy_to_user(argp, &h, sizeof(*argp)))
++		hint = inode->i_write_hint;
++		if (copy_to_user(argp, &hint, sizeof(*argp)))
+ 			return -EFAULT;
+ 		return 0;
+ 	case F_SET_RW_HINT:
+-		if (copy_from_user(&h, argp, sizeof(h)))
++		if (copy_from_user(&hint, argp, sizeof(hint)))
+ 			return -EFAULT;
+-		hint = (enum rw_hint) h;
+ 		if (!rw_hint_valid(hint))
+ 			return -EINVAL;
+ 
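The fcntl change validates the user-supplied hint while it is still a full u64, instead of first narrowing it to enum rw_hint and then validating the possibly-truncated value, which could make an out-of-range input look valid. A user-space illustration of why the order matters; the enum values here are hypothetical stand-ins for the RWH_* constants:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum rw_hint { WRITE_LIFE_NOT_SET, WRITE_LIFE_SHORT, NR_HINTS };

/* validate while the value is still the full 64 bits */
static bool rw_hint_valid(uint64_t hint)
{
	return hint < NR_HINTS;
}

int main(void)
{
	uint64_t evil = (1ULL << 32) | WRITE_LIFE_SHORT;

	/* narrowing first truncates to a "valid" value */
	printf("cast first:  %d\n", rw_hint_valid((uint32_t)evil));
	printf("check first: %d\n", rw_hint_valid(evil));
	return 0;
}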
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index 18b3ba8dc8ead..57a12614addfd 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -36,7 +36,7 @@ static long do_sys_name_to_handle(const struct path *path,
+ 	if (f_handle.handle_bytes > MAX_HANDLE_SZ)
+ 		return -EINVAL;
+ 
+-	handle = kmalloc(sizeof(struct file_handle) + f_handle.handle_bytes,
++	handle = kzalloc(sizeof(struct file_handle) + f_handle.handle_bytes,
+ 			 GFP_KERNEL);
+ 	if (!handle)
+ 		return -ENOMEM;
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index f72df2babe561..fc5c64712318a 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1843,16 +1843,10 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+ 	if (unlikely(error)) {
+ 		/*
+ 		 * Let the filesystem know what portion of the current page
+-		 * failed to map. If the page hasn't been added to ioend, it
+-		 * won't be affected by I/O completion and we must unlock it
+-		 * now.
++		 * failed to map.
+ 		 */
+ 		if (wpc->ops->discard_folio)
+ 			wpc->ops->discard_folio(folio, pos);
+-		if (!count) {
+-			folio_unlock(folio);
+-			goto done;
+-		}
+ 	}
+ 
+ 	/*
+@@ -1861,6 +1855,16 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+ 	 * all the dirty bits in the folio here.
+ 	 */
+ 	iomap_clear_range_dirty(folio, 0, folio_size(folio));
++
++	/*
++	 * If the page hasn't been added to the ioend, it won't be affected by
++	 * I/O completion and we must unlock it now.
++	 */
++	if (error && !count) {
++		folio_unlock(folio);
++		goto done;
++	}
++
+ 	folio_start_writeback(folio);
+ 	folio_unlock(folio);
+ 
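The iomap hunk moves the early unlock for the "error and nothing mapped" case to after iomap_clear_range_dirty(), so a folio that never made it into an ioend still gets its dirty bits cleared before it is unlocked, rather than being left dirty with no writeback in flight. The resulting control flow as a hypothetical standalone skeleton, with stubs in place of the real folio operations:

#include <stdbool.h>
#include <stdio.h>

static void discard_folio(void)     { puts("  discard_folio"); }
static void clear_range_dirty(void) { puts("  clear dirty bits"); }
static void start_writeback(void)   { puts("  start writeback"); }
static void unlock_folio(void)      { puts("  unlock"); }

/* skeleton of the reordered iomap_writepage_map() tail */
static void writepage_tail(bool error, int count)
{
	if (error)
		discard_folio();	/* report the failed range */

	clear_range_dirty();		/* now unconditional, even on error */

	if (error && !count) {		/* nothing was added to the ioend */
		unlock_folio();
		return;
	}
	start_writeback();
	unlock_folio();
}

int main(void)
{
	puts("error, empty ioend:");
	writepage_tail(true, 0);
	puts("success:");
	writepage_tail(false, 1);
	return 0;
}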
+diff --git a/fs/jfs/jfs_incore.h b/fs/jfs/jfs_incore.h
+index dd4264aa9bedd..10934f9a11be3 100644
+--- a/fs/jfs/jfs_incore.h
++++ b/fs/jfs/jfs_incore.h
+@@ -92,7 +92,7 @@ struct jfs_inode_info {
+ 		} link;
+ 	} u;
+ #ifdef CONFIG_QUOTA
+-	struct dquot *i_dquot[MAXQUOTAS];
++	struct dquot __rcu *i_dquot[MAXQUOTAS];
+ #endif
+ 	u32 dev;	/* will die when we get wide dev_t */
+ 	struct inode	vfs_inode;
+diff --git a/fs/jfs/super.c b/fs/jfs/super.c
+index 8d8e556bd6104..ff135a43b5b7b 100644
+--- a/fs/jfs/super.c
++++ b/fs/jfs/super.c
+@@ -824,7 +824,7 @@ static ssize_t jfs_quota_write(struct super_block *sb, int type,
+ 	return len - towrite;
+ }
+ 
+-static struct dquot **jfs_get_dquots(struct inode *inode)
++static struct dquot __rcu **jfs_get_dquots(struct inode *inode)
+ {
+ 	return JFS_IP(inode)->i_dquot;
+ }
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index ef817a0475ffa..3e724cb7ef01d 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -2016,7 +2016,7 @@ static void ff_layout_cancel_io(struct pnfs_layout_segment *lseg)
+ 	for (idx = 0; idx < flseg->mirror_array_cnt; idx++) {
+ 		mirror = flseg->mirror_array[idx];
+ 		mirror_ds = mirror->mirror_ds;
+-		if (!mirror_ds)
++		if (IS_ERR_OR_NULL(mirror_ds))
+ 			continue;
+ 		ds = mirror->mirror_ds->ds;
+ 		if (!ds)
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index b05717fe0d4e4..60a3c28784e0b 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -307,11 +307,11 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
+ 	struct inode *inode = sreq->rreq->inode;
+ 	struct nfs_open_context *ctx = sreq->rreq->netfs_priv;
+ 	struct page *page;
++	unsigned long idx;
+ 	int err;
+ 	pgoff_t start = (sreq->start + sreq->transferred) >> PAGE_SHIFT;
+ 	pgoff_t last = ((sreq->start + sreq->len -
+ 			 sreq->transferred - 1) >> PAGE_SHIFT);
+-	XA_STATE(xas, &sreq->rreq->mapping->i_pages, start);
+ 
+ 	nfs_pageio_init_read(&pgio, inode, false,
+ 			     &nfs_async_read_completion_ops);
+@@ -322,19 +322,14 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
+ 
+ 	pgio.pg_netfs = netfs; /* used in completion */
+ 
+-	xas_lock(&xas);
+-	xas_for_each(&xas, page, last) {
++	xa_for_each_range(&sreq->rreq->mapping->i_pages, idx, page, start, last) {
+ 		/* nfs_read_add_folio() may schedule() due to pNFS layout and other RPCs  */
+-		xas_pause(&xas);
+-		xas_unlock(&xas);
+ 		err = nfs_read_add_folio(&pgio, ctx, page_folio(page));
+ 		if (err < 0) {
+ 			netfs->error = err;
+ 			goto out;
+ 		}
+-		xas_lock(&xas);
+ 	}
+-	xas_unlock(&xas);
+ out:
+ 	nfs_pageio_complete_read(&pgio);
+ 	nfs_netfs_put(netfs);
+diff --git a/fs/nfs/nfs42.h b/fs/nfs/nfs42.h
+index b59876b01a1e3..0282d93c8bccb 100644
+--- a/fs/nfs/nfs42.h
++++ b/fs/nfs/nfs42.h
+@@ -55,11 +55,14 @@ int nfs42_proc_removexattr(struct inode *inode, const char *name);
+  * They would be 7 bytes long in the eventual buffer ("user.x\0"), and
+  * 8 bytes long XDR-encoded.
+  *
+- * Include the trailing eof word as well.
++ * Include the trailing eof word as well and make the result a multiple
++ * of 4 bytes.
+  */
+ static inline u32 nfs42_listxattr_xdrsize(u32 buflen)
+ {
+-	return ((buflen / (XATTR_USER_PREFIX_LEN + 2)) * 8) + 4;
++	u32 size = 8 * buflen / (XATTR_USER_PREFIX_LEN + 2) + 4;
++
++	return (size + 3) & ~3;
+ }
+ #endif /* CONFIG_NFS_V4_2 */
+ #endif /* __LINUX_FS_NFS_NFS4_2_H */
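XDR encodes data in 4-byte words, so the listxattr reply-size estimate must be word-aligned; (size + 3) & ~3 is the usual round-up-to-a-power-of-two idiom used in the fix above. A quick standalone check of its behaviour:

#include <stdint.h>
#include <stdio.h>

static uint32_t round_up4(uint32_t size)
{
	return (size + 3) & ~(uint32_t)3;
}

int main(void)
{
	for (uint32_t s = 4; s <= 9; s++)
		printf("%u -> %u\n", s, round_up4(s));
	return 0;
}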
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 23819a7565085..b2ff8c7a21494 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10615,29 +10615,33 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+ 	ssize_t error, error2, error3;
++	size_t left = size;
+ 
+-	error = generic_listxattr(dentry, list, size);
++	error = generic_listxattr(dentry, list, left);
+ 	if (error < 0)
+ 		return error;
+ 	if (list) {
+ 		list += error;
+-		size -= error;
++		left -= error;
+ 	}
+ 
+-	error2 = nfs4_listxattr_nfs4_label(d_inode(dentry), list, size);
++	error2 = nfs4_listxattr_nfs4_label(d_inode(dentry), list, left);
+ 	if (error2 < 0)
+ 		return error2;
+ 
+ 	if (list) {
+ 		list += error2;
+-		size -= error2;
++		left -= error2;
+ 	}
+ 
+-	error3 = nfs4_listxattr_nfs4_user(d_inode(dentry), list, size);
++	error3 = nfs4_listxattr_nfs4_user(d_inode(dentry), list, left);
+ 	if (error3 < 0)
+ 		return error3;
+ 
+-	return error + error2 + error3;
++	error += error2 + error3;
++	if (size && error > size)
++		return -ERANGE;
++	return error;
+ }
+ 
+ static void nfs4_enable_swap(struct inode *inode)
+diff --git a/fs/nfs/nfsroot.c b/fs/nfs/nfsroot.c
+index 7600100ba26f0..432612d224374 100644
+--- a/fs/nfs/nfsroot.c
++++ b/fs/nfs/nfsroot.c
+@@ -175,10 +175,10 @@ static int __init root_nfs_cat(char *dest, const char *src,
+ 	size_t len = strlen(dest);
+ 
+ 	if (len && dest[len - 1] != ',')
+-		if (strlcat(dest, ",", destlen) > destlen)
++		if (strlcat(dest, ",", destlen) >= destlen)
+ 			return -1;
+ 
+-	if (strlcat(dest, src, destlen) > destlen)
++	if (strlcat(dest, src, destlen) >= destlen)
+ 		return -1;
+ 	return 0;
+ }
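strlcat() returns the total length of the string it tried to create, so truncation occurred exactly when that value is greater than or equal to the buffer size; the old "> destlen" test missed the boundary case where the concatenation needed every byte including the NUL. A small demonstration (a strlcat work-alike is provided since glibc does not ship one):

#include <stdio.h>
#include <string.h>

static size_t my_strlcat(char *dst, const char *src, size_t size)
{
	size_t dlen = strlen(dst), slen = strlen(src);

	if (dlen < size)
		snprintf(dst + dlen, size - dlen, "%s", src);
	return dlen + slen;	/* length it tried to create */
}

int main(void)
{
	char buf[7] = "abc";
	size_t n = my_strlcat(buf, "defg", sizeof(buf));

	printf("ret=%zu buf=\"%s\"\n", n, buf);		/* ret=7 buf="abcdef" */
	printf("old test (>):  %d\n", n > sizeof(buf));	/* 0: misses it */
	printf("new test (>=): %d\n", n >= sizeof(buf));/* 1: truncated */
	return 0;
}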
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index afd23910f3bff..88e061bd711b7 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -919,6 +919,8 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv,
+ 	dprintk("--> %s DS %s\n", __func__, ds->ds_remotestr);
+ 
+ 	list_for_each_entry(da, &ds->ds_addrs, da_node) {
++		char servername[48];
++
+ 		dprintk("%s: DS %s: trying address %s\n",
+ 			__func__, ds->ds_remotestr, da->da_remotestr);
+ 
+@@ -929,6 +931,7 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv,
+ 				.dstaddr = (struct sockaddr *)&da->da_addr,
+ 				.addrlen = da->da_addrlen,
+ 				.servername = clp->cl_hostname,
++				.xprtsec = clp->cl_xprtsec,
+ 			};
+ 			struct nfs4_add_xprt_data xprtdata = {
+ 				.clp = clp,
+@@ -938,10 +941,45 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv,
+ 				.data = &xprtdata,
+ 			};
+ 
+-			if (da->da_transport != clp->cl_proto)
++			if (da->da_transport != clp->cl_proto &&
++					clp->cl_proto != XPRT_TRANSPORT_TCP_TLS)
+ 				continue;
++			if (da->da_transport == XPRT_TRANSPORT_TCP &&
++				mds_srv->nfs_client->cl_proto ==
++					XPRT_TRANSPORT_TCP_TLS) {
++				struct sockaddr *addr =
++					(struct sockaddr *)&da->da_addr;
++				struct sockaddr_in *sin =
++					(struct sockaddr_in *)&da->da_addr;
++				struct sockaddr_in6 *sin6 =
++					(struct sockaddr_in6 *)&da->da_addr;
++
++				/* for NFS with TLS we need to supply the correct
++				 * servername of the trunked transport, not the
++				 * servername of the main transport stored in
++				 * clp->cl_hostname, and set the protocol to
++				 * indicate that TLS should be used
++				 */
++				servername[0] = '\0';
++				switch(addr->sa_family) {
++				case AF_INET:
++					snprintf(servername, sizeof(servername),
++						"%pI4", &sin->sin_addr.s_addr);
++					break;
++				case AF_INET6:
++					snprintf(servername, sizeof(servername),
++						"%pI6", &sin6->sin6_addr);
++					break;
++				default:
++					/* do not consider this address */
++					continue;
++				}
++				xprt_args.ident = XPRT_TRANSPORT_TCP_TLS;
++				xprt_args.servername = servername;
++			}
+ 			if (da->da_addr.ss_family != clp->cl_addr.ss_family)
+ 				continue;
++
+ 			/**
+ 			* Test this address for session trunking and
+ 			* add as an alias
+@@ -953,6 +991,10 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv,
+ 			if (xprtdata.cred)
+ 				put_cred(xprtdata.cred);
+ 		} else {
++			if (da->da_transport == XPRT_TRANSPORT_TCP &&
++				mds_srv->nfs_client->cl_proto ==
++					XPRT_TRANSPORT_TCP_TLS)
++				da->da_transport = XPRT_TRANSPORT_TCP_TLS;
+ 			clp = nfs4_set_ds_client(mds_srv,
+ 						&da->da_addr,
+ 						da->da_addrlen,
+diff --git a/fs/ocfs2/inode.h b/fs/ocfs2/inode.h
+index 82b28fdacc7e9..accf03d4765ed 100644
+--- a/fs/ocfs2/inode.h
++++ b/fs/ocfs2/inode.h
+@@ -65,7 +65,7 @@ struct ocfs2_inode_info
+ 	tid_t i_sync_tid;
+ 	tid_t i_datasync_tid;
+ 
+-	struct dquot *i_dquot[MAXQUOTAS];
++	struct dquot __rcu *i_dquot[MAXQUOTAS];
+ };
+ 
+ /*
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 6b906424902b4..1259fe02cd53b 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -122,7 +122,7 @@ static int ocfs2_susp_quotas(struct ocfs2_super *osb, int unsuspend);
+ static int ocfs2_enable_quotas(struct ocfs2_super *osb);
+ static void ocfs2_disable_quotas(struct ocfs2_super *osb);
+ 
+-static struct dquot **ocfs2_get_dquots(struct inode *inode)
++static struct dquot __rcu **ocfs2_get_dquots(struct inode *inode)
+ {
+ 	return OCFS2_I(inode)->i_dquot;
+ }
+diff --git a/fs/overlayfs/params.c b/fs/overlayfs/params.c
+index 3fe2dde1598f9..488f920f79d28 100644
+--- a/fs/overlayfs/params.c
++++ b/fs/overlayfs/params.c
+@@ -280,12 +280,20 @@ static int ovl_mount_dir_check(struct fs_context *fc, const struct path *path,
+ {
+ 	struct ovl_fs_context *ctx = fc->fs_private;
+ 
+-	if (ovl_dentry_weird(path->dentry))
+-		return invalfc(fc, "filesystem on %s not supported", name);
+-
+ 	if (!d_is_dir(path->dentry))
+ 		return invalfc(fc, "%s is not a directory", name);
+ 
++	/*
++	 * Root dentries of case-insensitive capable filesystems might
++	 * not have the dentry operations set, but still be incompatible
++	 * with overlayfs.  Check explicitly to prevent post-mount
++	 * failures.
++	 */
++	if (sb_has_encoding(path->mnt->mnt_sb))
++		return invalfc(fc, "case-insensitive capable filesystem on %s not supported", name);
++
++	if (ovl_dentry_weird(path->dentry))
++		return invalfc(fc, "filesystem on %s not supported", name);
+ 
+ 	/*
+ 	 * Check whether upper path is read-only here to report failures
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index d41c20d1b5e82..4ca72b08d919a 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -182,25 +182,21 @@ static int pstore_unlink(struct inode *dir, struct dentry *dentry)
+ {
+ 	struct pstore_private *p = d_inode(dentry)->i_private;
+ 	struct pstore_record *record = p->record;
+-	int rc = 0;
+ 
+ 	if (!record->psi->erase)
+ 		return -EPERM;
+ 
+ 	/* Make sure we can't race while removing this file. */
+-	mutex_lock(&records_list_lock);
+-	if (!list_empty(&p->list))
+-		list_del_init(&p->list);
+-	else
+-		rc = -ENOENT;
+-	p->dentry = NULL;
+-	mutex_unlock(&records_list_lock);
+-	if (rc)
+-		return rc;
+-
+-	mutex_lock(&record->psi->read_mutex);
+-	record->psi->erase(record);
+-	mutex_unlock(&record->psi->read_mutex);
++	scoped_guard(mutex, &records_list_lock) {
++		if (!list_empty(&p->list))
++			list_del_init(&p->list);
++		else
++			return -ENOENT;
++		p->dentry = NULL;
++	}
++
++	scoped_guard(mutex, &record->psi->read_mutex)
++		record->psi->erase(record);
+ 
+ 	return simple_unlink(dir, dentry);
+ }
+@@ -292,19 +288,16 @@ static struct dentry *psinfo_lock_root(void)
+ {
+ 	struct dentry *root;
+ 
+-	mutex_lock(&pstore_sb_lock);
++	guard(mutex)(&pstore_sb_lock);
+ 	/*
+ 	 * Having no backend is fine -- no records appear.
+ 	 * Not being mounted is fine -- nothing to do.
+ 	 */
+-	if (!psinfo || !pstore_sb) {
+-		mutex_unlock(&pstore_sb_lock);
++	if (!psinfo || !pstore_sb)
+ 		return NULL;
+-	}
+ 
+ 	root = pstore_sb->s_root;
+ 	inode_lock(d_inode(root));
+-	mutex_unlock(&pstore_sb_lock);
+ 
+ 	return root;
+ }
+@@ -313,29 +306,25 @@ int pstore_put_backend_records(struct pstore_info *psi)
+ {
+ 	struct pstore_private *pos, *tmp;
+ 	struct dentry *root;
+-	int rc = 0;
+ 
+ 	root = psinfo_lock_root();
+ 	if (!root)
+ 		return 0;
+ 
+-	mutex_lock(&records_list_lock);
+-	list_for_each_entry_safe(pos, tmp, &records_list, list) {
+-		if (pos->record->psi == psi) {
+-			list_del_init(&pos->list);
+-			rc = simple_unlink(d_inode(root), pos->dentry);
+-			if (WARN_ON(rc))
+-				break;
+-			d_drop(pos->dentry);
+-			dput(pos->dentry);
+-			pos->dentry = NULL;
++	scoped_guard(mutex, &records_list_lock) {
++		list_for_each_entry_safe(pos, tmp, &records_list, list) {
++			if (pos->record->psi == psi) {
++				list_del_init(&pos->list);
++				d_invalidate(pos->dentry);
++				simple_unlink(d_inode(root), pos->dentry);
++				pos->dentry = NULL;
++			}
+ 		}
+ 	}
+-	mutex_unlock(&records_list_lock);
+ 
+ 	inode_unlock(d_inode(root));
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ /*
+@@ -355,20 +344,20 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 	if (WARN_ON(!inode_is_locked(d_inode(root))))
+ 		return -EINVAL;
+ 
+-	rc = -EEXIST;
++	guard(mutex)(&records_list_lock);
++
+ 	/* Skip records that are already present in the filesystem. */
+-	mutex_lock(&records_list_lock);
+ 	list_for_each_entry(pos, &records_list, list) {
+ 		if (pos->record->type == record->type &&
+ 		    pos->record->id == record->id &&
+ 		    pos->record->psi == record->psi)
+-			goto fail;
++			return -EEXIST;
+ 	}
+ 
+ 	rc = -ENOMEM;
+ 	inode = pstore_get_inode(root->d_sb);
+ 	if (!inode)
+-		goto fail;
++		return -ENOMEM;
+ 	inode->i_mode = S_IFREG | 0444;
+ 	inode->i_fop = &pstore_file_operations;
+ 	scnprintf(name, sizeof(name), "%s-%s-%llu%s",
+@@ -396,7 +385,6 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 	d_add(dentry, inode);
+ 
+ 	list_add(&private->list, &records_list);
+-	mutex_unlock(&records_list_lock);
+ 
+ 	return 0;
+ 
+@@ -404,8 +392,6 @@ int pstore_mkfile(struct dentry *root, struct pstore_record *record)
+ 	free_pstore_private(private);
+ fail_inode:
+ 	iput(inode);
+-fail:
+-	mutex_unlock(&records_list_lock);
+ 	return rc;
+ }
+ 
+@@ -451,9 +437,8 @@ static int pstore_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (!sb->s_root)
+ 		return -ENOMEM;
+ 
+-	mutex_lock(&pstore_sb_lock);
+-	pstore_sb = sb;
+-	mutex_unlock(&pstore_sb_lock);
++	scoped_guard(mutex, &pstore_sb_lock)
++		pstore_sb = sb;
+ 
+ 	pstore_get_records(0);
+ 
+@@ -468,17 +453,14 @@ static struct dentry *pstore_mount(struct file_system_type *fs_type,
+ 
+ static void pstore_kill_sb(struct super_block *sb)
+ {
+-	mutex_lock(&pstore_sb_lock);
++	guard(mutex)(&pstore_sb_lock);
+ 	WARN_ON(pstore_sb && pstore_sb != sb);
+ 
+ 	kill_litter_super(sb);
+ 	pstore_sb = NULL;
+ 
+-	mutex_lock(&records_list_lock);
++	guard(mutex)(&records_list_lock);
+ 	INIT_LIST_HEAD(&records_list);
+-	mutex_unlock(&records_list_lock);
+-
+-	mutex_unlock(&pstore_sb_lock);
+ }
+ 
+ static struct file_system_type pstore_fs_type = {
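guard() and scoped_guard() come from the kernel's cleanup.h and rely on the compiler's cleanup attribute: the unlock runs automatically when the guard variable goes out of scope, including on every early return, which is what lets pstore_unlink() and friends drop the goto/unlock bookkeeping above. A user-space approximation of the mechanism (simplified; the real macros generate unique variable names and support many lock classes):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void unlocker(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
	puts("  unlocked");
}

/* one guard per scope; comma expression locks, then names the mutex */
#define guard_mutex(m) \
	pthread_mutex_t *_guard __attribute__((cleanup(unlocker))) = \
		(pthread_mutex_lock(m), (m))

static int work(int fail)
{
	guard_mutex(&lock);
	if (fail)
		return -1;	/* lock is still dropped on this path */
	puts("  did work");
	return 0;
}

int main(void)
{
	puts("failing call:");
	work(1);
	puts("normal call:");
	work(0);
	return 0;
}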
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index 58b5de081b571..6f2fa7889daf3 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -399,15 +399,17 @@ int dquot_mark_dquot_dirty(struct dquot *dquot)
+ EXPORT_SYMBOL(dquot_mark_dquot_dirty);
+ 
+ /* Dirtify all the dquots - this can block when journalling */
+-static inline int mark_all_dquot_dirty(struct dquot * const *dquot)
++static inline int mark_all_dquot_dirty(struct dquot __rcu * const *dquots)
+ {
+ 	int ret, err, cnt;
++	struct dquot *dquot;
+ 
+ 	ret = err = 0;
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (dquot[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (dquot)
+ 			/* Even in case of error we have to continue */
+-			ret = mark_dquot_dirty(dquot[cnt]);
++			ret = mark_dquot_dirty(dquot);
+ 		if (!err)
+ 			err = ret;
+ 	}
+@@ -998,14 +1000,14 @@ struct dquot *dqget(struct super_block *sb, struct kqid qid)
+ }
+ EXPORT_SYMBOL(dqget);
+ 
+-static inline struct dquot **i_dquot(struct inode *inode)
++static inline struct dquot __rcu **i_dquot(struct inode *inode)
+ {
+ 	return inode->i_sb->s_op->get_dquots(inode);
+ }
+ 
+ static int dqinit_needed(struct inode *inode, int type)
+ {
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
+ 	int cnt;
+ 
+ 	if (IS_NOQUOTA(inode))
+@@ -1095,14 +1097,16 @@ static void remove_dquot_ref(struct super_block *sb, int type)
+ 		 */
+ 		spin_lock(&dq_data_lock);
+ 		if (!IS_NOQUOTA(inode)) {
+-			struct dquot **dquots = i_dquot(inode);
+-			struct dquot *dquot = dquots[type];
++			struct dquot __rcu **dquots = i_dquot(inode);
++			struct dquot *dquot = srcu_dereference_check(
++				dquots[type], &dquot_srcu,
++				lockdep_is_held(&dq_data_lock));
+ 
+ #ifdef CONFIG_QUOTA_DEBUG
+ 			if (unlikely(inode_get_rsv_space(inode) > 0))
+ 				reserved = 1;
+ #endif
+-			dquots[type] = NULL;
++			rcu_assign_pointer(dquots[type], NULL);
+ 			if (dquot)
+ 				dqput(dquot);
+ 		}
+@@ -1455,7 +1459,8 @@ static int inode_quota_active(const struct inode *inode)
+ static int __dquot_initialize(struct inode *inode, int type)
+ {
+ 	int cnt, init_needed = 0;
+-	struct dquot **dquots, *got[MAXQUOTAS] = {};
++	struct dquot __rcu **dquots;
++	struct dquot *got[MAXQUOTAS] = {};
+ 	struct super_block *sb = inode->i_sb;
+ 	qsize_t rsv;
+ 	int ret = 0;
+@@ -1530,7 +1535,7 @@ static int __dquot_initialize(struct inode *inode, int type)
+ 		if (!got[cnt])
+ 			continue;
+ 		if (!dquots[cnt]) {
+-			dquots[cnt] = got[cnt];
++			rcu_assign_pointer(dquots[cnt], got[cnt]);
+ 			got[cnt] = NULL;
+ 			/*
+ 			 * Make quota reservation system happy if someone
+@@ -1538,12 +1543,16 @@ static int __dquot_initialize(struct inode *inode, int type)
+ 			 */
+ 			rsv = inode_get_rsv_space(inode);
+ 			if (unlikely(rsv)) {
++				struct dquot *dquot = srcu_dereference_check(
++					dquots[cnt], &dquot_srcu,
++					lockdep_is_held(&dq_data_lock));
++
+ 				spin_lock(&inode->i_lock);
+ 				/* Get reservation again under proper lock */
+ 				rsv = __inode_get_rsv_space(inode);
+-				spin_lock(&dquots[cnt]->dq_dqb_lock);
+-				dquots[cnt]->dq_dqb.dqb_rsvspace += rsv;
+-				spin_unlock(&dquots[cnt]->dq_dqb_lock);
++				spin_lock(&dquot->dq_dqb_lock);
++				dquot->dq_dqb.dqb_rsvspace += rsv;
++				spin_unlock(&dquot->dq_dqb_lock);
+ 				spin_unlock(&inode->i_lock);
+ 			}
+ 		}
+@@ -1565,7 +1574,7 @@ EXPORT_SYMBOL(dquot_initialize);
+ 
+ bool dquot_initialize_needed(struct inode *inode)
+ {
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
+ 	int i;
+ 
+ 	if (!inode_quota_active(inode))
+@@ -1590,13 +1599,14 @@ EXPORT_SYMBOL(dquot_initialize_needed);
+ static void __dquot_drop(struct inode *inode)
+ {
+ 	int cnt;
+-	struct dquot **dquots = i_dquot(inode);
++	struct dquot __rcu **dquots = i_dquot(inode);
+ 	struct dquot *put[MAXQUOTAS];
+ 
+ 	spin_lock(&dq_data_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		put[cnt] = dquots[cnt];
+-		dquots[cnt] = NULL;
++		put[cnt] = srcu_dereference_check(dquots[cnt], &dquot_srcu,
++					lockdep_is_held(&dq_data_lock));
++		rcu_assign_pointer(dquots[cnt], NULL);
+ 	}
+ 	spin_unlock(&dq_data_lock);
+ 	dqput_all(put);
+@@ -1604,7 +1614,7 @@ static void __dquot_drop(struct inode *inode)
+ 
+ void dquot_drop(struct inode *inode)
+ {
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
+ 	int cnt;
+ 
+ 	if (IS_NOQUOTA(inode))
+@@ -1677,7 +1687,8 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
+ 	int cnt, ret = 0, index;
+ 	struct dquot_warn warn[MAXQUOTAS];
+ 	int reserve = flags & DQUOT_SPACE_RESERVE;
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 
+ 	if (!inode_quota_active(inode)) {
+ 		if (reserve) {
+@@ -1697,27 +1708,26 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
+ 	index = srcu_read_lock(&dquot_srcu);
+ 	spin_lock(&inode->i_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+ 		if (reserve) {
+-			ret = dquot_add_space(dquots[cnt], 0, number, flags,
+-					      &warn[cnt]);
++			ret = dquot_add_space(dquot, 0, number, flags, &warn[cnt]);
+ 		} else {
+-			ret = dquot_add_space(dquots[cnt], number, 0, flags,
+-					      &warn[cnt]);
++			ret = dquot_add_space(dquot, number, 0, flags, &warn[cnt]);
+ 		}
+ 		if (ret) {
+ 			/* Back out changes we already did */
+ 			for (cnt--; cnt >= 0; cnt--) {
+-				if (!dquots[cnt])
++				dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++				if (!dquot)
+ 					continue;
+-				spin_lock(&dquots[cnt]->dq_dqb_lock);
++				spin_lock(&dquot->dq_dqb_lock);
+ 				if (reserve)
+-					dquot_free_reserved_space(dquots[cnt],
+-								  number);
++					dquot_free_reserved_space(dquot, number);
+ 				else
+-					dquot_decr_space(dquots[cnt], number);
+-				spin_unlock(&dquots[cnt]->dq_dqb_lock);
++					dquot_decr_space(dquot, number);
++				spin_unlock(&dquot->dq_dqb_lock);
+ 			}
+ 			spin_unlock(&inode->i_lock);
+ 			goto out_flush_warn;
+@@ -1747,7 +1757,8 @@ int dquot_alloc_inode(struct inode *inode)
+ {
+ 	int cnt, ret = 0, index;
+ 	struct dquot_warn warn[MAXQUOTAS];
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
++	struct dquot *dquot;
+ 
+ 	if (!inode_quota_active(inode))
+ 		return 0;
+@@ -1758,17 +1769,19 @@ int dquot_alloc_inode(struct inode *inode)
+ 	index = srcu_read_lock(&dquot_srcu);
+ 	spin_lock(&inode->i_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+-		ret = dquot_add_inodes(dquots[cnt], 1, &warn[cnt]);
++		ret = dquot_add_inodes(dquot, 1, &warn[cnt]);
+ 		if (ret) {
+ 			for (cnt--; cnt >= 0; cnt--) {
+-				if (!dquots[cnt])
++				dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++				if (!dquot)
+ 					continue;
+ 				/* Back out changes we already did */
+-				spin_lock(&dquots[cnt]->dq_dqb_lock);
+-				dquot_decr_inodes(dquots[cnt], 1);
+-				spin_unlock(&dquots[cnt]->dq_dqb_lock);
++				spin_lock(&dquot->dq_dqb_lock);
++				dquot_decr_inodes(dquot, 1);
++				spin_unlock(&dquot->dq_dqb_lock);
+ 			}
+ 			goto warn_put_all;
+ 		}
+@@ -1789,7 +1802,8 @@ EXPORT_SYMBOL(dquot_alloc_inode);
+  */
+ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
+ {
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 	int cnt, index;
+ 
+ 	if (!inode_quota_active(inode)) {
+@@ -1805,9 +1819,8 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
+ 	spin_lock(&inode->i_lock);
+ 	/* Claim reserved quotas to allocated quotas */
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (dquots[cnt]) {
+-			struct dquot *dquot = dquots[cnt];
+-
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (dquot) {
+ 			spin_lock(&dquot->dq_dqb_lock);
+ 			if (WARN_ON_ONCE(dquot->dq_dqb.dqb_rsvspace < number))
+ 				number = dquot->dq_dqb.dqb_rsvspace;
+@@ -1831,7 +1844,8 @@ EXPORT_SYMBOL(dquot_claim_space_nodirty);
+  */
+ void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number)
+ {
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 	int cnt, index;
+ 
+ 	if (!inode_quota_active(inode)) {
+@@ -1847,9 +1861,8 @@ void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number)
+ 	spin_lock(&inode->i_lock);
+ 	/* Claim reserved quotas to allocated quotas */
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		if (dquots[cnt]) {
+-			struct dquot *dquot = dquots[cnt];
+-
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (dquot) {
+ 			spin_lock(&dquot->dq_dqb_lock);
+ 			if (WARN_ON_ONCE(dquot->dq_dqb.dqb_curspace < number))
+ 				number = dquot->dq_dqb.dqb_curspace;
+@@ -1875,7 +1888,8 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
+ {
+ 	unsigned int cnt;
+ 	struct dquot_warn warn[MAXQUOTAS];
+-	struct dquot **dquots;
++	struct dquot __rcu **dquots;
++	struct dquot *dquot;
+ 	int reserve = flags & DQUOT_SPACE_RESERVE, index;
+ 
+ 	if (!inode_quota_active(inode)) {
+@@ -1896,17 +1910,18 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
+ 		int wtype;
+ 
+ 		warn[cnt].w_type = QUOTA_NL_NOWARN;
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+-		spin_lock(&dquots[cnt]->dq_dqb_lock);
+-		wtype = info_bdq_free(dquots[cnt], number);
++		spin_lock(&dquot->dq_dqb_lock);
++		wtype = info_bdq_free(dquot, number);
+ 		if (wtype != QUOTA_NL_NOWARN)
+-			prepare_warning(&warn[cnt], dquots[cnt], wtype);
++			prepare_warning(&warn[cnt], dquot, wtype);
+ 		if (reserve)
+-			dquot_free_reserved_space(dquots[cnt], number);
++			dquot_free_reserved_space(dquot, number);
+ 		else
+-			dquot_decr_space(dquots[cnt], number);
+-		spin_unlock(&dquots[cnt]->dq_dqb_lock);
++			dquot_decr_space(dquot, number);
++		spin_unlock(&dquot->dq_dqb_lock);
+ 	}
+ 	if (reserve)
+ 		*inode_reserved_space(inode) -= number;
+@@ -1930,7 +1945,8 @@ void dquot_free_inode(struct inode *inode)
+ {
+ 	unsigned int cnt;
+ 	struct dquot_warn warn[MAXQUOTAS];
+-	struct dquot * const *dquots;
++	struct dquot __rcu * const *dquots;
++	struct dquot *dquot;
+ 	int index;
+ 
+ 	if (!inode_quota_active(inode))
+@@ -1941,16 +1957,16 @@ void dquot_free_inode(struct inode *inode)
+ 	spin_lock(&inode->i_lock);
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+ 		int wtype;
+-
+ 		warn[cnt].w_type = QUOTA_NL_NOWARN;
+-		if (!dquots[cnt])
++		dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
++		if (!dquot)
+ 			continue;
+-		spin_lock(&dquots[cnt]->dq_dqb_lock);
+-		wtype = info_idq_free(dquots[cnt], 1);
++		spin_lock(&dquot->dq_dqb_lock);
++		wtype = info_idq_free(dquot, 1);
+ 		if (wtype != QUOTA_NL_NOWARN)
+-			prepare_warning(&warn[cnt], dquots[cnt], wtype);
+-		dquot_decr_inodes(dquots[cnt], 1);
+-		spin_unlock(&dquots[cnt]->dq_dqb_lock);
++			prepare_warning(&warn[cnt], dquot, wtype);
++		dquot_decr_inodes(dquot, 1);
++		spin_unlock(&dquot->dq_dqb_lock);
+ 	}
+ 	spin_unlock(&inode->i_lock);
+ 	mark_all_dquot_dirty(dquots);
+@@ -1976,8 +1992,9 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 	qsize_t cur_space;
+ 	qsize_t rsv_space = 0;
+ 	qsize_t inode_usage = 1;
++	struct dquot __rcu **dquots;
+ 	struct dquot *transfer_from[MAXQUOTAS] = {};
+-	int cnt, ret = 0;
++	int cnt, index, ret = 0;
+ 	char is_valid[MAXQUOTAS] = {};
+ 	struct dquot_warn warn_to[MAXQUOTAS];
+ 	struct dquot_warn warn_from_inodes[MAXQUOTAS];
+@@ -2008,6 +2025,7 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 	}
+ 	cur_space = __inode_get_bytes(inode);
+ 	rsv_space = __inode_get_rsv_space(inode);
++	dquots = i_dquot(inode);
+ 	/*
+ 	 * Build the transfer_from list, check limits, and update usage in
+ 	 * the target structures.
+@@ -2022,7 +2040,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 		if (!sb_has_quota_active(inode->i_sb, cnt))
+ 			continue;
+ 		is_valid[cnt] = 1;
+-		transfer_from[cnt] = i_dquot(inode)[cnt];
++		transfer_from[cnt] = srcu_dereference_check(dquots[cnt],
++				&dquot_srcu, lockdep_is_held(&dq_data_lock));
+ 		ret = dquot_add_inodes(transfer_to[cnt], inode_usage,
+ 				       &warn_to[cnt]);
+ 		if (ret)
+@@ -2061,13 +2080,21 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
+ 						  rsv_space);
+ 			spin_unlock(&transfer_from[cnt]->dq_dqb_lock);
+ 		}
+-		i_dquot(inode)[cnt] = transfer_to[cnt];
++		rcu_assign_pointer(dquots[cnt], transfer_to[cnt]);
+ 	}
+ 	spin_unlock(&inode->i_lock);
+ 	spin_unlock(&dq_data_lock);
+ 
+-	mark_all_dquot_dirty(transfer_from);
+-	mark_all_dquot_dirty(transfer_to);
++	/*
++	 * These arrays are local and we hold dquot references, so we don't
++	 * need the SRCU protection, but we still take dquot_srcu to avoid a
++	 * warning in mark_all_dquot_dirty().
++	 */
++	index = srcu_read_lock(&dquot_srcu);
++	mark_all_dquot_dirty((struct dquot __rcu **)transfer_from);
++	mark_all_dquot_dirty((struct dquot __rcu **)transfer_to);
++	srcu_read_unlock(&dquot_srcu, index);
++
+ 	flush_warnings(warn_to);
+ 	flush_warnings(warn_from_inodes);
+ 	flush_warnings(warn_from_space);
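
A minimal kernel-style sketch of the reader-side pattern the srcu_dereference()
conversion above enforces (illustrative only; dquot_srcu, i_dquot() and the
locking names follow fs/quota/dquot.c, while the walker function itself is
hypothetical):

	static void example_walk_dquots(struct inode *inode)
	{
		struct dquot __rcu **dquots = i_dquot(inode);
		struct dquot *dquot;
		int cnt, index;

		index = srcu_read_lock(&dquot_srcu);
		for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
			/* Fetch the pointer once, under SRCU, and only use
			 * the local copy afterwards. */
			dquot = srcu_dereference(dquots[cnt], &dquot_srcu);
			if (!dquot)
				continue;
			spin_lock(&dquot->dq_dqb_lock);
			/* ... read or adjust dquot->dq_dqb here ... */
			spin_unlock(&dquot->dq_dqb_lock);
		}
		srcu_read_unlock(&dquot_srcu, index);
	}
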
+diff --git a/fs/reiserfs/reiserfs.h b/fs/reiserfs/reiserfs.h
+index 725667880e626..b65549164590c 100644
+--- a/fs/reiserfs/reiserfs.h
++++ b/fs/reiserfs/reiserfs.h
+@@ -97,7 +97,7 @@ struct reiserfs_inode_info {
+ 	struct rw_semaphore i_xattr_sem;
+ #endif
+ #ifdef CONFIG_QUOTA
+-	struct dquot *i_dquot[MAXQUOTAS];
++	struct dquot __rcu *i_dquot[MAXQUOTAS];
+ #endif
+ 
+ 	struct inode vfs_inode;
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index 67b5510beded2..7b3d5aeb2a6fe 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -802,7 +802,7 @@ static ssize_t reiserfs_quota_write(struct super_block *, int, const char *,
+ static ssize_t reiserfs_quota_read(struct super_block *, int, char *, size_t,
+ 				   loff_t);
+ 
+-static struct dquot **reiserfs_get_dquots(struct inode *inode)
++static struct dquot __rcu **reiserfs_get_dquots(struct inode *inode)
+ {
+ 	return REISERFS_I(inode)->i_dquot;
+ }
+diff --git a/fs/select.c b/fs/select.c
+index 0ee55af1a55c2..d4d881d439dcd 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -476,7 +476,7 @@ static inline void wait_key_set(poll_table *wait, unsigned long in,
+ 		wait->_key |= POLLOUT_SET;
+ }
+ 
+-static int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
++static noinline_for_stack int do_select(int n, fd_set_bits *fds, struct timespec64 *end_time)
+ {
+ 	ktime_t expire, *to = NULL;
+ 	struct poll_wqueues table;
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 4cbb5487bd8d0..c156460eb5587 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -87,7 +87,7 @@ void cifs_pages_written_back(struct inode *inode, loff_t start, unsigned int len
+ 			continue;
+ 		if (!folio_test_writeback(folio)) {
+ 			WARN_ONCE(1, "bad %x @%llx page %lx %lx\n",
+-				  len, start, folio_index(folio), end);
++				  len, start, folio->index, end);
+ 			continue;
+ 		}
+ 
+@@ -120,7 +120,7 @@ void cifs_pages_write_failed(struct inode *inode, loff_t start, unsigned int len
+ 			continue;
+ 		if (!folio_test_writeback(folio)) {
+ 			WARN_ONCE(1, "bad %x @%llx page %lx %lx\n",
+-				  len, start, folio_index(folio), end);
++				  len, start, folio->index, end);
+ 			continue;
+ 		}
+ 
+@@ -151,7 +151,7 @@ void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int le
+ 	xas_for_each(&xas, folio, end) {
+ 		if (!folio_test_writeback(folio)) {
+ 			WARN_ONCE(1, "bad %x @%llx page %lx %lx\n",
+-				  len, start, folio_index(folio), end);
++				  len, start, folio->index, end);
+ 			continue;
+ 		}
+ 
+@@ -2622,20 +2622,20 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to)
+  * dirty pages if possible, but don't sleep while doing so.
+  */
+ static void cifs_extend_writeback(struct address_space *mapping,
++				  struct xa_state *xas,
+ 				  long *_count,
+ 				  loff_t start,
+ 				  int max_pages,
+-				  size_t max_len,
+-				  unsigned int *_len)
++				  loff_t max_len,
++				  size_t *_len)
+ {
+ 	struct folio_batch batch;
+ 	struct folio *folio;
+-	unsigned int psize, nr_pages;
+-	size_t len = *_len;
+-	pgoff_t index = (start + len) / PAGE_SIZE;
++	unsigned int nr_pages;
++	pgoff_t index = (start + *_len) / PAGE_SIZE;
++	size_t len;
+ 	bool stop = true;
+ 	unsigned int i;
+-	XA_STATE(xas, &mapping->i_pages, index);
+ 
+ 	folio_batch_init(&batch);
+ 
+@@ -2646,54 +2646,64 @@ static void cifs_extend_writeback(struct address_space *mapping,
+ 		 */
+ 		rcu_read_lock();
+ 
+-		xas_for_each(&xas, folio, ULONG_MAX) {
++		xas_for_each(xas, folio, ULONG_MAX) {
+ 			stop = true;
+-			if (xas_retry(&xas, folio))
++			if (xas_retry(xas, folio))
+ 				continue;
+ 			if (xa_is_value(folio))
+ 				break;
+-			if (folio_index(folio) != index)
++			if (folio->index != index) {
++				xas_reset(xas);
+ 				break;
++			}
++
+ 			if (!folio_try_get_rcu(folio)) {
+-				xas_reset(&xas);
++				xas_reset(xas);
+ 				continue;
+ 			}
+ 			nr_pages = folio_nr_pages(folio);
+-			if (nr_pages > max_pages)
++			if (nr_pages > max_pages) {
++				xas_reset(xas);
+ 				break;
++			}
+ 
+ 			/* Has the page moved or been split? */
+-			if (unlikely(folio != xas_reload(&xas))) {
++			if (unlikely(folio != xas_reload(xas))) {
+ 				folio_put(folio);
++				xas_reset(xas);
+ 				break;
+ 			}
+ 
+ 			if (!folio_trylock(folio)) {
+ 				folio_put(folio);
++				xas_reset(xas);
+ 				break;
+ 			}
+-			if (!folio_test_dirty(folio) || folio_test_writeback(folio)) {
++			if (!folio_test_dirty(folio) ||
++			    folio_test_writeback(folio)) {
+ 				folio_unlock(folio);
+ 				folio_put(folio);
++				xas_reset(xas);
+ 				break;
+ 			}
+ 
+ 			max_pages -= nr_pages;
+-			psize = folio_size(folio);
+-			len += psize;
++			len = folio_size(folio);
+ 			stop = false;
+-			if (max_pages <= 0 || len >= max_len || *_count <= 0)
+-				stop = true;
+ 
+ 			index += nr_pages;
++			*_count -= nr_pages;
++			*_len += len;
++			if (max_pages <= 0 || *_len >= max_len || *_count <= 0)
++				stop = true;
++
+ 			if (!folio_batch_add(&batch, folio))
+ 				break;
+ 			if (stop)
+ 				break;
+ 		}
+ 
+-		if (!stop)
+-			xas_pause(&xas);
++		xas_pause(xas);
+ 		rcu_read_unlock();
+ 
+ 		/* Now, if we obtained any pages, we can shift them to being
+@@ -2709,18 +2719,13 @@ static void cifs_extend_writeback(struct address_space *mapping,
+ 			 */
+ 			if (!folio_clear_dirty_for_io(folio))
+ 				WARN_ON(1);
+-			if (folio_start_writeback(folio))
+-				WARN_ON(1);
+-
+-			*_count -= folio_nr_pages(folio);
++			folio_start_writeback(folio);
+ 			folio_unlock(folio);
+ 		}
+ 
+ 		folio_batch_release(&batch);
+ 		cond_resched();
+ 	} while (!stop);
+-
+-	*_len = len;
+ }
+ 
+ /*
+@@ -2728,8 +2733,10 @@ static void cifs_extend_writeback(struct address_space *mapping,
+  */
+ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
+ 						 struct writeback_control *wbc,
++						 struct xa_state *xas,
+ 						 struct folio *folio,
+-						 loff_t start, loff_t end)
++						 unsigned long long start,
++						 unsigned long long end)
+ {
+ 	struct inode *inode = mapping->host;
+ 	struct TCP_Server_Info *server;
+@@ -2738,18 +2745,18 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
+ 	struct cifs_credits credits_on_stack;
+ 	struct cifs_credits *credits = &credits_on_stack;
+ 	struct cifsFileInfo *cfile = NULL;
+-	unsigned int xid, wsize, len;
+-	loff_t i_size = i_size_read(inode);
+-	size_t max_len;
++	unsigned long long i_size = i_size_read(inode), max_len;
++	unsigned int xid, wsize;
++	size_t len = folio_size(folio);
+ 	long count = wbc->nr_to_write;
+ 	int rc;
+ 
+ 	/* The folio should be locked, dirty and not undergoing writeback. */
+-	if (folio_start_writeback(folio))
+-		WARN_ON(1);
++	if (!folio_clear_dirty_for_io(folio))
++		WARN_ON_ONCE(1);
++	folio_start_writeback(folio);
+ 
+ 	count -= folio_nr_pages(folio);
+-	len = folio_size(folio);
+ 
+ 	xid = get_xid();
+ 	server = cifs_pick_channel(cifs_sb_master_tcon(cifs_sb)->ses);
+@@ -2779,9 +2786,10 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
+ 	wdata->server = server;
+ 	cfile = NULL;
+ 
+-	/* Find all consecutive lockable dirty pages, stopping when we find a
+-	 * page that is not immediately lockable, is not dirty or is missing,
+-	 * or we reach the end of the range.
++	/* Find all consecutive lockable dirty pages that have contiguous
++	 * written regions, stopping when we find a page that is not
++	 * immediately lockable, is not dirty or is missing, or we reach the
++	 * end of the range.
+ 	 */
+ 	if (start < i_size) {
+ 		/* Trim the write to the EOF; the extra data is ignored.  Also
+@@ -2801,19 +2809,18 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
+ 			max_pages -= folio_nr_pages(folio);
+ 
+ 			if (max_pages > 0)
+-				cifs_extend_writeback(mapping, &count, start,
++				cifs_extend_writeback(mapping, xas, &count, start,
+ 						      max_pages, max_len, &len);
+ 		}
+-		len = min_t(loff_t, len, max_len);
+ 	}
+-
+-	wdata->bytes = len;
++	len = min_t(unsigned long long, len, i_size - start);
+ 
+ 	/* We now have a contiguous set of dirty pages, each with writeback
+ 	 * set; the first page is still locked at this point, but all the rest
+ 	 * have been unlocked.
+ 	 */
+ 	folio_unlock(folio);
++	wdata->bytes = len;
+ 
+ 	if (start < i_size) {
+ 		iov_iter_xarray(&wdata->iter, ITER_SOURCE, &mapping->i_pages,
+@@ -2864,102 +2871,118 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
+ /*
+  * write a region of pages back to the server
+  */
+-static int cifs_writepages_region(struct address_space *mapping,
+-				  struct writeback_control *wbc,
+-				  loff_t start, loff_t end, loff_t *_next)
++static ssize_t cifs_writepages_begin(struct address_space *mapping,
++				     struct writeback_control *wbc,
++				     struct xa_state *xas,
++				     unsigned long long *_start,
++				     unsigned long long end)
+ {
+-	struct folio_batch fbatch;
++	struct folio *folio;
++	unsigned long long start = *_start;
++	ssize_t ret;
+ 	int skips = 0;
+ 
+-	folio_batch_init(&fbatch);
+-	do {
+-		int nr;
+-		pgoff_t index = start / PAGE_SIZE;
++search_again:
++	/* Find the first dirty page. */
++	rcu_read_lock();
+ 
+-		nr = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
+-					    PAGECACHE_TAG_DIRTY, &fbatch);
+-		if (!nr)
++	for (;;) {
++		folio = xas_find_marked(xas, end / PAGE_SIZE, PAGECACHE_TAG_DIRTY);
++		if (xas_retry(xas, folio) || xa_is_value(folio))
++			continue;
++		if (!folio)
+ 			break;
+ 
+-		for (int i = 0; i < nr; i++) {
+-			ssize_t ret;
+-			struct folio *folio = fbatch.folios[i];
++		if (!folio_try_get_rcu(folio)) {
++			xas_reset(xas);
++			continue;
++		}
+ 
+-redo_folio:
+-			start = folio_pos(folio); /* May regress with THPs */
++		if (unlikely(folio != xas_reload(xas))) {
++			folio_put(folio);
++			xas_reset(xas);
++			continue;
++		}
+ 
+-			/* At this point we hold neither the i_pages lock nor the
+-			 * page lock: the page may be truncated or invalidated
+-			 * (changing page->mapping to NULL), or even swizzled
+-			 * back from swapper_space to tmpfs file mapping
+-			 */
+-			if (wbc->sync_mode != WB_SYNC_NONE) {
+-				ret = folio_lock_killable(folio);
+-				if (ret < 0)
+-					goto write_error;
+-			} else {
+-				if (!folio_trylock(folio))
+-					goto skip_write;
+-			}
++		xas_pause(xas);
++		break;
++	}
++	rcu_read_unlock();
++	if (!folio)
++		return 0;
+ 
+-			if (folio_mapping(folio) != mapping ||
+-			    !folio_test_dirty(folio)) {
+-				start += folio_size(folio);
+-				folio_unlock(folio);
+-				continue;
+-			}
++	start = folio_pos(folio); /* May regress with THPs */
+ 
+-			if (folio_test_writeback(folio) ||
+-			    folio_test_fscache(folio)) {
+-				folio_unlock(folio);
+-				if (wbc->sync_mode == WB_SYNC_NONE)
+-					goto skip_write;
++	/* At this point we hold neither the i_pages lock nor the page lock:
++	 * the page may be truncated or invalidated (changing page->mapping to
++	 * NULL), or even swizzled back from swapper_space to tmpfs file
++	 * mapping
++	 */
++lock_again:
++	if (wbc->sync_mode != WB_SYNC_NONE) {
++		ret = folio_lock_killable(folio);
++		if (ret < 0)
++			return ret;
++	} else {
++		if (!folio_trylock(folio))
++			goto search_again;
++	}
++
++	if (folio->mapping != mapping ||
++	    !folio_test_dirty(folio)) {
++		start += folio_size(folio);
++		folio_unlock(folio);
++		goto search_again;
++	}
+ 
+-				folio_wait_writeback(folio);
++	if (folio_test_writeback(folio) ||
++	    folio_test_fscache(folio)) {
++		folio_unlock(folio);
++		if (wbc->sync_mode != WB_SYNC_NONE) {
++			folio_wait_writeback(folio);
+ #ifdef CONFIG_CIFS_FSCACHE
+-				folio_wait_fscache(folio);
++			folio_wait_fscache(folio);
+ #endif
+-				goto redo_folio;
+-			}
+-
+-			if (!folio_clear_dirty_for_io(folio))
+-				/* We hold the page lock - it should've been dirty. */
+-				WARN_ON(1);
+-
+-			ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
+-			if (ret < 0)
+-				goto write_error;
+-
+-			start += ret;
+-			continue;
+-
+-write_error:
+-			folio_batch_release(&fbatch);
+-			*_next = start;
+-			return ret;
++			goto lock_again;
++		}
+ 
+-skip_write:
+-			/*
+-			 * Too many skipped writes, or need to reschedule?
+-			 * Treat it as a write error without an error code.
+-			 */
++		start += folio_size(folio);
++		if (wbc->sync_mode == WB_SYNC_NONE) {
+ 			if (skips >= 5 || need_resched()) {
+ 				ret = 0;
+-				goto write_error;
++				goto out;
+ 			}
+-
+-			/* Otherwise, just skip that folio and go on to the next */
+ 			skips++;
+-			start += folio_size(folio);
+-			continue;
+ 		}
++		goto search_again;
++	}
+ 
+-		folio_batch_release(&fbatch);		
+-		cond_resched();
+-	} while (wbc->nr_to_write > 0);
++	ret = cifs_write_back_from_locked_folio(mapping, wbc, xas, folio, start, end);
++out:
++	if (ret > 0)
++		*_start = start + ret;
++	return ret;
++}
+ 
+-	*_next = start;
+-	return 0;
++/*
++ * Write a region of pages back to the server
++ */
++static int cifs_writepages_region(struct address_space *mapping,
++				  struct writeback_control *wbc,
++				  unsigned long long *_start,
++				  unsigned long long end)
++{
++	ssize_t ret;
++
++	XA_STATE(xas, &mapping->i_pages, *_start / PAGE_SIZE);
++
++	do {
++		ret = cifs_writepages_begin(mapping, wbc, &xas, _start, end);
++		if (ret > 0 && wbc->nr_to_write > 0)
++			cond_resched();
++	} while (ret > 0 && wbc->nr_to_write > 0);
++
++	return ret > 0 ? 0 : ret;
+ }
+ 
+ /*
+@@ -2968,7 +2991,7 @@ static int cifs_writepages_region(struct address_space *mapping,
+ static int cifs_writepages(struct address_space *mapping,
+ 			   struct writeback_control *wbc)
+ {
+-	loff_t start, next;
++	loff_t start, end;
+ 	int ret;
+ 
+ 	/* We have to be careful as we can end up racing with setattr()
+@@ -2976,28 +2999,34 @@ static int cifs_writepages(struct address_space *mapping,
+ 	 * to prevent it.
+ 	 */
+ 
+-	if (wbc->range_cyclic) {
++	if (wbc->range_cyclic && mapping->writeback_index) {
+ 		start = mapping->writeback_index * PAGE_SIZE;
+-		ret = cifs_writepages_region(mapping, wbc, start, LLONG_MAX, &next);
+-		if (ret == 0) {
+-			mapping->writeback_index = next / PAGE_SIZE;
+-			if (start > 0 && wbc->nr_to_write > 0) {
+-				ret = cifs_writepages_region(mapping, wbc, 0,
+-							     start, &next);
+-				if (ret == 0)
+-					mapping->writeback_index =
+-						next / PAGE_SIZE;
+-			}
++		ret = cifs_writepages_region(mapping, wbc, &start, LLONG_MAX);
++		if (ret < 0)
++			goto out;
++
++		if (wbc->nr_to_write <= 0) {
++			mapping->writeback_index = start / PAGE_SIZE;
++			goto out;
+ 		}
++
++		start = 0;
++		end = mapping->writeback_index * PAGE_SIZE;
++		mapping->writeback_index = 0;
++		ret = cifs_writepages_region(mapping, wbc, &start, end);
++		if (ret == 0)
++			mapping->writeback_index = start / PAGE_SIZE;
+ 	} else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
+-		ret = cifs_writepages_region(mapping, wbc, 0, LLONG_MAX, &next);
++		start = 0;
++		ret = cifs_writepages_region(mapping, wbc, &start, LLONG_MAX);
+ 		if (wbc->nr_to_write > 0 && ret == 0)
+-			mapping->writeback_index = next / PAGE_SIZE;
++			mapping->writeback_index = start / PAGE_SIZE;
+ 	} else {
+-		ret = cifs_writepages_region(mapping, wbc,
+-					     wbc->range_start, wbc->range_end, &next);
++		start = wbc->range_start;
++		ret = cifs_writepages_region(mapping, wbc, &start, wbc->range_end);
+ 	}
+ 
++out:
+ 	return ret;
+ }
+ 
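
The restructuring above threads a single xa_state from cifs_writepages_region()
down through cifs_writepages_begin() and cifs_extend_writeback(), so the search
position survives the RCU lock being dropped and retaken between batches. A
kernel-style sketch of that pattern (illustrative; example_scan() is
hypothetical, the xarray calls are the real API):

	static void example_scan(struct address_space *mapping,
				 pgoff_t first, pgoff_t last)
	{
		XA_STATE(xas, &mapping->i_pages, first);
		struct folio *folio;

		rcu_read_lock();
		xas_for_each_marked(&xas, folio, last, PAGECACHE_TAG_DIRTY) {
			if (xas_retry(&xas, folio))
				continue;
			/* ... try-get, lock and collect the folio ... */
		}
		/* Record the position so the caller can resume the same
		 * iteration after re-taking the RCU read lock. */
		xas_pause(&xas);
		rcu_read_unlock();
	}
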
+diff --git a/include/drm/drm_fixed.h b/include/drm/drm_fixed.h
+index 6ea339d5de088..81572d32db0c2 100644
+--- a/include/drm/drm_fixed.h
++++ b/include/drm/drm_fixed.h
+@@ -71,7 +71,6 @@ static inline u32 dfixed_div(fixed20_12 A, fixed20_12 B)
+ }
+ 
+ #define DRM_FIXED_POINT		32
+-#define DRM_FIXED_POINT_HALF	16
+ #define DRM_FIXED_ONE		(1ULL << DRM_FIXED_POINT)
+ #define DRM_FIXED_DECIMAL_MASK	(DRM_FIXED_ONE - 1)
+ #define DRM_FIXED_DIGITS_MASK	(~DRM_FIXED_DECIMAL_MASK)
+@@ -90,12 +89,12 @@ static inline int drm_fixp2int(s64 a)
+ 
+ static inline int drm_fixp2int_round(s64 a)
+ {
+-	return drm_fixp2int(a + (1 << (DRM_FIXED_POINT_HALF - 1)));
++	return drm_fixp2int(a + DRM_FIXED_ONE / 2);
+ }
+ 
+ static inline int drm_fixp2int_ceil(s64 a)
+ {
+-	if (a > 0)
++	if (a >= 0)
+ 		return drm_fixp2int(a + DRM_FIXED_ALMOST_ONE);
+ 	else
+ 		return drm_fixp2int(a - DRM_FIXED_ALMOST_ONE);
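
The drm_fixp2int_round() change fixes a bias that was computed for 16.16 fixed
point while the s64 values here are 32.32: half of one is DRM_FIXED_ONE / 2 =
2^31, not 1 << 15. A standalone userspace illustration (compiles as-is; the
helper names are local to the example):

	#include <stdint.h>
	#include <stdio.h>

	#define DRM_FIXED_POINT 32
	#define DRM_FIXED_ONE   (1ULL << DRM_FIXED_POINT)

	static int fixp2int(int64_t a)
	{
		return (int)(a >> DRM_FIXED_POINT);
	}

	static int round_old(int64_t a)	/* pre-fix bias: 2^15 */
	{
		return fixp2int(a + (1 << 15));
	}

	static int round_new(int64_t a)	/* post-fix bias: 2^31 */
	{
		return fixp2int(a + DRM_FIXED_ONE / 2);
	}

	int main(void)
	{
		int64_t half = DRM_FIXED_ONE / 2;	/* 0.5 in 32.32 */

		/* 0.5 must round up to 1; the old bias was ~0.00003 and
		 * could never carry into the integer part. */
		printf("old=%d new=%d\n", round_old(half), round_new(half));
		return 0;
	}

The drm_fixp2int_ceil() change is related: with "a > 0", an input of exactly
zero fell through to the negative branch and returned -1 instead of 0.
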
+diff --git a/include/drm/drm_kunit_helpers.h b/include/drm/drm_kunit_helpers.h
+index ba483c87f0e7b..3ae19892229db 100644
+--- a/include/drm/drm_kunit_helpers.h
++++ b/include/drm/drm_kunit_helpers.h
+@@ -3,6 +3,8 @@
+ #ifndef DRM_KUNIT_HELPERS_H_
+ #define DRM_KUNIT_HELPERS_H_
+ 
++#include <drm/drm_drv.h>
++
+ #include <linux/device.h>
+ 
+ #include <kunit/test.h>
+diff --git a/include/dt-bindings/clock/r8a779g0-cpg-mssr.h b/include/dt-bindings/clock/r8a779g0-cpg-mssr.h
+index 754c54a6eb06a..7850cdc62e285 100644
+--- a/include/dt-bindings/clock/r8a779g0-cpg-mssr.h
++++ b/include/dt-bindings/clock/r8a779g0-cpg-mssr.h
+@@ -86,5 +86,6 @@
+ #define R8A779G0_CLK_CPEX		74
+ #define R8A779G0_CLK_CBFUSA		75
+ #define R8A779G0_CLK_R			76
++#define R8A779G0_CLK_CP			77
+ 
+ #endif /* __DT_BINDINGS_CLOCK_R8A779G0_CPG_MSSR_H__ */
+diff --git a/include/linux/dm-io.h b/include/linux/dm-io.h
+index 7595142f3fc57..7b2968612b7e6 100644
+--- a/include/linux/dm-io.h
++++ b/include/linux/dm-io.h
+@@ -80,7 +80,8 @@ void dm_io_client_destroy(struct dm_io_client *client);
+  * error occurred doing io to the corresponding region.
+  */
+ int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
+-	  struct dm_io_region *region, unsigned int long *sync_error_bits);
++	  struct dm_io_region *region, unsigned int long *sync_error_bits,
++	  unsigned short ioprio);
+ 
+ #endif	/* __KERNEL__ */
+ #endif	/* _LINUX_DM_IO_H */
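
With the extra parameter above, every dm_io() caller now states an I/O
priority explicitly. A hypothetical call site (sketch only; IOPRIO_DEFAULT
from linux/ioprio.h preserves the old behaviour for callers with no particular
priority):

	unsigned long sync_error_bits;
	int r;

	r = dm_io(&io_req, 1, &region, &sync_error_bits, IOPRIO_DEFAULT);
	if (r)
		return r;
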
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index 039fe0ce8d83d..1812b7ec8491a 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -27,6 +27,7 @@
+ 
+ #define F2FS_BYTES_TO_BLK(bytes)	((bytes) >> F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_TO_BYTES(blk)		((blk) << F2FS_BLKSIZE_BITS)
++#define F2FS_BLK_END_BYTES(blk)		(F2FS_BLK_TO_BYTES(blk + 1) - 1)
+ 
+ /* 0, 1(node nid), 2(meta nid) are reserved node id */
+ #define F2FS_RESERVED_NODE_NUM		3
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index a4953fafc8cb8..02e043bd93b1a 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -547,24 +547,27 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
+ 	__BPF_MAP(n, __BPF_DECL_ARGS, __BPF_N, u64, __ur_1, u64, __ur_2,       \
+ 		  u64, __ur_3, u64, __ur_4, u64, __ur_5)
+ 
+-#define BPF_CALL_x(x, name, ...)					       \
++#define BPF_CALL_x(x, attr, name, ...)					       \
+ 	static __always_inline						       \
+ 	u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__));   \
+ 	typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
+-	u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__));	       \
+-	u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))	       \
++	attr u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__));    \
++	attr u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))     \
+ 	{								       \
+ 		return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
+ 	}								       \
+ 	static __always_inline						       \
+ 	u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))
+ 
+-#define BPF_CALL_0(name, ...)	BPF_CALL_x(0, name, __VA_ARGS__)
+-#define BPF_CALL_1(name, ...)	BPF_CALL_x(1, name, __VA_ARGS__)
+-#define BPF_CALL_2(name, ...)	BPF_CALL_x(2, name, __VA_ARGS__)
+-#define BPF_CALL_3(name, ...)	BPF_CALL_x(3, name, __VA_ARGS__)
+-#define BPF_CALL_4(name, ...)	BPF_CALL_x(4, name, __VA_ARGS__)
+-#define BPF_CALL_5(name, ...)	BPF_CALL_x(5, name, __VA_ARGS__)
++#define __NOATTR
++#define BPF_CALL_0(name, ...)	BPF_CALL_x(0, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_1(name, ...)	BPF_CALL_x(1, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_2(name, ...)	BPF_CALL_x(2, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_3(name, ...)	BPF_CALL_x(3, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_4(name, ...)	BPF_CALL_x(4, __NOATTR, name, __VA_ARGS__)
++#define BPF_CALL_5(name, ...)	BPF_CALL_x(5, __NOATTR, name, __VA_ARGS__)
++
++#define NOTRACE_BPF_CALL_1(name, ...)	BPF_CALL_x(1, notrace, name, __VA_ARGS__)
+ 
+ #define bpf_ctx_range(TYPE, MEMBER)						\
+ 	offsetof(TYPE, MEMBER) ... offsetofend(TYPE, MEMBER) - 1
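
The reworked BPF_CALL_x() threads an attribute through to the visible wrapper
function, which is what lets NOTRACE_BPF_CALL_1 define helpers exempt from
ftrace. A compilable userspace analogue (the macro names here are local to the
example; the kernel's notrace expands to
__attribute__((no_instrument_function))):

	#include <stdio.h>

	#define CALL_x(attr, name)					\
		static inline int ____##name(int a);			\
		attr int name(int a) { return ____##name(a); }		\
		static inline int ____##name(int a)

	#define __NOATTR
	#define CALL_1(name)		CALL_x(__NOATTR, name)
	#define NOTRACE_CALL_1(name)	\
		CALL_x(__attribute__((no_instrument_function)), name)

	/* The attribute lands on double_it() itself, not on the body. */
	NOTRACE_CALL_1(double_it)
	{
		return a * 2;
	}

	int main(void)
	{
		printf("%d\n", double_it(21));	/* 42 */
		return 0;
	}
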
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 7f659c26794b5..3226a1d47be11 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2084,7 +2084,7 @@ struct super_operations {
+ #ifdef CONFIG_QUOTA
+ 	ssize_t (*quota_read)(struct super_block *, int, char *, size_t, loff_t);
+ 	ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t);
+-	struct dquot **(*get_dquots)(struct inode *);
++	struct dquot __rcu **(*get_dquots)(struct inode *);
+ #endif
+ 	long (*nr_cached_objects)(struct super_block *,
+ 				  struct shrink_control *);
+@@ -3205,6 +3205,15 @@ extern int generic_check_addressable(unsigned, u64);
+ 
+ extern void generic_set_encrypted_ci_d_ops(struct dentry *dentry);
+ 
++static inline bool sb_has_encoding(const struct super_block *sb)
++{
++#if IS_ENABLED(CONFIG_UNICODE)
++	return !!sb->s_encoding;
++#else
++	return false;
++#endif
++}
++
+ int may_setattr(struct mnt_idmap *idmap, struct inode *inode,
+ 		unsigned int ia_valid);
+ int setattr_prepare(struct mnt_idmap *, struct dentry *, struct iattr *);
+diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
+index aefb73eeeebff..b3a32687f5143 100644
+--- a/include/linux/io_uring.h
++++ b/include/linux/io_uring.h
+@@ -93,6 +93,7 @@ int io_uring_cmd_sock(struct io_uring_cmd *cmd, unsigned int issue_flags);
+ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
+ 		unsigned int issue_flags);
+ struct task_struct *io_uring_cmd_get_task(struct io_uring_cmd *cmd);
++bool io_is_uring_fops(struct file *file);
+ #else
+ static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
+ 			      struct iov_iter *iter, void *ioucmd)
+@@ -111,10 +112,6 @@ static inline void io_uring_cmd_do_in_task_lazy(struct io_uring_cmd *ioucmd,
+ 			void (*task_work_cb)(struct io_uring_cmd *, unsigned))
+ {
+ }
+-static inline struct sock *io_uring_get_socket(struct file *file)
+-{
+-	return NULL;
+-}
+ static inline void io_uring_task_cancel(void)
+ {
+ }
+@@ -141,6 +138,10 @@ static inline struct task_struct *io_uring_cmd_get_task(struct io_uring_cmd *cmd
+ {
+ 	return NULL;
+ }
++static inline bool io_is_uring_fops(struct file *file)
++{
++	return false;
++}
+ #endif
+ 
+ #endif
+diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
+index 239a4f68801bb..335eca49dc8b0 100644
+--- a/include/linux/io_uring_types.h
++++ b/include/linux/io_uring_types.h
+@@ -358,9 +358,6 @@ struct io_ring_ctx {
+ 	struct wait_queue_head		rsrc_quiesce_wq;
+ 	unsigned			rsrc_quiesce;
+ 
+-	#if defined(CONFIG_UNIX)
+-		struct socket		*ring_sock;
+-	#endif
+ 	/* hashed buffered write serialization */
+ 	struct io_wq_hash		*hash_map;
+ 
+diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h
+index bd53cf4be7bdc..f0e55bf3ec8b5 100644
+--- a/include/linux/mlx5/qp.h
++++ b/include/linux/mlx5/qp.h
+@@ -269,7 +269,10 @@ struct mlx5_wqe_eth_seg {
+ 	union {
+ 		struct {
+ 			__be16 sz;
+-			u8     start[2];
++			union {
++				u8     start[2];
++				DECLARE_FLEX_ARRAY(u8, data);
++			};
+ 		} inline_hdr;
+ 		struct {
+ 			__be16 type;
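
C does not allow a bare flexible array member inside a union, which is why the
hunk above wraps it in DECLARE_FLEX_ARRAY(). Roughly what the result looks
like after expansion (sketch only, per the helper in include/linux/stddef.h;
__be16/u8 are the usual kernel typedefs):

	struct inline_hdr_sketch {
		__be16 sz;
		union {
			u8 start[2];
			struct {
				struct { } __empty_data; /* zero-sized */
				u8 data[];	/* aliases 'start', unbounded */
			};
		};
	};
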
+diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
+index 001b2ce83832e..89b1e0ed98114 100644
+--- a/include/linux/moduleloader.h
++++ b/include/linux/moduleloader.h
+@@ -115,6 +115,14 @@ int module_finalize(const Elf_Ehdr *hdr,
+ 		    const Elf_Shdr *sechdrs,
+ 		    struct module *mod);
+ 
++#ifdef CONFIG_MODULES
++void flush_module_init_free_work(void);
++#else
++static inline void flush_module_init_free_work(void)
++{
++}
++#endif
++
+ /* Any cleanup needed when module leaves. */
+ void module_arch_cleanup(struct module *mod);
+ 
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 675937a5bd7ce..ca0e61c838e83 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2511,6 +2511,11 @@ static inline struct pci_dev *pcie_find_root_port(struct pci_dev *dev)
+ 	return NULL;
+ }
+ 
++static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
++{
++	return dev->error_state == pci_channel_io_perm_failure;
++}
++
+ void pci_request_acs(void);
+ bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags);
+ bool pci_acs_path_enabled(struct pci_dev *start,
+diff --git a/include/linux/poll.h b/include/linux/poll.h
+index a9e0e1c2d1f2f..d1ea4f3714a84 100644
+--- a/include/linux/poll.h
++++ b/include/linux/poll.h
+@@ -14,11 +14,7 @@
+ 
+ /* ~832 bytes of stack space used max in sys_select/sys_poll before allocating
+    additional memory. */
+-#ifdef __clang__
+-#define MAX_STACK_ALLOC 768
+-#else
+ #define MAX_STACK_ALLOC 832
+-#endif
+ #define FRONTEND_STACK_ALLOC	256
+ #define SELECT_STACK_ALLOC	FRONTEND_STACK_ALLOC
+ #define POLL_STACK_ALLOC	FRONTEND_STACK_ALLOC
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 31d523c4e0893..e1bb04e578d8a 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -250,6 +250,37 @@ do { \
+ 	cond_resched(); \
+ } while (0)
+ 
++/**
++ * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states
++ * @old_ts: jiffies at start of processing.
++ *
++ * This helper is for long-running softirq handlers, such as NAPI threads in
++ * networking. The caller should initialize the variable passed in as @old_ts
++ * at the beginning of the softirq handler. When invoked frequently, this macro
++ * will invoke rcu_softirq_qs() every 100 milliseconds thereafter, which will
++ * provide both RCU and RCU-Tasks quiescent states. Note that this macro
++ * modifies its old_ts argument.
++ *
++ * Because regions of code that have disabled softirq act as RCU read-side
++ * critical sections, this macro should be invoked with softirq (and
++ * preemption) enabled.
++ *
++ * The macro is not needed when CONFIG_PREEMPT_RT is defined. RT kernels have
++ * more opportunities to invoke schedule() and provide the necessary quiescent
++ * states. By contrast, calling cond_resched() alone does not achieve the same
++ * effect because cond_resched() does not provide RCU-Tasks quiescent states.
++ */
++#define rcu_softirq_qs_periodic(old_ts) \
++do { \
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && \
++	    time_after(jiffies, (old_ts) + HZ / 10)) { \
++		preempt_disable(); \
++		rcu_softirq_qs(); \
++		preempt_enable(); \
++		(old_ts) = jiffies; \
++	} \
++} while (0)
++
+ /*
+  * Infrastructure to implement the synchronize_() primitives in
+  * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
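
A hypothetical user of the new macro, in the spirit of the kernel-doc above (a
long-running softirq-context loop such as a threaded NAPI poller; the function
is illustrative, not from the patch):

	static void example_poll_loop(void)
	{
		unsigned long last_qs = jiffies;	/* initialized once, up front */

		while (!kthread_should_stop()) {
			/* ... process one batch of packets ... */

			/* At most once per 100ms this reports both RCU and
			 * RCU-Tasks quiescent states and refreshes last_qs. */
			rcu_softirq_qs_periodic(last_qs);
		}
	}
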
+diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
+index 2caa6b86106aa..66828dfc6e74e 100644
+--- a/include/linux/shmem_fs.h
++++ b/include/linux/shmem_fs.h
+@@ -37,7 +37,7 @@ struct shmem_inode_info {
+ 	unsigned int		fsflags;	/* for FS_IOC_[SG]ETFLAGS */
+ 	atomic_t		stop_eviction;	/* hold when working on inode */
+ #ifdef CONFIG_TMPFS_QUOTA
+-	struct dquot		*i_dquot[MAXQUOTAS];
++	struct dquot __rcu	*i_dquot[MAXQUOTAS];
+ #endif
+ 	struct inode		vfs_inode;
+ };
+diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
+index 24b1e5070f4d4..ad97453e7c3a3 100644
+--- a/include/linux/workqueue.h
++++ b/include/linux/workqueue.h
+@@ -405,6 +405,13 @@ enum {
+ 	WQ_MAX_ACTIVE		= 512,	  /* I like 512, better ideas? */
+ 	WQ_UNBOUND_MAX_ACTIVE	= WQ_MAX_ACTIVE,
+ 	WQ_DFL_ACTIVE		= WQ_MAX_ACTIVE / 2,
++
++	/*
++	 * Per-node default cap on min_active. Unless explicitly set, min_active
++	 * is set to min(max_active, WQ_DFL_MIN_ACTIVE). For more details, see
++	 * workqueue_struct->min_active definition.
++	 */
++	WQ_DFL_MIN_ACTIVE	= 8,
+ };
+ 
+ /*
+@@ -447,11 +454,33 @@ extern struct workqueue_struct *system_freezable_power_efficient_wq;
+  * alloc_workqueue - allocate a workqueue
+  * @fmt: printf format for the name of the workqueue
+  * @flags: WQ_* flags
+- * @max_active: max in-flight work items per CPU, 0 for default
++ * @max_active: max in-flight work items, 0 for default
+  * remaining args: args for @fmt
+  *
+- * Allocate a workqueue with the specified parameters.  For detailed
+- * information on WQ_* flags, please refer to
++ * For a per-cpu workqueue, @max_active limits the number of in-flight work
++ * items for each CPU. e.g. @max_active of 1 indicates that each CPU can be
++ * executing at most one work item for the workqueue.
++ *
++ * For unbound workqueues, @max_active limits the number of in-flight work items
++ * for the whole system. e.g. @max_active of 16 indicates that there can be
++ * at most 16 work items executing for the workqueue in the whole system.
++ *
++ * As sharing the same active counter for an unbound workqueue across multiple
++ * NUMA nodes can be expensive, @max_active is distributed to each NUMA node
++ * according to the proportion of the number of online CPUs and enforced
++ * independently.
++ *
++ * Depending on online CPU distribution, a node may end up with a per-node
++ * max_active significantly lower than @max_active, which can lead to
++ * deadlocks if the per-node concurrency limit is lower than the maximum number
++ * of interdependent work items for the workqueue.
++ *
++ * To guarantee forward progress regardless of online CPU distribution, the
++ * concurrency limit on every node is guaranteed to be equal to or greater than
++ * min_active which is set to min(@max_active, %WQ_DFL_MIN_ACTIVE). This means
++ * that the sum of per-node max_active's may be larger than @max_active.
++ *
++ * For detailed information on %WQ_* flags, please refer to
+  * Documentation/core-api/workqueue.rst.
+  *
+  * RETURNS:
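
A hypothetical allocation under the semantics documented above (sketch only):
an unbound workqueue capped at 16 in-flight work items system-wide, with each
node still guaranteed min(16, WQ_DFL_MIN_ACTIVE) = 8 active slots for forward
progress:

	static struct workqueue_struct *example_wq;

	static int example_init(void)
	{
		example_wq = alloc_workqueue("example_wq", WQ_UNBOUND, 16);
		if (!example_wq)
			return -ENOMEM;
		return 0;
	}
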
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index bdee5d649cc61..0d231024570a3 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -393,7 +393,6 @@ enum {
+ 	HCI_LIMITED_PRIVACY,
+ 	HCI_RPA_EXPIRED,
+ 	HCI_RPA_RESOLVING,
+-	HCI_HS_ENABLED,
+ 	HCI_LE_ENABLED,
+ 	HCI_ADVERTISING,
+ 	HCI_ADVERTISING_CONNECTABLE,
+@@ -437,7 +436,6 @@ enum {
+ #define HCI_NCMD_TIMEOUT	msecs_to_jiffies(4000)	/* 4 seconds */
+ #define HCI_ACL_TX_TIMEOUT	msecs_to_jiffies(45000)	/* 45 seconds */
+ #define HCI_AUTO_OFF_TIMEOUT	msecs_to_jiffies(2000)	/* 2 seconds */
+-#define HCI_POWER_OFF_TIMEOUT	msecs_to_jiffies(5000)	/* 5 seconds */
+ #define HCI_LE_CONN_TIMEOUT	msecs_to_jiffies(20000)	/* 20 seconds */
+ #define HCI_LE_AUTOCONN_TIMEOUT	msecs_to_jiffies(4000)	/* 4 seconds */
+ 
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 65dd286693527..24446038d8a6d 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -553,6 +553,7 @@ struct hci_dev {
+ 	__u32			req_status;
+ 	__u32			req_result;
+ 	struct sk_buff		*req_skb;
++	struct sk_buff		*req_rsp;
+ 
+ 	void			*smp_data;
+ 	void			*smp_bredr_data;
+diff --git a/include/net/bluetooth/hci_sync.h b/include/net/bluetooth/hci_sync.h
+index 6efbc2152146b..e2582c2425449 100644
+--- a/include/net/bluetooth/hci_sync.h
++++ b/include/net/bluetooth/hci_sync.h
+@@ -42,7 +42,7 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
+ void hci_cmd_sync_init(struct hci_dev *hdev);
+ void hci_cmd_sync_clear(struct hci_dev *hdev);
+ void hci_cmd_sync_cancel(struct hci_dev *hdev, int err);
+-void __hci_cmd_sync_cancel(struct hci_dev *hdev, int err);
++void hci_cmd_sync_cancel_sync(struct hci_dev *hdev, int err);
+ 
+ int hci_cmd_sync_submit(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ 			void *data, hci_cmd_sync_work_destroy_t destroy);
+diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
+index cf393e72d6ed6..92d7197f9a563 100644
+--- a/include/net/bluetooth/l2cap.h
++++ b/include/net/bluetooth/l2cap.h
+@@ -59,8 +59,6 @@
+ #define L2CAP_WAIT_ACK_POLL_PERIOD	msecs_to_jiffies(200)
+ #define L2CAP_WAIT_ACK_TIMEOUT		msecs_to_jiffies(10000)
+ 
+-#define L2CAP_A2MP_DEFAULT_MTU		670
+-
+ /* L2CAP socket address */
+ struct sockaddr_l2 {
+ 	sa_family_t	l2_family;
+@@ -109,12 +107,6 @@ struct l2cap_conninfo {
+ #define L2CAP_ECHO_RSP		0x09
+ #define L2CAP_INFO_REQ		0x0a
+ #define L2CAP_INFO_RSP		0x0b
+-#define L2CAP_CREATE_CHAN_REQ	0x0c
+-#define L2CAP_CREATE_CHAN_RSP	0x0d
+-#define L2CAP_MOVE_CHAN_REQ	0x0e
+-#define L2CAP_MOVE_CHAN_RSP	0x0f
+-#define L2CAP_MOVE_CHAN_CFM	0x10
+-#define L2CAP_MOVE_CHAN_CFM_RSP	0x11
+ #define L2CAP_CONN_PARAM_UPDATE_REQ	0x12
+ #define L2CAP_CONN_PARAM_UPDATE_RSP	0x13
+ #define L2CAP_LE_CONN_REQ	0x14
+@@ -144,7 +136,6 @@ struct l2cap_conninfo {
+ /* L2CAP fixed channels */
+ #define L2CAP_FC_SIG_BREDR	0x02
+ #define L2CAP_FC_CONNLESS	0x04
+-#define L2CAP_FC_A2MP		0x08
+ #define L2CAP_FC_ATT		0x10
+ #define L2CAP_FC_SIG_LE		0x20
+ #define L2CAP_FC_SMP_LE		0x40
+@@ -267,7 +258,6 @@ struct l2cap_conn_rsp {
+ /* channel identifier */
+ #define L2CAP_CID_SIGNALING	0x0001
+ #define L2CAP_CID_CONN_LESS	0x0002
+-#define L2CAP_CID_A2MP		0x0003
+ #define L2CAP_CID_ATT		0x0004
+ #define L2CAP_CID_LE_SIGNALING	0x0005
+ #define L2CAP_CID_SMP		0x0006
+@@ -282,7 +272,6 @@ struct l2cap_conn_rsp {
+ #define L2CAP_CR_BAD_PSM	0x0002
+ #define L2CAP_CR_SEC_BLOCK	0x0003
+ #define L2CAP_CR_NO_MEM		0x0004
+-#define L2CAP_CR_BAD_AMP	0x0005
+ #define L2CAP_CR_INVALID_SCID	0x0006
+ #define L2CAP_CR_SCID_IN_USE	0x0007
+ 
+@@ -404,29 +393,6 @@ struct l2cap_info_rsp {
+ 	__u8        data[];
+ } __packed;
+ 
+-struct l2cap_create_chan_req {
+-	__le16      psm;
+-	__le16      scid;
+-	__u8        amp_id;
+-} __packed;
+-
+-struct l2cap_create_chan_rsp {
+-	__le16      dcid;
+-	__le16      scid;
+-	__le16      result;
+-	__le16      status;
+-} __packed;
+-
+-struct l2cap_move_chan_req {
+-	__le16      icid;
+-	__u8        dest_amp_id;
+-} __packed;
+-
+-struct l2cap_move_chan_rsp {
+-	__le16      icid;
+-	__le16      result;
+-} __packed;
+-
+ #define L2CAP_MR_SUCCESS	0x0000
+ #define L2CAP_MR_PEND		0x0001
+ #define L2CAP_MR_BAD_ID		0x0002
+@@ -539,8 +505,6 @@ struct l2cap_seq_list {
+ 
+ struct l2cap_chan {
+ 	struct l2cap_conn	*conn;
+-	struct hci_conn		*hs_hcon;
+-	struct hci_chan		*hs_hchan;
+ 	struct kref	kref;
+ 	atomic_t	nesting;
+ 
+@@ -591,12 +555,6 @@ struct l2cap_chan {
+ 	unsigned long	conn_state;
+ 	unsigned long	flags;
+ 
+-	__u8		remote_amp_id;
+-	__u8		local_amp_id;
+-	__u8		move_id;
+-	__u8		move_state;
+-	__u8		move_role;
+-
+ 	__u16		next_tx_seq;
+ 	__u16		expected_ack_seq;
+ 	__u16		expected_tx_seq;
+diff --git a/include/soc/qcom/qcom-spmi-pmic.h b/include/soc/qcom/qcom-spmi-pmic.h
+index c47cc71a999ec..fdd462b295927 100644
+--- a/include/soc/qcom/qcom-spmi-pmic.h
++++ b/include/soc/qcom/qcom-spmi-pmic.h
+@@ -48,7 +48,7 @@
+ #define PMK8350_SUBTYPE		0x2f
+ #define PMR735B_SUBTYPE		0x34
+ #define PM6350_SUBTYPE		0x36
+-#define PM2250_SUBTYPE		0x37
++#define PM4125_SUBTYPE		0x37
+ 
+ #define PMI8998_FAB_ID_SMIC	0x11
+ #define PMI8998_FAB_ID_GF	0x30
+diff --git a/include/sound/tas2781.h b/include/sound/tas2781.h
+index 475294c853aa4..be58d870505a4 100644
+--- a/include/sound/tas2781.h
++++ b/include/sound/tas2781.h
+@@ -131,6 +131,9 @@ struct tasdevice_priv {
+ 		const struct firmware *fmw, int offset);
+ 	int (*tasdevice_load_block)(struct tasdevice_priv *tas_priv,
+ 		struct tasdev_blk *block);
++
++	int (*save_calibration)(struct tasdevice_priv *tas_priv);
++	void (*apply_calibration)(struct tasdevice_priv *tas_priv);
+ };
+ 
+ void tas2781_reset(struct tasdevice_priv *tas_dev);
+@@ -140,6 +143,8 @@ int tascodec_init(struct tasdevice_priv *tas_priv, void *codec,
+ struct tasdevice_priv *tasdevice_kzalloc(struct i2c_client *i2c);
+ int tasdevice_init(struct tasdevice_priv *tas_priv);
+ void tasdevice_remove(struct tasdevice_priv *tas_priv);
++int tasdevice_save_calibration(struct tasdevice_priv *tas_priv);
++void tasdevice_apply_calibration(struct tasdevice_priv *tas_priv);
+ int tasdevice_dev_read(struct tasdevice_priv *tas_priv,
+ 	unsigned short chn, unsigned int reg, unsigned int *value);
+ int tasdevice_dev_write(struct tasdevice_priv *tas_priv,
+diff --git a/init/main.c b/init/main.c
+index e24b0780fdff7..9e6ab6d593bd8 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -88,6 +88,7 @@
+ #include <linux/sched/task_stack.h>
+ #include <linux/context_tracking.h>
+ #include <linux/random.h>
++#include <linux/moduleloader.h>
+ #include <linux/list.h>
+ #include <linux/integrity.h>
+ #include <linux/proc_ns.h>
+@@ -1402,11 +1403,11 @@ static void mark_readonly(void)
+ 	if (rodata_enabled) {
+ 		/*
+ 		 * load_module() results in W+X mappings, which are cleaned
+-		 * up with call_rcu().  Let's make sure that queued work is
++		 * up with init_free_wq. Let's make sure that queued work is
+ 		 * flushed so that we don't hit false positives looking for
+ 		 * insecure pages which are W+X.
+ 		 */
+-		rcu_barrier();
++		flush_module_init_free_work();
+ 		mark_rodata_ro();
+ 		rodata_test();
+ 	} else
+diff --git a/io_uring/filetable.c b/io_uring/filetable.c
+index e7d749991de42..6e86e6188dbee 100644
+--- a/io_uring/filetable.c
++++ b/io_uring/filetable.c
+@@ -87,13 +87,10 @@ static int io_install_fixed_file(struct io_ring_ctx *ctx, struct file *file,
+ 		io_file_bitmap_clear(&ctx->file_table, slot_index);
+ 	}
+ 
+-	ret = io_scm_file_account(ctx, file);
+-	if (!ret) {
+-		*io_get_tag_slot(ctx->file_data, slot_index) = 0;
+-		io_fixed_file_set(file_slot, file);
+-		io_file_bitmap_set(&ctx->file_table, slot_index);
+-	}
+-	return ret;
++	*io_get_tag_slot(ctx->file_data, slot_index) = 0;
++	io_fixed_file_set(file_slot, file);
++	io_file_bitmap_set(&ctx->file_table, slot_index);
++	return 0;
+ }
+ 
+ int __io_fixed_fd_install(struct io_ring_ctx *ctx, struct file *file,
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 59f5791c90c31..45d6e440bdc04 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -60,7 +60,6 @@
+ #include <linux/net.h>
+ #include <net/sock.h>
+ #include <net/af_unix.h>
+-#include <net/scm.h>
+ #include <linux/anon_inodes.h>
+ #include <linux/sched/mm.h>
+ #include <linux/uaccess.h>
+@@ -177,19 +176,6 @@ static struct ctl_table kernel_io_uring_disabled_table[] = {
+ };
+ #endif
+ 
+-struct sock *io_uring_get_socket(struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+-	if (io_is_uring_fops(file)) {
+-		struct io_ring_ctx *ctx = file->private_data;
+-
+-		return ctx->ring_sock->sk;
+-	}
+-#endif
+-	return NULL;
+-}
+-EXPORT_SYMBOL(io_uring_get_socket);
+-
+ static inline void io_submit_flush_completions(struct io_ring_ctx *ctx)
+ {
+ 	if (!wq_list_empty(&ctx->submit_state.compl_reqs) ||
+@@ -1188,12 +1174,11 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx, struct io_tw_state *ts)
+ 
+ static unsigned int handle_tw_list(struct llist_node *node,
+ 				   struct io_ring_ctx **ctx,
+-				   struct io_tw_state *ts,
+-				   struct llist_node *last)
++				   struct io_tw_state *ts)
+ {
+ 	unsigned int count = 0;
+ 
+-	while (node && node != last) {
++	do {
+ 		struct llist_node *next = node->next;
+ 		struct io_kiocb *req = container_of(node, struct io_kiocb,
+ 						    io_task_work.node);
+@@ -1217,7 +1202,7 @@ static unsigned int handle_tw_list(struct llist_node *node,
+ 			*ctx = NULL;
+ 			cond_resched();
+ 		}
+-	}
++	} while (node);
+ 
+ 	return count;
+ }
+@@ -1236,22 +1221,6 @@ static inline struct llist_node *io_llist_xchg(struct llist_head *head,
+ 	return xchg(&head->first, new);
+ }
+ 
+-/**
+- * io_llist_cmpxchg - possibly swap all entries in a lock-less list
+- * @head:	the head of lock-less list to delete all entries
+- * @old:	expected old value of the first entry of the list
+- * @new:	new entry as the head of the list
+- *
+- * perform a cmpxchg on the first entry of the list.
+- */
+-
+-static inline struct llist_node *io_llist_cmpxchg(struct llist_head *head,
+-						  struct llist_node *old,
+-						  struct llist_node *new)
+-{
+-	return cmpxchg(&head->first, old, new);
+-}
+-
+ static __cold void io_fallback_tw(struct io_uring_task *tctx, bool sync)
+ {
+ 	struct llist_node *node = llist_del_all(&tctx->task_list);
+@@ -1286,9 +1255,7 @@ void tctx_task_work(struct callback_head *cb)
+ 	struct io_ring_ctx *ctx = NULL;
+ 	struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
+ 						  task_work);
+-	struct llist_node fake = {};
+ 	struct llist_node *node;
+-	unsigned int loops = 0;
+ 	unsigned int count = 0;
+ 
+ 	if (unlikely(current->flags & PF_EXITING)) {
+@@ -1296,21 +1263,9 @@ void tctx_task_work(struct callback_head *cb)
+ 		return;
+ 	}
+ 
+-	do {
+-		loops++;
+-		node = io_llist_xchg(&tctx->task_list, &fake);
+-		count += handle_tw_list(node, &ctx, &ts, &fake);
+-
+-		/* skip expensive cmpxchg if there are items in the list */
+-		if (READ_ONCE(tctx->task_list.first) != &fake)
+-			continue;
+-		if (ts.locked && !wq_list_empty(&ctx->submit_state.compl_reqs)) {
+-			io_submit_flush_completions(ctx);
+-			if (READ_ONCE(tctx->task_list.first) != &fake)
+-				continue;
+-		}
+-		node = io_llist_cmpxchg(&tctx->task_list, &fake, NULL);
+-	} while (node != &fake);
++	node = llist_del_all(&tctx->task_list);
++	if (node)
++		count = handle_tw_list(node, &ctx, &ts);
+ 
+ 	ctx_flush_and_put(ctx, &ts);
+ 
+@@ -1318,7 +1273,7 @@ void tctx_task_work(struct callback_head *cb)
+ 	if (unlikely(atomic_read(&tctx->in_cancel)))
+ 		io_uring_drop_tctx_refs(current);
+ 
+-	trace_io_uring_task_work_run(tctx, count, loops);
++	trace_io_uring_task_work_run(tctx, count, 1);
+ }
+ 
+ static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
+@@ -1415,7 +1370,20 @@ static void __cold io_move_task_work_from_local(struct io_ring_ctx *ctx)
+ 	}
+ }
+ 
+-static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts)
++static bool io_run_local_work_continue(struct io_ring_ctx *ctx, int events,
++				       int min_events)
++{
++	if (llist_empty(&ctx->work_llist))
++		return false;
++	if (events < min_events)
++		return true;
++	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
++		atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
++	return false;
++}
++
++static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts,
++			       int min_events)
+ {
+ 	struct llist_node *node;
+ 	unsigned int loops = 0;
+@@ -1444,18 +1412,20 @@ static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts)
+ 	}
+ 	loops++;
+ 
+-	if (!llist_empty(&ctx->work_llist))
++	if (io_run_local_work_continue(ctx, ret, min_events))
+ 		goto again;
+ 	if (ts->locked) {
+ 		io_submit_flush_completions(ctx);
+-		if (!llist_empty(&ctx->work_llist))
++		if (io_run_local_work_continue(ctx, ret, min_events))
+ 			goto again;
+ 	}
++
+ 	trace_io_uring_local_work_run(ctx, ret, loops);
+ 	return ret;
+ }
+ 
+-static inline int io_run_local_work_locked(struct io_ring_ctx *ctx)
++static inline int io_run_local_work_locked(struct io_ring_ctx *ctx,
++					   int min_events)
+ {
+ 	struct io_tw_state ts = { .locked = true, };
+ 	int ret;
+@@ -1463,20 +1433,20 @@ static inline int io_run_local_work_locked(struct io_ring_ctx *ctx)
+ 	if (llist_empty(&ctx->work_llist))
+ 		return 0;
+ 
+-	ret = __io_run_local_work(ctx, &ts);
++	ret = __io_run_local_work(ctx, &ts, min_events);
+ 	/* shouldn't happen! */
+ 	if (WARN_ON_ONCE(!ts.locked))
+ 		mutex_lock(&ctx->uring_lock);
+ 	return ret;
+ }
+ 
+-static int io_run_local_work(struct io_ring_ctx *ctx)
++static int io_run_local_work(struct io_ring_ctx *ctx, int min_events)
+ {
+ 	struct io_tw_state ts = {};
+ 	int ret;
+ 
+ 	ts.locked = mutex_trylock(&ctx->uring_lock);
+-	ret = __io_run_local_work(ctx, &ts);
++	ret = __io_run_local_work(ctx, &ts, min_events);
+ 	if (ts.locked)
+ 		mutex_unlock(&ctx->uring_lock);
+ 
+@@ -1672,7 +1642,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
+ 		    io_task_work_pending(ctx)) {
+ 			u32 tail = ctx->cached_cq_tail;
+ 
+-			(void) io_run_local_work_locked(ctx);
++			(void) io_run_local_work_locked(ctx, min);
+ 
+ 			if (task_work_pending(current) ||
+ 			    wq_list_empty(&ctx->iopoll_list)) {
+@@ -2515,7 +2485,7 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx)
+ {
+ 	if (!llist_empty(&ctx->work_llist)) {
+ 		__set_current_state(TASK_RUNNING);
+-		if (io_run_local_work(ctx) > 0)
++		if (io_run_local_work(ctx, INT_MAX) > 0)
+ 			return 0;
+ 	}
+ 	if (io_run_task_work() > 0)
+@@ -2538,7 +2508,7 @@ static bool current_pending_io(void)
+ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 					  struct io_wait_queue *iowq)
+ {
+-	int io_wait, ret;
++	int ret;
+ 
+ 	if (unlikely(READ_ONCE(ctx->check_cq)))
+ 		return 1;
+@@ -2556,7 +2526,6 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 	 * can take into account that the task is waiting for IO - turns out
+ 	 * to be important for low QD IO.
+ 	 */
+-	io_wait = current->in_iowait;
+ 	if (current_pending_io())
+ 		current->in_iowait = 1;
+ 	ret = 0;
+@@ -2564,7 +2533,7 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 		schedule();
+ 	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
+ 		ret = -ETIME;
+-	current->in_iowait = io_wait;
++	current->in_iowait = 0;
+ 	return ret;
+ }
+ 
+@@ -2583,7 +2552,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 	if (!io_allowed_run_tw(ctx))
+ 		return -EEXIST;
+ 	if (!llist_empty(&ctx->work_llist))
+-		io_run_local_work(ctx);
++		io_run_local_work(ctx, min_events);
+ 	io_run_task_work();
+ 	io_cqring_overflow_flush(ctx);
+ 	/* if user messes with these they will just get an early return */
+@@ -2621,11 +2590,10 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 
+ 	trace_io_uring_cqring_wait(ctx, min_events);
+ 	do {
++		int nr_wait = (int) iowq.cq_tail - READ_ONCE(ctx->rings->cq.tail);
+ 		unsigned long check_cq;
+ 
+ 		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
+-			int nr_wait = (int) iowq.cq_tail - READ_ONCE(ctx->rings->cq.tail);
+-
+ 			atomic_set(&ctx->cq_wait_nr, nr_wait);
+ 			set_current_state(TASK_INTERRUPTIBLE);
+ 		} else {
+@@ -2644,7 +2612,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
+ 		 */
+ 		io_run_task_work();
+ 		if (!llist_empty(&ctx->work_llist))
+-			io_run_local_work(ctx);
++			io_run_local_work(ctx, nr_wait);
+ 
+ 		/*
+ 		 * Non-local task_work will be run on exit to userspace, but
+@@ -2715,7 +2683,7 @@ static void *__io_uaddr_map(struct page ***pages, unsigned short *npages,
+ 	struct page **page_array;
+ 	unsigned int nr_pages;
+ 	void *page_addr;
+-	int ret, i;
++	int ret, i, pinned;
+ 
+ 	*npages = 0;
+ 
+@@ -2729,12 +2697,12 @@ static void *__io_uaddr_map(struct page ***pages, unsigned short *npages,
+ 	if (!page_array)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	ret = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
+-					page_array);
+-	if (ret != nr_pages) {
+-err:
+-		io_pages_free(&page_array, ret > 0 ? ret : 0);
+-		return ret < 0 ? ERR_PTR(ret) : ERR_PTR(-EFAULT);
++
++	pinned = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
++				     page_array);
++	if (pinned != nr_pages) {
++		ret = (pinned < 0) ? pinned : -EFAULT;
++		goto free_pages;
+ 	}
+ 
+ 	page_addr = page_address(page_array[0]);
+@@ -2748,7 +2716,7 @@ static void *__io_uaddr_map(struct page ***pages, unsigned short *npages,
+ 		 * didn't support this feature.
+ 		 */
+ 		if (PageHighMem(page_array[i]))
+-			goto err;
++			goto free_pages;
+ 
+ 		/*
+ 		 * No support for discontig pages for now, should either be a
+@@ -2757,13 +2725,17 @@ static void *__io_uaddr_map(struct page ***pages, unsigned short *npages,
+ 		 * just fail them with EINVAL.
+ 		 */
+ 		if (page_address(page_array[i]) != page_addr)
+-			goto err;
++			goto free_pages;
+ 		page_addr += PAGE_SIZE;
+ 	}
+ 
+ 	*pages = page_array;
+ 	*npages = nr_pages;
+ 	return page_to_virt(page_array[0]);
++
++free_pages:
++	io_pages_free(&page_array, pinned > 0 ? pinned : 0);
++	return ERR_PTR(ret);
+ }
+ 
+ static void *io_rings_map(struct io_ring_ctx *ctx, unsigned long uaddr,
+@@ -2952,13 +2924,6 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 		io_rsrc_node_destroy(ctx, ctx->rsrc_node);
+ 
+ 	WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
+-
+-#if defined(CONFIG_UNIX)
+-	if (ctx->ring_sock) {
+-		ctx->ring_sock->file = NULL; /* so that iput() is called */
+-		sock_release(ctx->ring_sock);
+-	}
+-#endif
+ 	WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
+ 
+ 	io_alloc_cache_free(&ctx->rsrc_node_cache, io_rsrc_node_cache_free);
+@@ -3374,7 +3339,7 @@ static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+ 
+ 	if ((ctx->flags & IORING_SETUP_DEFER_TASKRUN) &&
+ 	    io_allowed_defer_tw_run(ctx))
+-		ret |= io_run_local_work(ctx) > 0;
++		ret |= io_run_local_work(ctx, INT_MAX) > 0;
+ 	ret |= io_cancel_defer_files(ctx, task, cancel_all);
+ 	mutex_lock(&ctx->uring_lock);
+ 	ret |= io_poll_remove_all(ctx, task, cancel_all);
+@@ -3736,7 +3701,7 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ 			 * it should handle ownership problems if any.
+ 			 */
+ 			if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+-				(void)io_run_local_work_locked(ctx);
++				(void)io_run_local_work_locked(ctx, min_complete);
+ 		}
+ 		mutex_unlock(&ctx->uring_lock);
+ 	}
+@@ -3880,32 +3845,12 @@ static int io_uring_install_fd(struct file *file)
+ /*
+  * Allocate an anonymous fd, this is what constitutes the application
+  * visible backing of an io_uring instance. The application mmaps this
+- * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
+- * we have to tie this fd to a socket for file garbage collection purposes.
++ * fd to gain access to the SQ/CQ ring details.
+  */
+ static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
+ {
+-	struct file *file;
+-#if defined(CONFIG_UNIX)
+-	int ret;
+-
+-	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
+-				&ctx->ring_sock);
+-	if (ret)
+-		return ERR_PTR(ret);
+-#endif
+-
+-	file = anon_inode_getfile_secure("[io_uring]", &io_uring_fops, ctx,
++	return anon_inode_getfile_secure("[io_uring]", &io_uring_fops, ctx,
+ 					 O_RDWR | O_CLOEXEC, NULL);
+-#if defined(CONFIG_UNIX)
+-	if (IS_ERR(file)) {
+-		sock_release(ctx->ring_sock);
+-		ctx->ring_sock = NULL;
+-	} else {
+-		ctx->ring_sock->file = file;
+-	}
+-#endif
+-	return file;
+ }
+ 
+ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
+diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
+index c9992cd7f1385..0d66a7058dbe1 100644
+--- a/io_uring/io_uring.h
++++ b/io_uring/io_uring.h
+@@ -61,7 +61,6 @@ struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
+ 			       unsigned issue_flags);
+ 
+ void __io_req_task_work_add(struct io_kiocb *req, unsigned flags);
+-bool io_is_uring_fops(struct file *file);
+ bool io_alloc_async_data(struct io_kiocb *req);
+ void io_req_task_queue(struct io_kiocb *req);
+ void io_queue_iowq(struct io_kiocb *req, struct io_tw_state *ts_dont_use);
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 161622029147c..4aaeada03f1e7 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -204,16 +204,115 @@ static int io_setup_async_msg(struct io_kiocb *req,
+ 	return -EAGAIN;
+ }
+ 
++#ifdef CONFIG_COMPAT
++static int io_compat_msg_copy_hdr(struct io_kiocb *req,
++				  struct io_async_msghdr *iomsg,
++				  struct compat_msghdr *msg, int ddir)
++{
++	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
++	struct compat_iovec __user *uiov;
++	int ret;
++
++	if (copy_from_user(msg, sr->umsg_compat, sizeof(*msg)))
++		return -EFAULT;
++
++	uiov = compat_ptr(msg->msg_iov);
++	if (req->flags & REQ_F_BUFFER_SELECT) {
++		compat_ssize_t clen;
++
++		iomsg->free_iov = NULL;
++		if (msg->msg_iovlen == 0) {
++			sr->len = 0;
++		} else if (msg->msg_iovlen > 1) {
++			return -EINVAL;
++		} else {
++			if (!access_ok(uiov, sizeof(*uiov)))
++				return -EFAULT;
++			if (__get_user(clen, &uiov->iov_len))
++				return -EFAULT;
++			if (clen < 0)
++				return -EINVAL;
++			sr->len = clen;
++		}
++
++		return 0;
++	}
++
++	iomsg->free_iov = iomsg->fast_iov;
++	ret = __import_iovec(ddir, (struct iovec __user *)uiov, msg->msg_iovlen,
++				UIO_FASTIOV, &iomsg->free_iov,
++				&iomsg->msg.msg_iter, true);
++	if (unlikely(ret < 0))
++		return ret;
++
++	return 0;
++}
++#endif
++
++static int io_msg_copy_hdr(struct io_kiocb *req, struct io_async_msghdr *iomsg,
++			   struct user_msghdr *msg, int ddir)
++{
++	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
++	int ret;
++
++	if (copy_from_user(msg, sr->umsg, sizeof(*sr->umsg)))
++		return -EFAULT;
++
++	if (req->flags & REQ_F_BUFFER_SELECT) {
++		if (msg->msg_iovlen == 0) {
++			sr->len = iomsg->fast_iov[0].iov_len = 0;
++			iomsg->fast_iov[0].iov_base = NULL;
++			iomsg->free_iov = NULL;
++		} else if (msg->msg_iovlen > 1) {
++			return -EINVAL;
++		} else {
++			if (copy_from_user(iomsg->fast_iov, msg->msg_iov,
++					   sizeof(*msg->msg_iov)))
++				return -EFAULT;
++			sr->len = iomsg->fast_iov[0].iov_len;
++			iomsg->free_iov = NULL;
++		}
++
++		return 0;
++	}
++
++	iomsg->free_iov = iomsg->fast_iov;
++	ret = __import_iovec(ddir, msg->msg_iov, msg->msg_iovlen, UIO_FASTIOV,
++				&iomsg->free_iov, &iomsg->msg.msg_iter, false);
++	if (unlikely(ret < 0))
++		return ret;
++
++	return 0;
++}
++
+ static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+ 			       struct io_async_msghdr *iomsg)
+ {
+ 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
++	struct user_msghdr msg;
+ 	int ret;
+ 
+ 	iomsg->msg.msg_name = &iomsg->addr;
+-	iomsg->free_iov = iomsg->fast_iov;
+-	ret = sendmsg_copy_msghdr(&iomsg->msg, sr->umsg, sr->msg_flags,
+-					&iomsg->free_iov);
++	iomsg->msg.msg_iter.nr_segs = 0;
++
++#ifdef CONFIG_COMPAT
++	if (unlikely(req->ctx->compat)) {
++		struct compat_msghdr cmsg;
++
++		ret = io_compat_msg_copy_hdr(req, iomsg, &cmsg, ITER_SOURCE);
++		if (unlikely(ret))
++			return ret;
++
++		return __get_compat_msghdr(&iomsg->msg, &cmsg, NULL);
++	}
++#endif
++
++	ret = io_msg_copy_hdr(req, iomsg, &msg, ITER_SOURCE);
++	if (unlikely(ret))
++		return ret;
++
++	ret = __copy_msghdr(&iomsg->msg, &msg, NULL);
++
+ 	/* save msg_control as sys_sendmsg() overwrites it */
+ 	sr->msg_control = iomsg->msg.msg_control_user;
+ 	return ret;
+@@ -435,142 +534,77 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
+ 	return IOU_OK;
+ }
+ 
+-static bool io_recvmsg_multishot_overflow(struct io_async_msghdr *iomsg)
++static int io_recvmsg_mshot_prep(struct io_kiocb *req,
++				 struct io_async_msghdr *iomsg,
++				 int namelen, size_t controllen)
+ {
+-	int hdr;
+-
+-	if (iomsg->namelen < 0)
+-		return true;
+-	if (check_add_overflow((int)sizeof(struct io_uring_recvmsg_out),
+-			       iomsg->namelen, &hdr))
+-		return true;
+-	if (check_add_overflow(hdr, (int)iomsg->controllen, &hdr))
+-		return true;
++	if ((req->flags & (REQ_F_APOLL_MULTISHOT|REQ_F_BUFFER_SELECT)) ==
++			  (REQ_F_APOLL_MULTISHOT|REQ_F_BUFFER_SELECT)) {
++		int hdr;
++
++		if (unlikely(namelen < 0))
++			return -EOVERFLOW;
++		if (check_add_overflow(sizeof(struct io_uring_recvmsg_out),
++					namelen, &hdr))
++			return -EOVERFLOW;
++		if (check_add_overflow(hdr, controllen, &hdr))
++			return -EOVERFLOW;
++
++		iomsg->namelen = namelen;
++		iomsg->controllen = controllen;
++		return 0;
++	}
+ 
+-	return false;
++	return 0;
+ }
+ 
+-static int __io_recvmsg_copy_hdr(struct io_kiocb *req,
+-				 struct io_async_msghdr *iomsg)
++static int io_recvmsg_copy_hdr(struct io_kiocb *req,
++			       struct io_async_msghdr *iomsg)
+ {
+-	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+ 	struct user_msghdr msg;
+ 	int ret;
+ 
+-	if (copy_from_user(&msg, sr->umsg, sizeof(*sr->umsg)))
+-		return -EFAULT;
+-
+-	ret = __copy_msghdr(&iomsg->msg, &msg, &iomsg->uaddr);
+-	if (ret)
+-		return ret;
+-
+-	if (req->flags & REQ_F_BUFFER_SELECT) {
+-		if (msg.msg_iovlen == 0) {
+-			sr->len = iomsg->fast_iov[0].iov_len = 0;
+-			iomsg->fast_iov[0].iov_base = NULL;
+-			iomsg->free_iov = NULL;
+-		} else if (msg.msg_iovlen > 1) {
+-			return -EINVAL;
+-		} else {
+-			if (copy_from_user(iomsg->fast_iov, msg.msg_iov, sizeof(*msg.msg_iov)))
+-				return -EFAULT;
+-			sr->len = iomsg->fast_iov[0].iov_len;
+-			iomsg->free_iov = NULL;
+-		}
+-
+-		if (req->flags & REQ_F_APOLL_MULTISHOT) {
+-			iomsg->namelen = msg.msg_namelen;
+-			iomsg->controllen = msg.msg_controllen;
+-			if (io_recvmsg_multishot_overflow(iomsg))
+-				return -EOVERFLOW;
+-		}
+-	} else {
+-		iomsg->free_iov = iomsg->fast_iov;
+-		ret = __import_iovec(ITER_DEST, msg.msg_iov, msg.msg_iovlen, UIO_FASTIOV,
+-				     &iomsg->free_iov, &iomsg->msg.msg_iter,
+-				     false);
+-		if (ret > 0)
+-			ret = 0;
+-	}
+-
+-	return ret;
+-}
++	iomsg->msg.msg_name = &iomsg->addr;
++	iomsg->msg.msg_iter.nr_segs = 0;
+ 
+ #ifdef CONFIG_COMPAT
+-static int __io_compat_recvmsg_copy_hdr(struct io_kiocb *req,
+-					struct io_async_msghdr *iomsg)
+-{
+-	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+-	struct compat_msghdr msg;
+-	struct compat_iovec __user *uiov;
+-	int ret;
+-
+-	if (copy_from_user(&msg, sr->umsg_compat, sizeof(msg)))
+-		return -EFAULT;
+-
+-	ret = __get_compat_msghdr(&iomsg->msg, &msg, &iomsg->uaddr);
+-	if (ret)
+-		return ret;
++	if (unlikely(req->ctx->compat)) {
++		struct compat_msghdr cmsg;
+ 
+-	uiov = compat_ptr(msg.msg_iov);
+-	if (req->flags & REQ_F_BUFFER_SELECT) {
+-		compat_ssize_t clen;
+-
+-		iomsg->free_iov = NULL;
+-		if (msg.msg_iovlen == 0) {
+-			sr->len = 0;
+-		} else if (msg.msg_iovlen > 1) {
+-			return -EINVAL;
+-		} else {
+-			if (!access_ok(uiov, sizeof(*uiov)))
+-				return -EFAULT;
+-			if (__get_user(clen, &uiov->iov_len))
+-				return -EFAULT;
+-			if (clen < 0)
+-				return -EINVAL;
+-			sr->len = clen;
+-		}
++		ret = io_compat_msg_copy_hdr(req, iomsg, &cmsg, ITER_DEST);
++		if (unlikely(ret))
++			return ret;
+ 
+-		if (req->flags & REQ_F_APOLL_MULTISHOT) {
+-			iomsg->namelen = msg.msg_namelen;
+-			iomsg->controllen = msg.msg_controllen;
+-			if (io_recvmsg_multishot_overflow(iomsg))
+-				return -EOVERFLOW;
+-		}
+-	} else {
+-		iomsg->free_iov = iomsg->fast_iov;
+-		ret = __import_iovec(ITER_DEST, (struct iovec __user *)uiov, msg.msg_iovlen,
+-				   UIO_FASTIOV, &iomsg->free_iov,
+-				   &iomsg->msg.msg_iter, true);
+-		if (ret < 0)
++		ret = __get_compat_msghdr(&iomsg->msg, &cmsg, &iomsg->uaddr);
++		if (unlikely(ret))
+ 			return ret;
+-	}
+ 
+-	return 0;
+-}
++		return io_recvmsg_mshot_prep(req, iomsg, cmsg.msg_namelen,
++						cmsg.msg_controllen);
++	}
+ #endif
+ 
+-static int io_recvmsg_copy_hdr(struct io_kiocb *req,
+-			       struct io_async_msghdr *iomsg)
+-{
+-	iomsg->msg.msg_name = &iomsg->addr;
+-	iomsg->msg.msg_iter.nr_segs = 0;
++	ret = io_msg_copy_hdr(req, iomsg, &msg, ITER_DEST);
++	if (unlikely(ret))
++		return ret;
+ 
+-#ifdef CONFIG_COMPAT
+-	if (req->ctx->compat)
+-		return __io_compat_recvmsg_copy_hdr(req, iomsg);
+-#endif
++	ret = __copy_msghdr(&iomsg->msg, &msg, &iomsg->uaddr);
++	if (unlikely(ret))
++		return ret;
+ 
+-	return __io_recvmsg_copy_hdr(req, iomsg);
++	return io_recvmsg_mshot_prep(req, iomsg, msg.msg_namelen,
++					msg.msg_controllen);
+ }
+ 
+ int io_recvmsg_prep_async(struct io_kiocb *req)
+ {
++	struct io_async_msghdr *iomsg;
+ 	int ret;
+ 
+ 	if (!io_msg_alloc_async_prep(req))
+ 		return -ENOMEM;
+-	ret = io_recvmsg_copy_hdr(req, req->async_data);
++	iomsg = req->async_data;
++	ret = io_recvmsg_copy_hdr(req, iomsg);
+ 	if (!ret)
+ 		req->flags |= REQ_F_NEED_CLEANUP;
+ 	return ret;
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 7513afc7b702e..58b7556f621eb 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -995,7 +995,6 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
+ 	struct io_hash_bucket *bucket;
+ 	struct io_kiocb *preq;
+ 	int ret2, ret = 0;
+-	struct io_tw_state ts = { .locked = true };
+ 
+ 	io_ring_submit_lock(ctx, issue_flags);
+ 	preq = io_poll_find(ctx, true, &cd, &ctx->cancel_table, &bucket);
+@@ -1044,7 +1043,8 @@ int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 	req_set_fail(preq);
+ 	io_req_set_res(preq, -ECANCELED, 0);
+-	io_req_task_complete(preq, &ts);
++	preq->io_task_work.func = io_req_task_complete;
++	io_req_task_work_add(preq);
+ out:
+ 	io_ring_submit_unlock(ctx, issue_flags);
+ 	if (ret < 0) {
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index f521c5965a933..4818b79231ddb 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -24,7 +24,6 @@ struct io_rsrc_update {
+ };
+ 
+ static void io_rsrc_buf_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);
+-static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);
+ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
+ 				  struct io_mapped_ubuf **pimu,
+ 				  struct page **last_hpage);
+@@ -157,7 +156,7 @@ static void io_rsrc_put_work(struct io_rsrc_node *node)
+ 
+ 	switch (node->type) {
+ 	case IORING_RSRC_FILE:
+-		io_rsrc_file_put(node->ctx, prsrc);
++		fput(prsrc->file);
+ 		break;
+ 	case IORING_RSRC_BUFFER:
+ 		io_rsrc_buf_put(node->ctx, prsrc);
+@@ -402,23 +401,13 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ 				break;
+ 			}
+ 			/*
+-			 * Don't allow io_uring instances to be registered. If
+-			 * UNIX isn't enabled, then this causes a reference
+-			 * cycle and this instance can never get freed. If UNIX
+-			 * is enabled we'll handle it just fine, but there's
+-			 * still no point in allowing a ring fd as it doesn't
+-			 * support regular read/write anyway.
++			 * Don't allow io_uring instances to be registered.
+ 			 */
+ 			if (io_is_uring_fops(file)) {
+ 				fput(file);
+ 				err = -EBADF;
+ 				break;
+ 			}
+-			err = io_scm_file_account(ctx, file);
+-			if (err) {
+-				fput(file);
+-				break;
+-			}
+ 			*io_get_tag_slot(data, i) = tag;
+ 			io_fixed_file_set(file_slot, file);
+ 			io_file_bitmap_set(&ctx->file_table, i);
+@@ -675,22 +664,12 @@ void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ 	for (i = 0; i < ctx->nr_user_files; i++) {
+ 		struct file *file = io_file_from_index(&ctx->file_table, i);
+ 
+-		/* skip scm accounted files, they'll be freed by ->ring_sock */
+-		if (!file || io_file_need_scm(file))
++		if (!file)
+ 			continue;
+ 		io_file_bitmap_clear(&ctx->file_table, i);
+ 		fput(file);
+ 	}
+ 
+-#if defined(CONFIG_UNIX)
+-	if (ctx->ring_sock) {
+-		struct sock *sock = ctx->ring_sock->sk;
+-		struct sk_buff *skb;
+-
+-		while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
+-			kfree_skb(skb);
+-	}
+-#endif
+ 	io_free_file_tables(&ctx->file_table);
+ 	io_file_table_set_alloc_range(ctx, 0, 0);
+ 	io_rsrc_data_free(ctx->file_data);
+@@ -718,137 +697,6 @@ int io_sqe_files_unregister(struct io_ring_ctx *ctx)
+ 	return ret;
+ }
+ 
+-/*
+- * Ensure the UNIX gc is aware of our file set, so we are certain that
+- * the io_uring can be safely unregistered on process exit, even if we have
+- * loops in the file referencing. We account only files that can hold other
+- * files because otherwise they can't form a loop and so are not interesting
+- * for GC.
+- */
+-int __io_scm_file_account(struct io_ring_ctx *ctx, struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+-	struct sock *sk = ctx->ring_sock->sk;
+-	struct sk_buff_head *head = &sk->sk_receive_queue;
+-	struct scm_fp_list *fpl;
+-	struct sk_buff *skb;
+-
+-	if (likely(!io_file_need_scm(file)))
+-		return 0;
+-
+-	/*
+-	 * See if we can merge this file into an existing skb SCM_RIGHTS
+-	 * file set. If there's no room, fall back to allocating a new skb
+-	 * and filling it in.
+-	 */
+-	spin_lock_irq(&head->lock);
+-	skb = skb_peek(head);
+-	if (skb && UNIXCB(skb).fp->count < SCM_MAX_FD)
+-		__skb_unlink(skb, head);
+-	else
+-		skb = NULL;
+-	spin_unlock_irq(&head->lock);
+-
+-	if (!skb) {
+-		fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
+-		if (!fpl)
+-			return -ENOMEM;
+-
+-		skb = alloc_skb(0, GFP_KERNEL);
+-		if (!skb) {
+-			kfree(fpl);
+-			return -ENOMEM;
+-		}
+-
+-		fpl->user = get_uid(current_user());
+-		fpl->max = SCM_MAX_FD;
+-		fpl->count = 0;
+-
+-		UNIXCB(skb).fp = fpl;
+-		skb->sk = sk;
+-		skb->destructor = io_uring_destruct_scm;
+-		refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+-	}
+-
+-	fpl = UNIXCB(skb).fp;
+-	fpl->fp[fpl->count++] = get_file(file);
+-	unix_inflight(fpl->user, file);
+-	skb_queue_head(head, skb);
+-	fput(file);
+-#endif
+-	return 0;
+-}
+-
+-static __cold void io_rsrc_file_scm_put(struct io_ring_ctx *ctx, struct file *file)
+-{
+-#if defined(CONFIG_UNIX)
+-	struct sock *sock = ctx->ring_sock->sk;
+-	struct sk_buff_head list, *head = &sock->sk_receive_queue;
+-	struct sk_buff *skb;
+-	int i;
+-
+-	__skb_queue_head_init(&list);
+-
+-	/*
+-	 * Find the skb that holds this file in its SCM_RIGHTS. When found,
+-	 * remove this entry and rearrange the file array.
+-	 */
+-	skb = skb_dequeue(head);
+-	while (skb) {
+-		struct scm_fp_list *fp;
+-
+-		fp = UNIXCB(skb).fp;
+-		for (i = 0; i < fp->count; i++) {
+-			int left;
+-
+-			if (fp->fp[i] != file)
+-				continue;
+-
+-			unix_notinflight(fp->user, fp->fp[i]);
+-			left = fp->count - 1 - i;
+-			if (left) {
+-				memmove(&fp->fp[i], &fp->fp[i + 1],
+-						left * sizeof(struct file *));
+-			}
+-			fp->count--;
+-			if (!fp->count) {
+-				kfree_skb(skb);
+-				skb = NULL;
+-			} else {
+-				__skb_queue_tail(&list, skb);
+-			}
+-			fput(file);
+-			file = NULL;
+-			break;
+-		}
+-
+-		if (!file)
+-			break;
+-
+-		__skb_queue_tail(&list, skb);
+-
+-		skb = skb_dequeue(head);
+-	}
+-
+-	if (skb_peek(&list)) {
+-		spin_lock_irq(&head->lock);
+-		while ((skb = __skb_dequeue(&list)) != NULL)
+-			__skb_queue_tail(head, skb);
+-		spin_unlock_irq(&head->lock);
+-	}
+-#endif
+-}
+-
+-static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
+-{
+-	struct file *file = prsrc->file;
+-
+-	if (likely(!io_file_need_scm(file)))
+-		fput(file);
+-	else
+-		io_rsrc_file_scm_put(ctx, file);
+-}
+-
+ int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 			  unsigned nr_args, u64 __user *tags)
+ {
+@@ -897,21 +745,12 @@ int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 			goto fail;
+ 
+ 		/*
+-		 * Don't allow io_uring instances to be registered. If UNIX
+-		 * isn't enabled, then this causes a reference cycle and this
+-		 * instance can never get freed. If UNIX is enabled we'll
+-		 * handle it just fine, but there's still no point in allowing
+-		 * a ring fd as it doesn't support regular read/write anyway.
++		 * Don't allow io_uring instances to be registered.
+ 		 */
+ 		if (io_is_uring_fops(file)) {
+ 			fput(file);
+ 			goto fail;
+ 		}
+-		ret = io_scm_file_account(ctx, file);
+-		if (ret) {
+-			fput(file);
+-			goto fail;
+-		}
+ 		file_slot = io_fixed_file_slot(&ctx->file_table, i);
+ 		io_fixed_file_set(file_slot, file);
+ 		io_file_bitmap_set(&ctx->file_table, i);
+diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
+index 08ac0d8e07ef8..7238b9cfe33b6 100644
+--- a/io_uring/rsrc.h
++++ b/io_uring/rsrc.h
+@@ -75,21 +75,6 @@ int io_sqe_files_unregister(struct io_ring_ctx *ctx);
+ int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 			  unsigned nr_args, u64 __user *tags);
+ 
+-int __io_scm_file_account(struct io_ring_ctx *ctx, struct file *file);
+-
+-static inline bool io_file_need_scm(struct file *filp)
+-{
+-	return false;
+-}
+-
+-static inline int io_scm_file_account(struct io_ring_ctx *ctx,
+-				      struct file *file)
+-{
+-	if (likely(!io_file_need_scm(file)))
+-		return 0;
+-	return __io_scm_file_account(ctx, file);
+-}
+-
+ int io_register_files_update(struct io_ring_ctx *ctx, void __user *arg,
+ 			     unsigned nr_args);
+ int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index fe254ae035fe4..27fd41777176c 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -863,7 +863,12 @@ static LIST_HEAD(pack_list);
+  * CONFIG_MMU=n. Use PAGE_SIZE in these cases.
+  */
+ #ifdef PMD_SIZE
+-#define BPF_PROG_PACK_SIZE (PMD_SIZE * num_possible_nodes())
++/* PMD_SIZE is really big for some archs. It doesn't make sense to
++ * reserve too much memory in one allocation. Hardcode BPF_PROG_PACK_SIZE to
++ * 2MiB * num_possible_nodes(). On most architectures PMD_SIZE will be
++ * greater than or equal to 2MiB.
++ */
++#define BPF_PROG_PACK_SIZE (SZ_2M * num_possible_nodes())
+ #else
+ #define BPF_PROG_PACK_SIZE PAGE_SIZE
+ #endif
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index ef82ffc90cbe9..8f1d390bcbdeb 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -262,6 +262,7 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+ static int cpu_map_kthread_run(void *data)
+ {
+ 	struct bpf_cpu_map_entry *rcpu = data;
++	unsigned long last_qs = jiffies;
+ 
+ 	complete(&rcpu->kthread_running);
+ 	set_current_state(TASK_INTERRUPTIBLE);
+@@ -287,10 +288,12 @@ static int cpu_map_kthread_run(void *data)
+ 			if (__ptr_ring_empty(rcpu->queue)) {
+ 				schedule();
+ 				sched = 1;
++				last_qs = jiffies;
+ 			} else {
+ 				__set_current_state(TASK_RUNNING);
+ 			}
+ 		} else {
++			rcu_softirq_qs_periodic(last_qs);
+ 			sched = cond_resched();
+ 		}
+ 
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index a936c704d4e77..4e2cdbb5629f2 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -130,13 +130,14 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
+ 	bpf_map_init_from_attr(&dtab->map, attr);
+ 
+ 	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
+-		dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
+-
+-		if (!dtab->n_buckets) /* Overflow check */
++		/* hash table size must be power of 2; roundup_pow_of_two() can
++		 * overflow into UB on 32-bit arches, so check that first
++		 */
++		if (dtab->map.max_entries > 1UL << 31)
+ 			return -EINVAL;
+-	}
+ 
+-	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
++		dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
++
+ 		dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
+ 							   dtab->map.numa_node);
+ 		if (!dtab->dev_index_head)
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 5b9146fa825fd..85cd17ca38290 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -498,7 +498,13 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
+ 							  num_possible_cpus());
+ 	}
+ 
+-	/* hash table size must be power of 2 */
++	/* hash table size must be power of 2; roundup_pow_of_two() can overflow
++	 * into UB on 32-bit arches, so check that first
++	 */
++	err = -E2BIG;
++	if (htab->map.max_entries > 1UL << 31)
++		goto free_htab;
++
+ 	htab->n_buckets = roundup_pow_of_two(htab->map.max_entries);
+ 
+ 	htab->elem_size = sizeof(struct htab_elem) +
+@@ -508,10 +514,8 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
+ 	else
+ 		htab->elem_size += round_up(htab->map.value_size, 8);
+ 
+-	err = -E2BIG;
+-	/* prevent zero size kmalloc and check for u32 overflow */
+-	if (htab->n_buckets == 0 ||
+-	    htab->n_buckets > U32_MAX / sizeof(struct bucket))
++	/* check for u32 overflow */
++	if (htab->n_buckets > U32_MAX / sizeof(struct bucket))
+ 		goto free_htab;
+ 
+ 	err = bpf_map_init_elem_count(&htab->map);
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index ce4729ef1ad2d..b912d055a8470 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -334,7 +334,7 @@ static inline void __bpf_spin_lock_irqsave(struct bpf_spin_lock *lock)
+ 	__this_cpu_write(irqsave_flags, flags);
+ }
+ 
+-notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
++NOTRACE_BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
+ {
+ 	__bpf_spin_lock_irqsave(lock);
+ 	return 0;
+@@ -357,7 +357,7 @@ static inline void __bpf_spin_unlock_irqrestore(struct bpf_spin_lock *lock)
+ 	local_irq_restore(flags);
+ }
+ 
+-notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
++NOTRACE_BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
+ {
+ 	__bpf_spin_unlock_irqrestore(lock);
+ 	return 0;
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index dff7ba5397015..c99f8e5234ac4 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -91,11 +91,14 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
+ 	} else if (value_size / 8 > sysctl_perf_event_max_stack)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* hash table size must be power of 2 */
+-	n_buckets = roundup_pow_of_two(attr->max_entries);
+-	if (!n_buckets)
++	/* hash table size must be power of 2; roundup_pow_of_two() can overflow
++	 * into UB on 32-bit arches, so check that first
++	 */
++	if (attr->max_entries > 1UL << 31)
+ 		return ERR_PTR(-E2BIG);
+ 
++	n_buckets = roundup_pow_of_two(attr->max_entries);
++
+ 	cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
+ 	smap = bpf_map_area_alloc(cost, bpf_map_attr_numa_node(attr));
+ 	if (!smap)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 9698e93d48c6e..890d4c4bf9972 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5445,7 +5445,9 @@ BTF_ID(struct, prog_test_ref_kfunc)
+ #ifdef CONFIG_CGROUPS
+ BTF_ID(struct, cgroup)
+ #endif
++#ifdef CONFIG_BPF_JIT
+ BTF_ID(struct, bpf_cpumask)
++#endif
+ BTF_ID(struct, task_struct)
+ BTF_SET_END(rcu_protected_types)
+ 
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 98fedfdb8db52..34d9e718c2c7d 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -2486,6 +2486,11 @@ static void do_free_init(struct work_struct *w)
+ 	}
+ }
+ 
++void flush_module_init_free_work(void)
++{
++	flush_work(&init_free_wq);
++}
++
+ #undef MODULE_PARAM_PREFIX
+ #define MODULE_PARAM_PREFIX "module."
+ /* Default value for module->async_probe_requested */
+@@ -2590,8 +2595,8 @@ static noinline int do_init_module(struct module *mod)
+ 	 * Note that module_alloc() on most architectures creates W+X page
+ 	 * mappings which won't be cleaned up until do_free_init() runs.  Any
+ 	 * code such as mark_rodata_ro() which depends on those mappings to
+-	 * be cleaned up needs to sync with the queued work - ie
+-	 * rcu_barrier()
++	 * be cleaned up needs to sync with the queued work by invoking
++	 * flush_module_init_free_work().
+ 	 */
+ 	if (llist_add(&freeinit->node, &init_free_list))
+ 		schedule_work(&init_free_wq);
+diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
+index 6c2afee5ef620..ac2d9750e5f81 100644
+--- a/kernel/printk/internal.h
++++ b/kernel/printk/internal.h
+@@ -130,6 +130,7 @@ struct printk_message {
+ };
+ 
+ bool other_cpu_in_panic(void);
++bool this_cpu_in_panic(void);
+ bool printk_get_next_message(struct printk_message *pmsg, u64 seq,
+ 			     bool is_extended, bool may_supress);
+ 
+diff --git a/kernel/printk/nbcon.c b/kernel/printk/nbcon.c
+index b96077152f49d..c8093bcc01fe6 100644
+--- a/kernel/printk/nbcon.c
++++ b/kernel/printk/nbcon.c
+@@ -140,39 +140,6 @@ static inline bool nbcon_state_try_cmpxchg(struct console *con, struct nbcon_sta
+ 	return atomic_try_cmpxchg(&ACCESS_PRIVATE(con, nbcon_state), &cur->atom, new->atom);
+ }
+ 
+-#ifdef CONFIG_64BIT
+-
+-#define __seq_to_nbcon_seq(seq) (seq)
+-#define __nbcon_seq_to_seq(seq) (seq)
+-
+-#else /* CONFIG_64BIT */
+-
+-#define __seq_to_nbcon_seq(seq) ((u32)seq)
+-
+-static inline u64 __nbcon_seq_to_seq(u32 nbcon_seq)
+-{
+-	u64 seq;
+-	u64 rb_next_seq;
+-
+-	/*
+-	 * The provided sequence is only the lower 32 bits of the ringbuffer
+-	 * sequence. It needs to be expanded to 64bit. Get the next sequence
+-	 * number from the ringbuffer and fold it.
+-	 *
+-	 * Having a 32bit representation in the console is sufficient.
+-	 * If a console ever gets more than 2^31 records behind
+-	 * the ringbuffer then this is the least of the problems.
+-	 *
+-	 * Also the access to the ring buffer is always safe.
+-	 */
+-	rb_next_seq = prb_next_seq(prb);
+-	seq = rb_next_seq - ((u32)rb_next_seq - nbcon_seq);
+-
+-	return seq;
+-}
+-
+-#endif /* CONFIG_64BIT */
+-
+ /**
+  * nbcon_seq_read - Read the current console sequence
+  * @con:	Console to read the sequence of
+@@ -183,7 +150,7 @@ u64 nbcon_seq_read(struct console *con)
+ {
+ 	unsigned long nbcon_seq = atomic_long_read(&ACCESS_PRIVATE(con, nbcon_seq));
+ 
+-	return __nbcon_seq_to_seq(nbcon_seq);
++	return __ulseq_to_u64seq(prb, nbcon_seq);
+ }
+ 
+ /**
+@@ -204,7 +171,7 @@ void nbcon_seq_force(struct console *con, u64 seq)
+ 	 */
+ 	u64 valid_seq = max_t(u64, seq, prb_first_valid_seq(prb));
+ 
+-	atomic_long_set(&ACCESS_PRIVATE(con, nbcon_seq), __seq_to_nbcon_seq(valid_seq));
++	atomic_long_set(&ACCESS_PRIVATE(con, nbcon_seq), __u64seq_to_ulseq(valid_seq));
+ 
+ 	/* Clear con->seq since nbcon consoles use con->nbcon_seq instead. */
+ 	con->seq = 0;
+@@ -223,11 +190,11 @@ void nbcon_seq_force(struct console *con, u64 seq)
+  */
+ static void nbcon_seq_try_update(struct nbcon_context *ctxt, u64 new_seq)
+ {
+-	unsigned long nbcon_seq = __seq_to_nbcon_seq(ctxt->seq);
++	unsigned long nbcon_seq = __u64seq_to_ulseq(ctxt->seq);
+ 	struct console *con = ctxt->console;
+ 
+ 	if (atomic_long_try_cmpxchg(&ACCESS_PRIVATE(con, nbcon_seq), &nbcon_seq,
+-				    __seq_to_nbcon_seq(new_seq))) {
++				    __u64seq_to_ulseq(new_seq))) {
+ 		ctxt->seq = new_seq;
+ 	} else {
+ 		ctxt->seq = nbcon_seq_read(con);
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index f2444b581e16c..72f6a564e832f 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -347,6 +347,29 @@ static bool panic_in_progress(void)
+ 	return unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID);
+ }
+ 
++/* Return true if a panic is in progress on the current CPU. */
++bool this_cpu_in_panic(void)
++{
++	/*
++	 * We can use raw_smp_processor_id() here because it is impossible for
++	 * the task to be migrated to the panic_cpu, or away from it. If
++	 * panic_cpu has already been set, and we're not currently executing on
++	 * that CPU, then we never will be.
++	 */
++	return unlikely(atomic_read(&panic_cpu) == raw_smp_processor_id());
++}
++
++/*
++ * Return true if a panic is in progress on a remote CPU.
++ *
++ * On true, the local CPU should immediately release any printing resources
++ * that may be needed by the panic CPU.
++ */
++bool other_cpu_in_panic(void)
++{
++	return (panic_in_progress() && !this_cpu_in_panic());
++}
++
+ /*
+  * This is used for debugging the mess that is the VT code by
+  * keeping track if we have the console semaphore held. It's
+@@ -1846,10 +1869,23 @@ static bool console_waiter;
+  */
+ static void console_lock_spinning_enable(void)
+ {
++	/*
++	 * Do not use spinning in panic(). The panic CPU wants to keep the lock.
++	 * Non-panic CPUs abandon the flush anyway.
++	 *
++	 * Just keep the lockdep annotation. The panic-CPU should avoid
++	 * taking console_owner_lock because it might cause a deadlock.
++	 * This looks like the easiest way to prevent false lockdep
++	 * reports without handling the races in a lockless way.
++	 */
++	if (panic_in_progress())
++		goto lockdep;
++
+ 	raw_spin_lock(&console_owner_lock);
+ 	console_owner = current;
+ 	raw_spin_unlock(&console_owner_lock);
+ 
++lockdep:
+ 	/* The waiter may spin on us after setting console_owner */
+ 	spin_acquire(&console_owner_dep_map, 0, 0, _THIS_IP_);
+ }
+@@ -1874,6 +1910,22 @@ static int console_lock_spinning_disable_and_check(int cookie)
+ {
+ 	int waiter;
+ 
++	/*
++	 * Ignore spinning waiters during panic() because they might get stopped
++	 * or blocked at any time.
++	 *
++	 * It is safe because nobody is allowed to start spinning during panic
++	 * in the first place. If there has been a waiter, then non-panic CPUs
++	 * might stay spinning. They would get stopped anyway. The panic context
++	 * will never start spinning and an interrupted spin on panic CPU will
++	 * never continue.
++	 */
++	if (panic_in_progress()) {
++		/* Keep lockdep happy. */
++		spin_release(&console_owner_dep_map, _THIS_IP_);
++		return 0;
++	}
++
+ 	raw_spin_lock(&console_owner_lock);
+ 	waiter = READ_ONCE(console_waiter);
+ 	console_owner = NULL;
+@@ -2601,26 +2653,6 @@ static int console_cpu_notify(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-/*
+- * Return true if a panic is in progress on a remote CPU.
+- *
+- * On true, the local CPU should immediately release any printing resources
+- * that may be needed by the panic CPU.
+- */
+-bool other_cpu_in_panic(void)
+-{
+-	if (!panic_in_progress())
+-		return false;
+-
+-	/*
+-	 * We can use raw_smp_processor_id() here because it is impossible for
+-	 * the task to be migrated to the panic_cpu, or away from it. If
+-	 * panic_cpu has already been set, and we're not currently executing on
+-	 * that CPU, then we never will be.
+-	 */
+-	return atomic_read(&panic_cpu) != raw_smp_processor_id();
+-}
+-
+ /**
+  * console_lock - block the console subsystem from printing
+  *
+@@ -3761,7 +3793,7 @@ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progre
+ 
+ 	might_sleep();
+ 
+-	seq = prb_next_seq(prb);
++	seq = prb_next_reserve_seq(prb);
+ 
+ 	/* Flush the consoles so that records up to @seq are printed. */
+ 	console_lock();
+diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
+index fde338606ce83..f5a8bb606fe50 100644
+--- a/kernel/printk/printk_ringbuffer.c
++++ b/kernel/printk/printk_ringbuffer.c
+@@ -6,6 +6,7 @@
+ #include <linux/errno.h>
+ #include <linux/bug.h>
+ #include "printk_ringbuffer.h"
++#include "internal.h"
+ 
+ /**
+  * DOC: printk_ringbuffer overview
+@@ -303,6 +304,9 @@
+  *
+  *   desc_push_tail:B / desc_reserve:D
+  *     set descriptor reusable (state), then push descriptor tail (id)
++ *
++ *   desc_update_last_finalized:A / desc_last_finalized_seq:A
++ *     store finalized record, then set new highest finalized sequence number
+  */
+ 
+ #define DATA_SIZE(data_ring)		_DATA_SIZE((data_ring)->size_bits)
+@@ -1441,20 +1445,118 @@ bool prb_reserve_in_last(struct prb_reserved_entry *e, struct printk_ringbuffer
+ 	return false;
+ }
+ 
++/*
++ * @last_finalized_seq value guarantees that all records up to and including
++ * this sequence number are finalized and can be read. The only exception is
++ * records so old that they have already been overwritten.
++ *
++ * It is also guaranteed that @last_finalized_seq only increases.
++ *
++ * Be aware that finalized records following non-finalized records are not
++ * reported because they are not yet available to the reader. For example,
++ * a new record stored via printk() will not be available to a printer if
++ * it follows a record that has not been finalized yet. However, once that
++ * non-finalized record becomes finalized, @last_finalized_seq will be
++ * appropriately updated and the full set of finalized records will be
++ * available to the printer. And since each printk() caller will either
++ * directly print or trigger deferred printing of all available unprinted
++ * records, all printk() messages will get printed.
++ */
++static u64 desc_last_finalized_seq(struct printk_ringbuffer *rb)
++{
++	struct prb_desc_ring *desc_ring = &rb->desc_ring;
++	unsigned long ulseq;
++
++	/*
++	 * Guarantee the sequence number is loaded before loading the
++	 * associated record in order to guarantee that the record can be
++	 * seen by this CPU. This pairs with desc_update_last_finalized:A.
++	 */
++	ulseq = atomic_long_read_acquire(&desc_ring->last_finalized_seq
++					); /* LMM(desc_last_finalized_seq:A) */
++
++	return __ulseq_to_u64seq(rb, ulseq);
++}
++
++static bool _prb_read_valid(struct printk_ringbuffer *rb, u64 *seq,
++			    struct printk_record *r, unsigned int *line_count);
++
++/*
++ * Check if there are records directly following @last_finalized_seq that are
++ * finalized. If so, update @last_finalized_seq to the latest of these
++ * records. It is not allowed to skip over records that are not yet finalized.
++ */
++static void desc_update_last_finalized(struct printk_ringbuffer *rb)
++{
++	struct prb_desc_ring *desc_ring = &rb->desc_ring;
++	u64 old_seq = desc_last_finalized_seq(rb);
++	unsigned long oldval;
++	unsigned long newval;
++	u64 finalized_seq;
++	u64 try_seq;
++
++try_again:
++	finalized_seq = old_seq;
++	try_seq = finalized_seq + 1;
++
++	/* Try to find later finalized records. */
++	while (_prb_read_valid(rb, &try_seq, NULL, NULL)) {
++		finalized_seq = try_seq;
++		try_seq++;
++	}
++
++	/* No update needed if no later finalized record was found. */
++	if (finalized_seq == old_seq)
++		return;
++
++	oldval = __u64seq_to_ulseq(old_seq);
++	newval = __u64seq_to_ulseq(finalized_seq);
++
++	/*
++	 * Set the sequence number of a later finalized record that has been
++	 * seen.
++	 *
++	 * Guarantee the record data is visible to other CPUs before storing
++	 * its sequence number. This pairs with desc_last_finalized_seq:A.
++	 *
++	 * Memory barrier involvement:
++	 *
++	 * If desc_last_finalized_seq:A reads from
++	 * desc_update_last_finalized:A, then desc_read:A reads from
++	 * _prb_commit:B.
++	 *
++	 * Relies on:
++	 *
++	 * RELEASE from _prb_commit:B to desc_update_last_finalized:A
++	 *    matching
++	 * ACQUIRE from desc_last_finalized_seq:A to desc_read:A
++	 *
++	 * Note: _prb_commit:B and desc_update_last_finalized:A can be
++	 *       different CPUs. However, the desc_update_last_finalized:A
++	 *       CPU (which performs the release) must have previously seen
++	 *       _prb_commit:B.
++	 */
++	if (!atomic_long_try_cmpxchg_release(&desc_ring->last_finalized_seq,
++				&oldval, newval)) { /* LMM(desc_update_last_finalized:A) */
++		old_seq = __ulseq_to_u64seq(rb, oldval);
++		goto try_again;
++	}
++}
++
+ /*
+  * Attempt to finalize a specified descriptor. If this fails, the descriptor
+  * is either already final or it will finalize itself when the writer commits.
+  */
+-static void desc_make_final(struct prb_desc_ring *desc_ring, unsigned long id)
++static void desc_make_final(struct printk_ringbuffer *rb, unsigned long id)
+ {
++	struct prb_desc_ring *desc_ring = &rb->desc_ring;
+ 	unsigned long prev_state_val = DESC_SV(id, desc_committed);
+ 	struct prb_desc *d = to_desc(desc_ring, id);
+ 
+-	atomic_long_cmpxchg_relaxed(&d->state_var, prev_state_val,
+-			DESC_SV(id, desc_finalized)); /* LMM(desc_make_final:A) */
+-
+-	/* Best effort to remember the last finalized @id. */
+-	atomic_long_set(&desc_ring->last_finalized_id, id);
++	if (atomic_long_try_cmpxchg_relaxed(&d->state_var, &prev_state_val,
++			DESC_SV(id, desc_finalized))) { /* LMM(desc_make_final:A) */
++		desc_update_last_finalized(rb);
++	}
+ }
+ 
+ /**
+@@ -1550,7 +1652,7 @@ bool prb_reserve(struct prb_reserved_entry *e, struct printk_ringbuffer *rb,
+ 	 * readers. (For seq==0 there is no previous descriptor.)
+ 	 */
+ 	if (info->seq > 0)
+-		desc_make_final(desc_ring, DESC_ID(id - 1));
++		desc_make_final(rb, DESC_ID(id - 1));
+ 
+ 	r->text_buf = data_alloc(rb, r->text_buf_size, &d->text_blk_lpos, id);
+ 	/* If text data allocation fails, a data-less record is committed. */
+@@ -1643,7 +1745,7 @@ void prb_commit(struct prb_reserved_entry *e)
+ 	 */
+ 	head_id = atomic_long_read(&desc_ring->head_id); /* LMM(prb_commit:A) */
+ 	if (head_id != e->id)
+-		desc_make_final(desc_ring, e->id);
++		desc_make_final(e->rb, e->id);
+ }
+ 
+ /**
+@@ -1663,12 +1765,9 @@ void prb_commit(struct prb_reserved_entry *e)
+  */
+ void prb_final_commit(struct prb_reserved_entry *e)
+ {
+-	struct prb_desc_ring *desc_ring = &e->rb->desc_ring;
+-
+ 	_prb_commit(e, desc_finalized);
+ 
+-	/* Best effort to remember the last finalized @id. */
+-	atomic_long_set(&desc_ring->last_finalized_id, e->id);
++	desc_update_last_finalized(e->rb);
+ }
+ 
+ /*
+@@ -1832,7 +1931,7 @@ static int prb_read(struct printk_ringbuffer *rb, u64 seq,
+ }
+ 
+ /* Get the sequence number of the tail descriptor. */
+-static u64 prb_first_seq(struct printk_ringbuffer *rb)
++u64 prb_first_seq(struct printk_ringbuffer *rb)
+ {
+ 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
+ 	enum desc_state d_state;
+@@ -1875,12 +1974,123 @@ static u64 prb_first_seq(struct printk_ringbuffer *rb)
+ 	return seq;
+ }
+ 
++/**
++ * prb_next_reserve_seq() - Get the sequence number after the most recently
++ *                  reserved record.
++ *
++ * @rb:  The ringbuffer to get the sequence number from.
++ *
++ * This is the public function available to readers to see what sequence
++ * number will be assigned to the next reserved record.
++ *
++ * Note that depending on the situation, this value can be equal to or
++ * higher than the sequence number returned by prb_next_seq().
++ *
++ * Context: Any context.
++ * Return: The sequence number that will be assigned to the next record
++ *         reserved.
++ */
++u64 prb_next_reserve_seq(struct printk_ringbuffer *rb)
++{
++	struct prb_desc_ring *desc_ring = &rb->desc_ring;
++	unsigned long last_finalized_id;
++	atomic_long_t *state_var;
++	u64 last_finalized_seq;
++	unsigned long head_id;
++	struct prb_desc desc;
++	unsigned long diff;
++	struct prb_desc *d;
++	int err;
++
++	/*
++	 * It may not be possible to read a sequence number for @head_id.
++	 * So the ID of @last_finalized_seq is used to calculate what the
++	 * sequence number of @head_id will be.
++	 */
++
++try_again:
++	last_finalized_seq = desc_last_finalized_seq(rb);
++
++	/*
++	 * @head_id is loaded after @last_finalized_seq to ensure that
++	 * it points to the record with @last_finalized_seq or newer.
++	 *
++	 * Memory barrier involvement:
++	 *
++	 * If desc_last_finalized_seq:A reads from
++	 * desc_update_last_finalized:A, then
++	 * prb_next_reserve_seq:A reads from desc_reserve:D.
++	 *
++	 * Relies on:
++	 *
++	 * RELEASE from desc_reserve:D to desc_update_last_finalized:A
++	 *    matching
++	 * ACQUIRE from desc_last_finalized_seq:A to prb_next_reserve_seq:A
++	 *
++	 * Note: desc_reserve:D and desc_update_last_finalized:A can be
++	 *       different CPUs. However, the desc_update_last_finalized:A CPU
++	 *       (which performs the release) must have previously seen
++	 *       desc_read:C, which implies desc_reserve:D can be seen.
++	 */
++	head_id = atomic_long_read(&desc_ring->head_id); /* LMM(prb_next_reserve_seq:A) */
++
++	d = to_desc(desc_ring, last_finalized_seq);
++	state_var = &d->state_var;
++
++	/* Extract the ID, used to specify the descriptor to read. */
++	last_finalized_id = DESC_ID(atomic_long_read(state_var));
++
++	/* Ensure @last_finalized_id is correct. */
++	err = desc_read_finalized_seq(desc_ring, last_finalized_id, last_finalized_seq, &desc);
++
++	if (err == -EINVAL) {
++		if (last_finalized_seq == 0) {
++			/*
++			 * No record has been finalized or even reserved yet.
++			 *
++			 * The @head_id is initialized such that the first
++			 * increment will yield the first record (seq=0).
++			 * Handle it separately to avoid a negative @diff
++			 * below.
++			 */
++			if (head_id == DESC0_ID(desc_ring->count_bits))
++				return 0;
++
++			/*
++			 * One or more descriptors are already reserved. Use
++			 * the descriptor ID of the first one (@seq=0) for
++			 * the @diff below.
++			 */
++			last_finalized_id = DESC0_ID(desc_ring->count_bits) + 1;
++		} else {
++			/* Record must have been overwritten. Try again. */
++			goto try_again;
++		}
++	}
++
++	/* Diff of known descriptor IDs to compute related sequence numbers. */
++	diff = head_id - last_finalized_id;
++
++	/*
++	 * @head_id points to the most recently reserved record, but this
++	 * function returns the sequence number that will be assigned to the
++	 * next (not yet reserved) record. Thus +1 is needed.
++	 */
++	return (last_finalized_seq + diff + 1);
++}
++
+ /*
+- * Non-blocking read of a record. Updates @seq to the last finalized record
+- * (which may have no data available).
++ * Non-blocking read of a record.
++ *
++ * On success @seq is updated to the record that was read and (if provided)
++ * @r and @line_count will contain the read/calculated data.
++ *
++ * On failure @seq is updated to a record that is not yet available to the
++ * reader, but it will be the next record available to the reader.
+  *
+- * See the description of prb_read_valid() and prb_read_valid_info()
+- * for details.
++ * Note: When the current CPU is in panic, this function will skip over any
++ *       non-existent/non-finalized records in order to allow the panic CPU
++ *       to print any and all records that have been finalized.
+  */
+ static bool _prb_read_valid(struct printk_ringbuffer *rb, u64 *seq,
+ 			    struct printk_record *r, unsigned int *line_count)
+@@ -1899,12 +2109,32 @@ static bool _prb_read_valid(struct printk_ringbuffer *rb, u64 *seq,
+ 			*seq = tail_seq;
+ 
+ 		} else if (err == -ENOENT) {
+-			/* Record exists, but no data available. Skip. */
++			/* Record exists, but the data was lost. Skip. */
+ 			(*seq)++;
+ 
+ 		} else {
+-			/* Non-existent/non-finalized record. Must stop. */
+-			return false;
++			/*
++			 * Non-existent/non-finalized record. Must stop.
++			 *
++			 * For panic situations it cannot be expected that
++			 * non-finalized records will become finalized. But
++			 * there may be other finalized records beyond that
++			 * need to be printed for a panic situation. If this
++			 * is the panic CPU, skip this
++			 * non-existent/non-finalized record unless it is
++			 * at or beyond the head, in which case it is not
++			 * possible to continue.
++			 *
++			 * Note that new messages printed on the panic CPU are
++			 * finalized when we are here. The only exception
++			 * might be the last message without trailing newline.
++			 * But it would have the sequence number returned
++			 * by "prb_next_reserve_seq() - 1".
++			 */
++			if (this_cpu_in_panic() && ((*seq + 1) < prb_next_reserve_seq(rb)))
++				(*seq)++;
++			else
++				return false;
+ 		}
+ 	}
+ 
+@@ -1932,7 +2162,7 @@ static bool _prb_read_valid(struct printk_ringbuffer *rb, u64 *seq,
+  * On success, the reader must check r->info.seq to see which record was
+  * actually read. This allows the reader to detect dropped records.
+  *
+- * Failure means @seq refers to a not yet written record.
++ * Failure means @seq refers to a record not yet available to the reader.
+  */
+ bool prb_read_valid(struct printk_ringbuffer *rb, u64 seq,
+ 		    struct printk_record *r)
+@@ -1962,7 +2192,7 @@ bool prb_read_valid(struct printk_ringbuffer *rb, u64 seq,
+  * On success, the reader must check info->seq to see which record meta data
+  * was actually read. This allows the reader to detect dropped records.
+  *
+- * Failure means @seq refers to a not yet written record.
++ * Failure means @seq refers to a record not yet available to the reader.
+  */
+ bool prb_read_valid_info(struct printk_ringbuffer *rb, u64 seq,
+ 			 struct printk_info *info, unsigned int *line_count)
+@@ -2008,7 +2238,9 @@ u64 prb_first_valid_seq(struct printk_ringbuffer *rb)
+  * newest sequence number available to readers will be.
+  *
+  * This provides readers a sequence number to jump to if all currently
+- * available records should be skipped.
++ * available records should be skipped. It is guaranteed that all records
++ * previous to the returned value have been finalized and are (or were)
++ * available to the reader.
+  *
+  * Context: Any context.
+  * Return: The sequence number of the next newest (not yet available) record
+@@ -2016,34 +2248,19 @@ u64 prb_first_valid_seq(struct printk_ringbuffer *rb)
+  */
+ u64 prb_next_seq(struct printk_ringbuffer *rb)
+ {
+-	struct prb_desc_ring *desc_ring = &rb->desc_ring;
+-	enum desc_state d_state;
+-	unsigned long id;
+ 	u64 seq;
+ 
+-	/* Check if the cached @id still points to a valid @seq. */
+-	id = atomic_long_read(&desc_ring->last_finalized_id);
+-	d_state = desc_read(desc_ring, id, NULL, &seq, NULL);
++	seq = desc_last_finalized_seq(rb);
+ 
+-	if (d_state == desc_finalized || d_state == desc_reusable) {
+-		/*
+-		 * Begin searching after the last finalized record.
+-		 *
+-		 * On 0, the search must begin at 0 because of hack#2
+-		 * of the bootstrapping phase it is not known if a
+-		 * record at index 0 exists.
+-		 */
+-		if (seq != 0)
+-			seq++;
+-	} else {
+-		/*
+-		 * The information about the last finalized sequence number
+-		 * has gone. It should happen only when there is a flood of
+-		 * new messages and the ringbuffer is rapidly recycled.
+-		 * Give up and start from the beginning.
+-		 */
+-		seq = 0;
+-	}
++	/*
++	 * Begin searching after the last finalized record.
++	 *
++	 * On 0, the search must begin at 0 because, due to hack#2
++	 * of the bootstrapping phase, it is not known whether a
++	 * record at index 0 exists.
++	 */
++	if (seq != 0)
++		seq++;
+ 
+ 	/*
+ 	 * The information about the last finalized @seq might be inaccurate.
+@@ -2085,7 +2302,7 @@ void prb_init(struct printk_ringbuffer *rb,
+ 	rb->desc_ring.infos = infos;
+ 	atomic_long_set(&rb->desc_ring.head_id, DESC0_ID(descbits));
+ 	atomic_long_set(&rb->desc_ring.tail_id, DESC0_ID(descbits));
+-	atomic_long_set(&rb->desc_ring.last_finalized_id, DESC0_ID(descbits));
++	atomic_long_set(&rb->desc_ring.last_finalized_seq, 0);
+ 
+ 	rb->text_data_ring.size_bits = textbits;
+ 	rb->text_data_ring.data = text_buf;
+diff --git a/kernel/printk/printk_ringbuffer.h b/kernel/printk/printk_ringbuffer.h
+index 18cd25e489b89..cb887489d00f0 100644
+--- a/kernel/printk/printk_ringbuffer.h
++++ b/kernel/printk/printk_ringbuffer.h
+@@ -75,7 +75,7 @@ struct prb_desc_ring {
+ 	struct printk_info	*infos;
+ 	atomic_long_t		head_id;
+ 	atomic_long_t		tail_id;
+-	atomic_long_t		last_finalized_id;
++	atomic_long_t		last_finalized_seq;
+ };
+ 
+ /*
+@@ -259,7 +259,7 @@ static struct printk_ringbuffer name = {							\
+ 		.infos		= &_##name##_infos[0],						\
+ 		.head_id	= ATOMIC_INIT(DESC0_ID(descbits)),				\
+ 		.tail_id	= ATOMIC_INIT(DESC0_ID(descbits)),				\
+-		.last_finalized_id = ATOMIC_INIT(DESC0_ID(descbits)),				\
++		.last_finalized_seq = ATOMIC_INIT(0),						\
+ 	},											\
+ 	.text_data_ring = {									\
+ 		.size_bits	= (avgtextbits) + (descbits),					\
+@@ -378,7 +378,41 @@ bool prb_read_valid(struct printk_ringbuffer *rb, u64 seq,
+ bool prb_read_valid_info(struct printk_ringbuffer *rb, u64 seq,
+ 			 struct printk_info *info, unsigned int *line_count);
+ 
++u64 prb_first_seq(struct printk_ringbuffer *rb);
+ u64 prb_first_valid_seq(struct printk_ringbuffer *rb);
+ u64 prb_next_seq(struct printk_ringbuffer *rb);
++u64 prb_next_reserve_seq(struct printk_ringbuffer *rb);
++
++#ifdef CONFIG_64BIT
++
++#define __u64seq_to_ulseq(u64seq) (u64seq)
++#define __ulseq_to_u64seq(rb, ulseq) (ulseq)
++
++#else /* CONFIG_64BIT */
++
++#define __u64seq_to_ulseq(u64seq) ((u32)u64seq)
++
++static inline u64 __ulseq_to_u64seq(struct printk_ringbuffer *rb, u32 ulseq)
++{
++	u64 rb_first_seq = prb_first_seq(rb);
++	u64 seq;
++
++	/*
++	 * The provided sequence is only the lower 32 bits of the ringbuffer
++	 * sequence. It needs to be expanded to 64bit. Get the first sequence
++	 * number from the ringbuffer and fold it.
++	 *
++	 * Having a 32bit representation in the console is sufficient.
++	 * If a console ever gets more than 2^31 records behind
++	 * the ringbuffer then this is the least of the problems.
++	 *
++	 * Also the access to the ring buffer is always safe.
++	 */
++	seq = rb_first_seq - (s32)((u32)rb_first_seq - ulseq);
++
++	return seq;
++}
++
++#endif /* CONFIG_64BIT */
+ 
+ #endif /* _KERNEL_PRINTK_RINGBUFFER_H */
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 157f3ca2a9b56..f544f24df1856 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -4741,13 +4741,16 @@ static void __init rcu_start_exp_gp_kworkers(void)
+ 	rcu_exp_gp_kworker = kthread_create_worker(0, gp_kworker_name);
+ 	if (IS_ERR_OR_NULL(rcu_exp_gp_kworker)) {
+ 		pr_err("Failed to create %s!\n", gp_kworker_name);
++		rcu_exp_gp_kworker = NULL;
+ 		return;
+ 	}
+ 
+ 	rcu_exp_par_gp_kworker = kthread_create_worker(0, par_gp_kworker_name);
+ 	if (IS_ERR_OR_NULL(rcu_exp_par_gp_kworker)) {
+ 		pr_err("Failed to create %s!\n", par_gp_kworker_name);
++		rcu_exp_par_gp_kworker = NULL;
+ 		kthread_destroy_worker(rcu_exp_gp_kworker);
++		rcu_exp_gp_kworker = NULL;
+ 		return;
+ 	}
+ 
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index 2ac440bc7e10b..8107f818455da 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -428,7 +428,12 @@ static void sync_rcu_exp_select_node_cpus(struct kthread_work *wp)
+ 	__sync_rcu_exp_select_node_cpus(rewp);
+ }
+ 
+-static inline bool rcu_gp_par_worker_started(void)
++static inline bool rcu_exp_worker_started(void)
++{
++	return !!READ_ONCE(rcu_exp_gp_kworker);
++}
++
++static inline bool rcu_exp_par_worker_started(void)
+ {
+ 	return !!READ_ONCE(rcu_exp_par_gp_kworker);
+ }
+@@ -478,7 +483,12 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
+ 	__sync_rcu_exp_select_node_cpus(rewp);
+ }
+ 
+-static inline bool rcu_gp_par_worker_started(void)
++static inline bool rcu_exp_worker_started(void)
++{
++	return !!READ_ONCE(rcu_gp_wq);
++}
++
++static inline bool rcu_exp_par_worker_started(void)
+ {
+ 	return !!READ_ONCE(rcu_par_gp_wq);
+ }
+@@ -541,7 +551,7 @@ static void sync_rcu_exp_select_cpus(void)
+ 		rnp->exp_need_flush = false;
+ 		if (!READ_ONCE(rnp->expmask))
+ 			continue; /* Avoid early boot non-existent wq. */
+-		if (!rcu_gp_par_worker_started() ||
++		if (!rcu_exp_par_worker_started() ||
+ 		    rcu_scheduler_active != RCU_SCHEDULER_RUNNING ||
+ 		    rcu_is_last_leaf_node(rnp)) {
+ 			/* No worker started yet or last leaf, do direct call. */
+@@ -956,7 +966,7 @@ static void rcu_exp_print_detail_task_stall_rnp(struct rcu_node *rnp)
+  */
+ void synchronize_rcu_expedited(void)
+ {
+-	bool boottime = (rcu_scheduler_active == RCU_SCHEDULER_INIT);
++	bool use_worker;
+ 	unsigned long flags;
+ 	struct rcu_exp_work rew;
+ 	struct rcu_node *rnp;
+@@ -967,6 +977,9 @@ void synchronize_rcu_expedited(void)
+ 			 lock_is_held(&rcu_sched_lock_map),
+ 			 "Illegal synchronize_rcu_expedited() in RCU read-side critical section");
+ 
++	use_worker = (rcu_scheduler_active != RCU_SCHEDULER_INIT) &&
++		      rcu_exp_worker_started();
++
+ 	/* Is the state is such that the call is a grace period? */
+ 	if (rcu_blocking_is_gp()) {
+ 		// Note well that this code runs with !PREEMPT && !SMP.
+@@ -996,7 +1009,7 @@ void synchronize_rcu_expedited(void)
+ 		return;  /* Someone else did our work for us. */
+ 
+ 	/* Ensure that load happens before action based on it. */
+-	if (unlikely(boottime)) {
++	if (unlikely(!use_worker)) {
+ 		/* Direct call during scheduler init and early_initcalls(). */
+ 		rcu_exp_sel_wait_wake(s);
+ 	} else {
+@@ -1014,7 +1027,7 @@ void synchronize_rcu_expedited(void)
+ 	/* Let the next expedited grace period start. */
+ 	mutex_unlock(&rcu_state.exp_mutex);
+ 
+-	if (likely(!boottime))
++	if (likely(use_worker))
+ 		synchronize_rcu_expedited_destroy_work(&rew);
+ }
+ EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 7ac9f4b1d955c..c43b71792a8e5 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7296,7 +7296,7 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
+ 		if (!available_idle_cpu(cpu)) {
+ 			idle = false;
+ 			if (*idle_cpu == -1) {
+-				if (sched_idle_cpu(cpu) && cpumask_test_cpu(cpu, p->cpus_ptr)) {
++				if (sched_idle_cpu(cpu) && cpumask_test_cpu(cpu, cpus)) {
+ 					*idle_cpu = cpu;
+ 					break;
+ 				}
+@@ -7304,7 +7304,7 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
+ 			}
+ 			break;
+ 		}
+-		if (*idle_cpu == -1 && cpumask_test_cpu(cpu, p->cpus_ptr))
++		if (*idle_cpu == -1 && cpumask_test_cpu(cpu, cpus))
+ 			*idle_cpu = cpu;
+ 	}
+ 
+@@ -7318,13 +7318,19 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
+ /*
+  * Scan the local SMT mask for idle CPUs.
+  */
+-static int select_idle_smt(struct task_struct *p, int target)
++static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+ {
+ 	int cpu;
+ 
+ 	for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
+ 		if (cpu == target)
+ 			continue;
++		/*
++		 * Check if the CPU is in the LLC scheduling domain of @target.
++		 * Due to isolcpus, there is no guarantee that all the siblings are in the domain.
++		 */
++		if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
++			continue;
+ 		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+ 			return cpu;
+ 	}
+@@ -7348,7 +7354,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
+ 	return __select_idle_cpu(core, p);
+ }
+ 
+-static inline int select_idle_smt(struct task_struct *p, int target)
++static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+ {
+ 	return -1;
+ }
+@@ -7598,7 +7604,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
+ 		has_idle_core = test_idle_cores(target);
+ 
+ 		if (!has_idle_core && cpus_share_cache(prev, target)) {
+-			i = select_idle_smt(p, prev);
++			i = select_idle_smt(p, sd, prev);
+ 			if ((unsigned int)i < nr_cpumask_bits)
+ 				return i;
+ 		}
+diff --git a/kernel/time/time_test.c b/kernel/time/time_test.c
+index ca058c8af6baf..3e5d422dd15cb 100644
+--- a/kernel/time/time_test.c
++++ b/kernel/time/time_test.c
+@@ -73,7 +73,7 @@ static void time64_to_tm_test_date_range(struct kunit *test)
+ 
+ 		days = div_s64(secs, 86400);
+ 
+-		#define FAIL_MSG "%05ld/%02d/%02d (%2d) : %ld", \
++		#define FAIL_MSG "%05ld/%02d/%02d (%2d) : %lld", \
+ 			year, month, mdday, yday, days
+ 
+ 		KUNIT_ASSERT_EQ_MSG(test, year - 1900, result.tm_year, FAIL_MSG);
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 266d02809dbb1..8aab7ed414907 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1180,13 +1180,15 @@ static int adjust_historical_crosststamp(struct system_time_snapshot *history,
+ }
+ 
+ /*
+- * cycle_between - true if test occurs chronologically between before and after
++ * timestamp_in_interval - true if ts is chronologically in [start, end]
++ *
++ * True if ts occurs chronologically at or after start, and before or at end.
+  */
+-static bool cycle_between(u64 before, u64 test, u64 after)
++static bool timestamp_in_interval(u64 start, u64 end, u64 ts)
+ {
+-	if (test > before && test < after)
++	if (ts >= start && ts <= end)
+ 		return true;
+-	if (test < before && before > after)
++	if (start > end && (ts >= start || ts <= end))
+ 		return true;
+ 	return false;
+ }
+@@ -1246,7 +1248,7 @@ int get_device_system_crosststamp(int (*get_time_fn)
+ 		 */
+ 		now = tk_clock_read(&tk->tkr_mono);
+ 		interval_start = tk->tkr_mono.cycle_last;
+-		if (!cycle_between(interval_start, cycles, now)) {
++		if (!timestamp_in_interval(interval_start, now, cycles)) {
+ 			clock_was_set_seq = tk->clock_was_set_seq;
+ 			cs_was_changed_seq = tk->cs_was_changed_seq;
+ 			cycles = interval_start;
+@@ -1259,10 +1261,8 @@ int get_device_system_crosststamp(int (*get_time_fn)
+ 				      tk_core.timekeeper.offs_real);
+ 		base_raw = tk->tkr_raw.base;
+ 
+-		nsec_real = timekeeping_cycles_to_ns(&tk->tkr_mono,
+-						     system_counterval.cycles);
+-		nsec_raw = timekeeping_cycles_to_ns(&tk->tkr_raw,
+-						    system_counterval.cycles);
++		nsec_real = timekeeping_cycles_to_ns(&tk->tkr_mono, cycles);
++		nsec_raw = timekeeping_cycles_to_ns(&tk->tkr_raw, cycles);
+ 	} while (read_seqcount_retry(&tk_core.seq, seq));
+ 
+ 	xtstamp->sys_realtime = ktime_add_ns(base_real, nsec_real);
+@@ -1277,13 +1277,13 @@ int get_device_system_crosststamp(int (*get_time_fn)
+ 		bool discontinuity;
+ 
+ 		/*
+-		 * Check that the counter value occurs after the provided
++		 * Check that the counter value is not before the provided
+ 		 * history reference and that the history doesn't cross a
+ 		 * clocksource change
+ 		 */
+ 		if (!history_begin ||
+-		    !cycle_between(history_begin->cycles,
+-				   system_counterval.cycles, cycles) ||
++		    !timestamp_in_interval(history_begin->cycles,
++					   cycles, system_counterval.cycles) ||
+ 		    history_begin->cs_was_changed_seq != cs_was_changed_seq)
+ 			return -EINVAL;
+ 		partial_history_cycles = cycles - system_counterval.cycles;
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 4f87b1851c74a..6f7cb619aa5e4 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -108,7 +108,7 @@ enum {
+ 	RESCUER_NICE_LEVEL	= MIN_NICE,
+ 	HIGHPRI_NICE_LEVEL	= MIN_NICE,
+ 
+-	WQ_NAME_LEN		= 24,
++	WQ_NAME_LEN		= 32,
+ };
+ 
+ /*
+@@ -122,6 +122,9 @@ enum {
+  *
+  * L: pool->lock protected.  Access with pool->lock held.
+  *
++ * LN: pool->lock and wq_node_nr_active->lock protected for writes. Either
++ *     lock is sufficient for reads.
++ *
+  * K: Only modified by worker while holding pool->lock. Can be safely read by
+  *    self, while holding pool->lock or from IRQ context if %current is the
+  *    kworker.
+@@ -143,6 +146,9 @@ enum {
+  *
+  * WR: wq->mutex protected for writes.  RCU protected for reads.
+  *
++ * WO: wq->mutex protected for writes. Updated with WRITE_ONCE() and can be read
++ *     with READ_ONCE() without locking.
++ *
+  * MD: wq_mayday_lock protected.
+  *
+  * WD: Used internally by the watchdog.
+@@ -240,18 +246,18 @@ struct pool_workqueue {
+ 	 * pwq->inactive_works instead of pool->worklist and marked with
+ 	 * WORK_STRUCT_INACTIVE.
+ 	 *
+-	 * All work items marked with WORK_STRUCT_INACTIVE do not participate
+-	 * in pwq->nr_active and all work items in pwq->inactive_works are
+-	 * marked with WORK_STRUCT_INACTIVE.  But not all WORK_STRUCT_INACTIVE
+-	 * work items are in pwq->inactive_works.  Some of them are ready to
+-	 * run in pool->worklist or worker->scheduled.  Those work itmes are
+-	 * only struct wq_barrier which is used for flush_work() and should
+-	 * not participate in pwq->nr_active.  For non-barrier work item, it
+-	 * is marked with WORK_STRUCT_INACTIVE iff it is in pwq->inactive_works.
++	 * All work items marked with WORK_STRUCT_INACTIVE do not participate in
++	 * nr_active and all work items in pwq->inactive_works are marked with
++	 * WORK_STRUCT_INACTIVE. But not all WORK_STRUCT_INACTIVE work items are
++	 * in pwq->inactive_works. Some of them are ready to run in
++	 * pool->worklist or worker->scheduled. Those work items are only struct
++	 * wq_barrier which is used for flush_work() and should not participate
++	 * in nr_active. A non-barrier work item is marked with
++	 * WORK_STRUCT_INACTIVE iff it is in pwq->inactive_works.
+ 	 */
+ 	int			nr_active;	/* L: nr of active works */
+-	int			max_active;	/* L: max active works */
+ 	struct list_head	inactive_works;	/* L: inactive works */
++	struct list_head	pending_node;	/* LN: node on wq_node_nr_active->pending_pwqs */
+ 	struct list_head	pwqs_node;	/* WR: node on wq->pwqs */
+ 	struct list_head	mayday_node;	/* MD: node on wq->maydays */
+ 
+@@ -278,6 +284,26 @@ struct wq_flusher {
+ 
+ struct wq_device;
+ 
++/*
++ * Unlike in a per-cpu workqueue where max_active limits its concurrency level
++ * on each CPU, in an unbound workqueue, max_active applies to the whole system.
++ * As sharing a single nr_active across multiple sockets can be very expensive,
++ * the counting and enforcement is per NUMA node.
++ *
++ * The following struct is used to enforce per-node max_active. When a pwq wants
++ * to start executing a work item, it should increment ->nr using
++ * tryinc_node_nr_active(). If acquisition fails due to ->nr already being over
++ * ->max, the pwq is queued on ->pending_pwqs. As in-flight work items finish
++ * and decrement ->nr, node_activate_pending_pwq() activates the pending pwqs in
++ * round-robin order.
++ */
++struct wq_node_nr_active {
++	int			max;		/* per-node max_active */
++	atomic_t		nr;		/* per-node nr_active */
++	raw_spinlock_t		lock;		/* nests inside pool locks */
++	struct list_head	pending_pwqs;	/* LN: pwqs with inactive works */
++};
++
+ /*
+  * The externally visible workqueue.  It relays the issued work items to
+  * the appropriate worker_pool through its pool_workqueues.
+@@ -298,10 +324,15 @@ struct workqueue_struct {
+ 	struct worker		*rescuer;	/* MD: rescue worker */
+ 
+ 	int			nr_drainers;	/* WQ: drain in progress */
+-	int			saved_max_active; /* WQ: saved pwq max_active */
++
++	/* See alloc_workqueue() function comment for info on min/max_active */
++	int			max_active;	/* WO: max active works */
++	int			min_active;	/* WO: min active works */
++	int			saved_max_active; /* WQ: saved max_active */
++	int			saved_min_active; /* WQ: saved min_active */
+ 
+ 	struct workqueue_attrs	*unbound_attrs;	/* PW: only for unbound wqs */
+-	struct pool_workqueue	*dfl_pwq;	/* PW: only for unbound wqs */
++	struct pool_workqueue __rcu *dfl_pwq;   /* PW: only for unbound wqs */
+ 
+ #ifdef CONFIG_SYSFS
+ 	struct wq_device	*wq_dev;	/* I: for sysfs interface */
+@@ -323,6 +354,7 @@ struct workqueue_struct {
+ 	/* hot fields used during command issue, aligned to cacheline */
+ 	unsigned int		flags ____cacheline_aligned; /* WQ: WQ_* flags */
+ 	struct pool_workqueue __percpu __rcu **cpu_pwq; /* I: per-cpu pwqs */
++	struct wq_node_nr_active *node_nr_active[]; /* I: per-node nr_active */
+ };
+ 
+ static struct kmem_cache *pwq_cache;
+@@ -626,6 +658,36 @@ static int worker_pool_assign_id(struct worker_pool *pool)
+ 	return ret;
+ }
+ 
++static struct pool_workqueue __rcu **
++unbound_pwq_slot(struct workqueue_struct *wq, int cpu)
++{
++	if (cpu >= 0)
++		return per_cpu_ptr(wq->cpu_pwq, cpu);
++	else
++		return &wq->dfl_pwq;
++}
++
++/* @cpu < 0 for dfl_pwq */
++static struct pool_workqueue *unbound_pwq(struct workqueue_struct *wq, int cpu)
++{
++	return rcu_dereference_check(*unbound_pwq_slot(wq, cpu),
++				     lockdep_is_held(&wq_pool_mutex) ||
++				     lockdep_is_held(&wq->mutex));
++}
++
++/**
++ * unbound_effective_cpumask - effective cpumask of an unbound workqueue
++ * @wq: workqueue of interest
++ *
++ * @wq->unbound_attrs->cpumask contains the cpumask requested by the user which
++ * is masked with wq_unbound_cpumask to determine the effective cpumask. The
++ * default pwq is always mapped to the pool with the current effective cpumask.
++ */
++static struct cpumask *unbound_effective_cpumask(struct workqueue_struct *wq)
++{
++	return unbound_pwq(wq, -1)->pool->attrs->__pod_cpumask;
++}
++
+ static unsigned int work_color_to_flags(int color)
+ {
+ 	return color << WORK_STRUCT_COLOR_SHIFT;
+@@ -1395,6 +1457,71 @@ work_func_t wq_worker_last_func(struct task_struct *task)
+ 	return worker->last_func;
+ }
+ 
++/**
++ * wq_node_nr_active - Determine wq_node_nr_active to use
++ * @wq: workqueue of interest
++ * @node: NUMA node, can be %NUMA_NO_NODE
++ *
++ * Determine wq_node_nr_active to use for @wq on @node. Returns:
++ *
++ * - %NULL for per-cpu workqueues as they don't need to use shared nr_active.
++ *
++ * - node_nr_active[nr_node_ids] if @node is %NUMA_NO_NODE.
++ *
++ * - Otherwise, node_nr_active[@node].
++ */
++static struct wq_node_nr_active *wq_node_nr_active(struct workqueue_struct *wq,
++						   int node)
++{
++	if (!(wq->flags & WQ_UNBOUND))
++		return NULL;
++
++	if (node == NUMA_NO_NODE)
++		node = nr_node_ids;
++
++	return wq->node_nr_active[node];
++}
++
++/**
++ * wq_update_node_max_active - Update per-node max_actives to use
++ * @wq: workqueue to update
++ * @off_cpu: CPU that's going down, -1 if a CPU is not going down
++ *
++ * Update @wq->node_nr_active[]->max. @wq must be unbound. max_active is
++ * distributed among nodes according to the proportions of numbers of online
++ * cpus. The result is always between @wq->min_active and max_active.
++ */
++static void wq_update_node_max_active(struct workqueue_struct *wq, int off_cpu)
++{
++	struct cpumask *effective = unbound_effective_cpumask(wq);
++	int min_active = READ_ONCE(wq->min_active);
++	int max_active = READ_ONCE(wq->max_active);
++	int total_cpus, node;
++
++	lockdep_assert_held(&wq->mutex);
++
++	if (off_cpu >= 0 && !cpumask_test_cpu(off_cpu, effective))
++		off_cpu = -1;
++
++	total_cpus = cpumask_weight_and(effective, cpu_online_mask);
++	if (off_cpu >= 0)
++		total_cpus--;
++
++	for_each_node(node) {
++		int node_cpus;
++
++		node_cpus = cpumask_weight_and(effective, cpumask_of_node(node));
++		if (off_cpu >= 0 && cpu_to_node(off_cpu) == node)
++			node_cpus--;
++
++		wq_node_nr_active(wq, node)->max =
++			clamp(DIV_ROUND_UP(max_active * node_cpus, total_cpus),
++			      min_active, max_active);
++	}
++
++	wq_node_nr_active(wq, NUMA_NO_NODE)->max = min_active;
++}
++
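
Concretely, wq_update_node_max_active() hands each node a share of
max_active proportional to its online CPUs, with min_active as the
floor. A worked example with assumed numbers (max_active=8,
min_active=4, two nodes with 12 and 4 eligible CPUs):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static int clampi(int v, int lo, int hi)
{
	return v < lo ? lo : v > hi ? hi : v;
}

int main(void)
{
	int max_active = 8, min_active = 4;
	int node_cpus[] = { 12, 4 };
	int total_cpus = 16;

	for (int n = 0; n < 2; n++)
		printf("node%d max = %d\n", n,
		       clampi(DIV_ROUND_UP(max_active * node_cpus[n], total_cpus),
			      min_active, max_active));
	/* prints 6 and 4: the min_active floor lets the per-node sum (10)
	 * exceed max_active, preserving forward progress on small nodes */
	return 0;
}
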
+ /**
+  * get_pwq - get an extra reference on the specified pool_workqueue
+  * @pwq: pool_workqueue to get
+@@ -1447,24 +1574,293 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq)
+ 	}
+ }
+ 
+-static void pwq_activate_inactive_work(struct work_struct *work)
++static bool pwq_is_empty(struct pool_workqueue *pwq)
+ {
+-	struct pool_workqueue *pwq = get_work_pwq(work);
++	return !pwq->nr_active && list_empty(&pwq->inactive_works);
++}
+ 
++static void __pwq_activate_work(struct pool_workqueue *pwq,
++				struct work_struct *work)
++{
++	unsigned long *wdb = work_data_bits(work);
++
++	WARN_ON_ONCE(!(*wdb & WORK_STRUCT_INACTIVE));
+ 	trace_workqueue_activate_work(work);
+ 	if (list_empty(&pwq->pool->worklist))
+ 		pwq->pool->watchdog_ts = jiffies;
+ 	move_linked_works(work, &pwq->pool->worklist, NULL);
+-	__clear_bit(WORK_STRUCT_INACTIVE_BIT, work_data_bits(work));
++	__clear_bit(WORK_STRUCT_INACTIVE_BIT, wdb);
++}
++
++/**
++ * pwq_activate_work - Activate a work item if inactive
++ * @pwq: pool_workqueue @work belongs to
++ * @work: work item to activate
++ *
++ * Returns %true if activated. %false if already active.
++ */
++static bool pwq_activate_work(struct pool_workqueue *pwq,
++			      struct work_struct *work)
++{
++	struct worker_pool *pool = pwq->pool;
++	struct wq_node_nr_active *nna;
++
++	lockdep_assert_held(&pool->lock);
++
++	if (!(*work_data_bits(work) & WORK_STRUCT_INACTIVE))
++		return false;
++
++	nna = wq_node_nr_active(pwq->wq, pool->node);
++	if (nna)
++		atomic_inc(&nna->nr);
++
+ 	pwq->nr_active++;
++	__pwq_activate_work(pwq, work);
++	return true;
++}
++
++static bool tryinc_node_nr_active(struct wq_node_nr_active *nna)
++{
++	int max = READ_ONCE(nna->max);
++
++	while (true) {
++		int old, tmp;
++
++		old = atomic_read(&nna->nr);
++		if (old >= max)
++			return false;
++		tmp = atomic_cmpxchg_relaxed(&nna->nr, old, old + 1);
++		if (tmp == old)
++			return true;
++	}
++}
++
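
The helper above is a lockless bounded increment: read the counter,
test it against the limit, cmpxchg, and retry on races. The same loop
in portable C11, where compare_exchange_weak reloads the observed value
on failure (a sketch, not the kernel code):

#include <stdatomic.h>
#include <stdbool.h>

static bool tryinc_below(atomic_int *nr, int max)
{
	int old = atomic_load_explicit(nr, memory_order_relaxed);

	while (old < max) {
		/* on failure, old is refreshed with the current value */
		if (atomic_compare_exchange_weak_explicit(nr, &old, old + 1,
							  memory_order_relaxed,
							  memory_order_relaxed))
			return true;
	}
	return false;
}
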
++/**
++ * pwq_tryinc_nr_active - Try to increment nr_active for a pwq
++ * @pwq: pool_workqueue of interest
++ * @fill: max_active may have increased, try to increase concurrency level
++ *
++ * Try to increment nr_active for @pwq. Returns %true if an nr_active count is
++ * successfully obtained. %false otherwise.
++ */
++static bool pwq_tryinc_nr_active(struct pool_workqueue *pwq, bool fill)
++{
++	struct workqueue_struct *wq = pwq->wq;
++	struct worker_pool *pool = pwq->pool;
++	struct wq_node_nr_active *nna = wq_node_nr_active(wq, pool->node);
++	bool obtained = false;
++
++	lockdep_assert_held(&pool->lock);
++
++	if (!nna) {
++		/* per-cpu workqueue, pwq->nr_active is sufficient */
++		obtained = pwq->nr_active < READ_ONCE(wq->max_active);
++		goto out;
++	}
++
++	/*
++	 * Unbound workqueue uses per-node shared nr_active $nna. If @pwq is
++	 * already waiting on $nna, pwq_dec_nr_active() will maintain the
++	 * concurrency level. Don't jump the line.
++	 *
++	 * We need to ignore the pending test after max_active has increased as
++	 * pwq_dec_nr_active() can only maintain the concurrency level but not
++	 * increase it. This is indicated by @fill.
++	 */
++	if (!list_empty(&pwq->pending_node) && likely(!fill))
++		goto out;
++
++	obtained = tryinc_node_nr_active(nna);
++	if (obtained)
++		goto out;
++
++	/*
++	 * Lockless acquisition failed. Lock, add ourself to $nna->pending_pwqs
++	 * and try again. The smp_mb() is paired with the implied memory barrier
++	 * of atomic_dec_return() in pwq_dec_nr_active() to ensure that either
++	 * we see the decremented $nna->nr or they see non-empty
++	 * $nna->pending_pwqs.
++	 */
++	raw_spin_lock(&nna->lock);
++
++	if (list_empty(&pwq->pending_node))
++		list_add_tail(&pwq->pending_node, &nna->pending_pwqs);
++	else if (likely(!fill))
++		goto out_unlock;
++
++	smp_mb();
++
++	obtained = tryinc_node_nr_active(nna);
++
++	/*
++	 * If @fill, @pwq might have already been pending. Being spuriously
++	 * pending in cold paths doesn't affect anything. Let's leave it be.
++	 */
++	if (obtained && likely(!fill))
++		list_del_init(&pwq->pending_node);
++
++out_unlock:
++	raw_spin_unlock(&nna->lock);
++out:
++	if (obtained)
++		pwq->nr_active++;
++	return obtained;
++}
++
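
The barrier comment above is the classic store-buffering pattern: the
queueing side stores "pending" then re-reads the counter, the retiring
side decrements the counter then reads "pending", and the paired full
barriers guarantee that at least one side observes the other's update,
so a pending pwq is never stranded. An illustrative two-variable model
with C11 seq_cst atomics standing in for smp_mb()/atomic_dec_return():

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int  nr;		/* models nna->nr */
static atomic_bool pending;	/* models !list_empty(&nna->pending_pwqs) */

/* queueing side: publish pending, then re-check the counter */
static bool enqueue_then_recheck(int max)
{
	atomic_store(&pending, true);	/* seq_cst ~ store + smp_mb() */
	return atomic_load(&nr) < max;	/* true => retry the increment */
}

/* retiring side: decrement, then look for pending queues */
static bool dec_then_check_pending(void)
{
	atomic_fetch_sub(&nr, 1);	/* seq_cst RMW ~ atomic_dec_return() */
	return atomic_load(&pending);	/* true => activate a pending queue */
}
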
++/**
++ * pwq_activate_first_inactive - Activate the first inactive work item on a pwq
++ * @pwq: pool_workqueue of interest
++ * @fill: max_active may have increased, try to increase concurrency level
++ *
++ * Activate the first inactive work item of @pwq if available and allowed by
++ * max_active limit.
++ *
++ * Returns %true if an inactive work item has been activated. %false if no
++ * inactive work item is found or max_active limit is reached.
++ */
++static bool pwq_activate_first_inactive(struct pool_workqueue *pwq, bool fill)
++{
++	struct work_struct *work =
++		list_first_entry_or_null(&pwq->inactive_works,
++					 struct work_struct, entry);
++
++	if (work && pwq_tryinc_nr_active(pwq, fill)) {
++		__pwq_activate_work(pwq, work);
++		return true;
++	} else {
++		return false;
++	}
++}
++
++/**
++ * node_activate_pending_pwq - Activate a pending pwq on a wq_node_nr_active
++ * @nna: wq_node_nr_active to activate a pending pwq for
++ * @caller_pool: worker_pool the caller is locking
++ *
++ * Activate a pwq in @nna->pending_pwqs. Called with @caller_pool locked.
++ * @caller_pool may be unlocked and relocked to lock other worker_pools.
++ */
++static void node_activate_pending_pwq(struct wq_node_nr_active *nna,
++				      struct worker_pool *caller_pool)
++{
++	struct worker_pool *locked_pool = caller_pool;
++	struct pool_workqueue *pwq;
++	struct work_struct *work;
++
++	lockdep_assert_held(&caller_pool->lock);
++
++	raw_spin_lock(&nna->lock);
++retry:
++	pwq = list_first_entry_or_null(&nna->pending_pwqs,
++				       struct pool_workqueue, pending_node);
++	if (!pwq)
++		goto out_unlock;
++
++	/*
++	 * If @pwq is for a different pool than @locked_pool, we need to lock
++	 * @pwq->pool->lock. Let's trylock first. If unsuccessful, do the unlock
++	 * / lock dance. For that, we also need to release @nna->lock as it's
++	 * nested inside pool locks.
++	 */
++	if (pwq->pool != locked_pool) {
++		raw_spin_unlock(&locked_pool->lock);
++		locked_pool = pwq->pool;
++		if (!raw_spin_trylock(&locked_pool->lock)) {
++			raw_spin_unlock(&nna->lock);
++			raw_spin_lock(&locked_pool->lock);
++			raw_spin_lock(&nna->lock);
++			goto retry;
++		}
++	}
++
++	/*
++	 * $pwq may not have any inactive work items due to e.g. cancellations.
++	 * Drop it from pending_pwqs and see if there's another one.
++	 */
++	work = list_first_entry_or_null(&pwq->inactive_works,
++					struct work_struct, entry);
++	if (!work) {
++		list_del_init(&pwq->pending_node);
++		goto retry;
++	}
++
++	/*
++	 * Acquire an nr_active count and activate the inactive work item. If
++	 * $pwq still has inactive work items, rotate it to the end of the
++	 * pending_pwqs so that we round-robin through them. This means that
++	 * inactive work items are not activated in queueing order which is fine
++	 * given that there has never been any ordering across different pwqs.
++	 */
++	if (likely(tryinc_node_nr_active(nna))) {
++		pwq->nr_active++;
++		__pwq_activate_work(pwq, work);
++
++		if (list_empty(&pwq->inactive_works))
++			list_del_init(&pwq->pending_node);
++		else
++			list_move_tail(&pwq->pending_node, &nna->pending_pwqs);
++
++		/* if activating a foreign pool, make sure it's running */
++		if (pwq->pool != caller_pool)
++			kick_pool(pwq->pool);
++	}
++
++out_unlock:
++	raw_spin_unlock(&nna->lock);
++	if (locked_pool != caller_pool) {
++		raw_spin_unlock(&locked_pool->lock);
++		raw_spin_lock(&caller_pool->lock);
++	}
+ }
+ 
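
The trylock fallback above exists because nna->lock nests inside pool
locks: to block on a different pool's lock, the code must first back
out of nna->lock, reacquire both in the correct order, and revalidate
(the goto retry). The shape of that dance in generic form, as a pthread
sketch with hypothetical locks:

#include <pthread.h>

/* 'inner' nests inside the outer locks. Try the new outer lock
 * opportunistically first; on failure, drop 'inner' so the blocking
 * acquisition cannot deadlock, then retake both in order. The caller
 * must revalidate its state afterwards, as the goto retry does. */
static void switch_outer_locked(pthread_mutex_t *inner,
				pthread_mutex_t *old_outer,
				pthread_mutex_t *new_outer)
{
	pthread_mutex_unlock(old_outer);
	if (pthread_mutex_trylock(new_outer)) {	/* nonzero == failed */
		pthread_mutex_unlock(inner);
		pthread_mutex_lock(new_outer);
		pthread_mutex_lock(inner);
	}
}
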
+-static void pwq_activate_first_inactive(struct pool_workqueue *pwq)
++/**
++ * pwq_dec_nr_active - Retire an active count
++ * @pwq: pool_workqueue of interest
++ *
++ * Decrement @pwq's nr_active and try to activate the first inactive work item.
++ * For unbound workqueues, this function may temporarily drop @pwq->pool->lock.
++ */
++static void pwq_dec_nr_active(struct pool_workqueue *pwq)
+ {
+-	struct work_struct *work = list_first_entry(&pwq->inactive_works,
+-						    struct work_struct, entry);
++	struct worker_pool *pool = pwq->pool;
++	struct wq_node_nr_active *nna = wq_node_nr_active(pwq->wq, pool->node);
+ 
+-	pwq_activate_inactive_work(work);
++	lockdep_assert_held(&pool->lock);
++
++	/*
++	 * @pwq->nr_active should be decremented for both percpu and unbound
++	 * workqueues.
++	 */
++	pwq->nr_active--;
++
++	/*
++	 * For a percpu workqueue, it's simple. Just need to kick the first
++	 * inactive work item on @pwq itself.
++	 */
++	if (!nna) {
++		pwq_activate_first_inactive(pwq, false);
++		return;
++	}
++
++	/*
++	 * If @pwq is for an unbound workqueue, it's more complicated because
++	 * multiple pwqs and pools may be sharing the nr_active count. When a
++	 * pwq needs to wait for an nr_active count, it puts itself on
++	 * $nna->pending_pwqs. The following atomic_dec_return()'s implied
++	 * memory barrier is paired with smp_mb() in pwq_tryinc_nr_active() to
++	 * guarantee that either we see non-empty pending_pwqs or they see
++	 * decremented $nna->nr.
++	 *
++	 * $nna->max may change as CPUs come online/offline and @pwq->wq's
++	 * max_active gets updated. However, it is guaranteed to be equal to or
++	 * larger than @pwq->wq->min_active which is above zero unless freezing.
++	 * This maintains the forward progress guarantee.
++	 */
++	if (atomic_dec_return(&nna->nr) >= READ_ONCE(nna->max))
++		return;
++
++	if (!list_empty(&nna->pending_pwqs))
++		node_activate_pending_pwq(nna, pool);
+ }
+ 
+ /**
+@@ -1482,14 +1878,8 @@ static void pwq_dec_nr_in_flight(struct pool_workqueue *pwq, unsigned long work_
+ {
+ 	int color = get_work_color(work_data);
+ 
+-	if (!(work_data & WORK_STRUCT_INACTIVE)) {
+-		pwq->nr_active--;
+-		if (!list_empty(&pwq->inactive_works)) {
+-			/* one down, submit an inactive one */
+-			if (pwq->nr_active < pwq->max_active)
+-				pwq_activate_first_inactive(pwq);
+-		}
+-	}
++	if (!(work_data & WORK_STRUCT_INACTIVE))
++		pwq_dec_nr_active(pwq);
+ 
+ 	pwq->nr_in_flight[color]--;
+ 
+@@ -1602,8 +1992,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
+ 		 * management later on and cause stall.  Make sure the work
+ 		 * item is activated before grabbing.
+ 		 */
+-		if (*work_data_bits(work) & WORK_STRUCT_INACTIVE)
+-			pwq_activate_inactive_work(work);
++		pwq_activate_work(pwq, work);
+ 
+ 		list_del_init(&work->entry);
+ 		pwq_dec_nr_in_flight(pwq, *work_data_bits(work));
+@@ -1787,12 +2176,16 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+ 	pwq->nr_in_flight[pwq->work_color]++;
+ 	work_flags = work_color_to_flags(pwq->work_color);
+ 
+-	if (likely(pwq->nr_active < pwq->max_active)) {
++	/*
++	 * Limit the number of concurrently active work items to max_active.
++	 * @work must also queue behind existing inactive work items to maintain
++	 * ordering when max_active changes. See wq_adjust_max_active().
++	 */
++	if (list_empty(&pwq->inactive_works) && pwq_tryinc_nr_active(pwq, false)) {
+ 		if (list_empty(&pool->worklist))
+ 			pool->watchdog_ts = jiffies;
+ 
+ 		trace_workqueue_activate_work(work);
+-		pwq->nr_active++;
+ 		insert_work(pwq, work, &pool->worklist, work_flags);
+ 		kick_pool(pool);
+ 	} else {
+@@ -3021,7 +3414,7 @@ static void insert_wq_barrier(struct pool_workqueue *pwq,
+ 
+ 	barr->task = current;
+ 
+-	/* The barrier work item does not participate in pwq->nr_active. */
++	/* The barrier work item does not participate in nr_active. */
+ 	work_flags |= WORK_STRUCT_INACTIVE;
+ 
+ 	/*
+@@ -3310,7 +3703,7 @@ void drain_workqueue(struct workqueue_struct *wq)
+ 		bool drained;
+ 
+ 		raw_spin_lock_irq(&pwq->pool->lock);
+-		drained = !pwq->nr_active && list_empty(&pwq->inactive_works);
++		drained = pwq_is_empty(pwq);
+ 		raw_spin_unlock_irq(&pwq->pool->lock);
+ 
+ 		if (drained)
+@@ -3921,11 +4314,65 @@ static void wq_free_lockdep(struct workqueue_struct *wq)
+ }
+ #endif
+ 
++static void free_node_nr_active(struct wq_node_nr_active **nna_ar)
++{
++	int node;
++
++	for_each_node(node) {
++		kfree(nna_ar[node]);
++		nna_ar[node] = NULL;
++	}
++
++	kfree(nna_ar[nr_node_ids]);
++	nna_ar[nr_node_ids] = NULL;
++}
++
++static void init_node_nr_active(struct wq_node_nr_active *nna)
++{
++	atomic_set(&nna->nr, 0);
++	raw_spin_lock_init(&nna->lock);
++	INIT_LIST_HEAD(&nna->pending_pwqs);
++}
++
++/*
++ * Each node's nr_active counter will be accessed mostly from its own node and
++ * should be allocated in the node.
++ */
++static int alloc_node_nr_active(struct wq_node_nr_active **nna_ar)
++{
++	struct wq_node_nr_active *nna;
++	int node;
++
++	for_each_node(node) {
++		nna = kzalloc_node(sizeof(*nna), GFP_KERNEL, node);
++		if (!nna)
++			goto err_free;
++		init_node_nr_active(nna);
++		nna_ar[node] = nna;
++	}
++
++	/* [nr_node_ids] is used as the fallback */
++	nna = kzalloc_node(sizeof(*nna), GFP_KERNEL, NUMA_NO_NODE);
++	if (!nna)
++		goto err_free;
++	init_node_nr_active(nna);
++	nna_ar[nr_node_ids] = nna;
++
++	return 0;
++
++err_free:
++	free_node_nr_active(nna_ar);
++	return -ENOMEM;
++}
++
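
The allocator above sizes the table as nr_node_ids + 1, the extra
trailing slot serving callers that pass NUMA_NO_NODE. A plain-C sketch
of that layout and its cleanup on partial failure (generic names, no
NUMA placement):

#include <stdlib.h>

struct counter { int max, nr; };

static struct counter **alloc_node_table(int nr_nodes)
{
	/* nr_nodes real slots plus one trailing fallback slot */
	struct counter **tbl = calloc(nr_nodes + 1, sizeof(*tbl));

	if (!tbl)
		return NULL;

	for (int n = 0; n <= nr_nodes; n++) {
		tbl[n] = calloc(1, sizeof(**tbl));	/* kzalloc_node() upstream */
		if (!tbl[n]) {
			while (n--)
				free(tbl[n]);
			free(tbl);
			return NULL;
		}
	}
	return tbl;
}
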
+ static void rcu_free_wq(struct rcu_head *rcu)
+ {
+ 	struct workqueue_struct *wq =
+ 		container_of(rcu, struct workqueue_struct, rcu);
+ 
++	if (wq->flags & WQ_UNBOUND)
++		free_node_nr_active(wq->node_nr_active);
++
+ 	wq_free_lockdep(wq);
+ 	free_percpu(wq->cpu_pwq);
+ 	free_workqueue_attrs(wq->unbound_attrs);
+@@ -4124,6 +4571,15 @@ static void pwq_release_workfn(struct kthread_work *work)
+ 		mutex_unlock(&wq_pool_mutex);
+ 	}
+ 
++	if (!list_empty(&pwq->pending_node)) {
++		struct wq_node_nr_active *nna =
++			wq_node_nr_active(pwq->wq, pwq->pool->node);
++
++		raw_spin_lock_irq(&nna->lock);
++		list_del_init(&pwq->pending_node);
++		raw_spin_unlock_irq(&nna->lock);
++	}
++
+ 	call_rcu(&pwq->rcu, rcu_free_pwq);
+ 
+ 	/*
+@@ -4136,50 +4592,6 @@ static void pwq_release_workfn(struct kthread_work *work)
+ 	}
+ }
+ 
+-/**
+- * pwq_adjust_max_active - update a pwq's max_active to the current setting
+- * @pwq: target pool_workqueue
+- *
+- * If @pwq isn't freezing, set @pwq->max_active to the associated
+- * workqueue's saved_max_active and activate inactive work items
+- * accordingly.  If @pwq is freezing, clear @pwq->max_active to zero.
+- */
+-static void pwq_adjust_max_active(struct pool_workqueue *pwq)
+-{
+-	struct workqueue_struct *wq = pwq->wq;
+-	bool freezable = wq->flags & WQ_FREEZABLE;
+-	unsigned long flags;
+-
+-	/* for @wq->saved_max_active */
+-	lockdep_assert_held(&wq->mutex);
+-
+-	/* fast exit for non-freezable wqs */
+-	if (!freezable && pwq->max_active == wq->saved_max_active)
+-		return;
+-
+-	/* this function can be called during early boot w/ irq disabled */
+-	raw_spin_lock_irqsave(&pwq->pool->lock, flags);
+-
+-	/*
+-	 * During [un]freezing, the caller is responsible for ensuring that
+-	 * this function is called at least once after @workqueue_freezing
+-	 * is updated and visible.
+-	 */
+-	if (!freezable || !workqueue_freezing) {
+-		pwq->max_active = wq->saved_max_active;
+-
+-		while (!list_empty(&pwq->inactive_works) &&
+-		       pwq->nr_active < pwq->max_active)
+-			pwq_activate_first_inactive(pwq);
+-
+-		kick_pool(pwq->pool);
+-	} else {
+-		pwq->max_active = 0;
+-	}
+-
+-	raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
+-}
+-
+ /* initialize newly allocated @pwq which is associated with @wq and @pool */
+ static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
+ 		     struct worker_pool *pool)
+@@ -4193,6 +4605,7 @@ static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
+ 	pwq->flush_color = -1;
+ 	pwq->refcnt = 1;
+ 	INIT_LIST_HEAD(&pwq->inactive_works);
++	INIT_LIST_HEAD(&pwq->pending_node);
+ 	INIT_LIST_HEAD(&pwq->pwqs_node);
+ 	INIT_LIST_HEAD(&pwq->mayday_node);
+ 	kthread_init_work(&pwq->release_work, pwq_release_workfn);
+@@ -4212,9 +4625,6 @@ static void link_pwq(struct pool_workqueue *pwq)
+ 	/* set the matching work_color */
+ 	pwq->work_color = wq->work_color;
+ 
+-	/* sync max_active to the current setting */
+-	pwq_adjust_max_active(pwq);
+-
+ 	/* link in @pwq */
+ 	list_add_rcu(&pwq->pwqs_node, &wq->pwqs);
+ }
+@@ -4283,10 +4693,11 @@ static void wq_calc_pod_cpumask(struct workqueue_attrs *attrs, int cpu,
+ 				"possible intersect\n");
+ }
+ 
+-/* install @pwq into @wq's cpu_pwq and return the old pwq */
++/* install @pwq into @wq and return the old pwq, @cpu < 0 for dfl_pwq */
+ static struct pool_workqueue *install_unbound_pwq(struct workqueue_struct *wq,
+ 					int cpu, struct pool_workqueue *pwq)
+ {
++	struct pool_workqueue __rcu **slot = unbound_pwq_slot(wq, cpu);
+ 	struct pool_workqueue *old_pwq;
+ 
+ 	lockdep_assert_held(&wq_pool_mutex);
+@@ -4295,8 +4706,8 @@ static struct pool_workqueue *install_unbound_pwq(struct workqueue_struct *wq,
+ 	/* link_pwq() can handle duplicate calls */
+ 	link_pwq(pwq);
+ 
+-	old_pwq = rcu_access_pointer(*per_cpu_ptr(wq->cpu_pwq, cpu));
+-	rcu_assign_pointer(*per_cpu_ptr(wq->cpu_pwq, cpu), pwq);
++	old_pwq = rcu_access_pointer(*slot);
++	rcu_assign_pointer(*slot, pwq);
+ 	return old_pwq;
+ }
+ 
+@@ -4396,14 +4807,14 @@ static void apply_wqattrs_commit(struct apply_wqattrs_ctx *ctx)
+ 
+ 	copy_workqueue_attrs(ctx->wq->unbound_attrs, ctx->attrs);
+ 
+-	/* save the previous pwq and install the new one */
++	/* save the previous pwqs and install the new ones */
+ 	for_each_possible_cpu(cpu)
+ 		ctx->pwq_tbl[cpu] = install_unbound_pwq(ctx->wq, cpu,
+ 							ctx->pwq_tbl[cpu]);
++	ctx->dfl_pwq = install_unbound_pwq(ctx->wq, -1, ctx->dfl_pwq);
+ 
+-	/* @dfl_pwq might not have been used, ensure it's linked */
+-	link_pwq(ctx->dfl_pwq);
+-	swap(ctx->wq->dfl_pwq, ctx->dfl_pwq);
++	/* update node_nr_active->max */
++	wq_update_node_max_active(ctx->wq, -1);
+ 
+ 	mutex_unlock(&ctx->wq->mutex);
+ }
+@@ -4526,9 +4937,7 @@ static void wq_update_pod(struct workqueue_struct *wq, int cpu,
+ 
+ 	/* nothing to do if the target cpumask matches the current pwq */
+ 	wq_calc_pod_cpumask(target_attrs, cpu, off_cpu);
+-	pwq = rcu_dereference_protected(*per_cpu_ptr(wq->cpu_pwq, cpu),
+-					lockdep_is_held(&wq_pool_mutex));
+-	if (wqattrs_equal(target_attrs, pwq->pool->attrs))
++	if (wqattrs_equal(target_attrs, unbound_pwq(wq, cpu)->pool->attrs))
+ 		return;
+ 
+ 	/* create a new pwq */
+@@ -4546,10 +4955,11 @@ static void wq_update_pod(struct workqueue_struct *wq, int cpu,
+ 
+ use_dfl_pwq:
+ 	mutex_lock(&wq->mutex);
+-	raw_spin_lock_irq(&wq->dfl_pwq->pool->lock);
+-	get_pwq(wq->dfl_pwq);
+-	raw_spin_unlock_irq(&wq->dfl_pwq->pool->lock);
+-	old_pwq = install_unbound_pwq(wq, cpu, wq->dfl_pwq);
++	pwq = unbound_pwq(wq, -1);
++	raw_spin_lock_irq(&pwq->pool->lock);
++	get_pwq(pwq);
++	raw_spin_unlock_irq(&pwq->pool->lock);
++	old_pwq = install_unbound_pwq(wq, cpu, pwq);
+ out_unlock:
+ 	mutex_unlock(&wq->mutex);
+ 	put_pwq_unlocked(old_pwq);
+@@ -4587,10 +4997,13 @@ static int alloc_and_link_pwqs(struct workqueue_struct *wq)
+ 
+ 	cpus_read_lock();
+ 	if (wq->flags & __WQ_ORDERED) {
++		struct pool_workqueue *dfl_pwq;
++
+ 		ret = apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);
+ 		/* there should only be single pwq for ordering guarantee */
+-		WARN(!ret && (wq->pwqs.next != &wq->dfl_pwq->pwqs_node ||
+-			      wq->pwqs.prev != &wq->dfl_pwq->pwqs_node),
++		dfl_pwq = rcu_access_pointer(wq->dfl_pwq);
++		WARN(!ret && (wq->pwqs.next != &dfl_pwq->pwqs_node ||
++			      wq->pwqs.prev != &dfl_pwq->pwqs_node),
+ 		     "ordering guarantee broken for workqueue %s\n", wq->name);
+ 	} else {
+ 		ret = apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);
+@@ -4665,6 +5078,69 @@ static int init_rescuer(struct workqueue_struct *wq)
+ 	return 0;
+ }
+ 
++/**
++ * wq_adjust_max_active - update a wq's max_active to the current setting
++ * @wq: target workqueue
++ *
++ * If @wq isn't freezing, set @wq->max_active to the saved_max_active and
++ * activate inactive work items accordingly. If @wq is freezing, clear
++ * @wq->max_active to zero.
++ */
++static void wq_adjust_max_active(struct workqueue_struct *wq)
++{
++	bool activated;
++	int new_max, new_min;
++
++	lockdep_assert_held(&wq->mutex);
++
++	if ((wq->flags & WQ_FREEZABLE) && workqueue_freezing) {
++		new_max = 0;
++		new_min = 0;
++	} else {
++		new_max = wq->saved_max_active;
++		new_min = wq->saved_min_active;
++	}
++
++	if (wq->max_active == new_max && wq->min_active == new_min)
++		return;
++
++	/*
++	 * Update @wq->max/min_active and then kick inactive work items if more
++	 * active work items are allowed. This doesn't break work item ordering
++	 * because new work items are always queued behind existing inactive
++	 * work items if there are any.
++	 */
++	WRITE_ONCE(wq->max_active, new_max);
++	WRITE_ONCE(wq->min_active, new_min);
++
++	if (wq->flags & WQ_UNBOUND)
++		wq_update_node_max_active(wq, -1);
++
++	if (new_max == 0)
++		return;
++
++	/*
++	 * Round-robin through pwq's activating the first inactive work item
++	 * until max_active is filled.
++	 */
++	do {
++		struct pool_workqueue *pwq;
++
++		activated = false;
++		for_each_pwq(pwq, wq) {
++			unsigned long flags;
++
++			/* can be called during early boot w/ irq disabled */
++			raw_spin_lock_irqsave(&pwq->pool->lock, flags);
++			if (pwq_activate_first_inactive(pwq, true)) {
++				activated = true;
++				kick_pool(pwq->pool);
++			}
++			raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
++		}
++	} while (activated);
++}
++
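
The activation loop above makes at most one unit of progress per pwq
per pass and stops only after a full pass yields none, spreading the
newly allowed active slots across pwqs instead of draining one queue
first. Reduced to its skeleton as a self-contained sketch with a toy
queue type:

#include <stdbool.h>

struct q { int inactive; };	/* toy stand-in for a pool_workqueue */

static bool activate_one(struct q *q, int *budget)
{
	if (q->inactive > 0 && *budget > 0) {
		q->inactive--;
		(*budget)--;
		return true;
	}
	return false;
}

static void round_robin_activate(struct q *qs, int n, int budget)
{
	bool progressed;

	do {
		progressed = false;
		for (int i = 0; i < n; i++)
			progressed |= activate_one(&qs[i], &budget);
	} while (progressed);
}
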
+ __printf(1, 4)
+ struct workqueue_struct *alloc_workqueue(const char *fmt,
+ 					 unsigned int flags,
+@@ -4672,7 +5148,8 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
+ {
+ 	va_list args;
+ 	struct workqueue_struct *wq;
+-	struct pool_workqueue *pwq;
++	size_t wq_size;
++	int name_len;
+ 
+ 	/*
+ 	 * Unbound && max_active == 1 used to imply ordered, which is no longer
+@@ -4688,7 +5165,12 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
+ 		flags |= WQ_UNBOUND;
+ 
+ 	/* allocate wq and format name */
+-	wq = kzalloc(sizeof(*wq), GFP_KERNEL);
++	if (flags & WQ_UNBOUND)
++		wq_size = struct_size(wq, node_nr_active, nr_node_ids + 1);
++	else
++		wq_size = sizeof(*wq);
++
++	wq = kzalloc(wq_size, GFP_KERNEL);
+ 	if (!wq)
+ 		return NULL;
+ 
+@@ -4699,15 +5181,22 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
+ 	}
+ 
+ 	va_start(args, max_active);
+-	vsnprintf(wq->name, sizeof(wq->name), fmt, args);
++	name_len = vsnprintf(wq->name, sizeof(wq->name), fmt, args);
+ 	va_end(args);
+ 
++	if (name_len >= WQ_NAME_LEN)
++		pr_warn_once("workqueue: name exceeds WQ_NAME_LEN. Truncating to: %s\n",
++			     wq->name);
++
+ 	max_active = max_active ?: WQ_DFL_ACTIVE;
+ 	max_active = wq_clamp_max_active(max_active, flags, wq->name);
+ 
+ 	/* init wq */
+ 	wq->flags = flags;
+-	wq->saved_max_active = max_active;
++	wq->max_active = max_active;
++	wq->min_active = min(max_active, WQ_DFL_MIN_ACTIVE);
++	wq->saved_max_active = wq->max_active;
++	wq->saved_min_active = wq->min_active;
+ 	mutex_init(&wq->mutex);
+ 	atomic_set(&wq->nr_pwqs_to_flush, 0);
+ 	INIT_LIST_HEAD(&wq->pwqs);
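
The new name_len check relies on vsnprintf() returning the length the
output would have had without truncation, so any return value greater
than or equal to the buffer size flags a cut-off name. The same
detection in standard C, as a hypothetical wrapper:

#include <stdarg.h>
#include <stdio.h>

static int format_name(char *buf, size_t size, const char *fmt, ...)
{
	va_list ap;
	int len;

	va_start(ap, fmt);
	len = vsnprintf(buf, size, fmt, ap);	/* full would-be length */
	va_end(ap);

	if (len >= (int)size)
		fprintf(stderr, "name truncated to: %s\n", buf);
	return len;
}
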
+@@ -4718,8 +5207,13 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
+ 	wq_init_lockdep(wq);
+ 	INIT_LIST_HEAD(&wq->list);
+ 
++	if (flags & WQ_UNBOUND) {
++		if (alloc_node_nr_active(wq->node_nr_active) < 0)
++			goto err_unreg_lockdep;
++	}
++
+ 	if (alloc_and_link_pwqs(wq) < 0)
+-		goto err_unreg_lockdep;
++		goto err_free_node_nr_active;
+ 
+ 	if (wq_online && init_rescuer(wq) < 0)
+ 		goto err_destroy;
+@@ -4735,8 +5229,7 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
+ 	mutex_lock(&wq_pool_mutex);
+ 
+ 	mutex_lock(&wq->mutex);
+-	for_each_pwq(pwq, wq)
+-		pwq_adjust_max_active(pwq);
++	wq_adjust_max_active(wq);
+ 	mutex_unlock(&wq->mutex);
+ 
+ 	list_add_tail_rcu(&wq->list, &workqueues);
+@@ -4745,6 +5238,9 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
+ 
+ 	return wq;
+ 
++err_free_node_nr_active:
++	if (wq->flags & WQ_UNBOUND)
++		free_node_nr_active(wq->node_nr_active);
+ err_unreg_lockdep:
+ 	wq_unregister_lockdep(wq);
+ 	wq_free_lockdep(wq);
+@@ -4766,9 +5262,9 @@ static bool pwq_busy(struct pool_workqueue *pwq)
+ 		if (pwq->nr_in_flight[i])
+ 			return true;
+ 
+-	if ((pwq != pwq->wq->dfl_pwq) && (pwq->refcnt > 1))
++	if ((pwq != rcu_access_pointer(pwq->wq->dfl_pwq)) && (pwq->refcnt > 1))
+ 		return true;
+-	if (pwq->nr_active || !list_empty(&pwq->inactive_works))
++	if (!pwq_is_empty(pwq))
+ 		return true;
+ 
+ 	return false;
+@@ -4850,13 +5346,12 @@ void destroy_workqueue(struct workqueue_struct *wq)
+ 	rcu_read_lock();
+ 
+ 	for_each_possible_cpu(cpu) {
+-		pwq = rcu_access_pointer(*per_cpu_ptr(wq->cpu_pwq, cpu));
+-		RCU_INIT_POINTER(*per_cpu_ptr(wq->cpu_pwq, cpu), NULL);
+-		put_pwq_unlocked(pwq);
++		put_pwq_unlocked(unbound_pwq(wq, cpu));
++		RCU_INIT_POINTER(*unbound_pwq_slot(wq, cpu), NULL);
+ 	}
+ 
+-	put_pwq_unlocked(wq->dfl_pwq);
+-	wq->dfl_pwq = NULL;
++	put_pwq_unlocked(unbound_pwq(wq, -1));
++	RCU_INIT_POINTER(*unbound_pwq_slot(wq, -1), NULL);
+ 
+ 	rcu_read_unlock();
+ }
+@@ -4867,15 +5362,14 @@ EXPORT_SYMBOL_GPL(destroy_workqueue);
+  * @wq: target workqueue
+  * @max_active: new max_active value.
+  *
+- * Set max_active of @wq to @max_active.
++ * Set max_active of @wq to @max_active. See the alloc_workqueue() function
++ * comment.
+  *
+  * CONTEXT:
+  * Don't call from IRQ context.
+  */
+ void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
+ {
+-	struct pool_workqueue *pwq;
+-
+ 	/* disallow meddling with max_active for ordered workqueues */
+ 	if (WARN_ON(wq->flags & __WQ_ORDERED_EXPLICIT))
+ 		return;
+@@ -4886,9 +5380,10 @@ void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
+ 
+ 	wq->flags &= ~__WQ_ORDERED;
+ 	wq->saved_max_active = max_active;
++	if (wq->flags & WQ_UNBOUND)
++		wq->saved_min_active = min(wq->saved_min_active, max_active);
+ 
+-	for_each_pwq(pwq, wq)
+-		pwq_adjust_max_active(pwq);
++	wq_adjust_max_active(wq);
+ 
+ 	mutex_unlock(&wq->mutex);
+ }
+@@ -5135,8 +5630,8 @@ static void show_pwq(struct pool_workqueue *pwq)
+ 	pr_info("  pwq %d:", pool->id);
+ 	pr_cont_pool_info(pool);
+ 
+-	pr_cont(" active=%d/%d refcnt=%d%s\n",
+-		pwq->nr_active, pwq->max_active, pwq->refcnt,
++	pr_cont(" active=%d refcnt=%d%s\n",
++		pwq->nr_active, pwq->refcnt,
+ 		!list_empty(&pwq->mayday_node) ? " MAYDAY" : "");
+ 
+ 	hash_for_each(pool->busy_hash, bkt, worker, hentry) {
+@@ -5210,7 +5705,7 @@ void show_one_workqueue(struct workqueue_struct *wq)
+ 	unsigned long flags;
+ 
+ 	for_each_pwq(pwq, wq) {
+-		if (pwq->nr_active || !list_empty(&pwq->inactive_works)) {
++		if (!pwq_is_empty(pwq)) {
+ 			idle = false;
+ 			break;
+ 		}
+@@ -5222,7 +5717,7 @@ void show_one_workqueue(struct workqueue_struct *wq)
+ 
+ 	for_each_pwq(pwq, wq) {
+ 		raw_spin_lock_irqsave(&pwq->pool->lock, flags);
+-		if (pwq->nr_active || !list_empty(&pwq->inactive_works)) {
++		if (!pwq_is_empty(pwq)) {
+ 			/*
+ 			 * Defer printing to avoid deadlocks in console
+ 			 * drivers that queue work while holding locks
+@@ -5569,6 +6064,10 @@ int workqueue_online_cpu(unsigned int cpu)
+ 
+ 			for_each_cpu(tcpu, pt->pod_cpus[pt->cpu_pod[cpu]])
+ 				wq_update_pod(wq, tcpu, cpu, true);
++
++			mutex_lock(&wq->mutex);
++			wq_update_node_max_active(wq, -1);
++			mutex_unlock(&wq->mutex);
+ 		}
+ 	}
+ 
+@@ -5597,6 +6096,10 @@ int workqueue_offline_cpu(unsigned int cpu)
+ 
+ 			for_each_cpu(tcpu, pt->pod_cpus[pt->cpu_pod[cpu]])
+ 				wq_update_pod(wq, tcpu, cpu, false);
++
++			mutex_lock(&wq->mutex);
++			wq_update_node_max_active(wq, cpu);
++			mutex_unlock(&wq->mutex);
+ 		}
+ 	}
+ 	mutex_unlock(&wq_pool_mutex);
+@@ -5684,7 +6187,6 @@ EXPORT_SYMBOL_GPL(work_on_cpu_safe_key);
+ void freeze_workqueues_begin(void)
+ {
+ 	struct workqueue_struct *wq;
+-	struct pool_workqueue *pwq;
+ 
+ 	mutex_lock(&wq_pool_mutex);
+ 
+@@ -5693,8 +6195,7 @@ void freeze_workqueues_begin(void)
+ 
+ 	list_for_each_entry(wq, &workqueues, list) {
+ 		mutex_lock(&wq->mutex);
+-		for_each_pwq(pwq, wq)
+-			pwq_adjust_max_active(pwq);
++		wq_adjust_max_active(wq);
+ 		mutex_unlock(&wq->mutex);
+ 	}
+ 
+@@ -5759,7 +6260,6 @@ bool freeze_workqueues_busy(void)
+ void thaw_workqueues(void)
+ {
+ 	struct workqueue_struct *wq;
+-	struct pool_workqueue *pwq;
+ 
+ 	mutex_lock(&wq_pool_mutex);
+ 
+@@ -5771,8 +6271,7 @@ void thaw_workqueues(void)
+ 	/* restore max_active and repopulate worklist */
+ 	list_for_each_entry(wq, &workqueues, list) {
+ 		mutex_lock(&wq->mutex);
+-		for_each_pwq(pwq, wq)
+-			pwq_adjust_max_active(pwq);
++		wq_adjust_max_active(wq);
+ 		mutex_unlock(&wq->mutex);
+ 	}
+ 
+@@ -6797,8 +7296,12 @@ void __init workqueue_init_topology(void)
+ 	 * combinations to apply per-pod sharing.
+ 	 */
+ 	list_for_each_entry(wq, &workqueues, list) {
+-		for_each_online_cpu(cpu) {
++		for_each_online_cpu(cpu)
+ 			wq_update_pod(wq, cpu, cpu, true);
++		if (wq->flags & WQ_UNBOUND) {
++			mutex_lock(&wq->mutex);
++			wq_update_node_max_active(wq, -1);
++			mutex_unlock(&wq->mutex);
+ 		}
+ 	}
+ 
+diff --git a/lib/cmdline_kunit.c b/lib/cmdline_kunit.c
+index d4572dbc91453..705b82736be08 100644
+--- a/lib/cmdline_kunit.c
++++ b/lib/cmdline_kunit.c
+@@ -124,7 +124,7 @@ static void cmdline_do_one_range_test(struct kunit *test, const char *in,
+ 			    n, e[0], r[0]);
+ 
+ 	p = memchr_inv(&r[1], 0, sizeof(r) - sizeof(r[0]));
+-	KUNIT_EXPECT_PTR_EQ_MSG(test, p, NULL, "in test %u at %u out of bound", n, p - r);
++	KUNIT_EXPECT_PTR_EQ_MSG(test, p, NULL, "in test %u at %td out of bound", n, p - r);
+ }
+ 
+ static void cmdline_test_range(struct kunit *test)
+diff --git a/lib/kunit/executor_test.c b/lib/kunit/executor_test.c
+index 22d4ee86dbedd..3f7f967e3688e 100644
+--- a/lib/kunit/executor_test.c
++++ b/lib/kunit/executor_test.c
+@@ -129,7 +129,7 @@ static void parse_filter_attr_test(struct kunit *test)
+ 			GFP_KERNEL);
+ 	for (j = 0; j < filter_count; j++) {
+ 		parsed_filters[j] = kunit_next_attr_filter(&filter, &err);
+-		KUNIT_ASSERT_EQ_MSG(test, err, 0, "failed to parse filter '%s'", filters[j]);
++		KUNIT_ASSERT_EQ_MSG(test, err, 0, "failed to parse filter from '%s'", filters);
+ 	}
+ 
+ 	KUNIT_EXPECT_STREQ(test, kunit_attr_filter_name(parsed_filters[0]), "speed");
+diff --git a/lib/memcpy_kunit.c b/lib/memcpy_kunit.c
+index 440aee705ccca..30e00ef0bf2e0 100644
+--- a/lib/memcpy_kunit.c
++++ b/lib/memcpy_kunit.c
+@@ -32,7 +32,7 @@ struct some_bytes {
+ 	BUILD_BUG_ON(sizeof(instance.data) != 32);	\
+ 	for (size_t i = 0; i < sizeof(instance.data); i++) {	\
+ 		KUNIT_ASSERT_EQ_MSG(test, instance.data[i], v, \
+-			"line %d: '%s' not initialized to 0x%02x @ %d (saw 0x%02x)\n", \
++			"line %d: '%s' not initialized to 0x%02x @ %zu (saw 0x%02x)\n", \
+ 			__LINE__, #instance, v, i, instance.data[i]);	\
+ 	}	\
+ } while (0)
+@@ -41,7 +41,7 @@ struct some_bytes {
+ 	BUILD_BUG_ON(sizeof(one) != sizeof(two)); \
+ 	for (size_t i = 0; i < sizeof(one); i++) {	\
+ 		KUNIT_EXPECT_EQ_MSG(test, one.data[i], two.data[i], \
+-			"line %d: %s.data[%d] (0x%02x) != %s.data[%d] (0x%02x)\n", \
++			"line %d: %s.data[%zu] (0x%02x) != %s.data[%zu] (0x%02x)\n", \
+ 			__LINE__, #one, i, one.data[i], #two, i, two.data[i]); \
+ 	}	\
+ 	kunit_info(test, "ok: " TEST_OP "() " name "\n");	\
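
These KUnit message fixes (and the cmdline_kunit one earlier) correct
printf length modifiers: a size_t argument needs %zu and a pointer
difference needs %td, since %d/%u invoke undefined behavior when the
argument widths differ. A two-line refresher:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
	char buf[32];
	size_t i = sizeof(buf);			/* size_t    -> %zu */
	ptrdiff_t d = &buf[8] - &buf[0];	/* ptrdiff_t -> %td */

	printf("size %zu, diff %td\n", i, d);
	return 0;
}
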
+diff --git a/lib/test_blackhole_dev.c b/lib/test_blackhole_dev.c
+index 4c40580a99a36..f247089d63c08 100644
+--- a/lib/test_blackhole_dev.c
++++ b/lib/test_blackhole_dev.c
+@@ -29,7 +29,6 @@ static int __init test_blackholedev_init(void)
+ {
+ 	struct ipv6hdr *ip6h;
+ 	struct sk_buff *skb;
+-	struct ethhdr *ethh;
+ 	struct udphdr *uh;
+ 	int data_len;
+ 	int ret;
+@@ -61,7 +60,7 @@ static int __init test_blackholedev_init(void)
+ 	ip6h->saddr = in6addr_loopback;
+ 	ip6h->daddr = in6addr_loopback;
+ 	/* Ether */
+-	ethh = (struct ethhdr *)skb_push(skb, sizeof(struct ethhdr));
++	skb_push(skb, sizeof(struct ethhdr));
+ 	skb_set_mac_header(skb, 0);
+ 
+ 	skb->protocol = htons(ETH_P_IPV6);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 0d1ce70bce380..fb2daf89e2f2a 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -310,7 +310,7 @@ static void shmem_disable_quotas(struct super_block *sb)
+ 		dquot_quota_off(sb, type);
+ }
+ 
+-static struct dquot **shmem_get_dquots(struct inode *inode)
++static struct dquot __rcu **shmem_get_dquots(struct inode *inode)
+ {
+ 	return SHMEM_I(inode)->i_dquot;
+ }
+diff --git a/net/bluetooth/Kconfig b/net/bluetooth/Kconfig
+index da7cac0a1b716..6b2b65a667008 100644
+--- a/net/bluetooth/Kconfig
++++ b/net/bluetooth/Kconfig
+@@ -62,14 +62,6 @@ source "net/bluetooth/cmtp/Kconfig"
+ 
+ source "net/bluetooth/hidp/Kconfig"
+ 
+-config BT_HS
+-	bool "Bluetooth High Speed (HS) features"
+-	depends on BT_BREDR
+-	help
+-	  Bluetooth High Speed includes support for off-loading
+-	  Bluetooth connections via 802.11 (wifi) physical layer
+-	  available with Bluetooth version 3.0 or later.
+-
+ config BT_LE
+ 	bool "Bluetooth Low Energy (LE) features"
+ 	depends on BT
+diff --git a/net/bluetooth/Makefile b/net/bluetooth/Makefile
+index 141ac1fda0bfa..628d448d78be3 100644
+--- a/net/bluetooth/Makefile
++++ b/net/bluetooth/Makefile
+@@ -21,7 +21,6 @@ bluetooth-$(CONFIG_DEV_COREDUMP) += coredump.o
+ 
+ bluetooth-$(CONFIG_BT_BREDR) += sco.o
+ bluetooth-$(CONFIG_BT_LE) += iso.o
+-bluetooth-$(CONFIG_BT_HS) += a2mp.o amp.o
+ bluetooth-$(CONFIG_BT_LEDS) += leds.o
+ bluetooth-$(CONFIG_BT_MSFTEXT) += msft.o
+ bluetooth-$(CONFIG_BT_AOSPEXT) += aosp.o
+diff --git a/net/bluetooth/a2mp.c b/net/bluetooth/a2mp.c
+deleted file mode 100644
+index e7adb8a98cf90..0000000000000
+--- a/net/bluetooth/a2mp.c
++++ /dev/null
+@@ -1,1054 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+-   Copyright (c) 2010,2011 Code Aurora Forum.  All rights reserved.
+-   Copyright (c) 2011,2012 Intel Corp.
+-
+-*/
+-
+-#include <net/bluetooth/bluetooth.h>
+-#include <net/bluetooth/hci_core.h>
+-#include <net/bluetooth/l2cap.h>
+-
+-#include "hci_request.h"
+-#include "a2mp.h"
+-#include "amp.h"
+-
+-#define A2MP_FEAT_EXT	0x8000
+-
+-/* Global AMP Manager list */
+-static LIST_HEAD(amp_mgr_list);
+-static DEFINE_MUTEX(amp_mgr_list_lock);
+-
+-/* A2MP build & send command helper functions */
+-static struct a2mp_cmd *__a2mp_build(u8 code, u8 ident, u16 len, void *data)
+-{
+-	struct a2mp_cmd *cmd;
+-	int plen;
+-
+-	plen = sizeof(*cmd) + len;
+-	cmd = kzalloc(plen, GFP_KERNEL);
+-	if (!cmd)
+-		return NULL;
+-
+-	cmd->code = code;
+-	cmd->ident = ident;
+-	cmd->len = cpu_to_le16(len);
+-
+-	memcpy(cmd->data, data, len);
+-
+-	return cmd;
+-}
+-
+-static void a2mp_send(struct amp_mgr *mgr, u8 code, u8 ident, u16 len, void *data)
+-{
+-	struct l2cap_chan *chan = mgr->a2mp_chan;
+-	struct a2mp_cmd *cmd;
+-	u16 total_len = len + sizeof(*cmd);
+-	struct kvec iv;
+-	struct msghdr msg;
+-
+-	cmd = __a2mp_build(code, ident, len, data);
+-	if (!cmd)
+-		return;
+-
+-	iv.iov_base = cmd;
+-	iv.iov_len = total_len;
+-
+-	memset(&msg, 0, sizeof(msg));
+-
+-	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iv, 1, total_len);
+-
+-	l2cap_chan_send(chan, &msg, total_len);
+-
+-	kfree(cmd);
+-}
+-
+-static u8 __next_ident(struct amp_mgr *mgr)
+-{
+-	if (++mgr->ident == 0)
+-		mgr->ident = 1;
+-
+-	return mgr->ident;
+-}
+-
+-static struct amp_mgr *amp_mgr_lookup_by_state(u8 state)
+-{
+-	struct amp_mgr *mgr;
+-
+-	mutex_lock(&amp_mgr_list_lock);
+-	list_for_each_entry(mgr, &amp_mgr_list, list) {
+-		if (test_and_clear_bit(state, &mgr->state)) {
+-			amp_mgr_get(mgr);
+-			mutex_unlock(&amp_mgr_list_lock);
+-			return mgr;
+-		}
+-	}
+-	mutex_unlock(&amp_mgr_list_lock);
+-
+-	return NULL;
+-}
+-
+-/* hci_dev_list shall be locked */
+-static void __a2mp_add_cl(struct amp_mgr *mgr, struct a2mp_cl *cl)
+-{
+-	struct hci_dev *hdev;
+-	int i = 1;
+-
+-	cl[0].id = AMP_ID_BREDR;
+-	cl[0].type = AMP_TYPE_BREDR;
+-	cl[0].status = AMP_STATUS_BLUETOOTH_ONLY;
+-
+-	list_for_each_entry(hdev, &hci_dev_list, list) {
+-		if (hdev->dev_type == HCI_AMP) {
+-			cl[i].id = hdev->id;
+-			cl[i].type = hdev->amp_type;
+-			if (test_bit(HCI_UP, &hdev->flags))
+-				cl[i].status = hdev->amp_status;
+-			else
+-				cl[i].status = AMP_STATUS_POWERED_DOWN;
+-			i++;
+-		}
+-	}
+-}
+-
+-/* Processing A2MP messages */
+-static int a2mp_command_rej(struct amp_mgr *mgr, struct sk_buff *skb,
+-			    struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_cmd_rej *rej = (void *) skb->data;
+-
+-	if (le16_to_cpu(hdr->len) < sizeof(*rej))
+-		return -EINVAL;
+-
+-	BT_DBG("ident %u reason %d", hdr->ident, le16_to_cpu(rej->reason));
+-
+-	skb_pull(skb, sizeof(*rej));
+-
+-	return 0;
+-}
+-
+-static int a2mp_discover_req(struct amp_mgr *mgr, struct sk_buff *skb,
+-			     struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_discov_req *req = (void *) skb->data;
+-	u16 len = le16_to_cpu(hdr->len);
+-	struct a2mp_discov_rsp *rsp;
+-	u16 ext_feat;
+-	u8 num_ctrl;
+-	struct hci_dev *hdev;
+-
+-	if (len < sizeof(*req))
+-		return -EINVAL;
+-
+-	skb_pull(skb, sizeof(*req));
+-
+-	ext_feat = le16_to_cpu(req->ext_feat);
+-
+-	BT_DBG("mtu %d efm 0x%4.4x", le16_to_cpu(req->mtu), ext_feat);
+-
+-	/* check that packet is not broken for now */
+-	while (ext_feat & A2MP_FEAT_EXT) {
+-		if (len < sizeof(ext_feat))
+-			return -EINVAL;
+-
+-		ext_feat = get_unaligned_le16(skb->data);
+-		BT_DBG("efm 0x%4.4x", ext_feat);
+-		len -= sizeof(ext_feat);
+-		skb_pull(skb, sizeof(ext_feat));
+-	}
+-
+-	read_lock(&hci_dev_list_lock);
+-
+-	/* at minimum the BR/EDR needs to be listed */
+-	num_ctrl = 1;
+-
+-	list_for_each_entry(hdev, &hci_dev_list, list) {
+-		if (hdev->dev_type == HCI_AMP)
+-			num_ctrl++;
+-	}
+-
+-	len = struct_size(rsp, cl, num_ctrl);
+-	rsp = kmalloc(len, GFP_ATOMIC);
+-	if (!rsp) {
+-		read_unlock(&hci_dev_list_lock);
+-		return -ENOMEM;
+-	}
+-
+-	rsp->mtu = cpu_to_le16(L2CAP_A2MP_DEFAULT_MTU);
+-	rsp->ext_feat = 0;
+-
+-	__a2mp_add_cl(mgr, rsp->cl);
+-
+-	read_unlock(&hci_dev_list_lock);
+-
+-	a2mp_send(mgr, A2MP_DISCOVER_RSP, hdr->ident, len, rsp);
+-
+-	kfree(rsp);
+-	return 0;
+-}
+-
+-static int a2mp_discover_rsp(struct amp_mgr *mgr, struct sk_buff *skb,
+-			     struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_discov_rsp *rsp = (void *) skb->data;
+-	u16 len = le16_to_cpu(hdr->len);
+-	struct a2mp_cl *cl;
+-	u16 ext_feat;
+-	bool found = false;
+-
+-	if (len < sizeof(*rsp))
+-		return -EINVAL;
+-
+-	len -= sizeof(*rsp);
+-	skb_pull(skb, sizeof(*rsp));
+-
+-	ext_feat = le16_to_cpu(rsp->ext_feat);
+-
+-	BT_DBG("mtu %d efm 0x%4.4x", le16_to_cpu(rsp->mtu), ext_feat);
+-
+-	/* check that packet is not broken for now */
+-	while (ext_feat & A2MP_FEAT_EXT) {
+-		if (len < sizeof(ext_feat))
+-			return -EINVAL;
+-
+-		ext_feat = get_unaligned_le16(skb->data);
+-		BT_DBG("efm 0x%4.4x", ext_feat);
+-		len -= sizeof(ext_feat);
+-		skb_pull(skb, sizeof(ext_feat));
+-	}
+-
+-	cl = (void *) skb->data;
+-	while (len >= sizeof(*cl)) {
+-		BT_DBG("Remote AMP id %u type %u status %u", cl->id, cl->type,
+-		       cl->status);
+-
+-		if (cl->id != AMP_ID_BREDR && cl->type != AMP_TYPE_BREDR) {
+-			struct a2mp_info_req req;
+-
+-			found = true;
+-
+-			memset(&req, 0, sizeof(req));
+-
+-			req.id = cl->id;
+-			a2mp_send(mgr, A2MP_GETINFO_REQ, __next_ident(mgr),
+-				  sizeof(req), &req);
+-		}
+-
+-		len -= sizeof(*cl);
+-		cl = skb_pull(skb, sizeof(*cl));
+-	}
+-
+-	/* Fall back to L2CAP init sequence */
+-	if (!found) {
+-		struct l2cap_conn *conn = mgr->l2cap_conn;
+-		struct l2cap_chan *chan;
+-
+-		mutex_lock(&conn->chan_lock);
+-
+-		list_for_each_entry(chan, &conn->chan_l, list) {
+-
+-			BT_DBG("chan %p state %s", chan,
+-			       state_to_string(chan->state));
+-
+-			if (chan->scid == L2CAP_CID_A2MP)
+-				continue;
+-
+-			l2cap_chan_lock(chan);
+-
+-			if (chan->state == BT_CONNECT)
+-				l2cap_send_conn_req(chan);
+-
+-			l2cap_chan_unlock(chan);
+-		}
+-
+-		mutex_unlock(&conn->chan_lock);
+-	}
+-
+-	return 0;
+-}
+-
+-static int a2mp_change_notify(struct amp_mgr *mgr, struct sk_buff *skb,
+-			      struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_cl *cl = (void *) skb->data;
+-
+-	while (skb->len >= sizeof(*cl)) {
+-		BT_DBG("Controller id %u type %u status %u", cl->id, cl->type,
+-		       cl->status);
+-		cl = skb_pull(skb, sizeof(*cl));
+-	}
+-
+-	/* TODO send A2MP_CHANGE_RSP */
+-
+-	return 0;
+-}
+-
+-static void read_local_amp_info_complete(struct hci_dev *hdev, u8 status,
+-					 u16 opcode)
+-{
+-	BT_DBG("%s status 0x%2.2x", hdev->name, status);
+-
+-	a2mp_send_getinfo_rsp(hdev);
+-}
+-
+-static int a2mp_getinfo_req(struct amp_mgr *mgr, struct sk_buff *skb,
+-			    struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_info_req *req  = (void *) skb->data;
+-	struct hci_dev *hdev;
+-	struct hci_request hreq;
+-	int err = 0;
+-
+-	if (le16_to_cpu(hdr->len) < sizeof(*req))
+-		return -EINVAL;
+-
+-	BT_DBG("id %u", req->id);
+-
+-	hdev = hci_dev_get(req->id);
+-	if (!hdev || hdev->dev_type != HCI_AMP) {
+-		struct a2mp_info_rsp rsp;
+-
+-		memset(&rsp, 0, sizeof(rsp));
+-
+-		rsp.id = req->id;
+-		rsp.status = A2MP_STATUS_INVALID_CTRL_ID;
+-
+-		a2mp_send(mgr, A2MP_GETINFO_RSP, hdr->ident, sizeof(rsp),
+-			  &rsp);
+-
+-		goto done;
+-	}
+-
+-	set_bit(READ_LOC_AMP_INFO, &mgr->state);
+-	hci_req_init(&hreq, hdev);
+-	hci_req_add(&hreq, HCI_OP_READ_LOCAL_AMP_INFO, 0, NULL);
+-	err = hci_req_run(&hreq, read_local_amp_info_complete);
+-	if (err < 0)
+-		a2mp_send_getinfo_rsp(hdev);
+-
+-done:
+-	if (hdev)
+-		hci_dev_put(hdev);
+-
+-	skb_pull(skb, sizeof(*req));
+-	return 0;
+-}
+-
+-static int a2mp_getinfo_rsp(struct amp_mgr *mgr, struct sk_buff *skb,
+-			    struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_info_rsp *rsp = (struct a2mp_info_rsp *) skb->data;
+-	struct a2mp_amp_assoc_req req;
+-	struct amp_ctrl *ctrl;
+-
+-	if (le16_to_cpu(hdr->len) < sizeof(*rsp))
+-		return -EINVAL;
+-
+-	BT_DBG("id %u status 0x%2.2x", rsp->id, rsp->status);
+-
+-	if (rsp->status)
+-		return -EINVAL;
+-
+-	ctrl = amp_ctrl_add(mgr, rsp->id);
+-	if (!ctrl)
+-		return -ENOMEM;
+-
+-	memset(&req, 0, sizeof(req));
+-
+-	req.id = rsp->id;
+-	a2mp_send(mgr, A2MP_GETAMPASSOC_REQ, __next_ident(mgr), sizeof(req),
+-		  &req);
+-
+-	skb_pull(skb, sizeof(*rsp));
+-	return 0;
+-}
+-
+-static int a2mp_getampassoc_req(struct amp_mgr *mgr, struct sk_buff *skb,
+-				struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_amp_assoc_req *req = (void *) skb->data;
+-	struct hci_dev *hdev;
+-	struct amp_mgr *tmp;
+-
+-	if (le16_to_cpu(hdr->len) < sizeof(*req))
+-		return -EINVAL;
+-
+-	BT_DBG("id %u", req->id);
+-
+-	/* Make sure that other request is not processed */
+-	tmp = amp_mgr_lookup_by_state(READ_LOC_AMP_ASSOC);
+-
+-	hdev = hci_dev_get(req->id);
+-	if (!hdev || hdev->amp_type == AMP_TYPE_BREDR || tmp) {
+-		struct a2mp_amp_assoc_rsp rsp;
+-
+-		memset(&rsp, 0, sizeof(rsp));
+-		rsp.id = req->id;
+-
+-		if (tmp) {
+-			rsp.status = A2MP_STATUS_COLLISION_OCCURED;
+-			amp_mgr_put(tmp);
+-		} else {
+-			rsp.status = A2MP_STATUS_INVALID_CTRL_ID;
+-		}
+-
+-		a2mp_send(mgr, A2MP_GETAMPASSOC_RSP, hdr->ident, sizeof(rsp),
+-			  &rsp);
+-
+-		goto done;
+-	}
+-
+-	amp_read_loc_assoc(hdev, mgr);
+-
+-done:
+-	if (hdev)
+-		hci_dev_put(hdev);
+-
+-	skb_pull(skb, sizeof(*req));
+-	return 0;
+-}
+-
+-static int a2mp_getampassoc_rsp(struct amp_mgr *mgr, struct sk_buff *skb,
+-				struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_amp_assoc_rsp *rsp = (void *) skb->data;
+-	u16 len = le16_to_cpu(hdr->len);
+-	struct hci_dev *hdev;
+-	struct amp_ctrl *ctrl;
+-	struct hci_conn *hcon;
+-	size_t assoc_len;
+-
+-	if (len < sizeof(*rsp))
+-		return -EINVAL;
+-
+-	assoc_len = len - sizeof(*rsp);
+-
+-	BT_DBG("id %u status 0x%2.2x assoc len %zu", rsp->id, rsp->status,
+-	       assoc_len);
+-
+-	if (rsp->status)
+-		return -EINVAL;
+-
+-	/* Save remote ASSOC data */
+-	ctrl = amp_ctrl_lookup(mgr, rsp->id);
+-	if (ctrl) {
+-		u8 *assoc;
+-
+-		assoc = kmemdup(rsp->amp_assoc, assoc_len, GFP_KERNEL);
+-		if (!assoc) {
+-			amp_ctrl_put(ctrl);
+-			return -ENOMEM;
+-		}
+-
+-		ctrl->assoc = assoc;
+-		ctrl->assoc_len = assoc_len;
+-		ctrl->assoc_rem_len = assoc_len;
+-		ctrl->assoc_len_so_far = 0;
+-
+-		amp_ctrl_put(ctrl);
+-	}
+-
+-	/* Create Phys Link */
+-	hdev = hci_dev_get(rsp->id);
+-	if (!hdev)
+-		return -EINVAL;
+-
+-	hcon = phylink_add(hdev, mgr, rsp->id, true);
+-	if (!hcon)
+-		goto done;
+-
+-	BT_DBG("Created hcon %p: loc:%u -> rem:%u", hcon, hdev->id, rsp->id);
+-
+-	mgr->bredr_chan->remote_amp_id = rsp->id;
+-
+-	amp_create_phylink(hdev, mgr, hcon);
+-
+-done:
+-	hci_dev_put(hdev);
+-	skb_pull(skb, len);
+-	return 0;
+-}
+-
+-static int a2mp_createphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb,
+-				   struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_physlink_req *req = (void *) skb->data;
+-	struct a2mp_physlink_rsp rsp;
+-	struct hci_dev *hdev;
+-	struct hci_conn *hcon;
+-	struct amp_ctrl *ctrl;
+-
+-	if (le16_to_cpu(hdr->len) < sizeof(*req))
+-		return -EINVAL;
+-
+-	BT_DBG("local_id %u, remote_id %u", req->local_id, req->remote_id);
+-
+-	memset(&rsp, 0, sizeof(rsp));
+-
+-	rsp.local_id = req->remote_id;
+-	rsp.remote_id = req->local_id;
+-
+-	hdev = hci_dev_get(req->remote_id);
+-	if (!hdev || hdev->amp_type == AMP_TYPE_BREDR) {
+-		rsp.status = A2MP_STATUS_INVALID_CTRL_ID;
+-		goto send_rsp;
+-	}
+-
+-	ctrl = amp_ctrl_lookup(mgr, rsp.remote_id);
+-	if (!ctrl) {
+-		ctrl = amp_ctrl_add(mgr, rsp.remote_id);
+-		if (ctrl) {
+-			amp_ctrl_get(ctrl);
+-		} else {
+-			rsp.status = A2MP_STATUS_UNABLE_START_LINK_CREATION;
+-			goto send_rsp;
+-		}
+-	}
+-
+-	if (ctrl) {
+-		size_t assoc_len = le16_to_cpu(hdr->len) - sizeof(*req);
+-		u8 *assoc;
+-
+-		assoc = kmemdup(req->amp_assoc, assoc_len, GFP_KERNEL);
+-		if (!assoc) {
+-			amp_ctrl_put(ctrl);
+-			hci_dev_put(hdev);
+-			return -ENOMEM;
+-		}
+-
+-		ctrl->assoc = assoc;
+-		ctrl->assoc_len = assoc_len;
+-		ctrl->assoc_rem_len = assoc_len;
+-		ctrl->assoc_len_so_far = 0;
+-
+-		amp_ctrl_put(ctrl);
+-	}
+-
+-	hcon = phylink_add(hdev, mgr, req->local_id, false);
+-	if (hcon) {
+-		amp_accept_phylink(hdev, mgr, hcon);
+-		rsp.status = A2MP_STATUS_SUCCESS;
+-	} else {
+-		rsp.status = A2MP_STATUS_UNABLE_START_LINK_CREATION;
+-	}
+-
+-send_rsp:
+-	if (hdev)
+-		hci_dev_put(hdev);
+-
+-	/* Reply error now and success after HCI Write Remote AMP Assoc
+-	   command complete with success status
+-	 */
+-	if (rsp.status != A2MP_STATUS_SUCCESS) {
+-		a2mp_send(mgr, A2MP_CREATEPHYSLINK_RSP, hdr->ident,
+-			  sizeof(rsp), &rsp);
+-	} else {
+-		set_bit(WRITE_REMOTE_AMP_ASSOC, &mgr->state);
+-		mgr->ident = hdr->ident;
+-	}
+-
+-	skb_pull(skb, le16_to_cpu(hdr->len));
+-	return 0;
+-}
+-
+-static int a2mp_discphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb,
+-				 struct a2mp_cmd *hdr)
+-{
+-	struct a2mp_physlink_req *req = (void *) skb->data;
+-	struct a2mp_physlink_rsp rsp;
+-	struct hci_dev *hdev;
+-	struct hci_conn *hcon;
+-
+-	if (le16_to_cpu(hdr->len) < sizeof(*req))
+-		return -EINVAL;
+-
+-	BT_DBG("local_id %u remote_id %u", req->local_id, req->remote_id);
+-
+-	memset(&rsp, 0, sizeof(rsp));
+-
+-	rsp.local_id = req->remote_id;
+-	rsp.remote_id = req->local_id;
+-	rsp.status = A2MP_STATUS_SUCCESS;
+-
+-	hdev = hci_dev_get(req->remote_id);
+-	if (!hdev) {
+-		rsp.status = A2MP_STATUS_INVALID_CTRL_ID;
+-		goto send_rsp;
+-	}
+-
+-	hcon = hci_conn_hash_lookup_ba(hdev, AMP_LINK,
+-				       &mgr->l2cap_conn->hcon->dst);
+-	if (!hcon) {
+-		bt_dev_err(hdev, "no phys link exist");
+-		rsp.status = A2MP_STATUS_NO_PHYSICAL_LINK_EXISTS;
+-		goto clean;
+-	}
+-
+-	/* TODO Disconnect Phys Link here */
+-
+-clean:
+-	hci_dev_put(hdev);
+-
+-send_rsp:
+-	a2mp_send(mgr, A2MP_DISCONNPHYSLINK_RSP, hdr->ident, sizeof(rsp), &rsp);
+-
+-	skb_pull(skb, sizeof(*req));
+-	return 0;
+-}
+-
+-static inline int a2mp_cmd_rsp(struct amp_mgr *mgr, struct sk_buff *skb,
+-			       struct a2mp_cmd *hdr)
+-{
+-	BT_DBG("ident %u code 0x%2.2x", hdr->ident, hdr->code);
+-
+-	skb_pull(skb, le16_to_cpu(hdr->len));
+-	return 0;
+-}
+-
+-/* Handle A2MP signalling */
+-static int a2mp_chan_recv_cb(struct l2cap_chan *chan, struct sk_buff *skb)
+-{
+-	struct a2mp_cmd *hdr;
+-	struct amp_mgr *mgr = chan->data;
+-	int err = 0;
+-
+-	amp_mgr_get(mgr);
+-
+-	while (skb->len >= sizeof(*hdr)) {
+-		u16 len;
+-
+-		hdr = (void *) skb->data;
+-		len = le16_to_cpu(hdr->len);
+-
+-		BT_DBG("code 0x%2.2x id %u len %u", hdr->code, hdr->ident, len);
+-
+-		skb_pull(skb, sizeof(*hdr));
+-
+-		if (len > skb->len || !hdr->ident) {
+-			err = -EINVAL;
+-			break;
+-		}
+-
+-		mgr->ident = hdr->ident;
+-
+-		switch (hdr->code) {
+-		case A2MP_COMMAND_REJ:
+-			a2mp_command_rej(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_DISCOVER_REQ:
+-			err = a2mp_discover_req(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_CHANGE_NOTIFY:
+-			err = a2mp_change_notify(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_GETINFO_REQ:
+-			err = a2mp_getinfo_req(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_GETAMPASSOC_REQ:
+-			err = a2mp_getampassoc_req(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_CREATEPHYSLINK_REQ:
+-			err = a2mp_createphyslink_req(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_DISCONNPHYSLINK_REQ:
+-			err = a2mp_discphyslink_req(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_DISCOVER_RSP:
+-			err = a2mp_discover_rsp(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_GETINFO_RSP:
+-			err = a2mp_getinfo_rsp(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_GETAMPASSOC_RSP:
+-			err = a2mp_getampassoc_rsp(mgr, skb, hdr);
+-			break;
+-
+-		case A2MP_CHANGE_RSP:
+-		case A2MP_CREATEPHYSLINK_RSP:
+-		case A2MP_DISCONNPHYSLINK_RSP:
+-			err = a2mp_cmd_rsp(mgr, skb, hdr);
+-			break;
+-
+-		default:
+-			BT_ERR("Unknown A2MP sig cmd 0x%2.2x", hdr->code);
+-			err = -EINVAL;
+-			break;
+-		}
+-	}
+-
+-	if (err) {
+-		struct a2mp_cmd_rej rej;
+-
+-		memset(&rej, 0, sizeof(rej));
+-
+-		rej.reason = cpu_to_le16(0);
+-		hdr = (void *) skb->data;
+-
+-		BT_DBG("Send A2MP Rej: cmd 0x%2.2x err %d", hdr->code, err);
+-
+-		a2mp_send(mgr, A2MP_COMMAND_REJ, hdr->ident, sizeof(rej),
+-			  &rej);
+-	}
+-
+-	/* Always free skb and return success error code to prevent
+-	   from sending L2CAP Disconnect over A2MP channel */
+-	kfree_skb(skb);
+-
+-	amp_mgr_put(mgr);
+-
+-	return 0;
+-}
+-
+-static void a2mp_chan_close_cb(struct l2cap_chan *chan)
+-{
+-	l2cap_chan_put(chan);
+-}
+-
+-static void a2mp_chan_state_change_cb(struct l2cap_chan *chan, int state,
+-				      int err)
+-{
+-	struct amp_mgr *mgr = chan->data;
+-
+-	if (!mgr)
+-		return;
+-
+-	BT_DBG("chan %p state %s", chan, state_to_string(state));
+-
+-	chan->state = state;
+-
+-	switch (state) {
+-	case BT_CLOSED:
+-		if (mgr)
+-			amp_mgr_put(mgr);
+-		break;
+-	}
+-}
+-
+-static struct sk_buff *a2mp_chan_alloc_skb_cb(struct l2cap_chan *chan,
+-					      unsigned long hdr_len,
+-					      unsigned long len, int nb)
+-{
+-	struct sk_buff *skb;
+-
+-	skb = bt_skb_alloc(hdr_len + len, GFP_KERNEL);
+-	if (!skb)
+-		return ERR_PTR(-ENOMEM);
+-
+-	return skb;
+-}
+-
+-static const struct l2cap_ops a2mp_chan_ops = {
+-	.name = "L2CAP A2MP channel",
+-	.recv = a2mp_chan_recv_cb,
+-	.close = a2mp_chan_close_cb,
+-	.state_change = a2mp_chan_state_change_cb,
+-	.alloc_skb = a2mp_chan_alloc_skb_cb,
+-
+-	/* Not implemented for A2MP */
+-	.new_connection = l2cap_chan_no_new_connection,
+-	.teardown = l2cap_chan_no_teardown,
+-	.ready = l2cap_chan_no_ready,
+-	.defer = l2cap_chan_no_defer,
+-	.resume = l2cap_chan_no_resume,
+-	.set_shutdown = l2cap_chan_no_set_shutdown,
+-	.get_sndtimeo = l2cap_chan_no_get_sndtimeo,
+-};
+-
+-static struct l2cap_chan *a2mp_chan_open(struct l2cap_conn *conn, bool locked)
+-{
+-	struct l2cap_chan *chan;
+-	int err;
+-
+-	chan = l2cap_chan_create();
+-	if (!chan)
+-		return NULL;
+-
+-	BT_DBG("chan %p", chan);
+-
+-	chan->chan_type = L2CAP_CHAN_FIXED;
+-	chan->scid = L2CAP_CID_A2MP;
+-	chan->dcid = L2CAP_CID_A2MP;
+-	chan->omtu = L2CAP_A2MP_DEFAULT_MTU;
+-	chan->imtu = L2CAP_A2MP_DEFAULT_MTU;
+-	chan->flush_to = L2CAP_DEFAULT_FLUSH_TO;
+-
+-	chan->ops = &a2mp_chan_ops;
+-
+-	l2cap_chan_set_defaults(chan);
+-	chan->remote_max_tx = chan->max_tx;
+-	chan->remote_tx_win = chan->tx_win;
+-
+-	chan->retrans_timeout = L2CAP_DEFAULT_RETRANS_TO;
+-	chan->monitor_timeout = L2CAP_DEFAULT_MONITOR_TO;
+-
+-	skb_queue_head_init(&chan->tx_q);
+-
+-	chan->mode = L2CAP_MODE_ERTM;
+-
+-	err = l2cap_ertm_init(chan);
+-	if (err < 0) {
+-		l2cap_chan_del(chan, 0);
+-		return NULL;
+-	}
+-
+-	chan->conf_state = 0;
+-
+-	if (locked)
+-		__l2cap_chan_add(conn, chan);
+-	else
+-		l2cap_chan_add(conn, chan);
+-
+-	chan->remote_mps = chan->omtu;
+-	chan->mps = chan->omtu;
+-
+-	chan->state = BT_CONNECTED;
+-
+-	return chan;
+-}
+-
+-/* AMP Manager functions */
+-struct amp_mgr *amp_mgr_get(struct amp_mgr *mgr)
+-{
+-	BT_DBG("mgr %p orig refcnt %d", mgr, kref_read(&mgr->kref));
+-
+-	kref_get(&mgr->kref);
+-
+-	return mgr;
+-}
+-
+-static void amp_mgr_destroy(struct kref *kref)
+-{
+-	struct amp_mgr *mgr = container_of(kref, struct amp_mgr, kref);
+-
+-	BT_DBG("mgr %p", mgr);
+-
+-	mutex_lock(&amp_mgr_list_lock);
+-	list_del(&mgr->list);
+-	mutex_unlock(&amp_mgr_list_lock);
+-
+-	amp_ctrl_list_flush(mgr);
+-	kfree(mgr);
+-}
+-
+-int amp_mgr_put(struct amp_mgr *mgr)
+-{
+-	BT_DBG("mgr %p orig refcnt %d", mgr, kref_read(&mgr->kref));
+-
+-	return kref_put(&mgr->kref, &amp_mgr_destroy);
+-}
+-
+-static struct amp_mgr *amp_mgr_create(struct l2cap_conn *conn, bool locked)
+-{
+-	struct amp_mgr *mgr;
+-	struct l2cap_chan *chan;
+-
+-	mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+-	if (!mgr)
+-		return NULL;
+-
+-	BT_DBG("conn %p mgr %p", conn, mgr);
+-
+-	mgr->l2cap_conn = conn;
+-
+-	chan = a2mp_chan_open(conn, locked);
+-	if (!chan) {
+-		kfree(mgr);
+-		return NULL;
+-	}
+-
+-	mgr->a2mp_chan = chan;
+-	chan->data = mgr;
+-
+-	conn->hcon->amp_mgr = mgr;
+-
+-	kref_init(&mgr->kref);
+-
+-	/* Remote AMP ctrl list initialization */
+-	INIT_LIST_HEAD(&mgr->amp_ctrls);
+-	mutex_init(&mgr->amp_ctrls_lock);
+-
+-	mutex_lock(&amp_mgr_list_lock);
+-	list_add(&mgr->list, &amp_mgr_list);
+-	mutex_unlock(&amp_mgr_list_lock);
+-
+-	return mgr;
+-}
+-
+-struct l2cap_chan *a2mp_channel_create(struct l2cap_conn *conn,
+-				       struct sk_buff *skb)
+-{
+-	struct amp_mgr *mgr;
+-
+-	if (conn->hcon->type != ACL_LINK)
+-		return NULL;
+-
+-	mgr = amp_mgr_create(conn, false);
+-	if (!mgr) {
+-		BT_ERR("Could not create AMP manager");
+-		return NULL;
+-	}
+-
+-	BT_DBG("mgr: %p chan %p", mgr, mgr->a2mp_chan);
+-
+-	return mgr->a2mp_chan;
+-}
+-
+-void a2mp_send_getinfo_rsp(struct hci_dev *hdev)
+-{
+-	struct amp_mgr *mgr;
+-	struct a2mp_info_rsp rsp;
+-
+-	mgr = amp_mgr_lookup_by_state(READ_LOC_AMP_INFO);
+-	if (!mgr)
+-		return;
+-
+-	BT_DBG("%s mgr %p", hdev->name, mgr);
+-
+-	memset(&rsp, 0, sizeof(rsp));
+-
+-	rsp.id = hdev->id;
+-	rsp.status = A2MP_STATUS_INVALID_CTRL_ID;
+-
+-	if (hdev->amp_type != AMP_TYPE_BREDR) {
+-		rsp.status = 0;
+-		rsp.total_bw = cpu_to_le32(hdev->amp_total_bw);
+-		rsp.max_bw = cpu_to_le32(hdev->amp_max_bw);
+-		rsp.min_latency = cpu_to_le32(hdev->amp_min_latency);
+-		rsp.pal_cap = cpu_to_le16(hdev->amp_pal_cap);
+-		rsp.assoc_size = cpu_to_le16(hdev->amp_assoc_size);
+-	}
+-
+-	a2mp_send(mgr, A2MP_GETINFO_RSP, mgr->ident, sizeof(rsp), &rsp);
+-	amp_mgr_put(mgr);
+-}
+-
+-void a2mp_send_getampassoc_rsp(struct hci_dev *hdev, u8 status)
+-{
+-	struct amp_mgr *mgr;
+-	struct amp_assoc *loc_assoc = &hdev->loc_assoc;
+-	struct a2mp_amp_assoc_rsp *rsp;
+-	size_t len;
+-
+-	mgr = amp_mgr_lookup_by_state(READ_LOC_AMP_ASSOC);
+-	if (!mgr)
+-		return;
+-
+-	BT_DBG("%s mgr %p", hdev->name, mgr);
+-
+-	len = sizeof(struct a2mp_amp_assoc_rsp) + loc_assoc->len;
+-	rsp = kzalloc(len, GFP_KERNEL);
+-	if (!rsp) {
+-		amp_mgr_put(mgr);
+-		return;
+-	}
+-
+-	rsp->id = hdev->id;
+-
+-	if (status) {
+-		rsp->status = A2MP_STATUS_INVALID_CTRL_ID;
+-	} else {
+-		rsp->status = A2MP_STATUS_SUCCESS;
+-		memcpy(rsp->amp_assoc, loc_assoc->data, loc_assoc->len);
+-	}
+-
+-	a2mp_send(mgr, A2MP_GETAMPASSOC_RSP, mgr->ident, len, rsp);
+-	amp_mgr_put(mgr);
+-	kfree(rsp);
+-}
+-
+-void a2mp_send_create_phy_link_req(struct hci_dev *hdev, u8 status)
+-{
+-	struct amp_mgr *mgr;
+-	struct amp_assoc *loc_assoc = &hdev->loc_assoc;
+-	struct a2mp_physlink_req *req;
+-	struct l2cap_chan *bredr_chan;
+-	size_t len;
+-
+-	mgr = amp_mgr_lookup_by_state(READ_LOC_AMP_ASSOC_FINAL);
+-	if (!mgr)
+-		return;
+-
+-	len = sizeof(*req) + loc_assoc->len;
+-
+-	BT_DBG("%s mgr %p assoc_len %zu", hdev->name, mgr, len);
+-
+-	req = kzalloc(len, GFP_KERNEL);
+-	if (!req) {
+-		amp_mgr_put(mgr);
+-		return;
+-	}
+-
+-	bredr_chan = mgr->bredr_chan;
+-	if (!bredr_chan)
+-		goto clean;
+-
+-	req->local_id = hdev->id;
+-	req->remote_id = bredr_chan->remote_amp_id;
+-	memcpy(req->amp_assoc, loc_assoc->data, loc_assoc->len);
+-
+-	a2mp_send(mgr, A2MP_CREATEPHYSLINK_REQ, __next_ident(mgr), len, req);
+-
+-clean:
+-	amp_mgr_put(mgr);
+-	kfree(req);
+-}
+-
+-void a2mp_send_create_phy_link_rsp(struct hci_dev *hdev, u8 status)
+-{
+-	struct amp_mgr *mgr;
+-	struct a2mp_physlink_rsp rsp;
+-	struct hci_conn *hs_hcon;
+-
+-	mgr = amp_mgr_lookup_by_state(WRITE_REMOTE_AMP_ASSOC);
+-	if (!mgr)
+-		return;
+-
+-	memset(&rsp, 0, sizeof(rsp));
+-
+-	hs_hcon = hci_conn_hash_lookup_state(hdev, AMP_LINK, BT_CONNECT);
+-	if (!hs_hcon) {
+-		rsp.status = A2MP_STATUS_UNABLE_START_LINK_CREATION;
+-	} else {
+-		rsp.remote_id = hs_hcon->remote_id;
+-		rsp.status = A2MP_STATUS_SUCCESS;
+-	}
+-
+-	BT_DBG("%s mgr %p hs_hcon %p status %u", hdev->name, mgr, hs_hcon,
+-	       status);
+-
+-	rsp.local_id = hdev->id;
+-	a2mp_send(mgr, A2MP_CREATEPHYSLINK_RSP, mgr->ident, sizeof(rsp), &rsp);
+-	amp_mgr_put(mgr);
+-}
+-
+-void a2mp_discover_amp(struct l2cap_chan *chan)
+-{
+-	struct l2cap_conn *conn = chan->conn;
+-	struct amp_mgr *mgr = conn->hcon->amp_mgr;
+-	struct a2mp_discov_req req;
+-
+-	BT_DBG("chan %p conn %p mgr %p", chan, conn, mgr);
+-
+-	if (!mgr) {
+-		mgr = amp_mgr_create(conn, true);
+-		if (!mgr)
+-			return;
+-	}
+-
+-	mgr->bredr_chan = chan;
+-
+-	memset(&req, 0, sizeof(req));
+-
+-	req.mtu = cpu_to_le16(L2CAP_A2MP_DEFAULT_MTU);
+-	req.ext_feat = 0;
+-	a2mp_send(mgr, A2MP_DISCOVER_REQ, 1, sizeof(req), &req);
+-}
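The removed a2mp.c tied the manager's lifetime to a kref: amp_mgr_get()/amp_mgr_put() wrap kref_get()/kref_put(), and the release callback unlinks the object from the global list before freeing it. A minimal sketch of that pattern follows; the example_mgr names are hypothetical stand-ins, and the list locking (amp_mgr_list_lock in the real code) is omitted for brevity.

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/slab.h>

struct example_mgr {
	struct list_head list;
	struct kref kref;
};

static void example_mgr_destroy(struct kref *kref)
{
	struct example_mgr *mgr = container_of(kref, struct example_mgr, kref);

	list_del(&mgr->list);		/* unlink before the memory goes away */
	kfree(mgr);
}

static struct example_mgr *example_mgr_get(struct example_mgr *mgr)
{
	kref_get(&mgr->kref);		/* caller now holds an extra reference */
	return mgr;
}

static int example_mgr_put(struct example_mgr *mgr)
{
	/* the destroy callback runs only when the last reference drops */
	return kref_put(&mgr->kref, example_mgr_destroy);
}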
+diff --git a/net/bluetooth/a2mp.h b/net/bluetooth/a2mp.h
+deleted file mode 100644
+index 2fd253a61a2a1..0000000000000
+--- a/net/bluetooth/a2mp.h
++++ /dev/null
+@@ -1,154 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-/*
+-   Copyright (c) 2010,2011 Code Aurora Forum.  All rights reserved.
+-   Copyright (c) 2011,2012 Intel Corp.
+-
+-*/
+-
+-#ifndef __A2MP_H
+-#define __A2MP_H
+-
+-#include <net/bluetooth/l2cap.h>
+-
+-enum amp_mgr_state {
+-	READ_LOC_AMP_INFO,
+-	READ_LOC_AMP_ASSOC,
+-	READ_LOC_AMP_ASSOC_FINAL,
+-	WRITE_REMOTE_AMP_ASSOC,
+-};
+-
+-struct amp_mgr {
+-	struct list_head	list;
+-	struct l2cap_conn	*l2cap_conn;
+-	struct l2cap_chan	*a2mp_chan;
+-	struct l2cap_chan	*bredr_chan;
+-	struct kref		kref;
+-	__u8			ident;
+-	__u8			handle;
+-	unsigned long		state;
+-	unsigned long		flags;
+-
+-	struct list_head	amp_ctrls;
+-	struct mutex		amp_ctrls_lock;
+-};
+-
+-struct a2mp_cmd {
+-	__u8	code;
+-	__u8	ident;
+-	__le16	len;
+-	__u8	data[];
+-} __packed;
+-
+-/* A2MP command codes */
+-#define A2MP_COMMAND_REJ         0x01
+-struct a2mp_cmd_rej {
+-	__le16	reason;
+-	__u8	data[];
+-} __packed;
+-
+-#define A2MP_DISCOVER_REQ        0x02
+-struct a2mp_discov_req {
+-	__le16	mtu;
+-	__le16	ext_feat;
+-} __packed;
+-
+-struct a2mp_cl {
+-	__u8	id;
+-	__u8	type;
+-	__u8	status;
+-} __packed;
+-
+-#define A2MP_DISCOVER_RSP        0x03
+-struct a2mp_discov_rsp {
+-	__le16     mtu;
+-	__le16     ext_feat;
+-	struct a2mp_cl cl[];
+-} __packed;
+-
+-#define A2MP_CHANGE_NOTIFY       0x04
+-#define A2MP_CHANGE_RSP          0x05
+-
+-#define A2MP_GETINFO_REQ         0x06
+-struct a2mp_info_req {
+-	__u8       id;
+-} __packed;
+-
+-#define A2MP_GETINFO_RSP         0x07
+-struct a2mp_info_rsp {
+-	__u8	id;
+-	__u8	status;
+-	__le32	total_bw;
+-	__le32	max_bw;
+-	__le32	min_latency;
+-	__le16	pal_cap;
+-	__le16	assoc_size;
+-} __packed;
+-
+-#define A2MP_GETAMPASSOC_REQ     0x08
+-struct a2mp_amp_assoc_req {
+-	__u8	id;
+-} __packed;
+-
+-#define A2MP_GETAMPASSOC_RSP     0x09
+-struct a2mp_amp_assoc_rsp {
+-	__u8	id;
+-	__u8	status;
+-	__u8	amp_assoc[];
+-} __packed;
+-
+-#define A2MP_CREATEPHYSLINK_REQ  0x0A
+-#define A2MP_DISCONNPHYSLINK_REQ 0x0C
+-struct a2mp_physlink_req {
+-	__u8	local_id;
+-	__u8	remote_id;
+-	__u8	amp_assoc[];
+-} __packed;
+-
+-#define A2MP_CREATEPHYSLINK_RSP  0x0B
+-#define A2MP_DISCONNPHYSLINK_RSP 0x0D
+-struct a2mp_physlink_rsp {
+-	__u8	local_id;
+-	__u8	remote_id;
+-	__u8	status;
+-} __packed;
+-
+-/* A2MP response status */
+-#define A2MP_STATUS_SUCCESS			0x00
+-#define A2MP_STATUS_INVALID_CTRL_ID		0x01
+-#define A2MP_STATUS_UNABLE_START_LINK_CREATION	0x02
+-#define A2MP_STATUS_NO_PHYSICAL_LINK_EXISTS	0x02
+-#define A2MP_STATUS_COLLISION_OCCURED		0x03
+-#define A2MP_STATUS_DISCONN_REQ_RECVD		0x04
+-#define A2MP_STATUS_PHYS_LINK_EXISTS		0x05
+-#define A2MP_STATUS_SECURITY_VIOLATION		0x06
+-
+-struct amp_mgr *amp_mgr_get(struct amp_mgr *mgr);
+-
+-#if IS_ENABLED(CONFIG_BT_HS)
+-int amp_mgr_put(struct amp_mgr *mgr);
+-struct l2cap_chan *a2mp_channel_create(struct l2cap_conn *conn,
+-				       struct sk_buff *skb);
+-void a2mp_discover_amp(struct l2cap_chan *chan);
+-#else
+-static inline int amp_mgr_put(struct amp_mgr *mgr)
+-{
+-	return 0;
+-}
+-
+-static inline struct l2cap_chan *a2mp_channel_create(struct l2cap_conn *conn,
+-						     struct sk_buff *skb)
+-{
+-	return NULL;
+-}
+-
+-static inline void a2mp_discover_amp(struct l2cap_chan *chan)
+-{
+-}
+-#endif
+-
+-void a2mp_send_getinfo_rsp(struct hci_dev *hdev);
+-void a2mp_send_getampassoc_rsp(struct hci_dev *hdev, u8 status);
+-void a2mp_send_create_phy_link_req(struct hci_dev *hdev, u8 status);
+-void a2mp_send_create_phy_link_rsp(struct hci_dev *hdev, u8 status);
+-
+-#endif /* __A2MP_H */
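Note the stub convention this header followed, restated below with hypothetical names: real prototypes when the option is configured, static inline no-ops otherwise, so call sites compile unchanged either way. CONFIG_EXAMPLE_FEATURE and example_feature_start() are illustrative, not part of the patch.

#include <linux/kconfig.h>

struct example_ctx;

#if IS_ENABLED(CONFIG_EXAMPLE_FEATURE)
int example_feature_start(struct example_ctx *ctx);
#else
static inline int example_feature_start(struct example_ctx *ctx)
{
	return 0;	/* feature compiled out: succeed as a no-op */
}
#endif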
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index b93464ac3517f..67604ccec2f42 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -309,14 +309,11 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	if (flags & MSG_OOB)
+ 		return -EOPNOTSUPP;
+ 
+-	lock_sock(sk);
+-
+ 	skb = skb_recv_datagram(sk, flags, &err);
+ 	if (!skb) {
+ 		if (sk->sk_shutdown & RCV_SHUTDOWN)
+ 			err = 0;
+ 
+-		release_sock(sk);
+ 		return err;
+ 	}
+ 
+@@ -346,8 +343,6 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 
+ 	skb_free_datagram(sk, skb);
+ 
+-	release_sock(sk);
+-
+ 	if (flags & MSG_TRUNC)
+ 		copied = skblen;
+ 
+@@ -570,10 +565,11 @@ int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 		if (sk->sk_state == BT_LISTEN)
+ 			return -EINVAL;
+ 
+-		lock_sock(sk);
++		spin_lock(&sk->sk_receive_queue.lock);
+ 		skb = skb_peek(&sk->sk_receive_queue);
+ 		amount = skb ? skb->len : 0;
+-		release_sock(sk);
++		spin_unlock(&sk->sk_receive_queue.lock);
++
+ 		err = put_user(amount, (int __user *)arg);
+ 		break;
+ 
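The TIOCINQ hunk above trades the sleeping socket lock for the receive queue's spinlock, which is sufficient to peek the head datagram's length. A condensed sketch of the resulting pattern; bt_example_inq() is a hypothetical wrapper around the same calls.

#include <net/sock.h>
#include <linux/skbuff.h>

static int bt_example_inq(struct sock *sk)
{
	struct sk_buff *skb;
	int amount;

	spin_lock(&sk->sk_receive_queue.lock);
	skb = skb_peek(&sk->sk_receive_queue);	/* head datagram, if any */
	amount = skb ? skb->len : 0;
	spin_unlock(&sk->sk_receive_queue.lock);

	return amount;
}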
+diff --git a/net/bluetooth/amp.c b/net/bluetooth/amp.c
+deleted file mode 100644
+index 5d698f19868c5..0000000000000
+--- a/net/bluetooth/amp.c
++++ /dev/null
+@@ -1,590 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+-   Copyright (c) 2011,2012 Intel Corp.
+-
+-*/
+-
+-#include <net/bluetooth/bluetooth.h>
+-#include <net/bluetooth/hci.h>
+-#include <net/bluetooth/hci_core.h>
+-#include <crypto/hash.h>
+-
+-#include "hci_request.h"
+-#include "a2mp.h"
+-#include "amp.h"
+-
+-/* Remote AMP Controllers interface */
+-void amp_ctrl_get(struct amp_ctrl *ctrl)
+-{
+-	BT_DBG("ctrl %p orig refcnt %d", ctrl,
+-	       kref_read(&ctrl->kref));
+-
+-	kref_get(&ctrl->kref);
+-}
+-
+-static void amp_ctrl_destroy(struct kref *kref)
+-{
+-	struct amp_ctrl *ctrl = container_of(kref, struct amp_ctrl, kref);
+-
+-	BT_DBG("ctrl %p", ctrl);
+-
+-	kfree(ctrl->assoc);
+-	kfree(ctrl);
+-}
+-
+-int amp_ctrl_put(struct amp_ctrl *ctrl)
+-{
+-	BT_DBG("ctrl %p orig refcnt %d", ctrl,
+-	       kref_read(&ctrl->kref));
+-
+-	return kref_put(&ctrl->kref, &amp_ctrl_destroy);
+-}
+-
+-struct amp_ctrl *amp_ctrl_add(struct amp_mgr *mgr, u8 id)
+-{
+-	struct amp_ctrl *ctrl;
+-
+-	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
+-	if (!ctrl)
+-		return NULL;
+-
+-	kref_init(&ctrl->kref);
+-	ctrl->id = id;
+-
+-	mutex_lock(&mgr->amp_ctrls_lock);
+-	list_add(&ctrl->list, &mgr->amp_ctrls);
+-	mutex_unlock(&mgr->amp_ctrls_lock);
+-
+-	BT_DBG("mgr %p ctrl %p", mgr, ctrl);
+-
+-	return ctrl;
+-}
+-
+-void amp_ctrl_list_flush(struct amp_mgr *mgr)
+-{
+-	struct amp_ctrl *ctrl, *n;
+-
+-	BT_DBG("mgr %p", mgr);
+-
+-	mutex_lock(&mgr->amp_ctrls_lock);
+-	list_for_each_entry_safe(ctrl, n, &mgr->amp_ctrls, list) {
+-		list_del(&ctrl->list);
+-		amp_ctrl_put(ctrl);
+-	}
+-	mutex_unlock(&mgr->amp_ctrls_lock);
+-}
+-
+-struct amp_ctrl *amp_ctrl_lookup(struct amp_mgr *mgr, u8 id)
+-{
+-	struct amp_ctrl *ctrl;
+-
+-	BT_DBG("mgr %p id %u", mgr, id);
+-
+-	mutex_lock(&mgr->amp_ctrls_lock);
+-	list_for_each_entry(ctrl, &mgr->amp_ctrls, list) {
+-		if (ctrl->id == id) {
+-			amp_ctrl_get(ctrl);
+-			mutex_unlock(&mgr->amp_ctrls_lock);
+-			return ctrl;
+-		}
+-	}
+-	mutex_unlock(&mgr->amp_ctrls_lock);
+-
+-	return NULL;
+-}
+-
+-/* Physical Link interface */
+-static u8 __next_handle(struct amp_mgr *mgr)
+-{
+-	if (++mgr->handle == 0)
+-		mgr->handle = 1;
+-
+-	return mgr->handle;
+-}
+-
+-struct hci_conn *phylink_add(struct hci_dev *hdev, struct amp_mgr *mgr,
+-			     u8 remote_id, bool out)
+-{
+-	bdaddr_t *dst = &mgr->l2cap_conn->hcon->dst;
+-	struct hci_conn *hcon;
+-	u8 role = out ? HCI_ROLE_MASTER : HCI_ROLE_SLAVE;
+-
+-	hcon = hci_conn_add(hdev, AMP_LINK, dst, role, __next_handle(mgr));
+-	if (!hcon)
+-		return NULL;
+-
+-	BT_DBG("hcon %p dst %pMR", hcon, dst);
+-
+-	hcon->state = BT_CONNECT;
+-	hcon->attempt++;
+-	hcon->remote_id = remote_id;
+-	hcon->amp_mgr = amp_mgr_get(mgr);
+-
+-	return hcon;
+-}
+-
+-/* AMP crypto key generation interface */
+-static int hmac_sha256(u8 *key, u8 ksize, char *plaintext, u8 psize, u8 *output)
+-{
+-	struct crypto_shash *tfm;
+-	struct shash_desc *shash;
+-	int ret;
+-
+-	if (!ksize)
+-		return -EINVAL;
+-
+-	tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
+-	if (IS_ERR(tfm)) {
+-		BT_DBG("crypto_alloc_ahash failed: err %ld", PTR_ERR(tfm));
+-		return PTR_ERR(tfm);
+-	}
+-
+-	ret = crypto_shash_setkey(tfm, key, ksize);
+-	if (ret) {
+-		BT_DBG("crypto_ahash_setkey failed: err %d", ret);
+-		goto failed;
+-	}
+-
+-	shash = kzalloc(sizeof(*shash) + crypto_shash_descsize(tfm),
+-			GFP_KERNEL);
+-	if (!shash) {
+-		ret = -ENOMEM;
+-		goto failed;
+-	}
+-
+-	shash->tfm = tfm;
+-
+-	ret = crypto_shash_digest(shash, plaintext, psize, output);
+-
+-	kfree(shash);
+-
+-failed:
+-	crypto_free_shash(tfm);
+-	return ret;
+-}
+-
+-int phylink_gen_key(struct hci_conn *conn, u8 *data, u8 *len, u8 *type)
+-{
+-	struct hci_dev *hdev = conn->hdev;
+-	struct link_key *key;
+-	u8 keybuf[HCI_AMP_LINK_KEY_SIZE];
+-	u8 gamp_key[HCI_AMP_LINK_KEY_SIZE];
+-	int err;
+-
+-	if (!hci_conn_check_link_mode(conn))
+-		return -EACCES;
+-
+-	BT_DBG("conn %p key_type %d", conn, conn->key_type);
+-
+-	/* Legacy key */
+-	if (conn->key_type < 3) {
+-		bt_dev_err(hdev, "legacy key type %u", conn->key_type);
+-		return -EACCES;
+-	}
+-
+-	*type = conn->key_type;
+-	*len = HCI_AMP_LINK_KEY_SIZE;
+-
+-	key = hci_find_link_key(hdev, &conn->dst);
+-	if (!key) {
+-		BT_DBG("No Link key for conn %p dst %pMR", conn, &conn->dst);
+-		return -EACCES;
+-	}
+-
+-	/* BR/EDR Link Key concatenated with itself */
+-	memcpy(&keybuf[0], key->val, HCI_LINK_KEY_SIZE);
+-	memcpy(&keybuf[HCI_LINK_KEY_SIZE], key->val, HCI_LINK_KEY_SIZE);
+-
+-	/* Derive Generic AMP Link Key (gamp) */
+-	err = hmac_sha256(keybuf, HCI_AMP_LINK_KEY_SIZE, "gamp", 4, gamp_key);
+-	if (err) {
+-		bt_dev_err(hdev, "could not derive Generic AMP Key: err %d", err);
+-		return err;
+-	}
+-
+-	if (conn->key_type == HCI_LK_DEBUG_COMBINATION) {
+-		BT_DBG("Use Generic AMP Key (gamp)");
+-		memcpy(data, gamp_key, HCI_AMP_LINK_KEY_SIZE);
+-		return err;
+-	}
+-
+-	/* Derive Dedicated AMP Link Key: "802b" is 802.11 PAL keyID */
+-	return hmac_sha256(gamp_key, HCI_AMP_LINK_KEY_SIZE, "802b", 4, data);
+-}
+-
+-static void read_local_amp_assoc_complete(struct hci_dev *hdev, u8 status,
+-					  u16 opcode, struct sk_buff *skb)
+-{
+-	struct hci_rp_read_local_amp_assoc *rp = (void *)skb->data;
+-	struct amp_assoc *assoc = &hdev->loc_assoc;
+-	size_t rem_len, frag_len;
+-
+-	BT_DBG("%s status 0x%2.2x", hdev->name, rp->status);
+-
+-	if (rp->status)
+-		goto send_rsp;
+-
+-	frag_len = skb->len - sizeof(*rp);
+-	rem_len = __le16_to_cpu(rp->rem_len);
+-
+-	if (rem_len > frag_len) {
+-		BT_DBG("frag_len %zu rem_len %zu", frag_len, rem_len);
+-
+-		memcpy(assoc->data + assoc->offset, rp->frag, frag_len);
+-		assoc->offset += frag_len;
+-
+-		/* Read other fragments */
+-		amp_read_loc_assoc_frag(hdev, rp->phy_handle);
+-
+-		return;
+-	}
+-
+-	memcpy(assoc->data + assoc->offset, rp->frag, rem_len);
+-	assoc->len = assoc->offset + rem_len;
+-	assoc->offset = 0;
+-
+-send_rsp:
+-	/* Send A2MP Rsp when all fragments are received */
+-	a2mp_send_getampassoc_rsp(hdev, rp->status);
+-	a2mp_send_create_phy_link_req(hdev, rp->status);
+-}
+-
+-void amp_read_loc_assoc_frag(struct hci_dev *hdev, u8 phy_handle)
+-{
+-	struct hci_cp_read_local_amp_assoc cp;
+-	struct amp_assoc *loc_assoc = &hdev->loc_assoc;
+-	struct hci_request req;
+-	int err;
+-
+-	BT_DBG("%s handle %u", hdev->name, phy_handle);
+-
+-	cp.phy_handle = phy_handle;
+-	cp.max_len = cpu_to_le16(hdev->amp_assoc_size);
+-	cp.len_so_far = cpu_to_le16(loc_assoc->offset);
+-
+-	hci_req_init(&req, hdev);
+-	hci_req_add(&req, HCI_OP_READ_LOCAL_AMP_ASSOC, sizeof(cp), &cp);
+-	err = hci_req_run_skb(&req, read_local_amp_assoc_complete);
+-	if (err < 0)
+-		a2mp_send_getampassoc_rsp(hdev, A2MP_STATUS_INVALID_CTRL_ID);
+-}
+-
+-void amp_read_loc_assoc(struct hci_dev *hdev, struct amp_mgr *mgr)
+-{
+-	struct hci_cp_read_local_amp_assoc cp;
+-	struct hci_request req;
+-	int err;
+-
+-	memset(&hdev->loc_assoc, 0, sizeof(struct amp_assoc));
+-	memset(&cp, 0, sizeof(cp));
+-
+-	cp.max_len = cpu_to_le16(hdev->amp_assoc_size);
+-
+-	set_bit(READ_LOC_AMP_ASSOC, &mgr->state);
+-	hci_req_init(&req, hdev);
+-	hci_req_add(&req, HCI_OP_READ_LOCAL_AMP_ASSOC, sizeof(cp), &cp);
+-	err = hci_req_run_skb(&req, read_local_amp_assoc_complete);
+-	if (err < 0)
+-		a2mp_send_getampassoc_rsp(hdev, A2MP_STATUS_INVALID_CTRL_ID);
+-}
+-
+-void amp_read_loc_assoc_final_data(struct hci_dev *hdev,
+-				   struct hci_conn *hcon)
+-{
+-	struct hci_cp_read_local_amp_assoc cp;
+-	struct amp_mgr *mgr = hcon->amp_mgr;
+-	struct hci_request req;
+-	int err;
+-
+-	if (!mgr)
+-		return;
+-
+-	cp.phy_handle = hcon->handle;
+-	cp.len_so_far = cpu_to_le16(0);
+-	cp.max_len = cpu_to_le16(hdev->amp_assoc_size);
+-
+-	set_bit(READ_LOC_AMP_ASSOC_FINAL, &mgr->state);
+-
+-	/* Read Local AMP Assoc final link information data */
+-	hci_req_init(&req, hdev);
+-	hci_req_add(&req, HCI_OP_READ_LOCAL_AMP_ASSOC, sizeof(cp), &cp);
+-	err = hci_req_run_skb(&req, read_local_amp_assoc_complete);
+-	if (err < 0)
+-		a2mp_send_getampassoc_rsp(hdev, A2MP_STATUS_INVALID_CTRL_ID);
+-}
+-
+-static void write_remote_amp_assoc_complete(struct hci_dev *hdev, u8 status,
+-					    u16 opcode, struct sk_buff *skb)
+-{
+-	struct hci_rp_write_remote_amp_assoc *rp = (void *)skb->data;
+-
+-	BT_DBG("%s status 0x%2.2x phy_handle 0x%2.2x",
+-	       hdev->name, rp->status, rp->phy_handle);
+-
+-	if (rp->status)
+-		return;
+-
+-	amp_write_rem_assoc_continue(hdev, rp->phy_handle);
+-}
+-
+-/* Write AMP Assoc data fragments; returns true when the last fragment is written */
+-static bool amp_write_rem_assoc_frag(struct hci_dev *hdev,
+-				     struct hci_conn *hcon)
+-{
+-	struct hci_cp_write_remote_amp_assoc *cp;
+-	struct amp_mgr *mgr = hcon->amp_mgr;
+-	struct amp_ctrl *ctrl;
+-	struct hci_request req;
+-	u16 frag_len, len;
+-
+-	ctrl = amp_ctrl_lookup(mgr, hcon->remote_id);
+-	if (!ctrl)
+-		return false;
+-
+-	if (!ctrl->assoc_rem_len) {
+-		BT_DBG("all fragments are written");
+-		ctrl->assoc_rem_len = ctrl->assoc_len;
+-		ctrl->assoc_len_so_far = 0;
+-
+-		amp_ctrl_put(ctrl);
+-		return true;
+-	}
+-
+-	frag_len = min_t(u16, 248, ctrl->assoc_rem_len);
+-	len = frag_len + sizeof(*cp);
+-
+-	cp = kzalloc(len, GFP_KERNEL);
+-	if (!cp) {
+-		amp_ctrl_put(ctrl);
+-		return false;
+-	}
+-
+-	BT_DBG("hcon %p ctrl %p frag_len %u assoc_len %u rem_len %u",
+-	       hcon, ctrl, frag_len, ctrl->assoc_len, ctrl->assoc_rem_len);
+-
+-	cp->phy_handle = hcon->handle;
+-	cp->len_so_far = cpu_to_le16(ctrl->assoc_len_so_far);
+-	cp->rem_len = cpu_to_le16(ctrl->assoc_rem_len);
+-	memcpy(cp->frag, ctrl->assoc, frag_len);
+-
+-	ctrl->assoc_len_so_far += frag_len;
+-	ctrl->assoc_rem_len -= frag_len;
+-
+-	amp_ctrl_put(ctrl);
+-
+-	hci_req_init(&req, hdev);
+-	hci_req_add(&req, HCI_OP_WRITE_REMOTE_AMP_ASSOC, len, cp);
+-	hci_req_run_skb(&req, write_remote_amp_assoc_complete);
+-
+-	kfree(cp);
+-
+-	return false;
+-}
+-
+-void amp_write_rem_assoc_continue(struct hci_dev *hdev, u8 handle)
+-{
+-	struct hci_conn *hcon;
+-
+-	BT_DBG("%s phy handle 0x%2.2x", hdev->name, handle);
+-
+-	hcon = hci_conn_hash_lookup_handle(hdev, handle);
+-	if (!hcon)
+-		return;
+-
+-	/* Send A2MP create phylink rsp when all fragments are written */
+-	if (amp_write_rem_assoc_frag(hdev, hcon))
+-		a2mp_send_create_phy_link_rsp(hdev, 0);
+-}
+-
+-void amp_write_remote_assoc(struct hci_dev *hdev, u8 handle)
+-{
+-	struct hci_conn *hcon;
+-
+-	BT_DBG("%s phy handle 0x%2.2x", hdev->name, handle);
+-
+-	hcon = hci_conn_hash_lookup_handle(hdev, handle);
+-	if (!hcon)
+-		return;
+-
+-	BT_DBG("%s phy handle 0x%2.2x hcon %p", hdev->name, handle, hcon);
+-
+-	amp_write_rem_assoc_frag(hdev, hcon);
+-}
+-
+-static void create_phylink_complete(struct hci_dev *hdev, u8 status,
+-				    u16 opcode)
+-{
+-	struct hci_cp_create_phy_link *cp;
+-
+-	BT_DBG("%s status 0x%2.2x", hdev->name, status);
+-
+-	cp = hci_sent_cmd_data(hdev, HCI_OP_CREATE_PHY_LINK);
+-	if (!cp)
+-		return;
+-
+-	hci_dev_lock(hdev);
+-
+-	if (status) {
+-		struct hci_conn *hcon;
+-
+-		hcon = hci_conn_hash_lookup_handle(hdev, cp->phy_handle);
+-		if (hcon)
+-			hci_conn_del(hcon);
+-	} else {
+-		amp_write_remote_assoc(hdev, cp->phy_handle);
+-	}
+-
+-	hci_dev_unlock(hdev);
+-}
+-
+-void amp_create_phylink(struct hci_dev *hdev, struct amp_mgr *mgr,
+-			struct hci_conn *hcon)
+-{
+-	struct hci_cp_create_phy_link cp;
+-	struct hci_request req;
+-
+-	cp.phy_handle = hcon->handle;
+-
+-	BT_DBG("%s hcon %p phy handle 0x%2.2x", hdev->name, hcon,
+-	       hcon->handle);
+-
+-	if (phylink_gen_key(mgr->l2cap_conn->hcon, cp.key, &cp.key_len,
+-			    &cp.key_type)) {
+-		BT_DBG("Cannot create link key");
+-		return;
+-	}
+-
+-	hci_req_init(&req, hdev);
+-	hci_req_add(&req, HCI_OP_CREATE_PHY_LINK, sizeof(cp), &cp);
+-	hci_req_run(&req, create_phylink_complete);
+-}
+-
+-static void accept_phylink_complete(struct hci_dev *hdev, u8 status,
+-				    u16 opcode)
+-{
+-	struct hci_cp_accept_phy_link *cp;
+-
+-	BT_DBG("%s status 0x%2.2x", hdev->name, status);
+-
+-	if (status)
+-		return;
+-
+-	cp = hci_sent_cmd_data(hdev, HCI_OP_ACCEPT_PHY_LINK);
+-	if (!cp)
+-		return;
+-
+-	amp_write_remote_assoc(hdev, cp->phy_handle);
+-}
+-
+-void amp_accept_phylink(struct hci_dev *hdev, struct amp_mgr *mgr,
+-			struct hci_conn *hcon)
+-{
+-	struct hci_cp_accept_phy_link cp;
+-	struct hci_request req;
+-
+-	cp.phy_handle = hcon->handle;
+-
+-	BT_DBG("%s hcon %p phy handle 0x%2.2x", hdev->name, hcon,
+-	       hcon->handle);
+-
+-	if (phylink_gen_key(mgr->l2cap_conn->hcon, cp.key, &cp.key_len,
+-			    &cp.key_type)) {
+-		BT_DBG("Cannot create link key");
+-		return;
+-	}
+-
+-	hci_req_init(&req, hdev);
+-	hci_req_add(&req, HCI_OP_ACCEPT_PHY_LINK, sizeof(cp), &cp);
+-	hci_req_run(&req, accept_phylink_complete);
+-}
+-
+-void amp_physical_cfm(struct hci_conn *bredr_hcon, struct hci_conn *hs_hcon)
+-{
+-	struct hci_dev *bredr_hdev = hci_dev_hold(bredr_hcon->hdev);
+-	struct amp_mgr *mgr = hs_hcon->amp_mgr;
+-	struct l2cap_chan *bredr_chan;
+-
+-	BT_DBG("bredr_hcon %p hs_hcon %p mgr %p", bredr_hcon, hs_hcon, mgr);
+-
+-	if (!bredr_hdev || !mgr || !mgr->bredr_chan)
+-		return;
+-
+-	bredr_chan = mgr->bredr_chan;
+-
+-	l2cap_chan_lock(bredr_chan);
+-
+-	set_bit(FLAG_EFS_ENABLE, &bredr_chan->flags);
+-	bredr_chan->remote_amp_id = hs_hcon->remote_id;
+-	bredr_chan->local_amp_id = hs_hcon->hdev->id;
+-	bredr_chan->hs_hcon = hs_hcon;
+-	bredr_chan->conn->mtu = hs_hcon->hdev->block_mtu;
+-
+-	__l2cap_physical_cfm(bredr_chan, 0);
+-
+-	l2cap_chan_unlock(bredr_chan);
+-
+-	hci_dev_put(bredr_hdev);
+-}
+-
+-void amp_create_logical_link(struct l2cap_chan *chan)
+-{
+-	struct hci_conn *hs_hcon = chan->hs_hcon;
+-	struct hci_cp_create_accept_logical_link cp;
+-	struct hci_dev *hdev;
+-
+-	BT_DBG("chan %p hs_hcon %p dst %pMR", chan, hs_hcon,
+-	       &chan->conn->hcon->dst);
+-
+-	if (!hs_hcon)
+-		return;
+-
+-	hdev = hci_dev_hold(chan->hs_hcon->hdev);
+-	if (!hdev)
+-		return;
+-
+-	cp.phy_handle = hs_hcon->handle;
+-
+-	cp.tx_flow_spec.id = chan->local_id;
+-	cp.tx_flow_spec.stype = chan->local_stype;
+-	cp.tx_flow_spec.msdu = cpu_to_le16(chan->local_msdu);
+-	cp.tx_flow_spec.sdu_itime = cpu_to_le32(chan->local_sdu_itime);
+-	cp.tx_flow_spec.acc_lat = cpu_to_le32(chan->local_acc_lat);
+-	cp.tx_flow_spec.flush_to = cpu_to_le32(chan->local_flush_to);
+-
+-	cp.rx_flow_spec.id = chan->remote_id;
+-	cp.rx_flow_spec.stype = chan->remote_stype;
+-	cp.rx_flow_spec.msdu = cpu_to_le16(chan->remote_msdu);
+-	cp.rx_flow_spec.sdu_itime = cpu_to_le32(chan->remote_sdu_itime);
+-	cp.rx_flow_spec.acc_lat = cpu_to_le32(chan->remote_acc_lat);
+-	cp.rx_flow_spec.flush_to = cpu_to_le32(chan->remote_flush_to);
+-
+-	if (hs_hcon->out)
+-		hci_send_cmd(hdev, HCI_OP_CREATE_LOGICAL_LINK, sizeof(cp),
+-			     &cp);
+-	else
+-		hci_send_cmd(hdev, HCI_OP_ACCEPT_LOGICAL_LINK, sizeof(cp),
+-			     &cp);
+-
+-	hci_dev_put(hdev);
+-}
+-
+-void amp_disconnect_logical_link(struct hci_chan *hchan)
+-{
+-	struct hci_conn *hcon = hchan->conn;
+-	struct hci_cp_disconn_logical_link cp;
+-
+-	if (hcon->state != BT_CONNECTED) {
+-		BT_DBG("hchan %p not connected", hchan);
+-		return;
+-	}
+-
+-	cp.log_handle = cpu_to_le16(hchan->handle);
+-	hci_send_cmd(hcon->hdev, HCI_OP_DISCONN_LOGICAL_LINK, sizeof(cp), &cp);
+-}
+-
+-void amp_destroy_logical_link(struct hci_chan *hchan, u8 reason)
+-{
+-	BT_DBG("hchan %p", hchan);
+-
+-	hci_chan_del(hchan);
+-}
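The removed hmac_sha256() helper followed the usual one-shot shash flow: allocate the "hmac(sha256)" transform, set the key, digest, free. A standalone sketch of the same flow, assuming the crypto_shash_tfm_digest() convenience helper available in modern kernels in place of the hand-rolled descriptor allocation:

#include <crypto/hash.h>
#include <linux/err.h>

static int example_hmac_sha256(const u8 *key, unsigned int ksize,
			       const u8 *msg, unsigned int msize, u8 *out)
{
	struct crypto_shash *tfm;
	int ret;

	tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_shash_setkey(tfm, key, ksize);
	if (!ret)
		ret = crypto_shash_tfm_digest(tfm, msg, msize, out);

	crypto_free_shash(tfm);
	return ret;
}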
+diff --git a/net/bluetooth/amp.h b/net/bluetooth/amp.h
+deleted file mode 100644
+index 97c87abd129f6..0000000000000
+--- a/net/bluetooth/amp.h
++++ /dev/null
+@@ -1,60 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0-only */
+-/*
+-   Copyright (c) 2011,2012 Intel Corp.
+-
+-*/
+-
+-#ifndef __AMP_H
+-#define __AMP_H
+-
+-struct amp_ctrl {
+-	struct list_head	list;
+-	struct kref		kref;
+-	__u8			id;
+-	__u16			assoc_len_so_far;
+-	__u16			assoc_rem_len;
+-	__u16			assoc_len;
+-	__u8			*assoc;
+-};
+-
+-int amp_ctrl_put(struct amp_ctrl *ctrl);
+-void amp_ctrl_get(struct amp_ctrl *ctrl);
+-struct amp_ctrl *amp_ctrl_add(struct amp_mgr *mgr, u8 id);
+-struct amp_ctrl *amp_ctrl_lookup(struct amp_mgr *mgr, u8 id);
+-void amp_ctrl_list_flush(struct amp_mgr *mgr);
+-
+-struct hci_conn *phylink_add(struct hci_dev *hdev, struct amp_mgr *mgr,
+-			     u8 remote_id, bool out);
+-
+-int phylink_gen_key(struct hci_conn *hcon, u8 *data, u8 *len, u8 *type);
+-
+-void amp_read_loc_assoc_frag(struct hci_dev *hdev, u8 phy_handle);
+-void amp_read_loc_assoc(struct hci_dev *hdev, struct amp_mgr *mgr);
+-void amp_read_loc_assoc_final_data(struct hci_dev *hdev,
+-				   struct hci_conn *hcon);
+-void amp_create_phylink(struct hci_dev *hdev, struct amp_mgr *mgr,
+-			struct hci_conn *hcon);
+-void amp_accept_phylink(struct hci_dev *hdev, struct amp_mgr *mgr,
+-			struct hci_conn *hcon);
+-
+-#if IS_ENABLED(CONFIG_BT_HS)
+-void amp_create_logical_link(struct l2cap_chan *chan);
+-void amp_disconnect_logical_link(struct hci_chan *hchan);
+-#else
+-static inline void amp_create_logical_link(struct l2cap_chan *chan)
+-{
+-}
+-
+-static inline void amp_disconnect_logical_link(struct hci_chan *hchan)
+-{
+-}
+-#endif
+-
+-void amp_write_remote_assoc(struct hci_dev *hdev, u8 handle);
+-void amp_write_rem_assoc_continue(struct hci_dev *hdev, u8 handle);
+-void amp_physical_cfm(struct hci_conn *bredr_hcon, struct hci_conn *hs_hcon);
+-void amp_create_logical_link(struct l2cap_chan *chan);
+-void amp_disconnect_logical_link(struct hci_chan *hchan);
+-void amp_destroy_logical_link(struct hci_chan *hchan, u8 reason);
+-
+-#endif /* __AMP_H */
+diff --git a/net/bluetooth/eir.c b/net/bluetooth/eir.c
+index 9214189279e80..1bc51e2b05a34 100644
+--- a/net/bluetooth/eir.c
++++ b/net/bluetooth/eir.c
+@@ -13,48 +13,33 @@
+ 
+ #define PNP_INFO_SVCLASS_ID		0x1200
+ 
+-static u8 eir_append_name(u8 *eir, u16 eir_len, u8 type, u8 *data, u8 data_len)
+-{
+-	u8 name[HCI_MAX_SHORT_NAME_LENGTH + 1];
+-
+-	/* If data is already NULL terminated just pass it directly */
+-	if (data[data_len - 1] == '\0')
+-		return eir_append_data(eir, eir_len, type, data, data_len);
+-
+-	memcpy(name, data, HCI_MAX_SHORT_NAME_LENGTH);
+-	name[HCI_MAX_SHORT_NAME_LENGTH] = '\0';
+-
+-	return eir_append_data(eir, eir_len, type, name, sizeof(name));
+-}
+-
+ u8 eir_append_local_name(struct hci_dev *hdev, u8 *ptr, u8 ad_len)
+ {
+ 	size_t short_len;
+ 	size_t complete_len;
+ 
+-	/* no space left for name (+ NULL + type + len) */
+-	if ((max_adv_len(hdev) - ad_len) < HCI_MAX_SHORT_NAME_LENGTH + 3)
++	/* no space left for name (+ type + len) */
++	if ((max_adv_len(hdev) - ad_len) < HCI_MAX_SHORT_NAME_LENGTH + 2)
+ 		return ad_len;
+ 
+ 	/* use complete name if present and fits */
+ 	complete_len = strnlen(hdev->dev_name, sizeof(hdev->dev_name));
+ 	if (complete_len && complete_len <= HCI_MAX_SHORT_NAME_LENGTH)
+-		return eir_append_name(ptr, ad_len, EIR_NAME_COMPLETE,
+-				       hdev->dev_name, complete_len + 1);
++		return eir_append_data(ptr, ad_len, EIR_NAME_COMPLETE,
++				       hdev->dev_name, complete_len);
+ 
+ 	/* use short name if present */
+ 	short_len = strnlen(hdev->short_name, sizeof(hdev->short_name));
+ 	if (short_len)
+-		return eir_append_name(ptr, ad_len, EIR_NAME_SHORT,
++		return eir_append_data(ptr, ad_len, EIR_NAME_SHORT,
+ 				       hdev->short_name,
+-				       short_len == HCI_MAX_SHORT_NAME_LENGTH ?
+-				       short_len : short_len + 1);
++				       short_len);
+ 
+ 	/* use shortened full name if present; we already know the name
+ 	 * is longer than HCI_MAX_SHORT_NAME_LENGTH
+ 	 */
+ 	if (complete_len)
+-		return eir_append_name(ptr, ad_len, EIR_NAME_SHORT,
++		return eir_append_data(ptr, ad_len, EIR_NAME_SHORT,
+ 				       hdev->dev_name,
+ 				       HCI_MAX_SHORT_NAME_LENGTH);
+ 
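The eir.c hunk above stops NUL-terminating advertised names: every EIR/AD field already carries an explicit length octet, so the terminator only wasted a byte of advertising space. A sketch of the field layout; eir_put_field() is a hypothetical stand-in for the real eir_append_data():

#include <linux/string.h>
#include <linux/types.h>

static u16 eir_put_field(u8 *eir, u16 eir_len, u8 type,
			 const u8 *data, u8 data_len)
{
	eir[eir_len++] = data_len + 1;	/* length octet covers type + data */
	eir[eir_len++] = type;		/* e.g. EIR_NAME_COMPLETE */
	memcpy(&eir[eir_len], data, data_len);

	return eir_len + data_len;
}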
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index d01db89fcb469..50c55d7335692 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -36,7 +36,6 @@
+ 
+ #include "hci_request.h"
+ #include "smp.h"
+-#include "a2mp.h"
+ #include "eir.h"
+ 
+ struct sco_param {
+@@ -1169,9 +1168,6 @@ void hci_conn_del(struct hci_conn *conn)
+ 		}
+ 	}
+ 
+-	if (conn->amp_mgr)
+-		amp_mgr_put(conn->amp_mgr);
+-
+ 	skb_queue_purge(&conn->data_q);
+ 
+ 	/* Remove the connection from the list and cleanup its remaining
+@@ -2981,7 +2977,7 @@ int hci_abort_conn(struct hci_conn *conn, u8 reason)
+ 		case HCI_EV_LE_CONN_COMPLETE:
+ 		case HCI_EV_LE_ENHANCED_CONN_COMPLETE:
+ 		case HCI_EVT_LE_CIS_ESTABLISHED:
+-			hci_cmd_sync_cancel(hdev, -ECANCELED);
++			hci_cmd_sync_cancel(hdev, ECANCELED);
+ 			break;
+ 		}
+ 	}
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2821a42cefdc6..7d5334b529834 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -908,7 +908,7 @@ int hci_get_dev_info(void __user *arg)
+ 	else
+ 		flags = hdev->flags;
+ 
+-	strcpy(di.name, hdev->name);
++	strscpy(di.name, hdev->name, sizeof(di.name));
+ 	di.bdaddr   = hdev->bdaddr;
+ 	di.type     = (hdev->bus & 0x0f) | ((hdev->dev_type & 0x03) << 4);
+ 	di.flags    = flags;
+@@ -1491,11 +1491,12 @@ static void hci_cmd_timeout(struct work_struct *work)
+ 	struct hci_dev *hdev = container_of(work, struct hci_dev,
+ 					    cmd_timer.work);
+ 
+-	if (hdev->sent_cmd) {
+-		struct hci_command_hdr *sent = (void *) hdev->sent_cmd->data;
+-		u16 opcode = __le16_to_cpu(sent->opcode);
++	if (hdev->req_skb) {
++		u16 opcode = hci_skb_opcode(hdev->req_skb);
+ 
+ 		bt_dev_err(hdev, "command 0x%4.4x tx timeout", opcode);
++
++		hci_cmd_sync_cancel_sync(hdev, ETIMEDOUT);
+ 	} else {
+ 		bt_dev_err(hdev, "command tx timeout");
+ 	}
+@@ -2795,6 +2796,7 @@ void hci_release_dev(struct hci_dev *hdev)
+ 	ida_destroy(&hdev->unset_handle_ida);
+ 	ida_simple_remove(&hci_index_ida, hdev->id);
+ 	kfree_skb(hdev->sent_cmd);
++	kfree_skb(hdev->req_skb);
+ 	kfree_skb(hdev->recv_event);
+ 	kfree(hdev);
+ }
+@@ -2826,6 +2828,23 @@ int hci_unregister_suspend_notifier(struct hci_dev *hdev)
+ 	return ret;
+ }
+ 
++/* Cancel ongoing command synchronously:
++ *
++ * - Cancel command timer
++ * - Reset command counter
++ * - Cancel command request
++ */
++static void hci_cancel_cmd_sync(struct hci_dev *hdev, int err)
++{
++	bt_dev_dbg(hdev, "err 0x%2.2x", err);
++
++	cancel_delayed_work_sync(&hdev->cmd_timer);
++	cancel_delayed_work_sync(&hdev->ncmd_timer);
++	atomic_set(&hdev->cmd_cnt, 1);
++
++	hci_cmd_sync_cancel_sync(hdev, -err);
++}
++
+ /* Suspend HCI device */
+ int hci_suspend_dev(struct hci_dev *hdev)
+ {
+@@ -2843,7 +2862,7 @@ int hci_suspend_dev(struct hci_dev *hdev)
+ 		return 0;
+ 
+ 	/* Cancel potentially blocking sync operation before suspend */
+-	__hci_cmd_sync_cancel(hdev, -EHOSTDOWN);
++	hci_cancel_cmd_sync(hdev, -EHOSTDOWN);
+ 
+ 	hci_req_sync_lock(hdev);
+ 	ret = hci_suspend_sync(hdev);
+@@ -3107,21 +3126,33 @@ int __hci_cmd_send(struct hci_dev *hdev, u16 opcode, u32 plen,
+ EXPORT_SYMBOL(__hci_cmd_send);
+ 
+ /* Get data from the previously sent command */
+-void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 opcode)
++static void *hci_cmd_data(struct sk_buff *skb, __u16 opcode)
+ {
+ 	struct hci_command_hdr *hdr;
+ 
+-	if (!hdev->sent_cmd)
++	if (!skb || skb->len < HCI_COMMAND_HDR_SIZE)
+ 		return NULL;
+ 
+-	hdr = (void *) hdev->sent_cmd->data;
++	hdr = (void *)skb->data;
+ 
+ 	if (hdr->opcode != cpu_to_le16(opcode))
+ 		return NULL;
+ 
+-	BT_DBG("%s opcode 0x%4.4x", hdev->name, opcode);
++	return skb->data + HCI_COMMAND_HDR_SIZE;
++}
++
++/* Get data from the previously sent command */
++void *hci_sent_cmd_data(struct hci_dev *hdev, __u16 opcode)
++{
++	void *data;
+ 
+-	return hdev->sent_cmd->data + HCI_COMMAND_HDR_SIZE;
++	/* Check if opcode matches last sent command */
++	data = hci_cmd_data(hdev->sent_cmd, opcode);
++	if (!data)
++		/* Check if opcode matches last request */
++		data = hci_cmd_data(hdev->req_skb, opcode);
++
++	return data;
+ }
+ 
+ /* Get data from last received event */
+@@ -4022,17 +4053,19 @@ void hci_req_cmd_complete(struct hci_dev *hdev, u16 opcode, u8 status,
+ 	if (!status && !hci_req_is_complete(hdev))
+ 		return;
+ 
++	skb = hdev->req_skb;
++
+ 	/* If this was the last command in a request the complete
+-	 * callback would be found in hdev->sent_cmd instead of the
++	 * callback would be found in hdev->req_skb instead of the
+ 	 * command queue (hdev->cmd_q).
+ 	 */
+-	if (bt_cb(hdev->sent_cmd)->hci.req_flags & HCI_REQ_SKB) {
+-		*req_complete_skb = bt_cb(hdev->sent_cmd)->hci.req_complete_skb;
++	if (skb && bt_cb(skb)->hci.req_flags & HCI_REQ_SKB) {
++		*req_complete_skb = bt_cb(skb)->hci.req_complete_skb;
+ 		return;
+ 	}
+ 
+-	if (bt_cb(hdev->sent_cmd)->hci.req_complete) {
+-		*req_complete = bt_cb(hdev->sent_cmd)->hci.req_complete;
++	if (skb && bt_cb(skb)->hci.req_complete) {
++		*req_complete = bt_cb(skb)->hci.req_complete;
+ 		return;
+ 	}
+ 
+@@ -4128,6 +4161,36 @@ static void hci_rx_work(struct work_struct *work)
+ 	}
+ }
+ 
++static void hci_send_cmd_sync(struct hci_dev *hdev, struct sk_buff *skb)
++{
++	int err;
++
++	bt_dev_dbg(hdev, "skb %p", skb);
++
++	kfree_skb(hdev->sent_cmd);
++
++	hdev->sent_cmd = skb_clone(skb, GFP_KERNEL);
++	if (!hdev->sent_cmd) {
++		skb_queue_head(&hdev->cmd_q, skb);
++		queue_work(hdev->workqueue, &hdev->cmd_work);
++		return;
++	}
++
++	err = hci_send_frame(hdev, skb);
++	if (err < 0) {
++		hci_cmd_sync_cancel_sync(hdev, err);
++		return;
++	}
++
++	if (hci_req_status_pend(hdev) &&
++	    !hci_dev_test_and_set_flag(hdev, HCI_CMD_PENDING)) {
++		kfree_skb(hdev->req_skb);
++		hdev->req_skb = skb_clone(hdev->sent_cmd, GFP_KERNEL);
++	}
++
++	atomic_dec(&hdev->cmd_cnt);
++}
++
+ static void hci_cmd_work(struct work_struct *work)
+ {
+ 	struct hci_dev *hdev = container_of(work, struct hci_dev, cmd_work);
+@@ -4142,30 +4205,15 @@ static void hci_cmd_work(struct work_struct *work)
+ 		if (!skb)
+ 			return;
+ 
+-		kfree_skb(hdev->sent_cmd);
+-
+-		hdev->sent_cmd = skb_clone(skb, GFP_KERNEL);
+-		if (hdev->sent_cmd) {
+-			int res;
+-			if (hci_req_status_pend(hdev))
+-				hci_dev_set_flag(hdev, HCI_CMD_PENDING);
+-			atomic_dec(&hdev->cmd_cnt);
++		hci_send_cmd_sync(hdev, skb);
+ 
+-			res = hci_send_frame(hdev, skb);
+-			if (res < 0)
+-				__hci_cmd_sync_cancel(hdev, -res);
+-
+-			rcu_read_lock();
+-			if (test_bit(HCI_RESET, &hdev->flags) ||
+-			    hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
+-				cancel_delayed_work(&hdev->cmd_timer);
+-			else
+-				queue_delayed_work(hdev->workqueue, &hdev->cmd_timer,
+-						   HCI_CMD_TIMEOUT);
+-			rcu_read_unlock();
+-		} else {
+-			skb_queue_head(&hdev->cmd_q, skb);
+-			queue_work(hdev->workqueue, &hdev->cmd_work);
+-		}
++		rcu_read_lock();
++		if (test_bit(HCI_RESET, &hdev->flags) ||
++		    hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
++			cancel_delayed_work(&hdev->cmd_timer);
++		else
++			queue_delayed_work(hdev->workqueue, &hdev->cmd_timer,
++					   HCI_CMD_TIMEOUT);
++		rcu_read_unlock();
+ 	}
+ }
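The new hci_send_cmd_sync() above keeps a clone of every transmitted command so completion handlers can match replies against it, and requeues the original frame when the clone cannot be allocated. Below is a condensed, hypothetical restatement of just that step; struct example_queue stands in for the relevant hci_dev fields.

#include <linux/skbuff.h>
#include <linux/workqueue.h>

struct example_queue {
	struct sk_buff *sent_cmd;	/* clone kept for reply matching */
	struct sk_buff_head cmd_q;
	struct workqueue_struct *workqueue;
	struct work_struct cmd_work;
};

static bool example_keep_copy(struct example_queue *q, struct sk_buff *skb)
{
	kfree_skb(q->sent_cmd);		/* drop the previous command's clone */

	q->sent_cmd = skb_clone(skb, GFP_KERNEL);
	if (!q->sent_cmd) {
		/* nothing to match a reply against: put the command back */
		skb_queue_head(&q->cmd_q, skb);
		queue_work(q->workqueue, &q->cmd_work);
		return false;
	}

	return true;	/* caller may now hand skb to the transport */
}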
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 2a5f5a7d2412b..6275b14b56927 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -36,8 +36,6 @@
+ #include "hci_request.h"
+ #include "hci_debugfs.h"
+ #include "hci_codec.h"
+-#include "a2mp.h"
+-#include "amp.h"
+ #include "smp.h"
+ #include "msft.h"
+ #include "eir.h"
+@@ -2526,9 +2524,7 @@ static void hci_check_pending_name(struct hci_dev *hdev, struct hci_conn *conn,
+ 	 * Only those in BT_CONFIG or BT_CONNECTED states can be
+ 	 * considered connected.
+ 	 */
+-	if (conn &&
+-	    (conn->state == BT_CONFIG || conn->state == BT_CONNECTED) &&
+-	    !test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags))
++	if (conn && (conn->state == BT_CONFIG || conn->state == BT_CONNECTED))
+ 		mgmt_device_connected(hdev, conn, name, name_len);
+ 
+ 	if (discov->state == DISCOVERY_STOPPED)
+@@ -3556,8 +3552,6 @@ static void hci_remote_name_evt(struct hci_dev *hdev, void *data,
+ 
+ 	bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+ 
+-	hci_conn_check_pending(hdev);
+-
+ 	hci_dev_lock(hdev);
+ 
+ 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
+@@ -3762,8 +3756,9 @@ static void hci_remote_features_evt(struct hci_dev *hdev, void *data,
+ 		bacpy(&cp.bdaddr, &conn->dst);
+ 		cp.pscan_rep_mode = 0x02;
+ 		hci_send_cmd(hdev, HCI_OP_REMOTE_NAME_REQ, sizeof(cp), &cp);
+-	} else if (!test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags))
++	} else {
+ 		mgmt_device_connected(hdev, conn, NULL, 0);
++	}
+ 
+ 	if (!hci_outgoing_auth_needed(hdev, conn)) {
+ 		conn->state = BT_CONNECTED;
+@@ -3936,6 +3931,11 @@ static u8 hci_cc_le_setup_iso_path(struct hci_dev *hdev, void *data,
+ 		 * last.
+ 		 */
+ 		hci_connect_cfm(conn, rp->status);
++
++		/* Notify device connected in case it is a BIG Sync */
++		if (!rp->status && test_bit(HCI_CONN_BIG_SYNC, &conn->flags))
++			mgmt_device_connected(hdev, conn, NULL, 0);
++
+ 		break;
+ 	}
+ 
+@@ -4381,7 +4381,7 @@ static void hci_cmd_status_evt(struct hci_dev *hdev, void *data,
+ 	 * (since for this kind of commands there will not be a command
+ 	 * complete event).
+ 	 */
+-	if (ev->status || (hdev->sent_cmd && !hci_skb_event(hdev->sent_cmd))) {
++	if (ev->status || (hdev->req_skb && !hci_skb_event(hdev->req_skb))) {
+ 		hci_req_cmd_complete(hdev, *opcode, ev->status, req_complete,
+ 				     req_complete_skb);
+ 		if (hci_dev_test_flag(hdev, HCI_CMD_PENDING)) {
+@@ -5010,8 +5010,9 @@ static void hci_remote_ext_features_evt(struct hci_dev *hdev, void *data,
+ 		bacpy(&cp.bdaddr, &conn->dst);
+ 		cp.pscan_rep_mode = 0x02;
+ 		hci_send_cmd(hdev, HCI_OP_REMOTE_NAME_REQ, sizeof(cp), &cp);
+-	} else if (!test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags))
++	} else {
+ 		mgmt_device_connected(hdev, conn, NULL, 0);
++	}
+ 
+ 	if (!hci_outgoing_auth_needed(hdev, conn)) {
+ 		conn->state = BT_CONNECTED;
+@@ -5984,8 +5985,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 		goto unlock;
+ 	}
+ 
+-	if (!test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags))
+-		mgmt_device_connected(hdev, conn, NULL, 0);
++	mgmt_device_connected(hdev, conn, NULL, 0);
+ 
+ 	conn->sec_level = BT_SECURITY_LOW;
+ 	conn->state = BT_CONFIG;
+@@ -7214,6 +7214,9 @@ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
+ 	/* Notify iso layer */
+ 	hci_connect_cfm(pa_sync, 0x00);
+ 
++	/* Notify MGMT layer */
++	mgmt_device_connected(hdev, pa_sync, NULL, 0);
++
+ unlock:
+ 	hci_dev_unlock(hdev);
+ }
+@@ -7324,10 +7327,10 @@ static void hci_le_meta_evt(struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "subevent 0x%2.2x", ev->subevent);
+ 
+ 	/* Only match event if command OGF is for LE */
+-	if (hdev->sent_cmd &&
+-	    hci_opcode_ogf(hci_skb_opcode(hdev->sent_cmd)) == 0x08 &&
+-	    hci_skb_event(hdev->sent_cmd) == ev->subevent) {
+-		*opcode = hci_skb_opcode(hdev->sent_cmd);
++	if (hdev->req_skb &&
++	    hci_opcode_ogf(hci_skb_opcode(hdev->req_skb)) == 0x08 &&
++	    hci_skb_event(hdev->req_skb) == ev->subevent) {
++		*opcode = hci_skb_opcode(hdev->req_skb);
+ 		hci_req_cmd_complete(hdev, *opcode, 0x00, req_complete,
+ 				     req_complete_skb);
+ 	}
+@@ -7714,10 +7717,10 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ 	}
+ 
+ 	/* Only match event if command OGF is not for LE */
+-	if (hdev->sent_cmd &&
+-	    hci_opcode_ogf(hci_skb_opcode(hdev->sent_cmd)) != 0x08 &&
+-	    hci_skb_event(hdev->sent_cmd) == event) {
+-		hci_req_cmd_complete(hdev, hci_skb_opcode(hdev->sent_cmd),
++	if (hdev->req_skb &&
++	    hci_opcode_ogf(hci_skb_opcode(hdev->req_skb)) != 0x08 &&
++	    hci_skb_event(hdev->req_skb) == event) {
++		hci_req_cmd_complete(hdev, hci_skb_opcode(hdev->req_skb),
+ 				     status, &req_complete, &req_complete_skb);
+ 		req_evt = event;
+ 	}
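Both event-matching hunks above apply the same rule, now against hdev->req_skb: an event completes the pending request only if the command's opcode group fits (OGF 0x08 is the LE controller group, whose commands complete via LE meta events). A hedged sketch of that predicate with illustrative names:

#include <linux/types.h>

static bool example_event_matches(u16 opcode, u8 expected_event,
				  u8 event, bool le_event)
{
	bool is_le = (opcode >> 10) == 0x08;	/* OGF is the top 6 bits */

	return expected_event == event && is_le == le_event;
}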
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 6e023b0104b03..00e02138003ec 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -895,7 +895,7 @@ void hci_request_setup(struct hci_dev *hdev)
+ 
+ void hci_request_cancel_all(struct hci_dev *hdev)
+ {
+-	__hci_cmd_sync_cancel(hdev, ENODEV);
++	hci_cmd_sync_cancel_sync(hdev, ENODEV);
+ 
+ 	cancel_interleave_scan(hdev);
+ }
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index b90ee68bba1d6..183501f921814 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -32,6 +32,10 @@ static void hci_cmd_sync_complete(struct hci_dev *hdev, u8 result, u16 opcode,
+ 	hdev->req_result = result;
+ 	hdev->req_status = HCI_REQ_DONE;
+ 
++	/* Free the request command so it is not used as response */
++	kfree_skb(hdev->req_skb);
++	hdev->req_skb = NULL;
++
+ 	if (skb) {
+ 		struct sock *sk = hci_skb_sk(skb);
+ 
+@@ -39,7 +43,7 @@ static void hci_cmd_sync_complete(struct hci_dev *hdev, u8 result, u16 opcode,
+ 		if (sk)
+ 			sock_put(sk);
+ 
+-		hdev->req_skb = skb_get(skb);
++		hdev->req_rsp = skb_get(skb);
+ 	}
+ 
+ 	wake_up_interruptible(&hdev->req_wait_q);
+@@ -187,8 +191,8 @@ struct sk_buff *__hci_cmd_sync_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
+ 
+ 	hdev->req_status = 0;
+ 	hdev->req_result = 0;
+-	skb = hdev->req_skb;
+-	hdev->req_skb = NULL;
++	skb = hdev->req_rsp;
++	hdev->req_rsp = NULL;
+ 
+ 	bt_dev_dbg(hdev, "end: err %d", err);
+ 
+@@ -652,7 +656,7 @@ void hci_cmd_sync_clear(struct hci_dev *hdev)
+ 	mutex_unlock(&hdev->cmd_sync_work_lock);
+ }
+ 
+-void __hci_cmd_sync_cancel(struct hci_dev *hdev, int err)
++void hci_cmd_sync_cancel(struct hci_dev *hdev, int err)
+ {
+ 	bt_dev_dbg(hdev, "err 0x%2.2x", err);
+ 
+@@ -660,15 +664,17 @@ void __hci_cmd_sync_cancel(struct hci_dev *hdev, int err)
+ 		hdev->req_result = err;
+ 		hdev->req_status = HCI_REQ_CANCELED;
+ 
+-		cancel_delayed_work_sync(&hdev->cmd_timer);
+-		cancel_delayed_work_sync(&hdev->ncmd_timer);
+-		atomic_set(&hdev->cmd_cnt, 1);
+-
+-		wake_up_interruptible(&hdev->req_wait_q);
++		queue_work(hdev->workqueue, &hdev->cmd_sync_cancel_work);
+ 	}
+ }
++EXPORT_SYMBOL(hci_cmd_sync_cancel);
+ 
+-void hci_cmd_sync_cancel(struct hci_dev *hdev, int err)
++/* Cancel ongoing command request synchronously:
++ *
++ * - Set result and mark status to HCI_REQ_CANCELED
++ * - Wake up the command sync thread
++ */
++void hci_cmd_sync_cancel_sync(struct hci_dev *hdev, int err)
+ {
+ 	bt_dev_dbg(hdev, "err 0x%2.2x", err);
+ 
+@@ -676,10 +682,10 @@ void hci_cmd_sync_cancel(struct hci_dev *hdev, int err)
+ 		hdev->req_result = err;
+ 		hdev->req_status = HCI_REQ_CANCELED;
+ 
+-		queue_work(hdev->workqueue, &hdev->cmd_sync_cancel_work);
++		wake_up_interruptible(&hdev->req_wait_q);
+ 	}
+ }
+-EXPORT_SYMBOL(hci_cmd_sync_cancel);
++EXPORT_SYMBOL(hci_cmd_sync_cancel_sync);
+ 
+ /* Submit HCI command to be run in cmd_sync_work:
+  *
+@@ -4902,6 +4908,11 @@ int hci_dev_open_sync(struct hci_dev *hdev)
+ 			hdev->sent_cmd = NULL;
+ 		}
+ 
++		if (hdev->req_skb) {
++			kfree_skb(hdev->req_skb);
++			hdev->req_skb = NULL;
++		}
++
+ 		clear_bit(HCI_RUNNING, &hdev->flags);
+ 		hci_sock_dev_event(hdev, HCI_DEV_CLOSE);
+ 
+@@ -5063,6 +5074,12 @@ int hci_dev_close_sync(struct hci_dev *hdev)
+ 		hdev->sent_cmd = NULL;
+ 	}
+ 
++	/* Drop last request */
++	if (hdev->req_skb) {
++		kfree_skb(hdev->req_skb);
++		hdev->req_skb = NULL;
++	}
++
+ 	clear_bit(HCI_RUNNING, &hdev->flags);
+ 	hci_sock_dev_event(hdev, HCI_DEV_CLOSE);
+ 
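The hci_sync.c changes above split the cancel entry points cleanly: hci_cmd_sync_cancel() defers teardown to a workqueue, while hci_cmd_sync_cancel_sync() only flags the request and wakes the waiter directly. A simplified sketch of the two paths; struct example_dev and its fields are illustrative only.

#include <linux/workqueue.h>
#include <linux/wait.h>

#define EXAMPLE_REQ_PEND	1
#define EXAMPLE_REQ_CANCELED	3

struct example_dev {
	int req_status;
	int req_result;
	wait_queue_head_t req_wait_q;
	struct workqueue_struct *workqueue;
	struct work_struct cancel_work;
};

static void example_cancel(struct example_dev *dev, int err)
{
	if (dev->req_status == EXAMPLE_REQ_PEND) {
		dev->req_result = err;
		dev->req_status = EXAMPLE_REQ_CANCELED;
		/* heavy cleanup runs later, in process context */
		queue_work(dev->workqueue, &dev->cancel_work);
	}
}

static void example_cancel_sync(struct example_dev *dev, int err)
{
	if (dev->req_status == EXAMPLE_REQ_PEND) {
		dev->req_result = err;
		dev->req_status = EXAMPLE_REQ_CANCELED;
		wake_up_interruptible(&dev->req_wait_q);	/* wake waiter now */
	}
}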
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 656f49b299d20..ab5a9d42fae71 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -39,8 +39,6 @@
+ #include <net/bluetooth/l2cap.h>
+ 
+ #include "smp.h"
+-#include "a2mp.h"
+-#include "amp.h"
+ 
+ #define LE_FLOWCTL_MAX_CREDITS 65535
+ 
+@@ -167,24 +165,6 @@ static struct l2cap_chan *__l2cap_get_chan_by_ident(struct l2cap_conn *conn,
+ 	return NULL;
+ }
+ 
+-static struct l2cap_chan *l2cap_get_chan_by_ident(struct l2cap_conn *conn,
+-						  u8 ident)
+-{
+-	struct l2cap_chan *c;
+-
+-	mutex_lock(&conn->chan_lock);
+-	c = __l2cap_get_chan_by_ident(conn, ident);
+-	if (c) {
+-		/* Only lock if chan reference is not 0 */
+-		c = l2cap_chan_hold_unless_zero(c);
+-		if (c)
+-			l2cap_chan_lock(c);
+-	}
+-	mutex_unlock(&conn->chan_lock);
+-
+-	return c;
+-}
+-
+ static struct l2cap_chan *__l2cap_global_chan_by_addr(__le16 psm, bdaddr_t *src,
+ 						      u8 src_type)
+ {
+@@ -651,7 +631,6 @@ void l2cap_chan_del(struct l2cap_chan *chan, int err)
+ 	chan->ops->teardown(chan, err);
+ 
+ 	if (conn) {
+-		struct amp_mgr *mgr = conn->hcon->amp_mgr;
+ 		/* Delete from channel list */
+ 		list_del(&chan->list);
+ 
+@@ -666,16 +645,6 @@ void l2cap_chan_del(struct l2cap_chan *chan, int err)
+ 		if (chan->chan_type != L2CAP_CHAN_FIXED ||
+ 		    test_bit(FLAG_HOLD_HCI_CONN, &chan->flags))
+ 			hci_conn_drop(conn->hcon);
+-
+-		if (mgr && mgr->bredr_chan == chan)
+-			mgr->bredr_chan = NULL;
+-	}
+-
+-	if (chan->hs_hchan) {
+-		struct hci_chan *hs_hchan = chan->hs_hchan;
+-
+-		BT_DBG("chan %p disconnect hs_hchan %p", chan, hs_hchan);
+-		amp_disconnect_logical_link(hs_hchan);
+ 	}
+ 
+ 	if (test_bit(CONF_NOT_COMPLETE, &chan->conf_state))
+@@ -977,12 +946,6 @@ static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len,
+ 	hci_send_acl(conn->hchan, skb, flags);
+ }
+ 
+-static bool __chan_is_moving(struct l2cap_chan *chan)
+-{
+-	return chan->move_state != L2CAP_MOVE_STABLE &&
+-	       chan->move_state != L2CAP_MOVE_WAIT_PREPARE;
+-}
+-
+ static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb)
+ {
+ 	struct hci_conn *hcon = chan->conn->hcon;
+@@ -991,15 +954,6 @@ static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb)
+ 	BT_DBG("chan %p, skb %p len %d priority %u", chan, skb, skb->len,
+ 	       skb->priority);
+ 
+-	if (chan->hs_hcon && !__chan_is_moving(chan)) {
+-		if (chan->hs_hchan)
+-			hci_send_acl(chan->hs_hchan, skb, ACL_COMPLETE);
+-		else
+-			kfree_skb(skb);
+-
+-		return;
+-	}
+-
+ 	/* Use NO_FLUSH for LE links (where this is the only option) or
+ 	 * if the BR/EDR link supports it and flushing has not been
+ 	 * explicitly requested (through FLAG_FLUSHABLE).
+@@ -1180,9 +1134,6 @@ static void l2cap_send_sframe(struct l2cap_chan *chan,
+ 	if (!control->sframe)
+ 		return;
+ 
+-	if (__chan_is_moving(chan))
+-		return;
+-
+ 	if (test_and_clear_bit(CONN_SEND_FBIT, &chan->conn_state) &&
+ 	    !control->poll)
+ 		control->final = 1;
+@@ -1237,40 +1188,6 @@ static inline int __l2cap_no_conn_pending(struct l2cap_chan *chan)
+ 	return !test_bit(CONF_CONNECT_PEND, &chan->conf_state);
+ }
+ 
+-static bool __amp_capable(struct l2cap_chan *chan)
+-{
+-	struct l2cap_conn *conn = chan->conn;
+-	struct hci_dev *hdev;
+-	bool amp_available = false;
+-
+-	if (!(conn->local_fixed_chan & L2CAP_FC_A2MP))
+-		return false;
+-
+-	if (!(conn->remote_fixed_chan & L2CAP_FC_A2MP))
+-		return false;
+-
+-	read_lock(&hci_dev_list_lock);
+-	list_for_each_entry(hdev, &hci_dev_list, list) {
+-		if (hdev->amp_type != AMP_TYPE_BREDR &&
+-		    test_bit(HCI_UP, &hdev->flags)) {
+-			amp_available = true;
+-			break;
+-		}
+-	}
+-	read_unlock(&hci_dev_list_lock);
+-
+-	if (chan->chan_policy == BT_CHANNEL_POLICY_AMP_PREFERRED)
+-		return amp_available;
+-
+-	return false;
+-}
+-
+-static bool l2cap_check_efs(struct l2cap_chan *chan)
+-{
+-	/* Check EFS parameters */
+-	return true;
+-}
+-
+ void l2cap_send_conn_req(struct l2cap_chan *chan)
+ {
+ 	struct l2cap_conn *conn = chan->conn;
+@@ -1286,76 +1203,6 @@ void l2cap_send_conn_req(struct l2cap_chan *chan)
+ 	l2cap_send_cmd(conn, chan->ident, L2CAP_CONN_REQ, sizeof(req), &req);
+ }
+ 
+-static void l2cap_send_create_chan_req(struct l2cap_chan *chan, u8 amp_id)
+-{
+-	struct l2cap_create_chan_req req;
+-	req.scid = cpu_to_le16(chan->scid);
+-	req.psm  = chan->psm;
+-	req.amp_id = amp_id;
+-
+-	chan->ident = l2cap_get_ident(chan->conn);
+-
+-	l2cap_send_cmd(chan->conn, chan->ident, L2CAP_CREATE_CHAN_REQ,
+-		       sizeof(req), &req);
+-}
+-
+-static void l2cap_move_setup(struct l2cap_chan *chan)
+-{
+-	struct sk_buff *skb;
+-
+-	BT_DBG("chan %p", chan);
+-
+-	if (chan->mode != L2CAP_MODE_ERTM)
+-		return;
+-
+-	__clear_retrans_timer(chan);
+-	__clear_monitor_timer(chan);
+-	__clear_ack_timer(chan);
+-
+-	chan->retry_count = 0;
+-	skb_queue_walk(&chan->tx_q, skb) {
+-		if (bt_cb(skb)->l2cap.retries)
+-			bt_cb(skb)->l2cap.retries = 1;
+-		else
+-			break;
+-	}
+-
+-	chan->expected_tx_seq = chan->buffer_seq;
+-
+-	clear_bit(CONN_REJ_ACT, &chan->conn_state);
+-	clear_bit(CONN_SREJ_ACT, &chan->conn_state);
+-	l2cap_seq_list_clear(&chan->retrans_list);
+-	l2cap_seq_list_clear(&chan->srej_list);
+-	skb_queue_purge(&chan->srej_q);
+-
+-	chan->tx_state = L2CAP_TX_STATE_XMIT;
+-	chan->rx_state = L2CAP_RX_STATE_MOVE;
+-
+-	set_bit(CONN_REMOTE_BUSY, &chan->conn_state);
+-}
+-
+-static void l2cap_move_done(struct l2cap_chan *chan)
+-{
+-	u8 move_role = chan->move_role;
+-	BT_DBG("chan %p", chan);
+-
+-	chan->move_state = L2CAP_MOVE_STABLE;
+-	chan->move_role = L2CAP_MOVE_ROLE_NONE;
+-
+-	if (chan->mode != L2CAP_MODE_ERTM)
+-		return;
+-
+-	switch (move_role) {
+-	case L2CAP_MOVE_ROLE_INITIATOR:
+-		l2cap_tx(chan, NULL, NULL, L2CAP_EV_EXPLICIT_POLL);
+-		chan->rx_state = L2CAP_RX_STATE_WAIT_F;
+-		break;
+-	case L2CAP_MOVE_ROLE_RESPONDER:
+-		chan->rx_state = L2CAP_RX_STATE_WAIT_P;
+-		break;
+-	}
+-}
+-
+ static void l2cap_chan_ready(struct l2cap_chan *chan)
+ {
+ 	/* The channel may have already been flagged as connected in
+@@ -1505,10 +1352,7 @@ static void l2cap_le_start(struct l2cap_chan *chan)
+ 
+ static void l2cap_start_connection(struct l2cap_chan *chan)
+ {
+-	if (__amp_capable(chan)) {
+-		BT_DBG("chan %p AMP capable: discover AMPs", chan);
+-		a2mp_discover_amp(chan);
+-	} else if (chan->conn->hcon->type == LE_LINK) {
++	if (chan->conn->hcon->type == LE_LINK) {
+ 		l2cap_le_start(chan);
+ 	} else {
+ 		l2cap_send_conn_req(chan);
+@@ -1611,11 +1455,6 @@ static void l2cap_send_disconn_req(struct l2cap_chan *chan, int err)
+ 		__clear_ack_timer(chan);
+ 	}
+ 
+-	if (chan->scid == L2CAP_CID_A2MP) {
+-		l2cap_state_change(chan, BT_DISCONN);
+-		return;
+-	}
+-
+ 	req.dcid = cpu_to_le16(chan->dcid);
+ 	req.scid = cpu_to_le16(chan->scid);
+ 	l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_DISCONN_REQ,
+@@ -1754,11 +1593,6 @@ static void l2cap_conn_ready(struct l2cap_conn *conn)
+ 
+ 		l2cap_chan_lock(chan);
+ 
+-		if (chan->scid == L2CAP_CID_A2MP) {
+-			l2cap_chan_unlock(chan);
+-			continue;
+-		}
+-
+ 		if (hcon->type == LE_LINK) {
+ 			l2cap_le_start(chan);
+ 		} else if (chan->chan_type != L2CAP_CHAN_CONN_ORIENTED) {
+@@ -2067,9 +1901,6 @@ static void l2cap_streaming_send(struct l2cap_chan *chan,
+ 
+ 	BT_DBG("chan %p, skbs %p", chan, skbs);
+ 
+-	if (__chan_is_moving(chan))
+-		return;
+-
+ 	skb_queue_splice_tail_init(skbs, &chan->tx_q);
+ 
+ 	while (!skb_queue_empty(&chan->tx_q)) {
+@@ -2112,9 +1943,6 @@ static int l2cap_ertm_send(struct l2cap_chan *chan)
+ 	if (test_bit(CONN_REMOTE_BUSY, &chan->conn_state))
+ 		return 0;
+ 
+-	if (__chan_is_moving(chan))
+-		return 0;
+-
+ 	while (chan->tx_send_head &&
+ 	       chan->unacked_frames < chan->remote_tx_win &&
+ 	       chan->tx_state == L2CAP_TX_STATE_XMIT) {
+@@ -2180,9 +2008,6 @@ static void l2cap_ertm_resend(struct l2cap_chan *chan)
+ 	if (test_bit(CONN_REMOTE_BUSY, &chan->conn_state))
+ 		return;
+ 
+-	if (__chan_is_moving(chan))
+-		return;
+-
+ 	while (chan->retrans_list.head != L2CAP_SEQ_LIST_CLEAR) {
+ 		seq = l2cap_seq_list_pop(&chan->retrans_list);
+ 
+@@ -2522,8 +2347,7 @@ static int l2cap_segment_sdu(struct l2cap_chan *chan,
+ 	pdu_len = chan->conn->mtu;
+ 
+ 	/* Constrain PDU size for BR/EDR connections */
+-	if (!chan->hs_hcon)
+-		pdu_len = min_t(size_t, pdu_len, L2CAP_BREDR_MAX_PAYLOAD);
++	pdu_len = min_t(size_t, pdu_len, L2CAP_BREDR_MAX_PAYLOAD);
+ 
+ 	/* Adjust for largest possible L2CAP overhead. */
+ 	if (chan->fcs)
+@@ -3287,11 +3111,6 @@ int l2cap_ertm_init(struct l2cap_chan *chan)
+ 
+ 	skb_queue_head_init(&chan->tx_q);
+ 
+-	chan->local_amp_id = AMP_ID_BREDR;
+-	chan->move_id = AMP_ID_BREDR;
+-	chan->move_state = L2CAP_MOVE_STABLE;
+-	chan->move_role = L2CAP_MOVE_ROLE_NONE;
+-
+ 	if (chan->mode != L2CAP_MODE_ERTM)
+ 		return 0;
+ 
+@@ -3326,52 +3145,19 @@ static inline __u8 l2cap_select_mode(__u8 mode, __u16 remote_feat_mask)
+ 
+ static inline bool __l2cap_ews_supported(struct l2cap_conn *conn)
+ {
+-	return ((conn->local_fixed_chan & L2CAP_FC_A2MP) &&
+-		(conn->feat_mask & L2CAP_FEAT_EXT_WINDOW));
++	return (conn->feat_mask & L2CAP_FEAT_EXT_WINDOW);
+ }
+ 
+ static inline bool __l2cap_efs_supported(struct l2cap_conn *conn)
+ {
+-	return ((conn->local_fixed_chan & L2CAP_FC_A2MP) &&
+-		(conn->feat_mask & L2CAP_FEAT_EXT_FLOW));
++	return (conn->feat_mask & L2CAP_FEAT_EXT_FLOW);
+ }
+ 
+ static void __l2cap_set_ertm_timeouts(struct l2cap_chan *chan,
+ 				      struct l2cap_conf_rfc *rfc)
+ {
+-	if (chan->local_amp_id != AMP_ID_BREDR && chan->hs_hcon) {
+-		u64 ertm_to = chan->hs_hcon->hdev->amp_be_flush_to;
+-
+-		/* Class 1 devices have must have ERTM timeouts
+-		 * exceeding the Link Supervision Timeout.  The
+-		 * default Link Supervision Timeout for AMP
+-		 * controllers is 10 seconds.
+-		 *
+-		 * Class 1 devices use 0xffffffff for their
+-		 * best-effort flush timeout, so the clamping logic
+-		 * will result in a timeout that meets the above
+-		 * requirement.  ERTM timeouts are 16-bit values, so
+-		 * the maximum timeout is 65.535 seconds.
+-		 */
+-
+-		/* Convert timeout to milliseconds and round */
+-		ertm_to = DIV_ROUND_UP_ULL(ertm_to, 1000);
+-
+-		/* This is the recommended formula for class 2 devices
+-		 * that start ERTM timers when packets are sent to the
+-		 * controller.
+-		 */
+-		ertm_to = 3 * ertm_to + 500;
+-
+-		if (ertm_to > 0xffff)
+-			ertm_to = 0xffff;
+-
+-		rfc->retrans_timeout = cpu_to_le16((u16) ertm_to);
+-		rfc->monitor_timeout = rfc->retrans_timeout;
+-	} else {
+-		rfc->retrans_timeout = cpu_to_le16(L2CAP_DEFAULT_RETRANS_TO);
+-		rfc->monitor_timeout = cpu_to_le16(L2CAP_DEFAULT_MONITOR_TO);
+-	}
++	rfc->retrans_timeout = cpu_to_le16(L2CAP_DEFAULT_RETRANS_TO);
++	rfc->monitor_timeout = cpu_to_le16(L2CAP_DEFAULT_MONITOR_TO);
+ }
+ 
+ static inline void l2cap_txwin_setup(struct l2cap_chan *chan)
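With the high-speed (AMP) branch removed, the ERTM retransmission and monitor timeouts are now always the BR/EDR defaults. For reference, the deleted branch derived them from the controller's best-effort flush timeout; a minimal userspace sketch of that arithmetic (the names and the microsecond unit are assumptions for illustration, not kernel API):

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the deleted AMP path: flush_to_us drove the ERTM
     * retransmit/monitor timeouts. */
    static uint16_t amp_ertm_timeout(uint64_t flush_to_us)
    {
        uint64_t ms = (flush_to_us + 999) / 1000;   /* DIV_ROUND_UP to ms */
        uint64_t to = 3 * ms + 500;                 /* recommended formula */
        return to > 0xffff ? 0xffff : (uint16_t)to; /* 16-bit ERTM field */
    }

    int main(void)
    {
        /* A class 1 flush timeout of 0xffffffff clamps to the 65535 ms max. */
        printf("%u\n", (unsigned)amp_ertm_timeout(0xffffffffULL));
        return 0;
    }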
+@@ -3623,13 +3409,7 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 		case L2CAP_CONF_EWS:
+ 			if (olen != 2)
+ 				break;
+-			if (!(chan->conn->local_fixed_chan & L2CAP_FC_A2MP))
+-				return -ECONNREFUSED;
+-			set_bit(FLAG_EXT_CTRL, &chan->flags);
+-			set_bit(CONF_EWS_RECV, &chan->conf_state);
+-			chan->tx_win_max = L2CAP_DEFAULT_EXT_WINDOW;
+-			chan->remote_tx_win = val;
+-			break;
++			return -ECONNREFUSED;
+ 
+ 		default:
+ 			if (hint)
+@@ -4027,11 +3807,7 @@ void __l2cap_connect_rsp_defer(struct l2cap_chan *chan)
+ 	rsp.dcid   = cpu_to_le16(chan->scid);
+ 	rsp.result = cpu_to_le16(L2CAP_CR_SUCCESS);
+ 	rsp.status = cpu_to_le16(L2CAP_CS_NO_INFO);
+-
+-	if (chan->hs_hcon)
+-		rsp_code = L2CAP_CREATE_CHAN_RSP;
+-	else
+-		rsp_code = L2CAP_CONN_RSP;
++	rsp_code = L2CAP_CONN_RSP;
+ 
+ 	BT_DBG("chan %p rsp_code %u", chan, rsp_code);
+ 
+@@ -4190,7 +3966,6 @@ static struct l2cap_chan *l2cap_connect(struct l2cap_conn *conn,
+ 	chan->dst_type = bdaddr_dst_type(conn->hcon);
+ 	chan->psm  = psm;
+ 	chan->dcid = scid;
+-	chan->local_amp_id = amp_id;
+ 
+ 	__l2cap_chan_add(conn, chan);
+ 
+@@ -4516,10 +4291,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn,
+ 		/* check compatibility */
+ 
+ 		/* Send rsp for BR/EDR channel */
+-		if (!chan->hs_hcon)
+-			l2cap_send_efs_conf_rsp(chan, rsp, cmd->ident, flags);
+-		else
+-			chan->ident = cmd->ident;
++		l2cap_send_efs_conf_rsp(chan, rsp, cmd->ident, flags);
+ 	}
+ 
+ unlock:
+@@ -4571,15 +4343,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn,
+ 				goto done;
+ 			}
+ 
+-			if (!chan->hs_hcon) {
+-				l2cap_send_efs_conf_rsp(chan, buf, cmd->ident,
+-							0);
+-			} else {
+-				if (l2cap_check_efs(chan)) {
+-					amp_create_logical_link(chan);
+-					chan->ident = cmd->ident;
+-				}
+-			}
++			l2cap_send_efs_conf_rsp(chan, buf, cmd->ident, 0);
+ 		}
+ 		goto done;
+ 
+@@ -4750,9 +4514,6 @@ static inline int l2cap_information_req(struct l2cap_conn *conn,
+ 		if (!disable_ertm)
+ 			feat_mask |= L2CAP_FEAT_ERTM | L2CAP_FEAT_STREAMING
+ 				| L2CAP_FEAT_FCS;
+-		if (conn->local_fixed_chan & L2CAP_FC_A2MP)
+-			feat_mask |= L2CAP_FEAT_EXT_FLOW
+-				| L2CAP_FEAT_EXT_WINDOW;
+ 
+ 		put_unaligned_le32(feat_mask, rsp->data);
+ 		l2cap_send_cmd(conn, cmd->ident, L2CAP_INFO_RSP, sizeof(buf),
+@@ -4841,751 +4602,6 @@ static inline int l2cap_information_rsp(struct l2cap_conn *conn,
+ 	return 0;
+ }
+ 
+-static int l2cap_create_channel_req(struct l2cap_conn *conn,
+-				    struct l2cap_cmd_hdr *cmd,
+-				    u16 cmd_len, void *data)
+-{
+-	struct l2cap_create_chan_req *req = data;
+-	struct l2cap_create_chan_rsp rsp;
+-	struct l2cap_chan *chan;
+-	struct hci_dev *hdev;
+-	u16 psm, scid;
+-
+-	if (cmd_len != sizeof(*req))
+-		return -EPROTO;
+-
+-	if (!(conn->local_fixed_chan & L2CAP_FC_A2MP))
+-		return -EINVAL;
+-
+-	psm = le16_to_cpu(req->psm);
+-	scid = le16_to_cpu(req->scid);
+-
+-	BT_DBG("psm 0x%2.2x, scid 0x%4.4x, amp_id %d", psm, scid, req->amp_id);
+-
+-	/* For controller id 0 make BR/EDR connection */
+-	if (req->amp_id == AMP_ID_BREDR) {
+-		l2cap_connect(conn, cmd, data, L2CAP_CREATE_CHAN_RSP,
+-			      req->amp_id);
+-		return 0;
+-	}
+-
+-	/* Validate AMP controller id */
+-	hdev = hci_dev_get(req->amp_id);
+-	if (!hdev)
+-		goto error;
+-
+-	if (hdev->dev_type != HCI_AMP || !test_bit(HCI_UP, &hdev->flags)) {
+-		hci_dev_put(hdev);
+-		goto error;
+-	}
+-
+-	chan = l2cap_connect(conn, cmd, data, L2CAP_CREATE_CHAN_RSP,
+-			     req->amp_id);
+-	if (chan) {
+-		struct amp_mgr *mgr = conn->hcon->amp_mgr;
+-		struct hci_conn *hs_hcon;
+-
+-		hs_hcon = hci_conn_hash_lookup_ba(hdev, AMP_LINK,
+-						  &conn->hcon->dst);
+-		if (!hs_hcon) {
+-			hci_dev_put(hdev);
+-			cmd_reject_invalid_cid(conn, cmd->ident, chan->scid,
+-					       chan->dcid);
+-			return 0;
+-		}
+-
+-		BT_DBG("mgr %p bredr_chan %p hs_hcon %p", mgr, chan, hs_hcon);
+-
+-		mgr->bredr_chan = chan;
+-		chan->hs_hcon = hs_hcon;
+-		chan->fcs = L2CAP_FCS_NONE;
+-		conn->mtu = hdev->block_mtu;
+-	}
+-
+-	hci_dev_put(hdev);
+-
+-	return 0;
+-
+-error:
+-	rsp.dcid = 0;
+-	rsp.scid = cpu_to_le16(scid);
+-	rsp.result = cpu_to_le16(L2CAP_CR_BAD_AMP);
+-	rsp.status = cpu_to_le16(L2CAP_CS_NO_INFO);
+-
+-	l2cap_send_cmd(conn, cmd->ident, L2CAP_CREATE_CHAN_RSP,
+-		       sizeof(rsp), &rsp);
+-
+-	return 0;
+-}
+-
+-static void l2cap_send_move_chan_req(struct l2cap_chan *chan, u8 dest_amp_id)
+-{
+-	struct l2cap_move_chan_req req;
+-	u8 ident;
+-
+-	BT_DBG("chan %p, dest_amp_id %d", chan, dest_amp_id);
+-
+-	ident = l2cap_get_ident(chan->conn);
+-	chan->ident = ident;
+-
+-	req.icid = cpu_to_le16(chan->scid);
+-	req.dest_amp_id = dest_amp_id;
+-
+-	l2cap_send_cmd(chan->conn, ident, L2CAP_MOVE_CHAN_REQ, sizeof(req),
+-		       &req);
+-
+-	__set_chan_timer(chan, L2CAP_MOVE_TIMEOUT);
+-}
+-
+-static void l2cap_send_move_chan_rsp(struct l2cap_chan *chan, u16 result)
+-{
+-	struct l2cap_move_chan_rsp rsp;
+-
+-	BT_DBG("chan %p, result 0x%4.4x", chan, result);
+-
+-	rsp.icid = cpu_to_le16(chan->dcid);
+-	rsp.result = cpu_to_le16(result);
+-
+-	l2cap_send_cmd(chan->conn, chan->ident, L2CAP_MOVE_CHAN_RSP,
+-		       sizeof(rsp), &rsp);
+-}
+-
+-static void l2cap_send_move_chan_cfm(struct l2cap_chan *chan, u16 result)
+-{
+-	struct l2cap_move_chan_cfm cfm;
+-
+-	BT_DBG("chan %p, result 0x%4.4x", chan, result);
+-
+-	chan->ident = l2cap_get_ident(chan->conn);
+-
+-	cfm.icid = cpu_to_le16(chan->scid);
+-	cfm.result = cpu_to_le16(result);
+-
+-	l2cap_send_cmd(chan->conn, chan->ident, L2CAP_MOVE_CHAN_CFM,
+-		       sizeof(cfm), &cfm);
+-
+-	__set_chan_timer(chan, L2CAP_MOVE_TIMEOUT);
+-}
+-
+-static void l2cap_send_move_chan_cfm_icid(struct l2cap_conn *conn, u16 icid)
+-{
+-	struct l2cap_move_chan_cfm cfm;
+-
+-	BT_DBG("conn %p, icid 0x%4.4x", conn, icid);
+-
+-	cfm.icid = cpu_to_le16(icid);
+-	cfm.result = cpu_to_le16(L2CAP_MC_UNCONFIRMED);
+-
+-	l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_MOVE_CHAN_CFM,
+-		       sizeof(cfm), &cfm);
+-}
+-
+-static void l2cap_send_move_chan_cfm_rsp(struct l2cap_conn *conn, u8 ident,
+-					 u16 icid)
+-{
+-	struct l2cap_move_chan_cfm_rsp rsp;
+-
+-	BT_DBG("icid 0x%4.4x", icid);
+-
+-	rsp.icid = cpu_to_le16(icid);
+-	l2cap_send_cmd(conn, ident, L2CAP_MOVE_CHAN_CFM_RSP, sizeof(rsp), &rsp);
+-}
+-
+-static void __release_logical_link(struct l2cap_chan *chan)
+-{
+-	chan->hs_hchan = NULL;
+-	chan->hs_hcon = NULL;
+-
+-	/* Placeholder - release the logical link */
+-}
+-
+-static void l2cap_logical_fail(struct l2cap_chan *chan)
+-{
+-	/* Logical link setup failed */
+-	if (chan->state != BT_CONNECTED) {
+-		/* Create channel failure, disconnect */
+-		l2cap_send_disconn_req(chan, ECONNRESET);
+-		return;
+-	}
+-
+-	switch (chan->move_role) {
+-	case L2CAP_MOVE_ROLE_RESPONDER:
+-		l2cap_move_done(chan);
+-		l2cap_send_move_chan_rsp(chan, L2CAP_MR_NOT_SUPP);
+-		break;
+-	case L2CAP_MOVE_ROLE_INITIATOR:
+-		if (chan->move_state == L2CAP_MOVE_WAIT_LOGICAL_COMP ||
+-		    chan->move_state == L2CAP_MOVE_WAIT_LOGICAL_CFM) {
+-			/* Remote has only sent pending or
+-			 * success responses, clean up
+-			 */
+-			l2cap_move_done(chan);
+-		}
+-
+-		/* Other amp move states imply that the move
+-		 * has already aborted
+-		 */
+-		l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED);
+-		break;
+-	}
+-}
+-
+-static void l2cap_logical_finish_create(struct l2cap_chan *chan,
+-					struct hci_chan *hchan)
+-{
+-	struct l2cap_conf_rsp rsp;
+-
+-	chan->hs_hchan = hchan;
+-	chan->hs_hcon->l2cap_data = chan->conn;
+-
+-	l2cap_send_efs_conf_rsp(chan, &rsp, chan->ident, 0);
+-
+-	if (test_bit(CONF_INPUT_DONE, &chan->conf_state)) {
+-		int err;
+-
+-		set_default_fcs(chan);
+-
+-		err = l2cap_ertm_init(chan);
+-		if (err < 0)
+-			l2cap_send_disconn_req(chan, -err);
+-		else
+-			l2cap_chan_ready(chan);
+-	}
+-}
+-
+-static void l2cap_logical_finish_move(struct l2cap_chan *chan,
+-				      struct hci_chan *hchan)
+-{
+-	chan->hs_hcon = hchan->conn;
+-	chan->hs_hcon->l2cap_data = chan->conn;
+-
+-	BT_DBG("move_state %d", chan->move_state);
+-
+-	switch (chan->move_state) {
+-	case L2CAP_MOVE_WAIT_LOGICAL_COMP:
+-		/* Move confirm will be sent after a success
+-		 * response is received
+-		 */
+-		chan->move_state = L2CAP_MOVE_WAIT_RSP_SUCCESS;
+-		break;
+-	case L2CAP_MOVE_WAIT_LOGICAL_CFM:
+-		if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) {
+-			chan->move_state = L2CAP_MOVE_WAIT_LOCAL_BUSY;
+-		} else if (chan->move_role == L2CAP_MOVE_ROLE_INITIATOR) {
+-			chan->move_state = L2CAP_MOVE_WAIT_CONFIRM_RSP;
+-			l2cap_send_move_chan_cfm(chan, L2CAP_MC_CONFIRMED);
+-		} else if (chan->move_role == L2CAP_MOVE_ROLE_RESPONDER) {
+-			chan->move_state = L2CAP_MOVE_WAIT_CONFIRM;
+-			l2cap_send_move_chan_rsp(chan, L2CAP_MR_SUCCESS);
+-		}
+-		break;
+-	default:
+-		/* Move was not in expected state, free the channel */
+-		__release_logical_link(chan);
+-
+-		chan->move_state = L2CAP_MOVE_STABLE;
+-	}
+-}
+-
+-/* Call with chan locked */
+-void l2cap_logical_cfm(struct l2cap_chan *chan, struct hci_chan *hchan,
+-		       u8 status)
+-{
+-	BT_DBG("chan %p, hchan %p, status %d", chan, hchan, status);
+-
+-	if (status) {
+-		l2cap_logical_fail(chan);
+-		__release_logical_link(chan);
+-		return;
+-	}
+-
+-	if (chan->state != BT_CONNECTED) {
+-		/* Ignore logical link if channel is on BR/EDR */
+-		if (chan->local_amp_id != AMP_ID_BREDR)
+-			l2cap_logical_finish_create(chan, hchan);
+-	} else {
+-		l2cap_logical_finish_move(chan, hchan);
+-	}
+-}
+-
+-void l2cap_move_start(struct l2cap_chan *chan)
+-{
+-	BT_DBG("chan %p", chan);
+-
+-	if (chan->local_amp_id == AMP_ID_BREDR) {
+-		if (chan->chan_policy != BT_CHANNEL_POLICY_AMP_PREFERRED)
+-			return;
+-		chan->move_role = L2CAP_MOVE_ROLE_INITIATOR;
+-		chan->move_state = L2CAP_MOVE_WAIT_PREPARE;
+-		/* Placeholder - start physical link setup */
+-	} else {
+-		chan->move_role = L2CAP_MOVE_ROLE_INITIATOR;
+-		chan->move_state = L2CAP_MOVE_WAIT_RSP_SUCCESS;
+-		chan->move_id = 0;
+-		l2cap_move_setup(chan);
+-		l2cap_send_move_chan_req(chan, 0);
+-	}
+-}
+-
+-static void l2cap_do_create(struct l2cap_chan *chan, int result,
+-			    u8 local_amp_id, u8 remote_amp_id)
+-{
+-	BT_DBG("chan %p state %s %u -> %u", chan, state_to_string(chan->state),
+-	       local_amp_id, remote_amp_id);
+-
+-	chan->fcs = L2CAP_FCS_NONE;
+-
+-	/* Outgoing channel on AMP */
+-	if (chan->state == BT_CONNECT) {
+-		if (result == L2CAP_CR_SUCCESS) {
+-			chan->local_amp_id = local_amp_id;
+-			l2cap_send_create_chan_req(chan, remote_amp_id);
+-		} else {
+-			/* Revert to BR/EDR connect */
+-			l2cap_send_conn_req(chan);
+-		}
+-
+-		return;
+-	}
+-
+-	/* Incoming channel on AMP */
+-	if (__l2cap_no_conn_pending(chan)) {
+-		struct l2cap_conn_rsp rsp;
+-		char buf[128];
+-		rsp.scid = cpu_to_le16(chan->dcid);
+-		rsp.dcid = cpu_to_le16(chan->scid);
+-
+-		if (result == L2CAP_CR_SUCCESS) {
+-			/* Send successful response */
+-			rsp.result = cpu_to_le16(L2CAP_CR_SUCCESS);
+-			rsp.status = cpu_to_le16(L2CAP_CS_NO_INFO);
+-		} else {
+-			/* Send negative response */
+-			rsp.result = cpu_to_le16(L2CAP_CR_NO_MEM);
+-			rsp.status = cpu_to_le16(L2CAP_CS_NO_INFO);
+-		}
+-
+-		l2cap_send_cmd(chan->conn, chan->ident, L2CAP_CREATE_CHAN_RSP,
+-			       sizeof(rsp), &rsp);
+-
+-		if (result == L2CAP_CR_SUCCESS) {
+-			l2cap_state_change(chan, BT_CONFIG);
+-			set_bit(CONF_REQ_SENT, &chan->conf_state);
+-			l2cap_send_cmd(chan->conn, l2cap_get_ident(chan->conn),
+-				       L2CAP_CONF_REQ,
+-				       l2cap_build_conf_req(chan, buf, sizeof(buf)), buf);
+-			chan->num_conf_req++;
+-		}
+-	}
+-}
+-
+-static void l2cap_do_move_initiate(struct l2cap_chan *chan, u8 local_amp_id,
+-				   u8 remote_amp_id)
+-{
+-	l2cap_move_setup(chan);
+-	chan->move_id = local_amp_id;
+-	chan->move_state = L2CAP_MOVE_WAIT_RSP;
+-
+-	l2cap_send_move_chan_req(chan, remote_amp_id);
+-}
+-
+-static void l2cap_do_move_respond(struct l2cap_chan *chan, int result)
+-{
+-	struct hci_chan *hchan = NULL;
+-
+-	/* Placeholder - get hci_chan for logical link */
+-
+-	if (hchan) {
+-		if (hchan->state == BT_CONNECTED) {
+-			/* Logical link is ready to go */
+-			chan->hs_hcon = hchan->conn;
+-			chan->hs_hcon->l2cap_data = chan->conn;
+-			chan->move_state = L2CAP_MOVE_WAIT_CONFIRM;
+-			l2cap_send_move_chan_rsp(chan, L2CAP_MR_SUCCESS);
+-
+-			l2cap_logical_cfm(chan, hchan, L2CAP_MR_SUCCESS);
+-		} else {
+-			/* Wait for logical link to be ready */
+-			chan->move_state = L2CAP_MOVE_WAIT_LOGICAL_CFM;
+-		}
+-	} else {
+-		/* Logical link not available */
+-		l2cap_send_move_chan_rsp(chan, L2CAP_MR_NOT_ALLOWED);
+-	}
+-}
+-
+-static void l2cap_do_move_cancel(struct l2cap_chan *chan, int result)
+-{
+-	if (chan->move_role == L2CAP_MOVE_ROLE_RESPONDER) {
+-		u8 rsp_result;
+-		if (result == -EINVAL)
+-			rsp_result = L2CAP_MR_BAD_ID;
+-		else
+-			rsp_result = L2CAP_MR_NOT_ALLOWED;
+-
+-		l2cap_send_move_chan_rsp(chan, rsp_result);
+-	}
+-
+-	chan->move_role = L2CAP_MOVE_ROLE_NONE;
+-	chan->move_state = L2CAP_MOVE_STABLE;
+-
+-	/* Restart data transmission */
+-	l2cap_ertm_send(chan);
+-}
+-
+-/* Invoke with locked chan */
+-void __l2cap_physical_cfm(struct l2cap_chan *chan, int result)
+-{
+-	u8 local_amp_id = chan->local_amp_id;
+-	u8 remote_amp_id = chan->remote_amp_id;
+-
+-	BT_DBG("chan %p, result %d, local_amp_id %d, remote_amp_id %d",
+-	       chan, result, local_amp_id, remote_amp_id);
+-
+-	if (chan->state == BT_DISCONN || chan->state == BT_CLOSED)
+-		return;
+-
+-	if (chan->state != BT_CONNECTED) {
+-		l2cap_do_create(chan, result, local_amp_id, remote_amp_id);
+-	} else if (result != L2CAP_MR_SUCCESS) {
+-		l2cap_do_move_cancel(chan, result);
+-	} else {
+-		switch (chan->move_role) {
+-		case L2CAP_MOVE_ROLE_INITIATOR:
+-			l2cap_do_move_initiate(chan, local_amp_id,
+-					       remote_amp_id);
+-			break;
+-		case L2CAP_MOVE_ROLE_RESPONDER:
+-			l2cap_do_move_respond(chan, result);
+-			break;
+-		default:
+-			l2cap_do_move_cancel(chan, result);
+-			break;
+-		}
+-	}
+-}
+-
+-static inline int l2cap_move_channel_req(struct l2cap_conn *conn,
+-					 struct l2cap_cmd_hdr *cmd,
+-					 u16 cmd_len, void *data)
+-{
+-	struct l2cap_move_chan_req *req = data;
+-	struct l2cap_move_chan_rsp rsp;
+-	struct l2cap_chan *chan;
+-	u16 icid = 0;
+-	u16 result = L2CAP_MR_NOT_ALLOWED;
+-
+-	if (cmd_len != sizeof(*req))
+-		return -EPROTO;
+-
+-	icid = le16_to_cpu(req->icid);
+-
+-	BT_DBG("icid 0x%4.4x, dest_amp_id %d", icid, req->dest_amp_id);
+-
+-	if (!(conn->local_fixed_chan & L2CAP_FC_A2MP))
+-		return -EINVAL;
+-
+-	chan = l2cap_get_chan_by_dcid(conn, icid);
+-	if (!chan) {
+-		rsp.icid = cpu_to_le16(icid);
+-		rsp.result = cpu_to_le16(L2CAP_MR_NOT_ALLOWED);
+-		l2cap_send_cmd(conn, cmd->ident, L2CAP_MOVE_CHAN_RSP,
+-			       sizeof(rsp), &rsp);
+-		return 0;
+-	}
+-
+-	chan->ident = cmd->ident;
+-
+-	if (chan->scid < L2CAP_CID_DYN_START ||
+-	    chan->chan_policy == BT_CHANNEL_POLICY_BREDR_ONLY ||
+-	    (chan->mode != L2CAP_MODE_ERTM &&
+-	     chan->mode != L2CAP_MODE_STREAMING)) {
+-		result = L2CAP_MR_NOT_ALLOWED;
+-		goto send_move_response;
+-	}
+-
+-	if (chan->local_amp_id == req->dest_amp_id) {
+-		result = L2CAP_MR_SAME_ID;
+-		goto send_move_response;
+-	}
+-
+-	if (req->dest_amp_id != AMP_ID_BREDR) {
+-		struct hci_dev *hdev;
+-		hdev = hci_dev_get(req->dest_amp_id);
+-		if (!hdev || hdev->dev_type != HCI_AMP ||
+-		    !test_bit(HCI_UP, &hdev->flags)) {
+-			if (hdev)
+-				hci_dev_put(hdev);
+-
+-			result = L2CAP_MR_BAD_ID;
+-			goto send_move_response;
+-		}
+-		hci_dev_put(hdev);
+-	}
+-
+-	/* Detect a move collision.  Only send a collision response
+-	 * if this side has "lost", otherwise proceed with the move.
+-	 * The winner has the larger bd_addr.
+-	 */
+-	if ((__chan_is_moving(chan) ||
+-	     chan->move_role != L2CAP_MOVE_ROLE_NONE) &&
+-	    bacmp(&conn->hcon->src, &conn->hcon->dst) > 0) {
+-		result = L2CAP_MR_COLLISION;
+-		goto send_move_response;
+-	}
+-
+-	chan->move_role = L2CAP_MOVE_ROLE_RESPONDER;
+-	l2cap_move_setup(chan);
+-	chan->move_id = req->dest_amp_id;
+-
+-	if (req->dest_amp_id == AMP_ID_BREDR) {
+-		/* Moving to BR/EDR */
+-		if (test_bit(CONN_LOCAL_BUSY, &chan->conn_state)) {
+-			chan->move_state = L2CAP_MOVE_WAIT_LOCAL_BUSY;
+-			result = L2CAP_MR_PEND;
+-		} else {
+-			chan->move_state = L2CAP_MOVE_WAIT_CONFIRM;
+-			result = L2CAP_MR_SUCCESS;
+-		}
+-	} else {
+-		chan->move_state = L2CAP_MOVE_WAIT_PREPARE;
+-		/* Placeholder - uncomment when amp functions are available */
+-		/*amp_accept_physical(chan, req->dest_amp_id);*/
+-		result = L2CAP_MR_PEND;
+-	}
+-
+-send_move_response:
+-	l2cap_send_move_chan_rsp(chan, result);
+-
+-	l2cap_chan_unlock(chan);
+-	l2cap_chan_put(chan);
+-
+-	return 0;
+-}
+-
+-static void l2cap_move_continue(struct l2cap_conn *conn, u16 icid, u16 result)
+-{
+-	struct l2cap_chan *chan;
+-	struct hci_chan *hchan = NULL;
+-
+-	chan = l2cap_get_chan_by_scid(conn, icid);
+-	if (!chan) {
+-		l2cap_send_move_chan_cfm_icid(conn, icid);
+-		return;
+-	}
+-
+-	__clear_chan_timer(chan);
+-	if (result == L2CAP_MR_PEND)
+-		__set_chan_timer(chan, L2CAP_MOVE_ERTX_TIMEOUT);
+-
+-	switch (chan->move_state) {
+-	case L2CAP_MOVE_WAIT_LOGICAL_COMP:
+-		/* Move confirm will be sent when logical link
+-		 * is complete.
+-		 */
+-		chan->move_state = L2CAP_MOVE_WAIT_LOGICAL_CFM;
+-		break;
+-	case L2CAP_MOVE_WAIT_RSP_SUCCESS:
+-		if (result == L2CAP_MR_PEND) {
+-			break;
+-		} else if (test_bit(CONN_LOCAL_BUSY,
+-				    &chan->conn_state)) {
+-			chan->move_state = L2CAP_MOVE_WAIT_LOCAL_BUSY;
+-		} else {
+-			/* Logical link is up or moving to BR/EDR,
+-			 * proceed with move
+-			 */
+-			chan->move_state = L2CAP_MOVE_WAIT_CONFIRM_RSP;
+-			l2cap_send_move_chan_cfm(chan, L2CAP_MC_CONFIRMED);
+-		}
+-		break;
+-	case L2CAP_MOVE_WAIT_RSP:
+-		/* Moving to AMP */
+-		if (result == L2CAP_MR_SUCCESS) {
+-			/* Remote is ready, send confirm immediately
+-			 * after logical link is ready
+-			 */
+-			chan->move_state = L2CAP_MOVE_WAIT_LOGICAL_CFM;
+-		} else {
+-			/* Both logical link and move success
+-			 * are required to confirm
+-			 */
+-			chan->move_state = L2CAP_MOVE_WAIT_LOGICAL_COMP;
+-		}
+-
+-		/* Placeholder - get hci_chan for logical link */
+-		if (!hchan) {
+-			/* Logical link not available */
+-			l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED);
+-			break;
+-		}
+-
+-		/* If the logical link is not yet connected, do not
+-		 * send confirmation.
+-		 */
+-		if (hchan->state != BT_CONNECTED)
+-			break;
+-
+-		/* Logical link is already ready to go */
+-
+-		chan->hs_hcon = hchan->conn;
+-		chan->hs_hcon->l2cap_data = chan->conn;
+-
+-		if (result == L2CAP_MR_SUCCESS) {
+-			/* Can confirm now */
+-			l2cap_send_move_chan_cfm(chan, L2CAP_MC_CONFIRMED);
+-		} else {
+-			/* Now only need move success
+-			 * to confirm
+-			 */
+-			chan->move_state = L2CAP_MOVE_WAIT_RSP_SUCCESS;
+-		}
+-
+-		l2cap_logical_cfm(chan, hchan, L2CAP_MR_SUCCESS);
+-		break;
+-	default:
+-		/* Any other amp move state means the move failed. */
+-		chan->move_id = chan->local_amp_id;
+-		l2cap_move_done(chan);
+-		l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED);
+-	}
+-
+-	l2cap_chan_unlock(chan);
+-	l2cap_chan_put(chan);
+-}
+-
+-static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid,
+-			    u16 result)
+-{
+-	struct l2cap_chan *chan;
+-
+-	chan = l2cap_get_chan_by_ident(conn, ident);
+-	if (!chan) {
+-		/* Could not locate channel, icid is best guess */
+-		l2cap_send_move_chan_cfm_icid(conn, icid);
+-		return;
+-	}
+-
+-	__clear_chan_timer(chan);
+-
+-	if (chan->move_role == L2CAP_MOVE_ROLE_INITIATOR) {
+-		if (result == L2CAP_MR_COLLISION) {
+-			chan->move_role = L2CAP_MOVE_ROLE_RESPONDER;
+-		} else {
+-			/* Cleanup - cancel move */
+-			chan->move_id = chan->local_amp_id;
+-			l2cap_move_done(chan);
+-		}
+-	}
+-
+-	l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED);
+-
+-	l2cap_chan_unlock(chan);
+-	l2cap_chan_put(chan);
+-}
+-
+-static int l2cap_move_channel_rsp(struct l2cap_conn *conn,
+-				  struct l2cap_cmd_hdr *cmd,
+-				  u16 cmd_len, void *data)
+-{
+-	struct l2cap_move_chan_rsp *rsp = data;
+-	u16 icid, result;
+-
+-	if (cmd_len != sizeof(*rsp))
+-		return -EPROTO;
+-
+-	icid = le16_to_cpu(rsp->icid);
+-	result = le16_to_cpu(rsp->result);
+-
+-	BT_DBG("icid 0x%4.4x, result 0x%4.4x", icid, result);
+-
+-	if (result == L2CAP_MR_SUCCESS || result == L2CAP_MR_PEND)
+-		l2cap_move_continue(conn, icid, result);
+-	else
+-		l2cap_move_fail(conn, cmd->ident, icid, result);
+-
+-	return 0;
+-}
+-
+-static int l2cap_move_channel_confirm(struct l2cap_conn *conn,
+-				      struct l2cap_cmd_hdr *cmd,
+-				      u16 cmd_len, void *data)
+-{
+-	struct l2cap_move_chan_cfm *cfm = data;
+-	struct l2cap_chan *chan;
+-	u16 icid, result;
+-
+-	if (cmd_len != sizeof(*cfm))
+-		return -EPROTO;
+-
+-	icid = le16_to_cpu(cfm->icid);
+-	result = le16_to_cpu(cfm->result);
+-
+-	BT_DBG("icid 0x%4.4x, result 0x%4.4x", icid, result);
+-
+-	chan = l2cap_get_chan_by_dcid(conn, icid);
+-	if (!chan) {
+-		/* Spec requires a response even if the icid was not found */
+-		l2cap_send_move_chan_cfm_rsp(conn, cmd->ident, icid);
+-		return 0;
+-	}
+-
+-	if (chan->move_state == L2CAP_MOVE_WAIT_CONFIRM) {
+-		if (result == L2CAP_MC_CONFIRMED) {
+-			chan->local_amp_id = chan->move_id;
+-			if (chan->local_amp_id == AMP_ID_BREDR)
+-				__release_logical_link(chan);
+-		} else {
+-			chan->move_id = chan->local_amp_id;
+-		}
+-
+-		l2cap_move_done(chan);
+-	}
+-
+-	l2cap_send_move_chan_cfm_rsp(conn, cmd->ident, icid);
+-
+-	l2cap_chan_unlock(chan);
+-	l2cap_chan_put(chan);
+-
+-	return 0;
+-}
+-
+-static inline int l2cap_move_channel_confirm_rsp(struct l2cap_conn *conn,
+-						 struct l2cap_cmd_hdr *cmd,
+-						 u16 cmd_len, void *data)
+-{
+-	struct l2cap_move_chan_cfm_rsp *rsp = data;
+-	struct l2cap_chan *chan;
+-	u16 icid;
+-
+-	if (cmd_len != sizeof(*rsp))
+-		return -EPROTO;
+-
+-	icid = le16_to_cpu(rsp->icid);
+-
+-	BT_DBG("icid 0x%4.4x", icid);
+-
+-	chan = l2cap_get_chan_by_scid(conn, icid);
+-	if (!chan)
+-		return 0;
+-
+-	__clear_chan_timer(chan);
+-
+-	if (chan->move_state == L2CAP_MOVE_WAIT_CONFIRM_RSP) {
+-		chan->local_amp_id = chan->move_id;
+-
+-		if (chan->local_amp_id == AMP_ID_BREDR && chan->hs_hchan)
+-			__release_logical_link(chan);
+-
+-		l2cap_move_done(chan);
+-	}
+-
+-	l2cap_chan_unlock(chan);
+-	l2cap_chan_put(chan);
+-
+-	return 0;
+-}
+-
+ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn,
+ 					      struct l2cap_cmd_hdr *cmd,
+ 					      u16 cmd_len, u8 *data)
+@@ -5745,7 +4761,6 @@ static inline int l2cap_bredr_sig_cmd(struct l2cap_conn *conn,
+ 		break;
+ 
+ 	case L2CAP_CONN_RSP:
+-	case L2CAP_CREATE_CHAN_RSP:
+ 		l2cap_connect_create_rsp(conn, cmd, cmd_len, data);
+ 		break;
+ 
+@@ -5780,26 +4795,6 @@ static inline int l2cap_bredr_sig_cmd(struct l2cap_conn *conn,
+ 		l2cap_information_rsp(conn, cmd, cmd_len, data);
+ 		break;
+ 
+-	case L2CAP_CREATE_CHAN_REQ:
+-		err = l2cap_create_channel_req(conn, cmd, cmd_len, data);
+-		break;
+-
+-	case L2CAP_MOVE_CHAN_REQ:
+-		err = l2cap_move_channel_req(conn, cmd, cmd_len, data);
+-		break;
+-
+-	case L2CAP_MOVE_CHAN_RSP:
+-		l2cap_move_channel_rsp(conn, cmd, cmd_len, data);
+-		break;
+-
+-	case L2CAP_MOVE_CHAN_CFM:
+-		err = l2cap_move_channel_confirm(conn, cmd, cmd_len, data);
+-		break;
+-
+-	case L2CAP_MOVE_CHAN_CFM_RSP:
+-		l2cap_move_channel_confirm_rsp(conn, cmd, cmd_len, data);
+-		break;
+-
+ 	default:
+ 		BT_ERR("Unknown BR/EDR signaling command 0x%2.2x", cmd->code);
+ 		err = -EINVAL;
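With the CREATE_CHAN/MOVE_CHAN cases gone, those opcodes now fall into the default arm like any other unknown BR/EDR signaling command, so a peer that still sends them gets a command reject. A small sketch of that dispatch shape (opcode value taken from the L2CAP spec; in the real code the reject itself is sent by the caller on error):

    #include <stdio.h>

    static int dispatch(unsigned char code)
    {
        switch (code) {
        case 0x0b:          /* L2CAP_INFO_RSP, still handled */
            return 0;
        default:
            fprintf(stderr, "Unknown BR/EDR signaling command 0x%2.2x\n",
                    code);
            return -22;     /* -EINVAL; caller sends a command reject */
        }
    }

    int main(void)
    {
        return dispatch(0x0c) == -22 ? 0 : 1;  /* 0x0c: old CREATE_CHAN_REQ */
    }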
+@@ -7051,8 +6046,8 @@ static int l2cap_rx_state_recv(struct l2cap_chan *chan,
+ 		if (control->final) {
+ 			clear_bit(CONN_REMOTE_BUSY, &chan->conn_state);
+ 
+-			if (!test_and_clear_bit(CONN_REJ_ACT, &chan->conn_state) &&
+-			    !__chan_is_moving(chan)) {
++			if (!test_and_clear_bit(CONN_REJ_ACT,
++						&chan->conn_state)) {
+ 				control->final = 0;
+ 				l2cap_retransmit_all(chan, control);
+ 			}
+@@ -7245,11 +6240,7 @@ static int l2cap_finish_move(struct l2cap_chan *chan)
+ 	BT_DBG("chan %p", chan);
+ 
+ 	chan->rx_state = L2CAP_RX_STATE_RECV;
+-
+-	if (chan->hs_hcon)
+-		chan->conn->mtu = chan->hs_hcon->hdev->block_mtu;
+-	else
+-		chan->conn->mtu = chan->conn->hcon->hdev->acl_mtu;
++	chan->conn->mtu = chan->conn->hcon->hdev->acl_mtu;
+ 
+ 	return l2cap_resegment(chan);
+ }
+@@ -7316,11 +6307,7 @@ static int l2cap_rx_state_wait_f(struct l2cap_chan *chan,
+ 	 */
+ 	chan->next_tx_seq = control->reqseq;
+ 	chan->unacked_frames = 0;
+-
+-	if (chan->hs_hcon)
+-		chan->conn->mtu = chan->hs_hcon->hdev->block_mtu;
+-	else
+-		chan->conn->mtu = chan->conn->hcon->hdev->acl_mtu;
++	chan->conn->mtu = chan->conn->hcon->hdev->acl_mtu;
+ 
+ 	err = l2cap_resegment(chan);
+ 
+@@ -7672,21 +6659,10 @@ static void l2cap_data_channel(struct l2cap_conn *conn, u16 cid,
+ 
+ 	chan = l2cap_get_chan_by_scid(conn, cid);
+ 	if (!chan) {
+-		if (cid == L2CAP_CID_A2MP) {
+-			chan = a2mp_channel_create(conn, skb);
+-			if (!chan) {
+-				kfree_skb(skb);
+-				return;
+-			}
+-
+-			l2cap_chan_hold(chan);
+-			l2cap_chan_lock(chan);
+-		} else {
+-			BT_DBG("unknown cid 0x%4.4x", cid);
+-			/* Drop packet and return */
+-			kfree_skb(skb);
+-			return;
+-		}
++		BT_DBG("unknown cid 0x%4.4x", cid);
++		/* Drop packet and return */
++		kfree_skb(skb);
++		return;
+ 	}
+ 
+ 	BT_DBG("chan %p, len %d", chan, skb->len);
+@@ -7887,10 +6863,6 @@ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon)
+ 
+ 	conn->local_fixed_chan = L2CAP_FC_SIG_BREDR | L2CAP_FC_CONNLESS;
+ 
+-	if (hcon->type == ACL_LINK &&
+-	    hci_dev_test_flag(hcon->hdev, HCI_HS_ENABLED))
+-		conn->local_fixed_chan |= L2CAP_FC_A2MP;
+-
+ 	if (hci_dev_test_flag(hcon->hdev, HCI_LE_ENABLED) &&
+ 	    (bredr_sc_enabled(hcon->hdev) ||
+ 	     hci_dev_test_flag(hcon->hdev, HCI_FORCE_BREDR_SMP)))
+@@ -8355,11 +7327,6 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
+ 		BT_DBG("chan %p scid 0x%4.4x state %s", chan, chan->scid,
+ 		       state_to_string(chan->state));
+ 
+-		if (chan->scid == L2CAP_CID_A2MP) {
+-			l2cap_chan_unlock(chan);
+-			continue;
+-		}
+-
+ 		if (!status && encrypt)
+ 			chan->sec_level = hcon->sec_level;
+ 
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index e50d3d102078e..ee7a41d6994fc 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1027,23 +1027,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			break;
+ 		}
+ 
+-		if (opt > BT_CHANNEL_POLICY_AMP_PREFERRED) {
+-			err = -EINVAL;
+-			break;
+-		}
+-
+-		if (chan->mode != L2CAP_MODE_ERTM &&
+-		    chan->mode != L2CAP_MODE_STREAMING) {
+-			err = -EOPNOTSUPP;
+-			break;
+-		}
+-
+-		chan->chan_policy = (u8) opt;
+-
+-		if (sk->sk_state == BT_CONNECTED &&
+-		    chan->move_role == L2CAP_MOVE_ROLE_NONE)
+-			l2cap_move_start(chan);
+-
++		err = -EOPNOTSUPP;
+ 		break;
+ 
+ 	case BT_SNDMTU:
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 9dd815b6603fe..92fd3786bbdff 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -835,8 +835,6 @@ static u32 get_supported_settings(struct hci_dev *hdev)
+ 
+ 		if (lmp_ssp_capable(hdev)) {
+ 			settings |= MGMT_SETTING_SSP;
+-			if (IS_ENABLED(CONFIG_BT_HS))
+-				settings |= MGMT_SETTING_HS;
+ 		}
+ 
+ 		if (lmp_sc_capable(hdev))
+@@ -901,9 +899,6 @@ static u32 get_current_settings(struct hci_dev *hdev)
+ 	if (hci_dev_test_flag(hdev, HCI_SSP_ENABLED))
+ 		settings |= MGMT_SETTING_SSP;
+ 
+-	if (hci_dev_test_flag(hdev, HCI_HS_ENABLED))
+-		settings |= MGMT_SETTING_HS;
+-
+ 	if (hci_dev_test_flag(hdev, HCI_ADVERTISING))
+ 		settings |= MGMT_SETTING_ADVERTISING;
+ 
+@@ -1045,6 +1040,8 @@ static void rpa_expired(struct work_struct *work)
+ 	hci_cmd_sync_queue(hdev, rpa_expired_sync, NULL, NULL);
+ }
+ 
++static int set_discoverable_sync(struct hci_dev *hdev, void *data);
++
+ static void discov_off(struct work_struct *work)
+ {
+ 	struct hci_dev *hdev = container_of(work, struct hci_dev,
+@@ -1063,7 +1060,7 @@ static void discov_off(struct work_struct *work)
+ 	hci_dev_clear_flag(hdev, HCI_DISCOVERABLE);
+ 	hdev->discov_timeout = 0;
+ 
+-	hci_update_discoverable(hdev);
++	hci_cmd_sync_queue(hdev, set_discoverable_sync, NULL, NULL);
+ 
+ 	mgmt_new_settings(hdev);
+ 
+@@ -1407,7 +1404,7 @@ static int set_powered(struct sock *sk, struct hci_dev *hdev, void *data,
+ 
+ 	/* Cancel potentially blocking sync operation before power off */
+ 	if (cp->val == 0x00) {
+-		__hci_cmd_sync_cancel(hdev, -EHOSTDOWN);
++		hci_cmd_sync_cancel_sync(hdev, -EHOSTDOWN);
+ 		err = hci_cmd_sync_queue(hdev, set_powered_sync, cmd,
+ 					 mgmt_set_powered_complete);
+ 	} else {
+@@ -1928,7 +1925,6 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 		if (enable && hci_dev_test_and_clear_flag(hdev,
+ 							  HCI_SSP_ENABLED)) {
+-			hci_dev_clear_flag(hdev, HCI_HS_ENABLED);
+ 			new_settings(hdev, NULL);
+ 		}
+ 
+@@ -1941,12 +1937,6 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 		changed = !hci_dev_test_and_set_flag(hdev, HCI_SSP_ENABLED);
+ 	} else {
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
+-
+-		if (!changed)
+-			changed = hci_dev_test_and_clear_flag(hdev,
+-							      HCI_HS_ENABLED);
+-		else
+-			hci_dev_clear_flag(hdev, HCI_HS_ENABLED);
+ 	}
+ 
+ 	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, settings_rsp, &match);
+@@ -2010,11 +2000,6 @@ static int set_ssp(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ 		} else {
+ 			changed = hci_dev_test_and_clear_flag(hdev,
+ 							      HCI_SSP_ENABLED);
+-			if (!changed)
+-				changed = hci_dev_test_and_clear_flag(hdev,
+-								      HCI_HS_ENABLED);
+-			else
+-				hci_dev_clear_flag(hdev, HCI_HS_ENABLED);
+ 		}
+ 
+ 		err = send_settings_rsp(sk, MGMT_OP_SET_SSP, hdev);
+@@ -2060,63 +2045,10 @@ static int set_ssp(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ 
+ static int set_hs(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ {
+-	struct mgmt_mode *cp = data;
+-	bool changed;
+-	u8 status;
+-	int err;
+-
+ 	bt_dev_dbg(hdev, "sock %p", sk);
+ 
+-	if (!IS_ENABLED(CONFIG_BT_HS))
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
+-				       MGMT_STATUS_NOT_SUPPORTED);
+-
+-	status = mgmt_bredr_support(hdev);
+-	if (status)
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS, status);
+-
+-	if (!lmp_ssp_capable(hdev))
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
++	return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
+ 				       MGMT_STATUS_NOT_SUPPORTED);
+-
+-	if (!hci_dev_test_flag(hdev, HCI_SSP_ENABLED))
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
+-				       MGMT_STATUS_REJECTED);
+-
+-	if (cp->val != 0x00 && cp->val != 0x01)
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
+-				       MGMT_STATUS_INVALID_PARAMS);
+-
+-	hci_dev_lock(hdev);
+-
+-	if (pending_find(MGMT_OP_SET_SSP, hdev)) {
+-		err = mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
+-				      MGMT_STATUS_BUSY);
+-		goto unlock;
+-	}
+-
+-	if (cp->val) {
+-		changed = !hci_dev_test_and_set_flag(hdev, HCI_HS_ENABLED);
+-	} else {
+-		if (hdev_is_powered(hdev)) {
+-			err = mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_HS,
+-					      MGMT_STATUS_REJECTED);
+-			goto unlock;
+-		}
+-
+-		changed = hci_dev_test_and_clear_flag(hdev, HCI_HS_ENABLED);
+-	}
+-
+-	err = send_settings_rsp(sk, MGMT_OP_SET_HS, hdev);
+-	if (err < 0)
+-		goto unlock;
+-
+-	if (changed)
+-		err = new_settings(hdev, sk);
+-
+-unlock:
+-	hci_dev_unlock(hdev);
+-	return err;
+ }
+ 
+ static void set_le_complete(struct hci_dev *hdev, void *data, int err)
+@@ -3186,6 +3118,7 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
+ static u8 link_to_bdaddr(u8 link_type, u8 addr_type)
+ {
+ 	switch (link_type) {
++	case ISO_LINK:
+ 	case LE_LINK:
+ 		switch (addr_type) {
+ 		case ADDR_LE_DEV_PUBLIC:
+@@ -6764,7 +6697,6 @@ static int set_bredr(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ 			hci_dev_clear_flag(hdev, HCI_SSP_ENABLED);
+ 			hci_dev_clear_flag(hdev, HCI_LINK_SECURITY);
+ 			hci_dev_clear_flag(hdev, HCI_FAST_CONNECTABLE);
+-			hci_dev_clear_flag(hdev, HCI_HS_ENABLED);
+ 		}
+ 
+ 		hci_dev_change_flag(hdev, HCI_BREDR_ENABLED);
+@@ -8468,7 +8400,7 @@ static int read_adv_features(struct sock *sk, struct hci_dev *hdev,
+ 
+ static u8 calculate_name_len(struct hci_dev *hdev)
+ {
+-	u8 buf[HCI_MAX_SHORT_NAME_LENGTH + 3];
++	u8 buf[HCI_MAX_SHORT_NAME_LENGTH + 2]; /* len + type + name */
+ 
+ 	return eir_append_local_name(hdev, buf, 0);
+ }
+@@ -9679,6 +9611,9 @@ void mgmt_device_connected(struct hci_dev *hdev, struct hci_conn *conn,
+ 	u16 eir_len = 0;
+ 	u32 flags = 0;
+ 
++	if (test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags))
++		return;
++
+ 	/* allocate buff for LE or BR/EDR adv */
+ 	if (conn->le_adv_data_len > 0)
+ 		skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_CONNECTED,
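The added test_and_set_bit() on HCI_CONN_MGMT_CONNECTED makes the connected event idempotent: only the first caller emits it, repeats return early. A userspace analogue of that once-only guard:

    #include <stdatomic.h>
    #include <stdio.h>

    /* Analogue of the test_and_set_bit() guard: only the first caller sees
     * the flag clear and emits the event; later callers are suppressed. */
    static atomic_flag mgmt_connected = ATOMIC_FLAG_INIT;

    static void device_connected(void)
    {
        if (atomic_flag_test_and_set(&mgmt_connected))
            return;     /* already announced, drop the duplicate */
        puts("emit MGMT_EV_DEVICE_CONNECTED");
    }

    int main(void)
    {
        device_connected();
        device_connected();     /* prints nothing the second time */
        return 0;
    }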
+@@ -9764,14 +9699,6 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	struct mgmt_ev_device_disconnected ev;
+ 	struct sock *sk = NULL;
+ 
+-	/* The connection is still in hci_conn_hash so test for 1
+-	 * instead of 0 to know if this is the last one.
+-	 */
+-	if (mgmt_powering_down(hdev) && hci_conn_count(hdev) == 1) {
+-		cancel_delayed_work(&hdev->power_off);
+-		queue_work(hdev->req_workqueue, &hdev->power_off.work);
+-	}
+-
+ 	if (!mgmt_connected)
+ 		return;
+ 
+@@ -9828,14 +9755,6 @@ void mgmt_connect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+ {
+ 	struct mgmt_ev_connect_failed ev;
+ 
+-	/* The connection is still in hci_conn_hash so test for 1
+-	 * instead of 0 to know if this is the last one.
+-	 */
+-	if (mgmt_powering_down(hdev) && hci_conn_count(hdev) == 1) {
+-		cancel_delayed_work(&hdev->power_off);
+-		queue_work(hdev->req_workqueue, &hdev->power_off.work);
+-	}
+-
+ 	bacpy(&ev.addr.bdaddr, bdaddr);
+ 	ev.addr.type = link_to_bdaddr(link_type, addr_type);
+ 	ev.status = mgmt_status(status);
+diff --git a/net/bluetooth/msft.c b/net/bluetooth/msft.c
+index 630e3023273b2..9612c5d1b13f6 100644
+--- a/net/bluetooth/msft.c
++++ b/net/bluetooth/msft.c
+@@ -875,6 +875,7 @@ static int msft_add_address_filter_sync(struct hci_dev *hdev, void *data)
+ 		remove = true;
+ 		goto done;
+ 	}
++
+ 	cp->sub_opcode           = MSFT_OP_LE_MONITOR_ADVERTISEMENT;
+ 	cp->rssi_high		 = address_filter->rssi_high;
+ 	cp->rssi_low		 = address_filter->rssi_low;
+@@ -887,6 +888,8 @@ static int msft_add_address_filter_sync(struct hci_dev *hdev, void *data)
+ 
+ 	skb = __hci_cmd_sync(hdev, hdev->msft_opcode, size, cp,
+ 			     HCI_CMD_TIMEOUT);
++	kfree(cp);
++
+ 	if (IS_ERR(skb)) {
+ 		bt_dev_err(hdev, "Failed to enable address %pMR filter",
+ 			   &address_filter->bdaddr);
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index 053ef8f25fae4..1d34d84970332 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -1941,7 +1941,7 @@ static struct rfcomm_session *rfcomm_process_rx(struct rfcomm_session *s)
+ 	/* Get data directly from socket receive queue without copying it. */
+ 	while ((skb = skb_dequeue(&sk->sk_receive_queue))) {
+ 		skb_orphan(skb);
+-		if (!skb_linearize(skb)) {
++		if (!skb_linearize(skb) && sk->sk_state != BT_CLOSED) {
+ 			s = rfcomm_recv_frame(s, skb);
+ 			if (!s)
+ 				break;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index add22ca0dff95..e3c06ccf21d03 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2262,7 +2262,7 @@ void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev)
+ 	rcu_read_lock();
+ again:
+ 	list_for_each_entry_rcu(ptype, ptype_list, list) {
+-		if (ptype->ignore_outgoing)
++		if (READ_ONCE(ptype->ignore_outgoing))
+ 			continue;
+ 
+ 		/* Never send packets back to the socket
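Wrapping the lockless read of ptype->ignore_outgoing in READ_ONCE() pairs with a WRITE_ONCE() on the update side and tells the compiler (and KCSAN) that this data race is intentional; the same annotation pattern appears in the sock_diag and inet_diag hunks further down. Simplified stand-ins showing the idea (the real macros live in the kernel's rwonce.h and handle more cases):

    #include <stdio.h>

    /* Marked, single-copy accesses through a volatile cast; assumes the
     * field is a machine-word scalar so the access is not torn. */
    #define WRITE_ONCE(x, val)  (*(volatile __typeof__(x) *)&(x) = (val))
    #define READ_ONCE(x)        (*(volatile __typeof__(x) *)&(x))

    static int ignore_outgoing;     /* written by config path, read lockless */

    int main(void)
    {
        WRITE_ONCE(ignore_outgoing, 1);     /* writer side */
        if (READ_ONCE(ignore_outgoing))     /* lockless reader side */
            puts("skip this ptype");
        return 0;
    }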
+@@ -6668,6 +6668,8 @@ static int napi_threaded_poll(void *data)
+ 	void *have;
+ 
+ 	while (!napi_thread_wait(napi)) {
++		unsigned long last_qs = jiffies;
++
+ 		for (;;) {
+ 			bool repoll = false;
+ 
+@@ -6692,6 +6694,7 @@ static int napi_threaded_poll(void *data)
+ 			if (!repoll)
+ 				break;
+ 
++			rcu_softirq_qs_periodic(last_qs);
+ 			cond_resched();
+ 		}
+ 	}
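napi_threaded_poll() runs in a kthread that can repoll for a long time without sleeping; the added rcu_softirq_qs_periodic(last_qs) presumably reports an RCU quiescent state at intervals so a busy poller cannot stall grace periods. A minimal userspace sketch of the same do-maintenance-at-most-once-per-interval shape (the helper itself is kernel-internal):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t last_qs = time(NULL);

        for (long i = 0; i < 100000000L; i++) {
            /* ... per-iteration polling work ... */
            time_t now = time(NULL);
            if (now - last_qs >= 1) {   /* at most once a second */
                puts("report quiescent state");
                last_qs = now;
            }
        }
        return 0;
    }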
+diff --git a/net/core/gso_test.c b/net/core/gso_test.c
+index 4c2e77bd12f4b..358c44680d917 100644
+--- a/net/core/gso_test.c
++++ b/net/core/gso_test.c
+@@ -225,7 +225,7 @@ static void gso_test_func(struct kunit *test)
+ 
+ 	segs = skb_segment(skb, features);
+ 	if (IS_ERR(segs)) {
+-		KUNIT_FAIL(test, "segs error %lld", PTR_ERR(segs));
++		KUNIT_FAIL(test, "segs error %pe", segs);
+ 		goto free_gso_skb;
+ 	} else if (!segs) {
+ 		KUNIT_FAIL(test, "no segments");
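Printing the error pointer with %pe renders a symbolic errno and also fixes the old format mismatch, where PTR_ERR() returns a long but %lld expects long long. A userspace model of the ERR_PTR encoding being printed:

    #include <stdio.h>

    /* Model of the kernel's ERR_PTR scheme: small negative errno values
     * encoded in the top page of the address space. */
    static inline void *ERR_PTR(long err) { return (void *)err; }
    static inline long PTR_ERR(const void *p) { return (long)p; }
    static inline int IS_ERR(const void *p)
    {
        return (unsigned long)p >= (unsigned long)-4095;
    }

    int main(void)
    {
        void *segs = ERR_PTR(-12);      /* -ENOMEM */
        if (IS_ERR(segs))
            printf("segs error %ld\n", PTR_ERR(segs));
        return 0;
    }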
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 7dc47c17d8638..737917c7ac627 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -105,7 +105,7 @@ static int scm_fp_copy(struct cmsghdr *cmsg, struct scm_fp_list **fplp)
+ 		if (fd < 0 || !(file = fget_raw(fd)))
+ 			return -EBADF;
+ 		/* don't allow io_uring files */
+-		if (io_uring_get_socket(file)) {
++		if (io_is_uring_fops(file)) {
+ 			fput(file);
+ 			return -EINVAL;
+ 		}
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 94cc40a6f7975..78cb3304fb353 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -6676,6 +6676,14 @@ static struct skb_ext *skb_ext_maybe_cow(struct skb_ext *old,
+ 		for (i = 0; i < sp->len; i++)
+ 			xfrm_state_hold(sp->xvec[i]);
+ 	}
++#endif
++#ifdef CONFIG_MCTP_FLOWS
++	if (old_active & (1 << SKB_EXT_MCTP)) {
++		struct mctp_flow *flow = skb_ext_get_ptr(old, SKB_EXT_MCTP);
++
++		if (flow->key)
++			refcount_inc(&flow->key->refs);
++	}
+ #endif
+ 	__skb_ext_put(old);
+ 	return new;
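When skb_ext_maybe_cow() duplicates an extension block, every refcounted object reachable from it needs an extra reference for the copy, as was already done for the xfrm states just above; the hunk adds the missing bump for the MCTP flow key. The general rule, sketched:

    #include <stdlib.h>

    struct key { int refs; };
    struct ext { struct key *key; };

    /* Copy-on-write rule: a duplicated extension must take its own
     * reference on every refcounted object it points at. */
    static struct ext *ext_cow(const struct ext *old)
    {
        struct ext *copy = malloc(sizeof(*copy));

        if (!copy)
            return NULL;
        *copy = *old;               /* shallow copy shares old->key */
        if (copy->key)
            copy->key->refs++;      /* so the copy owns a reference too */
        return copy;
    }

    int main(void)
    {
        struct key k = { .refs = 1 };
        struct ext e = { .key = &k };
        struct ext *c = ext_cow(&e);

        free(c);                    /* real code would drop the ref here */
        return k.refs == 2 ? 0 : 1;
    }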
+diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
+index b1e29e18d1d60..c53b731f2d672 100644
+--- a/net/core/sock_diag.c
++++ b/net/core/sock_diag.c
+@@ -193,7 +193,7 @@ int sock_diag_register(const struct sock_diag_handler *hndl)
+ 	if (sock_diag_handlers[hndl->family])
+ 		err = -EBUSY;
+ 	else
+-		sock_diag_handlers[hndl->family] = hndl;
++		WRITE_ONCE(sock_diag_handlers[hndl->family], hndl);
+ 	mutex_unlock(&sock_diag_table_mutex);
+ 
+ 	return err;
+@@ -209,7 +209,7 @@ void sock_diag_unregister(const struct sock_diag_handler *hnld)
+ 
+ 	mutex_lock(&sock_diag_table_mutex);
+ 	BUG_ON(sock_diag_handlers[family] != hnld);
+-	sock_diag_handlers[family] = NULL;
++	WRITE_ONCE(sock_diag_handlers[family], NULL);
+ 	mutex_unlock(&sock_diag_table_mutex);
+ }
+ EXPORT_SYMBOL_GPL(sock_diag_unregister);
+@@ -227,7 +227,7 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 		return -EINVAL;
+ 	req->sdiag_family = array_index_nospec(req->sdiag_family, AF_MAX);
+ 
+-	if (sock_diag_handlers[req->sdiag_family] == NULL)
++	if (READ_ONCE(sock_diag_handlers[req->sdiag_family]) == NULL)
+ 		sock_load_diag_module(req->sdiag_family, 0);
+ 
+ 	mutex_lock(&sock_diag_table_mutex);
+@@ -286,12 +286,12 @@ static int sock_diag_bind(struct net *net, int group)
+ 	switch (group) {
+ 	case SKNLGRP_INET_TCP_DESTROY:
+ 	case SKNLGRP_INET_UDP_DESTROY:
+-		if (!sock_diag_handlers[AF_INET])
++		if (!READ_ONCE(sock_diag_handlers[AF_INET]))
+ 			sock_load_diag_module(AF_INET, 0);
+ 		break;
+ 	case SKNLGRP_INET6_TCP_DESTROY:
+ 	case SKNLGRP_INET6_UDP_DESTROY:
+-		if (!sock_diag_handlers[AF_INET6])
++		if (!READ_ONCE(sock_diag_handlers[AF_INET6]))
+ 			sock_load_diag_module(AF_INET6, 0);
+ 		break;
+ 	}
+diff --git a/net/devlink/core.c b/net/devlink/core.c
+index bc3d265fe2d6e..7f0b093208d75 100644
+--- a/net/devlink/core.c
++++ b/net/devlink/core.c
+@@ -503,14 +503,14 @@ static void __net_exit devlink_pernet_pre_exit(struct net *net)
+ 	 * all devlink instances from this namespace into init_net.
+ 	 */
+ 	devlinks_xa_for_each_registered_get(net, index, devlink) {
+-		devl_lock(devlink);
++		devl_dev_lock(devlink, true);
+ 		err = 0;
+ 		if (devl_is_registered(devlink))
+ 			err = devlink_reload(devlink, &init_net,
+ 					     DEVLINK_RELOAD_ACTION_DRIVER_REINIT,
+ 					     DEVLINK_RELOAD_LIMIT_UNSPEC,
+ 					     &actions_performed, NULL);
+-		devl_unlock(devlink);
++		devl_dev_unlock(devlink, true);
+ 		devlink_put(devlink);
+ 		if (err && err != -EOPNOTSUPP)
+ 			pr_warn("Failed to reload devlink instance into init_net\n");
+diff --git a/net/devlink/devl_internal.h b/net/devlink/devl_internal.h
+index 183dbe3807ab3..5ea2e2012e930 100644
+--- a/net/devlink/devl_internal.h
++++ b/net/devlink/devl_internal.h
+@@ -3,6 +3,7 @@
+  * Copyright (c) 2016 Jiri Pirko <jiri@mellanox.com>
+  */
+ 
++#include <linux/device.h>
+ #include <linux/etherdevice.h>
+ #include <linux/mutex.h>
+ #include <linux/netdevice.h>
+@@ -96,6 +97,20 @@ static inline bool devl_is_registered(struct devlink *devlink)
+ 	return xa_get_mark(&devlinks, devlink->index, DEVLINK_REGISTERED);
+ }
+ 
++static inline void devl_dev_lock(struct devlink *devlink, bool dev_lock)
++{
++	if (dev_lock)
++		device_lock(devlink->dev);
++	devl_lock(devlink);
++}
++
++static inline void devl_dev_unlock(struct devlink *devlink, bool dev_lock)
++{
++	devl_unlock(devlink);
++	if (dev_lock)
++		device_unlock(devlink->dev);
++}
++
+ typedef void devlink_rel_notify_cb_t(struct devlink *devlink, u32 obj_index);
+ typedef void devlink_rel_cleanup_cb_t(struct devlink *devlink, u32 obj_index,
+ 				      u32 rel_index);
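The new devl_dev_lock()/devl_dev_unlock() helpers nest the driver-core device lock outside the devlink instance lock, so netlink operations that must exclude driver probe/remove can opt in with dev_lock=true while everyone keeps the same acquisition order. A userspace analogue of that fixed ordering:

    #include <pthread.h>
    #include <stdbool.h>

    /* Always take the outer (device) lock before the inner (instance)
     * lock, so every path agrees on order and cannot deadlock. */
    static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t instance_lock = PTHREAD_MUTEX_INITIALIZER;

    static void devl_dev_lock(bool dev_lock)
    {
        if (dev_lock)
            pthread_mutex_lock(&device_lock);
        pthread_mutex_lock(&instance_lock);
    }

    static void devl_dev_unlock(bool dev_lock)
    {
        pthread_mutex_unlock(&instance_lock);
        if (dev_lock)
            pthread_mutex_unlock(&device_lock);
    }

    int main(void)
    {
        devl_dev_lock(true);
        devl_dev_unlock(true);
        return 0;
    }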
+@@ -111,9 +126,6 @@ int devlink_rel_devlink_handle_put(struct sk_buff *msg, struct devlink *devlink,
+ 				   bool *msg_updated);
+ 
+ /* Netlink */
+-#define DEVLINK_NL_FLAG_NEED_PORT		BIT(0)
+-#define DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT	BIT(1)
+-
+ enum devlink_multicast_groups {
+ 	DEVLINK_MCGRP_CONFIG,
+ };
+@@ -140,7 +152,8 @@ typedef int devlink_nl_dump_one_func_t(struct sk_buff *msg,
+ 				       int flags);
+ 
+ struct devlink *
+-devlink_get_from_attrs_lock(struct net *net, struct nlattr **attrs);
++devlink_get_from_attrs_lock(struct net *net, struct nlattr **attrs,
++			    bool dev_lock);
+ 
+ int devlink_nl_dumpit(struct sk_buff *msg, struct netlink_callback *cb,
+ 		      devlink_nl_dump_one_func_t *dump_one);
+diff --git a/net/devlink/health.c b/net/devlink/health.c
+index 695df61f8ac2a..71ae121dc739d 100644
+--- a/net/devlink/health.c
++++ b/net/devlink/health.c
+@@ -1151,7 +1151,8 @@ devlink_health_reporter_get_from_cb_lock(struct netlink_callback *cb)
+ 	struct nlattr **attrs = info->attrs;
+ 	struct devlink *devlink;
+ 
+-	devlink = devlink_get_from_attrs_lock(sock_net(cb->skb->sk), attrs);
++	devlink = devlink_get_from_attrs_lock(sock_net(cb->skb->sk), attrs,
++					      false);
+ 	if (IS_ERR(devlink))
+ 		return NULL;
+ 
+diff --git a/net/devlink/netlink.c b/net/devlink/netlink.c
+index d0b90ebc8b152..0f41fded6a6d7 100644
+--- a/net/devlink/netlink.c
++++ b/net/devlink/netlink.c
+@@ -9,6 +9,10 @@
+ 
+ #include "devl_internal.h"
+ 
++#define DEVLINK_NL_FLAG_NEED_PORT		BIT(0)
++#define DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT	BIT(1)
++#define DEVLINK_NL_FLAG_NEED_DEV_LOCK		BIT(2)
++
+ static const struct genl_multicast_group devlink_nl_mcgrps[] = {
+ 	[DEVLINK_MCGRP_CONFIG] = { .name = DEVLINK_GENL_MCGRP_CONFIG_NAME },
+ };
+@@ -61,7 +65,8 @@ int devlink_nl_msg_reply_and_new(struct sk_buff **msg, struct genl_info *info)
+ }
+ 
+ struct devlink *
+-devlink_get_from_attrs_lock(struct net *net, struct nlattr **attrs)
++devlink_get_from_attrs_lock(struct net *net, struct nlattr **attrs,
++			    bool dev_lock)
+ {
+ 	struct devlink *devlink;
+ 	unsigned long index;
+@@ -75,12 +80,13 @@ devlink_get_from_attrs_lock(struct net *net, struct nlattr **attrs)
+ 	devname = nla_data(attrs[DEVLINK_ATTR_DEV_NAME]);
+ 
+ 	devlinks_xa_for_each_registered_get(net, index, devlink) {
+-		devl_lock(devlink);
+-		if (devl_is_registered(devlink) &&
+-		    strcmp(devlink->dev->bus->name, busname) == 0 &&
+-		    strcmp(dev_name(devlink->dev), devname) == 0)
+-			return devlink;
+-		devl_unlock(devlink);
++		if (strcmp(devlink->dev->bus->name, busname) == 0 &&
++		    strcmp(dev_name(devlink->dev), devname) == 0) {
++			devl_dev_lock(devlink, dev_lock);
++			if (devl_is_registered(devlink))
++				return devlink;
++			devl_dev_unlock(devlink, dev_lock);
++		}
+ 		devlink_put(devlink);
+ 	}
+ 
+@@ -90,11 +96,13 @@ devlink_get_from_attrs_lock(struct net *net, struct nlattr **attrs)
+ static int __devlink_nl_pre_doit(struct sk_buff *skb, struct genl_info *info,
+ 				 u8 flags)
+ {
++	bool dev_lock = flags & DEVLINK_NL_FLAG_NEED_DEV_LOCK;
+ 	struct devlink_port *devlink_port;
+ 	struct devlink *devlink;
+ 	int err;
+ 
+-	devlink = devlink_get_from_attrs_lock(genl_info_net(info), info->attrs);
++	devlink = devlink_get_from_attrs_lock(genl_info_net(info), info->attrs,
++					      dev_lock);
+ 	if (IS_ERR(devlink))
+ 		return PTR_ERR(devlink);
+ 
+@@ -114,7 +122,7 @@ static int __devlink_nl_pre_doit(struct sk_buff *skb, struct genl_info *info,
+ 	return 0;
+ 
+ unlock:
+-	devl_unlock(devlink);
++	devl_dev_unlock(devlink, dev_lock);
+ 	devlink_put(devlink);
+ 	return err;
+ }
+@@ -138,16 +146,23 @@ int devlink_nl_pre_doit_port_optional(const struct genl_split_ops *ops,
+ 	return __devlink_nl_pre_doit(skb, info, DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT);
+ }
+ 
+-void devlink_nl_post_doit(const struct genl_split_ops *ops,
+-			  struct sk_buff *skb, struct genl_info *info)
++static void __devlink_nl_post_doit(struct sk_buff *skb, struct genl_info *info,
++				   u8 flags)
+ {
++	bool dev_lock = flags & DEVLINK_NL_FLAG_NEED_DEV_LOCK;
+ 	struct devlink *devlink;
+ 
+ 	devlink = info->user_ptr[0];
+-	devl_unlock(devlink);
++	devl_dev_unlock(devlink, dev_lock);
+ 	devlink_put(devlink);
+ }
+ 
++void devlink_nl_post_doit(const struct genl_split_ops *ops,
++			  struct sk_buff *skb, struct genl_info *info)
++{
++	__devlink_nl_post_doit(skb, info, 0);
++}
++
+ static int devlink_nl_inst_single_dumpit(struct sk_buff *msg,
+ 					 struct netlink_callback *cb, int flags,
+ 					 devlink_nl_dump_one_func_t *dump_one,
+@@ -156,7 +171,7 @@ static int devlink_nl_inst_single_dumpit(struct sk_buff *msg,
+ 	struct devlink *devlink;
+ 	int err;
+ 
+-	devlink = devlink_get_from_attrs_lock(sock_net(msg->sk), attrs);
++	devlink = devlink_get_from_attrs_lock(sock_net(msg->sk), attrs, false);
+ 	if (IS_ERR(devlink))
+ 		return PTR_ERR(devlink);
+ 	err = dump_one(msg, devlink, cb, flags | NLM_F_DUMP_FILTERED);
+diff --git a/net/devlink/netlink_gen.c b/net/devlink/netlink_gen.c
+index 788dfdc498a95..371f27f653317 100644
+--- a/net/devlink/netlink_gen.c
++++ b/net/devlink/netlink_gen.c
+@@ -198,7 +198,7 @@ static const struct nla_policy devlink_eswitch_set_nl_policy[DEVLINK_ATTR_ESWITC
+ 	[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ 	[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ 	[DEVLINK_ATTR_ESWITCH_MODE] = NLA_POLICY_MAX(NLA_U16, 1),
+-	[DEVLINK_ATTR_ESWITCH_INLINE_MODE] = NLA_POLICY_MAX(NLA_U16, 3),
++	[DEVLINK_ATTR_ESWITCH_INLINE_MODE] = NLA_POLICY_MAX(NLA_U8, 3),
+ 	[DEVLINK_ATTR_ESWITCH_ENCAP_MODE] = NLA_POLICY_MAX(NLA_U8, 1),
+ };
+ 
+diff --git a/net/devlink/port.c b/net/devlink/port.c
+index d39ee6053cc7b..2b3c2b1a3eb37 100644
+--- a/net/devlink/port.c
++++ b/net/devlink/port.c
+@@ -887,7 +887,7 @@ int devlink_nl_port_new_doit(struct sk_buff *skb, struct genl_info *info)
+ 		err = -ENOMEM;
+ 		goto err_out_port_del;
+ 	}
+-	err = devlink_nl_port_fill(msg, devlink_port, DEVLINK_CMD_NEW,
++	err = devlink_nl_port_fill(msg, devlink_port, DEVLINK_CMD_PORT_NEW,
+ 				   info->snd_portid, info->snd_seq, 0, NULL);
+ 	if (WARN_ON_ONCE(err))
+ 		goto err_out_msg_free;
+diff --git a/net/devlink/region.c b/net/devlink/region.c
+index 0aab7b82d6780..e3bab458db940 100644
+--- a/net/devlink/region.c
++++ b/net/devlink/region.c
+@@ -883,7 +883,8 @@ int devlink_nl_region_read_dumpit(struct sk_buff *skb,
+ 
+ 	start_offset = state->start_offset;
+ 
+-	devlink = devlink_get_from_attrs_lock(sock_net(cb->skb->sk), attrs);
++	devlink = devlink_get_from_attrs_lock(sock_net(cb->skb->sk), attrs,
++					      false);
+ 	if (IS_ERR(devlink))
+ 		return PTR_ERR(devlink);
+ 
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 6d14d935ee828..26329db09210b 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -228,6 +228,10 @@ struct hsr_node *hsr_get_node(struct hsr_port *port, struct list_head *node_db,
+ 	 */
+ 	if (ethhdr->h_proto == htons(ETH_P_PRP) ||
+ 	    ethhdr->h_proto == htons(ETH_P_HSR)) {
++		/* Check if skb contains hsr_ethhdr */
++		if (skb->mac_len < sizeof(struct hsr_ethhdr))
++			return NULL;
++
+ 		/* Use the existing sequence_nr from the tag as starting point
+ 		 * for filtering duplicate frames.
+ 		 */
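hsr_get_node() reads the HSR/PRP sequence number out of the tagged Ethernet header, so the hunk first verifies the mac header region is large enough to hold a whole hsr_ethhdr; otherwise a runt frame would be parsed out of bounds. The same check-before-parse rule in miniature (header layout simplified for illustration):

    #include <stdint.h>
    #include <string.h>

    struct hsr_hdr {
        uint8_t  dst[6], src[6];
        uint16_t proto;
        uint8_t  tag[6];        /* stand-in for the HSR/PRP tag */
    };

    /* Never parse a header the buffer cannot fully contain. */
    static int parse(const uint8_t *frame, size_t len, struct hsr_hdr *out)
    {
        if (len < sizeof(*out))
            return -1;          /* runt frame, refuse */
        memcpy(out, frame, sizeof(*out));
        return 0;
    }

    int main(void)
    {
        uint8_t runt[14] = { 0 };
        struct hsr_hdr h;

        return parse(runt, sizeof(runt), &h) == -1 ? 0 : 1;
    }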
+diff --git a/net/hsr/hsr_main.c b/net/hsr/hsr_main.c
+index b099c31501509..257b50124cee5 100644
+--- a/net/hsr/hsr_main.c
++++ b/net/hsr/hsr_main.c
+@@ -148,14 +148,21 @@ static struct notifier_block hsr_nb = {
+ 
+ static int __init hsr_init(void)
+ {
+-	int res;
++	int err;
+ 
+ 	BUILD_BUG_ON(sizeof(struct hsr_tag) != HSR_HLEN);
+ 
+-	register_netdevice_notifier(&hsr_nb);
+-	res = hsr_netlink_init();
++	err = register_netdevice_notifier(&hsr_nb);
++	if (err)
++		return err;
++
++	err = hsr_netlink_init();
++	if (err) {
++		unregister_netdevice_notifier(&hsr_nb);
++		return err;
++	}
+ 
+-	return res;
++	return 0;
+ }
+ 
+ static void __exit hsr_exit(void)
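Previously hsr_init() ignored register_netdevice_notifier() failures and left the notifier registered when hsr_netlink_init() failed; the rewrite checks each step and unwinds the earlier one on error. The standard shape, with stubs standing in for the real registration calls:

    #include <stdio.h>

    static int step_a(void) { return 0; }   /* e.g. notifier register */
    static int step_b(void) { return -1; }  /* e.g. netlink init, failing */
    static void undo_a(void) { puts("unregistered notifier"); }

    static int example_init(void)
    {
        int err = step_a();
        if (err)
            return err;

        err = step_b();
        if (err) {
            undo_a();       /* unwind what already succeeded */
            return err;
        }
        return 0;
    }

    int main(void) { return example_init() ? 1 : 0; }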
+diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
+index 7d0e7aaa71e0a..5f7fdbd01cf9b 100644
+--- a/net/ipv4/inet_diag.c
++++ b/net/ipv4/inet_diag.c
+@@ -57,7 +57,7 @@ static const struct inet_diag_handler *inet_diag_lock_handler(int proto)
+ 		return ERR_PTR(-ENOENT);
+ 	}
+ 
+-	if (!inet_diag_table[proto])
++	if (!READ_ONCE(inet_diag_table[proto]))
+ 		sock_load_diag_module(AF_INET, proto);
+ 
+ 	mutex_lock(&inet_diag_table_mutex);
+@@ -1419,7 +1419,7 @@ int inet_diag_register(const struct inet_diag_handler *h)
+ 	mutex_lock(&inet_diag_table_mutex);
+ 	err = -EEXIST;
+ 	if (!inet_diag_table[type]) {
+-		inet_diag_table[type] = h;
++		WRITE_ONCE(inet_diag_table[type], h);
+ 		err = 0;
+ 	}
+ 	mutex_unlock(&inet_diag_table_mutex);
+@@ -1436,7 +1436,7 @@ void inet_diag_unregister(const struct inet_diag_handler *h)
+ 		return;
+ 
+ 	mutex_lock(&inet_diag_table_mutex);
+-	inet_diag_table[type] = NULL;
++	WRITE_ONCE(inet_diag_table[type], NULL);
+ 	mutex_unlock(&inet_diag_table_mutex);
+ }
+ EXPORT_SYMBOL_GPL(inet_diag_unregister);
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 9456bf9e2705b..7967ff7e02f79 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -1137,7 +1137,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ 		sock_prot_inuse_add(net, sk->sk_prot, -1);
+ 
+ 		spin_lock(lock);
+-		sk_nulls_del_node_init_rcu(sk);
++		__sk_nulls_del_node_init_rcu(sk);
+ 		spin_unlock(lock);
+ 
+ 		sk->sk_hash = 0;
+diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
+index dd37a5bf68811..757ae3a4e2f1a 100644
+--- a/net/ipv4/inet_timewait_sock.c
++++ b/net/ipv4/inet_timewait_sock.c
+@@ -278,12 +278,12 @@ void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, bool rearm)
+ }
+ EXPORT_SYMBOL_GPL(__inet_twsk_schedule);
+ 
++/* Remove all non full sockets (TIME_WAIT and NEW_SYN_RECV) for dead netns */
+ void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family)
+ {
+-	struct inet_timewait_sock *tw;
+-	struct sock *sk;
+ 	struct hlist_nulls_node *node;
+ 	unsigned int slot;
++	struct sock *sk;
+ 
+ 	for (slot = 0; slot <= hashinfo->ehash_mask; slot++) {
+ 		struct inet_ehash_bucket *head = &hashinfo->ehash[slot];
+@@ -292,38 +292,35 @@ void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family)
+ 		rcu_read_lock();
+ restart:
+ 		sk_nulls_for_each_rcu(sk, node, &head->chain) {
+-			if (sk->sk_state != TCP_TIME_WAIT) {
+-				/* A kernel listener socket might not hold refcnt for net,
+-				 * so reqsk_timer_handler() could be fired after net is
+-				 * freed.  Userspace listener and reqsk never exist here.
+-				 */
+-				if (unlikely(sk->sk_state == TCP_NEW_SYN_RECV &&
+-					     hashinfo->pernet)) {
+-					struct request_sock *req = inet_reqsk(sk);
+-
+-					inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
+-				}
++			int state = inet_sk_state_load(sk);
+ 
++			if ((1 << state) & ~(TCPF_TIME_WAIT |
++					     TCPF_NEW_SYN_RECV))
+ 				continue;
+-			}
+ 
+-			tw = inet_twsk(sk);
+-			if ((tw->tw_family != family) ||
+-				refcount_read(&twsk_net(tw)->ns.count))
++			if (sk->sk_family != family ||
++			    refcount_read(&sock_net(sk)->ns.count))
+ 				continue;
+ 
+-			if (unlikely(!refcount_inc_not_zero(&tw->tw_refcnt)))
++			if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+ 				continue;
+ 
+-			if (unlikely((tw->tw_family != family) ||
+-				     refcount_read(&twsk_net(tw)->ns.count))) {
+-				inet_twsk_put(tw);
++			if (unlikely(sk->sk_family != family ||
++				     refcount_read(&sock_net(sk)->ns.count))) {
++				sock_gen_put(sk);
+ 				goto restart;
+ 			}
+ 
+ 			rcu_read_unlock();
+ 			local_bh_disable();
+-			inet_twsk_deschedule_put(tw);
++			if (state == TCP_TIME_WAIT) {
++				inet_twsk_deschedule_put(inet_twsk(sk));
++			} else {
++				struct request_sock *req = inet_reqsk(sk);
++
++				inet_csk_reqsk_queue_drop_and_put(req->rsk_listener,
++								  req);
++			}
+ 			local_bh_enable();
+ 			goto restart_rcu;
+ 		}
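inet_twsk_purge() now classifies sockets with one inet_sk_state_load() and a bitmask test, handling TIME_WAIT and NEW_SYN_RECV through a common take-a-reference-then-drop path instead of the old twsk-only logic. The skip test in isolation (TCP_TIME_WAIT and TCP_NEW_SYN_RECV carry the kernel's numbering; the loop bounds are illustrative):

    #include <stdio.h>

    enum { TCP_ESTABLISHED = 1, TCP_TIME_WAIT = 6, TCP_NEW_SYN_RECV = 12 };
    #define TCPF(s) (1 << (s))

    int main(void)
    {
        int purgeable = TCPF(TCP_TIME_WAIT) | TCPF(TCP_NEW_SYN_RECV);

        for (int state = 1; state <= 12; state++) {
            if (TCPF(state) & ~purgeable)   /* the hunk's skip test */
                continue;
            printf("purge sockets in state %d\n", state);
        }
        return 0;
    }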
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 2d29fce7c5606..b1b6dcf2161fb 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -378,7 +378,7 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
+ 		  bool log_ecn_error)
+ {
+ 	const struct iphdr *iph = ip_hdr(skb);
+-	int err;
++	int nh, err;
+ 
+ #ifdef CONFIG_NET_IPGRE_BROADCAST
+ 	if (ipv4_is_multicast(iph->daddr)) {
+@@ -404,8 +404,21 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
+ 		tunnel->i_seqno = ntohl(tpi->seq) + 1;
+ 	}
+ 
++	/* Save offset of outer header relative to skb->head,
++	 * because we are going to reset the network header to the inner header
++	 * and might change skb->head.
++	 */
++	nh = skb_network_header(skb) - skb->head;
++
+ 	skb_set_network_header(skb, (tunnel->dev->type == ARPHRD_ETHER) ? ETH_HLEN : 0);
+ 
++	if (!pskb_inet_may_pull(skb)) {
++		DEV_STATS_INC(tunnel->dev, rx_length_errors);
++		DEV_STATS_INC(tunnel->dev, rx_errors);
++		goto drop;
++	}
++	iph = (struct iphdr *)(skb->head + nh);
++
+ 	err = IP_ECN_decapsulate(iph, skb);
+ 	if (unlikely(err)) {
+ 		if (log_ecn_error)
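pskb_inet_may_pull() may reallocate skb->head, which would leave the iph pointer taken at the top of ip_tunnel_rcv() dangling; the hunk therefore saves the outer header's offset first and recomputes the pointer afterwards. The same stale-pointer hazard in plain C:

    #include <stdlib.h>

    int main(void)
    {
        size_t cap = 32;
        char *head = malloc(cap);
        if (!head)
            return 1;

        size_t nh = 4;          /* keep an offset, not a raw pointer */

        char *bigger = realloc(head, cap * 2);  /* may move the block */
        if (!bigger) {
            free(head);
            return 1;
        }
        head = bigger;          /* any old pointer into head is now dead */

        char *iph = head + nh;  /* safe: recomputed from the new base */
        (void)iph;
        free(head);
        return 0;
    }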
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index e49242706b5f5..66eade3fb629f 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -1603,9 +1603,11 @@ int ip_mroute_getsockopt(struct sock *sk, int optname, sockptr_t optval,
+ 
+ 	if (copy_from_sockptr(&olr, optlen, sizeof(int)))
+ 		return -EFAULT;
+-	olr = min_t(unsigned int, olr, sizeof(int));
+ 	if (olr < 0)
+ 		return -EINVAL;
++
++	olr = min_t(unsigned int, olr, sizeof(int));
++
+ 	if (copy_to_sockptr(optlen, &olr, sizeof(int)))
+ 		return -EFAULT;
+ 	if (copy_to_sockptr(optval, &val, olr))
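The getsockopt fixes in this and the tcp/udp/kcm hunks below all reorder the same two lines: min_t(unsigned int, len, sizeof(int)) casts a negative length to a huge unsigned value, so clamping first silently turns -1 into 4 and the len < 0 check can never fire. Demonstrated with a simplified min_t:

    #include <stdio.h>

    #define min_t(type, a, b) \
        ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

    int main(void)
    {
        int olr = -1;

        /* Old order: the unsigned compare turns -1 into 0xffffffff, the
         * clamp "succeeds", and the negative length is never rejected. */
        int clamped = (int)min_t(unsigned int, olr, sizeof(int));
        printf("clamped first: %d\n", clamped);   /* prints 4, bug hidden */

        /* New order: reject negative lengths before clamping. */
        if (olr < 0)
            printf("rejected: EINVAL\n");
        return 0;
    }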
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index aea89326c6979..288f1846b3518 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -350,6 +350,7 @@ static int raw_send_hdrinc(struct sock *sk, struct flowi4 *fl4,
+ 		goto error;
+ 	skb_reserve(skb, hlen);
+ 
++	skb->protocol = htons(ETH_P_IP);
+ 	skb->priority = READ_ONCE(sk->sk_priority);
+ 	skb->mark = sockc->mark;
+ 	skb->tstamp = sockc->transmit_time;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index b30ef770a6cca..0d03d48702a4e 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -4011,11 +4011,11 @@ int do_tcp_getsockopt(struct sock *sk, int level,
+ 	if (copy_from_sockptr(&len, optlen, sizeof(int)))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	switch (optname) {
+ 	case TCP_MAXSEG:
+ 		val = tp->mss_cache;
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 9e85f2a0bddd4..0ecc7311dc6ce 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -398,10 +398,6 @@ void tcp_twsk_purge(struct list_head *net_exit_list, int family)
+ 			/* Even if tw_refcount == 1, we must clean up kernel reqsk */
+ 			inet_twsk_purge(net->ipv4.tcp_death_row.hashinfo, family);
+ 		} else if (!purged_once) {
+-			/* The last refcount is decremented in tcp_sk_exit_batch() */
+-			if (refcount_read(&net->ipv4.tcp_death_row.tw_refcount) == 1)
+-				continue;
+-
+ 			inet_twsk_purge(&tcp_hashinfo, family);
+ 			purged_once = true;
+ 		}
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index e474b201900f9..17231c0f88302 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2792,11 +2792,11 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	switch (optname) {
+ 	case UDP_CORK:
+ 		val = udp_test_bit(CORK, sk);
+diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c
+index 7c20038330104..be52b18e08a6b 100644
+--- a/net/ipv6/fib6_rules.c
++++ b/net/ipv6/fib6_rules.c
+@@ -449,6 +449,11 @@ static size_t fib6_rule_nlmsg_payload(struct fib_rule *rule)
+ 	       + nla_total_size(16); /* src */
+ }
+ 
++static void fib6_rule_flush_cache(struct fib_rules_ops *ops)
++{
++	rt_genid_bump_ipv6(ops->fro_net);
++}
++
+ static const struct fib_rules_ops __net_initconst fib6_rules_ops_template = {
+ 	.family			= AF_INET6,
+ 	.rule_size		= sizeof(struct fib6_rule),
+@@ -461,6 +466,7 @@ static const struct fib_rules_ops __net_initconst fib6_rules_ops_template = {
+ 	.compare		= fib6_rule_compare,
+ 	.fill			= fib6_rule_fill,
+ 	.nlmsg_payload		= fib6_rule_nlmsg_payload,
++	.flush_cache		= fib6_rule_flush_cache,
+ 	.nlgroup		= RTNLGRP_IPV6_RULE,
+ 	.owner			= THIS_MODULE,
+ 	.fro_net		= &init_net,
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index bc6e0a0bad3c1..76ee1615ff2a0 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2719,7 +2719,6 @@ void ipv6_mc_down(struct inet6_dev *idev)
+ 	/* Should stop work after group drop, or we will
+ 	 * start work again in mld_ifc_event()
+ 	 */
+-	synchronize_net();
+ 	mld_query_stop_work(idev);
+ 	mld_report_stop_work(idev);
+ 
+diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
+index 0ed6e34d6edd1..ce33adb65afb0 100644
+--- a/net/iucv/iucv.c
++++ b/net/iucv/iucv.c
+@@ -156,7 +156,7 @@ static char iucv_error_pathid[16] = "INVALID PATHID";
+ static LIST_HEAD(iucv_handler_list);
+ 
+ /*
+- * iucv_path_table: an array of iucv_path structures.
++ * iucv_path_table: array of pointers to iucv_path structures.
+  */
+ static struct iucv_path **iucv_path_table;
+ static unsigned long iucv_max_pathid;
+@@ -544,7 +544,7 @@ static int iucv_enable(void)
+ 
+ 	cpus_read_lock();
+ 	rc = -ENOMEM;
+-	alloc_size = iucv_max_pathid * sizeof(struct iucv_path);
++	alloc_size = iucv_max_pathid * sizeof(*iucv_path_table);
+ 	iucv_path_table = kzalloc(alloc_size, GFP_KERNEL);
+ 	if (!iucv_path_table)
+ 		goto out;
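
iucv_path_table is an array of pointers, but the old size computation used sizeof(struct iucv_path), the pointed-to structure, so every slot was over-allocated. Sizing from the dereferenced destination pointer, as the fix does, keeps the element size tied to the declared type even if that type later changes. The idiom in isolation (max_pathid is a stand-in; kcalloc() shown as the overflow-checked equivalent of the kzalloc() above):

/* Sketch: derive the element size from the pointer being assigned,
 * never from a hand-picked type name that can drift out of sync.
 */
struct iucv_path **table;

table = kcalloc(max_pathid, sizeof(*table), GFP_KERNEL);  /* pointer-sized slots */
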
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 1184d40167b86..eda933c097926 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -1152,10 +1152,11 @@ static int kcm_getsockopt(struct socket *sock, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	switch (optname) {
+ 	case KCM_RECV_DISABLE:
+ 		val = kcm->rx_disabled;
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index f011af6601c9c..6146e4e67bbb5 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -1356,11 +1356,11 @@ static int pppol2tp_getsockopt(struct socket *sock, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		return -EFAULT;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	err = -ENOTCONN;
+ 	if (!sk->sk_user_data)
+ 		goto end;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 241e615189244..6cfc07aaa1b6c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -7502,10 +7502,10 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
+ 	if (err)
+ 		goto err_clear;
+ 
+-	if (req->link_id > 0)
++	if (req->link_id >= 0)
+ 		link = sdata_dereference(sdata->link[req->link_id], sdata);
+ 	else
+-		link = sdata_dereference(sdata->link[0], sdata);
++		link = &sdata->deflink;
+ 
+ 	if (WARN_ON(!link)) {
+ 		err = -ENOLINK;
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index d5ea5f5bcf3a0..9d33fd2377c88 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -119,7 +119,8 @@ void rate_control_rate_update(struct ieee80211_local *local,
+ 		rcu_read_unlock();
+ 	}
+ 
+-	drv_sta_rc_update(local, sta->sdata, &sta->sta, changed);
++	if (sta->uploaded)
++		drv_sta_rc_update(local, sta->sdata, &sta->sta, changed);
+ }
+ 
+ int ieee80211_rate_control_register(const struct rate_control_ops *ops)
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index ceee44ea09d97..01c530dbc1a65 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -843,6 +843,9 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ 		/* copy message payload */
+ 		skb_copy_bits(skb, pos, skb_transport_header(skb2), size);
+ 
++		/* we need to copy the extensions, for MCTP flow data */
++		skb_ext_copy(skb2, skb);
++
+ 		/* do route */
+ 		rc = rt->output(rt, skb2);
+ 		if (rc)
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 79e088e6f103e..0130c2782cdc7 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1211,7 +1211,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 	if (flags & ~NFT_TABLE_F_MASK)
+ 		return -EOPNOTSUPP;
+ 
+-	if (flags == ctx->table->flags)
++	if (flags == (ctx->table->flags & NFT_TABLE_F_MASK))
+ 		return 0;
+ 
+ 	if ((nft_table_has_owner(ctx->table) &&
+@@ -2619,19 +2619,6 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 		}
+ 	}
+ 
+-	if (nla[NFTA_CHAIN_COUNTERS]) {
+-		if (!nft_is_base_chain(chain)) {
+-			err = -EOPNOTSUPP;
+-			goto err_hooks;
+-		}
+-
+-		stats = nft_stats_alloc(nla[NFTA_CHAIN_COUNTERS]);
+-		if (IS_ERR(stats)) {
+-			err = PTR_ERR(stats);
+-			goto err_hooks;
+-		}
+-	}
+-
+ 	if (!(table->flags & NFT_TABLE_F_DORMANT) &&
+ 	    nft_is_base_chain(chain) &&
+ 	    !list_empty(&hook.list)) {
+@@ -2646,6 +2633,20 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 	}
+ 
+ 	unregister = true;
++
++	if (nla[NFTA_CHAIN_COUNTERS]) {
++		if (!nft_is_base_chain(chain)) {
++			err = -EOPNOTSUPP;
++			goto err_hooks;
++		}
++
++		stats = nft_stats_alloc(nla[NFTA_CHAIN_COUNTERS]);
++		if (IS_ERR(stats)) {
++			err = PTR_ERR(stats);
++			goto err_hooks;
++		}
++	}
++
+ 	err = -ENOMEM;
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWCHAIN,
+ 				sizeof(struct nft_trans_chain));
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 3089c4ca8fff3..abf659cb2d91f 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -2244,8 +2244,6 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 	if (m) {
+ 		rcu_barrier();
+ 
+-		nft_set_pipapo_match_destroy(ctx, set, m);
+-
+ 		for_each_possible_cpu(cpu)
+ 			pipapo_free_scratch(m, cpu);
+ 		free_percpu(m->scratch);
+@@ -2257,8 +2255,7 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ 	if (priv->clone) {
+ 		m = priv->clone;
+ 
+-		if (priv->dirty)
+-			nft_set_pipapo_match_destroy(ctx, set, m);
++		nft_set_pipapo_match_destroy(ctx, set, m);
+ 
+ 		for_each_possible_cpu(cpu)
+ 			pipapo_free_scratch(priv->clone, cpu);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 7adf48549a3b7..f017d7d33da39 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4004,7 +4004,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ 		if (val < 0 || val > 1)
+ 			return -EINVAL;
+ 
+-		po->prot_hook.ignore_outgoing = !!val;
++		WRITE_ONCE(po->prot_hook.ignore_outgoing, !!val);
+ 		return 0;
+ 	}
+ 	case PACKET_TX_HAS_OFF:
+@@ -4135,7 +4135,7 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
+ 		       0);
+ 		break;
+ 	case PACKET_IGNORE_OUTGOING:
+-		val = po->prot_hook.ignore_outgoing;
++		val = READ_ONCE(po->prot_hook.ignore_outgoing);
+ 		break;
+ 	case PACKET_ROLLOVER_STATS:
+ 		if (!po->rollover)
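
ignore_outgoing is written from setsockopt() and read from getsockopt() and the delivery path with no shared lock. The WRITE_ONCE()/READ_ONCE() pair does not add ordering; it tells the compiler each access must be a single untorn, unfused load or store, and documents the race as intentional (which also quiets KCSAN). The pattern in isolation:

/* Sketch: annotate both sides of a deliberately lockless flag. */
struct hook { int ignore_outgoing; };

static void flag_set(struct hook *h, int val)
{
	WRITE_ONCE(h->ignore_outgoing, !!val);	/* one store, no tearing */
}

static int flag_get(const struct hook *h)
{
	return READ_ONCE(h->ignore_outgoing);	/* one load, never re-fetched */
}
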
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 2899def23865f..09a2801106549 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -103,13 +103,12 @@ EXPORT_SYMBOL_GPL(rds_send_path_reset);
+ 
+ static int acquire_in_xmit(struct rds_conn_path *cp)
+ {
+-	return test_and_set_bit(RDS_IN_XMIT, &cp->cp_flags) == 0;
++	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
+ }
+ 
+ static void release_in_xmit(struct rds_conn_path *cp)
+ {
+-	clear_bit(RDS_IN_XMIT, &cp->cp_flags);
+-	smp_mb__after_atomic();
++	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags);
+ 	/*
+ 	 * We don't use wait_on_bit()/wake_up_bit() because our waking is in a
+ 	 * hot path and finding waiters is very rare.  We don't want to walk
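
Plain test_and_set_bit() and clear_bit() are atomic but unordered; the old smp_mb__after_atomic() fenced only the release side, leaving the acquire side open to accesses being speculated into the critical section. test_and_set_bit_lock()/clear_bit_unlock() are the bit-level lock primitives with acquire and release semantics built in. The shape of the idiom:

/* Sketch: a one-bit lock with proper acquire/release ordering. */
#define BUSY_BIT 0

static bool busy_try_lock(unsigned long *flags)
{
	/* acquire: later accesses can't move before a successful set */
	return !test_and_set_bit_lock(BUSY_BIT, flags);
}

static void busy_unlock(unsigned long *flags)
{
	/* release: earlier accesses can't move past the clear */
	clear_bit_unlock(BUSY_BIT, flags);
}
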
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 31a8252bd09c9..ad99409c6325e 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1008,7 +1008,8 @@ static const struct nla_policy entry_policy[TCA_TAPRIO_SCHED_ENTRY_MAX + 1] = {
+ };
+ 
+ static const struct nla_policy taprio_tc_policy[TCA_TAPRIO_TC_ENTRY_MAX + 1] = {
+-	[TCA_TAPRIO_TC_ENTRY_INDEX]	   = { .type = NLA_U32 },
++	[TCA_TAPRIO_TC_ENTRY_INDEX]	   = NLA_POLICY_MAX(NLA_U32,
++							    TC_QOPT_MAX_QUEUE),
+ 	[TCA_TAPRIO_TC_ENTRY_MAX_SDU]	   = { .type = NLA_U32 },
+ 	[TCA_TAPRIO_TC_ENTRY_FP]	   = NLA_POLICY_RANGE(NLA_U32,
+ 							      TC_FP_EXPRESS,
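
Expressing the bound as NLA_POLICY_MAX() moves the range check into the generic netlink validator, so an out-of-range TC index is rejected during attribute parsing and taprio never sees it; the alternative is an open-coded comparison in each handler that is easy to forget. A sketch of a bounded policy table (attribute names are hypothetical; the policy macros are the real netlink API):

/* Sketch: declarative bounds checked by the netlink core at parse time. */
enum { MY_A_UNSPEC, MY_A_INDEX, MY_A_PRIO, __MY_A_MAX };
#define MY_A_MAX (__MY_A_MAX - 1)

static const struct nla_policy my_policy[MY_A_MAX + 1] = {
	[MY_A_INDEX] = NLA_POLICY_MAX(NLA_U32, 15),	/* 0..15 inclusive */
	[MY_A_PRIO]  = NLA_POLICY_RANGE(NLA_U8, 1, 7),	/* 1..7 inclusive */
};
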
+diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c
+index d435bffc61999..97ff11973c493 100644
+--- a/net/sunrpc/addr.c
++++ b/net/sunrpc/addr.c
+@@ -284,10 +284,10 @@ char *rpc_sockaddr2uaddr(const struct sockaddr *sap, gfp_t gfp_flags)
+ 	}
+ 
+ 	if (snprintf(portbuf, sizeof(portbuf),
+-		     ".%u.%u", port >> 8, port & 0xff) > (int)sizeof(portbuf))
++		     ".%u.%u", port >> 8, port & 0xff) >= (int)sizeof(portbuf))
+ 		return NULL;
+ 
+-	if (strlcat(addrbuf, portbuf, sizeof(addrbuf)) > sizeof(addrbuf))
++	if (strlcat(addrbuf, portbuf, sizeof(addrbuf)) >= sizeof(addrbuf))
+ 		return NULL;
+ 
+ 	return kstrdup(addrbuf, gfp_flags);
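
The off-by-one matters because snprintf() returns the length the formatted string would have had (excluding the NUL), and strlcat() returns the total length it tried to build: a return value equal to the buffer size means the output was already truncated, which the old > comparison let through. A sketch of the corrected check (the helper is hypothetical):

/* Sketch: treat ret == sizeof(buf) as truncation, not success. */
static char *format_port(unsigned int port)
{
	char buf[8];

	if (snprintf(buf, sizeof(buf), ".%u.%u",
		     port >> 8, port & 0xff) >= (int)sizeof(buf))
		return NULL;		/* didn't fit; buf is cut short */

	return kstrdup(buf, GFP_KERNEL);
}
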
+diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
+index e31cfdf7eadcb..f6fc80e1d658b 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_mech.c
++++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
+@@ -398,6 +398,7 @@ gss_import_v2_context(const void *p, const void *end, struct krb5_ctx *ctx,
+ 	u64 seq_send64;
+ 	int keylen;
+ 	u32 time32;
++	int ret;
+ 
+ 	p = simple_get_bytes(p, end, &ctx->flags, sizeof(ctx->flags));
+ 	if (IS_ERR(p))
+@@ -450,8 +451,16 @@ gss_import_v2_context(const void *p, const void *end, struct krb5_ctx *ctx,
+ 	}
+ 	ctx->mech_used.len = gss_kerberos_mech.gm_oid.len;
+ 
+-	return gss_krb5_import_ctx_v2(ctx, gfp_mask);
++	ret = gss_krb5_import_ctx_v2(ctx, gfp_mask);
++	if (ret) {
++		p = ERR_PTR(ret);
++		goto out_free;
++	}
+ 
++	return 0;
++
++out_free:
++	kfree(ctx->mech_used.data);
+ out_err:
+ 	return PTR_ERR(p);
+ }
+diff --git a/net/sunrpc/auth_gss/gss_rpc_xdr.c b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+index d79f12c2550ac..cb32ab9a83952 100644
+--- a/net/sunrpc/auth_gss/gss_rpc_xdr.c
++++ b/net/sunrpc/auth_gss/gss_rpc_xdr.c
+@@ -250,8 +250,8 @@ static int gssx_dec_option_array(struct xdr_stream *xdr,
+ 
+ 	creds = kzalloc(sizeof(struct svc_cred), GFP_KERNEL);
+ 	if (!creds) {
+-		kfree(oa->data);
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto free_oa;
+ 	}
+ 
+ 	oa->data[0].option.data = CREDS_VALUE;
+@@ -265,29 +265,40 @@ static int gssx_dec_option_array(struct xdr_stream *xdr,
+ 
+ 		/* option buffer */
+ 		p = xdr_inline_decode(xdr, 4);
+-		if (unlikely(p == NULL))
+-			return -ENOSPC;
++		if (unlikely(p == NULL)) {
++			err = -ENOSPC;
++			goto free_creds;
++		}
+ 
+ 		length = be32_to_cpup(p);
+ 		p = xdr_inline_decode(xdr, length);
+-		if (unlikely(p == NULL))
+-			return -ENOSPC;
++		if (unlikely(p == NULL)) {
++			err = -ENOSPC;
++			goto free_creds;
++		}
+ 
+ 		if (length == sizeof(CREDS_VALUE) &&
+ 		    memcmp(p, CREDS_VALUE, sizeof(CREDS_VALUE)) == 0) {
+ 			/* We have creds here. parse them */
+ 			err = gssx_dec_linux_creds(xdr, creds);
+ 			if (err)
+-				return err;
++				goto free_creds;
+ 			oa->data[0].value.len = 1; /* presence */
+ 		} else {
+ 			/* consume uninteresting buffer */
+ 			err = gssx_dec_buffer(xdr, &dummy);
+ 			if (err)
+-				return err;
++				goto free_creds;
+ 		}
+ 	}
+ 	return 0;
++
++free_creds:
++	kfree(creds);
++free_oa:
++	kfree(oa->data);
++	oa->data = NULL;
++	return err;
+ }
+ 
+ static int gssx_dec_status(struct xdr_stream *xdr,
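
Before this change, any failure inside the decode loop returned directly, leaking creds and leaving oa->data behind. The rewrite funnels every error through labels that free in reverse order of allocation, the standard kernel unwind idiom, and NULLs oa->data so the caller cannot double-free it. The shape in isolation (do_work() is a hypothetical stand-in):

/* Sketch: goto unwind releases resources in reverse acquisition order,
 * so each label frees exactly what is live at that failure point.
 */
static int do_work(void *a, void *b);	/* hypothetical worker, may fail */

static int decode_two(void)
{
	void *a, *b;
	int err;

	a = kzalloc(16, GFP_KERNEL);
	if (!a)
		return -ENOMEM;

	b = kzalloc(16, GFP_KERNEL);
	if (!b) {
		err = -ENOMEM;
		goto free_a;
	}

	err = do_work(a, b);
	if (err)
		goto free_b;

	return 0;

free_b:
	kfree(b);
free_a:
	kfree(a);
	return err;
}
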
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 2a81880dac7b7..027c86e804f8a 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -198,7 +198,7 @@ void wait_for_unix_gc(void)
+ 	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
+ 	    !READ_ONCE(gc_in_progress))
+ 		unix_gc();
+-	wait_event(unix_gc_wait, gc_in_progress == false);
++	wait_event(unix_gc_wait, !READ_ONCE(gc_in_progress));
+ }
+ 
+ /* The external entry point: unix_gc() */
+diff --git a/net/unix/scm.c b/net/unix/scm.c
+index 6ff628f2349f5..822ce0d0d7915 100644
+--- a/net/unix/scm.c
++++ b/net/unix/scm.c
+@@ -35,10 +35,8 @@ struct sock *unix_get_socket(struct file *filp)
+ 		/* PF_UNIX ? */
+ 		if (s && ops && ops->family == PF_UNIX)
+ 			u_sock = s;
+-	} else {
+-		/* Could be an io_uring instance */
+-		u_sock = io_uring_get_socket(filp);
+ 	}
++
+ 	return u_sock;
+ }
+ EXPORT_SYMBOL(unix_get_socket);
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index aad8ffeaee041..ae90696efe929 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -460,12 +460,12 @@ static int x25_getsockopt(struct socket *sock, int level, int optname,
+ 	if (get_user(len, optlen))
+ 		goto out;
+ 
+-	len = min_t(unsigned int, len, sizeof(int));
+-
+ 	rc = -EINVAL;
+ 	if (len < 0)
+ 		goto out;
+ 
++	len = min_t(unsigned int, len, sizeof(int));
++
+ 	rc = -EFAULT;
+ 	if (put_user(len, optlen))
+ 		goto out;
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index 653e51ae39648..6346690d5c699 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -407,7 +407,8 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 	struct xfrm_dst *xdst = (struct xfrm_dst *)dst;
+ 	struct net_device *dev = x->xso.dev;
+ 
+-	if (!x->type_offload)
++	if (!x->type_offload ||
++	    (x->xso.type == XFRM_DEV_OFFLOAD_UNSPECIFIED && x->encap))
+ 		return false;
+ 
+ 	if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET ||
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index 662c83beb345e..e5722c95b8bb3 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -704,9 +704,13 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct net *net = dev_net(skb_dst(skb)->dev);
+ 	struct xfrm_state *x = skb_dst(skb)->xfrm;
++	int family;
+ 	int err;
+ 
+-	switch (x->outer_mode.family) {
++	family = (x->xso.type != XFRM_DEV_OFFLOAD_PACKET) ? x->outer_mode.family
++		: skb_dst(skb)->ops->family;
++
++	switch (family) {
+ 	case AF_INET:
+ 		memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 		IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED;
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index e69d588caa0c6..9c5f2efed3333 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -2694,7 +2694,9 @@ static struct dst_entry *xfrm_bundle_create(struct xfrm_policy *policy,
+ 			if (xfrm[i]->props.smark.v || xfrm[i]->props.smark.m)
+ 				mark = xfrm_smark_get(fl->flowi_mark, xfrm[i]);
+ 
+-			family = xfrm[i]->props.family;
++			if (xfrm[i]->xso.type != XFRM_DEV_OFFLOAD_PACKET)
++				family = xfrm[i]->props.family;
++
+ 			oif = fl->flowi_oif ? : fl->flowi_l3mdev;
+ 			dst = xfrm_dst_lookup(xfrm[i], tos, oif,
+ 					      &saddr, &daddr, family, mark);
+diff --git a/scripts/clang-tools/gen_compile_commands.py b/scripts/clang-tools/gen_compile_commands.py
+index 5dea4479240bc..e4fb686dfaa9f 100755
+--- a/scripts/clang-tools/gen_compile_commands.py
++++ b/scripts/clang-tools/gen_compile_commands.py
+@@ -170,7 +170,7 @@ def process_line(root_directory, command_prefix, file_path):
+     # escape the pound sign '#', either as '\#' or '$(pound)' (depending on the
+     # kernel version). The compile_commands.json file is not interepreted
+     # by Make, so this code replaces the escaped version with '#'.
+-    prefix = command_prefix.replace('\#', '#').replace('$(pound)', '#')
++    prefix = command_prefix.replace(r'\#', '#').replace('$(pound)', '#')
+ 
+     # Return the canonical path, eliminating any symbolic links encountered in the path.
+     abs_path = os.path.realpath(os.path.join(root_directory, file_path))
+diff --git a/scripts/kconfig/lexer.l b/scripts/kconfig/lexer.l
+index cc386e4436834..2c2b3e6f248ca 100644
+--- a/scripts/kconfig/lexer.l
++++ b/scripts/kconfig/lexer.l
+@@ -302,8 +302,11 @@ static char *expand_token(const char *in, size_t n)
+ 	new_string();
+ 	append_string(in, n);
+ 
+-	/* get the whole line because we do not know the end of token. */
+-	while ((c = input()) != EOF) {
++	/*
++	 * get the whole line because we do not know the end of the token.
++	 * input() returns 0 (not EOF!) when it reaches the end of file.
++	 */
++	while ((c = input()) != 0) {
+ 		if (c == '\n') {
+ 			unput(c);
+ 			break;
+diff --git a/sound/core/seq/seq_midi.c b/sound/core/seq/seq_midi.c
+index 18320a248aa7d..78dcb0ea15582 100644
+--- a/sound/core/seq/seq_midi.c
++++ b/sound/core/seq/seq_midi.c
+@@ -113,6 +113,12 @@ static int dump_midi(struct snd_rawmidi_substream *substream, const char *buf, i
+ 	return 0;
+ }
+ 
++/* callback for snd_seq_dump_var_event(), bridging to dump_midi() */
++static int __dump_midi(void *ptr, void *buf, int count)
++{
++	return dump_midi(ptr, buf, count);
++}
++
+ static int event_process_midi(struct snd_seq_event *ev, int direct,
+ 			      void *private_data, int atomic, int hop)
+ {
+@@ -132,7 +138,7 @@ static int event_process_midi(struct snd_seq_event *ev, int direct,
+ 			pr_debug("ALSA: seq_midi: invalid sysex event flags = 0x%x\n", ev->flags);
+ 			return 0;
+ 		}
+-		snd_seq_dump_var_event(ev, (snd_seq_dump_func_t)dump_midi, substream);
++		snd_seq_dump_var_event(ev, __dump_midi, substream);
+ 		snd_midi_event_reset_decode(msynth->parser);
+ 	} else {
+ 		if (msynth->parser == NULL)
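
The cast being removed called dump_midi() through a snd_seq_dump_func_t pointer whose prototype does not match; an indirect call through a mismatched function type is undefined behaviour in C, is warned about by -Wcast-function-type-strict, and is rejected at runtime under kernel CFI. The one-line bridge gives the dispatcher a pointer of exactly the expected type and converts the void * argument through an ordinary, checked call; seq_virmidi below gets the same treatment. The pattern in miniature (all names hypothetical):

/* Sketch: wrap, don't cast, when handing a specific handler to a
 * generic callback slot.
 */
typedef int (*dump_func_t)(void *ptr, void *buf, int count);

struct my_stream { int id; };

static int my_dump(struct my_stream *s, const char *buf, int count)
{
	return count;			/* pretend all bytes were consumed */
}

static int my_dump_bridge(void *ptr, void *buf, int count)
{
	return my_dump(ptr, buf, count);	/* implicit, well-defined conversions */
}

/* usage: dispatch(my_dump_bridge, ...); the pointer now matches dump_func_t */
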
+diff --git a/sound/core/seq/seq_virmidi.c b/sound/core/seq/seq_virmidi.c
+index 1b9260108e482..1678737f11be7 100644
+--- a/sound/core/seq/seq_virmidi.c
++++ b/sound/core/seq/seq_virmidi.c
+@@ -62,6 +62,13 @@ static void snd_virmidi_init_event(struct snd_virmidi *vmidi,
+ /*
+  * decode input event and put to read buffer of each opened file
+  */
++
++/* callback for snd_seq_dump_var_event(), bridging to snd_rawmidi_receive() */
++static int dump_to_rawmidi(void *ptr, void *buf, int count)
++{
++	return snd_rawmidi_receive(ptr, buf, count);
++}
++
+ static int snd_virmidi_dev_receive_event(struct snd_virmidi_dev *rdev,
+ 					 struct snd_seq_event *ev,
+ 					 bool atomic)
+@@ -80,7 +87,7 @@ static int snd_virmidi_dev_receive_event(struct snd_virmidi_dev *rdev,
+ 		if (ev->type == SNDRV_SEQ_EVENT_SYSEX) {
+ 			if ((ev->flags & SNDRV_SEQ_EVENT_LENGTH_MASK) != SNDRV_SEQ_EVENT_LENGTH_VARIABLE)
+ 				continue;
+-			snd_seq_dump_var_event(ev, (snd_seq_dump_func_t)snd_rawmidi_receive, vmidi->substream);
++			snd_seq_dump_var_event(ev, dump_to_rawmidi, vmidi->substream);
+ 			snd_midi_event_reset_decode(vmidi->parser);
+ 		} else {
+ 			len = snd_midi_event_decode(vmidi->parser, msg, sizeof(msg), ev);
+diff --git a/sound/pci/hda/cs35l41_hda_property.c b/sound/pci/hda/cs35l41_hda_property.c
+index d74cf11eef1ea..b5b24fefb20cf 100644
+--- a/sound/pci/hda/cs35l41_hda_property.c
++++ b/sound/pci/hda/cs35l41_hda_property.c
+@@ -83,6 +83,7 @@ static const struct cs35l41_config cs35l41_config_table[] = {
+ 	{ "104317F3", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+ 	{ "10431863", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+ 	{ "104318D3", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
++	{ "10431A83", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+ 	{ "10431C9F", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+ 	{ "10431CAF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+ 	{ "10431CCF", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 1000, 4500, 24 },
+@@ -91,6 +92,7 @@ static const struct cs35l41_config cs35l41_config_table[] = {
+ 	{ "10431D1F", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+ 	{ "10431DA2", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
+ 	{ "10431E02", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, 2, 0, 0, 0, 0 },
++	{ "10431E12", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 },
+ 	{ "10431EE2", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, -1, -1, 0, 0, 0 },
+ 	{ "10431F12", 2, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 1000, 4500, 24 },
+ 	{ "10431F1F", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 1, -1, 0, 0, 0, 0 },
+@@ -205,6 +207,7 @@ static int generic_dsd_config(struct cs35l41_hda *cs35l41, struct device *physde
+ 	struct spi_device *spi;
+ 	bool dsd_found;
+ 	int ret;
++	int i;
+ 
+ 	for (cfg = cs35l41_config_table; cfg->ssid; cfg++) {
+ 		if (!strcasecmp(cfg->ssid, cs35l41->acpi_subsystem_id))
+@@ -290,16 +293,6 @@ static int generic_dsd_config(struct cs35l41_hda *cs35l41, struct device *physde
+ 			cs35l41->index = id == 0x40 ? 0 : 1;
+ 	}
+ 
+-	if (cfg->num_amps == 3)
+-		/* 3 amps means a center channel, so no duplicate channels */
+-		cs35l41->channel_index = 0;
+-	else
+-		/*
+-		 * if 4 amps, there are duplicate channels, so they need different indexes
+-		 * if 2 amps, no duplicate channels, channel_index would be 0
+-		 */
+-		cs35l41->channel_index = cs35l41->index / 2;
+-
+ 	cs35l41->reset_gpio = fwnode_gpiod_get_index(acpi_fwnode_handle(cs35l41->dacpi), "reset",
+ 						     cs35l41->index, GPIOD_OUT_LOW,
+ 						     "cs35l41-reset");
+@@ -307,6 +300,11 @@ static int generic_dsd_config(struct cs35l41_hda *cs35l41, struct device *physde
+ 
+ 	hw_cfg->spk_pos = cfg->channel[cs35l41->index];
+ 
++	cs35l41->channel_index = 0;
++	for (i = 0; i < cs35l41->index; i++)
++		if (cfg->channel[i] == hw_cfg->spk_pos)
++			cs35l41->channel_index++;
++
+ 	if (cfg->boost_type == INTERNAL) {
+ 		hw_cfg->bst_type = CS35L41_INT_BOOST;
+ 		hw_cfg->bst_ind = cfg->boost_ind_nanohenry;
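
The deleted branch hard-coded channel_index for the 2-, 3- and 4-amp layouts; the replacement derives it by counting how many lower-indexed amps already occupy the same speaker position, which works for any channel map. The logic in isolation:

/* Sketch: an amp's channel_index is the number of earlier amps that
 * share its speaker position (0 for the first, 1 for its duplicate).
 */
static int channel_index(const int *channel, int index)
{
	int i, dup = 0;

	for (i = 0; i < index; i++)
		if (channel[i] == channel[index])
			dup++;

	return dup;
}
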
+@@ -419,6 +417,7 @@ static const struct cs35l41_prop_model cs35l41_prop_model_table[] = {
+ 	{ "CSC3551", "104317F3", generic_dsd_config },
+ 	{ "CSC3551", "10431863", generic_dsd_config },
+ 	{ "CSC3551", "104318D3", generic_dsd_config },
++	{ "CSC3551", "10431A83", generic_dsd_config },
+ 	{ "CSC3551", "10431C9F", generic_dsd_config },
+ 	{ "CSC3551", "10431CAF", generic_dsd_config },
+ 	{ "CSC3551", "10431CCF", generic_dsd_config },
+@@ -427,6 +426,7 @@ static const struct cs35l41_prop_model cs35l41_prop_model_table[] = {
+ 	{ "CSC3551", "10431D1F", generic_dsd_config },
+ 	{ "CSC3551", "10431DA2", generic_dsd_config },
+ 	{ "CSC3551", "10431E02", generic_dsd_config },
++	{ "CSC3551", "10431E12", generic_dsd_config },
+ 	{ "CSC3551", "10431EE2", generic_dsd_config },
+ 	{ "CSC3551", "10431F12", generic_dsd_config },
+ 	{ "CSC3551", "10431F1F", generic_dsd_config },
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index eb45e5c3db8c6..0c746613c5ae0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3684,6 +3684,7 @@ static void alc285_hp_init(struct hda_codec *codec)
+ 	int i, val;
+ 	int coef38, coef0d, coef36;
+ 
++	alc_write_coefex_idx(codec, 0x58, 0x00, 0x1888); /* write default value */
+ 	alc_update_coef_idx(codec, 0x4a, 1<<15, 1<<15); /* Reset HP JD */
+ 	coef38 = alc_read_coef_idx(codec, 0x38); /* Amp control */
+ 	coef0d = alc_read_coef_idx(codec, 0x0d); /* Digital Misc control */
+@@ -6695,6 +6696,60 @@ static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc285_fixup_hp_envy_x360(struct hda_codec *codec,
++				      const struct hda_fixup *fix,
++				      int action)
++{
++	static const struct coef_fw coefs[] = {
++		WRITE_COEF(0x08, 0x6a0c), WRITE_COEF(0x0d, 0xa023),
++		WRITE_COEF(0x10, 0x0320), WRITE_COEF(0x1a, 0x8c03),
++		WRITE_COEF(0x25, 0x1800), WRITE_COEF(0x26, 0x003a),
++		WRITE_COEF(0x28, 0x1dfe), WRITE_COEF(0x29, 0xb014),
++		WRITE_COEF(0x2b, 0x1dfe), WRITE_COEF(0x37, 0xfe15),
++		WRITE_COEF(0x38, 0x7909), WRITE_COEF(0x45, 0xd489),
++		WRITE_COEF(0x46, 0x00f4), WRITE_COEF(0x4a, 0x21e0),
++		WRITE_COEF(0x66, 0x03f0), WRITE_COEF(0x67, 0x1000),
++		WRITE_COEF(0x6e, 0x1005), { }
++	};
++
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x12, 0xb7a60130 },  /* Internal microphone */
++		{ 0x14, 0x90170150 },  /* B&O soundbar speakers */
++		{ 0x17, 0x90170153 },  /* Side speakers */
++		{ 0x19, 0x03a11040 },  /* Headset microphone */
++		{ }
++	};
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_apply_pincfgs(codec, pincfgs);
++
++		/* Fixes volume control problem for side speakers */
++		alc295_fixup_disable_dac3(codec, fix, action);
++
++		/* Fixes no sound from headset speaker */
++		snd_hda_codec_amp_stereo(codec, 0x21, HDA_OUTPUT, 0, -1, 0);
++
++		/* Auto-enable headset mic when plugged */
++		snd_hda_jack_set_gating_jack(codec, 0x19, 0x21);
++
++		/* Headset mic volume enhancement */
++		snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREF50);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		alc_process_coef_fw(codec, coefs);
++		break;
++	case HDA_FIXUP_ACT_BUILD:
++		rename_ctl(codec, "Bass Speaker Playback Volume",
++			   "B&O-Tuned Playback Volume");
++		rename_ctl(codec, "Front Playback Switch",
++			   "B&O Soundbar Playback Switch");
++		rename_ctl(codec, "Bass Speaker Playback Switch",
++			   "Side Speaker Playback Switch");
++		break;
++	}
++}
++
+ /* for hda_fixup_thinkpad_acpi() */
+ #include "thinkpad_helper.c"
+ 
+@@ -7290,6 +7345,7 @@ enum {
+ 	ALC280_FIXUP_HP_9480M,
+ 	ALC245_FIXUP_HP_X360_AMP,
+ 	ALC285_FIXUP_HP_SPECTRE_X360_EB1,
++	ALC285_FIXUP_HP_ENVY_X360,
+ 	ALC288_FIXUP_DELL_HEADSET_MODE,
+ 	ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC288_FIXUP_DELL_XPS_13,
+@@ -9265,6 +9321,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_spectre_x360_eb1
+ 	},
++	[ALC285_FIXUP_HP_ENVY_X360] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_envy_x360,
++		.chained = true,
++		.chain_id = ALC285_FIXUP_HP_GPIO_AMP_INIT,
++	},
+ 	[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_ideapad_s740_coef,
+@@ -9847,6 +9909,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+@@ -10288,6 +10351,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3886, "Y780 VECO DUAL", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38a7, "Y780P AMD YG dual", ALC287_FIXUP_TAS2781_I2C),
+ 	SND_PCI_QUIRK(0x17aa, 0x38a8, "Y780P AMD VECO dual", ALC287_FIXUP_TAS2781_I2C),
++	SND_PCI_QUIRK(0x17aa, 0x38a9, "Thinkbook 16P", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x17aa, 0x38ab, "Thinkbook 16P", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38b4, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38b5, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x17aa, 0x38b6, "Legion Slim 7 16APH8", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10538,6 +10603,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ 	{.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"},
++	{.id = ALC285_FIXUP_HP_ENVY_X360, .name = "alc285-hp-envy-x360"},
+ 	{.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ 	{.id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN, .name = "alc287-yoga9-bass-spk-pin"},
+ 	{.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"},
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index 9f98965742e83..5179b69e403ac 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -142,11 +142,13 @@ static void tas2781_hda_playback_hook(struct device *dev, int action)
+ 		pm_runtime_get_sync(dev);
+ 		mutex_lock(&tas_hda->priv->codec_lock);
+ 		tasdevice_tuning_switch(tas_hda->priv, 0);
++		tas_hda->priv->playback_started = true;
+ 		mutex_unlock(&tas_hda->priv->codec_lock);
+ 		break;
+ 	case HDA_GEN_PCM_ACT_CLOSE:
+ 		mutex_lock(&tas_hda->priv->codec_lock);
+ 		tasdevice_tuning_switch(tas_hda->priv, 1);
++		tas_hda->priv->playback_started = false;
+ 		mutex_unlock(&tas_hda->priv->codec_lock);
+ 
+ 		pm_runtime_mark_last_busy(dev);
+@@ -479,7 +481,7 @@ static int tas2781_save_calibration(struct tasdevice_priv *tas_priv)
+ 		dev_dbg(tas_priv->dev, "%4ld-%2d-%2d, %2d:%2d:%2d\n",
+ 			tm->tm_year, tm->tm_mon, tm->tm_mday,
+ 			tm->tm_hour, tm->tm_min, tm->tm_sec);
+-		tas2781_apply_calib(tas_priv);
++		tasdevice_apply_calibration(tas_priv);
+ 	} else
+ 		tas_priv->cali_data.total_sz = 0;
+ 
+@@ -582,7 +584,10 @@ static void tasdev_fw_ready(const struct firmware *fmw, void *context)
+ 	/* If the calibrated data contains errors, the DSP will still work
+ 	 * with the default calibrated data inside the algo.
+ 	 */
+-	tas2781_save_calibration(tas_priv);
++	tasdevice_save_calibration(tas_priv);
++
++	tasdevice_tuning_switch(tas_hda->priv, 0);
++	tas_hda->priv->playback_started = true;
+ 
+ out:
+ 	mutex_unlock(&tas_hda->priv->codec_lock);
+@@ -683,10 +688,6 @@ static int tas2781_hda_i2c_probe(struct i2c_client *clt)
+ 	const char *device_name;
+ 	int ret;
+ 
+-	if (strstr(dev_name(&clt->dev), "TIAS2781"))
+-		device_name = "TIAS2781";
+-	else
+-		return -ENODEV;
+ 
+ 	tas_hda = devm_kzalloc(&clt->dev, sizeof(*tas_hda), GFP_KERNEL);
+ 	if (!tas_hda)
+@@ -699,6 +700,13 @@ static int tas2781_hda_i2c_probe(struct i2c_client *clt)
+ 	if (!tas_hda->priv)
+ 		return -ENOMEM;
+ 
++	if (strstr(dev_name(&clt->dev), "TIAS2781")) {
++		device_name = "TIAS2781";
++		tas_hda->priv->save_calibration = tas2781_save_calibration;
++		tas_hda->priv->apply_calibration = tas2781_apply_calib;
++	} else
++		return -ENODEV;
++
+ 	tas_hda->priv->irq_info.irq = clt->irq;
+ 	ret = tas2781_read_acpi(tas_hda->priv, device_name);
+ 	if (ret)
+@@ -740,23 +748,19 @@ static void tas2781_hda_i2c_remove(struct i2c_client *clt)
+ static int tas2781_runtime_suspend(struct device *dev)
+ {
+ 	struct tas2781_hda *tas_hda = dev_get_drvdata(dev);
+-	int i;
+ 
+ 	dev_dbg(tas_hda->dev, "Runtime Suspend\n");
+ 
+ 	mutex_lock(&tas_hda->priv->codec_lock);
+ 
++	/* The driver powers up the amplifiers at module load time.
++	 * Stop the playback if it's unused.
++	 */
+ 	if (tas_hda->priv->playback_started) {
+ 		tasdevice_tuning_switch(tas_hda->priv, 1);
+ 		tas_hda->priv->playback_started = false;
+ 	}
+ 
+-	for (i = 0; i < tas_hda->priv->ndev; i++) {
+-		tas_hda->priv->tasdevice[i].cur_book = -1;
+-		tas_hda->priv->tasdevice[i].cur_prog = -1;
+-		tas_hda->priv->tasdevice[i].cur_conf = -1;
+-	}
+-
+ 	mutex_unlock(&tas_hda->priv->codec_lock);
+ 
+ 	return 0;
+@@ -765,8 +769,6 @@ static int tas2781_runtime_suspend(struct device *dev)
+ static int tas2781_runtime_resume(struct device *dev)
+ {
+ 	struct tas2781_hda *tas_hda = dev_get_drvdata(dev);
+-	unsigned long calib_data_sz =
+-		tas_hda->priv->ndev * TASDEVICE_SPEAKER_CALIBRATION_SIZE;
+ 
+ 	dev_dbg(tas_hda->dev, "Runtime Resume\n");
+ 
+@@ -777,8 +779,7 @@ static int tas2781_runtime_resume(struct device *dev)
+ 	/* If the calibrated data contains errors, the DSP will still work
+ 	 * with the default calibrated data inside the algo.
+ 	 */
+-	if (tas_hda->priv->cali_data.total_sz > calib_data_sz)
+-		tas2781_apply_calib(tas_hda->priv);
++	tasdevice_apply_calibration(tas_hda->priv);
+ 
+ 	mutex_unlock(&tas_hda->priv->codec_lock);
+ 
+@@ -788,16 +789,16 @@ static int tas2781_runtime_resume(struct device *dev)
+ static int tas2781_system_suspend(struct device *dev)
+ {
+ 	struct tas2781_hda *tas_hda = dev_get_drvdata(dev);
+-	int ret;
+ 
+ 	dev_dbg(tas_hda->priv->dev, "System Suspend\n");
+ 
+-	ret = pm_runtime_force_suspend(dev);
+-	if (ret)
+-		return ret;
++	mutex_lock(&tas_hda->priv->codec_lock);
+ 
+ 	/* Shutdown chip before system suspend */
+-	tasdevice_tuning_switch(tas_hda->priv, 1);
++	if (tas_hda->priv->playback_started)
++		tasdevice_tuning_switch(tas_hda->priv, 1);
++
++	mutex_unlock(&tas_hda->priv->codec_lock);
+ 
+ 	/*
+ 	 * Reset GPIO may be shared, so cannot reset here.
+@@ -809,15 +810,9 @@ static int tas2781_system_suspend(struct device *dev)
+ static int tas2781_system_resume(struct device *dev)
+ {
+ 	struct tas2781_hda *tas_hda = dev_get_drvdata(dev);
+-	unsigned long calib_data_sz =
+-		tas_hda->priv->ndev * TASDEVICE_SPEAKER_CALIBRATION_SIZE;
+-	int i, ret;
+-
+-	dev_info(tas_hda->priv->dev, "System Resume\n");
++	int i;
+ 
+-	ret = pm_runtime_force_resume(dev);
+-	if (ret)
+-		return ret;
++	dev_dbg(tas_hda->priv->dev, "System Resume\n");
+ 
+ 	mutex_lock(&tas_hda->priv->codec_lock);
+ 
+@@ -832,8 +827,11 @@ static int tas2781_system_resume(struct device *dev)
+ 	/* If the calibrated data contains errors, the DSP will still work
+ 	 * with the default calibrated data inside the algo.
+ 	 */
+-	if (tas_hda->priv->cali_data.total_sz > calib_data_sz)
+-		tas2781_apply_calib(tas_hda->priv);
++	tasdevice_apply_calibration(tas_hda->priv);
++
++	if (tas_hda->priv->playback_started)
++		tasdevice_tuning_switch(tas_hda->priv, 0);
++
+ 	mutex_unlock(&tas_hda->priv->codec_lock);
+ 
+ 	return 0;
+diff --git a/sound/soc/amd/acp/acp-sof-mach.c b/sound/soc/amd/acp/acp-sof-mach.c
+index 5223033a122f8..354d0fc55299b 100644
+--- a/sound/soc/amd/acp/acp-sof-mach.c
++++ b/sound/soc/amd/acp/acp-sof-mach.c
+@@ -120,16 +120,14 @@ static int acp_sof_probe(struct platform_device *pdev)
+ 	if (dmi_id && dmi_id->driver_data)
+ 		acp_card_drvdata->tdm_mode = dmi_id->driver_data;
+ 
+-	acp_sofdsp_dai_links_create(card);
++	ret = acp_sofdsp_dai_links_create(card);
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret, "Failed to create DAI links\n");
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+-	if (ret) {
+-		dev_err(&pdev->dev,
+-				"devm_snd_soc_register_card(%s) failed: %d\n",
+-				card->name, ret);
+-		return ret;
+-	}
+-
++	if (ret)
++		return dev_err_probe(&pdev->dev, ret,
++				     "Failed to register card(%s)\n", card->name);
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 80ad60d485ea0..90360f8b3e81b 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -199,6 +199,20 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21HY"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21J2"),
++		}
++	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "21J0"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -234,6 +248,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82UG"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82UU"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -395,6 +416,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "8B2F"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8BD6"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/cs42l43.c b/sound/soc/codecs/cs42l43.c
+index d62c9f26c6325..5009cf64124ed 100644
+--- a/sound/soc/codecs/cs42l43.c
++++ b/sound/soc/codecs/cs42l43.c
+@@ -2175,7 +2175,10 @@ static int cs42l43_codec_probe(struct platform_device *pdev)
+ 	pm_runtime_use_autosuspend(priv->dev);
+ 	pm_runtime_set_active(priv->dev);
+ 	pm_runtime_get_noresume(priv->dev);
+-	devm_pm_runtime_enable(priv->dev);
++
++	ret = devm_pm_runtime_enable(priv->dev);
++	if (ret)
++		goto err_pm;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(cs42l43_irqs); i++) {
+ 		ret = cs42l43_request_irq(priv, dom, cs42l43_irqs[i].name,
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index ea08b7cfc31da..e0da151508309 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -3829,6 +3829,16 @@ static const struct dmi_system_id dmi_platform_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ 		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
+ 		  DMI_EXACT_MATCH(DMI_BOARD_VERSION, "Default string"),
++		  /*
++		   * Above strings are too generic, LattePanda BIOS versions for
++		   * all 4 hw revisions are:
++		   * DF-BI-7-S70CR100-*
++		   * DF-BI-7-S70CR110-*
++		   * DF-BI-7-S70CR200-*
++		   * LP-BS-7-S70CR700-*
++		   * Do a partial match for S70CR to avoid false positive matches.
++		   */
++		  DMI_MATCH(DMI_BIOS_VERSION, "S70CR"),
+ 		},
+ 		.driver_data = (void *)&lattepanda_board_platform_data,
+ 	},
+diff --git a/sound/soc/codecs/tas2781-comlib.c b/sound/soc/codecs/tas2781-comlib.c
+index add16302f711e..5d0e5348b361a 100644
+--- a/sound/soc/codecs/tas2781-comlib.c
++++ b/sound/soc/codecs/tas2781-comlib.c
+@@ -413,6 +413,21 @@ void tasdevice_remove(struct tasdevice_priv *tas_priv)
+ }
+ EXPORT_SYMBOL_GPL(tasdevice_remove);
+ 
++int tasdevice_save_calibration(struct tasdevice_priv *tas_priv)
++{
++	if (tas_priv->save_calibration)
++		return tas_priv->save_calibration(tas_priv);
++	return -EINVAL;
++}
++EXPORT_SYMBOL_GPL(tasdevice_save_calibration);
++
++void tasdevice_apply_calibration(struct tasdevice_priv *tas_priv)
++{
++	if (tas_priv->apply_calibration && tas_priv->cali_data.total_sz)
++		tas_priv->apply_calibration(tas_priv);
++}
++EXPORT_SYMBOL_GPL(tasdevice_apply_calibration);
++
+ static int tasdevice_clamp(int val, int max, unsigned int invert)
+ {
+ 	if (val > max)
+diff --git a/sound/soc/codecs/tlv320adc3xxx.c b/sound/soc/codecs/tlv320adc3xxx.c
+index 420bbf588efea..e100cc9f5c192 100644
+--- a/sound/soc/codecs/tlv320adc3xxx.c
++++ b/sound/soc/codecs/tlv320adc3xxx.c
+@@ -1429,7 +1429,7 @@ static int adc3xxx_i2c_probe(struct i2c_client *i2c)
+ 	return ret;
+ }
+ 
+-static void __exit adc3xxx_i2c_remove(struct i2c_client *client)
++static void adc3xxx_i2c_remove(struct i2c_client *client)
+ {
+ 	struct adc3xxx *adc3xxx = i2c_get_clientdata(client);
+ 
+@@ -1452,7 +1452,7 @@ static struct i2c_driver adc3xxx_i2c_driver = {
+ 		   .of_match_table = tlv320adc3xxx_of_match,
+ 		  },
+ 	.probe = adc3xxx_i2c_probe,
+-	.remove = __exit_p(adc3xxx_i2c_remove),
++	.remove = adc3xxx_i2c_remove,
+ 	.id_table = adc3xxx_i2c_id,
+ };
+ 
+diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
+index fb90ae6a8a344..7c6ed29831285 100644
+--- a/sound/soc/codecs/wm8962.c
++++ b/sound/soc/codecs/wm8962.c
+@@ -2229,6 +2229,9 @@ SND_SOC_DAPM_PGA_E("HPOUT", SND_SOC_NOPM, 0, 0, NULL, 0, hp_event,
+ 
+ SND_SOC_DAPM_OUTPUT("HPOUTL"),
+ SND_SOC_DAPM_OUTPUT("HPOUTR"),
++
++SND_SOC_DAPM_PGA("SPKOUTL Output", WM8962_CLASS_D_CONTROL_1, 6, 0, NULL, 0),
++SND_SOC_DAPM_PGA("SPKOUTR Output", WM8962_CLASS_D_CONTROL_1, 7, 0, NULL, 0),
+ };
+ 
+ static const struct snd_soc_dapm_widget wm8962_dapm_spk_mono_widgets[] = {
+@@ -2236,7 +2239,6 @@ SND_SOC_DAPM_MIXER("Speaker Mixer", WM8962_MIXER_ENABLES, 1, 0,
+ 		   spkmixl, ARRAY_SIZE(spkmixl)),
+ SND_SOC_DAPM_MUX_E("Speaker PGA", WM8962_PWR_MGMT_2, 4, 0, &spkoutl_mux,
+ 		   out_pga_event, SND_SOC_DAPM_POST_PMU),
+-SND_SOC_DAPM_PGA("Speaker Output", WM8962_CLASS_D_CONTROL_1, 7, 0, NULL, 0),
+ SND_SOC_DAPM_OUTPUT("SPKOUT"),
+ };
+ 
+@@ -2251,9 +2253,6 @@ SND_SOC_DAPM_MUX_E("SPKOUTL PGA", WM8962_PWR_MGMT_2, 4, 0, &spkoutl_mux,
+ SND_SOC_DAPM_MUX_E("SPKOUTR PGA", WM8962_PWR_MGMT_2, 3, 0, &spkoutr_mux,
+ 		   out_pga_event, SND_SOC_DAPM_POST_PMU),
+ 
+-SND_SOC_DAPM_PGA("SPKOUTR Output", WM8962_CLASS_D_CONTROL_1, 7, 0, NULL, 0),
+-SND_SOC_DAPM_PGA("SPKOUTL Output", WM8962_CLASS_D_CONTROL_1, 6, 0, NULL, 0),
+-
+ SND_SOC_DAPM_OUTPUT("SPKOUTL"),
+ SND_SOC_DAPM_OUTPUT("SPKOUTR"),
+ };
+@@ -2366,12 +2365,18 @@ static const struct snd_soc_dapm_route wm8962_spk_mono_intercon[] = {
+ 	{ "Speaker PGA", "Mixer", "Speaker Mixer" },
+ 	{ "Speaker PGA", "DAC", "DACL" },
+ 
+-	{ "Speaker Output", NULL, "Speaker PGA" },
+-	{ "Speaker Output", NULL, "SYSCLK" },
+-	{ "Speaker Output", NULL, "TOCLK" },
+-	{ "Speaker Output", NULL, "TEMP_SPK" },
++	{ "SPKOUTL Output", NULL, "Speaker PGA" },
++	{ "SPKOUTL Output", NULL, "SYSCLK" },
++	{ "SPKOUTL Output", NULL, "TOCLK" },
++	{ "SPKOUTL Output", NULL, "TEMP_SPK" },
+ 
+-	{ "SPKOUT", NULL, "Speaker Output" },
++	{ "SPKOUTR Output", NULL, "Speaker PGA" },
++	{ "SPKOUTR Output", NULL, "SYSCLK" },
++	{ "SPKOUTR Output", NULL, "TOCLK" },
++	{ "SPKOUTR Output", NULL, "TEMP_SPK" },
++
++	{ "SPKOUT", NULL, "SPKOUTL Output" },
++	{ "SPKOUT", NULL, "SPKOUTR Output" },
+ };
+ 
+ static const struct snd_soc_dapm_route wm8962_spk_stereo_intercon[] = {
+@@ -2914,8 +2919,12 @@ static int wm8962_set_fll(struct snd_soc_component *component, int fll_id, int s
+ 	switch (fll_id) {
+ 	case WM8962_FLL_MCLK:
+ 	case WM8962_FLL_BCLK:
++		fll1 |= (fll_id - 1) << WM8962_FLL_REFCLK_SRC_SHIFT;
++		break;
+ 	case WM8962_FLL_OSC:
+ 		fll1 |= (fll_id - 1) << WM8962_FLL_REFCLK_SRC_SHIFT;
++		snd_soc_component_update_bits(component, WM8962_PLL2,
++					      WM8962_OSC_ENA, WM8962_OSC_ENA);
+ 		break;
+ 	case WM8962_FLL_INT:
+ 		snd_soc_component_update_bits(component, WM8962_FLL_CONTROL_1,
+@@ -2924,7 +2933,7 @@ static int wm8962_set_fll(struct snd_soc_component *component, int fll_id, int s
+ 				    WM8962_FLL_FRC_NCO, WM8962_FLL_FRC_NCO);
+ 		break;
+ 	default:
+-		dev_err(component->dev, "Unknown FLL source %d\n", ret);
++		dev_err(component->dev, "Unknown FLL source %d\n", source);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 42466b4b1ca45..a290f498ba823 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -685,6 +685,18 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* Chuwi Vi8 dual-boot (CWI506) */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Insyde"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "i86"),
++			/* The above are too generic, also match BIOS info */
++			DMI_MATCH(DMI_BIOS_VERSION, "CHUWI2.D86JHBNR02"),
++		},
++		.driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		/* Chuwi Vi10 (CWI505) */
+ 		.matches = {
+diff --git a/sound/soc/meson/aiu.c b/sound/soc/meson/aiu.c
+index 7109b81cc3d0a..5d1419ed7a62d 100644
+--- a/sound/soc/meson/aiu.c
++++ b/sound/soc/meson/aiu.c
+@@ -212,11 +212,12 @@ static const char * const aiu_spdif_ids[] = {
+ static int aiu_clk_get(struct device *dev)
+ {
+ 	struct aiu *aiu = dev_get_drvdata(dev);
++	struct clk *pclk;
+ 	int ret;
+ 
+-	aiu->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(aiu->pclk))
+-		return dev_err_probe(dev, PTR_ERR(aiu->pclk), "Can't get the aiu pclk\n");
++	pclk = devm_clk_get_enabled(dev, "pclk");
++	if (IS_ERR(pclk))
++		return dev_err_probe(dev, PTR_ERR(pclk), "Can't get the aiu pclk\n");
+ 
+ 	aiu->spdif_mclk = devm_clk_get(dev, "spdif_mclk");
+ 	if (IS_ERR(aiu->spdif_mclk))
+@@ -233,18 +234,6 @@ static int aiu_clk_get(struct device *dev)
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "Can't get the spdif clocks\n");
+ 
+-	ret = clk_prepare_enable(aiu->pclk);
+-	if (ret) {
+-		dev_err(dev, "peripheral clock enable failed\n");
+-		return ret;
+-	}
+-
+-	ret = devm_add_action_or_reset(dev,
+-				       (void(*)(void *))clk_disable_unprepare,
+-				       aiu->pclk);
+-	if (ret)
+-		dev_err(dev, "failed to add reset action on pclk");
+-
+ 	return ret;
+ }
+ 
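
devm_clk_get_enabled() collapses the old four-step sequence (devm_clk_get(), clk_prepare_enable(), devm_add_action_or_reset() with a cast callback, plus error handling) into one call whose teardown (disable, unprepare, put) runs automatically on detach. It also removes the `(void(*)(void *))clk_disable_unprepare` cast, the same mismatched-prototype problem as the ALSA seq hunks above, and lets the clk pointer drop out of struct aiu entirely since nothing touches it after probe; the t9015 hunk below makes the identical conversion. A sketch of the idiom:

/* Sketch: one devm helper acquires, prepares and enables the clock,
 * and registers the matching teardown automatically.
 */
static int my_probe(struct device *dev)
{
	struct clk *pclk;

	pclk = devm_clk_get_enabled(dev, "pclk");
	if (IS_ERR(pclk))
		return dev_err_probe(dev, PTR_ERR(pclk),
				     "Can't get the peripheral clock\n");

	return 0;	/* nothing to store, nothing to undo by hand */
}
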
+diff --git a/sound/soc/meson/aiu.h b/sound/soc/meson/aiu.h
+index 393b6c2307e49..0f94c8bf60818 100644
+--- a/sound/soc/meson/aiu.h
++++ b/sound/soc/meson/aiu.h
+@@ -33,7 +33,6 @@ struct aiu_platform_data {
+ };
+ 
+ struct aiu {
+-	struct clk *pclk;
+ 	struct clk *spdif_mclk;
+ 	struct aiu_interface i2s;
+ 	struct aiu_interface spdif;
+diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c
+index 1c3d433cefd23..2cedbce738373 100644
+--- a/sound/soc/meson/axg-tdm-interface.c
++++ b/sound/soc/meson/axg-tdm-interface.c
+@@ -12,6 +12,9 @@
+ 
+ #include "axg-tdm.h"
+ 
++/* Maximum bit clock frequency according to the datasheets */
++#define MAX_SCLK 100000000 /* Hz */
++
+ enum {
+ 	TDM_IFACE_PAD,
+ 	TDM_IFACE_LOOPBACK,
+@@ -153,19 +156,27 @@ static int axg_tdm_iface_startup(struct snd_pcm_substream *substream,
+ 		return -EINVAL;
+ 	}
+ 
+-	/* Apply component wide rate symmetry */
+ 	if (snd_soc_component_active(dai->component)) {
++		/* Apply component wide rate symmetry */
+ 		ret = snd_pcm_hw_constraint_single(substream->runtime,
+ 						   SNDRV_PCM_HW_PARAM_RATE,
+ 						   iface->rate);
+-		if (ret < 0) {
+-			dev_err(dai->dev,
+-				"can't set iface rate constraint\n");
+-			return ret;
+-		}
++
++	} else {
++		/* Limit rate according to the slot number and width */
++		unsigned int max_rate =
++			MAX_SCLK / (iface->slots * iface->slot_width);
++		ret = snd_pcm_hw_constraint_minmax(substream->runtime,
++						   SNDRV_PCM_HW_PARAM_RATE,
++						   0, max_rate);
+ 	}
+ 
+-	return 0;
++	if (ret < 0)
++		dev_err(dai->dev, "can't set iface rate constraint\n");
++	else
++		ret = 0;
++
++	return ret;
+ }
+ 
+ static int axg_tdm_iface_set_stream(struct snd_pcm_substream *substream,
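
The new constraint converts the serial-clock ceiling into a per-stream sample-rate limit: the bit clock runs at rate * slots * slot_width, so under the 100 MHz cap an 8-slot, 32-bit frame tops out at 100000000 / 256 = 390625 Hz, while a 2-slot, 16-bit frame allows up to 3.125 MHz. The arithmetic in isolation:

/* Sketch: highest sample rate allowed by the serial clock ceiling. */
#define MAX_SCLK 100000000	/* Hz, per the datasheets */

static unsigned int tdm_max_rate(unsigned int slots, unsigned int slot_width)
{
	return MAX_SCLK / (slots * slot_width);
}

/* tdm_max_rate(8, 32) == 390625; tdm_max_rate(2, 16) == 3125000 */
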
+@@ -264,8 +275,8 @@ static int axg_tdm_iface_set_sclk(struct snd_soc_dai *dai,
+ 	srate = iface->slots * iface->slot_width * params_rate(params);
+ 
+ 	if (!iface->mclk_rate) {
+-		/* If no specific mclk is requested, default to bit clock * 4 */
+-		clk_set_rate(iface->mclk, 4 * srate);
++		/* If no specific mclk is requested, default to bit clock * 2 */
++		clk_set_rate(iface->mclk, 2 * srate);
+ 	} else {
+ 		/* Check if we can actually get the bit clock from mclk */
+ 		if (iface->mclk_rate % srate) {
+diff --git a/sound/soc/meson/t9015.c b/sound/soc/meson/t9015.c
+index 9c6b4dac68932..571f65788c592 100644
+--- a/sound/soc/meson/t9015.c
++++ b/sound/soc/meson/t9015.c
+@@ -48,7 +48,6 @@
+ #define POWER_CFG	0x10
+ 
+ struct t9015 {
+-	struct clk *pclk;
+ 	struct regulator *avdd;
+ };
+ 
+@@ -249,6 +248,7 @@ static int t9015_probe(struct platform_device *pdev)
+ 	struct t9015 *priv;
+ 	void __iomem *regs;
+ 	struct regmap *regmap;
++	struct clk *pclk;
+ 	int ret;
+ 
+ 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -256,26 +256,14 @@ static int t9015_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	platform_set_drvdata(pdev, priv);
+ 
+-	priv->pclk = devm_clk_get(dev, "pclk");
+-	if (IS_ERR(priv->pclk))
+-		return dev_err_probe(dev, PTR_ERR(priv->pclk), "failed to get core clock\n");
++	pclk = devm_clk_get_enabled(dev, "pclk");
++	if (IS_ERR(pclk))
++		return dev_err_probe(dev, PTR_ERR(pclk), "failed to get core clock\n");
+ 
+ 	priv->avdd = devm_regulator_get(dev, "AVDD");
+ 	if (IS_ERR(priv->avdd))
+ 		return dev_err_probe(dev, PTR_ERR(priv->avdd), "failed to get AVDD\n");
+ 
+-	ret = clk_prepare_enable(priv->pclk);
+-	if (ret) {
+-		dev_err(dev, "core clock enable failed\n");
+-		return ret;
+-	}
+-
+-	ret = devm_add_action_or_reset(dev,
+-			(void(*)(void *))clk_disable_unprepare,
+-			priv->pclk);
+-	if (ret)
+-		return ret;
+-
+ 	ret = device_reset(dev);
+ 	if (ret) {
+ 		dev_err(dev, "reset failed\n");
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index 860e66ec85e8a..9fa020ef7eab9 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -25,8 +25,6 @@
+ #define DEFAULT_MCLK_FS				256
+ #define CH_GRP_MAX				4  /* The max channel 8 / 2 */
+ #define MULTIPLEX_CH_MAX			10
+-#define CLK_PPM_MIN				-1000
+-#define CLK_PPM_MAX				1000
+ 
+ #define TRCM_TXRX 0
+ #define TRCM_TX 1
+@@ -53,20 +51,6 @@ struct rk_i2s_tdm_dev {
+ 	struct clk *hclk;
+ 	struct clk *mclk_tx;
+ 	struct clk *mclk_rx;
+-	/* The mclk_tx_src is parent of mclk_tx */
+-	struct clk *mclk_tx_src;
+-	/* The mclk_rx_src is parent of mclk_rx */
+-	struct clk *mclk_rx_src;
+-	/*
+-	 * The mclk_root0 and mclk_root1 are root parent and supplies for
+-	 * the different FS.
+-	 *
+-	 * e.g:
+-	 * mclk_root0 is VPLL0, used for FS=48000Hz
+-	 * mclk_root1 is VPLL1, used for FS=44100Hz
+-	 */
+-	struct clk *mclk_root0;
+-	struct clk *mclk_root1;
+ 	struct regmap *regmap;
+ 	struct regmap *grf;
+ 	struct snd_dmaengine_dai_dma_data capture_dma_data;
+@@ -76,19 +60,11 @@ struct rk_i2s_tdm_dev {
+ 	const struct rk_i2s_soc_data *soc_data;
+ 	bool is_master_mode;
+ 	bool io_multiplex;
+-	bool mclk_calibrate;
+ 	bool tdm_mode;
+-	unsigned int mclk_rx_freq;
+-	unsigned int mclk_tx_freq;
+-	unsigned int mclk_root0_freq;
+-	unsigned int mclk_root1_freq;
+-	unsigned int mclk_root0_initial_freq;
+-	unsigned int mclk_root1_initial_freq;
+ 	unsigned int frame_width;
+ 	unsigned int clk_trcm;
+ 	unsigned int i2s_sdis[CH_GRP_MAX];
+ 	unsigned int i2s_sdos[CH_GRP_MAX];
+-	int clk_ppm;
+ 	int refcount;
+ 	spinlock_t lock; /* xfer lock */
+ 	bool has_playback;
+@@ -114,12 +90,6 @@ static void i2s_tdm_disable_unprepare_mclk(struct rk_i2s_tdm_dev *i2s_tdm)
+ {
+ 	clk_disable_unprepare(i2s_tdm->mclk_tx);
+ 	clk_disable_unprepare(i2s_tdm->mclk_rx);
+-	if (i2s_tdm->mclk_calibrate) {
+-		clk_disable_unprepare(i2s_tdm->mclk_tx_src);
+-		clk_disable_unprepare(i2s_tdm->mclk_rx_src);
+-		clk_disable_unprepare(i2s_tdm->mclk_root0);
+-		clk_disable_unprepare(i2s_tdm->mclk_root1);
+-	}
+ }
+ 
+ /**
+@@ -142,29 +112,9 @@ static int i2s_tdm_prepare_enable_mclk(struct rk_i2s_tdm_dev *i2s_tdm)
+ 	ret = clk_prepare_enable(i2s_tdm->mclk_rx);
+ 	if (ret)
+ 		goto err_mclk_rx;
+-	if (i2s_tdm->mclk_calibrate) {
+-		ret = clk_prepare_enable(i2s_tdm->mclk_tx_src);
+-		if (ret)
+-			goto err_mclk_rx;
+-		ret = clk_prepare_enable(i2s_tdm->mclk_rx_src);
+-		if (ret)
+-			goto err_mclk_rx_src;
+-		ret = clk_prepare_enable(i2s_tdm->mclk_root0);
+-		if (ret)
+-			goto err_mclk_root0;
+-		ret = clk_prepare_enable(i2s_tdm->mclk_root1);
+-		if (ret)
+-			goto err_mclk_root1;
+-	}
+ 
+ 	return 0;
+ 
+-err_mclk_root1:
+-	clk_disable_unprepare(i2s_tdm->mclk_root0);
+-err_mclk_root0:
+-	clk_disable_unprepare(i2s_tdm->mclk_rx_src);
+-err_mclk_rx_src:
+-	clk_disable_unprepare(i2s_tdm->mclk_tx_src);
+ err_mclk_rx:
+ 	clk_disable_unprepare(i2s_tdm->mclk_tx);
+ err_mclk_tx:
+@@ -564,159 +514,6 @@ static void rockchip_i2s_tdm_xfer_resume(struct snd_pcm_substream *substream,
+ 			   I2S_XFER_RXS_START);
+ }
+ 
+-static int rockchip_i2s_tdm_clk_set_rate(struct rk_i2s_tdm_dev *i2s_tdm,
+-					 struct clk *clk, unsigned long rate,
+-					 int ppm)
+-{
+-	unsigned long rate_target;
+-	int delta, ret;
+-
+-	if (ppm == i2s_tdm->clk_ppm)
+-		return 0;
+-
+-	if (ppm < 0)
+-		delta = -1;
+-	else
+-		delta = 1;
+-
+-	delta *= (int)div64_u64((u64)rate * (u64)abs(ppm) + 500000,
+-				1000000);
+-
+-	rate_target = rate + delta;
+-
+-	if (!rate_target)
+-		return -EINVAL;
+-
+-	ret = clk_set_rate(clk, rate_target);
+-	if (ret)
+-		return ret;
+-
+-	i2s_tdm->clk_ppm = ppm;
+-
+-	return 0;
+-}
+-
+-static int rockchip_i2s_tdm_calibrate_mclk(struct rk_i2s_tdm_dev *i2s_tdm,
+-					   struct snd_pcm_substream *substream,
+-					   unsigned int lrck_freq)
+-{
+-	struct clk *mclk_root;
+-	struct clk *mclk_parent;
+-	unsigned int mclk_root_freq;
+-	unsigned int mclk_root_initial_freq;
+-	unsigned int mclk_parent_freq;
+-	unsigned int div, delta;
+-	u64 ppm;
+-	int ret;
+-
+-	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		mclk_parent = i2s_tdm->mclk_tx_src;
+-	else
+-		mclk_parent = i2s_tdm->mclk_rx_src;
+-
+-	switch (lrck_freq) {
+-	case 8000:
+-	case 16000:
+-	case 24000:
+-	case 32000:
+-	case 48000:
+-	case 64000:
+-	case 96000:
+-	case 192000:
+-		mclk_root = i2s_tdm->mclk_root0;
+-		mclk_root_freq = i2s_tdm->mclk_root0_freq;
+-		mclk_root_initial_freq = i2s_tdm->mclk_root0_initial_freq;
+-		mclk_parent_freq = DEFAULT_MCLK_FS * 192000;
+-		break;
+-	case 11025:
+-	case 22050:
+-	case 44100:
+-	case 88200:
+-	case 176400:
+-		mclk_root = i2s_tdm->mclk_root1;
+-		mclk_root_freq = i2s_tdm->mclk_root1_freq;
+-		mclk_root_initial_freq = i2s_tdm->mclk_root1_initial_freq;
+-		mclk_parent_freq = DEFAULT_MCLK_FS * 176400;
+-		break;
+-	default:
+-		dev_err(i2s_tdm->dev, "Invalid LRCK frequency: %u Hz\n",
+-			lrck_freq);
+-		return -EINVAL;
+-	}
+-
+-	ret = clk_set_parent(mclk_parent, mclk_root);
+-	if (ret)
+-		return ret;
+-
+-	ret = rockchip_i2s_tdm_clk_set_rate(i2s_tdm, mclk_root,
+-					    mclk_root_freq, 0);
+-	if (ret)
+-		return ret;
+-
+-	delta = abs(mclk_root_freq % mclk_parent_freq - mclk_parent_freq);
+-	ppm = div64_u64((uint64_t)delta * 1000000, (uint64_t)mclk_root_freq);
+-
+-	if (ppm) {
+-		div = DIV_ROUND_CLOSEST(mclk_root_initial_freq, mclk_parent_freq);
+-		if (!div)
+-			return -EINVAL;
+-
+-		mclk_root_freq = mclk_parent_freq * round_up(div, 2);
+-
+-		ret = clk_set_rate(mclk_root, mclk_root_freq);
+-		if (ret)
+-			return ret;
+-
+-		i2s_tdm->mclk_root0_freq = clk_get_rate(i2s_tdm->mclk_root0);
+-		i2s_tdm->mclk_root1_freq = clk_get_rate(i2s_tdm->mclk_root1);
+-	}
+-
+-	return clk_set_rate(mclk_parent, mclk_parent_freq);
+-}
+-
+-static int rockchip_i2s_tdm_set_mclk(struct rk_i2s_tdm_dev *i2s_tdm,
+-				     struct snd_pcm_substream *substream,
+-				     struct clk **mclk)
+-{
+-	unsigned int mclk_freq;
+-	int ret;
+-
+-	if (i2s_tdm->clk_trcm) {
+-		if (i2s_tdm->mclk_tx_freq != i2s_tdm->mclk_rx_freq) {
+-			dev_err(i2s_tdm->dev,
+-				"clk_trcm, tx: %d and rx: %d should be the same\n",
+-				i2s_tdm->mclk_tx_freq,
+-				i2s_tdm->mclk_rx_freq);
+-			return -EINVAL;
+-		}
+-
+-		ret = clk_set_rate(i2s_tdm->mclk_tx, i2s_tdm->mclk_tx_freq);
+-		if (ret)
+-			return ret;
+-
+-		ret = clk_set_rate(i2s_tdm->mclk_rx, i2s_tdm->mclk_rx_freq);
+-		if (ret)
+-			return ret;
+-
+-		/* mclk_rx is also ok. */
+-		*mclk = i2s_tdm->mclk_tx;
+-	} else {
+-		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+-			*mclk = i2s_tdm->mclk_tx;
+-			mclk_freq = i2s_tdm->mclk_tx_freq;
+-		} else {
+-			*mclk = i2s_tdm->mclk_rx;
+-			mclk_freq = i2s_tdm->mclk_rx_freq;
+-		}
+-
+-		ret = clk_set_rate(*mclk, mclk_freq);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	return 0;
+-}
+-
+ static int rockchip_i2s_ch_to_io(unsigned int ch, bool substream_capture)
+ {
+ 	if (substream_capture) {
+@@ -853,19 +650,17 @@ static int rockchip_i2s_tdm_hw_params(struct snd_pcm_substream *substream,
+ 				      struct snd_soc_dai *dai)
+ {
+ 	struct rk_i2s_tdm_dev *i2s_tdm = to_info(dai);
+-	struct clk *mclk;
+-	int ret = 0;
+ 	unsigned int val = 0;
+ 	unsigned int mclk_rate, bclk_rate, div_bclk = 4, div_lrck = 64;
++	int err;
+ 
+ 	if (i2s_tdm->is_master_mode) {
+-		if (i2s_tdm->mclk_calibrate)
+-			rockchip_i2s_tdm_calibrate_mclk(i2s_tdm, substream,
+-							params_rate(params));
++		struct clk *mclk = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) ?
++			i2s_tdm->mclk_tx : i2s_tdm->mclk_rx;
+ 
+-		ret = rockchip_i2s_tdm_set_mclk(i2s_tdm, substream, &mclk);
+-		if (ret)
+-			return ret;
++		err = clk_set_rate(mclk, DEFAULT_MCLK_FS * params_rate(params));
++		if (err)
++			return err;
+ 
+ 		mclk_rate = clk_get_rate(mclk);
+ 		bclk_rate = i2s_tdm->frame_width * params_rate(params);
+@@ -973,96 +768,6 @@ static int rockchip_i2s_tdm_trigger(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
+-static int rockchip_i2s_tdm_set_sysclk(struct snd_soc_dai *cpu_dai, int stream,
+-				       unsigned int freq, int dir)
+-{
+-	struct rk_i2s_tdm_dev *i2s_tdm = to_info(cpu_dai);
+-
+-	/* Put set mclk rate into rockchip_i2s_tdm_set_mclk() */
+-	if (i2s_tdm->clk_trcm) {
+-		i2s_tdm->mclk_tx_freq = freq;
+-		i2s_tdm->mclk_rx_freq = freq;
+-	} else {
+-		if (stream == SNDRV_PCM_STREAM_PLAYBACK)
+-			i2s_tdm->mclk_tx_freq = freq;
+-		else
+-			i2s_tdm->mclk_rx_freq = freq;
+-	}
+-
+-	dev_dbg(i2s_tdm->dev, "The target mclk_%s freq is: %d\n",
+-		stream ? "rx" : "tx", freq);
+-
+-	return 0;
+-}
+-
+-static int rockchip_i2s_tdm_clk_compensation_info(struct snd_kcontrol *kcontrol,
+-						  struct snd_ctl_elem_info *uinfo)
+-{
+-	uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+-	uinfo->count = 1;
+-	uinfo->value.integer.min = CLK_PPM_MIN;
+-	uinfo->value.integer.max = CLK_PPM_MAX;
+-	uinfo->value.integer.step = 1;
+-
+-	return 0;
+-}
+-
+-static int rockchip_i2s_tdm_clk_compensation_get(struct snd_kcontrol *kcontrol,
+-						 struct snd_ctl_elem_value *ucontrol)
+-{
+-	struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+-	struct rk_i2s_tdm_dev *i2s_tdm = snd_soc_dai_get_drvdata(dai);
+-
+-	ucontrol->value.integer.value[0] = i2s_tdm->clk_ppm;
+-
+-	return 0;
+-}
+-
+-static int rockchip_i2s_tdm_clk_compensation_put(struct snd_kcontrol *kcontrol,
+-						 struct snd_ctl_elem_value *ucontrol)
+-{
+-	struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+-	struct rk_i2s_tdm_dev *i2s_tdm = snd_soc_dai_get_drvdata(dai);
+-	int ret = 0, ppm = 0;
+-	int changed = 0;
+-	unsigned long old_rate;
+-
+-	if (ucontrol->value.integer.value[0] < CLK_PPM_MIN ||
+-	    ucontrol->value.integer.value[0] > CLK_PPM_MAX)
+-		return -EINVAL;
+-
+-	ppm = ucontrol->value.integer.value[0];
+-
+-	old_rate = clk_get_rate(i2s_tdm->mclk_root0);
+-	ret = rockchip_i2s_tdm_clk_set_rate(i2s_tdm, i2s_tdm->mclk_root0,
+-					    i2s_tdm->mclk_root0_freq, ppm);
+-	if (ret)
+-		return ret;
+-	if (old_rate != clk_get_rate(i2s_tdm->mclk_root0))
+-		changed = 1;
+-
+-	if (clk_is_match(i2s_tdm->mclk_root0, i2s_tdm->mclk_root1))
+-		return changed;
+-
+-	old_rate = clk_get_rate(i2s_tdm->mclk_root1);
+-	ret = rockchip_i2s_tdm_clk_set_rate(i2s_tdm, i2s_tdm->mclk_root1,
+-					    i2s_tdm->mclk_root1_freq, ppm);
+-	if (ret)
+-		return ret;
+-	if (old_rate != clk_get_rate(i2s_tdm->mclk_root1))
+-		changed = 1;
+-
+-	return changed;
+-}
+-
+-static struct snd_kcontrol_new rockchip_i2s_tdm_compensation_control = {
+-	.iface = SNDRV_CTL_ELEM_IFACE_PCM,
+-	.name = "PCM Clock Compensation in PPM",
+-	.info = rockchip_i2s_tdm_clk_compensation_info,
+-	.get = rockchip_i2s_tdm_clk_compensation_get,
+-	.put = rockchip_i2s_tdm_clk_compensation_put,
+-};
+-
+ static int rockchip_i2s_tdm_dai_probe(struct snd_soc_dai *dai)
+ {
+ 	struct rk_i2s_tdm_dev *i2s_tdm = snd_soc_dai_get_drvdata(dai);
+@@ -1072,9 +777,6 @@ static int rockchip_i2s_tdm_dai_probe(struct snd_soc_dai *dai)
+ 	if (i2s_tdm->has_playback)
+ 		snd_soc_dai_dma_data_set_playback(dai, &i2s_tdm->playback_dma_data);
+ 
+-	if (i2s_tdm->mclk_calibrate)
+-		snd_soc_add_dai_controls(dai, &rockchip_i2s_tdm_compensation_control, 1);
+-
+ 	return 0;
+ }
+ 
+@@ -1115,7 +817,6 @@ static const struct snd_soc_dai_ops rockchip_i2s_tdm_dai_ops = {
+ 	.probe = rockchip_i2s_tdm_dai_probe,
+ 	.hw_params = rockchip_i2s_tdm_hw_params,
+ 	.set_bclk_ratio	= rockchip_i2s_tdm_set_bclk_ratio,
+-	.set_sysclk = rockchip_i2s_tdm_set_sysclk,
+ 	.set_fmt = rockchip_i2s_tdm_set_fmt,
+ 	.set_tdm_slot = rockchip_dai_tdm_slot,
+ 	.trigger = rockchip_i2s_tdm_trigger,
+@@ -1444,35 +1145,6 @@ static void rockchip_i2s_tdm_path_config(struct rk_i2s_tdm_dev *i2s_tdm,
+ 		rockchip_i2s_tdm_tx_path_config(i2s_tdm, num);
+ }
+ 
+-static int rockchip_i2s_tdm_get_calibrate_mclks(struct rk_i2s_tdm_dev *i2s_tdm)
+-{
+-	int num_mclks = 0;
+-
+-	i2s_tdm->mclk_tx_src = devm_clk_get(i2s_tdm->dev, "mclk_tx_src");
+-	if (!IS_ERR(i2s_tdm->mclk_tx_src))
+-		num_mclks++;
+-
+-	i2s_tdm->mclk_rx_src = devm_clk_get(i2s_tdm->dev, "mclk_rx_src");
+-	if (!IS_ERR(i2s_tdm->mclk_rx_src))
+-		num_mclks++;
+-
+-	i2s_tdm->mclk_root0 = devm_clk_get(i2s_tdm->dev, "mclk_root0");
+-	if (!IS_ERR(i2s_tdm->mclk_root0))
+-		num_mclks++;
+-
+-	i2s_tdm->mclk_root1 = devm_clk_get(i2s_tdm->dev, "mclk_root1");
+-	if (!IS_ERR(i2s_tdm->mclk_root1))
+-		num_mclks++;
+-
+-	if (num_mclks < 4 && num_mclks != 0)
+-		return -ENOENT;
+-
+-	if (num_mclks == 4)
+-		i2s_tdm->mclk_calibrate = 1;
+-
+-	return 0;
+-}
+-
+ static int rockchip_i2s_tdm_path_prepare(struct rk_i2s_tdm_dev *i2s_tdm,
+ 					 struct device_node *np,
+ 					 bool is_rx_path)
+@@ -1610,11 +1282,6 @@ static int rockchip_i2s_tdm_probe(struct platform_device *pdev)
+ 	i2s_tdm->io_multiplex =
+ 		of_property_read_bool(node, "rockchip,io-multiplex");
+ 
+-	ret = rockchip_i2s_tdm_get_calibrate_mclks(i2s_tdm);
+-	if (ret)
+-		return dev_err_probe(i2s_tdm->dev, ret,
+-				     "mclk-calibrate clocks missing");
+-
+ 	regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(regs)) {
+ 		return dev_err_probe(i2s_tdm->dev, PTR_ERR(regs),
+@@ -1667,13 +1334,6 @@ static int rockchip_i2s_tdm_probe(struct platform_device *pdev)
+ 		goto err_disable_hclk;
+ 	}
+ 
+-	if (i2s_tdm->mclk_calibrate) {
+-		i2s_tdm->mclk_root0_initial_freq = clk_get_rate(i2s_tdm->mclk_root0);
+-		i2s_tdm->mclk_root1_initial_freq = clk_get_rate(i2s_tdm->mclk_root1);
+-		i2s_tdm->mclk_root0_freq = i2s_tdm->mclk_root0_initial_freq;
+-		i2s_tdm->mclk_root1_freq = i2s_tdm->mclk_root1_initial_freq;
+-	}
+-
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	regmap_update_bits(i2s_tdm->regmap, I2S_DMACR, I2S_DMACR_TDL_MASK,
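
The master-mode branch above now derives the clocks inline: mclk is
DEFAULT_MCLK_FS times the sample rate, bclk is frame_width times the
sample rate, and the dividers fall out of those two. A standalone C
sketch of that arithmetic (DEFAULT_MCLK_FS = 256 and frame_width = 64
are assumptions for illustration, chosen so the result matches the
hunk's div_bclk = 4 and div_lrck = 64 defaults):

    #include <stdio.h>

    #define DEFAULT_MCLK_FS 256  /* assumed value of the driver macro */

    int main(void)
    {
        unsigned int rate = 48000;       /* sample rate from hw_params */
        unsigned int frame_width = 64;   /* assumed TDM frame width */
        unsigned int mclk = DEFAULT_MCLK_FS * rate;
        unsigned int bclk = frame_width * rate;

        printf("mclk=%u bclk=%u div_bclk=%u div_lrck=%u\n",
               mclk, bclk, mclk / bclk, bclk / rate);
        return 0;
    }
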
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index 14cf1a41fb0d1..9d103646973ad 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -1015,7 +1015,7 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ 					       dev_name(&pdev->dev), ssi);
+ 			if (ret < 0)
+ 				return dev_err_probe(&pdev->dev, ret,
+-						"irq request error (dma_tx)\n");
++						     "irq request error (dma_rt)\n");
+ 		} else {
+ 			if (ssi->irq_tx < 0)
+ 				return ssi->irq_tx;
+diff --git a/sound/soc/sof/amd/acp.c b/sound/soc/sof/amd/acp.c
+index 603ea5fc0d0d4..c6f637f298476 100644
+--- a/sound/soc/sof/amd/acp.c
++++ b/sound/soc/sof/amd/acp.c
+@@ -547,17 +547,27 @@ int amd_sof_acp_probe(struct snd_sof_dev *sdev)
+ 	adata->signed_fw_image = false;
+ 	dmi_id = dmi_first_match(acp_sof_quirk_table);
+ 	if (dmi_id && dmi_id->driver_data) {
+-		adata->fw_code_bin = kasprintf(GFP_KERNEL, "%s/sof-%s-code.bin",
+-					       plat_data->fw_filename_prefix,
+-					       chip->name);
+-		adata->fw_data_bin = kasprintf(GFP_KERNEL, "%s/sof-%s-data.bin",
+-					       plat_data->fw_filename_prefix,
+-					       chip->name);
+-		adata->signed_fw_image = dmi_id->driver_data;
++		adata->fw_code_bin = devm_kasprintf(sdev->dev, GFP_KERNEL,
++						    "%s/sof-%s-code.bin",
++						    plat_data->fw_filename_prefix,
++						    chip->name);
++		if (!adata->fw_code_bin) {
++			ret = -ENOMEM;
++			goto free_ipc_irq;
++		}
++
++		adata->fw_data_bin = devm_kasprintf(sdev->dev, GFP_KERNEL,
++						    "%s/sof-%s-data.bin",
++						    plat_data->fw_filename_prefix,
++						    chip->name);
++		if (!adata->fw_data_bin) {
++			ret = -ENOMEM;
++			goto free_ipc_irq;
++		}
+ 
+-		dev_dbg(sdev->dev, "fw_code_bin:%s, fw_data_bin:%s\n", adata->fw_code_bin,
+-			adata->fw_data_bin);
++		adata->signed_fw_image = dmi_id->driver_data;
+ 	}
++
+ 	adata->enable_fw_debug = enable_fw_debug;
+ 	acp_memory_init(sdev);
+ 
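
The acp.c hunk above switches to devm_kasprintf() and unwinds to
free_ipc_irq when either allocation fails. A userspace sketch of the
same allocate-then-unwind shape, using GNU asprintf() in place of the
kernel API (the prefix and chip strings are illustrative):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>

    static int build_fw_paths(const char *prefix, const char *chip,
                              char **code_bin, char **data_bin)
    {
        if (asprintf(code_bin, "%s/sof-%s-code.bin", prefix, chip) < 0) {
            *code_bin = NULL;
            return -1;
        }
        if (asprintf(data_bin, "%s/sof-%s-data.bin", prefix, chip) < 0) {
            *data_bin = NULL;
            free(*code_bin);   /* unwind the earlier allocation */
            *code_bin = NULL;
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        char *code, *data;

        if (build_fw_paths("amd/vangogh", "acp", &code, &data))
            return 1;
        printf("%s\n%s\n", code, data);
        free(code);
        free(data);
        return 0;
    }
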
+diff --git a/sound/soc/sof/ipc3-loader.c b/sound/soc/sof/ipc3-loader.c
+index 28218766d2114..6e3ef06721106 100644
+--- a/sound/soc/sof/ipc3-loader.c
++++ b/sound/soc/sof/ipc3-loader.c
+@@ -148,6 +148,8 @@ static size_t sof_ipc3_fw_parse_ext_man(struct snd_sof_dev *sdev)
+ 
+ 	head = (struct sof_ext_man_header *)fw->data;
+ 	remaining = head->full_size - head->header_size;
++	if (remaining < 0 || remaining > sdev->basefw.fw->size)
++		return -EINVAL;
+ 	ext_man_size = ipc3_fw_ext_man_size(sdev, fw);
+ 
+ 	/* Assert firmware starts with extended manifest */
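
The ipc3-loader.c hunk rejects a manifest whose header-declared sizes
do not fit the firmware blob before anything dereferences them. A
minimal sketch of that kind of untrusted-size validation (the struct
layout is illustrative, not the real SOF manifest):

    #include <stddef.h>
    #include <stdio.h>

    struct ext_hdr { int full_size; int header_size; };  /* illustrative */

    static int check_hdr(const struct ext_hdr *h, size_t blob_size)
    {
        int remaining = h->full_size - h->header_size;

        if (remaining < 0 || (size_t)remaining > blob_size)
            return -1;   /* corrupt or truncated image: reject early */
        return 0;
    }

    int main(void)
    {
        struct ext_hdr good = { .full_size = 64, .header_size = 16 };
        struct ext_hdr bad  = { .full_size = 8,  .header_size = 16 };

        printf("%d %d\n", check_hdr(&good, 128), check_hdr(&bad, 128));
        return 0;
    }
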
+diff --git a/sound/soc/sof/ipc4-pcm.c b/sound/soc/sof/ipc4-pcm.c
+index 39039a647cca3..ea70c0d7cf75a 100644
+--- a/sound/soc/sof/ipc4-pcm.c
++++ b/sound/soc/sof/ipc4-pcm.c
+@@ -413,7 +413,18 @@ static int sof_ipc4_trigger_pipelines(struct snd_soc_component *component,
+ 	ret = sof_ipc4_set_multi_pipeline_state(sdev, state, trigger_list);
+ 	if (ret < 0) {
+ 		dev_err(sdev->dev, "failed to set final state %d for all pipelines\n", state);
+-		goto free;
++		/*
++		 * workaround: if the firmware is crashed while setting the
++		 * pipelines to reset state we must ignore the error code and
++		 * reset it to 0.
++		 * Since the firmware is crashed we will not send IPC messages
++		 * and we are going to see errors printed, but the state of the
++		 * widgets will be correct for the next boot.
++		 */
++		if (sdev->fw_state != SOF_FW_CRASHED || state != SOF_IPC4_PIPE_RESET)
++			goto free;
++
++		ret = 0;
+ 	}
+ 
+ 	/* update RUNNING/RESET state for all pipelines that were just triggered */
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index 3d4add94e367d..d5409f3879455 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -300,9 +300,12 @@ static struct snd_pcm_chmap_elem *convert_chmap(int channels, unsigned int bits,
+ 	c = 0;
+ 
+ 	if (bits) {
+-		for (; bits && *maps; maps++, bits >>= 1)
++		for (; bits && *maps; maps++, bits >>= 1) {
+ 			if (bits & 1)
+ 				chmap->map[c++] = *maps;
++			if (c == chmap->channels)
++				break;
++		}
+ 	} else {
+ 		/* If we're missing wChannelConfig, then guess something
+ 		    to make sure the channel map is not skipped entirely */
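
The stream.c change bounds the bitmask walk so the destination map
cannot be overrun once it is full. A self-contained sketch of the same
guarded loop (sizes are illustrative):

    #include <stdio.h>

    int main(void)
    {
        unsigned int bits = 0x3f;                  /* 6 channel bits set */
        const int maps[] = {1, 2, 3, 4, 5, 6, 0};  /* 0-terminated table */
        const int *m = maps;
        int dst[4];                                /* capacity 4 < 6 set bits */
        unsigned int c = 0;
        const unsigned int capacity = sizeof(dst) / sizeof(dst[0]);

        for (; bits && *m; m++, bits >>= 1) {
            if (bits & 1)
                dst[c++] = *m;
            if (c == capacity)   /* the fix: stop once dst is full */
                break;
        }
        printf("filled %u entries, first=%d\n", c, dst[0]);
        return 0;
    }
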
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 7ec4f5671e7a9..865bfa869a5b2 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -2294,7 +2294,7 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
+ 	int map_fd;
+ 
+ 	profile_perf_events = calloc(
+-		sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
++		obj->rodata->num_cpu * obj->rodata->num_metric, sizeof(int));
+ 	if (!profile_perf_events) {
+ 		p_err("failed to allocate memory for perf_event array: %s",
+ 		      strerror(errno));
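
The bpftool hunk swaps the calloc() arguments into the documented
(nmemb, size) order; the byte count is the same either way, but keeping
the canonical order matches the signature and what compiler diagnostics
expect. A minimal sketch (counts are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num_cpu = 4, num_metric = 8;   /* illustrative counts */
        int *events = calloc((size_t)num_cpu * num_metric, sizeof(int));

        if (!events) {
            perror("calloc");
            return 1;
        }
        printf("allocated %d zeroed slots\n", num_cpu * num_metric);
        free(events);
        return 0;
    }
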
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index 27a23196d58e1..d9520cb826b31 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -70,6 +70,7 @@
+ #include <sys/stat.h>
+ #include <fcntl.h>
+ #include <errno.h>
++#include <linux/btf_ids.h>
+ #include <linux/rbtree.h>
+ #include <linux/zalloc.h>
+ #include <linux/err.h>
+@@ -78,7 +79,7 @@
+ #include <subcmd/parse-options.h>
+ 
+ #define BTF_IDS_SECTION	".BTF_ids"
+-#define BTF_ID		"__BTF_ID__"
++#define BTF_ID_PREFIX	"__BTF_ID__"
+ 
+ #define BTF_STRUCT	"struct"
+ #define BTF_UNION	"union"
+@@ -89,6 +90,14 @@
+ 
+ #define ADDR_CNT	100
+ 
++#if __BYTE_ORDER == __LITTLE_ENDIAN
++# define ELFDATANATIVE	ELFDATA2LSB
++#elif __BYTE_ORDER == __BIG_ENDIAN
++# define ELFDATANATIVE	ELFDATA2MSB
++#else
++# error "Unknown machine endianness!"
++#endif
++
+ struct btf_id {
+ 	struct rb_node	 rb_node;
+ 	char		*name;
+@@ -116,6 +125,7 @@ struct object {
+ 		int		 idlist_shndx;
+ 		size_t		 strtabidx;
+ 		unsigned long	 idlist_addr;
++		int		 encoding;
+ 	} efile;
+ 
+ 	struct rb_root	sets;
+@@ -161,7 +171,7 @@ static int eprintf(int level, int var, const char *fmt, ...)
+ 
+ static bool is_btf_id(const char *name)
+ {
+-	return name && !strncmp(name, BTF_ID, sizeof(BTF_ID) - 1);
++	return name && !strncmp(name, BTF_ID_PREFIX, sizeof(BTF_ID_PREFIX) - 1);
+ }
+ 
+ static struct btf_id *btf_id__find(struct rb_root *root, const char *name)
+@@ -319,6 +329,7 @@ static int elf_collect(struct object *obj)
+ {
+ 	Elf_Scn *scn = NULL;
+ 	size_t shdrstrndx;
++	GElf_Ehdr ehdr;
+ 	int idx = 0;
+ 	Elf *elf;
+ 	int fd;
+@@ -350,6 +361,13 @@ static int elf_collect(struct object *obj)
+ 		return -1;
+ 	}
+ 
++	if (gelf_getehdr(obj->efile.elf, &ehdr) == NULL) {
++		pr_err("FAILED cannot get ELF header: %s\n",
++			elf_errmsg(-1));
++		return -1;
++	}
++	obj->efile.encoding = ehdr.e_ident[EI_DATA];
++
+ 	/*
+ 	 * Scan all the elf sections and look for save data
+ 	 * from .BTF_ids section and symbols.
+@@ -441,7 +459,7 @@ static int symbols_collect(struct object *obj)
+ 		 * __BTF_ID__TYPE__vfs_truncate__0
+ 		 * prefix =  ^
+ 		 */
+-		prefix = name + sizeof(BTF_ID) - 1;
++		prefix = name + sizeof(BTF_ID_PREFIX) - 1;
+ 
+ 		/* struct */
+ 		if (!strncmp(prefix, BTF_STRUCT, sizeof(BTF_STRUCT) - 1)) {
+@@ -649,19 +667,18 @@ static int cmp_id(const void *pa, const void *pb)
+ static int sets_patch(struct object *obj)
+ {
+ 	Elf_Data *data = obj->efile.idlist;
+-	int *ptr = data->d_buf;
+ 	struct rb_node *next;
+ 
+ 	next = rb_first(&obj->sets);
+ 	while (next) {
+-		unsigned long addr, idx;
++		struct btf_id_set8 *set8;
++		struct btf_id_set *set;
++		unsigned long addr, off;
+ 		struct btf_id *id;
+-		int *base;
+-		int cnt;
+ 
+ 		id   = rb_entry(next, struct btf_id, rb_node);
+ 		addr = id->addr[0];
+-		idx  = addr - obj->efile.idlist_addr;
++		off = addr - obj->efile.idlist_addr;
+ 
+ 		/* sets are unique */
+ 		if (id->addr_cnt != 1) {
+@@ -670,14 +687,39 @@ static int sets_patch(struct object *obj)
+ 			return -1;
+ 		}
+ 
+-		idx = idx / sizeof(int);
+-		base = &ptr[idx] + (id->is_set8 ? 2 : 1);
+-		cnt = ptr[idx];
++		if (id->is_set) {
++			set = data->d_buf + off;
++			qsort(set->ids, set->cnt, sizeof(set->ids[0]), cmp_id);
++		} else {
++			set8 = data->d_buf + off;
++			/*
++			 * Make sure id is at the beginning of the pairs
++			 * struct, otherwise the below qsort would not work.
++			 */
++			BUILD_BUG_ON(set8->pairs != &set8->pairs[0].id);
++			qsort(set8->pairs, set8->cnt, sizeof(set8->pairs[0]), cmp_id);
+ 
+-		pr_debug("sorting  addr %5lu: cnt %6d [%s]\n",
+-			 (idx + 1) * sizeof(int), cnt, id->name);
++			/*
++			 * When ELF endianness does not match endianness of the
++			 * host, libelf will do the translation when updating
++			 * the ELF. This, however, corrupts SET8 flags which are
++			 * already in the target endianness. So, let's bswap
++			 * them to the host endianness and libelf will then
++			 * correctly translate everything.
++			 */
++			if (obj->efile.encoding != ELFDATANATIVE) {
++				int i;
++
++				set8->flags = bswap_32(set8->flags);
++				for (i = 0; i < set8->cnt; i++) {
++					set8->pairs[i].flags =
++						bswap_32(set8->pairs[i].flags);
++				}
++			}
++		}
+ 
+-		qsort(base, cnt, id->is_set8 ? sizeof(uint64_t) : sizeof(int), cmp_id);
++		pr_debug("sorting  addr %5lu: cnt %6d [%s]\n",
++			 off, id->is_set ? set->cnt : set8->cnt, id->name);
+ 
+ 		next = rb_next(next);
+ 	}
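
The sets_patch() comment above explains the trick: SET8 flags are
already stored target-endian, so when the ELF's encoding differs from
the host they are byte-swapped to host order first, letting libelf's
later whole-file translation produce correct output. A standalone
sketch of that pre-swap decision (glibc's <byteswap.h> and <endian.h>
are assumed):

    #include <byteswap.h>
    #include <endian.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int file_encoding_is_le = 0;   /* assumed: big-endian object file */
        int host_is_le = (__BYTE_ORDER == __LITTLE_ENDIAN);
        uint32_t flags = 0x11223344;   /* field already in file endianness */

        /* pre-swap so a later whole-buffer translation lands correct */
        if (file_encoding_is_le != host_is_le)
            flags = bswap_32(flags);
        printf("flags after fixup: 0x%08x\n", flags);
        return 0;
    }
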
+diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h
+index 2f882d5cb30f5..72535f00572f6 100644
+--- a/tools/include/linux/btf_ids.h
++++ b/tools/include/linux/btf_ids.h
+@@ -8,6 +8,15 @@ struct btf_id_set {
+ 	u32 ids[];
+ };
+ 
++struct btf_id_set8 {
++	u32 cnt;
++	u32 flags;
++	struct {
++		u32 id;
++		u32 flags;
++	} pairs[];
++};
++
+ #ifdef CONFIG_DEBUG_INFO_BTF
+ 
+ #include <linux/compiler.h> /* for __PASTE */
+diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
+index d0f53772bdc02..dad7917903d19 100644
+--- a/tools/lib/bpf/bpf.h
++++ b/tools/lib/bpf/bpf.h
+@@ -35,7 +35,7 @@
+ extern "C" {
+ #endif
+ 
+-int libbpf_set_memlock_rlim(size_t memlock_bytes);
++LIBBPF_API int libbpf_set_memlock_rlim(size_t memlock_bytes);
+ 
+ struct bpf_map_create_opts {
+ 	size_t sz; /* size of this struct for forward/backward compatibility */
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index df1b550f7460a..ca83af916ccb0 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -70,6 +70,7 @@
+ 
+ static struct bpf_map *bpf_object__add_map(struct bpf_object *obj);
+ static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog);
++static int map_set_def_max_entries(struct bpf_map *map);
+ 
+ static const char * const attach_type_name[] = {
+ 	[BPF_CGROUP_INET_INGRESS]	= "cgroup_inet_ingress",
+@@ -5214,6 +5215,9 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ 
+ 	if (bpf_map_type__is_map_in_map(def->type)) {
+ 		if (map->inner_map) {
++			err = map_set_def_max_entries(map->inner_map);
++			if (err)
++				return err;
+ 			err = bpf_object__create_map(obj, map->inner_map, true);
+ 			if (err) {
+ 				pr_warn("map '%s': failed to create inner map: %d\n",
+diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
+index f0f08635adb0d..57dec645d6878 100644
+--- a/tools/lib/bpf/libbpf_internal.h
++++ b/tools/lib/bpf/libbpf_internal.h
+@@ -18,6 +18,20 @@
+ #include <libelf.h>
+ #include "relo_core.h"
+ 
++/* Android's libc doesn't support AT_EACCESS in faccessat() implementation
++ * ([0]), and just returns -EINVAL even if file exists and is accessible.
++ * See [1] for issues caused by this.
++ *
++ * So just redefine it to 0 on Android.
++ *
++ * [0] https://android.googlesource.com/platform/bionic/+/refs/heads/android13-release/libc/bionic/faccessat.cpp#50
++ * [1] https://github.com/libbpf/libbpf-bootstrap/issues/250#issuecomment-1911324250
++ */
++#ifdef __ANDROID__
++#undef AT_EACCESS
++#define AT_EACCESS 0
++#endif
++
+ /* make sure libbpf doesn't use kernel-only integer typedefs */
+ #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
+ 
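
A usage sketch of the guard introduced above: on Android the flag is
neutralized to 0, so faccessat() degrades to a real-ID permission check
instead of failing with -EINVAL (the probed path is illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    #ifdef __ANDROID__
    #undef AT_EACCESS
    #define AT_EACCESS 0   /* bionic rejects the real flag */
    #endif

    static int can_read(const char *path)
    {
        return faccessat(AT_FDCWD, path, R_OK, AT_EACCESS) == 0;
    }

    int main(void)
    {
        return can_read("/etc/hosts") ? 0 : 1;   /* illustrative path */
    }
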
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index 090bcf6e3b3d5..68a2def171751 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -496,8 +496,8 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)
+ 	if (err)
+ 		return libbpf_err(err);
+ 
+-	opts->feature_flags = md.flags;
+-	opts->xdp_zc_max_segs = md.xdp_zc_max_segs;
++	OPTS_SET(opts, feature_flags, md.flags);
++	OPTS_SET(opts, xdp_zc_max_segs, md.xdp_zc_max_segs);
+ 
+ skip_feature_flags:
+ 	return 0;
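
The netlink.c fix writes the results through OPTS_SET() so a field is
only touched if the caller's opts struct, judged by its sz member, is
new enough to contain it. A hedged sketch of that size-gated pattern
(the struct and macro here are illustrative stand-ins, not libbpf's
actual definitions):

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct query_opts {
        size_t sz;                     /* caller fills with sizeof(*opts) */
        unsigned long long feature_flags;
        unsigned int xdp_zc_max_segs;  /* added in a later version */
    };

    #define OPTS_HAS(opts, field) \
        ((opts)->sz >= offsetof(struct query_opts, field) + sizeof((opts)->field))

    static void fill(struct query_opts *opts)
    {
        if (OPTS_HAS(opts, feature_flags))
            opts->feature_flags = 0x7;
        if (OPTS_HAS(opts, xdp_zc_max_segs))
            opts->xdp_zc_max_segs = 8;
    }

    int main(void)
    {
        struct query_opts o;

        memset(&o, 0, sizeof(o));
        o.sz = sizeof(o);              /* old callers pass a smaller sz */
        fill(&o);
        printf("flags=%llu segs=%u\n", o.feature_flags, o.xdp_zc_max_segs);
        return 0;
    }
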
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index e94756e09ca9f..167603f839851 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -3620,6 +3620,18 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ 				}
+ 
+ 				if (!save_insn->visited) {
++					/*
++					 * If the restore hint insn is at the
++					 * beginning of a basic block and was
++					 * branched to from elsewhere, and the
++					 * save insn hasn't been visited yet,
++					 * defer following this branch for now.
++					 * It will be seen later via the
++					 * straight-line path.
++					 */
++					if (!prev_insn)
++						return 0;
++
+ 					WARN_INSN(insn, "objtool isn't smart enough to handle this CFI save/restore combo");
+ 					return 1;
+ 				}
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 058c9aecf6087..af22d539f31e2 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -1148,7 +1148,7 @@ bpf-skel:
+ endif # CONFIG_PERF_BPF_SKEL
+ 
+ bpf-skel-clean:
+-	$(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS)
++	$(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS) $(SKEL_OUT)/vmlinux.h
+ 
+ clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean
+ 	$(call QUIET_CLEAN, core-objs)  $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive $(OUTPUT)perf-iostat $(LANG_BINDINGS)
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index dcf288a4fb9a9..5d86aa5ff5367 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1818,8 +1818,8 @@ static int
+ record__switch_output(struct record *rec, bool at_exit)
+ {
+ 	struct perf_data *data = &rec->data;
++	char *new_filename = NULL;
+ 	int fd, err;
+-	char *new_filename;
+ 
+ 	/* Same Size:      "2015122520103046"*/
+ 	char timestamp[] = "InvalidTimestamp";
+@@ -2216,32 +2216,6 @@ static void hit_auxtrace_snapshot_trigger(struct record *rec)
+ 	}
+ }
+ 
+-static void record__uniquify_name(struct record *rec)
+-{
+-	struct evsel *pos;
+-	struct evlist *evlist = rec->evlist;
+-	char *new_name;
+-	int ret;
+-
+-	if (perf_pmus__num_core_pmus() == 1)
+-		return;
+-
+-	evlist__for_each_entry(evlist, pos) {
+-		if (!evsel__is_hybrid(pos))
+-			continue;
+-
+-		if (strchr(pos->name, '/'))
+-			continue;
+-
+-		ret = asprintf(&new_name, "%s/%s/",
+-			       pos->pmu_name, pos->name);
+-		if (ret) {
+-			free(pos->name);
+-			pos->name = new_name;
+-		}
+-	}
+-}
+-
+ static int record__terminate_thread(struct record_thread *thread_data)
+ {
+ 	int err;
+@@ -2475,7 +2449,12 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
+ 	if (data->is_pipe && rec->evlist->core.nr_entries == 1)
+ 		rec->opts.sample_id = true;
+ 
+-	record__uniquify_name(rec);
++	if (rec->timestamp_filename && perf_data__is_pipe(data)) {
++		rec->timestamp_filename = false;
++		pr_warning("WARNING: --timestamp-filename option is not available in pipe mode.\n");
++	}
++
++	evlist__uniquify_name(rec->evlist);
+ 
+ 	/* Debug message used by test scripts */
+ 	pr_debug3("perf record opening and mmapping events\n");
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index ea8c7eca5eeed..8d7c31bd2ebfc 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -1299,6 +1299,7 @@ static int __cmd_top(struct perf_top *top)
+ 		}
+ 	}
+ 
++	evlist__uniquify_name(top->evlist);
+ 	ret = perf_top__start_counters(top);
+ 	if (ret)
+ 		return ret;
+diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
+index c29d8a382b196..550675ce0b787 100644
+--- a/tools/perf/util/data.c
++++ b/tools/perf/util/data.c
+@@ -430,8 +430,6 @@ int perf_data__switch(struct perf_data *data,
+ {
+ 	int ret;
+ 
+-	if (check_pipe(data))
+-		return -EINVAL;
+ 	if (perf_data__is_read(data))
+ 		return -EINVAL;
+ 
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index b0ed14a6da9d4..eb1dd29c538d5 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -2525,3 +2525,28 @@ void evlist__warn_user_requested_cpus(struct evlist *evlist, const char *cpu_lis
+ 	}
+ 	perf_cpu_map__put(user_requested_cpus);
+ }
++
++void evlist__uniquify_name(struct evlist *evlist)
++{
++	struct evsel *pos;
++	char *new_name;
++	int ret;
++
++	if (perf_pmus__num_core_pmus() == 1)
++		return;
++
++	evlist__for_each_entry(evlist, pos) {
++		if (!evsel__is_hybrid(pos))
++			continue;
++
++		if (strchr(pos->name, '/'))
++			continue;
++
++		ret = asprintf(&new_name, "%s/%s/",
++			       pos->pmu_name, pos->name);
++		if (ret) {
++			free(pos->name);
++			pos->name = new_name;
++		}
++	}
++}
+diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
+index 98e7ddb2bd305..cb91dc9117a27 100644
+--- a/tools/perf/util/evlist.h
++++ b/tools/perf/util/evlist.h
+@@ -442,5 +442,6 @@ struct evsel *evlist__find_evsel(struct evlist *evlist, int idx);
+ int evlist__scnprintf_evsels(struct evlist *evlist, size_t size, char *bf);
+ void evlist__check_mem_load_aux(struct evlist *evlist);
+ void evlist__warn_user_requested_cpus(struct evlist *evlist, const char *cpu_list);
++void evlist__uniquify_name(struct evlist *evlist);
+ 
+ #endif /* __PERF_EVLIST_H */
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 72a5dfc38d380..1fb24ca8aea52 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -2340,7 +2340,6 @@ int evsel__parse_sample(struct evsel *evsel, union perf_event *event,
+ 	data->period = evsel->core.attr.sample_period;
+ 	data->cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK;
+ 	data->misc    = event->header.misc;
+-	data->id = -1ULL;
+ 	data->data_src = PERF_MEM_DATA_SRC_NONE;
+ 	data->vcpu = -1;
+ 
+diff --git a/tools/perf/util/expr.c b/tools/perf/util/expr.c
+index 7be23b3ac0821..b8875aac8f870 100644
+--- a/tools/perf/util/expr.c
++++ b/tools/perf/util/expr.c
+@@ -500,7 +500,25 @@ double expr__has_event(const struct expr_parse_ctx *ctx, bool compute_ids, const
+ 	tmp = evlist__new();
+ 	if (!tmp)
+ 		return NAN;
+-	ret = parse_event(tmp, id) ? 0 : 1;
++
++	if (strchr(id, '@')) {
++		char *tmp_id, *p;
++
++		tmp_id = strdup(id);
++		if (!tmp_id) {
++			ret = NAN;
++			goto out;
++		}
++		p = strchr(tmp_id, '@');
++		*p = '/';
++		p = strrchr(tmp_id, '@');
++		*p = '/';
++		ret = parse_event(tmp, tmp_id) ? 0 : 1;
++		free(tmp_id);
++	} else {
++		ret = parse_event(tmp, id) ? 0 : 1;
++	}
++out:
+ 	evlist__delete(tmp);
+ 	return ret;
+ }
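
The expr.c hunk rewrites a pmu@event@ style term into pmu/event/ before
handing it to the event parser, by patching the first and last '@'. The
same strchr()/strrchr() approach in isolation (the input string is
illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *id = strdup("cpu_core@cycles@");
        char *p;

        if (!id)
            return 1;
        if ((p = strchr(id, '@')))     /* first '@' -> '/' */
            *p = '/';
        if ((p = strrchr(id, '@')))    /* last '@' -> '/' */
            *p = '/';
        printf("%s\n", id);            /* cpu_core/cycles/ */
        free(id);
        return 0;
    }
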
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index d3c9aa4326bee..aaa013af52524 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -1019,10 +1019,9 @@ struct perf_pmu *perf_pmu__lookup(struct list_head *pmus, int dirfd, const char
+ 	 * type value and format definitions. Load both right
+ 	 * now.
+ 	 */
+-	if (pmu_format(pmu, dirfd, name)) {
+-		free(pmu);
+-		return NULL;
+-	}
++	if (pmu_format(pmu, dirfd, name))
++		goto err;
++
+ 	pmu->is_core = is_pmu_core(name);
+ 	pmu->cpus = pmu_cpumask(dirfd, name, pmu->is_core);
+ 
+@@ -1756,6 +1755,12 @@ bool pmu__name_match(const struct perf_pmu *pmu, const char *pmu_name)
+ 
+ bool perf_pmu__is_software(const struct perf_pmu *pmu)
+ {
++	const char *known_sw_pmus[] = {
++		"kprobe",
++		"msr",
++		"uprobe",
++	};
++
+ 	if (pmu->is_core || pmu->is_uncore || pmu->auxtrace)
+ 		return false;
+ 	switch (pmu->type) {
+@@ -1767,7 +1772,11 @@ bool perf_pmu__is_software(const struct perf_pmu *pmu)
+ 	case PERF_TYPE_BREAKPOINT:	return true;
+ 	default: break;
+ 	}
+-	return !strcmp(pmu->name, "kprobe") || !strcmp(pmu->name, "uprobe");
++	for (size_t i = 0; i < ARRAY_SIZE(known_sw_pmus); i++) {
++		if (!strcmp(pmu->name, known_sw_pmus[i]))
++			return true;
++	}
++	return false;
+ }
+ 
+ FILE *perf_pmu__open_file(const struct perf_pmu *pmu, const char *name)
+diff --git a/tools/perf/util/print-events.c b/tools/perf/util/print-events.c
+index b0fc48be623f3..4f67e8f00a4d6 100644
+--- a/tools/perf/util/print-events.c
++++ b/tools/perf/util/print-events.c
+@@ -232,7 +232,6 @@ void print_sdt_events(const struct print_callbacks *print_cb, void *print_state)
+ bool is_event_supported(u8 type, u64 config)
+ {
+ 	bool ret = true;
+-	int open_return;
+ 	struct evsel *evsel;
+ 	struct perf_event_attr attr = {
+ 		.type = type,
+@@ -246,20 +245,32 @@ bool is_event_supported(u8 type, u64 config)
+ 
+ 	evsel = evsel__new(&attr);
+ 	if (evsel) {
+-		open_return = evsel__open(evsel, NULL, tmap);
+-		ret = open_return >= 0;
++		ret = evsel__open(evsel, NULL, tmap) >= 0;
+ 
+-		if (open_return == -EACCES) {
++		if (!ret) {
+ 			/*
+-			 * This happens if the paranoid value
++			 * The event may fail to open if the paranoid value
+ 			 * /proc/sys/kernel/perf_event_paranoid is set to 2
+-			 * Re-run with exclude_kernel set; we don't do that
+-			 * by default as some ARM machines do not support it.
+-			 *
++			 * Re-run with exclude_kernel set; we don't do that by
++			 * default as some ARM machines do not support it.
+ 			 */
+ 			evsel->core.attr.exclude_kernel = 1;
+ 			ret = evsel__open(evsel, NULL, tmap) >= 0;
+ 		}
++
++		if (!ret) {
++			/*
++			 * The event may fail to open if the PMU requires
++			 * exclude_guest to be set (e.g. as the Apple M1 PMU
++			 * requires).
++			 * Re-run with exclude_guest set; we don't do that by
++			 * default as it's equally legitimate for another PMU
++			 * driver to require that exclude_guest is clear.
++			 */
++			evsel->core.attr.exclude_guest = 1;
++			ret = evsel__open(evsel, NULL, tmap) >= 0;
++		}
++
+ 		evsel__delete(evsel);
+ 	}
+ 
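
The print-events.c change retries the probe with progressively stricter
attributes: first plain, then exclude_kernel (for paranoid settings),
then exclude_guest (for PMUs such as the Apple M1 that require it). A
schematic sketch of that fallback ladder, with a stand-in for
evsel__open():

    #include <stdbool.h>
    #include <stdio.h>

    struct attr { bool exclude_kernel; bool exclude_guest; };

    /* stand-in for evsel__open(); pretend this "PMU" insists on
       exclude_guest, as some do */
    static bool open_event(const struct attr *a)
    {
        return a->exclude_guest;
    }

    int main(void)
    {
        struct attr a = {0};
        bool ok = open_event(&a);

        if (!ok) {               /* retry: paranoid may forbid kernel */
            a.exclude_kernel = true;
            ok = open_event(&a);
        }
        if (!ok) {               /* retry: PMU may require exclude_guest */
            a.exclude_guest = true;
            ok = open_event(&a);
        }
        printf("supported: %s\n", ok ? "yes" : "no");
        return 0;
    }
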
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index 034b496df2978..7addc34afcf5d 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -399,6 +399,8 @@ static void addr2line_subprocess_cleanup(struct child_process *a2l)
+ 		kill(a2l->pid, SIGKILL);
+ 		finish_command(a2l); /* ignore result, we don't care */
+ 		a2l->pid = -1;
++		close(a2l->in);
++		close(a2l->out);
+ 	}
+ 
+ 	free(a2l);
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index afe6db8e7bf4f..969ce40096330 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -560,7 +560,7 @@ static void print_metric_only(struct perf_stat_config *config,
+ 	if (color)
+ 		mlen += strlen(color) + sizeof(PERF_COLOR_RESET) - 1;
+ 
+-	color_snprintf(str, sizeof(str), color ?: "", fmt, val);
++	color_snprintf(str, sizeof(str), color ?: "", fmt ?: "", val);
+ 	fprintf(out, "%*s ", mlen, str);
+ 	os->first = false;
+ }
+diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
+index e31426167852a..cf573ff3fa84f 100644
+--- a/tools/perf/util/stat-shadow.c
++++ b/tools/perf/util/stat-shadow.c
+@@ -414,12 +414,7 @@ static int prepare_metric(struct evsel **metric_events,
+ 				val = NAN;
+ 				source_count = 0;
+ 			} else {
+-				/*
+-				 * If an event was scaled during stat gathering,
+-				 * reverse the scale before computing the
+-				 * metric.
+-				 */
+-				val = aggr->counts.val * (1.0 / metric_events[i]->scale);
++				val = aggr->counts.val;
+ 				source_count = evsel__source_count(metric_events[i]);
+ 			}
+ 		}
+diff --git a/tools/perf/util/thread_map.c b/tools/perf/util/thread_map.c
+index e848579e61a86..ea3b431b97830 100644
+--- a/tools/perf/util/thread_map.c
++++ b/tools/perf/util/thread_map.c
+@@ -280,13 +280,13 @@ struct perf_thread_map *thread_map__new_by_tid_str(const char *tid_str)
+ 		threads->nr = ntasks;
+ 	}
+ out:
++	strlist__delete(slist);
+ 	if (threads)
+ 		refcount_set(&threads->refcnt, 1);
+ 	return threads;
+ 
+ out_free_threads:
+ 	zfree(&threads);
+-	strlist__delete(slist);
+ 	goto out;
+ }
+ 
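
The thread_map.c fix moves strlist__delete() to the common out: label so
the success path frees the list too, not only the error path. A compact
sketch of that single-cleanup-point shape (strlist here is an
illustrative stand-in):

    #include <stdlib.h>

    struct strlist { int n; };
    static struct strlist *strlist_new(void)
    {
        return calloc(1, sizeof(struct strlist));
    }
    static void strlist_delete(struct strlist *l) { free(l); }

    static int build(int fail)
    {
        struct strlist *slist = strlist_new();
        int ret = 0;

        if (!slist)
            return -1;
        if (fail) {
            ret = -1;
            goto out;
        }
        /* ... normally populate a result from slist here ... */
    out:
        strlist_delete(slist);   /* one cleanup point; no path leaks it */
        return ret;
    }

    int main(void)
    {
        build(0);
        build(1);
        return 0;
    }
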
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+index 91907b321f913..e7c9e1c7fde04 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+@@ -2,6 +2,7 @@
+ /* Copyright (c) 2020 Facebook */
+ #include <linux/btf.h>
+ #include <linux/btf_ids.h>
++#include <linux/delay.h>
+ #include <linux/error-injection.h>
+ #include <linux/init.h>
+ #include <linux/module.h>
+@@ -544,6 +545,14 @@ static int bpf_testmod_init(void)
+ 
+ static void bpf_testmod_exit(void)
+ {
++        /* Need to wait for all references to be dropped because
++         * bpf_kfunc_call_test_release() which currently resides in kernel can
++         * be called after bpf_testmod is unloaded. Once release function is
++         * moved into the module this wait can be removed.
++         */
++	while (refcount_read(&prog_test_struct.cnt) > 1)
++		msleep(20);
++
+ 	return sysfs_remove_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c b/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
+index 59b38569f310b..2bc932a18c17e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/lwt_redirect.c
+@@ -203,6 +203,7 @@ static int setup_redirect_target(const char *target_dev, bool need_mac)
+ 	if (!ASSERT_GE(target_index, 0, "if_nametoindex"))
+ 		goto fail;
+ 
++	SYS(fail, "sysctl -w net.ipv6.conf.all.disable_ipv6=1");
+ 	SYS(fail, "ip link add link_err type dummy");
+ 	SYS(fail, "ip link set lo up");
+ 	SYS(fail, "ip addr add dev lo " LOCAL_SRC "/32");
+diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+index 518f143c5b0fe..dbe06aeaa2b27 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c
+@@ -188,6 +188,7 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ {
+ 	struct nstoken *nstoken = NULL;
+ 	char src_fwd_addr[IFADDR_STR_LEN+1] = {};
++	char src_addr[IFADDR_STR_LEN + 1] = {};
+ 	int err;
+ 
+ 	if (result->dev_mode == MODE_VETH) {
+@@ -208,6 +209,9 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ 	if (get_ifaddr("src_fwd", src_fwd_addr))
+ 		goto fail;
+ 
++	if (get_ifaddr("src", src_addr))
++		goto fail;
++
+ 	result->ifindex_src = if_nametoindex("src");
+ 	if (!ASSERT_GT(result->ifindex_src, 0, "ifindex_src"))
+ 		goto fail;
+@@ -270,6 +274,13 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ 	SYS(fail, "ip route add " IP4_DST "/32 dev dst_fwd scope global");
+ 	SYS(fail, "ip route add " IP6_DST "/128 dev dst_fwd scope global");
+ 
++	if (result->dev_mode == MODE_VETH) {
++		SYS(fail, "ip neigh add " IP4_SRC " dev src_fwd lladdr %s", src_addr);
++		SYS(fail, "ip neigh add " IP6_SRC " dev src_fwd lladdr %s", src_addr);
++		SYS(fail, "ip neigh add " IP4_DST " dev dst_fwd lladdr %s", MAC_DST);
++		SYS(fail, "ip neigh add " IP6_DST " dev dst_fwd lladdr %s", MAC_DST);
++	}
++
+ 	close_netns(nstoken);
+ 
+ 	/** setup in 'dst' namespace */
+@@ -280,6 +291,7 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
+ 	SYS(fail, "ip addr add " IP4_DST "/32 dev dst");
+ 	SYS(fail, "ip addr add " IP6_DST "/128 dev dst nodad");
+ 	SYS(fail, "ip link set dev dst up");
++	SYS(fail, "ip link set dev lo up");
+ 
+ 	SYS(fail, "ip route add " IP4_SRC "/32 dev dst scope global");
+ 	SYS(fail, "ip route add " IP4_NET "/16 dev dst scope global");
+@@ -457,7 +469,7 @@ static int set_forwarding(bool enable)
+ 	return 0;
+ }
+ 
+-static void rcv_tstamp(int fd, const char *expected, size_t s)
++static int __rcv_tstamp(int fd, const char *expected, size_t s, __u64 *tstamp)
+ {
+ 	struct __kernel_timespec pkt_ts = {};
+ 	char ctl[CMSG_SPACE(sizeof(pkt_ts))];
+@@ -478,7 +490,7 @@ static void rcv_tstamp(int fd, const char *expected, size_t s)
+ 
+ 	ret = recvmsg(fd, &msg, 0);
+ 	if (!ASSERT_EQ(ret, s, "recvmsg"))
+-		return;
++		return -1;
+ 	ASSERT_STRNEQ(data, expected, s, "expected rcv data");
+ 
+ 	cmsg = CMSG_FIRSTHDR(&msg);
+@@ -487,6 +499,12 @@ static void rcv_tstamp(int fd, const char *expected, size_t s)
+ 		memcpy(&pkt_ts, CMSG_DATA(cmsg), sizeof(pkt_ts));
+ 
+ 	pkt_ns = pkt_ts.tv_sec * NSEC_PER_SEC + pkt_ts.tv_nsec;
++	if (tstamp) {
++		/* caller will check the tstamp itself */
++		*tstamp = pkt_ns;
++		return 0;
++	}
++
+ 	ASSERT_NEQ(pkt_ns, 0, "pkt rcv tstamp");
+ 
+ 	ret = clock_gettime(CLOCK_REALTIME, &now_ts);
+@@ -496,6 +514,60 @@ static void rcv_tstamp(int fd, const char *expected, size_t s)
+ 	if (ASSERT_GE(now_ns, pkt_ns, "check rcv tstamp"))
+ 		ASSERT_LT(now_ns - pkt_ns, 5 * NSEC_PER_SEC,
+ 			  "check rcv tstamp");
++	return 0;
++}
++
++static void rcv_tstamp(int fd, const char *expected, size_t s)
++{
++	__rcv_tstamp(fd, expected, s, NULL);
++}
++
++static int wait_netstamp_needed_key(void)
++{
++	int opt = 1, srv_fd = -1, cli_fd = -1, nretries = 0, err, n;
++	char buf[] = "testing testing";
++	struct nstoken *nstoken;
++	__u64 tstamp = 0;
++
++	nstoken = open_netns(NS_DST);
++	if (!nstoken)
++		return -1;
++
++	srv_fd = start_server(AF_INET6, SOCK_DGRAM, "::1", 0, 0);
++	if (!ASSERT_GE(srv_fd, 0, "start_server"))
++		goto done;
++
++	err = setsockopt(srv_fd, SOL_SOCKET, SO_TIMESTAMPNS_NEW,
++			 &opt, sizeof(opt));
++	if (!ASSERT_OK(err, "setsockopt(SO_TIMESTAMPNS_NEW)"))
++		goto done;
++
++	cli_fd = connect_to_fd(srv_fd, TIMEOUT_MILLIS);
++	if (!ASSERT_GE(cli_fd, 0, "connect_to_fd"))
++		goto done;
++
++again:
++	n = write(cli_fd, buf, sizeof(buf));
++	if (!ASSERT_EQ(n, sizeof(buf), "send to server"))
++		goto done;
++	err = __rcv_tstamp(srv_fd, buf, sizeof(buf), &tstamp);
++	if (!ASSERT_OK(err, "__rcv_tstamp"))
++		goto done;
++	if (!tstamp && nretries++ < 5) {
++		sleep(1);
++		printf("netstamp_needed_key retry#%d\n", nretries);
++		goto again;
++	}
++
++done:
++	if (!tstamp && srv_fd != -1) {
++		close(srv_fd);
++		srv_fd = -1;
++	}
++	if (cli_fd != -1)
++		close(cli_fd);
++	close_netns(nstoken);
++	return srv_fd;
+ }
+ 
+ static void snd_tstamp(int fd, char *b, size_t s)
+@@ -832,11 +904,20 @@ static void test_tc_redirect_dtime(struct netns_setup_result *setup_result)
+ {
+ 	struct test_tc_dtime *skel;
+ 	struct nstoken *nstoken;
+-	int err;
++	int hold_tstamp_fd, err;
++
++	/* Hold a sk with the SOCK_TIMESTAMP set to ensure there
++	 * is no delay in the kernel net_enable_timestamp().
++	 * This ensures the following tests must have
++	 * non zero rcv tstamp in the recvmsg().
++	 */
++	hold_tstamp_fd = wait_netstamp_needed_key();
++	if (!ASSERT_GE(hold_tstamp_fd, 0, "wait_netstamp_needed_key"))
++		return;
+ 
+ 	skel = test_tc_dtime__open();
+ 	if (!ASSERT_OK_PTR(skel, "test_tc_dtime__open"))
+-		return;
++		goto done;
+ 
+ 	skel->rodata->IFINDEX_SRC = setup_result->ifindex_src_fwd;
+ 	skel->rodata->IFINDEX_DST = setup_result->ifindex_dst_fwd;
+@@ -881,6 +962,7 @@ static void test_tc_redirect_dtime(struct netns_setup_result *setup_result)
+ 
+ done:
+ 	test_tc_dtime__destroy(skel);
++	close(hold_tstamp_fd);
+ }
+ 
+ static void test_tc_redirect_neigh_fib(struct netns_setup_result *setup_result)
+diff --git a/tools/testing/selftests/bpf/progs/test_map_in_map.c b/tools/testing/selftests/bpf/progs/test_map_in_map.c
+index f416032ba858b..b295f9b721bf8 100644
+--- a/tools/testing/selftests/bpf/progs/test_map_in_map.c
++++ b/tools/testing/selftests/bpf/progs/test_map_in_map.c
+@@ -21,6 +21,32 @@ struct {
+ 	__type(value, __u32);
+ } mim_hash SEC(".maps");
+ 
++/* The following three maps are used to test
++ * perf_event_array map can be an inner
++ * map of hash/array_of_maps.
++ */
++struct perf_event_array {
++	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
++	__type(key, __u32);
++	__type(value, __u32);
++} inner_map0 SEC(".maps");
++
++struct {
++	__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
++	__uint(max_entries, 1);
++	__type(key, __u32);
++	__array(values, struct perf_event_array);
++} mim_array_pe SEC(".maps") = {
++	.values = {&inner_map0}};
++
++struct {
++	__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
++	__uint(max_entries, 1);
++	__type(key, __u32);
++	__array(values, struct perf_event_array);
++} mim_hash_pe SEC(".maps") = {
++	.values = {&inner_map0}};
++
+ SEC("xdp")
+ int xdp_mimtest0(struct xdp_md *ctx)
+ {
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 7fc00e423e4dd..e0dd101c9f2bd 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -1190,7 +1190,11 @@ static void test_map_in_map(void)
+ 		goto out_map_in_map;
+ 	}
+ 
+-	bpf_object__load(obj);
++	err = bpf_object__load(obj);
++	if (err) {
++		printf("Failed to load test prog\n");
++		goto out_map_in_map;
++	}
+ 
+ 	map = bpf_object__find_map_by_name(obj, "mim_array");
+ 	if (!map) {
+diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
+index 4faa898ff7fc4..27fd7ed3e4b0c 100644
+--- a/tools/testing/selftests/bpf/trace_helpers.c
++++ b/tools/testing/selftests/bpf/trace_helpers.c
+@@ -271,7 +271,7 @@ ssize_t get_uprobe_offset(const void *addr)
+ 	 * addi  r2,r2,XXXX
+ 	 */
+ 	{
+-		const u32 *insn = (const u32 *)(uintptr_t)addr;
++		const __u32 *insn = (const __u32 *)(uintptr_t)addr;
+ 
+ 		if ((((*insn & OP_RT_RA_MASK) == ADDIS_R2_R12) ||
+ 		     ((*insn & OP_RT_RA_MASK) == LIS_R2)) &&
+diff --git a/tools/testing/selftests/net/forwarding/config b/tools/testing/selftests/net/forwarding/config
+index 697994a9278bb..8d7a1a004b7c3 100644
+--- a/tools/testing/selftests/net/forwarding/config
++++ b/tools/testing/selftests/net/forwarding/config
+@@ -6,14 +6,49 @@ CONFIG_IPV6_MULTIPLE_TABLES=y
+ CONFIG_NET_VRF=m
+ CONFIG_BPF_SYSCALL=y
+ CONFIG_CGROUP_BPF=y
++CONFIG_DUMMY=m
++CONFIG_IPV6=y
++CONFIG_IPV6_GRE=m
++CONFIG_IPV6_MROUTE=y
++CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IPV6_PIMSM_V2=y
++CONFIG_IP_MROUTE=y
++CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
++CONFIG_IP_PIMSM_V1=y
++CONFIG_IP_PIMSM_V2=y
++CONFIG_MACVLAN=m
+ CONFIG_NET_ACT_CT=m
+ CONFIG_NET_ACT_MIRRED=m
+ CONFIG_NET_ACT_MPLS=m
++CONFIG_NET_ACT_PEDIT=m
++CONFIG_NET_ACT_POLICE=m
++CONFIG_NET_ACT_SAMPLE=m
++CONFIG_NET_ACT_SKBEDIT=m
++CONFIG_NET_ACT_TUNNEL_KEY=m
+ CONFIG_NET_ACT_VLAN=m
+ CONFIG_NET_CLS_FLOWER=m
+ CONFIG_NET_CLS_MATCHALL=m
++CONFIG_NET_CLS_BASIC=m
++CONFIG_NET_EMATCH=y
++CONFIG_NET_EMATCH_META=m
++CONFIG_NET_IPGRE=m
++CONFIG_NET_IPGRE_DEMUX=m
++CONFIG_NET_IPIP=m
++CONFIG_NET_SCH_ETS=m
+ CONFIG_NET_SCH_INGRESS=m
+ CONFIG_NET_ACT_GACT=m
++CONFIG_NET_SCH_PRIO=m
++CONFIG_NET_SCH_RED=m
++CONFIG_NET_SCH_TBF=m
++CONFIG_NET_TC_SKB_EXT=y
++CONFIG_NET_TEAM=y
++CONFIG_NET_TEAM_MODE_LOADBALANCE=y
++CONFIG_NETFILTER=y
++CONFIG_NF_CONNTRACK=m
++CONFIG_NF_FLOW_TABLE=m
++CONFIG_NF_TABLES=m
+ CONFIG_VETH=m
+ CONFIG_NAMESPACES=y
+ CONFIG_NET_NS=y
++CONFIG_VXLAN=m
++CONFIG_XFRM_USER=m
+diff --git a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
+index ac97f07e5ce82..bd3f7d492af2b 100755
+--- a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
++++ b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
+@@ -354,7 +354,7 @@ __ping_ipv4()
+ 
+ 	# Send 100 packets and verify that at least 100 packets hit the rule,
+ 	# to overcome ARP noise.
+-	PING_COUNT=100 PING_TIMEOUT=11 ping_do $dev $dst_ip
++	PING_COUNT=100 PING_TIMEOUT=20 ping_do $dev $dst_ip
+ 	check_err $? "Ping failed"
+ 
+ 	tc_check_at_least_x_packets "dev $rp1 egress" 101 10 100
+@@ -410,7 +410,7 @@ __ping_ipv6()
+ 
+ 	# Send 100 packets and verify that at least 100 packets hit the rule,
+ 	# to overcome neighbor discovery noise.
+-	PING_COUNT=100 PING_TIMEOUT=11 ping6_do $dev $dst_ip
++	PING_COUNT=100 PING_TIMEOUT=20 ping6_do $dev $dst_ip
+ 	check_err $? "Ping failed"
+ 
+ 	tc_check_at_least_x_packets "dev $rp1 egress" 101 100
+diff --git a/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh b/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh
+index d880df89bc8bd..e83fde79f40d0 100755
+--- a/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh
++++ b/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh
+@@ -457,7 +457,7 @@ __ping_ipv4()
+ 
+ 	# Send 100 packets and verify that at least 100 packets hit the rule,
+ 	# to overcome ARP noise.
+-	PING_COUNT=100 PING_TIMEOUT=11 ping_do $dev $dst_ip
++	PING_COUNT=100 PING_TIMEOUT=20 ping_do $dev $dst_ip
+ 	check_err $? "Ping failed"
+ 
+ 	tc_check_at_least_x_packets "dev $rp1 egress" 101 10 100
+@@ -522,7 +522,7 @@ __ping_ipv6()
+ 
+ 	# Send 100 packets and verify that at least 100 packets hit the rule,
+ 	# to overcome neighbor discovery noise.
+-	PING_COUNT=100 PING_TIMEOUT=11 ping6_do $dev $dst_ip
++	PING_COUNT=100 PING_TIMEOUT=20 ping6_do $dev $dst_ip
+ 	check_err $? "Ping failed"
+ 
+ 	tc_check_at_least_x_packets "dev $rp1 egress" 101 100
+diff --git a/tools/testing/selftests/net/openvswitch/openvswitch.sh b/tools/testing/selftests/net/openvswitch/openvswitch.sh
+index f8499d4c87f3f..36e40256ab92a 100755
+--- a/tools/testing/selftests/net/openvswitch/openvswitch.sh
++++ b/tools/testing/selftests/net/openvswitch/openvswitch.sh
+@@ -502,7 +502,20 @@ test_netlink_checks () {
+ 	    wc -l) == 2 ] || \
+ 	      return 1
+ 
++	info "Checking clone depth"
+ 	ERR_MSG="Flow actions may not be safe on all matching packets"
++	PRE_TEST=$(dmesg | grep -c "${ERR_MSG}")
++	ovs_add_flow "test_netlink_checks" nv0 \
++		'in_port(1),eth(),eth_type(0x800),ipv4()' \
++		'clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(clone(drop)))))))))))))))))' \
++		>/dev/null 2>&1 && return 1
++	POST_TEST=$(dmesg | grep -c "${ERR_MSG}")
++
++	if [ "$PRE_TEST" == "$POST_TEST" ]; then
++		info "failed - clone depth too large"
++		return 1
++	fi
++
+ 	PRE_TEST=$(dmesg | grep -c "${ERR_MSG}")
+ 	ovs_add_flow "test_netlink_checks" nv0 \
+ 		'in_port(1),eth(),eth_type(0x0806),arp()' 'drop(0),2' \
+diff --git a/tools/testing/selftests/net/openvswitch/ovs-dpctl.py b/tools/testing/selftests/net/openvswitch/ovs-dpctl.py
+index b97e621face95..5e0e539a323d5 100644
+--- a/tools/testing/selftests/net/openvswitch/ovs-dpctl.py
++++ b/tools/testing/selftests/net/openvswitch/ovs-dpctl.py
+@@ -299,7 +299,7 @@ class ovsactions(nla):
+         ("OVS_ACTION_ATTR_PUSH_NSH", "none"),
+         ("OVS_ACTION_ATTR_POP_NSH", "flag"),
+         ("OVS_ACTION_ATTR_METER", "none"),
+-        ("OVS_ACTION_ATTR_CLONE", "none"),
++        ("OVS_ACTION_ATTR_CLONE", "recursive"),
+         ("OVS_ACTION_ATTR_CHECK_PKT_LEN", "none"),
+         ("OVS_ACTION_ATTR_ADD_MPLS", "none"),
+         ("OVS_ACTION_ATTR_DEC_TTL", "none"),
+@@ -465,29 +465,42 @@ class ovsactions(nla):
+                     print_str += "pop_mpls"
+             else:
+                 datum = self.get_attr(field[0])
+-                print_str += datum.dpstr(more)
++                if field[0] == "OVS_ACTION_ATTR_CLONE":
++                    print_str += "clone("
++                    print_str += datum.dpstr(more)
++                    print_str += ")"
++                else:
++                    print_str += datum.dpstr(more)
+ 
+         return print_str
+ 
+     def parse(self, actstr):
++        totallen = len(actstr)
+         while len(actstr) != 0:
+             parsed = False
++            parencount = 0
+             if actstr.startswith("drop"):
+                 # If no reason is provided, the implicit drop is used (i.e no
+                 # action). If some reason is given, an explicit action is used.
+-                actstr, reason = parse_extract_field(
+-                    actstr,
+-                    "drop(",
+-                    "([0-9]+)",
+-                    lambda x: int(x, 0),
+-                    False,
+-                    None,
+-                )
++                reason = None
++                if actstr.startswith("drop("):
++                    parencount += 1
++
++                    actstr, reason = parse_extract_field(
++                        actstr,
++                        "drop(",
++                        "([0-9]+)",
++                        lambda x: int(x, 0),
++                        False,
++                        None,
++                    )
++
+                 if reason is not None:
+                     self["attrs"].append(["OVS_ACTION_ATTR_DROP", reason])
+                     parsed = True
+                 else:
+-                    return
++                    actstr = actstr[len("drop"): ]
++                    return (totallen - len(actstr))
+ 
+             elif parse_starts_block(actstr, "^(\d+)", False, True):
+                 actstr, output = parse_extract_field(
+@@ -504,6 +517,7 @@ class ovsactions(nla):
+                     False,
+                     0,
+                 )
++                parencount += 1
+                 self["attrs"].append(["OVS_ACTION_ATTR_RECIRC", recircid])
+                 parsed = True
+ 
+@@ -516,12 +530,22 @@ class ovsactions(nla):
+ 
+             for flat_act in parse_flat_map:
+                 if parse_starts_block(actstr, flat_act[0], False):
+-                    actstr += len(flat_act[0])
++                    actstr = actstr[len(flat_act[0]):]
+                     self["attrs"].append([flat_act[1]])
+                     actstr = actstr[strspn(actstr, ", ") :]
+                     parsed = True
+ 
+-            if parse_starts_block(actstr, "ct(", False):
++            if parse_starts_block(actstr, "clone(", False):
++                parencount += 1
++                subacts = ovsactions()
++                actstr = actstr[len("clone("):]
++                parsedLen = subacts.parse(actstr)
++                lst = []
++                self["attrs"].append(("OVS_ACTION_ATTR_CLONE", subacts))
++                actstr = actstr[parsedLen:]
++                parsed = True
++            elif parse_starts_block(actstr, "ct(", False):
++                parencount += 1
+                 actstr = actstr[len("ct(") :]
+                 ctact = ovsactions.ctact()
+ 
+@@ -553,6 +577,7 @@ class ovsactions(nla):
+                         natact = ovsactions.ctact.natattr()
+ 
+                         if actstr.startswith("("):
++                            parencount += 1
+                             t = None
+                             actstr = actstr[1:]
+                             if actstr.startswith("src"):
+@@ -607,15 +632,29 @@ class ovsactions(nla):
+                                     actstr = actstr[strspn(actstr, ", ") :]
+ 
+                         ctact["attrs"].append(["OVS_CT_ATTR_NAT", natact])
+-                        actstr = actstr[strspn(actstr, ",) ") :]
++                        actstr = actstr[strspn(actstr, ", ") :]
+ 
+                 self["attrs"].append(["OVS_ACTION_ATTR_CT", ctact])
+                 parsed = True
+ 
+-            actstr = actstr[strspn(actstr, "), ") :]
++            actstr = actstr[strspn(actstr, ", ") :]
++            while parencount > 0:
++                parencount -= 1
++                actstr = actstr[strspn(actstr, " "):]
++                if len(actstr) and actstr[0] != ")":
++                    raise ValueError("Action str: '%s' unbalanced" % actstr)
++                actstr = actstr[1:]
++
++            if len(actstr) and actstr[0] == ")":
++                return (totallen - len(actstr))
++
++            actstr = actstr[strspn(actstr, ", ") :]
++
+             if not parsed:
+                 raise ValueError("Action str: '%s' not supported" % actstr)
+ 
++        return (totallen - len(actstr))
++
+ 
+ class ovskey(nla):
+     nla_flags = NLA_F_NESTED
+@@ -2111,6 +2150,8 @@ def main(argv):
+     ovsflow = OvsFlow()
+     ndb = NDB()
+ 
++    sys.setrecursionlimit(100000)
++
+     if hasattr(args, "showdp"):
+         found = False
+         for iface in ndb.interfaces:
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 464853a7f9829..ad993ab3ac181 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -988,12 +988,12 @@ TEST_F(tls, recv_partial)
+ 
+ 	memset(recv_mem, 0, sizeof(recv_mem));
+ 	EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len);
+-	EXPECT_NE(recv(self->cfd, recv_mem, strlen(test_str_first),
+-		       MSG_WAITALL), -1);
++	EXPECT_EQ(recv(self->cfd, recv_mem, strlen(test_str_first),
++		       MSG_WAITALL), strlen(test_str_first));
+ 	EXPECT_EQ(memcmp(test_str_first, recv_mem, strlen(test_str_first)), 0);
+ 	memset(recv_mem, 0, sizeof(recv_mem));
+-	EXPECT_NE(recv(self->cfd, recv_mem, strlen(test_str_second),
+-		       MSG_WAITALL), -1);
++	EXPECT_EQ(recv(self->cfd, recv_mem, strlen(test_str_second),
++		       MSG_WAITALL), strlen(test_str_second));
+ 	EXPECT_EQ(memcmp(test_str_second, recv_mem, strlen(test_str_second)),
+ 		  0);
+ }
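
The tls.c test change asserts the exact byte count from
recv(..., MSG_WAITALL) instead of merely "not -1", so a short read now
fails the test. A minimal sketch of the stricter check, with a
socketpair standing in for the TLS connection:

    #include <assert.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        char buf[16];
        const char msg[] = "hello world";

        assert(socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == 0);
        assert(send(sv[0], msg, strlen(msg), 0) == (ssize_t)strlen(msg));
        /* exact-length assertion, as in the fixed test */
        assert(recv(sv[1], buf, strlen(msg), MSG_WAITALL) ==
               (ssize_t)strlen(msg));
        close(sv[0]);
        close(sv[1]);
        return 0;
    }
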



* [gentoo-commits] proj/linux-patches:6.7 commit in: /
@ 2024-04-03 13:53 Mike Pagano
  0 siblings, 0 replies; 25+ messages in thread
From: Mike Pagano @ 2024-04-03 13:53 UTC (permalink / raw
  To: gentoo-commits

commit:     b033846ce1f2fa0cbfa53194e2a8e14c55b68752
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  3 13:53:07 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr  3 13:53:07 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b033846c

Linux patch 6.7.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1011_linux-6.7.12.patch | 19068 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 19072 insertions(+)

diff --git a/0000_README b/0000_README
index 6f168e56..80cbdbf4 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-6.7.11.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.7.11
 
+Patch:  1011_linux-6.7.12.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.7.12
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1011_linux-6.7.12.patch b/1011_linux-6.7.12.patch
new file mode 100644
index 00000000..5cb96254
--- /dev/null
+++ b/1011_linux-6.7.12.patch
@@ -0,0 +1,19068 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 40b89dd7c0bb3..7120c4e1692f6 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3327,9 +3327,7 @@
+ 
+ 	mem_encrypt=	[X86-64] AMD Secure Memory Encryption (SME) control
+ 			Valid arguments: on, off
+-			Default (depends on kernel configuration option):
+-			  on  (CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y)
+-			  off (CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=n)
++			Default: off
+ 			mem_encrypt=on:		Activate SME
+ 			mem_encrypt=off:	Do not activate SME
+ 
+diff --git a/Documentation/arch/x86/amd-memory-encryption.rst b/Documentation/arch/x86/amd-memory-encryption.rst
+index 07caa8fff852e..414bc7402ae7d 100644
+--- a/Documentation/arch/x86/amd-memory-encryption.rst
++++ b/Documentation/arch/x86/amd-memory-encryption.rst
+@@ -87,14 +87,14 @@ The state of SME in the Linux kernel can be documented as follows:
+ 	  kernel is non-zero).
+ 
+ SME can also be enabled and activated in the BIOS. If SME is enabled and
+-activated in the BIOS, then all memory accesses will be encrypted and it will
+-not be necessary to activate the Linux memory encryption support.  If the BIOS
+-merely enables SME (sets bit 23 of the MSR_AMD64_SYSCFG), then Linux can activate
+-memory encryption by default (CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y) or
+-by supplying mem_encrypt=on on the kernel command line.  However, if BIOS does
+-not enable SME, then Linux will not be able to activate memory encryption, even
+-if configured to do so by default or the mem_encrypt=on command line parameter
+-is specified.
++activated in the BIOS, then all memory accesses will be encrypted and it
++will not be necessary to activate the Linux memory encryption support.
++
++If the BIOS merely enables SME (sets bit 23 of the MSR_AMD64_SYSCFG),
++then memory encryption can be enabled by supplying mem_encrypt=on on the
++kernel command line.  However, if BIOS does not enable SME, then Linux
++will not be able to activate memory encryption, even if configured to do
++so by default or the mem_encrypt=on command line parameter is specified.
+ 
+ Secure Nested Paging (SNP)
+ ==========================
+diff --git a/Documentation/conf.py b/Documentation/conf.py
+index dfc19c915d5c4..e385e24fe9e72 100644
+--- a/Documentation/conf.py
++++ b/Documentation/conf.py
+@@ -345,9 +345,9 @@ sys.stderr.write("Using %s theme\n" % html_theme)
+ html_static_path = ['sphinx-static']
+ 
+ # If true, Docutils "smart quotes" will be used to convert quotes and dashes
+-# to typographically correct entities.  This will convert "--" to "—",
+-# which is not always what we want, so disable it.
+-smartquotes = False
++# to typographically correct entities.  However, conversion of "--" to "—"
++# is not always what we want, so enable only quotes.
++smartquotes_action = 'q'
+ 
+ # Custom sidebar templates, maps document names to template names.
+ # Note that the RTD theme ignores this
+diff --git a/Documentation/userspace-api/media/mediactl/media-types.rst b/Documentation/userspace-api/media/mediactl/media-types.rst
+index 0ffeece1e0c8e..6332e8395263b 100644
+--- a/Documentation/userspace-api/media/mediactl/media-types.rst
++++ b/Documentation/userspace-api/media/mediactl/media-types.rst
+@@ -375,12 +375,11 @@ Types and flags used to represent the media graph elements
+ 	  are origins of links.
+ 
+     *  -  ``MEDIA_PAD_FL_MUST_CONNECT``
+-       -  If this flag is set and the pad is linked to any other pad, then
+-	  at least one of those links must be enabled for the entity to be
+-	  able to stream. There could be temporary reasons (e.g. device
+-	  configuration dependent) for the pad to need enabled links even
+-	  when this flag isn't set; the absence of the flag doesn't imply
+-	  there is none.
++       -  If this flag is set, then for this pad to be able to stream, it must
++	  be connected by at least one enabled link. There could be temporary
++	  reasons (e.g. device configuration dependent) for the pad to need
++	  enabled links even when this flag isn't set; the absence of the flag
++	  doesn't imply there is none.
+ 
+ 
+ One and only one of ``MEDIA_PAD_FL_SINK`` and ``MEDIA_PAD_FL_SOURCE``
+diff --git a/Makefile b/Makefile
+index 6e3182cdf0162..190ed2b2e0acf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 7
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 8f47d6762ea4b..f53832383a635 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -591,8 +591,8 @@ source "arch/arm/mm/Kconfig"
+ 
+ config IWMMXT
+ 	bool "Enable iWMMXt support"
+-	depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_PJ4 || CPU_PJ4B
+-	default y if PXA27x || PXA3xx || ARCH_MMP || CPU_PJ4 || CPU_PJ4B
++	depends on CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK
++	default y if PXA27x || PXA3xx || ARCH_MMP
+ 	help
+ 	  Enable support for iWMMXt context switching at run time if
+ 	  running on a CPU that supports it.
+diff --git a/arch/arm/boot/dts/marvell/mmp2-brownstone.dts b/arch/arm/boot/dts/marvell/mmp2-brownstone.dts
+index 04f1ae1382e7a..bc64348b82185 100644
+--- a/arch/arm/boot/dts/marvell/mmp2-brownstone.dts
++++ b/arch/arm/boot/dts/marvell/mmp2-brownstone.dts
+@@ -28,7 +28,7 @@ &uart3 {
+ &twsi1 {
+ 	status = "okay";
+ 	pmic: max8925@3c {
+-		compatible = "maxium,max8925";
++		compatible = "maxim,max8925";
+ 		reg = <0x3c>;
+ 		interrupts = <1>;
+ 		interrupt-parent = <&intcmux4>;
+diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig
+index 0a90583f9f017..8f9dbe8d90291 100644
+--- a/arch/arm/configs/imx_v6_v7_defconfig
++++ b/arch/arm/configs/imx_v6_v7_defconfig
+@@ -297,6 +297,7 @@ CONFIG_FB_MODE_HELPERS=y
+ CONFIG_LCD_CLASS_DEVICE=y
+ CONFIG_LCD_L4F00242T03=y
+ CONFIG_LCD_PLATFORM=y
++CONFIG_BACKLIGHT_CLASS_DEVICE=y
+ CONFIG_BACKLIGHT_PWM=y
+ CONFIG_BACKLIGHT_GPIO=y
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+diff --git a/arch/arm/include/asm/mman.h b/arch/arm/include/asm/mman.h
+new file mode 100644
+index 0000000000000..2189e507c8e08
+--- /dev/null
++++ b/arch/arm/include/asm/mman.h
+@@ -0,0 +1,14 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __ASM_MMAN_H__
++#define __ASM_MMAN_H__
++
++#include <asm/system_info.h>
++#include <uapi/asm/mman.h>
++
++static inline bool arch_memory_deny_write_exec_supported(void)
++{
++	return cpu_architecture() >= CPU_ARCH_ARMv6;
++}
++#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
++
++#endif /* __ASM_MMAN_H__ */
+diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
+index 771264d4726a7..ae2f2b2b4e5ab 100644
+--- a/arch/arm/kernel/Makefile
++++ b/arch/arm/kernel/Makefile
+@@ -75,8 +75,6 @@ obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
+ obj-$(CONFIG_CPU_XSCALE)	+= xscale-cp0.o
+ obj-$(CONFIG_CPU_XSC3)		+= xscale-cp0.o
+ obj-$(CONFIG_CPU_MOHAWK)	+= xscale-cp0.o
+-obj-$(CONFIG_CPU_PJ4)		+= pj4-cp0.o
+-obj-$(CONFIG_CPU_PJ4B)		+= pj4-cp0.o
+ obj-$(CONFIG_IWMMXT)		+= iwmmxt.o
+ obj-$(CONFIG_PERF_EVENTS)	+= perf_regs.o perf_callchain.o
+ obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event_xscale.o perf_event_v6.o \
+diff --git a/arch/arm/kernel/iwmmxt.S b/arch/arm/kernel/iwmmxt.S
+index a0218c4867b9b..4a335d3c59690 100644
+--- a/arch/arm/kernel/iwmmxt.S
++++ b/arch/arm/kernel/iwmmxt.S
+@@ -18,18 +18,6 @@
+ #include <asm/assembler.h>
+ #include "iwmmxt.h"
+ 
+-#if defined(CONFIG_CPU_PJ4) || defined(CONFIG_CPU_PJ4B)
+-#define PJ4(code...)		code
+-#define XSC(code...)
+-#elif defined(CONFIG_CPU_MOHAWK) || \
+-	defined(CONFIG_CPU_XSC3) || \
+-	defined(CONFIG_CPU_XSCALE)
+-#define PJ4(code...)
+-#define XSC(code...)		code
+-#else
+-#error "Unsupported iWMMXt architecture"
+-#endif
+-
+ #define MMX_WR0		 	(0x00)
+ #define MMX_WR1		 	(0x08)
+ #define MMX_WR2		 	(0x10)
+@@ -81,17 +69,13 @@ ENDPROC(iwmmxt_undef_handler)
+ ENTRY(iwmmxt_task_enable)
+ 	inc_preempt_count r10, r3
+ 
+-	XSC(mrc	p15, 0, r2, c15, c1, 0)
+-	PJ4(mrc p15, 0, r2, c1, c0, 2)
++	mrc	p15, 0, r2, c15, c1, 0
+ 	@ CP0 and CP1 accessible?
+-	XSC(tst	r2, #0x3)
+-	PJ4(tst	r2, #0xf)
++	tst	r2, #0x3
+ 	bne	4f				@ if so no business here
+ 	@ enable access to CP0 and CP1
+-	XSC(orr	r2, r2, #0x3)
+-	XSC(mcr	p15, 0, r2, c15, c1, 0)
+-	PJ4(orr	r2, r2, #0xf)
+-	PJ4(mcr	p15, 0, r2, c1, c0, 2)
++	orr	r2, r2, #0x3
++	mcr	p15, 0, r2, c15, c1, 0
+ 
+ 	ldr	r3, =concan_owner
+ 	ldr	r2, [r0, #S_PC]			@ current task pc value
+@@ -218,12 +202,9 @@ ENTRY(iwmmxt_task_disable)
+ 	bne	1f				@ no: quit
+ 
+ 	@ enable access to CP0 and CP1
+-	XSC(mrc	p15, 0, r4, c15, c1, 0)
+-	XSC(orr	r4, r4, #0x3)
+-	XSC(mcr	p15, 0, r4, c15, c1, 0)
+-	PJ4(mrc p15, 0, r4, c1, c0, 2)
+-	PJ4(orr	r4, r4, #0xf)
+-	PJ4(mcr	p15, 0, r4, c1, c0, 2)
++	mrc	p15, 0, r4, c15, c1, 0
++	orr	r4, r4, #0x3
++	mcr	p15, 0, r4, c15, c1, 0
+ 
+ 	mov	r0, #0				@ nothing to load
+ 	str	r0, [r3]			@ no more current owner
+@@ -232,10 +213,8 @@ ENTRY(iwmmxt_task_disable)
+ 	bl	concan_save
+ 
+ 	@ disable access to CP0 and CP1
+-	XSC(bic	r4, r4, #0x3)
+-	XSC(mcr	p15, 0, r4, c15, c1, 0)
+-	PJ4(bic	r4, r4, #0xf)
+-	PJ4(mcr	p15, 0, r4, c1, c0, 2)
++	bic	r4, r4, #0x3
++	mcr	p15, 0, r4, c15, c1, 0
+ 
+ 	mrc	p15, 0, r2, c2, c0, 0
+ 	mov	r2, r2				@ cpwait
+@@ -330,11 +309,9 @@ ENDPROC(iwmmxt_task_restore)
+  */
+ ENTRY(iwmmxt_task_switch)
+ 
+-	XSC(mrc	p15, 0, r1, c15, c1, 0)
+-	PJ4(mrc	p15, 0, r1, c1, c0, 2)
++	mrc	p15, 0, r1, c15, c1, 0
+ 	@ CP0 and CP1 accessible?
+-	XSC(tst	r1, #0x3)
+-	PJ4(tst	r1, #0xf)
++	tst	r1, #0x3
+ 	bne	1f				@ yes: block them for next task
+ 
+ 	ldr	r2, =concan_owner
+@@ -344,10 +321,8 @@ ENTRY(iwmmxt_task_switch)
+ 	retne	lr				@ no: leave Concan disabled
+ 
+ 1:	@ flip Concan access
+-	XSC(eor	r1, r1, #0x3)
+-	XSC(mcr	p15, 0, r1, c15, c1, 0)
+-	PJ4(eor r1, r1, #0xf)
+-	PJ4(mcr	p15, 0, r1, c1, c0, 2)
++	eor	r1, r1, #0x3
++	mcr	p15, 0, r1, c15, c1, 0
+ 
+ 	mrc	p15, 0, r1, c2, c0, 0
+ 	sub	pc, lr, r1, lsr #32		@ cpwait and return
+diff --git a/arch/arm/kernel/pj4-cp0.c b/arch/arm/kernel/pj4-cp0.c
+deleted file mode 100644
+index 4bca8098c4ff5..0000000000000
+--- a/arch/arm/kernel/pj4-cp0.c
++++ /dev/null
+@@ -1,135 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * linux/arch/arm/kernel/pj4-cp0.c
+- *
+- * PJ4 iWMMXt coprocessor context switching and handling
+- *
+- * Copyright (c) 2010 Marvell International Inc.
+- */
+-
+-#include <linux/types.h>
+-#include <linux/kernel.h>
+-#include <linux/signal.h>
+-#include <linux/sched.h>
+-#include <linux/init.h>
+-#include <linux/io.h>
+-#include <asm/thread_notify.h>
+-#include <asm/cputype.h>
+-
+-static int iwmmxt_do(struct notifier_block *self, unsigned long cmd, void *t)
+-{
+-	struct thread_info *thread = t;
+-
+-	switch (cmd) {
+-	case THREAD_NOTIFY_FLUSH:
+-		/*
+-		 * flush_thread() zeroes thread->fpstate, so no need
+-		 * to do anything here.
+-		 *
+-		 * FALLTHROUGH: Ensure we don't try to overwrite our newly
+-		 * initialised state information on the first fault.
+-		 */
+-
+-	case THREAD_NOTIFY_EXIT:
+-		iwmmxt_task_release(thread);
+-		break;
+-
+-	case THREAD_NOTIFY_SWITCH:
+-		iwmmxt_task_switch(thread);
+-		break;
+-	}
+-
+-	return NOTIFY_DONE;
+-}
+-
+-static struct notifier_block __maybe_unused iwmmxt_notifier_block = {
+-	.notifier_call	= iwmmxt_do,
+-};
+-
+-
+-static u32 __init pj4_cp_access_read(void)
+-{
+-	u32 value;
+-
+-	__asm__ __volatile__ (
+-		"mrc	p15, 0, %0, c1, c0, 2\n\t"
+-		: "=r" (value));
+-	return value;
+-}
+-
+-static void __init pj4_cp_access_write(u32 value)
+-{
+-	u32 temp;
+-
+-	__asm__ __volatile__ (
+-		"mcr	p15, 0, %1, c1, c0, 2\n\t"
+-#ifdef CONFIG_THUMB2_KERNEL
+-		"isb\n\t"
+-#else
+-		"mrc	p15, 0, %0, c1, c0, 2\n\t"
+-		"mov	%0, %0\n\t"
+-		"sub	pc, pc, #4\n\t"
+-#endif
+-		: "=r" (temp) : "r" (value));
+-}
+-
+-static int __init pj4_get_iwmmxt_version(void)
+-{
+-	u32 cp_access, wcid;
+-
+-	cp_access = pj4_cp_access_read();
+-	pj4_cp_access_write(cp_access | 0xf);
+-
+-	/* check if coprocessor 0 and 1 are available */
+-	if ((pj4_cp_access_read() & 0xf) != 0xf) {
+-		pj4_cp_access_write(cp_access);
+-		return -ENODEV;
+-	}
+-
+-	/* read iWMMXt coprocessor id register p1, c0 */
+-	__asm__ __volatile__ ("mrc    p1, 0, %0, c0, c0, 0\n" : "=r" (wcid));
+-
+-	pj4_cp_access_write(cp_access);
+-
+-	/* iWMMXt v1 */
+-	if ((wcid & 0xffffff00) == 0x56051000)
+-		return 1;
+-	/* iWMMXt v2 */
+-	if ((wcid & 0xffffff00) == 0x56052000)
+-		return 2;
+-
+-	return -EINVAL;
+-}
+-
+-/*
+- * Disable CP0/CP1 on boot, and let call_fpe() and the iWMMXt lazy
+- * switch code handle iWMMXt context switching.
+- */
+-static int __init pj4_cp0_init(void)
+-{
+-	u32 __maybe_unused cp_access;
+-	int vers;
+-
+-	if (!cpu_is_pj4())
+-		return 0;
+-
+-	vers = pj4_get_iwmmxt_version();
+-	if (vers < 0)
+-		return 0;
+-
+-#ifndef CONFIG_IWMMXT
+-	pr_info("PJ4 iWMMXt coprocessor detected, but kernel support is missing.\n");
+-#else
+-	cp_access = pj4_cp_access_read() & ~0xf;
+-	pj4_cp_access_write(cp_access);
+-
+-	pr_info("PJ4 iWMMXt v%d coprocessor enabled.\n", vers);
+-	elf_hwcap |= HWCAP_IWMMXT;
+-	thread_register_notifier(&iwmmxt_notifier_block);
+-	register_iwmmxt_undef_handler();
+-#endif
+-
+-	return 0;
+-}
+-
+-late_initcall(pj4_cp0_init);
+diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
+index d19d140a10c7d..0749cf8a66371 100644
+--- a/arch/arm/mm/flush.c
++++ b/arch/arm/mm/flush.c
+@@ -296,6 +296,9 @@ void __sync_icache_dcache(pte_t pteval)
+ 		return;
+ 
+ 	folio = page_folio(pfn_to_page(pfn));
++	if (folio_test_reserved(folio))
++		return;
++
+ 	if (cache_is_vipt_aliasing())
+ 		mapping = folio_flush_mapping(folio);
+ 	else
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 84de20c88a869..e456c9512f9c6 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -2147,8 +2147,16 @@ pcie1: pci@1c08000 {
+ 			ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+ 
+-			interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>;
+-			interrupt-names = "msi";
++			interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "msi0", "msi1", "msi2", "msi3",
++					  "msi4", "msi5", "msi6", "msi7";
+ 			#interrupt-cells = <1>;
+ 			interrupt-map-mask = <0 0 0 0x7>;
+ 			interrupt-map = <0 0 0 1 &intc 0 0 0 434 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+index f2055899ae7ae..a993ad15ea9a6 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+@@ -721,6 +721,8 @@ &pcie3a_phy {
+ };
+ 
+ &pcie4 {
++	max-link-speed = <2>;
++
+ 	perst-gpios = <&tlmm 141 GPIO_ACTIVE_LOW>;
+ 	wake-gpios = <&tlmm 139 GPIO_ACTIVE_LOW>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8450-hdk.dts b/arch/arm64/boot/dts/qcom/sm8450-hdk.dts
+index 20153d08eddec..ff346be5c916b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8450-hdk.dts
+@@ -930,8 +930,8 @@ &sound {
+ 			"TX DMIC3", "MIC BIAS1",
+ 			"TX SWR_INPUT0", "ADC1_OUTPUT",
+ 			"TX SWR_INPUT1", "ADC2_OUTPUT",
+-			"TX SWR_INPUT2", "ADC3_OUTPUT",
+-			"TX SWR_INPUT3", "ADC4_OUTPUT";
++			"TX SWR_INPUT0", "ADC3_OUTPUT",
++			"TX SWR_INPUT1", "ADC4_OUTPUT";
+ 
+ 	wcd-playback-dai-link {
+ 		link-name = "WCD Playback";
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+index 9a70875028b7e..3098bb6b93d67 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+@@ -745,7 +745,7 @@ &swr2 {
+ 	wcd_tx: codec@0,3 {
+ 		compatible = "sdw20217010d00";
+ 		reg = <0 3>;
+-		qcom,tx-port-mapping = <1 1 2 3>;
++		qcom,tx-port-mapping = <2 2 3 4>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-qrd.dts b/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
+index eef811def39bc..ad3d7ac29c6dc 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
+@@ -842,7 +842,7 @@ &swr2 {
+ 	wcd_tx: codec@0,3 {
+ 		compatible = "sdw20217010d00";
+ 		reg = <0 3>;
+-		qcom,tx-port-mapping = <1 1 2 3>;
++		qcom,tx-port-mapping = <2 2 3 4>;
+ 	};
+ };
+ 
+diff --git a/arch/hexagon/kernel/vmlinux.lds.S b/arch/hexagon/kernel/vmlinux.lds.S
+index 1140051a0c455..1150b77fa281c 100644
+--- a/arch/hexagon/kernel/vmlinux.lds.S
++++ b/arch/hexagon/kernel/vmlinux.lds.S
+@@ -63,6 +63,7 @@ SECTIONS
+ 	STABS_DEBUG
+ 	DWARF_DEBUG
+ 	ELF_DETAILS
++	.hexagon.attributes 0 : { *(.hexagon.attributes) }
+ 
+ 	DISCARDS
+ }
+diff --git a/arch/loongarch/crypto/crc32-loongarch.c b/arch/loongarch/crypto/crc32-loongarch.c
+index a49e507af38c0..3eebea3a7b478 100644
+--- a/arch/loongarch/crypto/crc32-loongarch.c
++++ b/arch/loongarch/crypto/crc32-loongarch.c
+@@ -44,7 +44,6 @@ static u32 crc32_loongarch_hw(u32 crc_, const u8 *p, unsigned int len)
+ 
+ 		CRC32(crc, value, w);
+ 		p += sizeof(u32);
+-		len -= sizeof(u32);
+ 	}
+ 
+ 	if (len & sizeof(u16)) {
+@@ -80,7 +79,6 @@ static u32 crc32c_loongarch_hw(u32 crc_, const u8 *p, unsigned int len)
+ 
+ 		CRC32C(crc, value, w);
+ 		p += sizeof(u32);
+-		len -= sizeof(u32);
+ 	}
+ 
+ 	if (len & sizeof(u16)) {
+diff --git a/arch/loongarch/include/asm/Kbuild b/arch/loongarch/include/asm/Kbuild
+index 93783fa24f6e9..dede0b422cfb9 100644
+--- a/arch/loongarch/include/asm/Kbuild
++++ b/arch/loongarch/include/asm/Kbuild
+@@ -4,6 +4,7 @@ generic-y += mcs_spinlock.h
+ generic-y += parport.h
+ generic-y += early_ioremap.h
+ generic-y += qrwlock.h
++generic-y += qspinlock.h
+ generic-y += rwsem.h
+ generic-y += segment.h
+ generic-y += user.h
+diff --git a/arch/loongarch/include/asm/io.h b/arch/loongarch/include/asm/io.h
+index c486c2341b662..4a8adcca329b8 100644
+--- a/arch/loongarch/include/asm/io.h
++++ b/arch/loongarch/include/asm/io.h
+@@ -71,6 +71,8 @@ extern void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t
+ #define memcpy_fromio(a, c, l) __memcpy_fromio((a), (c), (l))
+ #define memcpy_toio(c, a, l)   __memcpy_toio((c), (a), (l))
+ 
++#define __io_aw() mmiowb()
++
+ #include <asm-generic/io.h>
+ 
+ #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
+diff --git a/arch/loongarch/include/asm/percpu.h b/arch/loongarch/include/asm/percpu.h
+index 9b36ac003f890..8f290e5546cf7 100644
+--- a/arch/loongarch/include/asm/percpu.h
++++ b/arch/loongarch/include/asm/percpu.h
+@@ -29,7 +29,12 @@ static inline void set_my_cpu_offset(unsigned long off)
+ 	__my_cpu_offset = off;
+ 	csr_write64(off, PERCPU_BASE_KS);
+ }
+-#define __my_cpu_offset __my_cpu_offset
++
++#define __my_cpu_offset					\
++({							\
++	__asm__ __volatile__("":"+r"(__my_cpu_offset));	\
++	__my_cpu_offset;				\
++})
+ 
+ #define PERCPU_OP(op, asm_op, c_op)					\
+ static __always_inline unsigned long __percpu_##op(void *ptr,		\
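
[Editorial note on the hunk above: the empty asm with a "+r" constraint is a compiler barrier tied to one variable, so the compiler must reload __my_cpu_offset instead of caching a value that may be stale after a context switch. A standalone sketch of that idiom, with names invented here for illustration:]

	#include <stdio.h>

	static unsigned long my_offset = 42;	/* stand-in for the per-CPU base */

	static inline unsigned long read_offset(void)
	{
		unsigned long off = my_offset;
		/* Empty asm with a "+r" constraint: the compiler must assume
		 * 'off' was modified, so it cannot fold or cache the read. */
		__asm__ __volatile__("" : "+r"(off));
		return off;
	}

	int main(void)
	{
		printf("%lu\n", read_offset());
		return 0;
	}
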
+diff --git a/arch/loongarch/include/asm/qspinlock.h b/arch/loongarch/include/asm/qspinlock.h
+deleted file mode 100644
+index 34f43f8ad5912..0000000000000
+--- a/arch/loongarch/include/asm/qspinlock.h
++++ /dev/null
+@@ -1,18 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#ifndef _ASM_QSPINLOCK_H
+-#define _ASM_QSPINLOCK_H
+-
+-#include <asm-generic/qspinlock_types.h>
+-
+-#define queued_spin_unlock queued_spin_unlock
+-
+-static inline void queued_spin_unlock(struct qspinlock *lock)
+-{
+-	compiletime_assert_atomic_type(lock->locked);
+-	c_sync();
+-	WRITE_ONCE(lock->locked, 0);
+-}
+-
+-#include <asm-generic/qspinlock.h>
+-
+-#endif /* _ASM_QSPINLOCK_H */
+diff --git a/arch/parisc/include/asm/assembly.h b/arch/parisc/include/asm/assembly.h
+index 5937d5edaba1e..000a28e1c5e8d 100644
+--- a/arch/parisc/include/asm/assembly.h
++++ b/arch/parisc/include/asm/assembly.h
+@@ -97,26 +97,28 @@
+ 	 * version takes two arguments: a src and destination register.
+ 	 * However, the source and destination registers can not be
+ 	 * the same register.
++	 *
++	 * We use add,l to avoid clobbering the C/B bits in the PSW.
+ 	 */
+ 
+ 	.macro  tophys  grvirt, grphys
+-	ldil    L%(__PAGE_OFFSET), \grphys
+-	sub     \grvirt, \grphys, \grphys
++	ldil    L%(-__PAGE_OFFSET), \grphys
++	addl    \grvirt, \grphys, \grphys
+ 	.endm
+-	
++
+ 	.macro  tovirt  grphys, grvirt
+ 	ldil    L%(__PAGE_OFFSET), \grvirt
+-	add     \grphys, \grvirt, \grvirt
++	addl    \grphys, \grvirt, \grvirt
+ 	.endm
+ 
+ 	.macro  tophys_r1  gr
+-	ldil    L%(__PAGE_OFFSET), %r1
+-	sub     \gr, %r1, \gr
++	ldil    L%(-__PAGE_OFFSET), %r1
++	addl    \gr, %r1, \gr
+ 	.endm
+-	
++
+ 	.macro  tovirt_r1  gr
+ 	ldil    L%(__PAGE_OFFSET), %r1
+-	add     \gr, %r1, \gr
++	addl    \gr, %r1, \gr
+ 	.endm
+ 
+ 	.macro delay value
+diff --git a/arch/parisc/include/asm/checksum.h b/arch/parisc/include/asm/checksum.h
+index 3c43baca7b397..2aceebcd695c8 100644
+--- a/arch/parisc/include/asm/checksum.h
++++ b/arch/parisc/include/asm/checksum.h
+@@ -40,7 +40,7 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
+ "	addc		%0, %5, %0\n"
+ "	addc		%0, %3, %0\n"
+ "1:	ldws,ma		4(%1), %3\n"
+-"	addib,<		0, %2, 1b\n"
++"	addib,>		-1, %2, 1b\n"
+ "	addc		%0, %3, %0\n"
+ "\n"
+ "	extru		%0, 31, 16, %4\n"
+@@ -126,6 +126,7 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ 	** Try to keep 4 registers with "live" values ahead of the ALU.
+ 	*/
+ 
++"	depdi		0, 31, 32, %0\n"/* clear upper half of incoming checksum */
+ "	ldd,ma		8(%1), %4\n"	/* get 1st saddr word */
+ "	ldd,ma		8(%2), %5\n"	/* get 1st daddr word */
+ "	add		%4, %0, %0\n"
+@@ -137,8 +138,8 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ "	add,dc		%3, %0, %0\n"  /* fold in proto+len | carry bit */
+ "	extrd,u		%0, 31, 32, %4\n"/* copy upper half down */
+ "	depdi		0, 31, 32, %0\n"/* clear upper half */
+-"	add		%4, %0, %0\n"	/* fold into 32-bits */
+-"	addc		0, %0, %0\n"	/* add carry */
++"	add,dc		%4, %0, %0\n"	/* fold into 32-bits, plus carry */
++"	addc		0, %0, %0\n"	/* add final carry */
+ 
+ #else
+ 
+@@ -163,7 +164,8 @@ static __inline__ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
+ "	ldw,ma		4(%2), %7\n"	/* 4th daddr */
+ "	addc		%6, %0, %0\n"
+ "	addc		%7, %0, %0\n"
+-"	addc		%3, %0, %0\n"	/* fold in proto+len, catch carry */
++"	addc		%3, %0, %0\n"	/* fold in proto+len */
++"	addc		0, %0, %0\n"	/* add carry */
+ 
+ #endif
+ 	: "=r" (sum), "=r" (saddr), "=r" (daddr), "=r" (len),
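
[Editorial note: the csum_ipv6_magic hunks above all come down to propagating every carry when folding a wide one's-complement accumulator; the old sequence could drop a carry bit. A portable C model of the same folding (a sketch of the arithmetic, not the parisc asm itself):]

	#include <stdint.h>

	/* Fold a 64-bit one's-complement accumulator to a 16-bit checksum,
	 * absorbing each carry that the folding itself produces. */
	static uint16_t csum_fold64(uint64_t sum)
	{
		sum = (sum & 0xffffffffu) + (sum >> 32);  /* 64 -> 32 bits + carry */
		sum = (sum & 0xffffffffu) + (sum >> 32);  /* absorb that carry */
		sum = (sum & 0xffffu) + (sum >> 16);      /* 32 -> 16 bits + carry */
		sum = (sum & 0xffffu) + (sum >> 16);      /* absorb that carry */
		return (uint16_t)~sum;
	}
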
+diff --git a/arch/parisc/include/asm/mman.h b/arch/parisc/include/asm/mman.h
+new file mode 100644
+index 0000000000000..47c5a1991d103
+--- /dev/null
++++ b/arch/parisc/include/asm/mman.h
+@@ -0,0 +1,14 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __ASM_MMAN_H__
++#define __ASM_MMAN_H__
++
++#include <uapi/asm/mman.h>
++
++/* PARISC cannot allow mdwe as it needs writable stacks */
++static inline bool arch_memory_deny_write_exec_supported(void)
++{
++	return false;
++}
++#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
++
++#endif /* __ASM_MMAN_H__ */
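
[Editorial note: this header and the ARM mman.h added earlier in the patch implement the same per-architecture hook. A minimal standalone sketch of how a generic caller is expected to consult it; the caller below is illustrative, not a quote of the kernel's prctl code:]

	#include <errno.h>
	#include <stdbool.h>

	/* Stub standing in for the per-arch hook added above; parisc's
	 * version unconditionally returns false because it needs writable
	 * stacks, while ARM's depends on the CPU architecture level. */
	static bool arch_memory_deny_write_exec_supported(void) { return false; }

	static int set_mdwe_sketch(void)
	{
		if (!arch_memory_deny_write_exec_supported())
			return -EINVAL;	/* cannot enforce W^X on this arch */
		/* ... otherwise record the MDWE flag for the task ... */
		return 0;
	}
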
+diff --git a/arch/parisc/kernel/unaligned.c b/arch/parisc/kernel/unaligned.c
+index c520e551a1652..a8e75e5b884a7 100644
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -169,6 +169,7 @@ static int emulate_ldw(struct pt_regs *regs, int toreg, int flop)
+ static int emulate_ldd(struct pt_regs *regs, int toreg, int flop)
+ {
+ 	unsigned long saddr = regs->ior;
++	unsigned long shift, temp1;
+ 	__u64 val = 0;
+ 	ASM_EXCEPTIONTABLE_VAR(ret);
+ 
+@@ -180,25 +181,22 @@ static int emulate_ldd(struct pt_regs *regs, int toreg, int flop)
+ 
+ #ifdef CONFIG_64BIT
+ 	__asm__ __volatile__  (
+-"	depd,z	%3,60,3,%%r19\n"		/* r19=(ofs&7)*8 */
+-"	mtsp	%4, %%sr1\n"
+-"	depd	%%r0,63,3,%3\n"
+-"1:	ldd	0(%%sr1,%3),%0\n"
+-"2:	ldd	8(%%sr1,%3),%%r20\n"
+-"	subi	64,%%r19,%%r19\n"
+-"	mtsar	%%r19\n"
+-"	shrpd	%0,%%r20,%%sar,%0\n"
++"	depd,z	%2,60,3,%3\n"		/* shift=(ofs&7)*8 */
++"	mtsp	%5, %%sr1\n"
++"	depd	%%r0,63,3,%2\n"
++"1:	ldd	0(%%sr1,%2),%0\n"
++"2:	ldd	8(%%sr1,%2),%4\n"
++"	subi	64,%3,%3\n"
++"	mtsar	%3\n"
++"	shrpd	%0,%4,%%sar,%0\n"
+ "3:	\n"
+ 	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%1")
+ 	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%1")
+-	: "=r" (val), "+r" (ret)
+-	: "0" (val), "r" (saddr), "r" (regs->isr)
+-	: "r19", "r20" );
++	: "+r" (val), "+r" (ret), "+r" (saddr), "=&r" (shift), "=&r" (temp1)
++	: "r" (regs->isr) );
+ #else
+-    {
+-	unsigned long shift, temp1;
+ 	__asm__ __volatile__  (
+-"	zdep	%2,29,2,%3\n"		/* r19=(ofs&3)*8 */
++"	zdep	%2,29,2,%3\n"		/* shift=(ofs&3)*8 */
+ "	mtsp	%5, %%sr1\n"
+ "	dep	%%r0,31,2,%2\n"
+ "1:	ldw	0(%%sr1,%2),%0\n"
+@@ -214,7 +212,6 @@ static int emulate_ldd(struct pt_regs *regs, int toreg, int flop)
+ 	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 4b, "%1")
+ 	: "+r" (val), "+r" (ret), "+r" (saddr), "=&r" (shift), "=&r" (temp1)
+ 	: "r" (regs->isr) );
+-    }
+ #endif
+ 
+ 	DPRINTF("val = 0x%llx\n", val);
+diff --git a/arch/powerpc/include/asm/reg_fsl_emb.h b/arch/powerpc/include/asm/reg_fsl_emb.h
+index a21f529c43d96..8359c06d92d9f 100644
+--- a/arch/powerpc/include/asm/reg_fsl_emb.h
++++ b/arch/powerpc/include/asm/reg_fsl_emb.h
+@@ -12,9 +12,16 @@
+ #ifndef __ASSEMBLY__
+ /* Performance Monitor Registers */
+ #define mfpmr(rn)	({unsigned int rval; \
+-			asm volatile("mfpmr %0," __stringify(rn) \
++			asm volatile(".machine push; " \
++				     ".machine e300; " \
++				     "mfpmr %0," __stringify(rn) ";" \
++				     ".machine pop; " \
+ 				     : "=r" (rval)); rval;})
+-#define mtpmr(rn, v)	asm volatile("mtpmr " __stringify(rn) ",%0" : : "r" (v))
++#define mtpmr(rn, v)	asm volatile(".machine push; " \
++				     ".machine e300; " \
++				     "mtpmr " __stringify(rn) ",%0; " \
++				     ".machine pop; " \
++				     : : "r" (v))
+ #endif /* __ASSEMBLY__ */
+ 
+ /* Freescale Book E Performance Monitor APU Registers */
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 0b5878c3125b1..77364729a1b61 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -375,6 +375,18 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
+ 	if (IS_ENABLED(CONFIG_PPC64))
+ 		boot_cpu_hwid = be32_to_cpu(intserv[found_thread]);
+ 
++	if (nr_cpu_ids % nthreads != 0) {
++		set_nr_cpu_ids(ALIGN(nr_cpu_ids, nthreads));
++		pr_warn("nr_cpu_ids was not a multiple of threads_per_core, adjusted to %d\n",
++			nr_cpu_ids);
++	}
++
++	if (boot_cpuid >= nr_cpu_ids) {
++		set_nr_cpu_ids(min(CONFIG_NR_CPUS, ALIGN(boot_cpuid + 1, nthreads)));
++		pr_warn("Boot CPU %d >= nr_cpu_ids, adjusted nr_cpu_ids to %d\n",
++			boot_cpuid, nr_cpu_ids);
++	}
++
+ 	/*
+ 	 * PAPR defines "logical" PVR values for cpus that
+ 	 * meet various levels of the architecture:
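
[Editorial note: the rounding added above keeps nr_cpu_ids a whole number of SMT cores. A standalone example of the ALIGN() arithmetic, with the CPU counts picked purely for illustration; the kernel's ALIGN() additionally requires a power-of-two alignment:]

	#include <stdio.h>

	/* Round x up to the next multiple of a (plain arithmetic form,
	 * used here for clarity instead of the kernel's mask form). */
	#define ALIGN_UP(x, a) ((((x) + (a) - 1) / (a)) * (a))

	int main(void)
	{
		/* e.g. nr_cpu_ids = 12 with 8 threads per core becomes 16,
		 * and a boot CPU id of 17 forces ALIGN_UP(17 + 1, 8) = 24 */
		printf("%d %d\n", ALIGN_UP(12, 8), ALIGN_UP(17 + 1, 8));
		return 0;
	}
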
+diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
+index 6eac63e79a899..0ab65eeb93ee3 100644
+--- a/arch/powerpc/lib/Makefile
++++ b/arch/powerpc/lib/Makefile
+@@ -76,7 +76,7 @@ obj-$(CONFIG_PPC_LIB_RHEAP) += rheap.o
+ obj-$(CONFIG_FTR_FIXUP_SELFTEST) += feature-fixups-test.o
+ 
+ obj-$(CONFIG_ALTIVEC)	+= xor_vmx.o xor_vmx_glue.o
+-CFLAGS_xor_vmx.o += -maltivec $(call cc-option,-mabi=altivec)
++CFLAGS_xor_vmx.o += -mhard-float -maltivec $(call cc-option,-mabi=altivec)
+ # Enable <altivec.h>
+ CFLAGS_xor_vmx.o += -isystem $(shell $(CC) -print-file-name=include)
+ 
+diff --git a/arch/sparc/include/asm/parport.h b/arch/sparc/include/asm/parport.h
+index 0a7ffcfd59cda..e2eed8f97665f 100644
+--- a/arch/sparc/include/asm/parport.h
++++ b/arch/sparc/include/asm/parport.h
+@@ -1,256 +1,11 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+-/* parport.h: sparc64 specific parport initialization and dma.
+- *
+- * Copyright (C) 1999  Eddie C. Dost  (ecd@skynet.be)
+- */
++#ifndef ___ASM_SPARC_PARPORT_H
++#define ___ASM_SPARC_PARPORT_H
+ 
+-#ifndef _ASM_SPARC64_PARPORT_H
+-#define _ASM_SPARC64_PARPORT_H 1
+-
+-#include <linux/of.h>
+-#include <linux/platform_device.h>
+-
+-#include <asm/ebus_dma.h>
+-#include <asm/ns87303.h>
+-#include <asm/prom.h>
+-
+-#define PARPORT_PC_MAX_PORTS	PARPORT_MAX
+-
+-/*
+- * While sparc64 doesn't have an ISA DMA API, we provide something that looks
+- * close enough to make parport_pc happy
+- */
+-#define HAS_DMA
+-
+-#ifdef CONFIG_PARPORT_PC_FIFO
+-static DEFINE_SPINLOCK(dma_spin_lock);
+-
+-#define claim_dma_lock() \
+-({	unsigned long flags; \
+-	spin_lock_irqsave(&dma_spin_lock, flags); \
+-	flags; \
+-})
+-
+-#define release_dma_lock(__flags) \
+-	spin_unlock_irqrestore(&dma_spin_lock, __flags);
++#if defined(__sparc__) && defined(__arch64__)
++#include <asm/parport_64.h>
++#else
++#include <asm-generic/parport.h>
++#endif
+ #endif
+ 
+-static struct sparc_ebus_info {
+-	struct ebus_dma_info info;
+-	unsigned int addr;
+-	unsigned int count;
+-	int lock;
+-
+-	struct parport *port;
+-} sparc_ebus_dmas[PARPORT_PC_MAX_PORTS];
+-
+-static DECLARE_BITMAP(dma_slot_map, PARPORT_PC_MAX_PORTS);
+-
+-static inline int request_dma(unsigned int dmanr, const char *device_id)
+-{
+-	if (dmanr >= PARPORT_PC_MAX_PORTS)
+-		return -EINVAL;
+-	if (xchg(&sparc_ebus_dmas[dmanr].lock, 1) != 0)
+-		return -EBUSY;
+-	return 0;
+-}
+-
+-static inline void free_dma(unsigned int dmanr)
+-{
+-	if (dmanr >= PARPORT_PC_MAX_PORTS) {
+-		printk(KERN_WARNING "Trying to free DMA%d\n", dmanr);
+-		return;
+-	}
+-	if (xchg(&sparc_ebus_dmas[dmanr].lock, 0) == 0) {
+-		printk(KERN_WARNING "Trying to free free DMA%d\n", dmanr);
+-		return;
+-	}
+-}
+-
+-static inline void enable_dma(unsigned int dmanr)
+-{
+-	ebus_dma_enable(&sparc_ebus_dmas[dmanr].info, 1);
+-
+-	if (ebus_dma_request(&sparc_ebus_dmas[dmanr].info,
+-			     sparc_ebus_dmas[dmanr].addr,
+-			     sparc_ebus_dmas[dmanr].count))
+-		BUG();
+-}
+-
+-static inline void disable_dma(unsigned int dmanr)
+-{
+-	ebus_dma_enable(&sparc_ebus_dmas[dmanr].info, 0);
+-}
+-
+-static inline void clear_dma_ff(unsigned int dmanr)
+-{
+-	/* nothing */
+-}
+-
+-static inline void set_dma_mode(unsigned int dmanr, char mode)
+-{
+-	ebus_dma_prepare(&sparc_ebus_dmas[dmanr].info, (mode != DMA_MODE_WRITE));
+-}
+-
+-static inline void set_dma_addr(unsigned int dmanr, unsigned int addr)
+-{
+-	sparc_ebus_dmas[dmanr].addr = addr;
+-}
+-
+-static inline void set_dma_count(unsigned int dmanr, unsigned int count)
+-{
+-	sparc_ebus_dmas[dmanr].count = count;
+-}
+-
+-static inline unsigned int get_dma_residue(unsigned int dmanr)
+-{
+-	return ebus_dma_residue(&sparc_ebus_dmas[dmanr].info);
+-}
+-
+-static int ecpp_probe(struct platform_device *op)
+-{
+-	unsigned long base = op->resource[0].start;
+-	unsigned long config = op->resource[1].start;
+-	unsigned long d_base = op->resource[2].start;
+-	unsigned long d_len;
+-	struct device_node *parent;
+-	struct parport *p;
+-	int slot, err;
+-
+-	parent = op->dev.of_node->parent;
+-	if (of_node_name_eq(parent, "dma")) {
+-		p = parport_pc_probe_port(base, base + 0x400,
+-					  op->archdata.irqs[0], PARPORT_DMA_NOFIFO,
+-					  op->dev.parent->parent, 0);
+-		if (!p)
+-			return -ENOMEM;
+-		dev_set_drvdata(&op->dev, p);
+-		return 0;
+-	}
+-
+-	for (slot = 0; slot < PARPORT_PC_MAX_PORTS; slot++) {
+-		if (!test_and_set_bit(slot, dma_slot_map))
+-			break;
+-	}
+-	err = -ENODEV;
+-	if (slot >= PARPORT_PC_MAX_PORTS)
+-		goto out_err;
+-
+-	spin_lock_init(&sparc_ebus_dmas[slot].info.lock);
+-
+-	d_len = (op->resource[2].end - d_base) + 1UL;
+-	sparc_ebus_dmas[slot].info.regs =
+-		of_ioremap(&op->resource[2], 0, d_len, "ECPP DMA");
+-
+-	if (!sparc_ebus_dmas[slot].info.regs)
+-		goto out_clear_map;
+-
+-	sparc_ebus_dmas[slot].info.flags = 0;
+-	sparc_ebus_dmas[slot].info.callback = NULL;
+-	sparc_ebus_dmas[slot].info.client_cookie = NULL;
+-	sparc_ebus_dmas[slot].info.irq = 0xdeadbeef;
+-	strcpy(sparc_ebus_dmas[slot].info.name, "parport");
+-	if (ebus_dma_register(&sparc_ebus_dmas[slot].info))
+-		goto out_unmap_regs;
+-
+-	ebus_dma_irq_enable(&sparc_ebus_dmas[slot].info, 1);
+-
+-	/* Configure IRQ to Push Pull, Level Low */
+-	/* Enable ECP, set bit 2 of the CTR first */
+-	outb(0x04, base + 0x02);
+-	ns87303_modify(config, PCR,
+-		       PCR_EPP_ENABLE |
+-		       PCR_IRQ_ODRAIN,
+-		       PCR_ECP_ENABLE |
+-		       PCR_ECP_CLK_ENA |
+-		       PCR_IRQ_POLAR);
+-
+-	/* CTR bit 5 controls direction of port */
+-	ns87303_modify(config, PTR,
+-		       0, PTR_LPT_REG_DIR);
+-
+-	p = parport_pc_probe_port(base, base + 0x400,
+-				  op->archdata.irqs[0],
+-				  slot,
+-				  op->dev.parent,
+-				  0);
+-	err = -ENOMEM;
+-	if (!p)
+-		goto out_disable_irq;
+-
+-	dev_set_drvdata(&op->dev, p);
+-
+-	return 0;
+-
+-out_disable_irq:
+-	ebus_dma_irq_enable(&sparc_ebus_dmas[slot].info, 0);
+-	ebus_dma_unregister(&sparc_ebus_dmas[slot].info);
+-
+-out_unmap_regs:
+-	of_iounmap(&op->resource[2], sparc_ebus_dmas[slot].info.regs, d_len);
+-
+-out_clear_map:
+-	clear_bit(slot, dma_slot_map);
+-
+-out_err:
+-	return err;
+-}
+-
+-static int ecpp_remove(struct platform_device *op)
+-{
+-	struct parport *p = dev_get_drvdata(&op->dev);
+-	int slot = p->dma;
+-
+-	parport_pc_unregister_port(p);
+-
+-	if (slot != PARPORT_DMA_NOFIFO) {
+-		unsigned long d_base = op->resource[2].start;
+-		unsigned long d_len;
+-
+-		d_len = (op->resource[2].end - d_base) + 1UL;
+-
+-		ebus_dma_irq_enable(&sparc_ebus_dmas[slot].info, 0);
+-		ebus_dma_unregister(&sparc_ebus_dmas[slot].info);
+-		of_iounmap(&op->resource[2],
+-			   sparc_ebus_dmas[slot].info.regs,
+-			   d_len);
+-		clear_bit(slot, dma_slot_map);
+-	}
+-
+-	return 0;
+-}
+-
+-static const struct of_device_id ecpp_match[] = {
+-	{
+-		.name = "ecpp",
+-	},
+-	{
+-		.name = "parallel",
+-		.compatible = "ecpp",
+-	},
+-	{
+-		.name = "parallel",
+-		.compatible = "ns87317-ecpp",
+-	},
+-	{
+-		.name = "parallel",
+-		.compatible = "pnpALI,1533,3",
+-	},
+-	{},
+-};
+-
+-static struct platform_driver ecpp_driver = {
+-	.driver = {
+-		.name = "ecpp",
+-		.of_match_table = ecpp_match,
+-	},
+-	.probe			= ecpp_probe,
+-	.remove			= ecpp_remove,
+-};
+-
+-static int parport_pc_find_nonpci_ports(int autoirq, int autodma)
+-{
+-	return platform_driver_register(&ecpp_driver);
+-}
+-
+-#endif /* !(_ASM_SPARC64_PARPORT_H */
+diff --git a/arch/sparc/include/asm/parport_64.h b/arch/sparc/include/asm/parport_64.h
+new file mode 100644
+index 0000000000000..0a7ffcfd59cda
+--- /dev/null
++++ b/arch/sparc/include/asm/parport_64.h
+@@ -0,0 +1,256 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/* parport.h: sparc64 specific parport initialization and dma.
++ *
++ * Copyright (C) 1999  Eddie C. Dost  (ecd@skynet.be)
++ */
++
++#ifndef _ASM_SPARC64_PARPORT_H
++#define _ASM_SPARC64_PARPORT_H 1
++
++#include <linux/of.h>
++#include <linux/platform_device.h>
++
++#include <asm/ebus_dma.h>
++#include <asm/ns87303.h>
++#include <asm/prom.h>
++
++#define PARPORT_PC_MAX_PORTS	PARPORT_MAX
++
++/*
++ * While sparc64 doesn't have an ISA DMA API, we provide something that looks
++ * close enough to make parport_pc happy
++ */
++#define HAS_DMA
++
++#ifdef CONFIG_PARPORT_PC_FIFO
++static DEFINE_SPINLOCK(dma_spin_lock);
++
++#define claim_dma_lock() \
++({	unsigned long flags; \
++	spin_lock_irqsave(&dma_spin_lock, flags); \
++	flags; \
++})
++
++#define release_dma_lock(__flags) \
++	spin_unlock_irqrestore(&dma_spin_lock, __flags);
++#endif
++
++static struct sparc_ebus_info {
++	struct ebus_dma_info info;
++	unsigned int addr;
++	unsigned int count;
++	int lock;
++
++	struct parport *port;
++} sparc_ebus_dmas[PARPORT_PC_MAX_PORTS];
++
++static DECLARE_BITMAP(dma_slot_map, PARPORT_PC_MAX_PORTS);
++
++static inline int request_dma(unsigned int dmanr, const char *device_id)
++{
++	if (dmanr >= PARPORT_PC_MAX_PORTS)
++		return -EINVAL;
++	if (xchg(&sparc_ebus_dmas[dmanr].lock, 1) != 0)
++		return -EBUSY;
++	return 0;
++}
++
++static inline void free_dma(unsigned int dmanr)
++{
++	if (dmanr >= PARPORT_PC_MAX_PORTS) {
++		printk(KERN_WARNING "Trying to free DMA%d\n", dmanr);
++		return;
++	}
++	if (xchg(&sparc_ebus_dmas[dmanr].lock, 0) == 0) {
++		printk(KERN_WARNING "Trying to free free DMA%d\n", dmanr);
++		return;
++	}
++}
++
++static inline void enable_dma(unsigned int dmanr)
++{
++	ebus_dma_enable(&sparc_ebus_dmas[dmanr].info, 1);
++
++	if (ebus_dma_request(&sparc_ebus_dmas[dmanr].info,
++			     sparc_ebus_dmas[dmanr].addr,
++			     sparc_ebus_dmas[dmanr].count))
++		BUG();
++}
++
++static inline void disable_dma(unsigned int dmanr)
++{
++	ebus_dma_enable(&sparc_ebus_dmas[dmanr].info, 0);
++}
++
++static inline void clear_dma_ff(unsigned int dmanr)
++{
++	/* nothing */
++}
++
++static inline void set_dma_mode(unsigned int dmanr, char mode)
++{
++	ebus_dma_prepare(&sparc_ebus_dmas[dmanr].info, (mode != DMA_MODE_WRITE));
++}
++
++static inline void set_dma_addr(unsigned int dmanr, unsigned int addr)
++{
++	sparc_ebus_dmas[dmanr].addr = addr;
++}
++
++static inline void set_dma_count(unsigned int dmanr, unsigned int count)
++{
++	sparc_ebus_dmas[dmanr].count = count;
++}
++
++static inline unsigned int get_dma_residue(unsigned int dmanr)
++{
++	return ebus_dma_residue(&sparc_ebus_dmas[dmanr].info);
++}
++
++static int ecpp_probe(struct platform_device *op)
++{
++	unsigned long base = op->resource[0].start;
++	unsigned long config = op->resource[1].start;
++	unsigned long d_base = op->resource[2].start;
++	unsigned long d_len;
++	struct device_node *parent;
++	struct parport *p;
++	int slot, err;
++
++	parent = op->dev.of_node->parent;
++	if (of_node_name_eq(parent, "dma")) {
++		p = parport_pc_probe_port(base, base + 0x400,
++					  op->archdata.irqs[0], PARPORT_DMA_NOFIFO,
++					  op->dev.parent->parent, 0);
++		if (!p)
++			return -ENOMEM;
++		dev_set_drvdata(&op->dev, p);
++		return 0;
++	}
++
++	for (slot = 0; slot < PARPORT_PC_MAX_PORTS; slot++) {
++		if (!test_and_set_bit(slot, dma_slot_map))
++			break;
++	}
++	err = -ENODEV;
++	if (slot >= PARPORT_PC_MAX_PORTS)
++		goto out_err;
++
++	spin_lock_init(&sparc_ebus_dmas[slot].info.lock);
++
++	d_len = (op->resource[2].end - d_base) + 1UL;
++	sparc_ebus_dmas[slot].info.regs =
++		of_ioremap(&op->resource[2], 0, d_len, "ECPP DMA");
++
++	if (!sparc_ebus_dmas[slot].info.regs)
++		goto out_clear_map;
++
++	sparc_ebus_dmas[slot].info.flags = 0;
++	sparc_ebus_dmas[slot].info.callback = NULL;
++	sparc_ebus_dmas[slot].info.client_cookie = NULL;
++	sparc_ebus_dmas[slot].info.irq = 0xdeadbeef;
++	strcpy(sparc_ebus_dmas[slot].info.name, "parport");
++	if (ebus_dma_register(&sparc_ebus_dmas[slot].info))
++		goto out_unmap_regs;
++
++	ebus_dma_irq_enable(&sparc_ebus_dmas[slot].info, 1);
++
++	/* Configure IRQ to Push Pull, Level Low */
++	/* Enable ECP, set bit 2 of the CTR first */
++	outb(0x04, base + 0x02);
++	ns87303_modify(config, PCR,
++		       PCR_EPP_ENABLE |
++		       PCR_IRQ_ODRAIN,
++		       PCR_ECP_ENABLE |
++		       PCR_ECP_CLK_ENA |
++		       PCR_IRQ_POLAR);
++
++	/* CTR bit 5 controls direction of port */
++	ns87303_modify(config, PTR,
++		       0, PTR_LPT_REG_DIR);
++
++	p = parport_pc_probe_port(base, base + 0x400,
++				  op->archdata.irqs[0],
++				  slot,
++				  op->dev.parent,
++				  0);
++	err = -ENOMEM;
++	if (!p)
++		goto out_disable_irq;
++
++	dev_set_drvdata(&op->dev, p);
++
++	return 0;
++
++out_disable_irq:
++	ebus_dma_irq_enable(&sparc_ebus_dmas[slot].info, 0);
++	ebus_dma_unregister(&sparc_ebus_dmas[slot].info);
++
++out_unmap_regs:
++	of_iounmap(&op->resource[2], sparc_ebus_dmas[slot].info.regs, d_len);
++
++out_clear_map:
++	clear_bit(slot, dma_slot_map);
++
++out_err:
++	return err;
++}
++
++static int ecpp_remove(struct platform_device *op)
++{
++	struct parport *p = dev_get_drvdata(&op->dev);
++	int slot = p->dma;
++
++	parport_pc_unregister_port(p);
++
++	if (slot != PARPORT_DMA_NOFIFO) {
++		unsigned long d_base = op->resource[2].start;
++		unsigned long d_len;
++
++		d_len = (op->resource[2].end - d_base) + 1UL;
++
++		ebus_dma_irq_enable(&sparc_ebus_dmas[slot].info, 0);
++		ebus_dma_unregister(&sparc_ebus_dmas[slot].info);
++		of_iounmap(&op->resource[2],
++			   sparc_ebus_dmas[slot].info.regs,
++			   d_len);
++		clear_bit(slot, dma_slot_map);
++	}
++
++	return 0;
++}
++
++static const struct of_device_id ecpp_match[] = {
++	{
++		.name = "ecpp",
++	},
++	{
++		.name = "parallel",
++		.compatible = "ecpp",
++	},
++	{
++		.name = "parallel",
++		.compatible = "ns87317-ecpp",
++	},
++	{
++		.name = "parallel",
++		.compatible = "pnpALI,1533,3",
++	},
++	{},
++};
++
++static struct platform_driver ecpp_driver = {
++	.driver = {
++		.name = "ecpp",
++		.of_match_table = ecpp_match,
++	},
++	.probe			= ecpp_probe,
++	.remove			= ecpp_remove,
++};
++
++static int parport_pc_find_nonpci_ports(int autoirq, int autodma)
++{
++	return platform_driver_register(&ecpp_driver);
++}
++
++#endif /* !(_ASM_SPARC64_PARPORT_H */
+diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
+index 17cdfdbf1f3b7..149adc0947530 100644
+--- a/arch/sparc/kernel/nmi.c
++++ b/arch/sparc/kernel/nmi.c
+@@ -279,7 +279,7 @@ static int __init setup_nmi_watchdog(char *str)
+ 	if (!strncmp(str, "panic", 5))
+ 		panic_on_timeout = 1;
+ 
+-	return 0;
++	return 1;
+ }
+ __setup("nmi_watchdog=", setup_nmi_watchdog);
+ 
+diff --git a/arch/sparc/vdso/vma.c b/arch/sparc/vdso/vma.c
+index 136c78f28f8ba..1bbf4335de454 100644
+--- a/arch/sparc/vdso/vma.c
++++ b/arch/sparc/vdso/vma.c
+@@ -449,9 +449,8 @@ static __init int vdso_setup(char *s)
+ 	unsigned long val;
+ 
+ 	err = kstrtoul(s, 10, &val);
+-	if (err)
+-		return err;
+-	vdso_enabled = val;
+-	return 0;
++	if (!err)
++		vdso_enabled = val;
++	return 1;
+ }
+ __setup("vdso=", vdso_setup);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index d2003865b7cf6..92412748582c8 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1539,19 +1539,6 @@ config AMD_MEM_ENCRYPT
+ 	  This requires an AMD processor that supports Secure Memory
+ 	  Encryption (SME).
+ 
+-config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
+-	bool "Activate AMD Secure Memory Encryption (SME) by default"
+-	depends on AMD_MEM_ENCRYPT
+-	help
+-	  Say yes to have system memory encrypted by default if running on
+-	  an AMD processor that supports Secure Memory Encryption (SME).
+-
+-	  If set to Y, then the encryption of system memory can be
+-	  deactivated with the mem_encrypt=off command line option.
+-
+-	  If set to N, then the encryption of system memory can be
+-	  activated with the mem_encrypt=on command line option.
+-
+ # Common NUMA Features
+ config NUMA
+ 	bool "NUMA Memory Allocation and Scheduler Support"
+diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
+index f4e22ef774ab6..876fc6d46a131 100644
+--- a/arch/x86/boot/compressed/efi_mixed.S
++++ b/arch/x86/boot/compressed/efi_mixed.S
+@@ -15,10 +15,12 @@
+  */
+ 
+ #include <linux/linkage.h>
++#include <asm/asm-offsets.h>
+ #include <asm/msr.h>
+ #include <asm/page_types.h>
+ #include <asm/processor-flags.h>
+ #include <asm/segment.h>
++#include <asm/setup.h>
+ 
+ 	.code64
+ 	.text
+@@ -49,6 +51,11 @@ SYM_FUNC_START(startup_64_mixed_mode)
+ 	lea	efi32_boot_args(%rip), %rdx
+ 	mov	0(%rdx), %edi
+ 	mov	4(%rdx), %esi
++
++	/* Switch to the firmware's stack */
++	movl	efi32_boot_sp(%rip), %esp
++	andl	$~7, %esp
++
+ #ifdef CONFIG_EFI_HANDOVER_PROTOCOL
+ 	mov	8(%rdx), %edx		// saved bootparams pointer
+ 	test	%edx, %edx
+@@ -144,6 +151,7 @@ SYM_FUNC_END(__efi64_thunk)
+ SYM_FUNC_START(efi32_stub_entry)
+ 	call	1f
+ 1:	popl	%ecx
++	leal	(efi32_boot_args - 1b)(%ecx), %ebx
+ 
+ 	/* Clear BSS */
+ 	xorl	%eax, %eax
+@@ -158,6 +166,7 @@ SYM_FUNC_START(efi32_stub_entry)
+ 	popl	%ecx
+ 	popl	%edx
+ 	popl	%esi
++	movl	%esi, 8(%ebx)
+ 	jmp	efi32_entry
+ SYM_FUNC_END(efi32_stub_entry)
+ #endif
+@@ -234,8 +243,6 @@ SYM_FUNC_END(efi_enter32)
+  *
+  * Arguments:	%ecx	image handle
+  * 		%edx	EFI system table pointer
+- *		%esi	struct bootparams pointer (or NULL when not using
+- *			the EFI handover protocol)
+  *
+  * Since this is the point of no return for ordinary execution, no registers
+  * are considered live except for the function parameters. [Note that the EFI
+@@ -254,13 +261,25 @@ SYM_FUNC_START_LOCAL(efi32_entry)
+ 	/* Store firmware IDT descriptor */
+ 	sidtl	(efi32_boot_idt - 1b)(%ebx)
+ 
++	/* Store firmware stack pointer */
++	movl	%esp, (efi32_boot_sp - 1b)(%ebx)
++
+ 	/* Store boot arguments */
+ 	leal	(efi32_boot_args - 1b)(%ebx), %ebx
+ 	movl	%ecx, 0(%ebx)
+ 	movl	%edx, 4(%ebx)
+-	movl	%esi, 8(%ebx)
+ 	movb	$0x0, 12(%ebx)          // efi_is64
+ 
++	/*
++	 * Allocate some memory for a temporary struct boot_params, which only
++	 * needs the minimal pieces that startup_32() relies on.
++	 */
++	subl	$PARAM_SIZE, %esp
++	movl	%esp, %esi
++	movl	$PAGE_SIZE, BP_kernel_alignment(%esi)
++	movl	$_end - 1b, BP_init_size(%esi)
++	subl	$startup_32 - 1b, BP_init_size(%esi)
++
+ 	/* Disable paging */
+ 	movl	%cr0, %eax
+ 	btrl	$X86_CR0_PG_BIT, %eax
+@@ -286,8 +305,7 @@ SYM_FUNC_START(efi32_pe_entry)
+ 
+ 	movl	8(%ebp), %ecx			// image_handle
+ 	movl	12(%ebp), %edx			// sys_table
+-	xorl	%esi, %esi
+-	jmp	efi32_entry			// pass %ecx, %edx, %esi
++	jmp	efi32_entry			// pass %ecx, %edx
+ 						// no other registers remain live
+ 
+ 2:	popl	%edi				// restore callee-save registers
+@@ -318,5 +336,6 @@ SYM_DATA_END(efi32_boot_idt)
+ 
+ SYM_DATA_LOCAL(efi32_boot_cs, .word 0)
+ SYM_DATA_LOCAL(efi32_boot_ds, .word 0)
++SYM_DATA_LOCAL(efi32_boot_sp, .long 0)
+ SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0)
+ SYM_DATA(efi_is64, .byte 1)
+diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
+index eeec9986570ed..d07be9d05cd03 100644
+--- a/arch/x86/coco/core.c
++++ b/arch/x86/coco/core.c
+@@ -14,7 +14,7 @@
+ #include <asm/processor.h>
+ 
+ enum cc_vendor cc_vendor __ro_after_init = CC_VENDOR_NONE;
+-static u64 cc_mask __ro_after_init;
++u64 cc_mask __ro_after_init;
+ 
+ static bool noinstr intel_cc_platform_has(enum cc_attr attr)
+ {
+@@ -148,8 +148,3 @@ u64 cc_mkdec(u64 val)
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(cc_mkdec);
+-
+-__init void cc_set_mask(u64 mask)
+-{
+-	cc_mask = mask;
+-}
+diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
+index b1a98fa38828e..0e82074517f6b 100644
+--- a/arch/x86/include/asm/asm-prototypes.h
++++ b/arch/x86/include/asm/asm-prototypes.h
+@@ -13,6 +13,7 @@
+ #include <asm/preempt.h>
+ #include <asm/asm.h>
+ #include <asm/gsseg.h>
++#include <asm/nospec-branch.h>
+ 
+ #ifndef CONFIG_X86_CMPXCHG64
+ extern void cmpxchg8b_emu(void);
+diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
+index fbcfec4dc4ccd..ca8eed1d496ab 100644
+--- a/arch/x86/include/asm/asm.h
++++ b/arch/x86/include/asm/asm.h
+@@ -113,6 +113,20 @@
+ 
+ #endif
+ 
++#ifndef __ASSEMBLY__
++#ifndef __pic__
++static __always_inline __pure void *rip_rel_ptr(void *p)
++{
++	asm("leaq %c1(%%rip), %0" : "=r"(p) : "i"(p));
++
++	return p;
++}
++#define RIP_REL_REF(var)	(*(typeof(&(var)))rip_rel_ptr(&(var)))
++#else
++#define RIP_REL_REF(var)	(var)
++#endif
++#endif
++
+ /*
+  * Macros to generate condition code outputs from inline assembly,
+  * The output operand must be type "bool".
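
[Editorial note: RIP_REL_REF() exists because very early boot code can run before kernel relocations are processed, where an absolute reference to a global may resolve to the wrong address; the leaq-with-%rip form is position-independent by construction. A hedged kernel-context sketch of the usage pattern (the sev-shared.c and coco.h hunks later in this patch show the real call sites):]

	/* Kernel context assumed (x86-64, non-PIC build): force a
	 * RIP-relative access to a global touched before relocation. */
	static u64 early_mask;

	static void early_setup_sketch(u64 mask)
	{
		RIP_REL_REF(early_mask) = mask;	/* safe pre-relocation store */
	}
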
+diff --git a/arch/x86/include/asm/coco.h b/arch/x86/include/asm/coco.h
+index 6ae2d16a7613b..21940ef8d2904 100644
+--- a/arch/x86/include/asm/coco.h
++++ b/arch/x86/include/asm/coco.h
+@@ -2,6 +2,7 @@
+ #ifndef _ASM_X86_COCO_H
+ #define _ASM_X86_COCO_H
+ 
++#include <asm/asm.h>
+ #include <asm/types.h>
+ 
+ enum cc_vendor {
+@@ -11,9 +12,14 @@ enum cc_vendor {
+ };
+ 
+ extern enum cc_vendor cc_vendor;
++extern u64 cc_mask;
+ 
+ #ifdef CONFIG_ARCH_HAS_CC_PLATFORM
+-void cc_set_mask(u64 mask);
++static inline void cc_set_mask(u64 mask)
++{
++	RIP_REL_REF(cc_mask) = mask;
++}
++
+ u64 cc_mkenc(u64 val);
+ u64 cc_mkdec(u64 val);
+ #else
+diff --git a/arch/x86/include/asm/crash_core.h b/arch/x86/include/asm/crash_core.h
+index 76af98f4e8012..041020da8d561 100644
+--- a/arch/x86/include/asm/crash_core.h
++++ b/arch/x86/include/asm/crash_core.h
+@@ -39,4 +39,6 @@ static inline unsigned long crash_low_size_default(void)
+ #endif
+ }
+ 
++#define HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY
++
+ #endif /* _X86_CRASH_CORE_H */
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index 359ada486fa92..b31eb9fd59544 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -15,7 +15,8 @@
+ #include <linux/init.h>
+ #include <linux/cc_platform.h>
+ 
+-#include <asm/bootparam.h>
++#include <asm/asm.h>
++struct boot_params;
+ 
+ #ifdef CONFIG_X86_MEM_ENCRYPT
+ void __init mem_encrypt_init(void);
+@@ -58,6 +59,11 @@ void __init mem_encrypt_free_decrypted_mem(void);
+ 
+ void __init sev_es_init_vc_handling(void);
+ 
++static inline u64 sme_get_me_mask(void)
++{
++	return RIP_REL_REF(sme_me_mask);
++}
++
+ #define __bss_decrypted __section(".bss..decrypted")
+ 
+ #else	/* !CONFIG_AMD_MEM_ENCRYPT */
+@@ -89,6 +95,8 @@ early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool en
+ 
+ static inline void mem_encrypt_free_decrypted_mem(void) { }
+ 
++static inline u64 sme_get_me_mask(void) { return 0; }
++
+ #define __bss_decrypted
+ 
+ #endif	/* CONFIG_AMD_MEM_ENCRYPT */
+@@ -106,11 +114,6 @@ void add_encrypt_protection_map(void);
+ 
+ extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
+ 
+-static inline u64 sme_get_me_mask(void)
+-{
+-	return sme_me_mask;
+-}
+-
+ #endif	/* __ASSEMBLY__ */
+ 
+ #endif	/* __X86_MEM_ENCRYPT_H__ */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index d15b35815ebae..4e33cc834bf88 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -271,11 +271,20 @@
+ .Lskip_rsb_\@:
+ .endm
+ 
++/*
++ * The CALL to srso_alias_untrain_ret() must be patched in directly at
++ * the spot where untraining must be done, ie., srso_alias_untrain_ret()
++ * must be the target of a CALL instruction instead of indirectly
++ * jumping to a wrapper which then calls it. Therefore, this macro is
++ * called outside of __UNTRAIN_RET below, for the time being, before the
++ * kernel can support nested alternatives with arbitrary nesting.
++ */
++.macro CALL_UNTRAIN_RET
+ #if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+-#define CALL_UNTRAIN_RET	"call entry_untrain_ret"
+-#else
+-#define CALL_UNTRAIN_RET	""
++	ALTERNATIVE_2 "", "call entry_untrain_ret", X86_FEATURE_UNRET, \
++		          "call srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+ #endif
++.endm
+ 
+ /*
+  * Mitigate RETBleed for AMD/Hygon Zen uarch. Requires KERNEL CR3 because the
+@@ -291,8 +300,8 @@
+ .macro __UNTRAIN_RET ibpb_feature, call_depth_insns
+ #if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
+ 	VALIDATE_UNRET_END
+-	ALTERNATIVE_3 "",						\
+-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
++	CALL_UNTRAIN_RET
++	ALTERNATIVE_2 "",						\
+ 		      "call entry_ibpb", \ibpb_feature,			\
+ 		     __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH
+ #endif
+@@ -351,6 +360,8 @@ extern void retbleed_return_thunk(void);
+ static inline void retbleed_return_thunk(void) {}
+ #endif
+ 
++extern void srso_alias_untrain_ret(void);
++
+ #ifdef CONFIG_CPU_SRSO
+ extern void srso_return_thunk(void);
+ extern void srso_alias_return_thunk(void);
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index 5b4a1ce3d3680..36f905797075e 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -203,12 +203,12 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
+ 					 unsigned long npages);
+ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+ 					unsigned long npages);
+-void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
+ void snp_set_memory_shared(unsigned long vaddr, unsigned long npages);
+ void snp_set_memory_private(unsigned long vaddr, unsigned long npages);
+ void snp_set_wakeup_secondary_cpu(void);
+ bool snp_init(struct boot_params *bp);
+ void __init __noreturn snp_abort(void);
++void snp_dmi_setup(void);
+ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio);
+ void snp_accept_memory(phys_addr_t start, phys_addr_t end);
+ u64 snp_get_unsupported_features(u64 status);
+@@ -227,12 +227,12 @@ static inline void __init
+ early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
+ static inline void __init
+ early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
+-static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
+ static inline void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) { }
+ static inline void snp_set_memory_private(unsigned long vaddr, unsigned long npages) { }
+ static inline void snp_set_wakeup_secondary_cpu(void) { }
+ static inline bool snp_init(struct boot_params *bp) { return false; }
+ static inline void snp_abort(void) { }
++static inline void snp_dmi_setup(void) { }
+ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio)
+ {
+ 	return -ENOTTY;
+diff --git a/arch/x86/include/asm/suspend_32.h b/arch/x86/include/asm/suspend_32.h
+index a800abb1a9925..d8416b3bf832e 100644
+--- a/arch/x86/include/asm/suspend_32.h
++++ b/arch/x86/include/asm/suspend_32.h
+@@ -12,11 +12,6 @@
+ 
+ /* image of the saved processor state */
+ struct saved_context {
+-	/*
+-	 * On x86_32, all segment registers except gs are saved at kernel
+-	 * entry in pt_regs.
+-	 */
+-	u16 gs;
+ 	unsigned long cr0, cr2, cr3, cr4;
+ 	u64 misc_enable;
+ 	struct saved_msrs saved_msrs;
+@@ -27,6 +22,11 @@ struct saved_context {
+ 	unsigned long tr;
+ 	unsigned long safety;
+ 	unsigned long return_address;
++	/*
++	 * On x86_32, all segment registers except gs are saved at kernel
++	 * entry in pt_regs.
++	 */
++	u16 gs;
+ 	bool misc_enable_saved;
+ } __attribute__((packed));
+ 
+diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
+index c878616a18b85..550dcbbbb1756 100644
+--- a/arch/x86/include/asm/x86_init.h
++++ b/arch/x86/include/asm/x86_init.h
+@@ -30,12 +30,13 @@ struct x86_init_mpparse {
+  * @reserve_resources:		reserve the standard resources for the
+  *				platform
+  * @memory_setup:		platform specific memory setup
+- *
++ * @dmi_setup:			platform specific DMI setup
+  */
+ struct x86_init_resources {
+ 	void (*probe_roms)(void);
+ 	void (*reserve_resources)(void);
+ 	char *(*memory_setup)(void);
++	void (*dmi_setup)(void);
+ };
+ 
+ /**
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 2055fb308f5b9..77a1ceb717d06 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1002,11 +1002,11 @@ static bool cpu_has_zenbleed_microcode(void)
+ 	u32 good_rev = 0;
+ 
+ 	switch (boot_cpu_data.x86_model) {
+-	case 0x30 ... 0x3f: good_rev = 0x0830107a; break;
+-	case 0x60 ... 0x67: good_rev = 0x0860010b; break;
+-	case 0x68 ... 0x6f: good_rev = 0x08608105; break;
+-	case 0x70 ... 0x7f: good_rev = 0x08701032; break;
+-	case 0xa0 ... 0xaf: good_rev = 0x08a00008; break;
++	case 0x30 ... 0x3f: good_rev = 0x0830107b; break;
++	case 0x60 ... 0x67: good_rev = 0x0860010c; break;
++	case 0x68 ... 0x6f: good_rev = 0x08608107; break;
++	case 0x70 ... 0x7f: good_rev = 0x08701033; break;
++	case 0xa0 ... 0xaf: good_rev = 0x08a00009; break;
+ 
+ 	default:
+ 		return false;
+diff --git a/arch/x86/kernel/eisa.c b/arch/x86/kernel/eisa.c
+index e963344b04490..53935b4d62e30 100644
+--- a/arch/x86/kernel/eisa.c
++++ b/arch/x86/kernel/eisa.c
+@@ -2,6 +2,7 @@
+ /*
+  * EISA specific code
+  */
++#include <linux/cc_platform.h>
+ #include <linux/ioport.h>
+ #include <linux/eisa.h>
+ #include <linux/io.h>
+@@ -12,7 +13,7 @@ static __init int eisa_bus_probe(void)
+ {
+ 	void __iomem *p;
+ 
+-	if (xen_pv_domain() && !xen_initial_domain())
++	if ((xen_pv_domain() && !xen_initial_domain()) || cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ 		return 0;
+ 
+ 	p = ioremap(0x0FFFD9, 4);
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 117e74c44e756..33a214b1a4cec 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -178,10 +178,11 @@ void fpu__init_cpu_xstate(void)
+ 	 * Must happen after CR4 setup and before xsetbv() to allow KVM
+ 	 * lazy passthrough.  Write independent of the dynamic state static
+ 	 * key as that does not work on the boot CPU. This also ensures
+-	 * that any stale state is wiped out from XFD.
++	 * that any stale state is wiped out from XFD. Reset the per CPU
++	 * xfd cache too.
+ 	 */
+ 	if (cpu_feature_enabled(X86_FEATURE_XFD))
+-		wrmsrl(MSR_IA32_XFD, init_fpstate.xfd);
++		xfd_set_state(init_fpstate.xfd);
+ 
+ 	/*
+ 	 * XCR_XFEATURE_ENABLED_MASK (aka. XCR0) sets user features
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 3518fb26d06b0..19ca623ffa2ac 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -148,20 +148,26 @@ static inline void xfd_validate_state(struct fpstate *fpstate, u64 mask, bool rs
+ #endif
+ 
+ #ifdef CONFIG_X86_64
++static inline void xfd_set_state(u64 xfd)
++{
++	wrmsrl(MSR_IA32_XFD, xfd);
++	__this_cpu_write(xfd_state, xfd);
++}
++
+ static inline void xfd_update_state(struct fpstate *fpstate)
+ {
+ 	if (fpu_state_size_dynamic()) {
+ 		u64 xfd = fpstate->xfd;
+ 
+-		if (__this_cpu_read(xfd_state) != xfd) {
+-			wrmsrl(MSR_IA32_XFD, xfd);
+-			__this_cpu_write(xfd_state, xfd);
+-		}
++		if (__this_cpu_read(xfd_state) != xfd)
++			xfd_set_state(xfd);
+ 	}
+ }
+ 
+ extern int __xfd_enable_feature(u64 which, struct fpu_guest *guest_fpu);
+ #else
++static inline void xfd_set_state(u64 xfd) { }
++
+ static inline void xfd_update_state(struct fpstate *fpstate) { }
+ 
+ static inline int __xfd_enable_feature(u64 which, struct fpu_guest *guest_fpu) {
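
[Editorial note: the refactor above makes the MSR write and the per-CPU cache update inseparable; previously fpu__init_cpu_xstate() wrote the MSR directly and left the cache stale. A standalone C model of the write-through-cache pattern, with stand-in names:]

	#include <stdint.h>

	static uint64_t xfd_cache;			/* stand-in for the per-CPU xfd_state */

	static void hw_write(uint64_t v) { (void)v; }	/* stand-in for wrmsrl() */

	/* Single helper: the hardware write and the cache update travel
	 * together, so the cache can never diverge from the register. */
	static void xfd_set_state_model(uint64_t xfd)
	{
		hw_write(xfd);
		xfd_cache = xfd;
	}

	/* Fast path: skip the expensive write when the cache says the
	 * hardware already holds the value. */
	static void xfd_update_state_model(uint64_t xfd)
	{
		if (xfd_cache != xfd)
			xfd_set_state_model(xfd);
	}
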
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index a0ce46c0a2d88..a6a3475e1d609 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -335,7 +335,16 @@ static int can_probe(unsigned long paddr)
+ kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr, unsigned long offset,
+ 					 bool *on_func_entry)
+ {
+-	if (is_endbr(*(u32 *)addr)) {
++	u32 insn;
++
++	/*
++	 * Since 'addr' is not guaranteed to be safe to access, use
++	 * copy_from_kernel_nofault() to read the instruction:
++	 */
++	if (copy_from_kernel_nofault(&insn, (void *)addr, sizeof(u32)))
++		return NULL;
++
++	if (is_endbr(insn)) {
+ 		*on_func_entry = !offset || offset == 4;
+ 		if (*on_func_entry)
+ 			offset = 4;
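
[Editorial note: the point of the kprobes hunk above is that a probe address supplied from outside is not guaranteed to be mapped, so the instruction bytes must be fetched with a fault-tolerant copy rather than a plain dereference. A kernel-context sketch of that pattern:]

	/* Never '*(u32 *)addr' an unverified kernel address:
	 * copy_from_kernel_nofault() returns non-zero instead of
	 * faulting when the address is unmapped. */
	u32 insn;

	if (copy_from_kernel_nofault(&insn, (void *)addr, sizeof(insn)))
		return NULL;		/* unreadable, refuse the probe */
	/* only now is 'insn' safe to decode, e.g. is_endbr(insn) */
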
+diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
+index b223922248e9f..15c700d358700 100644
+--- a/arch/x86/kernel/mpparse.c
++++ b/arch/x86/kernel/mpparse.c
+@@ -196,12 +196,12 @@ static int __init smp_read_mpc(struct mpc_table *mpc, unsigned early)
+ 	if (!smp_check_mpc(mpc, oem, str))
+ 		return 0;
+ 
+-	/* Initialize the lapic mapping */
+-	if (!acpi_lapic)
+-		register_lapic_address(mpc->lapic);
+-
+-	if (early)
++	if (early) {
++		/* Initialize the lapic mapping */
++		if (!acpi_lapic)
++			register_lapic_address(mpc->lapic);
+ 		return 1;
++	}
+ 
+ 	/* Now process the configuration blocks. */
+ 	while (count < mpc->length) {
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index 3082cf24b69e3..6da2cfa23c293 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -636,7 +636,7 @@ void nmi_backtrace_stall_check(const struct cpumask *btp)
+ 			msgp = nmi_check_stall_msg[idx];
+ 			if (nsp->idt_ignored_snap != READ_ONCE(nsp->idt_ignored) && (idx & 0x1))
+ 				modp = ", but OK because ignore_nmis was set";
+-			if (nmi_seq & ~0x1)
++			if (nmi_seq & 0x1)
+ 				msghp = " (CPU currently in NMI handler function)";
+ 			else if (nsp->idt_nmi_seq_snap + 1 == nmi_seq)
+ 				msghp = " (CPU exited one NMI handler function)";
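
[Editorial note: the one-character fix above appears to restore the sequence-counter convention, under which the count is bumped on NMI entry and again on exit, so an odd value means the CPU is inside the handler right now; '& ~0x1' was instead true for any count above one. A tiny standalone model of that parity check:]

	#include <stdbool.h>

	/* Entry and exit each increment the counter, so parity encodes
	 * whether the CPU is currently between entry and exit. */
	static inline bool in_nmi_handler(unsigned long nmi_seq)
	{
		return nmi_seq & 0x1;	/* odd: entered but not yet exited */
	}
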
+diff --git a/arch/x86/kernel/probe_roms.c b/arch/x86/kernel/probe_roms.c
+index 319fef37d9dce..cc2c34ba7228a 100644
+--- a/arch/x86/kernel/probe_roms.c
++++ b/arch/x86/kernel/probe_roms.c
+@@ -203,16 +203,6 @@ void __init probe_roms(void)
+ 	unsigned char c;
+ 	int i;
+ 
+-	/*
+-	 * The ROM memory range is not part of the e820 table and is therefore not
+-	 * pre-validated by BIOS. The kernel page table maps the ROM region as encrypted
+-	 * memory, and SNP requires encrypted memory to be validated before access.
+-	 * Do that here.
+-	 */
+-	snp_prep_memory(video_rom_resource.start,
+-			((system_rom_resource.end + 1) - video_rom_resource.start),
+-			SNP_PAGE_STATE_PRIVATE);
+-
+ 	/* video rom */
+ 	upper = adapter_rom_resources[0].start;
+ 	for (start = video_rom_resource.start; start < upper; start += 2048) {
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 1526747bedf2f..b002ebf024d3d 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -9,7 +9,6 @@
+ #include <linux/console.h>
+ #include <linux/crash_dump.h>
+ #include <linux/dma-map-ops.h>
+-#include <linux/dmi.h>
+ #include <linux/efi.h>
+ #include <linux/ima.h>
+ #include <linux/init_ohci1394_dma.h>
+@@ -904,7 +903,7 @@ void __init setup_arch(char **cmdline_p)
+ 		efi_init();
+ 
+ 	reserve_ibft_region();
+-	dmi_setup();
++	x86_init.resources.dmi_setup();
+ 
+ 	/*
+ 	 * VMware detection requires dmi to be available, so this
+diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
+index ccb0915e84e10..466fe09898ccd 100644
+--- a/arch/x86/kernel/sev-shared.c
++++ b/arch/x86/kernel/sev-shared.c
+@@ -556,9 +556,9 @@ static int snp_cpuid(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpuid_le
+ 		leaf->eax = leaf->ebx = leaf->ecx = leaf->edx = 0;
+ 
+ 		/* Skip post-processing for out-of-range zero leafs. */
+-		if (!(leaf->fn <= cpuid_std_range_max ||
+-		      (leaf->fn >= 0x40000000 && leaf->fn <= cpuid_hyp_range_max) ||
+-		      (leaf->fn >= 0x80000000 && leaf->fn <= cpuid_ext_range_max)))
++		if (!(leaf->fn <= RIP_REL_REF(cpuid_std_range_max) ||
++		      (leaf->fn >= 0x40000000 && leaf->fn <= RIP_REL_REF(cpuid_hyp_range_max)) ||
++		      (leaf->fn >= 0x80000000 && leaf->fn <= RIP_REL_REF(cpuid_ext_range_max))))
+ 			return 0;
+ 	}
+ 
+@@ -1063,11 +1063,11 @@ static void __init setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
+ 		const struct snp_cpuid_fn *fn = &cpuid_table->fn[i];
+ 
+ 		if (fn->eax_in == 0x0)
+-			cpuid_std_range_max = fn->eax;
++			RIP_REL_REF(cpuid_std_range_max) = fn->eax;
+ 		else if (fn->eax_in == 0x40000000)
+-			cpuid_hyp_range_max = fn->eax;
++			RIP_REL_REF(cpuid_hyp_range_max) = fn->eax;
+ 		else if (fn->eax_in == 0x80000000)
+-			cpuid_ext_range_max = fn->eax;
++			RIP_REL_REF(cpuid_ext_range_max) = fn->eax;
+ 	}
+ }
+ 
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index c67285824e826..0f58242b54b86 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -23,6 +23,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/io.h>
+ #include <linux/psp-sev.h>
++#include <linux/dmi.h>
+ #include <uapi/linux/sev-guest.h>
+ 
+ #include <asm/cpu_entry_area.h>
+@@ -748,7 +749,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
+ 	 * This eliminates worries about jump tables or checking boot_cpu_data
+ 	 * in the cc_platform_has() function.
+ 	 */
+-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
++	if (!(RIP_REL_REF(sev_status) & MSR_AMD64_SEV_SNP_ENABLED))
+ 		return;
+ 
+ 	 /*
+@@ -767,28 +768,13 @@ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
+ 	 * This eliminates worries about jump tables or checking boot_cpu_data
+ 	 * in the cc_platform_has() function.
+ 	 */
+-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
++	if (!(RIP_REL_REF(sev_status) & MSR_AMD64_SEV_SNP_ENABLED))
+ 		return;
+ 
+ 	 /* Ask hypervisor to mark the memory pages shared in the RMP table. */
+ 	early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_SHARED);
+ }
+ 
+-void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op)
+-{
+-	unsigned long vaddr, npages;
+-
+-	vaddr = (unsigned long)__va(paddr);
+-	npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+-
+-	if (op == SNP_PAGE_STATE_PRIVATE)
+-		early_snp_set_memory_private(vaddr, paddr, npages);
+-	else if (op == SNP_PAGE_STATE_SHARED)
+-		early_snp_set_memory_shared(vaddr, paddr, npages);
+-	else
+-		WARN(1, "invalid memory op %d\n", op);
+-}
+-
+ static unsigned long __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
+ 				       unsigned long vaddr_end, int op)
+ {
+@@ -2112,6 +2098,17 @@ void __init __noreturn snp_abort(void)
+ 	sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED);
+ }
+ 
++/*
++ * SEV-SNP guests should only execute dmi_setup() if EFI_CONFIG_TABLES is
++ * enabled, as the alternative (fallback) logic for DMI probing in the legacy
++ * ROM region can cause a crash since this region is not pre-validated.
++ */
++void __init snp_dmi_setup(void)
++{
++	if (efi_enabled(EFI_CONFIG_TABLES))
++		dmi_setup();
++}
++
+ static void dump_cpuid_table(void)
+ {
+ 	const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
+diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
+index a37ebd3b47736..3f0718b4a7d28 100644
+--- a/arch/x86/kernel/x86_init.c
++++ b/arch/x86/kernel/x86_init.c
+@@ -3,6 +3,7 @@
+  *
+  *  For licencing details see kernel-base/COPYING
+  */
++#include <linux/dmi.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
+ #include <linux/export.h>
+@@ -66,6 +67,7 @@ struct x86_init_ops x86_init __initdata = {
+ 		.probe_roms		= probe_roms,
+ 		.reserve_resources	= reserve_standard_io_resources,
+ 		.memory_setup		= e820__memory_setup_default,
++		.dmi_setup		= dmi_setup,
+ 	},
+ 
+ 	.mpparse = {
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index dda6fc4cfae88..1811a9ddfe1d4 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -679,6 +679,11 @@ void kvm_set_cpu_caps(void)
+ 		F(AMX_COMPLEX)
+ 	);
+ 
++	kvm_cpu_cap_init_kvm_defined(CPUID_7_2_EDX,
++		F(INTEL_PSFD) | F(IPRED_CTRL) | F(RRSBA_CTRL) | F(DDPD_U) |
++		F(BHI_CTRL) | F(MCDT_NO)
++	);
++
+ 	kvm_cpu_cap_mask(CPUID_D_1_EAX,
+ 		F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1) | F(XSAVES) | f_xfd
+ 	);
+@@ -960,13 +965,13 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		break;
+ 	/* function 7 has additional index. */
+ 	case 7:
+-		entry->eax = min(entry->eax, 1u);
++		max_idx = entry->eax = min(entry->eax, 2u);
+ 		cpuid_entry_override(entry, CPUID_7_0_EBX);
+ 		cpuid_entry_override(entry, CPUID_7_ECX);
+ 		cpuid_entry_override(entry, CPUID_7_EDX);
+ 
+-		/* KVM only supports 0x7.0 and 0x7.1, capped above via min(). */
+-		if (entry->eax == 1) {
++		/* KVM only supports up to 0x7.2, capped above via min(). */
++		if (max_idx >= 1) {
+ 			entry = do_host_cpuid(array, function, 1);
+ 			if (!entry)
+ 				goto out;
+@@ -976,6 +981,16 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 			entry->ebx = 0;
+ 			entry->ecx = 0;
+ 		}
++		if (max_idx >= 2) {
++			entry = do_host_cpuid(array, function, 2);
++			if (!entry)
++				goto out;
++
++			cpuid_entry_override(entry, CPUID_7_2_EDX);
++			entry->ecx = 0;
++			entry->ebx = 0;
++			entry->eax = 0;
++		}
+ 		break;
+ 	case 0xa: { /* Architectural Performance Monitoring */
+ 		union cpuid10_eax eax;
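
For context on the max_idx handling above: CPUID leaf 7 reports its
highest supported subleaf in EAX of subleaf 0, so capping EAX at 2 also
caps which subleaves KVM will enumerate for the guest. A quick userspace
check of that convention (illustrative only; x86 with GCC or Clang
assumed):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Subleaf 0 of leaf 7 returns the max supported subleaf in EAX */
	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return 1;
	printf("CPUID.7 max subleaf: %u\n", eax);
	return 0;
}
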
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 238afd7335e46..4943f6b2bbee4 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -2388,7 +2388,7 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *h
+ 	if (!eventfd)
+ 		return HV_STATUS_INVALID_PORT_ID;
+ 
+-	eventfd_signal(eventfd, 1);
++	eventfd_signal(eventfd);
+ 	return HV_STATUS_SUCCESS;
+ }
+ 
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 245b20973caee..23fab75993a51 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -41,6 +41,7 @@
+ #include "ioapic.h"
+ #include "trace.h"
+ #include "x86.h"
++#include "xen.h"
+ #include "cpuid.h"
+ #include "hyperv.h"
+ #include "smm.h"
+@@ -499,8 +500,10 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
+ 	}
+ 
+ 	/* Check if there are APF page ready requests pending */
+-	if (enabled)
++	if (enabled) {
+ 		kvm_make_request(KVM_REQ_APF_READY, apic->vcpu);
++		kvm_xen_sw_enable_lapic(apic->vcpu);
++	}
+ }
+ 
+ static inline void kvm_apic_set_xapic_id(struct kvm_lapic *apic, u8 id)
+diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
+index b816506783755..aadefcaa9561d 100644
+--- a/arch/x86/kvm/reverse_cpuid.h
++++ b/arch/x86/kvm/reverse_cpuid.h
+@@ -16,6 +16,7 @@ enum kvm_only_cpuid_leafs {
+ 	CPUID_7_1_EDX,
+ 	CPUID_8000_0007_EDX,
+ 	CPUID_8000_0022_EAX,
++	CPUID_7_2_EDX,
+ 	NR_KVM_CPU_CAPS,
+ 
+ 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
+@@ -46,6 +47,14 @@ enum kvm_only_cpuid_leafs {
+ #define X86_FEATURE_AMX_COMPLEX         KVM_X86_FEATURE(CPUID_7_1_EDX, 8)
+ #define X86_FEATURE_PREFETCHITI         KVM_X86_FEATURE(CPUID_7_1_EDX, 14)
+ 
++/* Intel-defined sub-features, CPUID level 0x00000007:2 (EDX) */
++#define X86_FEATURE_INTEL_PSFD		KVM_X86_FEATURE(CPUID_7_2_EDX, 0)
++#define X86_FEATURE_IPRED_CTRL		KVM_X86_FEATURE(CPUID_7_2_EDX, 1)
++#define KVM_X86_FEATURE_RRSBA_CTRL	KVM_X86_FEATURE(CPUID_7_2_EDX, 2)
++#define X86_FEATURE_DDPD_U		KVM_X86_FEATURE(CPUID_7_2_EDX, 3)
++#define X86_FEATURE_BHI_CTRL		KVM_X86_FEATURE(CPUID_7_2_EDX, 4)
++#define X86_FEATURE_MCDT_NO		KVM_X86_FEATURE(CPUID_7_2_EDX, 5)
++
+ /* CPUID level 0x80000007 (EDX). */
+ #define KVM_X86_FEATURE_CONSTANT_TSC	KVM_X86_FEATURE(CPUID_8000_0007_EDX, 8)
+ 
+@@ -80,6 +89,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
+ 	[CPUID_8000_0007_EDX] = {0x80000007, 0, CPUID_EDX},
+ 	[CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
+ 	[CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX},
++	[CPUID_7_2_EDX]       = {         7, 2, CPUID_EDX},
+ };
+ 
+ /*
+@@ -106,18 +116,19 @@ static __always_inline void reverse_cpuid_check(unsigned int x86_leaf)
+  */
+ static __always_inline u32 __feature_translate(int x86_feature)
+ {
+-	if (x86_feature == X86_FEATURE_SGX1)
+-		return KVM_X86_FEATURE_SGX1;
+-	else if (x86_feature == X86_FEATURE_SGX2)
+-		return KVM_X86_FEATURE_SGX2;
+-	else if (x86_feature == X86_FEATURE_SGX_EDECCSSA)
+-		return KVM_X86_FEATURE_SGX_EDECCSSA;
+-	else if (x86_feature == X86_FEATURE_CONSTANT_TSC)
+-		return KVM_X86_FEATURE_CONSTANT_TSC;
+-	else if (x86_feature == X86_FEATURE_PERFMON_V2)
+-		return KVM_X86_FEATURE_PERFMON_V2;
+-
+-	return x86_feature;
++#define KVM_X86_TRANSLATE_FEATURE(f)	\
++	case X86_FEATURE_##f: return KVM_X86_FEATURE_##f
++
++	switch (x86_feature) {
++	KVM_X86_TRANSLATE_FEATURE(SGX1);
++	KVM_X86_TRANSLATE_FEATURE(SGX2);
++	KVM_X86_TRANSLATE_FEATURE(SGX_EDECCSSA);
++	KVM_X86_TRANSLATE_FEATURE(CONSTANT_TSC);
++	KVM_X86_TRANSLATE_FEATURE(PERFMON_V2);
++	KVM_X86_TRANSLATE_FEATURE(RRSBA_CTRL);
++	default:
++		return x86_feature;
++	}
+ }
+ 
+ static __always_inline u32 __feature_leaf(int x86_feature)
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 6ee925d666484..1226bb2151936 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -57,7 +57,7 @@ static bool sev_es_enabled = true;
+ module_param_named(sev_es, sev_es_enabled, bool, 0444);
+ 
+ /* enable/disable SEV-ES DebugSwap support */
+-static bool sev_es_debug_swap_enabled = true;
++static bool sev_es_debug_swap_enabled = false;
+ module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
+ #else
+ #define sev_enabled false
+@@ -612,8 +612,11 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
+ 	save->xss  = svm->vcpu.arch.ia32_xss;
+ 	save->dr6  = svm->vcpu.arch.dr6;
+ 
+-	if (sev_es_debug_swap_enabled)
++	if (sev_es_debug_swap_enabled) {
+ 		save->sev_features |= SVM_SEV_FEAT_DEBUG_SWAP;
++		pr_warn_once("Enabling DebugSwap with KVM_SEV_ES_INIT. "
++			     "This will not work starting with Linux 6.10\n");
++	}
+ 
+ 	pr_debug("Virtual Machine Save Area (VMSA):\n");
+ 	print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
+@@ -1975,20 +1978,22 @@ int sev_mem_enc_register_region(struct kvm *kvm,
+ 		goto e_free;
+ 	}
+ 
+-	region->uaddr = range->addr;
+-	region->size = range->size;
+-
+-	list_add_tail(&region->list, &sev->regions_list);
+-	mutex_unlock(&kvm->lock);
+-
+ 	/*
+ 	 * The guest may change the memory encryption attribute from C=0 -> C=1
+ 	 * or vice versa for this memory range. Lets make sure caches are
+ 	 * flushed to ensure that guest data gets written into memory with
+-	 * correct C-bit.
++	 * correct C-bit.  Note, this must be done before dropping kvm->lock,
++	 * as the region and its array of pages can be freed by a different
++	 * task once kvm->lock is released.
+ 	 */
+ 	sev_clflush_pages(region->pages, region->npages);
+ 
++	region->uaddr = range->addr;
++	region->size = range->size;
++
++	list_add_tail(&region->list, &sev->regions_list);
++	mutex_unlock(&kvm->lock);
++
+ 	return ret;
+ 
+ e_free:
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 8021c62b0e7b0..365caf7328059 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7951,6 +7951,16 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
+ 
+ 	if (r < 0)
+ 		return X86EMUL_UNHANDLEABLE;
++
++	/*
++	 * Mark the page dirty _before_ checking whether or not the CMPXCHG was
++	 * successful, as the old value is written back on failure.  Note, for
++	 * live migration, this is unnecessarily conservative as CMPXCHG writes
++	 * back the original value and the access is atomic, but KVM's ABI is
++	 * that all writes are dirty logged, regardless of the value written.
++	 */
++	kvm_vcpu_mark_page_dirty(vcpu, gpa_to_gfn(gpa));
++
+ 	if (r)
+ 		return X86EMUL_CMPXCHG_FAILED;
+ 
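
The comment above leans on an architectural detail: a CMPXCHG whose
comparison fails still performs a store, rewriting the destination with
its old value, so the page sees a write (and must be dirty logged) even
though its contents are unchanged. A userspace illustration (x86-64 with
GCC/Clang flag-output asm assumed; not KVM code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t mem = 5;
	uint64_t rax = 7;	/* comparand: mismatch, so CMPXCHG "fails" */
	uint8_t zf;

	/* ZF reports success; on failure RAX receives the old value and
	 * the destination is stored to with that same old value. */
	asm volatile("lock cmpxchgq %3, %1"
		     : "=@ccz"(zf), "+m"(mem), "+a"(rax)
		     : "r"((uint64_t)9)
		     : "memory");

	/* prints: success=0 mem=5 rax=5 */
	printf("success=%u mem=%llu rax=%llu\n", (unsigned)zf,
	       (unsigned long long)mem, (unsigned long long)rax);
	return 0;
}
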
+diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
+index e53fad915a626..c069521f249ec 100644
+--- a/arch/x86/kvm/xen.c
++++ b/arch/x86/kvm/xen.c
+@@ -493,7 +493,7 @@ void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)
+ 		kvm_xen_update_runstate_guest(v, state == RUNSTATE_runnable);
+ }
+ 
+-static void kvm_xen_inject_vcpu_vector(struct kvm_vcpu *v)
++void kvm_xen_inject_vcpu_vector(struct kvm_vcpu *v)
+ {
+ 	struct kvm_lapic_irq irq = { };
+ 	int r;
+@@ -2088,7 +2088,7 @@ static bool kvm_xen_hcall_evtchn_send(struct kvm_vcpu *vcpu, u64 param, u64 *r)
+ 		if (ret < 0 && ret != -ENOTCONN)
+ 			return false;
+ 	} else {
+-		eventfd_signal(evtchnfd->deliver.eventfd.ctx, 1);
++		eventfd_signal(evtchnfd->deliver.eventfd.ctx);
+ 	}
+ 
+ 	*r = 0;
+diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h
+index f8f1fe22d0906..f5841d9000aeb 100644
+--- a/arch/x86/kvm/xen.h
++++ b/arch/x86/kvm/xen.h
+@@ -18,6 +18,7 @@ extern struct static_key_false_deferred kvm_xen_enabled;
+ 
+ int __kvm_xen_has_interrupt(struct kvm_vcpu *vcpu);
+ void kvm_xen_inject_pending_events(struct kvm_vcpu *vcpu);
++void kvm_xen_inject_vcpu_vector(struct kvm_vcpu *vcpu);
+ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data);
+ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data);
+ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data);
+@@ -36,6 +37,19 @@ int kvm_xen_setup_evtchn(struct kvm *kvm,
+ 			 const struct kvm_irq_routing_entry *ue);
+ void kvm_xen_update_tsc_info(struct kvm_vcpu *vcpu);
+ 
++static inline void kvm_xen_sw_enable_lapic(struct kvm_vcpu *vcpu)
++{
++	/*
++	 * The local APIC is being enabled. If the per-vCPU upcall vector is
++	 * set and the vCPU's evtchn_upcall_pending flag is set, inject the
++	 * interrupt.
++	 */
++	if (static_branch_unlikely(&kvm_xen_enabled.key) &&
++	    vcpu->arch.xen.vcpu_info_cache.active &&
++	    vcpu->arch.xen.upcall_vector && __kvm_xen_has_interrupt(vcpu))
++		kvm_xen_inject_vcpu_vector(vcpu);
++}
++
+ static inline bool kvm_xen_msr_enabled(struct kvm *kvm)
+ {
+ 	return static_branch_unlikely(&kvm_xen_enabled.key) &&
+@@ -101,6 +115,10 @@ static inline void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
+ {
+ }
+ 
++static inline void kvm_xen_sw_enable_lapic(struct kvm_vcpu *vcpu)
++{
++}
++
+ static inline bool kvm_xen_msr_enabled(struct kvm *kvm)
+ {
+ 	return false;
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 7b2589877d065..1e59367b46813 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -163,6 +163,7 @@ SYM_CODE_START_NOALIGN(srso_alias_untrain_ret)
+ 	lfence
+ 	jmp srso_alias_return_thunk
+ SYM_FUNC_END(srso_alias_untrain_ret)
++__EXPORT_THUNK(srso_alias_untrain_ret)
+ 	.popsection
+ 
+ 	.pushsection .text..__x86.rethunk_safe
+@@ -224,10 +225,12 @@ SYM_CODE_START(srso_return_thunk)
+ SYM_CODE_END(srso_return_thunk)
+ 
+ #define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret"
+-#define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"
+ #else /* !CONFIG_CPU_SRSO */
+ #define JMP_SRSO_UNTRAIN_RET "ud2"
+-#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
++/* Dummy for the alternative in CALL_UNTRAIN_RET. */
++SYM_CODE_START(srso_alias_untrain_ret)
++	RET
++SYM_FUNC_END(srso_alias_untrain_ret)
+ #endif /* CONFIG_CPU_SRSO */
+ 
+ #ifdef CONFIG_CPU_UNRET_ENTRY
+@@ -319,9 +322,7 @@ SYM_FUNC_END(retbleed_untrain_ret)
+ #if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
+ 
+ SYM_FUNC_START(entry_untrain_ret)
+-	ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET,				\
+-		      JMP_SRSO_UNTRAIN_RET, X86_FEATURE_SRSO,		\
+-		      JMP_SRSO_ALIAS_UNTRAIN_RET, X86_FEATURE_SRSO_ALIAS
++	ALTERNATIVE JMP_RETBLEED_UNTRAIN_RET, JMP_SRSO_UNTRAIN_RET, X86_FEATURE_SRSO
+ SYM_FUNC_END(entry_untrain_ret)
+ __EXPORT_THUNK(entry_untrain_ret)
+ 
+diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
+index 70b91de2e053a..94cd06d4b0af5 100644
+--- a/arch/x86/mm/mem_encrypt_amd.c
++++ b/arch/x86/mm/mem_encrypt_amd.c
+@@ -492,6 +492,24 @@ void __init sme_early_init(void)
+ 	 */
+ 	if (sev_status & MSR_AMD64_SEV_ENABLED)
+ 		ia32_disable();
++
++	/*
++	 * Override init functions that scan the ROM region in SEV-SNP guests,
++	 * as this memory is not pre-validated and would thus cause a crash.
++	 */
++	if (sev_status & MSR_AMD64_SEV_SNP_ENABLED) {
++		x86_init.mpparse.find_smp_config = x86_init_noop;
++		x86_init.pci.init_irq = x86_init_noop;
++		x86_init.resources.probe_roms = x86_init_noop;
++
++		/*
++		 * DMI setup behavior for SEV-SNP guests depends on
++		 * efi_enabled(EFI_CONFIG_TABLES), but the EFI config
++		 * tables have not been parsed yet. snp_dmi_setup()
++		 * will run after that parsing has happened.
++		 */
++		x86_init.resources.dmi_setup = snp_dmi_setup;
++	}
+ }
+ 
+ void __init mem_encrypt_free_decrypted_mem(void)
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index 7f72472a34d6d..0166ab1780ccb 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -97,7 +97,6 @@ static char sme_workarea[2 * PMD_SIZE] __section(".init.scratch");
+ 
+ static char sme_cmdline_arg[] __initdata = "mem_encrypt";
+ static char sme_cmdline_on[]  __initdata = "on";
+-static char sme_cmdline_off[] __initdata = "off";
+ 
+ static void __init sme_clear_pgd(struct sme_populate_pgd_data *ppd)
+ {
+@@ -305,7 +304,8 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
+ 	 * instrumentation or checking boot_cpu_data in the cc_platform_has()
+ 	 * function.
+ 	 */
+-	if (!sme_get_me_mask() || sev_status & MSR_AMD64_SEV_ENABLED)
++	if (!sme_get_me_mask() ||
++	    RIP_REL_REF(sev_status) & MSR_AMD64_SEV_ENABLED)
+ 		return;
+ 
+ 	/*
+@@ -504,7 +504,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
+ 
+ void __init sme_enable(struct boot_params *bp)
+ {
+-	const char *cmdline_ptr, *cmdline_arg, *cmdline_on, *cmdline_off;
++	const char *cmdline_ptr, *cmdline_arg, *cmdline_on;
+ 	unsigned int eax, ebx, ecx, edx;
+ 	unsigned long feature_mask;
+ 	unsigned long me_mask;
+@@ -542,11 +542,11 @@ void __init sme_enable(struct boot_params *bp)
+ 	me_mask = 1UL << (ebx & 0x3f);
+ 
+ 	/* Check the SEV MSR whether SEV or SME is enabled */
+-	sev_status   = __rdmsr(MSR_AMD64_SEV);
+-	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
++	RIP_REL_REF(sev_status) = msr = __rdmsr(MSR_AMD64_SEV);
++	feature_mask = (msr & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
+ 
+ 	/* The SEV-SNP CC blob should never be present unless SEV-SNP is enabled. */
+-	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
++	if (snp && !(msr & MSR_AMD64_SEV_SNP_ENABLED))
+ 		snp_abort();
+ 
+ 	/* Check if memory encryption is enabled */
+@@ -572,7 +572,6 @@ void __init sme_enable(struct boot_params *bp)
+ 			return;
+ 	} else {
+ 		/* SEV state cannot be controlled by a command line option */
+-		sme_me_mask = me_mask;
+ 		goto out;
+ 	}
+ 
+@@ -587,28 +586,17 @@ void __init sme_enable(struct boot_params *bp)
+ 	asm ("lea sme_cmdline_on(%%rip), %0"
+ 	     : "=r" (cmdline_on)
+ 	     : "p" (sme_cmdline_on));
+-	asm ("lea sme_cmdline_off(%%rip), %0"
+-	     : "=r" (cmdline_off)
+-	     : "p" (sme_cmdline_off));
+-
+-	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT))
+-		sme_me_mask = me_mask;
+ 
+ 	cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
+ 				     ((u64)bp->ext_cmd_line_ptr << 32));
+ 
+-	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
+-		goto out;
+-
+-	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
+-		sme_me_mask = me_mask;
+-	else if (!strncmp(buffer, cmdline_off, sizeof(buffer)))
+-		sme_me_mask = 0;
++	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0 ||
++	    strncmp(buffer, cmdline_on, sizeof(buffer)))
++		return;
+ 
+ out:
+-	if (sme_me_mask) {
+-		physical_mask &= ~sme_me_mask;
+-		cc_vendor = CC_VENDOR_AMD;
+-		cc_set_mask(sme_me_mask);
+-	}
++	RIP_REL_REF(sme_me_mask) = me_mask;
++	physical_mask &= ~me_mask;
++	cc_vendor = CC_VENDOR_AMD;
++	cc_set_mask(me_mask);
+ }
+diff --git a/block/bio.c b/block/bio.c
+index 270f6b99926ea..62419aa09d731 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1149,7 +1149,7 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
+ 
+ 	bio_for_each_folio_all(fi, bio) {
+ 		struct page *page;
+-		size_t done = 0;
++		size_t nr_pages;
+ 
+ 		if (mark_dirty) {
+ 			folio_lock(fi.folio);
+@@ -1157,10 +1157,11 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
+ 			folio_unlock(fi.folio);
+ 		}
+ 		page = folio_page(fi.folio, fi.offset / PAGE_SIZE);
++		nr_pages = (fi.offset + fi.length - 1) / PAGE_SIZE -
++			   fi.offset / PAGE_SIZE + 1;
+ 		do {
+ 			bio_release_page(bio, page++);
+-			done += PAGE_SIZE;
+-		} while (done < fi.length);
++		} while (--nr_pages != 0);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(__bio_release_pages);
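
The rewritten loop above counts pages by index arithmetic instead of
accumulating bytes, which stays correct when the folio range begins or
ends mid-page. A standalone check of that arithmetic (illustrative only,
assuming 4 KiB pages):

#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long nr_pages(unsigned long offset, unsigned long length)
{
	return (offset + length - 1) / PAGE_SIZE - offset / PAGE_SIZE + 1;
}

int main(void)
{
	/* 100 bytes at offset 4000 straddle a page boundary: 2 pages */
	printf("%lu\n", nr_pages(4000, 100));
	/* 200 bytes fully inside one page: 1 page */
	printf("%lu\n", nr_pages(100, 200));
	return 0;
}
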
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index a02d3d922c583..a71974a5e57cd 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -772,16 +772,11 @@ static void req_bio_endio(struct request *rq, struct bio *bio,
+ 		/*
+ 		 * Partial zone append completions cannot be supported as the
+ 		 * BIO fragments may end up not being written sequentially.
+-		 * For such case, force the completed nbytes to be equal to
+-		 * the BIO size so that bio_advance() sets the BIO remaining
+-		 * size to 0 and we end up calling bio_endio() before returning.
+ 		 */
+-		if (bio->bi_iter.bi_size != nbytes) {
++		if (bio->bi_iter.bi_size != nbytes)
+ 			bio->bi_status = BLK_STS_IOERR;
+-			nbytes = bio->bi_iter.bi_size;
+-		} else {
++		else
+ 			bio->bi_iter.bi_sector = rq->__sector;
+-		}
+ 	}
+ 
+ 	bio_advance(bio, nbytes);
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 0046b447268f9..7019b8e204d96 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -686,6 +686,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
+ 	t->zone_write_granularity = max(t->zone_write_granularity,
+ 					b->zone_write_granularity);
+ 	t->zoned = max(t->zoned, b->zoned);
++	if (!t->zoned) {
++		t->zone_write_granularity = 0;
++		t->max_zone_append_sectors = 0;
++	}
+ 	return ret;
+ }
+ EXPORT_SYMBOL(blk_stack_limits);
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index f958e79277b8b..02a916ba62ee7 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -646,9 +646,8 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
+ 	struct request_queue *q = hctx->queue;
+ 	struct deadline_data *dd = q->elevator->elevator_data;
+ 	struct blk_mq_tags *tags = hctx->sched_tags;
+-	unsigned int shift = tags->bitmap_tags.sb.shift;
+ 
+-	dd->async_depth = max(1U, 3 * (1U << shift)  / 4);
++	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
+ 
+ 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
+ }
+diff --git a/crypto/asymmetric_keys/mscode_parser.c b/crypto/asymmetric_keys/mscode_parser.c
+index 05402ef8964ed..8aecbe4637f36 100644
+--- a/crypto/asymmetric_keys/mscode_parser.c
++++ b/crypto/asymmetric_keys/mscode_parser.c
+@@ -75,6 +75,9 @@ int mscode_note_digest_algo(void *context, size_t hdrlen,
+ 
+ 	oid = look_up_OID(value, vlen);
+ 	switch (oid) {
++	case OID_sha1:
++		ctx->digest_algo = "sha1";
++		break;
+ 	case OID_sha256:
+ 		ctx->digest_algo = "sha256";
+ 		break;
+diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
+index 5b08c50722d0f..231ad7b3789d5 100644
+--- a/crypto/asymmetric_keys/pkcs7_parser.c
++++ b/crypto/asymmetric_keys/pkcs7_parser.c
+@@ -227,6 +227,9 @@ int pkcs7_sig_note_digest_algo(void *context, size_t hdrlen,
+ 	struct pkcs7_parse_context *ctx = context;
+ 
+ 	switch (ctx->last_oid) {
++	case OID_sha1:
++		ctx->sinfo->sig->hash_algo = "sha1";
++		break;
+ 	case OID_sha256:
+ 		ctx->sinfo->sig->hash_algo = "sha256";
+ 		break;
+@@ -278,6 +281,7 @@ int pkcs7_sig_note_pkey_algo(void *context, size_t hdrlen,
+ 		ctx->sinfo->sig->pkey_algo = "rsa";
+ 		ctx->sinfo->sig->encoding = "pkcs1";
+ 		break;
++	case OID_id_ecdsa_with_sha1:
+ 	case OID_id_ecdsa_with_sha224:
+ 	case OID_id_ecdsa_with_sha256:
+ 	case OID_id_ecdsa_with_sha384:
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index e5f22691febd5..e314fd57e6f88 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -115,7 +115,8 @@ software_key_determine_akcipher(const struct public_key *pkey,
+ 		 */
+ 		if (!hash_algo)
+ 			return -EINVAL;
+-		if (strcmp(hash_algo, "sha224") != 0 &&
++		if (strcmp(hash_algo, "sha1") != 0 &&
++		    strcmp(hash_algo, "sha224") != 0 &&
+ 		    strcmp(hash_algo, "sha256") != 0 &&
+ 		    strcmp(hash_algo, "sha384") != 0 &&
+ 		    strcmp(hash_algo, "sha512") != 0 &&
+diff --git a/crypto/asymmetric_keys/signature.c b/crypto/asymmetric_keys/signature.c
+index 398983be77e8b..2deff81f8af50 100644
+--- a/crypto/asymmetric_keys/signature.c
++++ b/crypto/asymmetric_keys/signature.c
+@@ -115,7 +115,7 @@ EXPORT_SYMBOL_GPL(decrypt_blob);
+  * Sign the specified data blob using the private key specified by params->key.
+  * The signature is wrapped in an encoding if params->encoding is specified
+  * (eg. "pkcs1").  If the encoding needs to know the digest type, this can be
+- * passed through params->hash_algo (eg. "sha512").
++ * passed through params->hash_algo (eg. "sha1").
+  *
+  * Returns the length of the data placed in the signature buffer or an error.
+  */
+diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
+index 487204d394266..bb0bffa271b53 100644
+--- a/crypto/asymmetric_keys/x509_cert_parser.c
++++ b/crypto/asymmetric_keys/x509_cert_parser.c
+@@ -198,6 +198,10 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
+ 	default:
+ 		return -ENOPKG; /* Unsupported combination */
+ 
++	case OID_sha1WithRSAEncryption:
++		ctx->cert->sig->hash_algo = "sha1";
++		goto rsa_pkcs1;
++
+ 	case OID_sha256WithRSAEncryption:
+ 		ctx->cert->sig->hash_algo = "sha256";
+ 		goto rsa_pkcs1;
+@@ -214,6 +218,10 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
+ 		ctx->cert->sig->hash_algo = "sha224";
+ 		goto rsa_pkcs1;
+ 
++	case OID_id_ecdsa_with_sha1:
++		ctx->cert->sig->hash_algo = "sha1";
++		goto ecdsa;
++
+ 	case OID_id_rsassa_pkcs1_v1_5_with_sha3_256:
+ 		ctx->cert->sig->hash_algo = "sha3-256";
+ 		goto rsa_pkcs1;
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index d7e98397549b5..0cd6e0600255a 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -653,6 +653,30 @@ static const struct akcipher_testvec rsa_tv_template[] = {
+ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
+ 	{
+ 	.key =
++	"\x04\xf7\x46\xf8\x2f\x15\xf6\x22\x8e\xd7\x57\x4f\xcc\xe7\xbb\xc1"
++	"\xd4\x09\x73\xcf\xea\xd0\x15\x07\x3d\xa5\x8a\x8a\x95\x43\xe4\x68"
++	"\xea\xc6\x25\xc1\xc1\x01\x25\x4c\x7e\xc3\x3c\xa6\x04\x0a\xe7\x08"
++	"\x98",
++	.key_len = 49,
++	.params =
++	"\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
++	"\xce\x3d\x03\x01\x01",
++	.param_len = 21,
++	.m =
++	"\xcd\xb9\xd2\x1c\xb7\x6f\xcd\x44\xb3\xfd\x63\xea\xa3\x66\x7f\xae"
++	"\x63\x85\xe7\x82",
++	.m_size = 20,
++	.algo = OID_id_ecdsa_with_sha1,
++	.c =
++	"\x30\x35\x02\x19\x00\xba\xe5\x93\x83\x6e\xb6\x3b\x63\xa0\x27\x91"
++	"\xc6\xf6\x7f\xc3\x09\xad\x59\xad\x88\x27\xd6\x92\x6b\x02\x18\x10"
++	"\x68\x01\x9d\xba\xce\x83\x08\xef\x95\x52\x7b\xa0\x0f\xe4\x18\x86"
++	"\x80\x6f\xa5\x79\x77\xda\xd0",
++	.c_size = 55,
++	.public_key_vec = true,
++	.siggen_sigver_test = true,
++	}, {
++	.key =
+ 	"\x04\xb6\x4b\xb1\xd1\xac\xba\x24\x8f\x65\xb2\x60\x00\x90\xbf\xbd"
+ 	"\x78\x05\x73\xe9\x79\x1d\x6f\x7c\x0b\xd2\xc3\x93\xa7\x28\xe1\x75"
+ 	"\xf7\xd5\x95\x1d\x28\x10\xc0\x75\x50\x5c\x1a\x4f\x3f\x8f\xa5\xee"
+@@ -756,6 +780,32 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
+ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
+ 	{
+ 	.key =
++	"\x04\xb9\x7b\xbb\xd7\x17\x64\xd2\x7e\xfc\x81\x5d\x87\x06\x83\x41"
++	"\x22\xd6\x9a\xaa\x87\x17\xec\x4f\x63\x55\x2f\x94\xba\xdd\x83\xe9"
++	"\x34\x4b\xf3\xe9\x91\x13\x50\xb6\xcb\xca\x62\x08\xe7\x3b\x09\xdc"
++	"\xc3\x63\x4b\x2d\xb9\x73\x53\xe4\x45\xe6\x7c\xad\xe7\x6b\xb0\xe8"
++	"\xaf",
++	.key_len = 65,
++	.params =
++	"\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
++	"\xce\x3d\x03\x01\x07",
++	.param_len = 21,
++	.m =
++	"\xc2\x2b\x5f\x91\x78\x34\x26\x09\x42\x8d\x6f\x51\xb2\xc5\xaf\x4c"
++	"\x0b\xde\x6a\x42",
++	.m_size = 20,
++	.algo = OID_id_ecdsa_with_sha1,
++	.c =
++	"\x30\x46\x02\x21\x00\xf9\x25\xce\x9f\x3a\xa6\x35\x81\xcf\xd4\xe7"
++	"\xb7\xf0\x82\x56\x41\xf7\xd4\xad\x8d\x94\x5a\x69\x89\xee\xca\x6a"
++	"\x52\x0e\x48\x4d\xcc\x02\x21\x00\xd7\xe4\xef\x52\x66\xd3\x5b\x9d"
++	"\x8a\xfa\x54\x93\x29\xa7\x70\x86\xf1\x03\x03\xf3\x3b\xe2\x73\xf7"
++	"\xfb\x9d\x8b\xde\xd4\x8d\x6f\xad",
++	.c_size = 72,
++	.public_key_vec = true,
++	.siggen_sigver_test = true,
++	}, {
++	.key =
+ 	"\x04\x8b\x6d\xc0\x33\x8e\x2d\x8b\x67\xf5\xeb\xc4\x7f\xa0\xf5\xd9"
+ 	"\x7b\x03\xa5\x78\x9a\xb5\xea\x14\xe4\x23\xd0\xaf\xd7\x0e\x2e\xa0"
+ 	"\xc9\x8b\xdb\x95\xf8\xb3\xaf\xac\x00\x2c\x2c\x1f\x7a\xfd\x95\x88"
+@@ -866,6 +916,36 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
+ 
+ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
+ 	{
++	.key = /* secp384r1(sha1) */
++	"\x04\x89\x25\xf3\x97\x88\xcb\xb0\x78\xc5\x72\x9a\x14\x6e\x7a\xb1"
++	"\x5a\xa5\x24\xf1\x95\x06\x9e\x28\xfb\xc4\xb9\xbe\x5a\x0d\xd9\x9f"
++	"\xf3\xd1\x4d\x2d\x07\x99\xbd\xda\xa7\x66\xec\xbb\xea\xba\x79\x42"
++	"\xc9\x34\x89\x6a\xe7\x0b\xc3\xf2\xfe\x32\x30\xbe\xba\xf9\xdf\x7e"
++	"\x4b\x6a\x07\x8e\x26\x66\x3f\x1d\xec\xa2\x57\x91\x51\xdd\x17\x0e"
++	"\x0b\x25\xd6\x80\x5c\x3b\xe6\x1a\x98\x48\x91\x45\x7a\x73\xb0\xc3"
++	"\xf1",
++	.key_len = 97,
++	.params =
++	"\x30\x10\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x05\x2b\x81\x04"
++	"\x00\x22",
++	.param_len = 18,
++	.m =
++	"\x12\x55\x28\xf0\x77\xd5\xb6\x21\x71\x32\x48\xcd\x28\xa8\x25\x22"
++	"\x3a\x69\xc1\x93",
++	.m_size = 20,
++	.algo = OID_id_ecdsa_with_sha1,
++	.c =
++	"\x30\x66\x02\x31\x00\xf5\x0f\x24\x4c\x07\x93\x6f\x21\x57\x55\x07"
++	"\x20\x43\x30\xde\xa0\x8d\x26\x8e\xae\x63\x3f\xbc\x20\x3a\xc6\xf1"
++	"\x32\x3c\xce\x70\x2b\x78\xf1\x4c\x26\xe6\x5b\x86\xcf\xec\x7c\x7e"
++	"\xd0\x87\xd7\xd7\x6e\x02\x31\x00\xcd\xbb\x7e\x81\x5d\x8f\x63\xc0"
++	"\x5f\x63\xb1\xbe\x5e\x4c\x0e\xa1\xdf\x28\x8c\x1b\xfa\xf9\x95\x88"
++	"\x74\xa0\x0f\xbf\xaf\xc3\x36\x76\x4a\xa1\x59\xf1\x1c\xa4\x58\x26"
++	"\x79\x12\x2a\xb7\xc5\x15\x92\xc5",
++	.c_size = 104,
++	.public_key_vec = true,
++	.siggen_sigver_test = true,
++	}, {
+ 	.key = /* secp384r1(sha224) */
+ 	"\x04\x69\x6c\xcf\x62\xee\xd0\x0d\xe5\xb5\x2f\x70\x54\xcf\x26\xa0"
+ 	"\xd9\x98\x8d\x92\x2a\xab\x9b\x11\xcb\x48\x18\xa1\xa9\x0d\xd5\x18"
+diff --git a/drivers/accel/habanalabs/common/device.c b/drivers/accel/habanalabs/common/device.c
+index 9290d43745519..83821e1757539 100644
+--- a/drivers/accel/habanalabs/common/device.c
++++ b/drivers/accel/habanalabs/common/device.c
+@@ -2047,7 +2047,7 @@ static void hl_notifier_event_send(struct hl_notifier_event *notifier_event, u64
+ 	notifier_event->events_mask |= event_mask;
+ 
+ 	if (notifier_event->eventfd)
+-		eventfd_signal(notifier_event->eventfd, 1);
++		eventfd_signal(notifier_event->eventfd);
+ 
+ 	mutex_unlock(&notifier_event->lock);
+ }
+diff --git a/drivers/accessibility/speakup/synth.c b/drivers/accessibility/speakup/synth.c
+index eea2a2fa4f015..45f9061031338 100644
+--- a/drivers/accessibility/speakup/synth.c
++++ b/drivers/accessibility/speakup/synth.c
+@@ -208,8 +208,10 @@ void spk_do_flush(void)
+ 	wake_up_process(speakup_task);
+ }
+ 
+-void synth_write(const char *buf, size_t count)
++void synth_write(const char *_buf, size_t count)
+ {
++	const unsigned char *buf = (const unsigned char *) _buf;
++
+ 	while (count--)
+ 		synth_buffer_add(*buf++);
+ 	synth_start();
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index da2e74fce2d99..df3fd6474bf21 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -671,11 +671,6 @@ MODULE_PARM_DESC(mobile_lpm_policy, "Default LPM policy for mobile chipsets");
+ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
+ 					 struct ahci_host_priv *hpriv)
+ {
+-	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == 0x1166) {
+-		dev_info(&pdev->dev, "ASM1166 has only six ports\n");
+-		hpriv->saved_port_map = 0x3f;
+-	}
+-
+ 	if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) {
+ 		dev_info(&pdev->dev, "JMB361 has only one port\n");
+ 		hpriv->saved_port_map = 1;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index b0d6e69c4a5b2..214b935c2ced7 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -712,8 +712,10 @@ void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap)
+ 				ehc->saved_ncq_enabled |= 1 << devno;
+ 
+ 			/* If we are resuming, wake up the device */
+-			if (ap->pflags & ATA_PFLAG_RESUMING)
++			if (ap->pflags & ATA_PFLAG_RESUMING) {
++				dev->flags |= ATA_DFLAG_RESUMING;
+ 				ehc->i.dev_action[devno] |= ATA_EH_SET_ACTIVE;
++			}
+ 		}
+ 	}
+ 
+@@ -3169,6 +3171,7 @@ static int ata_eh_revalidate_and_attach(struct ata_link *link,
+ 	return 0;
+ 
+  err:
++	dev->flags &= ~ATA_DFLAG_RESUMING;
+ 	*r_failed_dev = dev;
+ 	return rc;
+ }
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 0a0f483124c3a..2f4c588376410 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -4730,6 +4730,7 @@ void ata_scsi_dev_rescan(struct work_struct *work)
+ 	struct ata_link *link;
+ 	struct ata_device *dev;
+ 	unsigned long flags;
++	bool do_resume;
+ 	int ret = 0;
+ 
+ 	mutex_lock(&ap->scsi_scan_mutex);
+@@ -4751,7 +4752,15 @@ void ata_scsi_dev_rescan(struct work_struct *work)
+ 			if (scsi_device_get(sdev))
+ 				continue;
+ 
++			do_resume = dev->flags & ATA_DFLAG_RESUMING;
++
+ 			spin_unlock_irqrestore(ap->lock, flags);
++			if (do_resume) {
++				ret = scsi_resume_device(sdev);
++				if (ret == -EWOULDBLOCK)
++					goto unlock;
++				dev->flags &= ~ATA_DFLAG_RESUMING;
++			}
+ 			ret = scsi_rescan_device(sdev);
+ 			scsi_device_put(sdev);
+ 			spin_lock_irqsave(ap->lock, flags);
+diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
+index 42171f766dcba..5a5a9e978e85f 100644
+--- a/drivers/base/power/wakeirq.c
++++ b/drivers/base/power/wakeirq.c
+@@ -313,8 +313,10 @@ void dev_pm_enable_wake_irq_complete(struct device *dev)
+ 		return;
+ 
+ 	if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED &&
+-	    wirq->status & WAKE_IRQ_DEDICATED_REVERSE)
++	    wirq->status & WAKE_IRQ_DEDICATED_REVERSE) {
+ 		enable_irq(wirq->irq);
++		wirq->status |= WAKE_IRQ_DEDICATED_ENABLED;
++	}
+ }
+ 
+ /**
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index 951fe3014a3f3..abccd571cf3ee 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1234,6 +1234,9 @@ static int btnxpuart_close(struct hci_dev *hdev)
+ 
+ 	ps_wakeup(nxpdev);
+ 	serdev_device_close(nxpdev->serdev);
++	skb_queue_purge(&nxpdev->txq);
++	kfree_skb(nxpdev->rx_skb);
++	nxpdev->rx_skb = NULL;
+ 	clear_bit(BTNXPUART_SERDEV_OPEN, &nxpdev->tx_state);
+ 	return 0;
+ }
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 1b350412d8a6b..64c875657687d 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -919,8 +919,6 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
+ 	int rc;
+ 	u32 int_status;
+ 
+-	INIT_WORK(&priv->free_irq_work, tpm_tis_free_irq_func);
+-
+ 	rc = devm_request_threaded_irq(chip->dev.parent, irq, NULL,
+ 				       tis_int_handler, IRQF_ONESHOT | flags,
+ 				       dev_name(&chip->dev), chip);
+@@ -1132,6 +1130,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 	priv->phy_ops = phy_ops;
+ 	priv->locality_count = 0;
+ 	mutex_init(&priv->locality_count_mutex);
++	INIT_WORK(&priv->free_irq_work, tpm_tis_free_irq_func);
+ 
+ 	dev_set_drvdata(&chip->dev, priv);
+ 
+diff --git a/drivers/clk/qcom/gcc-ipq5018.c b/drivers/clk/qcom/gcc-ipq5018.c
+index e2bd54826a4ce..c1732d70e3a23 100644
+--- a/drivers/clk/qcom/gcc-ipq5018.c
++++ b/drivers/clk/qcom/gcc-ipq5018.c
+@@ -857,6 +857,7 @@ static struct clk_rcg2 lpass_sway_clk_src = {
+ 
+ static const struct freq_tbl ftbl_pcie0_aux_clk_src[] = {
+ 	F(2000000, P_XO, 12, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 pcie0_aux_clk_src = {
+@@ -1099,6 +1100,7 @@ static const struct freq_tbl ftbl_qpic_io_macro_clk_src[] = {
+ 	F(100000000, P_GPLL0, 8, 0, 0),
+ 	F(200000000, P_GPLL0, 4, 0, 0),
+ 	F(320000000, P_GPLL0, 2.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 qpic_io_macro_clk_src = {
+@@ -1194,6 +1196,7 @@ static struct clk_rcg2 ubi0_axi_clk_src = {
+ static const struct freq_tbl ftbl_ubi0_core_clk_src[] = {
+ 	F(850000000, P_UBI32_PLL, 1, 0, 0),
+ 	F(1000000000, P_UBI32_PLL, 1, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 ubi0_core_clk_src = {
+diff --git a/drivers/clk/qcom/gcc-ipq6018.c b/drivers/clk/qcom/gcc-ipq6018.c
+index b366912cd6480..ef1e2ce4804d2 100644
+--- a/drivers/clk/qcom/gcc-ipq6018.c
++++ b/drivers/clk/qcom/gcc-ipq6018.c
+@@ -1554,6 +1554,7 @@ static struct clk_regmap_div nss_ubi0_div_clk_src = {
+ 
+ static const struct freq_tbl ftbl_pcie_aux_clk_src[] = {
+ 	F(24000000, P_XO, 1, 0, 0),
++	{ }
+ };
+ 
+ static const struct clk_parent_data gcc_xo_gpll0_core_pi_sleep_clk[] = {
+@@ -1734,6 +1735,7 @@ static const struct freq_tbl ftbl_sdcc_ice_core_clk_src[] = {
+ 	F(160000000, P_GPLL0, 5, 0, 0),
+ 	F(216000000, P_GPLL6, 5, 0, 0),
+ 	F(308570000, P_GPLL6, 3.5, 0, 0),
++	{ }
+ };
+ 
+ static const struct clk_parent_data gcc_xo_gpll0_gpll6_gpll0_div2[] = {
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index b7faf12a511a1..7bc679871f324 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -644,6 +644,7 @@ static struct clk_rcg2 pcie0_axi_clk_src = {
+ 
+ static const struct freq_tbl ftbl_pcie_aux_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
++	{ }
+ };
+ 
+ static const struct clk_parent_data gcc_xo_gpll0_sleep_clk[] = {
+@@ -795,6 +796,7 @@ static const struct freq_tbl ftbl_sdcc_ice_core_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
+ 	F(160000000, P_GPLL0, 5, 0, 0),
+ 	F(308570000, P_GPLL6, 3.5, 0, 0),
++	{ }
+ };
+ 
+ static const struct clk_parent_data gcc_xo_gpll0_gpll6_gpll0_div2[] = {
+diff --git a/drivers/clk/qcom/gcc-ipq9574.c b/drivers/clk/qcom/gcc-ipq9574.c
+index e8190108e1aef..0a3f846695b80 100644
+--- a/drivers/clk/qcom/gcc-ipq9574.c
++++ b/drivers/clk/qcom/gcc-ipq9574.c
+@@ -2082,6 +2082,7 @@ static struct clk_branch gcc_sdcc1_apps_clk = {
+ static const struct freq_tbl ftbl_sdcc_ice_core_clk_src[] = {
+ 	F(150000000, P_GPLL4, 8, 0, 0),
+ 	F(300000000, P_GPLL4, 4, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 sdcc1_ice_core_clk_src = {
+diff --git a/drivers/clk/qcom/gcc-sdm845.c b/drivers/clk/qcom/gcc-sdm845.c
+index 725cd52d2398e..ea4c3bf4fb9bf 100644
+--- a/drivers/clk/qcom/gcc-sdm845.c
++++ b/drivers/clk/qcom/gcc-sdm845.c
+@@ -4037,3 +4037,4 @@ module_exit(gcc_sdm845_exit);
+ MODULE_DESCRIPTION("QTI GCC SDM845 Driver");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:gcc-sdm845");
++MODULE_SOFTDEP("pre: rpmhpd");
+diff --git a/drivers/clk/qcom/mmcc-apq8084.c b/drivers/clk/qcom/mmcc-apq8084.c
+index 02fc21208dd14..c89700ab93f9c 100644
+--- a/drivers/clk/qcom/mmcc-apq8084.c
++++ b/drivers/clk/qcom/mmcc-apq8084.c
+@@ -348,6 +348,7 @@ static struct freq_tbl ftbl_mmss_axi_clk[] = {
+ 	F(333430000, P_MMPLL1, 3.5, 0, 0),
+ 	F(400000000, P_MMPLL0, 2, 0, 0),
+ 	F(466800000, P_MMPLL1, 2.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 mmss_axi_clk_src = {
+@@ -372,6 +373,7 @@ static struct freq_tbl ftbl_ocmemnoc_clk[] = {
+ 	F(150000000, P_GPLL0, 4, 0, 0),
+ 	F(228570000, P_MMPLL0, 3.5, 0, 0),
+ 	F(320000000, P_MMPLL0, 2.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 ocmemnoc_clk_src = {
+diff --git a/drivers/clk/qcom/mmcc-msm8974.c b/drivers/clk/qcom/mmcc-msm8974.c
+index a31f6cf0c4e0c..36f460b78be2c 100644
+--- a/drivers/clk/qcom/mmcc-msm8974.c
++++ b/drivers/clk/qcom/mmcc-msm8974.c
+@@ -290,6 +290,7 @@ static struct freq_tbl ftbl_mmss_axi_clk[] = {
+ 	F(291750000, P_MMPLL1, 4, 0, 0),
+ 	F(400000000, P_MMPLL0, 2, 0, 0),
+ 	F(466800000, P_MMPLL1, 2.5, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 mmss_axi_clk_src = {
+@@ -314,6 +315,7 @@ static struct freq_tbl ftbl_ocmemnoc_clk[] = {
+ 	F(150000000, P_GPLL0, 4, 0, 0),
+ 	F(291750000, P_MMPLL1, 4, 0, 0),
+ 	F(400000000, P_MMPLL0, 2, 0, 0),
++	{ }
+ };
+ 
+ static struct clk_rcg2 ocmemnoc_clk_src = {
+diff --git a/drivers/clocksource/arm_global_timer.c b/drivers/clocksource/arm_global_timer.c
+index 44a61dc6f9320..e1c773bb55359 100644
+--- a/drivers/clocksource/arm_global_timer.c
++++ b/drivers/clocksource/arm_global_timer.c
+@@ -32,7 +32,7 @@
+ #define GT_CONTROL_IRQ_ENABLE		BIT(2)	/* banked */
+ #define GT_CONTROL_AUTO_INC		BIT(3)	/* banked */
+ #define GT_CONTROL_PRESCALER_SHIFT      8
+-#define GT_CONTROL_PRESCALER_MAX        0xF
++#define GT_CONTROL_PRESCALER_MAX        0xFF
+ #define GT_CONTROL_PRESCALER_MASK       (GT_CONTROL_PRESCALER_MAX << \
+ 					 GT_CONTROL_PRESCALER_SHIFT)
+ 
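
The one-character change above widens the Cortex-A9 global timer
prescaler field from 4 to 8 bits; with the field at bits [15:8], the
derived mask grows from 0x0F00 to 0xFF00. A trivial check (illustrative
only):

#include <stdio.h>

#define GT_CONTROL_PRESCALER_SHIFT	8
#define GT_CONTROL_PRESCALER_MAX	0xFFU
#define GT_CONTROL_PRESCALER_MASK	(GT_CONTROL_PRESCALER_MAX << \
					 GT_CONTROL_PRESCALER_SHIFT)

int main(void)
{
	printf("mask = 0x%X\n", GT_CONTROL_PRESCALER_MASK);	/* 0xFF00 */
	return 0;
}
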
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index 57857c0dfba97..1c732479a2c8d 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -101,6 +101,9 @@ static int riscv_timer_starting_cpu(unsigned int cpu)
+ {
+ 	struct clock_event_device *ce = per_cpu_ptr(&riscv_clock_event, cpu);
+ 
++	/* Clear timer interrupt */
++	riscv_clock_event_stop();
++
+ 	ce->cpumask = cpumask_of(cpu);
+ 	ce->irq = riscv_clock_event_irq;
+ 	if (riscv_timer_cannot_wake_cpu)
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 1791d37fbc53c..07f3419954396 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -570,7 +570,7 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ 	if (target_perf < capacity)
+ 		des_perf = DIV_ROUND_UP(cap_perf * target_perf, capacity);
+ 
+-	min_perf = READ_ONCE(cpudata->highest_perf);
++	min_perf = READ_ONCE(cpudata->lowest_perf);
+ 	if (_min_perf < capacity)
+ 		min_perf = DIV_ROUND_UP(cap_perf * _min_perf, capacity);
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 8bd6e5e8f121c..2d83bbc65dd0b 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -208,7 +208,7 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+-	if (!alloc_cpumask_var(&priv->cpus, GFP_KERNEL))
++	if (!zalloc_cpumask_var(&priv->cpus, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+ 	cpumask_set_cpu(cpu, priv->cpus);
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 8d4c42863a621..d2cf9619018b1 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -299,22 +299,6 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
+ 	return err;
+ }
+ 
+-static void sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
+-{
+-	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(breq);
+-	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
+-	struct sun8i_ce_dev *ce = op->ce;
+-	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(breq);
+-	int flow, err;
+-
+-	flow = rctx->flow;
+-	err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
+-	local_bh_disable();
+-	crypto_finalize_skcipher_request(engine, breq, err);
+-	local_bh_enable();
+-}
+-
+ static void sun8i_ce_cipher_unprepare(struct crypto_engine *engine,
+ 				      void *async_req)
+ {
+@@ -360,6 +344,23 @@ static void sun8i_ce_cipher_unprepare(struct crypto_engine *engine,
+ 	dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
+ }
+ 
++static void sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
++{
++	struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
++	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(breq);
++	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
++	struct sun8i_ce_dev *ce = op->ce;
++	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(breq);
++	int flow, err;
++
++	flow = rctx->flow;
++	err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
++	sun8i_ce_cipher_unprepare(engine, areq);
++	local_bh_disable();
++	crypto_finalize_skcipher_request(engine, breq, err);
++	local_bh_enable();
++}
++
+ int sun8i_ce_cipher_do_one(struct crypto_engine *engine, void *areq)
+ {
+ 	int err = sun8i_ce_cipher_prepare(engine, areq);
+@@ -368,7 +369,6 @@ int sun8i_ce_cipher_do_one(struct crypto_engine *engine, void *areq)
+ 		return err;
+ 
+ 	sun8i_ce_cipher_run(engine, areq);
+-	sun8i_ce_cipher_unprepare(engine, areq);
+ 	return 0;
+ }
+ 
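
This sun8i-ce change, like the rk3288 one later in the patch, enforces a
lifetime rule: crypto_finalize_skcipher_request() (or the hash
counterpart) completes the request, after which the request and its
context may be freed, so any teardown that touches them, DMA unmapping
included, must run first. Schematically (a sketch using hypothetical
run_hw() and unmap_dma() helpers, not the driver code):

static void do_one_request(struct crypto_engine *engine, void *areq)
{
	struct skcipher_request *req =
		container_of(areq, struct skcipher_request, base);
	int err;

	err = run_hw(req);	/* hypothetical: drive the hardware */
	unmap_dma(req);		/* hypothetical: must precede finalize */

	local_bh_disable();
	crypto_finalize_skcipher_request(engine, req, err);
	local_bh_enable();
	/* 'req' must not be touched past this point */
}
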
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+index a39e70bd4b21b..621d14ea3b81a 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_aer.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+@@ -92,7 +92,8 @@ static void adf_device_reset_worker(struct work_struct *work)
+ 	if (adf_dev_restart(accel_dev)) {
+ 		/* The device hanged and we can't restart it so stop here */
+ 		dev_err(&GET_DEV(accel_dev), "Restart device failed\n");
+-		if (reset_data->mode == ADF_DEV_RESET_ASYNC)
++		if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
++		    completion_done(&reset_data->compl))
+ 			kfree(reset_data);
+ 		WARN(1, "QAT: device restart failed. Device is unusable\n");
+ 		return;
+@@ -100,11 +101,19 @@ static void adf_device_reset_worker(struct work_struct *work)
+ 	adf_dev_restarted_notify(accel_dev);
+ 	clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status);
+ 
+-	/* The dev is back alive. Notify the caller if in sync mode */
+-	if (reset_data->mode == ADF_DEV_RESET_SYNC)
+-		complete(&reset_data->compl);
+-	else
++	/*
++	 * The device is back alive. Notify the caller if in sync mode.
++	 *
++	 * If the device restart takes more time than expected, the
++	 * schedule_reset() function can time out and exit. This can be
++	 * detected by calling completion_done(). In this case the
++	 * reset_data structure needs to be freed here.
++	 */
++	if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
++	    completion_done(&reset_data->compl))
+ 		kfree(reset_data);
++	else
++		complete(&reset_data->compl);
+ }
+ 
+ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
+@@ -137,8 +146,9 @@ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
+ 			dev_err(&GET_DEV(accel_dev),
+ 				"Reset device timeout expired\n");
+ 			ret = -EFAULT;
++		} else {
++			kfree(reset_data);
+ 		}
+-		kfree(reset_data);
+ 		return ret;
+ 	}
+ 	return 0;
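
The intent of the reworked cleanup above is a last-one-out-frees handoff
between the reset worker and a synchronous waiter: if the waiter already
timed out and left, the worker owns reset_data and frees it; otherwise
the waiter is signalled and frees it after waking. A rough userspace
analogue of that intent, using a mutex-protected flag in place of
completion_done() (illustrative only, not the QAT code):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct reset_data {
	pthread_mutex_t lock;
	bool waiter_gone;	/* waiter timed out and abandoned the wait */
};

static void *reset_worker(void *arg)
{
	struct reset_data *rd = arg;
	bool free_it;

	sleep(2);			/* simulate a slow device restart */

	pthread_mutex_lock(&rd->lock);
	free_it = rd->waiter_gone;	/* last one out frees */
	pthread_mutex_unlock(&rd->lock);

	if (free_it) {
		free(rd);
		puts("worker freed reset_data");
	}
	return NULL;
}

int main(void)
{
	struct reset_data *rd = calloc(1, sizeof(*rd));
	pthread_t t;

	pthread_mutex_init(&rd->lock, NULL);
	pthread_create(&t, NULL, reset_worker, rd);

	sleep(1);			/* our "completion timeout" expires */
	pthread_mutex_lock(&rd->lock);
	rd->waiter_gone = true;		/* hand ownership to the worker */
	pthread_mutex_unlock(&rd->lock);

	pthread_join(t, NULL);
	return 0;
}
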
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.c b/drivers/crypto/intel/qat/qat_common/adf_rl.c
+index de1b214dba1f9..d4f2db3c53d8c 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_rl.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_rl.c
+@@ -788,6 +788,24 @@ static void clear_sla(struct adf_rl *rl_data, struct rl_sla *sla)
+ 	sla_type_arr[node_id] = NULL;
+ }
+ 
++static void free_all_sla(struct adf_accel_dev *accel_dev)
++{
++	struct adf_rl *rl_data = accel_dev->rate_limiting;
++	int sla_id;
++
++	mutex_lock(&rl_data->rl_lock);
++
++	for (sla_id = 0; sla_id < RL_NODES_CNT_MAX; sla_id++) {
++		if (!rl_data->sla[sla_id])
++			continue;
++
++		kfree(rl_data->sla[sla_id]);
++		rl_data->sla[sla_id] = NULL;
++	}
++
++	mutex_unlock(&rl_data->rl_lock);
++}
++
+ /**
+  * add_update_sla() - handles the creation and the update of an SLA
+  * @accel_dev: pointer to acceleration device structure
+@@ -1155,7 +1173,7 @@ void adf_rl_stop(struct adf_accel_dev *accel_dev)
+ 		return;
+ 
+ 	adf_sysfs_rl_rm(accel_dev);
+-	adf_rl_remove_sla_all(accel_dev, true);
++	free_all_sla(accel_dev);
+ }
+ 
+ void adf_rl_exit(struct adf_accel_dev *accel_dev)
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+index 1b13b4aa16ecc..a235e6c300f1e 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+@@ -332,12 +332,12 @@ static int rk_hash_run(struct crypto_engine *engine, void *breq)
+ theend:
+ 	pm_runtime_put_autosuspend(rkc->dev);
+ 
++	rk_hash_unprepare(engine, breq);
++
+ 	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
+ 	local_bh_enable();
+ 
+-	rk_hash_unprepare(engine, breq);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/cxl/core/trace.h b/drivers/cxl/core/trace.h
+index a0b5819bc70b3..f01d0709c9c32 100644
+--- a/drivers/cxl/core/trace.h
++++ b/drivers/cxl/core/trace.h
+@@ -642,18 +642,18 @@ u64 cxl_trace_hpa(struct cxl_region *cxlr, struct cxl_memdev *memdev, u64 dpa);
+ 
+ TRACE_EVENT(cxl_poison,
+ 
+-	TP_PROTO(struct cxl_memdev *cxlmd, struct cxl_region *region,
++	TP_PROTO(struct cxl_memdev *cxlmd, struct cxl_region *cxlr,
+ 		 const struct cxl_poison_record *record, u8 flags,
+ 		 __le64 overflow_ts, enum cxl_poison_trace_type trace_type),
+ 
+-	TP_ARGS(cxlmd, region, record, flags, overflow_ts, trace_type),
++	TP_ARGS(cxlmd, cxlr, record, flags, overflow_ts, trace_type),
+ 
+ 	TP_STRUCT__entry(
+ 		__string(memdev, dev_name(&cxlmd->dev))
+ 		__string(host, dev_name(cxlmd->dev.parent))
+ 		__field(u64, serial)
+ 		__field(u8, trace_type)
+-		__string(region, region)
++		__string(region, cxlr ? dev_name(&cxlr->dev) : "")
+ 		__field(u64, overflow_ts)
+ 		__field(u64, hpa)
+ 		__field(u64, dpa)
+@@ -673,10 +673,10 @@ TRACE_EVENT(cxl_poison,
+ 		__entry->source = cxl_poison_record_source(record);
+ 		__entry->trace_type = trace_type;
+ 		__entry->flags = flags;
+-		if (region) {
+-			__assign_str(region, dev_name(&region->dev));
+-			memcpy(__entry->uuid, &region->params.uuid, 16);
+-			__entry->hpa = cxl_trace_hpa(region, cxlmd,
++		if (cxlr) {
++			__assign_str(region, dev_name(&cxlr->dev));
++			memcpy(__entry->uuid, &cxlr->params.uuid, 16);
++			__entry->hpa = cxl_trace_hpa(cxlr, cxlmd,
+ 						     __entry->dpa);
+ 		} else {
+ 			__assign_str(region, "");
+diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c
+index 9db9290c32693..7bc71f4be64a0 100644
+--- a/drivers/firewire/ohci.c
++++ b/drivers/firewire/ohci.c
+@@ -3773,6 +3773,7 @@ static int pci_probe(struct pci_dev *dev,
+ 	return 0;
+ 
+  fail_msi:
++	devm_free_irq(&dev->dev, dev->irq, ohci);
+ 	pci_disable_msi(dev);
+ 
+ 	return err;
+@@ -3800,6 +3801,7 @@ static void pci_remove(struct pci_dev *dev)
+ 
+ 	software_reset(ohci);
+ 
++	devm_free_irq(&dev->dev, dev->irq, ohci);
+ 	pci_disable_msi(dev);
+ 
+ 	dev_notice(&dev->dev, "removing fw-ohci device\n");
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 9d3910d1abe19..abdfcb5aa470c 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -199,6 +199,8 @@ static bool generic_ops_supported(void)
+ 
+ 	name_size = sizeof(name);
+ 
++	if (!efi.get_next_variable)
++		return false;
+ 	status = efi.get_next_variable(&name_size, &name, &guid);
+ 	if (status == EFI_UNSUPPORTED)
+ 		return false;
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index 4e96a855fdf47..c41e7b2091cdd 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -120,7 +120,7 @@ efi_status_t efi_random_alloc(unsigned long size,
+ 			continue;
+ 		}
+ 
+-		target = round_up(md->phys_addr, align) + target_slot * align;
++		target = round_up(max_t(u64, md->phys_addr, alloc_min), align) + target_slot * align;
+ 		pages = size / EFI_PAGE_SIZE;
+ 
+ 		status = efi_bs_call(allocate_pages, EFI_ALLOCATE_ADDRESS,
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index c9857ee3880c2..dd80082aac1ac 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -487,6 +487,7 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
+ 	hdr->vid_mode	= 0xffff;
+ 
+ 	hdr->type_of_loader = 0x21;
++	hdr->initrd_addr_max = INT_MAX;
+ 
+ 	/* Convert unicode cmdline to ascii */
+ 	cmdline_ptr = efi_convert_cmdline(image, &options_size);
+diff --git a/drivers/fpga/dfl.c b/drivers/fpga/dfl.c
+index dd7a783d53b5f..e73f88050f08d 100644
+--- a/drivers/fpga/dfl.c
++++ b/drivers/fpga/dfl.c
+@@ -1872,7 +1872,7 @@ static irqreturn_t dfl_irq_handler(int irq, void *arg)
+ {
+ 	struct eventfd_ctx *trigger = arg;
+ 
+-	eventfd_signal(trigger, 1);
++	eventfd_signal(trigger);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 31d4b5a2c5e83..13c62a26aa19c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -198,6 +198,7 @@ extern uint amdgpu_dc_debug_mask;
+ extern uint amdgpu_dc_visual_confirm;
+ extern uint amdgpu_dm_abm_level;
+ extern int amdgpu_backlight;
++extern int amdgpu_damage_clips;
+ extern struct amdgpu_mgpu_info mgpu_info;
+ extern int amdgpu_ras_enable;
+ extern uint amdgpu_ras_mask;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 10c4a8cfa18a0..855ab596323c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -209,6 +209,7 @@ int amdgpu_umsch_mm;
+ int amdgpu_seamless = -1; /* auto */
+ uint amdgpu_debug_mask;
+ int amdgpu_agp = -1; /* auto */
++int amdgpu_damage_clips = -1; /* auto */
+ 
+ static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work);
+ 
+@@ -857,6 +858,18 @@ int amdgpu_backlight = -1;
+ MODULE_PARM_DESC(backlight, "Backlight control (0 = pwm, 1 = aux, -1 auto (default))");
+ module_param_named(backlight, amdgpu_backlight, bint, 0444);
+ 
++/**
++ * DOC: damageclips (int)
++ * Enable or disable damage clips support. If damage clips support is disabled,
++ * we will force full frame updates, irrespective of what user space sends to
++ * us.
++ *
++ * Defaults to -1 (where it is enabled unless a PSR-SU display is detected).
++ */
++MODULE_PARM_DESC(damageclips,
++		 "Damage clips support (0 = disable, 1 = enable, -1 auto (default))");
++module_param_named(damageclips, amdgpu_damage_clips, int, 0444);
++
+ /**
+  * DOC: tmz (int)
+  * Trusted Memory Zone (TMZ) is a method to protect data being written
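
Parameters registered with module_param_named(), such as damageclips
above, can be set at module load time or on the kernel command line
(illustrative usage, not part of the patch):

	# kernel command line: force full-frame updates
	amdgpu.damageclips=0

	# or at module load time
	modprobe amdgpu damageclips=0
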
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_hmm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_hmm.c
+index 081267161d401..57516a8c5db34 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_hmm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_hmm.c
+@@ -129,13 +129,25 @@ static const struct mmu_interval_notifier_ops amdgpu_hmm_hsa_ops = {
+  */
+ int amdgpu_hmm_register(struct amdgpu_bo *bo, unsigned long addr)
+ {
++	int r;
++
+ 	if (bo->kfd_bo)
+-		return mmu_interval_notifier_insert(&bo->notifier, current->mm,
++		r = mmu_interval_notifier_insert(&bo->notifier, current->mm,
+ 						    addr, amdgpu_bo_size(bo),
+ 						    &amdgpu_hmm_hsa_ops);
+-	return mmu_interval_notifier_insert(&bo->notifier, current->mm, addr,
+-					    amdgpu_bo_size(bo),
+-					    &amdgpu_hmm_gfx_ops);
++	else
++		r = mmu_interval_notifier_insert(&bo->notifier, current->mm, addr,
++							amdgpu_bo_size(bo),
++							&amdgpu_hmm_gfx_ops);
++	if (r)
++		/*
++		 * Make sure amdgpu_hmm_unregister() doesn't call
++		 * mmu_interval_notifier_remove() when the notifier isn't properly
++		 * initialized.
++		 */
++		bo->notifier.mm = NULL;
++
++	return r;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 45424ebf96814..4e526d7730339 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -524,46 +524,58 @@ static ssize_t amdgpu_debugfs_mqd_read(struct file *f, char __user *buf,
+ {
+ 	struct amdgpu_ring *ring = file_inode(f)->i_private;
+ 	volatile u32 *mqd;
+-	int r;
++	u32 *kbuf;
++	int r, i;
+ 	uint32_t value, result;
+ 
+ 	if (*pos & 3 || size & 3)
+ 		return -EINVAL;
+ 
+-	result = 0;
++	kbuf = kmalloc(ring->mqd_size, GFP_KERNEL);
++	if (!kbuf)
++		return -ENOMEM;
+ 
+ 	r = amdgpu_bo_reserve(ring->mqd_obj, false);
+ 	if (unlikely(r != 0))
+-		return r;
++		goto err_free;
+ 
+ 	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&mqd);
+-	if (r) {
+-		amdgpu_bo_unreserve(ring->mqd_obj);
+-		return r;
+-	}
++	if (r)
++		goto err_unreserve;
+ 
++	/*
++	 * Copy to local buffer to avoid put_user(), which might fault
++	 * and acquire mmap_sem, under reservation_ww_class_mutex.
++	 */
++	for (i = 0; i < ring->mqd_size/sizeof(u32); i++)
++		kbuf[i] = mqd[i];
++
++	amdgpu_bo_kunmap(ring->mqd_obj);
++	amdgpu_bo_unreserve(ring->mqd_obj);
++
++	result = 0;
+ 	while (size) {
+ 		if (*pos >= ring->mqd_size)
+-			goto done;
++			break;
+ 
+-		value = mqd[*pos/4];
++		value = kbuf[*pos/4];
+ 		r = put_user(value, (uint32_t *)buf);
+ 		if (r)
+-			goto done;
++			goto err_free;
+ 		buf += 4;
+ 		result += 4;
+ 		size -= 4;
+ 		*pos += 4;
+ 	}
+ 
+-done:
+-	amdgpu_bo_kunmap(ring->mqd_obj);
+-	mqd = NULL;
+-	amdgpu_bo_unreserve(ring->mqd_obj);
+-	if (r)
+-		return r;
+-
++	kfree(kbuf);
+ 	return result;
++
++err_unreserve:
++	amdgpu_bo_unreserve(ring->mqd_obj);
++err_free:
++	kfree(kbuf);
++	return r;
+ }
+ 
+ static const struct file_operations amdgpu_debugfs_mqd_fops = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 75c9fd2c6c2a1..b0ed10f4de609 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -869,6 +869,7 @@ static void amdgpu_ttm_gart_bind(struct amdgpu_device *adev,
+ 		amdgpu_gart_bind(adev, gtt->offset, ttm->num_pages,
+ 				 gtt->ttm.dma_address, flags);
+ 	}
++	gtt->bound = true;
+ }
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 71445ab63b5e5..02d1d9afbf66a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -1466,7 +1466,7 @@ void kfd_flush_tlb(struct kfd_process_device *pdd, enum TLB_FLUSH_TYPE type);
+ 
+ static inline bool kfd_flush_tlb_after_unmap(struct kfd_dev *dev)
+ {
+-	return KFD_GC_VERSION(dev) > IP_VERSION(9, 4, 2) ||
++	return KFD_GC_VERSION(dev) >= IP_VERSION(9, 4, 2) ||
+ 	       (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) && dev->sdma_fw_version >= 18) ||
+ 	       KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 49f0c9454a6e6..dafe9562a7370 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -5129,6 +5129,10 @@ static inline void fill_dc_dirty_rect(struct drm_plane *plane,
+  * @new_plane_state: New state of @plane
+  * @crtc_state: New state of CRTC connected to the @plane
+  * @flip_addrs: DC flip tracking struct, which also tracts dirty rects
++ * @is_psr_su: Flag indicating whether Panel Self Refresh Selective Update (PSR SU) is enabled.
++ *             If PSR SU is enabled and damage clips are available, only the regions of the screen
++ *             that have changed will be updated. If PSR SU is not enabled,
++ *             or if damage clips are not available, the entire screen will be updated.
+  * @dirty_regions_changed: dirty regions changed
+  *
+  * For PSR SU, DC informs the DMUB uController of dirty rectangle regions
+@@ -5147,6 +5151,7 @@ static void fill_dc_dirty_rects(struct drm_plane *plane,
+ 				struct drm_plane_state *new_plane_state,
+ 				struct drm_crtc_state *crtc_state,
+ 				struct dc_flip_addrs *flip_addrs,
++				bool is_psr_su,
+ 				bool *dirty_regions_changed)
+ {
+ 	struct dm_crtc_state *dm_crtc_state = to_dm_crtc_state(crtc_state);
+@@ -5171,6 +5176,10 @@ static void fill_dc_dirty_rects(struct drm_plane *plane,
+ 	num_clips = drm_plane_get_damage_clips_count(new_plane_state);
+ 	clips = drm_plane_get_damage_clips(new_plane_state);
+ 
++	if (num_clips && (!amdgpu_damage_clips || (amdgpu_damage_clips < 0 &&
++						   is_psr_su)))
++		goto ffu;
++
+ 	if (!dm_crtc_state->mpo_requested) {
+ 		if (!num_clips || num_clips > DC_MAX_DIRTY_RECTS)
+ 			goto ffu;
+@@ -6163,9 +6172,8 @@ create_stream_for_sink(struct drm_connector *connector,
+ 
+ 	if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A)
+ 		mod_build_hf_vsif_infopacket(stream, &stream->vsp_infopacket);
+-	else if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT ||
+-			 stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST ||
+-			 stream->signal == SIGNAL_TYPE_EDP) {
++
++	if (stream->link->psr_settings.psr_feature_enabled || stream->link->replay_settings.replay_feature_enabled) {
+ 		//
+ 		// should decide stream support vsc sdp colorimetry capability
+ 		// before building vsc info packet
+@@ -6181,9 +6189,8 @@ create_stream_for_sink(struct drm_connector *connector,
+ 		if (stream->out_transfer_func->tf == TRANSFER_FUNCTION_GAMMA22)
+ 			tf = TRANSFER_FUNC_GAMMA_22;
+ 		mod_build_vsc_infopacket(stream, &stream->vsc_infopacket, stream->output_color_space, tf);
++		aconnector->psr_skip_count = AMDGPU_DM_PSR_ENTRY_DELAY;
+ 
+-		if (stream->link->psr_settings.psr_feature_enabled)
+-			aconnector->psr_skip_count = AMDGPU_DM_PSR_ENTRY_DELAY;
+ 	}
+ finish:
+ 	dc_sink_release(sink);
+@@ -8218,6 +8225,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 			fill_dc_dirty_rects(plane, old_plane_state,
+ 					    new_plane_state, new_crtc_state,
+ 					    &bundle->flip_addrs[planes_count],
++					    acrtc_state->stream->link->psr_settings.psr_version ==
++					    DC_PSR_VERSION_SU_1,
+ 					    &dirty_rects_changed);
+ 
+ 			/*
+@@ -10839,18 +10848,24 @@ void amdgpu_dm_update_freesync_caps(struct drm_connector *connector,
+ 	if (!adev->dm.freesync_module)
+ 		goto update;
+ 
+-	if (sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT
+-		|| sink->sink_signal == SIGNAL_TYPE_EDP) {
++	if (edid && (sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT ||
++		     sink->sink_signal == SIGNAL_TYPE_EDP)) {
+ 		bool edid_check_required = false;
+ 
+-		if (edid) {
+-			edid_check_required = is_dp_capable_without_timing_msa(
+-						adev->dm.dc,
+-						amdgpu_dm_connector);
++		if (is_dp_capable_without_timing_msa(adev->dm.dc,
++						     amdgpu_dm_connector)) {
++			if (edid->features & DRM_EDID_FEATURE_CONTINUOUS_FREQ) {
++				freesync_capable = true;
++				amdgpu_dm_connector->min_vfreq = connector->display_info.monitor_range.min_vfreq;
++				amdgpu_dm_connector->max_vfreq = connector->display_info.monitor_range.max_vfreq;
++			} else {
++				edid_check_required = edid->version > 1 ||
++						      (edid->version == 1 &&
++						       edid->revision > 1);
++			}
+ 		}
+ 
+-		if (edid_check_required == true && (edid->version > 1 ||
+-		   (edid->version == 1 && edid->revision > 1))) {
++		if (edid_check_required) {
+ 			for (i = 0; i < 4; i++) {
+ 
+ 				timing	= &edid->detailed_timings[i];
+@@ -10870,14 +10885,23 @@ void amdgpu_dm_update_freesync_caps(struct drm_connector *connector,
+ 				if (range->flags != 1)
+ 					continue;
+ 
+-				amdgpu_dm_connector->min_vfreq = range->min_vfreq;
+-				amdgpu_dm_connector->max_vfreq = range->max_vfreq;
+-				amdgpu_dm_connector->pixel_clock_mhz =
+-					range->pixel_clock_mhz * 10;
+-
+ 				connector->display_info.monitor_range.min_vfreq = range->min_vfreq;
+ 				connector->display_info.monitor_range.max_vfreq = range->max_vfreq;
+ 
++				if (edid->revision >= 4) {
++					if (data->pad2 & DRM_EDID_RANGE_OFFSET_MIN_VFREQ)
++						connector->display_info.monitor_range.min_vfreq += 255;
++					if (data->pad2 & DRM_EDID_RANGE_OFFSET_MAX_VFREQ)
++						connector->display_info.monitor_range.max_vfreq += 255;
++				}
++
++				amdgpu_dm_connector->min_vfreq =
++					connector->display_info.monitor_range.min_vfreq;
++				amdgpu_dm_connector->max_vfreq =
++					connector->display_info.monitor_range.max_vfreq;
++				amdgpu_dm_connector->pixel_clock_mhz =
++					range->pixel_clock_mhz * 10;
++
+ 				break;
+ 			}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+index a496930b1f9c0..289918ea7298d 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+@@ -217,6 +217,16 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
+ 	if (clk_mgr_base->bw_params->dc_mode_limit.dispclk_mhz > 1950)
+ 		clk_mgr_base->bw_params->dc_mode_limit.dispclk_mhz = 1950;
+ 
++	/* DPPCLK */
++	dcn32_init_single_clock(clk_mgr, PPCLK_DPPCLK,
++			&clk_mgr_base->bw_params->clk_table.entries[0].dppclk_mhz,
++			&num_entries_per_clk->num_dppclk_levels);
++	num_levels = num_entries_per_clk->num_dppclk_levels;
++	clk_mgr_base->bw_params->dc_mode_limit.dppclk_mhz = dcn30_smu_get_dc_mode_max_dpm_freq(clk_mgr, PPCLK_DPPCLK);
++	//HW recommends limit of 1950 MHz in display clock for all DCN3.2.x
++	if (clk_mgr_base->bw_params->dc_mode_limit.dppclk_mhz > 1950)
++		clk_mgr_base->bw_params->dc_mode_limit.dppclk_mhz = 1950;
++
+ 	if (num_entries_per_clk->num_dcfclk_levels &&
+ 			num_entries_per_clk->num_dtbclk_levels &&
+ 			num_entries_per_clk->num_dispclk_levels)
+@@ -241,6 +251,10 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
+ 					= khz_to_mhz_ceil(clk_mgr_base->ctx->dc->debug.min_dpp_clk_khz);
+ 	}
+ 
++	for (i = 0; i < num_levels; i++)
++		if (clk_mgr_base->bw_params->clk_table.entries[i].dppclk_mhz > 1950)
++			clk_mgr_base->bw_params->clk_table.entries[i].dppclk_mhz = 1950;
++
+ 	/* Get UCLK, update bounding box */
+ 	clk_mgr_base->funcs->get_memclk_states_from_smu(clk_mgr_base);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 54df6cac19c8b..353d5fb9e3f82 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -705,7 +705,7 @@ static void dcn35_clk_mgr_helper_populate_bw_params(struct clk_mgr_internal *clk
+ 		clock_table->NumFclkLevelsEnabled;
+ 	max_fclk = find_max_clk_value(clock_table->FclkClocks_Freq, num_fclk);
+ 
+-	num_dcfclk = (clock_table->NumFclkLevelsEnabled > NUM_DCFCLK_DPM_LEVELS) ? NUM_DCFCLK_DPM_LEVELS :
++	num_dcfclk = (clock_table->NumDcfClkLevelsEnabled > NUM_DCFCLK_DPM_LEVELS) ? NUM_DCFCLK_DPM_LEVELS :
+ 		clock_table->NumDcfClkLevelsEnabled;
+ 	for (i = 0; i < num_dcfclk; i++) {
+ 		int j;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index bbdeda489768b..b51208f44c240 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2258,23 +2258,16 @@ struct dc_state *dc_copy_state(struct dc_state *src_ctx)
+ {
+ 	int i, j;
+ 	struct dc_state *new_ctx = kvmalloc(sizeof(struct dc_state), GFP_KERNEL);
+-#ifdef CONFIG_DRM_AMD_DC_FP
+-	struct dml2_context *dml2 =  NULL;
+-#endif
+ 
+ 	if (!new_ctx)
+ 		return NULL;
+ 	memcpy(new_ctx, src_ctx, sizeof(struct dc_state));
+ 
+ #ifdef CONFIG_DRM_AMD_DC_FP
+-	if (new_ctx->bw_ctx.dml2) {
+-		dml2 = kzalloc(sizeof(struct dml2_context), GFP_KERNEL);
+-		if (!dml2)
+-			return NULL;
+-
+-		memcpy(dml2, src_ctx->bw_ctx.dml2, sizeof(struct dml2_context));
+-		new_ctx->bw_ctx.dml2 = dml2;
+-	}
++	if (new_ctx->bw_ctx.dml2 && !dml2_create_copy(&new_ctx->bw_ctx.dml2, src_ctx->bw_ctx.dml2)) {
++		dc_release_state(new_ctx);
++		return NULL;
++ 	}
+ #endif
+ 
+ 	for (i = 0; i < MAX_PIPES; i++) {
+@@ -3340,6 +3333,9 @@ static bool dc_dmub_should_send_dirty_rect_cmd(struct dc *dc, struct dc_stream_s
+ 	if (stream->link->replay_settings.config.replay_supported)
+ 		return true;
+ 
++	if (stream->ctx->dce_version >= DCN_VERSION_3_5 && stream->abm_level)
++		return true;
++
+ 	return false;
+ }
+ 
+@@ -4929,22 +4925,16 @@ void dc_allow_idle_optimizations(struct dc *dc, bool allow)
+ 
+ bool dc_dmub_is_ips_idle_state(struct dc *dc)
+ {
+-	uint32_t idle_state = 0;
+-
+ 	if (dc->debug.disable_idle_power_optimizations)
+ 		return false;
+ 
+ 	if (!dc->caps.ips_support || (dc->config.disable_ips == DMUB_IPS_DISABLE_ALL))
+ 		return false;
+ 
+-	if (dc->hwss.get_idle_state)
+-		idle_state = dc->hwss.get_idle_state(dc);
+-
+-	if (!(idle_state & DMUB_IPS1_ALLOW_MASK) ||
+-		!(idle_state & DMUB_IPS2_ALLOW_MASK))
+-		return true;
++	if (!dc->ctx->dmub_srv)
++		return false;
+ 
+-	return false;
++	return dc->ctx->dmub_srv->idle_allowed;
+ }
+ 
+ /* set min and max memory clock to lowest and highest DPM level, respectively */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
+index d1500b2238580..2f9da981a8bea 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
+@@ -44,6 +44,36 @@
+ #define NUM_ELEMENTS(a) (sizeof(a) / sizeof((a)[0]))
+ 
+ 
++void mpc3_mpc_init(struct mpc *mpc)
++{
++	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
++	int opp_id;
++
++	mpc1_mpc_init(mpc);
++
++	for (opp_id = 0; opp_id < MAX_OPP; opp_id++) {
++		if (REG(MUX[opp_id]))
++			/* disable mpc out rate and flow control */
++			REG_UPDATE_2(MUX[opp_id], MPC_OUT_RATE_CONTROL_DISABLE,
++					1, MPC_OUT_FLOW_CONTROL_COUNT, 0);
++	}
++}
++
++void mpc3_mpc_init_single_inst(struct mpc *mpc, unsigned int mpcc_id)
++{
++	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
++
++	mpc1_mpc_init_single_inst(mpc, mpcc_id);
++
++	/* assuming mpc out mux is connected to opp with the same index at this
++	 * point in time (e.g. transitioning from vbios to driver)
++	 */
++	if (mpcc_id < MAX_OPP && REG(MUX[mpcc_id]))
++		/* disable mpc out rate and flow control */
++		REG_UPDATE_2(MUX[mpcc_id], MPC_OUT_RATE_CONTROL_DISABLE,
++				1, MPC_OUT_FLOW_CONTROL_COUNT, 0);
++}
++
+ bool mpc3_is_dwb_idle(
+ 	struct mpc *mpc,
+ 	int dwb_id)
+@@ -80,25 +110,6 @@ void mpc3_disable_dwb_mux(
+ 		MPC_DWB0_MUX, 0xf);
+ }
+ 
+-void mpc3_set_out_rate_control(
+-	struct mpc *mpc,
+-	int opp_id,
+-	bool enable,
+-	bool rate_2x_mode,
+-	struct mpc_dwb_flow_control *flow_control)
+-{
+-	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+-
+-	REG_UPDATE_2(MUX[opp_id],
+-			MPC_OUT_RATE_CONTROL_DISABLE, !enable,
+-			MPC_OUT_RATE_CONTROL, rate_2x_mode);
+-
+-	if (flow_control)
+-		REG_UPDATE_2(MUX[opp_id],
+-			MPC_OUT_FLOW_CONTROL_MODE, flow_control->flow_ctrl_mode,
+-			MPC_OUT_FLOW_CONTROL_COUNT, flow_control->flow_ctrl_cnt1);
+-}
+-
+ enum dc_lut_mode mpc3_get_ogam_current(struct mpc *mpc, int mpcc_id)
+ {
+ 	/*Contrary to DCN2 and DCN1 wherein a single status register field holds this info;
+@@ -1386,8 +1397,8 @@ static const struct mpc_funcs dcn30_mpc_funcs = {
+ 	.read_mpcc_state = mpc1_read_mpcc_state,
+ 	.insert_plane = mpc1_insert_plane,
+ 	.remove_mpcc = mpc1_remove_mpcc,
+-	.mpc_init = mpc1_mpc_init,
+-	.mpc_init_single_inst = mpc1_mpc_init_single_inst,
++	.mpc_init = mpc3_mpc_init,
++	.mpc_init_single_inst = mpc3_mpc_init_single_inst,
+ 	.update_blending = mpc2_update_blending,
+ 	.cursor_lock = mpc1_cursor_lock,
+ 	.get_mpcc_for_dpp = mpc1_get_mpcc_for_dpp,
+@@ -1404,7 +1415,6 @@ static const struct mpc_funcs dcn30_mpc_funcs = {
+ 	.set_dwb_mux = mpc3_set_dwb_mux,
+ 	.disable_dwb_mux = mpc3_disable_dwb_mux,
+ 	.is_dwb_idle = mpc3_is_dwb_idle,
+-	.set_out_rate_control = mpc3_set_out_rate_control,
+ 	.set_gamut_remap = mpc3_set_gamut_remap,
+ 	.program_shaper = mpc3_program_shaper,
+ 	.acquire_rmu = mpcc3_acquire_rmu,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h
+index 5198f2167c7c8..d3b904517a22d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h
+@@ -1007,6 +1007,13 @@ void dcn30_mpc_construct(struct dcn30_mpc *mpc30,
+ 	int num_mpcc,
+ 	int num_rmu);
+ 
++void mpc3_mpc_init(
++	struct mpc *mpc);
++
++void mpc3_mpc_init_single_inst(
++	struct mpc *mpc,
++	unsigned int mpcc_id);
++
+ bool mpc3_program_shaper(
+ 		struct mpc *mpc,
+ 		const struct pwl_params *params,
+@@ -1074,13 +1081,6 @@ bool mpc3_is_dwb_idle(
+ 	struct mpc *mpc,
+ 	int dwb_id);
+ 
+-void mpc3_set_out_rate_control(
+-	struct mpc *mpc,
+-	int opp_id,
+-	bool enable,
+-	bool rate_2x_mode,
+-	struct mpc_dwb_flow_control *flow_control);
+-
+ void mpc3_power_on_ogam_lut(
+ 	struct mpc *mpc, int mpcc_id,
+ 	bool power_on);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
+index 994b21ed272f1..3279b61022f10 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
+@@ -47,7 +47,7 @@ void mpc32_mpc_init(struct mpc *mpc)
+ 	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+ 	int mpcc_id;
+ 
+-	mpc1_mpc_init(mpc);
++	mpc3_mpc_init(mpc);
+ 
+ 	if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc) {
+ 		if (mpc30->mpc_mask->MPCC_MCM_SHAPER_MEM_LOW_PWR_MODE && mpc30->mpc_mask->MPCC_MCM_3DLUT_MEM_LOW_PWR_MODE) {
+@@ -990,7 +990,7 @@ static const struct mpc_funcs dcn32_mpc_funcs = {
+ 	.insert_plane = mpc1_insert_plane,
+ 	.remove_mpcc = mpc1_remove_mpcc,
+ 	.mpc_init = mpc32_mpc_init,
+-	.mpc_init_single_inst = mpc1_mpc_init_single_inst,
++	.mpc_init_single_inst = mpc3_mpc_init_single_inst,
+ 	.update_blending = mpc2_update_blending,
+ 	.cursor_lock = mpc1_cursor_lock,
+ 	.get_mpcc_for_dpp = mpc1_get_mpcc_for_dpp,
+@@ -1007,7 +1007,6 @@ static const struct mpc_funcs dcn32_mpc_funcs = {
+ 	.set_dwb_mux = mpc3_set_dwb_mux,
+ 	.disable_dwb_mux = mpc3_disable_dwb_mux,
+ 	.is_dwb_idle = mpc3_is_dwb_idle,
+-	.set_out_rate_control = mpc3_set_out_rate_control,
+ 	.set_gamut_remap = mpc3_set_gamut_remap,
+ 	.program_shaper = mpc32_program_shaper,
+ 	.program_3dlut = mpc32_program_3dlut,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+index e940dd0f92b73..f663de1cdcdca 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+@@ -1866,6 +1866,7 @@ static bool dml1_validate(struct dc *dc, struct dc_state *context, bool fast_val
+ 	dc->res_pool->funcs->calculate_wm_and_dlg(dc, context, pipes, pipe_cnt, vlevel);
+ 
+ 	dcn32_override_min_req_memclk(dc, context);
++	dcn32_override_min_req_dcfclk(dc, context);
+ 
+ 	BW_VAL_TRACE_END_WATERMARKS();
+ 
+@@ -1924,7 +1925,21 @@ int dcn32_populate_dml_pipes_from_context(
+ 		dcn32_zero_pipe_dcc_fraction(pipes, pipe_cnt);
+ 		DC_FP_END();
+ 		pipes[pipe_cnt].pipe.dest.vfront_porch = timing->v_front_porch;
+-		pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal;
++		if (dc->config.enable_windowed_mpo_odm &&
++				dc->debug.enable_single_display_2to1_odm_policy) {
++			switch (resource_get_odm_slice_count(pipe)) {
++			case 2:
++				pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_2to1;
++				break;
++			case 4:
++				pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_4to1;
++				break;
++			default:
++				pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal;
++			}
++		} else {
++			pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal;
++		}
+ 		pipes[pipe_cnt].pipe.src.gpuvm_min_page_size_kbytes = 256; // according to spreadsheet
+ 		pipes[pipe_cnt].pipe.src.unbounded_req_mode = false;
+ 		pipes[pipe_cnt].pipe.scale_ratio_depth.lb_depth = dm_lb_19;
+@@ -2011,6 +2026,8 @@ static void dcn32_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw
+ {
+ 	DC_FP_START();
+ 	dcn32_update_bw_bounding_box_fpu(dc, bw_params);
++	if (dc->debug.using_dml2 && dc->current_state && dc->current_state->bw_ctx.dml2)
++		dml2_reinit(dc, &dc->dml2_options, &dc->current_state->bw_ctx.dml2);
+ 	DC_FP_END();
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
+index b931008114c91..351c8a28438c3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
+@@ -41,6 +41,7 @@
+ #define SUBVP_HIGH_REFRESH_LIST_LEN 4
+ #define DCN3_2_MAX_SUBVP_PIXEL_RATE_MHZ 1800
+ #define DCN3_2_VMIN_DISPCLK_HZ 717000000
++#define MIN_SUBVP_DCFCLK_KHZ 400000
+ 
+ #define TO_DCN32_RES_POOL(pool)\
+ 	container_of(pool, struct dcn32_resource_pool, base)
+@@ -183,6 +184,10 @@ bool dcn32_subvp_drr_admissable(struct dc *dc, struct dc_state *context);
+ 
+ bool dcn32_subvp_vblank_admissable(struct dc *dc, struct dc_state *context, int vlevel);
+ 
++void dcn32_update_dml_pipes_odm_policy_based_on_context(struct dc *dc, struct dc_state *context, display_e2e_pipe_params_st *pipes);
++
++void dcn32_override_min_req_dcfclk(struct dc *dc, struct dc_state *context);
++
+ /* definitions for run time init of reg offsets */
+ 
+ /* CLK SRC */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
+index bc5f0db23d0c3..1f89428499f76 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource_helpers.c
+@@ -778,3 +778,35 @@ bool dcn32_subvp_vblank_admissable(struct dc *dc, struct dc_state *context, int
+ 
+ 	return result;
+ }
++
++void dcn32_update_dml_pipes_odm_policy_based_on_context(struct dc *dc, struct dc_state *context,
++		display_e2e_pipe_params_st *pipes)
++{
++	int i, pipe_cnt;
++	struct resource_context *res_ctx = &context->res_ctx;
++	struct pipe_ctx *pipe = NULL;
++
++	for (i = 0, pipe_cnt = 0; i < dc->res_pool->pipe_count; i++) {
++		int odm_slice_count = 0;
++
++		if (!res_ctx->pipe_ctx[i].stream)
++			continue;
++		pipe = &res_ctx->pipe_ctx[i];
++		odm_slice_count = resource_get_odm_slice_count(pipe);
++
++		if (odm_slice_count == 1)
++			pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal;
++		else if (odm_slice_count == 2)
++			pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_2to1;
++		else if (odm_slice_count == 4)
++			pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_4to1;
++
++		pipe_cnt++;
++	}
++}
++
++void dcn32_override_min_req_dcfclk(struct dc *dc, struct dc_state *context)
++{
++	if (dcn32_subvp_in_use(dc, context) && context->bw_ctx.bw.dcn.clk.dcfclk_khz <= MIN_SUBVP_DCFCLK_KHZ)
++		context->bw_ctx.bw.dcn.clk.dcfclk_khz = MIN_SUBVP_DCFCLK_KHZ;
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+index 4156a8cc2bc7e..3b7505b5f0a41 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+@@ -1579,6 +1579,8 @@ static void dcn321_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *b
+ {
+ 	DC_FP_START();
+ 	dcn321_update_bw_bounding_box_fpu(dc, bw_params);
++	if (dc->debug.using_dml2 && dc->current_state && dc->current_state->bw_ctx.dml2)
++		dml2_reinit(dc, &dc->dml2_options, &dc->current_state->bw_ctx.dml2);
+ 	DC_FP_END();
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index fe2b67d745f0d..b315ca6f1cee8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -1265,7 +1265,7 @@ static bool update_pipes_with_split_flags(struct dc *dc, struct dc_state *contex
+ 	return updated;
+ }
+ 
+-static bool should_allow_odm_power_optimization(struct dc *dc,
++static bool should_apply_odm_power_optimization(struct dc *dc,
+ 		struct dc_state *context, struct vba_vars_st *v, int *split,
+ 		bool *merge)
+ {
+@@ -1369,9 +1369,12 @@ static void try_odm_power_optimization_and_revalidate(
+ {
+ 	int i;
+ 	unsigned int new_vlevel;
++	unsigned int cur_policy[MAX_PIPES];
+ 
+-	for (i = 0; i < pipe_cnt; i++)
++	for (i = 0; i < pipe_cnt; i++) {
++		cur_policy[i] = pipes[i].pipe.dest.odm_combine_policy;
+ 		pipes[i].pipe.dest.odm_combine_policy = dm_odm_combine_policy_2to1;
++	}
+ 
+ 	new_vlevel = dml_get_voltage_level(&context->bw_ctx.dml, pipes, pipe_cnt);
+ 
+@@ -1380,6 +1383,9 @@ static void try_odm_power_optimization_and_revalidate(
+ 		memset(merge, 0, MAX_PIPES * sizeof(bool));
+ 		*vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, new_vlevel, split, merge);
+ 		context->bw_ctx.dml.vba.VoltageLevel = *vlevel;
++	} else {
++		for (i = 0; i < pipe_cnt; i++)
++			pipes[i].pipe.dest.odm_combine_policy = cur_policy[i];
+ 	}
+ }
+ 
+@@ -1550,7 +1556,7 @@ static void dcn32_full_validate_bw_helper(struct dc *dc,
+ 		}
+ 	}
+ 
+-	if (should_allow_odm_power_optimization(dc, context, vba, split, merge))
++	if (should_apply_odm_power_optimization(dc, context, vba, split, merge))
+ 		try_odm_power_optimization_and_revalidate(
+ 				dc, context, pipes, split, merge, vlevel, *pipe_cnt);
+ 
+@@ -2178,6 +2184,8 @@ bool dcn32_internal_validate_bw(struct dc *dc,
+ 		int i;
+ 
+ 		pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);
++		if (!dc->config.enable_windowed_mpo_odm)
++			dcn32_update_dml_pipes_odm_policy_based_on_context(dc, context, pipes);
+ 
+ 		/* repopulate_pipes = 1 means the pipes were either split or merged. In this case
+ 		 * we have to re-calculate the DET allocation and run through DML once more to
+@@ -2186,7 +2194,9 @@ bool dcn32_internal_validate_bw(struct dc *dc,
+ 		 * */
+ 		context->bw_ctx.dml.soc.allow_for_pstate_or_stutter_in_vblank_final =
+ 					dm_prefetch_support_uclk_fclk_and_stutter_if_possible;
++
+ 		vlevel = dml_get_voltage_level(&context->bw_ctx.dml, pipes, pipe_cnt);
++
+ 		if (vlevel == context->bw_ctx.dml.soc.num_states) {
+ 			/* failed after DET size changes */
+ 			goto validate_fail;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index 16452dae4acac..b6744ad778fe3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -228,17 +228,13 @@ void dml2_init_socbb_params(struct dml2_context *dml2, const struct dc *in_dc, s
+ 		break;
+ 
+ 	case dml_project_dcn35:
++	case dml_project_dcn351:
+ 		out->num_chans = 4;
+ 		out->round_trip_ping_latency_dcfclk_cycles = 106;
+ 		out->smn_latency_us = 2;
+ 		out->dispclk_dppclk_vco_speed_mhz = 3600;
+ 		break;
+ 
+-	case dml_project_dcn351:
+-		out->num_chans = 16;
+-		out->round_trip_ping_latency_dcfclk_cycles = 1100;
+-		out->smn_latency_us = 2;
+-		break;
+ 	}
+ 	/* ---Overrides if available--- */
+ 	if (dml2->config.bbox_overrides.dram_num_chan)
+@@ -824,13 +820,25 @@ static struct scaler_data get_scaler_data_for_plane(const struct dc_plane_state
+ 
+ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location, const struct dc_stream_state *in)
+ {
++	dml_uint_t width, height;
++
++	if (in->timing.h_addressable > 3840)
++		width = 3840;
++	else
++		width = in->timing.h_addressable;	// 4K max
++
++	if (in->timing.v_addressable > 2160)
++		height = 2160;
++	else
++		height = in->timing.v_addressable;	// 4K max
++
+ 	out->CursorBPP[location] = dml_cur_32bit;
+ 	out->CursorWidth[location] = 256;
+ 
+ 	out->GPUVMMinPageSizeKBytes[location] = 256;
+ 
+-	out->ViewportWidth[location] = in->timing.h_addressable;
+-	out->ViewportHeight[location] = in->timing.v_addressable;
++	out->ViewportWidth[location] = width;
++	out->ViewportHeight[location] = height;
+ 	out->ViewportStationary[location] = false;
+ 	out->ViewportWidthChroma[location] = 0;
+ 	out->ViewportHeightChroma[location] = 0;
+@@ -849,7 +857,7 @@ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned
+ 	out->HTapsChroma[location] = 0;
+ 	out->VTapsChroma[location] = 0;
+ 	out->SourceScan[location] = dml_rotation_0;
+-	out->ScalerRecoutWidth[location] = in->timing.h_addressable;
++	out->ScalerRecoutWidth[location] = width;
+ 
+ 	out->LBBitPerPixel[location] = 57;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+index c62b61ac45d27..269bfb14c2399 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+@@ -696,13 +696,13 @@ bool dml2_validate(const struct dc *in_dc, struct dc_state *context, bool fast_v
+ 	return out;
+ }
+ 
+-bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2)
++static inline struct dml2_context *dml2_allocate_memory(void)
+ {
+-	// Allocate Mode Lib Ctx
+-	*dml2 = (struct dml2_context *) kzalloc(sizeof(struct dml2_context), GFP_KERNEL);
++	return (struct dml2_context *) kzalloc(sizeof(struct dml2_context), GFP_KERNEL);
++}
+ 
+-	if (!(*dml2))
+-		return false;
++static void dml2_init(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2)
++{
+ 
+ 	// Store config options
+ 	(*dml2)->config = *config;
+@@ -730,9 +730,18 @@ bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options
+ 	initialize_dml2_soc_bbox(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc);
+ 
+ 	initialize_dml2_soc_states(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc, &(*dml2)->v20.dml_core_ctx.states);
++}
++
++bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2)
++{
++	// Allocate Mode Lib Ctx
++	*dml2 = dml2_allocate_memory();
++
++	if (!(*dml2))
++		return false;
++
++	dml2_init(in_dc, config, dml2);
+ 
+-	/*Initialize DML20 instance which calls dml2_core_create, and core_dcn3_populate_informative*/
+-	//dml2_initialize_instance(&(*dml_ctx)->v20.dml_init);
+ 	return true;
+ }
+ 
+@@ -750,3 +759,33 @@ void dml2_extract_dram_and_fclk_change_support(struct dml2_context *dml2,
+ 	*fclk_change_support = (unsigned int) dml2->v20.dml_core_ctx.ms.support.FCLKChangeSupport[0];
+ 	*dram_clk_change_support = (unsigned int) dml2->v20.dml_core_ctx.ms.support.DRAMClockChangeSupport[0];
+ }
++
++void dml2_copy(struct dml2_context *dst_dml2,
++	struct dml2_context *src_dml2)
++{
++	/* copy Mode Lib Ctx */
++	memcpy(dst_dml2, src_dml2, sizeof(struct dml2_context));
++}
++
++bool dml2_create_copy(struct dml2_context **dst_dml2,
++	struct dml2_context *src_dml2)
++{
++	/* Allocate Mode Lib Ctx */
++	*dst_dml2 = dml2_allocate_memory();
++
++	if (!(*dst_dml2))
++		return false;
++
++	/* copy Mode Lib Ctx */
++	dml2_copy(*dst_dml2, src_dml2);
++
++	return true;
++}
++
++void dml2_reinit(const struct dc *in_dc,
++				 const struct dml2_configuration_options *config,
++				 struct dml2_context **dml2)
++{
++
++	dml2_init(in_dc, config, dml2);
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
+index fe15baa4bf094..548504d7de1e9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
+@@ -191,6 +191,13 @@ bool dml2_create(const struct dc *in_dc,
+ 				 struct dml2_context **dml2);
+ 
+ void dml2_destroy(struct dml2_context *dml2);
++void dml2_copy(struct dml2_context *dst_dml2,
++	struct dml2_context *src_dml2);
++bool dml2_create_copy(struct dml2_context **dst_dml2,
++	struct dml2_context *src_dml2);
++void dml2_reinit(const struct dc *in_dc,
++				 const struct dml2_configuration_options *config,
++				 struct dml2_context **dml2);
+ 
+ /*
+  * dml2_validate - Determines if a display configuration is supported or not.
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index c1f1665e553d6..3642c069bd1bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -1184,7 +1184,8 @@ void dce110_disable_stream(struct pipe_ctx *pipe_ctx)
+ 		if (dccg) {
+ 			dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
+ 			dccg->funcs->set_dpstreamclk(dccg, REFCLK, tg->inst, dp_hpo_inst);
+-			dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
++			if (dccg && dccg->funcs->set_dtbclk_dto)
++				dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
+ 		}
+ 	} else if (dccg && dccg->funcs->disable_symclk_se) {
+ 		dccg->funcs->disable_symclk_se(dccg, stream_enc->stream_enc_inst,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index c966f38583cb9..e3f547e0613c9 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1385,6 +1385,11 @@ static void dcn20_detect_pipe_changes(struct pipe_ctx *old_pipe, struct pipe_ctx
+ 		return;
+ 	}
+ 
++	if (resource_is_pipe_type(new_pipe, OTG_MASTER) &&
++			resource_is_odm_topology_changed(new_pipe, old_pipe))
++		/* Detect odm changes */
++		new_pipe->update_flags.bits.odm = 1;
++
+ 	/* Exit on unchanged, unused pipe */
+ 	if (!old_pipe->plane_state && !new_pipe->plane_state)
+ 		return;
+@@ -1434,10 +1439,6 @@ static void dcn20_detect_pipe_changes(struct pipe_ctx *old_pipe, struct pipe_ctx
+ 
+ 	/* Detect top pipe only changes */
+ 	if (resource_is_pipe_type(new_pipe, OTG_MASTER)) {
+-		/* Detect odm changes */
+-		if (resource_is_odm_topology_changed(new_pipe, old_pipe))
+-			new_pipe->update_flags.bits.odm = 1;
+-
+ 		/* Detect global sync changes */
+ 		if (old_pipe->pipe_dlg_param.vready_offset != new_pipe->pipe_dlg_param.vready_offset
+ 				|| old_pipe->pipe_dlg_param.vstartup_start != new_pipe->pipe_dlg_param.vstartup_start
+@@ -1879,19 +1880,20 @@ void dcn20_program_front_end_for_ctx(
+ 	DC_LOGGER_INIT(dc->ctx->logger);
+ 	unsigned int prev_hubp_count = 0;
+ 	unsigned int hubp_count = 0;
++	struct pipe_ctx *pipe;
+ 
+ 	if (resource_is_pipe_topology_changed(dc->current_state, context))
+ 		resource_log_pipe_topology_update(dc, context);
+ 
+ 	if (dc->hwss.program_triplebuffer != NULL && dc->debug.enable_tri_buf) {
+ 		for (i = 0; i < dc->res_pool->pipe_count; i++) {
+-			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
++			pipe = &context->res_ctx.pipe_ctx[i];
+ 
+-			if (!pipe_ctx->top_pipe && !pipe_ctx->prev_odm_pipe && pipe_ctx->plane_state) {
+-				ASSERT(!pipe_ctx->plane_state->triplebuffer_flips);
++			if (!pipe->top_pipe && !pipe->prev_odm_pipe && pipe->plane_state) {
++				ASSERT(!pipe->plane_state->triplebuffer_flips);
+ 				/*turn off triple buffer for full update*/
+ 				dc->hwss.program_triplebuffer(
+-						dc, pipe_ctx, pipe_ctx->plane_state->triplebuffer_flips);
++						dc, pipe, pipe->plane_state->triplebuffer_flips);
+ 			}
+ 		}
+ 	}
+@@ -1965,12 +1967,22 @@ void dcn20_program_front_end_for_ctx(
+ 			DC_LOG_DC("Reset mpcc for pipe %d\n", dc->current_state->res_ctx.pipe_ctx[i].pipe_idx);
+ 		}
+ 
++	/* update ODM for blanked OTG master pipes */
++	for (i = 0; i < dc->res_pool->pipe_count; i++) {
++		pipe = &context->res_ctx.pipe_ctx[i];
++		if (resource_is_pipe_type(pipe, OTG_MASTER) &&
++				!resource_is_pipe_type(pipe, DPP_PIPE) &&
++				pipe->update_flags.bits.odm &&
++				hws->funcs.update_odm)
++			hws->funcs.update_odm(dc, context, pipe);
++	}
++
+ 	/*
+ 	 * Program all updated pipes, order matters for mpcc setup. Start with
+ 	 * top pipe and program all pipes that follow in order
+ 	 */
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+-		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
++		pipe = &context->res_ctx.pipe_ctx[i];
+ 
+ 		if (pipe->plane_state && !pipe->top_pipe) {
+ 			while (pipe) {
+@@ -2009,17 +2021,6 @@ void dcn20_program_front_end_for_ctx(
+ 			context->stream_status[0].plane_count > 1) {
+ 			pipe->plane_res.hubp->funcs->hubp_wait_pipe_read_start(pipe->plane_res.hubp);
+ 		}
+-
+-		/* when dynamic ODM is active, pipes must be reconfigured when all planes are
+-		 * disabled, as some transitions will leave software and hardware state
+-		 * mismatched.
+-		 */
+-		if (dc->debug.enable_single_display_2to1_odm_policy &&
+-			pipe->stream &&
+-			pipe->update_flags.bits.disable &&
+-			!pipe->prev_odm_pipe &&
+-			hws->funcs.update_odm)
+-			hws->funcs.update_odm(dc, context, pipe);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+index 772dc0db916f7..c89149d153026 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+@@ -656,10 +656,20 @@ void dcn30_set_avmute(struct pipe_ctx *pipe_ctx, bool enable)
+ 	if (pipe_ctx == NULL)
+ 		return;
+ 
+-	if (dc_is_hdmi_signal(pipe_ctx->stream->signal) && pipe_ctx->stream_res.stream_enc != NULL)
++	if (dc_is_hdmi_signal(pipe_ctx->stream->signal) && pipe_ctx->stream_res.stream_enc != NULL) {
+ 		pipe_ctx->stream_res.stream_enc->funcs->set_avmute(
+ 				pipe_ctx->stream_res.stream_enc,
+ 				enable);
++
++		/* Wait for two frame to make sure AV mute is sent out */
++		if (enable) {
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK);
++			pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
++		}
++	}
+ }
+ 
+ void dcn30_update_info_frame(struct pipe_ctx *pipe_ctx)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+index 3a9cc8ac0c079..093f4387553ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+@@ -69,29 +69,6 @@
+ #define FN(reg_name, field_name) \
+ 	hws->shifts->field_name, hws->masks->field_name
+ 
+-static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream,
+-		int opp_cnt)
+-{
+-	bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing);
+-	int flow_ctrl_cnt;
+-
+-	if (opp_cnt >= 2)
+-		hblank_halved = true;
+-
+-	flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable -
+-			stream->timing.h_border_left -
+-			stream->timing.h_border_right;
+-
+-	if (hblank_halved)
+-		flow_ctrl_cnt /= 2;
+-
+-	/* ODM combine 4:1 case */
+-	if (opp_cnt == 4)
+-		flow_ctrl_cnt /= 2;
+-
+-	return flow_ctrl_cnt;
+-}
+-
+ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ {
+ 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
+@@ -183,10 +160,6 @@ void dcn314_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx
+ 	struct pipe_ctx *odm_pipe;
+ 	int opp_cnt = 0;
+ 	int opp_inst[MAX_PIPES] = {0};
+-	bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing));
+-	struct mpc_dwb_flow_control flow_control;
+-	struct mpc *mpc = dc->res_pool->mpc;
+-	int i;
+ 
+ 	opp_cnt = get_odm_config(pipe_ctx, opp_inst);
+ 
+@@ -199,20 +172,6 @@ void dcn314_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx
+ 		pipe_ctx->stream_res.tg->funcs->set_odm_bypass(
+ 				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
+ 
+-	rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1;
+-	flow_control.flow_ctrl_mode = 0;
+-	flow_control.flow_ctrl_cnt0 = 0x80;
+-	flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt);
+-	if (mpc->funcs->set_out_rate_control) {
+-		for (i = 0; i < opp_cnt; ++i) {
+-			mpc->funcs->set_out_rate_control(
+-					mpc, opp_inst[i],
+-					true,
+-					rate_control_2x_pclk,
+-					&flow_control);
+-		}
+-	}
+-
+ 	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+ 		odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control(
+ 				odm_pipe->stream_res.opp,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+index cb9d8389329ff..580afb0088d7c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+@@ -969,29 +969,6 @@ void dcn32_init_hw(struct dc *dc)
+ 	}
+ }
+ 
+-static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream,
+-		int opp_cnt)
+-{
+-	bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing);
+-	int flow_ctrl_cnt;
+-
+-	if (opp_cnt >= 2)
+-		hblank_halved = true;
+-
+-	flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable -
+-			stream->timing.h_border_left -
+-			stream->timing.h_border_right;
+-
+-	if (hblank_halved)
+-		flow_ctrl_cnt /= 2;
+-
+-	/* ODM combine 4:1 case */
+-	if (opp_cnt == 4)
+-		flow_ctrl_cnt /= 2;
+-
+-	return flow_ctrl_cnt;
+-}
+-
+ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ {
+ 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
+@@ -1106,10 +1083,6 @@ void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
+ 	struct pipe_ctx *odm_pipe;
+ 	int opp_cnt = 0;
+ 	int opp_inst[MAX_PIPES] = {0};
+-	bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing));
+-	struct mpc_dwb_flow_control flow_control;
+-	struct mpc *mpc = dc->res_pool->mpc;
+-	int i;
+ 
+ 	opp_cnt = get_odm_config(pipe_ctx, opp_inst);
+ 
+@@ -1122,20 +1095,6 @@ void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
+ 		pipe_ctx->stream_res.tg->funcs->set_odm_bypass(
+ 				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
+ 
+-	rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1;
+-	flow_control.flow_ctrl_mode = 0;
+-	flow_control.flow_ctrl_cnt0 = 0x80;
+-	flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt);
+-	if (mpc->funcs->set_out_rate_control) {
+-		for (i = 0; i < opp_cnt; ++i) {
+-			mpc->funcs->set_out_rate_control(
+-					mpc, opp_inst[i],
+-					true,
+-					rate_control_2x_pclk,
+-					&flow_control);
+-		}
+-	}
+-
+ 	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+ 		odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control(
+ 				odm_pipe->stream_res.opp,
+@@ -1159,6 +1118,13 @@ void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
+ 			dsc->funcs->dsc_disconnect(dsc);
+ 		}
+ 	}
++
++	if (!resource_is_pipe_type(pipe_ctx, DPP_PIPE))
++		/*
++		 * blank pattern is generated by OPP, reprogram blank pattern
++		 * due to OPP count change
++		 */
++		dc->hwseq->funcs.blank_pixel_data(dc, pipe_ctx, true);
+ }
+ 
+ unsigned int dcn32_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsigned int *k1_div, unsigned int *k2_div)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index cf26d2ad40083..325a711a14e37 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -338,29 +338,6 @@ void dcn35_init_hw(struct dc *dc)
+ 	}
+ }
+ 
+-static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream,
+-		int opp_cnt)
+-{
+-	bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing);
+-	int flow_ctrl_cnt;
+-
+-	if (opp_cnt >= 2)
+-		hblank_halved = true;
+-
+-	flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable -
+-			stream->timing.h_border_left -
+-			stream->timing.h_border_right;
+-
+-	if (hblank_halved)
+-		flow_ctrl_cnt /= 2;
+-
+-	/* ODM combine 4:1 case */
+-	if (opp_cnt == 4)
+-		flow_ctrl_cnt /= 2;
+-
+-	return flow_ctrl_cnt;
+-}
+-
+ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+ {
+ 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
+@@ -454,10 +431,6 @@ void dcn35_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
+ 	struct pipe_ctx *odm_pipe;
+ 	int opp_cnt = 0;
+ 	int opp_inst[MAX_PIPES] = {0};
+-	bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing));
+-	struct mpc_dwb_flow_control flow_control;
+-	struct mpc *mpc = dc->res_pool->mpc;
+-	int i;
+ 
+ 	opp_cnt = get_odm_config(pipe_ctx, opp_inst);
+ 
+@@ -470,20 +443,6 @@ void dcn35_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
+ 		pipe_ctx->stream_res.tg->funcs->set_odm_bypass(
+ 				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
+ 
+-	rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1;
+-	flow_control.flow_ctrl_mode = 0;
+-	flow_control.flow_ctrl_cnt0 = 0x80;
+-	flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt);
+-	if (mpc->funcs->set_out_rate_control) {
+-		for (i = 0; i < opp_cnt; ++i) {
+-			mpc->funcs->set_out_rate_control(
+-					mpc, opp_inst[i],
+-					true,
+-					rate_control_2x_pclk,
+-					&flow_control);
+-		}
+-	}
+-
+ 	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+ 		odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control(
+ 				odm_pipe->stream_res.opp,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+index 06ca8bfb91e7d..3d7244393807a 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+@@ -427,22 +427,18 @@ struct pipe_ctx *resource_get_primary_dpp_pipe(const struct pipe_ctx *dpp_pipe);
+ int resource_get_mpc_slice_index(const struct pipe_ctx *dpp_pipe);
+ 
+ /*
+- * Get number of MPC "cuts" of the plane associated with the pipe. MPC slice
+- * count is equal to MPC splits + 1. For example if a plane is cut 3 times, it
+- * will have 4 pieces of slice.
+- * return - 0 if pipe is not used for a plane with MPCC combine. otherwise
+- * the number of MPC "cuts" for the plane.
++ * Get the number of MPC slices associated with the pipe.
++ * The function returns 0 if the pipe is not associated with an MPC combine
++ * pipe topology.
+  */
+-int resource_get_mpc_slice_count(const struct pipe_ctx *opp_head);
++int resource_get_mpc_slice_count(const struct pipe_ctx *pipe);
+ 
+ /*
+- * Get number of ODM "cuts" of the timing associated with the pipe. ODM slice
+- * count is equal to ODM splits + 1. For example if a timing is cut 3 times, it
+- * will have 4 pieces of slice.
+- * return - 0 if pipe is not used for ODM combine. otherwise
+- * the number of ODM "cuts" for the timing.
++ * Get the number of ODM slices associated with the pipe.
++ * The function returns 0 if the pipe is not associated with an ODM combine
++ * pipe topology.
+  */
+-int resource_get_odm_slice_count(const struct pipe_ctx *otg_master);
++int resource_get_odm_slice_count(const struct pipe_ctx *pipe);
+ 
+ /* Get the ODM slice index counting from 0 from left most slice */
+ int resource_get_odm_slice_index(const struct pipe_ctx *opp_head);
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index ee67a35c2a8ed..ff930a71e496a 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -513,6 +513,9 @@ enum mod_hdcp_status mod_hdcp_hdcp2_create_session(struct mod_hdcp *hdcp)
+ 	hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.context.mem_context.shared_buf;
+ 	memset(hdcp_cmd, 0, sizeof(struct ta_hdcp_shared_memory));
+ 
++	if (!display)
++		return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND;
++
+ 	hdcp_cmd->in_msg.hdcp2_create_session_v2.display_handle = display->index;
+ 
+ 	if (hdcp->connection.link.adjust.hdcp2.force_type == MOD_HDCP_FORCE_TYPE_0)
+diff --git a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+index 738ee763f24a5..84f9b412a4f11 100644
+--- a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
++++ b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+@@ -147,15 +147,12 @@ void mod_build_vsc_infopacket(const struct dc_stream_state *stream,
+ 	}
+ 
+ 	/* VSC packet set to 4 for PSR-SU, or 2 for PSR1 */
+-	if (stream->link->psr_settings.psr_feature_enabled) {
+-		if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
+-			vsc_packet_revision = vsc_packet_rev4;
+-		else if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_1)
+-			vsc_packet_revision = vsc_packet_rev2;
+-	}
+-
+-	if (stream->link->replay_settings.config.replay_supported)
++	if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
++		vsc_packet_revision = vsc_packet_rev4;
++	else if (stream->link->replay_settings.config.replay_supported)
+ 		vsc_packet_revision = vsc_packet_rev4;
++	else if (stream->link->psr_settings.psr_version == DC_PSR_VERSION_1)
++		vsc_packet_revision = vsc_packet_rev2;
+ 
+ 	/* Update to revision 5 for extended colorimetry support */
+ 	if (stream->use_vsc_sdp_for_colorimetry)
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 20c53eefd680f..fee86be55c4ed 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2518,6 +2518,7 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
+ {
+ 	struct amdgpu_device *adev = dev_get_drvdata(dev);
+ 	int err, ret;
++	u32 pwm_mode;
+ 	int value;
+ 
+ 	if (amdgpu_in_reset(adev))
+@@ -2529,13 +2530,22 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
++	if (value == 0)
++		pwm_mode = AMD_FAN_CTRL_NONE;
++	else if (value == 1)
++		pwm_mode = AMD_FAN_CTRL_MANUAL;
++	else if (value == 2)
++		pwm_mode = AMD_FAN_CTRL_AUTO;
++	else
++		return -EINVAL;
++
+ 	ret = pm_runtime_get_sync(adev_to_drm(adev)->dev);
+ 	if (ret < 0) {
+ 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
+ 		return ret;
+ 	}
+ 
+-	ret = amdgpu_dpm_set_fan_control_mode(adev, value);
++	ret = amdgpu_dpm_set_fan_control_mode(adev, pwm_mode);
+ 
+ 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
+ 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index 294a890beceb4..ac04ed1e49f35 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1285,8 +1285,9 @@ static int arcturus_get_power_limit(struct smu_context *smu,
+ {
+ 	struct smu_11_0_powerplay_table *powerplay_table =
+ 		(struct smu_11_0_powerplay_table *)smu->smu_table.power_play_table;
++	struct smu_11_0_overdrive_table *od_settings = smu->od_settings;
+ 	PPTable_t *pptable = smu->smu_table.driver_pptable;
+-	uint32_t power_limit, od_percent_upper, od_percent_lower;
++	uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0;
+ 
+ 	if (smu_v11_0_get_current_power_limit(smu, &power_limit)) {
+ 		/* the last hope to figure out the ppt limit */
+@@ -1303,12 +1304,16 @@ static int arcturus_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled)
+-		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+-	else
+-		od_percent_upper = 0;
+-
+-	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++	if (powerplay_table) {
++		if (smu->od_enabled &&
++				od_settings->cap[SMU_11_0_ODCAP_POWER_LIMIT]) {
++			od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++		} else if (od_settings->cap[SMU_11_0_ODCAP_POWER_LIMIT]) {
++			od_percent_upper = 0;
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++		}
++	}
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 							od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index aefe72c8abd2d..082101845968c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2339,7 +2339,7 @@ static int navi10_get_power_limit(struct smu_context *smu,
+ 		(struct smu_11_0_powerplay_table *)smu->smu_table.power_play_table;
+ 	struct smu_11_0_overdrive_table *od_settings = smu->od_settings;
+ 	PPTable_t *pptable = smu->smu_table.driver_pptable;
+-	uint32_t power_limit, od_percent_upper, od_percent_lower;
++	uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0;
+ 
+ 	if (smu_v11_0_get_current_power_limit(smu, &power_limit)) {
+ 		/* the last hope to figure out the ppt limit */
+@@ -2356,13 +2356,16 @@ static int navi10_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled &&
+-		    navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_POWER_LIMIT))
+-		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
+-	else
+-		od_percent_upper = 0;
+-
+-	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++	if (powerplay_table) {
++		if (smu->od_enabled &&
++			    navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_POWER_LIMIT)) {
++			od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++		} else if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_POWER_LIMIT)) {
++			od_percent_upper = 0;
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]);
++		}
++	}
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index fa953d4445487..6f37ca7a06184 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -617,6 +617,12 @@ static uint32_t sienna_cichlid_get_throttler_status_locked(struct smu_context *s
+ 	return throttler_status;
+ }
+ 
++static bool sienna_cichlid_is_od_feature_supported(struct smu_11_0_7_overdrive_table *od_table,
++						   enum SMU_11_0_7_ODFEATURE_CAP cap)
++{
++	return od_table->cap[cap];
++}
++
+ static int sienna_cichlid_get_power_limit(struct smu_context *smu,
+ 					  uint32_t *current_power_limit,
+ 					  uint32_t *default_power_limit,
+@@ -625,7 +631,8 @@ static int sienna_cichlid_get_power_limit(struct smu_context *smu,
+ {
+ 	struct smu_11_0_7_powerplay_table *powerplay_table =
+ 		(struct smu_11_0_7_powerplay_table *)smu->smu_table.power_play_table;
+-	uint32_t power_limit, od_percent_upper, od_percent_lower;
++	struct smu_11_0_7_overdrive_table *od_settings = smu->od_settings;
++	uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0;
+ 	uint16_t *table_member;
+ 
+ 	GET_PPTABLE_MEMBER(SocketPowerLimitAc, &table_member);
+@@ -640,12 +647,16 @@ static int sienna_cichlid_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled)
+-		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
+-	else
+-		od_percent_upper = 0;
+-
+-	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
++	if (powerplay_table) {
++		if (smu->od_enabled &&
++				sienna_cichlid_is_od_feature_supported(od_settings, SMU_11_0_7_ODCAP_POWER_LIMIT)) {
++			od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
++		} else if (sienna_cichlid_is_od_feature_supported(od_settings, SMU_11_0_7_ODCAP_POWER_LIMIT)) {
++			od_percent_upper = 0;
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]);
++		}
++	}
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+@@ -1250,12 +1261,6 @@ static bool sienna_cichlid_is_support_fine_grained_dpm(struct smu_context *smu,
+ 	return dpm_desc->SnapToDiscrete == 0;
+ }
+ 
+-static bool sienna_cichlid_is_od_feature_supported(struct smu_11_0_7_overdrive_table *od_table,
+-						   enum SMU_11_0_7_ODFEATURE_CAP cap)
+-{
+-	return od_table->cap[cap];
+-}
+-
+ static void sienna_cichlid_get_od_setting_range(struct smu_11_0_7_overdrive_table *od_table,
+ 						enum SMU_11_0_7_ODSETTING_ID setting,
+ 						uint32_t *min, uint32_t *max)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index ac3c9e0966ed6..9ac6c408d2b68 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2351,7 +2351,7 @@ static int smu_v13_0_0_get_power_limit(struct smu_context *smu,
+ 		(struct smu_13_0_0_powerplay_table *)table_context->power_play_table;
+ 	PPTable_t *pptable = table_context->driver_pptable;
+ 	SkuTable_t *skutable = &pptable->SkuTable;
+-	uint32_t power_limit, od_percent_upper, od_percent_lower;
++	uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0;
+ 	uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
+ 
+ 	if (smu_v13_0_get_current_power_limit(smu, &power_limit))
+@@ -2364,12 +2364,16 @@ static int smu_v13_0_0_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled)
+-		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
+-	else
+-		od_percent_upper = 0;
+-
+-	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
++	if (powerplay_table) {
++		if (smu->od_enabled &&
++				smu_v13_0_0_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
++			od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
++		} else if (smu_v13_0_0_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
++			od_percent_upper = 0;
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]);
++		}
++	}
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index b296c5f9d98d0..402e0d184e147 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2315,7 +2315,7 @@ static int smu_v13_0_7_get_power_limit(struct smu_context *smu,
+ 		(struct smu_13_0_7_powerplay_table *)table_context->power_play_table;
+ 	PPTable_t *pptable = table_context->driver_pptable;
+ 	SkuTable_t *skutable = &pptable->SkuTable;
+-	uint32_t power_limit, od_percent_upper, od_percent_lower;
++	uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0;
+ 	uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
+ 
+ 	if (smu_v13_0_get_current_power_limit(smu, &power_limit))
+@@ -2328,12 +2328,16 @@ static int smu_v13_0_7_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (smu->od_enabled)
+-		od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
+-	else
+-		od_percent_upper = 0;
+-
+-	od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
++	if (powerplay_table) {
++		if (smu->od_enabled &&
++				smu_v13_0_7_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
++			od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
++		} else if (smu_v13_0_7_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
++			od_percent_upper = 0;
++			od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]);
++		}
++	}
+ 
+ 	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
+ 					od_percent_upper, od_percent_lower, power_limit);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+index d8f8ad0e71375..01f2ab4567246 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+@@ -229,8 +229,6 @@ int smu_v14_0_check_fw_version(struct smu_context *smu)
+ 		smu->smc_driver_if_version = SMU14_DRIVER_IF_VERSION_SMU_V14_0_2;
+ 		break;
+ 	case IP_VERSION(14, 0, 0):
+-		if ((smu->smc_fw_version < 0x5d3a00))
+-			dev_warn(smu->adev->dev, "The PMFW version(%x) is behind in this BIOS!\n", smu->smc_fw_version);
+ 		smu->smc_driver_if_version = SMU14_DRIVER_IF_VERSION_SMU_V14_0_0;
+ 		break;
+ 	default:
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c
+index 94ccdbfd70909..24a43374a753b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_0_ppt.c
+@@ -261,7 +261,10 @@ static int smu_v14_0_0_get_smu_metrics_data(struct smu_context *smu,
+ 		*value = metrics->MpipuclkFrequency;
+ 		break;
+ 	case METRICS_AVERAGE_GFXACTIVITY:
+-		*value = metrics->GfxActivity / 100;
++		if ((smu->smc_fw_version > 0x5d4600))
++			*value = metrics->GfxActivity;
++		else
++			*value = metrics->GfxActivity / 100;
+ 		break;
+ 	case METRICS_AVERAGE_VCNACTIVITY:
+ 		*value = metrics->VcnActivity / 100;
+diff --git a/drivers/gpu/drm/bridge/lontium-lt8912b.c b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+index 03532efb893bb..e5839c89a355a 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c
++++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c
+@@ -429,26 +429,24 @@ lt8912_connector_mode_valid(struct drm_connector *connector,
+ 
+ static int lt8912_connector_get_modes(struct drm_connector *connector)
+ {
+-	struct edid *edid;
+-	int ret = -1;
+-	int num = 0;
++	const struct drm_edid *drm_edid;
+ 	struct lt8912 *lt = connector_to_lt8912(connector);
+ 	u32 bus_format = MEDIA_BUS_FMT_RGB888_1X24;
++	int ret, num;
+ 
+-	edid = drm_bridge_get_edid(lt->hdmi_port, connector);
+-	if (edid) {
+-		drm_connector_update_edid_property(connector, edid);
+-		num = drm_add_edid_modes(connector, edid);
+-	} else {
+-		return ret;
+-	}
++	drm_edid = drm_bridge_edid_read(lt->hdmi_port, connector);
++	drm_edid_connector_update(connector, drm_edid);
++	if (!drm_edid)
++		return 0;
++
++	num = drm_edid_connector_add_modes(connector);
+ 
+ 	ret = drm_display_info_set_bus_formats(&connector->display_info,
+ 					       &bus_format, 1);
+-	if (ret)
+-		num = ret;
++	if (ret < 0)
++		num = 0;
+ 
+-	kfree(edid);
++	drm_edid_free(drm_edid);
+ 	return num;
+ }
+ 
+diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
+index 30d66bee0ec6a..e1cfba2ff583b 100644
+--- a/drivers/gpu/drm/drm_bridge.c
++++ b/drivers/gpu/drm/drm_bridge.c
+@@ -27,8 +27,9 @@
+ #include <linux/mutex.h>
+ 
+ #include <drm/drm_atomic_state_helper.h>
+-#include <drm/drm_debugfs.h>
+ #include <drm/drm_bridge.h>
++#include <drm/drm_debugfs.h>
++#include <drm/drm_edid.h>
+ #include <drm/drm_encoder.h>
+ #include <drm/drm_file.h>
+ #include <drm/drm_of.h>
+@@ -1206,6 +1207,47 @@ int drm_bridge_get_modes(struct drm_bridge *bridge,
+ }
+ EXPORT_SYMBOL_GPL(drm_bridge_get_modes);
+ 
++/**
++ * drm_bridge_edid_read - read the EDID data of the connected display
++ * @bridge: bridge control structure
++ * @connector: the connector to read EDID for
++ *
++ * If the bridge supports output EDID retrieval, as reported by the
++ * DRM_BRIDGE_OP_EDID bridge ops flag, call &drm_bridge_funcs.edid_read to get
++ * the EDID and return it. Otherwise return NULL.
++ *
++ * If &drm_bridge_funcs.edid_read is not set, fall back to using
++ * drm_bridge_get_edid() and wrapping it in struct drm_edid.
++ *
++ * RETURNS:
++ * The retrieved EDID on success, or NULL otherwise.
++ */
++const struct drm_edid *drm_bridge_edid_read(struct drm_bridge *bridge,
++					    struct drm_connector *connector)
++{
++	if (!(bridge->ops & DRM_BRIDGE_OP_EDID))
++		return NULL;
++
++	/* Transitional: Fall back to ->get_edid. */
++	if (!bridge->funcs->edid_read) {
++		const struct drm_edid *drm_edid;
++		struct edid *edid;
++
++		edid = drm_bridge_get_edid(bridge, connector);
++		if (!edid)
++			return NULL;
++
++		drm_edid = drm_edid_alloc(edid, (edid->extensions + 1) * EDID_LENGTH);
++
++		kfree(edid);
++
++		return drm_edid;
++	}
++
++	return bridge->funcs->edid_read(bridge, connector);
++}
++EXPORT_SYMBOL_GPL(drm_bridge_edid_read);
++
+ /**
+  * drm_bridge_get_edid - get the EDID data of the connected display
+  * @bridge: bridge control structure
+@@ -1215,6 +1257,8 @@ EXPORT_SYMBOL_GPL(drm_bridge_get_modes);
+  * DRM_BRIDGE_OP_EDID bridge ops flag, call &drm_bridge_funcs.get_edid to
+  * get the EDID and return it. Otherwise return NULL.
+  *
++ * Deprecated. Prefer using drm_bridge_edid_read().
++ *
+  * RETURNS:
+  * The retrieved EDID on success, or NULL otherwise.
+  */
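Call sites then follow the shape of the lt8912b conversion above: read the EDID, update the connector property (drm_edid_connector_update() accepts NULL and clears stale data), add modes, free. Roughly, as a sketch rather than a prescribed helper:

/* Sketch of a connector .get_modes() built on drm_bridge_edid_read(). */
static int sketch_bridge_get_modes(struct drm_connector *connector,
				   struct drm_bridge *bridge)
{
	const struct drm_edid *drm_edid;
	int num;

	drm_edid = drm_bridge_edid_read(bridge, connector);
	drm_edid_connector_update(connector, drm_edid);
	if (!drm_edid)
		return 0;	/* .get_modes() must not return negatives */

	num = drm_edid_connector_add_modes(connector);
	drm_edid_free(drm_edid);
	return num;
}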
+diff --git a/drivers/gpu/drm/drm_panel.c b/drivers/gpu/drm/drm_panel.c
+index e814020bbcd3b..cfbe020de54e0 100644
+--- a/drivers/gpu/drm/drm_panel.c
++++ b/drivers/gpu/drm/drm_panel.c
+@@ -274,19 +274,24 @@ EXPORT_SYMBOL(drm_panel_disable);
+  * The modes probed from the panel are automatically added to the connector
+  * that the panel is attached to.
+  *
+- * Return: The number of modes available from the panel on success or a
+- * negative error code on failure.
++ * Return: The number of modes available from the panel on success, or 0 on
++ * failure (no modes).
+  */
+ int drm_panel_get_modes(struct drm_panel *panel,
+ 			struct drm_connector *connector)
+ {
+ 	if (!panel)
+-		return -EINVAL;
++		return 0;
+ 
+-	if (panel->funcs && panel->funcs->get_modes)
+-		return panel->funcs->get_modes(panel, connector);
++	if (panel->funcs && panel->funcs->get_modes) {
++		int num;
+ 
+-	return -EOPNOTSUPP;
++		num = panel->funcs->get_modes(panel, connector);
++		if (num > 0)
++			return num;
++	}
++
++	return 0;
+ }
+ EXPORT_SYMBOL(drm_panel_get_modes);
+ 
+diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
+index 3f479483d7d80..15ed974bcb988 100644
+--- a/drivers/gpu/drm/drm_probe_helper.c
++++ b/drivers/gpu/drm/drm_probe_helper.c
+@@ -419,6 +419,13 @@ static int drm_helper_probe_get_modes(struct drm_connector *connector)
+ 
+ 	count = connector_funcs->get_modes(connector);
+ 
++	/* The .get_modes() callback should not return negative values. */
++	if (count < 0) {
++		drm_err(connector->dev, ".get_modes() returned %pe\n",
++			ERR_PTR(count));
++		count = 0;
++	}
++
+ 	/*
+ 	 * Fallback for when DDC probe failed in drm_get_edid() and thus skipped
+ 	 * override/firmware EDID.
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index 5860428da8de8..f0bdcfc0c2e74 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -1367,7 +1367,7 @@ static void syncobj_eventfd_entry_fence_func(struct dma_fence *fence,
+ 	struct syncobj_eventfd_entry *entry =
+ 		container_of(cb, struct syncobj_eventfd_entry, fence_cb);
+ 
+-	eventfd_signal(entry->ev_fd_ctx, 1);
++	eventfd_signal(entry->ev_fd_ctx);
+ 	syncobj_eventfd_entry_free(entry);
+ }
+ 
+@@ -1401,13 +1401,13 @@ syncobj_eventfd_entry_func(struct drm_syncobj *syncobj,
+ 	entry->fence = fence;
+ 
+ 	if (entry->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE) {
+-		eventfd_signal(entry->ev_fd_ctx, 1);
++		eventfd_signal(entry->ev_fd_ctx);
+ 		syncobj_eventfd_entry_free(entry);
+ 	} else {
+ 		ret = dma_fence_add_callback(fence, &entry->fence_cb,
+ 					     syncobj_eventfd_entry_fence_func);
+ 		if (ret == -ENOENT) {
+-			eventfd_signal(entry->ev_fd_ctx, 1);
++			eventfd_signal(entry->ev_fd_ctx);
+ 			syncobj_eventfd_entry_free(entry);
+ 		}
+ 	}
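These call sites track the tree-wide eventfd API change: eventfd_signal() no longer takes a count (the context counter is always incremented by 1) and callers no longer check a return value. The resulting call-site shape, sketched:

/* Sketch: signal an eventfd once a fence or event completes. */
static void sketch_notify(struct eventfd_ctx *ctx)
{
	if (ctx)
		eventfd_signal(ctx);	/* count is implicitly 1 */
}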
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index a8d3fa81e4ec5..f9bc837e22bdd 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -494,7 +494,7 @@ static const struct drm_driver etnaviv_drm_driver = {
+ 	.desc               = "etnaviv DRM",
+ 	.date               = "20151214",
+ 	.major              = 1,
+-	.minor              = 3,
++	.minor              = 4,
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c b/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
+index 67201242438be..8665f2658d51b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
+@@ -265,6 +265,9 @@ static const struct etnaviv_chip_identity etnaviv_chip_identities[] = {
+ bool etnaviv_fill_identity_from_hwdb(struct etnaviv_gpu *gpu)
+ {
+ 	struct etnaviv_chip_identity *ident = &gpu->identity;
++	const u32 product_id = ident->product_id;
++	const u32 customer_id = ident->customer_id;
++	const u32 eco_id = ident->eco_id;
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(etnaviv_chip_identities); i++) {
+@@ -278,6 +281,12 @@ bool etnaviv_fill_identity_from_hwdb(struct etnaviv_gpu *gpu)
+ 			 etnaviv_chip_identities[i].eco_id == ~0U)) {
+ 			memcpy(ident, &etnaviv_chip_identities[i],
+ 			       sizeof(*ident));
++
++			/* Restore some id values as ~0U aka 'don't care' might have been used. */
++			ident->product_id = product_id;
++			ident->customer_id = customer_id;
++			ident->eco_id = eco_id;
++
+ 			return true;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+index f5e1adfcaa514..fb941a8c99f0f 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+@@ -316,14 +316,14 @@ static int vidi_get_modes(struct drm_connector *connector)
+ 	 */
+ 	if (!ctx->raw_edid) {
+ 		DRM_DEV_DEBUG_KMS(ctx->dev, "raw_edid is null.\n");
+-		return -EFAULT;
++		return 0;
+ 	}
+ 
+ 	edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH;
+ 	edid = kmemdup(ctx->raw_edid, edid_len, GFP_KERNEL);
+ 	if (!edid) {
+ 		DRM_DEV_DEBUG_KMS(ctx->dev, "failed to allocate edid\n");
+-		return -ENOMEM;
++		return 0;
+ 	}
+ 
+ 	drm_connector_update_edid_property(connector, edid);
+diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
+index dd9903eab563e..eff51bfc46440 100644
+--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
++++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
+@@ -887,11 +887,11 @@ static int hdmi_get_modes(struct drm_connector *connector)
+ 	int ret;
+ 
+ 	if (!hdata->ddc_adpt)
+-		return -ENODEV;
++		return 0;
+ 
+ 	edid = drm_get_edid(connector, hdata->ddc_adpt);
+ 	if (!edid)
+-		return -ENODEV;
++		return 0;
+ 
+ 	hdata->dvi_mode = !connector->display_info.is_hdmi;
+ 	DRM_DEV_DEBUG_KMS(hdata->dev, "%s : width[%d] x height[%d]\n",
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index bf0dbd7d0697d..67143a0f51893 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -1155,7 +1155,6 @@ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder)
+ 	}
+ 
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_INIT_OTP);
+-	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
+ 
+ 	/* ensure all panel commands dispatched before enabling transcoder */
+ 	wait_for_cmds_dispatched_to_panel(encoder);
+@@ -1256,6 +1255,8 @@ static void gen11_dsi_enable(struct intel_atomic_state *state,
+ 	/* step6d: enable dsi transcoder */
+ 	gen11_dsi_enable_transcoder(encoder);
+ 
++	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON);
++
+ 	/* step7: enable backlight */
+ 	intel_backlight_enable(crtc_state, conn_state);
+ 	intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_BACKLIGHT_ON);
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index 4e8f1e91bb089..e7ea2878455db 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -1952,16 +1952,12 @@ static int get_init_otp_deassert_fragment_len(struct drm_i915_private *i915,
+  * these devices we split the init OTP sequence into a deassert sequence and
+  * the actual init OTP part.
+  */
+-static void fixup_mipi_sequences(struct drm_i915_private *i915,
+-				 struct intel_panel *panel)
++static void vlv_fixup_mipi_sequences(struct drm_i915_private *i915,
++				     struct intel_panel *panel)
+ {
+ 	u8 *init_otp;
+ 	int len;
+ 
+-	/* Limit this to VLV for now. */
+-	if (!IS_VALLEYVIEW(i915))
+-		return;
+-
+ 	/* Limit this to v1 vid-mode sequences */
+ 	if (panel->vbt.dsi.config->is_cmd_mode ||
+ 	    panel->vbt.dsi.seq_version != 1)
+@@ -1997,6 +1993,41 @@ static void fixup_mipi_sequences(struct drm_i915_private *i915,
+ 	panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] = init_otp + len - 1;
+ }
+ 
++/*
++ * Some machines (e.g. Lenovo 82TQ) appear to have broken
++ * VBT sequences:
++ * - INIT_OTP is not present at all
++ * - what should be in INIT_OTP is in DISPLAY_ON
++ * - what should be in DISPLAY_ON is in BACKLIGHT_ON
++ *   (along with the actual backlight stuff)
++ *
++ * To make those work we simply swap DISPLAY_ON and INIT_OTP.
++ *
++ * TODO: Do we need to limit this to specific machines,
++ *       or examine the contents of the sequences to
++ *       avoid false positives?
++ */
++static void icl_fixup_mipi_sequences(struct drm_i915_private *i915,
++				     struct intel_panel *panel)
++{
++	if (!panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP] &&
++	    panel->vbt.dsi.sequence[MIPI_SEQ_DISPLAY_ON]) {
++		drm_dbg_kms(&i915->drm, "Broken VBT: Swapping INIT_OTP and DISPLAY_ON sequences\n");
++
++		swap(panel->vbt.dsi.sequence[MIPI_SEQ_INIT_OTP],
++		     panel->vbt.dsi.sequence[MIPI_SEQ_DISPLAY_ON]);
++	}
++}
++
++static void fixup_mipi_sequences(struct drm_i915_private *i915,
++				 struct intel_panel *panel)
++{
++	if (DISPLAY_VER(i915) >= 11)
++		icl_fixup_mipi_sequences(i915, panel);
++	else if (IS_VALLEYVIEW(i915))
++		vlv_fixup_mipi_sequences(i915, panel);
++}
++
+ static void
+ parse_mipi_sequence(struct drm_i915_private *i915,
+ 		    struct intel_panel *panel)
+@@ -3321,6 +3352,9 @@ bool intel_bios_encoder_supports_dp_dual_mode(const struct intel_bios_encoder_da
+ {
+ 	const struct child_device_config *child = &devdata->child;
+ 
++	if (!devdata)
++		return false;
++
+ 	if (!intel_bios_encoder_supports_dp(devdata) ||
+ 	    !intel_bios_encoder_supports_hdmi(devdata))
+ 		return false;
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power_well.c b/drivers/gpu/drm/i915/display/intel_display_power_well.c
+index 07d6500500992..82dd5be72ee28 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power_well.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power_well.c
+@@ -246,7 +246,14 @@ static enum phy icl_aux_pw_to_phy(struct drm_i915_private *i915,
+ 	enum aux_ch aux_ch = icl_aux_pw_to_ch(power_well);
+ 	struct intel_digital_port *dig_port = aux_ch_to_digital_port(i915, aux_ch);
+ 
+-	return intel_port_to_phy(i915, dig_port->base.port);
++	/*
++	 * FIXME should we care about the (VBT defined) dig_port->aux_ch
++	 * relationship or should this be purely defined by the hardware layout?
++	 * Currently if the port doesn't appear in the VBT, or if it's declared
++	 * as HDMI-only and routed to a combo PHY, the encoder either won't be
++	 * present at all or it will not have an aux_ch assigned.
++	 */
++	return dig_port ? intel_port_to_phy(i915, dig_port->base.port) : PHY_NONE;
+ }
+ 
+ static void hsw_wait_for_power_well_enable(struct drm_i915_private *dev_priv,
+@@ -414,7 +421,8 @@ icl_combo_phy_aux_power_well_enable(struct drm_i915_private *dev_priv,
+ 
+ 	intel_de_rmw(dev_priv, regs->driver, 0, HSW_PWR_WELL_CTL_REQ(pw_idx));
+ 
+-	if (DISPLAY_VER(dev_priv) < 12)
++	/* FIXME this is a mess */
++	if (phy != PHY_NONE)
+ 		intel_de_rmw(dev_priv, ICL_PORT_CL_DW12(phy),
+ 			     0, ICL_LANE_ENABLE_AUX);
+ 
+@@ -437,7 +445,10 @@ icl_combo_phy_aux_power_well_disable(struct drm_i915_private *dev_priv,
+ 
+ 	drm_WARN_ON(&dev_priv->drm, !IS_ICELAKE(dev_priv));
+ 
+-	intel_de_rmw(dev_priv, ICL_PORT_CL_DW12(phy), ICL_LANE_ENABLE_AUX, 0);
++	/* FIXME this is a mess */
++	if (phy != PHY_NONE)
++		intel_de_rmw(dev_priv, ICL_PORT_CL_DW12(phy),
++			     ICL_LANE_ENABLE_AUX, 0);
+ 
+ 	intel_de_rmw(dev_priv, regs->driver, HSW_PWR_WELL_CTL_REQ(pw_idx), 0);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_display_trace.h b/drivers/gpu/drm/i915/display/intel_display_trace.h
+index 99bdb833591ce..7862e7cefe027 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_trace.h
++++ b/drivers/gpu/drm/i915/display/intel_display_trace.h
+@@ -411,7 +411,7 @@ TRACE_EVENT(intel_fbc_activate,
+ 			   struct intel_crtc *crtc = intel_crtc_for_pipe(to_i915(plane->base.dev),
+ 									 plane->pipe);
+ 			   __assign_str(dev, __dev_name_kms(plane));
+-			   __assign_str(name, plane->base.name)
++			   __assign_str(name, plane->base.name);
+ 			   __entry->pipe = crtc->pipe;
+ 			   __entry->frame = intel_crtc_get_vblank_counter(crtc);
+ 			   __entry->scanline = intel_get_crtc_scanline(crtc);
+@@ -438,7 +438,7 @@ TRACE_EVENT(intel_fbc_deactivate,
+ 			   struct intel_crtc *crtc = intel_crtc_for_pipe(to_i915(plane->base.dev),
+ 									 plane->pipe);
+ 			   __assign_str(dev, __dev_name_kms(plane));
+-			   __assign_str(name, plane->base.name)
++			   __assign_str(name, plane->base.name);
+ 			   __entry->pipe = crtc->pipe;
+ 			   __entry->frame = intel_crtc_get_vblank_counter(crtc);
+ 			   __entry->scanline = intel_get_crtc_scanline(crtc);
+@@ -465,7 +465,7 @@ TRACE_EVENT(intel_fbc_nuke,
+ 			   struct intel_crtc *crtc = intel_crtc_for_pipe(to_i915(plane->base.dev),
+ 									 plane->pipe);
+ 			   __assign_str(dev, __dev_name_kms(plane));
+-			   __assign_str(name, plane->base.name)
++			   __assign_str(name, plane->base.name);
+ 			   __entry->pipe = crtc->pipe;
+ 			   __entry->frame = intel_crtc_get_vblank_counter(crtc);
+ 			   __entry->scanline = intel_get_crtc_scanline(crtc);
+diff --git a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+index 399653a20f987..fb694ff9dc61d 100644
+--- a/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
++++ b/drivers/gpu/drm/i915/display/intel_dpll_mgr.c
+@@ -631,9 +631,9 @@ static const struct intel_shared_dpll_funcs ibx_pch_dpll_funcs = {
+ };
+ 
+ static const struct dpll_info pch_plls[] = {
+-	{ "PCH DPLL A", &ibx_pch_dpll_funcs, DPLL_ID_PCH_PLL_A, 0 },
+-	{ "PCH DPLL B", &ibx_pch_dpll_funcs, DPLL_ID_PCH_PLL_B, 0 },
+-	{ },
++	{ .name = "PCH DPLL A", .funcs = &ibx_pch_dpll_funcs, .id = DPLL_ID_PCH_PLL_A, },
++	{ .name = "PCH DPLL B", .funcs = &ibx_pch_dpll_funcs, .id = DPLL_ID_PCH_PLL_B, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr pch_pll_mgr = {
+@@ -1239,13 +1239,16 @@ static const struct intel_shared_dpll_funcs hsw_ddi_lcpll_funcs = {
+ };
+ 
+ static const struct dpll_info hsw_plls[] = {
+-	{ "WRPLL 1",    &hsw_ddi_wrpll_funcs, DPLL_ID_WRPLL1,     0 },
+-	{ "WRPLL 2",    &hsw_ddi_wrpll_funcs, DPLL_ID_WRPLL2,     0 },
+-	{ "SPLL",       &hsw_ddi_spll_funcs,  DPLL_ID_SPLL,       0 },
+-	{ "LCPLL 810",  &hsw_ddi_lcpll_funcs, DPLL_ID_LCPLL_810,  INTEL_DPLL_ALWAYS_ON },
+-	{ "LCPLL 1350", &hsw_ddi_lcpll_funcs, DPLL_ID_LCPLL_1350, INTEL_DPLL_ALWAYS_ON },
+-	{ "LCPLL 2700", &hsw_ddi_lcpll_funcs, DPLL_ID_LCPLL_2700, INTEL_DPLL_ALWAYS_ON },
+-	{ },
++	{ .name = "WRPLL 1", .funcs = &hsw_ddi_wrpll_funcs, .id = DPLL_ID_WRPLL1, },
++	{ .name = "WRPLL 2", .funcs = &hsw_ddi_wrpll_funcs, .id = DPLL_ID_WRPLL2, },
++	{ .name = "SPLL", .funcs = &hsw_ddi_spll_funcs, .id = DPLL_ID_SPLL, },
++	{ .name = "LCPLL 810", .funcs = &hsw_ddi_lcpll_funcs, .id = DPLL_ID_LCPLL_810,
++	  .flags = INTEL_DPLL_ALWAYS_ON, },
++	{ .name = "LCPLL 1350", .funcs = &hsw_ddi_lcpll_funcs, .id = DPLL_ID_LCPLL_1350,
++	  .flags = INTEL_DPLL_ALWAYS_ON, },
++	{ .name = "LCPLL 2700", .funcs = &hsw_ddi_lcpll_funcs, .id = DPLL_ID_LCPLL_2700,
++	  .flags = INTEL_DPLL_ALWAYS_ON, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr hsw_pll_mgr = {
+@@ -1921,11 +1924,12 @@ static const struct intel_shared_dpll_funcs skl_ddi_dpll0_funcs = {
+ };
+ 
+ static const struct dpll_info skl_plls[] = {
+-	{ "DPLL 0", &skl_ddi_dpll0_funcs, DPLL_ID_SKL_DPLL0, INTEL_DPLL_ALWAYS_ON },
+-	{ "DPLL 1", &skl_ddi_pll_funcs,   DPLL_ID_SKL_DPLL1, 0 },
+-	{ "DPLL 2", &skl_ddi_pll_funcs,   DPLL_ID_SKL_DPLL2, 0 },
+-	{ "DPLL 3", &skl_ddi_pll_funcs,   DPLL_ID_SKL_DPLL3, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &skl_ddi_dpll0_funcs, .id = DPLL_ID_SKL_DPLL0,
++	  .flags = INTEL_DPLL_ALWAYS_ON, },
++	{ .name = "DPLL 1", .funcs = &skl_ddi_pll_funcs, .id = DPLL_ID_SKL_DPLL1, },
++	{ .name = "DPLL 2", .funcs = &skl_ddi_pll_funcs, .id = DPLL_ID_SKL_DPLL2, },
++	{ .name = "DPLL 3", .funcs = &skl_ddi_pll_funcs, .id = DPLL_ID_SKL_DPLL3, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr skl_pll_mgr = {
+@@ -2376,10 +2380,10 @@ static const struct intel_shared_dpll_funcs bxt_ddi_pll_funcs = {
+ };
+ 
+ static const struct dpll_info bxt_plls[] = {
+-	{ "PORT PLL A", &bxt_ddi_pll_funcs, DPLL_ID_SKL_DPLL0, 0 },
+-	{ "PORT PLL B", &bxt_ddi_pll_funcs, DPLL_ID_SKL_DPLL1, 0 },
+-	{ "PORT PLL C", &bxt_ddi_pll_funcs, DPLL_ID_SKL_DPLL2, 0 },
+-	{ },
++	{ .name = "PORT PLL A", .funcs = &bxt_ddi_pll_funcs, .id = DPLL_ID_SKL_DPLL0, },
++	{ .name = "PORT PLL B", .funcs = &bxt_ddi_pll_funcs, .id = DPLL_ID_SKL_DPLL1, },
++	{ .name = "PORT PLL C", .funcs = &bxt_ddi_pll_funcs, .id = DPLL_ID_SKL_DPLL2, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr bxt_pll_mgr = {
+@@ -2485,7 +2489,7 @@ static void icl_wrpll_params_populate(struct skl_wrpll_params *params,
+ static bool
+ ehl_combo_pll_div_frac_wa_needed(struct drm_i915_private *i915)
+ {
+-	return (((IS_ELKHARTLAKE(i915) || IS_JASPERLAKE(i915)) &&
++	return ((IS_ELKHARTLAKE(i915) &&
+ 		 IS_DISPLAY_STEP(i915, STEP_B0, STEP_FOREVER)) ||
+ 		 IS_TIGERLAKE(i915) || IS_ALDERLAKE_S(i915) || IS_ALDERLAKE_P(i915)) &&
+ 		 i915->display.dpll.ref_clks.nssc == 38400;
+@@ -3284,6 +3288,8 @@ static int icl_compute_tc_phy_dplls(struct intel_atomic_state *state,
+ 	struct drm_i915_private *i915 = to_i915(state->base.dev);
+ 	struct intel_crtc_state *crtc_state =
+ 		intel_atomic_get_new_crtc_state(state, crtc);
++	const struct intel_crtc_state *old_crtc_state =
++		intel_atomic_get_old_crtc_state(state, crtc);
+ 	struct icl_port_dpll *port_dpll =
+ 		&crtc_state->icl_port_dplls[ICL_PORT_DPLL_DEFAULT];
+ 	struct skl_wrpll_params pll_params = {};
+@@ -3302,7 +3308,11 @@ static int icl_compute_tc_phy_dplls(struct intel_atomic_state *state,
+ 		return ret;
+ 
+ 	/* this is mainly for the fastset check */
+-	icl_set_active_port_dpll(crtc_state, ICL_PORT_DPLL_MG_PHY);
++	if (old_crtc_state->shared_dpll &&
++	    old_crtc_state->shared_dpll->info->id == DPLL_ID_ICL_TBTPLL)
++		icl_set_active_port_dpll(crtc_state, ICL_PORT_DPLL_DEFAULT);
++	else
++		icl_set_active_port_dpll(crtc_state, ICL_PORT_DPLL_MG_PHY);
+ 
+ 	crtc_state->port_clock = icl_ddi_mg_pll_get_freq(i915, NULL,
+ 							 &port_dpll->hw_state);
+@@ -4014,14 +4024,15 @@ static const struct intel_shared_dpll_funcs mg_pll_funcs = {
+ };
+ 
+ static const struct dpll_info icl_plls[] = {
+-	{ "DPLL 0",   &combo_pll_funcs, DPLL_ID_ICL_DPLL0,  0 },
+-	{ "DPLL 1",   &combo_pll_funcs, DPLL_ID_ICL_DPLL1,  0 },
+-	{ "TBT PLL",  &tbt_pll_funcs, DPLL_ID_ICL_TBTPLL, 0 },
+-	{ "MG PLL 1", &mg_pll_funcs, DPLL_ID_ICL_MGPLL1, 0 },
+-	{ "MG PLL 2", &mg_pll_funcs, DPLL_ID_ICL_MGPLL2, 0 },
+-	{ "MG PLL 3", &mg_pll_funcs, DPLL_ID_ICL_MGPLL3, 0 },
+-	{ "MG PLL 4", &mg_pll_funcs, DPLL_ID_ICL_MGPLL4, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL0, },
++	{ .name = "DPLL 1", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL1, },
++	{ .name = "TBT PLL", .funcs = &tbt_pll_funcs, .id = DPLL_ID_ICL_TBTPLL,
++	  .flags = INTEL_DPLL_IS_ALT_PORT_DPLL, },
++	{ .name = "MG PLL 1", .funcs = &mg_pll_funcs, .id = DPLL_ID_ICL_MGPLL1, },
++	{ .name = "MG PLL 2", .funcs = &mg_pll_funcs, .id = DPLL_ID_ICL_MGPLL2, },
++	{ .name = "MG PLL 3", .funcs = &mg_pll_funcs, .id = DPLL_ID_ICL_MGPLL3, },
++	{ .name = "MG PLL 4", .funcs = &mg_pll_funcs, .id = DPLL_ID_ICL_MGPLL4, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr icl_pll_mgr = {
+@@ -4035,10 +4046,10 @@ static const struct intel_dpll_mgr icl_pll_mgr = {
+ };
+ 
+ static const struct dpll_info ehl_plls[] = {
+-	{ "DPLL 0", &combo_pll_funcs, DPLL_ID_ICL_DPLL0, 0 },
+-	{ "DPLL 1", &combo_pll_funcs, DPLL_ID_ICL_DPLL1, 0 },
+-	{ "DPLL 4", &combo_pll_funcs, DPLL_ID_EHL_DPLL4, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL0, },
++	{ .name = "DPLL 1", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL1, },
++	{ .name = "DPLL 4", .funcs = &combo_pll_funcs, .id = DPLL_ID_EHL_DPLL4, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr ehl_pll_mgr = {
+@@ -4058,16 +4069,17 @@ static const struct intel_shared_dpll_funcs dkl_pll_funcs = {
+ };
+ 
+ static const struct dpll_info tgl_plls[] = {
+-	{ "DPLL 0", &combo_pll_funcs, DPLL_ID_ICL_DPLL0,  0 },
+-	{ "DPLL 1", &combo_pll_funcs, DPLL_ID_ICL_DPLL1,  0 },
+-	{ "TBT PLL",  &tbt_pll_funcs, DPLL_ID_ICL_TBTPLL, 0 },
+-	{ "TC PLL 1", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL1, 0 },
+-	{ "TC PLL 2", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL2, 0 },
+-	{ "TC PLL 3", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL3, 0 },
+-	{ "TC PLL 4", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL4, 0 },
+-	{ "TC PLL 5", &dkl_pll_funcs, DPLL_ID_TGL_MGPLL5, 0 },
+-	{ "TC PLL 6", &dkl_pll_funcs, DPLL_ID_TGL_MGPLL6, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL0, },
++	{ .name = "DPLL 1", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL1, },
++	{ .name = "TBT PLL", .funcs = &tbt_pll_funcs, .id = DPLL_ID_ICL_TBTPLL,
++	  .flags = INTEL_DPLL_IS_ALT_PORT_DPLL, },
++	{ .name = "TC PLL 1", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL1, },
++	{ .name = "TC PLL 2", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL2, },
++	{ .name = "TC PLL 3", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL3, },
++	{ .name = "TC PLL 4", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL4, },
++	{ .name = "TC PLL 5", .funcs = &dkl_pll_funcs, .id = DPLL_ID_TGL_MGPLL5, },
++	{ .name = "TC PLL 6", .funcs = &dkl_pll_funcs, .id = DPLL_ID_TGL_MGPLL6, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr tgl_pll_mgr = {
+@@ -4081,10 +4093,10 @@ static const struct intel_dpll_mgr tgl_pll_mgr = {
+ };
+ 
+ static const struct dpll_info rkl_plls[] = {
+-	{ "DPLL 0", &combo_pll_funcs, DPLL_ID_ICL_DPLL0, 0 },
+-	{ "DPLL 1", &combo_pll_funcs, DPLL_ID_ICL_DPLL1, 0 },
+-	{ "DPLL 4", &combo_pll_funcs, DPLL_ID_EHL_DPLL4, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL0, },
++	{ .name = "DPLL 1", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL1, },
++	{ .name = "DPLL 4", .funcs = &combo_pll_funcs, .id = DPLL_ID_EHL_DPLL4, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr rkl_pll_mgr = {
+@@ -4097,11 +4109,11 @@ static const struct intel_dpll_mgr rkl_pll_mgr = {
+ };
+ 
+ static const struct dpll_info dg1_plls[] = {
+-	{ "DPLL 0", &combo_pll_funcs, DPLL_ID_DG1_DPLL0, 0 },
+-	{ "DPLL 1", &combo_pll_funcs, DPLL_ID_DG1_DPLL1, 0 },
+-	{ "DPLL 2", &combo_pll_funcs, DPLL_ID_DG1_DPLL2, 0 },
+-	{ "DPLL 3", &combo_pll_funcs, DPLL_ID_DG1_DPLL3, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &combo_pll_funcs, .id = DPLL_ID_DG1_DPLL0, },
++	{ .name = "DPLL 1", .funcs = &combo_pll_funcs, .id = DPLL_ID_DG1_DPLL1, },
++	{ .name = "DPLL 2", .funcs = &combo_pll_funcs, .id = DPLL_ID_DG1_DPLL2, },
++	{ .name = "DPLL 3", .funcs = &combo_pll_funcs, .id = DPLL_ID_DG1_DPLL3, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr dg1_pll_mgr = {
+@@ -4114,11 +4126,11 @@ static const struct intel_dpll_mgr dg1_pll_mgr = {
+ };
+ 
+ static const struct dpll_info adls_plls[] = {
+-	{ "DPLL 0", &combo_pll_funcs, DPLL_ID_ICL_DPLL0, 0 },
+-	{ "DPLL 1", &combo_pll_funcs, DPLL_ID_ICL_DPLL1, 0 },
+-	{ "DPLL 2", &combo_pll_funcs, DPLL_ID_DG1_DPLL2, 0 },
+-	{ "DPLL 3", &combo_pll_funcs, DPLL_ID_DG1_DPLL3, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL0, },
++	{ .name = "DPLL 1", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL1, },
++	{ .name = "DPLL 2", .funcs = &combo_pll_funcs, .id = DPLL_ID_DG1_DPLL2, },
++	{ .name = "DPLL 3", .funcs = &combo_pll_funcs, .id = DPLL_ID_DG1_DPLL3, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr adls_pll_mgr = {
+@@ -4131,14 +4143,15 @@ static const struct intel_dpll_mgr adls_pll_mgr = {
+ };
+ 
+ static const struct dpll_info adlp_plls[] = {
+-	{ "DPLL 0", &combo_pll_funcs, DPLL_ID_ICL_DPLL0,  0 },
+-	{ "DPLL 1", &combo_pll_funcs, DPLL_ID_ICL_DPLL1,  0 },
+-	{ "TBT PLL",  &tbt_pll_funcs, DPLL_ID_ICL_TBTPLL, 0 },
+-	{ "TC PLL 1", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL1, 0 },
+-	{ "TC PLL 2", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL2, 0 },
+-	{ "TC PLL 3", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL3, 0 },
+-	{ "TC PLL 4", &dkl_pll_funcs, DPLL_ID_ICL_MGPLL4, 0 },
+-	{ },
++	{ .name = "DPLL 0", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL0, },
++	{ .name = "DPLL 1", .funcs = &combo_pll_funcs, .id = DPLL_ID_ICL_DPLL1, },
++	{ .name = "TBT PLL", .funcs = &tbt_pll_funcs, .id = DPLL_ID_ICL_TBTPLL,
++	  .flags = INTEL_DPLL_IS_ALT_PORT_DPLL, },
++	{ .name = "TC PLL 1", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL1, },
++	{ .name = "TC PLL 2", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL2, },
++	{ .name = "TC PLL 3", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL3, },
++	{ .name = "TC PLL 4", .funcs = &dkl_pll_funcs, .id = DPLL_ID_ICL_MGPLL4, },
++	{}
+ };
+ 
+ static const struct intel_dpll_mgr adlp_pll_mgr = {
+@@ -4462,31 +4475,29 @@ verify_single_dpll_state(struct drm_i915_private *i915,
+ 			 struct intel_crtc *crtc,
+ 			 const struct intel_crtc_state *new_crtc_state)
+ {
+-	struct intel_dpll_hw_state dpll_hw_state;
++	struct intel_dpll_hw_state dpll_hw_state = {};
+ 	u8 pipe_mask;
+ 	bool active;
+ 
+-	memset(&dpll_hw_state, 0, sizeof(dpll_hw_state));
+-
+-	drm_dbg_kms(&i915->drm, "%s\n", pll->info->name);
+-
+ 	active = intel_dpll_get_hw_state(i915, pll, &dpll_hw_state);
+ 
+ 	if (!(pll->info->flags & INTEL_DPLL_ALWAYS_ON)) {
+ 		I915_STATE_WARN(i915, !pll->on && pll->active_mask,
+-				"pll in active use but not on in sw tracking\n");
++				"%s: pll in active use but not on in sw tracking\n",
++				pll->info->name);
+ 		I915_STATE_WARN(i915, pll->on && !pll->active_mask,
+-				"pll is on but not used by any active pipe\n");
++				"%s: pll is on but not used by any active pipe\n",
++				pll->info->name);
+ 		I915_STATE_WARN(i915, pll->on != active,
+-				"pll on state mismatch (expected %i, found %i)\n",
+-				pll->on, active);
++				"%s: pll on state mismatch (expected %i, found %i)\n",
++				pll->info->name, pll->on, active);
+ 	}
+ 
+ 	if (!crtc) {
+ 		I915_STATE_WARN(i915,
+ 				pll->active_mask & ~pll->state.pipe_mask,
+-				"more active pll users than references: 0x%x vs 0x%x\n",
+-				pll->active_mask, pll->state.pipe_mask);
++				"%s: more active pll users than references: 0x%x vs 0x%x\n",
++				pll->info->name, pll->active_mask, pll->state.pipe_mask);
+ 
+ 		return;
+ 	}
+@@ -4495,21 +4506,30 @@ verify_single_dpll_state(struct drm_i915_private *i915,
+ 
+ 	if (new_crtc_state->hw.active)
+ 		I915_STATE_WARN(i915, !(pll->active_mask & pipe_mask),
+-				"pll active mismatch (expected pipe %c in active mask 0x%x)\n",
+-				pipe_name(crtc->pipe), pll->active_mask);
++				"%s: pll active mismatch (expected pipe %c in active mask 0x%x)\n",
++				pll->info->name, pipe_name(crtc->pipe), pll->active_mask);
+ 	else
+ 		I915_STATE_WARN(i915, pll->active_mask & pipe_mask,
+-				"pll active mismatch (didn't expect pipe %c in active mask 0x%x)\n",
+-				pipe_name(crtc->pipe), pll->active_mask);
++				"%s: pll active mismatch (didn't expect pipe %c in active mask 0x%x)\n",
++				pll->info->name, pipe_name(crtc->pipe), pll->active_mask);
+ 
+ 	I915_STATE_WARN(i915, !(pll->state.pipe_mask & pipe_mask),
+-			"pll enabled crtcs mismatch (expected 0x%x in 0x%x)\n",
+-			pipe_mask, pll->state.pipe_mask);
++			"%s: pll enabled crtcs mismatch (expected 0x%x in 0x%x)\n",
++			pll->info->name, pipe_mask, pll->state.pipe_mask);
+ 
+ 	I915_STATE_WARN(i915,
+ 			pll->on && memcmp(&pll->state.hw_state, &dpll_hw_state,
+ 					  sizeof(dpll_hw_state)),
+-			"pll hw state mismatch\n");
++			"%s: pll hw state mismatch\n",
++			pll->info->name);
++}
++
++static bool has_alt_port_dpll(const struct intel_shared_dpll *old_pll,
++			      const struct intel_shared_dpll *new_pll)
++{
++	return old_pll && new_pll && old_pll != new_pll &&
++		(old_pll->info->flags & INTEL_DPLL_IS_ALT_PORT_DPLL ||
++		 new_pll->info->flags & INTEL_DPLL_IS_ALT_PORT_DPLL);
+ }
+ 
+ void intel_shared_dpll_state_verify(struct intel_atomic_state *state,
+@@ -4531,11 +4551,15 @@ void intel_shared_dpll_state_verify(struct intel_atomic_state *state,
+ 		struct intel_shared_dpll *pll = old_crtc_state->shared_dpll;
+ 
+ 		I915_STATE_WARN(i915, pll->active_mask & pipe_mask,
+-				"pll active mismatch (didn't expect pipe %c in active mask (0x%x))\n",
+-				pipe_name(crtc->pipe), pll->active_mask);
+-		I915_STATE_WARN(i915, pll->state.pipe_mask & pipe_mask,
+-				"pll enabled crtcs mismatch (found %x in enabled mask (0x%x))\n",
+-				pipe_name(crtc->pipe), pll->state.pipe_mask);
++				"%s: pll active mismatch (didn't expect pipe %c in active mask (0x%x))\n",
++				pll->info->name, pipe_name(crtc->pipe), pll->active_mask);
++
++		/* TC ports have both MG/TC and TBT PLL referenced simultaneously */
++		I915_STATE_WARN(i915, !has_alt_port_dpll(old_crtc_state->shared_dpll,
++							 new_crtc_state->shared_dpll) &&
++				pll->state.pipe_mask & pipe_mask,
++				"%s: pll enabled crtcs mismatch (found pipe %c in enabled mask (0x%x))\n",
++				pll->info->name, pipe_name(crtc->pipe), pll->state.pipe_mask);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dpll_mgr.h b/drivers/gpu/drm/i915/display/intel_dpll_mgr.h
+index dd4796a61751f..1a88ccc98344f 100644
+--- a/drivers/gpu/drm/i915/display/intel_dpll_mgr.h
++++ b/drivers/gpu/drm/i915/display/intel_dpll_mgr.h
+@@ -271,12 +271,16 @@ struct dpll_info {
+ 	enum intel_dpll_id id;
+ 
+ #define INTEL_DPLL_ALWAYS_ON	(1 << 0)
++#define INTEL_DPLL_IS_ALT_PORT_DPLL	(1 << 1)
+ 	/**
+ 	 * @flags:
+ 	 *
+ 	 * INTEL_DPLL_ALWAYS_ON
+ 	 *     Inform the state checker that the DPLL is kept enabled even if
+ 	 *     not in use by any CRTC.
++	 * INTEL_DPLL_IS_ALT_PORT_DPLL
++	 *     Inform the state checker that the DPLL can be used as a fallback
++	 *     (for TC->TBT fallback).
+ 	 */
+ 	u32 flags;
+ };
+diff --git a/drivers/gpu/drm/i915/display/intel_dsb.c b/drivers/gpu/drm/i915/display/intel_dsb.c
+index 7fd6280c54a79..15c3dd5d388b2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsb.c
++++ b/drivers/gpu/drm/i915/display/intel_dsb.c
+@@ -339,6 +339,17 @@ static int intel_dsb_dewake_scanline(const struct intel_crtc_state *crtc_state)
+ 	return max(0, vblank_start - intel_usecs_to_scanlines(adjusted_mode, latency));
+ }
+ 
++static u32 dsb_chicken(struct intel_crtc *crtc)
++{
++	if (crtc->mode_flags & I915_MODE_FLAG_VRR)
++		return DSB_CTRL_WAIT_SAFE_WINDOW |
++			DSB_CTRL_NO_WAIT_VBLANK |
++			DSB_INST_WAIT_SAFE_WINDOW |
++			DSB_INST_NO_WAIT_VBLANK;
++	else
++		return 0;
++}
++
+ static void _intel_dsb_commit(struct intel_dsb *dsb, u32 ctrl,
+ 			      int dewake_scanline)
+ {
+@@ -360,6 +371,9 @@ static void _intel_dsb_commit(struct intel_dsb *dsb, u32 ctrl,
+ 	intel_de_write_fw(dev_priv, DSB_CTRL(pipe, dsb->id),
+ 			  ctrl | DSB_ENABLE);
+ 
++	intel_de_write_fw(dev_priv, DSB_CHICKEN(pipe, dsb->id),
++			  dsb_chicken(crtc));
++
+ 	intel_de_write_fw(dev_priv, DSB_HEAD(pipe, dsb->id),
+ 			  i915_ggtt_offset(dsb->vma));
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_vrr.c b/drivers/gpu/drm/i915/display/intel_vrr.c
+index 5d905f932cb4b..eb5bd07439020 100644
+--- a/drivers/gpu/drm/i915/display/intel_vrr.c
++++ b/drivers/gpu/drm/i915/display/intel_vrr.c
+@@ -187,10 +187,11 @@ void intel_vrr_set_transcoder_timings(const struct intel_crtc_state *crtc_state)
+ 	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
+ 
+ 	/*
+-	 * TRANS_SET_CONTEXT_LATENCY with VRR enabled
+-	 * requires this chicken bit on ADL/DG2.
++	 * This bit seems to have two meanings depending on the platform:
++	 * TGL: generate VRR "safe window" for DSB vblank waits
++	 * ADL/DG2: make TRANS_SET_CONTEXT_LATENCY effective with VRR
+ 	 */
+-	if (DISPLAY_VER(dev_priv) == 13)
++	if (IS_DISPLAY_VER(dev_priv, 12, 13))
+ 		intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder),
+ 			     0, PIPE_VBLANK_WITH_DELAY);
+ 
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+index 1d3ebdf4069b5..c08b67593565c 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+@@ -379,6 +379,9 @@ i915_gem_userptr_release(struct drm_i915_gem_object *obj)
+ {
+ 	GEM_WARN_ON(obj->userptr.page_ref);
+ 
++	if (!obj->userptr.notifier.mm)
++		return;
++
+ 	mmu_interval_notifier_remove(&obj->userptr.notifier);
+ 	obj->userptr.notifier.mm = NULL;
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+index e91fc881dbf18..5a3a5b29d1507 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+@@ -278,9 +278,6 @@ static int __engine_park(struct intel_wakeref *wf)
+ 	intel_engine_park_heartbeat(engine);
+ 	intel_breadcrumbs_park(engine->breadcrumbs);
+ 
+-	/* Must be reset upon idling, or we may miss the busy wakeup. */
+-	GEM_BUG_ON(engine->sched_engine->queue_priority_hint != INT_MIN);
+-
+ 	if (engine->park)
+ 		engine->park(engine);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+index e8f42ec6b1b47..42e09f1589205 100644
+--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+@@ -3272,6 +3272,9 @@ static void execlists_park(struct intel_engine_cs *engine)
+ {
+ 	cancel_timer(&engine->execlists.timer);
+ 	cancel_timer(&engine->execlists.preempt);
++
++	/* Reset upon idling, or we may delay the busy wakeup. */
++	WRITE_ONCE(engine->sched_engine->queue_priority_hint, INT_MIN);
+ }
+ 
+ static void add_to_engine(struct i915_request *rq)
+diff --git a/drivers/gpu/drm/i915/gvt/interrupt.c b/drivers/gpu/drm/i915/gvt/interrupt.c
+index de3f5903d1a7a..c8e7dfc9f7910 100644
+--- a/drivers/gpu/drm/i915/gvt/interrupt.c
++++ b/drivers/gpu/drm/i915/gvt/interrupt.c
+@@ -422,7 +422,7 @@ static void init_irq_map(struct intel_gvt_irq *irq)
+ #define MSI_CAP_DATA(offset) (offset + 8)
+ #define MSI_CAP_EN 0x1
+ 
+-static int inject_virtual_interrupt(struct intel_vgpu *vgpu)
++static void inject_virtual_interrupt(struct intel_vgpu *vgpu)
+ {
+ 	unsigned long offset = vgpu->gvt->device_info.msi_cap_offset;
+ 	u16 control, data;
+@@ -434,10 +434,10 @@ static int inject_virtual_interrupt(struct intel_vgpu *vgpu)
+ 
+ 	/* Do not generate MSI if MSIEN is disabled */
+ 	if (!(control & MSI_CAP_EN))
+-		return 0;
++		return;
+ 
+ 	if (WARN(control & GENMASK(15, 1), "only support one MSI format\n"))
+-		return -EINVAL;
++		return;
+ 
+ 	trace_inject_msi(vgpu->id, addr, data);
+ 
+@@ -451,10 +451,9 @@ static int inject_virtual_interrupt(struct intel_vgpu *vgpu)
+ 	 * returned and don't inject interrupt into guest.
+ 	 */
+ 	if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))
+-		return -ESRCH;
+-	if (vgpu->msi_trigger && eventfd_signal(vgpu->msi_trigger, 1) != 1)
+-		return -EFAULT;
+-	return 0;
++		return;
++	if (vgpu->msi_trigger)
++		eventfd_signal(vgpu->msi_trigger);
+ }
+ 
+ static void propagate_event(struct intel_gvt_irq *irq,
+diff --git a/drivers/gpu/drm/i915/i915_hwmon.c b/drivers/gpu/drm/i915/i915_hwmon.c
+index 8c3f443c8347e..b758fd110c204 100644
+--- a/drivers/gpu/drm/i915/i915_hwmon.c
++++ b/drivers/gpu/drm/i915/i915_hwmon.c
+@@ -72,12 +72,13 @@ hwm_locked_with_pm_intel_uncore_rmw(struct hwm_drvdata *ddat,
+ 	struct intel_uncore *uncore = ddat->uncore;
+ 	intel_wakeref_t wakeref;
+ 
+-	mutex_lock(&hwmon->hwmon_lock);
++	with_intel_runtime_pm(uncore->rpm, wakeref) {
++		mutex_lock(&hwmon->hwmon_lock);
+ 
+-	with_intel_runtime_pm(uncore->rpm, wakeref)
+ 		intel_uncore_rmw(uncore, reg, clear, set);
+ 
+-	mutex_unlock(&hwmon->hwmon_lock);
++		mutex_unlock(&hwmon->hwmon_lock);
++	}
+ }
+ 
+ /*
+@@ -136,20 +137,21 @@ hwm_energy(struct hwm_drvdata *ddat, long *energy)
+ 	else
+ 		rgaddr = hwmon->rg.energy_status_all;
+ 
+-	mutex_lock(&hwmon->hwmon_lock);
++	with_intel_runtime_pm(uncore->rpm, wakeref) {
++		mutex_lock(&hwmon->hwmon_lock);
+ 
+-	with_intel_runtime_pm(uncore->rpm, wakeref)
+ 		reg_val = intel_uncore_read(uncore, rgaddr);
+ 
+-	if (reg_val >= ei->reg_val_prev)
+-		ei->accum_energy += reg_val - ei->reg_val_prev;
+-	else
+-		ei->accum_energy += UINT_MAX - ei->reg_val_prev + reg_val;
+-	ei->reg_val_prev = reg_val;
++		if (reg_val >= ei->reg_val_prev)
++			ei->accum_energy += reg_val - ei->reg_val_prev;
++		else
++			ei->accum_energy += UINT_MAX - ei->reg_val_prev + reg_val;
++		ei->reg_val_prev = reg_val;
+ 
+-	*energy = mul_u64_u32_shr(ei->accum_energy, SF_ENERGY,
+-				  hwmon->scl_shift_energy);
+-	mutex_unlock(&hwmon->hwmon_lock);
++		*energy = mul_u64_u32_shr(ei->accum_energy, SF_ENERGY,
++					  hwmon->scl_shift_energy);
++		mutex_unlock(&hwmon->hwmon_lock);
++	}
+ }
+ 
+ static ssize_t
+@@ -404,6 +406,7 @@ hwm_power_max_write(struct hwm_drvdata *ddat, long val)
+ 
+ 	/* Block waiting for GuC reset to complete when needed */
+ 	for (;;) {
++		wakeref = intel_runtime_pm_get(ddat->uncore->rpm);
+ 		mutex_lock(&hwmon->hwmon_lock);
+ 
+ 		prepare_to_wait(&ddat->waitq, &wait, TASK_INTERRUPTIBLE);
+@@ -417,14 +420,13 @@ hwm_power_max_write(struct hwm_drvdata *ddat, long val)
+ 		}
+ 
+ 		mutex_unlock(&hwmon->hwmon_lock);
++		intel_runtime_pm_put(ddat->uncore->rpm, wakeref);
+ 
+ 		schedule();
+ 	}
+ 	finish_wait(&ddat->waitq, &wait);
+ 	if (ret)
+-		goto unlock;
+-
+-	wakeref = intel_runtime_pm_get(ddat->uncore->rpm);
++		goto exit;
+ 
+ 	/* Disable PL1 limit and verify, because the limit cannot be disabled on all platforms */
+ 	if (val == PL1_DISABLE) {
+@@ -444,9 +446,8 @@ hwm_power_max_write(struct hwm_drvdata *ddat, long val)
+ 	intel_uncore_rmw(ddat->uncore, hwmon->rg.pkg_rapl_limit,
+ 			 PKG_PWR_LIM_1_EN | PKG_PWR_LIM_1, nval);
+ exit:
+-	intel_runtime_pm_put(ddat->uncore->rpm, wakeref);
+-unlock:
+ 	mutex_unlock(&hwmon->hwmon_lock);
++	intel_runtime_pm_put(ddat->uncore->rpm, wakeref);
+ 	return ret;
+ }
+ 
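The hwmon reordering above establishes one consistent order on every path: take the runtime-PM wakeref first, then hwmon_lock, and release in reverse, which appears intended to avoid a lock inversion between the two. Condensed into a sketch:

/* Sketch: rpm-before-mutex ordering as used in hwm_energy() above. */
static void sketch_locked_hw_access(struct hwm_drvdata *ddat)
{
	struct i915_hwmon *hwmon = ddat->hwmon;
	intel_wakeref_t wakeref;

	with_intel_runtime_pm(ddat->uncore->rpm, wakeref) {
		mutex_lock(&hwmon->hwmon_lock);
		/* hardware register access goes here */
		mutex_unlock(&hwmon->hwmon_lock);
	}
}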
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 135e8d8dbdf06..b89cf2041f73b 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -4599,7 +4599,7 @@
+ #define MTL_CHICKEN_TRANS(trans)	_MMIO_TRANS((trans), \
+ 						    _MTL_CHICKEN_TRANS_A, \
+ 						    _MTL_CHICKEN_TRANS_B)
+-#define   PIPE_VBLANK_WITH_DELAY	REG_BIT(31) /* ADL/DG2 */
++#define   PIPE_VBLANK_WITH_DELAY	REG_BIT(31) /* tgl+ */
+ #define   SKL_UNMASK_VBL_TO_PIPE_IN_SRD	REG_BIT(30) /* skl+ */
+ #define   HSW_FRAME_START_DELAY_MASK	REG_GENMASK(28, 27)
+ #define   HSW_FRAME_START_DELAY(x)	REG_FIELD_PREP(HSW_FRAME_START_DELAY_MASK, x)
+diff --git a/drivers/gpu/drm/imx/ipuv3/parallel-display.c b/drivers/gpu/drm/imx/ipuv3/parallel-display.c
+index 70349739dd89b..55dedd73f528c 100644
+--- a/drivers/gpu/drm/imx/ipuv3/parallel-display.c
++++ b/drivers/gpu/drm/imx/ipuv3/parallel-display.c
+@@ -72,14 +72,14 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
+ 		int ret;
+ 
+ 		if (!mode)
+-			return -EINVAL;
++			return 0;
+ 
+ 		ret = of_get_drm_display_mode(np, &imxpd->mode,
+ 					      &imxpd->bus_flags,
+ 					      OF_USE_NATIVE_MODE);
+ 		if (ret) {
+ 			drm_mode_destroy(connector->dev, mode);
+-			return ret;
++			return 0;
+ 		}
+ 
+ 		drm_mode_copy(mode, &imxpd->mode);
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/core/client.h b/drivers/gpu/drm/nouveau/include/nvkm/core/client.h
+index 0d9fc741a7193..932c9fd0b2d89 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/client.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/client.h
+@@ -11,6 +11,7 @@ struct nvkm_client {
+ 	u32 debug;
+ 
+ 	struct rb_root objroot;
++	spinlock_t obj_lock;
+ 
+ 	void *data;
+ 	int (*event)(u64 token, void *argv, u32 argc);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
+index 12feecf71e752..6fb65b01d7780 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
+@@ -378,9 +378,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
+ 	dma_addr_t *dma_addrs;
+ 	struct nouveau_fence *fence;
+ 
+-	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
+-	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
+-	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
++	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
++	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
++	dma_addrs = kvcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);
+ 
+ 	migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
+ 			npages);
+@@ -406,11 +406,11 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
+ 	migrate_device_pages(src_pfns, dst_pfns, npages);
+ 	nouveau_dmem_fence_done(&fence);
+ 	migrate_device_finalize(src_pfns, dst_pfns, npages);
+-	kfree(src_pfns);
+-	kfree(dst_pfns);
++	kvfree(src_pfns);
++	kvfree(dst_pfns);
+ 	for (i = 0; i < npages; i++)
+ 		dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
+-	kfree(dma_addrs);
++	kvfree(dma_addrs);
+ }
+ 
+ void
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index a0d303e5ce3d8..7b69e6df57486 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -758,7 +758,7 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data,
+ 		return -ENOMEM;
+ 
+ 	if (unlikely(nouveau_cli_uvmm(cli)))
+-		return -ENOSYS;
++		return nouveau_abi16_put(abi16, -ENOSYS);
+ 
+ 	list_for_each_entry(temp, &abi16->channels, head) {
+ 		if (temp->chan->chid == req->channel) {
+diff --git a/drivers/gpu/drm/nouveau/nvkm/core/client.c b/drivers/gpu/drm/nouveau/nvkm/core/client.c
+index ebdeb8eb9e774..c55662937ab22 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/core/client.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/client.c
+@@ -180,6 +180,7 @@ nvkm_client_new(const char *name, u64 device, const char *cfg, const char *dbg,
+ 	client->device = device;
+ 	client->debug = nvkm_dbgopt(dbg, "CLIENT");
+ 	client->objroot = RB_ROOT;
++	spin_lock_init(&client->obj_lock);
+ 	client->event = event;
+ 	INIT_LIST_HEAD(&client->umem);
+ 	spin_lock_init(&client->lock);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/core/object.c b/drivers/gpu/drm/nouveau/nvkm/core/object.c
+index 7c554c14e8841..aea3ba72027ab 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/core/object.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/object.c
+@@ -30,8 +30,10 @@ nvkm_object_search(struct nvkm_client *client, u64 handle,
+ 		   const struct nvkm_object_func *func)
+ {
+ 	struct nvkm_object *object;
++	unsigned long flags;
+ 
+ 	if (handle) {
++		spin_lock_irqsave(&client->obj_lock, flags);
+ 		struct rb_node *node = client->objroot.rb_node;
+ 		while (node) {
+ 			object = rb_entry(node, typeof(*object), node);
+@@ -40,9 +42,12 @@ nvkm_object_search(struct nvkm_client *client, u64 handle,
+ 			else
+ 			if (handle > object->object)
+ 				node = node->rb_right;
+-			else
++			else {
++				spin_unlock_irqrestore(&client->obj_lock, flags);
+ 				goto done;
++			}
+ 		}
++		spin_unlock_irqrestore(&client->obj_lock, flags);
+ 		return ERR_PTR(-ENOENT);
+ 	} else {
+ 		object = &client->object;
+@@ -57,30 +62,39 @@ nvkm_object_search(struct nvkm_client *client, u64 handle,
+ void
+ nvkm_object_remove(struct nvkm_object *object)
+ {
++	unsigned long flags;
++
++	spin_lock_irqsave(&object->client->obj_lock, flags);
+ 	if (!RB_EMPTY_NODE(&object->node))
+ 		rb_erase(&object->node, &object->client->objroot);
++	spin_unlock_irqrestore(&object->client->obj_lock, flags);
+ }
+ 
+ bool
+ nvkm_object_insert(struct nvkm_object *object)
+ {
+-	struct rb_node **ptr = &object->client->objroot.rb_node;
++	struct rb_node **ptr;
+ 	struct rb_node *parent = NULL;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&object->client->obj_lock, flags);
++	ptr = &object->client->objroot.rb_node;
+ 	while (*ptr) {
+ 		struct nvkm_object *this = rb_entry(*ptr, typeof(*this), node);
+ 		parent = *ptr;
+-		if (object->object < this->object)
++		if (object->object < this->object) {
+ 			ptr = &parent->rb_left;
+-		else
+-		if (object->object > this->object)
++		} else if (object->object > this->object) {
+ 			ptr = &parent->rb_right;
+-		else
++		} else {
++			spin_unlock_irqrestore(&object->client->obj_lock, flags);
+ 			return false;
++		}
+ 	}
+ 
+ 	rb_link_node(&object->node, parent, ptr);
+ 	rb_insert_color(&object->node, &object->client->objroot);
++	spin_unlock_irqrestore(&object->client->obj_lock, flags);
+ 	return true;
+ }
+ 
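All three tree operations (search, insert, remove) now serialize on the same client->obj_lock; the irqsave variants keep the paths safe regardless of the caller's interrupt state. The lookup under the lock reads, in outline:

/* Sketch: handle lookup with the object tree held under obj_lock. */
static struct nvkm_object *
sketch_object_find(struct nvkm_client *client, u64 handle)
{
	struct nvkm_object *object = NULL;
	struct rb_node *node;
	unsigned long flags;

	spin_lock_irqsave(&client->obj_lock, flags);
	node = client->objroot.rb_node;
	while (node) {
		struct nvkm_object *this = rb_entry(node, typeof(*this), node);

		if (handle < this->object) {
			node = node->rb_left;
		} else if (handle > this->object) {
			node = node->rb_right;
		} else {
			object = this;
			break;
		}
	}
	spin_unlock_irqrestore(&client->obj_lock, flags);
	return object;
}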
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index 0a7a7e4ad8d19..9ee92898216b6 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -71,13 +71,19 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+ 	entity->guilty = guilty;
+ 	entity->num_sched_list = num_sched_list;
+ 	entity->priority = priority;
++	/*
++	 * It's perfectly valid to initialize an entity without having a valid
++	 * scheduler attached. It's just not valid to use the scheduler before it
++	 * is initialized itself.
++	 */
+ 	entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
+ 	RCU_INIT_POINTER(entity->last_scheduled, NULL);
+ 	RB_CLEAR_NODE(&entity->rb_tree_node);
+ 
+-	if (!sched_list[0]->sched_rq) {
+-		/* Warn drivers not to do this and to fix their DRM
+-		 * calling order.
++	if (num_sched_list && !sched_list[0]->sched_rq) {
++		/* Every entry covered by num_sched_list
++		 * should be non-NULL, so warn drivers
++		 * not to do this and to fix their DRM calling order.
+ 		 */
+ 		pr_warn("%s: called with uninitialized scheduler\n", __func__);
+ 	} else if (num_sched_list) {
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
+index fd9fd3d15101c..0b3f4267130c4 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
+@@ -294,7 +294,13 @@ pgprot_t ttm_io_prot(struct ttm_buffer_object *bo, struct ttm_resource *res,
+ 	enum ttm_caching caching;
+ 
+ 	man = ttm_manager_type(bo->bdev, res->mem_type);
+-	caching = man->use_tt ? bo->ttm->caching : res->bus.caching;
++	if (man->use_tt) {
++		caching = bo->ttm->caching;
++		if (bo->ttm->page_flags & TTM_TT_FLAG_DECRYPTED)
++			tmp = pgprot_decrypted(tmp);
++	} else {
++		caching = res->bus.caching;
++	}
+ 
+ 	return ttm_prot_from_caching(caching, tmp);
+ }
+@@ -337,6 +343,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
+ 		.no_wait_gpu = false
+ 	};
+ 	struct ttm_tt *ttm = bo->ttm;
++	struct ttm_resource_manager *man =
++			ttm_manager_type(bo->bdev, bo->resource->mem_type);
+ 	pgprot_t prot;
+ 	int ret;
+ 
+@@ -346,7 +354,8 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (num_pages == 1 && ttm->caching == ttm_cached) {
++	if (num_pages == 1 && ttm->caching == ttm_cached &&
++	    !(man->use_tt && (ttm->page_flags & TTM_TT_FLAG_DECRYPTED))) {
+ 		/*
+ 		 * We're mapping a single page, and the desired
+ 		 * page protection is consistent with the bo.
+diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
+index e0a77671edd6c..43eaffa7faae3 100644
+--- a/drivers/gpu/drm/ttm/ttm_tt.c
++++ b/drivers/gpu/drm/ttm/ttm_tt.c
+@@ -31,11 +31,14 @@
+ 
+ #define pr_fmt(fmt) "[TTM] " fmt
+ 
++#include <linux/cc_platform.h>
+ #include <linux/sched.h>
+ #include <linux/shmem_fs.h>
+ #include <linux/file.h>
+ #include <linux/module.h>
+ #include <drm/drm_cache.h>
++#include <drm/drm_device.h>
++#include <drm/drm_util.h>
+ #include <drm/ttm/ttm_bo.h>
+ #include <drm/ttm/ttm_tt.h>
+ 
+@@ -60,6 +63,7 @@ static atomic_long_t ttm_dma32_pages_allocated;
+ int ttm_tt_create(struct ttm_buffer_object *bo, bool zero_alloc)
+ {
+ 	struct ttm_device *bdev = bo->bdev;
++	struct drm_device *ddev = bo->base.dev;
+ 	uint32_t page_flags = 0;
+ 
+ 	dma_resv_assert_held(bo->base.resv);
+@@ -81,6 +85,15 @@ int ttm_tt_create(struct ttm_buffer_object *bo, bool zero_alloc)
+ 		pr_err("Illegal buffer object type\n");
+ 		return -EINVAL;
+ 	}
++	/*
++	 * When using dma_alloc_coherent with memory encryption the
++	 * mapped TT pages need to be decrypted, otherwise the drivers
++	 * will end up sending encrypted memory to the GPU.
++	 */
++	if (bdev->pool.use_dma_alloc && cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
++		page_flags |= TTM_TT_FLAG_DECRYPTED;
++		drm_info(ddev, "TT memory decryption enabled.");
++	}
+ 
+ 	bo->ttm = bdev->funcs->ttm_tt_create(bo, page_flags);
+ 	if (unlikely(bo->ttm == NULL))
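
Together with the ttm_bo_util.c hunks above, this makes TTM track decrypted (shared) pages end to end: ttm_tt_create() tags the tt with TTM_TT_FLAG_DECRYPTED when the pool allocates via dma_alloc_coherent() inside a memory-encrypted guest, and ttm_io_prot()/ttm_bo_kmap_ttm() then apply pgprot_decrypted() instead of reusing the default (encrypted) protection. A condensed sketch of the detection step, assuming a dma_alloc-backed pool as in the hunk (my_tt_page_flags is a hypothetical helper; TTM_TT_FLAG_DECRYPTED itself is introduced by this same backport):

	#include <linux/cc_platform.h>

	static u32 my_tt_page_flags(struct ttm_device *bdev)
	{
		u32 page_flags = 0;

		/*
		 * In a confidential-computing guest, dma_alloc_coherent()
		 * memory is shared with the host, i.e. decrypted; CPU
		 * mappings of those pages must drop the encryption bit too.
		 */
		if (bdev->pool.use_dma_alloc &&
		    cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
			page_flags |= TTM_TT_FLAG_DECRYPTED;

		return page_flags;
	}
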
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 25c9c71256d35..4626fe9aac563 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -508,7 +508,7 @@ static int vc4_hdmi_connector_get_modes(struct drm_connector *connector)
+ 	edid = drm_get_edid(connector, vc4_hdmi->ddc);
+ 	cec_s_phys_addr_from_edid(vc4_hdmi->cec_adap, edid);
+ 	if (!edid)
+-		return -ENODEV;
++		return 0;
+ 
+ 	drm_connector_update_edid_property(connector, edid);
+ 	ret = drm_add_edid_modes(connector, edid);
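
A connector's .get_modes hook returns the number of probed modes, so a failed EDID read should report zero modes rather than an errno, which the probe helpers would otherwise interpret as a (negative) mode count. A minimal sketch of the convention (my_connector_get_modes and my_ddc are hypothetical):

	static int my_connector_get_modes(struct drm_connector *connector)
	{
		struct edid *edid = drm_get_edid(connector, my_ddc(connector));
		int count;

		if (!edid)
			return 0;	/* no EDID: zero modes, not -ENODEV */

		drm_connector_update_edid_property(connector, edid);
		count = drm_add_edid_modes(connector, edid);
		kfree(edid);
		return count;
	}
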
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index d3e308fdfd5be..c7d90f96d16a6 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -1444,12 +1444,15 @@ static void vmw_debugfs_resource_managers_init(struct vmw_private *vmw)
+ 					    root, "system_ttm");
+ 	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, TTM_PL_VRAM),
+ 					    root, "vram_ttm");
+-	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_GMR),
+-					    root, "gmr_ttm");
+-	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_MOB),
+-					    root, "mob_ttm");
+-	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_SYSTEM),
+-					    root, "system_mob_ttm");
++	if (vmw->has_gmr)
++		ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_GMR),
++						    root, "gmr_ttm");
++	if (vmw->has_mob) {
++		ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_MOB),
++						    root, "mob_ttm");
++		ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_SYSTEM),
++						    root, "system_mob_ttm");
++	}
+ }
+ 
+ static int vmwgfx_pm_notifier(struct notifier_block *nb, unsigned long val,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 36987ef3fc300..5fef0b31c1179 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -447,7 +447,7 @@ static int vmw_resource_context_res_add(struct vmw_private *dev_priv,
+ 	    vmw_res_type(ctx) == vmw_res_dx_context) {
+ 		for (i = 0; i < cotable_max; ++i) {
+ 			res = vmw_context_cotable(ctx, i);
+-			if (IS_ERR(res))
++			if (IS_ERR_OR_NULL(res))
+ 				continue;
+ 
+ 			ret = vmw_execbuf_res_val_add(sw_context, res,
+@@ -1266,6 +1266,8 @@ static int vmw_cmd_dx_define_query(struct vmw_private *dev_priv,
+ 		return -EINVAL;
+ 
+ 	cotable_res = vmw_context_cotable(ctx_node->ctx, SVGA_COTABLE_DXQUERY);
++	if (IS_ERR_OR_NULL(cotable_res))
++		return cotable_res ? PTR_ERR(cotable_res) : -EINVAL;
+ 	ret = vmw_cotable_notify(cotable_res, cmd->body.queryId);
+ 
+ 	return ret;
+@@ -2484,6 +2486,8 @@ static int vmw_cmd_dx_view_define(struct vmw_private *dev_priv,
+ 		return ret;
+ 
+ 	res = vmw_context_cotable(ctx_node->ctx, vmw_view_cotables[view_type]);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	ret = vmw_cotable_notify(res, cmd->defined_id);
+ 	if (unlikely(ret != 0))
+ 		return ret;
+@@ -2569,8 +2573,8 @@ static int vmw_cmd_dx_so_define(struct vmw_private *dev_priv,
+ 
+ 	so_type = vmw_so_cmd_to_type(header->id);
+ 	res = vmw_context_cotable(ctx_node->ctx, vmw_so_cotables[so_type]);
+-	if (IS_ERR(res))
+-		return PTR_ERR(res);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	cmd = container_of(header, typeof(*cmd), header);
+ 	ret = vmw_cotable_notify(res, cmd->defined_id);
+ 
+@@ -2689,6 +2693,8 @@ static int vmw_cmd_dx_define_shader(struct vmw_private *dev_priv,
+ 		return -EINVAL;
+ 
+ 	res = vmw_context_cotable(ctx_node->ctx, SVGA_COTABLE_DXSHADER);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	ret = vmw_cotable_notify(res, cmd->body.shaderId);
+ 	if (ret)
+ 		return ret;
+@@ -3010,6 +3016,8 @@ static int vmw_cmd_dx_define_streamoutput(struct vmw_private *dev_priv,
+ 	}
+ 
+ 	res = vmw_context_cotable(ctx_node->ctx, SVGA_COTABLE_STREAMOUTPUT);
++	if (IS_ERR_OR_NULL(res))
++		return res ? PTR_ERR(res) : -EINVAL;
+ 	ret = vmw_cotable_notify(res, cmd->body.soid);
+ 	if (ret)
+ 		return ret;
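
vmw_context_cotable() can hand back a real pointer, an ERR_PTR() or NULL, so each call site above repeats the same normalization: IS_ERR_OR_NULL() to catch both failure shapes, then `res ? PTR_ERR(res) : -EINVAL` to map NULL onto a sane errno. That idiom as a helper, sketched (my_ptr_errno is hypothetical, not in the vmwgfx code):

	#include <linux/err.h>

	/* Collapse "pointer / ERR_PTR / NULL" returns into 0 or an errno. */
	static inline int my_ptr_errno(const void *ptr)
	{
		if (IS_ERR(ptr))
			return PTR_ERR(ptr);
		return ptr ? 0 : -EINVAL;
	}
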
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index b51578918cf8d..5681a1b42aa24 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -184,13 +184,12 @@ static u32 vmw_du_cursor_mob_size(u32 w, u32 h)
+  */
+ static u32 *vmw_du_cursor_plane_acquire_image(struct vmw_plane_state *vps)
+ {
+-	bool is_iomem;
+ 	if (vps->surf) {
+ 		if (vps->surf_mapped)
+ 			return vmw_bo_map_and_cache(vps->surf->res.guest_memory_bo);
+ 		return vps->surf->snooper.image;
+ 	} else if (vps->bo)
+-		return ttm_kmap_obj_virtual(&vps->bo->map, &is_iomem);
++		return vmw_bo_map_and_cache(vps->bo);
+ 	return NULL;
+ }
+ 
+@@ -652,22 +651,12 @@ vmw_du_cursor_plane_cleanup_fb(struct drm_plane *plane,
+ {
+ 	struct vmw_cursor_plane *vcp = vmw_plane_to_vcp(plane);
+ 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state);
+-	bool is_iomem;
+ 
+ 	if (vps->surf_mapped) {
+ 		vmw_bo_unmap(vps->surf->res.guest_memory_bo);
+ 		vps->surf_mapped = false;
+ 	}
+ 
+-	if (vps->bo && ttm_kmap_obj_virtual(&vps->bo->map, &is_iomem)) {
+-		const int ret = ttm_bo_reserve(&vps->bo->tbo, true, false, NULL);
+-
+-		if (likely(ret == 0)) {
+-			ttm_bo_kunmap(&vps->bo->map);
+-			ttm_bo_unreserve(&vps->bo->tbo);
+-		}
+-	}
+-
+ 	vmw_du_cursor_plane_unmap_cm(vps);
+ 	vmw_du_put_cursor_mob(vcp, vps);
+ 
+@@ -703,6 +692,10 @@ vmw_du_cursor_plane_prepare_fb(struct drm_plane *plane,
+ 	int ret = 0;
+ 
+ 	if (vps->surf) {
++		if (vps->surf_mapped) {
++			vmw_bo_unmap(vps->surf->res.guest_memory_bo);
++			vps->surf_mapped = false;
++		}
+ 		vmw_surface_unreference(&vps->surf);
+ 		vps->surf = NULL;
+ 	}
+diff --git a/drivers/hwmon/amc6821.c b/drivers/hwmon/amc6821.c
+index 2a7a4b6b00942..9b02b304c2f5d 100644
+--- a/drivers/hwmon/amc6821.c
++++ b/drivers/hwmon/amc6821.c
+@@ -934,10 +934,21 @@ static const struct i2c_device_id amc6821_id[] = {
+ 
+ MODULE_DEVICE_TABLE(i2c, amc6821_id);
+ 
++static const struct of_device_id __maybe_unused amc6821_of_match[] = {
++	{
++		.compatible = "ti,amc6821",
++		.data = (void *)amc6821,
++	},
++	{ }
++};
++
++MODULE_DEVICE_TABLE(of, amc6821_of_match);
++
+ static struct i2c_driver amc6821_driver = {
+ 	.class = I2C_CLASS_HWMON,
+ 	.driver = {
+ 		.name	= "amc6821",
++		.of_match_table = of_match_ptr(amc6821_of_match),
+ 	},
+ 	.probe = amc6821_probe,
+ 	.id_table = amc6821_id,
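
The new table lets the amc6821 bind by devicetree compatible as well as by I2C id. The shape is the standard OF match pattern: a sentinel-terminated of_device_id array, MODULE_DEVICE_TABLE() for module autoloading, and of_match_ptr() so the reference compiles away on !CONFIG_OF builds (hence the __maybe_unused annotation above). A minimal sketch for a hypothetical driver:

	#include <linux/mod_devicetable.h>
	#include <linux/of.h>

	static const struct of_device_id __maybe_unused my_of_match[] = {
		{ .compatible = "vendor,mydev" },
		{ }	/* sentinel */
	};
	MODULE_DEVICE_TABLE(of, my_of_match);

	/* in the driver struct: .driver.of_match_table = of_match_ptr(my_of_match) */
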
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 6a5a93cf4ecc6..643a816883ad7 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1414,7 +1414,6 @@ static void i801_add_mux(struct i801_priv *priv)
+ 		lookup->table[i] = GPIO_LOOKUP(mux_config->gpio_chip,
+ 					       mux_config->gpios[i], "mux", 0);
+ 	gpiod_add_lookup_table(lookup);
+-	priv->lookup = lookup;
+ 
+ 	/*
+ 	 * Register the mux device, we use PLATFORM_DEVID_NONE here
+@@ -1428,7 +1427,10 @@ static void i801_add_mux(struct i801_priv *priv)
+ 				sizeof(struct i2c_mux_gpio_platform_data));
+ 	if (IS_ERR(priv->mux_pdev)) {
+ 		gpiod_remove_lookup_table(lookup);
++		devm_kfree(dev, lookup);
+ 		dev_err(dev, "Failed to register i2c-mux-gpio device\n");
++	} else {
++		priv->lookup = lookup;
+ 	}
+ }
+ 
+@@ -1740,9 +1742,9 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 
+ 	i801_enable_host_notify(&priv->adapter);
+ 
+-	i801_probe_optional_slaves(priv);
+ 	/* We ignore errors - multiplexing is optional */
+ 	i801_add_mux(priv);
++	i801_probe_optional_slaves(priv);
+ 
+ 	pci_set_drvdata(dev, priv);
+ 
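
Two fixes in one hunk set: priv->lookup is only assigned once the mux platform device registered successfully, so the remove path can no longer unregister a lookup table the error path already tore down, and i801_probe_optional_slaves() now runs after i801_add_mux() so slave devices probe with the mux in place. A generic sketch of the publish-on-success pattern (my_priv and my_add_mux are hypothetical names):

	struct my_priv {
		struct platform_device *mux_pdev;
		struct gpiod_lookup_table *lookup;
	};

	static void my_add_mux(struct my_priv *priv, struct device *dev)
	{
		struct gpiod_lookup_table *lookup;

		lookup = devm_kzalloc(dev, struct_size(lookup, table, 2),
				      GFP_KERNEL);
		if (!lookup)
			return;

		lookup->dev_id = "my-mux";
		gpiod_add_lookup_table(lookup);

		priv->mux_pdev = platform_device_register_simple("my-mux",
						PLATFORM_DEVID_NONE, NULL, 0);
		if (IS_ERR(priv->mux_pdev)) {
			/* tear down fully; leave priv->lookup NULL */
			gpiod_remove_lookup_table(lookup);
			devm_kfree(dev, lookup);
			dev_err(dev, "failed to register mux device\n");
		} else {
			priv->lookup = lookup;	/* publish only on success */
		}
	}
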
+diff --git a/drivers/iio/accel/adxl367.c b/drivers/iio/accel/adxl367.c
+index 90b7ae6d42b77..484fe2e9fb174 100644
+--- a/drivers/iio/accel/adxl367.c
++++ b/drivers/iio/accel/adxl367.c
+@@ -1429,9 +1429,11 @@ static int adxl367_verify_devid(struct adxl367_state *st)
+ 	unsigned int val;
+ 	int ret;
+ 
+-	ret = regmap_read_poll_timeout(st->regmap, ADXL367_REG_DEVID, val,
+-				       val == ADXL367_DEVID_AD, 1000, 10000);
++	ret = regmap_read(st->regmap, ADXL367_REG_DEVID, &val);
+ 	if (ret)
++		return dev_err_probe(st->dev, ret, "Failed to read dev id\n");
++
++	if (val != ADXL367_DEVID_AD)
+ 		return dev_err_probe(st->dev, -ENODEV,
+ 				     "Invalid dev id 0x%02X, expected 0x%02X\n",
+ 				     val, ADXL367_DEVID_AD);
+@@ -1510,6 +1512,8 @@ int adxl367_probe(struct device *dev, const struct adxl367_ops *ops,
+ 	if (ret)
+ 		return ret;
+ 
++	fsleep(15000);
++
+ 	ret = adxl367_verify_devid(st);
+ 	if (ret)
+ 		return ret;
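
Polling DEVID was masking the real problem: the part simply is not ready right after reset. The fix waits out the power-up time explicitly (fsleep(15000), i.e. 15 ms) and then reads the ID exactly once, turning any failure into a dev_err_probe() diagnostic. Condensed sketch (MY_REG_DEVID and MY_DEVID are placeholder constants; in the driver the fsleep() sits in adxl367_probe() after the reset, it is inlined here only to keep the sketch self-contained):

	static int my_verify_devid(struct device *dev, struct regmap *regmap)
	{
		unsigned int val;
		int ret;

		fsleep(15000);	/* datasheet power-up settle time, in us */

		ret = regmap_read(regmap, MY_REG_DEVID, &val);
		if (ret)
			return dev_err_probe(dev, ret, "Failed to read dev id\n");

		if (val != MY_DEVID)
			return dev_err_probe(dev, -ENODEV,
					     "Invalid dev id 0x%02X\n", val);
		return 0;
	}
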
+diff --git a/drivers/iio/accel/adxl367_i2c.c b/drivers/iio/accel/adxl367_i2c.c
+index b595fe94f3a32..62c74bdc0d77b 100644
+--- a/drivers/iio/accel/adxl367_i2c.c
++++ b/drivers/iio/accel/adxl367_i2c.c
+@@ -11,7 +11,7 @@
+ 
+ #include "adxl367.h"
+ 
+-#define ADXL367_I2C_FIFO_DATA	0x42
++#define ADXL367_I2C_FIFO_DATA	0x18
+ 
+ struct adxl367_i2c_state {
+ 	struct regmap *regmap;
+diff --git a/drivers/iio/adc/rockchip_saradc.c b/drivers/iio/adc/rockchip_saradc.c
+index dd94667a623bd..1c0042fbbb548 100644
+--- a/drivers/iio/adc/rockchip_saradc.c
++++ b/drivers/iio/adc/rockchip_saradc.c
+@@ -52,7 +52,7 @@
+ #define SARADC2_START			BIT(4)
+ #define SARADC2_SINGLE_MODE		BIT(5)
+ 
+-#define SARADC2_CONV_CHANNELS GENMASK(15, 0)
++#define SARADC2_CONV_CHANNELS GENMASK(3, 0)
+ 
+ struct rockchip_saradc;
+ 
+@@ -102,12 +102,12 @@ static void rockchip_saradc_start_v2(struct rockchip_saradc *info, int chn)
+ 	writel_relaxed(0xc, info->regs + SARADC_T_DAS_SOC);
+ 	writel_relaxed(0x20, info->regs + SARADC_T_PD_SOC);
+ 	val = FIELD_PREP(SARADC2_EN_END_INT, 1);
+-	val |= val << 16;
++	val |= SARADC2_EN_END_INT << 16;
+ 	writel_relaxed(val, info->regs + SARADC2_END_INT_EN);
+ 	val = FIELD_PREP(SARADC2_START, 1) |
+ 	      FIELD_PREP(SARADC2_SINGLE_MODE, 1) |
+ 	      FIELD_PREP(SARADC2_CONV_CHANNELS, chn);
+-	val |= val << 16;
++	val |= (SARADC2_START | SARADC2_SINGLE_MODE | SARADC2_CONV_CHANNELS) << 16;
+ 	writel(val, info->regs + SARADC2_CONV_CON);
+ }
+ 
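
SARADC2 control registers use the common "write-enable in the high half-word" layout: a bit in [15:0] is only latched if its companion bit in [31:16] is set. The old `val |= val << 16` raised enables only for bits that happened to be 1, so bits that needed clearing were never written; the fix shifts the field masks instead. A sketch of a helper capturing the rule (my_write_masked is hypothetical):

	/* Registers whose top 16 bits are per-bit write enables. */
	static void my_write_masked(void __iomem *reg, u32 mask, u32 val)
	{
		writel((mask << 16) | (val & mask), reg);
	}

With it, the conversion start above would read `my_write_masked(info->regs + SARADC2_CONV_CON, SARADC2_START | SARADC2_SINGLE_MODE | SARADC2_CONV_CHANNELS, val);`.
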
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+index 66d4ba088e70f..d4f9b5d8d28d6 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+@@ -109,6 +109,8 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ 	/* compute and process only all complete datum */
+ 	nb = fifo_count / bytes_per_datum;
+ 	fifo_count = nb * bytes_per_datum;
++	if (nb == 0)
++		goto end_session;
+ 	/* Each FIFO data contains all sensors, so same number for FIFO and sensor data */
+ 	fifo_period = NSEC_PER_SEC / INV_MPU6050_DIVIDER_TO_FIFO_RATE(st->chip_config.divider);
+ 	inv_sensors_timestamp_interrupt(&st->timestamp, fifo_period, nb, nb, pf->timestamp);
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+index 676704f9151fc..e6e6e94452a32 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+@@ -111,6 +111,7 @@ int inv_mpu6050_prepare_fifo(struct inv_mpu6050_state *st, bool enable)
+ 	if (enable) {
+ 		/* reset timestamping */
+ 		inv_sensors_timestamp_reset(&st->timestamp);
++		inv_sensors_timestamp_apply_odr(&st->timestamp, 0, 0, 0);
+ 		/* reset FIFO */
+ 		d = st->chip_config.user_ctrl | INV_MPU6050_BIT_FIFO_RST;
+ 		ret = regmap_write(st->map, st->reg->user_ctrl, d);
+@@ -184,6 +185,10 @@ static int inv_mpu6050_set_enable(struct iio_dev *indio_dev, bool enable)
+ 		if (result)
+ 			goto error_power_off;
+ 	} else {
++		st->chip_config.gyro_fifo_enable = 0;
++		st->chip_config.accl_fifo_enable = 0;
++		st->chip_config.temp_fifo_enable = 0;
++		st->chip_config.magn_fifo_enable = 0;
+ 		result = inv_mpu6050_prepare_fifo(st, false);
+ 		if (result)
+ 			goto error_power_off;
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 6e19974ecf6e7..253fea374a72d 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -2498,7 +2498,7 @@ static void dispatch_event_fd(struct list_head *fd_list,
+ 
+ 	list_for_each_entry_rcu(item, fd_list, xa_list) {
+ 		if (item->eventfd)
+-			eventfd_signal(item->eventfd, 1);
++			eventfd_signal(item->eventfd);
+ 		else
+ 			deliver_event(item, data);
+ 	}
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index d0bb3edfd0a09..c11af4441cf25 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -130,7 +130,12 @@ static const struct xpad_device {
+ 	{ 0x0079, 0x18d4, "GPD Win 2 X-Box Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x03eb, 0xff01, "Wooting One (Legacy)", 0, XTYPE_XBOX360 },
+ 	{ 0x03eb, 0xff02, "Wooting Two (Legacy)", 0, XTYPE_XBOX360 },
++	{ 0x03f0, 0x038D, "HyperX Clutch", 0, XTYPE_XBOX360 },			/* wired */
++	{ 0x03f0, 0x048D, "HyperX Clutch", 0, XTYPE_XBOX360 },			/* wireless */
+ 	{ 0x03f0, 0x0495, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE },
++	{ 0x03f0, 0x07A0, "HyperX Clutch Gladiate RGB", 0, XTYPE_XBOXONE },
++	{ 0x03f0, 0x08B6, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE },		/* v2 */
++	{ 0x03f0, 0x09B4, "HyperX Clutch Tanto", 0, XTYPE_XBOXONE },
+ 	{ 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ 	{ 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ 	{ 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+@@ -463,6 +468,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	{ USB_INTERFACE_INFO('X', 'B', 0) },	/* Xbox USB-IF not-approved class */
+ 	XPAD_XBOX360_VENDOR(0x0079),		/* GPD Win 2 controller */
+ 	XPAD_XBOX360_VENDOR(0x03eb),		/* Wooting Keyboards (Legacy) */
++	XPAD_XBOX360_VENDOR(0x03f0),		/* HP HyperX Xbox 360 controllers */
+ 	XPAD_XBOXONE_VENDOR(0x03f0),		/* HP HyperX Xbox One controllers */
+ 	XPAD_XBOX360_VENDOR(0x044f),		/* Thrustmaster Xbox 360 controllers */
+ 	XPAD_XBOX360_VENDOR(0x045e),		/* Microsoft Xbox 360 controllers */
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 037fcf826407f..a0767ce1bd133 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -1706,6 +1706,14 @@ static size_t iommu_dma_opt_mapping_size(void)
+ 	return iova_rcache_range();
+ }
+ 
++static size_t iommu_dma_max_mapping_size(struct device *dev)
++{
++	if (dev_is_untrusted(dev))
++		return swiotlb_max_mapping_size(dev);
++
++	return SIZE_MAX;
++}
++
+ static const struct dma_map_ops iommu_dma_ops = {
+ 	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
+ 	.alloc			= iommu_dma_alloc,
+@@ -1728,6 +1736,7 @@ static const struct dma_map_ops iommu_dma_ops = {
+ 	.unmap_resource		= iommu_dma_unmap_resource,
+ 	.get_merge_boundary	= iommu_dma_get_merge_boundary,
+ 	.opt_mapping_size	= iommu_dma_opt_mapping_size,
++	.max_mapping_size       = iommu_dma_max_mapping_size,
+ };
+ 
+ /*
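
Untrusted devices (e.g. behind an external-facing port) are bounce-buffered through swiotlb even when an IOMMU is present, so a single DMA mapping cannot exceed a swiotlb allocation; exposing .max_mapping_size lets drivers size their requests accordingly through dma_max_mapping_size(). A sketch of the consumer side, assuming a block driver with a request queue q (my_set_seg_limit is hypothetical):

	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	static void my_set_seg_limit(struct request_queue *q, struct device *dev)
	{
		/* SIZE_MAX when the DMA layer imposes no limit */
		size_t max = dma_max_mapping_size(dev);

		blk_queue_max_segment_size(q, min_t(size_t, max, SZ_1M));
	}
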
+diff --git a/drivers/irqchip/irq-renesas-rzg2l.c b/drivers/irqchip/irq-renesas-rzg2l.c
+index fe8d516f36149..dc822111fc5d5 100644
+--- a/drivers/irqchip/irq-renesas-rzg2l.c
++++ b/drivers/irqchip/irq-renesas-rzg2l.c
+@@ -28,8 +28,7 @@
+ #define ISCR				0x10
+ #define IITSR				0x14
+ #define TSCR				0x20
+-#define TITSR0				0x24
+-#define TITSR1				0x28
++#define TITSR(n)			(0x24 + (n) * 4)
+ #define TITSR0_MAX_INT			16
+ #define TITSEL_WIDTH			0x2
+ #define TSSR(n)				(0x30 + ((n) * 4))
+@@ -67,28 +66,43 @@ static struct rzg2l_irqc_priv *irq_data_to_priv(struct irq_data *data)
+ 	return data->domain->host_data;
+ }
+ 
+-static void rzg2l_irq_eoi(struct irq_data *d)
++static void rzg2l_clear_irq_int(struct rzg2l_irqc_priv *priv, unsigned int hwirq)
+ {
+-	unsigned int hw_irq = irqd_to_hwirq(d) - IRQC_IRQ_START;
+-	struct rzg2l_irqc_priv *priv = irq_data_to_priv(d);
++	unsigned int hw_irq = hwirq - IRQC_IRQ_START;
+ 	u32 bit = BIT(hw_irq);
+-	u32 reg;
++	u32 iitsr, iscr;
+ 
+-	reg = readl_relaxed(priv->base + ISCR);
+-	if (reg & bit)
+-		writel_relaxed(reg & ~bit, priv->base + ISCR);
++	iscr = readl_relaxed(priv->base + ISCR);
++	iitsr = readl_relaxed(priv->base + IITSR);
++
++	/*
++	 * ISCR can only be cleared if the type is falling-edge, rising-edge or
++	 * falling/rising-edge.
++	 */
++	if ((iscr & bit) && (iitsr & IITSR_IITSEL_MASK(hw_irq))) {
++		writel_relaxed(iscr & ~bit, priv->base + ISCR);
++		/*
++		 * Enforce that the posted write is flushed to prevent the
++		 * just handled interrupt from being raised again.
++		 */
++		readl_relaxed(priv->base + ISCR);
++	}
+ }
+ 
+-static void rzg2l_tint_eoi(struct irq_data *d)
++static void rzg2l_clear_tint_int(struct rzg2l_irqc_priv *priv, unsigned int hwirq)
+ {
+-	unsigned int hw_irq = irqd_to_hwirq(d) - IRQC_TINT_START;
+-	struct rzg2l_irqc_priv *priv = irq_data_to_priv(d);
+-	u32 bit = BIT(hw_irq);
++	u32 bit = BIT(hwirq - IRQC_TINT_START);
+ 	u32 reg;
+ 
+ 	reg = readl_relaxed(priv->base + TSCR);
+-	if (reg & bit)
++	if (reg & bit) {
+ 		writel_relaxed(reg & ~bit, priv->base + TSCR);
++		/*
++		 * Enforce that the posted write is flushed to prevent the
++		 * just handled interrupt from being raised again.
++		 */
++		readl_relaxed(priv->base + TSCR);
++	}
+ }
+ 
+ static void rzg2l_irqc_eoi(struct irq_data *d)
+@@ -98,9 +112,9 @@ static void rzg2l_irqc_eoi(struct irq_data *d)
+ 
+ 	raw_spin_lock(&priv->lock);
+ 	if (hw_irq >= IRQC_IRQ_START && hw_irq <= IRQC_IRQ_COUNT)
+-		rzg2l_irq_eoi(d);
++		rzg2l_clear_irq_int(priv, hw_irq);
+ 	else if (hw_irq >= IRQC_TINT_START && hw_irq < IRQC_NUM_IRQ)
+-		rzg2l_tint_eoi(d);
++		rzg2l_clear_tint_int(priv, hw_irq);
+ 	raw_spin_unlock(&priv->lock);
+ 	irq_chip_eoi_parent(d);
+ }
+@@ -148,8 +162,10 @@ static void rzg2l_irqc_irq_enable(struct irq_data *d)
+ 
+ static int rzg2l_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+-	unsigned int hw_irq = irqd_to_hwirq(d) - IRQC_IRQ_START;
+ 	struct rzg2l_irqc_priv *priv = irq_data_to_priv(d);
++	unsigned int hwirq = irqd_to_hwirq(d);
++	u32 iitseln = hwirq - IRQC_IRQ_START;
++	bool clear_irq_int = false;
+ 	u16 sense, tmp;
+ 
+ 	switch (type & IRQ_TYPE_SENSE_MASK) {
+@@ -159,14 +175,17 @@ static int rzg2l_irq_set_type(struct irq_data *d, unsigned int type)
+ 
+ 	case IRQ_TYPE_EDGE_FALLING:
+ 		sense = IITSR_IITSEL_EDGE_FALLING;
++		clear_irq_int = true;
+ 		break;
+ 
+ 	case IRQ_TYPE_EDGE_RISING:
+ 		sense = IITSR_IITSEL_EDGE_RISING;
++		clear_irq_int = true;
+ 		break;
+ 
+ 	case IRQ_TYPE_EDGE_BOTH:
+ 		sense = IITSR_IITSEL_EDGE_BOTH;
++		clear_irq_int = true;
+ 		break;
+ 
+ 	default:
+@@ -175,22 +194,40 @@ static int rzg2l_irq_set_type(struct irq_data *d, unsigned int type)
+ 
+ 	raw_spin_lock(&priv->lock);
+ 	tmp = readl_relaxed(priv->base + IITSR);
+-	tmp &= ~IITSR_IITSEL_MASK(hw_irq);
+-	tmp |= IITSR_IITSEL(hw_irq, sense);
++	tmp &= ~IITSR_IITSEL_MASK(iitseln);
++	tmp |= IITSR_IITSEL(iitseln, sense);
++	if (clear_irq_int)
++		rzg2l_clear_irq_int(priv, hwirq);
+ 	writel_relaxed(tmp, priv->base + IITSR);
+ 	raw_spin_unlock(&priv->lock);
+ 
+ 	return 0;
+ }
+ 
++static u32 rzg2l_disable_tint_and_set_tint_source(struct irq_data *d, struct rzg2l_irqc_priv *priv,
++						  u32 reg, u32 tssr_offset, u8 tssr_index)
++{
++	u32 tint = (u32)(uintptr_t)irq_data_get_irq_chip_data(d);
++	u32 tien = reg & (TIEN << TSSEL_SHIFT(tssr_offset));
++
++	/* Clear the relevant byte in reg */
++	reg &= ~(TSSEL_MASK << TSSEL_SHIFT(tssr_offset));
++	/* Set TINT and leave TIEN clear */
++	reg |= tint << TSSEL_SHIFT(tssr_offset);
++	writel_relaxed(reg, priv->base + TSSR(tssr_index));
++
++	return reg | tien;
++}
++
+ static int rzg2l_tint_set_edge(struct irq_data *d, unsigned int type)
+ {
+ 	struct rzg2l_irqc_priv *priv = irq_data_to_priv(d);
+ 	unsigned int hwirq = irqd_to_hwirq(d);
+ 	u32 titseln = hwirq - IRQC_TINT_START;
+-	u32 offset;
+-	u8 sense;
+-	u32 reg;
++	u32 tssr_offset = TSSR_OFFSET(titseln);
++	u8 tssr_index = TSSR_INDEX(titseln);
++	u8 index, sense;
++	u32 reg, tssr;
+ 
+ 	switch (type & IRQ_TYPE_SENSE_MASK) {
+ 	case IRQ_TYPE_EDGE_RISING:
+@@ -205,17 +242,21 @@ static int rzg2l_tint_set_edge(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
+-	offset = TITSR0;
++	index = 0;
+ 	if (titseln >= TITSR0_MAX_INT) {
+ 		titseln -= TITSR0_MAX_INT;
+-		offset = TITSR1;
++		index = 1;
+ 	}
+ 
+ 	raw_spin_lock(&priv->lock);
+-	reg = readl_relaxed(priv->base + offset);
++	tssr = readl_relaxed(priv->base + TSSR(tssr_index));
++	tssr = rzg2l_disable_tint_and_set_tint_source(d, priv, tssr, tssr_offset, tssr_index);
++	reg = readl_relaxed(priv->base + TITSR(index));
+ 	reg &= ~(IRQ_MASK << (titseln * TITSEL_WIDTH));
+ 	reg |= sense << (titseln * TITSEL_WIDTH);
+-	writel_relaxed(reg, priv->base + offset);
++	writel_relaxed(reg, priv->base + TITSR(index));
++	rzg2l_clear_tint_int(priv, hwirq);
++	writel_relaxed(tssr, priv->base + TSSR(tssr_index));
+ 	raw_spin_unlock(&priv->lock);
+ 
+ 	return 0;
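
Both clear paths now read the status register back after clearing it. With relaxed MMIO accessors the store can still sit in a write buffer when the handler returns; the dummy read forces the posted write out to the device, so the just-acknowledged interrupt cannot fire a second time from stale status. The generic pattern as a sketch (my_ack_status is a hypothetical helper):

	static void my_ack_status(void __iomem *status, u32 bit)
	{
		u32 reg = readl_relaxed(status);

		if (reg & bit) {
			writel_relaxed(reg & ~bit, status);
			/* flush the posted write before returning */
			readl_relaxed(status);
		}
	}
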
+diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
+index d76214fa9ad86..79719fc8a08fb 100644
+--- a/drivers/leds/trigger/ledtrig-netdev.c
++++ b/drivers/leds/trigger/ledtrig-netdev.c
+@@ -462,12 +462,12 @@ static int netdev_trig_notify(struct notifier_block *nb,
+ 	trigger_data->duplex = DUPLEX_UNKNOWN;
+ 	switch (evt) {
+ 	case NETDEV_CHANGENAME:
+-		get_device_state(trigger_data);
+-		fallthrough;
+ 	case NETDEV_REGISTER:
+ 		dev_put(trigger_data->net_dev);
+ 		dev_hold(dev);
+ 		trigger_data->net_dev = dev;
++		if (evt == NETDEV_CHANGENAME)
++			get_device_state(trigger_data);
+ 		break;
+ 	case NETDEV_UNREGISTER:
+ 		dev_put(trigger_data->net_dev);
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 13eb47b997f94..d97355e9b9a6e 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -213,6 +213,7 @@ struct raid_dev {
+ #define RT_FLAG_RS_IN_SYNC		6
+ #define RT_FLAG_RS_RESYNCING		7
+ #define RT_FLAG_RS_GROW			8
++#define RT_FLAG_RS_FROZEN		9
+ 
+ /* Array elements of 64 bit needed for rebuild/failed disk bits */
+ #define DISKS_ARRAY_ELEMS ((MAX_RAID_DEVICES + (sizeof(uint64_t) * 8 - 1)) / sizeof(uint64_t) / 8)
+@@ -3240,11 +3241,12 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	rs->md.ro = 1;
+ 	rs->md.in_sync = 1;
+ 
+-	/* Keep array frozen until resume. */
+-	set_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);
+-
+ 	/* Has to be held on running the array */
+ 	mddev_suspend_and_lock_nointr(&rs->md);
++
++	/* Keep array frozen until resume. */
++	md_frozen_sync_thread(&rs->md);
++
+ 	r = md_run(&rs->md);
+ 	rs->md.in_sync = 0; /* Assume already marked dirty */
+ 	if (r) {
+@@ -3339,7 +3341,8 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
+ 	if (unlikely(bio_has_data(bio) && bio_end_sector(bio) > mddev->array_sectors))
+ 		return DM_MAPIO_REQUEUE;
+ 
+-	md_handle_request(mddev, bio);
++	if (unlikely(!md_handle_request(mddev, bio)))
++		return DM_MAPIO_REQUEUE;
+ 
+ 	return DM_MAPIO_SUBMITTED;
+ }
+@@ -3718,21 +3721,33 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
+ {
+ 	struct raid_set *rs = ti->private;
+ 	struct mddev *mddev = &rs->md;
++	int ret = 0;
+ 
+ 	if (!mddev->pers || !mddev->pers->sync_request)
+ 		return -EINVAL;
+ 
+-	if (!strcasecmp(argv[0], "frozen"))
+-		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-	else
+-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++	if (test_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags) ||
++	    test_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags))
++		return -EBUSY;
+ 
+-	if (!strcasecmp(argv[0], "idle") || !strcasecmp(argv[0], "frozen")) {
+-		if (mddev->sync_thread) {
+-			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+-			md_reap_sync_thread(mddev);
+-		}
+-	} else if (decipher_sync_action(mddev, mddev->recovery) != st_idle)
++	if (!strcasecmp(argv[0], "frozen")) {
++		ret = mddev_lock(mddev);
++		if (ret)
++			return ret;
++
++		md_frozen_sync_thread(mddev);
++		mddev_unlock(mddev);
++	} else if (!strcasecmp(argv[0], "idle")) {
++		ret = mddev_lock(mddev);
++		if (ret)
++			return ret;
++
++		md_idle_sync_thread(mddev);
++		mddev_unlock(mddev);
++	}
++
++	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++	if (decipher_sync_action(mddev, mddev->recovery) != st_idle)
+ 		return -EBUSY;
+ 	else if (!strcasecmp(argv[0], "resync"))
+ 		; /* MD_RECOVERY_NEEDED set below */
+@@ -3791,15 +3806,46 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 	blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
+ }
+ 
++static void raid_presuspend(struct dm_target *ti)
++{
++	struct raid_set *rs = ti->private;
++	struct mddev *mddev = &rs->md;
++
++	/*
++	 * From now on, disallow raid_message() to change sync_thread until
++	 * resume, raid_postsuspend() is too late.
++	 */
++	set_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags);
++
++	if (!reshape_interrupted(mddev))
++		return;
++
++	/*
++	 * For raid456, if reshape is interrupted, IO across reshape position
++	 * will never make progress, while the caller will wait for IO to be
++	 * done. Inform raid456 to handle that IO to prevent a deadlock.
++	 */
++	if (mddev->pers && mddev->pers->prepare_suspend)
++		mddev->pers->prepare_suspend(mddev);
++}
++
++static void raid_presuspend_undo(struct dm_target *ti)
++{
++	struct raid_set *rs = ti->private;
++
++	clear_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags);
++}
++
+ static void raid_postsuspend(struct dm_target *ti)
+ {
+ 	struct raid_set *rs = ti->private;
+ 
+ 	if (!test_and_set_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
+-		/* Writes have to be stopped before suspending to avoid deadlocks. */
+-		if (!test_bit(MD_RECOVERY_FROZEN, &rs->md.recovery))
+-			md_stop_writes(&rs->md);
+-
++		/*
++		 * sync_thread must be stopped during suspend, and writes have
++		 * to be stopped before suspending to avoid deadlocks.
++		 */
++		md_stop_writes(&rs->md);
+ 		mddev_suspend(&rs->md, false);
+ 	}
+ }
+@@ -4012,8 +4058,6 @@ static int raid_preresume(struct dm_target *ti)
+ 	}
+ 
+ 	/* Check for any resize/reshape on @rs and adjust/initiate */
+-	/* Be prepared for mddev_resume() in raid_resume() */
+-	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 	if (mddev->recovery_cp && mddev->recovery_cp < MaxSector) {
+ 		set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
+ 		mddev->resync_min = mddev->recovery_cp;
+@@ -4047,7 +4091,9 @@ static void raid_resume(struct dm_target *ti)
+ 		 * Take this opportunity to check whether any failed
+ 		 * devices are reachable again.
+ 		 */
++		mddev_lock_nointr(mddev);
+ 		attempt_restore_of_faulty_devices(rs);
++		mddev_unlock(mddev);
+ 	}
+ 
+ 	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
+@@ -4055,10 +4101,13 @@ static void raid_resume(struct dm_target *ti)
+ 		if (mddev->delta_disks < 0)
+ 			rs_set_capacity(rs);
+ 
++		WARN_ON_ONCE(!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery));
++		WARN_ON_ONCE(test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
++		clear_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags);
+ 		mddev_lock_nointr(mddev);
+-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+ 		mddev->ro = 0;
+ 		mddev->in_sync = 0;
++		md_unfrozen_sync_thread(mddev);
+ 		mddev_unlock_and_resume(mddev);
+ 	}
+ }
+@@ -4074,6 +4123,8 @@ static struct target_type raid_target = {
+ 	.message = raid_message,
+ 	.iterate_devices = raid_iterate_devices,
+ 	.io_hints = raid_io_hints,
++	.presuspend = raid_presuspend,
++	.presuspend_undo = raid_presuspend_undo,
+ 	.postsuspend = raid_postsuspend,
+ 	.preresume = raid_preresume,
+ 	.resume = raid_resume,
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
+index bf7a574499a34..0ace06d1bee38 100644
+--- a/drivers/md/dm-snap.c
++++ b/drivers/md/dm-snap.c
+@@ -684,8 +684,10 @@ static void dm_exception_table_exit(struct dm_exception_table *et,
+ 	for (i = 0; i < size; i++) {
+ 		slot = et->table + i;
+ 
+-		hlist_bl_for_each_entry_safe(ex, pos, n, slot, hash_list)
++		hlist_bl_for_each_entry_safe(ex, pos, n, slot, hash_list) {
+ 			kmem_cache_free(mem, ex);
++			cond_resched();
++		}
+ 	}
+ 
+ 	kvfree(et->table);
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 9672f75c30503..a4976ceae8688 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -234,7 +234,8 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
+ 	sector_t doff;
+ 
+ 	bdev = (rdev->meta_bdev) ? rdev->meta_bdev : rdev->bdev;
+-	if (pg_index == store->file_pages - 1) {
++	/* we compare length (page numbers), not page offset. */
++	if ((pg_index - store->sb_index) == store->file_pages - 1) {
+ 		unsigned int last_page_size = store->bytes & (PAGE_SIZE - 1);
+ 
+ 		if (last_page_size == 0)
+@@ -438,8 +439,8 @@ static void filemap_write_page(struct bitmap *bitmap, unsigned long pg_index,
+ 	struct page *page = store->filemap[pg_index];
+ 
+ 	if (mddev_is_clustered(bitmap->mddev)) {
+-		pg_index += bitmap->cluster_slot *
+-			DIV_ROUND_UP(store->bytes, PAGE_SIZE);
++		/* go to node bitmap area starting point */
++		pg_index += store->sb_index;
+ 	}
+ 
+ 	if (store->file)
+@@ -952,6 +953,7 @@ static void md_bitmap_file_set_bit(struct bitmap *bitmap, sector_t block)
+ 	unsigned long index = file_page_index(store, chunk);
+ 	unsigned long node_offset = 0;
+ 
++	index += store->sb_index;
+ 	if (mddev_is_clustered(bitmap->mddev))
+ 		node_offset = bitmap->cluster_slot * store->file_pages;
+ 
+@@ -982,6 +984,7 @@ static void md_bitmap_file_clear_bit(struct bitmap *bitmap, sector_t block)
+ 	unsigned long index = file_page_index(store, chunk);
+ 	unsigned long node_offset = 0;
+ 
++	index += store->sb_index;
+ 	if (mddev_is_clustered(bitmap->mddev))
+ 		node_offset = bitmap->cluster_slot * store->file_pages;
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 99b60d37114c4..67befb598cdd0 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -99,18 +99,6 @@ static void mddev_detach(struct mddev *mddev);
+ static void export_rdev(struct md_rdev *rdev, struct mddev *mddev);
+ static void md_wakeup_thread_directly(struct md_thread __rcu *thread);
+ 
+-enum md_ro_state {
+-	MD_RDWR,
+-	MD_RDONLY,
+-	MD_AUTO_READ,
+-	MD_MAX_STATE
+-};
+-
+-static bool md_is_rdwr(struct mddev *mddev)
+-{
+-	return (mddev->ro == MD_RDWR);
+-}
+-
+ /*
+  * Default number of read corrections we'll attempt on an rdev
+  * before ejecting it from the array. We divide the read error
+@@ -378,7 +366,7 @@ static bool is_suspended(struct mddev *mddev, struct bio *bio)
+ 	return true;
+ }
+ 
+-void md_handle_request(struct mddev *mddev, struct bio *bio)
++bool md_handle_request(struct mddev *mddev, struct bio *bio)
+ {
+ check_suspended:
+ 	if (is_suspended(mddev, bio)) {
+@@ -386,7 +374,7 @@ void md_handle_request(struct mddev *mddev, struct bio *bio)
+ 		/* Bail out if REQ_NOWAIT is set for the bio */
+ 		if (bio->bi_opf & REQ_NOWAIT) {
+ 			bio_wouldblock_error(bio);
+-			return;
++			return true;
+ 		}
+ 		for (;;) {
+ 			prepare_to_wait(&mddev->sb_wait, &__wait,
+@@ -402,10 +390,13 @@ void md_handle_request(struct mddev *mddev, struct bio *bio)
+ 
+ 	if (!mddev->pers->make_request(mddev, bio)) {
+ 		percpu_ref_put(&mddev->active_io);
++		if (!mddev->gendisk && mddev->pers->prepare_suspend)
++			return false;
+ 		goto check_suspended;
+ 	}
+ 
+ 	percpu_ref_put(&mddev->active_io);
++	return true;
+ }
+ EXPORT_SYMBOL(md_handle_request);
+ 
+@@ -4945,6 +4936,35 @@ static void stop_sync_thread(struct mddev *mddev, bool locked, bool check_seq)
+ 		mddev_lock_nointr(mddev);
+ }
+ 
++void md_idle_sync_thread(struct mddev *mddev)
++{
++	lockdep_assert_held(&mddev->reconfig_mutex);
++
++	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++	stop_sync_thread(mddev, true, true);
++}
++EXPORT_SYMBOL_GPL(md_idle_sync_thread);
++
++void md_frozen_sync_thread(struct mddev *mddev)
++{
++	lockdep_assert_held(&mddev->reconfig_mutex);
++
++	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++	stop_sync_thread(mddev, true, false);
++}
++EXPORT_SYMBOL_GPL(md_frozen_sync_thread);
++
++void md_unfrozen_sync_thread(struct mddev *mddev)
++{
++	lockdep_assert_held(&mddev->reconfig_mutex);
++
++	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++	md_wakeup_thread(mddev->thread);
++	sysfs_notify_dirent_safe(mddev->sysfs_action);
++}
++EXPORT_SYMBOL_GPL(md_unfrozen_sync_thread);
++
+ static void idle_sync_thread(struct mddev *mddev)
+ {
+ 	mutex_lock(&mddev->sync_mutex);
+@@ -6064,7 +6084,10 @@ int md_run(struct mddev *mddev)
+ 			pr_warn("True protection against single-disk failure might be compromised.\n");
+ 	}
+ 
+-	mddev->recovery = 0;
++	/* dm-raid expects sync_thread to be frozen until resume */
++	if (mddev->gendisk)
++		mddev->recovery = 0;
++
+ 	/* may be over-ridden by personality */
+ 	mddev->resync_max_sectors = mddev->dev_sectors;
+ 
+@@ -6349,7 +6372,6 @@ static void md_clean(struct mddev *mddev)
+ 
+ static void __md_stop_writes(struct mddev *mddev)
+ {
+-	stop_sync_thread(mddev, true, false);
+ 	del_timer_sync(&mddev->safemode_timer);
+ 
+ 	if (mddev->pers && mddev->pers->quiesce) {
+@@ -6374,6 +6396,8 @@ static void __md_stop_writes(struct mddev *mddev)
+ void md_stop_writes(struct mddev *mddev)
+ {
+ 	mddev_lock_nointr(mddev);
++	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++	stop_sync_thread(mddev, true, false);
+ 	__md_stop_writes(mddev);
+ 	mddev_unlock(mddev);
+ }
+@@ -8769,6 +8793,23 @@ void md_account_bio(struct mddev *mddev, struct bio **bio)
+ }
+ EXPORT_SYMBOL_GPL(md_account_bio);
+ 
++void md_free_cloned_bio(struct bio *bio)
++{
++	struct md_io_clone *md_io_clone = bio->bi_private;
++	struct bio *orig_bio = md_io_clone->orig_bio;
++	struct mddev *mddev = md_io_clone->mddev;
++
++	if (bio->bi_status && !orig_bio->bi_status)
++		orig_bio->bi_status = bio->bi_status;
++
++	if (md_io_clone->start_time)
++		bio_end_io_acct(orig_bio, md_io_clone->start_time);
++
++	bio_put(bio);
++	percpu_ref_put(&mddev->active_io);
++}
++EXPORT_SYMBOL_GPL(md_free_cloned_bio);
++
+ /* md_allow_write(mddev)
+  * Calling this ensures that the array is marked 'active' so that writes
+  * may proceed without blocking.  It is important to call this before
+@@ -9302,9 +9343,14 @@ static bool md_spares_need_change(struct mddev *mddev)
+ {
+ 	struct md_rdev *rdev;
+ 
+-	rdev_for_each(rdev, mddev)
+-		if (rdev_removeable(rdev) || rdev_addable(rdev))
++	rcu_read_lock();
++	rdev_for_each_rcu(rdev, mddev) {
++		if (rdev_removeable(rdev) || rdev_addable(rdev)) {
++			rcu_read_unlock();
+ 			return true;
++		}
++	}
++	rcu_read_unlock();
+ 	return false;
+ }
+ 
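
md_spares_need_change() now walks the rdev list under rcu_read_lock() with rdev_for_each_rcu(), pairing every early return with an unlock. Sketched generically (my_item and my_list_has_ready are hypothetical):

	#include <linux/rculist.h>

	struct my_item {
		struct list_head node;
		bool ready;
	};

	static bool my_list_has_ready(struct list_head *head)
	{
		struct my_item *item;
		bool found = false;

		rcu_read_lock();
		list_for_each_entry_rcu(item, head, node) {
			if (READ_ONCE(item->ready)) {
				found = true;
				break;
			}
		}
		rcu_read_unlock();
		return found;
	}

Collecting the result and unlocking once at the end avoids the easy-to-miss unlock-before-return bug the md hunk has to handle explicitly.
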
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index 27d187ca6258a..375ad4a2df71d 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -559,6 +559,37 @@ enum recovery_flags {
+ 	MD_RESYNCING_REMOTE,	/* remote node is running resync thread */
+ };
+ 
++enum md_ro_state {
++	MD_RDWR,
++	MD_RDONLY,
++	MD_AUTO_READ,
++	MD_MAX_STATE
++};
++
++static inline bool md_is_rdwr(struct mddev *mddev)
++{
++	return (mddev->ro == MD_RDWR);
++}
++
++static inline bool reshape_interrupted(struct mddev *mddev)
++{
++	/* reshape never started */
++	if (mddev->reshape_position == MaxSector)
++		return false;
++
++	/* interrupted */
++	if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
++		return true;
++
++	/* running reshape will be interrupted soon. */
++	if (test_bit(MD_RECOVERY_WAIT, &mddev->recovery) ||
++	    test_bit(MD_RECOVERY_INTR, &mddev->recovery) ||
++	    test_bit(MD_RECOVERY_FROZEN, &mddev->recovery))
++		return true;
++
++	return false;
++}
++
+ static inline int __must_check mddev_lock(struct mddev *mddev)
+ {
+ 	return mutex_lock_interruptible(&mddev->reconfig_mutex);
+@@ -618,6 +649,7 @@ struct md_personality
+ 	int (*start_reshape) (struct mddev *mddev);
+ 	void (*finish_reshape) (struct mddev *mddev);
+ 	void (*update_reshape_pos) (struct mddev *mddev);
++	void (*prepare_suspend) (struct mddev *mddev);
+ 	/* quiesce suspends or resumes internal processing.
+ 	 * 1 - stop new actions and wait for action io to complete
+ 	 * 0 - return to normal behaviour
+@@ -751,6 +783,7 @@ extern void md_finish_reshape(struct mddev *mddev);
+ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+ 			struct bio *bio, sector_t start, sector_t size);
+ void md_account_bio(struct mddev *mddev, struct bio **bio);
++void md_free_cloned_bio(struct bio *bio);
+ 
+ extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
+ extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
+@@ -779,9 +812,12 @@ extern void md_stop_writes(struct mddev *mddev);
+ extern int md_rdev_init(struct md_rdev *rdev);
+ extern void md_rdev_clear(struct md_rdev *rdev);
+ 
+-extern void md_handle_request(struct mddev *mddev, struct bio *bio);
++extern bool md_handle_request(struct mddev *mddev, struct bio *bio);
+ extern int mddev_suspend(struct mddev *mddev, bool interruptible);
+ extern void mddev_resume(struct mddev *mddev);
++extern void md_idle_sync_thread(struct mddev *mddev);
++extern void md_frozen_sync_thread(struct mddev *mddev);
++extern void md_unfrozen_sync_thread(struct mddev *mddev);
+ 
+ extern void md_reload_sb(struct mddev *mddev, int raid_disk);
+ extern void md_update_sb(struct mddev *mddev, int force);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index f03e4231bec11..e1d8b5199f81e 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -763,6 +763,7 @@ enum stripe_result {
+ 	STRIPE_RETRY,
+ 	STRIPE_SCHEDULE_AND_RETRY,
+ 	STRIPE_FAIL,
++	STRIPE_WAIT_RESHAPE,
+ };
+ 
+ struct stripe_request_ctx {
+@@ -2422,7 +2423,7 @@ static int grow_one_stripe(struct r5conf *conf, gfp_t gfp)
+ 	atomic_inc(&conf->active_stripes);
+ 
+ 	raid5_release_stripe(sh);
+-	conf->max_nr_stripes++;
++	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes + 1);
+ 	return 1;
+ }
+ 
+@@ -2717,7 +2718,7 @@ static int drop_one_stripe(struct r5conf *conf)
+ 	shrink_buffers(sh);
+ 	free_stripe(conf->slab_cache, sh);
+ 	atomic_dec(&conf->active_stripes);
+-	conf->max_nr_stripes--;
++	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes - 1);
+ 	return 1;
+ }
+ 
+@@ -5991,7 +5992,8 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
+ 			if (ahead_of_reshape(mddev, logical_sector,
+ 					     conf->reshape_safe)) {
+ 				spin_unlock_irq(&conf->device_lock);
+-				return STRIPE_SCHEDULE_AND_RETRY;
++				ret = STRIPE_SCHEDULE_AND_RETRY;
++				goto out;
+ 			}
+ 		}
+ 		spin_unlock_irq(&conf->device_lock);
+@@ -6070,6 +6072,12 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
+ 
+ out_release:
+ 	raid5_release_stripe(sh);
++out:
++	if (ret == STRIPE_SCHEDULE_AND_RETRY && reshape_interrupted(mddev)) {
++		bi->bi_status = BLK_STS_RESOURCE;
++		ret = STRIPE_WAIT_RESHAPE;
++		pr_err_ratelimited("dm-raid456: io across reshape position while reshape can't make progress\n");
++	}
+ 	return ret;
+ }
+ 
+@@ -6191,7 +6199,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ 	while (1) {
+ 		res = make_stripe_request(mddev, conf, &ctx, logical_sector,
+ 					  bi);
+-		if (res == STRIPE_FAIL)
++		if (res == STRIPE_FAIL || res == STRIPE_WAIT_RESHAPE)
+ 			break;
+ 
+ 		if (res == STRIPE_RETRY)
+@@ -6229,6 +6237,11 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ 
+ 	if (rw == WRITE)
+ 		md_write_end(mddev);
++	if (res == STRIPE_WAIT_RESHAPE) {
++		md_free_cloned_bio(bi);
++		return false;
++	}
++
+ 	bio_endio(bi);
+ 	return true;
+ }
+@@ -6878,7 +6891,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
+ 	if (size <= 16 || size > 32768)
+ 		return -EINVAL;
+ 
+-	conf->min_nr_stripes = size;
++	WRITE_ONCE(conf->min_nr_stripes, size);
+ 	mutex_lock(&conf->cache_size_mutex);
+ 	while (size < conf->max_nr_stripes &&
+ 	       drop_one_stripe(conf))
+@@ -6890,7 +6903,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
+ 	mutex_lock(&conf->cache_size_mutex);
+ 	while (size > conf->max_nr_stripes)
+ 		if (!grow_one_stripe(conf, GFP_KERNEL)) {
+-			conf->min_nr_stripes = conf->max_nr_stripes;
++			WRITE_ONCE(conf->min_nr_stripes, conf->max_nr_stripes);
+ 			result = -ENOMEM;
+ 			break;
+ 		}
+@@ -7448,11 +7461,13 @@ static unsigned long raid5_cache_count(struct shrinker *shrink,
+ 				       struct shrink_control *sc)
+ {
+ 	struct r5conf *conf = shrink->private_data;
++	int max_stripes = READ_ONCE(conf->max_nr_stripes);
++	int min_stripes = READ_ONCE(conf->min_nr_stripes);
+ 
+-	if (conf->max_nr_stripes < conf->min_nr_stripes)
++	if (max_stripes < min_stripes)
+ 		/* unlikely, but not impossible */
+ 		return 0;
+-	return conf->max_nr_stripes - conf->min_nr_stripes;
++	return max_stripes - min_stripes;
+ }
+ 
+ static struct r5conf *setup_conf(struct mddev *mddev)
+@@ -8981,6 +8996,18 @@ static int raid5_start(struct mddev *mddev)
+ 	return r5l_start(conf->log);
+ }
+ 
++/*
++ * This is only used for dm-raid456, where the caller has already frozen
++ * sync_thread; hence if reshape is still in progress, IO that is waiting
++ * for reshape can never complete now, so wake up and handle that IO.
++ */
++static void raid5_prepare_suspend(struct mddev *mddev)
++{
++	struct r5conf *conf = mddev->private;
++
++	wake_up(&conf->wait_for_overlap);
++}
++
+ static struct md_personality raid6_personality =
+ {
+ 	.name		= "raid6",
+@@ -9004,6 +9031,7 @@ static struct md_personality raid6_personality =
+ 	.quiesce	= raid5_quiesce,
+ 	.takeover	= raid6_takeover,
+ 	.change_consistency_policy = raid5_change_consistency_policy,
++	.prepare_suspend = raid5_prepare_suspend,
+ };
+ static struct md_personality raid5_personality =
+ {
+@@ -9028,6 +9056,7 @@ static struct md_personality raid5_personality =
+ 	.quiesce	= raid5_quiesce,
+ 	.takeover	= raid5_takeover,
+ 	.change_consistency_policy = raid5_change_consistency_policy,
++	.prepare_suspend = raid5_prepare_suspend,
+ };
+ 
+ static struct md_personality raid4_personality =
+@@ -9053,6 +9082,7 @@ static struct md_personality raid4_personality =
+ 	.quiesce	= raid5_quiesce,
+ 	.takeover	= raid4_takeover,
+ 	.change_consistency_policy = raid5_change_consistency_policy,
++	.prepare_suspend = raid5_prepare_suspend,
+ };
+ 
+ static int __init raid5_init(void)
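
raid5_cache_count() is a shrinker callback and runs without conf->cache_size_mutex, so the stripe counters are now published with WRITE_ONCE() and sampled exactly once each with READ_ONCE(); that prevents torn reads and stops the compiler from re-loading a counter between the sanity check and the subtraction. Minimal sketch (my_conf is hypothetical):

	struct my_conf {
		int max_nr;	/* written elsewhere, under a mutex, with WRITE_ONCE() */
		int min_nr;
	};

	static unsigned long my_cache_count(struct my_conf *conf)
	{
		int max = READ_ONCE(conf->max_nr);	/* sample exactly once */
		int min = READ_ONCE(conf->min_nr);

		if (max < min)	/* transient state while resizing */
			return 0;
		return max - min;
	}
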
+diff --git a/drivers/media/mc/mc-entity.c b/drivers/media/mc/mc-entity.c
+index 543a392f86357..0e28b9a7936ef 100644
+--- a/drivers/media/mc/mc-entity.c
++++ b/drivers/media/mc/mc-entity.c
+@@ -535,14 +535,15 @@ static int media_pipeline_walk_push(struct media_pipeline_walk *walk,
+ 
+ /*
+  * Move the top entry link cursor to the next link. If all links of the entry
+- * have been visited, pop the entry itself.
++ * have been visited, pop the entry itself. Return true if the entry has been
++ * popped.
+  */
+-static void media_pipeline_walk_pop(struct media_pipeline_walk *walk)
++static bool media_pipeline_walk_pop(struct media_pipeline_walk *walk)
+ {
+ 	struct media_pipeline_walk_entry *entry;
+ 
+ 	if (WARN_ON(walk->stack.top < 0))
+-		return;
++		return false;
+ 
+ 	entry = media_pipeline_walk_top(walk);
+ 
+@@ -552,7 +553,7 @@ static void media_pipeline_walk_pop(struct media_pipeline_walk *walk)
+ 			walk->stack.top);
+ 
+ 		walk->stack.top--;
+-		return;
++		return true;
+ 	}
+ 
+ 	entry->links = entry->links->next;
+@@ -560,6 +561,8 @@ static void media_pipeline_walk_pop(struct media_pipeline_walk *walk)
+ 	dev_dbg(walk->mdev->dev,
+ 		"media pipeline: moved entry %u to next link\n",
+ 		walk->stack.top);
++
++	return false;
+ }
+ 
+ /* Free all memory allocated while walking the pipeline. */
+@@ -605,30 +608,24 @@ static int media_pipeline_explore_next_link(struct media_pipeline *pipe,
+ 					    struct media_pipeline_walk *walk)
+ {
+ 	struct media_pipeline_walk_entry *entry = media_pipeline_walk_top(walk);
+-	struct media_pad *pad;
++	struct media_pad *origin;
+ 	struct media_link *link;
+ 	struct media_pad *local;
+ 	struct media_pad *remote;
++	bool last_link;
+ 	int ret;
+ 
+-	pad = entry->pad;
++	origin = entry->pad;
+ 	link = list_entry(entry->links, typeof(*link), list);
+-	media_pipeline_walk_pop(walk);
++	last_link = media_pipeline_walk_pop(walk);
+ 
+ 	dev_dbg(walk->mdev->dev,
+ 		"media pipeline: exploring link '%s':%u -> '%s':%u\n",
+ 		link->source->entity->name, link->source->index,
+ 		link->sink->entity->name, link->sink->index);
+ 
+-	/* Skip links that are not enabled. */
+-	if (!(link->flags & MEDIA_LNK_FL_ENABLED)) {
+-		dev_dbg(walk->mdev->dev,
+-			"media pipeline: skipping link (disabled)\n");
+-		return 0;
+-	}
+-
+ 	/* Get the local pad and remote pad. */
+-	if (link->source->entity == pad->entity) {
++	if (link->source->entity == origin->entity) {
+ 		local = link->source;
+ 		remote = link->sink;
+ 	} else {
+@@ -640,25 +637,64 @@ static int media_pipeline_explore_next_link(struct media_pipeline *pipe,
+ 	 * Skip links that originate from a different pad than the incoming pad
+ 	 * that is not connected internally in the entity to the incoming pad.
+ 	 */
+-	if (pad != local &&
+-	    !media_entity_has_pad_interdep(pad->entity, pad->index, local->index)) {
++	if (origin != local &&
++	    !media_entity_has_pad_interdep(origin->entity, origin->index,
++					   local->index)) {
+ 		dev_dbg(walk->mdev->dev,
+ 			"media pipeline: skipping link (no route)\n");
+-		return 0;
++		goto done;
+ 	}
+ 
+ 	/*
+-	 * Add the local and remote pads of the link to the pipeline and push
+-	 * them to the stack, if they're not already present.
++	 * Add the local pad of the link to the pipeline and push it to the
++	 * stack, if not already present.
+ 	 */
+ 	ret = media_pipeline_add_pad(pipe, walk, local);
+ 	if (ret)
+ 		return ret;
+ 
++	/* Similarly, add the remote pad, but only if the link is enabled. */
++	if (!(link->flags & MEDIA_LNK_FL_ENABLED)) {
++		dev_dbg(walk->mdev->dev,
++			"media pipeline: skipping link (disabled)\n");
++		goto done;
++	}
++
+ 	ret = media_pipeline_add_pad(pipe, walk, remote);
+ 	if (ret)
+ 		return ret;
+ 
++done:
++	/*
++	 * If we're done iterating over links, iterate over pads of the entity.
++	 * This is necessary to discover pads that are not connected with any
++	 * link. Those are dead ends from a pipeline exploration point of view,
++	 * but are still part of the pipeline and need to be added to enable
++	 * proper validation.
++	 */
++	if (!last_link)
++		return 0;
++
++	dev_dbg(walk->mdev->dev,
++		"media pipeline: adding unconnected pads of '%s'\n",
++		local->entity->name);
++
++	media_entity_for_each_pad(origin->entity, local) {
++		/*
++		 * Skip the origin pad (already handled), pads that have links
++		 * (already discovered through iterating over links) and pads
++		 * not internally connected.
++		 */
++		if (origin == local || local->num_links ||
++		    !media_entity_has_pad_interdep(origin->entity, origin->index,
++						   local->index))
++			continue;
++
++		ret = media_pipeline_add_pad(pipe, walk, local);
++		if (ret)
++			return ret;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -770,7 +806,6 @@ __must_check int __media_pipeline_start(struct media_pad *pad,
+ 		struct media_pad *pad = ppad->pad;
+ 		struct media_entity *entity = pad->entity;
+ 		bool has_enabled_link = false;
+-		bool has_link = false;
+ 		struct media_link *link;
+ 
+ 		dev_dbg(mdev->dev, "Validating pad '%s':%u\n", pad->entity->name,
+@@ -800,7 +835,6 @@ __must_check int __media_pipeline_start(struct media_pad *pad,
+ 			/* Record if the pad has links and enabled links. */
+ 			if (link->flags & MEDIA_LNK_FL_ENABLED)
+ 				has_enabled_link = true;
+-			has_link = true;
+ 
+ 			/*
+ 			 * Validate the link if it's enabled and has the
+@@ -838,7 +872,7 @@ __must_check int __media_pipeline_start(struct media_pad *pad,
+ 		 * 3. If the pad has the MEDIA_PAD_FL_MUST_CONNECT flag set,
+ 		 * ensure that it has either no link or an enabled link.
+ 		 */
+-		if ((pad->flags & MEDIA_PAD_FL_MUST_CONNECT) && has_link &&
++		if ((pad->flags & MEDIA_PAD_FL_MUST_CONNECT) &&
+ 		    !has_enabled_link) {
+ 			dev_dbg(mdev->dev,
+ 				"Pad '%s':%u must be connected by an enabled link\n",
+@@ -1038,6 +1072,9 @@ static void __media_entity_remove_link(struct media_entity *entity,
+ 
+ 	/* Remove the reverse links for a data link. */
+ 	if ((link->flags & MEDIA_LNK_FL_LINK_TYPE) == MEDIA_LNK_FL_DATA_LINK) {
++		link->source->num_links--;
++		link->sink->num_links--;
++
+ 		if (link->source->entity == entity)
+ 			remote = link->sink->entity;
+ 		else
+@@ -1092,6 +1129,11 @@ media_create_pad_link(struct media_entity *source, u16 source_pad,
+ 	struct media_link *link;
+ 	struct media_link *backlink;
+ 
++	if (flags & MEDIA_LNK_FL_LINK_TYPE)
++		return -EINVAL;
++
++	flags |= MEDIA_LNK_FL_DATA_LINK;
++
+ 	if (WARN_ON(!source || !sink) ||
+ 	    WARN_ON(source_pad >= source->num_pads) ||
+ 	    WARN_ON(sink_pad >= sink->num_pads))
+@@ -1107,7 +1149,7 @@ media_create_pad_link(struct media_entity *source, u16 source_pad,
+ 
+ 	link->source = &source->pads[source_pad];
+ 	link->sink = &sink->pads[sink_pad];
+-	link->flags = flags & ~MEDIA_LNK_FL_INTERFACE_LINK;
++	link->flags = flags;
+ 
+ 	/* Initialize graph object embedded at the new link */
+ 	media_gobj_create(source->graph_obj.mdev, MEDIA_GRAPH_LINK,
+@@ -1138,6 +1180,9 @@ media_create_pad_link(struct media_entity *source, u16 source_pad,
+ 	sink->num_links++;
+ 	source->num_links++;
+ 
++	link->source->num_links++;
++	link->sink->num_links++;
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(media_create_pad_link);
+diff --git a/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c b/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c
+index 792f031e032ae..c9a4d091b5707 100644
+--- a/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c
++++ b/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c
+@@ -161,7 +161,6 @@ mxc_isi_crossbar_xlate_streams(struct mxc_isi_crossbar *xbar,
+ 
+ 	pad = media_pad_remote_pad_first(&xbar->pads[sink_pad]);
+ 	sd = media_entity_to_v4l2_subdev(pad->entity);
+-
+ 	if (!sd) {
+ 		dev_dbg(xbar->isi->dev,
+ 			"no entity connected to crossbar input %u\n",
+@@ -465,7 +464,8 @@ int mxc_isi_crossbar_init(struct mxc_isi_dev *isi)
+ 	}
+ 
+ 	for (i = 0; i < xbar->num_sinks; ++i)
+-		xbar->pads[i].flags = MEDIA_PAD_FL_SINK;
++		xbar->pads[i].flags = MEDIA_PAD_FL_SINK
++				    | MEDIA_PAD_FL_MUST_CONNECT;
+ 	for (i = 0; i < xbar->num_sources; ++i)
+ 		xbar->pads[i + xbar->num_sinks].flags = MEDIA_PAD_FL_SOURCE;
+ 
+diff --git a/drivers/media/tuners/xc4000.c b/drivers/media/tuners/xc4000.c
+index 57ded9ff3f043..29bc63021c5aa 100644
+--- a/drivers/media/tuners/xc4000.c
++++ b/drivers/media/tuners/xc4000.c
+@@ -1515,10 +1515,10 @@ static int xc4000_get_frequency(struct dvb_frontend *fe, u32 *freq)
+ {
+ 	struct xc4000_priv *priv = fe->tuner_priv;
+ 
++	mutex_lock(&priv->lock);
+ 	*freq = priv->freq_hz + priv->freq_offset;
+ 
+ 	if (debug) {
+-		mutex_lock(&priv->lock);
+ 		if ((priv->cur_fw.type
+ 		     & (BASE | FM | DTV6 | DTV7 | DTV78 | DTV8)) == BASE) {
+ 			u16	snr = 0;
+@@ -1529,8 +1529,8 @@ static int xc4000_get_frequency(struct dvb_frontend *fe, u32 *freq)
+ 				return 0;
+ 			}
+ 		}
+-		mutex_unlock(&priv->lock);
+ 	}
++	mutex_unlock(&priv->lock);
+ 
+ 	dprintk(1, "%s()\n", __func__);
+ 
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 68d71b4b55bd3..f8fdf82238182 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -1773,6 +1773,7 @@ config TWL4030_CORE
+ 	bool "TI TWL4030/TWL5030/TWL6030/TPS659x0 Support"
+ 	depends on I2C=y
+ 	select IRQ_DOMAIN
++	select MFD_CORE
+ 	select REGMAP_I2C
+ 	help
+ 	  Say yes here if you have TWL4030 / TWL6030 family chip on your board.
+diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
+index ae5759200622c..cab11ed23b4f3 100644
+--- a/drivers/mfd/intel-lpss-pci.c
++++ b/drivers/mfd/intel-lpss-pci.c
+@@ -18,18 +18,29 @@
+ 
+ #include "intel-lpss.h"
+ 
+-/* Some DSDTs have an unused GEXP ACPI device conflicting with I2C4 resources */
+-static const struct pci_device_id ignore_resource_conflicts_ids[] = {
+-	/* Microsoft Surface Go (version 1) I2C4 */
+-	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_INTEL, 0x9d64, 0x152d, 0x1182), },
+-	/* Microsoft Surface Go 2 I2C4 */
+-	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_INTEL, 0x9d64, 0x152d, 0x1237), },
++static const struct pci_device_id quirk_ids[] = {
++	{
++		/* Microsoft Surface Go (version 1) I2C4 */
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_INTEL, 0x9d64, 0x152d, 0x1182),
++		.driver_data = QUIRK_IGNORE_RESOURCE_CONFLICTS,
++	},
++	{
++		/* Microsoft Surface Go 2 I2C4 */
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_INTEL, 0x9d64, 0x152d, 0x1237),
++		.driver_data = QUIRK_IGNORE_RESOURCE_CONFLICTS,
++	},
++	{
++		/* Dell XPS 9530 (2023) */
++		PCI_DEVICE_SUB(PCI_VENDOR_ID_INTEL, 0x51fb, 0x1028, 0x0beb),
++		.driver_data = QUIRK_CLOCK_DIVIDER_UNITY,
++	},
+ 	{ }
+ };
+ 
+ static int intel_lpss_pci_probe(struct pci_dev *pdev,
+ 				const struct pci_device_id *id)
+ {
++	const struct pci_device_id *quirk_pci_info;
+ 	struct intel_lpss_platform_info *info;
+ 	int ret;
+ 
+@@ -45,8 +56,9 @@ static int intel_lpss_pci_probe(struct pci_dev *pdev,
+ 	info->mem = &pdev->resource[0];
+ 	info->irq = pdev->irq;
+ 
+-	if (pci_match_id(ignore_resource_conflicts_ids, pdev))
+-		info->ignore_resource_conflicts = true;
++	quirk_pci_info = pci_match_id(quirk_ids, pdev);
++	if (quirk_pci_info)
++		info->quirks = quirk_pci_info->driver_data;
+ 
+ 	pdev->d3cold_delay = 0;
+ 
+diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
+index 00e7b578bb3e8..d422e88ba491b 100644
+--- a/drivers/mfd/intel-lpss.c
++++ b/drivers/mfd/intel-lpss.c
+@@ -292,6 +292,7 @@ static int intel_lpss_register_clock_divider(struct intel_lpss *lpss,
+ {
+ 	char name[32];
+ 	struct clk *tmp = *clk;
++	int ret;
+ 
+ 	snprintf(name, sizeof(name), "%s-enable", devname);
+ 	tmp = clk_register_gate(NULL, name, __clk_get_name(tmp), 0,
+@@ -308,6 +309,12 @@ static int intel_lpss_register_clock_divider(struct intel_lpss *lpss,
+ 		return PTR_ERR(tmp);
+ 	*clk = tmp;
+ 
++	if (lpss->info->quirks & QUIRK_CLOCK_DIVIDER_UNITY) {
++		ret = clk_set_rate(tmp, lpss->info->clk_rate);
++		if (ret)
++			return ret;
++	}
++
+ 	snprintf(name, sizeof(name), "%s-update", devname);
+ 	tmp = clk_register_gate(NULL, name, __clk_get_name(tmp),
+ 				CLK_SET_RATE_PARENT, lpss->priv, 31, 0, NULL);
+@@ -401,7 +408,7 @@ int intel_lpss_probe(struct device *dev,
+ 		return ret;
+ 
+ 	lpss->cell->swnode = info->swnode;
+-	lpss->cell->ignore_resource_conflicts = info->ignore_resource_conflicts;
++	lpss->cell->ignore_resource_conflicts = info->quirks & QUIRK_IGNORE_RESOURCE_CONFLICTS;
+ 
+ 	intel_lpss_init_dev(lpss);
+ 
+diff --git a/drivers/mfd/intel-lpss.h b/drivers/mfd/intel-lpss.h
+index 062ce95b68b9a..f50d11d60d94a 100644
+--- a/drivers/mfd/intel-lpss.h
++++ b/drivers/mfd/intel-lpss.h
+@@ -11,16 +11,28 @@
+ #ifndef __MFD_INTEL_LPSS_H
+ #define __MFD_INTEL_LPSS_H
+ 
++#include <linux/bits.h>
+ #include <linux/pm.h>
+ 
++/*
++ * Some DSDTs have an unused GEXP ACPI device conflicting with I2C4 resources.
++ * Set to ignore resource conflicts with ACPI declared SystemMemory regions.
++ */
++#define QUIRK_IGNORE_RESOURCE_CONFLICTS BIT(0)
++/*
++ * Some devices have a misconfigured clock divider due to a firmware bug.
++ * Set this to force the clock divider to 1:1 ratio.
++ */
++#define QUIRK_CLOCK_DIVIDER_UNITY		BIT(1)
++
+ struct device;
+ struct resource;
+ struct software_node;
+ 
+ struct intel_lpss_platform_info {
+ 	struct resource *mem;
+-	bool ignore_resource_conflicts;
+ 	int irq;
++	unsigned int quirks;
+ 	unsigned long clk_rate;
+ 	const char *clk_con_id;
+ 	const struct software_node *swnode;
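
The three intel-lpss hunks above replace a single boolean with a table-driven quirk scheme: each match entry carries a bitmask in its driver_data, the probe path stores the mask once, and consumers test individual bits. A minimal userspace sketch of the pattern follows, with illustrative IDs and helpers rather than the kernel's pci_device_id API:

#include <stdio.h>
#include <stdint.h>

#define QUIRK_IGNORE_RESOURCE_CONFLICTS	(1u << 0)
#define QUIRK_CLOCK_DIVIDER_UNITY	(1u << 1)

struct id_entry {
	uint16_t vendor, device;
	unsigned long driver_data;	/* quirk bits for this device */
};

static const struct id_entry quirk_ids[] = {
	{ 0x8086, 0x9d64, QUIRK_IGNORE_RESOURCE_CONFLICTS },
	{ 0x8086, 0x51fb, QUIRK_CLOCK_DIVIDER_UNITY },
	{ 0 }				/* terminator */
};

static const struct id_entry *match_id(const struct id_entry *tbl,
				       uint16_t vendor, uint16_t device)
{
	for (; tbl->vendor; tbl++)
		if (tbl->vendor == vendor && tbl->device == device)
			return tbl;
	return NULL;
}

int main(void)
{
	const struct id_entry *e = match_id(quirk_ids, 0x8086, 0x51fb);
	unsigned int quirks = e ? e->driver_data : 0;

	if (quirks & QUIRK_CLOCK_DIVIDER_UNITY)
		printf("force 1:1 clock divider\n");
	return 0;
}

Adding a new quirk then costs one table entry and one bit definition instead of another bool in the platform info struct.
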
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 03319a1fa97fd..dbd26c3b245bc 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -263,7 +263,6 @@ struct fastrpc_channel_ctx {
+ 	int domain_id;
+ 	int sesscount;
+ 	int vmcount;
+-	u64 perms;
+ 	struct qcom_scm_vmperm vmperms[FASTRPC_MAX_VMIDS];
+ 	struct rpmsg_device *rpdev;
+ 	struct fastrpc_session_ctx session[FASTRPC_MAX_SESSIONS];
+@@ -1279,9 +1278,11 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl,
+ 
+ 		/* Map if we have any heap VMIDs associated with this ADSP Static Process. */
+ 		if (fl->cctx->vmcount) {
++			u64 src_perms = BIT(QCOM_SCM_VMID_HLOS);
++
+ 			err = qcom_scm_assign_mem(fl->cctx->remote_heap->phys,
+ 							(u64)fl->cctx->remote_heap->size,
+-							&fl->cctx->perms,
++							&src_perms,
+ 							fl->cctx->vmperms, fl->cctx->vmcount);
+ 			if (err) {
+ 				dev_err(fl->sctx->dev, "Failed to assign memory with phys 0x%llx size 0x%llx err %d",
+@@ -1915,8 +1916,10 @@ static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp)
+ 
+ 	/* Add memory to static PD pool, protection thru hypervisor */
+ 	if (req.flags == ADSP_MMAP_REMOTE_HEAP_ADDR && fl->cctx->vmcount) {
++		u64 src_perms = BIT(QCOM_SCM_VMID_HLOS);
++
+ 		err = qcom_scm_assign_mem(buf->phys, (u64)buf->size,
+-			&fl->cctx->perms, fl->cctx->vmperms, fl->cctx->vmcount);
++			&src_perms, fl->cctx->vmperms, fl->cctx->vmcount);
+ 		if (err) {
+ 			dev_err(fl->sctx->dev, "Failed to assign memory phys 0x%llx size 0x%llx err %d",
+ 					buf->phys, buf->size, err);
+@@ -2290,7 +2293,6 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
+ 
+ 	if (vmcount) {
+ 		data->vmcount = vmcount;
+-		data->perms = BIT(QCOM_SCM_VMID_HLOS);
+ 		for (i = 0; i < data->vmcount; i++) {
+ 			data->vmperms[i].vmid = vmids[i];
+ 			data->vmperms[i].perm = QCOM_SCM_PERM_RWX;
+diff --git a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
+index c6eb27d46cb06..15119584473ca 100644
+--- a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
++++ b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
+@@ -198,8 +198,14 @@ static int lis3lv02d_i2c_suspend(struct device *dev)
+ 	struct i2c_client *client = to_i2c_client(dev);
+ 	struct lis3lv02d *lis3 = i2c_get_clientdata(client);
+ 
+-	if (!lis3->pdata || !lis3->pdata->wakeup_flags)
++	/* Turn on for wakeup if turned off by runtime suspend */
++	if (lis3->pdata && lis3->pdata->wakeup_flags) {
++		if (pm_runtime_suspended(dev))
++			lis3lv02d_poweron(lis3);
++	/* For non-wakeup, turn off if not already turned off by runtime suspend */
++	} else if (!pm_runtime_suspended(dev))
+ 		lis3lv02d_poweroff(lis3);
++
+ 	return 0;
+ }
+ 
+@@ -208,13 +214,12 @@ static int lis3lv02d_i2c_resume(struct device *dev)
+ 	struct i2c_client *client = to_i2c_client(dev);
+ 	struct lis3lv02d *lis3 = i2c_get_clientdata(client);
+ 
+-	/*
+-	 * pm_runtime documentation says that devices should always
+-	 * be powered on at resume. Pm_runtime turns them off after system
+-	 * wide resume is complete.
+-	 */
+-	if (!lis3->pdata || !lis3->pdata->wakeup_flags ||
+-		pm_runtime_suspended(dev))
++	/* Turn back off if turned on for wakeup and runtime suspended */
++	if (lis3->pdata && lis3->pdata->wakeup_flags) {
++		if (pm_runtime_suspended(dev))
++			lis3lv02d_poweroff(lis3);
++	/* For non-wakeup, turn back on if not runtime suspended */
++	} else if (!pm_runtime_suspended(dev))
+ 		lis3lv02d_poweron(lis3);
+ 
+ 	return 0;
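
The reworked lis3lv02d suspend/resume pair keys every power decision on two bits of state: whether the device is a wakeup source and whether runtime PM has already powered it down. A small standalone sketch of the same decision table, with illustrative function names standing in for the driver's poweron/poweroff calls:

#include <stdbool.h>
#include <stdio.h>

static void power(const char *what) { printf("%s\n", what); }

static void my_suspend(bool wakeup_source, bool runtime_suspended)
{
	if (wakeup_source) {
		/* Wakeup needs power: undo a prior runtime poweroff. */
		if (runtime_suspended)
			power("poweron");
	} else if (!runtime_suspended) {
		/* No wakeup role: make sure the part is off. */
		power("poweroff");
	}
}

static void my_resume(bool wakeup_source, bool runtime_suspended)
{
	if (wakeup_source) {
		/* Restore the runtime-PM "off" state overridden at suspend. */
		if (runtime_suspended)
			power("poweroff");
	} else if (!runtime_suspended) {
		power("poweron");
	}
}

int main(void)
{
	my_suspend(true, true);		/* -> poweron  */
	my_resume(true, true);		/* -> poweroff */
	return 0;
}
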
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 961e5d53a27a8..aac36750d2c54 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -112,6 +112,8 @@
+ #define MEI_DEV_ID_RPL_S      0x7A68  /* Raptor Lake Point S */
+ 
+ #define MEI_DEV_ID_MTL_M      0x7E70  /* Meteor Lake Point M */
++#define MEI_DEV_ID_ARL_S      0x7F68  /* Arrow Lake Point S */
++#define MEI_DEV_ID_ARL_H      0x7770  /* Arrow Lake Point H */
+ 
+ /*
+  * MEI HW Section
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 676d566f38ddf..8cf636c540322 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -119,6 +119,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_S, MEI_ME_PCH15_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_H, MEI_ME_PCH15_CFG)},
+ 
+ 	/* required last entry */
+ 	{0, }
+diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
+index ac69b7f361f5b..7eb74711ac968 100644
+--- a/drivers/misc/ocxl/file.c
++++ b/drivers/misc/ocxl/file.c
+@@ -184,7 +184,7 @@ static irqreturn_t irq_handler(void *private)
+ {
+ 	struct eventfd_ctx *ev_ctx = private;
+ 
+-	eventfd_signal(ev_ctx, 1);
++	eventfd_signal(ev_ctx);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 32d49100dff51..3564a0f63c9c7 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -413,7 +413,7 @@ static struct mmc_blk_ioc_data *mmc_blk_ioctl_copy_from_user(
+ 	struct mmc_blk_ioc_data *idata;
+ 	int err;
+ 
+-	idata = kmalloc(sizeof(*idata), GFP_KERNEL);
++	idata = kzalloc(sizeof(*idata), GFP_KERNEL);
+ 	if (!idata) {
+ 		err = -ENOMEM;
+ 		goto out;
+@@ -488,7 +488,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	if (idata->flags & MMC_BLK_IOC_DROP)
+ 		return 0;
+ 
+-	if (idata->flags & MMC_BLK_IOC_SBC)
++	if (idata->flags & MMC_BLK_IOC_SBC && i > 0)
+ 		prev_idata = idatas[i - 1];
+ 
+ 	/*
+@@ -874,10 +874,11 @@ static const struct block_device_operations mmc_bdops = {
+ static int mmc_blk_part_switch_pre(struct mmc_card *card,
+ 				   unsigned int part_type)
+ {
+-	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_RPMB;
++	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_MASK;
++	const unsigned int rpmb = EXT_CSD_PART_CONFIG_ACC_RPMB;
+ 	int ret = 0;
+ 
+-	if ((part_type & mask) == mask) {
++	if ((part_type & mask) == rpmb) {
+ 		if (card->ext_csd.cmdq_en) {
+ 			ret = mmc_cmdq_disable(card);
+ 			if (ret)
+@@ -892,10 +893,11 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
+ static int mmc_blk_part_switch_post(struct mmc_card *card,
+ 				    unsigned int part_type)
+ {
+-	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_RPMB;
++	const unsigned int mask = EXT_CSD_PART_CONFIG_ACC_MASK;
++	const unsigned int rpmb = EXT_CSD_PART_CONFIG_ACC_RPMB;
+ 	int ret = 0;
+ 
+-	if ((part_type & mask) == mask) {
++	if ((part_type & mask) == rpmb) {
+ 		mmc_retune_unpause(card->host);
+ 		if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
+ 			ret = mmc_cmdq_enable(card);
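
The mmc_blk_part_switch_pre/post fix above is the classic masked-field comparison bug: the partition-access field is wider than the RPMB value, so testing (x & RPMB) == RPMB also matches other access modes that share those bits. A compilable illustration, using the 0x7 mask and 0x3 RPMB value from the eMMC EXT_CSD definitions:

#include <stdio.h>

#define ACC_MASK 0x7	/* full partition-access field */
#define ACC_RPMB 0x3	/* the one value that means RPMB */

static int is_rpmb_old(unsigned int part) { return (part & ACC_RPMB) == ACC_RPMB; }
static int is_rpmb_new(unsigned int part) { return (part & ACC_MASK) == ACC_RPMB; }

int main(void)
{
	/* 0x7 is a different access mode, but the old test matches it too. */
	printf("old: part=0x7 -> %d (false positive)\n", is_rpmb_old(0x7));
	printf("new: part=0x7 -> %d\n", is_rpmb_new(0x7));
	printf("new: part=0x3 -> %d\n", is_rpmb_new(0x3));
	return 0;
}
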
+diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c
+index 3a3bae6948a89..a0524127ca073 100644
+--- a/drivers/mmc/host/sdhci-of-dwcmshc.c
++++ b/drivers/mmc/host/sdhci-of-dwcmshc.c
+@@ -584,6 +584,17 @@ static int dwcmshc_probe(struct platform_device *pdev)
+ 	return err;
+ }
+ 
++static void dwcmshc_disable_card_clk(struct sdhci_host *host)
++{
++	u16 ctrl;
++
++	ctrl = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
++	if (ctrl & SDHCI_CLOCK_CARD_EN) {
++		ctrl &= ~SDHCI_CLOCK_CARD_EN;
++		sdhci_writew(host, ctrl, SDHCI_CLOCK_CONTROL);
++	}
++}
++
+ static void dwcmshc_remove(struct platform_device *pdev)
+ {
+ 	struct sdhci_host *host = platform_get_drvdata(pdev);
+@@ -591,8 +602,14 @@ static void dwcmshc_remove(struct platform_device *pdev)
+ 	struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host);
+ 	struct rk35xx_priv *rk_priv = priv->priv;
+ 
++	pm_runtime_get_sync(&pdev->dev);
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++
+ 	sdhci_remove_host(host, 0);
+ 
++	dwcmshc_disable_card_clk(host);
++
+ 	clk_disable_unprepare(pltfm_host->clk);
+ 	clk_disable_unprepare(priv->bus_clk);
+ 	if (rk_priv)
+@@ -684,17 +701,6 @@ static void dwcmshc_enable_card_clk(struct sdhci_host *host)
+ 	}
+ }
+ 
+-static void dwcmshc_disable_card_clk(struct sdhci_host *host)
+-{
+-	u16 ctrl;
+-
+-	ctrl = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
+-	if (ctrl & SDHCI_CLOCK_CARD_EN) {
+-		ctrl &= ~SDHCI_CLOCK_CARD_EN;
+-		sdhci_writew(host, ctrl, SDHCI_CLOCK_CONTROL);
+-	}
+-}
+-
+ static int dwcmshc_runtime_suspend(struct device *dev)
+ {
+ 	struct sdhci_host *host = dev_get_drvdata(dev);
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 1e0bc7bace1b0..0a26831b3b67d 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -1439,6 +1439,9 @@ static int __maybe_unused sdhci_omap_runtime_suspend(struct device *dev)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
+ 
++	if (host->tuning_mode != SDHCI_TUNING_MODE_3)
++		mmc_retune_needed(host->mmc);
++
+ 	if (omap_host->con != -EINVAL)
+ 		sdhci_runtime_suspend_host(host);
+ 
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index be7f18fd4836a..c253d176db691 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -259,6 +259,8 @@ static void tmio_mmc_reset_work(struct work_struct *work)
+ 	else
+ 		mrq->cmd->error = -ETIMEDOUT;
+ 
++	/* No new calls yet, but disallow concurrent tmio_mmc_done_work() */
++	host->mrq = ERR_PTR(-EBUSY);
+ 	host->cmd = NULL;
+ 	host->data = NULL;
+ 
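
Pointing host->mrq at ERR_PTR(-EBUSY) gives the reset path a sentinel that is neither NULL ("idle") nor a valid request, so a racing completion handler backs off. A userspace sketch of the sentinel idea, with stand-ins for the kernel's ERR_PTR()/IS_ERR() helpers:

#include <stdio.h>
#include <errno.h>
#include <stdint.h>

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline int IS_ERR(const void *p) { return (uintptr_t)p >= (uintptr_t)-4095; }

struct request { int id; };

static struct request *current_req;

static void done_work(void)
{
	/* A sentinel request blocks completion without looking "idle". */
	if (!current_req || IS_ERR(current_req)) {
		printf("done_work: nothing to complete\n");
		return;
	}
	printf("completing request %d\n", current_req->id);
	current_req = NULL;
}

int main(void)
{
	struct request r = { .id = 1 };

	current_req = &r;
	current_req = ERR_PTR(-EBUSY);	/* reset path claims the slot */
	done_work();			/* concurrent completion is a no-op */
	return 0;
}
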
+diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
+index 71ec4052e52a6..b3a881cbcd23b 100644
+--- a/drivers/mtd/nand/raw/meson_nand.c
++++ b/drivers/mtd/nand/raw/meson_nand.c
+@@ -63,7 +63,7 @@
+ #define CMDRWGEN(cmd_dir, ran, bch, short_mode, page_size, pages)	\
+ 	(								\
+ 		(cmd_dir)			|			\
+-		((ran) << 19)			|			\
++		(ran)				|			\
+ 		((bch) << 14)			|			\
+ 		((short_mode) << 13)		|			\
+ 		(((page_size) & 0x7f) << 6)	|			\
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index bbdcfbe643f3f..ccbd42a427e77 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -1207,21 +1207,36 @@ static int nand_lp_exec_read_page_op(struct nand_chip *chip, unsigned int page,
+ 	return nand_exec_op(chip, &op);
+ }
+ 
++static unsigned int rawnand_last_page_of_lun(unsigned int pages_per_lun, unsigned int lun)
++{
++	/* lun is expected to be very small */
++	return (lun * pages_per_lun) + pages_per_lun - 1;
++}
++
+ static void rawnand_cap_cont_reads(struct nand_chip *chip)
+ {
+ 	struct nand_memory_organization *memorg;
+-	unsigned int pages_per_lun, first_lun, last_lun;
++	unsigned int ppl, first_lun, last_lun;
+ 
+ 	memorg = nanddev_get_memorg(&chip->base);
+-	pages_per_lun = memorg->pages_per_eraseblock * memorg->eraseblocks_per_lun;
+-	first_lun = chip->cont_read.first_page / pages_per_lun;
+-	last_lun = chip->cont_read.last_page / pages_per_lun;
++	ppl = memorg->pages_per_eraseblock * memorg->eraseblocks_per_lun;
++	first_lun = chip->cont_read.first_page / ppl;
++	last_lun = chip->cont_read.last_page / ppl;
+ 
+ 	/* Prevent sequential cache reads across LUN boundaries */
+ 	if (first_lun != last_lun)
+-		chip->cont_read.pause_page = first_lun * pages_per_lun + pages_per_lun - 1;
++		chip->cont_read.pause_page = rawnand_last_page_of_lun(ppl, first_lun);
+ 	else
+ 		chip->cont_read.pause_page = chip->cont_read.last_page;
++
++	if (chip->cont_read.first_page == chip->cont_read.pause_page) {
++		chip->cont_read.first_page++;
++		chip->cont_read.pause_page = min(chip->cont_read.last_page,
++						 rawnand_last_page_of_lun(ppl, first_lun + 1));
++	}
++
++	if (chip->cont_read.first_page >= chip->cont_read.last_page)
++		chip->cont_read.ongoing = false;
+ }
+ 
+ static int nand_lp_exec_cont_read_page_op(struct nand_chip *chip, unsigned int page,
+@@ -1288,12 +1303,11 @@ static int nand_lp_exec_cont_read_page_op(struct nand_chip *chip, unsigned int p
+ 	if (!chip->cont_read.ongoing)
+ 		return 0;
+ 
+-	if (page == chip->cont_read.pause_page &&
+-	    page != chip->cont_read.last_page) {
+-		chip->cont_read.first_page = chip->cont_read.pause_page + 1;
+-		rawnand_cap_cont_reads(chip);
+-	} else if (page == chip->cont_read.last_page) {
++	if (page == chip->cont_read.last_page) {
+ 		chip->cont_read.ongoing = false;
++	} else if (page == chip->cont_read.pause_page) {
++		chip->cont_read.first_page++;
++		rawnand_cap_cont_reads(chip);
+ 	}
+ 
+ 	return 0;
+@@ -3460,30 +3474,36 @@ static void rawnand_enable_cont_reads(struct nand_chip *chip, unsigned int page,
+ 				      u32 readlen, int col)
+ {
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
+-	unsigned int end_page, end_col;
++	unsigned int first_page, last_page;
+ 
+ 	chip->cont_read.ongoing = false;
+ 
+ 	if (!chip->controller->supported_op.cont_read)
+ 		return;
+ 
+-	end_page = DIV_ROUND_UP(col + readlen, mtd->writesize);
+-	end_col = (col + readlen) % mtd->writesize;
++	/*
++	 * Don't bother making any calculations if the length is too small.
++	 * Side effect: avoids possible integer underflows below.
++	 */
++	if (readlen < (2 * mtd->writesize))
++		return;
+ 
++	/* Derive the page where continuous read should start (the first full page read) */
++	first_page = page;
+ 	if (col)
+-		page++;
++		first_page++;
+ 
+-	if (end_col && end_page)
+-		end_page--;
++	/* Derive the page where continuous read should stop (the last full page read) */
++	last_page = page + ((col + readlen) / mtd->writesize) - 1;
+ 
+-	if (page + 1 > end_page)
+-		return;
+-
+-	chip->cont_read.first_page = page;
+-	chip->cont_read.last_page = end_page;
+-	chip->cont_read.ongoing = true;
+-
+-	rawnand_cap_cont_reads(chip);
++	/* Configure and enable continuous read when suitable */
++	if (first_page < last_page) {
++		chip->cont_read.first_page = first_page;
++		chip->cont_read.last_page = last_page;
++		chip->cont_read.ongoing = true;
++		/* May reset the ongoing flag */
++		rawnand_cap_cont_reads(chip);
++	}
+ }
+ 
+ static void rawnand_cont_read_skip_first_page(struct nand_chip *chip, unsigned int page)
+@@ -3492,10 +3512,7 @@ static void rawnand_cont_read_skip_first_page(struct nand_chip *chip, unsigned i
+ 		return;
+ 
+ 	chip->cont_read.first_page++;
+-	if (chip->cont_read.first_page == chip->cont_read.pause_page)
+-		chip->cont_read.first_page++;
+-	if (chip->cont_read.first_page >= chip->cont_read.last_page)
+-		chip->cont_read.ongoing = false;
++	rawnand_cap_cont_reads(chip);
+ }
+ 
+ /**
+@@ -3571,7 +3588,8 @@ static int nand_do_read_ops(struct nand_chip *chip, loff_t from,
+ 	oob = ops->oobbuf;
+ 	oob_required = oob ? 1 : 0;
+ 
+-	rawnand_enable_cont_reads(chip, page, readlen, col);
++	if (likely(ops->mode != MTD_OPS_RAW))
++		rawnand_enable_cont_reads(chip, page, readlen, col);
+ 
+ 	while (1) {
+ 		struct mtd_ecc_stats ecc_stats = mtd->ecc_stats;
+@@ -5189,6 +5207,15 @@ static void rawnand_late_check_supported_ops(struct nand_chip *chip)
+ 	if (!nand_has_exec_op(chip))
+ 		return;
+ 
++	/*
++	 * For now, continuous reads can only be used with the core page helpers.
++	 * This can be extended later.
++	 */
++	if (!(chip->ecc.read_page == nand_read_page_hwecc ||
++	      chip->ecc.read_page == nand_read_page_syndrome ||
++	      chip->ecc.read_page == nand_read_page_swecc))
++		return;
++
+ 	rawnand_check_cont_read_support(chip);
+ }
+ 
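
The rewritten continuous-read window in nand_base.c boils down to two lines of arithmetic: the first full page is page (plus one if the read starts mid-page), the last full page is page + (col + readlen) / writesize - 1, and an early return for reads shorter than two pages also rules out integer underflow. A standalone check of that math with illustrative numbers:

#include <stdio.h>

static void cont_read_window(unsigned int page, unsigned int col,
			     unsigned int readlen, unsigned int writesize)
{
	unsigned int first_page, last_page;

	if (readlen < 2 * writesize) {
		printf("too short, no continuous read\n");
		return;
	}

	first_page = page + (col ? 1 : 0);		    /* first FULL page */
	last_page = page + (col + readlen) / writesize - 1; /* last FULL page */

	if (first_page < last_page)
		printf("cont read pages %u..%u\n", first_page, last_page);
	else
		printf("window too small\n");
}

int main(void)
{
	cont_read_window(10, 0, 16384, 4096);	/* pages 10..13 */
	cont_read_window(10, 100, 16384, 4096);	/* partial head: 11..13 */
	return 0;
}
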
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 2a728c31e6b85..9a4940874be5b 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -85,9 +85,10 @@ size_t ubi_calc_fm_size(struct ubi_device *ubi)
+ 		sizeof(struct ubi_fm_scan_pool) +
+ 		sizeof(struct ubi_fm_scan_pool) +
+ 		(ubi->peb_count * sizeof(struct ubi_fm_ec)) +
+-		(sizeof(struct ubi_fm_eba) +
+-		(ubi->peb_count * sizeof(__be32))) +
+-		sizeof(struct ubi_fm_volhdr) * UBI_MAX_VOLUMES;
++		((sizeof(struct ubi_fm_eba) +
++		  sizeof(struct ubi_fm_volhdr)) *
++		 (UBI_MAX_VOLUMES + UBI_INT_VOL_COUNT)) +
++		(ubi->peb_count * sizeof(__be32));
+ 	return roundup(size, ubi->leb_size);
+ }
+ 
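
The corrected ubi_calc_fm_size() formula reserves one EBA header and one volume header per possible volume, internal volumes included, instead of a single EBA header plus headers for user volumes only. A toy calculation with assumed sizes (not the real struct layouts) showing how the regrouping changes the reserved space:

#include <stdio.h>

#define FM_EBA     8	/* assumed sizeof(struct ubi_fm_eba)    */
#define FM_VOLHDR 24	/* assumed sizeof(struct ubi_fm_volhdr) */
#define MAX_VOLS 128
#define INT_VOLS   1
#define PEBS    1024

int main(void)
{
	unsigned long old = FM_EBA + PEBS * 4 + FM_VOLHDR * MAX_VOLS;
	unsigned long new = (FM_EBA + FM_VOLHDR) * (MAX_VOLS + INT_VOLS) +
			    PEBS * 4;

	printf("old=%lu new=%lu (delta %lu)\n", old, new, new - old);
	return 0;
}
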
+diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
+index f700f0e4f2ec4..6e5489e233dd2 100644
+--- a/drivers/mtd/ubi/vtbl.c
++++ b/drivers/mtd/ubi/vtbl.c
+@@ -791,6 +791,12 @@ int ubi_read_volume_table(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ 	 * The number of supported volumes is limited by the eraseblock size
+ 	 * and by the UBI_MAX_VOLUMES constant.
+ 	 */
++
++	if (ubi->leb_size < UBI_VTBL_RECORD_SIZE) {
++		ubi_err(ubi, "LEB size too small for a volume record");
++		return -EINVAL;
++	}
++
+ 	ubi->vtbl_slots = ubi->leb_size / UBI_VTBL_RECORD_SIZE;
+ 	if (ubi->vtbl_slots > UBI_MAX_VOLUMES)
+ 		ubi->vtbl_slots = UBI_MAX_VOLUMES;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h
+index 8510b88d49820..f3cd5a376eca9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_trace.h
+@@ -24,7 +24,7 @@ TRACE_EVENT(hclge_pf_mbx_get,
+ 		__field(u8, code)
+ 		__field(u8, subcode)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->vport[0].nic.kinfo.netdev->name)
++		__string(devname, hdev->vport[0].nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, PF_GET_MBX_LEN)
+ 	),
+ 
+@@ -33,7 +33,7 @@ TRACE_EVENT(hclge_pf_mbx_get,
+ 		__entry->code = req->msg.code;
+ 		__entry->subcode = req->msg.subcode;
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->vport[0].nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->vport[0].nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_vf_to_pf_cmd));
+ 	),
+@@ -56,7 +56,7 @@ TRACE_EVENT(hclge_pf_mbx_send,
+ 		__field(u8, vfid)
+ 		__field(u16, code)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->vport[0].nic.kinfo.netdev->name)
++		__string(devname, hdev->vport[0].nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, PF_SEND_MBX_LEN)
+ 	),
+ 
+@@ -64,7 +64,7 @@ TRACE_EVENT(hclge_pf_mbx_send,
+ 		__entry->vfid = req->dest_vfid;
+ 		__entry->code = le16_to_cpu(req->msg.code);
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->vport[0].nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->vport[0].nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_pf_to_vf_cmd));
+ 	),
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h
+index 5d4895bb57a17..b259e95dd53c2 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_trace.h
+@@ -23,7 +23,7 @@ TRACE_EVENT(hclge_vf_mbx_get,
+ 		__field(u8, vfid)
+ 		__field(u16, code)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->nic.kinfo.netdev->name)
++		__string(devname, hdev->nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, VF_GET_MBX_LEN)
+ 	),
+ 
+@@ -31,7 +31,7 @@ TRACE_EVENT(hclge_vf_mbx_get,
+ 		__entry->vfid = req->dest_vfid;
+ 		__entry->code = le16_to_cpu(req->msg.code);
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_pf_to_vf_cmd));
+ 	),
+@@ -55,7 +55,7 @@ TRACE_EVENT(hclge_vf_mbx_send,
+ 		__field(u8, code)
+ 		__field(u8, subcode)
+ 		__string(pciname, pci_name(hdev->pdev))
+-		__string(devname, &hdev->nic.kinfo.netdev->name)
++		__string(devname, hdev->nic.kinfo.netdev->name)
+ 		__array(u32, mbx_data, VF_SEND_MBX_LEN)
+ 	),
+ 
+@@ -64,7 +64,7 @@ TRACE_EVENT(hclge_vf_mbx_send,
+ 		__entry->code = req->msg.code;
+ 		__entry->subcode = req->msg.subcode;
+ 		__assign_str(pciname, pci_name(hdev->pdev));
+-		__assign_str(devname, &hdev->nic.kinfo.netdev->name);
++		__assign_str(devname, hdev->nic.kinfo.netdev->name);
+ 		memcpy(__entry->mbx_data, req,
+ 		       sizeof(struct hclge_mbx_vf_to_pf_cmd));
+ 	),
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 9df39cf8b0975..1072e2210aed3 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1443,7 +1443,7 @@ static int temac_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* map device registers */
+-	lp->regs = devm_platform_ioremap_resource_byname(pdev, 0);
++	lp->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(lp->regs)) {
+ 		dev_err(&pdev->dev, "could not map TEMAC registers\n");
+ 		return -ENOMEM;
+diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/wireguard/netlink.c
+index e220d761b1f27..f7055180ba4aa 100644
+--- a/drivers/net/wireguard/netlink.c
++++ b/drivers/net/wireguard/netlink.c
+@@ -164,8 +164,8 @@ get_peer(struct wg_peer *peer, struct sk_buff *skb, struct dump_ctx *ctx)
+ 	if (!allowedips_node)
+ 		goto no_allowedips;
+ 	if (!ctx->allowedips_seq)
+-		ctx->allowedips_seq = peer->device->peer_allowedips.seq;
+-	else if (ctx->allowedips_seq != peer->device->peer_allowedips.seq)
++		ctx->allowedips_seq = ctx->wg->peer_allowedips.seq;
++	else if (ctx->allowedips_seq != ctx->wg->peer_allowedips.seq)
+ 		goto no_allowedips;
+ 
+ 	allowedips_nest = nla_nest_start(skb, WGPEER_A_ALLOWEDIPS);
+@@ -255,17 +255,17 @@ static int wg_get_device_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (!peers_nest)
+ 		goto out;
+ 	ret = 0;
+-	/* If the last cursor was removed via list_del_init in peer_remove, then
++	lockdep_assert_held(&wg->device_update_lock);
++	/* If the last cursor was removed in peer_remove or peer_remove_all, then
+ 	 * we just treat this the same as there being no more peers left. The
+ 	 * reason is that seq_nr should indicate to userspace that this isn't a
+ 	 * coherent dump anyway, so they'll try again.
+ 	 */
+ 	if (list_empty(&wg->peer_list) ||
+-	    (ctx->next_peer && list_empty(&ctx->next_peer->peer_list))) {
++	    (ctx->next_peer && ctx->next_peer->is_dead)) {
+ 		nla_nest_cancel(skb, peers_nest);
+ 		goto out;
+ 	}
+-	lockdep_assert_held(&wg->device_update_lock);
+ 	peer = list_prepare_entry(ctx->next_peer, &wg->peer_list, peer_list);
+ 	list_for_each_entry_continue(peer, &wg->peer_list, peer_list) {
+ 		if (get_peer(peer, skb, ctx)) {
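
The wireguard dump cursor is validated with a generation counter: record the allowedips sequence number on the first chunk and bail out on a later chunk if it changed, which tells userspace the dump is not coherent and must be retried. A minimal sketch of that check:

#include <stdio.h>

struct table { unsigned long seq; };

struct dump_cursor {
	unsigned long snapshot_seq;	/* 0 = not started yet */
};

static int cursor_still_valid(struct dump_cursor *c, const struct table *t)
{
	if (!c->snapshot_seq)
		c->snapshot_seq = t->seq;	/* first chunk: record it */
	return c->snapshot_seq == t->seq;	/* later chunks: compare  */
}

int main(void)
{
	struct table t = { .seq = 7 };
	struct dump_cursor c = { 0 };

	printf("chunk 1 ok: %d\n", cursor_still_valid(&c, &t));
	t.seq++;				/* a peer was added/removed */
	printf("chunk 2 ok: %d\n", cursor_still_valid(&c, &t));
	return 0;
}
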
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bca/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bca/core.c
+index ac3a36fa3640c..a963c242975ac 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bca/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bca/core.c
+@@ -7,21 +7,16 @@
+ #include <core.h>
+ #include <bus.h>
+ #include <fwvid.h>
++#include <feature.h>
+ 
+ #include "vops.h"
+ 
+-static int brcmf_bca_attach(struct brcmf_pub *drvr)
++static void brcmf_bca_feat_attach(struct brcmf_if *ifp)
+ {
+-	pr_err("%s: executing\n", __func__);
+-	return 0;
+-}
+-
+-static void brcmf_bca_detach(struct brcmf_pub *drvr)
+-{
+-	pr_err("%s: executing\n", __func__);
++	/* SAE support not confirmed so disabling for now */
++	ifp->drvr->feat_flags &= ~BIT(BRCMF_FEAT_SAE);
+ }
+ 
+ const struct brcmf_fwvid_ops brcmf_bca_ops = {
+-	.attach = brcmf_bca_attach,
+-	.detach = brcmf_bca_detach,
++	.feat_attach = brcmf_bca_feat_attach,
+ };
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 8facd40d713e6..0ff15d4083e1d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -32,6 +32,7 @@
+ #include "vendor.h"
+ #include "bus.h"
+ #include "common.h"
++#include "fwvid.h"
+ 
+ #define BRCMF_SCAN_IE_LEN_MAX		2048
+ 
+@@ -1179,8 +1180,7 @@ s32 brcmf_notify_escan_complete(struct brcmf_cfg80211_info *cfg,
+ 	scan_request = cfg->scan_request;
+ 	cfg->scan_request = NULL;
+ 
+-	if (timer_pending(&cfg->escan_timeout))
+-		del_timer_sync(&cfg->escan_timeout);
++	timer_delete_sync(&cfg->escan_timeout);
+ 
+ 	if (fw_abort) {
+ 		/* Do a scan abort to stop the driver's scan engine */
+@@ -1687,52 +1687,39 @@ static u16 brcmf_map_fw_linkdown_reason(const struct brcmf_event_msg *e)
+ 	return reason;
+ }
+ 
+-static int brcmf_set_pmk(struct brcmf_if *ifp, const u8 *pmk_data, u16 pmk_len)
++int brcmf_set_wsec(struct brcmf_if *ifp, const u8 *key, u16 key_len, u16 flags)
+ {
+ 	struct brcmf_pub *drvr = ifp->drvr;
+ 	struct brcmf_wsec_pmk_le pmk;
+ 	int err;
+ 
++	if (key_len > sizeof(pmk.key)) {
++		bphy_err(drvr, "key must be less than %zu bytes\n",
++			 sizeof(pmk.key));
++		return -EINVAL;
++	}
++
+ 	memset(&pmk, 0, sizeof(pmk));
+ 
+-	/* pass pmk directly */
+-	pmk.key_len = cpu_to_le16(pmk_len);
+-	pmk.flags = cpu_to_le16(0);
+-	memcpy(pmk.key, pmk_data, pmk_len);
++	/* pass key material directly */
++	pmk.key_len = cpu_to_le16(key_len);
++	pmk.flags = cpu_to_le16(flags);
++	memcpy(pmk.key, key, key_len);
+ 
+-	/* store psk in firmware */
++	/* store key material in firmware */
+ 	err = brcmf_fil_cmd_data_set(ifp, BRCMF_C_SET_WSEC_PMK,
+ 				     &pmk, sizeof(pmk));
+ 	if (err < 0)
+ 		bphy_err(drvr, "failed to change PSK in firmware (len=%u)\n",
+-			 pmk_len);
++			 key_len);
+ 
+ 	return err;
+ }
++BRCMF_EXPORT_SYMBOL_GPL(brcmf_set_wsec);
+ 
+-static int brcmf_set_sae_password(struct brcmf_if *ifp, const u8 *pwd_data,
+-				  u16 pwd_len)
++static int brcmf_set_pmk(struct brcmf_if *ifp, const u8 *pmk_data, u16 pmk_len)
+ {
+-	struct brcmf_pub *drvr = ifp->drvr;
+-	struct brcmf_wsec_sae_pwd_le sae_pwd;
+-	int err;
+-
+-	if (pwd_len > BRCMF_WSEC_MAX_SAE_PASSWORD_LEN) {
+-		bphy_err(drvr, "sae_password must be less than %d\n",
+-			 BRCMF_WSEC_MAX_SAE_PASSWORD_LEN);
+-		return -EINVAL;
+-	}
+-
+-	sae_pwd.key_len = cpu_to_le16(pwd_len);
+-	memcpy(sae_pwd.key, pwd_data, pwd_len);
+-
+-	err = brcmf_fil_iovar_data_set(ifp, "sae_password", &sae_pwd,
+-				       sizeof(sae_pwd));
+-	if (err < 0)
+-		bphy_err(drvr, "failed to set SAE password in firmware (len=%u)\n",
+-			 pwd_len);
+-
+-	return err;
++	return brcmf_set_wsec(ifp, pmk_data, pmk_len, 0);
+ }
+ 
+ static void brcmf_link_down(struct brcmf_cfg80211_vif *vif, u16 reason,
+@@ -2503,8 +2490,7 @@ brcmf_cfg80211_connect(struct wiphy *wiphy, struct net_device *ndev,
+ 			bphy_err(drvr, "failed to clean up user-space RSNE\n");
+ 			goto done;
+ 		}
+-		err = brcmf_set_sae_password(ifp, sme->crypto.sae_pwd,
+-					     sme->crypto.sae_pwd_len);
++		err = brcmf_fwvid_set_sae_password(ifp, &sme->crypto);
+ 		if (!err && sme->crypto.psk)
+ 			err = brcmf_set_pmk(ifp, sme->crypto.psk,
+ 					    BRCMF_WSEC_MAX_PSK_LEN);
+@@ -5257,8 +5243,7 @@ brcmf_cfg80211_start_ap(struct wiphy *wiphy, struct net_device *ndev,
+ 		if (crypto->sae_pwd) {
+ 			brcmf_dbg(INFO, "using SAE offload\n");
+ 			profile->use_fwauth |= BIT(BRCMF_PROFILE_FWAUTH_SAE);
+-			err = brcmf_set_sae_password(ifp, crypto->sae_pwd,
+-						     crypto->sae_pwd_len);
++			err = brcmf_fwvid_set_sae_password(ifp, crypto);
+ 			if (err < 0)
+ 				goto exit;
+ 		}
+@@ -5365,10 +5350,12 @@ static int brcmf_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *ndev,
+ 		msleep(400);
+ 
+ 		if (profile->use_fwauth != BIT(BRCMF_PROFILE_FWAUTH_NONE)) {
++			struct cfg80211_crypto_settings crypto = {};
++
+ 			if (profile->use_fwauth & BIT(BRCMF_PROFILE_FWAUTH_PSK))
+ 				brcmf_set_pmk(ifp, NULL, 0);
+ 			if (profile->use_fwauth & BIT(BRCMF_PROFILE_FWAUTH_SAE))
+-				brcmf_set_sae_password(ifp, NULL, 0);
++				brcmf_fwvid_set_sae_password(ifp, &crypto);
+ 			profile->use_fwauth = BIT(BRCMF_PROFILE_FWAUTH_NONE);
+ 		}
+ 
+@@ -8440,6 +8427,7 @@ void brcmf_cfg80211_detach(struct brcmf_cfg80211_info *cfg)
+ 	brcmf_btcoex_detach(cfg);
+ 	wiphy_unregister(cfg->wiphy);
+ 	wl_deinit_priv(cfg);
++	cancel_work_sync(&cfg->escan_timeout_work);
+ 	brcmf_free_wiphy(cfg->wiphy);
+ 	kfree(cfg);
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
+index 0e1fa3f0dea2c..dc3a6a537507d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
+@@ -468,4 +468,6 @@ void brcmf_set_mpc(struct brcmf_if *ndev, int mpc);
+ void brcmf_abort_scanning(struct brcmf_cfg80211_info *cfg);
+ void brcmf_cfg80211_free_netdev(struct net_device *ndev);
+ 
++int brcmf_set_wsec(struct brcmf_if *ifp, const u8 *key, u16 key_len, u16 flags);
++
+ #endif /* BRCMFMAC_CFG80211_H */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c
+index b75652ba9359f..bec5748310b9c 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c
+@@ -7,21 +7,36 @@
+ #include <core.h>
+ #include <bus.h>
+ #include <fwvid.h>
++#include <fwil.h>
+ 
+ #include "vops.h"
+ 
+-static int brcmf_cyw_attach(struct brcmf_pub *drvr)
++static int brcmf_cyw_set_sae_pwd(struct brcmf_if *ifp,
++				 struct cfg80211_crypto_settings *crypto)
+ {
+-	pr_err("%s: executing\n", __func__);
+-	return 0;
+-}
++	struct brcmf_pub *drvr = ifp->drvr;
++	struct brcmf_wsec_sae_pwd_le sae_pwd;
++	u16 pwd_len = crypto->sae_pwd_len;
++	int err;
+ 
+-static void brcmf_cyw_detach(struct brcmf_pub *drvr)
+-{
+-	pr_err("%s: executing\n", __func__);
++	if (pwd_len > BRCMF_WSEC_MAX_SAE_PASSWORD_LEN) {
++		bphy_err(drvr, "sae_password must be less than %d\n",
++			 BRCMF_WSEC_MAX_SAE_PASSWORD_LEN);
++		return -EINVAL;
++	}
++
++	sae_pwd.key_len = cpu_to_le16(pwd_len);
++	memcpy(sae_pwd.key, crypto->sae_pwd, pwd_len);
++
++	err = brcmf_fil_iovar_data_set(ifp, "sae_password", &sae_pwd,
++				       sizeof(sae_pwd));
++	if (err < 0)
++		bphy_err(drvr, "failed to set SAE password in firmware (len=%u)\n",
++			 pwd_len);
++
++	return err;
+ }
+ 
+ const struct brcmf_fwvid_ops brcmf_cyw_ops = {
+-	.attach = brcmf_cyw_attach,
+-	.detach = brcmf_cyw_detach,
++	.set_sae_password = brcmf_cyw_set_sae_pwd,
+ };
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+index 6d10c9efbe93d..909a34a1ab503 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
+@@ -13,6 +13,7 @@
+ #include "debug.h"
+ #include "fwil.h"
+ #include "fwil_types.h"
++#include "fwvid.h"
+ #include "feature.h"
+ #include "common.h"
+ 
+@@ -339,6 +340,8 @@ void brcmf_feat_attach(struct brcmf_pub *drvr)
+ 	brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_FWSUP, "sup_wpa");
+ 	brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_SCAN_V2, "scan_ver");
+ 
++	brcmf_fwvid_feat_attach(ifp);
++
+ 	if (drvr->settings->feature_disable) {
+ 		brcmf_dbg(INFO, "Features: 0x%02x, disable: 0x%02x\n",
+ 			  ifp->drvr->feat_flags,
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c
+index 72fe8bce6eaf5..a9514d72f770b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.c
+@@ -239,6 +239,7 @@ brcmf_fil_iovar_data_set(struct brcmf_if *ifp, const char *name, const void *dat
+ 	mutex_unlock(&drvr->proto_block);
+ 	return err;
+ }
++BRCMF_EXPORT_SYMBOL_GPL(brcmf_fil_iovar_data_set);
+ 
+ s32
+ brcmf_fil_iovar_data_get(struct brcmf_if *ifp, const char *name, void *data,
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+index 9d248ba1c0b2b..e74a23e11830c 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+@@ -584,7 +584,7 @@ struct brcmf_wsec_key_le {
+ struct brcmf_wsec_pmk_le {
+ 	__le16  key_len;
+ 	__le16  flags;
+-	u8 key[2 * BRCMF_WSEC_MAX_PSK_LEN + 1];
++	u8 key[BRCMF_WSEC_MAX_SAE_PASSWORD_LEN];
+ };
+ 
+ /**
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.c
+index 86eafdb405419..b427782554b59 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.c
+@@ -89,8 +89,7 @@ int brcmf_fwvid_register_vendor(enum brcmf_fwvendor fwvid, struct module *vmod,
+ 	if (fwvid >= BRCMF_FWVENDOR_NUM)
+ 		return -ERANGE;
+ 
+-	if (WARN_ON(!vmod) || WARN_ON(!vops) ||
+-	    WARN_ON(!vops->attach) || WARN_ON(!vops->detach))
++	if (WARN_ON(!vmod) || WARN_ON(!vops))
+ 		return -EINVAL;
+ 
+ 	if (WARN_ON(fwvid_list[fwvid].vmod))
+@@ -150,7 +149,7 @@ static inline int brcmf_fwvid_request_module(enum brcmf_fwvendor fwvid)
+ }
+ #endif
+ 
+-int brcmf_fwvid_attach_ops(struct brcmf_pub *drvr)
++int brcmf_fwvid_attach(struct brcmf_pub *drvr)
+ {
+ 	enum brcmf_fwvendor fwvid = drvr->bus_if->fwvid;
+ 	int ret;
+@@ -175,7 +174,7 @@ int brcmf_fwvid_attach_ops(struct brcmf_pub *drvr)
+ 	return ret;
+ }
+ 
+-void brcmf_fwvid_detach_ops(struct brcmf_pub *drvr)
++void brcmf_fwvid_detach(struct brcmf_pub *drvr)
+ {
+ 	enum brcmf_fwvendor fwvid = drvr->bus_if->fwvid;
+ 
+@@ -187,9 +186,10 @@ void brcmf_fwvid_detach_ops(struct brcmf_pub *drvr)
+ 
+ 	mutex_lock(&fwvid_list_lock);
+ 
+-	drvr->vops = NULL;
+-	list_del(&drvr->bus_if->list);
+-
++	if (drvr->vops) {
++		drvr->vops = NULL;
++		list_del(&drvr->bus_if->list);
++	}
+ 	mutex_unlock(&fwvid_list_lock);
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.h
+index 43df58bb70ad3..dac22534d0334 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwvid.h
+@@ -6,12 +6,14 @@
+ #define FWVID_H_
+ 
+ #include "firmware.h"
++#include "cfg80211.h"
+ 
+ struct brcmf_pub;
++struct brcmf_if;
+ 
+ struct brcmf_fwvid_ops {
+-	int (*attach)(struct brcmf_pub *drvr);
+-	void (*detach)(struct brcmf_pub *drvr);
++	void (*feat_attach)(struct brcmf_if *ifp);
++	int (*set_sae_password)(struct brcmf_if *ifp, struct cfg80211_crypto_settings *crypto);
+ };
+ 
+ /* exported functions */
+@@ -20,28 +22,29 @@ int brcmf_fwvid_register_vendor(enum brcmf_fwvendor fwvid, struct module *mod,
+ int brcmf_fwvid_unregister_vendor(enum brcmf_fwvendor fwvid, struct module *mod);
+ 
+ /* core driver functions */
+-int brcmf_fwvid_attach_ops(struct brcmf_pub *drvr);
+-void brcmf_fwvid_detach_ops(struct brcmf_pub *drvr);
++int brcmf_fwvid_attach(struct brcmf_pub *drvr);
++void brcmf_fwvid_detach(struct brcmf_pub *drvr);
+ const char *brcmf_fwvid_vendor_name(struct brcmf_pub *drvr);
+ 
+-static inline int brcmf_fwvid_attach(struct brcmf_pub *drvr)
++static inline void brcmf_fwvid_feat_attach(struct brcmf_if *ifp)
+ {
+-	int ret;
++	const struct brcmf_fwvid_ops *vops = ifp->drvr->vops;
+ 
+-	ret = brcmf_fwvid_attach_ops(drvr);
+-	if (ret)
+-		return ret;
++	if (!vops->feat_attach)
++		return;
+ 
+-	return drvr->vops->attach(drvr);
++	vops->feat_attach(ifp);
+ }
+ 
+-static inline void brcmf_fwvid_detach(struct brcmf_pub *drvr)
++static inline int brcmf_fwvid_set_sae_password(struct brcmf_if *ifp,
++					       struct cfg80211_crypto_settings *crypto)
+ {
+-	if (!drvr->vops)
+-		return;
++	const struct brcmf_fwvid_ops *vops = ifp->drvr->vops;
++
++	if (!vops || !vops->set_sae_password)
++		return -EOPNOTSUPP;
+ 
+-	drvr->vops->detach(drvr);
+-	brcmf_fwvid_detach_ops(drvr);
++	return vops->set_sae_password(ifp, crypto);
+ }
+ 
+ #endif /* FWVID_H_ */
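
After this fwvid refactor every vendor callback is optional: the dispatch helper checks for a NULL ops table or a NULL method and falls back to -EOPNOTSUPP instead of requiring stub attach/detach implementations in each vendor module. A compilable sketch of the optional-ops pattern (struct and function names are illustrative):

#include <stdio.h>
#include <errno.h>

struct crypto_settings { const char *pwd; };

/* Vendor ops are all optional; a NULL method means "not supported". */
struct vendor_ops {
	int (*set_sae_password)(struct crypto_settings *c);
};

static int dispatch_set_sae_password(const struct vendor_ops *vops,
				     struct crypto_settings *c)
{
	if (!vops || !vops->set_sae_password)
		return -EOPNOTSUPP;	/* graceful fallback */
	return vops->set_sae_password(c);
}

static int cyw_set_sae_password(struct crypto_settings *c)
{
	printf("setting password %s\n", c->pwd);
	return 0;
}

static const struct vendor_ops cyw_ops = {
	.set_sae_password = cyw_set_sae_password,
};

int main(void)
{
	struct crypto_settings c = { .pwd = "example" };

	dispatch_set_sae_password(&cyw_ops, &c);		   /* runs */
	printf("bca: %d\n", dispatch_set_sae_password(NULL, &c)); /* -EOPNOTSUPP */
	return 0;
}
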
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/wcc/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/wcc/core.c
+index 5573a47766ad5..fd593b93ad404 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/wcc/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/wcc/core.c
+@@ -7,21 +7,17 @@
+ #include <core.h>
+ #include <bus.h>
+ #include <fwvid.h>
++#include <cfg80211.h>
+ 
+ #include "vops.h"
+ 
+-static int brcmf_wcc_attach(struct brcmf_pub *drvr)
++static int brcmf_wcc_set_sae_pwd(struct brcmf_if *ifp,
++				 struct cfg80211_crypto_settings *crypto)
+ {
+-	pr_debug("%s: executing\n", __func__);
+-	return 0;
+-}
+-
+-static void brcmf_wcc_detach(struct brcmf_pub *drvr)
+-{
+-	pr_debug("%s: executing\n", __func__);
++	return brcmf_set_wsec(ifp, crypto->sae_pwd, crypto->sae_pwd_len,
++			      BRCMF_WSEC_PASSPHRASE);
+ }
+ 
+ const struct brcmf_fwvid_ops brcmf_wcc_ops = {
+-	.attach = brcmf_wcc_attach,
+-	.detach = brcmf_wcc_detach,
++	.set_sae_password = brcmf_wcc_set_sae_pwd,
+ };
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 3975a53a9f209..57ddd8aefa42b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -3075,8 +3075,6 @@ static void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt, u8 wk_idx)
+ 	struct iwl_fw_dbg_params params = {0};
+ 	struct iwl_fwrt_dump_data *dump_data =
+ 		&fwrt->dump.wks[wk_idx].dump_data;
+-	u32 policy;
+-	u32 time_point;
+ 	if (!test_bit(wk_idx, &fwrt->dump.active_wks))
+ 		return;
+ 
+@@ -3107,13 +3105,16 @@ static void iwl_fw_dbg_collect_sync(struct iwl_fw_runtime *fwrt, u8 wk_idx)
+ 
+ 	iwl_fw_dbg_stop_restart_recording(fwrt, &params, false);
+ 
+-	policy = le32_to_cpu(dump_data->trig->apply_policy);
+-	time_point = le32_to_cpu(dump_data->trig->time_point);
++	if (iwl_trans_dbg_ini_valid(fwrt->trans)) {
++		u32 policy = le32_to_cpu(dump_data->trig->apply_policy);
++		u32 time_point = le32_to_cpu(dump_data->trig->time_point);
+ 
+-	if (policy & IWL_FW_INI_APPLY_POLICY_DUMP_COMPLETE_CMD) {
+-		IWL_DEBUG_FW_INFO(fwrt, "WRT: sending dump complete\n");
+-		iwl_send_dbg_dump_complete_cmd(fwrt, time_point, 0);
++		if (policy & IWL_FW_INI_APPLY_POLICY_DUMP_COMPLETE_CMD) {
++			IWL_DEBUG_FW_INFO(fwrt, "WRT: sending dump complete\n");
++			iwl_send_dbg_dump_complete_cmd(fwrt, time_point, 0);
++		}
+ 	}
++
+ 	if (fwrt->trans->dbg.last_tp_resetfw == IWL_FW_INI_RESET_FW_MODE_STOP_FW_ONLY)
+ 		iwl_force_nmi(fwrt->trans);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+index e8b881596baf1..34b8771d1edac 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+@@ -735,7 +735,9 @@ void iwl_mvm_vif_dbgfs_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ {
+ 	struct dentry *dbgfs_dir = vif->debugfs_dir;
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+-	char buf[100];
++	char buf[3 * 3 + 11 + (NL80211_WIPHY_NAME_MAXLEN + 1) +
++		 (7 + IFNAMSIZ + 1) + 6 + 1];
++	char name[7 + IFNAMSIZ + 1];
+ 
+ 	/* this will happen in monitor mode */
+ 	if (!dbgfs_dir)
+@@ -748,10 +750,11 @@ void iwl_mvm_vif_dbgfs_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 	 * find
+ 	 * netdev:wlan0 -> ../../../ieee80211/phy0/netdev:wlan0/iwlmvm/
+ 	 */
+-	snprintf(buf, 100, "../../../%pd3/iwlmvm", dbgfs_dir);
++	snprintf(name, sizeof(name), "%pd", dbgfs_dir);
++	snprintf(buf, sizeof(buf), "../../../%pd3/iwlmvm", dbgfs_dir);
+ 
+-	mvmvif->dbgfs_slink = debugfs_create_symlink(dbgfs_dir->d_name.name,
+-						     mvm->debugfs_dir, buf);
++	mvmvif->dbgfs_slink =
++		debugfs_create_symlink(name, mvm->debugfs_dir, buf);
+ }
+ 
+ void iwl_mvm_vif_dbgfs_rm_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 68c3d54f587cc..168e4a2ffeb5a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -344,7 +344,7 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
+ 	if (mvm->mld_api_is_used && mvm->nvm_data->sku_cap_11be_enable &&
+ 	    !iwlwifi_mod_params.disable_11ax &&
+ 	    !iwlwifi_mod_params.disable_11be)
+-		hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_MLO;
++		hw->wiphy->flags |= WIPHY_FLAG_DISABLE_WEXT;
+ 
+ 	/* With MLD FW API, it tracks timing by itself,
+ 	 * no need for any timing from the host
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
+index 298663b035808..0c1c1ff31085c 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.c
++++ b/drivers/net/wireless/realtek/rtw88/mac.c
+@@ -309,6 +309,13 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
+ 	pwr_seq = pwr_on ? chip->pwr_on_seq : chip->pwr_off_seq;
+ 	ret = rtw_pwr_seq_parser(rtwdev, pwr_seq);
+ 
++	if (pwr_on && rtw_hci_type(rtwdev) == RTW_HCI_TYPE_USB) {
++		if (chip->id == RTW_CHIP_TYPE_8822C ||
++		    chip->id == RTW_CHIP_TYPE_8822B ||
++		    chip->id == RTW_CHIP_TYPE_8821C)
++			rtw_write8_clr(rtwdev, REG_SYS_STATUS1 + 1, BIT(0));
++	}
++
+ 	if (rtw_hci_type(rtwdev) == RTW_HCI_TYPE_SDIO)
+ 		rtw_write32(rtwdev, REG_SDIO_HIMR, imr);
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
+index 7a5cbdc31ef79..e2c7d9f876836 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
+@@ -9,24 +9,36 @@
+ #include "usb.h"
+ 
+ static const struct usb_device_id rtw_8821cu_id_table[] = {
+-	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xb82b, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8821CU */
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0x2006, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0x8731, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0x8811, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xb820, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8821CU */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc821, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8821CU */
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xb82b, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc80c, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc811, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc820, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8821CU */
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc821, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc82a, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8821CU */
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc82b, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8821CU */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc811, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8811CU */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0x8811, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* 8811CU */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0x2006, 0xff, 0xff, 0xff),
+-	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* TOTOLINK A650UA v3 */
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(RTW_USB_VENDOR_ID_REALTEK, 0xc82c, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x331d, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* D-Link */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0xc811, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* Edimax */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0xd811, 0xff, 0xff, 0xff),
++	  .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* Edimax */
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(usb, rtw_8821cu_id_table);
+diff --git a/drivers/nvmem/meson-efuse.c b/drivers/nvmem/meson-efuse.c
+index b922df99f9bc3..33678d0af2c24 100644
+--- a/drivers/nvmem/meson-efuse.c
++++ b/drivers/nvmem/meson-efuse.c
+@@ -47,7 +47,6 @@ static int meson_efuse_probe(struct platform_device *pdev)
+ 	struct nvmem_config *econfig;
+ 	struct clk *clk;
+ 	unsigned int size;
+-	int ret;
+ 
+ 	sm_np = of_parse_phandle(pdev->dev.of_node, "secure-monitor", 0);
+ 	if (!sm_np) {
+@@ -60,27 +59,9 @@ static int meson_efuse_probe(struct platform_device *pdev)
+ 	if (!fw)
+ 		return -EPROBE_DEFER;
+ 
+-	clk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(clk)) {
+-		ret = PTR_ERR(clk);
+-		if (ret != -EPROBE_DEFER)
+-			dev_err(dev, "failed to get efuse gate");
+-		return ret;
+-	}
+-
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "failed to enable gate");
+-		return ret;
+-	}
+-
+-	ret = devm_add_action_or_reset(dev,
+-				       (void(*)(void *))clk_disable_unprepare,
+-				       clk);
+-	if (ret) {
+-		dev_err(dev, "failed to add disable callback");
+-		return ret;
+-	}
++	clk = devm_clk_get_enabled(dev, NULL);
++	if (IS_ERR(clk))
++		return dev_err_probe(dev, PTR_ERR(clk), "failed to get efuse gate");
+ 
+ 	if (meson_sm_call(fw, SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) {
+ 		dev_err(dev, "failed to get max user");
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index c2630db745608..19b6708f60efb 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -692,8 +692,13 @@ int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
+ 		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
+ 			PCI_REBAR_CTRL_NBAR_SHIFT;
+ 
++		/*
++		 * PCIe r6.0, sec 7.8.6.2 requires us to support at least one
++		 * size in the range from 1 MB to 512 GB. Advertise support
++		 * for 1 MB BAR size only.
++		 */
+ 		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
+-			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
++			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4));
+ 	}
+ 
+ 	/*
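
Advertising BIT(4) rather than 0 matters because, under the standard Resizable BAR encoding, capability bits 4..31 each announce one supported size, with bit n meaning 2^(n + 16) bytes; bit 4 is therefore the smallest legal advertisement, 1 MB. A quick decode of that encoding, assuming the standard layout:

#include <stdio.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Bit n of the REBAR capability (n >= 4) advertises 2^(n + 16) bytes. */
static uint64_t rebar_bit_to_bytes(unsigned int bit)
{
	return 1ull << (bit + 16);
}

int main(void)
{
	uint32_t cap = BIT(4);	/* what the hunk above now advertises */
	unsigned int n;

	for (n = 4; n < 32; n++)
		if (cap & BIT(n))
			printf("supported: %llu MB\n",
			       (unsigned long long)(rebar_bit_to_bytes(n) >> 20));
	return 0;
}
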
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index cbc3f08817708..e3dc27f2acf72 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -53,6 +53,7 @@
+ #define PARF_SLV_ADDR_SPACE_SIZE		0x358
+ #define PARF_DEVICE_TYPE			0x1000
+ #define PARF_BDF_TO_SID_TABLE_N			0x2000
++#define PARF_BDF_TO_SID_CFG			0x2c00
+ 
+ /* ELBI registers */
+ #define ELBI_SYS_CTRL				0x04
+@@ -120,6 +121,9 @@
+ /* PARF_DEVICE_TYPE register fields */
+ #define DEVICE_TYPE_RC				0x4
+ 
++/* PARF_BDF_TO_SID_CFG fields */
++#define BDF_TO_SID_BYPASS			BIT(0)
++
+ /* ELBI_SYS_CTRL register fields */
+ #define ELBI_SYS_CTRL_LT_ENABLE			BIT(0)
+ 
+@@ -229,6 +233,7 @@ struct qcom_pcie_ops {
+ 
+ struct qcom_pcie_cfg {
+ 	const struct qcom_pcie_ops *ops;
++	bool no_l0s;
+ };
+ 
+ struct qcom_pcie {
+@@ -272,6 +277,26 @@ static int qcom_pcie_start_link(struct dw_pcie *pci)
+ 	return 0;
+ }
+ 
++static void qcom_pcie_clear_aspm_l0s(struct dw_pcie *pci)
++{
++	struct qcom_pcie *pcie = to_qcom_pcie(pci);
++	u16 offset;
++	u32 val;
++
++	if (!pcie->cfg->no_l0s)
++		return;
++
++	offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
++
++	dw_pcie_dbi_ro_wr_en(pci);
++
++	val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP);
++	val &= ~PCI_EXP_LNKCAP_ASPM_L0S;
++	writel(val, pci->dbi_base + offset + PCI_EXP_LNKCAP);
++
++	dw_pcie_dbi_ro_wr_dis(pci);
++}
++
+ static void qcom_pcie_clear_hpc(struct dw_pcie *pci)
+ {
+ 	u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+@@ -961,6 +986,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ 
+ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
+ {
++	qcom_pcie_clear_aspm_l0s(pcie->pci);
+ 	qcom_pcie_clear_hpc(pcie->pci);
+ 
+ 	return 0;
+@@ -1008,11 +1034,17 @@ static int qcom_pcie_config_sid_1_9_0(struct qcom_pcie *pcie)
+ 	u8 qcom_pcie_crc8_table[CRC8_TABLE_SIZE];
+ 	int i, nr_map, size = 0;
+ 	u32 smmu_sid_base;
++	u32 val;
+ 
+ 	of_get_property(dev->of_node, "iommu-map", &size);
+ 	if (!size)
+ 		return 0;
+ 
++	/* Enable BDF to SID translation by disabling bypass mode (default) */
++	val = readl(pcie->parf + PARF_BDF_TO_SID_CFG);
++	val &= ~BDF_TO_SID_BYPASS;
++	writel(val, pcie->parf + PARF_BDF_TO_SID_CFG);
++
+ 	map = kzalloc(size, GFP_KERNEL);
+ 	if (!map)
+ 		return -ENOMEM;
+@@ -1358,6 +1390,11 @@ static const struct qcom_pcie_cfg cfg_2_9_0 = {
+ 	.ops = &ops_2_9_0,
+ };
+ 
++static const struct qcom_pcie_cfg cfg_sc8280xp = {
++	.ops = &ops_1_9_0,
++	.no_l0s = true,
++};
++
+ static const struct dw_pcie_ops dw_pcie_ops = {
+ 	.link_up = qcom_pcie_link_up,
+ 	.start_link = qcom_pcie_start_link,
+@@ -1629,11 +1666,11 @@ static const struct of_device_id qcom_pcie_match[] = {
+ 	{ .compatible = "qcom,pcie-ipq8074-gen3", .data = &cfg_2_9_0 },
+ 	{ .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
+ 	{ .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
+-	{ .compatible = "qcom,pcie-sa8540p", .data = &cfg_1_9_0 },
++	{ .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp },
+ 	{ .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_9_0},
+ 	{ .compatible = "qcom,pcie-sc7280", .data = &cfg_1_9_0 },
+ 	{ .compatible = "qcom,pcie-sc8180x", .data = &cfg_1_9_0 },
+-	{ .compatible = "qcom,pcie-sc8280xp", .data = &cfg_1_9_0 },
++	{ .compatible = "qcom,pcie-sc8280xp", .data = &cfg_sc8280xp },
+ 	{ .compatible = "qcom,pcie-sdm845", .data = &cfg_2_7_0 },
+ 	{ .compatible = "qcom,pcie-sdx55", .data = &cfg_1_9_0 },
+ 	{ .compatible = "qcom,pcie-sm8150", .data = &cfg_1_9_0 },
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 30c7dfeccb16f..11cc354a30ea5 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -49,6 +49,7 @@
+ #include <linux/refcount.h>
+ #include <linux/irqdomain.h>
+ #include <linux/acpi.h>
++#include <linux/sizes.h>
+ #include <asm/mshyperv.h>
+ 
+ /*
+@@ -465,7 +466,7 @@ struct pci_eject_response {
+ 	u32 status;
+ } __packed;
+ 
+-static int pci_ring_size = (4 * PAGE_SIZE);
++static int pci_ring_size = VMBUS_RING_SIZE(SZ_16K);
+ 
+ /*
+  * Driver specific state.
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index 51ec9e7e784f0..9c59bf03d6579 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -473,6 +473,13 @@ static void pci_device_remove(struct device *dev)
+ 
+ 	if (drv->remove) {
+ 		pm_runtime_get_sync(dev);
++		/*
++		 * If the driver provides a .runtime_idle() callback and it has
++		 * started to run already, it may continue to run in parallel
++		 * with the code below, so wait until all of the runtime PM
++		 * activity has completed.
++		 */
++		pm_runtime_barrier(dev);
+ 		drv->remove(pci_dev);
+ 		pm_runtime_put_noidle(dev);
+ 	}
+diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
+index 59c90d04a609a..705893b5f7b09 100644
+--- a/drivers/pci/pcie/err.c
++++ b/drivers/pci/pcie/err.c
+@@ -13,6 +13,7 @@
+ #define dev_fmt(fmt) "AER: " fmt
+ 
+ #include <linux/pci.h>
++#include <linux/pm_runtime.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/errno.h>
+@@ -85,6 +86,18 @@ static int report_error_detected(struct pci_dev *dev,
+ 	return 0;
+ }
+ 
++static int pci_pm_runtime_get_sync(struct pci_dev *pdev, void *data)
++{
++	pm_runtime_get_sync(&pdev->dev);
++	return 0;
++}
++
++static int pci_pm_runtime_put(struct pci_dev *pdev, void *data)
++{
++	pm_runtime_put(&pdev->dev);
++	return 0;
++}
++
+ static int report_frozen_detected(struct pci_dev *dev, void *data)
+ {
+ 	return report_error_detected(dev, pci_channel_io_frozen, data);
+@@ -207,6 +220,8 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+ 	else
+ 		bridge = pci_upstream_bridge(dev);
+ 
++	pci_walk_bridge(bridge, pci_pm_runtime_get_sync, NULL);
++
+ 	pci_dbg(bridge, "broadcast error_detected message\n");
+ 	if (state == pci_channel_io_frozen) {
+ 		pci_walk_bridge(bridge, report_frozen_detected, &status);
+@@ -251,10 +266,15 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+ 		pcie_clear_device_status(dev);
+ 		pci_aer_clear_nonfatal_status(dev);
+ 	}
++
++	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
++
+ 	pci_info(bridge, "device recovery successful\n");
+ 	return status;
+ 
+ failed:
++	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
++
+ 	pci_uevent_ers(bridge, PCI_ERS_RESULT_DISCONNECT);
+ 
+ 	/* TODO: Should kernel panic here? */
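
The recovery path now brackets the whole broadcast sequence with a reference walk: take a runtime-PM reference on every device under the bridge first, and drop it on both the success and failure exits. A userspace sketch of that balanced get/put bracket, with a plain array standing in for pci_walk_bridge():

#include <stdio.h>

struct dev { const char *name; int pm_count; };

static int get_ref(struct dev *d, void *data) { d->pm_count++; return 0; }
static int put_ref(struct dev *d, void *data) { d->pm_count--; return 0; }

static void walk(struct dev *devs, int n,
		 int (*fn)(struct dev *, void *), void *data)
{
	for (int i = 0; i < n; i++)
		fn(&devs[i], data);
}

static int recover(struct dev *devs, int n)
{
	walk(devs, n, get_ref, NULL);

	/* ... error_detected / reset / resume broadcasts here ... */
	int failed = 0;

	walk(devs, n, put_ref, NULL);	/* balanced on every exit path */
	return failed ? -1 : 0;
}

int main(void)
{
	struct dev devs[2] = { { "ep0", 0 }, { "ep1", 0 } };

	recover(devs, 2);
	printf("refs after: %d %d\n", devs[0].pm_count, devs[1].pm_count);
	return 0;
}
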
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 528044237bf9f..fa770601c655a 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -6219,6 +6219,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2b, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2d, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a31, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa73f, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa76e, dpc_log_size);
+ #endif
+ 
+ /*
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index 142ebe0247cc0..983a6e6173bd2 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -1531,6 +1531,19 @@ int tegra_xusb_padctl_get_usb3_companion(struct tegra_xusb_padctl *padctl,
+ }
+ EXPORT_SYMBOL_GPL(tegra_xusb_padctl_get_usb3_companion);
+ 
++int tegra_xusb_padctl_get_port_number(struct phy *phy)
++{
++	struct tegra_xusb_lane *lane;
++
++	if (!phy)
++		return -ENODEV;
++
++	lane = phy_get_drvdata(phy);
++
++	return lane->index;
++}
++EXPORT_SYMBOL_GPL(tegra_xusb_padctl_get_port_number);
++
+ MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
+ MODULE_DESCRIPTION("Tegra XUSB Pad Controller driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/platform/x86/intel/tpmi.c b/drivers/platform/x86/intel/tpmi.c
+index 311abcac894a6..c2f6e20b45bc0 100644
+--- a/drivers/platform/x86/intel/tpmi.c
++++ b/drivers/platform/x86/intel/tpmi.c
+@@ -96,7 +96,7 @@ struct intel_tpmi_pfs_entry {
+  */
+ struct intel_tpmi_pm_feature {
+ 	struct intel_tpmi_pfs_entry pfs_header;
+-	unsigned int vsec_offset;
++	u64 vsec_offset;
+ 	struct intel_vsec_device *vsec_dev;
+ };
+ 
+@@ -389,7 +389,7 @@ static int tpmi_pfs_dbg_show(struct seq_file *s, void *unused)
+ 			read_blocked = feature_state.read_blocked ? 'Y' : 'N';
+ 			write_blocked = feature_state.write_blocked ? 'Y' : 'N';
+ 		}
+-		seq_printf(s, "0x%02x\t\t0x%02x\t\t0x%04x\t\t0x%04x\t\t0x%02x\t\t0x%08x\t%c\t%c\t\t%c\t\t%c\n",
++		seq_printf(s, "0x%02x\t\t0x%02x\t\t0x%04x\t\t0x%04x\t\t0x%02x\t\t0x%016llx\t%c\t%c\t\t%c\t\t%c\n",
+ 			   pfs->pfs_header.tpmi_id, pfs->pfs_header.num_entries,
+ 			   pfs->pfs_header.entry_size, pfs->pfs_header.cap_offset,
+ 			   pfs->pfs_header.attribute, pfs->vsec_offset, locked, disabled,
+@@ -408,7 +408,8 @@ static int tpmi_mem_dump_show(struct seq_file *s, void *unused)
+ 	struct intel_tpmi_pm_feature *pfs = s->private;
+ 	int count, ret = 0;
+ 	void __iomem *mem;
+-	u32 off, size;
++	u32 size;
++	u64 off;
+ 	u8 *buffer;
+ 
+ 	size = TPMI_GET_SINGLE_ENTRY_SIZE(pfs);
+@@ -424,7 +425,7 @@ static int tpmi_mem_dump_show(struct seq_file *s, void *unused)
+ 	mutex_lock(&tpmi_dev_lock);
+ 
+ 	for (count = 0; count < pfs->pfs_header.num_entries; ++count) {
+-		seq_printf(s, "TPMI Instance:%d offset:0x%x\n", count, off);
++		seq_printf(s, "TPMI Instance:%d offset:0x%llx\n", count, off);
+ 
+ 		mem = ioremap(off, size);
+ 		if (!mem) {
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index 2feed036c1cd4..9d3e102f1a76b 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -5,6 +5,7 @@
+  */
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
++#include <linux/cleanup.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/list.h>
+@@ -759,6 +760,11 @@ static int rapl_config(struct rapl_package *rp)
+ 	default:
+ 		return -EINVAL;
+ 	}
++
++	/* rp->priv->defaults and rp->priv->rpi can be NULL on unsupported platforms */
++	if (!rp->priv->defaults || !rp->priv->rpi)
++		return -ENODEV;
++
+ 	return 0;
+ }
+ 
+@@ -1499,7 +1505,7 @@ static int rapl_detect_domains(struct rapl_package *rp)
+ }
+ 
+ /* called from CPU hotplug notifier, hotplug lock held */
+-void rapl_remove_package(struct rapl_package *rp)
++void rapl_remove_package_cpuslocked(struct rapl_package *rp)
+ {
+ 	struct rapl_domain *rd, *rd_package = NULL;
+ 
+@@ -1528,10 +1534,18 @@ void rapl_remove_package(struct rapl_package *rp)
+ 	list_del(&rp->plist);
+ 	kfree(rp);
+ }
++EXPORT_SYMBOL_GPL(rapl_remove_package_cpuslocked);
++
++void rapl_remove_package(struct rapl_package *rp)
++{
++	guard(cpus_read_lock)();
++	rapl_remove_package_cpuslocked(rp);
++}
+ EXPORT_SYMBOL_GPL(rapl_remove_package);
+ 
+ /* caller to ensure CPU hotplug lock is held */
+-struct rapl_package *rapl_find_package_domain(int id, struct rapl_if_priv *priv, bool id_is_cpu)
++struct rapl_package *rapl_find_package_domain_cpuslocked(int id, struct rapl_if_priv *priv,
++							 bool id_is_cpu)
+ {
+ 	struct rapl_package *rp;
+ 	int uid;
+@@ -1549,10 +1563,17 @@ struct rapl_package *rapl_find_package_domain(int id, struct rapl_if_priv *priv,
+ 
+ 	return NULL;
+ }
++EXPORT_SYMBOL_GPL(rapl_find_package_domain_cpuslocked);
++
++struct rapl_package *rapl_find_package_domain(int id, struct rapl_if_priv *priv, bool id_is_cpu)
++{
++	guard(cpus_read_lock)();
++	return rapl_find_package_domain_cpuslocked(id, priv, id_is_cpu);
++}
+ EXPORT_SYMBOL_GPL(rapl_find_package_domain);
+ 
+ /* called from CPU hotplug notifier, hotplug lock held */
+-struct rapl_package *rapl_add_package(int id, struct rapl_if_priv *priv, bool id_is_cpu)
++struct rapl_package *rapl_add_package_cpuslocked(int id, struct rapl_if_priv *priv, bool id_is_cpu)
+ {
+ 	struct rapl_package *rp;
+ 	int ret;
+@@ -1598,6 +1619,13 @@ struct rapl_package *rapl_add_package(int id, struct rapl_if_priv *priv, bool id
+ 	kfree(rp);
+ 	return ERR_PTR(ret);
+ }
++EXPORT_SYMBOL_GPL(rapl_add_package_cpuslocked);
++
++struct rapl_package *rapl_add_package(int id, struct rapl_if_priv *priv, bool id_is_cpu)
++{
++	guard(cpus_read_lock)();
++	return rapl_add_package_cpuslocked(id, priv, id_is_cpu);
++}
+ EXPORT_SYMBOL_GPL(rapl_add_package);
+ 
+ static void power_limit_state_save(void)
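The rework above follows a common kernel pattern: each entry point gains a *_cpuslocked() variant for callers that already hold the CPU hotplug lock (such as the hotplug callbacks converted in the next two files), while the original name becomes a thin wrapper that takes the lock itself via the scoped guard() helper from the newly included <linux/cleanup.h>. A minimal sketch with hypothetical my_op() names:

#include <linux/cleanup.h>
#include <linux/cpu.h>

static int my_op_cpuslocked(int id)
{
	lockdep_assert_cpus_held();	/* caller must hold cpus_read_lock() */
	/* ... the real work ... */
	return 0;
}

static int my_op(int id)
{
	guard(cpus_read_lock)();	/* released automatically on return */
	return my_op_cpuslocked(id);
}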
+diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
+index 250bd41a588c7..b4b6930cacb0b 100644
+--- a/drivers/powercap/intel_rapl_msr.c
++++ b/drivers/powercap/intel_rapl_msr.c
+@@ -73,9 +73,9 @@ static int rapl_cpu_online(unsigned int cpu)
+ {
+ 	struct rapl_package *rp;
+ 
+-	rp = rapl_find_package_domain(cpu, rapl_msr_priv, true);
++	rp = rapl_find_package_domain_cpuslocked(cpu, rapl_msr_priv, true);
+ 	if (!rp) {
+-		rp = rapl_add_package(cpu, rapl_msr_priv, true);
++		rp = rapl_add_package_cpuslocked(cpu, rapl_msr_priv, true);
+ 		if (IS_ERR(rp))
+ 			return PTR_ERR(rp);
+ 	}
+@@ -88,14 +88,14 @@ static int rapl_cpu_down_prep(unsigned int cpu)
+ 	struct rapl_package *rp;
+ 	int lead_cpu;
+ 
+-	rp = rapl_find_package_domain(cpu, rapl_msr_priv, true);
++	rp = rapl_find_package_domain_cpuslocked(cpu, rapl_msr_priv, true);
+ 	if (!rp)
+ 		return 0;
+ 
+ 	cpumask_clear_cpu(cpu, &rp->cpumask);
+ 	lead_cpu = cpumask_first(&rp->cpumask);
+ 	if (lead_cpu >= nr_cpu_ids)
+-		rapl_remove_package(rp);
++		rapl_remove_package_cpuslocked(rp);
+ 	else if (rp->lead_cpu == cpu)
+ 		rp->lead_cpu = lead_cpu;
+ 	return 0;
+diff --git a/drivers/powercap/intel_rapl_tpmi.c b/drivers/powercap/intel_rapl_tpmi.c
+index 891c90fefd8b7..f6b7f085977ce 100644
+--- a/drivers/powercap/intel_rapl_tpmi.c
++++ b/drivers/powercap/intel_rapl_tpmi.c
+@@ -40,6 +40,7 @@ enum tpmi_rapl_register {
+ 	TPMI_RAPL_REG_ENERGY_STATUS,
+ 	TPMI_RAPL_REG_PERF_STATUS,
+ 	TPMI_RAPL_REG_POWER_INFO,
++	TPMI_RAPL_REG_DOMAIN_INFO,
+ 	TPMI_RAPL_REG_INTERRUPT,
+ 	TPMI_RAPL_REG_MAX = 15,
+ };
+@@ -130,6 +131,12 @@ static void trp_release(struct tpmi_rapl_package *trp)
+ 	mutex_unlock(&tpmi_rapl_lock);
+ }
+ 
++/*
++ * Bit 0 of TPMI_RAPL_REG_DOMAIN_INFO indicates if the current package is a domain
++ * root or not. Only domain root packages can enumerate System (Psys) Domain.
++ */
++#define TPMI_RAPL_DOMAIN_ROOT	BIT(0)
++
+ static int parse_one_domain(struct tpmi_rapl_package *trp, u32 offset)
+ {
+ 	u8 tpmi_domain_version;
+@@ -139,6 +146,7 @@ static int parse_one_domain(struct tpmi_rapl_package *trp, u32 offset)
+ 	enum rapl_domain_reg_id reg_id;
+ 	int tpmi_domain_size, tpmi_domain_flags;
+ 	u64 tpmi_domain_header = readq(trp->base + offset);
++	u64 tpmi_domain_info;
+ 
+ 	/* Domain Parent bits are ignored for now */
+ 	tpmi_domain_version = tpmi_domain_header & 0xff;
+@@ -169,6 +177,13 @@ static int parse_one_domain(struct tpmi_rapl_package *trp, u32 offset)
+ 		domain_type = RAPL_DOMAIN_PACKAGE;
+ 		break;
+ 	case TPMI_RAPL_DOMAIN_SYSTEM:
++		if (!(tpmi_domain_flags & BIT(TPMI_RAPL_REG_DOMAIN_INFO))) {
++			pr_warn(FW_BUG "System domain must support Domain Info register\n");
++			return -ENODEV;
++		}
++		tpmi_domain_info = readq(trp->base + offset + TPMI_RAPL_REG_DOMAIN_INFO);
++		if (!(tpmi_domain_info & TPMI_RAPL_DOMAIN_ROOT))
++			return 0;
+ 		domain_type = RAPL_DOMAIN_PLATFORM;
+ 		break;
+ 	case TPMI_RAPL_DOMAIN_MEMORY:
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index 116fa060e3029..29dcf38f3b521 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -288,9 +288,9 @@ static int img_pwm_probe(struct platform_device *pdev)
+ 		return PTR_ERR(imgchip->sys_clk);
+ 	}
+ 
+-	imgchip->pwm_clk = devm_clk_get(&pdev->dev, "imgchip");
++	imgchip->pwm_clk = devm_clk_get(&pdev->dev, "pwm");
+ 	if (IS_ERR(imgchip->pwm_clk)) {
+-		dev_err(&pdev->dev, "failed to get imgchip clock\n");
++		dev_err(&pdev->dev, "failed to get pwm clock\n");
+ 		return PTR_ERR(imgchip->pwm_clk);
+ 	}
+ 
+diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
+index 83d76915a6ad6..25b66b113b695 100644
+--- a/drivers/remoteproc/remoteproc_virtio.c
++++ b/drivers/remoteproc/remoteproc_virtio.c
+@@ -351,6 +351,9 @@ static void rproc_virtio_dev_release(struct device *dev)
+ 
+ 	kfree(vdev);
+ 
++	of_reserved_mem_device_release(&rvdev->pdev->dev);
++	dma_release_coherent_memory(&rvdev->pdev->dev);
++
+ 	put_device(&rvdev->pdev->dev);
+ }
+ 
+@@ -584,9 +587,6 @@ static void rproc_virtio_remove(struct platform_device *pdev)
+ 	rproc_remove_subdev(rproc, &rvdev->subdev);
+ 	rproc_remove_rvdev(rvdev);
+ 
+-	of_reserved_mem_device_release(&pdev->dev);
+-	dma_release_coherent_memory(&pdev->dev);
+-
+ 	put_device(&rproc->dev);
+ }
+ 
+diff --git a/drivers/s390/cio/vfio_ccw_chp.c b/drivers/s390/cio/vfio_ccw_chp.c
+index d3f3a611f95b4..38c176cf62957 100644
+--- a/drivers/s390/cio/vfio_ccw_chp.c
++++ b/drivers/s390/cio/vfio_ccw_chp.c
+@@ -115,7 +115,7 @@ static ssize_t vfio_ccw_crw_region_read(struct vfio_ccw_private *private,
+ 
+ 	/* Notify the guest if more CRWs are on our queue */
+ 	if (!list_empty(&private->crw) && private->crw_trigger)
+-		eventfd_signal(private->crw_trigger, 1);
++		eventfd_signal(private->crw_trigger);
+ 
+ 	return ret;
+ }
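This hunk, like the vfio_ccw and vfio_ap hunks below, tracks an eventfd API change in this series: eventfd_signal() dropped its counter argument, since in-kernel users only ever signalled an increment of 1. Side by side, assuming ctx is a valid struct eventfd_ctx pointer:

	eventfd_signal(ctx, 1);	/* old: explicit counter, always 1 */
	eventfd_signal(ctx);	/* new: the increment of 1 is implied */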
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index 43601816ea4e4..bfb35cfce1ef1 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -112,7 +112,7 @@ void vfio_ccw_sch_io_todo(struct work_struct *work)
+ 		private->state = VFIO_CCW_STATE_IDLE;
+ 
+ 	if (private->io_trigger)
+-		eventfd_signal(private->io_trigger, 1);
++		eventfd_signal(private->io_trigger);
+ }
+ 
+ void vfio_ccw_crw_todo(struct work_struct *work)
+@@ -122,7 +122,7 @@ void vfio_ccw_crw_todo(struct work_struct *work)
+ 	private = container_of(work, struct vfio_ccw_private, crw_work);
+ 
+ 	if (!list_empty(&private->crw) && private->crw_trigger)
+-		eventfd_signal(private->crw_trigger, 1);
++		eventfd_signal(private->crw_trigger);
+ }
+ 
+ /*
+diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
+index cba4971618ff6..ea532a8a4a0c2 100644
+--- a/drivers/s390/cio/vfio_ccw_ops.c
++++ b/drivers/s390/cio/vfio_ccw_ops.c
+@@ -421,7 +421,7 @@ static int vfio_ccw_mdev_set_irqs(struct vfio_ccw_private *private,
+ 	case VFIO_IRQ_SET_DATA_NONE:
+ 	{
+ 		if (*ctx)
+-			eventfd_signal(*ctx, 1);
++			eventfd_signal(*ctx);
+ 		return 0;
+ 	}
+ 	case VFIO_IRQ_SET_DATA_BOOL:
+@@ -432,7 +432,7 @@ static int vfio_ccw_mdev_set_irqs(struct vfio_ccw_private *private,
+ 			return -EFAULT;
+ 
+ 		if (trigger && *ctx)
+-			eventfd_signal(*ctx, 1);
++			eventfd_signal(*ctx);
+ 		return 0;
+ 	}
+ 	case VFIO_IRQ_SET_DATA_EVENTFD:
+@@ -612,7 +612,7 @@ static void vfio_ccw_mdev_request(struct vfio_device *vdev, unsigned int count)
+ 					       "Relaying device request to user (#%u)\n",
+ 					       count);
+ 
+-		eventfd_signal(private->req_trigger, 1);
++		eventfd_signal(private->req_trigger);
+ 	} else if (count == 0) {
+ 		dev_notice(dev,
+ 			   "No device request channel registered, blocked until released by user\n");
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index d6ea2fd4c2a02..76429585c1bc9 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -1862,7 +1862,7 @@ static void vfio_ap_mdev_request(struct vfio_device *vdev, unsigned int count)
+ 					       "Relaying device request to user (#%u)\n",
+ 					       count);
+ 
+-		eventfd_signal(matrix_mdev->req_trigger, 1);
++		eventfd_signal(matrix_mdev->req_trigger);
+ 	} else if (count == 0) {
+ 		dev_notice(dev,
+ 			   "No device request registered, blocked until released by user\n");
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index dcd6c7299fa9a..973aa937536e3 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -579,6 +579,7 @@ static inline struct zcrypt_queue *zcrypt_pick_queue(struct zcrypt_card *zc,
+ {
+ 	if (!zq || !try_module_get(zq->queue->ap_dev.device.driver->owner))
+ 		return NULL;
++	zcrypt_card_get(zc);
+ 	zcrypt_queue_get(zq);
+ 	get_device(&zq->queue->ap_dev.device);
+ 	atomic_add(weight, &zc->load);
+@@ -598,6 +599,7 @@ static inline void zcrypt_drop_queue(struct zcrypt_card *zc,
+ 	atomic_sub(weight, &zq->load);
+ 	put_device(&zq->queue->ap_dev.device);
+ 	zcrypt_queue_put(zq);
++	zcrypt_card_put(zc);
+ 	module_put(mod);
+ }
+ 
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index d7f51b84f3c78..445f4a220df3e 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -353,12 +353,13 @@ static void scsi_host_dev_release(struct device *dev)
+ 
+ 	if (shost->shost_state == SHOST_CREATED) {
+ 		/*
+-		 * Free the shost_dev device name here if scsi_host_alloc()
+-		 * and scsi_host_put() have been called but neither
++		 * Free the shost_dev device name and remove the proc host dir
++		 * here if scsi_host_{alloc,put}() have been called but neither
+ 		 * scsi_host_add() nor scsi_remove_host() has been called.
+ 		 * This avoids that the memory allocated for the shost_dev
+-		 * name is leaked.
++		 * name as well as the proc dir structure are leaked.
+ 		 */
++		scsi_proc_hostdir_rm(shost->hostt);
+ 		kfree(dev_name(&shost->shost_dev));
+ 	}
+ 
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index a2204674b6808..5c261005b74e4 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -1621,6 +1621,16 @@ int sas_discover_root_expander(struct domain_device *dev)
+ 
+ /* ---------- Domain revalidation ---------- */
+ 
++static void sas_get_sas_addr_and_dev_type(struct smp_disc_resp *disc_resp,
++					  u8 *sas_addr,
++					  enum sas_device_type *type)
++{
++	memcpy(sas_addr, disc_resp->disc.attached_sas_addr, SAS_ADDR_SIZE);
++	*type = to_dev_type(&disc_resp->disc);
++	if (*type == SAS_PHY_UNUSED)
++		memset(sas_addr, 0, SAS_ADDR_SIZE);
++}
++
+ static int sas_get_phy_discover(struct domain_device *dev,
+ 				int phy_id, struct smp_disc_resp *disc_resp)
+ {
+@@ -1674,13 +1684,8 @@ int sas_get_phy_attached_dev(struct domain_device *dev, int phy_id,
+ 		return -ENOMEM;
+ 
+ 	res = sas_get_phy_discover(dev, phy_id, disc_resp);
+-	if (res == 0) {
+-		memcpy(sas_addr, disc_resp->disc.attached_sas_addr,
+-		       SAS_ADDR_SIZE);
+-		*type = to_dev_type(&disc_resp->disc);
+-		if (*type == 0)
+-			memset(sas_addr, 0, SAS_ADDR_SIZE);
+-	}
++	if (res == 0)
++		sas_get_sas_addr_and_dev_type(disc_resp, sas_addr, type);
+ 	kfree(disc_resp);
+ 	return res;
+ }
+@@ -1940,6 +1945,7 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id,
+ 	struct expander_device *ex = &dev->ex_dev;
+ 	struct ex_phy *phy = &ex->ex_phy[phy_id];
+ 	enum sas_device_type type = SAS_PHY_UNUSED;
++	struct smp_disc_resp *disc_resp;
+ 	u8 sas_addr[SAS_ADDR_SIZE];
+ 	char msg[80] = "";
+ 	int res;
+@@ -1951,33 +1957,41 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id,
+ 		 SAS_ADDR(dev->sas_addr), phy_id, msg);
+ 
+ 	memset(sas_addr, 0, SAS_ADDR_SIZE);
+-	res = sas_get_phy_attached_dev(dev, phy_id, sas_addr, &type);
++	disc_resp = alloc_smp_resp(DISCOVER_RESP_SIZE);
++	if (!disc_resp)
++		return -ENOMEM;
++
++	res = sas_get_phy_discover(dev, phy_id, disc_resp);
+ 	switch (res) {
+ 	case SMP_RESP_NO_PHY:
+ 		phy->phy_state = PHY_NOT_PRESENT;
+ 		sas_unregister_devs_sas_addr(dev, phy_id, last);
+-		return res;
++		goto out_free_resp;
+ 	case SMP_RESP_PHY_VACANT:
+ 		phy->phy_state = PHY_VACANT;
+ 		sas_unregister_devs_sas_addr(dev, phy_id, last);
+-		return res;
++		goto out_free_resp;
+ 	case SMP_RESP_FUNC_ACC:
+ 		break;
+ 	case -ECOMM:
+ 		break;
+ 	default:
+-		return res;
++		goto out_free_resp;
+ 	}
+ 
++	if (res == 0)
++		sas_get_sas_addr_and_dev_type(disc_resp, sas_addr, &type);
++
+ 	if ((SAS_ADDR(sas_addr) == 0) || (res == -ECOMM)) {
+ 		phy->phy_state = PHY_EMPTY;
+ 		sas_unregister_devs_sas_addr(dev, phy_id, last);
+ 		/*
+-		 * Even though the PHY is empty, for convenience we discover
+-		 * the PHY to update the PHY info, like negotiated linkrate.
++		 * Even though the PHY is empty, for convenience we update
++		 * the PHY info, like negotiated linkrate.
+ 		 */
+-		sas_ex_phy_discover(dev, phy_id);
+-		return res;
++		if (res == 0)
++			sas_set_ex_phy(dev, phy_id, disc_resp);
++		goto out_free_resp;
+ 	} else if (SAS_ADDR(sas_addr) == SAS_ADDR(phy->attached_sas_addr) &&
+ 		   dev_type_flutter(type, phy->attached_dev_type)) {
+ 		struct domain_device *ata_dev = sas_ex_to_ata(dev, phy_id);
+@@ -1989,7 +2003,7 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id,
+ 			action = ", needs recovery";
+ 		pr_debug("ex %016llx phy%02d broadcast flutter%s\n",
+ 			 SAS_ADDR(dev->sas_addr), phy_id, action);
+-		return res;
++		goto out_free_resp;
+ 	}
+ 
+ 	/* we always have to delete the old device when we went here */
+@@ -1998,7 +2012,10 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id,
+ 		SAS_ADDR(phy->attached_sas_addr));
+ 	sas_unregister_devs_sas_addr(dev, phy_id, last);
+ 
+-	return sas_discover_new(dev, phy_id);
++	res = sas_discover_new(dev, phy_id);
++out_free_resp:
++	kfree(disc_resp);
++	return res;
+ }
+ 
+ /**
+diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c
+index 595dca92e8db5..2919579fa0846 100644
+--- a/drivers/scsi/lpfc/lpfc_bsg.c
++++ b/drivers/scsi/lpfc/lpfc_bsg.c
+@@ -3169,10 +3169,10 @@ lpfc_bsg_diag_loopback_run(struct bsg_job *job)
+ 	}
+ 
+ 	cmdwqe = &cmdiocbq->wqe;
+-	memset(cmdwqe, 0, sizeof(union lpfc_wqe));
++	memset(cmdwqe, 0, sizeof(*cmdwqe));
+ 	if (phba->sli_rev < LPFC_SLI_REV4) {
+ 		rspwqe = &rspiocbq->wqe;
+-		memset(rspwqe, 0, sizeof(union lpfc_wqe));
++		memset(rspwqe, 0, sizeof(*rspwqe));
+ 	}
+ 
+ 	INIT_LIST_HEAD(&head);
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 425328d9c2d80..d41fea53e41e9 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -1586,7 +1586,7 @@ lpfc_nvmet_setup_io_context(struct lpfc_hba *phba)
+ 		wqe = &nvmewqe->wqe;
+ 
+ 		/* Initialize WQE */
+-		memset(wqe, 0, sizeof(union lpfc_wqe));
++		memset(wqe, 0, sizeof(*wqe));
+ 
+ 		ctx_buf->iocbq->cmd_dmabuf = NULL;
+ 		spin_lock(&phba->sli4_hba.sgl_list_lock);
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 44449c70a375f..76eeba435fd04 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -2741,7 +2741,13 @@ qla2x00_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 		return;
+ 
+ 	if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) {
+-		qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16);
++		/* Will wait for wind down of adapter */
++		ql_dbg(ql_dbg_aer, fcport->vha, 0x900c,
++		    "%s pci offline detected (id %06x)\n", __func__,
++		    fcport->d_id.b24);
++		qla_pci_set_eeh_busy(fcport->vha);
++		qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24,
++		    0, WAIT_TARGET);
+ 		return;
+ 	}
+ }
+@@ -2763,7 +2769,11 @@ qla2x00_terminate_rport_io(struct fc_rport *rport)
+ 	vha = fcport->vha;
+ 
+ 	if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) {
+-		qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16);
++		/* Will wait for wind down of adapter */
++		ql_dbg(ql_dbg_aer, fcport->vha, 0x900b,
++		    "%s pci offline detected (id %06x)\n", __func__,
++		    fcport->d_id.b24);
++		qla_pci_set_eeh_busy(vha);
+ 		qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24,
+ 			0, WAIT_TARGET);
+ 		return;
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index deb642607deb6..2f49baf131e26 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -82,7 +82,7 @@ typedef union {
+ #include "qla_nvme.h"
+ #define QLA2XXX_DRIVER_NAME	"qla2xxx"
+ #define QLA2XXX_APIDEV		"ql2xapidev"
+-#define QLA2XXX_MANUFACTURER	"Marvell Semiconductor, Inc."
++#define QLA2XXX_MANUFACTURER	"Marvell"
+ 
+ /*
+  * We have MAILBOX_REGISTER_COUNT sized arrays in a few places,
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 09cb9413670a5..7309310d2ab94 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -44,7 +44,7 @@ extern int qla2x00_fabric_login(scsi_qla_host_t *, fc_port_t *, uint16_t *);
+ extern int qla2x00_local_device_login(scsi_qla_host_t *, fc_port_t *);
+ 
+ extern int qla24xx_els_dcmd_iocb(scsi_qla_host_t *, int, port_id_t);
+-extern int qla24xx_els_dcmd2_iocb(scsi_qla_host_t *, int, fc_port_t *, bool);
++extern int qla24xx_els_dcmd2_iocb(scsi_qla_host_t *, int, fc_port_t *);
+ extern void qla2x00_els_dcmd2_free(scsi_qla_host_t *vha,
+ 				   struct els_plogi *els_plogi);
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index a314cfc5b263f..8377624d76c98 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1193,8 +1193,12 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	return rval;
+ 
+ done_free_sp:
+-	/* ref: INIT */
+-	kref_put(&sp->cmd_kref, qla2x00_sp_release);
++	/*
++	 * Use qla24xx_async_gnl_sp_done to purge all pending gnl requests.
++	 * kref_put is called behind the scenes.
++	 */
++	sp->u.iocb_cmd.u.mbx.in_mb[0] = MBS_COMMAND_ERROR;
++	qla24xx_async_gnl_sp_done(sp, QLA_COMMAND_ERROR);
+ 	fcport->flags &= ~(FCF_ASYNC_SENT);
+ done:
+ 	fcport->flags &= ~(FCF_ASYNC_ACTIVE);
+@@ -2665,6 +2669,40 @@ qla83xx_nic_core_fw_load(scsi_qla_host_t *vha)
+ 	return rval;
+ }
+ 
++static void qla_enable_fce_trace(scsi_qla_host_t *vha)
++{
++	int rval;
++	struct qla_hw_data *ha = vha->hw;
++
++	if (ha->fce) {
++		ha->flags.fce_enabled = 1;
++		memset(ha->fce, 0, fce_calc_size(ha->fce_bufs));
++		rval = qla2x00_enable_fce_trace(vha,
++		    ha->fce_dma, ha->fce_bufs, ha->fce_mb, &ha->fce_bufs);
++
++		if (rval) {
++			ql_log(ql_log_warn, vha, 0x8033,
++			    "Unable to reinitialize FCE (%d).\n", rval);
++			ha->flags.fce_enabled = 0;
++		}
++	}
++}
++
++static void qla_enable_eft_trace(scsi_qla_host_t *vha)
++{
++	int rval;
++	struct qla_hw_data *ha = vha->hw;
++
++	if (ha->eft) {
++		memset(ha->eft, 0, EFT_SIZE);
++		rval = qla2x00_enable_eft_trace(vha, ha->eft_dma, EFT_NUM_BUFFERS);
++
++		if (rval) {
++			ql_log(ql_log_warn, vha, 0x8034,
++			    "Unable to reinitialize EFT (%d).\n", rval);
++		}
++	}
++}
+ /*
+ * qla2x00_initialize_adapter
+ *      Initialize board.
+@@ -3668,9 +3706,8 @@ qla24xx_chip_diag(scsi_qla_host_t *vha)
+ }
+ 
+ static void
+-qla2x00_init_fce_trace(scsi_qla_host_t *vha)
++qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ {
+-	int rval;
+ 	dma_addr_t tc_dma;
+ 	void *tc;
+ 	struct qla_hw_data *ha = vha->hw;
+@@ -3699,27 +3736,17 @@ qla2x00_init_fce_trace(scsi_qla_host_t *vha)
+ 		return;
+ 	}
+ 
+-	rval = qla2x00_enable_fce_trace(vha, tc_dma, FCE_NUM_BUFFERS,
+-					ha->fce_mb, &ha->fce_bufs);
+-	if (rval) {
+-		ql_log(ql_log_warn, vha, 0x00bf,
+-		       "Unable to initialize FCE (%d).\n", rval);
+-		dma_free_coherent(&ha->pdev->dev, FCE_SIZE, tc, tc_dma);
+-		return;
+-	}
+-
+ 	ql_dbg(ql_dbg_init, vha, 0x00c0,
+ 	       "Allocated (%d KB) for FCE...\n", FCE_SIZE / 1024);
+ 
+-	ha->flags.fce_enabled = 1;
+ 	ha->fce_dma = tc_dma;
+ 	ha->fce = tc;
++	ha->fce_bufs = FCE_NUM_BUFFERS;
+ }
+ 
+ static void
+-qla2x00_init_eft_trace(scsi_qla_host_t *vha)
++qla2x00_alloc_eft_trace(scsi_qla_host_t *vha)
+ {
+-	int rval;
+ 	dma_addr_t tc_dma;
+ 	void *tc;
+ 	struct qla_hw_data *ha = vha->hw;
+@@ -3744,14 +3771,6 @@ qla2x00_init_eft_trace(scsi_qla_host_t *vha)
+ 		return;
+ 	}
+ 
+-	rval = qla2x00_enable_eft_trace(vha, tc_dma, EFT_NUM_BUFFERS);
+-	if (rval) {
+-		ql_log(ql_log_warn, vha, 0x00c2,
+-		       "Unable to initialize EFT (%d).\n", rval);
+-		dma_free_coherent(&ha->pdev->dev, EFT_SIZE, tc, tc_dma);
+-		return;
+-	}
+-
+ 	ql_dbg(ql_dbg_init, vha, 0x00c3,
+ 	       "Allocated (%d KB) EFT ...\n", EFT_SIZE / 1024);
+ 
+@@ -3759,13 +3778,6 @@ qla2x00_init_eft_trace(scsi_qla_host_t *vha)
+ 	ha->eft = tc;
+ }
+ 
+-static void
+-qla2x00_alloc_offload_mem(scsi_qla_host_t *vha)
+-{
+-	qla2x00_init_fce_trace(vha);
+-	qla2x00_init_eft_trace(vha);
+-}
+-
+ void
+ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ {
+@@ -3820,10 +3832,10 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ 		if (ha->tgt.atio_ring)
+ 			mq_size += ha->tgt.atio_q_length * sizeof(request_t);
+ 
+-		qla2x00_init_fce_trace(vha);
++		qla2x00_alloc_fce_trace(vha);
+ 		if (ha->fce)
+ 			fce_size = sizeof(struct qla2xxx_fce_chain) + FCE_SIZE;
+-		qla2x00_init_eft_trace(vha);
++		qla2x00_alloc_eft_trace(vha);
+ 		if (ha->eft)
+ 			eft_size = EFT_SIZE;
+ 	}
+@@ -4253,7 +4265,6 @@ qla2x00_setup_chip(scsi_qla_host_t *vha)
+ 	struct qla_hw_data *ha = vha->hw;
+ 	struct device_reg_2xxx __iomem *reg = &ha->iobase->isp;
+ 	unsigned long flags;
+-	uint16_t fw_major_version;
+ 	int done_once = 0;
+ 
+ 	if (IS_P3P_TYPE(ha)) {
+@@ -4320,7 +4331,6 @@ qla2x00_setup_chip(scsi_qla_host_t *vha)
+ 					goto failed;
+ 
+ enable_82xx_npiv:
+-				fw_major_version = ha->fw_major_version;
+ 				if (IS_P3P_TYPE(ha))
+ 					qla82xx_check_md_needed(vha);
+ 				else
+@@ -4349,12 +4359,11 @@ qla2x00_setup_chip(scsi_qla_host_t *vha)
+ 				if (rval != QLA_SUCCESS)
+ 					goto failed;
+ 
+-				if (!fw_major_version && !(IS_P3P_TYPE(ha)))
+-					qla2x00_alloc_offload_mem(vha);
+-
+ 				if (ql2xallocfwdump && !(IS_P3P_TYPE(ha)))
+ 					qla2x00_alloc_fw_dump(vha);
+ 
++				qla_enable_fce_trace(vha);
++				qla_enable_eft_trace(vha);
+ 			} else {
+ 				goto failed;
+ 			}
+@@ -7487,12 +7496,12 @@ qla2x00_abort_isp_cleanup(scsi_qla_host_t *vha)
+ int
+ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ {
+-	int rval;
+ 	uint8_t        status = 0;
+ 	struct qla_hw_data *ha = vha->hw;
+ 	struct scsi_qla_host *vp, *tvp;
+ 	struct req_que *req = ha->req_q_map[0];
+ 	unsigned long flags;
++	fc_port_t *fcport;
+ 
+ 	if (vha->flags.online) {
+ 		qla2x00_abort_isp_cleanup(vha);
+@@ -7561,6 +7570,15 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 			       "ISP Abort - ISP reg disconnect post nvmram config, exiting.\n");
+ 			return status;
+ 		}
++
++		/* User may have updated [fcp|nvme] prefer in flash */
++		list_for_each_entry(fcport, &vha->vp_fcports, list) {
++			if (NVME_PRIORITY(ha, fcport))
++				fcport->do_prli_nvme = 1;
++			else
++				fcport->do_prli_nvme = 0;
++		}
++
+ 		if (!qla2x00_restart_isp(vha)) {
+ 			clear_bit(RESET_MARKER_NEEDED, &vha->dpc_flags);
+ 
+@@ -7581,31 +7599,7 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 
+ 			if (IS_QLA81XX(ha) || IS_QLA8031(ha))
+ 				qla2x00_get_fw_version(vha);
+-			if (ha->fce) {
+-				ha->flags.fce_enabled = 1;
+-				memset(ha->fce, 0,
+-				    fce_calc_size(ha->fce_bufs));
+-				rval = qla2x00_enable_fce_trace(vha,
+-				    ha->fce_dma, ha->fce_bufs, ha->fce_mb,
+-				    &ha->fce_bufs);
+-				if (rval) {
+-					ql_log(ql_log_warn, vha, 0x8033,
+-					    "Unable to reinitialize FCE "
+-					    "(%d).\n", rval);
+-					ha->flags.fce_enabled = 0;
+-				}
+-			}
+ 
+-			if (ha->eft) {
+-				memset(ha->eft, 0, EFT_SIZE);
+-				rval = qla2x00_enable_eft_trace(vha,
+-				    ha->eft_dma, EFT_NUM_BUFFERS);
+-				if (rval) {
+-					ql_log(ql_log_warn, vha, 0x8034,
+-					    "Unable to reinitialize EFT "
+-					    "(%d).\n", rval);
+-				}
+-			}
+ 		} else {	/* failed the ISP abort */
+ 			vha->flags.online = 1;
+ 			if (test_bit(ISP_ABORT_RETRY, &vha->dpc_flags)) {
+@@ -7655,6 +7649,14 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 				atomic_inc(&vp->vref_count);
+ 				spin_unlock_irqrestore(&ha->vport_slock, flags);
+ 
++				/* User may have updated [fcp|nvme] prefer in flash */
++				list_for_each_entry(fcport, &vp->vp_fcports, list) {
++					if (NVME_PRIORITY(ha, fcport))
++						fcport->do_prli_nvme = 1;
++					else
++						fcport->do_prli_nvme = 0;
++				}
++
+ 				qla2x00_vp_abort_isp(vp);
+ 
+ 				spin_lock_irqsave(&ha->vport_slock, flags);
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index df90169f82440..0b41e8a066026 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2587,6 +2587,33 @@ void
+ qla2x00_sp_release(struct kref *kref)
+ {
+ 	struct srb *sp = container_of(kref, struct srb, cmd_kref);
++	struct scsi_qla_host *vha = sp->vha;
++
++	switch (sp->type) {
++	case SRB_CT_PTHRU_CMD:
++		/* GPSC & GFPNID use fcport->ct_desc.ct_sns for both req & rsp */
++		if (sp->u.iocb_cmd.u.ctarg.req &&
++			(!sp->fcport ||
++			 sp->u.iocb_cmd.u.ctarg.req != sp->fcport->ct_desc.ct_sns)) {
++			dma_free_coherent(&vha->hw->pdev->dev,
++			    sp->u.iocb_cmd.u.ctarg.req_allocated_size,
++			    sp->u.iocb_cmd.u.ctarg.req,
++			    sp->u.iocb_cmd.u.ctarg.req_dma);
++			sp->u.iocb_cmd.u.ctarg.req = NULL;
++		}
++		if (sp->u.iocb_cmd.u.ctarg.rsp &&
++			(!sp->fcport ||
++			 sp->u.iocb_cmd.u.ctarg.rsp != sp->fcport->ct_desc.ct_sns)) {
++			dma_free_coherent(&vha->hw->pdev->dev,
++			    sp->u.iocb_cmd.u.ctarg.rsp_allocated_size,
++			    sp->u.iocb_cmd.u.ctarg.rsp,
++			    sp->u.iocb_cmd.u.ctarg.rsp_dma);
++			sp->u.iocb_cmd.u.ctarg.rsp = NULL;
++		}
++		break;
++	default:
++		break;
++	}
+ 
+ 	sp->free(sp);
+ }
+@@ -2610,7 +2637,8 @@ static void qla2x00_els_dcmd_sp_free(srb_t *sp)
+ {
+ 	struct srb_iocb *elsio = &sp->u.iocb_cmd;
+ 
+-	kfree(sp->fcport);
++	if (sp->fcport)
++		qla2x00_free_fcport(sp->fcport);
+ 
+ 	if (elsio->u.els_logo.els_logo_pyld)
+ 		dma_free_coherent(&sp->vha->hw->pdev->dev, DMA_POOL_SIZE,
+@@ -2692,7 +2720,7 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	 */
+ 	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ 	if (!sp) {
+-		kfree(fcport);
++		qla2x00_free_fcport(fcport);
+ 		ql_log(ql_log_info, vha, 0x70e6,
+ 		 "SRB allocation failed\n");
+ 		return -ENOMEM;
+@@ -2723,6 +2751,7 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	if (!elsio->u.els_logo.els_logo_pyld) {
+ 		/* ref: INIT */
+ 		kref_put(&sp->cmd_kref, qla2x00_sp_release);
++		qla2x00_free_fcport(fcport);
+ 		return QLA_FUNCTION_FAILED;
+ 	}
+ 
+@@ -2747,6 +2776,7 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	if (rval != QLA_SUCCESS) {
+ 		/* ref: INIT */
+ 		kref_put(&sp->cmd_kref, qla2x00_sp_release);
++		qla2x00_free_fcport(fcport);
+ 		return QLA_FUNCTION_FAILED;
+ 	}
+ 
+@@ -3012,7 +3042,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ 
+ int
+ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+-    fc_port_t *fcport, bool wait)
++			fc_port_t *fcport)
+ {
+ 	srb_t *sp;
+ 	struct srb_iocb *elsio = NULL;
+@@ -3027,8 +3057,7 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	if (!sp) {
+ 		ql_log(ql_log_info, vha, 0x70e6,
+ 		 "SRB allocation failed\n");
+-		fcport->flags &= ~FCF_ASYNC_ACTIVE;
+-		return -ENOMEM;
++		goto done;
+ 	}
+ 
+ 	fcport->flags |= FCF_ASYNC_SENT;
+@@ -3037,9 +3066,6 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	ql_dbg(ql_dbg_io, vha, 0x3073,
+ 	       "%s Enter: PLOGI portid=%06x\n", __func__, fcport->d_id.b24);
+ 
+-	if (wait)
+-		sp->flags = SRB_WAKEUP_ON_COMP;
+-
+ 	sp->type = SRB_ELS_DCMD;
+ 	sp->name = "ELS_DCMD";
+ 	sp->fcport = fcport;
+@@ -3055,7 +3081,7 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 
+ 	if (!elsio->u.els_plogi.els_plogi_pyld) {
+ 		rval = QLA_FUNCTION_FAILED;
+-		goto out;
++		goto done_free_sp;
+ 	}
+ 
+ 	resp_ptr = elsio->u.els_plogi.els_resp_pyld =
+@@ -3064,7 +3090,7 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 
+ 	if (!elsio->u.els_plogi.els_resp_pyld) {
+ 		rval = QLA_FUNCTION_FAILED;
+-		goto out;
++		goto done_free_sp;
+ 	}
+ 
+ 	ql_dbg(ql_dbg_io, vha, 0x3073, "PLOGI %p %p\n", ptr, resp_ptr);
+@@ -3080,7 +3106,6 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 
+ 	if (els_opcode == ELS_DCMD_PLOGI && DBELL_ACTIVE(vha)) {
+ 		struct fc_els_flogi *p = ptr;
+-
+ 		p->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_SEC);
+ 	}
+ 
+@@ -3089,10 +3114,11 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	    (uint8_t *)elsio->u.els_plogi.els_plogi_pyld,
+ 	    sizeof(*elsio->u.els_plogi.els_plogi_pyld));
+ 
+-	init_completion(&elsio->u.els_plogi.comp);
+ 	rval = qla2x00_start_sp(sp);
+ 	if (rval != QLA_SUCCESS) {
+-		rval = QLA_FUNCTION_FAILED;
++		fcport->flags |= FCF_LOGIN_NEEDED;
++		set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++		goto done_free_sp;
+ 	} else {
+ 		ql_dbg(ql_dbg_disc, vha, 0x3074,
+ 		    "%s PLOGI sent, hdl=%x, loopid=%x, to port_id %06x from port_id %06x\n",
+@@ -3100,21 +3126,15 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 		    fcport->d_id.b24, vha->d_id.b24);
+ 	}
+ 
+-	if (wait) {
+-		wait_for_completion(&elsio->u.els_plogi.comp);
+-
+-		if (elsio->u.els_plogi.comp_status != CS_COMPLETE)
+-			rval = QLA_FUNCTION_FAILED;
+-	} else {
+-		goto done;
+-	}
++	return rval;
+ 
+-out:
+-	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
++done_free_sp:
+ 	qla2x00_els_dcmd2_free(vha, &elsio->u.els_plogi);
+ 	/* ref: INIT */
+ 	kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
++	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
++	qla2x00_set_fcport_disc_state(fcport, DSC_DELETED);
+ 	return rval;
+ }
+ 
+@@ -3918,7 +3938,7 @@ qla2x00_start_sp(srb_t *sp)
+ 		return -EAGAIN;
+ 	}
+ 
+-	pkt = __qla2x00_alloc_iocbs(sp->qpair, sp);
++	pkt = qla2x00_alloc_iocbs_ready(sp->qpair, sp);
+ 	if (!pkt) {
+ 		rval = -EAGAIN;
+ 		ql_log(ql_log_warn, vha, 0x700c,
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 21ec32b4fb280..0cd6f3e148824 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -194,7 +194,7 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
+ 	if (ha->flags.purge_mbox || chip_reset != ha->chip_reset ||
+ 	    ha->flags.eeh_busy) {
+ 		ql_log(ql_log_warn, vha, 0xd035,
+-		       "Error detected: purge[%d] eeh[%d] cmd=0x%x, Exiting.\n",
++		       "Purge mbox: purge[%d] eeh[%d] cmd=0x%x, Exiting.\n",
+ 		       ha->flags.purge_mbox, ha->flags.eeh_busy, mcp->mb[0]);
+ 		rval = QLA_ABORTED;
+ 		goto premature_exit;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 03348f605c2e9..931bdaeaee1de 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -4602,6 +4602,7 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len,
+ 	ha->init_cb_dma = 0;
+ fail_free_vp_map:
+ 	kfree(ha->vp_map);
++	ha->vp_map = NULL;
+ fail:
+ 	ql_log(ql_log_fatal, NULL, 0x0030,
+ 	    "Memory allocation failure.\n");
+@@ -5583,7 +5584,7 @@ qla2x00_do_work(struct scsi_qla_host *vha)
+ 			break;
+ 		case QLA_EVT_ELS_PLOGI:
+ 			qla24xx_els_dcmd2_iocb(vha, ELS_DCMD_PLOGI,
+-			    e->u.fcport.fcport, false);
++			    e->u.fcport.fcport);
+ 			break;
+ 		case QLA_EVT_SA_REPLACE:
+ 			rc = qla24xx_issue_sa_replace_iocb(vha, e);
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 2ef2dbac0db27..d7551b1443e4a 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1062,6 +1062,16 @@ void qlt_free_session_done(struct work_struct *work)
+ 		    "%s: sess %p logout completed\n", __func__, sess);
+ 	}
+ 
++	/* check for any straggling io left behind */
++	if (!(sess->flags & FCF_FCP2_DEVICE) &&
++	    qla2x00_eh_wait_for_pending_commands(sess->vha, sess->d_id.b24, 0, WAIT_TARGET)) {
++		ql_log(ql_log_warn, vha, 0x3027,
++		    "I/O did not return. Resetting.\n");
++		set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
++		qla2xxx_wake_dpc(vha);
++		qla2x00_wait_for_chip_reset(vha);
++	}
++
+ 	if (sess->logo_ack_needed) {
+ 		sess->logo_ack_needed = 0;
+ 		qla24xx_async_notify_ack(vha, sess,
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 44680f65ea145..ca99be7341d9b 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -1619,6 +1619,40 @@ int scsi_add_device(struct Scsi_Host *host, uint channel,
+ }
+ EXPORT_SYMBOL(scsi_add_device);
+ 
++int scsi_resume_device(struct scsi_device *sdev)
++{
++	struct device *dev = &sdev->sdev_gendev;
++	int ret = 0;
++
++	device_lock(dev);
++
++	/*
++	 * Bail out if the device or its queue are not running. Otherwise,
++	 * the rescan may block waiting for commands to be executed, with us
++	 * holding the device lock. This can result in a potential deadlock
++	 * in the power management core code when system resume is on-going.
++	 */
++	if (sdev->sdev_state != SDEV_RUNNING ||
++	    blk_queue_pm_only(sdev->request_queue)) {
++		ret = -EWOULDBLOCK;
++		goto unlock;
++	}
++
++	if (dev->driver && try_module_get(dev->driver->owner)) {
++		struct scsi_driver *drv = to_scsi_driver(dev->driver);
++
++		if (drv->resume)
++			ret = drv->resume(dev);
++		module_put(dev->driver->owner);
++	}
++
++unlock:
++	device_unlock(dev);
++
++	return ret;
++}
++EXPORT_SYMBOL(scsi_resume_device);
++
+ int scsi_rescan_device(struct scsi_device *sdev)
+ {
+ 	struct device *dev = &sdev->sdev_gendev;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index a12ff43ac8fd0..f76dbeade0c4e 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3948,7 +3948,21 @@ static int sd_suspend_runtime(struct device *dev)
+ 	return sd_suspend_common(dev, true);
+ }
+ 
+-static int sd_resume(struct device *dev, bool runtime)
++static int sd_resume(struct device *dev)
++{
++	struct scsi_disk *sdkp = dev_get_drvdata(dev);
++
++	sd_printk(KERN_NOTICE, sdkp, "Starting disk\n");
++
++	if (opal_unlock_from_suspend(sdkp->opal_dev)) {
++		sd_printk(KERN_NOTICE, sdkp, "OPAL unlock failed\n");
++		return -EIO;
++	}
++
++	return 0;
++}
++
++static int sd_resume_common(struct device *dev, bool runtime)
+ {
+ 	struct scsi_disk *sdkp = dev_get_drvdata(dev);
+ 	int ret;
+@@ -3964,7 +3978,7 @@ static int sd_resume(struct device *dev, bool runtime)
+ 	sd_printk(KERN_NOTICE, sdkp, "Starting disk\n");
+ 	ret = sd_start_stop_device(sdkp, 1);
+ 	if (!ret) {
+-		opal_unlock_from_suspend(sdkp->opal_dev);
++		sd_resume(dev);
+ 		sdkp->suspended = false;
+ 	}
+ 
+@@ -3983,7 +3997,7 @@ static int sd_resume_system(struct device *dev)
+ 		return 0;
+ 	}
+ 
+-	return sd_resume(dev, false);
++	return sd_resume_common(dev, false);
+ }
+ 
+ static int sd_resume_runtime(struct device *dev)
+@@ -4010,7 +4024,7 @@ static int sd_resume_runtime(struct device *dev)
+ 				  "Failed to clear sense data\n");
+ 	}
+ 
+-	return sd_resume(dev, true);
++	return sd_resume_common(dev, true);
+ }
+ 
+ static const struct dev_pm_ops sd_pm_ops = {
+@@ -4033,6 +4047,7 @@ static struct scsi_driver sd_template = {
+ 		.pm		= &sd_pm_ops,
+ 	},
+ 	.rescan			= sd_rescan,
++	.resume			= sd_resume,
+ 	.init_command		= sd_init_command,
+ 	.uninit_command		= sd_uninit_command,
+ 	.done			= sd_done,
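Taken together, the scsi_scan.c and sd.c changes split disk resume into two layers: sd_resume() now performs only the TCG OPAL unlock and is reachable from the midlayer through the new .resume member of struct scsi_driver, which scsi_resume_device() invokes under the device lock after verifying that the device and its queue are running. A minimal caller sketch (try_resume() is hypothetical; the header carrying the declaration is assumed):

#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

static int try_resume(struct scsi_device *sdev)
{
	int ret = scsi_resume_device(sdev);

	if (ret == -EWOULDBLOCK)
		return 0;	/* device or queue not running yet; benign */
	return ret;
}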
+diff --git a/drivers/slimbus/core.c b/drivers/slimbus/core.c
+index d43873bb5fe6d..01cbd46219810 100644
+--- a/drivers/slimbus/core.c
++++ b/drivers/slimbus/core.c
+@@ -436,8 +436,8 @@ static int slim_device_alloc_laddr(struct slim_device *sbdev,
+ 		if (ret < 0)
+ 			goto err;
+ 	} else if (report_present) {
+-		ret = ida_simple_get(&ctrl->laddr_ida,
+-				     0, SLIM_LA_MANAGER - 1, GFP_KERNEL);
++		ret = ida_alloc_max(&ctrl->laddr_ida,
++				    SLIM_LA_MANAGER - 1, GFP_KERNEL);
+ 		if (ret < 0)
+ 			goto err;
+ 
+diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
+index 739e4eee6b75c..7e9074519ad22 100644
+--- a/drivers/soc/fsl/qbman/qman.c
++++ b/drivers/soc/fsl/qbman/qman.c
+@@ -991,7 +991,7 @@ struct qman_portal {
+ 	/* linked-list of CSCN handlers. */
+ 	struct list_head cgr_cbs;
+ 	/* list lock */
+-	spinlock_t cgr_lock;
++	raw_spinlock_t cgr_lock;
+ 	struct work_struct congestion_work;
+ 	struct work_struct mr_work;
+ 	char irqname[MAX_IRQNAME];
+@@ -1281,7 +1281,7 @@ static int qman_create_portal(struct qman_portal *portal,
+ 		/* if the given mask is NULL, assume all CGRs can be seen */
+ 		qman_cgrs_fill(&portal->cgrs[0]);
+ 	INIT_LIST_HEAD(&portal->cgr_cbs);
+-	spin_lock_init(&portal->cgr_lock);
++	raw_spin_lock_init(&portal->cgr_lock);
+ 	INIT_WORK(&portal->congestion_work, qm_congestion_task);
+ 	INIT_WORK(&portal->mr_work, qm_mr_process_task);
+ 	portal->bits = 0;
+@@ -1456,11 +1456,14 @@ static void qm_congestion_task(struct work_struct *work)
+ 	union qm_mc_result *mcr;
+ 	struct qman_cgr *cgr;
+ 
+-	spin_lock(&p->cgr_lock);
++	/*
++	 * FIXME: QM_MCR_TIMEOUT is 10ms, which is too long for a raw spinlock!
++	 */
++	raw_spin_lock_irq(&p->cgr_lock);
+ 	qm_mc_start(&p->p);
+ 	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
+ 	if (!qm_mc_result_timeout(&p->p, &mcr)) {
+-		spin_unlock(&p->cgr_lock);
++		raw_spin_unlock_irq(&p->cgr_lock);
+ 		dev_crit(p->config->dev, "QUERYCONGESTION timeout\n");
+ 		qman_p_irqsource_add(p, QM_PIRQ_CSCI);
+ 		return;
+@@ -1476,7 +1479,7 @@ static void qm_congestion_task(struct work_struct *work)
+ 	list_for_each_entry(cgr, &p->cgr_cbs, node)
+ 		if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
+ 			cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
+-	spin_unlock(&p->cgr_lock);
++	raw_spin_unlock_irq(&p->cgr_lock);
+ 	qman_p_irqsource_add(p, QM_PIRQ_CSCI);
+ }
+ 
+@@ -2440,7 +2443,7 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+ 	preempt_enable();
+ 
+ 	cgr->chan = p->config->channel;
+-	spin_lock(&p->cgr_lock);
++	raw_spin_lock_irq(&p->cgr_lock);
+ 
+ 	if (opts) {
+ 		struct qm_mcc_initcgr local_opts = *opts;
+@@ -2477,7 +2480,7 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
+ 	    qman_cgrs_get(&p->cgrs[1], cgr->cgrid))
+ 		cgr->cb(p, cgr, 1);
+ out:
+-	spin_unlock(&p->cgr_lock);
++	raw_spin_unlock_irq(&p->cgr_lock);
+ 	put_affine_portal();
+ 	return ret;
+ }
+@@ -2512,7 +2515,7 @@ int qman_delete_cgr(struct qman_cgr *cgr)
+ 		return -EINVAL;
+ 
+ 	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));
+-	spin_lock_irqsave(&p->cgr_lock, irqflags);
++	raw_spin_lock_irqsave(&p->cgr_lock, irqflags);
+ 	list_del(&cgr->node);
+ 	/*
+ 	 * If there are no other CGR objects for this CGRID in the list,
+@@ -2537,7 +2540,7 @@ int qman_delete_cgr(struct qman_cgr *cgr)
+ 		/* add back to the list */
+ 		list_add(&cgr->node, &p->cgr_cbs);
+ release_lock:
+-	spin_unlock_irqrestore(&p->cgr_lock, irqflags);
++	raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags);
+ 	put_affine_portal();
+ 	return ret;
+ }
+@@ -2577,9 +2580,9 @@ static int qman_update_cgr(struct qman_cgr *cgr, struct qm_mcc_initcgr *opts)
+ 	if (!p)
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&p->cgr_lock, irqflags);
++	raw_spin_lock_irqsave(&p->cgr_lock, irqflags);
+ 	ret = qm_modify_cgr(cgr, 0, opts);
+-	spin_unlock_irqrestore(&p->cgr_lock, irqflags);
++	raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags);
+ 	put_affine_portal();
+ 	return ret;
+ }
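The conversion replaces cgr_lock with a raw spinlock, which keeps spinning even on PREEMPT_RT (where ordinary spinlocks become sleeping locks), and consistently disables interrupts across every cgr_cbs traversal; the FIXME above flags the cost, since polling bounded by QM_MCR_TIMEOUT now happens under a non-preemptible lock. The idiom in isolation, with a hypothetical my_lock:

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(my_lock);

static void touch_shared_state(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&my_lock, flags);	/* never sleeps, even on RT */
	/* ... short, non-sleeping critical section ... */
	raw_spin_unlock_irqrestore(&my_lock, flags);
}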
+diff --git a/drivers/staging/media/ipu3/ipu3-v4l2.c b/drivers/staging/media/ipu3/ipu3-v4l2.c
+index e530767e80a5d..55cc44a401bc4 100644
+--- a/drivers/staging/media/ipu3/ipu3-v4l2.c
++++ b/drivers/staging/media/ipu3/ipu3-v4l2.c
+@@ -1069,6 +1069,11 @@ static int imgu_v4l2_subdev_register(struct imgu_device *imgu,
+ 	struct imgu_media_pipe *imgu_pipe = &imgu->imgu_pipe[pipe];
+ 
+ 	/* Initialize subdev media entity */
++	imgu_sd->subdev.entity.ops = &imgu_media_ops;
++	for (i = 0; i < IMGU_NODE_NUM; i++) {
++		imgu_sd->subdev_pads[i].flags = imgu_pipe->nodes[i].output ?
++			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
++	}
+ 	r = media_entity_pads_init(&imgu_sd->subdev.entity, IMGU_NODE_NUM,
+ 				   imgu_sd->subdev_pads);
+ 	if (r) {
+@@ -1076,11 +1081,6 @@ static int imgu_v4l2_subdev_register(struct imgu_device *imgu,
+ 			"failed initialize subdev media entity (%d)\n", r);
+ 		return r;
+ 	}
+-	imgu_sd->subdev.entity.ops = &imgu_media_ops;
+-	for (i = 0; i < IMGU_NODE_NUM; i++) {
+-		imgu_sd->subdev_pads[i].flags = imgu_pipe->nodes[i].output ?
+-			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+-	}
+ 
+ 	/* Initialize subdev */
+ 	v4l2_subdev_init(&imgu_sd->subdev, &imgu_subdev_ops);
+@@ -1177,15 +1177,15 @@ static int imgu_v4l2_node_setup(struct imgu_device *imgu, unsigned int pipe,
+ 	}
+ 
+ 	/* Initialize media entities */
++	node->vdev_pad.flags = node->output ?
++		MEDIA_PAD_FL_SOURCE : MEDIA_PAD_FL_SINK;
++	vdev->entity.ops = NULL;
+ 	r = media_entity_pads_init(&vdev->entity, 1, &node->vdev_pad);
+ 	if (r) {
+ 		dev_err(dev, "failed initialize media entity (%d)\n", r);
+ 		mutex_destroy(&node->lock);
+ 		return r;
+ 	}
+-	node->vdev_pad.flags = node->output ?
+-		MEDIA_PAD_FL_SOURCE : MEDIA_PAD_FL_SINK;
+-	vdev->entity.ops = NULL;
+ 
+ 	/* Initialize vbq */
+ 	vbq->type = node->vdev_fmt.type;
+diff --git a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+index 258aa0e37f554..4c3684dd902ed 100644
+--- a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+@@ -937,8 +937,9 @@ static int create_component(struct vchiq_mmal_instance *instance,
+ 	/* build component create message */
+ 	m.h.type = MMAL_MSG_TYPE_COMPONENT_CREATE;
+ 	m.u.component_create.client_component = component->client_component;
+-	strncpy(m.u.component_create.name, name,
+-		sizeof(m.u.component_create.name));
++	strscpy_pad(m.u.component_create.name, name,
++		    sizeof(m.u.component_create.name));
++	m.u.component_create.pid = 0;
+ 
+ 	ret = send_synchronous_mmal_msg(instance, &m,
+ 					sizeof(m.u.component_create),
+diff --git a/drivers/tee/optee/device.c b/drivers/tee/optee/device.c
+index 4b10921276942..1892e49a8e6a6 100644
+--- a/drivers/tee/optee/device.c
++++ b/drivers/tee/optee/device.c
+@@ -90,13 +90,14 @@ static int optee_register_device(const uuid_t *device_uuid, u32 func)
+ 	if (rc) {
+ 		pr_err("device registration failed, err: %d\n", rc);
+ 		put_device(&optee_device->dev);
++		return rc;
+ 	}
+ 
+ 	if (func == PTA_CMD_GET_DEVICES_SUPP)
+ 		device_create_file(&optee_device->dev,
+ 				   &dev_attr_need_supplicant);
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ static int __optee_enumerate_devices(u32 func)
+diff --git a/drivers/thermal/devfreq_cooling.c b/drivers/thermal/devfreq_cooling.c
+index 262e62ab6cf2f..90b828bcca243 100644
+--- a/drivers/thermal/devfreq_cooling.c
++++ b/drivers/thermal/devfreq_cooling.c
+@@ -201,7 +201,7 @@ static int devfreq_cooling_get_requested_power(struct thermal_cooling_device *cd
+ 
+ 		res = dfc->power_ops->get_real_power(df, power, freq, voltage);
+ 		if (!res) {
+-			state = dfc->capped_state;
++			state = dfc->max_state - dfc->capped_state;
+ 
+ 			/* Convert EM power into milli-Watts first */
+ 			dfc->res_util = dfc->em_pd->table[state].power;
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+index 649f67fdf3454..d75fae7b7ed22 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+@@ -176,14 +176,14 @@ static int proc_thermal_get_zone_temp(struct thermal_zone_device *zone,
+ 					 int *temp)
+ {
+ 	int cpu;
+-	int curr_temp;
++	int curr_temp, ret;
+ 
+ 	*temp = 0;
+ 
+ 	for_each_online_cpu(cpu) {
+-		curr_temp = intel_tcc_get_temp(cpu, false);
+-		if (curr_temp < 0)
+-			return curr_temp;
++		ret = intel_tcc_get_temp(cpu, &curr_temp, false);
++		if (ret < 0)
++			return ret;
+ 		if (!*temp || curr_temp > *temp)
+ 			*temp = curr_temp;
+ 	}
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c
+index 2f00fc3bf274a..e964a9375722a 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c
+@@ -27,9 +27,9 @@ static int rapl_mmio_cpu_online(unsigned int cpu)
+ 	if (topology_physical_package_id(cpu))
+ 		return 0;
+ 
+-	rp = rapl_find_package_domain(cpu, &rapl_mmio_priv, true);
++	rp = rapl_find_package_domain_cpuslocked(cpu, &rapl_mmio_priv, true);
+ 	if (!rp) {
+-		rp = rapl_add_package(cpu, &rapl_mmio_priv, true);
++		rp = rapl_add_package_cpuslocked(cpu, &rapl_mmio_priv, true);
+ 		if (IS_ERR(rp))
+ 			return PTR_ERR(rp);
+ 	}
+@@ -42,14 +42,14 @@ static int rapl_mmio_cpu_down_prep(unsigned int cpu)
+ 	struct rapl_package *rp;
+ 	int lead_cpu;
+ 
+-	rp = rapl_find_package_domain(cpu, &rapl_mmio_priv, true);
++	rp = rapl_find_package_domain_cpuslocked(cpu, &rapl_mmio_priv, true);
+ 	if (!rp)
+ 		return 0;
+ 
+ 	cpumask_clear_cpu(cpu, &rp->cpumask);
+ 	lead_cpu = cpumask_first(&rp->cpumask);
+ 	if (lead_cpu >= nr_cpu_ids)
+-		rapl_remove_package(rp);
++		rapl_remove_package_cpuslocked(rp);
+ 	else if (rp->lead_cpu == cpu)
+ 		rp->lead_cpu = lead_cpu;
+ 	return 0;
+diff --git a/drivers/thermal/intel/intel_tcc.c b/drivers/thermal/intel/intel_tcc.c
+index 2e5c741c41ca0..5e8b7f34b3951 100644
+--- a/drivers/thermal/intel/intel_tcc.c
++++ b/drivers/thermal/intel/intel_tcc.c
+@@ -103,18 +103,19 @@ EXPORT_SYMBOL_NS_GPL(intel_tcc_set_offset, INTEL_TCC);
+ /**
+  * intel_tcc_get_temp() - returns the current temperature
+  * @cpu: cpu that the MSR should be run on, negative value means any cpu.
++ * @temp: pointer to the memory for saving cpu temperature.
+  * @pkg: true: Package Thermal Sensor. false: Core Thermal Sensor.
+  *
+  * Get the current temperature returned by the CPU core/package level
+  * thermal sensor, in degrees C.
+  *
+- * Return: Temperature in degrees C on success, negative error code otherwise.
++ * Return: 0 on success, negative error code otherwise.
+  */
+-int intel_tcc_get_temp(int cpu, bool pkg)
++int intel_tcc_get_temp(int cpu, int *temp, bool pkg)
+ {
+ 	u32 low, high;
+ 	u32 msr = pkg ? MSR_IA32_PACKAGE_THERM_STATUS : MSR_IA32_THERM_STATUS;
+-	int tjmax, temp, err;
++	int tjmax, err;
+ 
+ 	tjmax = intel_tcc_get_tjmax(cpu);
+ 	if (tjmax < 0)
+@@ -131,9 +132,8 @@ int intel_tcc_get_temp(int cpu, bool pkg)
+ 	if (!(low & BIT(31)))
+ 		return -ENODATA;
+ 
+-	temp = tjmax - ((low >> 16) & 0x7f);
++	*temp = tjmax - ((low >> 16) & 0x7f);
+ 
+-	/* Do not allow negative CPU temperature */
+-	return temp >= 0 ? temp : -ENODATA;
++	return 0;
+ }
+ EXPORT_SYMBOL_NS_GPL(intel_tcc_get_temp, INTEL_TCC);
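Both in-tree callers touched by this patch (processor_thermal_device.c above and x86_pkg_temp_thermal.c below) adopt the new convention: the temperature moves to an out parameter and the return value carries only status, so legitimately negative readings are no longer conflated with error codes. A minimal sketch mirroring those callers:

static int read_pkg_temp_mc(int cpu, int *millicelsius)
{
	int temp, ret;

	ret = intel_tcc_get_temp(cpu, &temp, true);	/* true: package sensor */
	if (ret < 0)
		return ret;

	*millicelsius = temp * 1000;	/* may now validly be negative */
	return 0;
}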
+diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
+index 11a7f8108bbbf..61c3d450ee605 100644
+--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
++++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
+@@ -108,11 +108,11 @@ static struct zone_device *pkg_temp_thermal_get_dev(unsigned int cpu)
+ static int sys_get_curr_temp(struct thermal_zone_device *tzd, int *temp)
+ {
+ 	struct zone_device *zonedev = thermal_zone_device_priv(tzd);
+-	int val;
++	int val, ret;
+ 
+-	val = intel_tcc_get_temp(zonedev->cpu, true);
+-	if (val < 0)
+-		return val;
++	ret = intel_tcc_get_temp(zonedev->cpu, &val, true);
++	if (ret < 0)
++		return ret;
+ 
+ 	*temp = val * 1000;
+ 	pr_debug("sys_get_curr_temp %d\n", *temp);
+diff --git a/drivers/thermal/mediatek/auxadc_thermal.c b/drivers/thermal/mediatek/auxadc_thermal.c
+index 8b0edb2048443..9ee2e7283435a 100644
+--- a/drivers/thermal/mediatek/auxadc_thermal.c
++++ b/drivers/thermal/mediatek/auxadc_thermal.c
+@@ -690,6 +690,9 @@ static const struct mtk_thermal_data mt7986_thermal_data = {
+ 	.adcpnp = mt7986_adcpnp,
+ 	.sensor_mux_values = mt7986_mux_values,
+ 	.version = MTK_THERMAL_V3,
++	.apmixed_buffer_ctl_reg = APMIXED_SYS_TS_CON1,
++	.apmixed_buffer_ctl_mask = GENMASK(31, 6) | BIT(3),
++	.apmixed_buffer_ctl_set = BIT(0),
+ };
+ 
+ static bool mtk_thermal_temp_is_valid(int temp)
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index 44e9b09de47a5..a3c68c808eebe 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -1265,6 +1265,9 @@ int tb_port_update_credits(struct tb_port *port)
+ 	ret = tb_port_do_update_credits(port);
+ 	if (ret)
+ 		return ret;
++
++	if (!port->dual_link_port)
++		return 0;
+ 	return tb_port_do_update_credits(port->dual_link_port);
+ }
+ 
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 8ca061d3bbb92..1d65055dde276 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1329,9 +1329,6 @@ static void autoconfig_irq(struct uart_8250_port *up)
+ 		inb_p(ICP);
+ 	}
+ 
+-	if (uart_console(port))
+-		console_lock();
+-
+ 	/* forget possible initially masked and pending IRQ */
+ 	probe_irq_off(probe_irq_on());
+ 	save_mcr = serial8250_in_MCR(up);
+@@ -1371,9 +1368,6 @@ static void autoconfig_irq(struct uart_8250_port *up)
+ 	if (port->flags & UPF_FOURPORT)
+ 		outb_p(save_ICP, ICP);
+ 
+-	if (uart_console(port))
+-		console_unlock();
+-
+ 	port->irq = (irq > 0) ? irq : 0;
+ }
+ 
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 6d0cfb2e86b45..71d0cbd748076 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2345,9 +2345,12 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ 
+ 	lpuart32_write(&sport->port, bd, UARTBAUD);
+ 	lpuart32_serial_setbrg(sport, baud);
+-	lpuart32_write(&sport->port, modem, UARTMODIR);
+-	lpuart32_write(&sport->port, ctrl, UARTCTRL);
++	/* disable CTS before enabling UARTCTRL_TE to avoid pending idle preamble */
++	lpuart32_write(&sport->port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
+ 	/* restore control register */
++	lpuart32_write(&sport->port, ctrl, UARTCTRL);
++	/* re-enable the CTS if needed */
++	lpuart32_write(&sport->port, modem, UARTMODIR);
+ 
+ 	if ((ctrl & (UARTCTRL_PE | UARTCTRL_M)) == UARTCTRL_PE)
+ 		sport->is_cs7 = true;
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 81557a58f86f4..5545bf4a79fcb 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -462,8 +462,7 @@ static void imx_uart_stop_tx(struct uart_port *port)
+ 	}
+ }
+ 
+-/* called with port.lock taken and irqs off */
+-static void imx_uart_stop_rx(struct uart_port *port)
++static void imx_uart_stop_rx_with_loopback_ctrl(struct uart_port *port, bool loopback)
+ {
+ 	struct imx_port *sport = (struct imx_port *)port;
+ 	u32 ucr1, ucr2, ucr4, uts;
+@@ -485,7 +484,7 @@ static void imx_uart_stop_rx(struct uart_port *port)
+ 	/* See SER_RS485_ENABLED/UTS_LOOP comment in imx_uart_probe() */
+ 	if (port->rs485.flags & SER_RS485_ENABLED &&
+ 	    port->rs485.flags & SER_RS485_RTS_ON_SEND &&
+-	    sport->have_rtscts && !sport->have_rtsgpio) {
++	    sport->have_rtscts && !sport->have_rtsgpio && loopback) {
+ 		uts = imx_uart_readl(sport, imx_uart_uts_reg(sport));
+ 		uts |= UTS_LOOP;
+ 		imx_uart_writel(sport, uts, imx_uart_uts_reg(sport));
+@@ -497,6 +496,16 @@ static void imx_uart_stop_rx(struct uart_port *port)
+ 	imx_uart_writel(sport, ucr2, UCR2);
+ }
+ 
++/* called with port.lock taken and irqs off */
++static void imx_uart_stop_rx(struct uart_port *port)
++{
++	/*
++	 * Stop RX and enable loopback in order to make sure RS485 bus
++	 * is not blocked. See comment in imx_uart_probe().
++	 */
++	imx_uart_stop_rx_with_loopback_ctrl(port, true);
++}
++
+ /* called with port.lock taken and irqs off */
+ static void imx_uart_enable_ms(struct uart_port *port)
+ {
+@@ -682,9 +691,14 @@ static void imx_uart_start_tx(struct uart_port *port)
+ 				imx_uart_rts_inactive(sport, &ucr2);
+ 			imx_uart_writel(sport, ucr2, UCR2);
+ 
++			/*
++			 * Since we are about to transmit, we cannot stop RX
++			 * with loopback enabled because that would just loop
++			 * our transmitted data back to RX.
++			 */
+ 			if (!(port->rs485.flags & SER_RS485_RX_DURING_TX) &&
+ 			    !port->rs485_rx_during_tx_gpio)
+-				imx_uart_stop_rx(port);
++				imx_uart_stop_rx_with_loopback_ctrl(port, false);
+ 
+ 			sport->tx_state = WAIT_AFTER_RTS;
+ 
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 5aff179bf2978..1a4bb55652afa 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1635,13 +1635,16 @@ static unsigned short max310x_i2c_slave_addr(unsigned short addr,
+ 
+ static int max310x_i2c_probe(struct i2c_client *client)
+ {
+-	const struct max310x_devtype *devtype =
+-			device_get_match_data(&client->dev);
++	const struct max310x_devtype *devtype;
+ 	struct i2c_client *port_client;
+ 	struct regmap *regmaps[4];
+ 	unsigned int i;
+ 	u8 port_addr;
+ 
++	devtype = device_get_match_data(&client->dev);
++	if (!devtype)
++		return dev_err_probe(&client->dev, -ENODEV, "Failed to match device\n");
++
+ 	if (client->addr < devtype->slave_addr.min ||
+ 		client->addr > devtype->slave_addr.max)
+ 		return dev_err_probe(&client->dev, -EINVAL,
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 7e78f97e8f435..5499096440115 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -851,19 +851,21 @@ static void qcom_geni_serial_stop_tx(struct uart_port *uport)
+ }
+ 
+ static void qcom_geni_serial_send_chunk_fifo(struct uart_port *uport,
+-					     unsigned int remaining)
++					     unsigned int chunk)
+ {
+ 	struct qcom_geni_serial_port *port = to_dev_port(uport);
+ 	struct circ_buf *xmit = &uport->state->xmit;
+-	unsigned int tx_bytes;
++	unsigned int tx_bytes, c, remaining = chunk;
+ 	u8 buf[BYTES_PER_FIFO_WORD];
+ 
+ 	while (remaining) {
+ 		memset(buf, 0, sizeof(buf));
+ 		tx_bytes = min(remaining, BYTES_PER_FIFO_WORD);
+ 
+-		memcpy(buf, &xmit->buf[xmit->tail], tx_bytes);
+-		uart_xmit_advance(uport, tx_bytes);
++		for (c = 0; c < tx_bytes ; c++) {
++			buf[c] = xmit->buf[xmit->tail];
++			uart_xmit_advance(uport, 1);
++		}
+ 
+ 		iowrite32_rep(uport->membase + SE_GENI_TX_FIFOn, buf, 1);
+ 
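The byte-at-a-time loop replaces a memcpy() that could read past the end of the circular transmit buffer: a chunk of up to BYTES_PER_FIFO_WORD bytes may straddle the ring's wrap point, which a flat copy from &xmit->buf[xmit->tail] does not handle, while advancing the tail one byte at a time lets uart_xmit_advance() wrap it correctly. Worked example, assuming a 4096-byte ring:

	/*
	 * tail == 4094, tx_bytes == 4:
	 *   memcpy(buf, &xmit->buf[4094], 4) reads buf[4094..4097], out of bounds;
	 *   the byte loop reads buf[4094], buf[4095], then tail wraps to 0
	 *   and it reads buf[0], buf[1].
	 */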
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 39fa9ae5099bf..e383d9a10ace1 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -2603,7 +2603,12 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 			port->type = PORT_UNKNOWN;
+ 			flags |= UART_CONFIG_TYPE;
+ 		}
++		/* Synchronize with possible boot console. */
++		if (uart_console(port))
++			console_lock();
+ 		port->ops->config_port(port, flags);
++		if (uart_console(port))
++			console_unlock();
+ 	}
+ 
+ 	if (port->type != PORT_UNKNOWN) {
+@@ -2611,6 +2616,10 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 
+ 		uart_report_port(drv, port);
+ 
++		/* Synchronize with possible boot console. */
++		if (uart_console(port))
++			console_lock();
++
+ 		/* Power up port for set_mctrl() */
+ 		uart_change_pm(state, UART_PM_STATE_ON);
+ 
+@@ -2627,6 +2636,9 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
+ 
+ 		uart_rs485_config(port);
+ 
++		if (uart_console(port))
++			console_unlock();
++
+ 		/*
+ 		 * If this driver supports console, and it hasn't been
+ 		 * successfully registered yet, try to re-register it.
+diff --git a/drivers/tty/serial/serial_port.c b/drivers/tty/serial/serial_port.c
+index 88975a4df3060..72b6f4f326e2b 100644
+--- a/drivers/tty/serial/serial_port.c
++++ b/drivers/tty/serial/serial_port.c
+@@ -46,8 +46,31 @@ static int serial_port_runtime_resume(struct device *dev)
+ 	return 0;
+ }
+ 
++static int serial_port_runtime_suspend(struct device *dev)
++{
++	struct serial_port_device *port_dev = to_serial_base_port_device(dev);
++	struct uart_port *port = port_dev->port;
++	unsigned long flags;
++	bool busy;
++
++	if (port->flags & UPF_DEAD)
++		return 0;
++
++	uart_port_lock_irqsave(port, &flags);
++	busy = __serial_port_busy(port);
++	if (busy)
++		port->ops->start_tx(port);
++	uart_port_unlock_irqrestore(port, flags);
++
++	if (busy)
++		pm_runtime_mark_last_busy(dev);
++
++	return busy ? -EBUSY : 0;
++}
++
+ static DEFINE_RUNTIME_DEV_PM_OPS(serial_port_pm,
+-				 NULL, serial_port_runtime_resume, NULL);
++				 serial_port_runtime_suspend,
++				 serial_port_runtime_resume, NULL);
+ 
+ static int serial_port_probe(struct device *dev)
+ {
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 6617d3a8e84c9..52e6ca1ba21df 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -381,7 +381,7 @@ static void vc_uniscr_delete(struct vc_data *vc, unsigned int nr)
+ 		u32 *ln = vc->vc_uni_lines[vc->state.y];
+ 		unsigned int x = vc->state.x, cols = vc->vc_cols;
+ 
+-		memcpy(&ln[x], &ln[x + nr], (cols - x - nr) * sizeof(*ln));
++		memmove(&ln[x], &ln[x + nr], (cols - x - nr) * sizeof(*ln));
+ 		memset32(&ln[cols - nr], ' ', nr);
+ 	}
+ }
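
The vt.c one-liner is the classic overlapping-copy fix: when deleting characters, the source and destination ranges overlap, which is undefined behaviour for memcpy() but well-defined for memmove(). A small standalone demonstration:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[] = "abcdefgh";
        unsigned int x = 1, nr = 2, cols = 8;

        /* shift the tail left over the deleted region; the two
         * regions overlap, so memcpy() would be UB here */
        memmove(&line[x], &line[x + nr], cols - x - nr);
        memset(&line[cols - nr], ' ', nr);
        printf("'%s'\n", line); /* 'adefgh  ' */
        return 0;
    }
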
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 0f4b3f16d3d7a..0d3886455375b 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -1397,8 +1397,10 @@ static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up)
+ 
+ 	list_for_each_entry(clki, head, list) {
+ 		if (!IS_ERR_OR_NULL(clki->clk) &&
+-			!strcmp(clki->name, "core_clk_unipro")) {
+-			if (is_scale_up)
++		    !strcmp(clki->name, "core_clk_unipro")) {
++			if (!clki->max_freq)
++				cycles_in_1us = 150; /* default for backwards compatibility */
++			else if (is_scale_up)
+ 				cycles_in_1us = ceil(clki->max_freq, (1000 * 1000));
+ 			else
+ 				cycles_in_1us = ceil(clk_get_rate(clki->clk), (1000 * 1000));
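
The ufs-qcom hunk adds a 150 cycles/us fallback when max_freq is unset; the ceil() used on the other branches is round-up integer division, the same arithmetic as the kernel's DIV_ROUND_UP() pattern. A hedged standalone sketch of that arithmetic:

    #include <stdio.h>

    static unsigned long div_round_up(unsigned long n, unsigned long d)
    {
        return (n + d - 1) / d;
    }

    int main(void)
    {
        /* clock rate in Hz -> cycles per microsecond, rounded up */
        printf("%lu\n", div_round_up(150000000UL, 1000000UL)); /* 150 */
        printf("%lu\n", div_round_up(19200000UL, 1000000UL));  /* 20  */
        return 0;
    }
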
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index c553decb54610..c8262e2f29177 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -485,6 +485,7 @@ static ssize_t wdm_write
+ static int service_outstanding_interrupt(struct wdm_device *desc)
+ {
+ 	int rv = 0;
++	int used;
+ 
+ 	/* submit read urb only if the device is waiting for it */
+ 	if (!desc->resp_count || !--desc->resp_count)
+@@ -499,7 +500,10 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 		goto out;
+ 	}
+ 
+-	set_bit(WDM_RESPONDING, &desc->flags);
++	used = test_and_set_bit(WDM_RESPONDING, &desc->flags);
++	if (used)
++		goto out;
++
+ 	spin_unlock_irq(&desc->iuspin);
+ 	rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ 	spin_lock_irq(&desc->iuspin);
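
The cdc-wdm change swaps a plain set_bit() for test_and_set_bit(), turning the flag into an atomic claim: only the caller that actually flipped the bit goes on to submit the response URB, so the URB can never be submitted twice. The same idea in portable C11 atomics (illustrative only, not the kernel API):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_ulong flags;
    #define RESPONDING 0UL

    static bool claim(unsigned long bit)
    {
        unsigned long old = atomic_fetch_or(&flags, 1UL << bit);

        return !(old & (1UL << bit)); /* true only for the first claimant */
    }

    int main(void)
    {
        printf("first:  %s\n", claim(RESPONDING) ? "submit urb" : "skip");
        printf("second: %s\n", claim(RESPONDING) ? "submit urb" : "skip");
        return 0;
    }
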
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 4854d883e601d..80145bd4b5c91 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -123,7 +123,6 @@ EXPORT_SYMBOL_GPL(ehci_cf_port_reset_rwsem);
+ #define HUB_DEBOUNCE_STEP	  25
+ #define HUB_DEBOUNCE_STABLE	 100
+ 
+-static void hub_release(struct kref *kref);
+ static int usb_reset_and_verify_device(struct usb_device *udev);
+ static int hub_port_disable(struct usb_hub *hub, int port1, int set_state);
+ static bool hub_port_warm_reset_required(struct usb_hub *hub, int port1,
+@@ -685,14 +684,14 @@ static void kick_hub_wq(struct usb_hub *hub)
+ 	 */
+ 	intf = to_usb_interface(hub->intfdev);
+ 	usb_autopm_get_interface_no_resume(intf);
+-	kref_get(&hub->kref);
++	hub_get(hub);
+ 
+ 	if (queue_work(hub_wq, &hub->events))
+ 		return;
+ 
+ 	/* the work has already been scheduled */
+ 	usb_autopm_put_interface_async(intf);
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ }
+ 
+ void usb_kick_hub_wq(struct usb_device *hdev)
+@@ -1060,7 +1059,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 			goto init2;
+ 		goto init3;
+ 	}
+-	kref_get(&hub->kref);
++	hub_get(hub);
+ 
+ 	/* The superspeed hub except for root hub has to use Hub Depth
+ 	 * value as an offset into the route string to locate the bits
+@@ -1308,7 +1307,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ 		device_unlock(&hdev->dev);
+ 	}
+ 
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ }
+ 
+ /* Implement the continuations for the delays above */
+@@ -1724,6 +1723,16 @@ static void hub_release(struct kref *kref)
+ 	kfree(hub);
+ }
+ 
++void hub_get(struct usb_hub *hub)
++{
++	kref_get(&hub->kref);
++}
++
++void hub_put(struct usb_hub *hub)
++{
++	kref_put(&hub->kref, hub_release);
++}
++
+ static unsigned highspeed_hubs;
+ 
+ static void hub_disconnect(struct usb_interface *intf)
+@@ -1772,7 +1781,7 @@ static void hub_disconnect(struct usb_interface *intf)
+ 
+ 	onboard_hub_destroy_pdevs(&hub->onboard_hub_devs);
+ 
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ }
+ 
+ static bool hub_descriptor_is_sane(struct usb_host_interface *desc)
+@@ -5894,7 +5903,7 @@ static void hub_event(struct work_struct *work)
+ 
+ 	/* Balance the stuff in kick_hub_wq() and allow autosuspend */
+ 	usb_autopm_put_interface(intf);
+-	kref_put(&hub->kref, hub_release);
++	hub_put(hub);
+ 
+ 	kcov_remote_stop();
+ }
+diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
+index 43ce21c96a511..183b69dc29554 100644
+--- a/drivers/usb/core/hub.h
++++ b/drivers/usb/core/hub.h
+@@ -129,6 +129,8 @@ extern void usb_hub_remove_port_device(struct usb_hub *hub,
+ extern int usb_hub_set_port_power(struct usb_device *hdev, struct usb_hub *hub,
+ 		int port1, bool set);
+ extern struct usb_hub *usb_hub_to_struct_hub(struct usb_device *hdev);
++extern void hub_get(struct usb_hub *hub);
++extern void hub_put(struct usb_hub *hub);
+ extern int hub_port_debounce(struct usb_hub *hub, int port1,
+ 		bool must_be_connected);
+ extern int usb_clear_port_feature(struct usb_device *hdev,
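
The hub.c/hub.h pair wraps the raw kref_get()/kref_put(..., hub_release) calls in hub_get()/hub_put() helpers, so call sites no longer need to know the release function and port.c (below) can take references too. A toy version of the same pattern, with a plain counter standing in for struct kref (not thread-safe, purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct hub {
        int refcount;              /* stands in for struct kref */
    };

    static void hub_release(struct hub *hub)
    {
        printf("releasing hub\n");
        free(hub);
    }

    static void hub_get(struct hub *hub)
    {
        hub->refcount++;
    }

    static void hub_put(struct hub *hub)
    {
        if (--hub->refcount == 0)
            hub_release(hub);      /* release logic stays in one place */
    }

    int main(void)
    {
        struct hub *hub = calloc(1, sizeof(*hub));

        if (!hub)
            return 1;
        hub_get(hub);              /* creation reference */
        hub_get(hub);              /* e.g. taken in kick_hub_wq() */
        hub_put(hub);              /* dropped when the work runs */
        hub_put(hub);              /* final put frees the hub */
        return 0;
    }
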
+diff --git a/drivers/usb/core/port.c b/drivers/usb/core/port.c
+index c628c1abc9071..a5776531ba4d3 100644
+--- a/drivers/usb/core/port.c
++++ b/drivers/usb/core/port.c
+@@ -55,11 +55,22 @@ static ssize_t disable_show(struct device *dev,
+ 	u16 portstatus, unused;
+ 	bool disabled;
+ 	int rc;
++	struct kernfs_node *kn;
+ 
++	hub_get(hub);
+ 	rc = usb_autopm_get_interface(intf);
+ 	if (rc < 0)
+-		return rc;
++		goto out_hub_get;
+ 
++	/*
++	 * Prevent deadlock if another process is concurrently
++	 * trying to unregister hdev.
++	 */
++	kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
++	if (!kn) {
++		rc = -ENODEV;
++		goto out_autopm;
++	}
+ 	usb_lock_device(hdev);
+ 	if (hub->disconnected) {
+ 		rc = -ENODEV;
+@@ -69,9 +80,13 @@ static ssize_t disable_show(struct device *dev,
+ 	usb_hub_port_status(hub, port1, &portstatus, &unused);
+ 	disabled = !usb_port_is_power_on(hub, portstatus);
+ 
+-out_hdev_lock:
++ out_hdev_lock:
+ 	usb_unlock_device(hdev);
++	sysfs_unbreak_active_protection(kn);
++ out_autopm:
+ 	usb_autopm_put_interface(intf);
++ out_hub_get:
++	hub_put(hub);
+ 
+ 	if (rc)
+ 		return rc;
+@@ -89,15 +104,26 @@ static ssize_t disable_store(struct device *dev, struct device_attribute *attr,
+ 	int port1 = port_dev->portnum;
+ 	bool disabled;
+ 	int rc;
++	struct kernfs_node *kn;
+ 
+ 	rc = kstrtobool(buf, &disabled);
+ 	if (rc)
+ 		return rc;
+ 
++	hub_get(hub);
+ 	rc = usb_autopm_get_interface(intf);
+ 	if (rc < 0)
+-		return rc;
++		goto out_hub_get;
+ 
++	/*
++	 * Prevent deadlock if another process is concurrently
++	 * trying to unregister hdev.
++	 */
++	kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
++	if (!kn) {
++		rc = -ENODEV;
++		goto out_autopm;
++	}
+ 	usb_lock_device(hdev);
+ 	if (hub->disconnected) {
+ 		rc = -ENODEV;
+@@ -118,9 +144,13 @@ static ssize_t disable_store(struct device *dev, struct device_attribute *attr,
+ 	if (!rc)
+ 		rc = count;
+ 
+-out_hdev_lock:
++ out_hdev_lock:
+ 	usb_unlock_device(hdev);
++	sysfs_unbreak_active_protection(kn);
++ out_autopm:
+ 	usb_autopm_put_interface(intf);
++ out_hub_get:
++	hub_put(hub);
+ 
+ 	return rc;
+ }
+@@ -573,7 +603,7 @@ static int match_location(struct usb_device *peer_hdev, void *p)
+ 	struct usb_hub *peer_hub = usb_hub_to_struct_hub(peer_hdev);
+ 	struct usb_device *hdev = to_usb_device(port_dev->dev.parent->parent);
+ 
+-	if (!peer_hub)
++	if (!peer_hub || port_dev->connect_type == USB_PORT_NOT_USED)
+ 		return 0;
+ 
+ 	hcd = bus_to_hcd(hdev->bus);
+@@ -584,7 +614,8 @@ static int match_location(struct usb_device *peer_hdev, void *p)
+ 
+ 	for (port1 = 1; port1 <= peer_hdev->maxchild; port1++) {
+ 		peer = peer_hub->ports[port1 - 1];
+-		if (peer && peer->location == port_dev->location) {
++		if (peer && peer->connect_type != USB_PORT_NOT_USED &&
++		    peer->location == port_dev->location) {
+ 			link_peers_report(port_dev, peer);
+ 			return 1; /* done */
+ 		}
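
The port.c hunks layer three acquisitions (hub reference, autopm reference, sysfs active-protection break) in front of the device lock, and the new goto labels release them in exactly the reverse order. A minimal sketch of that unwind discipline with stand-in resources (malloc() here is only a placeholder for the real acquisitions):

    #include <stdio.h>
    #include <stdlib.h>

    /* Acquire three resources in order; on failure, release only
     * what was already taken, in reverse order, via the goto ladder. */
    static int do_store(int fail_at)
    {
        int rc = 0;
        void *hub_ref, *pm_ref, *kn;

        hub_ref = malloc(1);                        /* hub_get()        */
        if (!hub_ref)
            return -1;

        pm_ref = (fail_at == 2) ? NULL : malloc(1); /* autopm get       */
        if (!pm_ref) {
            rc = -1;
            goto out_hub_ref;
        }

        kn = (fail_at == 3) ? NULL : malloc(1);     /* break protection */
        if (!kn) {
            rc = -1;
            goto out_pm_ref;
        }

        /* ... locked work would happen here ... */

        free(kn);
    out_pm_ref:
        free(pm_ref);
    out_hub_ref:
        free(hub_ref);
        return rc;
    }

    int main(void)
    {
        printf("%d %d %d\n", do_store(0), do_store(2), do_store(3));
        return 0;
    }
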
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index 5d21718afb05c..f91f543ec636d 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -1168,14 +1168,24 @@ static ssize_t interface_authorized_store(struct device *dev,
+ {
+ 	struct usb_interface *intf = to_usb_interface(dev);
+ 	bool val;
++	struct kernfs_node *kn;
+ 
+ 	if (kstrtobool(buf, &val) != 0)
+ 		return -EINVAL;
+ 
+-	if (val)
++	if (val) {
+ 		usb_authorize_interface(intf);
+-	else
+-		usb_deauthorize_interface(intf);
++	} else {
++		/*
++		 * Prevent deadlock if another process is concurrently
++		 * trying to unregister intf.
++		 */
++		kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
++		if (kn) {
++			usb_deauthorize_interface(intf);
++			sysfs_unbreak_active_protection(kn);
++		}
++	}
+ 
+ 	return count;
+ }
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index c92a1da46a014..a141f83aba0cc 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -729,8 +729,14 @@ struct dwc2_dregs_backup {
+  * struct dwc2_hregs_backup - Holds host registers state before
+  * entering partial power down
+  * @hcfg:		Backup of HCFG register
++ * @hflbaddr:		Backup of HFLBADDR register
+  * @haintmsk:		Backup of HAINTMSK register
++ * @hcchar:		Backup of HCCHAR register
++ * @hcsplt:		Backup of HCSPLT register
+  * @hcintmsk:		Backup of HCINTMSK register
++ * @hctsiz:		Backup of HCTSIZ register
++ * @hdma:		Backup of HCDMA register
++ * @hcdmab:		Backup of HCDMAB register
+  * @hprt0:		Backup of HPTR0 register
+  * @hfir:		Backup of HFIR register
+  * @hptxfsiz:		Backup of HPTXFSIZ register
+@@ -738,8 +744,14 @@ struct dwc2_dregs_backup {
+  */
+ struct dwc2_hregs_backup {
+ 	u32 hcfg;
++	u32 hflbaddr;
+ 	u32 haintmsk;
++	u32 hcchar[MAX_EPS_CHANNELS];
++	u32 hcsplt[MAX_EPS_CHANNELS];
+ 	u32 hcintmsk[MAX_EPS_CHANNELS];
++	u32 hctsiz[MAX_EPS_CHANNELS];
++	u32 hcidma[MAX_EPS_CHANNELS];
++	u32 hcidmab[MAX_EPS_CHANNELS];
+ 	u32 hprt0;
+ 	u32 hfir;
+ 	u32 hptxfsiz;
+@@ -1086,6 +1098,7 @@ struct dwc2_hsotg {
+ 	bool needs_byte_swap;
+ 
+ 	/* DWC OTG HW Release versions */
++#define DWC2_CORE_REV_4_30a	0x4f54430a
+ #define DWC2_CORE_REV_2_71a	0x4f54271a
+ #define DWC2_CORE_REV_2_72a     0x4f54272a
+ #define DWC2_CORE_REV_2_80a	0x4f54280a
+@@ -1323,6 +1336,7 @@ int dwc2_backup_global_registers(struct dwc2_hsotg *hsotg);
+ int dwc2_restore_global_registers(struct dwc2_hsotg *hsotg);
+ 
+ void dwc2_enable_acg(struct dwc2_hsotg *hsotg);
++void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg, bool remotewakeup);
+ 
+ /* This function should be called on every hardware interrupt. */
+ irqreturn_t dwc2_handle_common_intr(int irq, void *dev);
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index 158ede7538548..26d752a4c3ca9 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -297,7 +297,8 @@ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
+ 
+ 			/* Exit gadget mode clock gating. */
+ 			if (hsotg->params.power_down ==
+-			    DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended)
++			    DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended &&
++			    !hsotg->params.no_clock_gating)
+ 				dwc2_gadget_exit_clock_gating(hsotg, 0);
+ 		}
+ 
+@@ -322,10 +323,11 @@ static void dwc2_handle_session_req_intr(struct dwc2_hsotg *hsotg)
+  * @hsotg: Programming view of DWC_otg controller
+  *
+  */
+-static void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg)
++void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg, bool remotewakeup)
+ {
+ 	u32 glpmcfg;
+-	u32 i = 0;
++	u32 pcgctl;
++	u32 dctl;
+ 
+ 	if (hsotg->lx_state != DWC2_L1) {
+ 		dev_err(hsotg->dev, "Core isn't in DWC2_L1 state\n");
+@@ -334,37 +336,57 @@ static void dwc2_wakeup_from_lpm_l1(struct dwc2_hsotg *hsotg)
+ 
+ 	glpmcfg = dwc2_readl(hsotg, GLPMCFG);
+ 	if (dwc2_is_device_mode(hsotg)) {
+-		dev_dbg(hsotg->dev, "Exit from L1 state\n");
++		dev_dbg(hsotg->dev, "Exit from L1 state, remotewakeup=%d\n", remotewakeup);
+ 		glpmcfg &= ~GLPMCFG_ENBLSLPM;
+-		glpmcfg &= ~GLPMCFG_HIRD_THRES_EN;
++		glpmcfg &= ~GLPMCFG_HIRD_THRES_MASK;
+ 		dwc2_writel(hsotg, glpmcfg, GLPMCFG);
+ 
+-		do {
+-			glpmcfg = dwc2_readl(hsotg, GLPMCFG);
++		pcgctl = dwc2_readl(hsotg, PCGCTL);
++		pcgctl &= ~PCGCTL_ENBL_SLEEP_GATING;
++		dwc2_writel(hsotg, pcgctl, PCGCTL);
+ 
+-			if (!(glpmcfg & (GLPMCFG_COREL1RES_MASK |
+-					 GLPMCFG_L1RESUMEOK | GLPMCFG_SLPSTS)))
+-				break;
++		glpmcfg = dwc2_readl(hsotg, GLPMCFG);
++		if (glpmcfg & GLPMCFG_ENBESL) {
++			glpmcfg |= GLPMCFG_RSTRSLPSTS;
++			dwc2_writel(hsotg, glpmcfg, GLPMCFG);
++		}
++
++		if (remotewakeup) {
++			if (dwc2_hsotg_wait_bit_set(hsotg, GLPMCFG, GLPMCFG_L1RESUMEOK, 1000)) {
++				dev_warn(hsotg->dev, "%s: timeout GLPMCFG_L1RESUMEOK\n", __func__);
++				goto fail;
++			}
++
++			dctl = dwc2_readl(hsotg, DCTL);
++			dctl |= DCTL_RMTWKUPSIG;
++			dwc2_writel(hsotg, dctl, DCTL);
+ 
+-			udelay(1);
+-		} while (++i < 200);
++			if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS, GINTSTS_WKUPINT, 1000)) {
++				dev_warn(hsotg->dev, "%s: timeout GINTSTS_WKUPINT\n", __func__);
++				goto fail;
++			}
++		}
+ 
+-		if (i == 200) {
+-			dev_err(hsotg->dev, "Failed to exit L1 sleep state in 200us.\n");
++		glpmcfg = dwc2_readl(hsotg, GLPMCFG);
++		if (glpmcfg & GLPMCFG_COREL1RES_MASK || glpmcfg & GLPMCFG_SLPSTS ||
++		    glpmcfg & GLPMCFG_L1RESUMEOK) {
++			goto fail;
+ 		}
+-		dwc2_gadget_init_lpm(hsotg);
++
++		/* Inform gadget to exit from L1 */
++		call_gadget(hsotg, resume);
++		/* Change to L0 state */
++		hsotg->lx_state = DWC2_L0;
++		hsotg->bus_suspended = false;
++fail:		dwc2_gadget_init_lpm(hsotg);
+ 	} else {
+ 		/* TODO */
+ 		dev_err(hsotg->dev, "Host side LPM is not supported.\n");
+ 		return;
+ 	}
+-
+-	/* Change to L0 state */
+-	hsotg->lx_state = DWC2_L0;
+-
+-	/* Inform gadget to exit from L1 */
+-	call_gadget(hsotg, resume);
+ }
+ 
+ /*
+@@ -385,7 +407,7 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
+ 	dev_dbg(hsotg->dev, "%s lxstate = %d\n", __func__, hsotg->lx_state);
+ 
+ 	if (hsotg->lx_state == DWC2_L1) {
+-		dwc2_wakeup_from_lpm_l1(hsotg);
++		dwc2_wakeup_from_lpm_l1(hsotg, false);
+ 		return;
+ 	}
+ 
+@@ -408,7 +430,8 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
+ 
+ 			/* Exit gadget mode clock gating. */
+ 			if (hsotg->params.power_down ==
+-			    DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended)
++			    DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended &&
++			    !hsotg->params.no_clock_gating)
+ 				dwc2_gadget_exit_clock_gating(hsotg, 0);
+ 		} else {
+ 			/* Change to L0 state */
+@@ -425,7 +448,8 @@ static void dwc2_handle_wakeup_detected_intr(struct dwc2_hsotg *hsotg)
+ 			}
+ 
+ 			if (hsotg->params.power_down ==
+-			    DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended)
++			    DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended &&
++			    !hsotg->params.no_clock_gating)
+ 				dwc2_host_exit_clock_gating(hsotg, 1);
+ 
+ 			/*
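
The core_intr.c rewrite drops the open-coded 200-iteration udelay() loop in favour of dwc2_hsotg_wait_bit_set(), a bounded poll-with-timeout. A generic userspace sketch of that helper's shape (the "hardware" completing the bit is simulated inside the loop so the example is self-contained):

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static volatile unsigned int fake_reg;

    static bool wait_bit_set(volatile unsigned int *reg, unsigned int bit,
                             unsigned int timeout_ms)
    {
        struct timespec ts = { 0, 1000000 }; /* 1 ms per poll */

        while (timeout_ms--) {
            if (*reg & bit)
                return true;
            if (timeout_ms == 5)
                *reg |= bit;     /* simulate the hardware completing */
            nanosleep(&ts, NULL);
        }
        return false;            /* caller logs a warning and bails out */
    }

    int main(void)
    {
        printf("bit set: %s\n",
               wait_bit_set(&fake_reg, 1u << 3, 20) ? "yes" : "timeout");
        return 0;
    }
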
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index b517a7216de22..b2f6da5b65ccd 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -1415,6 +1415,10 @@ static int dwc2_hsotg_ep_queue(struct usb_ep *ep, struct usb_request *req,
+ 		ep->name, req, req->length, req->buf, req->no_interrupt,
+ 		req->zero, req->short_not_ok);
+ 
++	if (hs->lx_state == DWC2_L1)
++		dwc2_wakeup_from_lpm_l1(hs, true);
++
+ 	/* Prevent new request submission when controller is suspended */
+ 	if (hs->lx_state != DWC2_L0) {
+ 		dev_dbg(hs->dev, "%s: submit request only in active state\n",
+@@ -3727,6 +3731,12 @@ static irqreturn_t dwc2_hsotg_irq(int irq, void *pw)
+ 		if (hsotg->in_ppd && hsotg->lx_state == DWC2_L2)
+ 			dwc2_exit_partial_power_down(hsotg, 0, true);
+ 
++		/* Exit gadget mode clock gating. */
++		if (hsotg->params.power_down ==
++		    DWC2_POWER_DOWN_PARAM_NONE && hsotg->bus_suspended &&
++		    !hsotg->params.no_clock_gating)
++			dwc2_gadget_exit_clock_gating(hsotg, 0);
++
+ 		hsotg->lx_state = DWC2_L0;
+ 	}
+ 
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 35c7a4df8e717..dd5b1c5691e11 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -2701,8 +2701,11 @@ enum dwc2_transaction_type dwc2_hcd_select_transactions(
+ 			hsotg->available_host_channels--;
+ 		}
+ 		qh = list_entry(qh_ptr, struct dwc2_qh, qh_list_entry);
+-		if (dwc2_assign_and_init_hc(hsotg, qh))
++		if (dwc2_assign_and_init_hc(hsotg, qh)) {
++			if (hsotg->params.uframe_sched)
++				hsotg->available_host_channels++;
+ 			break;
++		}
+ 
+ 		/*
+ 		 * Move the QH from the periodic ready schedule to the
+@@ -2735,8 +2738,11 @@ enum dwc2_transaction_type dwc2_hcd_select_transactions(
+ 			hsotg->available_host_channels--;
+ 		}
+ 
+-		if (dwc2_assign_and_init_hc(hsotg, qh))
++		if (dwc2_assign_and_init_hc(hsotg, qh)) {
++			if (hsotg->params.uframe_sched)
++				hsotg->available_host_channels++;
+ 			break;
++		}
+ 
+ 		/*
+ 		 * Move the QH from the non-periodic inactive schedule to the
+@@ -4143,6 +4149,8 @@ void dwc2_host_complete(struct dwc2_hsotg *hsotg, struct dwc2_qtd *qtd,
+ 			 urb->actual_length);
+ 
+ 	if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
++		if (!hsotg->params.dma_desc_enable)
++			urb->start_frame = qtd->qh->start_active_frame;
+ 		urb->error_count = dwc2_hcd_urb_get_error_count(qtd->urb);
+ 		for (i = 0; i < urb->number_of_packets; ++i) {
+ 			urb->iso_frame_desc[i].actual_length =
+@@ -4649,7 +4657,7 @@ static int _dwc2_hcd_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
+ 	}
+ 
+ 	if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
+-	    hsotg->bus_suspended) {
++	    hsotg->bus_suspended && !hsotg->params.no_clock_gating) {
+ 		if (dwc2_is_device_mode(hsotg))
+ 			dwc2_gadget_exit_clock_gating(hsotg, 0);
+ 		else
+@@ -5406,9 +5414,16 @@ int dwc2_backup_host_registers(struct dwc2_hsotg *hsotg)
+ 	/* Backup Host regs */
+ 	hr = &hsotg->hr_backup;
+ 	hr->hcfg = dwc2_readl(hsotg, HCFG);
++	hr->hflbaddr = dwc2_readl(hsotg, HFLBADDR);
+ 	hr->haintmsk = dwc2_readl(hsotg, HAINTMSK);
+-	for (i = 0; i < hsotg->params.host_channels; ++i)
++	for (i = 0; i < hsotg->params.host_channels; ++i) {
++		hr->hcchar[i] = dwc2_readl(hsotg, HCCHAR(i));
++		hr->hcsplt[i] = dwc2_readl(hsotg, HCSPLT(i));
+ 		hr->hcintmsk[i] = dwc2_readl(hsotg, HCINTMSK(i));
++		hr->hctsiz[i] = dwc2_readl(hsotg, HCTSIZ(i));
++		hr->hcidma[i] = dwc2_readl(hsotg, HCDMA(i));
++		hr->hcidmab[i] = dwc2_readl(hsotg, HCDMAB(i));
++	}
+ 
+ 	hr->hprt0 = dwc2_read_hprt0(hsotg);
+ 	hr->hfir = dwc2_readl(hsotg, HFIR);
+@@ -5442,10 +5457,17 @@ int dwc2_restore_host_registers(struct dwc2_hsotg *hsotg)
+ 	hr->valid = false;
+ 
+ 	dwc2_writel(hsotg, hr->hcfg, HCFG);
++	dwc2_writel(hsotg, hr->hflbaddr, HFLBADDR);
+ 	dwc2_writel(hsotg, hr->haintmsk, HAINTMSK);
+ 
+-	for (i = 0; i < hsotg->params.host_channels; ++i)
++	for (i = 0; i < hsotg->params.host_channels; ++i) {
++		dwc2_writel(hsotg, hr->hcchar[i], HCCHAR(i));
++		dwc2_writel(hsotg, hr->hcsplt[i], HCSPLT(i));
+ 		dwc2_writel(hsotg, hr->hcintmsk[i], HCINTMSK(i));
++		dwc2_writel(hsotg, hr->hctsiz[i], HCTSIZ(i));
++		dwc2_writel(hsotg, hr->hcidma[i], HCDMA(i));
++		dwc2_writel(hsotg, hr->hcidmab[i], HCDMAB(i));
++	}
+ 
+ 	dwc2_writel(hsotg, hr->hprt0, HPRT0);
+ 	dwc2_writel(hsotg, hr->hfir, HFIR);
+@@ -5610,10 +5632,12 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
+ 	dwc2_writel(hsotg, hr->hcfg, HCFG);
+ 
+ 	/* De-assert Wakeup Logic */
+-	gpwrdn = dwc2_readl(hsotg, GPWRDN);
+-	gpwrdn &= ~GPWRDN_PMUACTV;
+-	dwc2_writel(hsotg, gpwrdn, GPWRDN);
+-	udelay(10);
++	if (!(rem_wakeup && hsotg->hw_params.snpsid >= DWC2_CORE_REV_4_30a)) {
++		gpwrdn = dwc2_readl(hsotg, GPWRDN);
++		gpwrdn &= ~GPWRDN_PMUACTV;
++		dwc2_writel(hsotg, gpwrdn, GPWRDN);
++		udelay(10);
++	}
+ 
+ 	hprt0 = hr->hprt0;
+ 	hprt0 |= HPRT0_PWR;
+@@ -5638,6 +5662,13 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
+ 		hprt0 |= HPRT0_RES;
+ 		dwc2_writel(hsotg, hprt0, HPRT0);
+ 
++		/* De-assert Wakeup Logic */
++		if ((rem_wakeup && hsotg->hw_params.snpsid >= DWC2_CORE_REV_4_30a)) {
++			gpwrdn = dwc2_readl(hsotg, GPWRDN);
++			gpwrdn &= ~GPWRDN_PMUACTV;
++			dwc2_writel(hsotg, gpwrdn, GPWRDN);
++			udelay(10);
++		}
+ 		/* Wait for Resume time and then program HPRT again */
+ 		mdelay(100);
+ 		hprt0 &= ~HPRT0_RES;
+diff --git a/drivers/usb/dwc2/hcd_ddma.c b/drivers/usb/dwc2/hcd_ddma.c
+index 6b4d825e97a2d..79582b102c7ed 100644
+--- a/drivers/usb/dwc2/hcd_ddma.c
++++ b/drivers/usb/dwc2/hcd_ddma.c
+@@ -559,7 +559,7 @@ static void dwc2_init_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ 	idx = qh->td_last;
+ 	inc = qh->host_interval;
+ 	hsotg->frame_number = dwc2_hcd_get_frame_number(hsotg);
+-	cur_idx = dwc2_frame_list_idx(hsotg->frame_number);
++	cur_idx = idx;
+ 	next_idx = dwc2_desclist_idx_inc(qh->td_last, inc, qh->dev_speed);
+ 
+ 	/*
+@@ -866,6 +866,8 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ {
+ 	struct dwc2_dma_desc *dma_desc;
+ 	struct dwc2_hcd_iso_packet_desc *frame_desc;
++	u16 frame_desc_idx;
++	struct urb *usb_urb = qtd->urb->priv;
+ 	u16 remain = 0;
+ 	int rc = 0;
+ 
+@@ -878,8 +880,11 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ 				DMA_FROM_DEVICE);
+ 
+ 	dma_desc = &qh->desc_list[idx];
++	frame_desc_idx = (idx - qtd->isoc_td_first) & (usb_urb->number_of_packets - 1);
+ 
+-	frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index_last];
++	frame_desc = &qtd->urb->iso_descs[frame_desc_idx];
++	if (idx == qtd->isoc_td_first)
++		usb_urb->start_frame = dwc2_hcd_get_frame_number(hsotg);
+ 	dma_desc->buf = (u32)(qtd->urb->dma + frame_desc->offset);
+ 	if (chan->ep_is_in)
+ 		remain = (dma_desc->status & HOST_DMA_ISOC_NBYTES_MASK) >>
+@@ -900,7 +905,7 @@ static int dwc2_cmpl_host_isoc_dma_desc(struct dwc2_hsotg *hsotg,
+ 		frame_desc->status = 0;
+ 	}
+ 
+-	if (++qtd->isoc_frame_index == qtd->urb->packet_count) {
++	if (++qtd->isoc_frame_index == usb_urb->number_of_packets) {
+ 		/*
+ 		 * urb->status is not used for isoc transfers here. The
+ 		 * individual frame_desc status are used instead.
+@@ -1005,11 +1010,11 @@ static void dwc2_complete_isoc_xfer_ddma(struct dwc2_hsotg *hsotg,
+ 				return;
+ 			idx = dwc2_desclist_idx_inc(idx, qh->host_interval,
+ 						    chan->speed);
+-			if (!rc)
++			if (rc == 0)
+ 				continue;
+ 
+-			if (rc == DWC2_CMPL_DONE)
+-				break;
++			if (rc == DWC2_CMPL_DONE || rc == DWC2_CMPL_STOP)
++				goto stop_scan;
+ 
+ 			/* rc == DWC2_CMPL_STOP */
+ 
+diff --git a/drivers/usb/dwc2/hw.h b/drivers/usb/dwc2/hw.h
+index 13abdd5f67529..12f8c7f86dc98 100644
+--- a/drivers/usb/dwc2/hw.h
++++ b/drivers/usb/dwc2/hw.h
+@@ -698,7 +698,7 @@
+ #define TXSTS_QTOP_TOKEN_MASK		(0x3 << 25)
+ #define TXSTS_QTOP_TOKEN_SHIFT		25
+ #define TXSTS_QTOP_TERMINATE		BIT(24)
+-#define TXSTS_QSPCAVAIL_MASK		(0xff << 16)
++#define TXSTS_QSPCAVAIL_MASK		(0x7f << 16)
+ #define TXSTS_QSPCAVAIL_SHIFT		16
+ #define TXSTS_FSPCAVAIL_MASK		(0xffff << 0)
+ #define TXSTS_FSPCAVAIL_SHIFT		0
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index b1d48019e944f..7b84416dfc2b1 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -331,7 +331,7 @@ static void dwc2_driver_remove(struct platform_device *dev)
+ 
+ 	/* Exit clock gating when driver is removed. */
+ 	if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
+-	    hsotg->bus_suspended) {
++	    hsotg->bus_suspended && !hsotg->params.no_clock_gating) {
+ 		if (dwc2_is_device_mode(hsotg))
+ 			dwc2_gadget_exit_clock_gating(hsotg, 0);
+ 		else
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index f50b5575d588a..8de1ab8517932 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1507,6 +1507,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ 	else
+ 		dwc->sysdev = dwc->dev;
+ 
++	dwc->sys_wakeup = device_may_wakeup(dwc->sysdev);
++
+ 	ret = device_property_read_string(dev, "usb-psy-name", &usb_psy_name);
+ 	if (ret >= 0) {
+ 		dwc->usb_psy = power_supply_get_by_name(usb_psy_name);
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index efe6caf4d0e87..80265ef608aae 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1127,6 +1127,7 @@ struct dwc3_scratchpad_array {
+  *	3	- Reserved
+  * @dis_metastability_quirk: set to disable metastability quirk.
+  * @dis_split_quirk: set to disable split boundary.
++ * @sys_wakeup: set if the device may do system wakeup.
+  * @wakeup_configured: set if the device is configured for remote wakeup.
+  * @suspended: set to track suspend event due to U3/L2.
+  * @imod_interval: set the interrupt moderation interval in 250ns
+@@ -1350,6 +1351,7 @@ struct dwc3 {
+ 
+ 	unsigned		dis_split_quirk:1;
+ 	unsigned		async_callbacks:1;
++	unsigned		sys_wakeup:1;
+ 	unsigned		wakeup_configured:1;
+ 	unsigned		suspended:1;
+ 
+diff --git a/drivers/usb/dwc3/dwc3-am62.c b/drivers/usb/dwc3/dwc3-am62.c
+index 90a587bc29b74..ea6e29091c0c9 100644
+--- a/drivers/usb/dwc3/dwc3-am62.c
++++ b/drivers/usb/dwc3/dwc3-am62.c
+@@ -267,21 +267,15 @@ static int dwc3_ti_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
+-static int dwc3_ti_remove_core(struct device *dev, void *c)
+-{
+-	struct platform_device *pdev = to_platform_device(dev);
+-
+-	platform_device_unregister(pdev);
+-	return 0;
+-}
+-
+ static void dwc3_ti_remove(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct dwc3_am62 *am62 = platform_get_drvdata(pdev);
+ 	u32 reg;
+ 
+-	device_for_each_child(dev, NULL, dwc3_ti_remove_core);
++	pm_runtime_get_sync(dev);
++	device_init_wakeup(dev, false);
++	of_platform_depopulate(dev);
+ 
+ 	/* Clear mode valid bit */
+ 	reg = dwc3_ti_readl(am62, USBSS_MODE_CONTROL);
+@@ -289,7 +283,6 @@ static void dwc3_ti_remove(struct platform_device *pdev)
+ 	dwc3_ti_writel(am62, USBSS_MODE_CONTROL, reg);
+ 
+ 	pm_runtime_put_sync(dev);
+-	clk_disable_unprepare(am62->usb2_refclk);
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+ }
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 39564e17f3b07..497deed38c0c1 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -51,7 +51,6 @@
+ #define PCI_DEVICE_ID_INTEL_MTLP		0x7ec1
+ #define PCI_DEVICE_ID_INTEL_MTLS		0x7f6f
+ #define PCI_DEVICE_ID_INTEL_MTL			0x7e7e
+-#define PCI_DEVICE_ID_INTEL_ARLH		0x7ec1
+ #define PCI_DEVICE_ID_INTEL_ARLH_PCH		0x777e
+ #define PCI_DEVICE_ID_INTEL_TGL			0x9a15
+ #define PCI_DEVICE_ID_AMD_MR			0x163a
+@@ -423,7 +422,6 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, MTLP, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, MTL, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, MTLS, &dwc3_pci_intel_swnode) },
+-	{ PCI_DEVICE_DATA(INTEL, ARLH, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, ARLH_PCH, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, TGL, &dwc3_pci_intel_swnode) },
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 28f49400f3e8b..07820b1a88a24 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2968,6 +2968,9 @@ static int dwc3_gadget_start(struct usb_gadget *g,
+ 	dwc->gadget_driver	= driver;
+ 	spin_unlock_irqrestore(&dwc->lock, flags);
+ 
++	if (dwc->sys_wakeup)
++		device_wakeup_enable(dwc->sysdev);
++
+ 	return 0;
+ }
+ 
+@@ -2983,6 +2986,9 @@ static int dwc3_gadget_stop(struct usb_gadget *g)
+ 	struct dwc3		*dwc = gadget_to_dwc(g);
+ 	unsigned long		flags;
+ 
++	if (dwc->sys_wakeup)
++		device_wakeup_disable(dwc->sysdev);
++
+ 	spin_lock_irqsave(&dwc->lock, flags);
+ 	dwc->gadget_driver	= NULL;
+ 	dwc->max_cfg_eps = 0;
+@@ -4664,6 +4670,10 @@ int dwc3_gadget_init(struct dwc3 *dwc)
+ 	else
+ 		dwc3_gadget_set_speed(dwc->gadget, dwc->maximum_speed);
+ 
++	/* No system wakeup if no gadget driver bound */
++	if (dwc->sys_wakeup)
++		device_wakeup_disable(dwc->sysdev);
++
+ 	return 0;
+ 
+ err5:
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index 43230915323c7..f6a020d77fa18 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -123,6 +123,14 @@ int dwc3_host_init(struct dwc3 *dwc)
+ 		goto err;
+ 	}
+ 
++	if (dwc->sys_wakeup) {
++		/* Restore wakeup setting if switched from device */
++		device_wakeup_enable(dwc->sysdev);
++
++		/* Pass on wakeup setting to the new xhci platform device */
++		device_init_wakeup(&xhci->dev, true);
++	}
++
+ 	return 0;
+ err:
+ 	platform_device_put(xhci);
+@@ -131,6 +139,9 @@ int dwc3_host_init(struct dwc3 *dwc)
+ 
+ void dwc3_host_exit(struct dwc3 *dwc)
+ {
++	if (dwc->sys_wakeup)
++		device_init_wakeup(&dwc->xhci->dev, false);
++
+ 	platform_device_unregister(dwc->xhci);
+ 	dwc->xhci = NULL;
+ }
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index efe3e3b857695..fdd0fc7b8f259 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -831,7 +831,7 @@ static void ffs_user_copy_worker(struct work_struct *work)
+ 	io_data->kiocb->ki_complete(io_data->kiocb, ret);
+ 
+ 	if (io_data->ffs->ffs_eventfd && !kiocb_has_eventfd)
+-		eventfd_signal(io_data->ffs->ffs_eventfd, 1);
++		eventfd_signal(io_data->ffs->ffs_eventfd);
+ 
+ 	if (io_data->read)
+ 		kfree(io_data->to_free);
+@@ -2738,7 +2738,7 @@ static void __ffs_event_add(struct ffs_data *ffs,
+ 	ffs->ev.types[ffs->ev.count++] = type;
+ 	wake_up_locked(&ffs->ev.waitq);
+ 	if (ffs->ffs_eventfd)
+-		eventfd_signal(ffs->ffs_eventfd, 1);
++		eventfd_signal(ffs->ffs_eventfd);
+ }
+ 
+ static void ffs_event_add(struct ffs_data *ffs,
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index 5712883a7527c..f3456b8bf4152 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -1333,7 +1333,7 @@ static int ncm_unwrap_ntb(struct gether *port,
+ 	if (to_process == 1 &&
+ 	    (*(unsigned char *)(ntb_ptr + block_len) == 0x00)) {
+ 		to_process--;
+-	} else if (to_process > 0) {
++	} else if ((to_process > 0) && (block_len != 0)) {
+ 		ntb_ptr = (unsigned char *)(ntb_ptr + block_len);
+ 		goto parse_ntb;
+ 	}
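
The f_ncm guard adds block_len != 0 before jumping back to parse_ntb, which is the standard defence for any length-prefixed parser: a zero-length block would otherwise re-parse the same offset forever on hostile input. A compact standalone illustration:

    #include <stdio.h>

    static int parse(const unsigned char *buf, unsigned int len)
    {
        unsigned int off = 0, nblocks = 0;

        while (off < len) {
            unsigned int block_len = buf[off];

            if (block_len == 0)        /* would loop forever otherwise */
                return -1;
            if (off + block_len > len) /* truncated block */
                return -1;
            nblocks++;
            off += block_len;
        }
        return nblocks;
    }

    int main(void)
    {
        const unsigned char good[] = { 3, 0, 0, 2, 0 };
        const unsigned char bad[]  = { 3, 0, 0, 0, 0 }; /* zero-length block */

        printf("good: %d blocks\n", parse(good, sizeof(good)));
        printf("bad:  %d\n", parse(bad, sizeof(bad)));
        return 0;
    }
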
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index d59f94464b870..8ac29f7230fcd 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -292,7 +292,9 @@ int usb_ep_queue(struct usb_ep *ep,
+ {
+ 	int ret = 0;
+ 
+-	if (WARN_ON_ONCE(!ep->enabled && ep->address)) {
++	if (!ep->enabled && ep->address) {
++		pr_debug("USB gadget: queue request to disabled ep 0x%x (%s)\n",
++				 ep->address, ep->name);
+ 		ret = -ESHUTDOWN;
+ 		goto out;
+ 	}
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index cb85168fd00c2..7aa46d426f31b 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3491,8 +3491,8 @@ static void tegra_xudc_device_params_init(struct tegra_xudc *xudc)
+ 
+ static int tegra_xudc_phy_get(struct tegra_xudc *xudc)
+ {
+-	int err = 0, usb3;
+-	unsigned int i;
++	int err = 0, usb3_companion_port;
++	unsigned int i, j;
+ 
+ 	xudc->utmi_phy = devm_kcalloc(xudc->dev, xudc->soc->num_phys,
+ 					   sizeof(*xudc->utmi_phy), GFP_KERNEL);
+@@ -3520,7 +3520,7 @@ static int tegra_xudc_phy_get(struct tegra_xudc *xudc)
+ 		if (IS_ERR(xudc->utmi_phy[i])) {
+ 			err = PTR_ERR(xudc->utmi_phy[i]);
+ 			dev_err_probe(xudc->dev, err,
+-				      "failed to get usb2-%d PHY\n", i);
++				"failed to get PHY for phy-name usb2-%d\n", i);
+ 			goto clean_up;
+ 		} else if (xudc->utmi_phy[i]) {
+ 			/* Get usb-phy, if utmi phy is available */
+@@ -3539,19 +3539,30 @@ static int tegra_xudc_phy_get(struct tegra_xudc *xudc)
+ 		}
+ 
+ 		/* Get USB3 phy */
+-		usb3 = tegra_xusb_padctl_get_usb3_companion(xudc->padctl, i);
+-		if (usb3 < 0)
++		usb3_companion_port = tegra_xusb_padctl_get_usb3_companion(xudc->padctl, i);
++		if (usb3_companion_port < 0)
+ 			continue;
+ 
+-		snprintf(phy_name, sizeof(phy_name), "usb3-%d", usb3);
+-		xudc->usb3_phy[i] = devm_phy_optional_get(xudc->dev, phy_name);
+-		if (IS_ERR(xudc->usb3_phy[i])) {
+-			err = PTR_ERR(xudc->usb3_phy[i]);
+-			dev_err_probe(xudc->dev, err,
+-				      "failed to get usb3-%d PHY\n", usb3);
+-			goto clean_up;
+-		} else if (xudc->usb3_phy[i])
+-			dev_dbg(xudc->dev, "usb3-%d PHY registered", usb3);
++		for (j = 0; j < xudc->soc->num_phys; j++) {
++			snprintf(phy_name, sizeof(phy_name), "usb3-%d", j);
++			xudc->usb3_phy[i] = devm_phy_optional_get(xudc->dev, phy_name);
++			if (IS_ERR(xudc->usb3_phy[i])) {
++				err = PTR_ERR(xudc->usb3_phy[i]);
++				dev_err_probe(xudc->dev, err,
++					"failed to get PHY for phy-name usb3-%d\n", j);
++				goto clean_up;
++			} else if (xudc->usb3_phy[i]) {
++				int usb2_port =
++					tegra_xusb_padctl_get_port_number(xudc->utmi_phy[i]);
++				int usb3_port =
++					tegra_xusb_padctl_get_port_number(xudc->usb3_phy[i]);
++				if (usb3_port == usb3_companion_port) {
++					dev_dbg(xudc->dev, "USB2 port %d is paired with USB3 port %d for device mode port %d\n",
++						usb2_port, usb3_port, i);
++					break;
++				}
++			}
++		}
+ 	}
+ 
+ 	return err;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 9673354d70d59..2647245d5b35a 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -326,7 +326,13 @@ static unsigned int xhci_ring_expansion_needed(struct xhci_hcd *xhci, struct xhc
+ 	/* how many trbs will be queued past the enqueue segment? */
+ 	trbs_past_seg = enq_used + num_trbs - (TRBS_PER_SEGMENT - 1);
+ 
+-	if (trbs_past_seg <= 0)
++	/*
++	 * Consider expanding the ring already if num_trbs fills the current
++	 * segment (i.e. trbs_past_seg == 0), not only when num_trbs goes into
++	 * the next segment. Avoids confusing full ring with special empty ring
++	 * the next segment. This avoids confusing a full ring with the
++	 * special empty-ring case below.
++	if (trbs_past_seg < 0)
+ 		return 0;
+ 
+ 	/* Empty ring special case, enqueue stuck on link trb while dequeue advanced */
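
The xhci-ring change moves the boundary from trbs_past_seg <= 0 to < 0: a request that exactly fills the segment (trbs_past_seg == 0) must already trigger expansion, or it becomes indistinguishable from the empty-ring case mentioned below. The boundary arithmetic, as a hedged standalone sketch:

    #include <stdio.h>

    #define TRBS_PER_SEGMENT 256  /* matches the xHCI driver constant */

    static int needs_expansion(int enq_used, int num_trbs)
    {
        /* last entry of a segment is the link TRB, hence the - 1 */
        int trbs_past_seg = enq_used + num_trbs - (TRBS_PER_SEGMENT - 1);

        return trbs_past_seg >= 0; /* was "> 0" before the fix */
    }

    int main(void)
    {
        printf("%d\n", needs_expansion(200, 54)); /* 254: still fits    */
        printf("%d\n", needs_expansion(200, 55)); /* fills exactly: grow */
        printf("%d\n", needs_expansion(200, 56)); /* spills over: grow   */
        return 0;
    }
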
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 884b0898d9c95..943b87d15f7ba 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1202,6 +1202,8 @@ static int xhci_map_temp_buffer(struct usb_hcd *hcd, struct urb *urb)
+ 
+ 	temp = kzalloc_node(buf_len, GFP_ATOMIC,
+ 			    dev_to_node(hcd->self.sysdev));
++	if (!temp)
++		return -ENOMEM;
+ 
+ 	if (usb_urb_dir_out(urb))
+ 		sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
+diff --git a/drivers/usb/misc/usb-ljca.c b/drivers/usb/misc/usb-ljca.c
+index 35770e608c649..2d30fc1be3066 100644
+--- a/drivers/usb/misc/usb-ljca.c
++++ b/drivers/usb/misc/usb-ljca.c
+@@ -518,8 +518,10 @@ static int ljca_new_client_device(struct ljca_adapter *adap, u8 type, u8 id,
+ 	int ret;
+ 
+ 	client = kzalloc(sizeof *client, GFP_KERNEL);
+-	if (!client)
++	if (!client) {
++		kfree(data);
+ 		return -ENOMEM;
++	}
+ 
+ 	client->type = type;
+ 	client->id = id;
+@@ -535,8 +537,10 @@ static int ljca_new_client_device(struct ljca_adapter *adap, u8 type, u8 id,
+ 	auxdev->dev.release = ljca_auxdev_release;
+ 
+ 	ret = auxiliary_device_init(auxdev);
+-	if (ret)
++	if (ret) {
++		kfree(data);
+ 		goto err_free;
++	}
+ 
+ 	ljca_auxdev_acpi_bind(adap, auxdev, adr, id);
+ 
+@@ -590,12 +594,8 @@ static int ljca_enumerate_gpio(struct ljca_adapter *adap)
+ 		valid_pin[i] = get_unaligned_le32(&desc->bank_desc[i].valid_pins);
+ 	bitmap_from_arr32(gpio_info->valid_pin_map, valid_pin, gpio_num);
+ 
+-	ret = ljca_new_client_device(adap, LJCA_CLIENT_GPIO, 0, "ljca-gpio",
++	return ljca_new_client_device(adap, LJCA_CLIENT_GPIO, 0, "ljca-gpio",
+ 				     gpio_info, LJCA_GPIO_ACPI_ADR);
+-	if (ret)
+-		kfree(gpio_info);
+-
+-	return ret;
+ }
+ 
+ static int ljca_enumerate_i2c(struct ljca_adapter *adap)
+@@ -629,10 +629,8 @@ static int ljca_enumerate_i2c(struct ljca_adapter *adap)
+ 		ret = ljca_new_client_device(adap, LJCA_CLIENT_I2C, i,
+ 					     "ljca-i2c", i2c_info,
+ 					     LJCA_I2C1_ACPI_ADR + i);
+-		if (ret) {
+-			kfree(i2c_info);
++		if (ret)
+ 			return ret;
+-		}
+ 	}
+ 
+ 	return 0;
+@@ -669,10 +667,8 @@ static int ljca_enumerate_spi(struct ljca_adapter *adap)
+ 		ret = ljca_new_client_device(adap, LJCA_CLIENT_SPI, i,
+ 					     "ljca-spi", spi_info,
+ 					     LJCA_SPI1_ACPI_ADR + i);
+-		if (ret) {
+-			kfree(spi_info);
++		if (ret)
+ 			return ret;
+-		}
+ 	}
+ 
+ 	return 0;
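
The usb-ljca rework settles on one ownership rule: ljca_new_client_device() frees the info buffer itself on every failure path, so the three enumerate helpers can drop their own kfree() calls without leaking or double-freeing. A sketch of that consume-on-failure convention (names are illustrative, not the driver's API):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Consumes data: frees it on every failure path. */
    static int new_client_device(char *data, int simulate_failure)
    {
        if (simulate_failure) {
            free(data);
            return -1;
        }
        printf("registered client with data '%s'\n", data);
        free(data); /* would normally be freed by a release callback */
        return 0;
    }

    int main(void)
    {
        int ret = new_client_device(strdup("gpio-info"), 1);

        /* no free() here: ownership already passed to the callee */
        printf("ret = %d\n", ret);
        return new_client_device(strdup("i2c-info"), 0) ? 1 : 0;
    }
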
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index b855d291dfe6b..770081b828a42 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -268,13 +268,6 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop)
+ 		return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
+ 				     "could not get vbus regulator\n");
+ 
+-	nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus");
+-	if (PTR_ERR(nop->vbus_draw) == -ENODEV)
+-		nop->vbus_draw = NULL;
+-	if (IS_ERR(nop->vbus_draw))
+-		return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
+-				     "could not get vbus regulator\n");
+-
+ 	nop->dev		= dev;
+ 	nop->phy.dev		= nop->dev;
+ 	nop->phy.label		= "nop-xceiv";
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 923e0ed85444b..21fd26609252b 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -56,6 +56,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x0471, 0x066A) }, /* AKTAKOM ACE-1001 cable */
+ 	{ USB_DEVICE(0x0489, 0xE000) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
+ 	{ USB_DEVICE(0x0489, 0xE003) }, /* Pirelli Broadband S.p.A, DP-L10 SIP/GSM Mobile */
++	{ USB_DEVICE(0x04BF, 0x1301) }, /* TDK Corporation NC0110013M - Network Controller */
++	{ USB_DEVICE(0x04BF, 0x1303) }, /* TDK Corporation MM0110113M - i3 Micro Module */
+ 	{ USB_DEVICE(0x0745, 0x1000) }, /* CipherLab USB CCD Barcode Scanner 1000 */
+ 	{ USB_DEVICE(0x0846, 0x1100) }, /* NetGear Managed Switch M4100 series, M5300 series, M7100 series */
+ 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+@@ -144,6 +146,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */
+ 	{ USB_DEVICE(0x10C4, 0x85EB) }, /* AC-Services CIS-IBUS */
+ 	{ USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */
++	{ USB_DEVICE(0x10C4, 0x863C) }, /* MGP Instruments PDS100 */
+ 	{ USB_DEVICE(0x10C4, 0x8664) }, /* AC-Services CAN-IF */
+ 	{ USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */
+ 	{ USB_DEVICE(0x10C4, 0x87ED) }, /* IMST USB-Stick for Smart Meter */
+@@ -177,6 +180,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0xF004) }, /* Elan Digital Systems USBcount50 */
+ 	{ USB_DEVICE(0x10C5, 0xEA61) }, /* Silicon Labs MobiData GPRS USB Modem */
+ 	{ USB_DEVICE(0x10CE, 0xEA6A) }, /* Silicon Labs MobiData GPRS USB Modem 100EU */
++	{ USB_DEVICE(0x11CA, 0x0212) }, /* Verifone USB to Printer (UART, CP2102) */
+ 	{ USB_DEVICE(0x12B8, 0xEC60) }, /* Link G4 ECU */
+ 	{ USB_DEVICE(0x12B8, 0xEC62) }, /* Link G4+ ECU */
+ 	{ USB_DEVICE(0x13AD, 0x9999) }, /* Baltech card reader */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 13a56783830df..22d01a0f10fbc 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1077,6 +1077,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_FALCONIA_JTAG_UNBUF_PID),
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
++	/* GMC devices */
++	{ USB_DEVICE(GMC_VID, GMC_Z216C_PID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 21a2b5a25fc09..5ee60ba2a73cd 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1606,3 +1606,9 @@
+ #define UBLOX_VID			0x1546
+ #define UBLOX_C099F9P_ZED_PID		0x0502
+ #define UBLOX_C099F9P_ODIN_PID		0x0503
++
++/*
++ * GMC devices
++ */
++#define GMC_VID				0x1cd7
++#define GMC_Z216C_PID			0x0217 /* GMC Z216C Adapter IR-USB */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2ae124c49d448..55a65d941ccbf 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -613,6 +613,11 @@ static void option_instat_callback(struct urb *urb);
+ /* Luat Air72*U series based on UNISOC UIS8910 uses UNISOC's vendor ID */
+ #define LUAT_PRODUCT_AIR720U			0x4e00
+ 
++/* MeiG Smart Technology products */
++#define MEIGSMART_VENDOR_ID			0x2dee
++/* MeiG Smart SLM320 based on UNISOC UIS8910 */
++#define MEIGSMART_PRODUCT_SLM320		0x4d41
++
+ /* Device flags */
+ 
+ /* Highest interface number which can be used with NCTRL() and RSVD() */
+@@ -2282,6 +2287,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/storage/isd200.c b/drivers/usb/storage/isd200.c
+index 4e0eef1440b7f..300aeef160e75 100644
+--- a/drivers/usb/storage/isd200.c
++++ b/drivers/usb/storage/isd200.c
+@@ -1105,7 +1105,7 @@ static void isd200_dump_driveid(struct us_data *us, u16 *id)
+ static int isd200_get_inquiry_data( struct us_data *us )
+ {
+ 	struct isd200_info *info = (struct isd200_info *)us->extra;
+-	int retStatus = ISD200_GOOD;
++	int retStatus;
+ 	u16 *id = info->id;
+ 
+ 	usb_stor_dbg(us, "Entering isd200_get_inquiry_data\n");
+@@ -1137,6 +1137,13 @@ static int isd200_get_inquiry_data( struct us_data *us )
+ 				isd200_fix_driveid(id);
+ 				isd200_dump_driveid(us, id);
+ 
++				/* Prevent division by 0 in isd200_scsi_to_ata() */
++				if (id[ATA_ID_HEADS] == 0 || id[ATA_ID_SECTORS] == 0) {
++					usb_stor_dbg(us, "   Invalid ATA Identify data\n");
++					retStatus = ISD200_ERROR;
++					goto Done;
++				}
++
+ 				memset(&info->InquiryData, 0, sizeof(info->InquiryData));
+ 
+ 				/* Standard IDE interface only supports disks */
+@@ -1202,6 +1209,7 @@ static int isd200_get_inquiry_data( struct us_data *us )
+ 		}
+ 	}
+ 
++ Done:
+ 	usb_stor_dbg(us, "Leaving isd200_get_inquiry_data %08X\n", retStatus);
+ 
+ 	return(retStatus);
+@@ -1481,22 +1489,27 @@ static int isd200_init_info(struct us_data *us)
+ 
+ static int isd200_Initialization(struct us_data *us)
+ {
++	int rc = 0;
++
+ 	usb_stor_dbg(us, "ISD200 Initialization...\n");
+ 
+ 	/* Initialize ISD200 info struct */
+ 
+-	if (isd200_init_info(us) == ISD200_ERROR) {
++	if (isd200_init_info(us) < 0) {
+ 		usb_stor_dbg(us, "ERROR Initializing ISD200 Info struct\n");
++		rc = -ENOMEM;
+ 	} else {
+ 		/* Get device specific data */
+ 
+-		if (isd200_get_inquiry_data(us) != ISD200_GOOD)
++		if (isd200_get_inquiry_data(us) != ISD200_GOOD) {
+ 			usb_stor_dbg(us, "ISD200 Initialization Failure\n");
+-		else
++			rc = -EINVAL;
++		} else {
+ 			usb_stor_dbg(us, "ISD200 Initialization complete\n");
++		}
+ 	}
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ 
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 299a6767b7b30..d002eead62b4b 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -533,7 +533,7 @@ static struct urb *uas_alloc_cmd_urb(struct uas_dev_info *devinfo, gfp_t gfp,
+  * daft to me.
+  */
+ 
+-static struct urb *uas_submit_sense_urb(struct scsi_cmnd *cmnd, gfp_t gfp)
++static int uas_submit_sense_urb(struct scsi_cmnd *cmnd, gfp_t gfp)
+ {
+ 	struct uas_dev_info *devinfo = cmnd->device->hostdata;
+ 	struct urb *urb;
+@@ -541,30 +541,28 @@ static struct urb *uas_submit_sense_urb(struct scsi_cmnd *cmnd, gfp_t gfp)
+ 
+ 	urb = uas_alloc_sense_urb(devinfo, gfp, cmnd);
+ 	if (!urb)
+-		return NULL;
++		return -ENOMEM;
+ 	usb_anchor_urb(urb, &devinfo->sense_urbs);
+ 	err = usb_submit_urb(urb, gfp);
+ 	if (err) {
+ 		usb_unanchor_urb(urb);
+ 		uas_log_cmd_state(cmnd, "sense submit err", err);
+ 		usb_free_urb(urb);
+-		return NULL;
+ 	}
+-	return urb;
++	return err;
+ }
+ 
+ static int uas_submit_urbs(struct scsi_cmnd *cmnd,
+ 			   struct uas_dev_info *devinfo)
+ {
+ 	struct uas_cmd_info *cmdinfo = scsi_cmd_priv(cmnd);
+-	struct urb *urb;
+ 	int err;
+ 
+ 	lockdep_assert_held(&devinfo->lock);
+ 	if (cmdinfo->state & SUBMIT_STATUS_URB) {
+-		urb = uas_submit_sense_urb(cmnd, GFP_ATOMIC);
+-		if (!urb)
+-			return SCSI_MLQUEUE_DEVICE_BUSY;
++		err = uas_submit_sense_urb(cmnd, GFP_ATOMIC);
++		if (err)
++			return err;
+ 		cmdinfo->state &= ~SUBMIT_STATUS_URB;
+ 	}
+ 
+@@ -572,7 +570,7 @@ static int uas_submit_urbs(struct scsi_cmnd *cmnd,
+ 		cmdinfo->data_in_urb = uas_alloc_data_urb(devinfo, GFP_ATOMIC,
+ 							cmnd, DMA_FROM_DEVICE);
+ 		if (!cmdinfo->data_in_urb)
+-			return SCSI_MLQUEUE_DEVICE_BUSY;
++			return -ENOMEM;
+ 		cmdinfo->state &= ~ALLOC_DATA_IN_URB;
+ 	}
+ 
+@@ -582,7 +580,7 @@ static int uas_submit_urbs(struct scsi_cmnd *cmnd,
+ 		if (err) {
+ 			usb_unanchor_urb(cmdinfo->data_in_urb);
+ 			uas_log_cmd_state(cmnd, "data in submit err", err);
+-			return SCSI_MLQUEUE_DEVICE_BUSY;
++			return err;
+ 		}
+ 		cmdinfo->state &= ~SUBMIT_DATA_IN_URB;
+ 		cmdinfo->state |= DATA_IN_URB_INFLIGHT;
+@@ -592,7 +590,7 @@ static int uas_submit_urbs(struct scsi_cmnd *cmnd,
+ 		cmdinfo->data_out_urb = uas_alloc_data_urb(devinfo, GFP_ATOMIC,
+ 							cmnd, DMA_TO_DEVICE);
+ 		if (!cmdinfo->data_out_urb)
+-			return SCSI_MLQUEUE_DEVICE_BUSY;
++			return -ENOMEM;
+ 		cmdinfo->state &= ~ALLOC_DATA_OUT_URB;
+ 	}
+ 
+@@ -602,7 +600,7 @@ static int uas_submit_urbs(struct scsi_cmnd *cmnd,
+ 		if (err) {
+ 			usb_unanchor_urb(cmdinfo->data_out_urb);
+ 			uas_log_cmd_state(cmnd, "data out submit err", err);
+-			return SCSI_MLQUEUE_DEVICE_BUSY;
++			return err;
+ 		}
+ 		cmdinfo->state &= ~SUBMIT_DATA_OUT_URB;
+ 		cmdinfo->state |= DATA_OUT_URB_INFLIGHT;
+@@ -611,7 +609,7 @@ static int uas_submit_urbs(struct scsi_cmnd *cmnd,
+ 	if (cmdinfo->state & ALLOC_CMD_URB) {
+ 		cmdinfo->cmd_urb = uas_alloc_cmd_urb(devinfo, GFP_ATOMIC, cmnd);
+ 		if (!cmdinfo->cmd_urb)
+-			return SCSI_MLQUEUE_DEVICE_BUSY;
++			return -ENOMEM;
+ 		cmdinfo->state &= ~ALLOC_CMD_URB;
+ 	}
+ 
+@@ -621,7 +619,7 @@ static int uas_submit_urbs(struct scsi_cmnd *cmnd,
+ 		if (err) {
+ 			usb_unanchor_urb(cmdinfo->cmd_urb);
+ 			uas_log_cmd_state(cmnd, "cmd submit err", err);
+-			return SCSI_MLQUEUE_DEVICE_BUSY;
++			return err;
+ 		}
+ 		cmdinfo->cmd_urb = NULL;
+ 		cmdinfo->state &= ~SUBMIT_CMD_URB;
+@@ -698,7 +696,7 @@ static int uas_queuecommand_lck(struct scsi_cmnd *cmnd)
+ 	 * of queueing, no matter how fatal the error
+ 	 */
+ 	if (err == -ENODEV) {
+-		set_host_byte(cmnd, DID_ERROR);
++		set_host_byte(cmnd, DID_NO_CONNECT);
+ 		scsi_done(cmnd);
+ 		goto zombie;
+ 	}
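
The uas hunks stop flattening every submission failure into SCSI_MLQUEUE_DEVICE_BUSY and instead propagate the real negative errno, so the caller can tell a fatal -ENODEV from a retryable allocation failure (note the DID_NO_CONNECT mapping above). A small sketch of why the distinction matters:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static int submit_urbs(int fail_step)
    {
        if (fail_step == 1)
            return -ENOMEM;   /* allocation failed */
        if (fail_step == 2)
            return -ENODEV;   /* device already gone */
        return 0;
    }

    int main(void)
    {
        for (int step = 0; step <= 2; step++) {
            int err = submit_urbs(step);

            if (err == -ENODEV)
                printf("step %d: fatal, complete command\n", step);
            else if (err)
                printf("step %d: retry later (%s)\n", step, strerror(-err));
            else
                printf("step %d: submitted\n", step);
        }
        return 0;
    }
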
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index f81bec0c7b864..f8ea3054be542 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -559,16 +559,21 @@ static ssize_t hpd_show(struct device *dev, struct device_attribute *attr, char
+ }
+ static DEVICE_ATTR_RO(hpd);
+ 
+-static struct attribute *dp_altmode_attrs[] = {
++static struct attribute *displayport_attrs[] = {
+ 	&dev_attr_configuration.attr,
+ 	&dev_attr_pin_assignment.attr,
+ 	&dev_attr_hpd.attr,
+ 	NULL
+ };
+ 
+-static const struct attribute_group dp_altmode_group = {
++static const struct attribute_group displayport_group = {
+ 	.name = "displayport",
+-	.attrs = dp_altmode_attrs,
++	.attrs = displayport_attrs,
++};
++
++static const struct attribute_group *displayport_groups[] = {
++	&displayport_group,
++	NULL,
+ };
+ 
+ int dp_altmode_probe(struct typec_altmode *alt)
+@@ -576,7 +581,6 @@ int dp_altmode_probe(struct typec_altmode *alt)
+ 	const struct typec_altmode *port = typec_altmode_get_partner(alt);
+ 	struct fwnode_handle *fwnode;
+ 	struct dp_altmode *dp;
+-	int ret;
+ 
+ 	/* FIXME: Port can only be DFP_U. */
+ 
+@@ -587,10 +591,6 @@ int dp_altmode_probe(struct typec_altmode *alt)
+ 	      DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo)))
+ 		return -ENODEV;
+ 
+-	ret = sysfs_create_group(&alt->dev.kobj, &dp_altmode_group);
+-	if (ret)
+-		return ret;
+-
+ 	dp = devm_kzalloc(&alt->dev, sizeof(*dp), GFP_KERNEL);
+ 	if (!dp)
+ 		return -ENOMEM;
+@@ -624,7 +624,6 @@ void dp_altmode_remove(struct typec_altmode *alt)
+ {
+ 	struct dp_altmode *dp = typec_altmode_get_drvdata(alt);
+ 
+-	sysfs_remove_group(&alt->dev.kobj, &dp_altmode_group);
+ 	cancel_work_sync(&dp->work);
+ 
+ 	if (dp->connector_fwnode) {
+@@ -649,6 +648,7 @@ static struct typec_altmode_driver dp_altmode_driver = {
+ 	.driver = {
+ 		.name = "typec_displayport",
+ 		.owner = THIS_MODULE,
++		.dev_groups = displayport_groups,
+ 	},
+ };
+ module_typec_altmode_driver(dp_altmode_driver);
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 2da19feacd915..2373cd39b4841 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -1310,6 +1310,7 @@ static ssize_t select_usb_power_delivery_store(struct device *dev,
+ {
+ 	struct typec_port *port = to_typec_port(dev);
+ 	struct usb_power_delivery *pd;
++	int ret;
+ 
+ 	if (!port->ops || !port->ops->pd_set)
+ 		return -EOPNOTSUPP;
+@@ -1318,7 +1319,11 @@ static ssize_t select_usb_power_delivery_store(struct device *dev,
+ 	if (!pd)
+ 		return -EINVAL;
+ 
+-	return port->ops->pd_set(port, pd);
++	ret = port->ops->pd_set(port, pd);
++	if (ret)
++		return ret;
++
++	return size;
+ }
+ 
+ static ssize_t select_usb_power_delivery_show(struct device *dev,
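
The class.c fix follows the sysfs store contract: a store callback must return the number of bytes consumed on success, not the 0 that pd_set() returns, or userspace writers may treat the write as making no progress. A minimal illustration of the convention (plain C stand-ins, not the kernel interface):

    #include <stdio.h>
    #include <string.h>

    static int pd_set(const char *val) /* returns 0 or negative errno */
    {
        return strcmp(val, "bad") ? 0 : -22; /* -EINVAL */
    }

    static long store(const char *buf, long size)
    {
        int ret = pd_set(buf);

        if (ret)
            return ret;  /* propagate the error */
        return size;     /* success: bytes consumed, not 0 */
    }

    int main(void)
    {
        printf("%ld\n", store("pd1", 3));  /* 3   */
        printf("%ld\n", store("bad", 3));  /* -22 */
        return 0;
    }
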
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index a8e03cfde9dda..c5776b3c96773 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4859,8 +4859,11 @@ static void run_state_machine(struct tcpm_port *port)
+ 		break;
+ 	case PORT_RESET:
+ 		tcpm_reset_port(port);
+-		tcpm_set_cc(port, tcpm_default_state(port) == SNK_UNATTACHED ?
+-			    TYPEC_CC_RD : tcpm_rp_cc(port));
++		if (port->self_powered)
++			tcpm_set_cc(port, TYPEC_CC_OPEN);
++		else
++			tcpm_set_cc(port, tcpm_default_state(port) == SNK_UNATTACHED ?
++				    TYPEC_CC_RD : tcpm_rp_cc(port));
+ 		tcpm_set_state(port, PORT_RESET_WAIT_OFF,
+ 			       PD_T_ERROR_RECOVERY);
+ 		break;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 8f9dff993b3da..70d9f4eebf1a7 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -138,8 +138,12 @@ static int ucsi_exec_command(struct ucsi *ucsi, u64 cmd)
+ 	if (!(cci & UCSI_CCI_COMMAND_COMPLETE))
+ 		return -EIO;
+ 
+-	if (cci & UCSI_CCI_NOT_SUPPORTED)
++	if (cci & UCSI_CCI_NOT_SUPPORTED) {
++		if (ucsi_acknowledge_command(ucsi) < 0)
++			dev_err(ucsi->dev,
++				"ACK of unsupported command failed\n");
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	if (cci & UCSI_CCI_ERROR) {
+ 		if (cmd == UCSI_GET_ERROR_STATUS)
+@@ -933,11 +937,11 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ 	if (con->status.change & UCSI_CONSTAT_CAM_CHANGE)
+ 		ucsi_partner_task(con, ucsi_check_altmodes, 1, 0);
+ 
+-	clear_bit(EVENT_PENDING, &con->ucsi->flags);
+-
+ 	mutex_lock(&ucsi->ppm_lock);
++	clear_bit(EVENT_PENDING, &con->ucsi->flags);
+ 	ret = ucsi_acknowledge_connector_change(ucsi);
+ 	mutex_unlock(&ucsi->ppm_lock);
++
+ 	if (ret)
+ 		dev_err(ucsi->dev, "%s: ACK failed (%d)", __func__, ret);
+ 
+@@ -978,13 +982,47 @@ static int ucsi_reset_connector(struct ucsi_connector *con, bool hard)
+ 
+ static int ucsi_reset_ppm(struct ucsi *ucsi)
+ {
+-	u64 command = UCSI_PPM_RESET;
++	u64 command;
+ 	unsigned long tmo;
+ 	u32 cci;
+ 	int ret;
+ 
+ 	mutex_lock(&ucsi->ppm_lock);
+ 
++	ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci));
++	if (ret < 0)
++		goto out;
++
++	/*
++	 * If UCSI_CCI_RESET_COMPLETE is already set we must clear
++	 * the flag before we start another reset. Send a
++	 * UCSI_SET_NOTIFICATION_ENABLE command to achieve this.
++	 * Ignore a timeout and try the reset anyway if this fails.
++	 */
++	if (cci & UCSI_CCI_RESET_COMPLETE) {
++		command = UCSI_SET_NOTIFICATION_ENABLE;
++		ret = ucsi->ops->async_write(ucsi, UCSI_CONTROL, &command,
++					     sizeof(command));
++		if (ret < 0)
++			goto out;
++
++		tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS);
++		do {
++			ret = ucsi->ops->read(ucsi, UCSI_CCI,
++					      &cci, sizeof(cci));
++			if (ret < 0)
++				goto out;
++			if (cci & UCSI_CCI_COMMAND_COMPLETE)
++				break;
++			if (time_is_before_jiffies(tmo))
++				break;
++			msleep(20);
++		} while (1);
++
++		WARN_ON(cci & UCSI_CCI_RESET_COMPLETE);
++	}
++
++	command = UCSI_PPM_RESET;
+ 	ret = ucsi->ops->async_write(ucsi, UCSI_CONTROL, &command,
+ 				     sizeof(command));
+ 	if (ret < 0)
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 474315a72c770..13ec976b1c747 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -221,12 +221,12 @@ struct ucsi_cable_property {
+ #define UCSI_CABLE_PROP_FLAG_VBUS_IN_CABLE	BIT(0)
+ #define UCSI_CABLE_PROP_FLAG_ACTIVE_CABLE	BIT(1)
+ #define UCSI_CABLE_PROP_FLAG_DIRECTIONALITY	BIT(2)
+-#define UCSI_CABLE_PROP_FLAG_PLUG_TYPE(_f_)	((_f_) & GENMASK(3, 0))
++#define UCSI_CABLE_PROP_FLAG_PLUG_TYPE(_f_)	(((_f_) & GENMASK(4, 3)) >> 3)
+ #define   UCSI_CABLE_PROPERTY_PLUG_TYPE_A	0
+ #define   UCSI_CABLE_PROPERTY_PLUG_TYPE_B	1
+ #define   UCSI_CABLE_PROPERTY_PLUG_TYPE_C	2
+ #define   UCSI_CABLE_PROPERTY_PLUG_OTHER	3
+-#define UCSI_CABLE_PROP_MODE_SUPPORT		BIT(5)
++#define UCSI_CABLE_PROP_FLAG_MODE_SUPPORT	BIT(5)
+ 	u8 latency;
+ } __packed;
+ 
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 928eacbeb21ac..7b3ac133ef861 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -23,10 +23,11 @@ struct ucsi_acpi {
+ 	void *base;
+ 	struct completion complete;
+ 	unsigned long flags;
++#define UCSI_ACPI_SUPPRESS_EVENT	0
++#define UCSI_ACPI_COMMAND_PENDING	1
++#define UCSI_ACPI_ACK_PENDING		2
+ 	guid_t guid;
+ 	u64 cmd;
+-	bool dell_quirk_probed;
+-	bool dell_quirk_active;
+ };
+ 
+ static int ucsi_acpi_dsm(struct ucsi_acpi *ua, int func)
+@@ -79,9 +80,9 @@ static int ucsi_acpi_sync_write(struct ucsi *ucsi, unsigned int offset,
+ 	int ret;
+ 
+ 	if (ack)
+-		set_bit(ACK_PENDING, &ua->flags);
++		set_bit(UCSI_ACPI_ACK_PENDING, &ua->flags);
+ 	else
+-		set_bit(COMMAND_PENDING, &ua->flags);
++		set_bit(UCSI_ACPI_COMMAND_PENDING, &ua->flags);
+ 
+ 	ret = ucsi_acpi_async_write(ucsi, offset, val, val_len);
+ 	if (ret)
+@@ -92,9 +93,9 @@ static int ucsi_acpi_sync_write(struct ucsi *ucsi, unsigned int offset,
+ 
+ out_clear_bit:
+ 	if (ack)
+-		clear_bit(ACK_PENDING, &ua->flags);
++		clear_bit(UCSI_ACPI_ACK_PENDING, &ua->flags);
+ 	else
+-		clear_bit(COMMAND_PENDING, &ua->flags);
++		clear_bit(UCSI_ACPI_COMMAND_PENDING, &ua->flags);
+ 
+ 	return ret;
+ }
+@@ -129,51 +130,40 @@ static const struct ucsi_operations ucsi_zenbook_ops = {
+ };
+ 
+ /*
+- * Some Dell laptops expect that an ACK command with the
+- * UCSI_ACK_CONNECTOR_CHANGE bit set is followed by a (separate)
+- * ACK command that only has the UCSI_ACK_COMMAND_COMPLETE bit set.
+- * If this is not done events are not delivered to OSPM and
+- * subsequent commands will timeout.
++ * Some Dell laptops don't like ACK commands that have the
++ * UCSI_ACK_CONNECTOR_CHANGE bit set but not the
++ * UCSI_ACK_COMMAND_COMPLETE bit. To work around this, send a dummy
++ * command and bundle the UCSI_ACK_CONNECTOR_CHANGE with the
++ * UCSI_ACK_COMMAND_COMPLETE for the dummy command.
+  */
+ static int
+ ucsi_dell_sync_write(struct ucsi *ucsi, unsigned int offset,
+ 		     const void *val, size_t val_len)
+ {
+ 	struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+-	u64 cmd = *(u64 *)val, ack = 0;
++	u64 cmd = *(u64 *)val;
++	u64 dummycmd = UCSI_GET_CAPABILITY;
+ 	int ret;
+ 
+-	if (UCSI_COMMAND(cmd) == UCSI_ACK_CC_CI &&
+-	    cmd & UCSI_ACK_CONNECTOR_CHANGE)
+-		ack = UCSI_ACK_CC_CI | UCSI_ACK_COMMAND_COMPLETE;
+-
+-	ret = ucsi_acpi_sync_write(ucsi, offset, val, val_len);
+-	if (ret != 0)
+-		return ret;
+-	if (ack == 0)
+-		return ret;
+-
+-	if (!ua->dell_quirk_probed) {
+-		ua->dell_quirk_probed = true;
+-
+-		cmd = UCSI_GET_CAPABILITY;
+-		ret = ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &cmd,
+-					   sizeof(cmd));
+-		if (ret == 0)
+-			return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL,
+-						    &ack, sizeof(ack));
+-		if (ret != -ETIMEDOUT)
++	if (cmd == (UCSI_ACK_CC_CI | UCSI_ACK_CONNECTOR_CHANGE)) {
++		cmd |= UCSI_ACK_COMMAND_COMPLETE;
++
++		/*
++		 * The UCSI core thinks it is sending a connector change ack
++		 * and will accept new connector change events. We don't want
++		 * this to happen for the dummy command as its response will
++		 * still report the very event that the core is trying to clear.
++		 */
++		set_bit(UCSI_ACPI_SUPPRESS_EVENT, &ua->flags);
++		ret = ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &dummycmd,
++					   sizeof(dummycmd));
++		clear_bit(UCSI_ACPI_SUPPRESS_EVENT, &ua->flags);
++
++		if (ret < 0)
+ 			return ret;
+-
+-		ua->dell_quirk_active = true;
+-		dev_err(ua->dev, "Firmware bug: Additional ACK required after ACKing a connector change.\n");
+-		dev_err(ua->dev, "Firmware bug: Enabling workaround\n");
+ 	}
+ 
+-	if (!ua->dell_quirk_active)
+-		return ret;
+-
+-	return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &ack, sizeof(ack));
++	return ucsi_acpi_sync_write(ucsi, UCSI_CONTROL, &cmd, sizeof(cmd));
+ }
+ 
+ static const struct ucsi_operations ucsi_dell_ops = {
+@@ -209,13 +199,14 @@ static void ucsi_acpi_notify(acpi_handle handle, u32 event, void *data)
+ 	if (ret)
+ 		return;
+ 
+-	if (UCSI_CCI_CONNECTOR(cci))
++	if (UCSI_CCI_CONNECTOR(cci) &&
++	    !test_bit(UCSI_ACPI_SUPPRESS_EVENT, &ua->flags))
+ 		ucsi_connector_change(ua->ucsi, UCSI_CCI_CONNECTOR(cci));
+ 
+ 	if (cci & UCSI_CCI_ACK_COMPLETE && test_bit(ACK_PENDING, &ua->flags))
+ 		complete(&ua->complete);
+ 	if (cci & UCSI_CCI_COMMAND_COMPLETE &&
+-	    test_bit(COMMAND_PENDING, &ua->flags))
++	    test_bit(UCSI_ACPI_COMMAND_PENDING, &ua->flags))
+ 		complete(&ua->complete);
+ }
+ 
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index 4853141cd10c8..894622b6556a6 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -254,6 +254,20 @@ static void pmic_glink_ucsi_notify(struct work_struct *work)
+ static void pmic_glink_ucsi_register(struct work_struct *work)
+ {
+ 	struct pmic_glink_ucsi *ucsi = container_of(work, struct pmic_glink_ucsi, register_work);
++	int orientation;
++	int i;
++
++	for (i = 0; i < PMIC_GLINK_MAX_PORTS; i++) {
++		if (!ucsi->port_orientation[i])
++			continue;
++		orientation = gpiod_get_value(ucsi->port_orientation[i]);
++
++		if (orientation >= 0) {
++			typec_switch_set(ucsi->port_switch[i],
++					 orientation ? TYPEC_ORIENTATION_REVERSE
++					     : TYPEC_ORIENTATION_NORMAL);
++		}
++	}
+ 
+ 	ucsi_register(ucsi->ucsi);
+ }
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index 0ddd4b8abecb3..6cb5ce4a8b9af 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -493,7 +493,7 @@ static void vduse_vq_kick(struct vduse_virtqueue *vq)
+ 		goto unlock;
+ 
+ 	if (vq->kickfd)
+-		eventfd_signal(vq->kickfd, 1);
++		eventfd_signal(vq->kickfd);
+ 	else
+ 		vq->kicked = true;
+ unlock:
+@@ -911,7 +911,7 @@ static int vduse_kickfd_setup(struct vduse_dev *dev,
+ 		eventfd_ctx_put(vq->kickfd);
+ 	vq->kickfd = ctx;
+ 	if (vq->ready && vq->kicked && vq->kickfd) {
+-		eventfd_signal(vq->kickfd, 1);
++		eventfd_signal(vq->kickfd);
+ 		vq->kicked = false;
+ 	}
+ 	spin_unlock(&vq->kick_lock);
+@@ -960,7 +960,7 @@ static bool vduse_vq_signal_irqfd(struct vduse_virtqueue *vq)
+ 
+ 	spin_lock_irq(&vq->irq_lock);
+ 	if (vq->ready && vq->cb.trigger) {
+-		eventfd_signal(vq->cb.trigger, 1);
++		eventfd_signal(vq->cb.trigger);
+ 		signal = true;
+ 	}
+ 	spin_unlock_irq(&vq->irq_lock);
+diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c b/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c
+index c51229fccbd6a..82b2afa9b7e31 100644
+--- a/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c
++++ b/drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c
+@@ -54,7 +54,7 @@ static irqreturn_t vfio_fsl_mc_irq_handler(int irq_num, void *arg)
+ {
+ 	struct vfio_fsl_mc_irq *mc_irq = (struct vfio_fsl_mc_irq *)arg;
+ 
+-	eventfd_signal(mc_irq->trigger, 1);
++	eventfd_signal(mc_irq->trigger);
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -141,13 +141,14 @@ static int vfio_fsl_mc_set_irq_trigger(struct vfio_fsl_mc_device *vdev,
+ 	irq = &vdev->mc_irqs[index];
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+-		vfio_fsl_mc_irq_handler(hwirq, irq);
++		if (irq->trigger)
++			eventfd_signal(irq->trigger);
+ 
+ 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
+ 		u8 trigger = *(u8 *)data;
+ 
+-		if (trigger)
+-			vfio_fsl_mc_irq_handler(hwirq, irq);
++		if (trigger && irq->trigger)
++			eventfd_signal(irq->trigger);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/vfio/pci/pds/lm.c b/drivers/vfio/pci/pds/lm.c
+index 79fe2e66bb498..6b94cc0bf45b4 100644
+--- a/drivers/vfio/pci/pds/lm.c
++++ b/drivers/vfio/pci/pds/lm.c
+@@ -92,8 +92,10 @@ static void pds_vfio_put_lm_file(struct pds_vfio_lm_file *lm_file)
+ {
+ 	mutex_lock(&lm_file->lock);
+ 
++	lm_file->disabled = true;
+ 	lm_file->size = 0;
+ 	lm_file->alloc_size = 0;
++	lm_file->filep->f_pos = 0;
+ 
+ 	/* Free scatter list of file pages */
+ 	sg_free_table(&lm_file->sg_table);
+@@ -183,6 +185,12 @@ static ssize_t pds_vfio_save_read(struct file *filp, char __user *buf,
+ 	pos = &filp->f_pos;
+ 
+ 	mutex_lock(&lm_file->lock);
++
++	if (lm_file->disabled) {
++		done = -ENODEV;
++		goto out_unlock;
++	}
++
+ 	if (*pos > lm_file->size) {
+ 		done = -EINVAL;
+ 		goto out_unlock;
+@@ -283,6 +291,11 @@ static ssize_t pds_vfio_restore_write(struct file *filp, const char __user *buf,
+ 
+ 	mutex_lock(&lm_file->lock);
+ 
++	if (lm_file->disabled) {
++		done = -ENODEV;
++		goto out_unlock;
++	}
++
+ 	while (len) {
+ 		size_t page_offset;
+ 		struct page *page;
+diff --git a/drivers/vfio/pci/pds/lm.h b/drivers/vfio/pci/pds/lm.h
+index 13be893198b74..9511b1afc6a11 100644
+--- a/drivers/vfio/pci/pds/lm.h
++++ b/drivers/vfio/pci/pds/lm.h
+@@ -27,6 +27,7 @@ struct pds_vfio_lm_file {
+ 	struct scatterlist *last_offset_sg;	/* Iterator */
+ 	unsigned int sg_last_entry;
+ 	unsigned long last_offset;
++	bool disabled;
+ };
+ 
+ struct pds_vfio_pci_device;
+diff --git a/drivers/vfio/pci/pds/vfio_dev.c b/drivers/vfio/pci/pds/vfio_dev.c
+index 4c351c59d05a9..a286ebcc71126 100644
+--- a/drivers/vfio/pci/pds/vfio_dev.c
++++ b/drivers/vfio/pci/pds/vfio_dev.c
+@@ -32,9 +32,9 @@ void pds_vfio_state_mutex_unlock(struct pds_vfio_pci_device *pds_vfio)
+ 	mutex_lock(&pds_vfio->reset_mutex);
+ 	if (pds_vfio->deferred_reset) {
+ 		pds_vfio->deferred_reset = false;
++		pds_vfio_put_restore_file(pds_vfio);
++		pds_vfio_put_save_file(pds_vfio);
+ 		if (pds_vfio->state == VFIO_DEVICE_STATE_ERROR) {
+-			pds_vfio_put_restore_file(pds_vfio);
+-			pds_vfio_put_save_file(pds_vfio);
+ 			pds_vfio_dirty_disable(pds_vfio, false);
+ 		}
+ 		pds_vfio->state = pds_vfio->deferred_reset_state;
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index 1929103ee59a3..1cbc990d42e07 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -443,7 +443,7 @@ static int vfio_pci_core_runtime_resume(struct device *dev)
+ 	 */
+ 	down_write(&vdev->memory_lock);
+ 	if (vdev->pm_wake_eventfd_ctx) {
+-		eventfd_signal(vdev->pm_wake_eventfd_ctx, 1);
++		eventfd_signal(vdev->pm_wake_eventfd_ctx);
+ 		__vfio_pci_runtime_pm_exit(vdev);
+ 	}
+ 	up_write(&vdev->memory_lock);
+@@ -1883,7 +1883,7 @@ void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count)
+ 			pci_notice_ratelimited(pdev,
+ 				"Relaying device request to user (#%u)\n",
+ 				count);
+-		eventfd_signal(vdev->req_trigger, 1);
++		eventfd_signal(vdev->req_trigger);
+ 	} else if (count == 0) {
+ 		pci_warn(pdev,
+ 			"No device request channel registered, blocked until released by user\n");
+@@ -2302,7 +2302,7 @@ pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev,
+ 	mutex_lock(&vdev->igate);
+ 
+ 	if (vdev->err_trigger)
+-		eventfd_signal(vdev->err_trigger, 1);
++		eventfd_signal(vdev->err_trigger);
+ 
+ 	mutex_unlock(&vdev->igate);
+ 
+diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
+index cbb4bcbfbf83d..fb5392b749fff 100644
+--- a/drivers/vfio/pci/vfio_pci_intrs.c
++++ b/drivers/vfio/pci/vfio_pci_intrs.c
+@@ -90,22 +90,28 @@ static void vfio_send_intx_eventfd(void *opaque, void *unused)
+ 
+ 	if (likely(is_intx(vdev) && !vdev->virq_disabled)) {
+ 		struct vfio_pci_irq_ctx *ctx;
++		struct eventfd_ctx *trigger;
+ 
+ 		ctx = vfio_irq_ctx_get(vdev, 0);
+ 		if (WARN_ON_ONCE(!ctx))
+ 			return;
+-		eventfd_signal(ctx->trigger, 1);
++
++		trigger = READ_ONCE(ctx->trigger);
++		if (likely(trigger))
++			eventfd_signal(trigger);
+ 	}
+ }
+ 
+ /* Returns true if the INTx vfio_pci_irq_ctx.masked value is changed. */
+-bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
++static bool __vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
+ {
+ 	struct pci_dev *pdev = vdev->pdev;
+ 	struct vfio_pci_irq_ctx *ctx;
+ 	unsigned long flags;
+ 	bool masked_changed = false;
+ 
++	lockdep_assert_held(&vdev->igate);
++
+ 	spin_lock_irqsave(&vdev->irqlock, flags);
+ 
+ 	/*
+@@ -143,6 +149,17 @@ bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
+ 	return masked_changed;
+ }
+ 
++bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
++{
++	bool mask_changed;
++
++	mutex_lock(&vdev->igate);
++	mask_changed = __vfio_pci_intx_mask(vdev);
++	mutex_unlock(&vdev->igate);
++
++	return mask_changed;
++}
++
+ /*
+  * If this is triggered by an eventfd, we can't call eventfd_signal
+  * or else we'll deadlock on the eventfd wait queue.  Return >0 when
+@@ -194,12 +211,21 @@ static int vfio_pci_intx_unmask_handler(void *opaque, void *unused)
+ 	return ret;
+ }
+ 
+-void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev)
++static void __vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev)
+ {
++	lockdep_assert_held(&vdev->igate);
++
+ 	if (vfio_pci_intx_unmask_handler(vdev, NULL) > 0)
+ 		vfio_send_intx_eventfd(vdev, NULL);
+ }
+ 
++void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev)
++{
++	mutex_lock(&vdev->igate);
++	__vfio_pci_intx_unmask(vdev);
++	mutex_unlock(&vdev->igate);
++}
++
+ static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
+ {
+ 	struct vfio_pci_core_device *vdev = dev_id;
+@@ -231,97 +257,100 @@ static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
+ 	return ret;
+ }
+ 
+-static int vfio_intx_enable(struct vfio_pci_core_device *vdev)
++static int vfio_intx_enable(struct vfio_pci_core_device *vdev,
++			    struct eventfd_ctx *trigger)
+ {
++	struct pci_dev *pdev = vdev->pdev;
+ 	struct vfio_pci_irq_ctx *ctx;
++	unsigned long irqflags;
++	char *name;
++	int ret;
+ 
+ 	if (!is_irq_none(vdev))
+ 		return -EINVAL;
+ 
+-	if (!vdev->pdev->irq)
++	if (!pdev->irq)
+ 		return -ENODEV;
+ 
++	name = kasprintf(GFP_KERNEL_ACCOUNT, "vfio-intx(%s)", pci_name(pdev));
++	if (!name)
++		return -ENOMEM;
++
+ 	ctx = vfio_irq_ctx_alloc(vdev, 0);
+ 	if (!ctx)
+ 		return -ENOMEM;
+ 
++	ctx->name = name;
++	ctx->trigger = trigger;
++
+ 	/*
+-	 * If the virtual interrupt is masked, restore it.  Devices
+-	 * supporting DisINTx can be masked at the hardware level
+-	 * here, non-PCI-2.3 devices will have to wait until the
+-	 * interrupt is enabled.
++	 * Fill the initial masked state based on virq_disabled.  After
++	 * enable, changing the DisINTx bit in vconfig directly changes INTx
++	 * masking.  igate prevents races during setup; once running, masked
++	 * is protected via irqlock.
++	 *
++	 * Devices supporting DisINTx also reflect the current mask state in
++	 * the physical DisINTx bit, which is not affected during IRQ setup.
++	 *
++	 * Devices without DisINTx support require an exclusive interrupt.
++	 * IRQ masking is performed at the IRQ chip.  Again, igate protects
++	 * against races during setup, and IRQ handlers and irqfds are not
++	 * yet active; therefore masked is stable and can be used to
++	 * conditionally auto-enable the IRQ.
++	 *
++	 * irq_type must be stable while the IRQ handler is registered,
++	 * therefore it must be set before request_irq().
+ 	 */
+ 	ctx->masked = vdev->virq_disabled;
+-	if (vdev->pci_2_3)
+-		pci_intx(vdev->pdev, !ctx->masked);
++	if (vdev->pci_2_3) {
++		pci_intx(pdev, !ctx->masked);
++		irqflags = IRQF_SHARED;
++	} else {
++		irqflags = ctx->masked ? IRQF_NO_AUTOEN : 0;
++	}
+ 
+ 	vdev->irq_type = VFIO_PCI_INTX_IRQ_INDEX;
+ 
++	ret = request_irq(pdev->irq, vfio_intx_handler,
++			  irqflags, ctx->name, vdev);
++	if (ret) {
++		vdev->irq_type = VFIO_PCI_NUM_IRQS;
++		kfree(name);
++		vfio_irq_ctx_free(vdev, ctx, 0);
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+-static int vfio_intx_set_signal(struct vfio_pci_core_device *vdev, int fd)
++static int vfio_intx_set_signal(struct vfio_pci_core_device *vdev,
++				struct eventfd_ctx *trigger)
+ {
+ 	struct pci_dev *pdev = vdev->pdev;
+-	unsigned long irqflags = IRQF_SHARED;
+ 	struct vfio_pci_irq_ctx *ctx;
+-	struct eventfd_ctx *trigger;
+-	unsigned long flags;
+-	int ret;
++	struct eventfd_ctx *old;
+ 
+ 	ctx = vfio_irq_ctx_get(vdev, 0);
+ 	if (WARN_ON_ONCE(!ctx))
+ 		return -EINVAL;
+ 
+-	if (ctx->trigger) {
+-		free_irq(pdev->irq, vdev);
+-		kfree(ctx->name);
+-		eventfd_ctx_put(ctx->trigger);
+-		ctx->trigger = NULL;
+-	}
+-
+-	if (fd < 0) /* Disable only */
+-		return 0;
++	old = ctx->trigger;
+ 
+-	ctx->name = kasprintf(GFP_KERNEL_ACCOUNT, "vfio-intx(%s)",
+-			      pci_name(pdev));
+-	if (!ctx->name)
+-		return -ENOMEM;
++	WRITE_ONCE(ctx->trigger, trigger);
+ 
+-	trigger = eventfd_ctx_fdget(fd);
+-	if (IS_ERR(trigger)) {
+-		kfree(ctx->name);
+-		return PTR_ERR(trigger);
++	/* Releasing an old ctx requires synchronizing in-flight users */
++	if (old) {
++		synchronize_irq(pdev->irq);
++		vfio_virqfd_flush_thread(&ctx->unmask);
++		eventfd_ctx_put(old);
+ 	}
+ 
+-	ctx->trigger = trigger;
+-
+-	if (!vdev->pci_2_3)
+-		irqflags = 0;
+-
+-	ret = request_irq(pdev->irq, vfio_intx_handler,
+-			  irqflags, ctx->name, vdev);
+-	if (ret) {
+-		ctx->trigger = NULL;
+-		kfree(ctx->name);
+-		eventfd_ctx_put(trigger);
+-		return ret;
+-	}
+-
+-	/*
+-	 * INTx disable will stick across the new irq setup,
+-	 * disable_irq won't.
+-	 */
+-	spin_lock_irqsave(&vdev->irqlock, flags);
+-	if (!vdev->pci_2_3 && ctx->masked)
+-		disable_irq_nosync(pdev->irq);
+-	spin_unlock_irqrestore(&vdev->irqlock, flags);
+-
+ 	return 0;
+ }
+ 
+ static void vfio_intx_disable(struct vfio_pci_core_device *vdev)
+ {
++	struct pci_dev *pdev = vdev->pdev;
+ 	struct vfio_pci_irq_ctx *ctx;
+ 
+ 	ctx = vfio_irq_ctx_get(vdev, 0);
+@@ -329,10 +358,13 @@ static void vfio_intx_disable(struct vfio_pci_core_device *vdev)
+ 	if (ctx) {
+ 		vfio_virqfd_disable(&ctx->unmask);
+ 		vfio_virqfd_disable(&ctx->mask);
++		free_irq(pdev->irq, vdev);
++		if (ctx->trigger)
++			eventfd_ctx_put(ctx->trigger);
++		kfree(ctx->name);
++		vfio_irq_ctx_free(vdev, ctx, 0);
+ 	}
+-	vfio_intx_set_signal(vdev, -1);
+ 	vdev->irq_type = VFIO_PCI_NUM_IRQS;
+-	vfio_irq_ctx_free(vdev, ctx, 0);
+ }
+ 
+ /*
+@@ -342,7 +374,7 @@ static irqreturn_t vfio_msihandler(int irq, void *arg)
+ {
+ 	struct eventfd_ctx *trigger = arg;
+ 
+-	eventfd_signal(trigger, 1);
++	eventfd_signal(trigger);
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -560,11 +592,11 @@ static int vfio_pci_set_intx_unmask(struct vfio_pci_core_device *vdev,
+ 		return -EINVAL;
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+-		vfio_pci_intx_unmask(vdev);
++		__vfio_pci_intx_unmask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
+ 		uint8_t unmask = *(uint8_t *)data;
+ 		if (unmask)
+-			vfio_pci_intx_unmask(vdev);
++			__vfio_pci_intx_unmask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+ 		struct vfio_pci_irq_ctx *ctx = vfio_irq_ctx_get(vdev, 0);
+ 		int32_t fd = *(int32_t *)data;
+@@ -591,11 +623,11 @@ static int vfio_pci_set_intx_mask(struct vfio_pci_core_device *vdev,
+ 		return -EINVAL;
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+-		vfio_pci_intx_mask(vdev);
++		__vfio_pci_intx_mask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
+ 		uint8_t mask = *(uint8_t *)data;
+ 		if (mask)
+-			vfio_pci_intx_mask(vdev);
++			__vfio_pci_intx_mask(vdev);
+ 	} else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+ 		return -ENOTTY; /* XXX implement me */
+ 	}
+@@ -616,19 +648,23 @@ static int vfio_pci_set_intx_trigger(struct vfio_pci_core_device *vdev,
+ 		return -EINVAL;
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
++		struct eventfd_ctx *trigger = NULL;
+ 		int32_t fd = *(int32_t *)data;
+ 		int ret;
+ 
+-		if (is_intx(vdev))
+-			return vfio_intx_set_signal(vdev, fd);
++		if (fd >= 0) {
++			trigger = eventfd_ctx_fdget(fd);
++			if (IS_ERR(trigger))
++				return PTR_ERR(trigger);
++		}
+ 
+-		ret = vfio_intx_enable(vdev);
+-		if (ret)
+-			return ret;
++		if (is_intx(vdev))
++			ret = vfio_intx_set_signal(vdev, trigger);
++		else
++			ret = vfio_intx_enable(vdev, trigger);
+ 
+-		ret = vfio_intx_set_signal(vdev, fd);
+-		if (ret)
+-			vfio_intx_disable(vdev);
++		if (ret && trigger)
++			eventfd_ctx_put(trigger);
+ 
+ 		return ret;
+ 	}
+@@ -689,11 +725,11 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_core_device *vdev,
+ 		if (!ctx)
+ 			continue;
+ 		if (flags & VFIO_IRQ_SET_DATA_NONE) {
+-			eventfd_signal(ctx->trigger, 1);
++			eventfd_signal(ctx->trigger);
+ 		} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
+ 			uint8_t *bools = data;
+ 			if (bools[i - start])
+-				eventfd_signal(ctx->trigger, 1);
++				eventfd_signal(ctx->trigger);
+ 		}
+ 	}
+ 	return 0;
+@@ -707,7 +743,7 @@ static int vfio_pci_set_ctx_trigger_single(struct eventfd_ctx **ctx,
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+ 		if (*ctx) {
+ 			if (count) {
+-				eventfd_signal(*ctx, 1);
++				eventfd_signal(*ctx);
+ 			} else {
+ 				eventfd_ctx_put(*ctx);
+ 				*ctx = NULL;
+@@ -722,7 +758,7 @@ static int vfio_pci_set_ctx_trigger_single(struct eventfd_ctx **ctx,
+ 
+ 		trigger = *(uint8_t *)data;
+ 		if (trigger && *ctx)
+-			eventfd_signal(*ctx, 1);
++			eventfd_signal(*ctx);
+ 
+ 		return 0;
+ 	} else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+diff --git a/drivers/vfio/platform/vfio_platform_irq.c b/drivers/vfio/platform/vfio_platform_irq.c
+index 665197caed89e..ef41ecef83af1 100644
+--- a/drivers/vfio/platform/vfio_platform_irq.c
++++ b/drivers/vfio/platform/vfio_platform_irq.c
+@@ -136,6 +136,16 @@ static int vfio_platform_set_irq_unmask(struct vfio_platform_device *vdev,
+ 	return 0;
+ }
+ 
++/*
++ * The trigger eventfd is guaranteed valid in the interrupt path
++ * and protected by the igate mutex when triggered via ioctl.
++ */
++static void vfio_send_eventfd(struct vfio_platform_irq *irq_ctx)
++{
++	if (likely(irq_ctx->trigger))
++		eventfd_signal(irq_ctx->trigger);
++}
++
+ static irqreturn_t vfio_automasked_irq_handler(int irq, void *dev_id)
+ {
+ 	struct vfio_platform_irq *irq_ctx = dev_id;
+@@ -155,7 +165,7 @@ static irqreturn_t vfio_automasked_irq_handler(int irq, void *dev_id)
+ 	spin_unlock_irqrestore(&irq_ctx->lock, flags);
+ 
+ 	if (ret == IRQ_HANDLED)
+-		eventfd_signal(irq_ctx->trigger, 1);
++		vfio_send_eventfd(irq_ctx);
+ 
+ 	return ret;
+ }
+@@ -164,52 +174,40 @@ static irqreturn_t vfio_irq_handler(int irq, void *dev_id)
+ {
+ 	struct vfio_platform_irq *irq_ctx = dev_id;
+ 
+-	eventfd_signal(irq_ctx->trigger, 1);
++	vfio_send_eventfd(irq_ctx);
+ 
+ 	return IRQ_HANDLED;
+ }
+ 
+ static int vfio_set_trigger(struct vfio_platform_device *vdev, int index,
+-			    int fd, irq_handler_t handler)
++			    int fd)
+ {
+ 	struct vfio_platform_irq *irq = &vdev->irqs[index];
+ 	struct eventfd_ctx *trigger;
+-	int ret;
+ 
+ 	if (irq->trigger) {
+-		irq_clear_status_flags(irq->hwirq, IRQ_NOAUTOEN);
+-		free_irq(irq->hwirq, irq);
+-		kfree(irq->name);
++		disable_irq(irq->hwirq);
+ 		eventfd_ctx_put(irq->trigger);
+ 		irq->trigger = NULL;
+ 	}
+ 
+ 	if (fd < 0) /* Disable only */
+ 		return 0;
+-	irq->name = kasprintf(GFP_KERNEL_ACCOUNT, "vfio-irq[%d](%s)",
+-			      irq->hwirq, vdev->name);
+-	if (!irq->name)
+-		return -ENOMEM;
+ 
+ 	trigger = eventfd_ctx_fdget(fd);
+-	if (IS_ERR(trigger)) {
+-		kfree(irq->name);
++	if (IS_ERR(trigger))
+ 		return PTR_ERR(trigger);
+-	}
+ 
+ 	irq->trigger = trigger;
+ 
+-	irq_set_status_flags(irq->hwirq, IRQ_NOAUTOEN);
+-	ret = request_irq(irq->hwirq, handler, 0, irq->name, irq);
+-	if (ret) {
+-		kfree(irq->name);
+-		eventfd_ctx_put(trigger);
+-		irq->trigger = NULL;
+-		return ret;
+-	}
+-
+-	if (!irq->masked)
+-		enable_irq(irq->hwirq);
++	/*
++	 * irq->masked effectively provides nested disables within the overall
++	 * enable relative to trigger.  Specifically, request_irq() is called
++	 * with NO_AUTOEN, therefore the IRQ is initially disabled.  The user
++	 * may only further disable the IRQ with a MASK operation because
++	 * irq->masked is initially false.
++	 */
++	enable_irq(irq->hwirq);
+ 
+ 	return 0;
+ }
+@@ -228,7 +226,7 @@ static int vfio_platform_set_irq_trigger(struct vfio_platform_device *vdev,
+ 		handler = vfio_irq_handler;
+ 
+ 	if (!count && (flags & VFIO_IRQ_SET_DATA_NONE))
+-		return vfio_set_trigger(vdev, index, -1, handler);
++		return vfio_set_trigger(vdev, index, -1);
+ 
+ 	if (start != 0 || count != 1)
+ 		return -EINVAL;
+@@ -236,7 +234,7 @@ static int vfio_platform_set_irq_trigger(struct vfio_platform_device *vdev,
+ 	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+ 		int32_t fd = *(int32_t *)data;
+ 
+-		return vfio_set_trigger(vdev, index, fd, handler);
++		return vfio_set_trigger(vdev, index, fd);
+ 	}
+ 
+ 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
+@@ -260,6 +258,14 @@ int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
+ 		    unsigned start, unsigned count, uint32_t flags,
+ 		    void *data) = NULL;
+ 
++	/*
++	 * For compatibility, errors from request_irq() are local to the
++	 * SET_IRQS path and reflected in the name pointer.  This allows,
++	 * for example, polling mode fallback for an exclusive IRQ failure.
++	 */
++	if (IS_ERR(vdev->irqs[index].name))
++		return PTR_ERR(vdev->irqs[index].name);
++
+ 	switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ 	case VFIO_IRQ_SET_ACTION_MASK:
+ 		func = vfio_platform_set_irq_mask;
+@@ -280,7 +286,7 @@ int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
+ 
+ int vfio_platform_irq_init(struct vfio_platform_device *vdev)
+ {
+-	int cnt = 0, i;
++	int cnt = 0, i, ret = 0;
+ 
+ 	while (vdev->get_irq(vdev, cnt) >= 0)
+ 		cnt++;
+@@ -292,37 +298,70 @@ int vfio_platform_irq_init(struct vfio_platform_device *vdev)
+ 
+ 	for (i = 0; i < cnt; i++) {
+ 		int hwirq = vdev->get_irq(vdev, i);
++		irq_handler_t handler = vfio_irq_handler;
+ 
+-		if (hwirq < 0)
++		if (hwirq < 0) {
++			ret = -EINVAL;
+ 			goto err;
++		}
+ 
+ 		spin_lock_init(&vdev->irqs[i].lock);
+ 
+ 		vdev->irqs[i].flags = VFIO_IRQ_INFO_EVENTFD;
+ 
+-		if (irq_get_trigger_type(hwirq) & IRQ_TYPE_LEVEL_MASK)
++		if (irq_get_trigger_type(hwirq) & IRQ_TYPE_LEVEL_MASK) {
+ 			vdev->irqs[i].flags |= VFIO_IRQ_INFO_MASKABLE
+ 						| VFIO_IRQ_INFO_AUTOMASKED;
++			handler = vfio_automasked_irq_handler;
++		}
+ 
+ 		vdev->irqs[i].count = 1;
+ 		vdev->irqs[i].hwirq = hwirq;
+ 		vdev->irqs[i].masked = false;
++		vdev->irqs[i].name = kasprintf(GFP_KERNEL_ACCOUNT,
++					       "vfio-irq[%d](%s)", hwirq,
++					       vdev->name);
++		if (!vdev->irqs[i].name) {
++			ret = -ENOMEM;
++			goto err;
++		}
++
++		ret = request_irq(hwirq, handler, IRQF_NO_AUTOEN,
++				  vdev->irqs[i].name, &vdev->irqs[i]);
++		if (ret) {
++			kfree(vdev->irqs[i].name);
++			vdev->irqs[i].name = ERR_PTR(ret);
++		}
+ 	}
+ 
+ 	vdev->num_irqs = cnt;
+ 
+ 	return 0;
+ err:
++	for (--i; i >= 0; i--) {
++		if (!IS_ERR(vdev->irqs[i].name)) {
++			free_irq(vdev->irqs[i].hwirq, &vdev->irqs[i]);
++			kfree(vdev->irqs[i].name);
++		}
++	}
+ 	kfree(vdev->irqs);
+-	return -EINVAL;
++	return ret;
+ }
+ 
+ void vfio_platform_irq_cleanup(struct vfio_platform_device *vdev)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < vdev->num_irqs; i++)
+-		vfio_set_trigger(vdev, i, -1, NULL);
++	for (i = 0; i < vdev->num_irqs; i++) {
++		vfio_virqfd_disable(&vdev->irqs[i].mask);
++		vfio_virqfd_disable(&vdev->irqs[i].unmask);
++		if (!IS_ERR(vdev->irqs[i].name)) {
++			free_irq(vdev->irqs[i].hwirq, &vdev->irqs[i]);
++			if (vdev->irqs[i].trigger)
++				eventfd_ctx_put(vdev->irqs[i].trigger);
++			kfree(vdev->irqs[i].name);
++		}
++	}
+ 
+ 	vdev->num_irqs = 0;
+ 	kfree(vdev->irqs);
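
Two idioms in the rework above deserve spelling out: the IRQ is now
requested once at init time with IRQF_NO_AUTOEN and only armed when an
eventfd trigger is attached, and a request_irq() failure is parked in the
name pointer via ERR_PTR() so the SET_IRQS ioctl can report it later. A
condensed sketch under those assumptions (foo_handler, foo and name are
placeholders, not names from the patch):

	/* init time: register the handler disabled, defer any failure */
	ret = request_irq(hwirq, foo_handler, IRQF_NO_AUTOEN, name, foo);
	if (ret) {
		kfree(name);
		foo->name = ERR_PTR(ret);	/* remember the error */
	}

	/* ioctl time: surface the deferred error, if any */
	if (IS_ERR(foo->name))
		return PTR_ERR(foo->name);

	/* trigger attach: arm the IRQ now that an eventfd exists */
	enable_irq(hwirq);
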
+diff --git a/drivers/vfio/virqfd.c b/drivers/vfio/virqfd.c
+index 29c564b7a6e13..5322691338019 100644
+--- a/drivers/vfio/virqfd.c
++++ b/drivers/vfio/virqfd.c
+@@ -101,6 +101,13 @@ static void virqfd_inject(struct work_struct *work)
+ 		virqfd->thread(virqfd->opaque, virqfd->data);
+ }
+ 
++static void virqfd_flush_inject(struct work_struct *work)
++{
++	struct virqfd *virqfd = container_of(work, struct virqfd, flush_inject);
++
++	flush_work(&virqfd->inject);
++}
++
+ int vfio_virqfd_enable(void *opaque,
+ 		       int (*handler)(void *, void *),
+ 		       void (*thread)(void *, void *),
+@@ -124,6 +131,7 @@ int vfio_virqfd_enable(void *opaque,
+ 
+ 	INIT_WORK(&virqfd->shutdown, virqfd_shutdown);
+ 	INIT_WORK(&virqfd->inject, virqfd_inject);
++	INIT_WORK(&virqfd->flush_inject, virqfd_flush_inject);
+ 
+ 	irqfd = fdget(fd);
+ 	if (!irqfd.file) {
+@@ -213,3 +221,16 @@ void vfio_virqfd_disable(struct virqfd **pvirqfd)
+ 	flush_workqueue(vfio_irqfd_cleanup_wq);
+ }
+ EXPORT_SYMBOL_GPL(vfio_virqfd_disable);
++
++void vfio_virqfd_flush_thread(struct virqfd **pvirqfd)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&virqfd_lock, flags);
++	if (*pvirqfd && (*pvirqfd)->thread)
++		queue_work(vfio_irqfd_cleanup_wq, &(*pvirqfd)->flush_inject);
++	spin_unlock_irqrestore(&virqfd_lock, flags);
++
++	flush_workqueue(vfio_irqfd_cleanup_wq);
++}
++EXPORT_SYMBOL_GPL(vfio_virqfd_flush_thread);
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index da7ec77cdaff0..173beda74b38c 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -178,7 +178,7 @@ static irqreturn_t vhost_vdpa_virtqueue_cb(void *private)
+ 	struct eventfd_ctx *call_ctx = vq->call_ctx.ctx;
+ 
+ 	if (call_ctx)
+-		eventfd_signal(call_ctx, 1);
++		eventfd_signal(call_ctx);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -189,7 +189,7 @@ static irqreturn_t vhost_vdpa_config_cb(void *private)
+ 	struct eventfd_ctx *config_ctx = v->config_ctx;
+ 
+ 	if (config_ctx)
+-		eventfd_signal(config_ctx, 1);
++		eventfd_signal(config_ctx);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index e0c181ad17e31..045f666b4f12a 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2248,7 +2248,7 @@ int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
+ 		len -= l;
+ 		if (!len) {
+ 			if (vq->log_ctx)
+-				eventfd_signal(vq->log_ctx, 1);
++				eventfd_signal(vq->log_ctx);
+ 			return 0;
+ 		}
+ 	}
+@@ -2271,7 +2271,7 @@ static int vhost_update_used_flags(struct vhost_virtqueue *vq)
+ 		log_used(vq, (used - (void __user *)vq->used),
+ 			 sizeof vq->used->flags);
+ 		if (vq->log_ctx)
+-			eventfd_signal(vq->log_ctx, 1);
++			eventfd_signal(vq->log_ctx);
+ 	}
+ 	return 0;
+ }
+@@ -2289,7 +2289,7 @@ static int vhost_update_avail_event(struct vhost_virtqueue *vq)
+ 		log_used(vq, (used - (void __user *)vq->used),
+ 			 sizeof *vhost_avail_event(vq));
+ 		if (vq->log_ctx)
+-			eventfd_signal(vq->log_ctx, 1);
++			eventfd_signal(vq->log_ctx);
+ 	}
+ 	return 0;
+ }
+@@ -2715,7 +2715,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
+ 		log_used(vq, offsetof(struct vring_used, idx),
+ 			 sizeof vq->used->idx);
+ 		if (vq->log_ctx)
+-			eventfd_signal(vq->log_ctx, 1);
++			eventfd_signal(vq->log_ctx);
+ 	}
+ 	return r;
+ }
+@@ -2763,7 +2763,7 @@ void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
+ {
+ 	/* Signal the Guest tell them we used something up. */
+ 	if (vq->call_ctx.ctx && vhost_notify(dev, vq))
+-		eventfd_signal(vq->call_ctx.ctx, 1);
++		eventfd_signal(vq->call_ctx.ctx);
+ }
+ EXPORT_SYMBOL_GPL(vhost_signal);
+ 
+diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
+index f60d5f7bef944..9e942fcda5c3f 100644
+--- a/drivers/vhost/vhost.h
++++ b/drivers/vhost/vhost.h
+@@ -249,7 +249,7 @@ void vhost_iotlb_map_free(struct vhost_iotlb *iotlb,
+ #define vq_err(vq, fmt, ...) do {                                  \
+ 		pr_debug(pr_fmt(fmt), ##__VA_ARGS__);       \
+ 		if ((vq)->error_ctx)                               \
+-				eventfd_signal((vq)->error_ctx, 1);\
++				eventfd_signal((vq)->error_ctx);\
+ 	} while (0)
+ 
+ enum {
+diff --git a/drivers/virt/acrn/ioeventfd.c b/drivers/virt/acrn/ioeventfd.c
+index ac4037e9f947e..4e845c6ca0b57 100644
+--- a/drivers/virt/acrn/ioeventfd.c
++++ b/drivers/virt/acrn/ioeventfd.c
+@@ -223,7 +223,7 @@ static int acrn_ioeventfd_handler(struct acrn_ioreq_client *client,
+ 	mutex_lock(&client->vm->ioeventfds_lock);
+ 	p = hsm_ioeventfd_match(client->vm, addr, val, size, req->type);
+ 	if (p)
+-		eventfd_signal(p->eventfd, 1);
++		eventfd_signal(p->eventfd);
+ 	mutex_unlock(&client->vm->ioeventfds_lock);
+ 
+ 	return 0;
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 3893dc29eb263..71dee622b771b 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -489,13 +489,19 @@ EXPORT_SYMBOL_GPL(unregister_virtio_device);
+ int virtio_device_freeze(struct virtio_device *dev)
+ {
+ 	struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
++	int ret;
+ 
+ 	virtio_config_disable(dev);
+ 
+ 	dev->failed = dev->config->get_status(dev) & VIRTIO_CONFIG_S_FAILED;
+ 
+-	if (drv && drv->freeze)
+-		return drv->freeze(dev);
++	if (drv && drv->freeze) {
++		ret = drv->freeze(dev);
++		if (ret) {
++			virtio_config_enable(dev);
++			return ret;
++		}
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index 0eb337a8ec0fa..35b6e306026a4 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -1147,7 +1147,7 @@ static irqreturn_t ioeventfd_interrupt(int irq, void *dev_id)
+ 		if (ioreq->addr == kioeventfd->addr + VIRTIO_MMIO_QUEUE_NOTIFY &&
+ 		    ioreq->size == kioeventfd->addr_len &&
+ 		    (ioreq->data & QUEUE_NOTIFY_VQ_MASK) == kioeventfd->vq) {
+-			eventfd_signal(kioeventfd->eventfd, 1);
++			eventfd_signal(kioeventfd->eventfd);
+ 			state = STATE_IORESP_READY;
+ 			break;
+ 		}
+diff --git a/fs/aio.c b/fs/aio.c
+index 3235d4e6cc623..be12d7c049eff 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -590,8 +590,8 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
+ 
+ void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
+ {
+-	struct aio_kiocb *req = container_of(iocb, struct aio_kiocb, rw);
+-	struct kioctx *ctx = req->ki_ctx;
++	struct aio_kiocb *req;
++	struct kioctx *ctx;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -601,9 +601,13 @@ void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
+ 	if (!(iocb->ki_flags & IOCB_AIO_RW))
+ 		return;
+ 
++	req = container_of(iocb, struct aio_kiocb, rw);
++
+ 	if (WARN_ON_ONCE(!list_empty(&req->ki_list)))
+ 		return;
+ 
++	ctx = req->ki_ctx;
++
+ 	spin_lock_irqsave(&ctx->ctx_lock, flags);
+ 	list_add_tail(&req->ki_list, &ctx->active_reqs);
+ 	req->ki_cancel = cancel;
+@@ -1173,7 +1177,7 @@ static void aio_complete(struct aio_kiocb *iocb)
+ 	 * from IRQ context.
+ 	 */
+ 	if (iocb->ki_eventfd)
+-		eventfd_signal(iocb->ki_eventfd, 1);
++		eventfd_signal(iocb->ki_eventfd);
+ 
+ 	/*
+ 	 * We have to order our ring_info tail store above and test
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index aca24186d66bc..52bab8cf7867a 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1562,7 +1562,8 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		 * needing to allocate extents from the block group.
+ 		 */
+ 		used = btrfs_space_info_used(space_info, true);
+-		if (space_info->total_bytes - block_group->length < used) {
++		if (space_info->total_bytes - block_group->length < used &&
++		    block_group->zone_unusable < block_group->length) {
+ 			/*
+ 			 * Add a reference for the list, compensate for the ref
+ 			 * drop under the "next" label for the
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index eade0432bd9ce..87082f9732bdb 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2734,16 +2734,34 @@ static int fiemap_process_hole(struct btrfs_inode *inode,
+ 	 * it beyond i_size.
+ 	 */
+ 	while (cur_offset < end && cur_offset < i_size) {
++		struct extent_state *cached_state = NULL;
+ 		u64 delalloc_start;
+ 		u64 delalloc_end;
+ 		u64 prealloc_start;
++		u64 lockstart;
++		u64 lockend;
+ 		u64 prealloc_len = 0;
+ 		bool delalloc;
+ 
++		lockstart = round_down(cur_offset, inode->root->fs_info->sectorsize);
++		lockend = round_up(end, inode->root->fs_info->sectorsize);
++
++		/*
++		 * We are only locking for the delalloc range because that's the
++		 * only thing that can change here.  With fiemap we have a lock
++		 * on the inode, so no buffered or direct writes can happen.
++		 *
++		 * However, mmaps and normal page writeback will cause this to
++		 * change arbitrarily.  We have to take the extent lock here to
++		 * make sure that nobody messes with the tree while we're doing
++		 * btrfs_find_delalloc_in_range.
++		 */
++		lock_extent(&inode->io_tree, lockstart, lockend, &cached_state);
+ 		delalloc = btrfs_find_delalloc_in_range(inode, cur_offset, end,
+ 							delalloc_cached_state,
+ 							&delalloc_start,
+ 							&delalloc_end);
++		unlock_extent(&inode->io_tree, lockstart, lockend, &cached_state);
+ 		if (!delalloc)
+ 			break;
+ 
+@@ -2911,15 +2929,15 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 		  u64 start, u64 len)
+ {
+ 	const u64 ino = btrfs_ino(inode);
+-	struct extent_state *cached_state = NULL;
+ 	struct extent_state *delalloc_cached_state = NULL;
+ 	struct btrfs_path *path;
+ 	struct fiemap_cache cache = { 0 };
+ 	struct btrfs_backref_share_check_ctx *backref_ctx;
+ 	u64 last_extent_end;
+ 	u64 prev_extent_end;
+-	u64 lockstart;
+-	u64 lockend;
++	u64 range_start;
++	u64 range_end;
++	const u64 sectorsize = inode->root->fs_info->sectorsize;
+ 	bool stopped = false;
+ 	int ret;
+ 
+@@ -2930,12 +2948,11 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 		goto out;
+ 	}
+ 
+-	lockstart = round_down(start, inode->root->fs_info->sectorsize);
+-	lockend = round_up(start + len, inode->root->fs_info->sectorsize);
+-	prev_extent_end = lockstart;
++	range_start = round_down(start, sectorsize);
++	range_end = round_up(start + len, sectorsize);
++	prev_extent_end = range_start;
+ 
+ 	btrfs_inode_lock(inode, BTRFS_ILOCK_SHARED);
+-	lock_extent(&inode->io_tree, lockstart, lockend, &cached_state);
+ 
+ 	ret = fiemap_find_last_extent_offset(inode, path, &last_extent_end);
+ 	if (ret < 0)
+@@ -2943,7 +2960,7 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 	btrfs_release_path(path);
+ 
+ 	path->reada = READA_FORWARD;
+-	ret = fiemap_search_slot(inode, path, lockstart);
++	ret = fiemap_search_slot(inode, path, range_start);
+ 	if (ret < 0) {
+ 		goto out_unlock;
+ 	} else if (ret > 0) {
+@@ -2955,7 +2972,7 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 		goto check_eof_delalloc;
+ 	}
+ 
+-	while (prev_extent_end < lockend) {
++	while (prev_extent_end < range_end) {
+ 		struct extent_buffer *leaf = path->nodes[0];
+ 		struct btrfs_file_extent_item *ei;
+ 		struct btrfs_key key;
+@@ -2978,19 +2995,19 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 		 * The first iteration can leave us at an extent item that ends
+ 		 * before our range's start. Move to the next item.
+ 		 */
+-		if (extent_end <= lockstart)
++		if (extent_end <= range_start)
+ 			goto next_item;
+ 
+ 		backref_ctx->curr_leaf_bytenr = leaf->start;
+ 
+ 		/* We have an implicit hole (NO_HOLES feature enabled). */
+ 		if (prev_extent_end < key.offset) {
+-			const u64 range_end = min(key.offset, lockend) - 1;
++			const u64 hole_end = min(key.offset, range_end) - 1;
+ 
+ 			ret = fiemap_process_hole(inode, fieinfo, &cache,
+ 						  &delalloc_cached_state,
+ 						  backref_ctx, 0, 0, 0,
+-						  prev_extent_end, range_end);
++						  prev_extent_end, hole_end);
+ 			if (ret < 0) {
+ 				goto out_unlock;
+ 			} else if (ret > 0) {
+@@ -3000,7 +3017,7 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 			}
+ 
+ 			/* We've reached the end of the fiemap range, stop. */
+-			if (key.offset >= lockend) {
++			if (key.offset >= range_end) {
+ 				stopped = true;
+ 				break;
+ 			}
+@@ -3094,29 +3111,41 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 	btrfs_free_path(path);
+ 	path = NULL;
+ 
+-	if (!stopped && prev_extent_end < lockend) {
++	if (!stopped && prev_extent_end < range_end) {
+ 		ret = fiemap_process_hole(inode, fieinfo, &cache,
+ 					  &delalloc_cached_state, backref_ctx,
+-					  0, 0, 0, prev_extent_end, lockend - 1);
++					  0, 0, 0, prev_extent_end, range_end - 1);
+ 		if (ret < 0)
+ 			goto out_unlock;
+-		prev_extent_end = lockend;
++		prev_extent_end = range_end;
+ 	}
+ 
+ 	if (cache.cached && cache.offset + cache.len >= last_extent_end) {
+ 		const u64 i_size = i_size_read(&inode->vfs_inode);
+ 
+ 		if (prev_extent_end < i_size) {
++			struct extent_state *cached_state = NULL;
+ 			u64 delalloc_start;
+ 			u64 delalloc_end;
++			u64 lockstart;
++			u64 lockend;
+ 			bool delalloc;
+ 
++			lockstart = round_down(prev_extent_end, sectorsize);
++			lockend = round_up(i_size, sectorsize);
++
++			/*
++			 * See the comment in fiemap_process_hole as to why
++			 * we're doing the locking here.
++			 */
++			lock_extent(&inode->io_tree, lockstart, lockend, &cached_state);
+ 			delalloc = btrfs_find_delalloc_in_range(inode,
+ 								prev_extent_end,
+ 								i_size - 1,
+ 								&delalloc_cached_state,
+ 								&delalloc_start,
+ 								&delalloc_end);
++			unlock_extent(&inode->io_tree, lockstart, lockend, &cached_state);
+ 			if (!delalloc)
+ 				cache.flags |= FIEMAP_EXTENT_LAST;
+ 		} else {
+@@ -3127,7 +3156,6 @@ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ 	ret = emit_last_fiemap_cache(fieinfo, &cache);
+ 
+ out_unlock:
+-	unlock_extent(&inode->io_tree, lockstart, lockend, &cached_state);
+ 	btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
+ out:
+ 	free_extent_state(delalloc_cached_state);
+@@ -4024,6 +4052,19 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num,
+ 	if (test_and_set_bit(EXTENT_BUFFER_READING, &eb->bflags))
+ 		goto done;
+ 
++	/*
++	 * Between the initial test_bit(EXTENT_BUFFER_UPTODATE) and the above
++	 * test_and_set_bit(EXTENT_BUFFER_READING), someone else could have
++	 * started and finished reading the same eb.  In this case, UPTODATE
++	 * will now be set, and we shouldn't read it in again.
++	 */
++	if (unlikely(test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags))) {
++		clear_bit(EXTENT_BUFFER_READING, &eb->bflags);
++		smp_mb__after_atomic();
++		wake_up_bit(&eb->bflags, EXTENT_BUFFER_READING);
++		return 0;
++	}
++
+ 	clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
+ 	eb->read_mirror = 0;
+ 	check_buffer_tree_ref(eb);
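
The re-check added to read_extent_buffer_pages() is the classic
double-checked claim pattern: the thread that wins the READING bit must
re-test UPTODATE, because a racing reader may have started and finished in
the window between the initial test and test_and_set_bit(). A
self-contained userspace analogy of the pattern (an illustration, not
kernel code):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool uptodate;
static atomic_flag reading = ATOMIC_FLAG_INIT;

/* Returns true only for the single thread that must perform the read. */
static bool start_read(void)
{
	if (atomic_load(&uptodate))
		return false;			/* already populated */
	if (atomic_flag_test_and_set(&reading))
		return false;			/* another reader won */
	if (atomic_load(&uptodate)) {		/* re-check: the winner may */
		atomic_flag_clear(&reading);	/* have raced with a reader */
		return false;			/* that just finished */
	}
	return true;
}
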
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index bbfa44b89bc45..1dcf5bb8dfa6f 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2959,11 +2959,6 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
+ 				ctx.roots = NULL;
+ 			}
+ 
+-			/* Free the reserved data space */
+-			btrfs_qgroup_free_refroot(fs_info,
+-					record->data_rsv_refroot,
+-					record->data_rsv,
+-					BTRFS_QGROUP_RSV_DATA);
+ 			/*
+ 			 * Use BTRFS_SEQ_LAST as time_seq to do special search,
+ 			 * which doesn't lock tree or delayed_refs and search
+@@ -2987,6 +2982,11 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
+ 			record->old_roots = NULL;
+ 			new_roots = NULL;
+ 		}
++		/* Free the reserved data space */
++		btrfs_qgroup_free_refroot(fs_info,
++				record->data_rsv_refroot,
++				record->data_rsv,
++				BTRFS_QGROUP_RSV_DATA);
+ cleanup:
+ 		ulist_free(record->old_roots);
+ 		ulist_free(new_roots);
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 443d2519f0a9d..258b3e5585f73 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -2809,7 +2809,17 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
+ 		gen = btrfs_get_last_trans_committed(fs_info);
+ 
+ 	for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) {
+-		bytenr = btrfs_sb_offset(i);
++		ret = btrfs_sb_log_location(scrub_dev, i, 0, &bytenr);
++		if (ret == -ENOENT)
++			break;
++
++		if (ret) {
++			spin_lock(&sctx->stat_lock);
++			sctx->stat.super_errors++;
++			spin_unlock(&sctx->stat_lock);
++			continue;
++		}
++
+ 		if (bytenr + BTRFS_SUPER_INFO_SIZE >
+ 		    scrub_dev->commit_total_bytes)
+ 			break;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f627674b37db5..fd30dc3d59c85 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -683,6 +683,16 @@ static int btrfs_open_one_device(struct btrfs_fs_devices *fs_devices,
+ 	device->bdev = bdev_handle->bdev;
+ 	clear_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state);
+ 
++	if (device->devt != device->bdev->bd_dev) {
++		btrfs_warn(NULL,
++			   "device %s maj:min changed from %d:%d to %d:%d",
++			   device->name->str, MAJOR(device->devt),
++			   MINOR(device->devt), MAJOR(device->bdev->bd_dev),
++			   MINOR(device->bdev->bd_dev));
++
++		device->devt = device->bdev->bd_dev;
++	}
++
+ 	fs_devices->open_devices++;
+ 	if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state) &&
+ 	    device->devid != BTRFS_DEV_REPLACE_DEVID) {
+@@ -1290,6 +1300,47 @@ int btrfs_forget_devices(dev_t devt)
+ 	return ret;
+ }
+ 
++static bool btrfs_skip_registration(struct btrfs_super_block *disk_super,
++				    const char *path, dev_t devt,
++				    bool mount_arg_dev)
++{
++	struct btrfs_fs_devices *fs_devices;
++
++	/*
++	 * Do not skip device registration for mounted devices with matching
++	 * maj:min but different paths. Booting without initrd relies on
++	 * /dev/root initially, later replaced with the actual root device.
++	 * A successful scan ensures grub2-probe selects the correct device.
++	 */
++	list_for_each_entry(fs_devices, &fs_uuids, fs_list) {
++		struct btrfs_device *device;
++
++		mutex_lock(&fs_devices->device_list_mutex);
++
++		if (!fs_devices->opened) {
++			mutex_unlock(&fs_devices->device_list_mutex);
++			continue;
++		}
++
++		list_for_each_entry(device, &fs_devices->devices, dev_list) {
++			if (device->bdev && (device->bdev->bd_dev == devt) &&
++			    strcmp(device->name->str, path) != 0) {
++				mutex_unlock(&fs_devices->device_list_mutex);
++
++				/* Do not skip registration. */
++				return false;
++			}
++		}
++		mutex_unlock(&fs_devices->device_list_mutex);
++	}
++
++	if (!mount_arg_dev && btrfs_super_num_devices(disk_super) == 1 &&
++	    !(btrfs_super_flags(disk_super) & BTRFS_SUPER_FLAG_SEEDING))
++		return true;
++
++	return false;
++}
++
+ /*
+  * Look for a btrfs signature on a device. This may be called out of the mount path
+  * and we are not allowed to call set_blocksize during the scan. The superblock
+@@ -1346,18 +1397,14 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+ 		goto error_bdev_put;
+ 	}
+ 
+-	if (!mount_arg_dev && btrfs_super_num_devices(disk_super) == 1 &&
+-	    !(btrfs_super_flags(disk_super) & BTRFS_SUPER_FLAG_SEEDING)) {
+-		dev_t devt;
++	if (btrfs_skip_registration(disk_super, path, bdev_handle->bdev->bd_dev,
++				    mount_arg_dev)) {
++		pr_debug("BTRFS: skip registering single non-seed device %s (%d:%d)\n",
++			  path, MAJOR(bdev_handle->bdev->bd_dev),
++			  MINOR(bdev_handle->bdev->bd_dev));
+ 
+-		ret = lookup_bdev(path, &devt);
+-		if (ret)
+-			btrfs_warn(NULL, "lookup bdev failed for path %s: %d",
+-				   path, ret);
+-		else
+-			btrfs_free_stale_devices(devt, NULL);
++		btrfs_free_stale_devices(bdev_handle->bdev->bd_dev, NULL);
+ 
+-		pr_debug("BTRFS: skip registering single non-seed device %s\n", path);
+ 		device = NULL;
+ 		goto free_disk_super;
+ 	}
+@@ -1392,7 +1439,7 @@ static bool contains_pending_extent(struct btrfs_device *device, u64 *start,
+ 
+ 		if (in_range(physical_start, *start, len) ||
+ 		    in_range(*start, physical_start,
+-			     physical_end - physical_start)) {
++			     physical_end + 1 - physical_start)) {
+ 			*start = physical_end + 1;
+ 			return true;
+ 		}
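
The contains_pending_extent() change fixes an off-by-one in the second
in_range() check: with physical_end naming the extent's last byte, the old
length excluded that byte. A small runnable check, assuming the half-open
[start, start + len) semantics of the kernel helper:

#include <stdbool.h>
#include <stdio.h>

static bool in_range(unsigned long long n, unsigned long long start,
		     unsigned long long len)
{
	return n >= start && n < start + len;
}

int main(void)
{
	unsigned long long physical_start = 100, physical_end = 199;

	/* old length: misses the extent's last byte -> prints 0 */
	printf("%d\n", (int)in_range(199, physical_start,
				     physical_end - physical_start));
	/* corrected length: covers it -> prints 1 */
	printf("%d\n", (int)in_range(199, physical_start,
				     physical_end + 1 - physical_start));
	return 0;
}
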
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 034a617cb1a5e..a40da00654336 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -751,13 +751,28 @@ static void __debugfs_file_removed(struct dentry *dentry)
+ 	if ((unsigned long)fsd & DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)
+ 		return;
+ 
+-	/* if we hit zero, just wait for all to finish */
+-	if (!refcount_dec_and_test(&fsd->active_users)) {
+-		wait_for_completion(&fsd->active_users_drained);
++	/* if this was the last reference, we're done */
++	if (refcount_dec_and_test(&fsd->active_users))
+ 		return;
+-	}
+ 
+-	/* if we didn't hit zero, try to cancel any we can */
++	/*
++	 * If there's still a reference, the code that obtained it can
++	 * be in different states:
++	 *  - The common case of not using cancellations, or already
++	 *    after debugfs_leave_cancellation(), where we just need
++	 *    to wait for debugfs_file_put(), which signals the completion;
++	 *  - inside a cancellation section, i.e. between
++	 *    debugfs_enter_cancellation() and debugfs_leave_cancellation(),
++	 *    in which case we need to trigger the ->cancel() function,
++	 *    and then wait for debugfs_file_put() just like in the
++	 *    previous case;
++	 *  - before debugfs_enter_cancellation() (but obviously after
++	 *    debugfs_file_get()), in which case we may not see the
++	 *    cancellation in the list on the first round of the loop,
++	 *    but debugfs_enter_cancellation() signals the completion
++	 *    after adding it, so this code gets woken up to call the
++	 *    ->cancel() function.
++	 */
+ 	while (refcount_read(&fsd->active_users)) {
+ 		struct debugfs_cancellation *c;
+ 
+diff --git a/fs/dlm/user.c b/fs/dlm/user.c
+index 695e691b38b31..9f9b68448830e 100644
+--- a/fs/dlm/user.c
++++ b/fs/dlm/user.c
+@@ -806,7 +806,7 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
+ 	struct dlm_lkb *lkb;
+ 	DECLARE_WAITQUEUE(wait, current);
+ 	struct dlm_callback *cb;
+-	int rv, copy_lvb = 0;
++	int rv, ret, copy_lvb = 0;
+ 	int old_mode, new_mode;
+ 
+ 	if (count == sizeof(struct dlm_device_version)) {
+@@ -906,9 +906,9 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
+ 		trace_dlm_ast(lkb->lkb_resource->res_ls, lkb);
+ 	}
+ 
+-	rv = copy_result_to_user(lkb->lkb_ua,
+-				 test_bit(DLM_PROC_FLAGS_COMPAT, &proc->flags),
+-				 cb->flags, cb->mode, copy_lvb, buf, count);
++	ret = copy_result_to_user(lkb->lkb_ua,
++				  test_bit(DLM_PROC_FLAGS_COMPAT, &proc->flags),
++				  cb->flags, cb->mode, copy_lvb, buf, count);
+ 
+ 	kref_put(&cb->ref, dlm_release_callback);
+ 
+@@ -916,7 +916,7 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
+ 	if (rv == DLM_DEQUEUE_CALLBACK_LAST)
+ 		dlm_put_lkb(lkb);
+ 
+-	return rv;
++	return ret;
+ }
+ 
+ static __poll_t device_poll(struct file *file, poll_table *wait)
+diff --git a/fs/eventfd.c b/fs/eventfd.c
+index 33a918f9566c3..d2f7d2d8a3511 100644
+--- a/fs/eventfd.c
++++ b/fs/eventfd.c
+@@ -72,22 +72,19 @@ __u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, __poll_t mask)
+ }
+ 
+ /**
+- * eventfd_signal - Adds @n to the eventfd counter.
++ * eventfd_signal - Increment the event counter
+  * @ctx: [in] Pointer to the eventfd context.
+- * @n: [in] Value of the counter to be added to the eventfd internal counter.
+- *          The value cannot be negative.
+  *
+  * This function is supposed to be called by the kernel in paths that do not
+  * allow sleeping. In this function we allow the counter to reach the ULLONG_MAX
+  * value, and we signal this as an overflow condition by returning EPOLLERR
+  * to poll(2).
+  *
+- * Returns the amount by which the counter was incremented.  This will be less
+- * than @n if the counter has overflowed.
++ * Returns the amount by which the counter was incremented.
+  */
+-__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
++__u64 eventfd_signal(struct eventfd_ctx *ctx)
+ {
+-	return eventfd_signal_mask(ctx, n, 0);
++	return eventfd_signal_mask(ctx, 1, 0);
+ }
+ EXPORT_SYMBOL_GPL(eventfd_signal);
+ 
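
All of the mechanical eventfd_signal() call-site conversions earlier in
this patch follow from this one signature change; side by side as a
fragment (not standalone code):

	/* before: explicit count, in practice always 1 */
	eventfd_signal(ctx, 1);

	/* after: the counter is always incremented by exactly one */
	eventfd_signal(ctx);
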
+diff --git a/fs/exec.c b/fs/exec.c
+index 6d9ed2d765efe..7a1861a718c25 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -894,6 +894,7 @@ int transfer_args_to_stack(struct linux_binprm *bprm,
+ 			goto out;
+ 	}
+ 
++	bprm->exec += *sp_location - MAX_ARG_PAGES * PAGE_SIZE;
+ 	*sp_location = sp;
+ 
+ out:
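
The added line rebases bprm->exec, recorded as an offset within the
MAX_ARG_PAGES-sized template region, onto the final argument location.
Worked with made-up numbers (an illustration of the arithmetic, not values
from the patch):

	/*
	 * MAX_ARG_PAGES * PAGE_SIZE = 0x20000, *sp_location = 0x7fff0000:
	 * an exec value of 0x1f000 inside the template region becomes
	 * 0x1f000 + (0x7fff0000 - 0x20000) = 0x7ffef000 on the real stack.
	 */
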
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 7497a789d002e..38ec0fdb33953 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5173,10 +5173,16 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 			.fe_len = ac->ac_orig_goal_len,
+ 		};
+ 		loff_t orig_goal_end = extent_logical_end(sbi, &ex);
++		loff_t o_ex_end = extent_logical_end(sbi, &ac->ac_o_ex);
+ 
+-		/* we can't allocate as much as normalizer wants.
+-		 * so, found space must get proper lstart
+-		 * to cover original request */
++		/*
++		 * We can't allocate as much as the normalizer wants, so we try
++		 * to get a proper lstart to cover the original request, except
++		 * when the goal doesn't cover the original request as below:
++		 *
++		 * orig_ex:2045/2055(10), isize:8417280 -> normalized:0/2048
++		 * best_ex:0/200(200) -> adjusted: 1848/2048(200)
++		 */
+ 		BUG_ON(ac->ac_g_ex.fe_logical > ac->ac_o_ex.fe_logical);
+ 		BUG_ON(ac->ac_g_ex.fe_len < ac->ac_o_ex.fe_len);
+ 
+@@ -5188,7 +5194,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 		 * 1. Check if best ex can be kept at end of goal (before
+ 		 *    cr_best_avail trimmed it) and still cover original start
+ 		 * 2. Else, check if best ex can be kept at start of goal and
+-		 *    still cover original start
++		 *    still cover original end
+ 		 * 3. Else, keep the best ex at start of original request.
+ 		 */
+ 		ex.fe_len = ac->ac_b_ex.fe_len;
+@@ -5198,7 +5204,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 			goto adjust_bex;
+ 
+ 		ex.fe_logical = ac->ac_g_ex.fe_logical;
+-		if (ac->ac_o_ex.fe_logical < extent_logical_end(sbi, &ex))
++		if (o_ex_end <= extent_logical_end(sbi, &ex))
+ 			goto adjust_bex;
+ 
+ 		ex.fe_logical = ac->ac_o_ex.fe_logical;
+@@ -5206,7 +5212,6 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
+ 		ac->ac_b_ex.fe_logical = ex.fe_logical;
+ 
+ 		BUG_ON(ac->ac_o_ex.fe_logical < ac->ac_b_ex.fe_logical);
+-		BUG_ON(ac->ac_o_ex.fe_len > ac->ac_b_ex.fe_len);
+ 		BUG_ON(extent_logical_end(sbi, &ex) > orig_goal_end);
+ 	}
+ 
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e168a9f596001..9a39596d2ac4d 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1592,7 +1592,8 @@ static int ext4_flex_group_add(struct super_block *sb,
+ 		int gdb_num = group / EXT4_DESC_PER_BLOCK(sb);
+ 		int gdb_num_end = ((group + flex_gd->count - 1) /
+ 				   EXT4_DESC_PER_BLOCK(sb));
+-		int meta_bg = ext4_has_feature_meta_bg(sb);
++		int meta_bg = ext4_has_feature_meta_bg(sb) &&
++			      gdb_num >= le32_to_cpu(es->s_first_meta_bg);
+ 		sector_t padding_blocks = meta_bg ? 0 : sbi->s_sbh->b_blocknr -
+ 					 ext4_group_first_block_no(sb, 0);
+ 
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 200e1e43ef9bb..3b6133c865a29 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3025,6 +3025,7 @@ static inline void __mark_inode_dirty_flag(struct inode *inode,
+ 	case FI_INLINE_DOTS:
+ 	case FI_PIN_FILE:
+ 	case FI_COMPRESS_RELEASED:
++	case FI_ATOMIC_COMMITTED:
+ 		f2fs_mark_inode_dirty_sync(inode, true);
+ 	}
+ }
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index dfc7fd8aa1e8f..7a0143825e19d 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -192,6 +192,9 @@ void f2fs_abort_atomic_write(struct inode *inode, bool clean)
+ 	if (!f2fs_is_atomic_file(inode))
+ 		return;
+ 
++	if (clean)
++		truncate_inode_pages_final(inode->i_mapping);
++
+ 	release_atomic_write_cnt(inode);
+ 	clear_inode_flag(inode, FI_ATOMIC_COMMITTED);
+ 	clear_inode_flag(inode, FI_ATOMIC_REPLACE);
+@@ -201,7 +204,6 @@ void f2fs_abort_atomic_write(struct inode *inode, bool clean)
+ 	F2FS_I(inode)->atomic_write_task = NULL;
+ 
+ 	if (clean) {
+-		truncate_inode_pages_final(inode->i_mapping);
+ 		f2fs_i_size_write(inode, fi->original_i_size);
+ 		fi->original_i_size = 0;
+ 	}
+diff --git a/fs/fat/nfs.c b/fs/fat/nfs.c
+index c52e63e10d35c..509eea96a457d 100644
+--- a/fs/fat/nfs.c
++++ b/fs/fat/nfs.c
+@@ -130,6 +130,12 @@ fat_encode_fh_nostale(struct inode *inode, __u32 *fh, int *lenp,
+ 		fid->parent_i_gen = parent->i_generation;
+ 		type = FILEID_FAT_WITH_PARENT;
+ 		*lenp = FAT_FID_SIZE_WITH_PARENT;
++	} else {
++		/*
++		 * We need to initialize this field because the fh is actually
++		 * 12 bytes long.
++		 */
++		fid->parent_i_pos_hi = 0;
+ 	}
+ 
+ 	return type;
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index d19cbf34c6341..9307bb4393b8f 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -391,6 +391,10 @@ int fuse_lookup_name(struct super_block *sb, u64 nodeid, const struct qstr *name
+ 	err = -EIO;
+ 	if (fuse_invalid_attr(&outarg->attr))
+ 		goto out_put_forget;
++	if (outarg->nodeid == FUSE_ROOT_ID && outarg->generation != 0) {
++		pr_warn_once("root generation should be zero\n");
++		outarg->generation = 0;
++	}
+ 
+ 	*inode = fuse_iget(sb, outarg->nodeid, outarg->generation,
+ 			   &outarg->attr, ATTR_TIMEOUT(outarg),
+@@ -1210,7 +1214,7 @@ static int fuse_do_statx(struct inode *inode, struct file *file,
+ 	if (((sx->mask & STATX_SIZE) && !fuse_valid_size(sx->size)) ||
+ 	    ((sx->mask & STATX_TYPE) && (!fuse_valid_type(sx->mode) ||
+ 					 inode_wrong_type(inode, sx->mode)))) {
+-		make_bad_inode(inode);
++		fuse_make_bad(inode);
+ 		return -EIO;
+ 	}
+ 
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index a660f1f21540a..cc9651a01351c 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -2467,7 +2467,8 @@ static int fuse_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 		return fuse_dax_mmap(file, vma);
+ 
+ 	if (ff->open_flags & FOPEN_DIRECT_IO) {
+-		/* Can't provide the coherency needed for MAP_SHARED
++		/*
++		 * Can't provide the coherency needed for MAP_SHARED
+ 		 * if FUSE_DIRECT_IO_ALLOW_MMAP isn't set.
+ 		 */
+ 		if ((vma->vm_flags & VM_MAYSHARE) && !fc->direct_io_allow_mmap)
+@@ -2475,7 +2476,10 @@ static int fuse_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 
+ 		invalidate_inode_pages2(file->f_mapping);
+ 
+-		return generic_file_mmap(file, vma);
++		if (!(vma->vm_flags & VM_MAYSHARE)) {
++			/* MAP_PRIVATE */
++			return generic_file_mmap(file, vma);
++		}
+ 	}
+ 
+ 	if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
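+
+[Editor's note: taken together, the two fs/fuse/file.c hunks rework mmap for
+FOPEN_DIRECT_IO files so that only MAP_PRIVATE mappings keep using
+generic_file_mmap(); shared mappings now fall through to the caching mmap
+path when the server permits it. A condensed view of the resulting control
+flow (kernel context, not standalone code; the error value is taken from the
+surrounding function, not this hunk):
+
+    if (ff->open_flags & FOPEN_DIRECT_IO) {
+        /* MAP_SHARED needs page-cache coherency that direct I/O
+         * cannot give unless the server opted in */
+        if ((vma->vm_flags & VM_MAYSHARE) && !fc->direct_io_allow_mmap)
+            return -ENODEV;
+
+        invalidate_inode_pages2(file->f_mapping);
+
+        if (!(vma->vm_flags & VM_MAYSHARE))    /* MAP_PRIVATE */
+            return generic_file_mmap(file, vma);
+        /* MAP_SHARED: fall through to the caching mmap setup */
+    }
+]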
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 1df83eebda927..b5c241e16964d 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -939,7 +939,6 @@ static inline bool fuse_stale_inode(const struct inode *inode, int generation,
+ 
+ static inline void fuse_make_bad(struct inode *inode)
+ {
+-	remove_inode_hash(inode);
+ 	set_bit(FUSE_I_BAD, &get_fuse_inode(inode)->state);
+ }
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 2a6d44f91729b..b676c72c62adf 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -469,8 +469,11 @@ struct inode *fuse_iget(struct super_block *sb, u64 nodeid,
+ 	} else if (fuse_stale_inode(inode, generation, attr)) {
+ 		/* nodeid was reused, any I/O on the old inode should fail */
+ 		fuse_make_bad(inode);
+-		iput(inode);
+-		goto retry;
++		if (inode != d_inode(sb->s_root)) {
++			remove_inode_hash(inode);
++			iput(inode);
++			goto retry;
++		}
+ 	}
+ 	fi = get_fuse_inode(inode);
+ 	spin_lock(&fi->lock);
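+
+[Editor's note: the fuse_i.h and inode.c hunks move the unhashing out of
+fuse_make_bad(): a stale inode is still marked bad, failing further I/O, but
+only non-root inodes are unhashed and dropped so a fresh lookup can replace
+them; the root inode must stay hashed for the lifetime of the mount, and the
+dir.c hunk above enforces the matching invariant that the root's generation
+is always zero. Condensed, the retry loop now does:
+
+    } else if (fuse_stale_inode(inode, generation, attr)) {
+        fuse_make_bad(inode);              /* fail any further I/O */
+        if (inode != d_inode(sb->s_root)) {
+            remove_inode_hash(inode);      /* let iget create a fresh one */
+            iput(inode);
+            goto retry;
+        }
+        /* root: keep it hashed, just leave it marked bad */
+    }
+]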
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 5918c67dae0da..b6f801e73bfdc 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -668,10 +668,17 @@ static void nfs_direct_commit_schedule(struct nfs_direct_req *dreq)
+ 	LIST_HEAD(mds_list);
+ 
+ 	nfs_init_cinfo_from_dreq(&cinfo, dreq);
++	nfs_commit_begin(cinfo.mds);
+ 	nfs_scan_commit(dreq->inode, &mds_list, &cinfo);
+ 	res = nfs_generic_commit_list(dreq->inode, &mds_list, 0, &cinfo);
+-	if (res < 0) /* res == -ENOMEM */
+-		nfs_direct_write_reschedule(dreq);
++	if (res < 0) { /* res == -ENOMEM */
++		spin_lock(&dreq->lock);
++		if (dreq->flags == 0)
++			dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
++		spin_unlock(&dreq->lock);
++	}
++	if (nfs_commit_end(cinfo.mds))
++		nfs_direct_write_complete(dreq);
+ }
+ 
+ static void nfs_direct_write_clear_reqs(struct nfs_direct_req *dreq)
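+
+[Editor's note: the fs/nfs/direct.c change, together with the export of
+nfs_commit_begin() in write.c below, brackets the whole commit-scheduling
+step in a reference count, so the completion path runs exactly once even
+when nfs_generic_commit_list() fails without issuing any RPC. A
+self-contained sketch of the bracketing idiom, with names that are
+illustrative rather than the NFS client's real API:
+
+    #include <stdatomic.h>
+    #include <stdbool.h>
+    #include <stdio.h>
+
+    static atomic_int rpcs_out;
+
+    static void commit_begin(void)
+    {
+        atomic_fetch_add(&rpcs_out, 1);
+    }
+
+    static bool commit_end(void)
+    {
+        /* true when the last outstanding reference is dropped */
+        return atomic_fetch_sub(&rpcs_out, 1) == 1;
+    }
+
+    static void commit_schedule(void)
+    {
+        commit_begin();                 /* pin completion */
+        /* ... scan requests and send commits; on -ENOMEM just
+         * flag the request for rescheduling under its lock ... */
+        if (commit_end())               /* drop the pin */
+            puts("write complete");     /* runs exactly once */
+    }
+
+    int main(void) { commit_schedule(); return 0; }
+]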
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index 7dc21a48e3e7b..a142287d86f68 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -305,6 +305,8 @@ int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio,
+ 	new = nfs_page_create_from_folio(ctx, folio, 0, aligned_len);
+ 	if (IS_ERR(new)) {
+ 		error = PTR_ERR(new);
++		if (nfs_netfs_folio_unlock(folio))
++			folio_unlock(folio);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 9e345d3c305a6..5c2ff4a31a340 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1663,7 +1663,7 @@ static int wait_on_commit(struct nfs_mds_commit_info *cinfo)
+ 				       !atomic_read(&cinfo->rpcs_out));
+ }
+ 
+-static void nfs_commit_begin(struct nfs_mds_commit_info *cinfo)
++void nfs_commit_begin(struct nfs_mds_commit_info *cinfo)
+ {
+ 	atomic_inc(&cinfo->rpcs_out);
+ }
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index fbc0ccb404241..4c7a296c4189d 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -843,7 +843,7 @@ DECLARE_EVENT_CLASS(nfsd_clid_class,
+ 		__array(unsigned char, addr, sizeof(struct sockaddr_in6))
+ 		__field(unsigned long, flavor)
+ 		__array(unsigned char, verifier, NFS4_VERIFIER_SIZE)
+-		__string_len(name, name, clp->cl_name.len)
++		__string_len(name, clp->cl_name.data, clp->cl_name.len)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->cl_boot = clp->cl_clientid.cl_boot;
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 13592e82eaf68..65659fa0372e6 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -724,7 +724,7 @@ static int nilfs_btree_lookup_contig(const struct nilfs_bmap *btree,
+ 		dat = nilfs_bmap_get_dat(btree);
+ 		ret = nilfs_dat_translate(dat, ptr, &blocknr);
+ 		if (ret < 0)
+-			goto out;
++			goto dat_error;
+ 		ptr = blocknr;
+ 	}
+ 	cnt = 1;
+@@ -743,7 +743,7 @@ static int nilfs_btree_lookup_contig(const struct nilfs_bmap *btree,
+ 			if (dat) {
+ 				ret = nilfs_dat_translate(dat, ptr2, &blocknr);
+ 				if (ret < 0)
+-					goto out;
++					goto dat_error;
+ 				ptr2 = blocknr;
+ 			}
+ 			if (ptr2 != ptr + cnt || ++cnt == maxblocks)
+@@ -781,6 +781,11 @@ static int nilfs_btree_lookup_contig(const struct nilfs_bmap *btree,
+  out:
+ 	nilfs_btree_free_path(path);
+ 	return ret;
++
++ dat_error:
++	if (ret == -ENOENT)
++		ret = -EINVAL;  /* Notify bmap layer of metadata corruption */
++	goto out;
+ }
+ 
+ static void nilfs_btree_promote_key(struct nilfs_bmap *btree,
+diff --git a/fs/nilfs2/direct.c b/fs/nilfs2/direct.c
+index 4c85914f2abc3..893ab36824cc2 100644
+--- a/fs/nilfs2/direct.c
++++ b/fs/nilfs2/direct.c
+@@ -66,7 +66,7 @@ static int nilfs_direct_lookup_contig(const struct nilfs_bmap *direct,
+ 		dat = nilfs_bmap_get_dat(direct);
+ 		ret = nilfs_dat_translate(dat, ptr, &blocknr);
+ 		if (ret < 0)
+-			return ret;
++			goto dat_error;
+ 		ptr = blocknr;
+ 	}
+ 
+@@ -79,7 +79,7 @@ static int nilfs_direct_lookup_contig(const struct nilfs_bmap *direct,
+ 		if (dat) {
+ 			ret = nilfs_dat_translate(dat, ptr2, &blocknr);
+ 			if (ret < 0)
+-				return ret;
++				goto dat_error;
+ 			ptr2 = blocknr;
+ 		}
+ 		if (ptr2 != ptr + cnt)
+@@ -87,6 +87,11 @@ static int nilfs_direct_lookup_contig(const struct nilfs_bmap *direct,
+ 	}
+ 	*ptrp = ptr;
+ 	return cnt;
++
++ dat_error:
++	if (ret == -ENOENT)
++		ret = -EINVAL;  /* Notify bmap layer of metadata corruption */
++	return ret;
+ }
+ 
+ static __u64
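+
+[Editor's note: both nilfs2 lookup hunks above apply the same error
+translation: a DAT lookup returning -ENOENT means the on-disk metadata is
+inconsistent, and letting -ENOENT escape would be misread by callers as a
+missing block, so it is rewritten to -EINVAL at the bmap boundary. The
+pattern in isolation:
+
+    #include <errno.h>
+
+    /* Map "entry missing" from the DAT layer to "metadata corrupted"
+     * before the error leaves the bmap layer. */
+    static int translate_dat_error(int ret)
+    {
+        return (ret == -ENOENT) ? -EINVAL : ret;
+    }
+]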
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index f861f3a0bf5cf..da97149f832f2 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -112,7 +112,7 @@ int nilfs_get_block(struct inode *inode, sector_t blkoff,
+ 					   "%s (ino=%lu): a race condition while inserting a data block at offset=%llu",
+ 					   __func__, inode->i_ino,
+ 					   (unsigned long long)blkoff);
+-				err = 0;
++				err = -EAGAIN;
+ 			}
+ 			nilfs_transaction_abort(inode->i_sb);
+ 			goto out;
+diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
+index 5730c65ffb40d..15e1215bc4e5a 100644
+--- a/fs/smb/client/cached_dir.c
++++ b/fs/smb/client/cached_dir.c
+@@ -233,7 +233,8 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ 		.tcon = tcon,
+ 		.path = path,
+ 		.create_options = cifs_create_options(cifs_sb, CREATE_NOT_FILE),
+-		.desired_access =  FILE_READ_DATA | FILE_READ_ATTRIBUTES,
++		.desired_access =  FILE_READ_DATA | FILE_READ_ATTRIBUTES |
++				   FILE_READ_EA,
+ 		.disposition = FILE_OPEN,
+ 		.fid = pfid,
+ 	};
+diff --git a/fs/smb/client/cifs_debug.c b/fs/smb/client/cifs_debug.c
+index 60027f5aebe87..04a6351a9295b 100644
+--- a/fs/smb/client/cifs_debug.c
++++ b/fs/smb/client/cifs_debug.c
+@@ -488,6 +488,8 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 				ses->ses_count, ses->serverOS, ses->serverNOS,
+ 				ses->capabilities, ses->ses_status);
+ 			}
++			if (ses->expired_pwd)
++				seq_puts(m, "password no longer valid ");
+ 			spin_unlock(&ses->ses_lock);
+ 
+ 			seq_printf(m, "\n\tSecurity type: %s ",
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 462554917e5a1..35a12413bbee6 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -339,6 +339,9 @@ struct smb_version_operations {
+ 	/* informational QFS call */
+ 	void (*qfs_tcon)(const unsigned int, struct cifs_tcon *,
+ 			 struct cifs_sb_info *);
++	/* query for server interfaces */
++	int (*query_server_interfaces)(const unsigned int, struct cifs_tcon *,
++				       bool);
+ 	/* check if a path is accessible or not */
+ 	int (*is_path_accessible)(const unsigned int, struct cifs_tcon *,
+ 				  struct cifs_sb_info *, const char *);
+@@ -1052,6 +1055,7 @@ struct cifs_ses {
+ 	enum securityEnum sectype; /* what security flavor was specified? */
+ 	bool sign;		/* is signing required? */
+ 	bool domainAuto:1;
++	bool expired_pwd;  /* track if access was denied or the password expired, so we know whether it needs updating */
+ 	unsigned int flags;
+ 	__u16 session_flags;
+ 	__u8 smb3signingkey[SMB3_SIGN_KEY_SIZE];
+@@ -1562,6 +1566,7 @@ struct cifsInodeInfo {
+ 	spinlock_t deferred_lock; /* protection on deferred list */
+ 	bool lease_granted; /* Flag to indicate whether lease or oplock is granted. */
+ 	char *symlink_target;
++	__u32 reparse_tag;
+ };
+ 
+ static inline struct cifsInodeInfo *
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 9516f57323246..13131957d9616 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -144,7 +144,8 @@ extern int cifs_reconnect(struct TCP_Server_Info *server,
+ extern int checkSMB(char *buf, unsigned int len, struct TCP_Server_Info *srvr);
+ extern bool is_valid_oplock_break(char *, struct TCP_Server_Info *);
+ extern bool backup_cred(struct cifs_sb_info *);
+-extern bool is_size_safe_to_change(struct cifsInodeInfo *, __u64 eof);
++extern bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 eof,
++				   bool from_readdir);
+ extern void cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset,
+ 			    unsigned int bytes_written);
+ extern struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *, int);
+@@ -201,7 +202,8 @@ extern void cifs_unix_basic_to_fattr(struct cifs_fattr *fattr,
+ 				     struct cifs_sb_info *cifs_sb);
+ extern void cifs_dir_info_to_fattr(struct cifs_fattr *, FILE_DIRECTORY_INFO *,
+ 					struct cifs_sb_info *);
+-extern int cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr);
++extern int cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr,
++			       bool from_readdir);
+ extern struct inode *cifs_iget(struct super_block *sb,
+ 			       struct cifs_fattr *fattr);
+ 
+@@ -652,7 +654,7 @@ cifs_chan_is_iface_active(struct cifs_ses *ses,
+ 			  struct TCP_Server_Info *server);
+ void
+ cifs_disable_secondary_channels(struct cifs_ses *ses);
+-int
++void
+ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server);
+ int
+ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon, bool in_mount);
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index c3d805ecb7f11..bc8d09bab18bb 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -123,12 +123,16 @@ static void smb2_query_server_interfaces(struct work_struct *work)
+ 	struct cifs_tcon *tcon = container_of(work,
+ 					struct cifs_tcon,
+ 					query_interfaces.work);
++	struct TCP_Server_Info *server = tcon->ses->server;
+ 
+ 	/*
+ 	 * query server network interfaces, in case they change
+ 	 */
++	if (!server->ops->query_server_interfaces)
++		return;
++
+ 	xid = get_xid();
+-	rc = SMB3_request_interfaces(xid, tcon, false);
++	rc = server->ops->query_server_interfaces(xid, tcon, false);
+ 	free_xid(xid);
+ 
+ 	if (rc) {
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index c156460eb5587..c711d5eb2987e 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -329,7 +329,7 @@ int cifs_posix_open(const char *full_path, struct inode **pinode,
+ 		}
+ 	} else {
+ 		cifs_revalidate_mapping(*pinode);
+-		rc = cifs_fattr_to_inode(*pinode, &fattr);
++		rc = cifs_fattr_to_inode(*pinode, &fattr, false);
+ 	}
+ 
+ posix_open_ret:
+@@ -4766,12 +4766,14 @@ static int is_inode_writable(struct cifsInodeInfo *cifs_inode)
+    refreshing the inode only on increases in the file size
+    but this is tricky to do without racing with writebehind
+    page caching in the current Linux kernel design */
+-bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file)
++bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file,
++			    bool from_readdir)
+ {
+ 	if (!cifsInode)
+ 		return true;
+ 
+-	if (is_inode_writable(cifsInode)) {
++	if (is_inode_writable(cifsInode) ||
++		((cifsInode->oplock & CIFS_CACHE_RW_FLG) != 0 && from_readdir)) {
+ 		/* This inode is open for write at least once */
+ 		struct cifs_sb_info *cifs_sb;
+ 
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 6ecbf48d0f0c6..e4a6b240d2263 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -771,7 +771,7 @@ static void smb3_fs_context_free(struct fs_context *fc)
+  */
+ static int smb3_verify_reconfigure_ctx(struct fs_context *fc,
+ 				       struct smb3_fs_context *new_ctx,
+-				       struct smb3_fs_context *old_ctx)
++				       struct smb3_fs_context *old_ctx, bool need_recon)
+ {
+ 	if (new_ctx->posix_paths != old_ctx->posix_paths) {
+ 		cifs_errorf(fc, "can not change posixpaths during remount\n");
+@@ -797,8 +797,15 @@ static int smb3_verify_reconfigure_ctx(struct fs_context *fc,
+ 	}
+ 	if (new_ctx->password &&
+ 	    (!old_ctx->password || strcmp(new_ctx->password, old_ctx->password))) {
+-		cifs_errorf(fc, "can not change password during remount\n");
+-		return -EINVAL;
++		if (need_recon == false) {
++			cifs_errorf(fc,
++				    "can not change password of active session during remount\n");
++			return -EINVAL;
++		} else if (old_ctx->sectype == Kerberos) {
++			cifs_errorf(fc,
++				    "can not change password for Kerberos via remount\n");
++			return -EINVAL;
++		}
+ 	}
+ 	if (new_ctx->domainname &&
+ 	    (!old_ctx->domainname || strcmp(new_ctx->domainname, old_ctx->domainname))) {
+@@ -842,9 +849,14 @@ static int smb3_reconfigure(struct fs_context *fc)
+ 	struct smb3_fs_context *ctx = smb3_fc2context(fc);
+ 	struct dentry *root = fc->root;
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(root->d_sb);
++	struct cifs_ses *ses = cifs_sb_master_tcon(cifs_sb)->ses;
++	bool need_recon = false;
+ 	int rc;
+ 
+-	rc = smb3_verify_reconfigure_ctx(fc, ctx, cifs_sb->ctx);
++	if (ses->expired_pwd)
++		need_recon = true;
++
++	rc = smb3_verify_reconfigure_ctx(fc, ctx, cifs_sb->ctx, need_recon);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -857,7 +869,12 @@ static int smb3_reconfigure(struct fs_context *fc)
+ 	STEAL_STRING(cifs_sb, ctx, UNC);
+ 	STEAL_STRING(cifs_sb, ctx, source);
+ 	STEAL_STRING(cifs_sb, ctx, username);
+-	STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
++	if (need_recon == false)
++		STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
++	else  {
++		kfree_sensitive(ses->password);
++		ses->password = kstrdup(ctx->password, GFP_KERNEL);
++	}
+ 	STEAL_STRING(cifs_sb, ctx, domainname);
+ 	STEAL_STRING(cifs_sb, ctx, nodename);
+ 	STEAL_STRING(cifs_sb, ctx, iocharset);
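+
+[Editor's note: the fs_context.c hunks, paired with the smb2pdu.c change
+further down that sets ses->expired_pwd from the session-setup status, relax
+the "no password change on remount" rule exactly when the server has
+reported the old password expired, and never for Kerberos, where tickets
+rather than passwords drive reconnection. The decision logic in isolation,
+as a hedged sketch:
+
+    #include <errno.h>
+    #include <stdbool.h>
+
+    static int verify_password_change(bool expired_pwd, bool is_kerberos)
+    {
+        if (!expired_pwd)
+            return -EINVAL;   /* active session: refuse the change */
+        if (is_kerberos)
+            return -EINVAL;   /* krb5 reconnects don't use the password */
+        return 0;             /* stash the new password for reconnect */
+    }
+]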
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index eb54e48937771..cb9e719e67ae2 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -147,7 +147,8 @@ cifs_nlink_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
+ 
+ /* populate an inode with info from a cifs_fattr struct */
+ int
+-cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
++cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr,
++		    bool from_readdir)
+ {
+ 	struct cifsInodeInfo *cifs_i = CIFS_I(inode);
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+@@ -182,6 +183,7 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
+ 		inode->i_mode = fattr->cf_mode;
+ 
+ 	cifs_i->cifsAttrs = fattr->cf_cifsattrs;
++	cifs_i->reparse_tag = fattr->cf_cifstag;
+ 
+ 	if (fattr->cf_flags & CIFS_FATTR_NEED_REVAL)
+ 		cifs_i->time = 0;
+@@ -198,7 +200,7 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
+ 	 * Can't safely change the file size here if the client is writing to
+ 	 * it due to potential races.
+ 	 */
+-	if (is_size_safe_to_change(cifs_i, fattr->cf_eof)) {
++	if (is_size_safe_to_change(cifs_i, fattr->cf_eof, from_readdir)) {
+ 		i_size_write(inode, fattr->cf_eof);
+ 
+ 		/*
+@@ -209,7 +211,7 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
+ 		inode->i_blocks = (512 - 1 + fattr->cf_bytes) >> 9;
+ 	}
+ 
+-	if (S_ISLNK(fattr->cf_mode)) {
++	if (S_ISLNK(fattr->cf_mode) && fattr->cf_symlink_target) {
+ 		kfree(cifs_i->symlink_target);
+ 		cifs_i->symlink_target = fattr->cf_symlink_target;
+ 		fattr->cf_symlink_target = NULL;
+@@ -367,7 +369,7 @@ static int update_inode_info(struct super_block *sb,
+ 		CIFS_I(*inode)->time = 0; /* force reval */
+ 		return -ESTALE;
+ 	}
+-	return cifs_fattr_to_inode(*inode, fattr);
++	return cifs_fattr_to_inode(*inode, fattr, false);
+ }
+ 
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+@@ -402,7 +404,7 @@ cifs_get_file_info_unix(struct file *filp)
+ 	} else
+ 		goto cifs_gfiunix_out;
+ 
+-	rc = cifs_fattr_to_inode(inode, &fattr);
++	rc = cifs_fattr_to_inode(inode, &fattr, false);
+ 
+ cifs_gfiunix_out:
+ 	free_xid(xid);
+@@ -927,7 +929,7 @@ cifs_get_file_info(struct file *filp)
+ 	fattr.cf_uniqueid = CIFS_I(inode)->uniqueid;
+ 	fattr.cf_flags |= CIFS_FATTR_NEED_REVAL;
+ 	/* if filetype is different, return error */
+-	rc = cifs_fattr_to_inode(inode, &fattr);
++	rc = cifs_fattr_to_inode(inode, &fattr, false);
+ cgfi_exit:
+ 	cifs_free_open_info(&data);
+ 	free_xid(xid);
+@@ -1103,6 +1105,7 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ 
+ 	cifs_open_info_to_fattr(fattr, data, sb);
+ out:
++	fattr->cf_cifstag = data->reparse.tag;
+ 	free_rsp_buf(rsp_buftype, rsp_iov.iov_base);
+ 	return rc;
+ }
+@@ -1465,7 +1468,7 @@ cifs_iget(struct super_block *sb, struct cifs_fattr *fattr)
+ 		}
+ 
+ 		/* can't fail - see cifs_find_inode() */
+-		cifs_fattr_to_inode(inode, fattr);
++		cifs_fattr_to_inode(inode, fattr, false);
+ 		if (sb->s_flags & SB_NOATIME)
+ 			inode->i_flags |= S_NOATIME | S_NOCMTIME;
+ 		if (inode->i_state & I_NEW) {
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index e23cd216bffbe..56033e4e4bae9 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -55,6 +55,23 @@ static inline void dump_cifs_file_struct(struct file *file, char *label)
+ }
+ #endif /* DEBUG2 */
+ 
++/*
++ * Match a reparse point inode if reparse tag and ctime haven't changed.
++ *
++ * Windows Server updates ctime of reparse points when their data have changed.
++ * The server doesn't allow changing reparse tags from existing reparse points,
++ * though it's worth checking.
++ */
++static inline bool reparse_inode_match(struct inode *inode,
++				       struct cifs_fattr *fattr)
++{
++	struct timespec64 ctime = inode_get_ctime(inode);
++
++	return (CIFS_I(inode)->cifsAttrs & ATTR_REPARSE) &&
++		CIFS_I(inode)->reparse_tag == fattr->cf_cifstag &&
++		timespec64_equal(&ctime, &fattr->cf_ctime);
++}
++
+ /*
+  * Attempt to preload the dcache with the results from the FIND_FIRST/NEXT
+  *
+@@ -71,6 +88,7 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ 	struct super_block *sb = parent->d_sb;
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ 	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++	int rc;
+ 
+ 	cifs_dbg(FYI, "%s: for %s\n", __func__, name->name);
+ 
+@@ -82,9 +100,11 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ 		 * We'll end up doing an on the wire call either way and
+ 		 * this spares us an invalidation.
+ 		 */
+-		if (fattr->cf_flags & CIFS_FATTR_NEED_REVAL)
+-			return;
+ retry:
++		if ((fattr->cf_cifsattrs & ATTR_REPARSE) ||
++		    (fattr->cf_flags & CIFS_FATTR_NEED_REVAL))
++			return;
++
+ 		dentry = d_alloc_parallel(parent, name, &wq);
+ 	}
+ 	if (IS_ERR(dentry))
+@@ -104,12 +124,34 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ 			if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM))
+ 				fattr->cf_uniqueid = CIFS_I(inode)->uniqueid;
+ 
+-			/* update inode in place
+-			 * if both i_ino and i_mode didn't change */
+-			if (CIFS_I(inode)->uniqueid == fattr->cf_uniqueid &&
+-			    cifs_fattr_to_inode(inode, fattr) == 0) {
+-				dput(dentry);
+-				return;
++			/*
++			 * Update inode in place if both i_ino and i_mode didn't
++			 * change.
++			 */
++			if (CIFS_I(inode)->uniqueid == fattr->cf_uniqueid) {
++				/*
++				 * Query dir responses don't provide enough
++				 * information about reparse points other than
++				 * their reparse tags.  Save an invalidation by
++				 * not clobbering the existing mode, size and
++				 * symlink target (if any) when reparse tag and
++				 * ctime haven't changed.
++				 */
++				rc = 0;
++				if (fattr->cf_cifsattrs & ATTR_REPARSE) {
++					if (likely(reparse_inode_match(inode, fattr))) {
++						fattr->cf_mode = inode->i_mode;
++						fattr->cf_eof = CIFS_I(inode)->server_eof;
++						fattr->cf_symlink_target = NULL;
++					} else {
++						CIFS_I(inode)->time = 0;
++						rc = -ESTALE;
++					}
++				}
++				if (!rc && !cifs_fattr_to_inode(inode, fattr, true)) {
++					dput(dentry);
++					return;
++				}
+ 			}
+ 		}
+ 		d_invalidate(dentry);
+@@ -127,29 +169,6 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ 	dput(dentry);
+ }
+ 
+-static bool reparse_file_needs_reval(const struct cifs_fattr *fattr)
+-{
+-	if (!(fattr->cf_cifsattrs & ATTR_REPARSE))
+-		return false;
+-	/*
+-	 * The DFS tags should be only intepreted by server side as per
+-	 * MS-FSCC 2.1.2.1, but let's include them anyway.
+-	 *
+-	 * Besides, if cf_cifstag is unset (0), then we still need it to be
+-	 * revalidated to know exactly what reparse point it is.
+-	 */
+-	switch (fattr->cf_cifstag) {
+-	case IO_REPARSE_TAG_DFS:
+-	case IO_REPARSE_TAG_DFSR:
+-	case IO_REPARSE_TAG_SYMLINK:
+-	case IO_REPARSE_TAG_NFS:
+-	case IO_REPARSE_TAG_MOUNT_POINT:
+-	case 0:
+-		return true;
+-	}
+-	return false;
+-}
+-
+ static void
+ cifs_fill_common_info(struct cifs_fattr *fattr, struct cifs_sb_info *cifs_sb)
+ {
+@@ -181,14 +200,6 @@ cifs_fill_common_info(struct cifs_fattr *fattr, struct cifs_sb_info *cifs_sb)
+ 	}
+ 
+ out_reparse:
+-	/*
+-	 * We need to revalidate it further to make a decision about whether it
+-	 * is a symbolic link, DFS referral or a reparse point with a direct
+-	 * access like junctions, deduplicated files, NFS symlinks.
+-	 */
+-	if (reparse_file_needs_reval(fattr))
+-		fattr->cf_flags |= CIFS_FATTR_NEED_REVAL;
+-
+ 	/* non-unix readdir doesn't provide nlink */
+ 	fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK;
+ 
+@@ -269,9 +280,6 @@ cifs_posix_to_fattr(struct cifs_fattr *fattr, struct smb2_posix_info *info,
+ 		fattr->cf_dtype = DT_REG;
+ 	}
+ 
+-	if (reparse_file_needs_reval(fattr))
+-		fattr->cf_flags |= CIFS_FATTR_NEED_REVAL;
+-
+ 	sid_to_id(cifs_sb, &parsed.owner, fattr, SIDOWNER);
+ 	sid_to_id(cifs_sb, &parsed.group, fattr, SIDGROUP);
+ }
+@@ -333,38 +341,6 @@ cifs_std_info_to_fattr(struct cifs_fattr *fattr, FIND_FILE_STANDARD_INFO *info,
+ 	cifs_fill_common_info(fattr, cifs_sb);
+ }
+ 
+-/* BB eventually need to add the following helper function to
+-      resolve NT_STATUS_STOPPED_ON_SYMLINK return code when
+-      we try to do FindFirst on (NTFS) directory symlinks */
+-/*
+-int get_symlink_reparse_path(char *full_path, struct cifs_sb_info *cifs_sb,
+-			     unsigned int xid)
+-{
+-	__u16 fid;
+-	int len;
+-	int oplock = 0;
+-	int rc;
+-	struct cifs_tcon *ptcon = cifs_sb_tcon(cifs_sb);
+-	char *tmpbuffer;
+-
+-	rc = CIFSSMBOpen(xid, ptcon, full_path, FILE_OPEN, GENERIC_READ,
+-			OPEN_REPARSE_POINT, &fid, &oplock, NULL,
+-			cifs_sb->local_nls,
+-			cifs_remap(cifs_sb);
+-	if (!rc) {
+-		tmpbuffer = kmalloc(maxpath);
+-		rc = CIFSSMBQueryReparseLinkInfo(xid, ptcon, full_path,
+-				tmpbuffer,
+-				maxpath -1,
+-				fid,
+-				cifs_sb->local_nls);
+-		if (CIFSSMBClose(xid, ptcon, fid)) {
+-			cifs_dbg(FYI, "Error closing temporary reparsepoint open\n");
+-		}
+-	}
+-}
+- */
+-
+ static int
+ _initiate_cifs_search(const unsigned int xid, struct file *file,
+ 		     const char *full_path)
+@@ -433,13 +409,10 @@ _initiate_cifs_search(const unsigned int xid, struct file *file,
+ 					  &cifsFile->fid, search_flags,
+ 					  &cifsFile->srch_inf);
+ 
+-	if (rc == 0)
++	if (rc == 0) {
+ 		cifsFile->invalidHandle = false;
+-	/* BB add following call to handle readdir on new NTFS symlink errors
+-	else if STATUS_STOPPED_ON_SYMLINK
+-		call get_symlink_reparse_path and retry with new path */
+-	else if ((rc == -EOPNOTSUPP) &&
+-		(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {
++	} else if ((rc == -EOPNOTSUPP) &&
++		   (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {
+ 		cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;
+ 		goto ffirst_retry;
+ 	}
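+
+[Editor's note: the readdir.c changes replace the old
+reparse_file_needs_reval() blanket revalidation with a narrower cache check.
+Query-directory responses carry only the reparse tag, so when both the tag
+and ctime match the cached inode, the existing mode, size and symlink target
+are preserved; on any mismatch the inode is forced stale instead. Condensed,
+with commentary:
+
+    rc = 0;
+    if (fattr->cf_cifsattrs & ATTR_REPARSE) {
+        if (reparse_inode_match(inode, fattr)) {
+            fattr->cf_mode = inode->i_mode;           /* keep cached */
+            fattr->cf_eof = CIFS_I(inode)->server_eof;
+            fattr->cf_symlink_target = NULL;
+        } else {
+            CIFS_I(inode)->time = 0;                  /* force reval */
+            rc = -ESTALE;
+        }
+    }
+]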
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index 94c5d50aa3474..5de32640f0265 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -230,7 +230,7 @@ int cifs_try_adding_channels(struct cifs_ses *ses)
+ 		spin_lock(&ses->iface_lock);
+ 		if (!ses->iface_count) {
+ 			spin_unlock(&ses->iface_lock);
+-			cifs_dbg(VFS, "server %s does not advertise interfaces\n",
++			cifs_dbg(ONCE, "server %s does not advertise interfaces\n",
+ 				      ses->server->hostname);
+ 			break;
+ 		}
+@@ -361,10 +361,9 @@ cifs_disable_secondary_channels(struct cifs_ses *ses)
+ 
+ /*
+  * update the iface for the channel if necessary.
+- * will return 0 when iface is updated, 1 if removed, 2 otherwise
+  * Must be called with chan_lock held.
+  */
+-int
++void
+ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ {
+ 	unsigned int chan_index;
+@@ -373,20 +372,19 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	struct cifs_server_iface *old_iface = NULL;
+ 	struct cifs_server_iface *last_iface = NULL;
+ 	struct sockaddr_storage ss;
+-	int rc = 0;
+ 
+ 	spin_lock(&ses->chan_lock);
+ 	chan_index = cifs_ses_get_chan_index(ses, server);
+ 	if (chan_index == CIFS_INVAL_CHAN_INDEX) {
+ 		spin_unlock(&ses->chan_lock);
+-		return 0;
++		return;
+ 	}
+ 
+ 	if (ses->chans[chan_index].iface) {
+ 		old_iface = ses->chans[chan_index].iface;
+ 		if (old_iface->is_active) {
+ 			spin_unlock(&ses->chan_lock);
+-			return 1;
++			return;
+ 		}
+ 	}
+ 	spin_unlock(&ses->chan_lock);
+@@ -398,8 +396,8 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	spin_lock(&ses->iface_lock);
+ 	if (!ses->iface_count) {
+ 		spin_unlock(&ses->iface_lock);
+-		cifs_dbg(VFS, "server %s does not advertise interfaces\n", ses->server->hostname);
+-		return 0;
++		cifs_dbg(ONCE, "server %s does not advertise interfaces\n", ses->server->hostname);
++		return;
+ 	}
+ 
+ 	last_iface = list_last_entry(&ses->iface_list, struct cifs_server_iface,
+@@ -439,7 +437,6 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	}
+ 
+ 	if (list_entry_is_head(iface, &ses->iface_list, iface_head)) {
+-		rc = 1;
+ 		iface = NULL;
+ 		cifs_dbg(FYI, "unable to find a suitable iface\n");
+ 	}
+@@ -454,7 +451,7 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 		}
+ 
+ 		spin_unlock(&ses->iface_lock);
+-		return 0;
++		return;
+ 	}
+ 
+ 	/* now drop the ref to the current iface */
+@@ -472,28 +469,24 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 		kref_put(&old_iface->refcount, release_iface);
+ 	} else if (!chan_index) {
+ 		/* special case: update interface for primary channel */
+-		if (iface) {
+-			cifs_dbg(FYI, "referencing primary channel iface: %pIS\n",
+-				 &iface->sockaddr);
+-			iface->num_channels++;
+-			iface->weight_fulfilled++;
+-		}
++		cifs_dbg(FYI, "referencing primary channel iface: %pIS\n",
++			 &iface->sockaddr);
++		iface->num_channels++;
++		iface->weight_fulfilled++;
+ 	}
+ 	spin_unlock(&ses->iface_lock);
+ 
+-	if (iface) {
+-		spin_lock(&ses->chan_lock);
+-		chan_index = cifs_ses_get_chan_index(ses, server);
+-		if (chan_index == CIFS_INVAL_CHAN_INDEX) {
+-			spin_unlock(&ses->chan_lock);
+-			return 0;
+-		}
+-
+-		ses->chans[chan_index].iface = iface;
++	spin_lock(&ses->chan_lock);
++	chan_index = cifs_ses_get_chan_index(ses, server);
++	if (chan_index == CIFS_INVAL_CHAN_INDEX) {
+ 		spin_unlock(&ses->chan_lock);
++		return;
+ 	}
+ 
+-	return rc;
++	ses->chans[chan_index].iface = iface;
++	spin_unlock(&ses->chan_lock);
++
++	return;
+ }
+ 
+ /*
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index ba734395b0360..4852afe3929be 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -5429,6 +5429,7 @@ struct smb_version_operations smb30_operations = {
+ 	.tree_connect = SMB2_tcon,
+ 	.tree_disconnect = SMB2_tdis,
+ 	.qfs_tcon = smb3_qfs_tcon,
++	.query_server_interfaces = SMB3_request_interfaces,
+ 	.is_path_accessible = smb2_is_path_accessible,
+ 	.can_echo = smb2_can_echo,
+ 	.echo = SMB2_echo,
+@@ -5543,6 +5544,7 @@ struct smb_version_operations smb311_operations = {
+ 	.tree_connect = SMB2_tcon,
+ 	.tree_disconnect = SMB2_tdis,
+ 	.qfs_tcon = smb3_qfs_tcon,
++	.query_server_interfaces = SMB3_request_interfaces,
+ 	.is_path_accessible = smb2_is_path_accessible,
+ 	.can_echo = smb2_can_echo,
+ 	.echo = SMB2_echo,
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 9d34a55fdb5e4..4d7d0bdf7a472 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -409,14 +409,15 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ 	spin_unlock(&ses->ses_lock);
+ 
+ 	if (!rc &&
+-	    (server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL)) {
++	    (server->capabilities & SMB2_GLOBAL_CAP_MULTI_CHANNEL) &&
++	    server->ops->query_server_interfaces) {
+ 		mutex_unlock(&ses->session_mutex);
+ 
+ 		/*
+ 		 * query server network interfaces, in case they change
+ 		 */
+ 		xid = get_xid();
+-		rc = SMB3_request_interfaces(xid, tcon, false);
++		rc = server->ops->query_server_interfaces(xid, tcon, false);
+ 		free_xid(xid);
+ 
+ 		if (rc == -EOPNOTSUPP && ses->chan_count > 1) {
+@@ -1536,6 +1537,11 @@ SMB2_sess_sendreceive(struct SMB2_sess_data *sess_data)
+ 			    &sess_data->buf0_type,
+ 			    CIFS_LOG_ERROR | CIFS_SESS_OP, &rsp_iov);
+ 	cifs_small_buf_release(sess_data->iov[0].iov_base);
++	if (rc == 0)
++		sess_data->ses->expired_pwd = false;
++	else if ((rc == -EACCES) || (rc == -EKEYEXPIRED) || (rc == -EKEYREVOKED))
++		sess_data->ses->expired_pwd = true;
++
+ 	memcpy(&sess_data->iov[0], &rsp_iov, sizeof(struct kvec));
+ 
+ 	return rc;
+diff --git a/fs/smb/server/smb2misc.c b/fs/smb/server/smb2misc.c
+index 03dded29a9804..727cb49926ee5 100644
+--- a/fs/smb/server/smb2misc.c
++++ b/fs/smb/server/smb2misc.c
+@@ -101,13 +101,17 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ 		*len = le16_to_cpu(((struct smb2_sess_setup_req *)hdr)->SecurityBufferLength);
+ 		break;
+ 	case SMB2_TREE_CONNECT:
+-		*off = le16_to_cpu(((struct smb2_tree_connect_req *)hdr)->PathOffset);
++		*off = max_t(unsigned short int,
++			     le16_to_cpu(((struct smb2_tree_connect_req *)hdr)->PathOffset),
++			     offsetof(struct smb2_tree_connect_req, Buffer));
+ 		*len = le16_to_cpu(((struct smb2_tree_connect_req *)hdr)->PathLength);
+ 		break;
+ 	case SMB2_CREATE:
+ 	{
+ 		unsigned short int name_off =
+-			le16_to_cpu(((struct smb2_create_req *)hdr)->NameOffset);
++			max_t(unsigned short int,
++			      le16_to_cpu(((struct smb2_create_req *)hdr)->NameOffset),
++			      offsetof(struct smb2_create_req, Buffer));
+ 		unsigned short int name_len =
+ 			le16_to_cpu(((struct smb2_create_req *)hdr)->NameLength);
+ 
+@@ -128,11 +132,15 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ 		break;
+ 	}
+ 	case SMB2_QUERY_INFO:
+-		*off = le16_to_cpu(((struct smb2_query_info_req *)hdr)->InputBufferOffset);
++		*off = max_t(unsigned int,
++			     le16_to_cpu(((struct smb2_query_info_req *)hdr)->InputBufferOffset),
++			     offsetof(struct smb2_query_info_req, Buffer));
+ 		*len = le32_to_cpu(((struct smb2_query_info_req *)hdr)->InputBufferLength);
+ 		break;
+ 	case SMB2_SET_INFO:
+-		*off = le16_to_cpu(((struct smb2_set_info_req *)hdr)->BufferOffset);
++		*off = max_t(unsigned int,
++			     le16_to_cpu(((struct smb2_set_info_req *)hdr)->BufferOffset),
++			     offsetof(struct smb2_set_info_req, Buffer));
+ 		*len = le32_to_cpu(((struct smb2_set_info_req *)hdr)->BufferLength);
+ 		break;
+ 	case SMB2_READ:
+@@ -142,7 +150,7 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ 	case SMB2_WRITE:
+ 		if (((struct smb2_write_req *)hdr)->DataOffset ||
+ 		    ((struct smb2_write_req *)hdr)->Length) {
+-			*off = max_t(unsigned int,
++			*off = max_t(unsigned short int,
+ 				     le16_to_cpu(((struct smb2_write_req *)hdr)->DataOffset),
+ 				     offsetof(struct smb2_write_req, Buffer));
+ 			*len = le32_to_cpu(((struct smb2_write_req *)hdr)->Length);
+@@ -153,7 +161,9 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ 		*len = le16_to_cpu(((struct smb2_write_req *)hdr)->WriteChannelInfoLength);
+ 		break;
+ 	case SMB2_QUERY_DIRECTORY:
+-		*off = le16_to_cpu(((struct smb2_query_directory_req *)hdr)->FileNameOffset);
++		*off = max_t(unsigned short int,
++			     le16_to_cpu(((struct smb2_query_directory_req *)hdr)->FileNameOffset),
++			     offsetof(struct smb2_query_directory_req, Buffer));
+ 		*len = le16_to_cpu(((struct smb2_query_directory_req *)hdr)->FileNameLength);
+ 		break;
+ 	case SMB2_LOCK:
+@@ -168,7 +178,9 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ 		break;
+ 	}
+ 	case SMB2_IOCTL:
+-		*off = le32_to_cpu(((struct smb2_ioctl_req *)hdr)->InputOffset);
++		*off = max_t(unsigned int,
++			     le32_to_cpu(((struct smb2_ioctl_req *)hdr)->InputOffset),
++			     offsetof(struct smb2_ioctl_req, Buffer));
+ 		*len = le32_to_cpu(((struct smb2_ioctl_req *)hdr)->InputCount);
+ 		break;
+ 	default:
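+
+[Editor's note: every hunk in smb2misc.c applies the same hardening. The
+buffer offset in an SMB2 request is client-controlled, and a value smaller
+than the fixed part of the request would let the length check pass while
+the payload pointer lands inside the header, or even before the frame.
+Clamping with max_t() to offsetof(req, Buffer) pins the minimum. Standalone,
+the idea is:
+
+    #include <stddef.h>
+
+    struct demo_req {
+        unsigned char  fixed[64];     /* stand-in for the SMB2 header
+                                       * plus the fixed request body */
+        unsigned short name_offset;   /* client-controlled */
+        unsigned char  Buffer[];      /* variable part starts here */
+    };
+
+    /* Never trust an offset that points before the variable part. */
+    static size_t clamp_offset(size_t client_off)
+    {
+        size_t min_off = offsetof(struct demo_req, Buffer);
+
+        return client_off > min_off ? client_off : min_off; /* max_t() */
+    }
+]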
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 0c97d3c860726..88db6e207e0ee 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1951,7 +1951,7 @@ int smb2_tree_connect(struct ksmbd_work *work)
+ 
+ 	WORK_BUFFERS(work, req, rsp);
+ 
+-	treename = smb_strndup_from_utf16(req->Buffer,
++	treename = smb_strndup_from_utf16((char *)req + le16_to_cpu(req->PathOffset),
+ 					  le16_to_cpu(req->PathLength), true,
+ 					  conn->local_nls);
+ 	if (IS_ERR(treename)) {
+@@ -2704,7 +2704,7 @@ int smb2_open(struct ksmbd_work *work)
+ 			goto err_out2;
+ 		}
+ 
+-		name = smb2_get_name(req->Buffer,
++		name = smb2_get_name((char *)req + le16_to_cpu(req->NameOffset),
+ 				     le16_to_cpu(req->NameLength),
+ 				     work->conn->local_nls);
+ 		if (IS_ERR(name)) {
+@@ -3828,11 +3828,16 @@ static int process_query_dir_entries(struct smb2_query_dir_private *priv)
+ 		}
+ 
+ 		ksmbd_kstat.kstat = &kstat;
+-		if (priv->info_level != FILE_NAMES_INFORMATION)
+-			ksmbd_vfs_fill_dentry_attrs(priv->work,
+-						    idmap,
+-						    dent,
+-						    &ksmbd_kstat);
++		if (priv->info_level != FILE_NAMES_INFORMATION) {
++			rc = ksmbd_vfs_fill_dentry_attrs(priv->work,
++							 idmap,
++							 dent,
++							 &ksmbd_kstat);
++			if (rc) {
++				dput(dent);
++				continue;
++			}
++		}
+ 
+ 		rc = smb2_populate_readdir_entry(priv->work->conn,
+ 						 priv->info_level,
+@@ -4075,7 +4080,7 @@ int smb2_query_dir(struct ksmbd_work *work)
+ 	}
+ 
+ 	srch_flag = req->Flags;
+-	srch_ptr = smb_strndup_from_utf16(req->Buffer,
++	srch_ptr = smb_strndup_from_utf16((char *)req + le16_to_cpu(req->FileNameOffset),
+ 					  le16_to_cpu(req->FileNameLength), 1,
+ 					  conn->local_nls);
+ 	if (IS_ERR(srch_ptr)) {
+@@ -4335,7 +4340,8 @@ static int smb2_get_ea(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		    sizeof(struct smb2_ea_info_req))
+ 			return -EINVAL;
+ 
+-		ea_req = (struct smb2_ea_info_req *)req->Buffer;
++		ea_req = (struct smb2_ea_info_req *)((char *)req +
++						     le16_to_cpu(req->InputBufferOffset));
+ 	} else {
+ 		/* need to send all EAs, if no specific EA is requested*/
+ 		if (le32_to_cpu(req->Flags) & SL_RETURN_SINGLE_ENTRY)
+@@ -4480,6 +4486,7 @@ static int get_file_basic_info(struct smb2_query_info_rsp *rsp,
+ 	struct smb2_file_basic_info *basic_info;
+ 	struct kstat stat;
+ 	u64 time;
++	int ret;
+ 
+ 	if (!(fp->daccess & FILE_READ_ATTRIBUTES_LE)) {
+ 		pr_err("no right to read the attributes : 0x%x\n",
+@@ -4487,9 +4494,12 @@ static int get_file_basic_info(struct smb2_query_info_rsp *rsp,
+ 		return -EACCES;
+ 	}
+ 
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
++
+ 	basic_info = (struct smb2_file_basic_info *)rsp->Buffer;
+-	generic_fillattr(file_mnt_idmap(fp->filp), STATX_BASIC_STATS,
+-			 file_inode(fp->filp), &stat);
+ 	basic_info->CreationTime = cpu_to_le64(fp->create_time);
+ 	time = ksmbd_UnixTimeToNT(stat.atime);
+ 	basic_info->LastAccessTime = cpu_to_le64(time);
+@@ -4504,27 +4514,31 @@ static int get_file_basic_info(struct smb2_query_info_rsp *rsp,
+ 	return 0;
+ }
+ 
+-static void get_file_standard_info(struct smb2_query_info_rsp *rsp,
+-				   struct ksmbd_file *fp, void *rsp_org)
++static int get_file_standard_info(struct smb2_query_info_rsp *rsp,
++				  struct ksmbd_file *fp, void *rsp_org)
+ {
+ 	struct smb2_file_standard_info *sinfo;
+ 	unsigned int delete_pending;
+-	struct inode *inode;
+ 	struct kstat stat;
++	int ret;
+ 
+-	inode = file_inode(fp->filp);
+-	generic_fillattr(file_mnt_idmap(fp->filp), STATX_BASIC_STATS, inode, &stat);
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
+ 
+ 	sinfo = (struct smb2_file_standard_info *)rsp->Buffer;
+ 	delete_pending = ksmbd_inode_pending_delete(fp);
+ 
+-	sinfo->AllocationSize = cpu_to_le64(inode->i_blocks << 9);
++	sinfo->AllocationSize = cpu_to_le64(stat.blocks << 9);
+ 	sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size);
+ 	sinfo->NumberOfLinks = cpu_to_le32(get_nlink(&stat) - delete_pending);
+ 	sinfo->DeletePending = delete_pending;
+ 	sinfo->Directory = S_ISDIR(stat.mode) ? 1 : 0;
+ 	rsp->OutputBufferLength =
+ 		cpu_to_le32(sizeof(struct smb2_file_standard_info));
++
++	return 0;
+ }
+ 
+ static void get_file_alignment_info(struct smb2_query_info_rsp *rsp,
+@@ -4546,11 +4560,11 @@ static int get_file_all_info(struct ksmbd_work *work,
+ 	struct ksmbd_conn *conn = work->conn;
+ 	struct smb2_file_all_info *file_info;
+ 	unsigned int delete_pending;
+-	struct inode *inode;
+ 	struct kstat stat;
+ 	int conv_len;
+ 	char *filename;
+ 	u64 time;
++	int ret;
+ 
+ 	if (!(fp->daccess & FILE_READ_ATTRIBUTES_LE)) {
+ 		ksmbd_debug(SMB, "no right to read the attributes : 0x%x\n",
+@@ -4562,8 +4576,10 @@ static int get_file_all_info(struct ksmbd_work *work,
+ 	if (IS_ERR(filename))
+ 		return PTR_ERR(filename);
+ 
+-	inode = file_inode(fp->filp);
+-	generic_fillattr(file_mnt_idmap(fp->filp), STATX_BASIC_STATS, inode, &stat);
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
+ 
+ 	ksmbd_debug(SMB, "filename = %s\n", filename);
+ 	delete_pending = ksmbd_inode_pending_delete(fp);
+@@ -4579,7 +4595,7 @@ static int get_file_all_info(struct ksmbd_work *work,
+ 	file_info->Attributes = fp->f_ci->m_fattr;
+ 	file_info->Pad1 = 0;
+ 	file_info->AllocationSize =
+-		cpu_to_le64(inode->i_blocks << 9);
++		cpu_to_le64(stat.blocks << 9);
+ 	file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size);
+ 	file_info->NumberOfLinks =
+ 			cpu_to_le32(get_nlink(&stat) - delete_pending);
+@@ -4623,10 +4639,10 @@ static void get_file_alternate_info(struct ksmbd_work *work,
+ 		cpu_to_le32(sizeof(struct smb2_file_alt_name_info) + conv_len);
+ }
+ 
+-static void get_file_stream_info(struct ksmbd_work *work,
+-				 struct smb2_query_info_rsp *rsp,
+-				 struct ksmbd_file *fp,
+-				 void *rsp_org)
++static int get_file_stream_info(struct ksmbd_work *work,
++				struct smb2_query_info_rsp *rsp,
++				struct ksmbd_file *fp,
++				void *rsp_org)
+ {
+ 	struct ksmbd_conn *conn = work->conn;
+ 	struct smb2_file_stream_info *file_info;
+@@ -4637,9 +4653,13 @@ static void get_file_stream_info(struct ksmbd_work *work,
+ 	int nbytes = 0, streamlen, stream_name_len, next, idx = 0;
+ 	int buf_free_len;
+ 	struct smb2_query_info_req *req = ksmbd_req_buf_next(work);
++	int ret;
++
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
+ 
+-	generic_fillattr(file_mnt_idmap(fp->filp), STATX_BASIC_STATS,
+-			 file_inode(fp->filp), &stat);
+ 	file_info = (struct smb2_file_stream_info *)rsp->Buffer;
+ 
+ 	buf_free_len =
+@@ -4720,29 +4740,37 @@ static void get_file_stream_info(struct ksmbd_work *work,
+ 	kvfree(xattr_list);
+ 
+ 	rsp->OutputBufferLength = cpu_to_le32(nbytes);
++
++	return 0;
+ }
+ 
+-static void get_file_internal_info(struct smb2_query_info_rsp *rsp,
+-				   struct ksmbd_file *fp, void *rsp_org)
++static int get_file_internal_info(struct smb2_query_info_rsp *rsp,
++				  struct ksmbd_file *fp, void *rsp_org)
+ {
+ 	struct smb2_file_internal_info *file_info;
+ 	struct kstat stat;
++	int ret;
++
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
+ 
+-	generic_fillattr(file_mnt_idmap(fp->filp), STATX_BASIC_STATS,
+-			 file_inode(fp->filp), &stat);
+ 	file_info = (struct smb2_file_internal_info *)rsp->Buffer;
+ 	file_info->IndexNumber = cpu_to_le64(stat.ino);
+ 	rsp->OutputBufferLength =
+ 		cpu_to_le32(sizeof(struct smb2_file_internal_info));
++
++	return 0;
+ }
+ 
+ static int get_file_network_open_info(struct smb2_query_info_rsp *rsp,
+ 				      struct ksmbd_file *fp, void *rsp_org)
+ {
+ 	struct smb2_file_ntwrk_info *file_info;
+-	struct inode *inode;
+ 	struct kstat stat;
+ 	u64 time;
++	int ret;
+ 
+ 	if (!(fp->daccess & FILE_READ_ATTRIBUTES_LE)) {
+ 		pr_err("no right to read the attributes : 0x%x\n",
+@@ -4750,10 +4778,12 @@ static int get_file_network_open_info(struct smb2_query_info_rsp *rsp,
+ 		return -EACCES;
+ 	}
+ 
+-	file_info = (struct smb2_file_ntwrk_info *)rsp->Buffer;
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
+ 
+-	inode = file_inode(fp->filp);
+-	generic_fillattr(file_mnt_idmap(fp->filp), STATX_BASIC_STATS, inode, &stat);
++	file_info = (struct smb2_file_ntwrk_info *)rsp->Buffer;
+ 
+ 	file_info->CreationTime = cpu_to_le64(fp->create_time);
+ 	time = ksmbd_UnixTimeToNT(stat.atime);
+@@ -4763,8 +4793,7 @@ static int get_file_network_open_info(struct smb2_query_info_rsp *rsp,
+ 	time = ksmbd_UnixTimeToNT(stat.ctime);
+ 	file_info->ChangeTime = cpu_to_le64(time);
+ 	file_info->Attributes = fp->f_ci->m_fattr;
+-	file_info->AllocationSize =
+-		cpu_to_le64(inode->i_blocks << 9);
++	file_info->AllocationSize = cpu_to_le64(stat.blocks << 9);
+ 	file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size);
+ 	file_info->Reserved = cpu_to_le32(0);
+ 	rsp->OutputBufferLength =
+@@ -4804,14 +4833,17 @@ static void get_file_mode_info(struct smb2_query_info_rsp *rsp,
+ 		cpu_to_le32(sizeof(struct smb2_file_mode_info));
+ }
+ 
+-static void get_file_compression_info(struct smb2_query_info_rsp *rsp,
+-				      struct ksmbd_file *fp, void *rsp_org)
++static int get_file_compression_info(struct smb2_query_info_rsp *rsp,
++				     struct ksmbd_file *fp, void *rsp_org)
+ {
+ 	struct smb2_file_comp_info *file_info;
+ 	struct kstat stat;
++	int ret;
+ 
+-	generic_fillattr(file_mnt_idmap(fp->filp), STATX_BASIC_STATS,
+-			 file_inode(fp->filp), &stat);
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
+ 
+ 	file_info = (struct smb2_file_comp_info *)rsp->Buffer;
+ 	file_info->CompressedFileSize = cpu_to_le64(stat.blocks << 9);
+@@ -4823,6 +4855,8 @@ static void get_file_compression_info(struct smb2_query_info_rsp *rsp,
+ 
+ 	rsp->OutputBufferLength =
+ 		cpu_to_le32(sizeof(struct smb2_file_comp_info));
++
++	return 0;
+ }
+ 
+ static int get_file_attribute_tag_info(struct smb2_query_info_rsp *rsp,
+@@ -4844,7 +4878,7 @@ static int get_file_attribute_tag_info(struct smb2_query_info_rsp *rsp,
+ 	return 0;
+ }
+ 
+-static void find_file_posix_info(struct smb2_query_info_rsp *rsp,
++static int find_file_posix_info(struct smb2_query_info_rsp *rsp,
+ 				struct ksmbd_file *fp, void *rsp_org)
+ {
+ 	struct smb311_posix_qinfo *file_info;
+@@ -4852,24 +4886,31 @@ static void find_file_posix_info(struct smb2_query_info_rsp *rsp,
+ 	struct mnt_idmap *idmap = file_mnt_idmap(fp->filp);
+ 	vfsuid_t vfsuid = i_uid_into_vfsuid(idmap, inode);
+ 	vfsgid_t vfsgid = i_gid_into_vfsgid(idmap, inode);
++	struct kstat stat;
+ 	u64 time;
+ 	int out_buf_len = sizeof(struct smb311_posix_qinfo) + 32;
++	int ret;
++
++	ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			  AT_STATX_SYNC_AS_STAT);
++	if (ret)
++		return ret;
+ 
+ 	file_info = (struct smb311_posix_qinfo *)rsp->Buffer;
+ 	file_info->CreationTime = cpu_to_le64(fp->create_time);
+-	time = ksmbd_UnixTimeToNT(inode_get_atime(inode));
++	time = ksmbd_UnixTimeToNT(stat.atime);
+ 	file_info->LastAccessTime = cpu_to_le64(time);
+-	time = ksmbd_UnixTimeToNT(inode_get_mtime(inode));
++	time = ksmbd_UnixTimeToNT(stat.mtime);
+ 	file_info->LastWriteTime = cpu_to_le64(time);
+-	time = ksmbd_UnixTimeToNT(inode_get_ctime(inode));
++	time = ksmbd_UnixTimeToNT(stat.ctime);
+ 	file_info->ChangeTime = cpu_to_le64(time);
+ 	file_info->DosAttributes = fp->f_ci->m_fattr;
+-	file_info->Inode = cpu_to_le64(inode->i_ino);
+-	file_info->EndOfFile = cpu_to_le64(inode->i_size);
+-	file_info->AllocationSize = cpu_to_le64(inode->i_blocks << 9);
+-	file_info->HardLinks = cpu_to_le32(inode->i_nlink);
+-	file_info->Mode = cpu_to_le32(inode->i_mode & 0777);
+-	file_info->DeviceId = cpu_to_le32(inode->i_rdev);
++	file_info->Inode = cpu_to_le64(stat.ino);
++	file_info->EndOfFile = cpu_to_le64(stat.size);
++	file_info->AllocationSize = cpu_to_le64(stat.blocks << 9);
++	file_info->HardLinks = cpu_to_le32(stat.nlink);
++	file_info->Mode = cpu_to_le32(stat.mode & 0777);
++	file_info->DeviceId = cpu_to_le32(stat.rdev);
+ 
+ 	/*
+ 	 * Sids(32) contain two sids(Domain sid(16), UNIX group sid(16)).
+@@ -4882,6 +4923,8 @@ static void find_file_posix_info(struct smb2_query_info_rsp *rsp,
+ 		  SIDUNIX_GROUP, (struct smb_sid *)&file_info->Sids[16]);
+ 
+ 	rsp->OutputBufferLength = cpu_to_le32(out_buf_len);
++
++	return 0;
+ }
+ 
+ static int smb2_get_info_file(struct ksmbd_work *work,
+@@ -4930,7 +4973,7 @@ static int smb2_get_info_file(struct ksmbd_work *work,
+ 		break;
+ 
+ 	case FILE_STANDARD_INFORMATION:
+-		get_file_standard_info(rsp, fp, work->response_buf);
++		rc = get_file_standard_info(rsp, fp, work->response_buf);
+ 		break;
+ 
+ 	case FILE_ALIGNMENT_INFORMATION:
+@@ -4946,11 +4989,11 @@ static int smb2_get_info_file(struct ksmbd_work *work,
+ 		break;
+ 
+ 	case FILE_STREAM_INFORMATION:
+-		get_file_stream_info(work, rsp, fp, work->response_buf);
++		rc = get_file_stream_info(work, rsp, fp, work->response_buf);
+ 		break;
+ 
+ 	case FILE_INTERNAL_INFORMATION:
+-		get_file_internal_info(rsp, fp, work->response_buf);
++		rc = get_file_internal_info(rsp, fp, work->response_buf);
+ 		break;
+ 
+ 	case FILE_NETWORK_OPEN_INFORMATION:
+@@ -4974,7 +5017,7 @@ static int smb2_get_info_file(struct ksmbd_work *work,
+ 		break;
+ 
+ 	case FILE_COMPRESSION_INFORMATION:
+-		get_file_compression_info(rsp, fp, work->response_buf);
++		rc = get_file_compression_info(rsp, fp, work->response_buf);
+ 		break;
+ 
+ 	case FILE_ATTRIBUTE_TAG_INFORMATION:
+@@ -4985,7 +5028,7 @@ static int smb2_get_info_file(struct ksmbd_work *work,
+ 			pr_err("client doesn't negotiate with SMB3.1.1 POSIX Extensions\n");
+ 			rc = -EOPNOTSUPP;
+ 		} else {
+-			find_file_posix_info(rsp, fp, work->response_buf);
++			rc = find_file_posix_info(rsp, fp, work->response_buf);
+ 		}
+ 		break;
+ 	default:
+@@ -5398,7 +5441,6 @@ int smb2_close(struct ksmbd_work *work)
+ 	struct smb2_close_rsp *rsp;
+ 	struct ksmbd_conn *conn = work->conn;
+ 	struct ksmbd_file *fp;
+-	struct inode *inode;
+ 	u64 time;
+ 	int err = 0;
+ 
+@@ -5453,24 +5495,33 @@ int smb2_close(struct ksmbd_work *work)
+ 	rsp->Reserved = 0;
+ 
+ 	if (req->Flags == SMB2_CLOSE_FLAG_POSTQUERY_ATTRIB) {
++		struct kstat stat;
++		int ret;
++
+ 		fp = ksmbd_lookup_fd_fast(work, volatile_id);
+ 		if (!fp) {
+ 			err = -ENOENT;
+ 			goto out;
+ 		}
+ 
+-		inode = file_inode(fp->filp);
++		ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++				  AT_STATX_SYNC_AS_STAT);
++		if (ret) {
++			ksmbd_fd_put(work, fp);
++			goto out;
++		}
++
+ 		rsp->Flags = SMB2_CLOSE_FLAG_POSTQUERY_ATTRIB;
+-		rsp->AllocationSize = S_ISDIR(inode->i_mode) ? 0 :
+-			cpu_to_le64(inode->i_blocks << 9);
+-		rsp->EndOfFile = cpu_to_le64(inode->i_size);
++		rsp->AllocationSize = S_ISDIR(stat.mode) ? 0 :
++			cpu_to_le64(stat.blocks << 9);
++		rsp->EndOfFile = cpu_to_le64(stat.size);
+ 		rsp->Attributes = fp->f_ci->m_fattr;
+ 		rsp->CreationTime = cpu_to_le64(fp->create_time);
+-		time = ksmbd_UnixTimeToNT(inode_get_atime(inode));
++		time = ksmbd_UnixTimeToNT(stat.atime);
+ 		rsp->LastAccessTime = cpu_to_le64(time);
+-		time = ksmbd_UnixTimeToNT(inode_get_mtime(inode));
++		time = ksmbd_UnixTimeToNT(stat.mtime);
+ 		rsp->LastWriteTime = cpu_to_le64(time);
+-		time = ksmbd_UnixTimeToNT(inode_get_ctime(inode));
++		time = ksmbd_UnixTimeToNT(stat.ctime);
+ 		rsp->ChangeTime = cpu_to_le64(time);
+ 		ksmbd_fd_put(work, fp);
+ 	} else {
+@@ -5759,15 +5810,21 @@ static int set_file_allocation_info(struct ksmbd_work *work,
+ 
+ 	loff_t alloc_blks;
+ 	struct inode *inode;
++	struct kstat stat;
+ 	int rc;
+ 
+ 	if (!(fp->daccess & FILE_WRITE_DATA_LE))
+ 		return -EACCES;
+ 
++	rc = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
++			 AT_STATX_SYNC_AS_STAT);
++	if (rc)
++		return rc;
++
+ 	alloc_blks = (le64_to_cpu(file_alloc_info->AllocationSize) + 511) >> 9;
+ 	inode = file_inode(fp->filp);
+ 
+-	if (alloc_blks > inode->i_blocks) {
++	if (alloc_blks > stat.blocks) {
+ 		smb_break_all_levII_oplock(work, fp, 1);
+ 		rc = vfs_fallocate(fp->filp, FALLOC_FL_KEEP_SIZE, 0,
+ 				   alloc_blks * 512);
+@@ -5775,7 +5832,7 @@ static int set_file_allocation_info(struct ksmbd_work *work,
+ 			pr_err("vfs_fallocate is failed : %d\n", rc);
+ 			return rc;
+ 		}
+-	} else if (alloc_blks < inode->i_blocks) {
++	} else if (alloc_blks < stat.blocks) {
+ 		loff_t size;
+ 
+ 		/*
+@@ -5930,6 +5987,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			      struct ksmbd_share_config *share)
+ {
+ 	unsigned int buf_len = le32_to_cpu(req->BufferLength);
++	char *buffer = (char *)req + le16_to_cpu(req->BufferOffset);
+ 
+ 	switch (req->FileInfoClass) {
+ 	case FILE_BASIC_INFORMATION:
+@@ -5937,7 +5995,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		if (buf_len < sizeof(struct smb2_file_basic_info))
+ 			return -EINVAL;
+ 
+-		return set_file_basic_info(fp, (struct smb2_file_basic_info *)req->Buffer, share);
++		return set_file_basic_info(fp, (struct smb2_file_basic_info *)buffer, share);
+ 	}
+ 	case FILE_ALLOCATION_INFORMATION:
+ 	{
+@@ -5945,7 +6003,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			return -EINVAL;
+ 
+ 		return set_file_allocation_info(work, fp,
+-						(struct smb2_file_alloc_info *)req->Buffer);
++						(struct smb2_file_alloc_info *)buffer);
+ 	}
+ 	case FILE_END_OF_FILE_INFORMATION:
+ 	{
+@@ -5953,7 +6011,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			return -EINVAL;
+ 
+ 		return set_end_of_file_info(work, fp,
+-					    (struct smb2_file_eof_info *)req->Buffer);
++					    (struct smb2_file_eof_info *)buffer);
+ 	}
+ 	case FILE_RENAME_INFORMATION:
+ 	{
+@@ -5961,7 +6019,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			return -EINVAL;
+ 
+ 		return set_rename_info(work, fp,
+-				       (struct smb2_file_rename_info *)req->Buffer,
++				       (struct smb2_file_rename_info *)buffer,
+ 				       buf_len);
+ 	}
+ 	case FILE_LINK_INFORMATION:
+@@ -5970,7 +6028,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			return -EINVAL;
+ 
+ 		return smb2_create_link(work, work->tcon->share_conf,
+-					(struct smb2_file_link_info *)req->Buffer,
++					(struct smb2_file_link_info *)buffer,
+ 					buf_len, fp->filp,
+ 					work->conn->local_nls);
+ 	}
+@@ -5980,7 +6038,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 			return -EINVAL;
+ 
+ 		return set_file_disposition_info(fp,
+-						 (struct smb2_file_disposition_info *)req->Buffer);
++						 (struct smb2_file_disposition_info *)buffer);
+ 	}
+ 	case FILE_FULL_EA_INFORMATION:
+ 	{
+@@ -5993,7 +6051,7 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		if (buf_len < sizeof(struct smb2_ea_info))
+ 			return -EINVAL;
+ 
+-		return smb2_set_ea((struct smb2_ea_info *)req->Buffer,
++		return smb2_set_ea((struct smb2_ea_info *)buffer,
+ 				   buf_len, &fp->filp->f_path, true);
+ 	}
+ 	case FILE_POSITION_INFORMATION:
+@@ -6001,14 +6059,14 @@ static int smb2_set_info_file(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		if (buf_len < sizeof(struct smb2_file_pos_info))
+ 			return -EINVAL;
+ 
+-		return set_file_position_info(fp, (struct smb2_file_pos_info *)req->Buffer);
++		return set_file_position_info(fp, (struct smb2_file_pos_info *)buffer);
+ 	}
+ 	case FILE_MODE_INFORMATION:
+ 	{
+ 		if (buf_len < sizeof(struct smb2_file_mode_info))
+ 			return -EINVAL;
+ 
+-		return set_file_mode_info(fp, (struct smb2_file_mode_info *)req->Buffer);
++		return set_file_mode_info(fp, (struct smb2_file_mode_info *)buffer);
+ 	}
+ 	}
+ 
+@@ -6089,7 +6147,7 @@ int smb2_set_info(struct ksmbd_work *work)
+ 		}
+ 		rc = smb2_set_info_sec(fp,
+ 				       le32_to_cpu(req->AdditionalInformation),
+-				       req->Buffer,
++				       (char *)req + le16_to_cpu(req->BufferOffset),
+ 				       le32_to_cpu(req->BufferLength));
+ 		ksmbd_revert_fsids(work);
+ 		break;
+@@ -7535,7 +7593,7 @@ static int fsctl_pipe_transceive(struct ksmbd_work *work, u64 id,
+ 				 struct smb2_ioctl_rsp *rsp)
+ {
+ 	struct ksmbd_rpc_command *rpc_resp;
+-	char *data_buf = (char *)&req->Buffer[0];
++	char *data_buf = (char *)req + le32_to_cpu(req->InputOffset);
+ 	int nbytes = 0;
+ 
+ 	rpc_resp = ksmbd_rpc_ioctl(work->sess, id, data_buf,
+@@ -7648,6 +7706,7 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 	u64 id = KSMBD_NO_FID;
+ 	struct ksmbd_conn *conn = work->conn;
+ 	int ret = 0;
++	char *buffer;
+ 
+ 	if (work->next_smb2_rcv_hdr_off) {
+ 		req = ksmbd_req_buf_next(work);
+@@ -7670,6 +7729,8 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 		goto out;
+ 	}
+ 
++	buffer = (char *)req + le32_to_cpu(req->InputOffset);
++
+ 	cnt_code = le32_to_cpu(req->CtlCode);
+ 	ret = smb2_calc_max_out_buf_len(work, 48,
+ 					le32_to_cpu(req->MaxOutputResponse));
+@@ -7727,7 +7788,7 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 		}
+ 
+ 		ret = fsctl_validate_negotiate_info(conn,
+-			(struct validate_negotiate_info_req *)&req->Buffer[0],
++			(struct validate_negotiate_info_req *)buffer,
+ 			(struct validate_negotiate_info_rsp *)&rsp->Buffer[0],
+ 			in_buf_len);
+ 		if (ret < 0)
+@@ -7780,7 +7841,7 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 		rsp->VolatileFileId = req->VolatileFileId;
+ 		rsp->PersistentFileId = req->PersistentFileId;
+ 		fsctl_copychunk(work,
+-				(struct copychunk_ioctl_req *)&req->Buffer[0],
++				(struct copychunk_ioctl_req *)buffer,
+ 				le32_to_cpu(req->CtlCode),
+ 				le32_to_cpu(req->InputCount),
+ 				req->VolatileFileId,
+@@ -7793,8 +7854,7 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 			goto out;
+ 		}
+ 
+-		ret = fsctl_set_sparse(work, id,
+-				       (struct file_sparse *)&req->Buffer[0]);
++		ret = fsctl_set_sparse(work, id, (struct file_sparse *)buffer);
+ 		if (ret < 0)
+ 			goto out;
+ 		break;
+@@ -7817,7 +7877,7 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 		}
+ 
+ 		zero_data =
+-			(struct file_zero_data_information *)&req->Buffer[0];
++			(struct file_zero_data_information *)buffer;
+ 
+ 		off = le64_to_cpu(zero_data->FileOffset);
+ 		bfz = le64_to_cpu(zero_data->BeyondFinalZero);
+@@ -7848,7 +7908,7 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 		}
+ 
+ 		ret = fsctl_query_allocated_ranges(work, id,
+-			(struct file_allocated_range_buffer *)&req->Buffer[0],
++			(struct file_allocated_range_buffer *)buffer,
+ 			(struct file_allocated_range_buffer *)&rsp->Buffer[0],
+ 			out_buf_len /
+ 			sizeof(struct file_allocated_range_buffer), &nbytes);
+@@ -7892,7 +7952,7 @@ int smb2_ioctl(struct ksmbd_work *work)
+ 			goto out;
+ 		}
+ 
+-		dup_ext = (struct duplicate_extents_to_file *)&req->Buffer[0];
++		dup_ext = (struct duplicate_extents_to_file *)buffer;
+ 
+ 		fp_in = ksmbd_lookup_fd_slow(work, dup_ext->VolatileFileHandle,
+ 					     dup_ext->PersistentFileHandle);
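+
+[Editor's note: the recurring transformation through smb2pdu.c, and in
+smb_common.c and vfs.c below, swaps generic_fillattr(), which copies raw
+inode fields and cannot fail, for vfs_getattr(), which dispatches to the
+filesystem's own ->getattr() and reports errors; accordingly, each former
+void helper now returns int and every caller checks it. The essential call
+shape (kernel context, not standalone code):
+
+    struct kstat stat;
+    int ret;
+
+    ret = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS,
+                      AT_STATX_SYNC_AS_STAT);
+    if (ret)
+        return ret;    /* propagate instead of using stale inode fields */
+    /* ... fill the response from stat.{atime,mtime,ctime,size,blocks} ... */
+]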
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 7c98bf699772f..fcaf373cc0080 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -457,10 +457,13 @@ int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level,
+ 			}
+ 
+ 			ksmbd_kstat.kstat = &kstat;
+-			ksmbd_vfs_fill_dentry_attrs(work,
+-						    idmap,
+-						    dentry,
+-						    &ksmbd_kstat);
++			rc = ksmbd_vfs_fill_dentry_attrs(work,
++							 idmap,
++							 dentry,
++							 &ksmbd_kstat);
++			if (rc)
++				break;
++
+ 			rc = fn(conn, info_level, d_info, &ksmbd_kstat);
+ 			if (rc)
+ 				break;
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 4277750a6da1b..a8936aba7710e 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -1669,11 +1669,19 @@ int ksmbd_vfs_fill_dentry_attrs(struct ksmbd_work *work,
+ 				struct dentry *dentry,
+ 				struct ksmbd_kstat *ksmbd_kstat)
+ {
++	struct ksmbd_share_config *share_conf = work->tcon->share_conf;
+ 	u64 time;
+ 	int rc;
++	struct path path = {
++		.mnt = share_conf->vfs_path.mnt,
++		.dentry = dentry,
++	};
+ 
+-	generic_fillattr(idmap, STATX_BASIC_STATS, d_inode(dentry),
+-			 ksmbd_kstat->kstat);
++	rc = vfs_getattr(&path, ksmbd_kstat->kstat,
++			 STATX_BASIC_STATS | STATX_BTIME,
++			 AT_STATX_SYNC_AS_STAT);
++	if (rc)
++		return rc;
+ 
+ 	time = ksmbd_UnixTimeToNT(ksmbd_kstat->kstat->ctime);
+ 	ksmbd_kstat->create_time = time;
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 2d2b39f843ce9..abf4a77584cf4 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -261,9 +261,6 @@ static int write_begin_slow(struct address_space *mapping,
+ 				return err;
+ 			}
+ 		}
+-
+-		SetPageUptodate(page);
+-		ClearPageError(page);
+ 	}
+ 
+ 	if (PagePrivate(page))
+@@ -462,9 +459,6 @@ static int ubifs_write_begin(struct file *file, struct address_space *mapping,
+ 				return err;
+ 			}
+ 		}
+-
+-		SetPageUptodate(page);
+-		ClearPageError(page);
+ 	}
+ 
+ 	err = allocate_budget(c, page, ui, appending);
+@@ -474,10 +468,8 @@ static int ubifs_write_begin(struct file *file, struct address_space *mapping,
+ 		 * If we skipped reading the page because we were going to
+ 		 * write all of it, then it is not up to date.
+ 		 */
+-		if (skipped_read) {
++		if (skipped_read)
+ 			ClearPageChecked(page);
+-			ClearPageUptodate(page);
+-		}
+ 		/*
+ 		 * Budgeting failed which means it would have to force
+ 		 * write-back but didn't, because we set the @fast flag in the
+@@ -568,6 +560,9 @@ static int ubifs_write_end(struct file *file, struct address_space *mapping,
+ 		goto out;
+ 	}
+ 
++	if (len == PAGE_SIZE)
++		SetPageUptodate(page);
++
+ 	if (!PagePrivate(page)) {
+ 		attach_page_private(page, (void *)1);
+ 		atomic_long_inc(&c->dirty_pg_cnt);
+diff --git a/include/drm/drm_bridge.h b/include/drm/drm_bridge.h
+index 9ef461aa9b9e2..99aea536013ef 100644
+--- a/include/drm/drm_bridge.h
++++ b/include/drm/drm_bridge.h
+@@ -557,6 +557,37 @@ struct drm_bridge_funcs {
+ 	int (*get_modes)(struct drm_bridge *bridge,
+ 			 struct drm_connector *connector);
+ 
++	/**
++	 * @edid_read:
++	 *
++	 * Read the EDID data of the connected display.
++	 *
++	 * The @edid_read callback is the preferred way of reporting mode
++	 * information for a display connected to the bridge output. Bridges
++	 * that support reading EDID shall implement this callback and leave
++	 * the @get_modes callback unimplemented.
++	 *
++	 * The caller of this operation shall first verify the output
++	 * connection status and refrain from reading EDID from a disconnected
++	 * output.
++	 *
++	 * This callback is optional. Bridges that implement it shall set the
++	 * DRM_BRIDGE_OP_EDID flag in their &drm_bridge->ops.
++	 *
++	 * The connector parameter shall be used for the sole purpose of EDID
++	 * retrieval, and shall not be stored internally by bridge drivers for
++	 * future usage.
++	 *
++	 * RETURNS:
++	 *
++	 * An edid structure newly allocated with drm_edid_alloc() or returned
++	 * from the drm_edid_read() family of functions on success, or NULL
++	 * otherwise. The caller is responsible for freeing the returned edid
++	 * structure with drm_edid_free().
++	 */
++	const struct drm_edid *(*edid_read)(struct drm_bridge *bridge,
++					    struct drm_connector *connector);
++
+ 	/**
+ 	 * @get_edid:
+ 	 *
+@@ -888,6 +919,8 @@ drm_atomic_helper_bridge_propagate_bus_fmt(struct drm_bridge *bridge,
+ enum drm_connector_status drm_bridge_detect(struct drm_bridge *bridge);
+ int drm_bridge_get_modes(struct drm_bridge *bridge,
+ 			 struct drm_connector *connector);
++const struct drm_edid *drm_bridge_edid_read(struct drm_bridge *bridge,
++					    struct drm_connector *connector);
+ struct edid *drm_bridge_get_edid(struct drm_bridge *bridge,
+ 				 struct drm_connector *connector);
+ void drm_bridge_hpd_enable(struct drm_bridge *bridge,
+diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
+index e3c3ac6159094..159213786e6e1 100644
+--- a/include/drm/drm_modeset_helper_vtables.h
++++ b/include/drm/drm_modeset_helper_vtables.h
+@@ -898,7 +898,8 @@ struct drm_connector_helper_funcs {
+ 	 *
+ 	 * RETURNS:
+ 	 *
+-	 * The number of modes added by calling drm_mode_probed_add().
++	 * The number of modes added by calling drm_mode_probed_add(). Return 0
++	 * on failures (no modes) instead of negative error codes.
+ 	 */
+ 	int (*get_modes)(struct drm_connector *connector);
+ 
+diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
+index a4eff85b1f449..2b9d856ff388d 100644
+--- a/include/drm/ttm/ttm_tt.h
++++ b/include/drm/ttm/ttm_tt.h
+@@ -79,6 +79,12 @@ struct ttm_tt {
+ 	 *   page_flags = TTM_TT_FLAG_EXTERNAL |
+ 	 *		  TTM_TT_FLAG_EXTERNAL_MAPPABLE;
+ 	 *
++	 * TTM_TT_FLAG_DECRYPTED: The mapped ttm pages should be marked as
++	 * not encrypted. The framework will try to match what the dma layer
++	 * is doing, but note that it is a little fragile because ttm page
++	 * fault handling abuses the DMA api a bit and dma_map_attrs can't be
++	 * used to assure pgprot always matches.
++	 *
+ 	 * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT USE. This is
+ 	 * set by TTM after ttm_tt_populate() has successfully returned, and is
+ 	 * then unset when TTM calls ttm_tt_unpopulate().
+@@ -87,8 +93,9 @@ struct ttm_tt {
+ #define TTM_TT_FLAG_ZERO_ALLOC		BIT(1)
+ #define TTM_TT_FLAG_EXTERNAL		BIT(2)
+ #define TTM_TT_FLAG_EXTERNAL_MAPPABLE	BIT(3)
++#define TTM_TT_FLAG_DECRYPTED		BIT(4)
+ 
+-#define TTM_TT_FLAG_PRIV_POPULATED	BIT(4)
++#define TTM_TT_FLAG_PRIV_POPULATED	BIT(5)
+ 	uint32_t page_flags;
+ 	/** @num_pages: Number of pages in the page array. */
+ 	uint32_t num_pages;
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index 1c5ca92a0555f..90f8bd1736a2c 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -1021,6 +1021,18 @@ static inline int cpufreq_table_find_index_c(struct cpufreq_policy *policy,
+ 						   efficiencies);
+ }
+ 
++static inline bool cpufreq_is_in_limits(struct cpufreq_policy *policy, int idx)
++{
++	unsigned int freq;
++
++	if (idx < 0)
++		return false;
++
++	freq = policy->freq_table[idx].frequency;
++
++	return freq == clamp_val(freq, policy->min, policy->max);
++}
++
+ static inline int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
+ 						 unsigned int target_freq,
+ 						 unsigned int relation)
+@@ -1054,7 +1066,8 @@ static inline int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
+ 		return 0;
+ 	}
+ 
+-	if (idx < 0 && efficiencies) {
++	/* Limit frequency index to honor policy->min/max */
++	if (!cpufreq_is_in_limits(policy, idx) && efficiencies) {
+ 		efficiencies = false;
+ 		goto retry;
+ 	}
+diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
+index b9d83652c097a..5620894315514 100644
+--- a/include/linux/eventfd.h
++++ b/include/linux/eventfd.h
+@@ -35,7 +35,7 @@ void eventfd_ctx_put(struct eventfd_ctx *ctx);
+ struct file *eventfd_fget(int fd);
+ struct eventfd_ctx *eventfd_ctx_fdget(int fd);
+ struct eventfd_ctx *eventfd_ctx_fileget(struct file *file);
+-__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n);
++__u64 eventfd_signal(struct eventfd_ctx *ctx);
+ __u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, __poll_t mask);
+ int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx, wait_queue_entry_t *wait,
+ 				  __u64 *cnt);
+@@ -58,7 +58,7 @@ static inline struct eventfd_ctx *eventfd_ctx_fdget(int fd)
+ 	return ERR_PTR(-ENOSYS);
+ }
+ 
+-static inline int eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
++static inline int eventfd_signal(struct eventfd_ctx *ctx)
+ {
+ 	return -ENOSYS;
+ }
+diff --git a/include/linux/gfp.h b/include/linux/gfp.h
+index de292a0071389..e2a916cf29c42 100644
+--- a/include/linux/gfp.h
++++ b/include/linux/gfp.h
+@@ -353,6 +353,15 @@ static inline bool gfp_has_io_fs(gfp_t gfp)
+ 	return (gfp & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS);
+ }
+ 
++/*
++ * Check if the gfp flags allow compaction - GFP_NOIO is a really
++ * tricky context because the migration might require IO.
++ */
++static inline bool gfp_compaction_allowed(gfp_t gfp_mask)
++{
++	return IS_ENABLED(CONFIG_COMPACTION) && (gfp_mask & __GFP_IO);
++}
++
+ extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
+ 
+ #ifdef CONFIG_CONTIG_ALLOC
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 2b00faf98017c..6ef0557b4bff8 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -164,8 +164,28 @@ struct hv_ring_buffer {
+ 	u8 buffer[];
+ } __packed;
+ 
++
++/*
++ * If the requested ring buffer size is at least 8 times the size of the
++ * header, steal space from the ring buffer for the header. Otherwise, add
++ * space for the header so that it doesn't take too much of the ring buffer
++ * space.
++ *
++ * The factor of 8 is somewhat arbitrary. The goal is to prevent adding a
++ * relatively small header (4 Kbytes on x86) to a large-ish power-of-2 ring
++ * buffer size (such as 128 Kbytes) and so end up making a nearly twice as
++ * large allocation that will be almost half wasted. As a contrasting example,
++ * on ARM64 with 64 Kbyte page size, we don't want to take 64 Kbytes for the
++ * header from a 128 Kbyte allocation, leaving only 64 Kbytes for the ring.
++ * In this latter case, we must add 64 Kbytes for the header and not worry
++ * about what's wasted.
++ */
++#define VMBUS_HEADER_ADJ(payload_sz) \
++	((payload_sz) >=  8 * sizeof(struct hv_ring_buffer) ? \
++	0 : sizeof(struct hv_ring_buffer))
++
+ /* Calculate the proper size of a ringbuffer, it must be page-aligned */
+-#define VMBUS_RING_SIZE(payload_sz) PAGE_ALIGN(sizeof(struct hv_ring_buffer) + \
++#define VMBUS_RING_SIZE(payload_sz) PAGE_ALIGN(VMBUS_HEADER_ADJ(payload_sz) + \
+ 					       (payload_sz))
+ 
+ struct hv_ring_buffer_info {
+diff --git a/include/linux/intel_rapl.h b/include/linux/intel_rapl.h
+index 33f21bd85dbf2..f3196f82fd8a1 100644
+--- a/include/linux/intel_rapl.h
++++ b/include/linux/intel_rapl.h
+@@ -178,6 +178,12 @@ struct rapl_package {
+ 	struct rapl_if_priv *priv;
+ };
+ 
++struct rapl_package *rapl_find_package_domain_cpuslocked(int id, struct rapl_if_priv *priv,
++						       bool id_is_cpu);
++struct rapl_package *rapl_add_package_cpuslocked(int id, struct rapl_if_priv *priv,
++						 bool id_is_cpu);
++void rapl_remove_package_cpuslocked(struct rapl_package *rp);
++
+ struct rapl_package *rapl_find_package_domain(int id, struct rapl_if_priv *priv, bool id_is_cpu);
+ struct rapl_package *rapl_add_package(int id, struct rapl_if_priv *priv, bool id_is_cpu);
+ void rapl_remove_package(struct rapl_package *rp);
+diff --git a/include/linux/intel_tcc.h b/include/linux/intel_tcc.h
+index f422612c28d6b..8ff8eabb4a987 100644
+--- a/include/linux/intel_tcc.h
++++ b/include/linux/intel_tcc.h
+@@ -13,6 +13,6 @@
+ int intel_tcc_get_tjmax(int cpu);
+ int intel_tcc_get_offset(int cpu);
+ int intel_tcc_set_offset(int cpu, int offset);
+-int intel_tcc_get_temp(int cpu, bool pkg);
++int intel_tcc_get_temp(int cpu, int *temp, bool pkg);
+ 
+ #endif /* __INTEL_TCC_H__ */
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 1dbb14daccfaf..1d2afcd47523d 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -107,6 +107,7 @@ enum {
+ 
+ 	ATA_DFLAG_NCQ_PRIO_ENABLED = (1 << 20), /* Priority cmds sent to dev */
+ 	ATA_DFLAG_CDL_ENABLED	= (1 << 21), /* cmd duration limits is enabled */
++	ATA_DFLAG_RESUMING	= (1 << 22),  /* Device is resuming */
+ 	ATA_DFLAG_DETACH	= (1 << 24),
+ 	ATA_DFLAG_DETACHED	= (1 << 25),
+ 	ATA_DFLAG_DA		= (1 << 26), /* device supports Device Attention */
+diff --git a/include/linux/mman.h b/include/linux/mman.h
+index dc7048824be81..bcb201ab7a412 100644
+--- a/include/linux/mman.h
++++ b/include/linux/mman.h
+@@ -162,6 +162,14 @@ calc_vm_flag_bits(unsigned long flags)
+ 
+ unsigned long vm_commit_limit(void);
+ 
++#ifndef arch_memory_deny_write_exec_supported
++static inline bool arch_memory_deny_write_exec_supported(void)
++{
++	return true;
++}
++#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
++#endif
++
+ /*
+  * Denies creating a writable executable mapping or gaining executable permissions.
+  *
+diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
+index badb4c1ac079e..5c19ead604996 100644
+--- a/include/linux/mtd/spinand.h
++++ b/include/linux/mtd/spinand.h
+@@ -169,7 +169,7 @@
+ struct spinand_op;
+ struct spinand_device;
+ 
+-#define SPINAND_MAX_ID_LEN	4
++#define SPINAND_MAX_ID_LEN	5
+ /*
+  * For erase, write and read operation, we got the following timings :
+  * tBERS (erase) 1ms to 4ms
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 279262057a925..832b7e354b4e3 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -612,6 +612,7 @@ int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio);
+ extern int  nfs_commit_inode(struct inode *, int);
+ extern struct nfs_commit_data *nfs_commitdata_alloc(void);
+ extern void nfs_commit_free(struct nfs_commit_data *data);
++void nfs_commit_begin(struct nfs_mds_commit_info *cinfo);
+ bool nfs_commit_end(struct nfs_mds_commit_info *cinfo);
+ 
+ static inline bool nfs_have_writebacks(const struct inode *inode)
+diff --git a/include/linux/oid_registry.h b/include/linux/oid_registry.h
+index 3921fbed0b286..51421fdbb0bad 100644
+--- a/include/linux/oid_registry.h
++++ b/include/linux/oid_registry.h
+@@ -17,10 +17,12 @@
+  *	  build_OID_registry.pl to generate the data for look_up_OID().
+  */
+ enum OID {
++	OID_id_dsa_with_sha1,		/* 1.2.840.10030.4.3 */
+ 	OID_id_dsa,			/* 1.2.840.10040.4.1 */
+ 	OID_id_ecPublicKey,		/* 1.2.840.10045.2.1 */
+ 	OID_id_prime192v1,		/* 1.2.840.10045.3.1.1 */
+ 	OID_id_prime256v1,		/* 1.2.840.10045.3.1.7 */
++	OID_id_ecdsa_with_sha1,		/* 1.2.840.10045.4.1 */
+ 	OID_id_ecdsa_with_sha224,	/* 1.2.840.10045.4.3.1 */
+ 	OID_id_ecdsa_with_sha256,	/* 1.2.840.10045.4.3.2 */
+ 	OID_id_ecdsa_with_sha384,	/* 1.2.840.10045.4.3.3 */
+@@ -28,6 +30,7 @@ enum OID {
+ 
+ 	/* PKCS#1 {iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-1(1)} */
+ 	OID_rsaEncryption,		/* 1.2.840.113549.1.1.1 */
++	OID_sha1WithRSAEncryption,	/* 1.2.840.113549.1.1.5 */
+ 	OID_sha256WithRSAEncryption,	/* 1.2.840.113549.1.1.11 */
+ 	OID_sha384WithRSAEncryption,	/* 1.2.840.113549.1.1.12 */
+ 	OID_sha512WithRSAEncryption,	/* 1.2.840.113549.1.1.13 */
+@@ -64,6 +67,7 @@ enum OID {
+ 	OID_PKU2U,			/* 1.3.5.1.5.2.7 */
+ 	OID_Scram,			/* 1.3.6.1.5.5.14 */
+ 	OID_certAuthInfoAccess,		/* 1.3.6.1.5.5.7.1.1 */
++	OID_sha1,			/* 1.3.14.3.2.26 */
+ 	OID_id_ansip384r1,		/* 1.3.132.0.34 */
+ 	OID_sha256,			/* 2.16.840.1.101.3.4.2.1 */
+ 	OID_sha384,			/* 2.16.840.1.101.3.4.2.2 */
+diff --git a/include/linux/phy/tegra/xusb.h b/include/linux/phy/tegra/xusb.h
+index 70998e6dd6fdc..6ca51e0080ec0 100644
+--- a/include/linux/phy/tegra/xusb.h
++++ b/include/linux/phy/tegra/xusb.h
+@@ -26,6 +26,7 @@ void tegra_phy_xusb_utmi_pad_power_down(struct phy *phy);
+ int tegra_phy_xusb_utmi_port_reset(struct phy *phy);
+ int tegra_xusb_padctl_get_usb3_companion(struct tegra_xusb_padctl *padctl,
+ 					 unsigned int port);
++int tegra_xusb_padctl_get_port_number(struct phy *phy);
+ int tegra_xusb_padctl_enable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy,
+ 					   enum usb_device_speed speed);
+ int tegra_xusb_padctl_disable_phy_sleepwalk(struct tegra_xusb_padctl *padctl, struct phy *phy);
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index 782e14f62201f..ded528d23f855 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -98,6 +98,7 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
+ 	__ring_buffer_alloc((size), (flags), &__key);	\
+ })
+ 
++typedef bool (*ring_buffer_cond_fn)(void *data);
+ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 			  struct file *filp, poll_table *poll_table, int full);
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index f43aca7f3b01e..678409c47b885 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -786,7 +786,8 @@ enum UART_TX_FLAGS {
+ 	if (pending < WAKEUP_CHARS) {					      \
+ 		uart_write_wakeup(__port);				      \
+ 									      \
+-		if (!((flags) & UART_TX_NOSTOP) && pending == 0)	      \
++		if (!((flags) & UART_TX_NOSTOP) && pending == 0 &&	      \
++		    __port->ops->tx_empty(__port))			      \
+ 			__port->ops->stop_tx(__port);			      \
+ 	}								      \
+ 									      \
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 763e9264402f0..bd06942757f5f 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3438,6 +3438,16 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
+ 
+ bool napi_pp_put_page(struct page *page, bool napi_safe);
+ 
++static inline void
++skb_page_unref(const struct sk_buff *skb, struct page *page, bool napi_safe)
++{
++#ifdef CONFIG_PAGE_POOL
++	if (skb->pp_recycle && napi_pp_put_page(page, napi_safe))
++		return;
++#endif
++	put_page(page);
++}
++
+ static inline void
+ napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
+ {
+diff --git a/include/linux/vfio.h b/include/linux/vfio.h
+index a65b2513f8cdc..5ac5f182ce0bb 100644
+--- a/include/linux/vfio.h
++++ b/include/linux/vfio.h
+@@ -349,6 +349,7 @@ struct virqfd {
+ 	wait_queue_entry_t		wait;
+ 	poll_table		pt;
+ 	struct work_struct	shutdown;
++	struct work_struct	flush_inject;
+ 	struct virqfd		**pvirqfd;
+ };
+ 
+@@ -356,5 +357,6 @@ int vfio_virqfd_enable(void *opaque, int (*handler)(void *, void *),
+ 		       void (*thread)(void *, void *), void *data,
+ 		       struct virqfd **pvirqfd, int fd);
+ void vfio_virqfd_disable(struct virqfd **pvirqfd);
++void vfio_virqfd_flush_thread(struct virqfd **pvirqfd);
+ 
+ #endif /* VFIO_H */
+diff --git a/include/media/media-entity.h b/include/media/media-entity.h
+index 2b6cd343ee9e0..4d95893c89846 100644
+--- a/include/media/media-entity.h
++++ b/include/media/media-entity.h
+@@ -225,6 +225,7 @@ enum media_pad_signal_type {
+  * @graph_obj:	Embedded structure containing the media object common data
+  * @entity:	Entity this pad belongs to
+  * @index:	Pad index in the entity pads array, numbered from 0 to n
++ * @num_links:	Number of links connected to this pad
+  * @sig_type:	Type of the signal inside a media pad
+  * @flags:	Pad flags, as defined in
+  *		:ref:`include/uapi/linux/media.h <media_header>`
+@@ -236,6 +237,7 @@ struct media_pad {
+ 	struct media_gobj graph_obj;	/* must be first field in struct */
+ 	struct media_entity *entity;
+ 	u16 index;
++	u16 num_links;
+ 	enum media_pad_signal_type sig_type;
+ 	unsigned long flags;
+ 
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 8f2c487618334..e90596e21cfc7 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -4914,6 +4914,7 @@ struct cfg80211_ops {
+  *	NL80211_REGDOM_SET_BY_DRIVER.
+  * @WIPHY_FLAG_CHANNEL_CHANGE_ON_BEACON: reg_call_notifier() is called if driver
+  *	set this flag to update channels on beacon hints.
++ * @WIPHY_FLAG_DISABLE_WEXT: disable wireless extensions for this device
+  */
+ enum wiphy_flags {
+ 	WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK		= BIT(0),
+@@ -4925,6 +4926,7 @@ enum wiphy_flags {
+ 	WIPHY_FLAG_4ADDR_STATION		= BIT(6),
+ 	WIPHY_FLAG_CONTROL_PORT_PROTOCOL	= BIT(7),
+ 	WIPHY_FLAG_IBSS_RSN			= BIT(8),
++	WIPHY_FLAG_DISABLE_WEXT			= BIT(9),
+ 	WIPHY_FLAG_MESH_AUTH			= BIT(10),
+ 	WIPHY_FLAG_SUPPORTS_EXT_KCK_32          = BIT(11),
+ 	/* use hole at 12 */
+diff --git a/include/net/cfg802154.h b/include/net/cfg802154.h
+index f79ce133e51a7..519d23941b541 100644
+--- a/include/net/cfg802154.h
++++ b/include/net/cfg802154.h
+@@ -378,6 +378,7 @@ struct ieee802154_llsec_key {
+ 
+ struct ieee802154_llsec_key_entry {
+ 	struct list_head list;
++	struct rcu_head rcu;
+ 
+ 	struct ieee802154_llsec_key_id id;
+ 	struct ieee802154_llsec_key *key;
+diff --git a/include/scsi/scsi_driver.h b/include/scsi/scsi_driver.h
+index 4ce1988b2ba01..f40915d2eceef 100644
+--- a/include/scsi/scsi_driver.h
++++ b/include/scsi/scsi_driver.h
+@@ -12,6 +12,7 @@ struct request;
+ struct scsi_driver {
+ 	struct device_driver	gendrv;
+ 
++	int (*resume)(struct device *);
+ 	void (*rescan)(struct device *);
+ 	blk_status_t (*init_command)(struct scsi_cmnd *);
+ 	void (*uninit_command)(struct scsi_cmnd *);
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index 3b907fc2ef08f..510f594b06368 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -767,6 +767,7 @@ scsi_template_proc_dir(const struct scsi_host_template *sht);
+ #define scsi_template_proc_dir(sht) NULL
+ #endif
+ extern void scsi_scan_host(struct Scsi_Host *);
++extern int scsi_resume_device(struct scsi_device *sdev);
+ extern int scsi_rescan_device(struct scsi_device *sdev);
+ extern void scsi_remove_host(struct Scsi_Host *);
+ extern struct Scsi_Host *scsi_host_get(struct Scsi_Host *);
+diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
+index 8881aea60f6f1..2445f365bce74 100644
+--- a/include/uapi/linux/virtio_config.h
++++ b/include/uapi/linux/virtio_config.h
+@@ -52,7 +52,7 @@
+  * rest are per-device feature bits.
+  */
+ #define VIRTIO_TRANSPORT_F_START	28
+-#define VIRTIO_TRANSPORT_F_END		41
++#define VIRTIO_TRANSPORT_F_END		42
+ 
+ #ifndef VIRTIO_CONFIG_NO_LEGACY
+ /* Do we get callbacks when the ring is completely used, even if we've
+@@ -114,4 +114,10 @@
+  * This feature indicates that the driver can reset a queue individually.
+  */
+ #define VIRTIO_F_RING_RESET		40
++
++/*
++ * This feature indicates that the device supports administration virtqueues.
++ */
++#define VIRTIO_F_ADMIN_VQ		41
++
+ #endif /* _UAPI_LINUX_VIRTIO_CONFIG_H */
+diff --git a/init/Kconfig b/init/Kconfig
+index bfde8189c2bec..e4ff62f583404 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -876,14 +876,14 @@ config CC_IMPLICIT_FALLTHROUGH
+ 	default "-Wimplicit-fallthrough=5" if CC_IS_GCC && $(cc-option,-Wimplicit-fallthrough=5)
+ 	default "-Wimplicit-fallthrough" if CC_IS_CLANG && $(cc-option,-Wunreachable-code-fallthrough)
+ 
+-# Currently, disable gcc-11+ array-bounds globally.
++# Currently, disable gcc-10+ array-bounds globally.
+ # It's still broken in gcc-13, so no upper bound yet.
+-config GCC11_NO_ARRAY_BOUNDS
++config GCC10_NO_ARRAY_BOUNDS
+ 	def_bool y
+ 
+ config CC_NO_ARRAY_BOUNDS
+ 	bool
+-	default y if CC_IS_GCC && GCC_VERSION >= 110000 && GCC11_NO_ARRAY_BOUNDS
++	default y if CC_IS_GCC && GCC_VERSION >= 100000 && GCC10_NO_ARRAY_BOUNDS
+ 
+ #
+ # For architectures that know their GCC __int128 support is sound
+diff --git a/init/initramfs.c b/init/initramfs.c
+index 8d0fd946cdd2b..efc477b905a48 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -673,7 +673,7 @@ static void __init populate_initrd_image(char *err)
+ 
+ 	printk(KERN_INFO "rootfs image is not initramfs (%s); looks like an initrd\n",
+ 			err);
+-	file = filp_open("/initrd.image", O_WRONLY | O_CREAT, 0700);
++	file = filp_open("/initrd.image", O_WRONLY|O_CREAT|O_LARGEFILE, 0700);
+ 	if (IS_ERR(file))
+ 		return;
+ 
+diff --git a/io_uring/futex.c b/io_uring/futex.c
+index 3c3575303c3d0..792a03df58dea 100644
+--- a/io_uring/futex.c
++++ b/io_uring/futex.c
+@@ -159,6 +159,7 @@ bool io_futex_remove_all(struct io_ring_ctx *ctx, struct task_struct *task,
+ 	hlist_for_each_entry_safe(req, tmp, &ctx->futex_list, hash_node) {
+ 		if (!io_match_task_safe(req, task, cancel_all))
+ 			continue;
++		hlist_del_init(&req->hash_node);
+ 		__io_futex_cancel(ctx, req);
+ 		found = true;
+ 	}
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 45d6e440bdc04..13a9d9fcd2ecd 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2757,14 +2757,15 @@ static void io_rings_free(struct io_ring_ctx *ctx)
+ 	if (!(ctx->flags & IORING_SETUP_NO_MMAP)) {
+ 		io_mem_free(ctx->rings);
+ 		io_mem_free(ctx->sq_sqes);
+-		ctx->rings = NULL;
+-		ctx->sq_sqes = NULL;
+ 	} else {
+ 		io_pages_free(&ctx->ring_pages, ctx->n_ring_pages);
+ 		ctx->n_ring_pages = 0;
+ 		io_pages_free(&ctx->sqe_pages, ctx->n_sqe_pages);
+ 		ctx->n_sqe_pages = 0;
+ 	}
++
++	ctx->rings = NULL;
++	ctx->sq_sqes = NULL;
+ }
+ 
+ void *io_mem_alloc(size_t size)
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 4aaeada03f1e7..5a4001139e288 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -87,7 +87,7 @@ static inline bool io_check_multishot(struct io_kiocb *req,
+ 	 * generic paths but multipoll may decide to post extra cqes.
+ 	 */
+ 	return !(issue_flags & IO_URING_F_IOWQ) ||
+-		!(issue_flags & IO_URING_F_MULTISHOT) ||
++		!(req->flags & REQ_F_APOLL_MULTISHOT) ||
+ 		!req->ctx->task_complete;
+ }
+ 
+@@ -915,7 +915,8 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 			kfree(kmsg->free_iov);
+ 		io_netmsg_recycle(req, issue_flags);
+ 		req->flags &= ~REQ_F_NEED_CLEANUP;
+-	}
++	} else if (ret == -EAGAIN)
++		return io_setup_async_msg(req, kmsg, issue_flags);
+ 
+ 	return ret;
+ }
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 58b7556f621eb..c6f4789623cb2 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -539,14 +539,6 @@ static void __io_queue_proc(struct io_poll *poll, struct io_poll_table *pt,
+ 	poll->wait.private = (void *) wqe_private;
+ 
+ 	if (poll->events & EPOLLEXCLUSIVE) {
+-		/*
+-		 * Exclusive waits may only wake a limited amount of entries
+-		 * rather than all of them, this may interfere with lazy
+-		 * wake if someone does wait(events > 1). Ensure we don't do
+-		 * lazy wake for those, as we need to process each one as they
+-		 * come in.
+-		 */
+-		req->flags |= REQ_F_POLL_NO_LAZY;
+ 		add_wait_queue_exclusive(head, &poll->wait);
+ 	} else {
+ 		add_wait_queue(head, &poll->wait);
+@@ -618,6 +610,17 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
+ 	if (issue_flags & IO_URING_F_UNLOCKED)
+ 		req->flags &= ~REQ_F_HASH_LOCKED;
+ 
++
++	/*
++	 * Exclusive waits may only wake a limited amount of entries
++	 * rather than all of them, this may interfere with lazy
++	 * wake if someone does wait(events > 1). Ensure we don't do
++	 * lazy wake for those, as we need to process each one as they
++	 * come in.
++	 */
++	if (poll->events & EPOLLEXCLUSIVE)
++		req->flags |= REQ_F_POLL_NO_LAZY;
++
+ 	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
+ 
+ 	if (unlikely(ipt->error || !ipt->nr_entries)) {
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 9394bf83e8358..70c5beb05d4e9 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -926,6 +926,8 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ 	 */
+ 	if (!file_can_poll(req->file))
+ 		return -EBADFD;
++	if (issue_flags & IO_URING_F_IOWQ)
++		return -EAGAIN;
+ 
+ 	ret = __io_read(req, issue_flags);
+ 
+@@ -940,6 +942,8 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ 		 */
+ 		if (io_kbuf_recycle(req, issue_flags))
+ 			rw->len = 0;
++		if (issue_flags & IO_URING_F_MULTISHOT)
++			return IOU_ISSUE_SKIP_COMPLETE;
+ 		return -EAGAIN;
+ 	}
+ 
+diff --git a/io_uring/waitid.c b/io_uring/waitid.c
+index 6f851978606d9..77d340666cb95 100644
+--- a/io_uring/waitid.c
++++ b/io_uring/waitid.c
+@@ -125,12 +125,6 @@ static void io_waitid_complete(struct io_kiocb *req, int ret)
+ 
+ 	lockdep_assert_held(&req->ctx->uring_lock);
+ 
+-	/*
+-	 * Did cancel find it meanwhile?
+-	 */
+-	if (hlist_unhashed(&req->hash_node))
+-		return;
+-
+ 	hlist_del_init(&req->hash_node);
+ 
+ 	ret = io_waitid_finish(req, ret);
+@@ -202,6 +196,7 @@ bool io_waitid_remove_all(struct io_ring_ctx *ctx, struct task_struct *task,
+ 	hlist_for_each_entry_safe(req, tmp, &ctx->waitid_list, hash_node) {
+ 		if (!io_match_task_safe(req, task, cancel_all))
+ 			continue;
++		hlist_del_init(&req->hash_node);
+ 		__io_waitid_cancel(ctx, req);
+ 		found = true;
+ 	}
+diff --git a/kernel/bounds.c b/kernel/bounds.c
+index b529182e8b04f..c5a9fcd2d6228 100644
+--- a/kernel/bounds.c
++++ b/kernel/bounds.c
+@@ -19,7 +19,7 @@ int main(void)
+ 	DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
+ 	DEFINE(MAX_NR_ZONES, __MAX_NR_ZONES);
+ #ifdef CONFIG_SMP
+-	DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS));
++	DEFINE(NR_CPUS_BITS, bits_per(CONFIG_NR_CPUS));
+ #endif
+ 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+ #ifdef CONFIG_LRU_GEN
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 615daaf87f1fc..ffe0e0029437b 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -2466,7 +2466,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ 		update_partition_sd_lb(cs, old_prs);
+ out_free:
+ 	free_cpumasks(NULL, &tmp);
+-	return 0;
++	return retval;
+ }
+ 
+ /**
+@@ -2502,9 +2502,6 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ 	if (cpumask_equal(cs->exclusive_cpus, trialcs->exclusive_cpus))
+ 		return 0;
+ 
+-	if (alloc_cpumasks(NULL, &tmp))
+-		return -ENOMEM;
+-
+ 	if (*buf)
+ 		compute_effective_exclusive_cpumask(trialcs, NULL);
+ 
+@@ -2519,6 +2516,9 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ 	if (retval)
+ 		return retval;
+ 
++	if (alloc_cpumasks(NULL, &tmp))
++		return -ENOMEM;
++
+ 	if (old_prs) {
+ 		if (cpumask_empty(trialcs->effective_xcpus)) {
+ 			invalidate = true;
+diff --git a/kernel/crash_core.c b/kernel/crash_core.c
+index 755d8d4ef5b08..9e337493d7f50 100644
+--- a/kernel/crash_core.c
++++ b/kernel/crash_core.c
+@@ -377,6 +377,9 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
+ 
+ 	crashk_low_res.start = low_base;
+ 	crashk_low_res.end   = low_base + low_size - 1;
++#ifdef HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY
++	insert_resource(&iomem_resource, &crashk_low_res);
++#endif
+ #endif
+ 	return 0;
+ }
+@@ -458,8 +461,12 @@ void __init reserve_crashkernel_generic(char *cmdline,
+ 
+ 	crashk_res.start = crash_base;
+ 	crashk_res.end = crash_base + crash_size - 1;
++#ifdef HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY
++	insert_resource(&iomem_resource, &crashk_res);
++#endif
+ }
+ 
++#ifndef HAVE_ARCH_ADD_CRASH_RES_TO_IOMEM_EARLY
+ static __init int insert_crashkernel_resources(void)
+ {
+ 	if (crashk_res.start < crashk_res.end)
+@@ -472,6 +479,7 @@ static __init int insert_crashkernel_resources(void)
+ }
+ early_initcall(insert_crashkernel_resources);
+ #endif
++#endif
+ 
+ int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
+ 			  void **addr, unsigned long *sz)
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 33d942615be54..9edfb3b7702bb 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -981,8 +981,7 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
+ 	dma_addr_t tbl_dma_addr =
+ 		phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
+ 	unsigned long max_slots = get_max_slots(boundary_mask);
+-	unsigned int iotlb_align_mask =
+-		dma_get_min_align_mask(dev) | alloc_align_mask;
++	unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
+ 	unsigned int nslots = nr_slots(alloc_size), stride;
+ 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
+ 	unsigned int index, slots_checked, count = 0, i;
+@@ -994,18 +993,25 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
+ 	BUG_ON(area_index >= pool->nareas);
+ 
+ 	/*
+-	 * For allocations of PAGE_SIZE or larger only look for page aligned
+-	 * allocations.
++	 * Ensure that the allocation is at least slot-aligned and update
++	 * 'iotlb_align_mask' to ignore bits that will be preserved when
++	 * offsetting into the allocation.
+ 	 */
+-	if (alloc_size >= PAGE_SIZE)
+-		iotlb_align_mask |= ~PAGE_MASK;
+-	iotlb_align_mask &= ~(IO_TLB_SIZE - 1);
++	alloc_align_mask |= (IO_TLB_SIZE - 1);
++	iotlb_align_mask &= ~alloc_align_mask;
+ 
+ 	/*
+ 	 * For mappings with an alignment requirement don't bother looping to
+ 	 * unaligned slots once we found an aligned one.
+ 	 */
+-	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
++	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
++
++	/*
++	 * For allocations of PAGE_SIZE or larger only look for page aligned
++	 * allocations.
++	 */
++	if (alloc_size >= PAGE_SIZE)
++		stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
+ 
+ 	spin_lock_irqsave(&area->lock, flags);
+ 	if (unlikely(nslots > pool->area_nslabs - area->used))
+@@ -1015,11 +1021,14 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
+ 	index = area->index;
+ 
+ 	for (slots_checked = 0; slots_checked < pool->area_nslabs; ) {
++		phys_addr_t tlb_addr;
++
+ 		slot_index = slot_base + index;
++		tlb_addr = slot_addr(tbl_dma_addr, slot_index);
+ 
+-		if (orig_addr &&
+-		    (slot_addr(tbl_dma_addr, slot_index) &
+-		     iotlb_align_mask) != (orig_addr & iotlb_align_mask)) {
++		if ((tlb_addr & alloc_align_mask) ||
++		    (orig_addr && (tlb_addr & iotlb_align_mask) !=
++				  (orig_addr & iotlb_align_mask))) {
+ 			index = wrap_area_index(pool, index + 1);
+ 			slots_checked++;
+ 			continue;
+@@ -1608,12 +1617,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
+ 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+ 	struct io_tlb_pool *pool;
+ 	phys_addr_t tlb_addr;
++	unsigned int align;
+ 	int index;
+ 
+ 	if (!mem)
+ 		return NULL;
+ 
+-	index = swiotlb_find_slots(dev, 0, size, 0, &pool);
++	align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
++	index = swiotlb_find_slots(dev, 0, size, align, &pool);
+ 	if (index == -1)
+ 		return NULL;
+ 
+diff --git a/kernel/entry/common.c b/kernel/entry/common.c
+index d7ee4bc3f2ba3..5ff4f1cd36445 100644
+--- a/kernel/entry/common.c
++++ b/kernel/entry/common.c
+@@ -77,8 +77,14 @@ static long syscall_trace_enter(struct pt_regs *regs, long syscall,
+ 	/* Either of the above might have changed the syscall number */
+ 	syscall = syscall_get_nr(current, regs);
+ 
+-	if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT))
++	if (unlikely(work & SYSCALL_WORK_SYSCALL_TRACEPOINT)) {
+ 		trace_sys_enter(regs, syscall);
++		/*
++		 * Probes or BPF hooks in the tracepoint may have changed the
++		 * system call number as well.
++		 */
++		syscall = syscall_get_nr(current, regs);
++	}
+ 
+ 	syscall_enter_audit(regs, syscall);
+ 
+diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
+index 0ea1b2970a23b..28db5b7589eb2 100644
+--- a/kernel/module/Kconfig
++++ b/kernel/module/Kconfig
+@@ -236,6 +236,10 @@ choice
+ 	  possible to load a signed module containing the algorithm to check
+ 	  the signature on that module.
+ 
++config MODULE_SIG_SHA1
++	bool "Sign modules with SHA-1"
++	select CRYPTO_SHA1
++
+ config MODULE_SIG_SHA256
+ 	bool "Sign modules with SHA-256"
+ 	select CRYPTO_SHA256
+@@ -265,6 +269,7 @@ endchoice
+ config MODULE_SIG_HASH
+ 	string
+ 	depends on MODULE_SIG || IMA_APPRAISE_MODSIG
++	default "sha1" if MODULE_SIG_SHA1
+ 	default "sha256" if MODULE_SIG_SHA256
+ 	default "sha384" if MODULE_SIG_SHA384
+ 	default "sha512" if MODULE_SIG_SHA512
+diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
+index fa3bf161d13f7..a718067deecee 100644
+--- a/kernel/power/suspend.c
++++ b/kernel/power/suspend.c
+@@ -192,6 +192,7 @@ static int __init mem_sleep_default_setup(char *str)
+ 		if (mem_sleep_labels[state] &&
+ 		    !strcmp(str, mem_sleep_labels[state])) {
+ 			mem_sleep_default = state;
++			mem_sleep_current = state;
+ 			break;
+ 		}
+ 
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 72f6a564e832f..7a835b277e98d 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -2026,6 +2026,12 @@ static int console_trylock_spinning(void)
+ 	 */
+ 	mutex_acquire(&console_lock_dep_map, 0, 1, _THIS_IP_);
+ 
++	/*
++	 * Update @console_may_schedule for trylock because the previous
++	 * owner may have been schedulable.
++	 */
++	console_may_schedule = 0;
++
+ 	return 1;
+ }
+ 
+@@ -3295,6 +3301,21 @@ static int __init keep_bootcon_setup(char *str)
+ 
+ early_param("keep_bootcon", keep_bootcon_setup);
+ 
++static int console_call_setup(struct console *newcon, char *options)
++{
++	int err;
++
++	if (!newcon->setup)
++		return 0;
++
++	/* Synchronize with possible boot console. */
++	console_lock();
++	err = newcon->setup(newcon, options);
++	console_unlock();
++
++	return err;
++}
++
+ /*
+  * This is called by register_console() to try to match
+  * the newly registered console with any of the ones selected
+@@ -3330,8 +3351,8 @@ static int try_enable_preferred_console(struct console *newcon,
+ 			if (_braille_register_console(newcon, c))
+ 				return 0;
+ 
+-			if (newcon->setup &&
+-			    (err = newcon->setup(newcon, c->options)) != 0)
++			err = console_call_setup(newcon, c->options);
++			if (err)
+ 				return err;
+ 		}
+ 		newcon->flags |= CON_ENABLED;
+@@ -3357,7 +3378,7 @@ static void try_enable_default_console(struct console *newcon)
+ 	if (newcon->index < 0)
+ 		newcon->index = 0;
+ 
+-	if (newcon->setup && newcon->setup(newcon, NULL) != 0)
++	if (console_call_setup(newcon, NULL) != 0)
+ 		return;
+ 
+ 	newcon->flags |= CON_ENABLED;
+diff --git a/kernel/sys.c b/kernel/sys.c
+index f8e543f1e38a0..8bb106a56b3a5 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -2408,8 +2408,11 @@ static inline int prctl_set_mdwe(unsigned long bits, unsigned long arg3,
+ 	if (bits & PR_MDWE_NO_INHERIT && !(bits & PR_MDWE_REFUSE_EXEC_GAIN))
+ 		return -EINVAL;
+ 
+-	/* PARISC cannot allow mdwe as it needs writable stacks */
+-	if (IS_ENABLED(CONFIG_PARISC))
++	/*
++	 * EOPNOTSUPP might be more appropriate here in principle, but
++	 * existing userspace depends on EINVAL specifically.
++	 */
++	if (!arch_memory_deny_write_exec_supported())
+ 		return -EINVAL;
+ 
+ 	current_bits = get_current_mdwe();
+diff --git a/kernel/time/posix-clock.c b/kernel/time/posix-clock.c
+index 9de66bbbb3d15..4782edcbe7b9b 100644
+--- a/kernel/time/posix-clock.c
++++ b/kernel/time/posix-clock.c
+@@ -129,15 +129,17 @@ static int posix_clock_open(struct inode *inode, struct file *fp)
+ 		goto out;
+ 	}
+ 	pccontext->clk = clk;
+-	fp->private_data = pccontext;
+-	if (clk->ops.open)
++	if (clk->ops.open) {
+ 		err = clk->ops.open(pccontext, fp->f_mode);
+-	else
+-		err = 0;
+-
+-	if (!err) {
+-		get_device(clk->dev);
++		if (err) {
++			kfree(pccontext);
++			goto out;
++		}
+ 	}
++
++	fp->private_data = pccontext;
++	get_device(clk->dev);
++	err = 0;
+ out:
+ 	up_read(&clk->rwsem);
+ 	return err;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 6fa67c297e8fa..140f8eed83da6 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -412,7 +412,6 @@ struct rb_irq_work {
+ 	struct irq_work			work;
+ 	wait_queue_head_t		waiters;
+ 	wait_queue_head_t		full_waiters;
+-	long				wait_index;
+ 	bool				waiters_pending;
+ 	bool				full_waiters_pending;
+ 	bool				wakeup_full;
+@@ -903,8 +902,19 @@ static void rb_wake_up_waiters(struct irq_work *work)
+ 
+ 	wake_up_all(&rbwork->waiters);
+ 	if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
++		/* Only cpu_buffer sets the above flags */
++		struct ring_buffer_per_cpu *cpu_buffer =
++			container_of(rbwork, struct ring_buffer_per_cpu, irq_work);
++
++		/* Called from interrupt context */
++		raw_spin_lock(&cpu_buffer->reader_lock);
+ 		rbwork->wakeup_full = false;
+ 		rbwork->full_waiters_pending = false;
++
++		/* Waking up all waiters, they will reset the shortest full */
++		cpu_buffer->shortest_full = 0;
++		raw_spin_unlock(&cpu_buffer->reader_lock);
++
+ 		wake_up_all(&rbwork->full_waiters);
+ 	}
+ }
+@@ -945,14 +955,95 @@ void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
+ 		rbwork = &cpu_buffer->irq_work;
+ 	}
+ 
+-	rbwork->wait_index++;
+-	/* make sure the waiters see the new index */
+-	smp_wmb();
+-
+ 	/* This can be called in any context */
+ 	irq_work_queue(&rbwork->work);
+ }
+ 
++static bool rb_watermark_hit(struct trace_buffer *buffer, int cpu, int full)
++{
++	struct ring_buffer_per_cpu *cpu_buffer;
++	bool ret = false;
++
++	/* Reads of all CPUs always waits for any data */
++	if (cpu == RING_BUFFER_ALL_CPUS)
++		return !ring_buffer_empty(buffer);
++
++	cpu_buffer = buffer->buffers[cpu];
++
++	if (!ring_buffer_empty_cpu(buffer, cpu)) {
++		unsigned long flags;
++		bool pagebusy;
++
++		if (!full)
++			return true;
++
++		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++		pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
++		ret = !pagebusy && full_hit(buffer, cpu, full);
++
++		if (!ret && (!cpu_buffer->shortest_full ||
++			     cpu_buffer->shortest_full > full)) {
++			cpu_buffer->shortest_full = full;
++		}
++		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++	}
++	return ret;
++}
++
++static inline bool
++rb_wait_cond(struct rb_irq_work *rbwork, struct trace_buffer *buffer,
++	     int cpu, int full, ring_buffer_cond_fn cond, void *data)
++{
++	if (rb_watermark_hit(buffer, cpu, full))
++		return true;
++
++	if (cond(data))
++		return true;
++
++	/*
++	 * The events can happen in critical sections where
++	 * checking a work queue can cause deadlocks.
++	 * After adding a task to the queue, this flag is set
++	 * only to notify events to try to wake up the queue
++	 * using irq_work.
++	 *
++	 * We don't clear it even if the buffer is no longer
++	 * empty. The flag only causes the next event to run
++	 * irq_work to do the work queue wake up. The worst
++	 * that can happen if we race with !trace_empty() is that
++	 * an event will cause an irq_work to try to wake up
++	 * an empty queue.
++	 *
++	 * There's no reason to protect this flag either, as
++	 * the work queue and irq_work logic will do the necessary
++	 * synchronization for the wake ups. The only thing
++	 * that is necessary is that the wake up happens after
++	 * a task has been queued. It's OK for spurious wake ups.
++	 */
++	if (full)
++		rbwork->full_waiters_pending = true;
++	else
++		rbwork->waiters_pending = true;
++
++	return false;
++}
++
++/*
++ * The default wait condition for ring_buffer_wait() is to just exit the
++ * wait loop the first time it is woken up.
++ */
++static bool rb_wait_once(void *data)
++{
++	long *once = data;
++
++	/* wait_event() actually calls this twice before scheduling */
++	if (*once > 1)
++		return true;
++
++	(*once)++;
++	return false;
++}
++
+ /**
+  * ring_buffer_wait - wait for input to the ring buffer
+  * @buffer: buffer to wait on
+@@ -966,101 +1057,39 @@ void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu)
+ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+-	DEFINE_WAIT(wait);
+-	struct rb_irq_work *work;
+-	long wait_index;
++	struct wait_queue_head *waitq;
++	ring_buffer_cond_fn cond;
++	struct rb_irq_work *rbwork;
++	void *data;
++	long once = 0;
+ 	int ret = 0;
+ 
++	cond = rb_wait_once;
++	data = &once;
++
+ 	/*
+ 	 * Depending on what the caller is waiting for, either any
+ 	 * data in any cpu buffer, or a specific buffer, put the
+ 	 * caller on the appropriate wait queue.
+ 	 */
+ 	if (cpu == RING_BUFFER_ALL_CPUS) {
+-		work = &buffer->irq_work;
++		rbwork = &buffer->irq_work;
+ 		/* Full only makes sense on per cpu reads */
+ 		full = 0;
+ 	} else {
+ 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 			return -ENODEV;
+ 		cpu_buffer = buffer->buffers[cpu];
+-		work = &cpu_buffer->irq_work;
+-	}
+-
+-	wait_index = READ_ONCE(work->wait_index);
+-
+-	while (true) {
+-		if (full)
+-			prepare_to_wait(&work->full_waiters, &wait, TASK_INTERRUPTIBLE);
+-		else
+-			prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE);
+-
+-		/*
+-		 * The events can happen in critical sections where
+-		 * checking a work queue can cause deadlocks.
+-		 * After adding a task to the queue, this flag is set
+-		 * only to notify events to try to wake up the queue
+-		 * using irq_work.
+-		 *
+-		 * We don't clear it even if the buffer is no longer
+-		 * empty. The flag only causes the next event to run
+-		 * irq_work to do the work queue wake up. The worse
+-		 * that can happen if we race with !trace_empty() is that
+-		 * an event will cause an irq_work to try to wake up
+-		 * an empty queue.
+-		 *
+-		 * There's no reason to protect this flag either, as
+-		 * the work queue and irq_work logic will do the necessary
+-		 * synchronization for the wake ups. The only thing
+-		 * that is necessary is that the wake up happens after
+-		 * a task has been queued. It's OK for spurious wake ups.
+-		 */
+-		if (full)
+-			work->full_waiters_pending = true;
+-		else
+-			work->waiters_pending = true;
+-
+-		if (signal_pending(current)) {
+-			ret = -EINTR;
+-			break;
+-		}
+-
+-		if (cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer))
+-			break;
+-
+-		if (cpu != RING_BUFFER_ALL_CPUS &&
+-		    !ring_buffer_empty_cpu(buffer, cpu)) {
+-			unsigned long flags;
+-			bool pagebusy;
+-			bool done;
+-
+-			if (!full)
+-				break;
+-
+-			raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+-			pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
+-			done = !pagebusy && full_hit(buffer, cpu, full);
+-
+-			if (!cpu_buffer->shortest_full ||
+-			    cpu_buffer->shortest_full > full)
+-				cpu_buffer->shortest_full = full;
+-			raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+-			if (done)
+-				break;
+-		}
+-
+-		schedule();
+-
+-		/* Make sure to see the new wait index */
+-		smp_rmb();
+-		if (wait_index != work->wait_index)
+-			break;
++		rbwork = &cpu_buffer->irq_work;
+ 	}
+ 
+ 	if (full)
+-		finish_wait(&work->full_waiters, &wait);
++		waitq = &rbwork->full_waiters;
+ 	else
+-		finish_wait(&work->waiters, &wait);
++		waitq = &rbwork->waiters;
++
++	ret = wait_event_interruptible((*waitq),
++				rb_wait_cond(rbwork, buffer, cpu, full, cond, data));
+ 
+ 	return ret;
+ }
+@@ -1084,30 +1113,51 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 			  struct file *filp, poll_table *poll_table, int full)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+-	struct rb_irq_work *work;
++	struct rb_irq_work *rbwork;
+ 
+ 	if (cpu == RING_BUFFER_ALL_CPUS) {
+-		work = &buffer->irq_work;
++		rbwork = &buffer->irq_work;
+ 		full = 0;
+ 	} else {
+ 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 			return EPOLLERR;
+ 
+ 		cpu_buffer = buffer->buffers[cpu];
+-		work = &cpu_buffer->irq_work;
++		rbwork = &cpu_buffer->irq_work;
+ 	}
+ 
+ 	if (full) {
+-		poll_wait(filp, &work->full_waiters, poll_table);
+-		work->full_waiters_pending = true;
++		unsigned long flags;
++
++		poll_wait(filp, &rbwork->full_waiters, poll_table);
++
++		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ 		if (!cpu_buffer->shortest_full ||
+ 		    cpu_buffer->shortest_full > full)
+ 			cpu_buffer->shortest_full = full;
+-	} else {
+-		poll_wait(filp, &work->waiters, poll_table);
+-		work->waiters_pending = true;
++		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++		if (full_hit(buffer, cpu, full))
++			return EPOLLIN | EPOLLRDNORM;
++		/*
++		 * Only allow full_waiters_pending update to be seen after
++		 * the shortest_full is set. If the writer sees the
++		 * full_waiters_pending flag set, it will compare the
++		 * amount in the ring buffer to shortest_full. If the amount
++		 * in the ring buffer is greater than the shortest_full
++		 * percent, it will call the irq_work handler to wake up
++		 * this list. The irq_handler will reset shortest_full
++		 * back to zero. That's done under the reader_lock, but
++		 * the below smp_mb() makes sure that the update to
++		 * full_waiters_pending doesn't leak up into the above.
++		 */
++		smp_mb();
++		rbwork->full_waiters_pending = true;
++		return 0;
+ 	}
+ 
++	poll_wait(filp, &rbwork->waiters, poll_table);
++	rbwork->waiters_pending = true;
++
+ 	/*
+ 	 * There's a tight race between setting the waiters_pending and
+ 	 * checking if the ring buffer is empty.  Once the waiters_pending bit
+@@ -1123,9 +1173,6 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 	 */
+ 	smp_mb();
+ 
+-	if (full)
+-		return full_hit(buffer, cpu, full) ? EPOLLIN | EPOLLRDNORM : 0;
+-
+ 	if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
+ 	    (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
+ 		return EPOLLIN | EPOLLRDNORM;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 3fdc57450e79e..e03960f9f4cf6 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8352,6 +8352,20 @@ tracing_buffers_read(struct file *filp, char __user *ubuf,
+ 	return size;
+ }
+ 
++static int tracing_buffers_flush(struct file *file, fl_owner_t id)
++{
++	struct ftrace_buffer_info *info = file->private_data;
++	struct trace_iterator *iter = &info->iter;
++
++	iter->wait_index++;
++	/* Make sure the waiters see the new wait_index */
++	smp_wmb();
++
++	ring_buffer_wake_waiters(iter->array_buffer->buffer, iter->cpu_file);
++
++	return 0;
++}
++
+ static int tracing_buffers_release(struct inode *inode, struct file *file)
+ {
+ 	struct ftrace_buffer_info *info = file->private_data;
+@@ -8363,12 +8377,6 @@ static int tracing_buffers_release(struct inode *inode, struct file *file)
+ 
+ 	__trace_array_put(iter->tr);
+ 
+-	iter->wait_index++;
+-	/* Make sure the waiters see the new wait_index */
+-	smp_wmb();
+-
+-	ring_buffer_wake_waiters(iter->array_buffer->buffer, iter->cpu_file);
+-
+ 	if (info->spare)
+ 		ring_buffer_free_read_page(iter->array_buffer->buffer,
+ 					   info->spare_cpu, info->spare);
+@@ -8582,6 +8590,7 @@ static const struct file_operations tracing_buffers_fops = {
+ 	.read		= tracing_buffers_read,
+ 	.poll		= tracing_buffers_poll,
+ 	.release	= tracing_buffers_release,
++	.flush		= tracing_buffers_flush,
+ 	.splice_read	= tracing_buffers_splice_read,
+ 	.unlocked_ioctl = tracing_buffers_ioctl,
+ 	.llseek		= no_llseek,
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 6f7cb619aa5e4..8f761417a9fa0 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -7109,7 +7109,7 @@ void __init workqueue_init_early(void)
+ 					      WQ_FREEZABLE, 0);
+ 	system_power_efficient_wq = alloc_workqueue("events_power_efficient",
+ 					      WQ_POWER_EFFICIENT, 0);
+-	system_freezable_power_efficient_wq = alloc_workqueue("events_freezable_power_efficient",
++	system_freezable_power_efficient_wq = alloc_workqueue("events_freezable_pwr_efficient",
+ 					      WQ_FREEZABLE | WQ_POWER_EFFICIENT,
+ 					      0);
+ 	BUG_ON(!system_wq || !system_highpri_wq || !system_long_wq ||
+diff --git a/lib/pci_iomap.c b/lib/pci_iomap.c
+index ce39ce9f3526e..2829ddb0e316b 100644
+--- a/lib/pci_iomap.c
++++ b/lib/pci_iomap.c
+@@ -170,8 +170,8 @@ void pci_iounmap(struct pci_dev *dev, void __iomem *p)
+ 
+ 	if (addr >= start && addr < start + IO_SPACE_LIMIT)
+ 		return;
+-	iounmap(p);
+ #endif
++	iounmap(p);
+ }
+ EXPORT_SYMBOL(pci_iounmap);
+ 
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 01ba298739dda..f31d18741a0d6 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -2701,16 +2701,11 @@ enum compact_result try_to_compact_pages(gfp_t gfp_mask, unsigned int order,
+ 		unsigned int alloc_flags, const struct alloc_context *ac,
+ 		enum compact_priority prio, struct page **capture)
+ {
+-	int may_perform_io = (__force int)(gfp_mask & __GFP_IO);
+ 	struct zoneref *z;
+ 	struct zone *zone;
+ 	enum compact_result rc = COMPACT_SKIPPED;
+ 
+-	/*
+-	 * Check if the GFP flags allow compaction - GFP_NOIO is really
+-	 * tricky context because the migration might require IO
+-	 */
+-	if (!may_perform_io)
++	if (!gfp_compaction_allowed(gfp_mask))
+ 		return COMPACT_SKIPPED;
+ 
+ 	trace_mm_compaction_try_to_compact_pages(order, gfp_mask, prio);
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 5be7887957c70..321349e2c9c7e 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -4150,7 +4150,23 @@ static void filemap_cachestat(struct address_space *mapping,
+ 				/* shmem file - in swap cache */
+ 				swp_entry_t swp = radix_to_swp_entry(folio);
+ 
++				/* swapin error results in poisoned entry */
++				if (non_swap_entry(swp))
++					goto resched;
++
++				/*
++				 * Getting a swap entry from the shmem
++				 * inode means we beat
++				 * shmem_unuse(). rcu_read_lock()
++				 * ensures swapoff waits for us before
++				 * freeing the swapper space. However,
++				 * we can race with swapping and
++				 * invalidation, so there might not be
++				 * a shadow in the swapcache (yet).
++				 */
+ 				shadow = get_shadow_from_swap_cache(swp);
++				if (!shadow)
++					goto resched;
+ 			}
+ #endif
+ 			if (workingset_test_recent(shadow, true, &workingset))
+diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
+index 34515a106ca57..23906e886b577 100644
+--- a/mm/kasan/kasan_test.c
++++ b/mm/kasan/kasan_test.c
+@@ -451,7 +451,8 @@ static void kmalloc_oob_16(struct kunit *test)
+ 	/* This test is specifically crafted for the generic mode. */
+ 	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);
+ 
+-	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
++	/* RELOC_HIDE to prevent gcc from warning about short alloc */
++	ptr1 = RELOC_HIDE(kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL), 0);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
+ 
+ 	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 792fb3a5ce3b9..27c9f451d40dd 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -4379,7 +4379,7 @@ static void __mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
+ 	 * only one element of the array here.
+ 	 */
+ 	for (; i >= 0 && unlikely(t->entries[i].threshold > usage); i--)
+-		eventfd_signal(t->entries[i].eventfd, 1);
++		eventfd_signal(t->entries[i].eventfd);
+ 
+ 	/* i = current_threshold + 1 */
+ 	i++;
+@@ -4391,7 +4391,7 @@ static void __mem_cgroup_threshold(struct mem_cgroup *memcg, bool swap)
+ 	 * only one element of the array here.
+ 	 */
+ 	for (; i < t->size && unlikely(t->entries[i].threshold <= usage); i++)
+-		eventfd_signal(t->entries[i].eventfd, 1);
++		eventfd_signal(t->entries[i].eventfd);
+ 
+ 	/* Update current_threshold */
+ 	t->current_threshold = i - 1;
+@@ -4431,7 +4431,7 @@ static int mem_cgroup_oom_notify_cb(struct mem_cgroup *memcg)
+ 	spin_lock(&memcg_oom_lock);
+ 
+ 	list_for_each_entry(ev, &memcg->oom_notify, list)
+-		eventfd_signal(ev->eventfd, 1);
++		eventfd_signal(ev->eventfd);
+ 
+ 	spin_unlock(&memcg_oom_lock);
+ 	return 0;
+@@ -4650,7 +4650,7 @@ static int mem_cgroup_oom_register_event(struct mem_cgroup *memcg,
+ 
+ 	/* already in OOM ? */
+ 	if (memcg->under_oom)
+-		eventfd_signal(eventfd, 1);
++		eventfd_signal(eventfd);
+ 	spin_unlock(&memcg_oom_lock);
+ 
+ 	return 0;
+@@ -4942,7 +4942,7 @@ static void memcg_event_remove(struct work_struct *work)
+ 	event->unregister_event(memcg, event->eventfd);
+ 
+ 	/* Notify userspace the event is going away. */
+-	eventfd_signal(event->eventfd, 1);
++	eventfd_signal(event->eventfd);
+ 
+ 	eventfd_ctx_put(event->eventfd);
+ 	kfree(event);
+diff --git a/mm/memtest.c b/mm/memtest.c
+index 32f3e9dda8370..c2c609c391199 100644
+--- a/mm/memtest.c
++++ b/mm/memtest.c
+@@ -51,10 +51,10 @@ static void __init memtest(u64 pattern, phys_addr_t start_phys, phys_addr_t size
+ 	last_bad = 0;
+ 
+ 	for (p = start; p < end; p++)
+-		*p = pattern;
++		WRITE_ONCE(*p, pattern);
+ 
+ 	for (p = start; p < end; p++, start_phys_aligned += incr) {
+-		if (*p == pattern)
++		if (READ_ONCE(*p) == pattern)
+ 			continue;
+ 		if (start_phys_aligned == last_bad + incr) {
+ 			last_bad += incr;
+diff --git a/mm/mmap.c b/mm/mmap.c
+index b9a43872acadb..23072589fe83b 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -954,13 +954,21 @@ static struct vm_area_struct
+ 	} else if (merge_prev) {			/* case 2 */
+ 		if (curr) {
+ 			vma_start_write(curr);
+-			err = dup_anon_vma(prev, curr, &anon_dup);
+ 			if (end == curr->vm_end) {	/* case 7 */
++				/*
++				 * can_vma_merge_after() assumed we would not be
++				 * removing prev vma, so it skipped the check
++				 * for vm_ops->close, but we are removing curr
++				 */
++				if (curr->vm_ops && curr->vm_ops->close)
++					err = -EINVAL;
+ 				remove = curr;
+ 			} else {			/* case 5 */
+ 				adjust = curr;
+ 				adj_start = (end - curr->vm_start);
+ 			}
++			if (!err)
++				err = dup_anon_vma(prev, curr, &anon_dup);
+ 		}
+ 	} else { /* merge_next */
+ 		vma_start_write(next);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 6d2a74138f456..b1d9dcdabd030 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4042,6 +4042,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 						struct alloc_context *ac)
+ {
+ 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
++	bool can_compact = gfp_compaction_allowed(gfp_mask);
+ 	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
+ 	struct page *page = NULL;
+ 	unsigned int alloc_flags;
+@@ -4112,7 +4113,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 	 * Don't try this for allocations that are allowed to ignore
+ 	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
+ 	 */
+-	if (can_direct_reclaim &&
++	if (can_direct_reclaim && can_compact &&
+ 			(costly_order ||
+ 			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
+ 			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
+@@ -4210,9 +4211,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 
+ 	/*
+ 	 * Do not retry costly high order allocations unless they are
+-	 * __GFP_RETRY_MAYFAIL
++	 * __GFP_RETRY_MAYFAIL and we can compact
+ 	 */
+-	if (costly_order && !(gfp_mask & __GFP_RETRY_MAYFAIL))
++	if (costly_order && (!can_compact ||
++			     !(gfp_mask & __GFP_RETRY_MAYFAIL)))
+ 		goto nopage;
+ 
+ 	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
+@@ -4225,7 +4227,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ 	 * implementation of the compaction depends on the sufficient amount
+ 	 * of free memory (see __compaction_suitable)
+ 	 */
+-	if (did_some_progress > 0 &&
++	if (did_some_progress > 0 && can_compact &&
+ 			should_compact_retry(ac, order, alloc_flags,
+ 				compact_result, &compact_priority,
+ 				&compaction_retries))
+diff --git a/mm/shmem_quota.c b/mm/shmem_quota.c
+index 062d1c1097ae3..ce514e700d2f6 100644
+--- a/mm/shmem_quota.c
++++ b/mm/shmem_quota.c
+@@ -116,7 +116,7 @@ static int shmem_free_file_info(struct super_block *sb, int type)
+ static int shmem_get_next_id(struct super_block *sb, struct kqid *qid)
+ {
+ 	struct mem_dqinfo *info = sb_dqinfo(sb, qid->type);
+-	struct rb_node *node = ((struct rb_root *)info->dqi_priv)->rb_node;
++	struct rb_node *node;
+ 	qid_t id = from_kqid(&init_user_ns, *qid);
+ 	struct quota_info *dqopt = sb_dqopt(sb);
+ 	struct quota_id *entry = NULL;
+@@ -126,6 +126,7 @@ static int shmem_get_next_id(struct super_block *sb, struct kqid *qid)
+ 		return -ESRCH;
+ 
+ 	down_read(&dqopt->dqio_sem);
++	node = ((struct rb_root *)info->dqi_priv)->rb_node;
+ 	while (node) {
+ 		entry = rb_entry(node, struct quota_id, node);
+ 
+@@ -165,7 +166,7 @@ static int shmem_get_next_id(struct super_block *sb, struct kqid *qid)
+ static int shmem_acquire_dquot(struct dquot *dquot)
+ {
+ 	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
+-	struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node;
++	struct rb_node **n;
+ 	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
+ 	struct rb_node *parent = NULL, *new_node = NULL;
+ 	struct quota_id *new_entry, *entry;
+@@ -176,6 +177,8 @@ static int shmem_acquire_dquot(struct dquot *dquot)
+ 	mutex_lock(&dquot->dq_lock);
+ 
+ 	down_write(&dqopt->dqio_sem);
++	n = &((struct rb_root *)info->dqi_priv)->rb_node;
++
+ 	while (*n) {
+ 		parent = *n;
+ 		entry = rb_entry(parent, struct quota_id, node);
+@@ -264,7 +267,7 @@ static bool shmem_is_empty_dquot(struct dquot *dquot)
+ static int shmem_release_dquot(struct dquot *dquot)
+ {
+ 	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
+-	struct rb_node *node = ((struct rb_root *)info->dqi_priv)->rb_node;
++	struct rb_node *node;
+ 	qid_t id = from_kqid(&init_user_ns, dquot->dq_id);
+ 	struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);
+ 	struct quota_id *entry = NULL;
+@@ -275,6 +278,7 @@ static int shmem_release_dquot(struct dquot *dquot)
+ 		goto out_dqlock;
+ 
+ 	down_write(&dqopt->dqio_sem);
++	node = ((struct rb_root *)info->dqi_priv)->rb_node;
+ 	while (node) {
+ 		entry = rb_entry(node, struct quota_id, node);
+ 
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 022581ec40be3..91397a2539cbe 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1226,6 +1226,11 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
+  * with get_swap_device() and put_swap_device(), unless the swap
+  * functions call get/put_swap_device() by themselves.
+  *
++ * Note that when only holding the PTL, swapoff might succeed immediately
++ * after freeing a swap entry. Therefore, immediately after
++ * __swap_entry_free(), the swap info might become stale and should not
++ * be touched without a prior get_swap_device().
++ *
+  * Check whether swap entry is valid in the swap device.  If so,
+  * return pointer to swap_info_struct, and keep the swap entry valid
+  * via preventing the swap device from being swapoff, until
+@@ -1603,13 +1608,19 @@ int free_swap_and_cache(swp_entry_t entry)
+ 	if (non_swap_entry(entry))
+ 		return 1;
+ 
+-	p = _swap_info_get(entry);
++	p = get_swap_device(entry);
+ 	if (p) {
++		if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
++			put_swap_device(p);
++			return 0;
++		}
++
+ 		count = __swap_entry_free(p, entry);
+ 		if (count == SWAP_HAS_CACHE &&
+ 		    !swap_page_trans_huge_swapped(p, entry))
+ 			__try_to_reclaim_swap(p, swp_offset(entry),
+ 					      TTRS_UNMAPPED | TTRS_FULL);
++		put_swap_device(p);
+ 	}
+ 	return p != NULL;
+ }
+diff --git a/mm/vmpressure.c b/mm/vmpressure.c
+index 22c6689d93027..bd5183dfd8791 100644
+--- a/mm/vmpressure.c
++++ b/mm/vmpressure.c
+@@ -169,7 +169,7 @@ static bool vmpressure_event(struct vmpressure *vmpr,
+ 			continue;
+ 		if (level < ev->level)
+ 			continue;
+-		eventfd_signal(ev->efd, 1);
++		eventfd_signal(ev->efd);
+ 		ret = true;
+ 	}
+ 	mutex_unlock(&vmpr->events_lock);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index bba207f41b148..ebaf79d6830ed 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -5733,7 +5733,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+ /* Use reclaim/compaction for costly allocs or under memory pressure */
+ static bool in_reclaim_compaction(struct scan_control *sc)
+ {
+-	if (IS_ENABLED(CONFIG_COMPACTION) && sc->order &&
++	if (gfp_compaction_allowed(sc->gfp_mask) && sc->order &&
+ 			(sc->order > PAGE_ALLOC_COSTLY_ORDER ||
+ 			 sc->priority < DEF_PRIORITY - 2))
+ 		return true;
+@@ -5978,6 +5978,9 @@ static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
+ {
+ 	unsigned long watermark;
+ 
++	if (!gfp_compaction_allowed(sc->gfp_mask))
++		return false;
++
+ 	/* Allocation can already succeed, nothing to do */
+ 	if (zone_watermark_ok(zone, sc->order, min_wmark_pages(zone),
+ 			      sc->reclaim_idx, 0))
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 7d5334b529834..0592369579ab2 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2842,7 +2842,7 @@ static void hci_cancel_cmd_sync(struct hci_dev *hdev, int err)
+ 	cancel_delayed_work_sync(&hdev->ncmd_timer);
+ 	atomic_set(&hdev->cmd_cnt, 1);
+ 
+-	hci_cmd_sync_cancel_sync(hdev, -err);
++	hci_cmd_sync_cancel_sync(hdev, err);
+ }
+ 
+ /* Suspend HCI device */
+@@ -2862,7 +2862,7 @@ int hci_suspend_dev(struct hci_dev *hdev)
+ 		return 0;
+ 
+ 	/* Cancel potentially blocking sync operation before suspend */
+-	hci_cancel_cmd_sync(hdev, -EHOSTDOWN);
++	hci_cancel_cmd_sync(hdev, EHOSTDOWN);
+ 
+ 	hci_req_sync_lock(hdev);
+ 	ret = hci_suspend_sync(hdev);
+@@ -4178,7 +4178,7 @@ static void hci_send_cmd_sync(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	err = hci_send_frame(hdev, skb);
+ 	if (err < 0) {
+-		hci_cmd_sync_cancel_sync(hdev, err);
++		hci_cmd_sync_cancel_sync(hdev, -err);
+ 		return;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 183501f921814..5ce71c483b7c2 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -679,7 +679,10 @@ void hci_cmd_sync_cancel_sync(struct hci_dev *hdev, int err)
+ 	bt_dev_dbg(hdev, "err 0x%2.2x", err);
+ 
+ 	if (hdev->req_status == HCI_REQ_PEND) {
+-		hdev->req_result = err;
++		/* req_result is __u32 so error must be positive to be properly
++		 * propagated.
++		 */
++		hdev->req_result = err < 0 ? -err : err;
+ 		hdev->req_status = HCI_REQ_CANCELED;
+ 
+ 		wake_up_interruptible(&hdev->req_wait_q);
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 4ccfc104f13a5..fe501d2186bcf 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -95,7 +95,7 @@ static inline struct scatterlist *esp_req_sg(struct crypto_aead *aead,
+ 			     __alignof__(struct scatterlist));
+ }
+ 
+-static void esp_ssg_unref(struct xfrm_state *x, void *tmp)
++static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
+ {
+ 	struct crypto_aead *aead = x->data;
+ 	int extralen = 0;
+@@ -114,7 +114,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp)
+ 	 */
+ 	if (req->src != req->dst)
+ 		for (sg = sg_next(req->src); sg; sg = sg_next(sg))
+-			put_page(sg_page(sg));
++			skb_page_unref(skb, sg_page(sg), false);
+ }
+ 
+ #ifdef CONFIG_INET_ESPINTCP
+@@ -260,7 +260,7 @@ static void esp_output_done(void *data, int err)
+ 	}
+ 
+ 	tmp = ESP_SKB_CB(skb)->tmp;
+-	esp_ssg_unref(x, tmp);
++	esp_ssg_unref(x, tmp, skb);
+ 	kfree(tmp);
+ 
+ 	if (xo && (xo->flags & XFRM_DEV_RESUME)) {
+@@ -639,7 +639,7 @@ int esp_output_tail(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 	}
+ 
+ 	if (sg != dsg)
+-		esp_ssg_unref(x, tmp);
++		esp_ssg_unref(x, tmp, skb);
+ 
+ 	if (!err && x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
+ 		err = esp_output_tail_tcp(x, skb);
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 2cc1a45742d82..a3fa3eda388a4 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -112,7 +112,7 @@ static inline struct scatterlist *esp_req_sg(struct crypto_aead *aead,
+ 			     __alignof__(struct scatterlist));
+ }
+ 
+-static void esp_ssg_unref(struct xfrm_state *x, void *tmp)
++static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
+ {
+ 	struct crypto_aead *aead = x->data;
+ 	int extralen = 0;
+@@ -131,7 +131,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp)
+ 	 */
+ 	if (req->src != req->dst)
+ 		for (sg = sg_next(req->src); sg; sg = sg_next(sg))
+-			put_page(sg_page(sg));
++			skb_page_unref(skb, sg_page(sg), false);
+ }
+ 
+ #ifdef CONFIG_INET6_ESPINTCP
+@@ -294,7 +294,7 @@ static void esp_output_done(void *data, int err)
+ 	}
+ 
+ 	tmp = ESP_SKB_CB(skb)->tmp;
+-	esp_ssg_unref(x, tmp);
++	esp_ssg_unref(x, tmp, skb);
+ 	kfree(tmp);
+ 
+ 	esp_output_encap_csum(skb);
+@@ -677,7 +677,7 @@ int esp6_output_tail(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ 	}
+ 
+ 	if (sg != dsg)
+-		esp_ssg_unref(x, tmp);
++		esp_ssg_unref(x, tmp, skb);
+ 
+ 	if (!err && x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP)
+ 		err = esp_output_tail_tcp(x, skb);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index b382c2e0a39a0..ebaf930bb4c90 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1869,7 +1869,7 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ 					      sband->band);
+ 	}
+ 
+-	ieee80211_sta_set_rx_nss(link_sta);
++	ieee80211_sta_init_nss(link_sta);
+ 
+ 	return ret;
+ }
+@@ -2164,15 +2164,14 @@ static int ieee80211_change_station(struct wiphy *wiphy,
+ 		}
+ 
+ 		if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
+-		    sta->sdata->u.vlan.sta) {
+-			ieee80211_clear_fast_rx(sta);
++		    sta->sdata->u.vlan.sta)
+ 			RCU_INIT_POINTER(sta->sdata->u.vlan.sta, NULL);
+-		}
+ 
+ 		if (test_sta_flag(sta, WLAN_STA_AUTHORIZED))
+ 			ieee80211_vif_dec_num_mcast(sta->sdata);
+ 
+ 		sta->sdata = vlansdata;
++		ieee80211_check_fast_rx(sta);
+ 		ieee80211_check_fast_xmit(sta);
+ 
+ 		if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 84df104f272b0..e0a792a7707ce 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2109,7 +2109,7 @@ enum ieee80211_sta_rx_bandwidth
+ ieee80211_sta_cap_rx_bw(struct link_sta_info *link_sta);
+ enum ieee80211_sta_rx_bandwidth
+ ieee80211_sta_cur_vht_bw(struct link_sta_info *link_sta);
+-void ieee80211_sta_set_rx_nss(struct link_sta_info *link_sta);
++void ieee80211_sta_init_nss(struct link_sta_info *link_sta);
+ enum ieee80211_sta_rx_bandwidth
+ ieee80211_chan_width_to_rx_bw(enum nl80211_chan_width width);
+ enum nl80211_chan_width
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index 9d33fd2377c88..0efdaa8f2a92e 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -37,7 +37,7 @@ void rate_control_rate_init(struct sta_info *sta)
+ 	struct ieee80211_supported_band *sband;
+ 	struct ieee80211_chanctx_conf *chanctx_conf;
+ 
+-	ieee80211_sta_set_rx_nss(&sta->deflink);
++	ieee80211_sta_init_nss(&sta->deflink);
+ 
+ 	if (!ref)
+ 		return;
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index 7acf2223e47aa..f4713046728a2 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -3,7 +3,7 @@
+  * Copyright 2002-2005, Devicescape Software, Inc.
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright(c) 2020-2023 Intel Corporation
++ * Copyright(c) 2020-2024 Intel Corporation
+  */
+ 
+ #ifndef STA_INFO_H
+@@ -482,6 +482,8 @@ struct ieee80211_fragment_cache {
+  *	same for non-MLD STA. This is used as key for searching link STA
+  * @link_id: Link ID uniquely identifying the link STA. This is 0 for non-MLD
+  *	and set to the corresponding vif LinkId for MLD STA
++ * @op_mode_nss: NSS limit as set by operating mode notification, or 0
++ * @capa_nss: NSS limit as determined by local and peer capabilities
+  * @link_hash_node: hash node for rhashtable
+  * @sta: Points to the STA info
+  * @gtk: group keys negotiated with this station, if any
+@@ -518,6 +520,8 @@ struct link_sta_info {
+ 	u8 addr[ETH_ALEN];
+ 	u8 link_id;
+ 
++	u8 op_mode_nss, capa_nss;
++
+ 	struct rhlist_head link_hash_node;
+ 
+ 	struct sta_info *sta;
+diff --git a/net/mac80211/vht.c b/net/mac80211/vht.c
+index b3a5c3e96a720..bc13b1419981a 100644
+--- a/net/mac80211/vht.c
++++ b/net/mac80211/vht.c
+@@ -4,7 +4,7 @@
+  *
+  * Portions of this file
+  * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2023 Intel Corporation
++ * Copyright (C) 2018 - 2024 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -541,15 +541,11 @@ ieee80211_sta_cur_vht_bw(struct link_sta_info *link_sta)
+ 	return bw;
+ }
+ 
+-void ieee80211_sta_set_rx_nss(struct link_sta_info *link_sta)
++void ieee80211_sta_init_nss(struct link_sta_info *link_sta)
+ {
+ 	u8 ht_rx_nss = 0, vht_rx_nss = 0, he_rx_nss = 0, eht_rx_nss = 0, rx_nss;
+ 	bool support_160;
+ 
+-	/* if we received a notification already don't overwrite it */
+-	if (link_sta->pub->rx_nss)
+-		return;
+-
+ 	if (link_sta->pub->eht_cap.has_eht) {
+ 		int i;
+ 		const u8 *rx_nss_mcs = (void *)&link_sta->pub->eht_cap.eht_mcs_nss_supp;
+@@ -627,7 +623,15 @@ void ieee80211_sta_set_rx_nss(struct link_sta_info *link_sta)
+ 	rx_nss = max(vht_rx_nss, ht_rx_nss);
+ 	rx_nss = max(he_rx_nss, rx_nss);
+ 	rx_nss = max(eht_rx_nss, rx_nss);
+-	link_sta->pub->rx_nss = max_t(u8, 1, rx_nss);
++	rx_nss = max_t(u8, 1, rx_nss);
++	link_sta->capa_nss = rx_nss;
++
++	/* that shouldn't be set yet, but we can handle it anyway */
++	if (link_sta->op_mode_nss)
++		link_sta->pub->rx_nss =
++			min_t(u8, rx_nss, link_sta->op_mode_nss);
++	else
++		link_sta->pub->rx_nss = rx_nss;
+ }
+ 
+ u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
+@@ -637,7 +641,7 @@ u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
+ 	enum ieee80211_sta_rx_bandwidth new_bw;
+ 	struct sta_opmode_info sta_opmode = {};
+ 	u32 changed = 0;
+-	u8 nss, cur_nss;
++	u8 nss;
+ 
+ 	/* ignore - no support for BF yet */
+ 	if (opmode & IEEE80211_OPMODE_NOTIF_RX_NSS_TYPE_BF)
+@@ -647,23 +651,17 @@ u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
+ 	nss >>= IEEE80211_OPMODE_NOTIF_RX_NSS_SHIFT;
+ 	nss += 1;
+ 
+-	if (link_sta->pub->rx_nss != nss) {
+-		cur_nss = link_sta->pub->rx_nss;
+-		/* Reset rx_nss and call ieee80211_sta_set_rx_nss() which
+-		 * will set the same to max nss value calculated based on capability.
+-		 */
+-		link_sta->pub->rx_nss = 0;
+-		ieee80211_sta_set_rx_nss(link_sta);
+-		/* Do not allow an nss change to rx_nss greater than max_nss
+-		 * negotiated and capped to APs capability during association.
+-		 */
+-		if (nss <= link_sta->pub->rx_nss) {
+-			link_sta->pub->rx_nss = nss;
+-			sta_opmode.rx_nss = nss;
+-			changed |= IEEE80211_RC_NSS_CHANGED;
+-			sta_opmode.changed |= STA_OPMODE_N_SS_CHANGED;
++	if (link_sta->op_mode_nss != nss) {
++		if (nss <= link_sta->capa_nss) {
++			link_sta->op_mode_nss = nss;
++
++			if (nss != link_sta->pub->rx_nss) {
++				link_sta->pub->rx_nss = nss;
++				changed |= IEEE80211_RC_NSS_CHANGED;
++				sta_opmode.rx_nss = link_sta->pub->rx_nss;
++				sta_opmode.changed |= STA_OPMODE_N_SS_CHANGED;
++			}
+ 		} else {
+-			link_sta->pub->rx_nss = cur_nss;
+ 			pr_warn_ratelimited("Ignoring NSS change in VHT Operating Mode Notification from %pM with invalid nss %d",
+ 					    link_sta->pub->addr, nss);
+ 		}
+diff --git a/net/mac802154/llsec.c b/net/mac802154/llsec.c
+index 8d2eabc71bbeb..f13b07ebfb98a 100644
+--- a/net/mac802154/llsec.c
++++ b/net/mac802154/llsec.c
+@@ -265,19 +265,27 @@ int mac802154_llsec_key_add(struct mac802154_llsec *sec,
+ 	return -ENOMEM;
+ }
+ 
++static void mac802154_llsec_key_del_rcu(struct rcu_head *rcu)
++{
++	struct ieee802154_llsec_key_entry *pos;
++	struct mac802154_llsec_key *mkey;
++
++	pos = container_of(rcu, struct ieee802154_llsec_key_entry, rcu);
++	mkey = container_of(pos->key, struct mac802154_llsec_key, key);
++
++	llsec_key_put(mkey);
++	kfree_sensitive(pos);
++}
++
+ int mac802154_llsec_key_del(struct mac802154_llsec *sec,
+ 			    const struct ieee802154_llsec_key_id *key)
+ {
+ 	struct ieee802154_llsec_key_entry *pos;
+ 
+ 	list_for_each_entry(pos, &sec->table.keys, list) {
+-		struct mac802154_llsec_key *mkey;
+-
+-		mkey = container_of(pos->key, struct mac802154_llsec_key, key);
+-
+ 		if (llsec_key_id_equal(&pos->id, key)) {
+ 			list_del_rcu(&pos->list);
+-			llsec_key_put(mkey);
++			call_rcu(&pos->rcu, mac802154_llsec_key_del_rcu);
+ 			return 0;
+ 		}
+ 	}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0130c2782cdc7..d07872814feef 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -5000,6 +5000,12 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ 		if ((flags & (NFT_SET_EVAL | NFT_SET_OBJECT)) ==
+ 			     (NFT_SET_EVAL | NFT_SET_OBJECT))
+ 			return -EOPNOTSUPP;
++		if ((flags & (NFT_SET_ANONYMOUS | NFT_SET_TIMEOUT | NFT_SET_EVAL)) ==
++			     (NFT_SET_ANONYMOUS | NFT_SET_TIMEOUT))
++			return -EOPNOTSUPP;
++		if ((flags & (NFT_SET_CONSTANT | NFT_SET_TIMEOUT)) ==
++			     (NFT_SET_CONSTANT | NFT_SET_TIMEOUT))
++			return -EOPNOTSUPP;
+ 	}
+ 
+ 	desc.dtype = 0;
+@@ -5423,6 +5429,7 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 
+ 	if (list_empty(&set->bindings) && nft_set_is_anonymous(set)) {
+ 		list_del_rcu(&set->list);
++		set->dead = 1;
+ 		if (event)
+ 			nf_tables_set_notify(ctx, set, NFT_MSG_DELSET,
+ 					     GFP_KERNEL);
+diff --git a/net/wireless/wext-core.c b/net/wireless/wext-core.c
+index a161c64d1765e..838ad6541a17d 100644
+--- a/net/wireless/wext-core.c
++++ b/net/wireless/wext-core.c
+@@ -4,6 +4,7 @@
+  * Authors :	Jean Tourrilhes - HPL - <jt@hpl.hp.com>
+  * Copyright (c) 1997-2007 Jean Tourrilhes, All Rights Reserved.
+  * Copyright	2009 Johannes Berg <johannes@sipsolutions.net>
++ * Copyright (C) 2024 Intel Corporation
+  *
+  * (As all part of the Linux kernel, this file is GPL)
+  */
+@@ -662,7 +663,8 @@ struct iw_statistics *get_wireless_stats(struct net_device *dev)
+ 	    dev->ieee80211_ptr->wiphy->wext &&
+ 	    dev->ieee80211_ptr->wiphy->wext->get_wireless_stats) {
+ 		wireless_warn_cfg80211_wext();
+-		if (dev->ieee80211_ptr->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO)
++		if (dev->ieee80211_ptr->wiphy->flags & (WIPHY_FLAG_SUPPORTS_MLO |
++							WIPHY_FLAG_DISABLE_WEXT))
+ 			return NULL;
+ 		return dev->ieee80211_ptr->wiphy->wext->get_wireless_stats(dev);
+ 	}
+@@ -704,7 +706,8 @@ static iw_handler get_handler(struct net_device *dev, unsigned int cmd)
+ #ifdef CONFIG_CFG80211_WEXT
+ 	if (dev->ieee80211_ptr && dev->ieee80211_ptr->wiphy) {
+ 		wireless_warn_cfg80211_wext();
+-		if (dev->ieee80211_ptr->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO)
++		if (dev->ieee80211_ptr->wiphy->flags & (WIPHY_FLAG_SUPPORTS_MLO |
++							WIPHY_FLAG_DISABLE_WEXT))
+ 			return NULL;
+ 		handlers = dev->ieee80211_ptr->wiphy->wext;
+ 	}
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index ad01997c3aa9d..444e58bc3f440 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -2017,6 +2017,9 @@ static int copy_to_user_tmpl(struct xfrm_policy *xp, struct sk_buff *skb)
+ 	if (xp->xfrm_nr == 0)
+ 		return 0;
+ 
++	if (xp->xfrm_nr > XFRM_MAX_DEPTH)
++		return -ENOBUFS;
++
+ 	for (i = 0; i < xp->xfrm_nr; i++) {
+ 		struct xfrm_user_tmpl *up = &vec[i];
+ 		struct xfrm_tmpl *kp = &xp->xfrm_vec[i];
+diff --git a/samples/vfio-mdev/mtty.c b/samples/vfio-mdev/mtty.c
+index 69ba0281f9e0b..2284b37512402 100644
+--- a/samples/vfio-mdev/mtty.c
++++ b/samples/vfio-mdev/mtty.c
+@@ -234,10 +234,10 @@ static void mtty_trigger_interrupt(struct mdev_state *mdev_state)
+ 
+ 	if (is_msi(mdev_state)) {
+ 		if (mdev_state->msi_evtfd)
+-			eventfd_signal(mdev_state->msi_evtfd, 1);
++			eventfd_signal(mdev_state->msi_evtfd);
+ 	} else if (is_intx(mdev_state)) {
+ 		if (mdev_state->intx_evtfd && !mdev_state->intx_mask) {
+-			eventfd_signal(mdev_state->intx_evtfd, 1);
++			eventfd_signal(mdev_state->intx_evtfd);
+ 			mdev_state->intx_mask = true;
+ 		}
+ 	}
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 2fe6f2828d376..16c750bb95faf 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -143,6 +143,8 @@ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
+ KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
+ KBUILD_CFLAGS += $(call cc-disable-warning, cast-function-type-strict)
++KBUILD_CFLAGS += -Wno-enum-compare-conditional
++KBUILD_CFLAGS += -Wno-enum-enum-conversion
+ endif
+ 
+ endif
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index 898358f57fa08..6788e73b6681b 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -33,6 +33,18 @@
+ #include "ruleset.h"
+ #include "setup.h"
+ 
++static bool is_initialized(void)
++{
++	if (likely(landlock_initialized))
++		return true;
++
++	pr_warn_once(
++		"Disabled but requested by user space. "
++		"You should enable Landlock at boot time: "
++		"https://docs.kernel.org/userspace-api/landlock.html#boot-time-configuration\n");
++	return false;
++}
++
+ /**
+  * copy_min_struct_from_user - Safe future-proof argument copying
+  *
+@@ -173,7 +185,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ 	/* Build-time checks. */
+ 	build_check_abi();
+ 
+-	if (!landlock_initialized)
++	if (!is_initialized())
+ 		return -EOPNOTSUPP;
+ 
+ 	if (flags) {
+@@ -398,7 +410,7 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd,
+ 	struct landlock_ruleset *ruleset;
+ 	int err;
+ 
+-	if (!landlock_initialized)
++	if (!is_initialized())
+ 		return -EOPNOTSUPP;
+ 
+ 	/* No flag for now. */
+@@ -458,7 +470,7 @@ SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,
+ 	struct landlock_cred_security *new_llcred;
+ 	int err;
+ 
+-	if (!landlock_initialized)
++	if (!is_initialized())
+ 		return -EOPNOTSUPP;
+ 
+ 	/*
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 1f1ea8529421f..e1e297deb02e6 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -1312,7 +1312,8 @@ static int smack_inode_setxattr(struct mnt_idmap *idmap,
+ 		check_star = 1;
+ 	} else if (strcmp(name, XATTR_NAME_SMACKTRANSMUTE) == 0) {
+ 		check_priv = 1;
+-		if (size != TRANS_TRUE_SIZE ||
++		if (!S_ISDIR(d_backing_inode(dentry)->i_mode) ||
++		    size != TRANS_TRUE_SIZE ||
+ 		    strncmp(value, TRANS_TRUE, TRANS_TRUE_SIZE) != 0)
+ 			rc = -EINVAL;
+ 	} else
+@@ -2853,6 +2854,15 @@ static int smack_inode_setsecurity(struct inode *inode, const char *name,
+ 	if (value == NULL || size > SMK_LONGLABEL || size == 0)
+ 		return -EINVAL;
+ 
++	if (strcmp(name, XATTR_SMACK_TRANSMUTE) == 0) {
++		if (!S_ISDIR(inode->i_mode) || size != TRANS_TRUE_SIZE ||
++		    strncmp(value, TRANS_TRUE, TRANS_TRUE_SIZE) != 0)
++			return -EINVAL;
++
++		nsp->smk_flags |= SMK_INODE_TRANSMUTE;
++		return 0;
++	}
++
+ 	skp = smk_import_entry(value, size);
+ 	if (IS_ERR(skp))
+ 		return PTR_ERR(skp);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0c746613c5ae0..27be1feb8c53e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10045,6 +10045,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c70, "HP EliteBook 835 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c71, "HP EliteBook 845 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c72, "HP EliteBook 865 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8c8a, "HP EliteBook 630", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8c8c, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8c90, "HP EliteBook 640", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8c91, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca1, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+@@ -11046,6 +11050,8 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+  *   at most one tbl is allowed to define for the same vendor and same codec
+  */
+ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = {
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1025, "Acer", ALC2XX_FIXUP_HEADSET_MIC,
++		{0x19, 0x40000000}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+ 		{0x19, 0x40000000},
+ 		{0x1b, 0x40000000}),
+@@ -11735,8 +11741,7 @@ static void alc897_hp_automute_hook(struct hda_codec *codec,
+ 
+ 	snd_hda_gen_hp_automute(codec, jack);
+ 	vref = spec->gen.hp_jack_present ? (PIN_HP | AC_PINCTL_VREF_100) : PIN_HP;
+-	snd_hda_codec_write(codec, 0x1b, 0, AC_VERB_SET_PIN_WIDGET_CONTROL,
+-			    vref);
++	snd_hda_set_pin_ctl(codec, 0x1b, vref);
+ }
+ 
+ static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
+@@ -11745,6 +11750,10 @@ static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec,
+ 	struct alc_spec *spec = codec->spec;
+ 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ 		spec->gen.hp_automute_hook = alc897_hp_automute_hook;
++		spec->no_shutup_pins = 1;
++	}
++	if (action == HDA_FIXUP_ACT_PROBE) {
++		snd_hda_set_pin_ctl_cache(codec, 0x1a, PIN_IN | AC_PINCTL_VREF_100);
+ 	}
+ }
+ 
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index 5179b69e403ac..9f3dc13a3b6ad 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -71,7 +71,7 @@ struct tas2781_hda {
+ 	struct snd_kcontrol *dsp_prog_ctl;
+ 	struct snd_kcontrol *dsp_conf_ctl;
+ 	struct snd_kcontrol *prof_ctl;
+-	struct snd_kcontrol *snd_ctls[3];
++	struct snd_kcontrol *snd_ctls[2];
+ };
+ 
+ static int tas2781_get_i2c_res(struct acpi_resource *ares, void *data)
+@@ -179,8 +179,12 @@ static int tasdevice_get_profile_id(struct snd_kcontrol *kcontrol,
+ {
+ 	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	ucontrol->value.integer.value[0] = tas_priv->rcabin.profile_cfg_id;
+ 
++	mutex_unlock(&tas_priv->codec_lock);
++
+ 	return 0;
+ }
+ 
+@@ -194,11 +198,15 @@ static int tasdevice_set_profile_id(struct snd_kcontrol *kcontrol,
+ 
+ 	val = clamp(nr_profile, 0, max);
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	if (tas_priv->rcabin.profile_cfg_id != val) {
+ 		tas_priv->rcabin.profile_cfg_id = val;
+ 		ret = 1;
+ 	}
+ 
++	mutex_unlock(&tas_priv->codec_lock);
++
+ 	return ret;
+ }
+ 
+@@ -235,8 +243,12 @@ static int tasdevice_program_get(struct snd_kcontrol *kcontrol,
+ {
+ 	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	ucontrol->value.integer.value[0] = tas_priv->cur_prog;
+ 
++	mutex_unlock(&tas_priv->codec_lock);
++
+ 	return 0;
+ }
+ 
+@@ -251,11 +263,15 @@ static int tasdevice_program_put(struct snd_kcontrol *kcontrol,
+ 
+ 	val = clamp(nr_program, 0, max);
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	if (tas_priv->cur_prog != val) {
+ 		tas_priv->cur_prog = val;
+ 		ret = 1;
+ 	}
+ 
++	mutex_unlock(&tas_priv->codec_lock);
++
+ 	return ret;
+ }
+ 
+@@ -264,8 +280,12 @@ static int tasdevice_config_get(struct snd_kcontrol *kcontrol,
+ {
+ 	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	ucontrol->value.integer.value[0] = tas_priv->cur_conf;
+ 
++	mutex_unlock(&tas_priv->codec_lock);
++
+ 	return 0;
+ }
+ 
+@@ -280,33 +300,16 @@ static int tasdevice_config_put(struct snd_kcontrol *kcontrol,
+ 
+ 	val = clamp(nr_config, 0, max);
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	if (tas_priv->cur_conf != val) {
+ 		tas_priv->cur_conf = val;
+ 		ret = 1;
+ 	}
+ 
+-	return ret;
+-}
++	mutex_unlock(&tas_priv->codec_lock);
+ 
+-/*
+- * tas2781_digital_getvol - get the volum control
+- * @kcontrol: control pointer
+- * @ucontrol: User data
+- * Customer Kcontrol for tas2781 is primarily for regmap booking, paging
+- * depends on internal regmap mechanism.
+- * tas2781 contains book and page two-level register map, especially
+- * book switching will set the register BXXP00R7F, after switching to the
+- * correct book, then leverage the mechanism for paging to access the
+- * register.
+- */
+-static int tas2781_digital_getvol(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
+-{
+-	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+-	struct soc_mixer_control *mc =
+-		(struct soc_mixer_control *)kcontrol->private_value;
+-
+-	return tasdevice_digital_getvol(tas_priv, ucontrol, mc);
++	return ret;
+ }
+ 
+ static int tas2781_amp_getvol(struct snd_kcontrol *kcontrol,
+@@ -315,19 +318,15 @@ static int tas2781_amp_getvol(struct snd_kcontrol *kcontrol,
+ 	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
++	int ret;
+ 
+-	return tasdevice_amp_getvol(tas_priv, ucontrol, mc);
+-}
++	mutex_lock(&tas_priv->codec_lock);
+ 
+-static int tas2781_digital_putvol(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
+-{
+-	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+-	struct soc_mixer_control *mc =
+-		(struct soc_mixer_control *)kcontrol->private_value;
++	ret = tasdevice_amp_getvol(tas_priv, ucontrol, mc);
++
++	mutex_unlock(&tas_priv->codec_lock);
+ 
+-	/* The check of the given value is in tasdevice_digital_putvol. */
+-	return tasdevice_digital_putvol(tas_priv, ucontrol, mc);
++	return ret;
+ }
+ 
+ static int tas2781_amp_putvol(struct snd_kcontrol *kcontrol,
+@@ -336,9 +335,16 @@ static int tas2781_amp_putvol(struct snd_kcontrol *kcontrol,
+ 	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
++	int ret;
++
++	mutex_lock(&tas_priv->codec_lock);
+ 
+ 	/* The check of the given value is in tasdevice_amp_putvol. */
+-	return tasdevice_amp_putvol(tas_priv, ucontrol, mc);
++	ret = tasdevice_amp_putvol(tas_priv, ucontrol, mc);
++
++	mutex_unlock(&tas_priv->codec_lock);
++
++	return ret;
+ }
+ 
+ static int tas2781_force_fwload_get(struct snd_kcontrol *kcontrol,
+@@ -346,10 +352,14 @@ static int tas2781_force_fwload_get(struct snd_kcontrol *kcontrol,
+ {
+ 	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	ucontrol->value.integer.value[0] = (int)tas_priv->force_fwload_status;
+ 	dev_dbg(tas_priv->dev, "%s : Force FWload %s\n", __func__,
+ 			tas_priv->force_fwload_status ? "ON" : "OFF");
+ 
++	mutex_unlock(&tas_priv->codec_lock);
++
+ 	return 0;
+ }
+ 
+@@ -359,6 +369,8 @@ static int tas2781_force_fwload_put(struct snd_kcontrol *kcontrol,
+ 	struct tasdevice_priv *tas_priv = snd_kcontrol_chip(kcontrol);
+ 	bool change, val = (bool)ucontrol->value.integer.value[0];
+ 
++	mutex_lock(&tas_priv->codec_lock);
++
+ 	if (tas_priv->force_fwload_status == val)
+ 		change = false;
+ 	else {
+@@ -368,6 +380,8 @@ static int tas2781_force_fwload_put(struct snd_kcontrol *kcontrol,
+ 	dev_dbg(tas_priv->dev, "%s : Force FWload %s\n", __func__,
+ 		tas_priv->force_fwload_status ? "ON" : "OFF");
+ 
++	mutex_unlock(&tas_priv->codec_lock);
++
+ 	return change;
+ }
+ 
+@@ -375,9 +389,6 @@ static const struct snd_kcontrol_new tas2781_snd_controls[] = {
+ 	ACARD_SINGLE_RANGE_EXT_TLV("Speaker Analog Gain", TAS2781_AMP_LEVEL,
+ 		1, 0, 20, 0, tas2781_amp_getvol,
+ 		tas2781_amp_putvol, amp_vol_tlv),
+-	ACARD_SINGLE_RANGE_EXT_TLV("Speaker Digital Gain", TAS2781_DVC_LVL,
+-		0, 0, 200, 1, tas2781_digital_getvol,
+-		tas2781_digital_putvol, dvc_tlv),
+ 	ACARD_SINGLE_BOOL_EXT("Speaker Force Firmware Load", 0,
+ 		tas2781_force_fwload_get, tas2781_force_fwload_put),
+ };
+diff --git a/sound/sh/aica.c b/sound/sh/aica.c
+index 320ac792c7fe2..3182c634464d4 100644
+--- a/sound/sh/aica.c
++++ b/sound/sh/aica.c
+@@ -278,7 +278,8 @@ static void run_spu_dma(struct work_struct *work)
+ 		dreamcastcard->clicks++;
+ 		if (unlikely(dreamcastcard->clicks >= AICA_PERIOD_NUMBER))
+ 			dreamcastcard->clicks %= AICA_PERIOD_NUMBER;
+-		mod_timer(&dreamcastcard->timer, jiffies + 1);
++		if (snd_pcm_running(dreamcastcard->substream))
++			mod_timer(&dreamcastcard->timer, jiffies + 1);
+ 	}
+ }
+ 
+@@ -290,6 +291,8 @@ static void aica_period_elapsed(struct timer_list *t)
+ 	/*timer function - so cannot sleep */
+ 	int play_period;
+ 	struct snd_pcm_runtime *runtime;
++	if (!snd_pcm_running(substream))
++		return;
+ 	runtime = substream->runtime;
+ 	dreamcastcard = substream->pcm->private_data;
+ 	/* Have we played out an additional period? */
+@@ -350,12 +353,19 @@ static int snd_aicapcm_pcm_open(struct snd_pcm_substream
+ 	return 0;
+ }
+ 
++static int snd_aicapcm_pcm_sync_stop(struct snd_pcm_substream *substream)
++{
++	struct snd_card_aica *dreamcastcard = substream->pcm->private_data;
++
++	del_timer_sync(&dreamcastcard->timer);
++	cancel_work_sync(&dreamcastcard->spu_dma_work);
++	return 0;
++}
++
+ static int snd_aicapcm_pcm_close(struct snd_pcm_substream
+ 				 *substream)
+ {
+ 	struct snd_card_aica *dreamcastcard = substream->pcm->private_data;
+-	flush_work(&(dreamcastcard->spu_dma_work));
+-	del_timer(&dreamcastcard->timer);
+ 	dreamcastcard->substream = NULL;
+ 	kfree(dreamcastcard->channel);
+ 	spu_disable();
+@@ -401,6 +411,7 @@ static const struct snd_pcm_ops snd_aicapcm_playback_ops = {
+ 	.prepare = snd_aicapcm_pcm_prepare,
+ 	.trigger = snd_aicapcm_pcm_trigger,
+ 	.pointer = snd_aicapcm_pcm_pointer,
++	.sync_stop = snd_aicapcm_pcm_sync_stop,
+ };
+ 
+ /* TO DO: set up to handle more than one pcm instance */
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 90360f8b3e81b..1d1452c29ed02 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -199,13 +199,6 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21HY"),
+ 		}
+ 	},
+-	{
+-		.driver_data = &acp6x_card,
+-		.matches = {
+-			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "21J2"),
+-		}
+-	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h
+index 72535f00572f6..72ea363d434db 100644
+--- a/tools/include/linux/btf_ids.h
++++ b/tools/include/linux/btf_ids.h
+@@ -3,6 +3,8 @@
+ #ifndef _LINUX_BTF_IDS_H
+ #define _LINUX_BTF_IDS_H
+ 
++#include <linux/types.h> /* for u32 */
++
+ struct btf_id_set {
+ 	u32 cnt;
+ 	u32 ids[];
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index 8d7c31bd2ebfc..cd64ae44ccbde 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -1027,8 +1027,8 @@ static int perf_top__start_counters(struct perf_top *top)
+ 
+ 	evlist__for_each_entry(evlist, counter) {
+ try_again:
+-		if (evsel__open(counter, top->evlist->core.user_requested_cpus,
+-				     top->evlist->core.threads) < 0) {
++		if (evsel__open(counter, counter->core.cpus,
++				counter->core.threads) < 0) {
+ 
+ 			/*
+ 			 * Specially handle overwrite fall back.
+diff --git a/tools/testing/selftests/mm/gup_test.c b/tools/testing/selftests/mm/gup_test.c
+index ec22291363844..18a49c70d4c63 100644
+--- a/tools/testing/selftests/mm/gup_test.c
++++ b/tools/testing/selftests/mm/gup_test.c
+@@ -50,39 +50,41 @@ static char *cmd_to_str(unsigned long cmd)
+ void *gup_thread(void *data)
+ {
+ 	struct gup_test gup = *(struct gup_test *)data;
+-	int i;
++	int i, status;
+ 
+ 	/* Only report timing information on the *_BENCHMARK commands: */
+ 	if ((cmd == PIN_FAST_BENCHMARK) || (cmd == GUP_FAST_BENCHMARK) ||
+ 	     (cmd == PIN_LONGTERM_BENCHMARK)) {
+ 		for (i = 0; i < repeats; i++) {
+ 			gup.size = size;
+-			if (ioctl(gup_fd, cmd, &gup))
+-				perror("ioctl"), exit(1);
++			status = ioctl(gup_fd, cmd, &gup);
++			if (status)
++				break;
+ 
+ 			pthread_mutex_lock(&print_mutex);
+-			printf("%s: Time: get:%lld put:%lld us",
+-			       cmd_to_str(cmd), gup.get_delta_usec,
+-			       gup.put_delta_usec);
++			ksft_print_msg("%s: Time: get:%lld put:%lld us",
++				       cmd_to_str(cmd), gup.get_delta_usec,
++				       gup.put_delta_usec);
+ 			if (gup.size != size)
+-				printf(", truncated (size: %lld)", gup.size);
+-			printf("\n");
++				ksft_print_msg(", truncated (size: %lld)", gup.size);
++			ksft_print_msg("\n");
+ 			pthread_mutex_unlock(&print_mutex);
+ 		}
+ 	} else {
+ 		gup.size = size;
+-		if (ioctl(gup_fd, cmd, &gup)) {
+-			perror("ioctl");
+-			exit(1);
+-		}
++		status = ioctl(gup_fd, cmd, &gup);
++		if (status)
++			goto return_;
+ 
+ 		pthread_mutex_lock(&print_mutex);
+-		printf("%s: done\n", cmd_to_str(cmd));
++		ksft_print_msg("%s: done\n", cmd_to_str(cmd));
+ 		if (gup.size != size)
+-			printf("Truncated (size: %lld)\n", gup.size);
++			ksft_print_msg("Truncated (size: %lld)\n", gup.size);
+ 		pthread_mutex_unlock(&print_mutex);
+ 	}
+ 
++return_:
++	ksft_test_result(!status, "ioctl status %d\n", status);
+ 	return NULL;
+ }
+ 
+@@ -170,7 +172,7 @@ int main(int argc, char **argv)
+ 			touch = 1;
+ 			break;
+ 		default:
+-			return -1;
++			ksft_exit_fail_msg("Wrong argument\n");
+ 		}
+ 	}
+ 
+@@ -198,11 +200,12 @@ int main(int argc, char **argv)
+ 		}
+ 	}
+ 
+-	filed = open(file, O_RDWR|O_CREAT);
+-	if (filed < 0) {
+-		perror("open");
+-		exit(filed);
+-	}
++	ksft_print_header();
++	ksft_set_plan(nthreads);
++
++	filed = open(file, O_RDWR|O_CREAT, 0664);
++	if (filed < 0)
++		ksft_exit_fail_msg("Unable to open %s: %s\n", file, strerror(errno));
+ 
+ 	gup.nr_pages_per_call = nr_pages;
+ 	if (write)
+@@ -213,27 +216,24 @@ int main(int argc, char **argv)
+ 		switch (errno) {
+ 		case EACCES:
+ 			if (getuid())
+-				printf("Please run this test as root\n");
++				ksft_print_msg("Please run this test as root\n");
+ 			break;
+ 		case ENOENT:
+-			if (opendir("/sys/kernel/debug") == NULL) {
+-				printf("mount debugfs at /sys/kernel/debug\n");
+-				break;
+-			}
+-			printf("check if CONFIG_GUP_TEST is enabled in kernel config\n");
++			if (opendir("/sys/kernel/debug") == NULL)
++				ksft_print_msg("mount debugfs at /sys/kernel/debug\n");
++			ksft_print_msg("check if CONFIG_GUP_TEST is enabled in kernel config\n");
+ 			break;
+ 		default:
+-			perror("failed to open " GUP_TEST_FILE);
++			ksft_print_msg("failed to open %s: %s\n", GUP_TEST_FILE, strerror(errno));
+ 			break;
+ 		}
+-		exit(KSFT_SKIP);
++		ksft_test_result_skip("Please run this test as root\n");
++		return ksft_exit_pass();
+ 	}
+ 
+ 	p = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, filed, 0);
+-	if (p == MAP_FAILED) {
+-		perror("mmap");
+-		exit(1);
+-	}
++	if (p == MAP_FAILED)
++		ksft_exit_fail_msg("mmap: %s\n", strerror(errno));
+ 	gup.addr = (unsigned long)p;
+ 
+ 	if (thp == 1)
+@@ -264,7 +264,8 @@ int main(int argc, char **argv)
+ 		ret = pthread_join(tid[i], NULL);
+ 		assert(ret == 0);
+ 	}
++
+ 	free(tid);
+ 
+-	return 0;
++	return ksft_exit_pass();
+ }
+diff --git a/tools/testing/selftests/mm/soft-dirty.c b/tools/testing/selftests/mm/soft-dirty.c
+index cc5f144430d4d..7dbfa53d93a05 100644
+--- a/tools/testing/selftests/mm/soft-dirty.c
++++ b/tools/testing/selftests/mm/soft-dirty.c
+@@ -137,7 +137,7 @@ static void test_mprotect(int pagemap_fd, int pagesize, bool anon)
+ 		if (!map)
+ 			ksft_exit_fail_msg("anon mmap failed\n");
+ 	} else {
+-		test_fd = open(fname, O_RDWR | O_CREAT);
++		test_fd = open(fname, O_RDWR | O_CREAT, 0664);
+ 		if (test_fd < 0) {
+ 			ksft_test_result_skip("Test %s open() file failed\n", __func__);
+ 			return;
+diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
+index 0e74635c8c3d9..dff3be23488b4 100644
+--- a/tools/testing/selftests/mm/split_huge_page_test.c
++++ b/tools/testing/selftests/mm/split_huge_page_test.c
+@@ -253,7 +253,7 @@ void split_file_backed_thp(void)
+ 		goto cleanup;
+ 	}
+ 
+-	fd = open(testfile, O_CREAT|O_WRONLY);
++	fd = open(testfile, O_CREAT|O_WRONLY, 0664);
+ 	if (fd == -1) {
+ 		perror("Cannot open testing file\n");
+ 		goto cleanup;
+diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
+index 02b89860e193d..ba6777cdf4235 100644
+--- a/tools/testing/selftests/mm/uffd-common.c
++++ b/tools/testing/selftests/mm/uffd-common.c
+@@ -17,6 +17,7 @@ bool map_shared;
+ bool test_uffdio_wp = true;
+ unsigned long long *count_verify;
+ uffd_test_ops_t *uffd_test_ops;
++atomic_bool ready_for_fork;
+ 
+ static int uffd_mem_fd_create(off_t mem_size, bool hugetlb)
+ {
+@@ -507,6 +508,8 @@ void *uffd_poll_thread(void *arg)
+ 	pollfd[1].fd = pipefd[cpu*2];
+ 	pollfd[1].events = POLLIN;
+ 
++	ready_for_fork = true;
++
+ 	for (;;) {
+ 		ret = poll(pollfd, 2, -1);
+ 		if (ret <= 0) {
+diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
+index 7c4fa964c3b08..1f0d573f30675 100644
+--- a/tools/testing/selftests/mm/uffd-common.h
++++ b/tools/testing/selftests/mm/uffd-common.h
+@@ -32,6 +32,7 @@
+ #include <inttypes.h>
+ #include <stdint.h>
+ #include <sys/random.h>
++#include <stdatomic.h>
+ 
+ #include "../kselftest.h"
+ #include "vm_util.h"
+@@ -97,6 +98,7 @@ extern bool map_shared;
+ extern bool test_uffdio_wp;
+ extern unsigned long long *count_verify;
+ extern volatile bool test_uffdio_copy_eexist;
++extern atomic_bool ready_for_fork;
+ 
+ extern uffd_test_ops_t anon_uffd_test_ops;
+ extern uffd_test_ops_t shmem_uffd_test_ops;
+diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
+index 2c68709062da5..92d51768b7be1 100644
+--- a/tools/testing/selftests/mm/uffd-unit-tests.c
++++ b/tools/testing/selftests/mm/uffd-unit-tests.c
+@@ -770,6 +770,8 @@ static void uffd_sigbus_test_common(bool wp)
+ 	char c;
+ 	struct uffd_args args = { 0 };
+ 
++	ready_for_fork = false;
++
+ 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
+ 
+ 	if (uffd_register(uffd, area_dst, nr_pages * page_size,
+@@ -785,6 +787,9 @@ static void uffd_sigbus_test_common(bool wp)
+ 	if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+ 		err("uffd_poll_thread create");
+ 
++	while (!ready_for_fork)
++		; /* Wait for the poll_thread to start executing before forking */
++
+ 	pid = fork();
+ 	if (pid < 0)
+ 		err("fork");
+@@ -824,6 +829,8 @@ static void uffd_events_test_common(bool wp)
+ 	char c;
+ 	struct uffd_args args = { 0 };
+ 
++	ready_for_fork = false;
++
+ 	fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
+ 	if (uffd_register(uffd, area_dst, nr_pages * page_size,
+ 			  true, wp, false))
+@@ -833,6 +840,9 @@ static void uffd_events_test_common(bool wp)
+ 	if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+ 		err("uffd_poll_thread create");
+ 
++	while (!ready_for_fork)
++		; /* Wait for the poll_thread to start executing before forking */
++
+ 	pid = fork();
+ 	if (pid < 0)
+ 		err("fork");
+@@ -1219,7 +1229,8 @@ uffd_test_case_t uffd_tests[] = {
+ 		.uffd_fn = uffd_sigbus_wp_test,
+ 		.mem_targets = MEM_ALL,
+ 		.uffd_feature_required = UFFD_FEATURE_SIGBUS |
+-		UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_PAGEFAULT_FLAG_WP,
++		UFFD_FEATURE_EVENT_FORK | UFFD_FEATURE_PAGEFAULT_FLAG_WP |
++		UFFD_FEATURE_WP_HUGETLBFS_SHMEM,
+ 	},
+ 	{
+ 		.name = "events",
+diff --git a/tools/testing/selftests/mqueue/setting b/tools/testing/selftests/mqueue/setting
+new file mode 100644
+index 0000000000000..a953c96aa16e1
+--- /dev/null
++++ b/tools/testing/selftests/mqueue/setting
+@@ -0,0 +1 @@
++timeout=180
+diff --git a/tools/testing/selftests/net/mptcp/diag.sh b/tools/testing/selftests/net/mptcp/diag.sh
+index 4d8c59be1b30c..7f89623f1080e 100755
+--- a/tools/testing/selftests/net/mptcp/diag.sh
++++ b/tools/testing/selftests/net/mptcp/diag.sh
+@@ -69,7 +69,7 @@ __chk_nr()
+ 		else
+ 			echo "[ fail ] expected $expected found $nr"
+ 			mptcp_lib_result_fail "${msg}"
+-			ret=$test_cnt
++			ret=${KSFT_FAIL}
+ 		fi
+ 	else
+ 		echo "[  ok  ]"
+@@ -115,11 +115,11 @@ wait_msk_nr()
+ 	if [ $i -ge $timeout ]; then
+ 		echo "[ fail ] timeout while expecting $expected max $max last $nr"
+ 		mptcp_lib_result_fail "${msg} # timeout"
+-		ret=$test_cnt
++		ret=${KSFT_FAIL}
+ 	elif [ $nr != $expected ]; then
+ 		echo "[ fail ] expected $expected found $nr"
+ 		mptcp_lib_result_fail "${msg} # unexpected result"
+-		ret=$test_cnt
++		ret=${KSFT_FAIL}
+ 	else
+ 		echo "[  ok  ]"
+ 		mptcp_lib_result_pass "${msg}"
+diff --git a/tools/testing/selftests/wireguard/qemu/arch/riscv32.config b/tools/testing/selftests/wireguard/qemu/arch/riscv32.config
+index 2fc36efb166dc..a7f8e8a956259 100644
+--- a/tools/testing/selftests/wireguard/qemu/arch/riscv32.config
++++ b/tools/testing/selftests/wireguard/qemu/arch/riscv32.config
+@@ -3,6 +3,7 @@ CONFIG_ARCH_RV32I=y
+ CONFIG_MMU=y
+ CONFIG_FPU=y
+ CONFIG_SOC_VIRT=y
++CONFIG_RISCV_ISA_FALLBACK=y
+ CONFIG_SERIAL_8250=y
+ CONFIG_SERIAL_8250_CONSOLE=y
+ CONFIG_SERIAL_OF_PLATFORM=y
+diff --git a/tools/testing/selftests/wireguard/qemu/arch/riscv64.config b/tools/testing/selftests/wireguard/qemu/arch/riscv64.config
+index dc266f3b19155..daeb3e5e09658 100644
+--- a/tools/testing/selftests/wireguard/qemu/arch/riscv64.config
++++ b/tools/testing/selftests/wireguard/qemu/arch/riscv64.config
+@@ -2,6 +2,7 @@ CONFIG_ARCH_RV64I=y
+ CONFIG_MMU=y
+ CONFIG_FPU=y
+ CONFIG_SOC_VIRT=y
++CONFIG_RISCV_ISA_FALLBACK=y
+ CONFIG_SERIAL_8250=y
+ CONFIG_SERIAL_8250_CONSOLE=y
+ CONFIG_SERIAL_OF_PLATFORM=y
+diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
+index e033c79d528e0..28658b9e0d968 100644
+--- a/virt/kvm/async_pf.c
++++ b/virt/kvm/async_pf.c
+@@ -87,7 +87,27 @@ static void async_pf_execute(struct work_struct *work)
+ 	__kvm_vcpu_wake_up(vcpu);
+ 
+ 	mmput(mm);
+-	kvm_put_kvm(vcpu->kvm);
++}
++
++static void kvm_flush_and_free_async_pf_work(struct kvm_async_pf *work)
++{
++	/*
++	 * The async #PF is "done", but KVM must wait for the work item itself,
++	 * i.e. async_pf_execute(), to run to completion.  If KVM is a module,
++	 * KVM must ensure *no* code owned by the KVM (the module) can be run
++	 * after the last call to module_put().  Note, flushing the work item
++	 * is always required when the item is taken off the completion queue.
++	 * E.g. even if the vCPU handles the item in the "normal" path, the VM
++	 * could be terminated before async_pf_execute() completes.
++	 *
++	 * Wake all events skip the queue and go straight done, i.e. don't
++	 * need to be flushed (but sanity check that the work wasn't queued).
++	 */
++	if (work->wakeup_all)
++		WARN_ON_ONCE(work->work.func);
++	else
++		flush_work(&work->work);
++	kmem_cache_free(async_pf_cache, work);
+ }
+ 
+ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
+@@ -114,7 +134,6 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
+ #else
+ 		if (cancel_work_sync(&work->work)) {
+ 			mmput(work->mm);
+-			kvm_put_kvm(vcpu->kvm); /* == work->vcpu->kvm */
+ 			kmem_cache_free(async_pf_cache, work);
+ 		}
+ #endif
+@@ -126,7 +145,10 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
+ 			list_first_entry(&vcpu->async_pf.done,
+ 					 typeof(*work), link);
+ 		list_del(&work->link);
+-		kmem_cache_free(async_pf_cache, work);
++
++		spin_unlock(&vcpu->async_pf.lock);
++		kvm_flush_and_free_async_pf_work(work);
++		spin_lock(&vcpu->async_pf.lock);
+ 	}
+ 	spin_unlock(&vcpu->async_pf.lock);
+ 
+@@ -151,7 +173,7 @@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
+ 
+ 		list_del(&work->queue);
+ 		vcpu->async_pf.queued--;
+-		kmem_cache_free(async_pf_cache, work);
++		kvm_flush_and_free_async_pf_work(work);
+ 	}
+ }
+ 
+@@ -186,7 +208,6 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ 	work->arch = *arch;
+ 	work->mm = current->mm;
+ 	mmget(work->mm);
+-	kvm_get_kvm(work->vcpu->kvm);
+ 
+ 	INIT_WORK(&work->work, async_pf_execute);
+ 
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index 89912a17f5d57..c0e230f4c3e93 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -61,7 +61,7 @@ static void irqfd_resampler_notify(struct kvm_kernel_irqfd_resampler *resampler)
+ 
+ 	list_for_each_entry_srcu(irqfd, &resampler->list, resampler_link,
+ 				 srcu_read_lock_held(&resampler->kvm->irq_srcu))
+-		eventfd_signal(irqfd->resamplefd, 1);
++		eventfd_signal(irqfd->resamplefd);
+ }
+ 
+ /*
+@@ -786,7 +786,7 @@ ioeventfd_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this, gpa_t addr,
+ 	if (!ioeventfd_in_range(p, addr, len, val))
+ 		return -EOPNOTSUPP;
+ 
+-	eventfd_signal(p->eventfd, 1);
++	eventfd_signal(p->eventfd);
+ 	return 0;
+ }
+ 


